AI must be integrated in a “useful and safe” way for both patients and clinicians

06 May 2025

Artificial Intelligence (AI) tools must be developed and integrated in a way that is usable, useful and safe, not just for patients but also for the clinicians using them, according to the Medical Protection Society (MPS).

The comments were made as part of MPS’s response to a consultation by the Health Information and Quality Authority (HIQA), which sought views on the development of a national framework to drive and promote a safe and responsible approach to the use of AI in healthcare in Ireland.

MPS, which represents the interests of over 16,000 healthcare professionals in Ireland, said that to realise the “incredible opportunities” for improvements in patient care presented by AI, frontline healthcare professionals need to be confident in its use and safety, and not wary about how it may impact their decision-making and patient care.

It called on HIQA to ensure the national framework incorporates the need for sufficient training for clinicians on the tools they will be expected to use, for clinician involvement in the design and development of the technologies, and for clinicians to regard the outputs from an AI tool as one part of a wider, holistic picture concerning the patient, rather than as the most important part of the decision-making process.

MPS also highlighted the importance of the framework addressing issues around liability. This includes steps that will be needed to reduce the prospect of clinicians becoming “liability sinks” - where they end up absorbing liability for AI-influenced decisions, even when the AI system itself may be flawed.

MPS’s response to the consultation echoes the findings of a recent White Paper exploring how AI can be responsibly integrated into all aspects of healthcare delivery, authored by leading research organisations including MPS’s research arm, the MPS Foundation.

Professor Gozie Offiah, MPS Council member in Ireland and Chair of the MPS Foundation, said: “We welcome this consultation from HIQA. At a time when AI and its use in the healthcare system in Ireland are rapidly evolving, it is vital that we have a comprehensive framework to chart its course.

“MPS has a strong interest in ensuring that we collectively make the most of the incredible opportunities for improvements in patient care posed by AI, while at the same time helping healthcare professionals understand and mitigate any emergent risks in relation to its use.

“With that in mind, we believe it is crucial that AI tools are integrated into healthcare systems in a way that is usable, useful, and safe for both patients and the clinicians using them.

“Enabling greater confidence in AI among clinicians is vital if the potential benefits of AI are to be unlocked for patients.

“We hope HIQA will consider the points raised as it develops this important framework, and we look forward to engaging with them during the process.”

END

For further information contact: [email protected]

MPS’s full response to the HIQA consultation can be viewed at: https://www.medicalprotection.org/ireland/about/policy-and-public-affairs/responses-and-reports/mps-response-to-the-health-information-and-quality-authority-on-the-safe-use-of-ai

The seven key MPS recommendations for HIQA to consider in the development of a national framework:

  1. Patient safety: Healthcare providers, clinicians and AI providers should be encouraged to carry out ongoing monitoring and risk assessment of AI tools in use, in order to ensure patient safety.
  2. Adherence to regulatory guidance: To support healthcare providers, clinicians and AI developers in remaining compliant, all aspects of the framework should be developed in the context of, and complement, all relevant regulatory guidance. This should include Medical Council guidance, medical device regulation where appropriate, and data protection regulations.
  3. Training: Clinicians should be provided with, and should ask for, training on the AI tools they are expected to use. This will help them to navigate their use of AI tools more skilfully and to know when confidence in an AI’s outputs is justified. This training should cover the AI tool’s scope, limitations and decision thresholds, as well as how the model was trained and how it reaches its outputs. As part of this, clinicians should aim to be aware of the data on which the tool relies and of its potential for bias.
  4. Scope of expertise: Clinicians should only use AI tools within their existing expertise. Where a clinician’s knowledge of a specific case is limited, they should seek the advice of a human colleague who understands the area well and can oversee the AI tool, rather than rely on the AI tool to fill the knowledge gap.
  5. Use of AI in clinical decision making: Clinicians should regard the input from an AI tool as one part of a wider, holistic picture concerning the patient, rather than as the most important input into the decision-making process. They should be aware that AI tools can be fallible, and that tools which perform well for an ‘average’ patient may not perform well for the individual in front of them. Clinicians should also feel confident to reject an AI output that they believe to be wrong, or even suboptimal for the patient.
  6. Involvement of clinicians in development of AI: AI developers and clinicians should engage with each other wherever possible to ensure that AI tools are user-focused and fit for purpose in their intended contexts. This should apply not just during the development of AI tools but also in their ongoing upkeep and improvement.
  7. Liability: Clarity around the liability of AI providers will be needed, particularly in relation to AI systems which make recommendations. The framework should consider what steps are needed to reduce the prospect of clinicians becoming ‘liability sinks’, where they end up absorbing liability for AI-influenced decisions even when the AI system itself may be flawed. For example, healthcare organisations procuring AI recommender systems will want product liability cover for loss to a patient from an incorrect or harmful AI recommendation, and/or to ensure their contract with the AI company includes an indemnity or loss-sharing mechanism for cases where a patient alleges harm from an AI recommendation implemented by a clinician and the clinician is subsequently held liable. It is also important that the implementation of the new EU Product Liability Directive by December 2026 provides clarity around product liability.

The White Paper is titled Avoiding the AI ‘off-switch’: Make AI work for clinicians, to unlock potential for patients, and was published on 26 March 2025 by the MPS Foundation, the Centre for Assuring Autonomy at the University of York, and the Improvement Academy hosted at the Bradford Institute for Health Research.

About MPS

The Medical Protection Society Limited (“MPS”) is the world’s leading protection organisation for doctors, dentists and healthcare professionals. We protect and support the professional interests of more than 300,000 members around the world. Membership provides access to expert advice and support and can also provide, depending on the type of membership required, the right to request indemnity for any complaints or claims arising from professional practice.

Our in-house experts assist with the wide range of legal and ethical problems that arise from professional practice. This can include clinical negligence claims, complaints, medical and dental council inquiries, legal and ethical dilemmas, disciplinary procedures, inquests and fatal accident inquiries.

Our philosophy is to support safe practice in medicine and dentistry by helping to avert problems in the first place. We do this by promoting risk management through our workshops, E-learning, clinical risk assessments, publications, conferences, lectures and presentations.

MPS is not an insurance company. All the benefits of membership of MPS are discretionary as set out in the Memorandum and Articles of Association.