
Common medicolegal dilemmas healthcare professionals are facing with the use of AI

Post date: 21/05/2025 | Time to read article: 9 mins

The information within this article was correct at the time of publishing. Last updated 21/05/2025

Dr Ben White, Deputy Medical Director, addresses some key questions around the use of AI in medical practice.

______

We know healthcare professionals are keen to explore and adopt Artificial Intelligence (AI) tools which may enhance patient care and help to facilitate more efficient working. However, we also know from our medicolegal advice line that members are aware of some risks associated with the use of AI, and want to better understand these risks.

Having reviewed the range of queries Medical Protection has received in relation to AI over the last 12 months, the general concerns raised by members are:

  • whether a healthcare professional can be held liable if a patient comes to harm when AI is used
  • the circumstances in which a member can request assistance from Medical Protection for matters arising from the use of AI
  • whether an intended use of AI is safe
  • compliance with data protection legislation
  • patient consent - particularly where personal data may need to be shared, and where transcribing software is recording consultations.

More specific advice has also been sought on the medicolegal implications of AI use in the transcribing of patient consultations, the generation of “fit to fly” letters or fit notes, the use of clinical “prompts” when using AI, and the processing of laboratory results.

These specific issues will be discussed later in this article, but firstly we will deal with the general considerations.

 

Consent

The General Data Protection Regulation (GDPR) requires that there is a lawful basis for processing personal data. Consent is a commonly used lawful basis in medicine.

Medical Protection advises that informed consent should be sought from patients before using AI tools that require the sharing of their personal data with a third party. The way in which consent is obtained is a matter for the clinician to decide - for example, verbal consent during the consultation, or a physical consent form provided beforehand. Whatever the method, it is important to document the consent given within the patient’s medical record. The GMC’s guidance on decision making and consent provides further detail on this.

The Information Commissioner’s Office (ICO) has published guidance on AI, including a section on consent. The guidance suggests that a clinician “…must ensure that consent is freely given, specific, informed and unambiguous, and involves a clear affirmative act on the part of the individuals”. Additionally, “…for consent to apply, individuals must have a genuine choice about whether you can use their data” and “…for consent to be valid, individuals must also be able to withdraw consent as easily as they gave it”.

When it comes to the use of transcription software, it is useful to also read the GMC’s guidance on making and using visual and audio recordings of patients.

 

Data Protection

Members will need to consider their legal and regulatory obligations when deciding whether to use specific AI software. Paragraph 71 of the GMC’s Good Medical Practice states: “You must keep records that contain personal information about patients, colleagues or others securely, and in line with any data protection law requirements and you must follow our guidance on confidentiality: good practice in handling patient information.”

To ensure that the use of a given AI software is secure and in line with data protection legislation, we would recommend seeking guidance from:

  • Hospital Data Protection Officers, who may be able to advise in a secondary care setting
  • Integrated Care Boards, which may have relevant policies
  • The Information Commissioner’s Office (Guidance on AI and data protection | ICO).

If a data breach occurred and a claim arose from this, Medical Protection assistance would be decided on a case-by-case basis. Further information on this can be found on the Understanding your membership section of the Medical Protection website.

 

Indemnity

Understandably, members may have questions not just about the medicolegal implications of using AI in practice and how they can protect themselves from the risks, but also whether they can request assistance from Medical Protection in the event that any medicolegal issues arise from their use of AI.

Members can request assistance in the usual way where a clinical negligence claim arises from a member’s use of AI software, provided the issue relates to the member’s own clinical judgement or actions. This assistance would apply where Medical Protection is providing the member with indemnity for clinical negligence claims.

Medical Protection would not normally provide indemnity for issues relating to the failure of AI software itself - for example, if the software has been incorrectly programmed or developed.

A member’s individual membership does not extend to indemnifying the practice they work for, or other partners, if the practice is named in a claim. However, if other individuals at the practice are also Medical Protection members and are named in the claim, we would consider supporting them individually in line with usual procedures.

Members can also seek assistance with other medicolegal matters, such as regulatory investigations, arising from the use of AI software as part of their clinical practice, in the usual way.

Where a doctor is working for a hospital Trust, or is a GP working in England and Wales, indemnity is likely to fall to state indemnity schemes, so doctors may wish to clarify their indemnity position with the providers of those schemes.

It is important to be aware that clinicians remain responsible for the decisions they make when diagnosing and treating patients when using AI tools.

It is sensible to take care with any contracts or agreements you enter into with AI suppliers, and to be cautious about agreeing to indemnify the supplier against claims.

 

Transcription

Transcription programmes usually take the form of software that records a consultation and transcribes it into written note form, often integrated into clinical systems. This has the potential advantages of saving the time spent taking written notes and allowing doctors to focus more on active engagement with patients.

Doctors need to ensure that they are acting in a manner consistent with Good Medical Practice, which states that “You must make sure that formal records of your work (including patients’ records) are clear, accurate, contemporaneous and legible”. It goes on to state that you should take a “proportionate” approach to the level of detail included.

Doctors will therefore need to ensure that the notes produced by transcription software are an accurate reflection of the consultation and do not omit potentially relevant information. It is also important that notes are proportionate and are not padded with unnecessary detail that may impact future patient care.

Issues such as data protection and consent, as discussed above, will also be important to consider when using such programmes.

NHS England has recently produced guidance on the use of AI-enabled scribing products. This considers regulatory compliance, whether scribing products are considered medical devices and, where they are, whether the product is registered with the Medicines and Healthcare products Regulatory Agency (MHRA).

 

Use of generative AI such as ChatGPT

There are many potential benefits in the use of generative AI in medical practice - for both the doctor and the patient directly. For example, patients may be able to use these tools to find an explanation of medical terminology or of a diagnosis they have been given. Doctors may find that generative AI can aid diagnosis or help identify potential drug interactions.

Of course, it must be remembered that generative AI is not always correct, and can sometimes have so-called ‘hallucinations’, where it confidently presents incorrect facts as if they were true. There is also potential for bias in the generated content. It is incumbent on clinicians using these tools to ensure that the information provided to a patient is from a reliable source and is accurate. Clinicians must also consider whether the information can be appropriately applied to the patient population they are managing.

It is important to remember that the use of generative AI should not permit doctors to begin working outside the limits of their competence. There should not be an overreliance on the software, and doctors ultimately remain responsible for the decisions they make.

A further point is that of data protection and confidentiality. When considering the use of generative AI, you must be compliant with GDPR. Generative AI tools, by their nature, may store, share and learn from the information entered into them, and that information may be accessible to others. Care must be taken not to enter any personal data relating to a patient. Simply removing the patient’s name may not be sufficient to anonymise their information, as patients may be indirectly identifiable from other information about them entered into an AI tool. Members are advised to seek further guidance from the relevant people or organisations referred to in the data protection section earlier in this article.
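
To illustrate why removing a name is not anonymisation, the minimal Python sketch below shows how quasi-identifiers can survive naive redaction. The example text and the redact_name function are hypothetical, for illustration only - they are not part of any real tool.

    # Illustrative sketch: removing a name does not anonymise free text.
    # The example text and redact_name() are hypothetical.

    prompt = (
        "Jane Smith, a 47-year-old violin teacher from a small village near "
        "Harrogate, has been diagnosed with Erdheim-Chester disease."
    )

    def redact_name(text: str, name: str) -> str:
        """Naively remove the patient's name - the only 'anonymisation' applied."""
        return text.replace(name, "[REDACTED]")

    print(redact_name(prompt, "Jane Smith"))
    # The output still contains age + occupation + location + a rare
    # diagnosis, a combination that may make the patient identifiable.
    # The text therefore remains personal data and should not be entered
    # into a public generative AI tool.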

 

Relying on ‘prompts’ from AI software

Some AI software can provide clinicians with ‘prompts’ as to the clinical decisions that might follow - for example, treatment for a particular condition, considerations when prescribing or triage of requests for appointments.

Medical Protection advises that members exercise caution when using such prompts. The ultimate decision on the appropriate management of a patient rests with the treating clinician, who must be confident that the management they provide is evidence-based and serves the patient’s best interests.

AI software that provides prompts can of course reduce risk when used as an aid, for example, by highlighting next steps in treatment according to relevant guidelines, or by highlighting interactions between medications.

 

Generating letters

We have had questions from members regarding the use of AI to generate “fit to fly” letters and fit notes. The AI system asks the patient to complete an online form, and further questions to the patient are generated using AI. The doctor then decides whether to issue the letter; if so, the letter itself is also generated using AI. The doctor does not have access to the patient’s medical records.

Our advice is based on relevant sections of Good Medical Practice - firstly around making an adequate assessment of the patient. Doctors need to consider whether, when using AI, the process and questions gather information that is adequate to make an informed decision.

Furthermore, we highlight paragraph 89 of Good Medical Practice, which requires that information communicated is factually accurate and does not deliberately leave out relevant information.

We would advise that members exercise caution if considering the use of AI software to complete forms declaring patients fit to participate in physical events.

Good and careful communication is key when completing such forms - from the wording used on the declaration form to be signed through to the conversation with a patient. The ability to take a detailed current and past medical history (and family history) from the patient, and seek advice from a specialist, if necessary, is also important. You must also be confident that you fully understand the nature of the event the patient wishes to participate in.

Further information on the medicolegal risks associated with “fit to participate” forms can be found on the Medical Protection website.

 

Processing of laboratory results

AI software is available to process lab results, including both normal and abnormal blood results. This might involve routing results to relevant members of the practice team, or communicating directly with patients via text message to advise on next steps in care - e.g. the purchase of over-the-counter supplements.

Practices can tailor the software, deciding which elements to switch on or off. For example, they may automate the processing of HbA1c results, but not lipid profiles.

Practices need to assure themselves that any system is fit for purpose and does not risk patient safety.

If normal results are being automatically filed without clinician input, the practice should consider whether there are any risks that need to be addressed - for example, could a situation arise where a result is normal but still requires action?

One scenario is haematuria on a urine dipstick test: the MSU comes back normal, but the haematuria requires further investigation, as there is no infection to explain it.
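
As a hedged illustration of this risk, the Python sketch below imagines a simple rule-based filing system with practice-configurable toggles. The analyte names, the AUTO_FILE_ENABLED toggles and the “outstanding question” check are assumptions made for illustration, not the logic of any real product; the point is that a result flagged “normal” may still need clinician review.

    # Hypothetical sketch of rule-based result filing - illustrative only.
    from dataclasses import dataclass
    from typing import Optional

    # Practice-configurable toggles: which tests may be auto-filed when normal.
    AUTO_FILE_ENABLED = {"HbA1c": True, "lipid_profile": False, "MSU": True}

    @dataclass
    class Result:
        patient_id: str
        test: str
        is_normal: bool
        # The clinical question the test was meant to answer, e.g. an MSU
        # sent to see whether infection explains haematuria on dipstick.
        outstanding_question: Optional[str] = None

    def route(result: Result) -> str:
        """Decide whether a result can be auto-filed or needs clinician review."""
        if not AUTO_FILE_ENABLED.get(result.test, False):
            return "clinician review (automation switched off for this test)"
        if not result.is_normal:
            return "clinician review (abnormal result)"
        if result.outstanding_question:
            # A normal result can still require action: a normal MSU does
            # not explain the haematuria, which needs further investigation.
            return f"clinician review (normal, but {result.outstanding_question})"
        return "auto-file and notify patient"

    print(route(Result("0001", "HbA1c", is_normal=True)))
    print(route(Result("0002", "MSU", is_normal=True,
                       outstanding_question="unexplained haematuria on dipstick")))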

Results - whether processed by a human or AI - must also be clearly documented in the patient’s records, and it is important to consider how information about lab results is shared with patients, including the impact of an abnormal result being sent directly to a patient.

 

Further reading

A White Paper published by the MPS Foundation, the Centre for Assuring Autonomy at the University of York, and the Improvement Academy hosted at the Bradford Institute for Health Research includes further useful guidance for doctors to consider on the use of AI tools, for example:

  • Doctors should ask for training on the AI tools they are expected to use. This will help them to navigate their AI tool use more skilfully and know when confidence in an AI’s outputs would be justified, supporting their autonomy. This training should cover the AI tool’s scope, limitations and decision thresholds, as well as how the model was trained and how it reaches its outputs.
  • Doctors should regard the input from an AI tool as one part of a wider, holistic picture concerning the patient, rather than the most important input into the decision-making process. They should be aware that AI tools can be fallible, and those which perform well for an ‘average’ patient may not perform well for the individual in front of them.
  • Doctors should only use AI tools within areas of their existing expertise. If there are specific cases where a clinician’s knowledge is limited, clinicians should seek the advice of a human colleague who understands the area well and can oversee the AI tool, rather than rely on the AI tool to fill their knowledge gap.
  • Doctors should feel confident to reject an AI output that they believe to be wrong, or even suboptimal for the patient. They should resist any temptation to defer to an AI’s output to avoid or reduce the likelihood of being held responsible for negative outcomes.
  • Clinicians should engage with healthcare AI developers, when asked and where possible, to ensure that AI tools are user-focused and fit for purpose for their intended contexts.

 

Summary

There is clearly huge potential for AI to assist in the provision of healthcare, and several examples are referred to in this article. There are obvious benefits to both patients and clinicians in the delivery of effective and safe clinical care. In this rapidly progressing area, doctors should ensure they continue to work in a manner consistent with GMC guidance and relevant legislation.

If in doubt about the use of AI tools and any medicolegal implications, members should contact Medical Protection to request advice.

By Dr Ben White, Deputy Medical Director at Medical Protection
