
Artificial intelligence: who’s liable?

Post date: 29/08/2018 | Time to read article: 3 mins

The information within this article was correct at the time of publishing. Last updated 02/04/2019

For many, the concept of artificial intelligence conjures images from the darkest recesses of Hollywood imagination: robots running amok and rogue algorithms instigating World War 3.

In medicine, however, its benefits are impossible to ignore – only recently a study in the journal Nature Medicine reported on an algorithm that can learn to read complex eye scans.1 When tested, it performed as well as two of the world’s leading retina specialists and did not miss a single urgent case.

But artificial intelligence (AI) has not been proven infallible. When a mistake does occur, where does the liability lie?

Robots in the dock

Clinicians should ensure any robot or algorithm is used as part of – not in place of – sound clinical judgement and proficiency. Algorithms, including those used by triaging apps, should not be followed blindly without regard to a patient’s particular clinical features or circumstances, such as geographical location, which may affect the probability of certain diagnoses. Medical Protection membership can provide protection in respect of allegations against your clinical judgement.

However, we do not currently offer protection against errors arising from the programming or functioning of an AI program, app or platform. It is expected that the creators and/or producers of these will seek independent advice regarding their indemnity requirements, which may include the potential for multiple serial claims arising from errors or service disruption affecting an AI product. Similarly, with regard to the use of any surgical equipment, product liability would apply in relation to robot malfunction, whether of hardware or software.

A Medical Protection member using a robot as part of a surgical procedure would, however, remain liable for any alleged negligent use of the robot and, as such, would be eligible to request assistance from Medical Protection should an issue arise.

In order to minimise the risk of malfunction or errors, any clinician intending to rely on AI equipment should ensure they are satisfied that it is in good working order, that it has been maintained and serviced according to the manufacturer’s instructions, and that product liability indemnity arrangements are in place. 

Clinicians should also:

  • Adhere to any local checklists before using the equipment on the day.
  • Only use equipment on which they have received adequate training and instruction. 
  • Consider the possibility of equipment malfunction, including whether they have the skills to proceed with the procedure regardless, and ensure that any additional equipment or resources required in that event would be available.

CASE STUDY

A GP contacted Medical Protection with concerns over the provision of an online service, commissioned by the clinical commissioning group (CCG) from a private company. The service took the form of a practice weblink that takes patients through an algorithm, at the end of which the patient may be directed to an ambulance, the community pharmacy or the GP.

The practice had two concerns:

  1. That the algorithm may not pick up on subtle signs of serious illness requiring urgent attention.
  2. That patients are promised a reply within 48 hours, which could cause a problem if a patient uses the triage service on a Friday evening and does not receive a reply until Monday morning, breaching the 48-hour target.

The practice had set up procedures to deal with requests received during office hours, but was concerned about its vulnerability in relation to the two issues above.

Medicolegal advice

While symptom-checking algorithms are becoming increasingly sophisticated, it remains possible that subtle physical signs will be missed if no clinical examination takes place. To mitigate that risk, doctors should have a low threshold for recommending that the patient attend a local clinician for a face-to-face appointment and a detailed clinical assessment.

With regard to the second concern, the GP was advised to explore what options were available within their practice to review patient contacts via the online service on Saturdays and Sundays to see whether the risk could be managed within available resources.

If that was not possible, it was recommended that they approach the CCG to discuss their concerns and establish whether, in commissioning the service, the CCG had already considered that possibility and the need for a solution covering multiple practices in the locality, so that patients directed to seek GP input could be reviewed within the specified timeframe.

If no such plan was under consideration or accepted as necessary then, given that patient safety is paramount, the GP was advised to follow GMC guidance on raising patient safety concerns and to adhere to local procedures for doing so.

References

1. De Fauw J et al. ‘Clinically applicable deep learning for diagnosis and referral in retinal disease’, Nature Medicine 2018; DOI:10.1038/s41591-018-0107-6.
