Artificial intelligence: who’s liable?

05 February 2019

Advancements in medical technology can bring huge benefits for patients and clinicians alike – but new approaches can also mean new risks. Dr Helen Hartley, head of underwriting policy at Medical Protection, looks at where the liability lies for artificial intelligence.

For many, the concept of artificial intelligence conjures images from the darkest recesses of Hollywood imagination: robots running amok and rogue algorithms instigating World War 3.
In medicine, however, its benefits are impossible to ignore – a recent study in the journal Nature Medicine reported an algorithm that can learn to read complex eye scans.1 When tested, it performed as well as two of the world’s leading retina specialists and did not miss a single urgent case.
What has not been proven, however, is the infallibility of artificial intelligence (AI). When a mistake does occur, where does the liability lie?

Robots in the dock

Clinicians should ensure any robot or algorithm is used as part of – not in place of – sound clinical judgment and proficiency. Algorithms, including those used by triaging apps, should not be followed blindly, without regard to a patient’s particular clinical features or circumstances, such as geographical location, which may affect the probability of certain diagnoses. Medical Protection membership can provide assistance with allegations concerning your clinical judgment.
However, we do not currently offer protection against errors arising from the programming or functioning of an AI program, app or platform. The creators and/or producers of these products are expected to seek independent advice on their indemnity requirements, including the potential for multiple serial claims arising from errors or service disruption affecting an AI product. Similarly, with regard to the use of any surgical equipment, product liability would apply to robot malfunction, whether in hardware or software.

A Medical Protection member using a robot as part of a surgical procedure would, however, remain liable for any alleged negligent use of the robot, and as such would be eligible to request assistance from Medical Protection should an issue arise.

To minimise the risk of malfunction or errors, any clinician intending to rely on AI equipment should be satisfied that it is in good working order, that it has been maintained and serviced according to the manufacturer’s instructions, and that product liability indemnity arrangements are in place.

Clinicians should also:

• Adhere to any local checklists before ‘on the day’ use.

• Only use equipment on which they have received adequate training and instruction.

• Consider the possibility of equipment malfunction – including whether they have the skills to proceed with the procedure regardless – and ensure the potential availability of any additional equipment or resources required in that event.

1. De Fauw J et al. ‘Clinically applicable deep learning for diagnosis and referral in retinal disease’, Nature Medicine 2018; DOI:10.1038/s41591-018-0107-6.