AI in healthcare: how could liability arise?
In 2017, the House of Lords Select Committee on Artificial Intelligence received written evidence from industry experts regarding the implementation of AI in the healthcare sector of the United Kingdom, ahead of its report, AI in the UK: ready, willing and able?
The Royal College of Radiologists welcomed advances in AI, explaining that medical imaging stands to benefit greatly because of the availability of meticulously curated data, which overcomes one of the primary obstacles in the development of AI. Microsoft Research in Cambridge developed its “InnerEye” technology to aid oncologists in analysing x-ray and MRI scans. The Medicines and Healthcare products Regulatory Agency (“MHRA”) provided the committee with an extensive catalogue of potential applications of AI in healthcare, such as genomics, personalised medicine, detection and monitoring of pandemics or epidemics, and the gathering of evidence for medicine submissions. Industry experts indicated that AI could yield benefits beyond clinical applications in the National Health Service, such as administrative advantages, especially at a time when burnout, staff shortages and backlogs have become major challenges.
Of course, industry experts also questioned the introduction of AI into the NHS, citing issues such as public acceptance of AI playing a role in treatment, the use of patient data, whether the NHS is equipped to deploy new technology, and the challenge of training staff to use it.
In the realm of clinical decision-making (a domain long overseen exclusively by human clinicians), AI is likely to play a supportive, supplementary role, given its immense capacity to analyse extensive medical datasets and identify patterns and insights that may not be apparent to healthcare professionals. Although AI and machine learning have the potential to revolutionise healthcare by enabling faster and more accurate diagnosis, personalised treatments and better patient outcomes, they naturally also raise a key question for widespread AI adoption: who is responsible when something goes wrong?
There is a pressing need for a comprehensive legal framework that effectively addresses the evolving landscape of clinical decision-making, in which AI makes or assists clinical decisions. Such a framework should, at the very least, delineate the respective duties of clinicians and of the developers of the AI technology.
Establishing clinical negligence
The law surrounding clinical negligence can be intricate and challenging to navigate. Before turning to the question of legal liability for AI in healthcare, it is important to outline the current legal framework for establishing clinical negligence.
Duty of care
Clinicians owe their patients a duty of care. The starting point for establishing clinical negligence is the landmark case of Hunter v Hanley 1955 SLT 213. It outlines three facts which must be established:
- that there is usual and normal practice;
- that the defender did not adopt that practice; and
- that the course the clinician adopted is one which no professional man of ordinary skill would have taken if he had been acting with ordinary care.
Two further key decisions serve as the foundation for almost all clinical negligence cases: Bolam v Friern Hospital Management Committee [1957] 1 WLR 582, and Bolitho v City & Hackney Health Authority [1998] AC 232. According to Bolam, clinical conduct is not usually considered negligent if it aligns with a responsible body of opinion, satisfying the standards of other responsible healthcare professionals. Bolitho, on the other hand, requires that the standard relied on has a logical basis.
Causation
Demonstrating that a clinician was negligent is only the first hurdle in a clinical negligence case. The case must also clear the second hurdle of causation. The pursuer must be able to prove that they have suffered injury or illness, and, on the balance of probabilities, that the alleged negligence caused that injury or illness. This is achieved by applying the “but for” test (see Barnett v Chelsea & Kensington Hospital Management Committee [1969] 1 QB 428 and Chester v Afshar [2004] 3 WLR 927), which asks: what would the outcome have been had the negligence not occurred?
Artificial intelligence and healthcare
Throughout history, healthcare professionals have been the driving force behind clinical decision-making for patients. Dentistry is thought to be one of the oldest healthcare professions, dating back to 7000 BC with the Indus Valley civilisation (see American Dental Education Association, History of Dentistry). By the early 1500s, professional societies had begun regulating and licensing healthcare practitioners (see History of the Royal College of Physicians). The AI revolution brings a novel, non-human element to clinical decision-making.
The NHS has already begun preliminary collaborations with software development companies, and such collaborations suggest that AI will be formally and widely adopted into the UK’s healthcare system at some point in the future: see The NHS AI Lab. The introduction of AI in healthcare has the capacity to transform the entire domain of clinical negligence, leading to a shift from traditional clinical negligence towards a new form of product liability.
Food for thought: self-driving cars
The UK Government announced in August 2022 its plan to introduce self-driving cars by 2025. Under this plan, the manufacturer, not the driver, would be responsible for any accidents that occur while the car is in self-driving mode. As the announcement notes, France, Germany and Japan have all introduced various levels of liability safeguards for users of automated vehicles. In the United States, by contrast, where regulation is delegated to state governments, there is no national regulatory framework, resulting in a fractured regulatory landscape and ambiguity over liability. In 2019, an individual in a self-driving car was charged with vehicular manslaughter in California, marking the first criminal prosecution of its kind. It is therefore not at all unreasonable to demand a legal framework adequate to address liability issues when AI is used to aid clinical decision-making in a healthcare setting.
Case study: IBM’s Watson for Oncology
IBM’s disaster with “Watson for Oncology” should serve as a stark reminder of the danger of introducing AI into a clinical setting without considering patient safety or a regulatory framework for liability (Stat News, “IBM pitched its Watson supercomputer as a revolution in cancer care. It’s nowhere close” (5 September 2017)).
Perhaps the key takeaway from Watson for Oncology is this: building and deploying AI and machine learning systems effectively requires significant amounts of data. AI is a data hog, and its efficacy is contingent on the availability of data – an area in which Watson for Oncology fell short.
Watson for Oncology was designed to evaluate data from patient records and present potential treatment options. The technology was deployed at UB Songdo Hospital in Mongolia, where clinicians who had minimal to no experience in cancer treatment followed its recommendations almost 100% of the time. However, the system inaccurately suggested administering the drug taxane to a patient whose medical history prohibited the use of that medication. An oncologist recognised the mistake, but it might have been overlooked by a clinician. If it had been, who would have been liable: the clinician, the developer of the AI software, or both of them jointly?
In Wilsher v Essex Area Health Authority [1987] QB 730 (CA) (appealed to the House of Lords on another point at [1988] AC 1074), it was held that the duty of care can be discharged by referring to a senior knowledgeable colleague for assistance. Could a clinician, therefore, discharge a duty of care by referring to AI technology for assistance?
Watson for Oncology would propose treatments that were unavailable in the region, or endorse protocols that varied depending on the clinical context or geographical location. Perhaps, then, IBM’s inadequacies can be ascribed not to deficiencies in the AI technology itself, but to flawed business operations, strategy and tactical oversights?
In the United Kingdom, the MHRA is responsible for regulating compliance with and enforcing the law on medical devices, and for ensuring, through a range of investigatory and enforcement powers, that medicines and medical devices work and are acceptably safe. It has published a roadmap outlining the regulatory approach it is likely to adopt when overseeing software and AI medical devices, and 10 guiding principles on good machine learning practice for medical device development.
Software and AI as a medical device
In law, the definition of “medical device” includes software and AI: Medical Devices Regulations 2002, reg 2(1). A "medical device" is any instrument, software, material, or article, including its accessories, with an “intended purpose” for diagnosing or treating patients in vivo (inside the body) or in vitro (outside the body or in a laboratory). These devices can be used for a wide range of purposes, including inter alia diagnosing, preventing, monitoring, treating, or alleviating diseases and injuries, as well as investigating or modifying physiological processes or anatomy, and assisting in decision-making. Medical devices can also include those designed to administer medicine or those that contain a substance that would be considered a medicine on its own. The software and AI must go beyond mere communication, storage, and retrieval of information.
The issue is that the current regulations treat all software indiscriminately, without considering the varying degrees of potential harm it could cause. Moreover, there is a need for clearer pre-market requirements, which would help software manufacturers navigate the process more easily and ensure that patients and the public are better protected. Greater rigour is needed to ensure that software and AI as medical devices achieve the appropriate level of safety and meet their intended purpose. A legal framework must also give sufficient weight to human interpretability and how it affects the safety and effectiveness of AI systems.
Product liability
Defective product or product liability cases refer to situations where a purchased product is faulty and causes injury or illness. The common law liability of manufacturers for injury or damage caused by defective products has been partially replaced by the Consumer Protection Act 1987 and its strict liability regime. Product liability in respect of medical devices is a vast area, ranging from faulty joint replacements, breast implants and laser eye surgery to serious side effects of medicines that have not been properly tested.
Recently, the Court of Session considered this statutory regime in Hastings v Finsbury Orthopaedics Ltd 2019 SLT 1411, which involved allegations of defects in a hip replacement product. In 2009 the appellant was implanted with an allegedly defective metal-on-metal (“MoM”) hip prosthesis manufactured by the defenders, medical device manufacturers. He alleged that the MoM hip was defective in terms of s 2 of the 1987 Act. The Inner House refused his reclaiming motion, following the Lord Ordinary’s decision that he had failed to prove that the hip replacement product was defective. The appellant appealed to the UK Supreme Court ([2022] UKSC 19).
Before that court his case became considerably narrower than his original contention that the device was defective due to design flaws. The question for the Supreme Court was whether the prima facie evidence of defect alone, as advanced by the appellant, could be relied on. Orthopaedic surgeons expressed serious professional concerns regarding high revision rates and challenges in revising MoM prostheses, but these concerns were related to the overall performance of MoM prostheses, not specific to the product in question. It was acknowledged that revision rates varied between different MoM prostheses and the Lord Ordinary emphasised that fact at first instance ([2022] UKSC 19 at paras 42-43).
The appellant claimed that the withdrawal of the product from the market hindered his ability to prove his case with statistical analysis, but there was no evidence to support the claim that the withdrawal was a deliberate tactic. The Lord Ordinary determined that the product had been withdrawn for commercial reasons and that its withdrawal did not support the argument that it was defective (paras 44-47).
Notices issued regarding the product seemed to support a failure to meet the expected standard, but they alone could not determine a breach. When evaluating compliance with the standard, the court considered new evidence not available when the notices were issued. In the Outer House, uncontested statistical analysis was before the court, and it was concluded that the appellant’s claim was not supported by the statistics. The expert evidence did not leave the prima facie evidence unchallenged, but undermined it (paras 61-62). To put it succinctly, the appeal was nothing more than a challenge to the Lord Ordinary’s findings of fact, and there was no basis to interfere with those findings (para 65). None of the prima facie grounds advanced by the appellant was sufficient to establish defect.
A trend?
There is certainly a noticeable trend that decisions under the 1987 Act favour defenders, perhaps with the exception of the notorious A v National Blood Authority [2001] EWHC 446 (QB) (more colloquially known as the hepatitis C litigation), which concerned blood transfusions, blood products and organ transplants. A pattern can be observed from more recent cases such as Gee v Depuy International Ltd [2018] EWHC 1208 (QB), where the court held that the "inherent propensity" of a MoM hip to shed metal debris in the course of normal use was not a defect, even if some patients might suffer an adverse immunological reaction, or Mr Justice Hickinbottom’s seminal decision in Wilkes v DePuy [2016] EWHC 3096 (QB), which signified a substantial departure from A v National Blood Authority.
Hastings not only highlights the evidentiary hurdles faced by pursuers, leaving us questioning what a defective product might actually look like; it also offers little to no help in the arena of AI, where the defective “product” is concerned with clinical decision-making. This issue is significantly more nuanced and convoluted. We can, however, draw some inferences from general principles. To establish liability on the part of the developers of the AI technology, the starting point would be the landmark case of Donoghue v Stevenson 1932 SC (HL) 31, which illustrates the responsibility manufacturers have to the end users of their products, particularly if the AI model, like the bottle of ginger beer in Donoghue, is opaque, with its decision-making processes not fully understood. Software developers would likely argue, however, that AI technology relies on expert clinicians to intervene if it suggests an unsafe course of action, effectively passing all liability to the clinician and/or their employer.
Proving that a clinician’s decision was independent of the AI’s influence would pose grave evidentiary hurdles in clinical negligence cases. When it comes to establishing causation and assessing multiple causes which operate concurrently, it may prove factually impossible to determine which is the cause. In Bonnington Castings Ltd v Wardlaw [1956] AC 613, the pursuer developed pneumoconiosis from inhaling silica particles at work. The defender had breached a statutory duty by not installing an extractor fan which would have reduced, but not eliminated, the particles. The trial judge ruled that, as the defender had failed to prove that its breach did not cause the disease, it was liable. The defender appealed, arguing that the burden of proof rested with the pursuer. The House of Lords held that the burden did remain with the pursuer, but that the pursuer only had to demonstrate that the defender’s breach made a “material contribution” to the disease, not that it was the sole cause.
Therefore, in the context of liability of AI and its developers, the court would have to consider not whether the AI was the sole cause, but whether it materially contributed to the patient’s injury or illness. For the purposes of a claim based on joint liability, the conduct of the software developer of the AI technology must be analysed separately from that of the clinician. The court must also determine whether the developers of the AI took adequate measures to reduce any potential harm.
We therefore go back to old school principles of delictual liability and duty of care:
- Was the harm resulting from the defender’s conduct reasonably foreseeable? This would be extremely hard to refute, as it is foreseeable that AI technology aiding clinical decision-making in a healthcare setting may, due to an error or malfunction, harm a patient or put them at risk.
- The question of proximity. In order to escape liability, developers of AI could argue that clinicians are ultimately the primary guardians of patient safety, and certainly the legal framework may very well put the onus on clinicians to check for AI errors. However, as we know from basic principles of delict, proximity is not a matter of geographical proximity but of “close” and “direct” relations. In Darnley v Croydon Health Services NHS Trust [2018] UKSC 50 it was held that within the clinical environment a duty of care towards patients is owed by both medical and non-medical staff, so why could it not extend to the developers of AI technology deployed in healthcare?
- Is it fair, just, and reasonable to impose a duty of care on the developers of the AI technology? This will of course be fact-sensitive, but I’ll leave this question to the imagination of the reader.
As previously stated, clinical conduct is not deemed negligent if it meets the Bolam and Bolitho standard of care; but what happens if a clinician chooses to override the recommendation of AI technology? The AI may have just made a scientific breakthrough, or (more cynically) it may be leading the clinician and patient to disaster. In a clinical setting, under pressure and with time constraints, a clinician may struggle when faced with the decision to agree or dissent from an AI’s clinical recommendation. Moreover, healthcare professionals cannot be treated as one homogeneous group: every discipline and specialty serves the needs of a particular realm of care and comes with its own risks and challenges, and the application of AI is better suited to some niches than others.
Let’s go back to causation and the “but for” test. One question may be: But for the malfunctioning of the AI technology, would the clinician have been prompted to apply the harmful recommendation to the patient?
In delict (and tort), the principle of novus actus interveniens, or new intervening act, has long been established. Such an act may break the chain of causation, removing liability from a prior actor. The legal test applicable will depend upon whether the new act was that of a third party or of the pursuer. Take the example of the clinician as an intermediary or third party. If the intervening conduct is so negligent that it would not be fair for the initial defender to continue to carry responsibility for those later acts, the clinician may find themselves solely causally responsible. However, the actions of third parties do not generally break the chain of causation, in which case, would the developers of the AI be solely responsible? Will the characteristics of Bolam live on, with the assessment being that the clinician ought to have identified the AI’s recommendation as inappropriate, as would be expected of a professional man of ordinary skill exercising ordinary care?
Concluding remarks
The use of AI in healthcare can enhance efficiency and quality of care, but its implementation raises concerns about patient safety and liability, particularly in the absence of a clear legal framework. The current liability framework is inadequate to encourage both safe clinical implementation and the disruptive innovation of AI. Many stakeholders beyond clinicians are involved in the AI ecosystem, making it crucial to establish a more balanced liability system.
There are two types of liability that could arise from the use of AI in healthcare: clinical negligence (where a healthcare professional fails to provide an appropriate standard of care, resulting in harm to the patient), and product liability (where defective AI software causes harm to a patient). To address these concerns, several policy options could ensure a more balanced liability system. One such option is altering the standard of care to ensure that healthcare professionals use AI safely and effectively, making it clear who is responsible for any harm that may occur.
Insurance could provide a way to mitigate the financial risks associated with AI use in healthcare, covering the costs of any harm that may occur and incentivising healthcare professionals to use AI responsibly. Indemnification could limit the liability of healthcare professionals and manufacturers of AI algorithms, encouraging the development and use of AI in healthcare. Another possibility is a special/no-fault adjudication system, which would provide a separate legal process for claims arising from the use of AI in healthcare, handling claims that are difficult to adjudicate under existing liability frameworks. Lastly, regulation could establish minimum standards for the safety and efficacy of AI algorithms and provide a framework for liability in the event of harm.