Artificial intelligence and virtual reality might be carving out a place for themselves in Scottish healthcare, but they bring their own challenges. From ethics to accuracy, there’s much to consider…
The incorporation of artificial intelligence (AI) and virtual reality (VR) technologies is reshaping industries worldwide, and healthcare is no exception. AI is progressing by the day, demonstrating that it can do many of the same things we can – often more quickly and cost-effectively, and sometimes more accurately.
The global pandemic brought healthcare workforce shortages to the fore, and with resources increasingly stretched, these technological innovations offer tremendous potential for transforming the delivery of healthcare, relieving pressure on health professionals – and, in turn, on health boards. However, they also bring with them a number of unique medicolegal challenges.
Supporting role
AI and VR are emerging as valuable tools across various areas of healthcare, from training and therapy to diagnostics, patient care and administrative tasks. These technologies are helping to streamline operations, allowing healthcare professionals to focus more on clinical practice, thereby increasing efficiency.
AI algorithms are being developed to analyse medical images, aiding in the detection and diagnosis of abnormalities; however, their accuracy has varied and they must therefore be used with caution. (Indeed, the extent to which these systems are used in practice is unclear, and there does not currently appear to be heavy reliance on them in day-to-day diagnostic decision-making.) In addition, predictive analytics can help to assess the effects of policy changes on healthcare systems, as well as estimating the likelihood of particular patients developing specific conditions.
VR is also gaining recognition in medical education, offering immersive and interactive training simulations that allow medical students and healthcare workers to practise procedures in a safe, risk-free environment. Furthermore, VR-based therapies have shown promise in treating certain mental health conditions, including post-traumatic stress disorder (PTSD) and phobias.
Responsibility and standards
Despite the exciting potential of these technologies, there are a number of concerns from a medicolegal perspective. One of the primary issues involves accountability and liability, particularly where diagnoses or treatment plans are influenced by AI. While it is unlikely that AI will ever replace clinicians entirely in diagnosing patients or formulating treatment plans, its growing role raises the question of who is ultimately responsible when both AI and human input are involved in decision-making.
The use of AI in healthcare also raises questions about the standard of care. In Scotland, the test for professional negligence, as set out in Hunter v Hanley, hinges on whether a clinician has met the standard expected of an ordinarily competent clinician in similar circumstances. How would this apply to an AI system? Given the potential for technical glitches, it is unlikely that an AI-driven diagnosis would be infallible.
In most cases, the treating clinician will likely bear ultimate responsibility, regardless of whether AI or VR was involved in the diagnosis or treatment process. It is therefore essential that safeguards are put in place to protect both patients and healthcare professionals.
Patient privacy and protection
Another significant issue with the use of AI is its interaction with patient data. Healthcare providers must ensure robust security measures are in place to protect private patient information from unauthorised access, in compliance with data protection laws.
Informed consent is also a matter requiring consideration when introducing AI and VR into medical practice. Clinicians will need to ensure that patients fully understand the nature of these technologies, including their risks and potential benefits, before agreeing to their use.
Multidisciplinary collaboration
To ensure responsible and safe implementation, clear guidelines and standards are necessary to regulate the use of AI and VR in healthcare. Collaboration between healthcare regulators, technology developers and legal experts will be crucial in navigating these developments and addressing the medicolegal implications.
While these technologies offer exciting possibilities for innovation and improving healthcare efficiency in Scotland, addressing the associated medicolegal concerns is vital for safeguarding patient safety, privacy and ethical practices.
Written by Carolyn McPhee, Senior Associate at Balfour+Manson LLP