Common law and artificial life
How far will our law have to adapt to cope with issues arising from the deployment of artificial intelligence? And given the pace of technological advance, how is it to keep abreast? The questions are raised both in an address by Lord Hodge, Scottish Justice of the UK Supreme Court, and at an international expert conference sponsored by the Faculty of Advocates and others, taking place in Edinburgh on 31 May.
“AI could be a wonderful way to improve office productivity, but it brings some really tough issues of a legal and indeed social nature,” conference organiser Iain Mitchell QC, chair of the Scottish Society for Computers & Law, told the Journal.
What exactly do we mean by AI? “In one sense it has been around a very long time,” Mitchell observes, instancing a personal address database on a phone or computer. “Where people really get engaged, and what raises issues about ethics, liability, human rights and so on, is what are effectively self-learning systems.” These he divides into weak AI, and strong AI.
Strong AI, meaning machines that actually think like humans, remains science fiction at present, “and who knows when we may develop it – though that doesn’t stop people worrying about liability!”
Weak AI, on the other hand, is very much with us and becoming more prevalent, whether in the form of chess-playing computers, self-driving vehicles or radical medical diagnosis systems, typically trained by inputting large amounts of data.
Systems now exist that produce results a human might have produced, but it is impossible to say how the algorithms within them worked to arrive at that result. “That’s known as the black box syndrome, and it gives rise to huge liability problems.”
Mitchell notes that difficult human rights issues can arise even without weak AI being involved. “You can still have decision-making algorithms which decide whether you get benefit, or do a first sift when you apply for a job. Under the GDPR you can demand a manual process, that a human looks at it, but the black box and the training data give rise to enormous issues about how you can figure out who or what is liable.”
Where does the law fall short?
Hodge, who delivered the First Edinburgh FinTech Lecture in March, focused on the financial world, but highlighted some important general legal issues. These include:
- in contract law, the implications if computers are developed to use machine learning to optimise the transactions they enter into, and unintended consequences ensue. For him this raises questions about intention to enter into legal relations, and how the terms of a computer-generated contract are to be recorded and interpreted;
- in delict and tort, the law “will need to be revised to attribute liability for harm caused by machines exercising AI”; more difficult questions of liability and causation will arise in the context of fintech transactions;
- in property law, Scots law will need to recognise a new form of intangible moveable property; the question whether the AI involved in fintech should give rise to intellectual property also needs to be addressed.
He went on to float the idea that a computer could be given separate legal personality. “While at first blush, that may sound far-fetched, there is no reason in principle why the law cannot create such personality”: English law has recognised the separate legal personality of a “one-person” company since 1897, and in Bumper Development Corporation v Metropolitan Police Commissioner [1991] 1 WLR 1362 “it has recognised the separate legal personality in Indian law of a ruined temple which was little more than a pile of stones”.
Humans in the process
While Mitchell accepts that there is a respectable tradition of arguing that you should accord AI systems a form of legal personality, he believes there are problems with this analysis. “Ultimately a human or corporation is going to have to carry the can or underwrite. Even with AI systems there will always be a human involved somewhere.”
Various proposals have been put forward to impose strict liability on the creators and/or operators of a system where a connection is proved between the system’s operation and someone’s loss, without proof that the system performed less well than could reasonably be expected of it – even though present product liability regimes do demand proof that the product fell short of expected standards. But that, Mitchell points out, would be to set a standard of perfection. “Also, how far down the chain do you go? Compulsory insurance has been suggested, but you might have a teenager somewhere abroad contributing lines to an open source software program that they have no idea will be used as a component in an AI system. At a moral and ethical level they have no knowledge of what they are contributing to.”
Yet the biggest operations, including much of Google’s system, can be built on open source software. “Gone are the days for the most part when we just download software, an entire program, and it just sits on our computer. So from an economic point of view it’s perfectly possible, and indeed it’s very common now, to have these services developed and delivered on the basis of open source software. The whole Android operating system is actually open source,” Mitchell points out.
Limits of the common law?
Hodge speaks of “fundamentally important questions of legal policy [to which] the common law does not provide any ready-made answers... it is not practicable to develop the common law through case law to create a suitable legal regime for fintech”. Yet Mitchell does not altogether discount its ability to devise solutions. “Personally I take the view that there is nothing new under the sun, and just because we have this new situation doesn’t necessarily mean we need this new law of legal personality. It means that one has to develop and adapt the law carefully to cover the new situations.
“If a person is injured by a self-driving car, we are talking about damages for being hit by a vehicle, and in one sense that’s nothing to do with AI. But of course you have to ask how you make the well developed principles of reparation fit with the new technology and the new way of doing things. I’m fairly confident the common law is capable of developing that, with a little help from the legislature, but it may just be going too far to invent a whole new category of vehicle person. It might be taking a sledgehammer to crack a nut.”
As respects Hodge’s scenario of two banks’ computer systems interacting with each other to make contracts, and producing unexpected results, Mitchell observes: “That is an interesting issue that might be worth looking at. But in a sense we’ve already been there, because one of the great problems of boom and bust during the financial crash was computers programmed to buy and sell stocks and shares depending on the market, and that exaggerated all the effects.
“I’m not saying he’s wrong; he could be right and there might be scope for focused legislative intervention, but one just needs to be proportionate and measured. You need to think about how existing well known and well recognised legal principles apply to it all.”
Ethical choices
One scenario posed in relation to self-driving vehicles raises ethical issues that may not be too far from reality. Known as the “trolley problem”, it asks: if, faced with a runaway tramcar, you had the controls of a set of points and had to make a split-second decision whether to knock down one person in one direction or three in the other, what would you do? Would it affect your answer if the one was an Einstein and the three a gang of housebreakers? While in Mitchell’s view this probably awaits the development of strong AI, he points out that the Chinese are now giving every citizen a compulsory social score, in which they gain or lose points depending on how they behave (which generally means conform), whether they have been in trouble and so on. “If you were to put that on a chip and require people to embed the chip, the car is perfectly capable of making the choice to spare Einstein.”
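To make the scenario concrete, here is a minimal sketch in Python of the kind of decision rule Mitchell envisages. The `Pedestrian` type, the `social_score` attribute and the figures are entirely hypothetical illustrations of his scenario; nothing suggests any real vehicle works this way.

```python
from dataclasses import dataclass

@dataclass
class Pedestrian:
    name: str
    social_score: int  # hypothetical chip-readable score, per Mitchell's scenario

def choose_path(left: list[Pedestrian], right: list[Pedestrian]) -> str:
    """Steer towards whichever group has the lower aggregate score.

    A deliberately crude utilitarian rule: it shows only that, once
    scores are machine-readable, the 'trolley problem' choice reduces
    to simple arithmetic.
    """
    left_total = sum(p.social_score for p in left)
    right_total = sum(p.social_score for p in right)
    return "left" if left_total < right_total else "right"

# One Einstein in one direction, three housebreakers in the other
einstein = [Pedestrian("physicist", 950)]
gang = [Pedestrian(f"housebreaker {i}", 200) for i in range(3)]
print(choose_path(einstein, gang))  # -> "right": the car spares Einstein
```

The point is not that anyone would deploy such a rule, but that once scores exist the choice becomes trivially computable; the hard questions it leaves are ethical and legal, not technical.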
Blockchain technology (discussed at Journal, March 2019, 24) also creates AI possibilities, and attendant legal issues. Enthusiasts for blockchain in the art market point out that it makes it much easier to democratise art ownership by facilitating multiple ownership of a work of art. “The really interesting thing is that new works of art can actually be physically embedded in the blockchain as data,” Mitchell explains. “You create a piece of digital art and embed it as data, and then people with access rights can make prints of it or whatever. Combine that with the fact that AI systems can now themselves create works of art without human intervention – one produced a ‘Rembrandt’ a few months ago. There is no economic cost involved in creating that form of digital media. Put this together, and you have all the conditions for a boom as people clamour to get into art ownership, and a subsequent bust.” He speculates: “Are we replicating the conditions which led to the Dutch tulip boom and bust?”
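As a rough illustration of embedding a digital work, and fractional ownership of it, in a chain of blocks, here is a minimal Python sketch. The block structure, the work’s title and the named co-owners are generic assumptions for illustration, not based on any particular blockchain platform or art-market product.

```python
import hashlib
import json

def make_block(prev_hash: str, payload: dict) -> dict:
    """Chain a new block to its predecessor by hashing the combined contents."""
    body = {"prev_hash": prev_hash, "payload": payload}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return body

# Embed the work itself as data (a 'genesis' predecessor hash of zeros)
artwork = make_block("0" * 64, {
    "type": "artwork",
    "title": "Untitled (AI study)",   # hypothetical digital work
    "data_hex": b"\x89PNG...".hex(),  # the image bytes would live here
})

# A later block records fractional ownership, referencing the artwork block
shares = make_block(artwork["hash"], {
    "type": "ownership",
    "artwork": artwork["hash"],
    "holders": {"alice": 0.25, "bob": 0.75},  # hypothetical co-owners
})

print(shares["payload"]["holders"])
```

A real system would add signatures, access control and consensus; the sketch shows only why multiple ownership of a single embedded work is technically straightforward, which is what fuels Mitchell’s boom-and-bust worry.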
Hodge believes that a wide collaborative exercise, “focusing on AI and fintech and aiming to produce facilitating legislation”, is probably needed if the UK is to facilitate the development of fintech without harming the integrity of the markets or financial consumers. Further, “The current conflicting approaches to the treatment of cryptoassets by key jurisdictions such as the USA, the EU and the UK support the case for international cooperation in the creation of fintech law.”
Given that our Law Commissions are taking three years just to look at self-driving cars, and technology is advancing rapidly, how can the law respond in enough time? “You’re summarising the aspiration of the conference, to look at exactly those things,” Mitchell replies.
Beyond the hype
“AI Beyond the Hype – Benefits, Challenges and Liabilities” takes place in the Faculty of Advocates’ Mackenzie Building on 31 May 2019. Organised by the European Association of Lawyers (AEA-EAL) in partnership with the Faculty of Advocates, the Scottish Society for Computers & Law, the SCRIPT Centre at Edinburgh University, Rooney Nimmo Solicitors, and other sponsors, it is open to all legal and IT professionals and others interested. Enquiries to the Faculty of Advocates.