Will automated vehicles disclose the cause of accidents? Faculty asks
Advances in technology used in automated vehicles could make it impossible to identify the cause of accidents in which they are involved, the Faculty of Advocates suggests in a paper published today.
Faculty was responding to a joint consultation by the Scottish Law Commission and the Law Commission of England & Wales, which are undertaking a three-year review of the legal position in relation to self-driving vehicles.
Faculty notes that current technology for automated driving systems is based on algorithms: defined processes or rules to be followed in problem-solving operations. However, research is seeking to develop “neural networks” – systems which make their own autonomous decisions.
"It is a feature of such systems that their internal ‘reasoning’ processes tend to be opaque and impenetrable (what is known as the ‘black box’ phenomenon) – the programmers may be unable to explain how they achieve their outcomes", Faculty states.
"If the operation of the system causes an accident, it might be perfectly possible to determine the cause through examination of the source code of a conventional system (there might be a clearly identifiable bug in the system, or one of the algorithms might be obviously flawed), but where a neural network is involved, it may be literally impossible to determine what produced the behaviour which caused the accident."
Faculty points out that, whereas the consultation appears to assume that automated driving systems will be self-contained, they could in fact interact with other vehicles or with traffic control systems to optimise traffic flow and minimise the risk of accidents. Failure of systems to interoperate might be considered equivalent to a human driver failing to comply with traffic signs or driving without due care and attention, and consideration should be given to how such incidents would be detected and possibly prosecuted.
In a comprehensive paper, Faculty goes on to address the particular issues raised by the consultation, covering the need for a “user in charge” of the vehicle; the position of “fallback drivers” (Faculty proposes the further defined role of “operator”, who would also have certain legal responsibilities and who may or may not be present in the vehicle); a scheme for authorising automated systems; driver training on such systems; the application of s 2 of the Consumer Protection Act 1987 to automated systems, with possible presumptions of liability; and possible offences of interfering with vehicles or their systems.
Faculty further believes that, at this stage in their development, automated vehicles should not be programmed to mount the pavement (for example, to allow emergency vehicles to pass), to “edge through” pedestrians, or to exceed the speed limit in certain circumstances.