Algorithms in justice must be regulated, LSEW report claims
An uncritical reliance on algorithms in the criminal justice system could lead to wrong decisions that threaten human rights and undermine public trust in the justice system, the Law Society of England & Wales claims in a report published today.
Algorithms are employed in techniques including facial recognition systems, DNA profiling, predictive crime mapping and mobile phone data extraction. The Society's concern is that, while such systems play a vital role, they are trained on historic data and are therefore vulnerable to bias and oversimplification, which can lead to discriminatory decisions, a shallow understanding of complex issues and a lack of long-term analysis.
This has consequences for personal dignity, such as loss of individuality and autonomy, and for human rights such as privacy and freedom from discrimination. It also reduces transparency in decision-making, leading to a lack of proper scrutiny and greater potential for abuse of power.
There are also risks to specific elements of the justice system: procedural flaws could lead to unfair trials, and complex cases that might establish important legal precedents could be managed out of the courts.
The report is authored by the Society's Technology & Law Public Policy Commission, which was created to explore the role of, and concerns about, the use of algorithms in the justice system. Among its principal recommendations are:
- a range of new mechanisms and institutional arrangements to improve oversight of algorithms in the criminal justice system;
- strengthening and clarifying the protections concerning algorithmic systems in part 3 of the Data Protection Act 2018;
- in relation to procurement, a requirement that algorithmic systems in the criminal justice system allow for "maximal control", amendment and public-facing transparency, and be tested and monitored for relevant human rights considerations;
- clear and explicit advance declaration of the lawful basis of all algorithmic systems in the criminal justice system;
- significant investment to support the ability of public bodies to understand the appropriateness of algorithmic systems and, where appropriate, how to deploy them responsibly.