'Using AI in the Justice System' was a well-received session on Day 2 of the Society’s 2024 Annual Conference.
Paul Mosson, Executive Director of Member Services & Engagement at the Society, introduced Ellen Lefley, a senior lawyer at JUSTICE, who provided an insightful overview of the complexities and implications of integrating AI into legal and judicial processes.
The session outlined both the opportunities and the risks presented by AI, particularly in enhancing access to justice and the impartiality of decision-making, and covered the following questions:
- What are the key opportunities for using AI in the justice system?
- How do we navigate the risks? Do we need new frameworks to help us? Whose responsibility is it?
- How important is the human element in decision-making? Could or should decision makers in the justice system (e.g. police, judges or juries) be replaced by AI?
JUSTICE's mission
JUSTICE is an independent law reform charity, founded in England and Wales in 1957; its Scottish branch was established in 2012.
JUSTICE operates as an independent body advising policymakers within the judiciary and the Ministry of Justice, advocating for a system that upholds the rule of law and human rights. The organisation's work spans reactive measures, such as responding to government consultations and legislative reviews, and proactive projects aimed at identifying and promoting necessary reforms. Ellen highlighted that “it’s within this proactive stream of work that we’re working on artificial intelligence, human rights and the law”. The current focus on AI forms part of an ongoing commitment to ensure the justice system evolves in a manner that benefits society as a whole.
The role of technology in justice
Technology's influence on the justice system is not new. JUSTICE has, for some time, “been engaged in questions about video technology and the way in which that could improve how our justice system does what it does”, Ellen commented. As many will recall, video technology came under scrutiny during the COVID-19 pandemic for its potential to improve procedural efficiency while posing new risks, such as digital exclusion. The AI work stream initiated in 2024 seeks to build on this by exploring both the potential and the perils of AI applications in legal contexts.
You can read more about JUSTICE's AI, human rights and the law guide here.
Defining AI
While not a computer scientist herself, Ellen stressed early in her presentation that if attendees were to take away one thing from her session, it should be “that there is no agreed upon, authoritative, ultimate and singular definition of AI”.
However, the Organisation for Economic Co-operation and Development (OECD) defines it as “a machine-based system that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. The UK government’s 2023 consultation also emphasised ‘autonomy’ and ‘adaptiveness’ as essential features, but Ellen noted, “the adaptivity of AI can make it difficult to explain... the intent or logic of the system”.
Key opportunities in AI for justice
Ellen’s discussion highlighted three primary areas where AI could positively impact the justice system:
- Equal and effective access to justice: The current system struggles with accessibility, notably because austerity measures have limited public funding. “There are examples of AI chatbots... delivering legal advice to those who can’t afford a lawyer”. High translation costs and complex procedures further disadvantage individuals, particularly litigants in person. AI tools such as chatbots powered by large language models (LLMs) and automatic translation services could ease these challenges by providing more accessible legal advice and reducing procedural barriers (a hypothetical sketch of such a chatbot follows this list).
- Independent, impartial and competent decision-making: Existing disparities in criminal justice outcomes, particularly affecting Black and ethnic minority groups, underscore the need for unbiased judicial processes. Ellen mentioned, “AI might be able to improve the training and make training more personalised for decision makers”. However, AI systems must be shown to operate effectively and not to entrench existing biases.
- Openness to scrutiny and public engagement: The current system is opaque in places, with unpublished judgments and limited public access to case details. AI could facilitate automatic transcription and data analysis, increasing transparency and enabling better scrutiny. “AI can help demystify everything that goes on in our justice system”. This would help ensure that justice is not only done but seen to be done.
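As an illustration only, the sketch below shows the general shape of the kind of LLM-backed triage chatbot mentioned in the first opportunity above. The model name, prompt wording and `triage` helper are assumptions made for this example, not a tool JUSTICE has built or endorsed, and any real deployment would need the accuracy safeguards discussed under the risks that follow.

```python
# A hypothetical sketch of an LLM-backed legal triage chatbot.
# Assumptions: the OpenAI Python SDK (v1+) and the "gpt-4o-mini" model are
# illustrative choices; the system prompt is invented for this example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a plain-language legal information assistant for Scotland. "
    "Explain procedures and signpost sources of help; do not advise on the "
    "merits of a case, and always recommend confirming with a qualified "
    "solicitor."
)

def triage(question: str) -> str:
    """Return a plain-language response to a user's procedural question."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(triage("How do I respond to a simple procedure claim?"))
```

Even in a sketch this simple, the system prompt carries the safeguard: it constrains the tool to procedural information and signposting rather than substantive legal advice, which is exactly the boundary the accuracy risks below put under pressure.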
Notable risks
Ellen emphasised that while AI offers significant promise, it carries inherent risks that must be navigated to prevent undermining the justice system's goals. “Will this make things worse rather than better?” is a question JUSTICE asks when assessing new AI initiatives. Examples of the risks to consider include:
- Accuracy and reliability: AI-powered legal advice, while potentially beneficial, could disseminate incorrect or misleading information. The risks are especially pronounced if users, particularly litigants in person, lack the expertise to verify AI outputs. “If the user... has fewer resources on which they can pull to assess that accuracy... that’s when we get into high-risk scenarios”.
- Bias in decision-making: The example of the COMPAS system, a risk assessment tool used to inform sentencing decisions in the United States, illustrated how AI can reinforce discriminatory outcomes due to biases embedded in training data. “The lack of impartiality... was just data-derived rather than explicitly human-derived, because there were societal biases embedded in it”. This highlighted that AI, if not carefully designed and regulated, could replicate and magnify societal biases (a minimal illustration of data-derived bias follows this list).
- Privacy concerns: The use of technologies such as facial recognition, already implemented by police forces in South Wales and London, raises significant privacy issues. “There’s definitely a risk to privacy rights” that needs to be carefully weighed.
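To make the data-derived mechanism concrete, the toy sketch below (an invented illustration with synthetic data, not the COMPAS model, whose internals are proprietary) shows how a classifier trained on historically biased labels can reproduce that bias even when the protected attribute is removed, because a correlated proxy feature carries it instead.

```python
# Toy illustration of "data-derived" bias: all data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)               # protected attribute (0 or 1)
postcode = group + rng.normal(0.0, 0.3, n)  # proxy feature correlated with group
risk = rng.normal(0.0, 1.0, n)              # genuine risk signal, independent of group

# Historical labels reflect both real risk AND group membership -- i.e. past
# discriminatory decisions are baked into the training data itself.
label = (risk + 0.8 * group + rng.normal(0.0, 0.5, n)) > 0.5

# Train WITHOUT the protected attribute: only the proxy and the risk signal.
X = np.column_stack([postcode, risk])
model = LogisticRegression().fit(X, label)

# Probe two individuals with identical true risk but group-typical proxy values.
probe = np.array([[0.0, 0.0],    # group-0-like proxy value
                  [1.0, 0.0]])   # group-1-like proxy value
print(model.predict_proba(probe)[:, 1])  # group 1 is scored as higher risk
```

No human wrote a discriminatory rule here; the model simply learned the pattern present in the labels, which is the sense in which Ellen described the bias as data-derived rather than explicitly human-derived.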
Mitigation strategies
To address these risks, JUSTICE proposes a tripartite framework for evaluating AI's role in the justice system. It is important to note that the framework is currently a draft, which JUSTICE hopes to propose to policymakers in due course. It currently includes:
- Outcome-focused approach: Policymakers should start with clearly defined objectives, aligning AI applications with the broader goals of ensuring equal access, impartial decision-making, and transparency.
- Risk assessment: Each AI initiative should be rigorously assessed for potential risks to these objectives. This includes evaluating the severity and likelihood of adverse impacts, along with appropriate safeguards.
- Responsible adaptation: Policymakers must be prepared to halt AI applications that present unacceptable risks. “Any response to AI innovation must be willing to stop... and say no” when necessary. A “tick-box” approach to risk assessment is insufficient; genuine willingness to forgo AI when necessary is essential to maintain public trust and uphold the justice system's integrity.
Broader reflections and engagement
Attendees at the session shared their examples of how they are currently utilising AI, noting applications such as diary management for improved efficiency. Questions were understandably raised about the use of tools like ChatGPT, especially in terms of confidentiality and data security, reflecting ongoing concerns about how legal professionals can leverage AI responsibly.
Ellen’s discussion reinforced that while AI holds great potential to revolutionise the justice system, it must be approached with caution and responsibility. “The key is ensuring that any technological integration upholds the rule of law and enhances the protection of human rights,” she stated. By adopting a thoughtful and measured approach, the legal community can harness AI's strengths while mitigating its risks, paving the way for a more accessible, fair, and transparent justice system. The conversation around AI and justice is far from over, and continued dialogue and vigilance will be essential in shaping a future where technology serves as an ally, not an obstacle, to justice.
You can find out more about JUSTICE and the Scottish branch here.
The Society, in partnership with Wordsmith (the 2024 Annual Conference main sponsor), recently launched its AI Guide, which you can access here.
Written by Rebecca Morgan, Editor of the Journal, Law Society of Scotland