Welcome to Part 2 of “AI in the Courts – 2024.” Last month’s post[i] discussed how artificial intelligence (AI) is an accelerating phenomenon and has already become a useful tool of court administration. As Chief Justice Roberts of the U.S. Supreme Court wrote in his 2023 end-of-year report, AI is here and has the potential to greatly enhance public access and court services, thereby increasing public trust and confidence.

Now, let’s cover how AI is currently being used by courts; probable future applications (where we are headed); things to be aware of and concerned about; and key issues for courts to consider in preparing for an AI-enabled future.

Current AI “use cases”:

  • Judicial work, such as legal research, analysis of documents, as well as drafting orders and opinions.
  • Conversational chatbots used by several courts for interacting with the public (especially self-represented litigants) and internally among court staff.  Such systems respond to questions and provide focused information, use avatars in multiple languages to enhance the experience, and generate forms.[ii]
  • Enhanced online dispute resolution (ODR) applications.
  • Case management systems (CMS) are being enhanced with AI to classify and sort cases and documents (e.g., for differentiated case management) and to extract data for analysis.
  • CMS also are using AI to integrate bots that perform case processing and management tasks such as data entry, docketing, form generation, scheduling, setting review dates, and resolving data-quality issues.
  • Translation of court websites, FAQs, and documents into the many languages of potential litigants.

Future applications:

  • Courts can use AI to enhance internal administrative areas: human resources (e.g., recruitment and selection); finance and budget management; jury management; and information technology (e.g., cybersecurity and application development).
  • Court data analytics can be enhanced to provide new insights, with the ability to quickly review large amounts of data and generate results for review. For instance, AI could help refine pretrial release algorithms and sentencing guidelines.
  • New and better tools to help court employees work more efficiently as AI systems gain capabilities (e.g., in CMS and ODR).  Redacting documents and generating needed draft orders are good examples.
  • Enhanced access to public records and information, including summaries of records, via better channels for serving litigants, attorneys, and the general public. 
  • Expanded voice-response systems in multiple languages will be common.
  • Sophisticated voice-to-text systems will assist self-represented litigants and provide enhanced access for disabled persons. Such systems may also be capable of generating the written record of court proceedings and orders after hearings.

Issues to be aware of, concerned about, and address:

  • Ethical and legal issues, such as copyright law and confidentiality of data, are challenging. Governments are moving to adopt overarching AI legal regulations, too (for example, the EU’s AI Act and President Biden’s 2023 Executive Order).[iii]
  • Ensuring robust cybersecurity against attack vulnerabilities is needed.
  • Safeguarding against unauthorized third-party data extraction is an often-hidden problem.
  • Detecting and countering deepfakes and shallowfakes (such as voice cloning and image modification), both in filings and in hearings.
  • Carefully planning and budgeting for AI implementation and support costs.
  • Assessing the validity of the training datasets used by large language models (LLMs) when using generative AI applications. Do the training datasets used introduce unwanted bias? Are the generative AI systems prone to “hallucinations?”
  • Vendors will be taking a leading role in providing AI functionality, so courts should proactively define what they want and will ultimately use. Courts should also require vendors to disclose any use of AI in their products.
  • Over-reliance on vendors to develop products, risking bias and shortcomings when potential users and affected communities are not consulted during development.

What should courts do now to prepare for their AI future?

  • Ensure robust court governance over AI implementation. For instance, enact clear policies that define appropriate and disallowed uses.
  • Educate and train judges, managers, and staff about AI in the courts.
  • Implement user-centric and inclusive design.
  • Reengineer processes before implementing AI.
  • Be transparent about the use of AI systems (internally and externally).
  • Cultivate a future-fit workforce and workplace, ready for, and comfortable with, AI tools.
  • Have a data-driven mindset, ensuring that only trustworthy, court-vetted foundational training datasets are used in AI applications.
  • Include “human in the loop” components in AI applications.  Make sure there are routine reviews/audits and evaluations of system performance.

The use of AI is a complex challenge for the courts. These blog posts have attempted to summarize a wealth of information that is coming at the courts from many angles — every day there are new articles, podcasts, research papers, and government actions. At the recent Arizona AI Summit there was some great advice on what court leaders need to do to meet the challenge of AI:[iv]

Embrace – Educate – Evolve – Empower

Embrace: Welcome the opportunity presented by AI and GAI. Understand these technologies as powerful tools that can uplift modern legal practice and access to justice.

Educate: Adopt an active approach to learning about AI advancements. Get hands-on by testing these technologies early and often, increasing familiarity and encouraging integration within workflows.

Evolve: Make it a priority to stay ahead of the curve. Go beyond passive learning to apply the ‘trust but verify’ principle when using AI tools. Understand that no technology is perfect, and validation of AI outputs remains crucial.

Empower: Empowering users is imperative to successfully integrating artificial intelligence into legal service delivery. Training on effective use and interpreting outputs to make strategic choices empowers users to feel ownership and engagement, foster trust, overcome resistance and fear, and promote the ethical use of AI.

Court leaders who heed this advice will be able to take their courts well into the AI world we now live in.

NOTE:  The January blog post (Part One) listed many useful resources to become acquainted with AI.  The latest Court Leaders Advantage podcast has another great list of resources, including papers from the recent AI Summit held in Arizona.[v]  The upcoming Court Leaders Advantage podcast later this month will have a list of further resources.  Finally, a team of experts is working hard to produce a new digital AI Guide which will be published by the National Association for Court Management (NACM) later this year.  This should be a definitive resource and will be updated regularly.


[i] AI in the Courts – 2024 (Part One) – Court Leader; see also Oops! Correcting non-functioning links in yesterday’s “AI in the Courts” blog post – Court Leader

[ii] An excellent resource on chatbots: NCSC offers guidance about using court chatbots to expand access to justice | NCSC

[iii] Ethics:  ai-and-legal-ethics-final-white-paper.pdf (wordpress.com)

[iv] Artificial Intelligence and the Practice of Law, Vaughn and Stefko (2023), page 10: ai-and-practice-of-law-final-white-paper.pdf (wordpress.com)

[v] Artificial Intelligence and the Courts – Omen or Opportunity? – Court Leader
