Human-First, AI-Forward: Reshaping the Court Workforce


State courts are facing a convergence of challenges. Experienced staff are leaving the workforce, while courts struggle with shortages, mounting caseloads, and growing backlogs. At the same time, artificial intelligence is rapidly emerging, offering both a major challenge and a rare opportunity to rethink how justice is delivered.

To explore these dynamics, the National Center for State Courts (NCSC) and the Thomson Reuters Institute co-hosted a webinar on June 18, 2025, titled “Exploring AI and Generational Shifts in the Court Workforce,” as part of the AI Policy Consortium series. This event marked the 11th webinar in the series. The session brought together experts to analyze insights from the Staffing, Operations, and Technology: A 2025 Survey of State Courts, and to explore how courts can strategically leverage technology to boost efficiency, bridge generational divides, and expand access to justice.

 

A System Under Pressure

Recent survey results show that courts are under serious pressure. Nearly 70% reported staffing shortages in the past year, and 61% expect even more in the year ahead. On top of that, caseloads are rising and becoming more complex, causing serious delays. About 78% of courts report hearing delays every week.

This pressure is directly affecting the workforce:

  • A large portion of court staff work long hours: 25% put in 46–50 hours a week, and 13% work even more than that.
  • Despite the extra hours, more than half (52%) say they still don’t have enough time to properly handle their duties.

The human cost of all this is significant. Many staff report feeling stressed and overwhelmed by a crushing workload.

 

AI in Courts: Curious but Cautious

In the face of these challenges, generative AI has emerged as the number one trend on the minds of court leaders, ranking higher than the political climate or economic recession. Yet, despite 91% of them believing AI will have a significant impact, there is no rush to adopt it. This hesitation stems from a mix of real, practical barriers that courts cannot ignore.

 

Barriers to Implementation

The slow pace of adoption is not without reason. Three primary obstacles stand in the way of widespread AI integration in courts:

  • Fear and Cultural Resistance: There is a great deal of fear in the workforce about implementing new technology. Concerns range from the security risks of AI systems to a potential over-reliance on technology at the expense of human skill.
  • Lack of Resources: Most courts lack the resources to dedicate to AI projects. Staff are already fully deployed managing daily operations, and many systems have limited technology support or funding.
  • Ethical Concerns: Court leaders remain cautious due to valid concerns about confidentiality, data security, and potential bias in AI outputs. Preserving public trust and confidence is paramount, and any integration must be approached with care and deliberation.

In Los Angeles, for example, a severe shortage of court reporters means that hundreds of thousands of hearings go without a verbatim record in courtrooms where electronic recording is not permitted. Even after hiring 58 court reporters in one year, the court remains understaffed. Similar shortages affect interpreter roles. These challenges highlight why AI tools, such as automated transcription and translation, are increasingly seen not as replacements, but as essential supplements to services that courts struggle to fully staff.

 

Bridging the Generational Divide

The departure of Baby Boomers and Gen X professionals from the workforce is accelerating a loss of institutional knowledge within the courts. In a large system like the Los Angeles Superior Court, for example, staff with 35 to 45 years of service are nearing retirement, creating a significant knowledge gap that is difficult to replace. At the same time, courts are welcoming younger generations, such as Millennials and Gen Z, who bring new expectations and digital skills.

Yet, assumptions about digital fluency do not always hold. Younger staff may be familiar with smartphones, but they often lack formal training in using generative AI for professional work. In many cases, early-career professionals are as confused by AI’s complexity and unpredictability as their more senior colleagues.

This highlights the importance of cross-generational learning. Creating a culture where staff of all ages can learn together not only promotes collaboration but also helps preserve institutional continuity.

 

Training as the Cornerstone of Adoption

Education and training remain the single most effective strategy for overcoming these barriers. Training helps demystify the technology, giving personnel a foundational understanding of how AI works, what it is capable of, and just as importantly, what it cannot do. This knowledge is the best antidote to fear and is essential for ensuring staff use these new tools responsibly.

This need for training applies to everyone. It challenges the myth of the “digital native,” showing that even younger generations are often unfamiliar with generative AI and its complexities. It is essential to educate across all generations together, creating a culture where staff can learn, be creative, and build cohesion.

Despite its importance, only 25% of court systems currently offer any form of AI training. One example comes from New York, where a summer training institute for judges now includes a dedicated session on generative AI. The goal is to build foundational understanding of what AI is, and what it is not, in order to promote informed, responsible use among judicial officers. In another sign of growing institutional awareness, a New York legislator recently questioned whether the court system’s $5 million AI budget request was sufficient, underscoring the increasing policy attention being directed at technological innovation in the judiciary.

 

A Human-First, AI-Forward Strategy

A guiding philosophy for implementation is to be human-first, but AI-forward. This approach insists that people must be at the center of the process, designing how technology is used rather than being led by it. By setting clear and transparent guardrails, courts can create an environment where staff feel safe to explore, be curious, and think creatively, all within a secure framework.

 

Practical First Steps for AI Integration

AI integration can start with practical tools that address everyday challenges. Some courts are already using AI to:

  •  Provide internal knowledge bases to help staff navigate thousands of policies and procedures. In Los Angeles, for example, AI-powered tools present procedural information in a conversational format, helping staff quickly find the answers they need.
  •  Assist the public through website chatbots that answer questions and offer legal information. LA’s new court website, launching in July 2025, will feature a generative AI chatbot trained exclusively on verified legal content from the LA Superior Court and the California Judicial Council.
  •  Automate transactions, such as enabling the public to reschedule jury duty via a chatbot. One LA-based tool allows users to complete this process online through a conversational interface that interacts directly with the court’s backend system, eliminating the need for phone calls.

 

Building Trust with Ethical Guardrails

Ultimately, the success of AI in courts depends on trust, and certain ethical lines must not be crossed. It is considered unethical to use AI in ways that take over a judge’s role in making decisions.

Confidentiality is another major concern. Courts must use private, secure AI systems rather than public tools for sensitive information. Most importantly, AI isn’t meant to replace human workers. It’s seen as a way to fill essential gaps, especially in areas where hiring is difficult, like for court reporters or interpreters.

 

Conclusion: A Path Forward

The challenges facing state courts are significant, but not impossible to overcome. Moving forward means acknowledging real barriers such as fear, lack of resources, and ethical risks. By making a strong investment in staff training and embracing a human-first approach, courts can start to use AI in ways that ease workloads, address staffing shortages, and improve access to justice in meaningful ways.

 

Learn More

For those interested in learning more, the full webinar recording and additional resources are available here:

📌 Webinar Recording: Exploring the Effects of AI and Generational Shifts in the Workforce

📌 Presentation Resources: Resource Folder

For more details on AI applications in legal assistance, visit:
🌍 NCSC AI Initiative

This content was last updated on 01/09/2026 at 9:52 a.m.