AI Tools, Self-Represented Litigants, and the Future of Access to Justice

Image generated by ChatGPT

As artificial intelligence becomes part of everyday legal problem-solving, courts and legal institutions are facing a new reality. Self-represented litigants are increasingly turning to AI tools. They use them to understand procedures, complete forms, and navigate court processes. These tools sit alongside more familiar sources such as court websites, self-help centers, legal aid services, online searches, and advice from friends or family.

In other words, AI is becoming one element within a broader ecosystem of informal and institutional sources of legal assistance. For many court users, this is not simply a matter of convenience. Rather, it reflects the limited options available when professional legal assistance is out of reach.

That reality set the stage for a recent webinar titled AI Tools, Self-Represented Litigants, and the Future of Access to Justice, part of an ongoing series co-hosted by the National Center for State Courts (NCSC) and the Thomson Reuters Institute, which explores how artificial intelligence is reshaping courts, legal services, and access to justice. More broadly, the discussion reflects a growing recognition that AI is already embedded in everyday legal practices, whether courts are ready for it or not.

Rather than centering on technology for its own sake, the focus here is on how people are already using AI in legal contexts, the risks this use creates, and how courts and legal aid organizations can respond responsibly.

One thing is already clear: AI is here. The real question is how justice systems should engage with it.

 

When AI Becomes Part of Everyday Legal Problem-Solving

Self-represented litigants have always relied on a mix of resources to manage their legal issues. Court websites, self-help centers, legal aid programs, search engines, online forums, and informal advice have long been part of that landscape. AI tools are simply the newest addition. In that sense, AI does not replace existing pathways to information; instead, it becomes layered onto an already fragmented support ecosystem.

What makes this moment different is the way generative AI presents information. More specifically, a clear distinction has emerged between purpose-built tools created by courts or legal aid organizations and general-purpose systems. Tailored tools such as LIA (pronounced “Leah”) in North Carolina and Beagle Plus in British Columbia are designed with specific guardrails and instructions for court users. By contrast, general-purpose tools like ChatGPT or Gemini are built for broad use and do not reflect local court rules or procedures. Another example of a tailored system is a narrowly scoped probate chatbot in Alaska, designed specifically to guide users through probate processes using locally grounded information.
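
To make the distinction concrete, the sketch below shows one way a purpose-built tool could wrap a general model with jurisdiction-specific guardrails: a fixed system prompt that restricts answers to vetted local materials. This is a hypothetical illustration only; it does not describe how LIA, Beagle Plus, or the Alaska chatbot are actually implemented, and the call_llm function is a stand-in for whatever model API a project would use.

```python
# Hypothetical sketch: wrapping a general-purpose model with
# jurisdiction-specific guardrails, the way a purpose-built court
# chatbot might. Illustrative only; this does not describe how LIA,
# Beagle Plus, or the Alaska probate chatbot are actually built.

GUARDRAIL_PROMPT = """You are a court self-help assistant for {jurisdiction}.
- Answer ONLY from the approved local materials provided below.
- Explain procedure in plain language; never give legal advice or
  predict case outcomes.
- If the materials do not cover the question, say so and refer the
  user to the court's self-help center.

Approved local materials:
{local_materials}
"""


def call_llm(system_prompt: str, user_question: str) -> str:
    # Stand-in for a real model call; a deployment would invoke its
    # model provider's API here with this system prompt and question.
    return "(model response grounded in the system prompt)"


def answer(question: str, jurisdiction: str, local_materials: str) -> str:
    # Ground the model in vetted, locally maintained content rather
    # than letting it draw on aggregated or outdated training data.
    system = GUARDRAIL_PROMPT.format(
        jurisdiction=jurisdiction, local_materials=local_materials
    )
    return call_llm(system, question)


# Example: a probate question routed through locally grounded guardrails.
print(answer(
    "How do I start a probate case?",
    jurisdiction="Alaska",
    local_materials="(vetted probate self-help content goes here)",
))
```

The point of the sketch is simply that a tailored tool fixes the jurisdiction and the source material up front, which a general-purpose chatbot does not.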

These systems respond conversationally, appear confident, and can tailor answers to specific questions. As a result, for someone facing a legal problem without professional support, that combination can be both empowering and risky.

Legal issues are not low-stakes. Unlike asking for vacation ideas or drafting a grocery list, legal matters involve deadlines, procedural rules, and consequences that may not allow for second chances. Indeed, small mistakes can carry lasting impact.

This is why simply directing people to general-purpose AI tools is not a neutral choice. Access to justice is not about access to just any information, but to information that is relevant, accurate, complete, and up to date. In other words, access to justice presupposes a certain quality and reliability of legal guidance. At present, there is no guarantee that commercial AI systems consistently meet that standard across jurisdictions, case types, and court procedures.

Equally important is the skill required to use these tools effectively. Access to AI is not the same as knowing how to use it well. Asking the right questions, providing relevant context, and recognizing incomplete or misleading answers all require practice. Without those skills, even powerful tools can produce results that confuse rather than clarify. Using AI effectively is itself a learned skill, an insight that has, in turn, reshaped how institutions approach public guidance.

This moment has been compared to the “AOL phase” of the internet, an early, experimental stage in which technologies are powerful but uneven, and far from mature or fully reliable for complex legal needs. AI is also uniquely “seductive”: unlike static websites, users often cannot see what information is missing. Put differently, you don’t know what you don’t know, and confident responses can mask critical gaps.

 

A Shift in Focus: From Teaching AI Use to Supporting Human Conversations

Early efforts assumed that writing AI guidance for court users would be relatively straightforward. That optimism quickly proved misplaced: once real-world variability, legal nuance, and human vulnerability were taken into account, the task turned out to be far more complex. In practice, the challenge lay not only in the technology itself, but in the diversity of situations and needs that court users bring.

Legal questions vary by court, by location, and by case type. A generic prompt can easily generate answers based on aggregated or outdated information. Even within the same courthouse building, identical filings may be handled differently across divisions. Moreover, relying on AI to “sound like a lawyer” can obscure what actually matters in a case, omit critical facts, or introduce fabricated citations.

The approach therefore evolved away from teaching litigants how to prompt AI systems. This reorientation was informed in part by internal polling that revealed a striking gap: while roughly 39% of organizations reported having some form of AI guidance for staff, only about 7% had comparable guidance for litigants. Internal experimentation was seen as a safer starting point, one where organizations retained greater control over risk and quality, which helps explain why early efforts focused on staff rather than the public.

  • 39% of organizations → AI guidance for staff
  • 7% of organizations → comparable guidance for litigants

The focus is now on helping court staff, librarians, clerks, and self-help providers engage in informed conversations with court users about AI. In other words, the emphasis has shifted from technical instruction to relational support.

At its core, this guidance encourages:

  • Welcoming questions about AI and approaching conversations with curiosity rather than judgment
  • Acknowledging AI as a powerful tool, while clearly explaining its limits
  • Explaining why legal matters require extra caution
  • Educating users about risks such as inaccuracy, outdated information, or fabricated content
  • Directing people toward trusted court and legal aid resources
  • Explaining applicable court rules and policies
  • Offering opportunities to correct AI-related mistakes when possible
  • Modeling responsible AI use within court operations

The goal is not to prohibit AI outright, but to help people understand where it fits, where it falls short, and how to avoid harm. Ultimately, this reflects a broader shift toward safeguarding access to justice through informed, human-centered engagement.

 

Why Legal Cases Require Special Care

Legal problems depend on precise, context-specific information. To begin with, court location matters, since procedures and rules can change not only from state to state, but from one court to another.

At the same time, case type also matters. Filing an answer in a family matter is not the same as responding in a housing or probate case, even within the same building.

Equally important is clear communication. Judges and court staff need to understand a person’s situation and arguments as they truly are. However, AI systems may reframe narratives, emphasize the wrong details, or leave out relevant history in ways that distort the underlying issue.

Several risks consistently arise:

  • AI outputs are not always accurate or current
  • Systems can become confused or fabricate information
  • Tools are designed to be agreeable, not authoritative
  • AI cannot provide legal advice
  • Personal information should not be shared
  • Evidence should never be created or altered using AI

AI can be a starting point for exploration, but it should never be the final authority. Any information it produces must be verified against reliable sources.

 

Guiding People Back to Trusted Resources

Because AI use is unlikely to disappear, the emphasis is on mitigation rather than prohibition. Court users are encouraged to rely on resources developed by those responsible for the legal process itself: court websites, official forms, self-help materials, and services offered by legal aid organizations. These sources reflect current procedures and local requirements.

When possible, court staff can redirect people from generic AI outputs to tools built specifically for court users, including purpose-built chatbots, online guides, and self-help portals.

In some situations, AI-related errors can become teaching moments. They offer an opportunity to explain what went wrong and how to correct it before filings are finalized.

This approach rests on an important assumption: most self-represented litigants are doing their best in a complex system, often under stress. The comparison is not always between AI and professional legal advice. In practice, people already rely on Reddit, friends, or family, what might be described as an informal “Aunt Helen” model of support. The question then becomes: is AI better than Reddit? Better than Aunt Helen? The answer is often “probably,” but with caution. AI is so convincing and eager to please that bad information can feel especially authoritative.

Treating AI-generated problems as opportunities for education rather than punishment helps preserve both fairness and trust.

 

How Legal Aid Organizations Are Using AI to Extend Their Reach

Legal aid programs are experimenting with AI to serve more people while preserving human judgment. Rather than focusing solely on public-facing chatbots, many initiatives start internally. In Middle Tennessee expungement clinics, AI-assisted workflows reduced the time pro bono attorneys spent on paperwork from roughly one hour to just four minutes. The impact was not only speed, but also scale: the ability to help far more people with the same limited human resources.

Automating administrative processes reduces time spent on repetitive tasks and frees advocates to spend more time with clients. Internal AI tools help hotline staff access accurate information more quickly, which in turn improves referrals and brief legal assistance for people who will ultimately proceed on their own.

Some programs use AI in clinics to streamline routine paperwork, allowing attorneys to focus on listening to clients and identifying additional legal needs. Others are exploring higher-risk but innovative applications, such as tools in Ontario that help individuals draft personal narratives for protection-from-abuse or domestic violence petitions. In these contexts, AI is used to support storytelling, while careful human oversight is maintained.

Voice-based AI is also being explored for intake processes, gathering basic information such as household size or income. This shift keeps high-level legal expertise from being consumed by purely administrative questions, allowing experienced lawyers and advocates to focus their time on strategy, advice, and client engagement rather than routine data collection.

Across these examples, AI is treated as one tool among many, supporting human work rather than replacing it. Put simply, the technology is positioned as an enabler, not a substitute.

Importantly, these efforts are not viewed as substitutes for broader access to justice reforms. AI may help scale services, but it is not a silver bullet.

Taken together, these initiatives show how AI is being used in concrete and carefully bounded ways.

AI use case → What it enables

  • Internal workflows → Faster processing and greater service capacity
  • Administrative automation → More time for client-facing legal work
  • Clinic support tools → Focus on listening and identifying legal needs
  • Narrative drafting assistance → Helping users tell their stories, with human oversight
  • Voice-based intake → Efficient data collection without legal judgment
  • Overall approach → AI as support for human work, not a replacement

 

The Court Perspective: Potential, Patience, and Practical Limits

From the court side, purpose-built AI tools hold promise, particularly in roles similar to self-help facilitators. At least in theory, these systems could provide clear procedural information, guide users to appropriate forms, and connect them with non-legal resources relevant to their situation.

In practice, however, developing such tools is resource intensive. Extensive testing and refinement are required to achieve acceptable accuracy across the wide range of questions people bring to courts. As a result, expectations that these systems can be built quickly or effortlessly often collide with reality.

For now, many courts focus on steering users toward trusted information and setting clear expectations about what AI can and cannot do. At the same time, there is recognition that some people will rely on general AI tools regardless. This reality makes transparency about risks and limitations even more important.

Looking ahead, some see a future where AI could support more complex legal interactions. Even so, there is broad agreement that current systems are not yet capable of reliably handling the full nuance of legal disputes.

 

Wrestling With Responsibility and Risk

Balancing innovation with the principle of “do no harm” remains one of the hardest challenges. On the one hand, providing detailed instructions on AI use risks being perceived as endorsement. On the other, remaining silent leaves people to navigate powerful tools without guidance. Comparing AI to legal representation can also be misleading. Such framing creates a false choice. For many people, the realistic comparison is not AI versus an attorney, but AI versus no meaningful help at all.

This reality complicates traditional notions of accountability. Courts are accustomed to controlling the content on their websites. By contrast, AI introduces dynamic outputs that cannot be fully predicted. Even with extensive testing, perfect accuracy cannot be guaranteed.

A simple illustration captures this tension. During one presentation, an AI-generated slide showed characters with an abnormal number of fingers, an error noticed in real time. Even experienced professionals using AI carefully can encounter hallucinations and errors. The flaw itself was minor, but its implications were not: it reinforced the need for vigilance in legal contexts, where small inaccuracies can have outsized consequences.

Many institutions are therefore adopting cautious approaches, including internal experimentation first, clear disclosures, and continuous evaluation of AI tools once deployed. Throughout these efforts, the emphasis remains on supporting users while protecting the integrity of court processes.

 

Emerging Best Practices

Across jurisdictions and programs, a set of practical patterns is beginning to take shape:

  • Start with internal AI use to build institutional understanding
  • Keep humans in the loop, especially for medium- and high-risk tasks
  • Develop clear policies and procedures before launching public tools
  • Invest in foundational self-help content before layering AI on top
  • Engage communities and court users as tools evolve

AI alone cannot fill the access to justice gap. It works best when integrated into broader efforts that include clear information, human support, and thoughtful system design.

 

Moving Forward, Deliberately

AI is already part of the justice ecosystem. The task ahead, therefore, is not to decide whether it belongs, but to guide its use in ways that are responsible, transparent, and grounded in real human needs.

For self-represented litigants, AI may offer a first point of entry into complex legal systems. At the same time, for courts and legal aid organizations, it presents both opportunity and obligation. Progress will depend on collaboration, careful experimentation, and a willingness to adapt as technology evolves.

What emerges is not a single solution, but a shared commitment to moving forward thoughtfully, recognizing both the promise of AI and the profound responsibility that comes with deploying it in matters that shape people’s lives.

 

Learn More

For those interested in learning more, the full webinar recording and additional resources are available here:

📌 Webinar Recording: AI Tools, Self‑Represented Litigants, and the Future of Access to Justice

📌 Presentation Resources: Resource Folder

For more details on AI applications in legal assistance, visit:

🌍 NCSC AI Initiative

 
