Rethinking Unauthorized Practice of Law in the Age of AI

As artificial intelligence becomes increasingly present in legal services and court processes, courts are facing questions that go beyond efficiency or speed. The growing use of AI tools raises new concerns about access to justice, the design of court systems, and regulatory frameworks such as unauthorized practice of law. These issues were at the heart of the webinar "Modernizing Unauthorized Practice of Law Regulations to Embrace AI-Driven Solutions and Improve Access to Justice," the 17th and final session of the 2025 webinar series co-hosted by the National Center for State Courts and the Thomson Reuters Institute.

The discussion explored how existing rules on unauthorized practice of law, originally designed to regulate human actors, are increasingly challenged by the widespread availability of AI-driven legal tools. As courts see a growing number of self-represented litigants and generative AI becomes part of everyday legal problem-solving, the question is no longer whether these technologies are being used, but how justice systems should respond. The webinar highlighted the practical tensions this shift creates, from consumer protection and court integrity to innovation and regulatory uncertainty.

One of the clearest takeaways from the session was that the real issue is not the technology itself, but how people interact with legal systems. AI-enabled tools are already helping address gaps in access to justice, while existing unauthorized practice of law rules struggle to keep pace. Rather than pointing to a single solution, the session outlined several possible paths forward, encouraging courts, regulators, and legal institutions to move deliberately and collaboratively in response to realities already unfolding on the ground.

To understand why these questions have become so urgent, it helps to look more closely at the broader access to justice context in which they are emerging.

 

Access to Justice as the Underlying Context

Discussions about artificial intelligence and unauthorized practice of law cannot be separated from the broader access to justice crisis. Across jurisdictions, a large proportion of the population encounters legal problems, yet many of these issues remain unresolved. For low-income households in particular, legal needs often accumulate while meaningful support remains out of reach.

In the United States, this crisis is reflected in widely cited access to justice data. The country ranks 107th out of 142 jurisdictions for civil justice affordability in the World Justice Project Index. Over a four-year period, approximately 66 percent of the population experienced at least one legal problem, yet fewer than half of those issues were resolved. These figures highlight the scale of unmet legal need that courts and legal institutions continue to face.

At the same time, court systems were largely designed on the assumption that parties would be represented by lawyers. Procedures, forms, and legal language continue to reflect that model. Today, however, self-representation has become common in many areas of civil justice. Many individuals appearing in court are not simply “self-represented” by choice, but effectively unrepresented, navigating systems that were never designed for them to use on their own.

This disconnect between institutional design and lived reality forms the backdrop against which AI-enabled legal tools are increasingly being adopted. For many people, these tools are not a substitute for professional legal representation, but the only form of assistance realistically available. Against that backdrop, the regulatory framework governing unauthorized practice of law becomes harder to ignore.

 

A Regulatory Framework Shaped for a Different Reality

Unauthorized practice of law sits at the center of this debate. One of its defining features is the absence of a uniform definition. Rules governing unauthorized practice vary significantly from one jurisdiction to another, both in scope and in enforcement. At their core, these rules generally prohibit non-lawyers from representing others, providing legal advice, or holding themselves out as licensed attorneys, even though the precise contours of these prohibitions differ widely.

Some states, such as Alaska, adopt relatively narrow and permissive approaches, focusing primarily on preventing misrepresentation and direct advocacy. Others, such as Georgia or Texas, take a far broader and more restrictive view, under which a wide range of activities may be captured as legal advice. This contrast highlights how fragmented the regulatory landscape has become. In some jurisdictions, this fragmentation has led courts or regulators to rely on disclaimers or warnings to preserve a distinction between general legal information and prohibited legal advice.

Despite these differences, most unauthorized practice frameworks share a common underlying assumption: that legal services are delivered by human professionals. That assumption is now under strain. AI-driven tools do not claim to be lawyers, yet they can provide guidance that appears tailored to individual circumstances. This raises uncertainty about where existing rules apply and where they fall short, particularly when those rules were never designed to address automated systems.

Legal challenges involving platforms such as Upsolve, which raised First Amendment objections to certain applications of unauthorized practice rules, illustrate how these boundaries are already being tested. Earlier regulatory responses involving LegalZoom in North Carolina further show that, in some cases, these boundaries are being reconsidered. This uncertainty becomes even harder to manage when AI tools deliver information in a way that looks and feels like individualized advice.

 

When Information Begins to Look Like Advice

Generative AI has transformed how people access and engage with legal information. These tools are widely available, conversational in nature, and capable of responding to specific fact patterns. What once appeared to be a relatively clear distinction between legal information and legal advice has become increasingly difficult to maintain.

Courts are already seeing the practical effects of this shift. Litigants arrive with AI-generated drafts, arguments, or citations, sometimes relying heavily on these tools to determine how to proceed. While AI itself is not a legal person and cannot practice law, the corporations that design and deploy these tools, including companies such as OpenAI and Anthropic, are legal persons subject to regulation and remain accountable under existing legal frameworks.

At the same time, uncertainty surrounding unauthorized practice rules can discourage innovation. Developers may limit functionality or withdraw from certain jurisdictions not because harm has occurred, but because legal boundaries are unclear. In practice, even the receipt of a warning letter alleging unauthorized practice can be enough to prompt an immediate exit from a market, illustrating how regulatory ambiguity can lead to self-censorship and slow innovation. Against that backdrop, courts are also dealing with a more immediate question: how should AI-generated content be handled inside the courtroom itself?

 

How Courts Are Experiencing the Change

From the court’s perspective, the growing presence of AI-generated material introduces new pressures but also familiar challenges. Concerns include the accuracy of filings, the risk of fabricated or unreliable authorities, and the potential impact on efficiency and courtroom management.

There is broad agreement, however, that courts already possess the authority and tools needed to address these issues. Judges routinely assess reliability, enforce procedural rules, and maintain the integrity of proceedings. The use of AI does not diminish that role. Instead, it adds complexity to questions courts have long managed. Responding to these developments does not necessarily require new legislative authority, as the management of procedural integrity, courtroom conduct, and unreliable or disruptive practices already falls within powers judges exercise as part of their ordinary judicial function.

This leads to a practical issue: should courts acknowledge the reality of AI use by offering guidance to litigants on responsible practices, rather than treating these tools as invisible or prohibited by default? Silence, in this context, is no longer a viable option. While the reform of unauthorized practice of law frameworks is likely to unfold over time, the regulation of AI use within the courtroom has already become an immediate and practical responsibility for judges.

At the same time, AI is not only reshaping expectations for litigants. The professional standard of care for licensed attorneys may also be evolving, with growing recognition that competent legal practice can include the responsible use of AI tools to support legal research, accuracy, and efficiency. As AI becomes more deeply embedded in legal work, a failure to engage with these tools could, over time, be viewed as falling short of emerging standards of practice. These developments help explain why the conversation is increasingly turning toward reform options rather than temporary fixes.

 

Three Possible Paths Forward

Several approaches can be considered when examining how rules on unauthorized practice of law might evolve in response to AI-driven legal tools. These paths reflect different ways of balancing oversight, innovation, and consumer protection, and they may resonate differently depending on institutional capacity and regulatory priorities.

One possible path would involve explicitly permitting certain AI legal tools under defined conditions. The objective would be to move these tools out of legal grey zones and into a framework that offers clarity and accountability. In practice, this could involve measures such as:

  • registering AI legal tools with a public authority
  • ensuring clear and ongoing disclosures to users about the nature and limits of the tool
  • implementing quality control, testing, and data protection measures
  • providing mechanisms to report errors or potential risks

This approach has the advantage of strengthening transparency and consumer protection. At the same time, it raises practical questions about the administrative burden required to monitor compliance, particularly in jurisdictions with limited resources.

A second path would rely on regulatory sandboxes. Under this model, AI legal tools could operate within a controlled environment for a limited period, subject to defined conditions and oversight. The purpose would be to allow experimentation while collecting data, observing real-world effects, and assessing risks before committing to broader reforms. This approach offers flexibility and learning opportunities, though it also depends on ongoing coordination and institutional capacity.

A third path would refocus unauthorized practice rules on human conduct. Rather than extending these rules to AI tools, enforcement would target individuals who hold themselves out as lawyers or represent others. AI systems would fall outside the scope of unauthorized practice so long as they do not claim legal authority. This option provides greater clarity for developers and courts, and may reduce the chilling effect on innovation. At the same time, it assumes that consumer protection concerns would be addressed through alternative legal mechanisms.

These approaches can be summarized as follows:

  • Path 1: Explicit enablement of certain AI legal tools under defined conditions. Key benefit: transparency and stronger consumer protection.
  • Path 2: Use of regulatory sandboxes to allow controlled experimentation. Key benefit: data-driven learning and regulatory flexibility.
  • Path 3: Refocusing unauthorized practice rules on human conduct. Key benefit: clear boundaries and a more innovation-friendly environment.

 

Regulation, Innovation, and Responsibility

Thinking about unauthorized practice of law in an AI context brings broader regulatory tensions into focus. Artificial intelligence evolves quickly, often faster than legal frameworks can realistically adapt. Overly rigid rules risk freezing innovation in an environment where tools and uses change rapidly from one year to the next. At the same time, the absence of guidance creates its own set of problems, leaving courts, developers, and users to navigate uncertainty without clear reference points.

The challenge of regulating AI effectively has been compared to “trying to nail spaghetti to the wall,” a vivid way of capturing how quickly these tools evolve and how difficult it is for static rules to keep pace with their changing capabilities.

This tension highlights the limits of purely binary approaches. Treating regulation as the only safeguard may slow down developments that could meaningfully improve access to justice. Yet allowing AI tools to operate entirely unchecked can place a heavy burden on individuals who may already be vulnerable.

In this context, it is important to reconsider the baseline comparison. For many individuals, AI tools are not an alternative to legal representation, but the only form of assistance realistically available. The relevant comparison is therefore not between AI and a lawyer, but between AI and the absence of any help at all.

Questions of accountability naturally arise from this context. Lawyers operate under professional obligations and are subject to malpractice regimes and disciplinary mechanisms. AI tools do not fit into these structures.

Seen this way, the issue is less about replicating professional liability models and more about managing risk in a landscape where support is unevenly distributed. Consumer protection law, transparency requirements, and regulatory oversight outside unauthorized practice frameworks offer potential ways to address harm without forcing AI tools into categories designed for human professionals. Collaboration between courts, regulators, and technology developers therefore becomes essential, not as a substitute for regulation, but as a complement that allows standards and safeguards to evolve alongside the technology itself.

 

Moving Forward Deliberately

AI is already embedded in the legal ecosystem. The central question is no longer whether these tools belong in that space, but how institutions should respond to their growing role.

Access to justice pressures continue to intensify. AI tools are already filling gaps. Unauthorized practice frameworks were not built for this reality. Moving forward will require deliberate and collaborative action. Courts and regulators have an opportunity to shape this transition thoughtfully, balancing innovation and protection in a way that reflects both the risks and the realities of today’s justice system. In the meantime, practical initiatives are already emerging to help courts and litigants navigate this shift responsibly.

 

Ongoing Initiatives

The National Center for State Courts AI Sandbox now enables users to choose among multiple generative AI models, including GPT-based and Claude-based tools. A refined library of prompts is also available, and a set of recommendations designed to support responsible use of AI by litigants is currently being developed.

 

Learn More

📌 Webinar Recording: Modernizing Unauthorized Practice of Law Regulations to Embrace AI-Driven Solutions and Improve Access to Justice

For more details on AI applications in legal assistance, visit:

🌍 NCSC AI Initiative
