
Image générée avec ChatGPT.
Artificial intelligence is no longer a distant or theoretical concept for the legal profession. It is rapidly becoming part of everyday legal practice. Lawyers increasingly rely on AI tools to review documents, summarize materials, draft pleadings, and conduct legal research. Courts, for their part, are increasingly receiving AI-generated submissions, while clients are also beginning to experiment with these tools before consulting counsel.
These developments formed the backdrop of a recent webinar titled Navigating the AI Landscape: Best Practices for Legal Professionals, part of the “AI and Courts” webinar series co-hosted by the National Center for State Courts (NCSC) and the Thomson Reuters Institute. This series brings together court leaders, technologists, and legal professionals to explore how artificial intelligence is reshaping legal practice and court operations.
Rather than focusing on hypothetical futures, the discussion examined the realities already unfolding in practice. Where is AI currently being used? What new risks are emerging? And how should professional and ethical obligations be interpreted in this rapidly evolving environment?
Throughout the discussion, one central message stood out clearly: AI can be powerful, but its use requires careful judgment and professional discipline.
AI Is Already Reshaping Legal Work
Artificial intelligence is already influencing legal practice in both significant and incremental ways. In high-volume environments such as e-discovery, AI helps lawyers review vast collections of documents far more efficiently. In research contexts, AI tools can surface relevant cases and statutes faster than traditional search methods. Contract analysis, predictive analytics, and automated filing systems are also becoming more widespread in legal practice.
Yet some of the most immediate benefits are practical rather than revolutionary. AI can flag inconsistencies in a draft, suggest edits, identify potential gaps, and speed up the preparation of first drafts. It can generate deposition questions or help structure interrogatories. It can also summarize lengthy records, making complex materials easier to navigate.
AI is also beginning to play a role in transcription services and multilingual environments, particularly in courts that face shortages of interpreters or transcription professionals.
At the same time, lawyers and courts are not the only ones adopting these tools. Clients are increasingly using AI before seeking legal advice, and self-represented litigants are submitting filings that have been partially generated with AI tools. This growing external use is already beginning to affect courtroom dynamics.
While generative AI may expand access to legal information, it also introduces important risks. Self-represented litigants may unknowingly submit AI-generated filings that contain hallucinated citations or inaccurate legal arguments. Courts have already imposed sanctions in several instances where fabricated authorities appeared in submissions. Individuals who rely on these tools without legal training may therefore face serious legal consequences. In short, AI is no longer approaching the legal profession. It has already become part of it.
Relieving the Back Office Burden
AI’s value is not limited to legal reasoning. Many practitioners spend significant time on billing, time entries, note-taking, and administrative documentation. These tasks are necessary but often divert attention from substantive legal work.
AI tools can help automate or streamline some of these processes, reducing repetitive administrative effort. In practice, this may free lawyers to focus more on analysis, advocacy, and client strategy.
Efficiency, in this context, is not about replacing professional judgment. Rather, it is about freeing time so that judgment can be applied where it matters most.
Beyond Generative AI: The Rise of Agentic Systems
Much of the public conversation focuses on generative AI systems that draft text in response to prompts. However, the discussion also suggested that the next phase may be the development of agentic AI.
Agentic AI refers to systems capable of performing structured tasks across an entire workflow, rather than simply generating isolated outputs. In legal environments, this could involve multiple AI agents handling different stages of the same process. One agent conducts research. Another drafts. A third verifies citations. Each reviews the others’ work before it reaches human review.
This layered approach offers one possible strategy for reducing hallucinations and bias. Rather than relying on a single output, organizations may develop internal frameworks in which different AI agents cross-validate results before the final human decision-maker steps in.
Agentic systems are not theoretical. They are already beginning to appear, and their influence on legal workflows may grow quickly in the coming years.
Testing Legal Arguments Before They Reach the Courtroom
One particularly innovative possibility extends beyond drafting and research. Today, litigation teams sometimes use mock juries to test arguments before trial. In the future, AI could serve as a complementary analytical tool, helping lawyers assess how certain arguments or pieces of evidence might be perceived by a judge or jury.
Such systems could simulate reactions, highlight vulnerabilities in reasoning, or identify alternative persuasive framings. They would not replace human strategy, but could provide an additional analytical lens before high-stakes proceedings.
AI’s impact, therefore, may extend beyond efficiency to strategic preparation.
The Hallucination Problem
No discussion of AI in law can avoid the problem of hallucinations. Hallucinations occur when AI systems generate information that appears confident and authoritative but is factually incorrect. These errors may take several forms, including:
- Fabricated case citations
- Distorted facts
- Unsupported legal propositions
- Incorrect procedural guidance
- Blended or confused legal concepts across jurisdictions
Generative AI systems predict likely word sequences based on training data. They are optimized for plausibility, not accuracy. When context is incomplete or the law is evolving, they may produce answers that sound persuasive but lack a factual foundation.
Even more subtle than fabricated cases is omission. An AI tool may cite valid authority yet fail to identify controlling precedent, or quote accurately while removing essential context. For that reason, the answer is not to avoid AI, but to verify its outputs carefully.
Professional Responsibility Does Not Change
Ethical obligations remain fully intact. Under the ABA Model Rules of Professional Conduct, lawyers must maintain competence, diligence, and candor toward the tribunal. They are also required to supervise subordinate lawyers and nonlawyer assistance appropriately, and the rules governing professional misconduct continue to apply.
Technology does not dilute these duties. If anything, it reinforces them. As a result, the use of AI tools requires careful verification. In practice, this means confirming:
- The accuracy of case names and citations
- The correctness of legal propositions
- The integrity of quotations in their full context
- The reliability of factual statements
The guiding principle is straightforward: AI outputs should never be relied upon without verification. Failure to do so may result in sanctions, including monetary penalties, fee awards, referrals to disciplinary bodies, or mandatory continuing education.
Beyond verification, another important safeguard is maintaining a “human in the loop.” Artificial intelligence can assist with research, drafting, or analysis, but legal decision-making ultimately requires professional judgment. Lawyers are trained to interpret nuance, assess credibility, and apply contextual reasoning developed through years of experience. These forms of judgment remain fundamentally human responsibilities.
Buy a Tool or Build One?
AI adoption is not merely a technical question. It is also a strategic one. Law firms and legal departments must decide whether to license commercial AI tools available on the market or invest in building proprietary in-house systems tailored to their specific practice areas.
Commercial tools offer speed and convenience. In contrast, custom systems may offer differentiation and strategic advantage. Both paths require financial investment, governance, and ongoing oversight.
This build-versus-buy decision ultimately reflects broader business strategy considerations, not just technological preference.
New Cybersecurity Risks and IP Concerns
Traditional cybersecurity concerns do not disappear with AI; rather, they evolve. One example is prompt injection, which refers to attempts to manipulate AI systems through carefully structured inputs designed to override safeguards or alter intended outputs. Another concern is data poisoning, which involves introducing flawed or malicious data that can gradually distort a model’s behavior over time.
Because of these vulnerabilities, AI systems require continuous monitoring and structured governance. They cannot simply be deployed and then ignored.
At the same time, intellectual property concerns remain significant. Ongoing litigation has raised questions about whether certain AI models were trained on protected data without authorization. Organizations must therefore consider not only the risks associated with AI outputs, but also the legality of the data used to train these systems.
In this context, structured risk-assessment frameworks – such as guidance developed by the National Institute of Standards and Technology – provide helpful models for managing these concerns.
AI in the Courtroom: The Indiana Transcription Model and Its Limits
A notable example comes from Indiana, where courts faced a shortage of court reporters. The program uses AI-assisted transcription capable of handling multilingual proceedings and generating searchable records in real time.
Human reviewers remain involved, and corrections are documented transparently. The system reportedly captures overlapping speech more effectively than traditional methods and allows testimony to be retrieved quickly from the transcript. Together, these improvements illustrate how AI-assisted transcription can support court operations.
At the same time, important limitations remain. The NCSC does not recommend the use of AI for live simultaneous interpretation in courtroom proceedings. While AI may assist with translating documents – provided the results are reviewed by certified human interpreters – real-time courtroom interpretation involves nuance and ethical judgment that current systems cannot reliably guarantee.
In this context, innovation must not outpace reliability in environments where accuracy directly affects legal rights and outcomes.
Practical Tools for Managing Risk
Several practical resources can help legal professionals manage AI-related risks. One example is a sanctions chart developed specifically for AI-related hallucination cases, which compiles federal and state decisions and details sanctionable conduct, mitigating and aggravating factors, and imposed penalties. Available at premierai.us, this resource offers a useful overview of how courts have responded to these issues.
Additional guidance can also be found in the AI Risk Management Framework developed by the National Institute of Standards and Technology, which addresses ethical, operational, data, and cybersecurity risks.
The National Center for State Courts has likewise created an AI Sandbox, a controlled environment where judges, lawyers, and court staff can safely experiment with emerging tools and better understand their capabilities and limits.
Together, these initiatives illustrate an important principle: responsible AI adoption requires structured governance rather than informal experimentation.
Two Practical Steps Legal Professionals Should Take Now
Beyond broader governance strategies, two practical steps stand out.
1. Update retainer agreements
Lawyers may wish to include clauses in their written retainer agreements requiring clients to disclose any use of generative AI before or during the engagement. This helps ensure transparency if AI-generated materials are later submitted for legal review.
A similar requirement may also be appropriate in agreements with outside experts retained in litigation. If an expert relies on AI tools to form opinions, that fact should be disclosed early, particularly where admissibility and evidentiary integrity may be at issue.
2. Review privacy settings carefully
Even when using enterprise AI tools, legal professionals sometimes overlook the internal privacy settings of the platforms they use. Lawyers should therefore review these settings and ensure that options such as “do not store my history” and “do not share my data” are enabled.
Privacy protections are not automatically guaranteed simply because a tool is licensed or widely used. They must be intentionally configured.
These steps may appear simple, but they reflect a broader principle: responsible AI use requires active oversight rather than passive reliance on technology.
Moving Forward with Discipline
AI is already embedded in legal practice. The question is no longer whether to engage with it, but how it should be used responsibly.
The technology offers genuine opportunities for efficiency, scalability, and analytical support. At the same time, it introduces real risks, including hallucinations, confidentiality breaches, evidentiary complications, cybersecurity vulnerabilities, and strategic missteps.
In navigating these opportunities and risks, professional responsibility provides the compass. Competence, diligence, verification, and candor remain the core guiding principles.
AI can enhance legal work when used carefully, but it can also undermine it when used carelessly. Ultimately, the difference lies not in the tool itself, but in the discipline of the professional who uses it.
Learn More
For those interested in learning more, the full webinar recording and additional resources are available here:
📌 Webinar Recording and Resources: Navigating the AI Landscape: Best Practices for Legal Professionals
📌 Presentation Resources: Resource Folder
📌 Legal Practitioners Guide on AI Hallucinations
For more details on AI applications in legal assistance, visit:
Ce contenu a été mis à jour le 26 mars 2026 à 17 h 23 min.
