As artificial intelligence rapidly reshapes the legal landscape, courts and law firms are being called to strike a delicate balance between innovation and integrity. The October webinar Key Considerations for the Use of GenAI Tools in Legal Practice and Courts, hosted by the National Center for State Courts (NCSC) and the Thomson Reuters Institute, shed light on this challenge. The session explored how generative AI can support legal practice and judicial work while preserving public confidence and ethical standards.

Image generated by Sora.
The conversation was not about technology for its own sake. It focused on how to adopt it responsibly, through a risk-informed approach that distinguishes low-risk administrative tools from high-stakes applications that may affect fundamental rights such as liberty, property, and due process. Participants emphasized that successful integration depends as much on human oversight, professional competence, and transparency as on technical performance.
Framed by practical insights and real-world examples, the conversation offered a roadmap for legal professionals seeking to embrace AI in a way that remains safely anchored in governance, ethics, and public trust.
Balancing Promise and Risk: A Pragmatic Approach to AI in Law
A central message emerged from the discussion: AI in the legal field should not be viewed as a single, uniform technology, since its risks vary depending on how and where it is applied. Each use case, whether in drafting, research, or judicial support, requires a level of scrutiny proportional to its potential impact.
This means that oversight must be scaled according to the level of risk. Routine administrative tools, such as meeting-note summarizers or document organizers, pose limited risks. In contrast, tools used for drafting legal arguments or informing judicial reasoning involve more substantial risks, and their deployment requires stronger supervision and verification.
The goal is not to create unnecessary barriers but to avoid two extremes. On one side, low-risk tools that could improve efficiency and access to justice should not be overregulated. On the other, high-risk systems that may affect people’s rights or the fairness of proceedings must be closely supervised.
The framework proposed in the Key Considerations document identifies five main categories of tools, each corresponding to a different level of risk. These levels progress from minimal, where the impact of AI errors is limited, to moderate and high, where oversight becomes essential, and finally to unacceptable, where misuse could seriously affect fundamental rights or due process:
- Productivity tools (minimal to moderate risk), such as internal drafting or scheduling assistants.
- Research tools (moderate risk), which require verification of sources and jurisdictions.
- Drafting tools (moderate to high risk), particularly when used for motions or judicial opinions.
- Decision-support tools (high risk), which can directly influence legal reasoning or outcomes.
- Client-facing and public-facing tools (moderate to high risk), where errors could affect how legal information is presented or relied upon by non-experts.
This risk-based framework encourages institutions to match human oversight to the sensitivity of each task. The concept is often described as "human in the loop", which means direct human supervision at each stage, or "human on the loop", which refers to active human engagement through ongoing monitoring. In practice, this approach ensures that automation supports human judgment rather than replacing it, preserving the ethical and procedural safeguards that define the rule of law.
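To make the idea concrete, here is a minimal sketch of how an institution might encode such a mapping internally. The category names echo the list above, but the risk values, oversight labels, and the `required_oversight` helper are hypothetical illustrations, not part of the NCSC framework.

```python
from enum import Enum

class Risk(Enum):
    MINIMAL = 1
    MODERATE = 2
    HIGH = 3
    UNACCEPTABLE = 4

# Hypothetical mapping of the framework's tool categories to a risk tier and
# the oversight mode that tier calls for. Values are illustrative, not official.
TOOL_CATEGORIES = {
    "productivity":     {"risk": Risk.MODERATE, "oversight": "human on the loop"},
    "research":         {"risk": Risk.MODERATE, "oversight": "human on the loop"},
    "drafting":         {"risk": Risk.HIGH,     "oversight": "human in the loop"},
    "decision_support": {"risk": Risk.HIGH,     "oversight": "human in the loop"},
    "public_facing":    {"risk": Risk.HIGH,     "oversight": "human in the loop"},
}

def required_oversight(category: str) -> str:
    """Return the oversight mode for a tool category; unknown uses default to the strictest."""
    entry = TOOL_CATEGORIES.get(category)
    return entry["oversight"] if entry else "human in the loop"
```

The point of such a structure is not automation for its own sake, but making the institution's oversight expectations explicit and auditable.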
Ethics and Competence: The Human Foundations of AI Adoption
As governance frameworks evolve, the human element remains central. A more fundamental challenge emerges in ensuring that the profession continues to meet its ethical duties in the age of generative AI. Competence today includes not only legal knowledge but also a practical understanding of AI’s capabilities and limits.
For lawyers and judges, this means learning to question how a system works, what data it relies on, and where its reasoning may fail. Knowing how to use AI responsibly has become part of professional competence itself. A lawyer unable to critically review AI outputs may not meet the diligence and preparation standards required by the Model Rules of Professional Conduct.
There is also a growing need for structured supervision. In law firms, senior partners must oversee not only how AI tools are used but also how younger lawyers rely on them. Supervision now applies both to the technology and to the people using it.
Ethics is equally important on the judicial side. Judges must stay alert to how AI tools shape their reasoning, especially when these systems process sensitive data or suggest outcomes. Public trust remains the real measure of responsible innovation. If citizens believe that legal or judicial decisions are being driven by opaque algorithms, confidence in the justice system can erode quickly.
Transparency therefore becomes essential. Legal institutions should document their verification practices and ensure clear internal guidelines are in place so that technology supports human judgment rather than replacing it. Keeping that distinction clear is key to maintaining accountability and public confidence in legal decisions.
Verification First: Safeguarding the Integrity of Legal Work
Putting these principles into practice begins with one key step: verification.
As AI tools become more common in legal work, verification has become essential. Even the most advanced systems can produce convincing but incorrect results, often called hallucinations. In law, such mistakes are not harmless. They can mislead a court, put a client’s case at risk, or even lead to sanctions.
Recent cases where lawyers submitted AI-generated pleadings with invented citations show the risk of using these tools without proper checks. Verifying each output is now part of basic professional diligence. Review steps should be built into each stage of legal work.
To reduce these risks, several practical habits can help lawyers and judges use AI responsibly:
- Show the source: Every AI-generated text should clearly indicate where the information comes from.
- Compare results: Checking the same question across different tools helps identify inconsistencies.
- Check jurisdiction: Make sure the cited laws and cases apply to the correct context.
- Keep a human review: For moderate- or high-risk work, a human must review the output before it is used.
Law firms are also encouraged to record these checks. Doing so protects the firm and shows transparency to clients. Noting the use of AI in files or billing makes clear that technology is there to support human expertise rather than replace it.
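As a purely illustrative sketch, a firm might capture those checks in a simple structured record like the one below; the `VerificationRecord` class and its field names are hypothetical, not a prescribed format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class VerificationRecord:
    """Hypothetical record of the checks applied to one AI-assisted output."""
    matter_id: str
    tool_used: str
    sources_cited: bool           # every proposition traced to a verifiable source
    cross_checked: bool           # result compared against another tool or database
    jurisdiction_confirmed: bool  # cited law applies to the relevant jurisdiction
    human_reviewer: str           # lawyer who signed off on the output
    review_date: date = field(default_factory=date.today)

    def is_cleared(self) -> bool:
        # The output should be relied on only if every check passed and a reviewer is named.
        return all([self.sources_cited, self.cross_checked,
                    self.jurisdiction_confirmed, bool(self.human_reviewer)])
```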
Another key safeguard concerns data protection. Before adopting any AI product, firms should confirm that vendors do not reuse client data or confidential material to train their models. This is critical for protecting attorney-client privilege and preventing private information from appearing elsewhere. Contracts should clearly forbid data reuse and define how long information can be stored.
Courts face similar responsibilities. They should not rely only on vendor claims but conduct their own evaluations and monitor performance regularly. Continuous testing helps ensure that tools remain accurate, fair, and secure over time. By making these verification habits part of daily practice, legal professionals can use AI efficiently without undermining public trust in the justice system.
From the Bench: Why Risk Is Never Static
From a judicial perspective, one of the strongest messages is that risk is never static. What seems low-risk today may become critical tomorrow as technologies evolve and court practices change. Conversely, some applications initially seen as high-risk can become safer once validation frameworks are in place.
Risk also depends on context. A scheduling tool might usually be minimal risk, but in cases involving witness protection or national security, even small data leaks could have serious consequences. Likewise, a translation system may seem harmless when used for research but could cross into decision-support if it influences the interpretation of arguments.
Judges must therefore exercise informed judgment rather than rely solely on predefined categories. AI frameworks and charts can guide reflection but cannot replace situational awareness. Each decision requires reflection: Would I be comfortable delegating this task to another person and defending it publicly? If not, it should not be delegated to an AI either.
Another subtle concern arises around the possibility of reverse-engineering judicial reasoning. As AI becomes capable of analyzing judges’ writing styles or ruling patterns, it may enable parties to tailor arguments strategically. Protecting the integrity of judicial reasoning must therefore remain a priority.
Transparency in AI use must be balanced with confidentiality in deliberation. Courts need clear boundaries on when and how AI can assist in decision-making, ensuring that human judgment always remains the final authority.
Training, Testing, and Trust: Building Competence with AI
As AI tools multiply across the legal ecosystem, competence must extend beyond individual users. Courts and institutions need the capacity to evaluate and monitor technologies independently.
Independent evaluation has become a central requirement. Relying only on vendor claims is not enough. Courts should develop their own testing protocols, creating benchmarks that measure performance, reliability, and bias in real conditions. These independent data sets act as neutral reference points, much like bar exams test human lawyers before they practice.
Benchmarking must also be continuous, since AI models evolve and their accuracy may degrade as new data and precedents emerge. Regular testing ensures that systems remain aligned with ethical and legal standards.
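A minimal sketch of what such continuous benchmarking could look like, assuming a court curates its own set of questions with known correct citations; `ask_tool` is a stand-in for whatever vendor interface is under evaluation, not a real API.

```python
# Locally curated reference set: each entry pairs a question with a citation the
# answer must contain. The entries here are placeholders, not real benchmark data.
BENCHMARK = [
    {"question": "What is the controlling precedent on issue X in this state?",
     "expected_citation": "<known correct citation>"},
    # ... additional question/citation pairs maintained by the court
]

def run_benchmark(ask_tool) -> float:
    """Return the fraction of benchmark questions answered with the expected citation."""
    correct = 0
    for item in BENCHMARK:
        answer = ask_tool(item["question"])
        if item["expected_citation"] in answer:
            correct += 1
    return correct / len(BENCHMARK)

# Re-running this after each vendor update makes any drift in accuracy visible over time.
```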
Equally important is the creation of structured training programs for legal AI tools. Just as lawyers pursue ongoing education, advanced systems in high-risk areas should undergo regular retraining. Transparency about training data, updates, and limitations is essential, allowing users to trace a model’s reasoning and detect weaknesses.
In this context, documentation becomes a form of accountability. Knowing whether a model was fine-tuned on legal texts, when it was last updated, or which jurisdictions it covers helps determine whether it is appropriate for a given task.
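For illustration, such documentation could be as simple as a structured record that must be completed before a tool is approved; the fields below mirror the questions in the preceding paragraph and are an assumed example, not a standard schema.

```python
# Hypothetical "model card"-style record an institution might require from a vendor.
# All values are placeholders for illustration only.
MODEL_RECORD = {
    "model_name": "example-legal-assistant",          # placeholder, not a real product
    "fine_tuned_on_legal_texts": True,
    "last_updated": "<date of last model update>",
    "jurisdictions_covered": ["<list of covered jurisdictions>"],
    "known_limitations": ["may fabricate citations", "no coverage of local court rules"],
}
```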
Ultimately, AI in the justice system should be held to the same expectations as human expertise: trained, tested, and subject to oversight. Through education and rigorous evaluation, courts can build the literacy needed to use these tools wisely without compromising independence or public trust.
When Seeing Isn’t Believing: Deepfakes and Juror Perception
The issue of deepfake evidence introduces new challenges that go beyond the technical use of AI in courts. Questions arise about how judicial systems should respond when video or digital materials can no longer be taken at face value, and whether jurors can still trust what they see. Video evidence, once considered the most reliable form of proof, now requires a higher level of scrutiny.
Some courts have begun using detection software, but its effectiveness remains uneven. This raises a broader question about whether rules of evidence need to evolve to address AI-generated content. Educating jurors in clear, accessible language is also essential, helping them understand what AI is, what it can and cannot do, and how to assess AI-influenced evidence without falling into either skepticism or blind trust.
These developments highlight a deeper shift: technology is transforming not only how justice is administered but also how it is perceived. Maintaining public confidence will depend on combining robust safeguards with communication that makes AI understandable to everyone involved in the judicial process.
Toward Responsible Innovation in Justice
The adoption of generative AI in courts and legal practice is not only a question of technology but also one of governance, ethics, and trust. AI can strengthen justice only when it remains anchored in human oversight and shared professional values.
Building such an ecosystem requires ongoing commitment. Policies must evolve, evaluation frameworks must remain transparent, and users at every level, from clerks to judges, must stay aware of both the benefits and the limits of automation. The balance between innovation and accountability is delicate yet essential.
AI should never replace legal reasoning but refine it. It can accelerate research, simplify procedures, and enhance access to justice, as long as its use is guided by competence, documentation, and proportional oversight. Each tool must be assessed not only for what it can do but also for what it should do within the ethical boundaries of the legal system.
By promoting a risk-informed approach, investing in education, and encouraging collaboration between courts, vendors, and academia, the legal community can advance toward responsible and transparent innovation. The future of justice will depend not on how quickly institutions adopt AI but on how carefully they build the structures that make it trustworthy.
Learn More
For those interested in learning more, the full webinar recording and additional resources are available here:
📌 Webinar Recording: Key Considerations for the Use of GenAI Tools in Legal Practice and Courts
📌 Presentation Resources: Resource Folder
For more details on AI applications in legal assistance, visit:
🌍 NCSC AI Initiative
