
Image generated by ChatGPT
Artificial intelligence is reshaping the way evidence enters courtrooms. What was once limited to documents, testimony, and forensic reports now includes AI-generated material, from enhanced exhibits to deepfake recordings. The real challenge emerges when juries are asked to weigh such evidence. Jurors bring human judgment, but also human vulnerabilities: a tendency to trust what appears sophisticated, or to grow skeptical after encountering manipulated content.
This new reality raises urgent questions. How should jurors approach AI-generated evidence? What risks come with it? And can the existing rules of evidence keep up?
To explore these challenges, the National Center for State Courts (NCSC) and the Thomson Reuters Institute co-hosted a webinar titled AI Evidence in Jury Trials. Part of the ongoing AI and the Courts series, this thirteenth session examined how AI-generated evidence intersects with jury psychology, rules of authentication, and the broader search for fairness. Rather than dwelling on speculation, the conversation focused on the immediate dilemmas already confronting courts. Jurors may over-trust or dismiss AI outputs, judges must determine how to authenticate unfamiliar material, and lawyers remain bound by ethical duties even as technology accelerates. The discussion highlighted the fragile balance between innovation, fairness, and public trust.
When AI Becomes Evidence: Credibility and Skepticism
Jurors often see AI outputs as inherently factual, a perception that can lend such material undue credibility. Media generated or enhanced by AI can leave a powerful emotional and cognitive imprint that sometimes outweighs traditional forms of evidence.
When AI evidence is presented openly and transparently, it can complement expert testimony. Visual reconstructions or AI-assisted exhibits, for instance, may help jurors remember details more clearly, strengthen their understanding of events, and support a more balanced evaluation alongside eyewitness accounts or physical records.
The risks are just as pressing. Deepfakes and unacknowledged AI evidence can distort memory, create unwarranted doubt, or overwhelm jurors. In a domestic violence trial, for instance, a bad actor could take a person’s voice recording and generate audio of them saying something incriminating that they never actually said. Once an AI-generated image or video is introduced, it is notoriously difficult to “unring the bell.” The opposite danger is equally real: excessive skepticism can lead jurors to doubt even legitimate evidence. Courts are therefore confronted with a delicate balance between encouraging critical analysis and avoiding cynicism.
Jury Psychology and Courtroom Safeguards
Psychological factors play a decisive role in how jurors respond to AI evidence. Research shows that audiovisual material is remembered far more vividly than text, which makes manipulated media especially powerful. This “stickiness” can cause jurors to confuse what they saw in a fabricated video with an actual memory.
Another concern is the “Novelty and Authority Effect,” where jurors may place undue trust in technology simply because it feels modern and authoritative. There is also the risk of a “CSI Effect 2.0,” where jurors develop unrealistic expectations about digital evidence. Anchoring and confirmation bias can further distort interpretation once an initial impression has taken hold.
These psychological risks have already prompted courts to rethink how they guide jurors. Courts are beginning to consider updated jury instructions that explicitly address AI. Draft proposals emphasize reliability, bias, and verification, while cautioning jurors that not everything produced by AI is trustworthy. At the same time, some worry that placing too much emphasis on AI’s risks could make jurors overly distrustful.
One example already developed is a model jury instruction created by retired Justice William Deino using Microsoft Copilot. It warns jurors that AI outputs may be shaped by training data, algorithms, and bias, and instructs them to weigh reliability, credibility, verification, and limitations in the same way they would with any other type of evidence.
Courtroom practices are also beginning to adapt. Proposed safeguards include:
- Pretrial tutorials to familiarize jurors with AI evidence
- Transparent disclosure when AI tools are used in preparing exhibits
- Jury instructions that include error rate disclosures and clarify the limitations of the tools
- Procedures requiring experts to explain how AI outputs were generated
- Opportunities for jurors to ask questions, allowing judges and lawyers to address confusion in real time
These measures aim to ensure that jurors, the “engine of the process,” are protected from undue influence while preserving their critical role in deliberation.
Evidence Rules Under Pressure
The U.S. Federal Rules of Evidence provide an important lens for examining AI-generated content. Rules such as 901 (authentication) and 902 (self-authenticating documents) rest on the assumption that records presented in court are reliable. When AI can fabricate realistic documents, videos, or voices, that assumption begins to falter.
These challenges are not just theoretical. An AI-altered document filed with a public office, for example, could later be admitted as an official record under Rule 902 even if it was false from the beginning. Similarly, proposals for a new Rule 707, now open for public comment, suggest holding machine-generated evidence to the same reliability standards as expert testimony under Rule 702.
The difficulties do not stop there. A more complex challenge arises with unacknowledged AI evidence, such as deepfakes. A potential new Rule 901(c) has been debated, which would set out procedures for cases in which evidence is alleged to be AI-fabricated or its authenticity is otherwise disputed. In such cases, Rule 104(b) on conditional relevance may also come into play, determining whether evidence is sufficiently supported to be presented to a jury. Some proposals argue for weighing probative value against prejudicial effect under Rule 403, while others suggest shifting the decision from juries to judges. Yet judges, who typically lack specialized technical expertise, may be no better equipped than jurors to determine authenticity.
These debates make one point clear. Courts should not simply follow the momentum of new technologies. They must remain cautious and deliberative, ensuring that any reform is grounded in evidence, just as they have historically done when facing systemic challenges like language access.
This ongoing uncertainty underscores the urgent need for clearer guidance and practical solutions.
Experts, Ethics, and Access to Justice
Technical hurdles make it extremely difficult to distinguish real from fabricated AI evidence. Watermarks can be erased without a trace, automated detection tools are unreliable, and even experts can only speak in terms of probability. Automated tools often fall short because generation and detection models evolve in tandem: as detectors learn to flag telltale artifacts, generators learn to eliminate them. These challenges suggest that courts may need new models of expertise. One proposal is to establish panels of court-appointed AI experts, similar to the way courts already appoint specialists in family law or competency hearings.
Beyond expertise, ethical obligations are just as important. Lawyers remain responsible for ensuring that any evidence they submit is authentic, regardless of whether AI was involved in its creation. Several state bars have already issued guidelines stressing competence, diligence, and disclosure. In California, for example, the Judicial Council adopted rules on the ethical use of AI in July 2025, which took effect on September 1 of that year. Commercial tools like Lexis have also begun providing resources to help lawyers track ethical requirements across different states.
These duties are not merely theoretical: in some cases, the failure to reveal the use of AI in preparing briefs or producing evidence has led to overturned judgments, underscoring lawyers’ obligation to disclose when and how AI tools have been used.
Finally, equity concerns highlight another challenge. Self-represented litigants, who already face structural disadvantages, are unlikely to have the same resources as prosecutors or large law firms to contest AI-manipulated evidence. Without safeguards, disparities in technical capacity could deepen inequities in the courtroom.
Looking Ahead: Preserving Trust in Jury Trials
Despite the challenges, trust in the jury system remains strong. Many judges emphasize the capacity of jurors to rise to their “sacred duty” when they are properly guided. Practical tools such as clear instructions, opportunities for questions, and close judicial oversight can help jurors navigate the complexity of AI evidence.
At the same time, the law will inevitably lag behind technology. Courts must therefore remain flexible, cautious, and proactive. The path forward lies in combining ethical safeguards, technical expertise, and transparent processes. AI can enhance justice, but only if it is consistently treated as a tool under human control and never as a substitute for human judgment.
In the end, protecting fairness and trust in the courtroom is less about the power of machines than about the responsibility of people.
Final Reflections
AI evidence in jury trials represents both an opportunity and a risk. When used responsibly, it can clarify testimony, support expert analysis, and help jurors retain essential information. Left unchecked, it threatens to erode trust, distort memory, and overwhelm courts with disputes about authenticity.
These possibilities point to a broader lesson. The future of AI in the courtroom will not be determined by technology alone, but by the human systems that shape its use. Judges, lawyers, and jurors must adapt together to safeguard fairness, preserve public trust, and ensure that justice remains at the heart of every trial.
Learn More
For those interested in learning more, the full webinar recording and additional resources are available here:
📌 Webinar Recording: AI Evidence in Jury Trials
📌 Presentation Resources: Resource Folder
For more details on AI applications in legal assistance, visit:
🌍 NCSC AI Initiative
This content was last updated on January 9, 2026, at 9:52 a.m.
