The rise of deepfake technology is challenging courts to reconsider how evidence is authenticated and presented. While courts have long dealt with fabricated evidence, the accessibility of artificial intelligence (AI) tools has made it easier than ever to create convincing fake images, videos, and audio recordings.
To address these challenges, the National Center for State Courts (NCSC) and the Thomson Reuters Institute co-hosted a webinar on April 16, 2025, titled “Deepfakes: Evidentiary Issues for State Courts.” The session, the ninth in the AI and the Courts webinar series organized by the AI Policy Consortium, brought together judges and legal scholars to discuss how AI-generated evidence is affecting state court proceedings.
The webinar explored the evidentiary challenges posed by both acknowledged and unacknowledged uses of AI-generated evidence, offering judges concrete strategies to detect, manage, and respond to deepfake evidence. It also highlighted real-world case examples to illustrate the risks deepfakes pose to the integrity of judicial proceedings and the steps courts can take to ensure that justice is not undermined by fabricated media.
Traditional Evidence Authentication
Courts traditionally authenticate evidence by requiring the party offering it to provide enough external proof to show that the item is what it claims to be. Under Federal Rule of Evidence 901, the threshold for authentication is relatively low: it is sufficient if a reasonable jury could find the evidence authentic. Authentication can be established through witness testimony, documentation of the chain of custody, or supporting metadata. Certain types of electronic evidence can also be self-authenticating under Rule 902, which permits certification instead of live testimony.
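Chain-of-custody documentation increasingly has a technical counterpart: hashing a digital file when it is collected and re-hashing it when it is offered. The following is a minimal sketch, assuming the proponent logged a SHA-256 digest at collection; the exhibit filename and recorded digest below are hypothetical placeholders, not values from any real case.

```python
import hashlib
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    """Compute a file's SHA-256 digest, reading in chunks to handle large media."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical custody record: the digest logged when the exhibit was collected.
recorded_digest = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
evidence = Path("exhibit_12_bodycam.mp4")  # hypothetical exhibit filename

if sha256_of_file(evidence) == recorded_digest:
    print("Digest matches the custody log: file unchanged since collection.")
else:
    print("Digest mismatch: the file differs from the version originally logged.")
```

A matching digest shows only that the file has not changed since it was logged; it says nothing about whether the recording was authentic at the moment of capture, which is why it complements rather than replaces the Rule 901 showing.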
While these traditional principles still provide the basic framework for evaluating evidence, applying them to AI-generated material raises complex new challenges. The ease with which AI tools can fabricate highly convincing audio, video, and image files means that courts must be especially vigilant, even when standard authentication procedures are followed.
Acknowledged vs. Unacknowledged AI-Generated Evidence
When it comes to AI-generated content, courts must distinguish between two main types of evidence: acknowledged and unacknowledged.
Acknowledged AI-generated evidence refers to situations where the use of artificial intelligence is disclosed and recognized. For example, an accident reconstruction video enhanced with AI tools and presented transparently in court would fall into this category. In these cases, the question is not whether the evidence is authentic, but whether the AI methods used are reliable and scientifically valid, often assessed under standards such as Frye or Daubert. Such evidence is typically introduced by experts, with the AI-generated material clearly labeled and scrutinized for methodological validity.
In contrast, unacknowledged AI-generated evidence involves content created or manipulated by AI without disclosure, often with the intent to mislead. Fabricated audio recordings, altered photographs, or fake documents are all examples in which authenticity itself becomes the core issue. For instance, a party might submit a receipt created using an AI-powered app, intended to look genuine but fabricated, raising concerns about evidentiary manipulation. This type of evidence poses a far greater challenge, as traditional authentication methods may not easily detect sophisticated falsifications.
Recognizing this growing risk, judges are encouraged to apply more rigorous scrutiny to digital evidence. To support this effort, the National Center for State Courts and the Thomson Reuters Institute, through their AI Policy Consortium for Law and Courts, developed specialized bench cards that provide practical guidance to help judges identify potential signs of manipulation and verify the provenance of AI-influenced materials.
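As one concrete illustration of the kind of provenance check such guidance points toward, the sketch below uses Python with the Pillow imaging library to dump whatever EXIF metadata a submitted image carries (the exhibit filename is hypothetical). Missing or inconsistent camera metadata is only a signal, never proof, but it can flag files that warrant closer examination.

```python
from PIL import Image          # Pillow imaging library
from PIL.ExifTags import TAGS  # maps numeric EXIF tag IDs to readable names

def dump_exif(path: str) -> None:
    """Print the EXIF metadata, if any, embedded in an image file."""
    exif = Image.open(path).getexif()
    if not exif:
        # Absence of metadata proves nothing by itself, but many AI image
        # generators produce files with no camera EXIF at all.
        print("No EXIF metadata found.")
        return
    for tag_id, value in exif.items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")

dump_exif("submitted_photo.jpg")  # hypothetical exhibit filename
```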
Real-World Examples from U.S. Courts
Several recent cases illustrate how AI-generated content is entering courtrooms:
- State of Washington v. Puloka: In this criminal case, a bystander video enhanced with AI was excluded because the technology used to modify the footage lacked acceptance within the forensic community and introduced misleading alterations.
- Family Court (UK): In a UK family court case, a party submitted an AI-generated audio recording falsely portraying a spouse making threats in order to seek immediate custody and a protective order. The court determined the audio was fabricated, raising serious concerns about the use of deepfakes to manipulate sensitive family law proceedings.
- ChatGPT Use – J.G. v. New York City Department of Education: In a federal court case, outputs from ChatGPT were offered to support a claim for attorneys’ fees. The court rejected this evidence, emphasizing concerns about the tool’s reliability and lack of transparency regarding its inputs.
- Avatar Oral Argument (New York): In the New York Appellate Division, a self-represented litigant who had previously lost his voice due to throat cancer sought to present his oral argument via video. Instead of appearing himself, he submitted a video featuring an AI-generated avatar without prior disclosure, leading to the court’s strong disapproval.
- Florida VR Reenactment (Stand Your Ground Hearing): In a criminal hearing in Florida, a virtual reality reenactment was used to present the defendant’s perspective of an altercation. While the VR experience was permitted for the judge’s review, the example raised broader questions about the use of immersive technologies and their potential influence on fact-finding.
- Tesla Litigation (California): Tesla argued that a video of Elon Musk speaking publicly should be excluded because it might be a deepfake. The court rejected this argument, warning against speculative challenges to authentic evidence without a factual foundation.
These examples highlight the need for careful scrutiny of both disclosed and undisclosed AI-generated content.
Technical Challenges in Detecting Deepfakes
Detecting AI-generated evidence remains a difficult and often unreliable process. Detection tools may hallucinate details or produce inconsistent results, even when applied to the same content multiple times. As generative models evolve quickly, detection tools often fall behind. Courts should avoid placing undue reliance on technical software alone. Instead, digital evidence must be approached with caution, combining traditional authentication methods with a critical understanding of what detection tools can and cannot guarantee.
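To make that instability concrete, the sketch below scores the same file several times and reports the spread. The detector here is a stub that merely simulates the run-to-run noise real tools can exhibit; both the stub and the exhibit filename are hypothetical. A wide spread on identical input is itself a reason to treat any single score with caution.

```python
import random
import statistics

def detector_score(media_path: str) -> float:
    """Stand-in for a deepfake detector's 'probability fake' output.
    A real tool would be called here; this stub only simulates noise."""
    return min(max(random.gauss(0.62, 0.15), 0.0), 1.0)

def consistency_report(media_path: str, runs: int = 5) -> None:
    """Score the same file repeatedly and summarize how stable the output is."""
    scores = [detector_score(media_path) for _ in range(runs)]
    print(f"{media_path}: " + ", ".join(f"{s:.2f}" for s in scores))
    print(f"mean={statistics.mean(scores):.2f}, stdev={statistics.stdev(scores):.2f}")

consistency_report("exhibit_7_recording.wav")  # hypothetical exhibit filename
```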
Self-Represented Litigants and Deepfake Evidence
Cases involving self-represented litigants (SRLs) present unique challenges when AI-generated content is introduced. In non-jury trials, judges have broad discretion to assess credibility using surrounding context. But when juries are involved, the risks multiply: litigants without legal training may submit altered or AI-generated evidence without understanding how to properly authenticate it, leaving jurors to navigate complex evidentiary questions without clear guidance.
Judges must often step in to explain evidentiary standards and warn litigants outside the jury’s presence about potential sanctions for submitting false or misleading materials. While judicial ethics allow some flexibility to help SRLs present their case, courts must walk a fine line between ensuring access to justice and maintaining procedural integrity. The use of AI tools by non-lawyers, whether intentional or not, increases the burden on courts to scrutinize digital submissions with added care.
Ethical Risks: The Liar’s Dividend
AI tools currently operate with technological, ethical, and cultural limitations, creating a pressing need for comprehensive frameworks to guide their responsible use. Legal AI systems should undergo rigorous training, testing, and continuous evaluation, similar to the standards expected of human legal professionals. Judges and lawyers alike must understand how these tools function, remain attentive to disclosure obligations when AI is used in practice, and continuously adapt as the technology evolves.
One specific manifestation of these ethical challenges is the phenomenon known as the “liar’s dividend,” where parties attempt to exploit public awareness of deepfakes to cast doubt on legitimate evidence. To counter this tactic, courts increasingly require a factual basis when evidence authenticity is challenged and may sanction frivolous or unfounded claims. Lawyers also have a professional duty not to accuse content of being fake without a good-faith foundation. Preserving trust in judicial fact-finding demands constant vigilance against both fabricated media and speculative attacks on real evidence.
Best Practices and Resources for Judges
To help judges navigate this evolving landscape, several tools and strategies were highlighted:
- Bench Cards: Practical checklists for evaluating AI-related evidence.
- Updated Jury Instructions: Proposed updates to address AI risks without undermining legitimate evidence.
- Evidentiary Sanctions: Penalties for submitting fabricated content or raising unfounded claims that genuine evidence is fake.
- Procedural Vigilance: Verifying suspicious submissions, requesting original sources, or consulting experts.
- Rigorous Testing of AI Tools: Encouraging courts and legal professionals to demand scientifically validated AI tools, paralleling standards for human legal training.
A forthcoming white paper from the AI Policy Consortium will offer further detailed guidance.
Conclusion: Staying Ahead of the Deepfake Threat
Deepfakes pose real challenges for courts, but existing legal principles, if applied thoughtfully, remain a strong foundation. Courts must pair traditional rules with new vigilance and demand higher reliability standards for both AI-generated evidence and the tools themselves. Attorneys and judges alike bear responsibility for ensuring that technological advances do not compromise the integrity of legal proceedings.
Ongoing education, expert collaboration, and careful procedural updates will be key to preserving fairness, truth, and trust in this new reality.
For those interested in learning more, the full webinar recording and additional resources are available here:
📌 Webinar Recording: Deepfakes: Evidentiary Issues for State Courts
📌 Presentation Resources: Resource Folder
📌 Bench Cards Discussed in the Webinar:
For more details on AI applications in legal assistance, visit:
🌍 NCSC AI Initiative
This content was updated on January 9, 2026, at 9:53 a.m.
