Artificial intelligence is quietly but profoundly reshaping the justice system. From legal research platforms to court scheduling tools, AI is becoming part of courts’ daily routines. With these advances come difficult questions: Where should we draw the line? Which tasks can be safely delegated to machines? And how do we protect fairness and public trust in judicial decisions?
To explore these challenges, the National Center for State Courts (NCSC) and the Thomson Reuters Institute co-hosted a webinar titled “AI in Courts: Insights from South Korea, Australia, and Singapore.” Part of the AI and the Courts series, this twelfth session brought together judicial representatives from three countries to share practical lessons and compare approaches. Rather than debating futuristic scenarios, the conversation focused on grounded, real-world efforts already underway.
Setting the Stage: A Spectrum of Judicial Strategies in the Asia-Pacific
Across the Asia-Pacific, courts are adopting AI with varying levels of ambition, regulation, and experimentation, shaped by local priorities, risks, and available resources.
Some jurisdictions are advancing cautiously, guided by ethical frameworks and judicial oversight. Others are piloting targeted solutions to support judges and litigants, while a few are pushing the boundaries with more extensive implementations. Taken together, these efforts reflect not only different stages of AI maturity, but also distinct philosophies on how best to balance innovation with judicial responsibility.
Across the region, courts are exploring AI for a range of functions, from AI-assisted legal research and chatbots that guide litigants, to automated case scheduling, real-time courtroom transcription, and even tools designed to assist with sentencing recommendations.
This variety of use cases reflects broader national strategies at work across the region:
- Singapore is taking a structured, innovation-driven approach. It is piloting generative AI tools in its Small Claims Tribunals to support self-represented litigants and developing applications to help judges summarize case files.
- South Korea has built on decades of digital transformation with the recent launch of an e-litigation platform enhanced by AI for case management and document analysis. A dedicated AI oversight committee ensures responsible use through expert guidance and internal review.
- Australia and New Zealand have adopted more precautionary strategies, largely in response to concerns over unverified AI-generated content. New practice notes and judicial guidelines restrict the use of generative AI in legal submissions and prohibit its use in drafting judgments. Transparency, verification, and professional accountability remain central.
- China, by contrast, has rolled out a bold, centralized “smart court” system that integrates AI and big data throughout nearly all stages of judicial operations. Judges regularly use AI tools for legal research, document drafting, and consistency checks. These tools significantly reduce workloads, while the courts maintain a clear stance: AI is there to support, not replace, human judicial independence.
Across all jurisdictions, AI is being explored for its potential to streamline legal research, automate administrative tasks, and enhance access to justice, especially for self-represented litigants. Yet, shared concerns persist, such as hallucinated citations, systemic bias, data privacy, and the need for transparency.
A common understanding is beginning to emerge: AI can help modernize justice systems and improve efficiency, but only if its use remains transparent, verifiable, and firmly under human control. Judicial independence, fairness, and public trust must never be compromised.
🇸🇬 Singapore: Careful Innovation with Clear Boundaries
Singapore’s judiciary is adopting a deliberate and well-structured approach to AI integration. A centralized office coordinates these efforts, emphasizing controlled pilot programs and providing detailed guidance for legal professionals.
Recent initiatives include a generative AI assistant used in Small Claims Tribunals to help self-represented litigants navigate their rights and procedures. Other tools assist judges with tasks such as summarizing pleadings, comparing arguments, and extracting key case details, especially in high-volume environments.
A central conceptual framework adopted by the courts is a three-zone typology for AI use (see the sketch after this list):
🟢 Green Zone: Low-risk applications, such as summarization and classification
🟡 Yellow Zone: Medium-risk uses, including AI-assisted drafting or public-facing chatbots
🔴 Red Zone: High-risk tasks, such as outcome prediction and automated decision-making, which are firmly avoided
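One way to picture how such a typology can operate is as an explicit allow-list that defaults to the most restrictive zone. The sketch below is illustrative only, assuming hypothetical task names and zone assignments rather than the judiciary's actual taxonomy.

```python
# Minimal sketch of a three-zone typology as an explicit allow-list.
# Task names and zone assignments are hypothetical placeholders.
from enum import Enum

class Zone(Enum):
    GREEN = "low risk: permitted"
    YELLOW = "medium risk: permitted with human verification"
    RED = "high risk: not permitted"

# Hypothetical mapping of court AI tasks to risk zones.
TASK_ZONES = {
    "summarize_pleadings": Zone.GREEN,
    "classify_filings": Zone.GREEN,
    "assist_drafting": Zone.YELLOW,
    "public_chatbot_reply": Zone.YELLOW,
    "predict_case_outcome": Zone.RED,
    "automate_decision": Zone.RED,
}

def gate(task: str) -> Zone:
    """Unlisted tasks default to the most restrictive zone."""
    return TASK_ZONES.get(task, Zone.RED)

for task in ("summarize_pleadings", "predict_case_outcome", "unknown_task"):
    print(f"{task} -> {gate(task).value}")
```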
The Red Zone prohibition reflects a broader concern about how predictive models are used in legal practice. While these models are technically impressive, their real-world utility remains limited. For instance, telling a client they have a “60% chance of success” may sound precise, but it does little to support meaningful legal strategy or guide ethical decision-making.
This caution stems from a critical understanding of how large language models (LLMs) operate. These models are probabilistic rather than knowledge-based. They generate output based on patterns in training data, without truly understanding legal hierarchies or the importance of recency. Some researchers have described them as "stochastic parrots": sophisticated statistical engines capable of mimicking legal language without actual comprehension. As a result, LLMs may fail to prioritize newer rulings over outdated precedents unless explicitly prompted. This creates serious challenges in legal contexts, where the most recent decision may override decades of jurisprudence.
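A common mitigation is to handle recency outside the model rather than trusting the LLM to weigh dates itself. The sketch below assumes a hypothetical Case record and invented citations; it simply orders retrieved precedents newest first and drops overruled authority before anything reaches the model.

```python
# Hypothetical sketch: enforce recency and filter overruled precedents
# in the retrieval layer, before the LLM sees anything. (Python 3.10+)
from dataclasses import dataclass
from datetime import date

@dataclass
class Case:
    name: str
    decided: date
    overruled_by: str | None = None  # set when later authority displaces it

def order_for_prompt(cases: list[Case]) -> list[Case]:
    """Newest first, with overruled decisions removed entirely."""
    live = [c for c in cases if c.overruled_by is None]
    return sorted(live, key=lambda c: c.decided, reverse=True)

retrieved = [
    Case("Re A", date(1998, 3, 2), overruled_by="Re C"),
    Case("Re B", date(2011, 7, 14)),
    Case("Re C", date(2023, 1, 9)),
]

for c in order_for_prompt(retrieved):
    print(c.decided.isoformat(), c.name)
# The prompt can then state explicitly that earlier entries take precedence.
```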
Courts are particularly wary of using LLMs in access-to-justice scenarios, where inaccurate or misleading information could harm litigants. An AI tool might, for example, incorrectly state a 30-day appeal period when the law allows only 14 days, potentially jeopardizing someone’s right to appeal.
In response, Singapore has opted for more controlled tools like decision trees and precedent tables. These systems offer clarity, consistency, and transparency, and they help users understand how a conclusion is reached, which is not always guaranteed by generative AI.
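To make the contrast concrete, here is a minimal sketch of the precedent-table style of tool, with invented tribunal names and deadlines: every answer traces back to a hand-authored rule, and an unknown situation produces a refusal rather than a guess.

```python
# Illustrative precedent table for procedural guidance. The rules and
# deadlines below are invented placeholders, not actual Singapore law.
APPEAL_RULES = {
    ("small_claims", "final_order"): "File a notice of appeal within 14 days.",
    ("small_claims", "interim_order"): "No appeal lies; apply to set aside instead.",
}

def appeal_guidance(tribunal: str, order_type: str) -> str:
    rule = APPEAL_RULES.get((tribunal, order_type))
    if rule is None:
        # Unlike a generative model, the system refuses rather than guesses.
        return "No rule on file for this situation; please consult the registry."
    return rule

print(appeal_guidance("small_claims", "final_order"))
print(appeal_guidance("family", "final_order"))
```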
To support this careful approach, detailed guidance has been issued to judges, court staff, and administrators on how and when case materials can be shared with AI systems. For example, courts distinguish between:
- Public-facing LLMs, where uploading sensitive data is prohibited
- Internally developed tools, where redacted documents may be used under specific conditions
This level of guidance helps prevent improper data handling and discourages what has been described as the "unconstrained use" of AI.
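That public-versus-internal distinction lends itself to a simple programmatic guard. The sketch below encodes assumed rules for illustration; the conditions in the courts' actual guidance are more detailed.

```python
# Hypothetical guard applying the public-vs-internal data-sharing rules.
def may_upload(tool_is_public: bool, contains_sensitive_data: bool,
               document_is_redacted: bool) -> bool:
    if tool_is_public:
        # Public-facing LLMs: sensitive material is never uploaded.
        return not contains_sensitive_data
    # Internal tools: sensitive material only in redacted form.
    return document_is_redacted or not contains_sensitive_data

# Sensitive document, public tool: blocked even if redacted.
assert may_upload(True, True, True) is False
# Sensitive but redacted document, internal tool: permitted.
assert may_upload(False, True, True) is True
```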
Singapore is also exploring localized speech-to-text engines and customized language models tailored to the national legal and linguistic context. Still, large-scale commercial models continue to dominate in terms of capability. The scale required to train these models is immense. Even a dataset covering 50 years of national case law was deemed insufficient for training a robust model.
The overarching philosophy remains consistent: AI can assist, but it must never replace human judgment. Ethical safeguards, accuracy, and legal responsibility must remain at the core of its use in judicial settings.
🇰🇷 South Korea: Digital Infrastructure and AI in Practice
South Korea’s judiciary has been steadily digitizing since the late 1970s. Its most recent development, the Next-Gen e-Litigation System launched in January 2025, builds on this long-standing modernization effort with a more streamlined structure and expanded digital features.
This approach reflects a deep technical understanding of artificial intelligence, often framing complex legal issues as machine learning problems. For instance, a verdict on guilt or innocence may be treated as a classification task, while determining a sentence can be approached as a regression problem.
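That framing can be shown in a few lines. The toy sketch below uses synthetic features and outcomes purely to illustrate the classification-versus-regression distinction; it is not a court system's model, and nothing like it decides real cases.

```python
# Toy illustration only: verdict as classification, sentence as regression.
# Features and labels are synthetic inventions.
from sklearn.linear_model import LinearRegression, LogisticRegression

# Hypothetical features, e.g. [prior_convictions, harm_score]
X = [[0, 1.0], [2, 3.5], [1, 2.0], [4, 5.0], [0, 0.5], [3, 4.2]]
y_verdict = [0, 1, 0, 1, 0, 1]    # 0 = acquit, 1 = convict (classification)
y_months = [0, 18, 0, 36, 0, 30]  # sentence length in months (regression)

clf = LogisticRegression().fit(X, y_verdict)
reg = LinearRegression().fit(X, y_months)

print("verdict:", clf.predict([[2, 3.0]])[0])
print("months: ", round(float(reg.predict([[2, 3.0]])[0]), 1))
```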
At the center of the new platform is the electronic record viewer, which allows parties and judges to search, annotate, compare, and review documents, including video evidence, through a unified interface. Features like color-coded timelines, mobile access, and internal messaging between clerks and judges help streamline workflows and reduce reliance on paper.
Complementing the electronic record viewer is the e-Cabinet system, which organizes cases by procedural stage, from filing to hearings. This setup helps judges manage high caseloads more efficiently. Courtrooms now include video inspection tools and display systems, and remote hearings have been standard practice since 2016.
Several AI-powered tools are still under development, including:
- An AI law clerk that summarizes arguments and highlights contested issues
- A natural-language legal search engine informed by user behavior
- A similar-case recommender built on deep learning models
- An AI litigation assistant for self-represented users, offering templates, guidance, and even early settlement suggestions
To ensure responsible development and use, South Korea issued detailed Guidelines on AI Use in the Judiciary in February 2025. Drafted by judges through the Association for AI Studies, these guidelines encourage:
- Avoiding the input of sensitive or confidential data into public AI tools
- Maintaining a critical distance from AI-generated outputs
- Requiring disclosure if AI was used to generate submissions or evidence, including images, video, or audio
Oversight is reinforced by the Judicial Policy Advisory Committee, which reviews all reform proposals to ensure consistency with the principles of transparency, accountability, and human oversight.
🇦🇺 Australia: Guideline-Driven and Cautious
Australia’s courts have so far adopted a more reactive and policy-oriented approach. Rather than developing in-house AI tools, the focus has been on regulating how external actors, particularly lawyers and litigants, use generative AI in court settings.
This has brought challenges, especially with self-represented litigants who may use AI to “dump case after case and information” into proceedings, significantly increasing judicial workload and raising concerns about procedural fairness.
Several recent incidents involving fake case citations generated by ChatGPT have prompted strong responses. Courts such as the Supreme Court of New South Wales have issued practice directions warning against the use of AI-generated content that has not been rigorously fact-checked. Judges are explicitly instructed not to use generative AI when drafting or editing rulings. Meanwhile, lawyers are reminded that they remain ethically responsible for all materials submitted to the court, regardless of whether AI was used in their preparation.
At the magistrate level, there is some informal experimentation. For instance, generative AI is occasionally used to refine the wording of sentencing decisions, though this practice remains unofficial and limited in scope.
One ongoing debate centers around affidavit evidence. If AI helps draft the text, does it still reflect the voice of the person swearing the statement? Courts are beginning to explore whether new declarations may be needed to clarify authorship and legal responsibility, especially in cases where AI tools might inadvertently alter the tone or meaning of the text.
Despite this caution, Australia is also exploring how AI might support access to justice. There is growing interest in using AI to assist self-represented litigants, particularly those facing language or cultural barriers. Some jurisdictions are also considering AI for calculating standard penalties in low-discretion cases, such as speeding fines, although no formal implementation has taken place to date.
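To see why such low-discretion calculations are viewed as comparatively safe ground for automation, consider the hypothetical sketch below: the speed bands and amounts are invented, and anything outside the schedule falls back to a human decision.

```python
# Invented penalty schedule for illustration; no real jurisdiction's
# fines or demerit points are reproduced here. (Python 3.10+)
PENALTY_BANDS = [  # (max km/h over limit, fine in dollars, demerit points)
    (10, 150, 1),
    (25, 350, 3),
    (45, 700, 4),
]

def speeding_penalty(kmh_over_limit: float) -> tuple[int, int] | None:
    for threshold, fine, points in PENALTY_BANDS:
        if kmh_over_limit <= threshold:
            return fine, points
    return None  # beyond the schedule: refer to a magistrate, not automation

print(speeding_penalty(12))  # (350, 3)
print(speeding_penalty(60))  # None -> human decision required
```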
Collaboration with the Victorian Law Reform Commission is ongoing, as Australia continues to refine its policy position. The underlying message remains clear: ethical obligations must come first, even when AI is used discreetly or behind the scenes.
📌 Final Reflections: Common Ground, Shared Caution
While each country is following its own path, several common themes emerge across these judicial approaches:
- AI is a support tool, not a decision-maker. Judges must remain at the center of the legal process.
- Human oversight is essential. AI outputs must be verified, contextualized, and held to legal standards.
- Disclosure is increasingly important. Courts are beginning to require parties to indicate whether AI was used to prepare filings or evidence.
- Technical safeguards are evolving. Some jurisdictions are investing in internal models; others are emphasizing guidance, training, and public education.
- Ethical risks remain real. Hallucinated citations, embedded bias, privacy concerns, and overreliance continue to present challenges.
Despite different levels of technological maturity, the shared goal is clear: to ensure that AI enhances justice, rather than disrupts it.
Resources and Tools Mentioned
🔹 ISO/IEC 42001 Certification: International standard for AI management systems
🔹 AI Verify (Singapore): An AI governance testing framework and toolkit used to evaluate AI applications
🔹 Litigation Document Analyzer by Thomson Reuters (Australia)
🔹 AI Practice Directions from the Supreme Court of New South Wales
🔹 AI Guidelines for the Judiciary (South Korea, Feb. 2025)
Learn More
For those interested in learning more, the full webinar recording and additional resources are available here:
📌 Webinar Recording: AI in Courts: Insights from South Korea, Australia, and Singapore
📌 Presentation Resources: Resource Folder
For more details on AI applications in legal assistance, visit:
🌍 NCSC AI Initiative
