When Courts Adopt AI in the Dark: Privacy, Legitimacy, and the Democratic Stakes of Los Angeles’s Learned Hand Experiment

by J.R. Howell

Introduction

On March 18, 2026, the Los Angeles Superior Court (LASC)—the nation’s largest trial court—quietly announced a limited pilot program with Learned Hand, a proprietary generative‑AI platform built specifically for judges. Court administrators described the tool as a “judicial sous chef” that gives a select group of civil judicial officers access to a secure “workbench” capable of summarizing filings, conducting legal research, and drafting preliminary orders. The program’s stated objective is to reclaim hours consumed by routine preparation so that judges can focus on deliberation. Judicial leadership emphasizes that the AI’s outputs are automatically hyperlinked to source materials and must be reviewed and edited by human judges before becoming tentative rulings. The vendor likewise stresses that the platform is a closed universe, designed to assist judges, not to make decisions.

This pilot arrives amid crushing caseloads and a broader debate about how far courts and lawyers should rely on algorithmic tools. While efficiency is an understandable imperative, the deployment of an opaque machine‑learning system inside judicial chambers raises substantial questions about transparency, privacy, procedural fairness, and the non‑delegable nature of judicial power. This alert evaluates the legal framework governing the pilot, explains why the program invites scrutiny under California’s new AI rules for courts, and outlines what litigators and public‑law observers should watch as this experiment unfolds.

Efficiency Imperative vs. Operational Reality

The LASC’s workload is daunting: nearly 600 judicial officers handle about 1.2 million case filings each year for over 10 million residents across 36 courthouses. Court leaders, including Presiding Judge Sergio C. Tapia II and Executive Officer David Slayton, argue that the Learned Hand pilot is essential to manage this volume. In announcing the collaboration, Judge Tapia stressed that the court was “carefully evaluating emerging technologies” to enhance preparation without compromising judicial independence, while Slayton emphasized that generative AI is limited to “administrative or research support” and that judicial officers remain bound by California’s new AI policies.

How Learned Hand Works

The Learned Hand platform operates by ingesting large case files and structuring the judge’s preparation around specific motion types. Judges upload filings, and the system organizes the record, synthesizes applicable law, and drafts proposed orders using samples of the judge’s own writing style. The platform’s signature feature, “Deep Verify,” automatically hyperlinks each sentence of its output to the underlying document so a judge can confirm the source. According to the vendor, multiple verification passes are built in, and the system draws solely from verified legal authorities and court‑specific guidance.
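
To make the described pipeline concrete, here is a minimal sketch of what a “Deep Verify”-style citation pass might look like, assuming a retrieval step that links each drafted sentence back to the most similar passage in the record. All of the names here (Citation, find_best_span, deep_verify) and the crude string-similarity scoring are hypothetical illustrations; the vendor’s actual pipeline is proprietary and has not been published.

    from dataclasses import dataclass
    from difflib import SequenceMatcher

    @dataclass
    class Citation:
        sentence: str        # the drafted sentence
        source_doc: str      # e.g., "Motion for Summary Judgment"
        source_span: str     # the record passage the sentence is linked to
        similarity: float    # crude support score in [0, 1]

    def find_best_span(sentence: str, record: dict[str, list[str]]) -> Citation:
        """Link one drafted sentence to the most similar passage in the record."""
        best = Citation(sentence, "", "", 0.0)
        for doc_name, passages in record.items():
            for passage in passages:
                score = SequenceMatcher(None, sentence.lower(), passage.lower()).ratio()
                if score > best.similarity:
                    best = Citation(sentence, doc_name, passage, score)
        return best

    def deep_verify(draft: list[str], record: dict[str, list[str]],
                    threshold: float = 0.6) -> list[Citation]:
        """Attach a source link to every drafted sentence; flag weak support."""
        citations = [find_best_span(s, record) for s in draft]
        for c in citations:
            if c.similarity < threshold:
                print(f"WEAK SUPPORT ({c.similarity:.2f}): {c.sentence}")
        return citations

Even in this toy form, the design choice is visible: verification runs sentence by sentence over whatever the model chose to say, a point that matters for the paradox discussed next.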

The Verification Paradox 

Although linking to sources is important, the system invites a verification paradox. Judges can confirm that a quoted passage appears in the record, but the AI itself decides which facts and arguments to include. A spot‑check of hyperlinked citations does not reveal whether the algorithm silently omitted nuanced testimony, mitigating facts, or equitable considerations because it deemed them statistically irrelevant. True judicial analysis requires weighing the entire record; allowing an AI to serve as the initial filter narrows the field of view and risks producing a ruling that rests on an artificially curated subset of reality. This paradox underscores why human oversight alone may not resolve the deeper question: how much cognitive delegation is occurring when a machine provides the first draft of judicial reasoning?
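
A toy calculation makes the asymmetry concrete. Citation checking is a per‑sentence test, while omission is a whole‑record property; the two metrics below, with purely illustrative names and data structures, can diverge arbitrarily.

    # A "citation" here is just (draft sentence, linked passage, support score).
    Record = dict[str, list[str]]      # document name -> passages
    Citation = tuple[str, str, float]

    def citation_accuracy(citations: list[Citation], threshold: float = 0.6) -> float:
        """What clicking hyperlinks measures: do linked passages support the draft?"""
        return sum(score >= threshold for _, _, score in citations) / max(len(citations), 1)

    def record_coverage(citations: list[Citation], record: Record) -> float:
        """What clicking hyperlinks never measures: how much of the record was used."""
        cited_passages = {passage for _, passage, _ in citations}
        total_passages = sum(len(passages) for passages in record.values())
        return len(cited_passages) / max(total_passages, 1)

A draft can score a perfect 1.0 on citation_accuracy while record_coverage reveals that most of the record, mitigating testimony included, was never cited at all. Only the first number is visible to a judge clicking links.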

Privacy, Confidentiality, and Data Governance: Sensitive Data in a Closed Universe

Generative AI tools operate by processing large datasets. For the LASC, that means potentially feeding medical records, sealed corporate trade secrets, sensitive family‑law filings, and personally identifying information into a private vendor’s servers. California’s new Rule 10.430 of the Rules of Court requires any court that allows generative AI to adopt a written policy by December 15, 2025. Courts that choose to permit AI must ensure that policies cover the use of generative AI by staff for any purpose and by judicial officers for non‑adjudicative tasks.

The rule specifies several mandatory safeguards. It prohibits entering confidential, personal identifying, or other non‑public information into a public generative‑AI system and forbids using AI to discriminate or produce disparate impacts.  It also requires personnel to take reasonable steps to verify accuracy and remove biased or harmful content. Importantly, Rule 10.430 mandates disclosure when the final version of a work provided to the public consists entirely of AI‑generated output. Courts may opt to make their policies more restrictive than the rule requires and may prohibit AI use entirely.

Learned Hand asserts that the LASC pilot uses a closed, purpose‑built architecture insulated from public training datasets. Even so, standard Software‑as‑a‑Service contracts frequently allow vendors to retain data for backup or to improve services, log files may persist even in closed systems, and such contracts do not generally contain mechanisms to control disclosure and use by subprocessors. Unanswered questions remain about how long the vendor stores intermediate drafts and prompts, whether those logs could be subject to discovery or public‑records requests, which subprocessors may be involved and how they may use litigant data, and whether the vendor may reuse insights to train future models. Under California’s robust privacy landscape, including statutes like the California Invasion of Privacy Act, unauthorized retention or secondary use of sensitive litigant data could expose courts or vendors to future litigation.

Transparency, Disclosure, and Procedural Opacity: Rules vs. Standards

A core tension in the LASC pilot is the difference between Rule 10.430 and its companion Standard 10.80 of the California Standards of Judicial Administration. Rule 10.430 requires courts to disclose generative‑AI use only when a final work provided to the public is entirely AI‑generated. Standard 10.80, which governs judicial officers acting within their adjudicative role, is phrased more softly: a judge “should consider whether to disclose” AI use when it is used to create content provided to the public. The standard also instructs judges not to enter confidential information into public AI, to verify outputs, and to take reasonable steps to remove biased or harmful content. The discretionary language means judges retain broad latitude to decide when and whether to inform litigants that AI contributed to a tentative ruling.

The Transparency Gap

LASC officials have confirmed that current rules do not require judges to disclose whether an AI authored a preliminary draft. Thus, litigants may never know if a machine played a substantial role in shaping the order they receive. This opacity hampers the ability of parties to challenge the underlying reasoning, cross‑examine the logic of the AI’s analysis, or argue that the AI overlooked material facts. Critics note that generative‑AI hallucinations have already produced sanctioned filings and public scandals, illustrating that black‑box drafting is not merely hypothetical. Without clear auditing and disclosure mechanisms, the risk is that courts inadvertently outsource parts of the adjudicative process to a proprietary algorithm while masking that delegation behind human signatures. 

Delegation, Psychological Anchoring, and the Rule of Law: Constitutional Non‑Delegation

Article VI of the California Constitution vests judicial power in human officers. Judicial authority is non‑delegable, and the legitimacy of a ruling depends on the judge’s independent, human reasoning. Court leaders emphasize that Learned Hand merely provides administrative support and that judges will remain the final arbiters. Yet behavioral research and candid comments from judges suggest that generative‑AI drafts can exert powerful cognitive effects.

Psychological Anchoring

An anonymous LASC judge not participating in the pilot warned that even tentative drafts prepared by a human clerk can set a baseline; the anchor influences subsequent analysis. When that draft is produced by a machine trained to mimic judicial style, the risk of anchoring may be more pronounced because the AI’s output arrives formatted, confident, and comprehensive. District Attorney Nathan Hochman expressed concern that an AI‑generated tentative ruling could “greatly influence what the judge’s position should be.” In psychological terms, the initial document becomes a point of reference; adjusting away from it requires conscious effort and time—resources already in short supply. What begins as administrative assistance may thus become de facto cognitive delegation, subtly shifting who is doing the core legal reasoning.

Why Courts Are Different 

Generative‑AI tools are transforming private legal practice, but courts occupy a distinct constitutional role. Judicial decisions bind the public and carry institutional legitimacy only if they are rendered through impartial, reasoned human judgment. The rule of law requires not just accurate outcomes but also processes that are transparent and subject to adversarial testing. Replacing or filtering parts of that process through a proprietary model that cannot be cross‑examined invites questions about equal treatment, due process, and public confidence. Further, generative models produce text by predicting statistically probable sequences. They do not weigh equities, exercise discretion, or reflect moral judgment. Automating the synthesis of a record risks sanitizing the law of its humanity in pursuit of statistical efficiency.

Comparative Governance: The Michigan Benchmark 

Other courts experimenting with Learned Hand have adopted more demanding safeguards. Michigan Supreme Court officials entered into a contract with the company after a pilot program that benchmarked the AI’s tools against court‑generated work. Michigan’s agreement emphasizes that the system operates in a closed universe, drawing exclusively from verified legal authorities, and that every claim is linked side‑by‑side to its source. Independent testing was conducted to measure “Algorithmic Justice Risk,” and the contract requires the vendor to log every query and interaction for potential external inspection. These provisions aim to ensure that if an anomalous or biased ruling occurs, its technological origin can be audited and corrected.
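
For readers weighing what logging “every query and interaction for potential external inspection” would entail in practice, a hypothetical schema is sketched below. The field names are illustrative assumptions, not the actual contract specification.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class AuditLogEntry:
        timestamp: datetime              # when the query was made (UTC)
        officer_id: str                  # pseudonymous judicial-officer identifier
        case_id: str                     # docket number the session is scoped to
        prompt: str                      # the query exactly as entered
        model_version: str               # which model/configuration answered
        sources_cited: tuple[str, ...]   # record documents the answer linked to
        output_hash: str                 # fingerprint of the draft for later audit

    def log_interaction(officer_id: str, case_id: str, prompt: str,
                        model_version: str, sources: list[str],
                        output_hash: str) -> AuditLogEntry:
        """Create an immutable entry that an external auditor could later inspect."""
        return AuditLogEntry(datetime.now(timezone.utc), officer_id, case_id,
                             prompt, model_version, tuple(sources), output_hash)

An append‑only store of such entries is what would let an auditor trace an anomalous ruling back to the specific prompts and sources that produced it.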

By contrast, California’s framework leans on internal self‑policing. Standard 10.80 asks judges to take “reasonable steps” to verify accuracy and remove bias and only “suggests” that they consider disclosure. Rule 10.430 mandates disclosure only when final work consists entirely of AI output. There is no statewide requirement for external audits, red‑teaming, or query logging. The LASC pilot thus proceeds with weaker governance than its Michigan counterpart, demonstrating that more robust safeguards are both available and already implemented elsewhere.

Practical Implications for Litigators

The LASC pilot is in its early stages. According to local reporting, about six civil‑court judges began using the system in March 2026 under a contract worth just over $300,000. The program runs through early 2027 and currently covers civil motions such as summary‑judgment and class‑action settlement requests. Judges must review and edit any AI‑generated draft before it becomes a tentative ruling. 

For practitioners, several emerging tactics warrant consideration: 

  1. Monitor disclosures and ask on the record.  Because current rules do not compel judges to disclose AI use, attorneys may begin requesting clarity during case management conferences or in writing. These requests should be framed as efforts to protect the record, not accusations of wrongdoing.
  2. Scrutinize tentative rulings. Counsel should read tentative decisions with an eye for patterns characteristic of large language models—overly confident tone, formulaic phrasing, or omission of nuanced facts. If there is reason to believe AI played a substantial role, lawyers may preserve objections or request supplemental explanations.
  3. Preserve discovery options. Future litigation may test whether AI prompts, interaction logs, or intermediate drafts are discoverable. Practitioners should consider whether to seek preservation orders or discovery regarding AI‑assisted drafting when challenging an adverse ruling.
  4. Stay informed about evolving policies. Rule 10.430 and Standard 10.80 may be amended as courts gain experience.  Additional local rules or standing orders could emerge requiring disclosure or limiting AI use in certain case types. Lawyers appearing in LASC should track these developments and adjust their litigation strategies accordingly.

Conclusion

The LASC’s Learned Hand experiment reflects a judiciary grappling with caseload pressures and technological possibilities. The pilot’s design—limited scope, internal verification, and assertions of judicial control—suggests a desire to modernize responsibly. Yet the program also exposes fault lines in current governance: unclear disclosure obligations, the potential for cognitive delegation, unresolved data‑governance questions, and the absence of external auditing. Courts are not ordinary enterprise adopters. Their legitimacy depends on transparency, accountability and human judgment.  Efficiency alone is not a constitutional value. To uphold the rule of law, any integration of generative AI must be accompanied by robust policies, meaningful consultation with stakeholders, independent evaluation, and clear public communication. As the LASC pilot moves forward, litigators, scholars and court reform advocates will watch closely to see whether the promise of a “judicial sous chef” can be harnessed without eroding the very foundations of justice.
