How AI Helps Clear Court Backlogs in Los Angeles Without Replacing Judges


Courts across the globe are struggling to keep up with a steady rise in cases, and Los Angeles is no exception. In response, the Los Angeles Superior Court has begun experimenting with a new tool that aims to speed up the flow of civil cases without replacing the judges who decide them.

At the center of the pilot is an artificial intelligence system called Learned Hand. Rather than issuing decisions on its own, the software is designed to work in the background: it digests dense filings, structures the evidence, and produces draft opinions that judges can then review, edit, or reject. The ambition is straightforward: cut the time spent on repetitive administrative work so that human judges can devote more attention to legal reasoning and judgment.

Learned Hand’s founder and CEO, Shlomo Klapper, described the moment as a turning point. Courts, he said, are under “tremendous strain.” Caseloads continue to rise, but staffing and funding have not kept pace. At the same time, rapid advances in AI are “massively dropping the cost of litigation,” which means it is becoming cheaper and faster for lawyers to generate filings. That, in turn, creates even more paperwork for courts to process.

AI tools already help attorneys draft motions, assemble exhibits, and generate lengthy legal arguments at a fraction of the former cost. While that can expand access to legal services for some litigants, it also leads to a flood of longer, more complex documents landing on judges’ desks. The Los Angeles pilot is an attempt to rebalance that equation: if AI helps lawyers write more, it may also need to help judges read and respond more efficiently.

In practice, Learned Hand functions as an assistant rather than an authority. When a civil case is filed, the system can pull together the relevant documents, highlight key arguments from each side, and create a structured summary of the issues in dispute. Instead of wading through hundreds of pages, a judge can start with a synthesized overview and then dive into the specific filings and evidence as needed.

Another major function is evidence organization. Civil cases often involve a tangle of contracts, emails, expert reports, and prior court decisions. The AI tool can categorize these materials, link them to specific claims or defenses, and surface the most important passages. That reduces the risk that crucial information is simply buried in the volume of paperwork.
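The article does not describe Learned Hand’s internals, so as a purely illustrative sketch, this kind of evidence organization can be thought of as two simple steps: grouping documents by type, and linking each claim to the documents that mention it. All names here (`CaseDocument`, `Claim`, `organize_evidence`, the keyword-matching approach) are hypothetical, not drawn from the actual system.

```python
from dataclasses import dataclass

@dataclass
class CaseDocument:
    doc_id: str
    doc_type: str  # e.g. "contract", "email", "expert_report"
    text: str

@dataclass
class Claim:
    name: str
    keywords: list

def organize_evidence(documents, claims):
    """Group documents by type and link each claim to the documents
    whose text mentions one of the claim's keywords."""
    # Step 1: categorize the materials by document type.
    by_type = {}
    for doc in documents:
        by_type.setdefault(doc.doc_type, []).append(doc.doc_id)

    # Step 2: link each claim to documents via naive keyword matching
    # (a real system would use far more sophisticated retrieval).
    claim_links = {}
    for claim in claims:
        claim_links[claim.name] = [
            doc.doc_id for doc in documents
            if any(kw.lower() in doc.text.lower() for kw in claim.keywords)
        ]
    return by_type, claim_links

docs = [
    CaseDocument("D1", "contract", "Service agreement with a delivery deadline of March 1."),
    CaseDocument("D2", "email", "We never received the shipment by the deadline."),
    CaseDocument("D3", "expert_report", "Market analysis of damages."),
]
claims = [
    Claim("breach_of_contract", ["deadline", "agreement"]),
    Claim("damages", ["damages", "loss"]),
]

by_type, links = organize_evidence(docs, claims)
print(by_type)   # documents grouped by category
print(links)     # claims linked to the documents that mention them
```

Even this toy version shows why surfacing linked material matters: a judge looking at the breach claim is pointed directly at the contract and the relevant email, rather than scanning the full record.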

Perhaps the most sensitive feature is its ability to draft proposed rulings. Based on the claims, evidence, and applicable legal standards, the system can generate a preliminary decision or order. This draft is not final and has no legal force until the judge reviews it, makes changes, and formally issues it. The court’s intention is to preserve human discretion and accountability, while still using automation to accelerate the most time-consuming parts of opinion writing.

Supporters argue that this approach could help clear growing backlogs, especially in complex civil matters that might otherwise take months or years to resolve. By shortening the time between filings, hearings, and decisions, courts could move cases along more predictably, which benefits plaintiffs, defendants, and the broader economy. Faster resolution can reduce legal costs, uncertainty for businesses, and stress for individuals whose livelihoods or rights are tied up in litigation.

At the same time, the experiment is unfolding against a backdrop of concern about how far AI should be allowed to go in legal settings. Judges, lawyers, and legal scholars worry about accuracy, transparency, and fairness. AI systems can misinterpret facts, overlook nuances, or produce “hallucinations” (confident but incorrect statements) if not carefully designed and supervised. In a court, such mistakes could have serious consequences.

There is also the question of bias. If an AI system is trained on historical court decisions, it may reproduce or even amplify existing inequities, such as harsher outcomes for certain groups of litigants. That is one reason the Los Angeles pilot is being framed as a tool for summarizing and organizing information rather than for deciding outcomes. Judges remain responsible for the final legal analysis and for ensuring that decisions are consistent with the law and with constitutional protections.

Another challenge is explainability. Judges must be able to justify their rulings and show how they arrived at a conclusion. If an AI system plays a role in drafting a decision, courts will need clear protocols about how that assistance is documented and how much reliance on AI is acceptable. The credibility of the judicial system depends on transparency, and any technology introduced into the process must be compatible with that principle.

From a practical standpoint, the success of tools like Learned Hand will depend heavily on training and adoption. Judges and clerks need to understand what the system can and cannot do, how to interpret its outputs, and when to override or ignore its suggestions. The pilot in Los Angeles provides an opportunity to test not only the technology, but also the workflows, safeguards, and cultural changes required to integrate AI into a traditionally cautious institution.

If the experiment proves effective, similar tools could be expanded beyond civil cases to other areas with heavy caseloads, such as family law or small claims. Even modest reductions in processing time per case could translate into thousands of hours saved across a large court system each year. That, in turn, might help address one of the most persistent criticisms of modern justice systems: that justice delayed is, too often, justice denied.

Still, any expansion is likely to be gradual. Courts must weigh efficiency gains against risks to due process and the appearance of impartiality. Public trust may be undermined if people believe machines, rather than judges, are deciding their disputes. That places a premium on clear messaging: AI is being used as a high-powered research and drafting assistant, not as a replacement for judicial thinking.

In the longer term, pilots like the one in Los Angeles could shape broader standards for the use of AI in law. They may influence how court systems set rules for data security, how they protect sensitive case information, and how they audit AI tools for accuracy and bias. As more jurisdictions confront similar backlogs and resource constraints, the experience gained in early adopters will likely inform how, and how quickly, other courts embrace or limit these technologies.

Ultimately, the Los Angeles Superior Court’s pilot is an experiment in balancing innovation with institutional responsibility. By deploying an AI assistant such as Learned Hand, the court hopes to absorb the shock of rising caseloads and AI-accelerated litigation, without surrendering what makes judicial decision-making distinct: human judgment, accountability, and a commitment to fairness under the law.