A bipartisan group of lawmakers introduced legislation in the House and Senate to create a task force to evaluate the use of artificial intelligence (AI)-powered speech technologies in the federal court system. 

According to the Research and Oversight of AI in Courts Act of 2026 – introduced by Rep. Harriet Hageman, R-Wyo., and Sens. Roger Wicker, R-Miss., and Peter Welch, D-Vt. – the task force would evaluate civil liberty implications, privacy issues, and concerns over the accuracy of AI technologies in courts. 

The 15-member task force, established by the National Institute of Justice, would include four federal officials – such as court staff, judges, or prosecutors – and 11 outside experts in court record technology, civil liberties, and judicial review. Non-federal members would be barred from ties to AI companies to prevent conflicts of interest, according to the proposal’s text. 

“Artificial intelligence is being integrated into every aspect of our society in the 21st century, including our court systems,” Hageman said in a statement. “As an attorney for over three decades, I know our justice system demands precision and security. Congress must protect the integrity of our courts with vigorous oversight that remains up to date with emerging technologies.” 

Welch added that, “When it comes to the use of AI in the courtroom, there are still substantial privacy and civil liberty concerns that need to be addressed. Accuracy, privacy, and security are paramount. It is critical we allow experts who are actively working in the courts to weigh in on use of emerging AI speech-to-text services and technologies.”  

While adoption of AI in the federal court system has been slower than in civilian and defense agencies, interest in the technology has increased, a 2025 report from the Thomson Reuters Institute and the National Center for State Courts AI Policy Consortium for Law and Courts found. 

That report – based on a survey of 443 state, county, and municipal court judges and court professionals – found that only 17% of respondents said their court was using generative AI, and 70% said their court prohibits the use of AI tools. Yet, 55% of respondents cited AI as likely having a high impact within the next five years. 

Last year, Judge Robert Conrad, director of the Administrative Office of the U.S. Courts, told Congress that the federal judiciary created an advisory AI task force to distribute AI guidance across the court system. 

That notification followed reports of two district judges using AI to write court orders that allegedly contained “serious factual inaccuracies.” Those errors included misquoting sources, referencing nonexistent or irrelevant parties, and misstating case details. 

Hageman, Wicker, and Welch’s legislative proposal aims to tighten oversight of AI and evaluate the technology’s impact on court record accuracy and litigants’ constitutional rights. It would also assess AI-related risks to court cybersecurity systems. 

“Artificial intelligence capabilities continue to expand and become part of daily life,” Wicker said. “Federal courts have begun using this technology to improve their processes. This legislation would examine the legal, technical, and constitutional implications of AI in the U.S. judicial system. Ensuring accuracy is critical to fair justice.” 

The legislation is backed by the National Court Reporters Association.

Weslan Hansen
Weslan Hansen is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.