The Department of Justice (DoJ) Office of Inspector General (OIG) is calling out the department for not having an updated AI strategy; its latest public AI strategy dates back to 2020.

In a report outlining the top challenges faced by the Justice Department over the past year, the OIG explained that the lack of an updated strategy hampers the DoJ's ability to proactively strategize for and respond to emerging technology risks.

“As the use of more advanced AI increases, the department cannot afford to be reactive to the risks and consequences of AI, as GAO reported in May 2023,” the report says.

“The U.S. Department of Commerce, National Institute of Standards and Technology, has issued an initial framework to manage the risks of generative AI this year, but the management of AI risks undoubtedly poses a major challenge to the department as the technology is new and constantly evolving and the standards and regulations around AI are few and in their infancy,” it adds.

The report notes that the DoJ has made some efforts to adapt to the changing tech landscape, such as hiring the department’s first chief science and technology advisor and chief AI officer.

Additionally, it points out AI techniques in use by the department, such as machine learning to detect anomalies in drug samples, as well as topic modeling and clustering to consolidate records reviews.

However, the Justice Department published its AI strategy before the emergence of generative AI tools such as ChatGPT, which was publicly released in November 2022. The strategy also predates President Biden’s October 2023 AI executive order.

“Emerging technologies, such as AI, will significantly affect the DoJ’s efforts to uphold the rule of law, keep our country safe, and protect civil rights over time,” the report says. “When utilizing AI models and tools, DoJ must understand that there is currently a lack of robust and verifiable measurement methods for risk and trustworthiness.”

“To prevent the use of AI in ways that are irrelevant and potentially harmful, the department must identify flaws and vulnerabilities, such as unforeseen or undesirable system behaviors, limitations, or potential risks associated with the misuse of the system,” it adds.

As part of this effort, the OIG said it is conducting an audit of the Drug Enforcement Administration’s and FBI’s integration of AI and other emerging technology as members of the intelligence community. The goal, the OIG said, is to evaluate compliance with requirements related to AI and other emerging technologies.

In response to the report, the DoJ outlined some of the steps it has taken to address emerging technologies, which it says present both “important opportunities” and “risks for misuse.”

“The department therefore formed an Emerging Technology Board that brings together the Justice Department’s law enforcement and civil rights teams, along with other experts,” the DoJ said. “The department has charged this board with advising department leadership on responsible and ethical uses of AI by the Justice Department.”

“The department also launched the Justice AI Initiative earlier this year, to inform the Justice Department’s AI policy,” it added. “Justice AI brings together stakeholders across civil society, industry, academia, and law enforcement to share outside expertise and a wide range of perspectives on both the promise of AI and the perils of its misuse.”

The DoJ did not address whether it plans to release an updated AI strategy.

Grace Dille
Grace Dille is MeriTalk's Assistant Managing Editor covering the intersection of government and technology.