Guidance issued by the Office of Management and Budget (OMB) last year on how to accelerate the federal government’s use of artificial intelligence (AI) technologies falls short in addressing key privacy risks, the Government Accountability Office (GAO) said in a report issued on March 26.

GAO warned that gaps in the guidance could leave agencies without sufficient direction to manage sensitive data exposures tied to AI use – a growing concern as agencies scale deployment of AI tools across mission systems.

OMB’s AI guidance to agencies, GAO found, “doesn’t fully address all the major privacy-related risks and challenges to” implementation.

GAO said it reached that conclusion after convening a panel of experts to discuss government AI implementation thus far.

“The experts noted that using AI may reveal sensitive information in raw data sets, potentially exposing personal and private information, among other privacy risks,” the report says.

“At the same time, the experts identified several challenges that federal agencies face in addressing these risks,” the report says, adding, “These include the lack of technology to implement AI with appropriate privacy protections and the potential performance tradeoff when adjusting or removing certain data for the sake of privacy.”

GAO said it made two recommendations to OMB, neither of which drew comments from the agency.

The first is that OMB “should specify examples of known privacy-related risks that agencies should consider when updating their policies as they pertain to AI.”

The second recommendation is more detailed and calls on OMB to issue governmentwide guidance related to:

  • “How agencies should consider privacy when evaluating and auditing AI models that contain sensitive information;
  • Storing data in a manner where sensitive data can be separated from the dataset;
  • Clear rules, norms, and best practices with respect to privacy that agencies should use when developing AI solutions internally;
  • Performance metrics agencies can use to assess privacy-related impacts when using AI;
  • Actions agencies can take to ensure that members of the public who interact with their AI technologies understand what they are consenting to;
  • Technological tools agencies can use to protect sensitive data when using AI;
  • Incorporating AI-specific considerations into privacy impact assessments, including identifying risks and informing the public about how PII is involved in the use of AI; and
  • Potential tradeoffs between privacy and performance agencies can consider when using AI.”

GAO also suggested that OMB could use existing interagency bodies – such as the Chief AI Officer Council and the Federal Privacy Council – to share best practices and coordinate approaches to AI-related privacy challenges.

“Without this additional direction, risks are increased that agencies’ use of AI would disclose sensitive data, or compromise privacy in other ways,” GAO concluded.

John Curran
John Curran is MeriTalk's Managing Editor covering the intersection of government and technology.