Government agencies are excited to implement artificial intelligence (AI) and automated technologies, but should remain aware of the risks and challenges of doing so, says a new report from the Partnership for Public Service and the IBM Center for the Business of Government.

The report – part of the More Than Meets AI series the two organizations are releasing – finds that agencies will face many familiar challenges with AI, including workforce issues, procurement delays, and cybersecurity. Agencies will also need to address AI-specific concerns, such as algorithmic bias and the difficulty of explaining AI-driven decisions to non-technical audiences.

“It is important for federal organizations to move forward with implementing AI technologies as they address AI’s risks. Their approach to lessening AI risks also must evolve rapidly if they hope to use AI to address government’s most pressing challenges,” the report says.

Bias in AI tools could undermine the public’s trust in government’s use of AI technologies. The report notes that agencies will need to understand how AI algorithms work, check the output of AI tools when unexpected results emerge, and ensure that the data used to train AI is of high quality. The report also notes that the National Institute of Standards and Technology (NIST) is studying AI trustworthiness, and suggests that NIST create a framework for assessing bias.

Transparency is also key to the success of AI, the report states. The “black box” nature of AI could harm public trust in the government’s use of these technologies and make it hard to understand how conclusions were reached. The authors cite a case in the Houston school district, where an AI tool faced backlash after it was used to make personnel decisions without any explanation of the processes behind them. The report recommends supporting research on “explainable AI” as part of the government’s strategy.

On cybersecurity, the report highlights how attacks could corrupt AI training data or reveal personally identifiable information. Citing the Department of Defense’s work to develop reliable and secure AI systems, the report suggests using people to monitor for attacks, conducting test breaches, and collaborating with other countries to address security concerns.

For workforce issues, the report notes that agencies have difficulty securing funding to properly train employees on AI skills. The report suggests starting with small efforts – training employees on AI terms and definitions, emphasizing expertise in digital and data skills, and communicating clearly about AI – though agencies will still need to find the resources to train existing employees on the technology.

When it comes to Federal procurement, the report acknowledges the difficulties of moving fast and acquiring AI on an iterative basis, but suggests that agencies “[take] full advantage of the tools and flexibilities available in the budget and procurement processes.”

The report concludes by suggesting that White House offices such as the Office of Management and Budget and the Office of Science and Technology Policy lead efforts to manage these risks, and draw on the experiences of similar countries, like Canada.
