A new white paper released Jan. 4 by the American Council for Technology and Industry Advisory Council (ACT-IAC) says that Federal agencies have an opportunity to “lean in” and take steps to address accountability in their use of artificial intelligence (AI) technologies.
The paper also outlines recent Federal policy actions on AI accountability, and what the paper describes as minimum requirements and policy guidance for agencies to consider in order to manage AI accountability risks.
“Artificial Intelligence (AI) has been implemented for hundreds of use cases across the Federal government,” says the white paper, titled “AI Accountability in the US Federal Government – A Primer” and developed by an ACT-IAC artificial intelligence working group.
“Given this increased adoption, Federal leaders have an opportunity to lean in to take responsible and prudent measures to address AI accountability in the context of their unique mission,” the paper says.
The paper notes that “Federal guidance does not currently provide a precise definition of Artificial Intelligence (AI) accountability,” which it says leaves agencies considerable latitude in deciding how to approach the issue.
The white paper lists a series of recommendations that Federal agencies should undertake in order to mitigate AI accountability risks. Some of those recommendations include the following.
- Agencies should know their AI use cases, based on agency inventories and use cases compiled by the Office of Management and Budget (OMB);
- In compliance with White House Executive Order 13960, agencies must justify the use of each of the AI systems in their inventory, or be prepared to retire that system;
- Agencies should decide how AI accountability applies in the context of agency operations and mission, and be on the lookout for additional guidance in the form of executive orders, regulations, and law;
- Agencies should establish or reinforce existing AI accountability practices in keeping with guidance already issued by the National Institute of Standards and Technology (NIST) and the Government Accountability Office in its AI Accountability Framework; and
- Agencies should consider using a Responsible, Accountable, Consulted, and Informed (RACI) matrix to reach a common understanding of how those roles contribute to AI accountability.
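A RACI matrix is essentially a table mapping activities to the role that holds each assignment. As a minimal illustrative sketch only (the activities and roles below are hypothetical examples, not drawn from the white paper), such a matrix could be represented and queried like this:

```python
# Minimal sketch of a RACI matrix. Activities and roles are hypothetical
# illustrations, not taken from the ACT-IAC paper.
# Codes: R = Responsible, A = Accountable, C = Consulted, I = Informed.
raci = {
    "Maintain AI use-case inventory": {
        "CIO": "A", "Program Office": "R", "Legal": "C", "Staff": "I",
    },
    "Review model risk": {
        "CIO": "C", "Program Office": "R", "Legal": "A", "Staff": "I",
    },
}

def roles_with(assignment: str, activity: str, matrix=raci) -> list[str]:
    """Return the roles holding a given RACI code for an activity."""
    return [role for role, code in matrix[activity].items() if code == assignment]

# Who is Accountable for keeping the use-case inventory current?
print(roles_with("A", "Maintain AI use-case inventory"))  # ['CIO']
```

The value of the matrix is that each activity has exactly one Accountable role, making ownership of each AI accountability task unambiguous.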
The report also recommends that the Federal CIO Council facilitate an AI accountability discussion group, “possibly in partnership with industry.”