The Department of Defense (DoD) Chief Digital and Artificial Intelligence Office (CDAO) has launched the first of two artificial intelligence (AI) Bias Bounty exercises, the agency announced last week.

Bias bounties are crowdsourced exercises designed to detect bias in AI systems. The department may use the results of the bounty exercises to inform further research, analysis, best practices, and policy recommendations on AI.

“Given the department’s current focus on risks associated with LLMs, the CDAO is actively monitoring this area; the outcome of the AI Bias Bounties could powerfully impact future DoD AI policies and adoption,” Chief Digital and Artificial Intelligence Officer Craig Martell said in the release.

According to the department, the exercises are conducted to “generate novel approaches to algorithmically auditing and red teaming AI models, facilitating experimentation with addressing identified risks, and ensuring the systems are unbiased, given their particular deployment context.”

The first exercise is currently open to the public, and a second will follow.

The goal of the first bounty exercise is to identify unknown areas of risk in large language models (LLMs), beginning with open-source chatbots.

The first bounty exercise will run from Jan. 29 to Feb. 27, 2024.

The CDAO Responsible AI (RAI) Division is spearheading the two AI Bias Bounties, which are developed and executed through partnerships with ConductorAI-Bugcrowd and BiasBounty.AI, with advice from the CDAO Defense Digital Services Directorate.

“The RAI team is thrilled to lead these AI Bias Bounties, as we are strongly committed to ensuring that the Department’s AI-enabled systems – and the contexts in which they run – are safe, secure, reliable, and bias free,” said Matthew Johnson, acting chief of the DoD’s RAI Division, in a statement.

Lisbeth Perez is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.