Law enforcement agencies that use forensic algorithms to aid in criminal investigations face numerous challenges, according to the Government Accountability Office (GAO), including difficulty interpreting and communicating results, as well as addressing potential bias or misuse.

“Law enforcement agencies primarily use three kinds of forensic algorithms in criminal investigations: latent print, facial recognition, and probabilistic genotyping,” wrote GAO in a new report. “Each offers strengths over related, conventional forensic methods, but analysts and investigators also face challenges when using them to assist in criminal investigations.”

GAO explained that law enforcement agencies use forensic algorithms to help assess whether evidence originates from a specific individual, improving the speed and objectivity of many investigations. To address potential challenges in using forensic algorithms, GAO developed three policy options:

  1. Increased training;
  2. Standards and policies on appropriate use; and
  3. Increased transparency in testing, performance, and use of the algorithms.

“The policy options identify possible actions by policymakers, which may include Congress, other elected officials, Federal agencies, state and local governments, and industry,” wrote GAO.

GAO said that increased training for law enforcement agencies could help analysts reduce risks associated with error and decision-making; understand and interpret the results they receive; become more aware of cognitive bias and improve objectivity; and work more consistently across agencies.

Through supporting standards and policies on appropriate use, agencies that use these algorithms can address the quality of data inputs and reduce improper use, increase consistency across law enforcement agencies, and help reassure the public and other stakeholders that algorithms are providing reliable results, GAO said.

The government watchdog agency listed several potential benefits of increased transparency around the testing, performance, and use of algorithms. Those include: the public may gain more trust in algorithms; non-technical users may find algorithm usage easier to understand; agencies may be better able to select the best-performing algorithms; public confidence in facial recognition technologies may improve; and the demographic effects of algorithm use may be reduced.
