
Law enforcement agencies dominate the Department of Homeland Security’s (DHS) latest update to its artificial intelligence (AI) use-case database, with many use cases employing biometrics and data analysis.
Updated on Wednesday to reflect current DHS AI efforts, the use case database lists 209 active projects. Eighty-six of those use cases are for law enforcement purposes, with 49 of those at Customs and Border Protection (CBP) and 29 at Immigration and Customs Enforcement (ICE).
Biometric-related AI use cases make up 38 of the total use cases, and 66% of those are used by CBP or ICE. Thirty of those uses involve facial recognition and are intended for face detection, face matching and recognition, and facial image analysis, according to the database.
Other biometric AI includes fingerprint and voice recognition technology.
One high-impact tool used by ICE provides online image screening capabilities that help law enforcement agents “discover potentially relevant photographs or profiles that they might not otherwise find.”
Mobile Fortify is also listed in the database. The facial recognition app allows agents to scan faces and retrieve information on individuals captured by their cameras. The app drew sharp criticism last fall from congressional Democrats who raised concerns over ICE’s use of biometrics.
One question that lawmakers posed to ICE was how and whether the app was tested before its deployment. DHS noted in its update that “ICE does not own and did not train, test, or evaluate the AI models that power the Mobile Fortify application.”
According to ICE’s description of the app, it is “intended to solve the problem of confirming individuals’ identities in the field when officers and agents must work with limited information and access multiple disparate systems to identify individuals.”
Another AI tool used by ICE scans license plates, which “helps investigators more quickly identify potentially relevant vehicle movements and patterns that might otherwise be missed,” according to DHS’ description.
Both Mobile Fortify and the license plate screening AI are still undergoing independent evaluations and impact assessments. DHS said it is working to establish “sufficient and periodic training for operators” on both use cases and has not yet developed an “appropriate fail-safe that minimizes the risk of significant harm.”
Data-related use cases listed in the database largely involve travel and border crossing information, in addition to investigative and case-related data.
DHS also reported CBP AI use cases that help detect, classify, and flag objects, people, or vehicles, and improve situational awareness in operational environments.
Across all active DHS AI use cases, the department listed 51 as high impact, 108 as not high impact, and 46 as “presumed high-impact but determined not high-impact.” Thirty-two of the high-impact use cases are at CBP and ICE, according to the database.
Under a memo from the Office of Management and Budget, all use cases considered high impact must meet minimum risk management practices by April.
DHS’ use of technology has continued to prompt congressional concern. On Thursday, Democratic Virginia Sens. Mark Warner and Tim Kaine sent a letter to DHS Inspector General Joseph Cuffari, requesting that he investigate DHS technology procurements that may infringe on Americans’ Fourth Amendment protections.
“… It’s important that your office shine light on activities that undergird ICE’s enforcement actions including a muddled patchwork of technology procurements that have significantly expanded DHS’ ability to collect, retain, and analyze information about Americans,” the senators wrote.