
A congressman who is influential on homeland security matters is asking the Government Accountability Office (GAO) to examine the use of artificial intelligence (AI) by violent extremists and other illicit actors, saying it poses “a broad and evolving national security threat.”
Rep. August Pfluger, R-Texas, who is chairman of the House Homeland Security Committee’s Subcommittee on Counterterrorism and Intelligence, singled out generative AI (GenAI) and agentic AI as potentially harmful in his March 17 letter to GAO.
“As artificial intelligence (AI) increasingly becomes a part of the everyday lives of Americans, so too do malicious actors seek to exploit emerging AI technologies and applications to pursue harmful, even deadly, agendas,” Pfluger wrote. “Violent extremists and other illicit actors – including but not limited to insiders who pose threats and malicious cyber actors – will inevitably seek inventive ways to exploit these emerging technologies to support a wide range of terrorist tactics and other criminal activities.”
Specifically, Pfluger requested that GAO review how the use of AI by violent extremists and other illicit actors has changed their ability to conduct terrorist activities, how it has changed federal law enforcement and intelligence agency efforts to fight such activities, and how agencies work with technology companies to disrupt the use of AI by those with ill intent.
In a statement to MeriTalk, GAO said that it has accepted Pfluger’s request and is “awaiting staff that are working on other engagements before the work gets underway.”
The letter is Pfluger’s latest effort to highlight the potential national security risks of AI. In November, the House passed a bill he sponsored that would require the Department of Homeland Security (DHS) to annually assess terrorism threats to the United States posed by terrorist groups using GenAI applications.
The Senate has not voted on the measure, which remains in the Committee on Homeland Security and Governmental Affairs.
Pfluger is not alone in his concern. A pair of bipartisan senators recently sounded the alarm after the first documented case of a successful AI-enabled cyberattack, which targeted 30 entities.
Sens. Maggie Hassan, D-N.H., and Joni Ernst, R-Iowa, called for a coordinated effort with Congress and other federal agencies to address the new and emerging threat.
A Defense Advanced Research Projects Agency (DARPA) official also recently highlighted AI cybersecurity vulnerabilities. DHS, while saying the responsible use of AI holds great promise, has warned of potential threats AI poses to chemical and nuclear safety.
In his letter to GAO, Pfluger said GenAI and agentic AI “are rapidly evolving technologies that could fundamentally alter the global terrorism landscape.”
GenAI, for example, “enables violent extremists and other illicit actors to create large volumes of tailored propaganda, misinformation, and recruitment content at low cost and machine-level speed, significantly lowering barriers to radicalization,” he said.