Artificial intelligence (AI) presents a growing set of cybersecurity vulnerabilities that extend beyond traditional software threats, according to Matthew Turek, deputy director of the Information Innovation Office at the Defense Advanced Research Projects Agency (DARPA).

Speaking in the latest episode of the Billington CyberSecurity Cyber and AI Outlook Series, hosted by Federal News Network on Sept. 30, Turek addressed the complex and evolving risks posed by AI systems.

While AI shares many of the same vulnerabilities as conventional software, it introduces unique challenges that require new security strategies, Turek emphasized. Among these is susceptibility to adversarial attacks, in which an adversary deliberately manipulates an AI system into making unintended or incorrect decisions.

“There’s an entire research community focused on adversarial attacks on AI systems – how might I get an AI system to make a decision different than what the system owner wants?” he noted.
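
Turek did not name a specific method, but a canonical illustration from that research community is a fast-gradient-sign-style evasion attack: nudge each input feature slightly in the direction that most changes the model's output. The sketch below runs the idea against a toy logistic-regression scorer; the model, weights, and inputs are invented for demonstration, and real attacks target deployed neural networks.

```python
# Illustrative sketch of an evasion-style adversarial attack against a toy
# logistic-regression "model". Weights and data are invented for demonstration.
import numpy as np

# Hypothetical model: score = sigmoid(w . x + b)
w = np.array([2.0, -1.5, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

# A benign input the model scores well above the 0.5 decision threshold
x = np.array([1.0, 0.2, -0.3])
print("original score:", predict(x))       # ~0.84

# Gradient of the score with respect to the input
grad = predict(x) * (1.0 - predict(x)) * w

# Fast-gradient-sign step: move each feature against the current decision
epsilon = 0.6
x_adv = x - epsilon * np.sign(grad)
print("adversarial score:", predict(x_adv))  # ~0.32, decision flipped
```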

AI models are also at risk of being reverse engineered, particularly through repeated queries that can reveal proprietary data or decision-making structures, Turek said. This becomes especially concerning in scenarios involving sensitive government information and national security applications.

“One of the foundational research problems is identifying and preventing malicious attempts to mine an AI model,” he said. “Trying to differentiate a pattern of queries used for malicious purposes from normal use is a very difficult problem.”
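
Turek did not outline a detection approach, but one simple heuristic researchers study is to flag a client whose query stream is both very high volume and unusually spread across the model's input space, which is characteristic of systematic model extraction. The sketch below uses invented thresholds and features, and, as Turek's point suggests, heuristics like this are easy for a patient attacker to evade.

```python
# Minimal sketch of flagging a query stream that "looks like" model
# extraction: high volume plus broad, uniform coverage of the input space.
# Thresholds are invented for illustration only.
import numpy as np

def extraction_suspicion(queries, volume_threshold=1000, spread_threshold=0.25):
    """Return True if a client's query batch looks extraction-like."""
    q = np.asarray(queries, dtype=float)
    if len(q) < volume_threshold:
        return False  # too few queries to resemble aggressive extraction
    # Per-feature standard deviation as a crude measure of how widely the
    # client is sweeping the input space.
    spread = q.std(axis=0).mean()
    return spread > spread_threshold

# Normal client: a modest number of queries clustered around realistic inputs
normal = np.random.normal(loc=0.5, scale=0.05, size=(50, 8))
# Extraction-like client: thousands of queries spread across the whole space
attacker = np.random.uniform(low=0.0, high=1.0, size=(5000, 8))

print(extraction_suspicion(normal))    # False
print(extraction_suspicion(attacker))  # True
```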

Current mitigation measures – such as restricting access through application programming interfaces and applying conventional security controls – are important but not fully sufficient, he said. On the data side, Turek warned of the challenge of verifying large-scale training datasets, especially when they are sourced from the open internet.

“Having some strong assurance statement about what is in your dataset is going to be difficult,” he said.
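
The interview does not prescribe a technique, but one basic building block toward such an assurance statement is a cryptographic manifest of the training records, so later tampering or silent substitution can at least be detected. It establishes the integrity of a snapshot, not the trustworthiness of what was scraped in the first place. A minimal sketch, with hypothetical file paths:

```python
# Minimal sketch of a training-data manifest: hash every record so that later
# tampering or substitution can be detected. File names and layout are
# hypothetical.
import hashlib
import json
from pathlib import Path

def build_manifest(data_dir: str, manifest_path: str = "manifest.json") -> dict:
    """Hash every file under data_dir and write a manifest of the digests."""
    manifest = {}
    for path in sorted(Path(data_dir).rglob("*")):
        if path.is_file():
            manifest[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))
    return manifest

def verify_manifest(manifest_path: str = "manifest.json") -> list:
    """Return the files whose contents no longer match the recorded digests."""
    manifest = json.loads(Path(manifest_path).read_text())
    mismatches = []
    for name, expected in manifest.items():
        p = Path(name)
        actual = hashlib.sha256(p.read_bytes()).hexdigest() if p.is_file() else None
        if actual != expected:
            mismatches.append(name)
    return mismatches

# Usage (hypothetical directory):
#   build_manifest("training_data/")
#   print(verify_manifest())  # [] if nothing has changed
```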

Turek outlined several DARPA initiatives aimed at advancing secure AI adoption across government and critical infrastructure. One program is the agency’s Constellation effort with U.S. Cyber Command, which aims to transition promising research into operational capabilities through a shared budgeting and governance process.

Another initiative is the AI Cyber Challenge, which requires winning participants to open-source their defensive tools. According to Turek, this approach is intended to promote wide-scale adoption of innovative security solutions across the federal government and the private sector.

“Sometimes it’s not just the U.S. government that has particular equity in a defensive problem,” he said. “We need to partner with industry and with critical infrastructure owners to adopt those defenses.”

Lisbeth Perez
Lisbeth Perez is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.