With AI technology becoming ubiquitous in daily life, the National Institute of Standards and Technology (NIST) recently released a new paper warning of hackers looking to manipulate or “poison” AI data sets for malicious purposes.

The Jan. 4 paper warns of adversaries targeting AI and machine learning systems to cause real-world harm, attacks that have been steadily increasing alongside the growth of AI technology in both the private and public sectors.

“We are providing an overview of attack techniques and methodologies that consider all types of AI systems,” said Apostol Vassilev, a NIST computer scientist and one of the publication’s authors.

“We also describe current mitigation strategies reported in the literature, but these available defenses currently lack robust assurances that they fully mitigate the risks. We are encouraging the community to come up with better defenses,” he added.

The report outlines four major attack vectors that hackers use:

  • Evasion Attacks: occur after an AI system is deployed and attempt to alter the system’s inputs to change how it responds;
  • Poisoning Attacks: occur during training, when corrupted data is introduced into the data set a system learns from;
  • Privacy Attacks: occur after an AI system is deployed and attempt to extract sensitive information about the model or the data it was trained on; and
  • Abuse Attacks: involve inserting false or incorrect information into a source, such as a web page, that the AI model draws on.

“Most of these attacks are fairly easy to mount and require minimum knowledge of the AI system and limited adversarial capabilities,” said co-author Alina Oprea, a professor at Northeastern University.

“Poisoning attacks, for example, can be mounted by controlling a few dozen training samples, which would be a very small percentage of the entire training set,” she added.
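
Oprea’s example can be illustrated with a minimal, hypothetical sketch that is not drawn from the NIST paper: a scikit-learn classifier is trained twice on the same synthetic data, once clean and once after an attacker has flipped the labels of a few dozen training samples, and the two models’ test accuracy is compared. The dataset, model, and sample counts here are assumptions chosen only to show the mechanics.

```
# Illustrative label-flipping poisoning sketch (not from the NIST paper):
# a few dozen poisoned samples are mixed into an otherwise clean training
# set, and the effect on test accuracy is compared.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification data standing in for a real training corpus.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# The attacker controls a few dozen samples: flip their labels before training.
n_poisoned = 40  # a small fraction of the 1,500-sample training set
poison_idx = rng.choice(len(y_train), size=n_poisoned, replace=False)
y_poisoned = y_train.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

In practice, attackers typically choose which samples to corrupt far more deliberately than the random flipping shown here, which is part of what makes such small poisoning budgets effective.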

The paper offers various mitigation techniques that potential targets can use to defend their AI or ML systems against such attacks, but it warns that many security gaps have yet to be closed.

“Despite the significant progress AI and machine learning have made, these technologies are vulnerable to attacks that can cause spectacular failures with dire consequences,” said Vassilev. “There are theoretical problems with securing AI algorithms that simply haven’t been solved yet. If anyone says differently, they are selling snake oil.”

Jose Rascon is a MeriTalk Staff Reporter covering the intersection of government and technology.