
AI Is the Solution to Stop AI Data Theft

By Fadi Fadhil, Field Chief Information Officer at Palo Alto Networks and former CIO for the City of Minneapolis

For many years, data protection has been a defensive game. To keep data safe, organizations have needed to protect their systems from outside cybersecurity threats like malware, phishing attempts, and ransomware attacks. This focus on guarding against intruders has spurred the creation of new systems that monitor for cyberattacks and implement zero trust architecture, thereby helping prevent theft of critical or sensitive data.

But what if the biggest threat isn’t coming from outside intruders, but instead from the actions of your own employees?

When employees use new AI tools that promise to make them more efficient or productive at work, there is no firewall to prevent them from sharing data – knowingly or not – with bad actors embedded within these programs. This type of data theft leaves no obvious trace, which not only delays the organization's response but also eliminates any chance of getting the stolen data back.

Take school systems, for example. Schools across the country are consistently working with less funding and fewer staff than in the past. Stretched thin by competing demands on their time, an administrative employee at a high school might use a large language model (LLM) tool to help sort student demographic data and save time. What they may not know or remember is that the web-based AI tool they are using is not part of the school's IT environment – meaning this student data is being shared with a third party that has no obligation to follow the school's data guidelines. Students' addresses, health records, grades, and other identifying information are all at risk of being stolen by a threat actor.

As scary as it sounds, that’s the reality both public and private sector organizations across the country are facing with the rise of third-party AI tools. According to a recent Gallup poll, up to a third of U.S. employees are using AI in at least some of their work. That’s tens of millions of workers feeding potentially proprietary data into unknown third-party applications outside of their organization’s IT infrastructure, often without realizing how their inputs might be viewed, used, sold, or even stolen.

In the cybersecurity space, we recognize this phenomenon as “Silent AI” – the hidden vulnerabilities associated with employee use of third-party AI tools. While uses for these tools may seem innocuous and even beneficial, like helping employees create to-do lists or organize information visually for easier reading, they involve sharing sensitive data outside the perimeter that IT departments can monitor. The use of AI programs has truly become the Wild West, and most organizations don’t have a sheriff. Once that data is outside of an organization’s systems, there’s no way to get it back.

The solution to Silent AI is to identify data sharing before it happens. Organizations need a solid understanding of which AI programs their employees are using so they can monitor what information is moving into and out of their networks. This can’t be accomplished with a patchwork of cybersecurity protections – rather, organizations must be able to look at their entire network to properly manage vulnerabilities.
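To make that visibility idea concrete, here is a minimal sketch of what discovering unsanctioned AI use might look like: a script that scans outbound web proxy logs for traffic to known third-party AI services, so an IT team can see which tools are actually in use and by whom. The log format, column names, and domain list here are illustrative assumptions, not a reference to any particular product or vendor.

```python
import csv
from collections import Counter

# Illustrative list of third-party AI service domains to watch for.
# A real deployment would maintain a much larger, regularly updated list.
AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def discover_ai_usage(proxy_log_path: str) -> Counter:
    """Count outbound requests to known AI services, grouped by user and domain.

    Assumes a CSV proxy log with 'user' and 'destination_host' columns
    (a hypothetical format used here only for illustration).
    """
    usage = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("destination_host", "").lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                usage[(row.get("user", "unknown"), host)] += 1
    return usage

if __name__ == "__main__":
    for (user, host), count in discover_ai_usage("proxy_log.csv").most_common():
        print(f"{user} -> {host}: {count} requests")
```

Even a rough inventory like this gives an organization the starting point for the policy and training conversations described below.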

But solving for Silent AI isn’t just about technology. Organizations both public and private also need to make a much larger investment in employee training around the risks associated with AI, including both the risks of sharing confidential information and the ways that information can become an attack vector for more traditional forms of cyberattack. And while employees can be a vulnerability when they make data-sharing errors, they can also be a strength: a workforce trained in responsible, sanctioned use of AI tools helps build a culture of vigilance that limits exposure to threat actors.

Addressing Silent AI doesn’t mean giving up on AI technologies altogether. These are important, useful tools that have very quickly become part of our everyday working experience and have made organizations more efficient. Instead, IT leaders and cybersecurity professionals need to set clearer parameters for which AI tools are safe to use, and how to use them. Then, they need the technology in place to monitor compliance with those policies and minimize risks.

The cybersecurity industry is already working on solutions. In my role as Field CIO, my team’s main goal is to help IT departments sleep at night by building a holistic approach to their organizations’ cybersecurity plans. We work with organizations to create comprehensive network monitoring systems that can track whenever data leaves and who receives it. Because many of these systems are automated – using AI to track AI – they can detect or shut down sensitive data sharing almost immediately.
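As a rough illustration of the “AI to track AI” idea, the sketch below shows how an automated egress check might flag or block an outbound request when a lightweight detector finds sensitive patterns, such as Social Security numbers or student IDs, in the payload. The pattern list, allow-list, thresholds, and function names are all hypothetical; production systems rely on far more sophisticated classifiers and policy engines than a few regular expressions.

```python
import re

# Illustrative patterns for sensitive data; real systems would use trained
# classifiers and organization-specific detectors rather than a few regexes.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "student_id": re.compile(r"\bSTU-\d{6}\b"),  # hypothetical ID format
}

# Hypothetical allow-list of sanctioned, organization-approved AI endpoints.
SANCTIONED_DESTINATIONS = {"ai.internal.example.gov"}

def inspect_outbound_payload(payload: str, destination: str) -> str:
    """Return 'block', 'alert', or 'allow' for an outbound payload.

    Hypothetical policy: traffic to sanctioned destinations passes through;
    for anything else, block on two or more sensitive-pattern matches and
    alert on a single match.
    """
    if destination in SANCTIONED_DESTINATIONS:
        return "allow"
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(payload)]
    if len(hits) >= 2:
        return "block"
    if hits:
        return "alert"
    return "allow"

if __name__ == "__main__":
    sample = "Student STU-483920, SSN 123-45-6789, needs schedule sorting help"
    print(inspect_outbound_payload(sample, "chat.example-ai.com"))  # -> block
```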

AI can be a silent threat, but there are still ways to manage its risks. As CIOs and CTOs across the country grapple with the consequences of using AI, cybersecurity systems need to be a part of their organizational policies. With the proper monitoring tools in place, the public sector can become more efficient and cut costs using AI while still protecting critical data.