Your agency has acquired the latest and greatest cloud business communications platform. It allows employees to collaborate on projects, store documents in the cloud, instant message, and hold video calls – increasing employee productivity. Your agency isn’t alone. Gartner reports that almost 80 percent of workers used collaboration tools in 2021, an increase of 44 percent since the start of the pandemic.

But increased productivity comes with a heightened cybersecurity risk. Justin Wilkins, director of sales engineering, public sector, at Varonis, explains the dangers: “Greater collaboration transfers the security model from the IT administrators to the end users themselves. Team members can create their own collaboration links not only with individuals inside the organization, but with people outside it – without requiring authorization from IT security.”

How can agencies mitigate risk while still enjoying better document sharing and increased productivity? This challenge is part of a larger issue facing government agencies: identifying sensitive data and ensuring that only the right people have access to it.

The White House recognizes these challenges. The Biden administration’s May 2021 Executive Order on Improving the Nation’s Cybersecurity (EO) mandates that Federal agencies move to a zero trust architecture that controls access to data. The Office of Management and Budget’s (OMB) Memo M-22-09, which followed the EO early this year, gives agencies 120 days to develop a set of initial categorizations for sensitive electronic documents, with the goal of automatically monitoring and potentially restricting the sharing of these documents.

Most organizations struggle to identify sensitive data at scale, Wilkins observes. However, such identification is foundational to zero trust: if you cannot identify sensitive data, you cannot secure it. A Varonis report found that this is a widespread problem among civilian and military agencies. On average, agencies had 12 percent of sensitive files and 15 percent of folders exposed to every employee; the average agency also had 3,068 exposed sensitive files and 22,248 exposed folders per terabyte.

“Large government organizations hold hundreds of terabytes of information, covering millions of folders,” Wilkins says. “If you extrapolate the number of permissions, access rights, and amount of data to the scale of a large Federal agency or Department of Defense mission, you quickly see how difficult this problem is to manage.”
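
To see how those per-terabyte averages compound at the scale Wilkins describes, a back-of-the-envelope extrapolation helps (the 300-terabyte estate below is a hypothetical figure, not one from the report):

```python
# Hypothetical extrapolation using the report's per-terabyte averages.
# The 300 TB estate size is illustrative only.
EXPOSED_FILES_PER_TB = 3_068     # exposed sensitive files per terabyte (report average)
EXPOSED_FOLDERS_PER_TB = 22_248  # exposed folders per terabyte (report average)

estate_tb = 300  # hypothetical size of a large agency's data estate

print(f"Exposed sensitive files: {EXPOSED_FILES_PER_TB * estate_tb:,}")
print(f"Exposed folders:         {EXPOSED_FOLDERS_PER_TB * estate_tb:,}")
# -> Exposed sensitive files: 920,400
# -> Exposed folders:         6,674,400
```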

Global access groups, such as all users, domain users, or authenticated users, compound the problem by exposing organizations to insider threats, malware, and ransomware attacks. One click on a phishing email can set off a chain reaction that encrypts or destroys every file accessible to members of the global access group, significantly expanding the blast radius of a single breach.
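
A minimal sketch of how such exposure might be flagged – not Varonis tooling, and assuming a hypothetical folder-to-principals mapping in place of a real ACL export – could look like this in Python:

```python
# Minimal sketch: flag folders whose access control lists include a
# global access group. The acls mapping is a hypothetical stand-in for
# an ACL export from a file system or collaboration platform.
GLOBAL_GROUPS = {"Everyone", "Domain Users", "Authenticated Users"}

acls = {  # folder -> principals granted access (hypothetical data)
    r"\\fileserver\finance\payroll": {"Payroll-Admins", "Domain Users"},
    r"\\fileserver\hr\reviews": {"HR-Staff"},
    r"\\fileserver\shared\scratch": {"Everyone"},
}

for folder, principals in acls.items():
    exposed = principals & GLOBAL_GROUPS
    if exposed:
        print(f"{folder} is open to: {', '.join(sorted(exposed))}")
```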

Addressing this issue is not simple. IT professionals estimate it takes about six to eight hours per folder to locate and manually remove global access groups: identifying the users who need access, creating and applying new groups, and then populating them with the right users.
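
Scripted end to end, that workflow looks roughly like the following conceptual sketch. The audit-log format and the group-creation steps are hypothetical placeholders, not a real directory API:

```python
# Conceptual sketch of the remediation workflow: derive the set of users
# who actually used a folder, then replace the global group with a
# tightly scoped one. All data below is hypothetical.
from datetime import date, timedelta

audit_log = [  # (user, folder, date accessed) - hypothetical export
    ("adavis", r"\\fileserver\finance\payroll", date(2022, 5, 2)),
    ("bchen",  r"\\fileserver\finance\payroll", date(2022, 4, 18)),
    ("adavis", r"\\fileserver\hr\reviews",      date(2022, 5, 9)),
]

TODAY = date(2022, 6, 1)  # fixed reference date so the example is reproducible

def active_users(folder, days=90):
    """Return users who touched the folder within the lookback window."""
    cutoff = TODAY - timedelta(days=days)
    return {user for user, f, when in audit_log if f == folder and when >= cutoff}

folder = r"\\fileserver\finance\payroll"
print(f"Proposed scoped group for {folder}: {sorted(active_users(folder))}")
# -> ['adavis', 'bchen']
# Remaining steps are directory-specific, shown here only as pseudocode:
#   create_group("payroll-users", members=active_users(folder))
#   replace_ace(folder, remove="Domain Users", add="payroll-users")
```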

Agencies also need to identify and categorize their sensitive data so they can control access and usage. However, identification alone is insufficient. The data landscape is constantly changing, so organizations struggle to keep up with a flood of new data and ever-changing permissions.

Wilkins outlines a process by which agencies can eliminate global access and also identify and secure their sensitive data:

  1. The discovery process – as called out within National Security Agency zero trust guidance and the OMB memo – where information is cataloged and classified, and the agency develops criteria to identify sensitive information
  2. Data collection about user activity and automatic mapping of permissions to understand who has access to sensitive data and identify atypical behavior
  3. Identification and remediation of global access groups that grant access to sensitive data, followed by the creation of tightly managed groups and the elimination of excessive permissions that have accumulated over time
  4. Implementation of segmentation boundaries on the data, ensuring that the right users have access to the right data and preventing users from moving laterally to unauthorized resources
  5. Automation of threat detection, based upon the organization’s new understanding of normal and abnormal activity (illustrated in the sketch following this list)
  6. Creation of a sustainable, long-term least privilege security model by empowering data stewards. These stewards are responsible for regularly reviewing whether access permissions are still appropriate
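
As a simplified illustration of step 5 – with hypothetical counts, and far cruder than what a real platform would model – a baseline-and-deviation check might look like this:

```python
# Simplified sketch of baseline-based threat detection: learn a per-user
# baseline of daily file touches, then flag days that deviate sharply
# from it. The history data is hypothetical.
from statistics import mean, stdev

history = {  # user -> daily file-access counts over the baseline period
    "adavis": [34, 41, 29, 38, 36, 33, 40],
    "bchen":  [12, 9, 15, 11, 13, 10, 14],
}

def is_anomalous(user, todays_count, threshold=3.0):
    """Flag counts more than `threshold` standard deviations above the mean."""
    baseline = history[user]
    mu, sigma = mean(baseline), stdev(baseline)
    return todays_count > mu + threshold * sigma

print(is_anomalous("bchen", 13))    # False: within normal range
print(is_anomalous("bchen", 4200))  # True: possible mass access or ransomware
```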

Varonis helps agencies work through this process in a unique way, Wilkins notes, by exploring how users interact with data, correlating that information with their permissions, and identifying where access can be safely removed and tighter security controls can be implemented without affecting agencies’ missions.
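
Conceptually, that correlation amounts to comparing the access users hold against the access they actually exercise; anything granted but never used is a candidate for safe removal. A minimal sketch, with hypothetical data:

```python
# Minimal sketch of the granted-vs-used comparison: principals that hold
# access to a data set but never exercised it are candidates for safe
# removal. Both sets below are hypothetical.
granted = {"adavis", "bchen", "cjones", "dlee", "Domain Users"}
used    = {"adavis", "bchen"}   # observed in activity telemetry

removal_candidates = granted - used
print(f"Access that can likely be removed safely: {sorted(removal_candidates)}")
# -> ['Domain Users', 'cjones', 'dlee']
```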

The result of this process is that threat actors must work much harder to reach sensitive content, and agencies move several steps closer to achieving zero trust. “This is a fundamental shift in our architecture model,” Wilkins acknowledges. “But the agencies that undergo this process to identify sensitive data and fully commit to the zero trust model are the ones that are most successful in identifying threats and preventing cyberattacks.”
