From the SolarWinds and MOVEit supply chain attacks to a wave of open-source package compromises, federal systems integrators (FSIs) have had to defend against threats that start far upstream in vendor software but ultimately land on their own networks and services.

In a recent MeriTalk webinar, experts warned that supply chain attacks are increasing in volume and sophistication, as nation-state actors weaponize artificial intelligence (AI) to move faster across the software supply chain. FSIs need to rethink how they apply zero trust and data security to keep pace, they advised.

“[Adversaries] are definitely using AI to attack much more frequently and more successfully,” said Josh Salmanson, vice president and Defensive Cyber Practice lead at systems integrator Leidos.

In the on-demand session, “Upstream Attacks, Downstream Risk: How the Federal Community Is Hardening the Supply Chain,” Salmanson and Travis Rosiek, public sector chief technology officer at Rubrik, outlined how FSIs can respond when build pipelines, updates, or third-party providers are compromised. Their core message for systems integrators: assume breach, extend zero trust down to the data layer, and treat immutable, validated recovery as a strategic control – not an afterthought.

AI and DevOps are supercharging supply chain exposure

The attack landscape facing FSIs has shifted on two fronts, Salmanson said: AI-enabled attackers at the edge, and increasingly complex software supply chains inside the enterprise.

On the attacker side, he described a blend of new and old tactics: AI-assisted probing and targeting layered on top of patient, traditional tradecraft where adversaries study processes and wait for defenders to relax controls before striking.

From a supply chain perspective, “we have huge issues” as developers, tools, and services proliferate across environments, Salmanson said. Developers working across many generations of tools and platforms have “created a really hard-to-secure web of shadow IT,” he noted, along with insecure DevOps practices born of the rush to ship capabilities faster.

Those pressures surface in the federal ecosystem, where FSIs may integrate commercial software as a service, on-premises products, custom code, and open-source components into mission systems, often across multiple primes and subcontractors. The result, Salmanson said, is that “it’s not hard to compromise the software delivery chain, regardless of if it’s commercial codebases or custom-developed software.”

SBOMs help, but vendor risk and visibility still lag

Secure software supply chain initiatives and requirements for software bills of materials (SBOMs) are intended to give FSIs and their customers more insight into what’s inside the stack. In practice, Salmanson noted, visibility is uneven.

Typical corporate risk management teams have decades of experience looking at companies, their histories, and financial relationships, but doing the same for deeply nested software dependencies is a different challenge, he observed. From a software perspective, “even though we’re supposed to get detailed SBOMs, we don’t always get them from our partners,” he said.
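
To make that gap concrete, here is a minimal sketch of the kind of completeness check an integrator might run against incoming SBOMs. It assumes a CycloneDX-style JSON file with a top-level “components” list; the file name and field choices are illustrative, not a description of Leidos’s tooling:

```python
import json
import sys

# Minimal SBOM completeness check (illustrative only).
# Assumes a CycloneDX-style JSON SBOM whose "components" entries
# may carry "name", "version", "supplier", and "purl" fields.

def audit_sbom(path: str) -> None:
    with open(path, encoding="utf-8") as f:
        sbom = json.load(f)

    components = sbom.get("components", [])
    if not components:
        print(f"{path}: no components listed -- SBOM is effectively empty")
        return

    for comp in components:
        name = comp.get("name", "<unnamed>")
        gaps = []
        if not comp.get("version"):
            gaps.append("missing version")
        if not comp.get("supplier"):
            gaps.append("missing supplier")
        if not comp.get("purl"):
            gaps.append("missing package URL (purl)")
        if gaps:
            print(f"{name}: {', '.join(gaps)}")

if __name__ == "__main__":
    audit_sbom(sys.argv[1])  # e.g., python audit_sbom.py vendor-sbom.json
```

Even a simple pass like this surfaces how often nested dependencies arrive without version or supplier data, which is exactly the uneven visibility Salmanson described.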

That means FSIs must assume gaps in their upstream view and layer in their own telemetry and controls. At Leidos, Salmanson said, that includes:

  • Continuously testing security, rather than treating it as a periodic exercise
  • Using detection engineering as a full-time function in the security operations center to keep up with changing attack techniques
  • Focusing on a central performance indicator: mean time to know, which measures how long a compromise has been in place before it’s discovered

“Our biggest metric,” he explained, is understanding when an attack started and how long it took to detect it – a focus that reflects the reality that sophisticated supply chain compromises may be in progress well before they generate obvious alerts.
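
As a rough illustration of that metric, the sketch below computes mean time to know from paired compromise and detection timestamps. The incident data is invented for the example and is not Leidos telemetry:

```python
from datetime import datetime, timedelta

# Mean time to know (MTTK): average elapsed time between the start of a
# compromise and its detection. All timestamps here are hypothetical.
incidents = [
    ("2024-01-03T08:00", "2024-01-19T14:30"),  # (compromised, detected)
    ("2024-02-11T22:15", "2024-02-12T09:45"),
    ("2024-03-05T04:00", "2024-04-01T16:00"),
]

dwell_times = [
    datetime.fromisoformat(detected) - datetime.fromisoformat(compromised)
    for compromised, detected in incidents
]

mttk = sum(dwell_times, timedelta()) / len(dwell_times)
print(f"Mean time to know: {mttk}")
```

Tracking the trend in that number over time shows whether detection engineering investments are actually shrinking the window in which a supply chain compromise can operate unnoticed.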

Zero trust must extend to the data layer

FSIs can’t prevent every upstream compromise, Rosiek emphasized, but they can dramatically limit downstream damage by extending zero trust principles beyond users and networks to the data and backup layer.

In a zero trust-aligned data strategy, Rosiek said, organizations must “assume the adversary’s already in there” and remove implicit trust. That applies not just to production systems, but also to the backups and recovery processes that organizations historically trust by default, he said.

He pointed to the need for:

  • Immutability and survivability of backups, so attackers cannot alter or encrypt historical copies
  • Sensitive data awareness, so FSIs understand which datasets and tenants carry the most risk if they are exposed or corrupted
  • Pre-validated, “known-good” restore points, ideally tested in isolated environments before production recovery
  • Tight control of privileged access, to reduce the opportunity for credential abuse

The goal, Rosiek said, is to treat modern data security controls as active zero trust enforcement points. For FSIs, that means resilience is not only about meeting contractual recovery time and recovery point objectives, but also about proving integrity to agency customers after an upstream incident.
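
One way to picture a pre-validated, “known-good” restore point is a checksum manifest verified in an isolated environment before recovery is allowed to proceed. The sketch below is a generic illustration of that idea, not a description of Rubrik’s product; the file layout and manifest format are hypothetical:

```python
import hashlib
import json
from pathlib import Path

# Verify a candidate restore point against a previously recorded
# "known-good" manifest of SHA-256 checksums before promoting it.

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def validate_restore_point(restore_dir: Path, manifest_path: Path) -> bool:
    # Manifest maps relative file paths to expected hex digests.
    manifest = json.loads(manifest_path.read_text())
    clean = True
    for rel_path, expected in manifest.items():
        candidate = restore_dir / rel_path
        if not candidate.exists():
            print(f"MISSING  {rel_path}")
            clean = False
        elif sha256_of(candidate) != expected:
            print(f"ALTERED  {rel_path}")  # possible tampering or encryption
            clean = False
    return clean

if __name__ == "__main__":
    ok = validate_restore_point(Path("/mnt/isolated/restore"), Path("manifest.json"))
    print("Restore point is known-good" if ok else "Do NOT promote this restore point")
```

Running a check like this in an isolated environment, against backups an attacker cannot alter, is what turns recovery from an act of faith into the kind of provable integrity Rosiek described.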

FSIs can reduce downstream risk when upstream risk is out of reach

For FSIs, the panelists agreed, the hard reality is that no amount of diligence can fully eliminate upstream risk, whether from a compromised software supplier, a poisoned dependency, or a managed service provider breach.

What FSIs can control is how quickly they understand their exposure, how precisely they can map tenant and data-level impact, and how confidently they can restore known-good services under pressure. That requires AI-aware threat modeling, pragmatic SBOM and vendor risk practices, and zero trust extended through the data layer all the way to immutable, validated recovery.

To hear more insight from Rosiek and Salmanson, including how Leidos continually tunes its detection engineering and how Rubrik approaches data-layer zero trust, watch the full webinar on demand.
