
The Clock is TICing: Accelerating Innovation With Cloud Security Modernization

As remote work shifts to hybrid work for the long term, Federal agencies need continued (and even stronger) cloud security.

I recently moderated a panel of leading Federal cyber experts from the Department of Veterans Affairs (VA), General Services Administration (GSA), and Department of State to discuss how Trusted Internet Connection 3.0 is helping agencies accelerate cloud modernization. The updated policy is allowing agencies to move from traditional remote virtual private network solutions to a scalable network infrastructure that supports modern technology and enables digital transformation.

TIC 3.0 Driving Modern Security and Innovation

“TIC 3.0 removes barriers for the adoption of new and emerging technologies, and it is a key enabler for IT modernization and digital transformation,” said Royce Allen, Director of Enterprise Security Architecture at VA.

Traditional networks often do not support the technologies needed for today’s modern cloud and hybrid IT environment. Agencies have had to make drastic shifts in technology to connect their data centers and cloud providers, increase bandwidth, improve security, and more to drive innovation.

For example, by following the TIC 3.0 guidance, the VA has been able to expand the number of users it can support on the network at one time to enable more productivity, and open the door to innovative data sharing solutions.

Hospital systems that previously supported 150 to 200 simultaneous users are now supporting up to 500,000 with a zero trust architecture and cloud-based desktop application. The zero trust architecture helped the VA transition from a network-centric environment to an application-centric environment. In this use case, microsegmentation allowed VA to utilize any network, anywhere, including the internet, to meet the TIC 3.0 guidelines and provide massive on-demand scalability to meet pandemic demands.

The Department of State piloted TIC 3.0 use cases to improve application performance and user experience, especially as employees share data and connect with embassies overseas.

State was managing employees in more locations, using a greater variety of devices than ever before – and thus increasing cyber risks. Protections included backhauling all data internationally through domestic MTIPS/TICs. This slowed down application performance and negatively impacted the user experience, especially on SaaS applications. For example, O365 became virtually unusable due to this hairpinning. TIC 3.0 enabled the agency to pilot a solution that allowed for local internet breakouts across the country, increasing network mobility, while still meeting the rigor of FedRAMP authorization and TIC 3.0 guidelines.

The agency now has full visibility of their servers, can securely direct traffic straight to the cloud, and can allow for more data mobility across embassies around the world, while still storing all sensitive data – such as public key infrastructure and telemetry data – in a U.S.-based FedRAMP cloud.

Gerald Caron, Director of Enterprise Network Management, Department of State, noted that TIC 3.0 enabled the agency to focus on risk tolerance. “TIC 3.0 is definitely an enabler to modernization…while still leveraging or maintaining secure data protection,” said Caron.

Pushing for Continued Modernization and Aligning Solutions to TIC 3.0 Guidance

We need to continue working together to modernize security as the remote work environment and threat landscape evolve. The next step for TIC 3.0 is to provide additional baseline implementation guidance to agencies, including more information on hybrid cloud guidance, examples of risk profiles and risk tolerance, and the latest use cases.

An important aspect of TIC 3.0 is alignment with other contracts and guidance, including GSA’s Enterprise Infrastructure Solutions (EIS). The EIS contract is a comprehensive, solution-based vehicle that addresses all aspects of Federal agency IT, telecommunications, and infrastructure requirements. As the government’s primary vehicle for services including high-speed internet, government hosting services, and security encryption protocols, it’s critically important that the TIC 3.0 guidance provides the foundation for secure connections across solutions.

GSA recently released draft modifications to add the TIC 3.0 service as a sub security service to EIS. Allen Hill, Acting Deputy Assistant Commissioner for Category Management, Office of Information Technology Category (ITC), Federal Acquisition Service, GSA, said he hopes this collaboration will help agencies mature their zero trust architectures.

“Having the TIC 3.0 guidance allowed us to aggressively push the envelope,” said the VA’s Allen.

The Cybersecurity and Infrastructure Security Agency’s efforts over this past year, as well as TIC’s alignment with EIS, are great examples of what we can accomplish through innovation and strong collaboration. The team demonstrated real leadership, quickly putting the TIC 3.0 Interim Telework Guidance in place to support agencies as they scaled up the remote workforce. This progress is a permanent, positive shift for the Federal government – supporting the move to modernize remote access and enable secure cloud services. We’re still learning – but we’ve taken a giant leap forward.

My Cup of IT – GovTech4Biden

Like many of you, I have read the news every day for the last four years. Every day was like a visit to the proctologist – anger, fear, frustration. And, yes, the A word – anxiety.

So, I decided to put up or shut up – and I founded www.govtech4biden.com in June. I discovered that many of you felt the same way – 150-plus in fact. We embarked on a curious, scary, and fulfilling odyssey. We raised more than $100,000 for the Biden-Harris campaign.

On this journey, we hosted the leading Democratic Congressmen and Senators focused on tech. Fittingly, Congressman Gerry Connolly kicked us off – and leading lights on tech and our economy gave us the momentum to raise over $100,000 for the Biden campaign. Congressman Ro Khanna, Congresswoman Mikie Sherrill, Senator Jacky Rosen, and the New Democrat Coalition followed – and Senators Maggie Hassan, Sheldon Whitehouse, and Chris Van Hollen closed us out.

If you’d like to hear more about GovTech4Biden – our political and tech odyssey – and thoughts on the tech agenda for the future, please join us for a webinar on Tuesday, November 24th from 1:00-2:00 p.m. ET / 10:00-11:00 a.m. PT.

I’d like to salute the brave folks who banded together to support the Biden-Harris campaign – and provide a voice for the government technology community in the new administration. That took courage – here’s the tribute movie. We look forward to working with the new administration to champion innovation in government and across America.

To those who sent in unkind emails – I’m trying to understand you. I’m also happy to have you resubscribe to MeriTalk – just shoot me an email.

We look forward to the opportunity to build back better together – and new tech for government is critical to that success.

Open Source Gives Agencies Long-Term Cloud Flexibility That Powers Cloud-based Telework

After a decade-long initiative to expand telework, the COVID-19 pandemic has shifted the federal government’s workforce to cloud-based telework, practically overnight. While improving workforce flexibility seems like the obvious benefit, federal agencies can also take this opportunity to leverage the extensive ecosystem of open source partners and applications to boost their multicloud modernization efforts.

Agencies that work with the global open source development community are able to accelerate service delivery and overcome many of the common barriers to cloud modernization.

“Within the open source community, there remains a strong focus in helping enterprises adapt to cloud computing and improve mission delivery, productivity and security,” says Christine Cox, Regional Vice President Federal Sales for SUSE. Developing applications with open source tools can also help federal agencies future-proof digital services by avoiding vendor lock-in, enhancing their enterprise security and supporting their high-performance computing requirements.

Why open source is important to federal agencies as they continue to telework

Agencies are working to solve unique and complex orchestration challenges to run applications and sensitive data across multiple cloud environments. They need to be able to respond quickly, with agility, and at scale. Open source solutions allow governments to design customized, secure environments as the interoperability of agency IT systems – and the need to share information in real time across multicloud environments – becomes more critical.

“Open source technologies like Kubernetes and cloud native technologies enable a broad array of applications because they serve as a reliable connecting mechanism for a myriad of open source innovations — from supporting various types of infrastructures and adding AI/ML capabilities, to making developers’ lives simpler and business applications more streamlined,” said Cox.

Ultimately, open source projects will help lower costs and improve efficiencies by replacing legacy solutions that are increasingly costly to maintain. Up-to-date open source solutions also create a more positive outcome for end users at all agencies – whether warfighters or taxpayers.

How open source helps cloud migration in a remote environment

Archaic procurement practices based on vendor lock-in don’t allow for effective modernization projects, which is why implementing open source code can help agencies adapt tools to their current needs.

“One of the great benefits about SUSE, and open source, is that we offer expanded support, so that regardless of what you’re currently running in your environment, we can be vendor-agnostic,” Cox says.

In order to take greater advantage of open source enterprise solutions, agency leaders should practice a phased approach to projects, with the help of trusted partners who can guide them in their cloud computing efforts. This allows leaders to migrate to hybrid-cloud or multicloud environments in manageable chunks and in a way that eliminates wasteful spending.

Learn more at SUSEgov.com

Congress Should Evolve – Not Eliminate – the FITARA MEGABYTE Category

Following the release of the FITARA Scorecard 10.0 in August, discussion about sunsetting the MEGABYTE category of the scorecard has picked up. But, is that really a good idea?

The MEGABYTE category measures agencies’ success in maintaining a regularly updated inventory of software licenses and analyzing software usage. With most agencies scoring an “A” in that category, the sense seems to be that MEGABYTE’s mission has been accomplished, and it can now rest easy in retirement.

However, just because a goal has been achieved does not mean the method used to achieve the goal should be discarded. A student who passes Algebra I doesn’t declare victory over math for the rest of her academic career; she moves on to Algebra II.

The same principle should apply to the MEGABYTE category. Instead of getting rid of it, Congress should consider building on it to fit the current market dynamics – which are a lot different than they were in 2016, when the MEGABYTE Act became law.

A Changing MEGABYTE for Changing Times

Back then, cloud computing wasn’t quite as ubiquitous as it is today. Agencies were still buying specific licenses for specific needs, owning software, and getting their occasional updates.

As software procurement evolves and changes in the cloud environment, so too will the methods required to accurately track applications and usage – a challenge which could actually make MEGABYTE’s call for accountability more important than ever.

In some cases, agencies may not even know what they’re paying for. As such, they could end up paying more than necessary. Reading a monthly cloud services bill can be the equivalent of scanning a 30-page phone bill, with line after line of details that can be overwhelming. Many time-starved managers might be inclined to simply look at the amount due and hit pay without considering that they may be paying for services their colleagues no longer need or use.
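
As a simple illustration of how that kind of accountability could be automated, the sketch below scans a hypothetical cloud bill export and flags line items with little or no recent usage. The record layout, field names, and figures are assumptions for illustration only, not any agency’s real billing data.

```python
# Illustrative sketch of scanning a cloud bill export for services that are being
# paid for but not used. The record layout is an assumption; real bills differ by provider.
line_items = [
    {"service": "virtual-desktops",  "monthly_cost": 1200.00, "active_users_30d": 85},
    {"service": "legacy-file-share", "monthly_cost": 430.00,  "active_users_30d": 0},
    {"service": "sandbox-cluster",   "monthly_cost": 975.00,  "active_users_30d": 2},
]

def flag_candidates(items, min_users=1):
    """Return line items with little or no recent usage, sorted by wasted spend."""
    idle = [i for i in items if i["active_users_30d"] < min_users]
    return sorted(idle, key=lambda i: i["monthly_cost"], reverse=True)

for item in flag_candidates(line_items):
    print(f'{item["service"]}: ${item["monthly_cost"]:.2f}/month with no active users')
```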

There’s also the prospect of shadow IT, which appears to have been exacerbated by the sudden growth of the remote workforce. Employees could simply be pulling out their credit cards and ordering their own cloud services – not for malicious purposes, but just to make their jobs easier and improve productivity. In the process, agency employees might sign up for non-FedRAMP certified cloud services or blindly agree to terms and conditions that their agency procurement colleagues would not agree to. These actions can open agencies to risk, and must be governed.

A new MEGABYTE for a new era could be a way to measure accountability and success in dealing with these challenges. Agencies, for instance, could be graded on their effective use of cloud services. The insights gained could lead to more efficient use of those services including the potential to cancel services that are no longer needed. Finally, they could be evaluated based on how well they’re able to illuminate the shadow IT that exists within their organizations for a more accurate overview and governance of applications.

Not Yet Time for MEGABYTE to Say Bye

Just because the MEGABYTE category has turned into an “easy A” for most agencies does not mean that it’s time to eliminate it from the FITARA scorecard. Yes, let’s revisit it, but let’s not let it go just yet. Instead, let’s take it to a new level, commensurate with where agencies stand today with their software procurement.

Reimagining Cybersecurity in Government Through Zero Trust

As the seriousness of the coronavirus pandemic became apparent early this year, the first order of business for the Federal government was simply getting employees online and ensuring they could carry on with their critical work and missions. This is a unique challenge in the government space due to the sheer size of the Federal workforce and the amount of sensitive data those workers require – everything from personally identifiable information to sensitive national security information. And yet, the Department of Defense, for one, was able to spin up secure collaboration capabilities quite quickly thanks to the cloud, while the National Security Agency recently expanded telework for unclassified work.

Connectivity is the starting line for the Federal government, though – not the finish line. Agencies must continue to evolve from a cybersecurity perspective in order to meet new demands created by the pandemic. Even before the pandemic, the Cyberspace Solarium Commission noted the need to “reshape the cyber ecosystem” with a greater emphasis on security. That need has been further cemented by telework. A worker’s laptop may be secure, but it’s likely linked to a personal printer that’s not. Agencies should assume there is zero security on any home network.

Building a New Cyber World

In the midst of the pandemic, MeriTalk surveyed 150 federal IT managers to understand what cyber progress means and how to achieve it. The need for change was clear; only 11 percent of respondents described their current cybersecurity system as ideal. What do Federal IT pros wish was different? The majority of respondents said they would start with a zero trust model, which requires every user to be authenticated before gaining access to applications and data. Indeed, zero trust has, to a large degree, enabled the shift we are currently seeing. But not all zero trust is created equal.

Federal IT pros need to be asking questions like: How do you do microsegmentation in sensitive environments? How do you authenticate access in on-premises and cloud environments in a seamless way? In the government space especially, there is a lot of controlled information that’s unclassified. As such, it’s not sufficient to just verify users at the door before you let them in. Instead, agencies must reauthenticate on an ongoing basis – without causing enormous friction. A zero trust model is only as good as its credentialing capabilities, and ongoing credentialing that doesn’t significantly disrupt workflow requires behavioral analytics.

Agencies must be adept at identifying risk in order for zero trust to be both robust and frictionless. In this new era, they should be evaluating users based on access and actions. This means understanding precisely what normal, safe behavior looks like so they can act in real-time when users deviate from those regular patterns of behavior. Having such granular visibility and control will allow agencies to dynamically adjust and enforce policy based on individual users as opposed to taking a one-size-fits-all approach that hurts workers’ ability to do their jobs.
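
As a rough illustration of that idea, the sketch below scores a session’s risk as a user’s actions deviate from their own baseline and steps up authentication rather than blocking everyone equally. The names, weights, and thresholds are illustrative assumptions, not any agency’s actual policy or product.

```python
# Minimal sketch of risk-adaptive re-authentication for a zero trust session.
# All names (UserSession, baseline fields, thresholds) are illustrative.
from dataclasses import dataclass

@dataclass
class UserSession:
    user_id: str
    typical_resources: set      # resources this user normally touches
    typical_hours: range        # e.g. range(7, 19) for a 7 a.m.-7 p.m. pattern
    risk_score: float = 0.0

def update_risk(session: UserSession, resource: str, hour: int) -> float:
    """Raise the session risk score when access deviates from the baseline."""
    if resource not in session.typical_resources:
        session.risk_score += 0.4      # unfamiliar application or data set
    if hour not in session.typical_hours:
        session.risk_score += 0.2      # access outside normal working pattern
    return session.risk_score

def enforce(session: UserSession) -> str:
    """Map accumulated risk to an action instead of a one-size-fits-all block."""
    if session.risk_score >= 0.8:
        return "terminate-session"
    if session.risk_score >= 0.4:
        return "step-up-authentication"   # e.g. prompt for MFA again
    return "allow"

# Example: touching an unfamiliar system at 2 a.m. triggers step-up authentication.
s = UserSession("jdoe", {"hr-portal", "email"}, range(7, 19))
update_risk(s, "finance-db", hour=2)
print(enforce(s))   # -> step-up-authentication
```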

The Role of the Private Sector

The current shift in the Federal workforce may seem daunting to some, but it represents a huge opportunity for the government and private sector alike. The Cyberspace Solarium Commission highlighted the importance of public-private partnerships – partnerships that can help make modernized, dynamic zero trust solutions the new normal if they can overcome the unique scaling challenge that Federal IT presents. The government must not just embrace commercial providers, but work closely with them to enable such scale, as it could help the government continue to reimagine its workplace.

Shifting to a zero trust model means improved flexibility and continuity, which can help expand the talent pool that agencies attract. Government jobs were previously limited to one location, with no option for remote work. Thus, agencies lost out on great talent that was simply in the wrong part of the country. Now, some agencies are claiming they don’t need DC headquarters at all.

Additionally, more flexible work schedules may also boost employees’ productivity. A two-year Stanford study, for one, showed a productivity boost for work-from-home employees that was equal to a full day’s work. In recent months, the government has seen firsthand that flexible and secure remote work can happen through the novel application of existing technologies – including zero trust architecture.

The Bottom Line

Agencies must evolve cybersecurity in a way that allows them to embrace remote work without being vulnerable to attack. It’s not enough to get Federal employees online; users and data must be secure as well. The mass shift to telework represents a huge opportunity for the public sector – which is growing both its remote work capabilities and its potential pool for recruitment – and for those in the private sector who can be responsive to this need.

The majority of Federal IT leaders would implement a zero trust model if they could start from scratch. But once again, zero trust is only as good as your credentialing technology and your ability to understand how users interact with data across your systems. The key to seamless and secure connectivity is behavioral analytics, which allows for ongoing authentication that doesn’t hinder users’ ability to do their jobs or leave sensitive information vulnerable.

Driving IT Innovation and Connection With 5G and Edge Computing

The COVID-19 pandemic has influenced the way agencies function, and forced many to redefine what it means to be connected and modernize for mission success.

Agencies have reprioritized automation, artificial intelligence (AI), and virtualization to continue delivering critical services and meeting mission requirements through recent disruptions, and to predict and navigate future disruptions more efficiently. These transformative technologies open the door to accelerated innovation and have the potential to help solve some of today’s most complex problems.

Still, there is work to be done. While nearly half of Federal agencies have experimented with AI, only 12 percent of AI in use is highly sophisticated.[1] Agencies must rely on a solid digital transformation strategy to leverage next-gen technology, including the fifth generation of wireless technology (5G) and edge computing, to drive these innovations in Federal IT – regardless of location or crisis.

Faster Connections, Better Outcomes

Building IT resiliency and a culture of innovation across the public sector requires greater connectivity and data accessibility to power emerging technologies that enable faster service and better-informed decisions. In a traditional 4G environment, users connect to the internet through a device at a given time. In contrast, 5G integrates devices into the environment, allowing them to connect and stay connected at all times.

This constant connectivity enables agencies to generate data in real-time – not just when they sync with the cloud. Imagine some of the real-life applications of this capability. Healthcare providers would have instant, continuous health data to use in patient care. Soldiers on the battlefield would have constant connectivity for more accurate intel and defense strategies. These insights not only drive efficiency and security, but they save on time and resources.

Dell Technologies’ John Roese recently shared the importance of the U.S. driving these innovations – and the positive implications for the Federal space. “By doing so, we can increase market competitiveness, prevent vendor lock-in, and lower costs at a time when governments globally need to prioritize spending. More importantly, we can set the stage for the next wave of wireless,” he explained.

As an open technology, 5G infrastructure is a high-speed, interconnected mesh provided by multiple vendors at the same time. This prevents challenges presented to agencies by vendor lock-in, and reduces costs associated with creating and maintaining individual access points.

However, with perpetual connectivity, devices require a connection point with low latency. As 5G technology progresses, edge computing becomes a powerful necessity. Gartner reported that by 2022, more than 50 percent of enterprise-generated data will be created and processed outside the data center or cloud.

Dell Technologies’ edge capabilities enable agencies to get the data they need and avoid data siloes by applying logic at the edge – immediately. Dell Technologies has also started to specialize in providing 5G-compatible devices built with edge computing in mind.

These capabilities allow data to be processed in real-time, closer to the source. Devices can intelligently communicate anomalies and changes back to the core data center, enabling a better, more capable edge.

As time progresses, the edge will become smarter in making decisions and reducing the amount of data that needs to be transferred back to the core, while also ensuring the core is updated more frequently to support AI and machine learning.
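
A minimal sketch of that edge pattern, assuming a simple numeric sensor feed and a stand-in uplink function: readings are processed locally, and only the anomalies plus a compact summary travel back to the core.

```python
# Conceptual sketch of edge-side filtering: process sensor readings locally and
# forward only anomalies and a periodic summary to the core data center.
# The threshold, data format, and send_to_core() stub are illustrative assumptions.
from statistics import mean

def send_to_core(payload: dict) -> None:
    print("-> core:", payload)          # stand-in for a real uplink

def process_at_edge(readings: list[float], threshold: float = 5.0) -> None:
    baseline = mean(readings)
    anomalies = [r for r in readings if abs(r - baseline) > threshold]
    # Ship only what the core needs: the exceptions plus a compact summary.
    send_to_core({"summary": {"count": len(readings), "mean": round(baseline, 2)},
                  "anomalies": anomalies})

process_at_edge([20.1, 20.3, 19.9, 35.7, 20.2])   # only 35.7 is forwarded in full
```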

New Challenges Require New Strategies

As the technology landscape changes yet again, agencies face the challenge of investing in new technology – one that has to be built from the ground up. However, as next-gen technologies continue to develop, government has no choice but to keep up.

Whether providing critical services to the public or creating strategies for the battlefield, agencies need access to the best tools and most accurate insights to drive mission success.

Agencies should leverage support from industry partners like Dell Technologies to get the support they need to accelerate their efforts, drive efficiencies, and innovate. As Roese noted, “when the technology industry of the United States is fully present in a technical ecosystem, amazing innovation happens, and true progress occurs.”

At the end of the day, these efforts lead to better, stronger outcomes for all.

Learn more about how Awdata and Dell Technologies are driving Federal innovation and connection with next-gen tech.

[1] Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies, 2020

My Cup of IT – Vote for America, First

I’m a foreigner who’s proud of my heritage – and I’m an American patriot ready to stand up for this great country’s principles.

Feeling shipwrecked by politics and pandemic? This has been a year where too many of us have felt separated – now’s the time to come together and celebrate our American democracy. Whether you’re a Republican or a Democrat, your voice must be heard – and your ballot counted. Vote is not a four-letter word. Do it early in person, early by mail, or day of (with protection) – but please do it.

I came to this country for opportunity – and I stayed based on the welcome and the sunshine. Please vote in the election, and respect the votes of others when they’re tallied.

Sometimes it takes a community to get to the truth – a word from somebody you know and trust. That’s why I’m speaking up now.

I know you’re a patriot – America’s depending on you. Let’s put America first – and vote.

Evolving the Remote Work Environment With Cloud-Ready Infrastructure

When agencies began full-scale telework earlier this year, many were not anticipating it would evolve into a long-term workplace strategy. However, in the wake of COVID-19, recent calculations estimate $11 billion in Federal cost-savings per year due to telework, as well as a 15 percent increase in productivity. Agencies are determining how they can continue to modernize – and therefore optimize – to support greater efficiency into the future.

Though many agencies began implementing cloud and upgrading infrastructures well before the pandemic, legacy technology presents a unique challenge in the remote landscape. IT teams and employees who were directly connected to a data center now need remote access to infrastructure, while keeping security a top priority.

How can agencies ensure they have secure and specific connections that serve their needs and also optimize performance? They must adapt, shifting access to where it is needed and augmenting existing technology with solutions that allow flexibility, agility, and the additional security needed within a distributed environment.

A New Approach for the New Normal

To address issues with remote access, many agencies have turned to software-defined wide area networking (SD-WAN), as it provides a secure connection between remote users and the data center or cloud. However, long-term success with telework will require more than access. It will require teams to change the way they use the technology they have.

Recently, I spoke on a MeriTalk tech briefing where I discussed how agencies can leverage cloud-ready infrastructures to accelerate modernization with operational cost-savings and increased efficiency. Dell Technologies VxRail with VMware Cloud Foundation is perfectly suited for the distributed workforce, allowing teams of any size, and in any stage of their modernization journey, to build what they need when they need it.

Remote employees don’t have the same access to the on-prem data center’s compute resources as they had when working on-site. VxRail acts as a modern data center, enabling virtual desktops, compute, and storage in one appliance while providing users with secure access and network flexibility.

Teams can design a VxRail component for as many users as needed and then scale by units. With this flexibility, agencies don’t require as much local infrastructure to function optimally and can scale their services faster and more affordably with one-click upgrades and maintenance.

Teams can also bring the local data center and cloud into one management portfolio – whether a multi-cloud or hybrid environment – integrating all of these capabilities into a single platform that is easy to consume.

These technologies offer cybersecurity advantages as well. The VMware Cloud Foundation can utilize VMware’s NSX, a virtualization and security platform. NSX enables teams to create granular micro-segmentation policies between applications, services, and workloads across multi-cloud environments. Agencies can control not only how many users are in their environment and what resources they are allowed to access, but also where and how users connect to those resources.
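
To make the micro-segmentation concept concrete, here is a conceptual, default-deny policy check expressed as plain data and code. This is not NSX configuration or any vendor’s syntax; the tier names, ports, and rules are illustrative assumptions showing how explicit allow rules block lateral movement.

```python
# Conceptual micro-segmentation policy check, expressed as data rather than any
# vendor's actual syntax. Default-deny: traffic passes only if an explicit rule matches.
RULES = [
    {"src": "web-tier", "dst": "app-tier", "port": 8443, "action": "allow"},
    {"src": "app-tier", "dst": "db-tier",  "port": 5432, "action": "allow"},
    # No rule from web-tier straight to db-tier, so that path is denied.
]

def evaluate(src: str, dst: str, port: int) -> str:
    for rule in RULES:
        if (rule["src"], rule["dst"], rule["port"]) == (src, dst, port):
            return rule["action"]
    return "deny"

print(evaluate("web-tier", "app-tier", 8443))  # allow
print(evaluate("web-tier", "db-tier", 5432))   # deny (lateral movement blocked)
```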

Create a Culture of Collaboration

The switch to Federal telework has caused agencies to take a closer look at how they can continue to modernize and optimize IT for mission success – no matter where their employees are located.

Beth Cappello, Deputy Chief Information Officer for the Department of Homeland Security, recently noted, “as we go forward … we’ll look back at the fundamentals: people, processes, technologies, and examine what our workforce needs to be successful in this posture.”

Whether using new technologies or augmenting existing technologies, success will come down to collaboration. Agencies should look to collaborate early and often, and bring in developers and key team members to leverage their knowledge and drive efficiency and agility from the start.

This cultural change will allow government to become more flexible and agile in its approach to modernization – exactly what it needs to take Federal IT to the next level.

Learn more about how Awdata and Dell Technologies are helping improve Federal telework with cloud-ready solutions.

Shift to Telework: Enabling Secure Cloud Adoption for Long-Term Resiliency

Over the past few months, agencies have strengthened remote work tools, increased capacity, improved performance, and upgraded security to enable continuity of operations as employees work from home and in various new locations.

However, as networks become more distributed across data centers, cloud, and remote connections, the attack surface increases, opening up the network to potential cybersecurity threats. Agencies have been forced to balance operations and security as they shift how users connect to government networks while remote.

The Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (DHS CISA) has played a key role in providing telework guidance through updates to the Trusted Internet Connections (TIC) 3.0 guidance. This was an important step to provide more immediate telework guidance, open the door for modern, hybrid cloud environments, and give agencies greater flexibility.

In a recent webinar, I had the opportunity to speak with Beth Cappello, Deputy CIO, DHS, about IT lessons learned from the pandemic and the future of modern security with TIC 3.0 and zero trust.

TIC 3.0 and the Cloud Push

“When you think about TIC 3.0 and you think about the flexibility that it introduces into your environment, that’s the mindset that we have to take going forward,” said Cappello. “No longer can it be a traditional point-to-point brick and mortar fixed infrastructure approach.”

TIC 3.0 has enabled agencies to take advantage of much-needed solutions, such as cloud-based, secure web gateways and zero trust architecture to support secure remote work.

Prior to the pandemic, DHS had begun adopting cloud – moving email to the cloud and allowing for more collaboration tools and data sharing – enabling the agency to transition from about 10,000 to 70,000 remote workers almost overnight. Many other agencies have similar stories – moving away from legacy remote access solutions to cloud and multi-cloud environments that offer more scalability, agility, and security.

IT administrators must be able to recognize where threats are coming from, and diagnose and fix them through “zero-day/zero-minute security.” To do this, they must turn to the cloud. Cloud service providers that operate multi-tenant clouds can offer agencies an important benefit – the cloud effect – which allows providers to globally push hundreds or thousands of patches a day with security updates and protections to every cloud customer and user. Each day, the Zscaler cloud detects 100 million threats and delivers more than 120,000 unique security updates to the cloud.

Secure Connections From Anywhere 

When the pandemic hit, agencies needed to find a way to connect users to applications, security-as-a-service providers, O365, and the internet without having to backhaul traffic into agency data centers and legacy TICs – a practice that often results in latency and a poor user experience. Agencies also required better visibility to identify who is connecting, see what they are connecting to, and send that telemetry data back to DHS.

Rather than focusing on a physical network perimeter (that no longer exists), the now finalized TIC 3.0 guidance recommends considering each zone within an agency environment to ensure baseline security across dispersed networks.

As telework continues, many agencies are evolving security by adopting zero trust models to connect users without ever placing them on the network. We know bad actors cannot attack what they cannot see – so if there is no IP address or ID to attack on the network, these devices are safe. Instead, agencies must verify users before granting access to authorized applications, connecting users through encrypted micro-tunnels leading to the right application. This allows users to securely connect from any device in any location while preventing east-west traffic on the network.
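
A minimal sketch of that broker pattern, with illustrative names and a stubbed-out tunnel: the user is verified first, checked against an application-level authorization list, and never handed a routable network address to scan or pivot from.

```python
# Sketch of application-level brokered access: verify the user, then connect them
# only to applications they are authorized for. Names and data are illustrative.
AUTHORIZED_APPS = {
    "jdoe": {"timecard", "email"},
    "asmith": {"case-management"},
}

def broker_connection(user: str, mfa_ok: bool, app: str) -> str:
    if not mfa_ok:
        raise PermissionError("authentication failed; no connection established")
    if app not in AUTHORIZED_APPS.get(user, set()):
        raise PermissionError(f"{user} is not authorized for {app}")
    # In a real deployment this would stand up an encrypted, per-application tunnel;
    # the user never lands on the network itself.
    return f"tunnel:{user}->{app}"

print(broker_connection("jdoe", mfa_ok=True, app="email"))
```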

The Move to the Edge

For long-term telework and beyond, the next big shift in security architectures will need to address how agencies can continue optimizing working on devices in any location in the world. As agencies move to 5G and computing moves to the edge, security should too. Secure Access Service Edge (SASE) changes the focus of security from network-based to data-based, protecting users and data in any location and improving the overall user experience.

A SASE cloud architecture can provide a holistic approach to address the “seams” in security by serving as a TIC 3.0 use case and building security functions of zero trust into the model for complete visibility and control across modern, hybrid cloud environments.

For agencies like DHS, who have a variety of sub-agencies and departments of different sizes and missions, cloud is ideal to facilitate secure data sharing and collaboration tools.

“So, when we’re securing our environment, we’re provisioning, monitoring, and managing. We have to be mindful of those seams and mindful of the gaps and ensure that as we’re operating the whole of the enterprise that we are keeping track of how resilient the entire environment is,” said Cappello.

Managing and Securing Federal Data From the Rugged Edge, to the Core, to the Cloud

The Federal government collects and manages more data outside of traditional data centers than ever before from sources including mobile units, sensors, drones, and Artificial Intelligence (AI) applications. Teams need to manage data efficiently and securely across the full continuum – edge to core to cloud.

In some cases, operating at the edge means space constrained, remote, and harsh environments – with limited technical support. Our new Dell EMC VxRail D Series delivers a fully-automated, ruggedized Hyperconverged Infrastructure (HCI) – ideal for demanding federal and military use cases.

VxRail is the only HCI appliance developed with, and fully optimized for, VMware environments. We built the solution working side by side with the VMware team. Both administrators and end users get a consistent environment, including fully automated lifecycle management to ensure continuously validated states. How? More than 100 team members dedicated to testing and quality assurance, and 25,000 test run hours for each major release.

Users can manage traditional and cloud-native applications across a consistent infrastructure – in winds up to 70 mph, temperatures hot enough to fry an egg and cold enough to freeze water, and 40-mph sandstorms. Whether you are managing Virtual Desktop Infrastructure (VDI) or mission-critical applications in the field, your team can take advantage of HCI benefits and ease of use.

As Federal teams collect and manage more data, they also have to be able to put that data (structured and unstructured) to work, creating new insights that help leaders deploy the right resources to the right place and anticipate problems more effectively.

Dell Technologies recently announced a new PowerScale family, combining the industry’s number one network-attached storage (NAS) file system, OneFS, with Dell EMC’s PowerEdge servers, at a starting point of 11.5 terabytes raw and the capability to scale to multi-petabytes. PowerScale nodes include the F200 (all-flash), F600 (all-NVMe), and Isilon nodes. End users can manage PowerScale and Isilon nodes in the same cluster, with a consistent user experience – simplicity at scale.

Federal teams – from FEMA managing disaster relief, to the Department of Justice working on law enforcement programs, to the Department of Defense managing military operations – can start small and grow easily on demand.

PowerScale is OEM-Ready – meaning debranding and custom branding are supported – while VxRail D Series is MIL-STD-810G certified and is available with a STIG hardening package. Both PowerScale and VxRail D Series benefit from the Dell Technologies secure supply chain, dedicated engineering, and project management support.

As the Federal government continues to deploy emerging technology, and collect and manage more and more data outside of the data center, government and industry need to collaborate to continue to drive innovation at the edge, so we can take secure computing capabilities where the mission is – whether that’s a submarine, a field in Kansas, a tent in the desert, or a dining room table.

Cyber Resiliency Means Securing the User

The recent, rapid shift to remote work has been a lifeline for the economy in the wake of COVID-19. But that shift also took an already-growing attack surface and expanded it. Government agencies were being called to rethink their cybersecurity posture and become more resilient even before the pandemic. Now, the novel coronavirus has added an indisputable level of urgency to that demand.

The Cyberspace Solarium Commission (CSC) was created as part of the National Defense Authorization Act (NDAA) for the 2019 fiscal year. On March 11, its final report was released, articulating a strategy of layered cyber deterrence through more than 80 recommendations. One of its policy pillars was the need to “reshape the cyber ecosystem,” improving the security baseline for people, tech, data, and processes.

Shortly after the report’s release, the virus upended the work environment of most public sector employees, prompting the CSC to publish a follow-on whitepaper evaluating and highlighting key points and adding four new CSC recommendations, focused heavily on the Internet of Things (IoT). This focus, coupled with the evolving cyber threat, means that “reshaping the cyber ecosystem” requires the government to move beyond investments in legacy technologies, and focus on the one constant that has driven cybersecurity since the beginning – people and their behaviors.

People Are the New Perimeter

The cyber ecosystem has, to some degree, already been dramatically reshaped. The security baseline needs to catch up. Currently, a large percentage of the Federal workforce is working from home – often relying on shared family networks to do so – and that may continue even as the pandemic subsides. In turn, agencies must look beyond the traditional, office-based perimeter as they secure employees and data. Data and users were already beginning to spread beyond walled-off data centers and offices; mass telework has simply pushed it over the edge.

We’ve already seen bad actors take advantage of this new perimeter by targeting unclassified workers via phishing and other attacks. Recent research found that, as of March, more than half a million unwanted emails containing keywords related to coronavirus were being received each day. Attackers are gaining compromised access, with many simply learning the network for now and lying in wait. Even traditionally trustworthy employees are under tremendous stress and may feel less loyal given the current physical disconnect.

In order to achieve the CSC’s vision of more proactive and comprehensive security, organizations must begin to think of people as the new perimeter. This is not a temporary blip, but the new normal. Agencies must invest in cybersecurity beyond the realm of old-school perimeter defenses. Methods like firewalls or data loss prevention strategies are important, but they are not enough. With people as the new perimeter, there is simply no keeping bad actors out. Instead, agencies need to keep them from leaving their network with critical data and IP – which can only be done with a deep understanding of people and data’s behavior at the edge.

Behavioral Analytics Should Be the Baseline

Putting the commission’s guidance into action must mean putting users at the center of the equation. Once again, it’s insufficient to simply rely on blocking access from bad actors. A more proactive and adaptive approach is required. Agencies must first understand which users pose the greatest risk, based on factors such as what types of data they have access to, and then develop dynamic policies that are tailored to that specific risk and are flexible enough to change with evolving circumstances.

Additionally, organizations must have an understanding of what normal behavior looks like for all users – based on information from traditional security systems and other telemetry inputs. By detecting anomalies in these patterns, analysts can identify potential threats from malicious insiders to external bad actors and take rapid and automated action in real-time. Behavioral analytics lets organizations separate truly malicious behavior from simple mistakes or lapses, and tailor the security response accordingly. The aim is to replace broad, rigid rules with individualized, adaptive cybersecurity – creating a far better baseline of security, as the CSC called for.
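
As a simple illustration of that baseline idea, the sketch below learns a per-user norm from recent history and flags activity that deviates sharply from it. The data-volume metric and the three-sigma threshold are assumptions for illustration, not a prescription for any particular tool.

```python
# Minimal sketch of a per-user behavioral baseline: flag activity that deviates
# sharply from that user's own history rather than from a single global rule.
from statistics import mean, pstdev

def is_anomalous(history_mb: list[float], today_mb: float, sigmas: float = 3.0) -> bool:
    """True if today's data movement is far outside the user's normal range."""
    mu, sd = mean(history_mb), pstdev(history_mb)
    if sd == 0:
        return today_mb != mu
    return abs(today_mb - mu) > sigmas * sd

history = [12.0, 15.5, 11.2, 14.8, 13.1]      # MB downloaded per day, last week
print(is_anomalous(history, 14.0))            # False: within normal behavior
print(is_anomalous(history, 480.0))           # True: sudden mass download, investigate
```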

The Bottom Line

Understanding how people interact with data is key to our nation’s security and should be a part of the push to put the CSC’s recommendations into action. The commission also emphasized collaboration with the private sector, mostly suggesting its resources and capabilities could help private sector actors stay safe. The collaboration should flow in the other direction as well. Capabilities coming from the private sector need to be incorporated into the public sector, especially in the wake of the pandemic.

The Federal government cannot simply keep investing in legacy tech. Instead, it needs to throw its weight behind innovative approaches – like behavior-centric security – that will move agencies closer to the CSC’s vision. With people as the new perimeter, a more targeted and adaptive cyber defense must be the new baseline.

Understanding COVID-19 Through High-Performance Computing

COVID-19 has changed daily life as we know it. States are beginning to reopen, despite many case counts continuing to trend upwards, and even the most informed seem to have more questions than answers. Many of the answers we do have, though, are the result of models and simulations run on high-performance computing systems. While we can process and analyze all this data on today’s supercomputers, it will take an exascale machine to process it quickly enough to enable true artificial intelligence (AI).

Modeling complex scenarios, from drug docking to genetic sequencing, requires scaling compute capabilities out instead of up – a method that’s more efficient and cost effective. That method, known as high-performance computing, is the workhorse driving our understanding of COVID-19 today.

High-performance computing is helping universities and government work together to crunch a vast amount of data in a short amount of time – and that data is crucial to both understanding and curbing the current crisis. Let’s take a closer look.

Genomics: While researchers have traced the origins of the novel coronavirus to a seafood market in Wuhan, China, the outbreak in New York specifically appears to have European roots. It also fueled outbreaks across the country, including those in Louisiana, Arizona, and even California. These links have been determined by sequencing the genome of SARS-CoV-2 in order to track mutations, as seen on the Nextstrain website and reported in the New York Times. Thus far, an average of two new mutations appear per month.

Understanding how the virus has mutated is a prerequisite for developing a successful vaccine. However, such research demands tremendous compute power. The average genomics file is hundreds of gigabytes in size, meaning computations require access to a high-performance parallel file system such as Lustre or BeeGFS. Running multiple genomes on each node maximizes throughput.
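
A small sketch of that throughput pattern, assuming a placeholder analysis function: several genome files are processed concurrently on one node rather than one after another. Real pipelines would call alignment and variant-calling tools reading from the parallel file system.

```python
# Sketch of running several genome analyses concurrently on one node.
# analyze_genome() is a stand-in; file names and worker count are illustrative.
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def analyze_genome(path: Path) -> str:
    # Placeholder for alignment / variant-calling work on one sample.
    return f"{path.name}: processed"

def run_batch(genome_files: list[Path], workers: int = 4) -> list[str]:
    # Several samples per node keep CPUs busy while each job waits on storage I/O.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(analyze_genome, genome_files))

if __name__ == "__main__":
    samples = [Path(f"sample_{i}.fastq") for i in range(8)]
    for result in run_batch(samples):
        print(result)
```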

Molecular dynamics: Thus far, researchers have found 69 promising sites on the proteins around the coronavirus that could be drug targets. The Frontera supercomputer is also working to complete an all-atom model of the virus’s exterior component—encompassing approximately 200 million atoms—which will allow for simulations around effective treatment.

Additionally, some scientists are constructing 3D models of coronavirus proteins in an attempt to identify places on the surface that might be affected by drugs. So far, the spike protein seems to be the main target for antibodies that could provide immunity. Researchers use molecular docking, which is underpinned by high-performance computing, to predict interactions between proteins and other molecules.

To model a protein, a cryo-electron microscope must take hundreds of thousands of molecular images. Without high-performance computing, turning those images into a model and simulating drug interactions would take years. By spreading the problem out across nodes, though, it can be done quickly. The Summit supercomputer, which can complete 200,000 trillion calculations per second, has already screened 8,000 chemical compounds to see how they might attach to the spike protein, identifying 77 that might effectively fight the virus.

Other applications: The potential for high-performance computing and AI to simulate the effects of COVID-19 extends far beyond the genetic or molecular level. Already, neural networks are being trained to identify signs of the virus in chest X-rays, for instance. When large-scale AI and high-performance computing are done on the same system, you can feed those massive amounts of data back into the AI algorithm to make it smarter.

The possibilities are nearly endless. We could model the fluid dynamics of a forcefully exhaled group of particles, looking at their size, volume, speed, and spread. We could model how the virus may spread through ventilation systems and air ducts, particularly in assisted living facilities and nursing homes with extremely vulnerable populations. We could simulate the supply chain of a particular product, and its impact when a particular supplier is removed from the equation, or the spread of the virus based on different levels of social distancing.

The bottom line: The current crisis is wildly complex and rapidly evolving. Getting a grasp on the situation requires the ability to not just collect a tremendous amount of data on the novel coronavirus, but to run a variety of models and simulations around it. That can only happen with sophisticated, distributed compute capabilities. Research problems must be broken into grids and spread out across hundreds of nodes that can talk to one another in order to be solved as rapidly as is currently required.

High-performance computing is what’s under the hood of current coronavirus research, from complex maps of its mutations and travel to the identification of possible drug therapies and vaccines. As it powers even faster calculations and feeds data to even more AI, our understanding of the novel coronavirus should continue to evolve—in turn improving our ability to fight it.

Mission Success Demands an Outstanding User Experience

By Sarah Sanchez, Vice President, Managed Solutions, SAIC

Our government is facing a challenging moment. Confronted with an unprecedented pandemic, agencies are having to dramatically ramp up their services for our country while large portions of the workforce cannot be in the office. Now more than ever, it is essential that government employees have access to IT systems and tools that are optimized to help them achieve their critical missions.

Fortunately, technology is accelerating in ways that can reduce the friction of obtaining IT support and empower government workers to keep their focus on the challenges at hand. By carefully designing systems and leveraging artificial intelligence, machine learning, and other automation tools, we can bring government users the most user-friendly experience possible.

I think SAIC’s Chief Technology Officer Charles Onstott said it well recently. “Agencies should be looking to deliver the outstanding user experience their workforce and citizens expect as an outcome of their digital transformation efforts,” he noted. “Effective integration of technologies like ServiceNow and Splunk will deliver not only improved operations, but will elevate the overall experience of the agency’s services.”

Delivering an outstanding user experience means more than simple web design or ticket resolution. It demands accountability for delivering quality services to end users by utilizing innovative methods and mechanisms that support effective collaboration across organizations to resolve complex issues. How do we give government employees easy access to the tools they need to do their jobs? How do we make processes more intuitive, information more accessible, and empower work across all the platforms – including mobile devices – that employees have become accustomed to using in their everyday lives? How do we create the environment to facilitate and incentivize innovation that moves our country forward?

At SAIC, we’ve developed an approach that we call U-Centric, because we’re focused entirely on improving that user experience. By unlocking the full functionality of technology platforms like ServiceNow and Splunk, we streamline IT services and transform the end-to-end customer service experience, increasing value and improving efficiency. We’re also making agencies more secure through process automation and advanced analytics, and we’re driving costs down and improving the customer experience with self-help and artificial intelligence.

Successful delivery starts with understanding how a user wants to interact with the systems they use, and recognizing that one size does not fit all. U-Centric is built on the premise of providing Omni-Channel access to services, giving the user the ability to choose how they engage with their IT services. This is fundamental to the understanding that IT systems exist to support the user in accomplishing their mission critical job.

U-Centric includes capabilities such as persona-based portals, which contain easy-to-access dashboards to collect and convey useful information quickly. It also utilizes self-help features such as knowledge articles and step-by-step how-to videos, robotic process automation, automated workflow processes, and self-healing systems. It is transparent, so that agencies have visibility into how data is being collected and utilized in support of these efforts and can make informed decisions. And it is built upon the critical business systems and rules that underpin agency work, so that new systems are as seamless as possible for the user.

The results of our U-Centric approach in these efforts have been fantastic. By streamlining workflows and building user-friendly systems, we’ve reduced the human labor hours dedicated to redundant processes and IT support. In one recent example, we reduced the work hours necessary to complete account administration from more than 16 hours to under 30 minutes. That not only means that IT support staff are more efficient and can serve more users, but it also reduces the time users spend dealing with IT support issues so they can stay focused on the vital tasks at hand.

Now, with our recent acquisition of Unisys Federal, we can do even more. We’ve incorporated their top-flight end user services programs into our existing capabilities, and I believe that combined strength makes us the best in the industry at delivering a comprehensive, seamless user experience for government customers.

By focusing relentlessly on serving user needs and making full use of available workflow tools, we can ensure that technology serves the government employees who serve our country, and facilitate the innovation we need. I’m excited about what’s possible, and I’m proud that SAIC is playing a leading role in this effort.

The Right Policy to Protect Remote Workers

In March, the White House released guidance that encouraged government agencies to maximize telework opportunities for those at high risk of contracting the coronavirus, as well as all employees located in the D.C. area. Though there are still many government employees not yet authorized to telework, this guidance marks a turning point.

Telework modes of operation are not new – and neither are the threats that accompany them. But the attack surface has grown significantly in the past month. A large number of workers are operating in insecure environments ripe for phishing and malware attacks, while new tools like video conferencing solutions can be targeted for malicious use or expose data to attacks.

Old, binary policies are insufficient to meet the new security challenge. Previously, policy could be split between enterprise and remote workers. But when everyone from senior to entry-level employees are all working from home, more granular policy controls are required. Those controls still rely on the same bread-and-butter IT best practices, though, from hardware-based security to patching and data protection. Here are some security controls government IT pros should implement today to ensure their newly remote workforce isn’t a tremendous liability. 

Managing Unsecured Environments

BYOD users, naturally, manage and own their own devices, and these devices live in unsecured environments and are exposed to attacks on the network. Consider a user who has four kids simultaneously logging into distinct telelearning systems on the same network he is now using for government work. How secure are the laptops, links, and teachers those kids are accessing? The reality is that network security is only as good as the link your kid clicked on last. As such, IT needs to push the latest patches as a requirement, enable multi-factor authentication (MFA) and enterprise rights management, and enforce good access control.

These best practices apply to workers who took a managed enterprise device home as well. Those devices also need protection against everything happening on the local wi-fi, in addition to enterprise access control (EAC). Before EAC, users connected to a network—and were only authenticated once they were already in. EAC, on the other hand, stops you at the front gate, verifying not just the user, but also that they have the proper local security software agents and updates. EAC was popular when the BYOD trend first gained steam, but many people saw it as too intrusive to be sustainable. Now, EAC is a key tool for helping to better manage laptops living in unsecured environments.
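
Here is a minimal sketch of that “front gate” check, with hypothetical field names and minimum versions: the device is admitted only if the user passed MFA, patches are current, and the required security agents are present. It is an illustration of the idea, not any product’s API.

```python
# Illustrative pre-admission check in the spirit of the access control described
# above: inspect the device before it ever reaches the network.
# Field names, agent names, and the minimum patch level are assumptions.
REQUIRED_PATCH_LEVEL = "2020.10"
REQUIRED_AGENTS = {"endpoint-protection", "disk-encryption"}

def admit(device: dict, mfa_passed: bool) -> bool:
    checks = [
        mfa_passed,                                              # user verified via MFA
        device.get("patch_level", "") >= REQUIRED_PATCH_LEVEL,   # simplistic version check
        REQUIRED_AGENTS.issubset(device.get("agents", set())),   # security software present
    ]
    return all(checks)

laptop = {"patch_level": "2020.09", "agents": {"endpoint-protection", "disk-encryption"}}
print(admit(laptop, mfa_passed=True))   # False: out-of-date patches keep it at the gate
```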

Cloud Services and SaaS 

Implementing security for VDI systems and cloud services includes some security basics as well: data protections, virtualization security for both the enterprise data center and at the access points, application security, secure boot, and so on. With software-as-a-service (SaaS), client access to cloud services should be protected through MFA and complemented with network transport encryption to offer protection on both sides. Appropriate data protection in enterprise rights management (ERM) can control access to the data through the cloud services and back to the data center. Understanding how clients are using the services and what data they are accessing is where the ERM decisions come into play.

Monitoring Threat Intelligence

IT pros also need to take a renewed focus on managing the threat of mistakes, misuse, and malicious insiders. There is always the risk of a user doing something careless or malicious, but that risk is exacerbated now; people are stressed and more apt to use shortcuts and make bad decisions. Normally, protecting against such risks means monitoring for anomalous use, like an employee working at midnight. But in the new world order, everyone’s hours are off. Many employees are working unpredictable “shifts” in an attempt to balance childcare and other responsibilities. Agencies need to be able to sift through these anomalous behaviors quickly and extend their threat intelligence and monitoring capabilities to the new edge where the users are now.

Policy-based access control and enforcement for applications and data at both the enterprise and the cloud level are also important to thwart misuse and abuse by users who are already authenticated. Enforcing ERM along with encryption, for instance, can further protect data so it can’t leave a laptop, or prevent it from being copied onto a USB.

The bottom line is that agencies now have to think differently about security issues related to teleworking. IT pros must monitor threats and secure everything from services to endpoints. While the modes of operation for telework are the same, the threat surface has grown. Policy controls must be far more granular in order to be effective.

The Best Things We Build Are Built to Last

We’ve spent the last several months in a bit of a surreal version of normal, but there is light at the end of the proverbial tunnel. When we emerge from the current environment, the reality is that we will be better off from a security perspective than we were when we went in. The added need to increase access capacity – whether for cloud-based apps, VPN, or “other” – has required us to think a lot harder about the security that comes along with this extra access, to the point where “building it in” makes a lot more sense than “bolting it on.”

Basic security hygiene items like DNS security and multi-factor authentication (MFA) can be the first and best line of defense for any access environment, which certainly includes an extreme telework scenario. The good news is that the protections don’t stop when our access environments return to “normal.” Since these security capabilities are part of a Zero Trust lifestyle, we get to carry them forward as they become our best practices.

We were gonna get there eventually, but we were forced to step on the gas

One of the biggest challenges some Federal agencies have faced, beyond the capacity issue, is figuring out how to marry the legacy technologies we have kept running by sheer will with the more cloud- and mobile-focused technologies that make the most sense for a more remote deployment. Agencies have been moving in this direction for years, but the “extreme telework scenario” has accelerated the shift, sometimes to the point of being uncomfortable and even painful. One example is legacy government authentication and user authorization. We’ve spent the past decade building out the “I” in PKI (public key infrastructure), and while this works fairly well in our old world – users sitting in offices, accessing applications from a desktop with a smartcard reader – it doesn’t work so well in the new normal. The good news is that there is a compromise to be made: a way to leverage the existing investments and make them work in a more innovative world.

Duo has been focused on being a security enabler for agencies as they make their journey to a cloud and mobile world, but we also realize that significant work and resources have been invested in the smartcard infrastructure that powers our identity, credentialing, and access management (ICAM) systems. We have partnered with experts in this arena, such as CyberArmed, to protect that investment and to take advantage of the strong identity proofing that these solutions provide.

CyberArmed’s Duo Derived Solution

NIST has shown us the way

When NIST smartly separated the LOA structure of SP 800-63 into identity proofing (IAL) and authentication (AAL), it gave agencies the flexibility to deploy the right tools for the right job and to apply a risk-based, Zero Trust approach to secure access. The Office of Management and Budget (OMB) followed suit and aligned its updated ICAM guidance (M-19-17) to give agencies the flexibility to make risk-based deployment decisions. This flexibility helps agencies be more agile in the face of whatever might be thrown at them, while still providing strong, consistent identity security. This identity focus is exactly what we need as we make our cloud journeys.

Now that we’re getting back to some small amount of normal, we need to take stock of what we’ve been able to accomplish and the investments we’ve made to shore up our security and prepare for an accelerated cloud and mobile journey. The things we’ve done will not be in vain.

How Organizations can Respond to Risk in Real Time

The NIST Cybersecurity Framework, initially issued in early 2014, outlines five functions with regard to cybersecurity risk: identify, protect, detect, respond, and recover. Of these functions, those on the far left encapsulate measures that could be considered pre-breach; those on the right, post-breach. Far too often, however, government agencies tip the scales too far to the left.

While the NIST Cybersecurity Framework offers a solid foundation, security teams remain mired in reactive strategies – a tremendous problem, because those reactive steps limit an organization’s ability to identify and operationalize actions before a concern becomes significant.

Traditional approaches to data protection usually entail buying and implementing tools that are binary and reactive. A particular event is seen as good or bad – with walls, blocks, and policies put in place to deal with the latter. This leaves government systems drowning in alarms and alerts, while limiting employees’ ability to do their jobs. If your policy is to block all outbound email attachments that include anything proprietary or sensitive, for instance, HR can’t do something as simple as send out a job offer.

By continuously identifying potential indicators of risk at an individual level, organizations can instead take a proactive security posture – one in which responding to and recovering from threats is an ongoing effort, not a piecemeal one. Here are three key components of a truly proactive approach.

Continuous Risk Evaluation

Users are continuously interacting with data, which means organizations must be continuously monitoring those interactions for threats, as opposed to scrambling once a breach has been flagged. Risk is fluid and omnipresent; removing risk wholesale is impossible. Instead, the goal should be to detect and respond to excessive risk, and that can only be done through continuous evaluation. This is especially important as agencies rely on a growing amount of data, which is stored everywhere and accessed anywhere.

Continuous risk evaluation means cybersecurity doesn’t end after a user’s behavior is labeled as “good” and access or sharing is granted (or vice versa) – as would be the case with a traditional, static approach. Instead, risk profiling continues beyond that initial decision, monitoring what a user does when granted access and whether their behavior is trustworthy. Gartner, for one, defines this approach as Continuous Adaptive Risk and Trust Assessment (CARTA).
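A CARTA-style posture is easier to picture with a small sketch. The example below, with made-up event weights and thresholds rather than any vendor’s model, keeps scoring a session after access is granted, so trust can be withdrawn mid-session instead of waiting for a breach to be flagged.

```python
# Illustrative event weights and thresholds; a production engine would derive these
# from analytics rather than hard-coding them.
EVENT_RISK = {
    "bulk_download": 30,
    "access_sensitive_record": 20,
    "offhours_login": 10,
    "routine_read": 1,
}
DECAY = 0.9            # older behavior counts for less as the session ages
REVOKE_THRESHOLD = 60

class ContinuousRiskSession:
    """Access is not a one-time good/bad decision: every interaction after the
    initial grant updates the user's risk score, and trust can be withdrawn."""
    def __init__(self, user: str):
        self.user = user
        self.score = 0.0
        self.active = True

    def observe(self, event: str) -> None:
        if not self.active:
            return
        self.score = self.score * DECAY + EVENT_RISK.get(event, 5)
        if self.score >= REVOKE_THRESHOLD:
            self.active = False   # step access down instead of waiting for a breach flag

session = ContinuousRiskSession("analyst01")
for event in ["routine_read", "offhours_login", "bulk_download", "bulk_download"]:
    session.observe(event)
print(round(session.score, 1), session.active)   # 65.8 False – access pulled mid-session
```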

Leverage Data Analytics

In order for risk levels to be assessed, organizations must have full-stack visibility – into all devices and interactions taking place on their systems – and the ability to make sense of a tremendous amount of behavioral data. How does a series of behaviors by Employee A stack up against a different series of behaviors by Employee B? Where’s the risk, and how do we mitigate it? Analytics are required not just to answer such questions, but to answer them quickly.

Multiple data analytics techniques can help organizations flag excessive risk: baselining and anomaly detection, similarity analysis, pattern matching, signatures, machine learning, and deep learning, to name a few. The key is to focus analysis on how users interact with data. Remember, risk is fluid. The risk of a behavior – even an unusual one – will depend on the sensitivity of the data being accessed or shared.

Automate the Response to Risk

Data analytics can reduce the time to identify a threat, but it’s also important to automate threat response. Once again, too many organizations simply respond to a growing number of alerts by throwing headcount at them. Instead, data loss prevention should be programmatic, with policy automated at the individual level.

Human attention should be reserved for the highest immediate risks, while routine security decisions are handled automatically. With automation, organizations can actually reduce headcount without compromising security – saving money while achieving precise, real-time risk mitigation.
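As a rough sketch of what a programmatic response can mean, the example below routes alerts through an automated playbook and reserves people for only the highest scores. The thresholds and actions are placeholders for illustration, not a prescribed policy.

```python
# Illustrative automated playbook: only the highest-risk alerts reach an analyst.
def open_ticket(alert):      print(f"escalate: analyst review for {alert['user']}")
def quarantine_file(alert):  print(f"auto: quarantined file for {alert['user']}")
def require_mfa(alert):      print(f"auto: step-up MFA required for {alert['user']}")

PLAYBOOK = [          # (minimum risk score, response) – evaluated top down
    (80, open_ticket),
    (50, quarantine_file),
    (20, require_mfa),
]

def respond(alert: dict) -> None:
    """Apply the first response whose threshold the alert's risk score meets;
    anything below the lowest threshold is simply logged."""
    for threshold, action in PLAYBOOK:
        if alert["risk"] >= threshold:
            action(alert)
            return
    print(f"log only: {alert['user']} risk={alert['risk']}")

respond({"user": "hr_admin", "risk": 35})    # handled automatically, no headcount needed
respond({"user": "contractor", "risk": 92})  # the rare case that warrants a person
```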

The Bottom Line

Ideally, the functions on the right side of the NIST Cybersecurity Framework focus on proactive detection, response, and remediation – steps that should happen concurrently and continuously. Identifying valuable risk insights and turning them into actionable protective measures remains challenging in government environments, especially with more data and devices on networks than ever. But with continuous evaluation, analytics, and automation, it can be done. Too many organizations are drowning in alarms and alerts while struggling to review and triage security content, adjust system policies, and remediate risk. By taking a holistic, proactive approach, organizations can identify and respond to risks in real time, adapting their security as needed.

Unleash the Power of Federal Data With Automated Data Pipelines

During these unprecedented times, it’s more important than ever to maximize the value of our data. As first responders, health care professionals, and government officials lead the charge on COVID-19 response efforts, they need real-time insight into rapidly-changing conditions to make the best decisions for public health.

Centers for Disease Control and Prevention (CDC) Director Dr. Robert Redfield recently said that preparation for reopening the economy has “got to be data driven.” With data being a key factor in our next steps for COVID-19 response and prevention, it’s important that vital data is readily available – without delay.

The challenge? All of this information lives inside various agencies, companies, departments, and IT environments. Securely bringing it together for analysis is complicated and time consuming. Add to that the requirement of synthesizing and delivering the data to key stakeholders in a way that’s immediately actionable, and you have a tall mountain to climb. Traditional methods of data analysis can take days, and with the challenges we’re facing as a society, we don’t have that kind of time.

An exciting area of innovation that’s helping agencies shorten time-to-value in data analysis is automation. Automated data pipelines handle the migration and transformation of data into usable, actionable formats. They provide frictionless, real-time ingestion of data, and when paired with on-demand analytics, they can unlock unprecedented levels of visibility, insight, and information sharing for agencies. That empowers stakeholders to make fast, smart decisions.
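For readers who want a feel for what an automated pipeline does, here is a deliberately tiny sketch of the ingest-transform-deliver loop in Python. The source data, field names, and print-based “delivery” are stand-ins; real pipelines replace each stage with managed ingestion, transformation, and analytics services.

```python
import csv, io, json
from datetime import datetime, timezone

def ingest() -> str:
    """Stand-in for pulling a raw extract from a source system."""
    return "county,cases,date\nFairfax,120,2020-04-20\nArlington,87,2020-04-20\n"

def transform(raw_csv: str):
    """Normalize raw records into an analysis-ready shape."""
    rows = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        rows.append({
            "county": row["county"].upper(),
            "cases": int(row["cases"]),
            "reported": row["date"],
            "loaded_at": datetime.now(timezone.utc).isoformat(),
        })
    return rows

def deliver(rows) -> None:
    """Stand-in for loading into the analytics platform stakeholders query."""
    print(json.dumps(rows, indent=2))

if __name__ == "__main__":
    # In production this runs on a schedule or whenever new data arrives,
    # so stakeholders are never waiting days for a manual refresh.
    deliver(transform(ingest()))
```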

To learn more about how automated data pipelines can bring speed and efficiency to your data operations in this challenging environment, register today for Qlik and Snowflake’s webinar with MeriTalk, “Improving the Value of Data with a Modern Data Platform,” on Thursday, April 23 at 1:30 p.m. EDT.

My Cup of IT: TMF Boost Can Break Modernization Logjam

What does Covid-relief legislation mean for Fed IT? So far, not nothing, but in the big picture, not enough to move the needle on any large-scale modernization push.

That might be changing, and soon – with the Technology Modernization Fund (TMF) as the vehicle.

While the $2.2 trillion stimulus relief bill rushed through Congress and signed by President Trump last week didn’t feature extra TMF funding, an alternative bill written by House Democrats includes a massive $3 billion expansion of TMF “for technology-related modernization activities to prevent, prepare for, and respond to coronavirus.”

Little is certain in the current environment, but one thing to count on is that more relief bills are on the way, and that $3 billion pop for TMF is ready to roll into the next bill.

Consider as well: Congressman Mark Meadows (R-NC), long-time advocate for IT reform on the House Oversight and Reform Committee, is now moving to serve as chief of staff in the White House. That provides a powerful carrier for the IT modernization message in the hallways of power.

Here’s the low-down on how that $3 billion in IT modernization funding might play out.

Never Let a Good Crisis Go to Waste

The powerful in D.C. are suddenly in touch with telework – and with the role of government systems and networks in supporting truly mission-critical work.

Agency secretaries and politicians see that cloud systems scale and perform – legacy, on-premise applications and systems, to quote Seinfeld, not so much … Secretaries struggling to host disaster portals and mobilize relief efforts suddenly want one throat to choke – that means concentrating authority in the hands of the CIO. Funny way to finally get those authorities outlined in FITARA. The importance of IT to allowing government to deliver on its mission is suddenly crystal clear – and, clear from the top down.

Why TMF?

So, if that $3 billion in IT funding happens, how will it be distributed and managed? It’s not the only path forward, but it makes a lot of sense to utilize the TMF to get IT modernization done. Here’s why, and some changes that need to happen to make this approach viable.

As it exists now, TMF is both anorexic and awkward. After an original funding of $100 million for Fiscal Year 2018, new funding for FY2019 and FY2020 dwindled to only $25 million per year.

Further, the payback requirements have meant that very few agencies – including some of the agencies that actually sit on the TMF review board – have bothered to apply for those funds themselves.

Why Now?

So, why TMF now – and what needs to change?

TMF has a lot going for it as a framework to distribute and manage new funding for IT modernization – it’s quick (it already exists), it’s tied to the budget process, and it provides for required oversight. However, we need one very significant change to TMF – we need to relieve or remove the requirement to pay back funds invested to support telework and crisis response. If that happens, agencies will rush to access these funds and undertake true modernization efforts. Providing $3 billion in funding through the TMF will not do the slightest bit of good if agencies don’t access those funds because of repayment obligation fears.

Nobody knows when this crisis ends – but investing in Federal IT modernization is critical to maximizing our nation’s relief response, and for better mission performance after that. Better IT is not a nice to have…

Hyperconverged Infrastructure for AI, Machine Learning, and Data Analytics

By: Scott Aukema, Director of Digital Marketing, ViON Corporation

When you hear the terms “artificial intelligence” (AI) and “machine learning” (ML) (and let’s be honest, if you have even a sliver of interest in technology, it’s difficult not to), hyperconverged infrastructure (HCI) may not be the first thing that comes to mind. However, HCI is beginning to play an important role in high-performance use cases like AI and ML with its ability to capture, process, and reduce vast quantities of data at the source of creation in a small form factor. In this blog, the third in a 3-part series on hyperconverged infrastructure, we’ll examine the role HCI is playing in deploying a complete AI solution. If you’d like to read the previous blogs, you can read about the role that HCI plays in enabling a disaster recovery solution and how it is changing the dynamics for edge computing and remote offices.

Hyperconverged infrastructure at the core of a hybrid multi-cloud model bridges the gaps among public cloud, on-prem private cloud, and existing data center infrastructure, enabling organizations to manage end-to-end data workflows to help ensure that data is easily accessible for AI. As organizations develop their AI/ML strategy and architect an IT environment to support it, the resources needed for a successful deployment quickly become evident. This is where many organizations are turning to a multi-cloud environment to support their AI workloads. A recent study by Deloitte found that 49 percent of organizations that have deployed AI are using cloud-based services1, making it easier to develop, test, refine, and operate AI systems. Hyperconverged infrastructure, in concert with a robust Cloud Management Platform (CMP), can accelerate deployment times, making it easier to stand up and take down an AI environment. These AI services in a consumption model provide the agility and resources needed to stand up an AI practice without making a significant investment in infrastructure and tools. HCI is an essential component of the hybrid multi-cloud environment for AI and ML.

In addition to acting as a catalyst between the data center and the cloud, HCI is well positioned to support edge computing – the processing of data outside of the traditional data center, typically at the edge of a network. Data collected at the edge very often is not being used to its full capacity. IT organizations are looking to hyperconverged infrastructure in these instances to capture data where it is created, compress it and transfer it to a cloud or centralized data center at another site. In many instances, the edge can mean hundreds of locations dispersed throughout the country or the world. Consolidating data from these locations allows organizations to create more complete data lakes for analysis to uncover new insights. By combining servers, storage, networking, and a management layer into a single box, HCI eliminates many of the challenges of configuration and networking that come with edge computing. In addition, organizations can coordinate hundreds of edge devices through a CMP, streamlining management, reducing complexity, and reducing costs. Leveraging HCI for edge computing enables data to flow more freely, whether into a centralized data lake or a public or private cloud environment where it can be used to begin the learning and inference process using the available AI models. Once these models are trained in the cloud, they can then be deployed back to the edge to gain further insights.

Hyperconverged infrastructure can streamline edge computing, enable multi-cloud environments, and act as a catalyst to aggregate data for cloud-based AI applications. For agencies that are geographically dispersed and seeking to leverage data from these disparate locations for a more robust analytics practice, HCI should be considered as part of their overall AI strategy. Since we are still in the infant stages of AI and ML, organizations should strive to be flexible and nimble to adapt to changes. Hyperconverged infrastructure provides that agility.

Hyperconverged infrastructure is enabling applications with a versatile platform, which helps organizations accelerate a variety of use cases. Hyperconverged solutions like Nutanix and Fujitsu’s PRIMERGY are helping agencies simplify deployment, reduce cost, improve performance, and easily scale up and scale out. Whether it’s AI, edge computing, disaster recovery, enabling a multi-cloud environment, or any of a multitude of other use cases, hyperconverged infrastructure should be considered as part of an IT modernization strategy.

1 – State of AI in the Enterprise, 2nd edition, Deloitte Insights, October 22, 2018

Get Ready for the Passwordless Future

By: Sean Frazier, Advisory Chief Information Security Officer – Federal, Duo Security

Most of us have a standard list of go-to passwords for various logins and websites – each fluctuating slightly with upper or lowercase letters, extra numbers, symbols and punctuation. Some of us keep them scribbled on a notepad, while others click “remember me” when logging onto sites, to speed up the process and relieve the stress of remembering them time and time again.

But as cyberattacks become more sophisticated, and Federal agencies work to modernize their IT systems and protect vital data, passwords are becoming a thing of the past. And the push toward a passwordless world introduces the need for new standards and technical innovation.

Everything Old Is New Again – Updating Legacy Technology

Truth be told, we have been lazy when it comes to passwords. Administrators put all the onus on the end user to manage the password lifecycle – requiring them to use longer passwords, a mix of characters/cases, etc. – making it harder and harder for users to manage the various passwords they need for different applications and sites.

The idea of a passwordless world is not entirely foreign to the Federal government. But while 80 to 85 percent of agencies use Personal Identity Verification (PIV) cards and/or Common Access Cards (CAC), these are not ideal solutions for agile, modern IT and application access. They are difficult to issue and replace when lost; they sometimes can’t be used to authenticate to cloud applications; and they are a non-starter on mobile devices. As such, these legacy identity verification technologies don’t lend themselves well to IT modernization, and agencies haven’t done the plumbing needed to update federation with newer standards such as OIDC or SAML.
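To show what that federation plumbing can look like, here is a minimal sketch of a standard OpenID Connect authorization request built with only the Python standard library. The endpoint, client ID, and redirect URI are hypothetical placeholders; the point is that the agency identity provider can keep proofing users however it likes (PIV/CAC or otherwise) and then federate that identity to cloud applications through standard tokens.

```python
import secrets
from urllib.parse import urlencode

# Hypothetical values for illustration; in practice they come from the identity
# provider's OIDC discovery document (/.well-known/openid-configuration).
AUTHORIZATION_ENDPOINT = "https://idp.agency.example/authorize"
CLIENT_ID = "cloud-app-123"
REDIRECT_URI = "https://app.agency.example/callback"

def build_oidc_auth_request() -> str:
    """Build a standard OpenID Connect authorization request URL."""
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "openid profile",
        "state": secrets.token_urlsafe(16),   # protects against request forgery
        "nonce": secrets.token_urlsafe(16),   # binds the ID token to this request
    }
    return f"{AUTHORIZATION_ENDPOINT}?{urlencode(params)}"

print(build_oidc_auth_request())
```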

Agencies are also dealing with Public Key Infrastructure (PKI) stacks that are, for the most part, at least 15 years old. The financial burden of maintaining these PKI stacks over their lifecycle can be immense, and modern technology is passing them by. Government organizations need to find a balance between working with these pre-implemented legacy systems, in which they have heavily invested, and adopting new, standards-based (more flexible, more affordable) authentication technologies in the commercial technology space.

The Cresting Wave of the Authentication Future

In March 2019, the World Wide Web Consortium (W3C) announced Web Authentication (WebAuthn) as the official passwordless web standard. WebAuthn is a browser-based API that allows web applications to create strong, public key-based credentials for user authentication. It will enable the most convenient and secure authentication method for end users – the device they are already using – to validate, via a biometric, that the user is who they say they are.
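For a sense of the shape of the API, the sketch below shows the kind of options a relying party (server) hands to the browser’s navigator.credentials.create() call, expressed as Python dictionaries with hypothetical names. Verifying the attestation the browser returns is intentionally omitted – that work belongs to a maintained FIDO2/WebAuthn library.

```python
import secrets
from base64 import urlsafe_b64encode

def registration_options(username: str, display_name: str) -> dict:
    """Server-side sketch of WebAuthn registration (credential creation) options."""
    return {
        "rp": {"id": "agency.example", "name": "Example Agency Portal"},   # relying party
        "user": {
            "id": urlsafe_b64encode(secrets.token_bytes(16)).decode(),
            "name": username,
            "displayName": display_name,
        },
        "challenge": urlsafe_b64encode(secrets.token_bytes(32)).decode(),  # one-time, unguessable
        "pubKeyCredParams": [{"type": "public-key", "alg": -7}],           # ES256
        "authenticatorSelection": {"userVerification": "required"},        # e.g., a biometric check
        "timeout": 60000,
    }

print(registration_options("jane.doe@agency.example", "Jane Doe"))
```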

While WebAuthn is a nascent standard, it is the wave of the future. Five years ago, many organizations and individuals were wary of biometrics. No one trusted fingerprint authentication or facial identification. While these technologies are not perfect, the Apple platform, for example, proves they work at scale by processing millions of transactions per day.

Shifting from traditional passwords can seem burdensome, but a passwordless authentication method doesn’t have to start from the ground up. Apple, Google, and Microsoft have already added WebAuthn support to their products. This commercially available technology can help agencies leverage industry standards like WebAuthn to improve security and drive flexibility. Instead of building custom models, putting trust into top tech providers in the space can help agencies save money and get rid of the security baggage associated with traditional passwords.

Of course, there will always be hiccups in technology. Passwords will remain necessary as a backup for authentication systems when biometrics fall short. But shifting from the traditional passwords of the past to the authentication mechanisms of the future is the logical next step for the public and private sectors alike. It’s the PKI that we all know and love, just done the right way – with strong protection and ease of use. With government buy-in to updated authentication models, agencies can modernize their IT infrastructures more easily and ensure stronger, safer, more secure protection for their data.

To learn how your agency can make the move toward a passwordless future, check out Duo Security’s website for more information.

Hyperconverged Infrastructure for Remote/Branch Offices & Edge Computing

By: Scott Aukema, Director of Digital Marketing, ViON Corporation

Hyperconverged infrastructure (HCI) is playing a significant role in building an enterprise multi-cloud environment. The benefits are well documented – you can learn more about them in a new white paper developed in collaboration with ViON, Fujitsu, and Nutanix, “Simplifying Multi-Cloud and Securing Mission Progress.” In addition to driving a cloud foundation, hyperconverged infrastructure is driving other use cases. In our first blog, we examined the impact that HCI can have in a disaster recovery solution. In this installment, we’ll discuss how HCI is changing the dynamics for remote offices and edge computing.

Edge computing moves processing power closer to end-users and their devices, in contrast to a traditional, more centralized model, where resources reside within a data center. As applications such as advanced analytics become more resource intensive and latency is an issue, having servers, storage, networking and hypervisor software close to the data source can be a significant advantage.

Currently around 10 percent of enterprise-generated data is created and processed outside a traditional centralized data center or cloud, and by 2025, Gartner predicts this figure will reach 75 percent. They estimate that 40 percent of larger enterprises will adopt edge computing as part of their IT infrastructure in this timeframe. This is driven largely by the growth in raw data, where massive volumes are too large to transmit to a centralized architecture for timely processing.

Organizations are turning to hyperconverged infrastructure to simplify complexity for both hardware and virtualized environments. The nature of hyperconverged infrastructure makes it easy to use, eliminating many of the configuration and network challenges associated with edge computing. Benefits of HCI at the edge include1:

  • High-density compute and storage resources are self-contained, easily deployable, and have a small footprint;
  • Integrated hardware & software stack comes preconfigured and can be easily managed as a single system remotely;
  • Scalable architecture can easily scale up and out to support growth, and next-generation applications such as AI and IoT;
  • Faster application performance for end-users and lower network latency with reduced WAN traffic.

Hyperconverged infrastructure is well suited to keep pace with the rapid growth of data and the need to support multiple remote sites. This is especially true in environments that ingest or create massive data sets and need to conduct real-time or near real-time analysis of that data. In those instances, moving large-scale data sets to a central data center is time consuming, inefficient, and can be costly. In these cases, HCI is well positioned to enable organizations to ingest, analyze, and gain insights from data, and to act quickly on those insights when needed.

Many organizations don’t have IT support at their remote or branch offices. Edge computing is designed to run with limited or no dedicated IT staff, which means the infrastructure must be easy to implement and manage, and it has to connect quickly back to the primary data center and cloud when needed. These requirements are what make HCI well suited to edge computing. For IT organizations, hyperconverged infrastructure provides the flexibility to quickly stand up infrastructure in new sites, easily manage it from a single remote location, and provide local users with high-performance compute for critical, resource-intensive applications. For users, it provides the operational autonomy to gain insights at the source of data ingestion, rather than migrating data to a centralized data center.

Finally, consider HCI’s role in a hybrid multi-cloud environment. A model that has centralized on-prem data center infrastructure integrated with public, private, and hybrid clouds and micro data centers at the edge is an architecture that delivers on multiple fronts. When aligned with a robust cloud management platform, orchestration between the various environments becomes seamless, providing governance and management through a single interface. Organizations get the flexibility and efficiency of cloud computing tightly integrated with on-prem infrastructure to deliver the right level of performance, when and where it is needed.

1 TechTarget Hyper-converged edge computing Infographic

On Trend: Tailored Technology for Customization at the Edge

A perfectly tailored suit is an investment. It’s worth it to pay for the perfect fit, high-quality material appropriate for the occasion, and a color that makes your eyes pop.

So why, when it comes to mission-critical technology solutions, are government agencies expected to buy off-the-rack?

As federal agencies expand nascent AI capabilities, deploy IoT technologies, and collect infinitely more data, missions require a customized, nuanced approach to transform edge capabilities.

To combat the data deluge resulting from AI and IoT advances, the Federal Data Strategy’s first-year action plan was released in late December. It urges the launch of a federal CDO Council, establishment of a Federal Data Policy Committee, and identification of priority data assets for open data – all by the end of January 2020. These are just the first steps to prepare for what’s already underway; government’s mass migration to the edge and the resulting proliferation of data. In just five years, Gartner projects 75 percent of all enterprise-generated data will be processed outside of a traditional data center or cloud.

As we work to manage, analyze, and secure data collected at the edge, we need to evaluate solutions with the same standards we apply in our data center or cloud. To enable insights at the edge, federal teams need the same (or better) capabilities – high compute, speed, power, storage, and security – but now in a durable, portable form. This may require equipment to tolerate a higher level of vibration, withstand extreme thermal ranges, fit precise dimensions, or incorporate specialized security requirements.

Partnering with Dell Technologies OEM | Embedded & Edge Solutions enables Federal SIs and agencies to integrate trusted Tier 1 infrastructure into solutions built for their specific mission requirements, or for those of their end users. For instance, working with our team, you might re-brand Dell Technologies hardware as part of your solution, leveraging specialized OEM-ready designs like our XR2 Rugged Server and Extended Life (XL) option. We also offer turnkey solutions designed by our customers and delivered through Dell Technologies, which allows us to further serve what we know are your very specific use cases.

As an example, our customer Tracewell Systems worked with Dell Technologies OEM | Embedded & Edge Solutions to customize the Dell EMC PowerEdge FX architecture, creating a family of products that meets the federal customer’s field dimension requirements for server sleds. Because Tracewell’s T-FX2 solution is still interoperable with standard Dell EMC server sleds, the end customer can now plug and play powerful Dell EMC compute and storage products from the field to the data center, cutting processing time from 14 days to two.

Feds at the edge need the right solution, and need that solution delivered quickly and securely. Agencies and federal systems integrators need a trusted partner that can help them compress time-to-market while ensuring regulatory compliance and providing a secure supply chain. While conducting a search for an OEM partner, agencies and systems integrators should consider vendors that will embrace challenges and engage in a deep, collaborative relationship. Moreover, dig beyond the design of the technology and ask:

  • Does the vendor have the buying power to guarantee production consistency, so the product can continue to be delivered as designed? If necessary, consider looking for a partner that will guarantee a long-life solution.
  • Are there lifecycle support services from problem identification, to customized design, to build and integration, to delivery, to experience?
  • Can the potential partner supply program management to handle all regulation and compliance complications?
  • Does the vendor have a broad portfolio for easy integration of solutions from edge to core to cloud?
  • Does the vendor have a deep focus on security – from the chip level through to delivery and support?

These critical aspects will help you design those faster, smaller, smarter solutions, and get them in the field more quickly.

With 900+ dedicated team members, the Dell Technologies OEM | Embedded & Edge Solutions group has embraced challenges for 20 years, creating more than 10,000 unique project designs. For more information about how our capabilities can provide you with the tactical advantage, click here.

About the Author:

Ron Pugh serves as VP & GM, Americas, for the Dell Technologies OEM | Embedded & Edge Solutions division. To learn more, visit DellTechnologies.com/OEM.

Hyperconverged Infrastructure for Disaster Recovery

By: Scott Aukema, Director of Digital Marketing, ViON Corporation

The benefits of hyperconverged infrastructure (HCI) as a foundation for building a cloud platform are well documented, as organizations turn to HCI to simplify complexity for both hardware and virtualized environments. We’ve recently published a white paper in collaboration with Nutanix and Fujitsu on this topic, Simplifying Multi-Cloud and Securing Mission Progress. But recent research from MeriTalk1 highlights that HCI supports many other use cases beyond cloud. In this first installment in a series of three blogs, we’ll examine how HCI is impacting disaster recovery (DR) in the data center.

Disaster recovery incidents generally occur as a result of catastrophic loss of a data center (such as a fire or flood), equipment loss within the data center, component failure of a single server (or set of devices), or lack of network access due to networking issues.2 Hyperconverged infrastructure – combining storage, compute, and network hardware with hypervisor software to create and run virtual machines – is well suited to perform disaster recovery functions that protect against these types of incidents.

As with traditional disaster recovery, production environments can be regularly replicated to the HCI system. If the primary data center suffers a failure, the replica virtual machines can be quickly brought online on the hyperconverged system in a secondary data center. The HCI environment can host the failed-over workloads until the primary location is back up and running – even running those workloads indefinitely, if needed. Several factors make hyperconverged infrastructure well suited to a DR scenario:

  • Rapid (Instant) Recovery – Most HCI solutions include a comprehensive backup and recovery capability for short Recovery Time Objective (RTO) windows. Through the use of virtual machine snapshots, IT managers can replicate the secondary copy to an HCI system in the secondary data center or a replicated environment in the public cloud. This provides on-site and off-site copies of the latest version of all VMs in the snapshot.3 Compared with traditional infrastructures, HCI offers a faster means to failover data in a disaster.
  • Cloud Integration – The benefits of hyperconverged infrastructure are well-suited to building a hybrid cloud environment and the software defined nature of HCI means that public or private cloud can be used as a replication site for disaster recovery. In a multi-cloud world, managing public and private cloud environments, operating on-prem physical infrastructure, and moving virtual machines between these environments is essential to data protection. Hyperconverged Infrastructure, along with a cloud management platform (CMP) like ViON’s Enterprise Cloud can make it easier to orchestrate workloads across clouds and streamline the recovery process in the event of a disaster.
  • Software-Defined Infrastructure – Software defined infrastructure simplifies the automation and orchestration of data replication and can provide continuous updates of remote copies with little or no impact on local performance. The built-in snapshot capabilities of the HCI hypervisor can streamline disaster recovery by replicating data to the DR environment, providing flexibility, speed, and reliability to recover quickly during a failure.
  • Scale-Out Architecture – The inherent scale-out nature of a hyperconverged infrastructure allows IT organizations to quickly procure additional storage, compute, networking, and virtualization resources to expand capacity as needed, quickly and easily. This enables a greater volume of workloads in the same amount of time, with more resources available. Scale out architecture not only provides the resources for backup and recovery, but also allows for architectural freedom.

If you examine each of these factors independently, they deliver strong value for using HCI for DR purposes. But collectively, they represent a significant leap in efficiency, resiliency, and flexibility. While hyperconverged infrastructure may not be the right backup and recovery solution for every IT organization, it’s worthy of consideration.

In our next blog, we’ll examine how hyperconverged infrastructure is helping agencies accelerate their data analytics capabilities. In the meantime, we’d like to hear from you – what role is HCI playing in your data center?

1 – MeriTalk Infographic, “Hyper Converged Without the Compromise: How Next Gen HCI Can Modernize Fed IT.”
2 – ComputerWeekly.com – https://www.computerweekly.com/feature/Hyper-converged-infrastructure-and-disaster-recovery
3 – Network World.com – https://www.networkworld.com/article/3389396/how-to-deal-with-backup-when-you-switch-to-hyperconverged-infrastructure.html

Federal Contracting Trends: What can we Expect in 2020?

If 2019 is any indication, Federal contracting in 2020 promises to be a very interesting year. Federal contracting has experienced steady growth – an average of 6 percent year-over-year – for the last five years, according to Bloomberg Government. Spending jumped 9 percent in 2018, a growth rate roughly 50 percent higher than in previous years.

A continuation of this growth next year will be driven primarily by an increase in defense spending and wider scaled deployment of new technologies across the Federal government.

This growth will present new opportunities, and of course, new challenges for those operating in this space. Here are some top-line issues that will impact Federal contracting in 2020 and beyond.

Teaming Agreements Are on the Rise

Teaming arrangements have become more popular over the last several years in Federal contracting, and for good reason. Teaming helps contractors gain access to work, minimize risk, increase knowledge and offer a more competitive price point.

Small businesses view teaming as the most effective way to thrive in the competitive Federal market. There will be a significant uptick in teaming in 2020 as both smaller and larger contractors look to provide the types of capabilities needed to fulfill a wide variety of requirements on larger contracts.

Large Contracts are Becoming More Accessible to Small Businesses

The Federal government has made it a priority to award more Federal contracts to small businesses. In fact, nearly a quarter of prime contract dollars have gone to small businesses over the past five years. In 2020, small businesses will continue to have greater access to large contracts that previously only went to big contractors. A primary reason for this is the focus on HUBZone small businesses, which helps distribute contract proceeds to underutilized areas.

The Importance of Being “Employee-Centric”

It is no secret that top talent is at a premium in today’s Federal contracting market. In an effort to attract the best and the brightest, employers are offering perks such as flexible work schedules, increased benefits and telework. These efforts help attract and retain qualified candidates. However, a more “employee-centric” work environment is key for maximizing employee satisfaction. This can include processes and procedures that ensure open communication and flow of positive feedback. It also means offering flexibility in terms of the types of projects that team members can support.

Increasing Need for Best-in-Class Contracting Vehicles

For several years now, the Federal government has been pushing agencies to access best-in-class (BIC) government-wide acquisition contracts (GWACs) to increase their buying power. The overall assessment is that this increases the need for teaming across the small business community. Small businesses that are adequately prepared for these requirements should expect to expand in the coming year. The use of these vehicles will grow, which will allow government to access specific skills more efficiently.

Get Ready for 2020

With a growing economy and expanding government priorities, Federal contracting has a bright future in 2020. Being on top of trends will help contractors gain opportunities and smartly navigate whatever issues arise. While the trends discussed in this article are not all-encompassing, they will likely be in the headlines throughout the next twelve months and beyond.

Increase your awareness of these key trends and get ready to take advantage of the opportunities 2020 will bring.

By: Walter Barnes III, president and founder of PM Consulting Group

Taking Legacy Systems Off Life Support with Modern IT Service Management

As we all know too well, eighty cents of every Federal IT dollar still supports legacy IT. Agencies miss opportunities to innovate – and more. A 2019 GAO study recognized several consequences of keeping legacy systems on life support, including security risks, unmet mission needs, staffing issues, and increased costs.

Worse, the report found most agencies lack complete plans for modernizing their most critical legacy systems – plans with milestones for completing the modernization, a description of the necessary work, and details on the disposition of the legacy system.

Alongside infrastructure and application modernization, IT leaders are considering opportunities to deliver seamless user experiences with modern IT Service Management (ITSM) – updated, automated processes that are intuitive and personalized, enable self-service, and provide agency leaders visibility into trends and needs.

Elevating the service experience and improving productivity

Cloud-based ITSM implementation is already contributing to modernization efforts and transforming the government service experience on both Federal and State/Local levels.

The National Cancer Institute (NCI), for example, wanted efficient, streamlined services to enable staff members to focus on supporting cancer research and advancing scientific knowledge to help people live longer, healthier lives.

Using a ServiceNow IT Service Management solution, NCI was able to lower its number of incident, request, and change tickets with a single portal to submit, review, and address incoming items. With the new system in place between 2014 and 2018, the organization reduced its incident tickets from 372,000 to 94,200; request tickets from 162,000 to 51,700; and change tickets from 5,400 to 900. This progress allowed the IT team to spend more time developing strategic priorities to give users a significantly better experience.

In another example, North Carolina took the step to replace legacy systems to create a more innovative platform for customers, as well as improve service rates and achieve economies of scale. The system helped manage and mitigate issues faster.  The state has since implemented the solution across nine state agencies.  On a single platform, each agency has a customized portal that meets their unique needs while providing enterprise-level views and analytics.  The best part?  On the back end, the state’s Department of IT has only one platform to maintain and upgrade, saving significant time and resources.

How can agencies ensure success?

GAO reported that over the last five years, successful modernization efforts have included several common success factors, including actively engaging end-users and stakeholders and creating an interface that is consistent across systems.

What can your agency do to ensure a successful, seamless migration?

Lay the groundwork by digging deep into your current ITSM system – determine what works well and what needs to be changed.  Then, document the most important parts of your current legacy ITSM solutions and establish your core team.  Identifying current service delivery challenges will help prioritize successful processes, areas for improvement, and more.

When implementing, keep your systems simple, so you can scale as needed.  Communicate regularly with users before launching and through implementation.

Silo free = success

Technology leaders focused on modernization should avoid point solutions designed to do one thing. That approach only re-creates silos and complexity.

Alternatively, taking a platform approach means program teams can quickly deploy out-of-the-box service management capabilities, configure and build targeted applications, and integrate with outside applications.

These capabilities provide business and IT leaders with a platform to deliver new efficiencies and services and provide visibility and control to all stakeholders.

To learn more, download ServiceNow’s IT service management blueprint, “Why you shouldn’t be afraid of replacing your legacy ITSM system.”

Breaking Down the White House ICAM Memo: Key Steps for Federal Agencies

By: Bryan Murphy, Director, Consulting Services & Incident Response, CyberArk

Digital transformation is happening everywhere – and with increasing urgency in the Federal government. Advances in cloud technology have accelerated these initiatives; yet with those innovations come critical cybersecurity challenges, especially as they relate to identity management and data privacy.

The Federal government houses some of the most sensitive information anywhere, including Social Security numbers, medical records, and national security data – a virtual goldmine for attackers and other bad actors. The government reports tens of thousands of cyber incidents each year – numbers that are expected to grow as attacks only get more sophisticated. As government agencies modernize their digital infrastructures, new processes must be put in place to address the reality of today’s landscape.

This summer, the White House released a new policy memorandum for Identity, Credential, and Access Management (ICAM), addressing its importance in digital government operations and outlining new procedures agencies must adhere to.

There are two critically important parts:

  • Agencies of the Federal Government must be able to identify, credential, monitor, and manage subjects that access Federal resources. This includes the ability to revoke access privileges when no longer authorized in order to mitigate insider threats associated with compromised or potentially compromised credentials. It also includes the digital identity lifecycle of devices, non-person entities (NPEs), and automated technologies such as Robotic Process Automation (RPA) tools and Artificial Intelligence (AI).
  • Each agency shall define and maintain a single comprehensive ICAM policy, process, and technology solution roadmap. These items should encompass government-wide Federal Identity, Credential, and Access Management (FICAM) Architecture and CDM requirements, incorporate applicable Federal policies, standards, playbooks, and guidelines, and include roles and responsibilities for all users.

This guidance makes clear that federal agencies must now shift toward a dynamic, risk-based approach to securing federal information and infrastructure, one that requires a measurable, fully-documented risk management and technology adoption process. To ensure compliance with the new ICAM policy, agencies need to start with the following baseline essentials:

  • Understand the Breadth of “Identity”
    More than just a single user, identity encapsulates every device and application a user accesses through credentials, which present one of the greatest risk factors. An admin may be one single user, but if their credentials get compromised, an attacker can see everything they have access to – making it critical to have the right mechanisms in place to authenticate and track all of the identities within your infrastructure. Safeguards like step-up authentication and managerial approval processes help mitigate risk from privileged credential-based attacks before allowing access to critical assets and resources (see the sketch after this list).
  • Manage Risk Through Privilege
    Since privileged and administrative accounts have been exploited in nearly every major attack affecting federal government agencies, the first priority needs to be securing privileged access. Security frameworks such as the Council on Cyber Security Top 20 Critical Security Controls, NIST, and others, have always maintained the importance of protecting, managing and monitoring privileged and administrative accounts and provide excellent resources for agencies on how to most effectively do so.
  • Address Common Attacks
    Attackers often harvest credentials and move laterally across the infrastructure – for example, using Pass-the-Hash techniques to steal account credentials from one device and reuse them to authenticate across other network entry points and gain elevated permissions, or leveraging unmanaged SSH keys to log in with root access. Understand where your agency is most vulnerable and take action to fortify those weaknesses, prioritizing the most important credentials. Implement automated controls that respond when a response is necessary.
  • Measure Continuously
    Regularly audit infrastructure to discover potentially hidden and unprotected privileged access, including cloud and DevOps environments – which Federal agencies are increasingly using. Ensure continuous reassessment and improvement in privileged access hygiene to address a changing threat environment and identify and pre-define the key indicators of malicious activity.
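As referenced above, here is a minimal sketch of a step-up check for privileged access. The action names, asset list, and five-minute MFA window are assumptions for illustration, not a specific product’s policy; the point is that privileged requests are re-evaluated at request time, not just at login.

```python
# Illustrative policy: privileged actions against critical assets require a fresh,
# stronger authentication event plus documented approval before access is released.
PRIVILEGED_ACTIONS = {"retrieve_admin_password", "open_root_session"}
CRITICAL_ASSETS = {"hr-database", "pki-ca", "payment-system"}

def allow_privileged_access(request: dict) -> bool:
    """Step-up check run when the privileged action is requested, not at login time."""
    if request["action"] not in PRIVILEGED_ACTIONS:
        return True                                   # routine access follows normal policy
    if request["target"] not in CRITICAL_ASSETS:
        return True                                   # only critical assets get the extra gate
    return (
        request["mfa_age_seconds"] < 300              # recent multi-factor authentication
        and request["manager_approved"]               # managerial approval for this session
        and request["session_recorded"]               # the privileged session is monitored
    )

print(allow_privileged_access({
    "action": "retrieve_admin_password", "target": "pki-ca",
    "mfa_age_seconds": 1200, "manager_approved": True, "session_recorded": True,
}))   # False – the MFA is stale, so the admin must re-authenticate first
```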

While the adoption of transformative technologies like cloud environments does expand an agency’s attack surface, the solution is not to eschew modern technology but rather to account for the risks that these technologies introduce and make them part of the solution. The White House’s new guidelines provide a comprehensive focus for agencies to do just that – make the most of opportunities afforded by digital transformation, while instituting a risk-based approach that protects agencies’ most important resources simultaneously.

By zeroing in on the critical area of privileged access, addressing common types of attacks, and measuring outcomes continuously, federal agencies will be well-equipped to adopt this new risk-based approach to security now required, but without sacrificing technological advancements that are integral to modern organizations.

Why Agencies Should Make Zero Trust Their Mission

By: Lisa Lorenzin, Director of Emerging Technology Solutions for the Americas, Zscaler

Federal CIOs will be working harder than ever to deploy cloud applications and infrastructure over the next year as they work to meet 2020 Data Center Optimization Initiative (DCOI) deadlines, continue to deploy shared services, and work to meet evolving mission requirements.

The cloud push brings new opportunities for flexibility and efficiency. But alongside this progress, federal cyber leaders need new cyber defenses to protect increasingly complex environments that now span multiple cloud providers in addition to existing data centers.

It’s not news that security concerns have stymied cloud progress. Furthermore, agencies are saddled with technical debt that makes innovation difficult and leads to a slower-than-expected cloud adoption. As a result, in 2019, 80 percent of the federal IT budget is spent supporting legacy systems rather than on driving innovation.

To accelerate cloud adoption, overcome technical debt, and support 21st-century missions and citizen services, agencies need flexible security solutions that provide a consistent user experience across both cloud and data center environments. Increasingly, federal agencies are considering a zero trust approach to help address these requirements.

Based on the idea that an organization should not inherently trust any user or network, zero trust helps agencies balance security and productivity. Under this model, any attempt to access a system or application is verified before the user is granted any level of access. Authorized users receive secure, fast access to private applications in the data center or cloud, regardless of whether the user is on-site or remote, an agency worker, or a third party.

Zero trust is ideal for federal agencies, given the need to protect data on a massive scale in an increasingly hybrid environment. The list of devices connected to an agency’s network continues to grow. Agencies also increasingly manage assets beyond their traditional network perimeter – effectively creating a larger attack surface. Considering the variety and sensitive nature of government data, and the criticality of federal missions, agencies clearly need a commensurate level of protection.

Connect the Right User to the Right Application

Zero trust prevents unauthorized users from accessing data and systems – but that’s only the beginning. The real goal is to get the right users connected to what they need to complete their mission as quickly and seamlessly as possible. Agencies that implement zero trust solutions can take advantage of four primary advantages: security, user experience, cost, and simplicity.

From a security standpoint, agencies need a solution that provides granular, context-based access to sensitive resources. With a zero trust solution, security can follow both the application and the user consistently across the organization.

While applications are hosted in multiple environments and users will connect from diverse locations, the user experience can be consistent and transparent. Users will not have to manage added complexity if they are off-network versus on-network, or if an application is hosted in the cloud versus a physical data center.

From a cost perspective, agencies need a solution that enables them to invest at an initial level to solve an initial use case, and then expand organically as the number of use cases grows. Unlike many traditional security models that rely on network-based controls, zero trust should not require a fixed investment – making it ideal for scalable, flexible cloud environments.

Finally, agencies need simplicity. Implementing a zero trust solution should make it easy for users and administrators to consistently access the information they need. Who is using which applications and how often? What is the user experience when accessing a specific application or when accessing from a particular location?

TIC 3.0 Changes the Game

The traditional security process for remote access in federal environments, as we know, is not optimal.  The agency establishes a security perimeter and deploys a virtual private network (VPN) to connect endpoints to the network when the user is outside that perimeter. Then the user connects to the agency data center through a stack of various infrastructure devices (DMZ firewalls, load balancers, etc.) supporting the VPN appliance. If users are accessing private applications hosted on public cloud providers, their traffic is routed back out through a Trusted Internet Connection (TIC), traversing another stack of security appliances before it finally arrives at its destination.

Federal CIO Suzette Kent released the updated TIC 3.0 policy in draft form this past year. These new guidelines are more flexible than previous TIC requirements – they open the door for agencies to use modern security solutions and models like zero trust to protect data and applications in cloud environments. This is a game changer. A FedRAMP-certified zero trust solution can provide modern security, usability, and flexibility – and meet the new TIC 3.0 guidelines.

Where from Here?

TIC 3.0 is expected to accelerate cloud adoption as it enables agencies to take advantage of modern security models like zero trust. There are several steps that can help ease the learning curve for federal teams.

First, consider your current infrastructure. Many agencies have elements of zero trust in place, such as endpoint management, Continuous Diagnostics and Mitigation (CDM), application and data categorization, micro-segmentation, and cloud monitoring.

Next, consider the application landscape. Zero trust is inherently complex to implement, but zero trust network access (ZTNA) solutions like Zscaler Private Access (ZPA), a FedRAMP-authorized cloud-based service, can provide a scalable zero trust environment without placing a significant burden on the IT team. ZPA connects users to applications without placing them on the network or relying on an inbound listener, instead leveraging a global cloud platform to broker inside-out connections that carry authorized user traffic over TLS-encrypted micro-tunnels. These tunnels provide seamless connectivity to any application regardless of where it is running, creating a secure segment of one and keeping applications invisible to the internet. The approach reduces the attack surface and eliminates the risk of lateral movement within the security perimeter.
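As a rough mental model of this inside-out pattern (and not ZPA's actual implementation or API), the sketch below shows a broker that only pairs an authenticated, entitled user with an application connector that has already dialed out; the application itself never exposes an inbound listener.

# Conceptual sketch of ZTNA-style inside-out brokering.
# This is an illustrative model only, not ZPA's actual architecture or API.

class Broker:
    def __init__(self):
        # Connectors register by dialing OUT to the broker; apps expose no inbound listener.
        self.connectors = {}      # app name -> connector id
        self.entitlements = {}    # user -> set of apps the user may reach

    def register_connector(self, app, connector_id):
        """Called when an app connector opens an outbound, TLS-protected control channel."""
        self.connectors[app] = connector_id

    def entitle(self, user, app):
        self.entitlements.setdefault(user, set()).add(app)

    def connect(self, user, app):
        """Pair an authenticated user with a connector for one app (a 'segment of one')."""
        if app not in self.entitlements.get(user, set()):
            raise PermissionError(f"{user} is not authorized for {app}")
        if app not in self.connectors:
            raise LookupError(f"no connector has dialed out for {app}")
        # In a real service this would stitch two outbound micro-tunnels together;
        # here we simply return the pairing to illustrate the control flow.
        return (user, self.connectors[app])

broker = Broker()
broker.register_connector("hr-portal", "connector-dc-east")   # outbound registration
broker.entitle("analyst01", "hr-portal")
print(broker.connect("analyst01", "hr-portal"))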

Finally, take advantage of federal community resources. ACT-IAC just published its Zero Trust White Paper, developed by a government/industry task force. The document shares key concepts around zero trust, recommended steps, and specific lessons learned from working within federal environments. ACT-IAC also hosted a panel discussion among industry and agency technologists that explored these concepts at its recent IT Modernization Forum.

As the National Transportation Safety Board recently demonstrated, leveraging a zero trust approach now means agency teams will gain the ability to access and share mission-critical information quickly, anywhere, anytime, from any device. As agencies build cloud confidence, they can finally start to shift spending. This means less legacy, more innovation, and ultimately secure, modern government services built to deliver an experience that agency teams will appreciate.

Unlocking the Security Benefits of Multicloud and Network Modernization

By: Greg Fletcher, Business Development Director, Federal Civilian Agencies, Juniper Networks

The government’s modernization effort has evolved over time with the help of policy developments, increased funding and a cultural shift toward embracing technology. Federal leaders dedicated years to planning the impending digital transformation and now, agencies are beginning to leverage innovative forms of technology to reach their diverse mission goals.

Cloud adoption continues to play a critical role in this modernization effort for agencies, ranging from the U.S. Department of Homeland Security to the U.S. Department of Defense. When looking to move their critical data, many agencies are turning to a hybrid multicloud environment, which enables data sets to live both on-premise and in the cloud. Accomplishing a successful cloud adoption is no small feat – in fact, many agencies were first tasked with retrofitting the path along which this data moves from one environment to another: the network. There are many security benefits to modernizing federal networks and adopting a hybrid multicloud environment, but three key outcomes include:

Greater Visibility

With the enactment of the Modernizing Government Technology Act and the Cloud Smart Strategy, the federal government’s migration to the cloud is imminent. And yet, many agencies are still concerned that their data could be compromised when migrating sensitive information to public cloud environments. Legacy networks lack the sophistication that federal agencies need to monitor for suspicious activity and uncover nefarious threats. After all, federal agencies can’t mitigate security threats if they don’t know they exist.

Using a common operating system and a single, open orchestration platform, multicloud solutions help agencies manage the complexity of operating in different environments and provide a methods-driven approach that lets each agency map its own path. Agencies can also operate with consistent policy and control across all places in the network, with support to launch workloads on any cloud and on any server across a multivendor environment. By adopting unified and integrated networks across public cloud and on-premise IT infrastructure, federal agencies gain greater visibility and can more readily determine whether there are gaps in their security posture or unauthorized devices accessing the network.
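One way to picture consistent policy and control across all of those places in the network is a single policy definition rendered into per-environment enforcement. The Python sketch below is a generic illustration under assumed names (the policy fields and target environments are hypothetical), not the behavior of any particular orchestration platform.

# Illustrative sketch: define a security policy once, render it per environment.
# The policy fields and target names are assumptions for illustration only.

POLICY = {
    "name": "deny-unknown-devices",
    "action": "deny",
    "match": {"device_registered": False},
    "log": True,
}

TARGETS = ["on-prem-datacenter", "aws-govcloud", "azure-government"]

def render(policy, target):
    """Produce a per-environment rule from the single source-of-truth policy."""
    return {
        "target": target,
        "rule": f"{policy['action']} traffic where {policy['match']}",
        "logging": policy["log"],
    }

if __name__ == "__main__":
    for target in TARGETS:
        print(render(POLICY, target))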

Faster Response Times

It takes as little as a few seconds for a cyberattack to occur – but the aftermath can cost millions and take years to overcome. Federal agencies hold the keys to citizens’ most critical data, whether it is Social Security information or health insurance records. For this reason, it is imperative that this data remain secure and that agencies be able to mitigate potential threats quickly.

However, for agencies that have not modernized, agility can be a pain point, simply because older networks are prone to latency and congestion when too many devices and bandwidth-intensive applications run at the same time. As federal agencies begin to migrate some of their data to the public cloud, they can facilitate the migration and optimize the future-state multicloud environment by deploying software-defined wide-area networks (SD-WAN) and advanced threat protection (ATP) software that not only transport bandwidth-intensive data between public cloud environments and on-premise IT infrastructure quickly and securely, but also respond to suspicious activity immediately.

A Foundation for Emerging Technology Adoption

Technology plays a central role in the administration’s mission to achieve a 21st century government. Most recently, the Alliance for Digital Innovation released a report, which found that the federal government could have reduced IT spending by $345 billion over the last 25 years had it invested in more commercially available solutions instead of architecting systems itself.

The high cost of custom, proprietary IT development leaves federal agencies with limited resources to ensure the security of their technology platforms, networks, and applications. By modernizing their networks with state-of-the-art, commercially available products, government agencies can reduce operations and maintenance costs. In addition, a modern best-of-breed network can support secure, cloud-based applications and other forms of cutting-edge technology, such as drones, artificial intelligence, augmented reality, and virtual reality, all of which can enable government to meet its modern-day mission goals.

The administration released the “Executive Order on Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure” two years ago, and there is still much work to be done when it comes to federal agencies modernizing their networks. By overhauling their networks, agencies address a key prerequisite for migrating to a hybrid multicloud environment and realizing the agility, security, and cost benefits it offers.


TIC 3.0 Will Remove a Significant Cloud Barrier

By: Stephen Kovac, Vice President of Global Government and Compliance at Zscaler

The Office of Management and Budget, in coordination with the Department of Homeland Security, recently proposed an update to the Trusted Internet Connections (TIC) policy: TIC 3.0. Still in draft form, TIC 3.0 proposes increased cloud security flexibility for federal agencies and the opportunity to use modern security capabilities to meet the spirit and intent of the original TIC policy.

During MeriTalk’s Cloud Computing Brainstorm Conference, I had the opportunity to present a session with Sean Connelly, Senior Cybersecurity Architect, CISA, DHS – or as I like to call him “Mr. TIC.” We discussed how the revised TIC 3.0 policy will remove cloud barriers and accelerate Federal cloud transformation. Connelly, who has been with DHS for the last 6 years, helped lead the TIC initiative, including recent updates to TIC 3.0.

Challenges for TIC in today’s environment

Connelly first explained that the policy originated in 2007 as a way for OMB to determine how many external connections were being used by Federal networks. The number of connections was “eye-opening” – and OMB found the security surrounding these connections wasn’t consistent, even within the same agency. The original policy required external connections to run through the TIC with a standard set of firewalls to give agencies baseline security. But today, as the number of mobile devices grows and cloud adoption expands, the perimeter is dissolving. This evolving landscape makes it difficult for agencies to determine which connections are internal or external to their network.

Where do we go from here?

When I asked Connelly how TIC 3.0 will modernize internet security, he echoed Federal CIO Suzette Kent by saying “flexibility and choice.” Instead of two choices – internal or external – TIC 3.0 allows three: low, medium, and high trust zones. He said, “it changes the game entirely.” Agencies now have a responsibility to determine the appropriate trust zone for their networks.

Connelly added, “If you look at today’s environment, you’ve gone from fixed assets and desktops – and now you have mobile assets, mobile devices, and pretty soon the platform is not even going to matter… so we have to make sure the policy and reference architecture can support all three models going forward.”
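As a purely illustrative way to think about choosing among those zones, the sketch below assigns a trust zone to a traffic flow based on two assumed signals; the criteria are hypothetical and are not drawn from CISA’s guidance.

# Purely illustrative sketch of assigning a TIC 3.0 trust zone to a traffic flow.
# The criteria below are assumptions for discussion, not CISA guidance.

def trust_zone(managed_endpoint: bool, agency_controlled_network: bool) -> str:
    if managed_endpoint and agency_controlled_network:
        return "high"
    if managed_endpoint or agency_controlled_network:
        return "medium"
    return "low"

# Example: a managed laptop on a home network might land in a medium trust zone.
print(trust_zone(managed_endpoint=True, agency_controlled_network=False))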

Catalog of use cases

One important aspect of the draft TIC 3.0 policy is the addition of use cases that encourage moving TIC functions away from perimeter-based, single-tenant appliances to a multi-tenant, cloud service model. As agencies develop TIC 3.0 solutions, it is vital they share them across government, providing other IT leaders the opportunity to compare their security requirements, review the viable and tested options, and avoid reinventing the wheel.

Connelly shared that the use cases will come out on a consistent basis and will result in a “catalog approach to use cases.” Agencies can propose pilot programs through the Federal CISO Council; then DHS and OMB will work with the agencies on their pilots. The pilot programs will provide agencies with the use case examples and lessons learned.

When can we expect the final policy?

The final TIC 3.0 policy will be issued later this year. Connelly confirmed the final policy will look “very similar” to the draft policy.

Increased cloud adoption across the federal space will lay the foundation for emerging technology, shared services, and the ability to meet the expectations of a federal workforce that wants simple, seamless access to applications and data.

TIC 3.0 is an important step forward to expand cloud security options and remove a significant cloud barrier. With these flexible new guidelines, we should see accelerated cloud adoption in government. I’m excited to see the innovation ahead.
