The Right Policy to Protect Remote Workers

In March, the White House released guidance that encouraged government agencies to maximize telework opportunities for those at high risk of contracting the coronavirus, as well as all employees located in the D.C. area. Though there are still many government employees not yet authorized to telework, this guidance marks a turning point.

Telework modes of operation are not new – and neither are the threats that accompany them. But the attack surface has grown significantly in the past month. A large number of workers are operating in insecure environments ripe for phishing and malware attacks, while new tools like video conferencing solutions can be targeted for malicious use or expose data to attacks.

Old, binary policies are insufficient to meet the new security challenge. Previously, policy could be split between enterprise and remote workers. But when everyone from senior leaders to entry-level employees is working from home, more granular policy controls are required. Those controls still rely on the same bread-and-butter IT best practices, though, from hardware-based security to patching and data protection. Here are some security controls government IT pros should implement today to ensure their newly remote workforce isn’t a tremendous liability.

Managing Unsecured Environments

BYOD users, naturally, manage and own their own devices, and these devices live in unsecured environments and are exposed to attacks on the network. Consider a user who has four kids simultaneously logging into distinct telelearning systems on the same network he is now using for government work. How secure are the laptops, links, and teachers those kids are accessing? The reality is that network security is only as good as the link your kid clicked on last. As such, IT needs to push the latest patches as a requirement, enable multi-factor authentication (MFA) and enterprise rights management, and enforce good access control.

These best practices apply to workers who took a managed enterprise device home as well. Those devices also need protection against everything happening on the local wi-fi, in addition to enterprise access control (EAC). Before EAC, users connected to a network—and were only authenticated once they were already in. EAC, on the other hand, stops you at the front gate, verifying not just the user, but also that they have the proper local security software agents and updates. EAC was popular when the BYOD trend first gained steam, but many people saw it as too intrusive to be sustainable. Now, EAC is a key tool for helping to better manage laptops living in unsecured environments.
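
To make the gatekeeping idea concrete, here is a minimal sketch of the kind of pre-admission check an EAC system might perform. The field names, patch string, and thresholds are illustrative assumptions, not any particular product’s API; the point is simply that both the user and the device’s security posture are verified before the device is allowed onto the network.

```typescript
// Illustrative only: posture fields and policy values are hypothetical.
interface DevicePosture {
  userAuthenticated: boolean;   // e.g., MFA completed
  agentInstalled: boolean;      // required endpoint security agent is present
  patchLevel: string;           // OS patch build reported by the agent
  definitionsAgeDays: number;   // age of anti-malware definitions
}

const REQUIRED_PATCH_LEVEL = "2020.04"; // assumed policy value

function admitToNetwork(posture: DevicePosture): boolean {
  // EAC stops the device at the front gate: identity AND posture must pass
  // before any network access is granted.
  return (
    posture.userAuthenticated &&
    posture.agentInstalled &&
    posture.patchLevel >= REQUIRED_PATCH_LEVEL &&
    posture.definitionsAgeDays <= 7
  );
}

// A laptop with MFA completed but month-old definitions is kept at the gate.
console.log(admitToNetwork({
  userAuthenticated: true,
  agentInstalled: true,
  patchLevel: "2020.04",
  definitionsAgeDays: 30,
})); // false
```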

Cloud Services and SaaS 

Implementing security for VDI systems and cloud services includes some security basics as well: data protections, virtualization security for both the enterprise data center and at the access points, application security, secure boot, and so on. With software-as-a-service (SaaS), client access to cloud services should be protected through MFA and complemented with network transport encryption to offer protection on both sides. Appropriate data protection in enterprise rights management (ERM) can control access to the data through the cloud services and back to the data center. Understanding how clients are using the services and what data they are accessing is where the ERM decisions come into play.

Monitoring Threat Intelligence

IT pros also need a renewed focus on managing the threat of mistakes, misuse, and malicious insiders. There is always the risk of a user doing something careless or malicious, but that risk is exacerbated now; people are stressed and more apt to take shortcuts and make bad decisions. Normally, protecting against such risks means monitoring for anomalous use, like an employee working at midnight. But in the new world order, everyone’s hours are off. Many employees are working unpredictable “shifts” in an attempt to balance childcare and other responsibilities. Agencies need to be able to sift through these anomalous behaviors quickly and extend their threat intelligence and monitoring capabilities to the new edge where the users are now.
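
As a rough illustration of the baselining involved, the sketch below flags logins at hours a given user has rarely worked. The data shape and threshold are assumptions for illustration, and, as noted above, the baseline itself has to be re-learned as telework scrambles everyone’s schedules.

```typescript
// Hypothetical sketch: flag logins at hours this particular user rarely works.
type HourHistogram = number[]; // 24 buckets of historical login counts, one per hour

function isAnomalousLogin(history: HourHistogram, hour: number, minShare = 0.02): boolean {
  const total = history.reduce((sum, n) => sum + n, 0);
  if (total === 0) return false;        // no baseline yet: don't alert
  const share = history[hour] / total;  // how often this user works at this hour
  return share < minShare;              // rare hour for *this* user => worth review
}

// Example: an analyst who historically logs in between 08:00 and 18:00 appears at 23:00.
const history: HourHistogram = new Array(24).fill(0);
for (let h = 8; h <= 18; h++) history[h] = 40;
console.log(isAnomalousLogin(history, 23)); // true: a flag for review, not an automatic block
```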

Policy-based access control and enforcement for applications and data at both the enterprise and the cloud level are also important to thwart misuse and abuse by users who are already authenticated. Enforcing ERM along with encryption, for instance, can further protect data so it can’t leave a laptop, or prevent it from being copied onto a USB.

The bottom line is that agencies now have to think differently about security issues related to teleworking. IT pros must monitor threats and secure everything from services to endpoints. While the modes of operation for telework are the same, the threat surface has grown. Policy controls must be far more granular in order to be effective.

The Best Things We Build Are Built to Last

We’ve spent the last several months in a bit of a surreal version of normal, but there is light at the end of the proverbial tunnel. When we emerge from the current environment, the reality is that we will be better off from a security perspective than we were when we went in. The need to increase access capacity for cloud-based apps, VPN, or “other” has required us to think a lot harder about the security that comes along with this extra access, to the point where “building it in” makes a lot more sense than “bolting it on.”

Basic security hygiene items like DNS security and multi-factor authentication (MFA) can be the first and best line of defense for any access environment, which certainly includes an extreme telework scenario. The good news is that the protections don’t stop when our access environments return to “normal.” Since these security capabilities are part of a Zero Trust lifestyle, we get to carry these protections forward as they have now become our best practices.

We were gonna get there eventually, but we were forced to step on the gas

One of the biggest challenges Federal agencies have faced, beyond the capacity issue, is figuring out how to marry the legacy technologies we have kept running by sheer will with the cloud- and mobile-focused technologies that make the most sense for a more remote deployment. Agencies have been moving in this direction for years, but the “extreme telework scenario” has accelerated the shift to the point of making it uncomfortable and sometimes painful. One example is legacy agency authentication and user authorization. We’ve spent the past decade building out the “I” in PKI (public key infrastructure), and while this works fairly well in our old world (users sitting in offices, accessing applications from a desktop with a smartcard reader), it doesn’t work so well in this new normal. The good news is that there is a compromise to be made: a way to leverage existing investments and make them work in a more innovative world.

Duo has been focused on being a security enabler for agencies as they make their journey to a cloud and mobile world, but we also realize that a great deal of work and resources have been invested in the smartcard infrastructure that has powered our identity, credentialing, and access management (ICAM) systems. We have partnered with experts in this arena, folks like CyberArmed, to leverage that investment and the strong identity proofing that solutions like this provide.

CyberArmed’s Duo Derived Solution

NIST has shown us the way

When NIST smartly separated the LOA structure of 800-63 into proofing (IAL) and authentication (AAL), it gave agencies the flexibility to deploy the right tools for the right job and to apply a risk-based, Zero Trust approach to secure access. The Office of Management and Budget (OMB) followed suit and aligned its updated ICAM guidance (M-19-17) to give agencies the flexibility to make risk-based deployment decisions. This flexibility helps agencies be more agile in the face of whatever might be thrown at them, while still providing strong, consistent identity security. This identity focus is exactly what we need as we make our cloud journeys.
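
As a concrete (and deliberately simplified) illustration of what a risk-based mapping can look like, the sketch below pairs the impact of a compromised transaction with the identity and authenticator assurance levels an agency might require. The mapping itself is an assumption for illustration, not official NIST or OMB guidance.

```typescript
// Simplified illustration of the SP 800-63-3 model: proofing strength (IAL)
// and authenticator strength (AAL) are chosen per risk, not one-size-fits-all.
type Impact = "low" | "moderate" | "high";

interface AssuranceRequirement {
  ial: 1 | 2 | 3; // identity proofing strength
  aal: 1 | 2 | 3; // authenticator strength
}

const assuranceByImpact: Record<Impact, AssuranceRequirement> = {
  low:      { ial: 1, aal: 1 },
  moderate: { ial: 2, aal: 2 }, // e.g., remote proofing plus MFA
  high:     { ial: 3, aal: 3 }, // e.g., stronger proofing plus a hardware-backed authenticator
};

// A public FAQ portal and a benefits-adjudication system get different requirements.
console.log(assuranceByImpact["low"], assuranceByImpact["high"]);
```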

Now that we’re getting back to a small amount of normal, we need to take stock of the things we’ve been able to accomplish and the investments we’ve made to shore up our security and prepare us for the accelerated cloud & mobile journey. The things we’ve done will not be in vain.

How Organizations can Respond to Risk in Real Time

The NIST Cybersecurity Framework, initially issued in early 2014, outlines five functions with regard to cybersecurity risk: identify, protect, detect, respond, and recover. Of these functions, those on the far left encapsulate measures that could be considered pre-breach; those on the right, post-breach. Far too often, however, government agencies tip the scales too far to the left.

While the NIST Cybersecurity Framework offers a solid foundation, security teams remain mired in reactive strategies – a tremendous problem, considering that reactive work limits an organization’s ability to identify and operationalize protective actions before a concern becomes significant.

Traditional approaches to data protection usually entail buying and implementing tools that are binary and reactive. A particular event is seen as good or bad – with walls, blocks, and policies put in place to deal with the latter. This leaves government systems drowning in alarms and alerts, while limiting employees’ ability to do their jobs. If your policy is to block all outbound email attachments that include anything proprietary or sensitive, for instance, HR can’t do something as simple as send out a job offer.

By continuously identifying potential indicators of risk at an individual level, organizations can instead take a proactive security posture – one in which responding to and recovering from threats is an ongoing effort, not a piecemeal one. Here are three key components of a truly proactive approach.

Continuous Risk Evaluation

Users are continuously interacting with data, which means organizations must be continuously monitoring those interactions for threats, as opposed to scrambling once a breach has been flagged. Risk is fluid and omnipresent; removing risk wholesale is impossible. Instead, the goal should be to detect and respond to excessive risk, and that can only be done through continuous evaluation. This is especially important as agencies rely on a growing amount of data, which is stored everywhere and accessed anywhere.

Continuous risk evaluation means cybersecurity doesn’t end after a user’s behavior is labeled as “good” and access or sharing is granted (or vice versa) – as would be the case with a traditional, static approach. Instead, risk profiling continues beyond that initial decision, monitoring what a user does when granted access and whether their behavior is trustworthy. Gartner, for one, defines this approach as Continuous Adaptive Risk and Trust Assessment (CARTA).

Leverage Data Analytics

In order for risk levels to be assessed, organizations must have full-stack visibility – into all devices and interactions taking place on their systems – and the ability to make sense of a tremendous amount of behavioral data. How does a series of behaviors by Employee A stack up against a different series of behaviors by Employee B? Where’s the risk and how do we mitigate it? Analytics are required to not just answer such questions, but answer them quickly.

Multiple data analytics techniques can help organizations flag excessive risk: baselining and anomaly detection, similarity analysis, pattern matching, signatures, machine learning, and deep learning, to name a few. The key is to focus analysis on how users interact with data. Remember, risk is fluid. The risk of a behavior – even an unusual one – will depend on the sensitivity of the data being accessed or shared.
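
To show how that focus on data plays out, here is a minimal sketch that combines an anomaly score with the sensitivity of the data involved. Both inputs are assumed to come from upstream analytics (baselining, pattern matching, classification labels), and the weights are arbitrary placeholders, chosen only to show that sensitivity should amplify the score.

```typescript
// Sketch under assumptions: anomaly and sensitivity scores are produced elsewhere.
interface DataInteraction {
  anomalyScore: number;    // 0..1, how unusual the behavior is for this user
  dataSensitivity: number; // 0..1, classification of the data being touched
}

// Risk is not anomaly alone: an odd action against public data matters less than
// a routine-looking bulk pull of sensitive records. Weights are illustrative.
function riskScore(event: DataInteraction): number {
  return event.anomalyScore * (0.3 + 0.7 * event.dataSensitivity);
}

console.log(riskScore({ anomalyScore: 0.9, dataSensitivity: 0.1 }).toFixed(2)); // 0.33
console.log(riskScore({ anomalyScore: 0.5, dataSensitivity: 0.9 }).toFixed(2)); // 0.47
```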

Automate the Response to Risk

Data analytics can reduce the time to identify a threat, but it’s also important to automate threat response. Once again, too many organizations simply respond to a growing number of alerts by throwing headcount at them. Instead, data loss protection should be programmatic, with policy automated at the individual level.

Resources should be thrown only at the highest immediate risks, while routine security decisions should be handled automatically. With automation, organizations can actually reduce their headcount without compromising security – saving money while achieving precise, real-time risk mitigation.
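
A minimal sketch of that tiered automation follows; the thresholds and actions are assumptions for illustration, with scarce human attention reserved for the highest-risk events only.

```typescript
// Hypothetical tiers: routine decisions are fully automated; only the worst
// cases generate work for an analyst.
type Action = "allow" | "step-up-mfa" | "block-and-alert" | "escalate-to-analyst";

function respond(risk: number): Action {
  if (risk < 0.3) return "allow";            // routine: no human involved
  if (risk < 0.6) return "step-up-mfa";      // automated friction, not a ticket
  if (risk < 0.85) return "block-and-alert"; // automated containment
  return "escalate-to-analyst";              // human review for the highest risks
}

console.log(respond(0.2), respond(0.5), respond(0.9)); // allow step-up-mfa escalate-to-analyst
```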

The Bottom Line

The right side of the NIST Cybersecurity Framework should focus on proactive detection, response, and recovery – steps that must happen concurrently and continuously. Identifying valuable risk insights and turning them into actionable protective measures remains challenging in government environments, especially with more data and devices on networks than ever. But with continuous evaluation, analytics, and automation, it can be done. Too many organizations are drowning in alarms and alerts, while struggling to review and triage security content, adjust system policies, and remediate risk. By taking a holistic, proactive approach, organizations can identify and respond to risks in real time, adapting their security as needed.

Unleash the Power of Federal Data With Automated Data Pipelines

During these unprecedented times, it’s more important than ever to maximize the value of our data. As first responders, health care professionals, and government officials lead the charge on COVID-19 response efforts, they need real-time insight into rapidly-changing conditions to make the best decisions for public health.

Centers for Disease Control and Prevention (CDC) Director Dr. Robert Redfield recently said that preparation for reopening the economy has “got to be data driven.” With data being a key factor in our next steps for COVID-19 response and prevention, it’s important that vital data is readily available – without time constraints.

The challenge? All of this information lives inside various agencies, companies, departments, and IT environments. Securely bringing it together for analysis is complicated and time consuming. Add to that the requirement to synthesize and deliver the data to key stakeholders in a way that’s immediately actionable, and you have a tall mountain to climb. Traditional methods of data analysis can take days, and with the challenges we’re facing as a society, we don’t have that kind of time.

An exciting area of innovation that’s helping agencies shorten time-to-value in data analysis is automation. Automated data pipelines are vital for migrating and transforming data into usable, actionable formats. They provide frictionless, real-time ingestion of data, and when paired with on-demand analytics, can be used by agencies to unlock unprecedented levels of visibility, insight, and information sharing. And that empowers stakeholders to make fast, smart decisions.
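
For a sense of what “ingest, transform, deliver” means in practice, here is a toy pipeline that reads raw daily case counts, aggregates them, and publishes a file an analytics tool could pick up. The file names and columns are invented for illustration; a production pipeline would ingest continuously and load into an analytics platform rather than a local JSON file.

```typescript
import { readFileSync, writeFileSync } from "node:fs";

// Assumed input format: CSV with columns county,date,cases (hypothetical).
interface CaseRecord { county: string; date: string; cases: number; }

function ingest(path: string): CaseRecord[] {
  const rows = readFileSync(path, "utf8").trim().split("\n").slice(1); // skip header row
  return rows.map((row) => {
    const [county, date, cases] = row.split(",");
    return { county, date, cases: Number(cases) };
  });
}

function transform(records: CaseRecord[]): Record<string, number> {
  // Reduce raw rows into something stakeholders can act on: totals per county.
  const totals: Record<string, number> = {};
  for (const r of records) totals[r.county] = (totals[r.county] ?? 0) + r.cases;
  return totals;
}

function publish(totals: Record<string, number>, outPath: string): void {
  writeFileSync(outPath, JSON.stringify(totals, null, 2));
}

publish(transform(ingest("daily_cases.csv")), "county_totals.json");
```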

To learn more about how automated data pipelines can bring speed and efficiency to your data operations in this challenging environment, register today for Qlik and Snowflake’s webinar with MeriTalk, “Improving the Value of Data with a Modern Data Platform,” on Thursday, April 23 at 1:30 p.m. EDT.

My Cup of IT: TMF Boost Can Break Modernization Logjam

What does Covid-relief legislation mean for Fed IT? So far, not nothing, but in the big picture, not enough to move the needle on any large-scale modernization push.

That might be changing, and soon – with the Technology Modernization Fund (TMF) as the vehicle.

While the $2.2 trillion stimulus relief bill rushed through Congress and signed by President Trump last week didn’t feature extra TMF funding, an alternative bill written by House Democrats includes a massive $3 billion expansion of TMF “for technology-related modernization activities to prevent, prepare for, and respond to coronavirus.”

Little is certain in the current environment, but one thing to count on is that more relief bills are on the way, and that $3 billion pop for TMF is ready to roll into the next bill.

Consider as well: Congressman Mark Meadows (R-NC), long-time advocate for IT reform on the House Oversight and Reform Committee, is now moving to serve as chief of staff in the White House. That provides a powerful carrier for the IT modernization message in the hallways of power.

Here’s the low-down on how that $3 billion in IT modernization might play out.

Never Let a Good Crisis Go to Waste

The powerful in D.C. are suddenly in touch with telework – and the role of government systems and networks in supporting truly mission-critical work.

Agency secretaries and politicians see that cloud systems scale and perform – legacy, on-premises applications and systems, to quote Seinfeld, not so much … Secretaries struggling to host disaster portals and mobilize relief efforts suddenly want one throat to choke – that means concentrating authority in the hands of the CIO. Funny way to finally get those authorities outlined in FITARA. The importance of IT in allowing government to deliver on its mission is suddenly crystal clear – and clear from the top down.

Why TMF?

So, if that $3 billion in IT funding happens, how will it be distributed and managed? It’s not the only path forward, but it makes a lot of sense to utilize the TMF to get IT modernization done. Here’s why, and some changes that need to happen to make this approach viable.

As it exists now, TMF is both anorexic and awkward. After an original funding of $100 million for Fiscal Year 2018, new funding for FY2019 and FY2020 dwindled to only $25 million per year.

Further, the payback requirements have meant that very few agencies – including some of the agencies that actually sit on the TMF review board – have bothered to apply for those funds themselves.

Why Now?

So, why TMF now – and what needs to change?

TMF has a lot going for it as a framework to distribute and manage new funding for IT modernization – it’s quick (it already exists), it’s tied to the budget process, and it provides for required oversight. However, we need one very significant change to TMF – we need to relieve or remove the requirement to pay back funds invested to support telework and crisis response. If that happens, agencies will rush to access these funds and undertake true modernization efforts. Providing $3 billion in funding through the TMF will not do the slightest bit of good if agencies don’t access those funds because of repayment obligation fears.

Nobody knows when this crisis ends – but investing in Federal IT modernization is critical to maximizing our nation’s relief response, and for better mission performance after that. Better IT is not a nice to have…

Hyperconverged Infrastructure for AI, Machine Learning, and Data Analytics

By: Scott Aukema, Director of Digital Marketing, ViON Corporation

When you hear the terms “artificial intelligence” (AI) and “machine learning” (ML) (and let’s be honest, if you have even a sliver of interest in technology, it’s difficult not to), hyperconverged infrastructure (HCI) may not be the first thing that comes to mind. However, HCI is beginning to play an important role in high-performance use cases like AI and ML with its ability to capture, process, and reduce vast quantities of data at the source of creation in a small form factor. In this blog, the third in a 3-part series on hyperconverged infrastructure, we’ll examine the role HCI is playing in deploying a complete AI solution. If you’d like to read the previous blogs, you can read about the role that HCI plays in enabling a disaster recovery solution and how it is changing the dynamics for edge computing and remote offices.

Hyperconverged infrastructure at the core of a hybrid multi-cloud model bridges the gaps among public cloud, on-prem private cloud, and existing data center infrastructure, enabling organizations to manage end-to-end data workflows to help ensure that data is easily accessible for AI. As organizations develop their AI/ML strategy and architect an IT environment to support it, the resources needed for a successful deployment quickly become evident. This is where many organizations are turning to a multi-cloud environment to support their AI workloads. A recent study by Deloitte found that 49 percent of organizations that have deployed AI are using cloud-based services1, making it easier to develop, test, refine, and operate AI systems. Hyperconverged infrastructure, in concert with a robust Cloud Management Platform (CMP), can accelerate deployment times, making it easier to stand up and take down an AI environment. These AI services in a consumption model provide the agility and resources needed to stand up an AI practice without making a significant investment in infrastructure and tools. HCI is an essential component of the hybrid multi-cloud environment for AI and ML.

In addition to acting as a catalyst between the data center and the cloud, HCI is well positioned to support edge computing – the processing of data outside of the traditional data center, typically at the edge of a network. Data collected at the edge is very often not used to its full potential. IT organizations are looking to hyperconverged infrastructure in these instances to capture data where it is created, compress it, and transfer it to a cloud or centralized data center at another site. In many instances, the edge can mean hundreds of locations dispersed throughout the country or the world. Consolidating data from these locations allows organizations to create more complete data lakes for analysis to uncover new insights. By combining servers, storage, networking, and a management layer into a single box, HCI eliminates many of the challenges of configuration and networking that come with edge computing. In addition, organizations can coordinate hundreds of edge devices through a CMP, streamlining management, reducing complexity, and reducing costs. Leveraging HCI for edge computing enables data to flow more freely, whether into a centralized data lake or a public or private cloud environment where it can be used to begin the learning and inference process using the available AI models. Once these models are trained in the cloud, they can then be deployed back to the edge to gain further insights.
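
The capture-compress-forward pattern described above can be sketched in a few lines. The ingest URL, batch size, and transport below are placeholders; the actual mechanism (HTTPS, a message queue, an HCI platform’s own replication) would be whatever the agency’s architecture dictates.

```typescript
import { gzipSync } from "node:zlib";

const CENTRAL_INGEST_URL = "https://datalake.example.gov/ingest"; // placeholder endpoint
const BATCH_SIZE = 1000;                                          // assumed batch size

const buffer: object[] = [];

// Called for each reading captured at the edge; compresses and forwards in batches
// so the data lands in the central data lake where models can be trained.
async function captureAtEdge(reading: object): Promise<void> {
  buffer.push(reading);
  if (buffer.length >= BATCH_SIZE) {
    const batch = buffer.splice(0, BATCH_SIZE);
    const compressed = gzipSync(JSON.stringify(batch)); // shrink before it crosses the WAN
    await fetch(CENTRAL_INGEST_URL, { method: "POST", body: compressed });
  }
}
```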

Hyperconverged infrastructure can streamline edge computing, enable multi-cloud environments, and act as a catalyst to aggregate data for cloud-based AI applications. For agencies that are geographically dispersed and seeking to leverage data from these disparate locations for a more robust analytics practice, HCI should be considered as part of their overall AI strategy. Since we are still in the infant stages of AI and ML, organizations should strive to be flexible and nimble to adapt to changes. Hyperconverged infrastructure provides that agility.

Hyperconverged infrastructure provides a versatile platform for applications, which helps organizations accelerate a variety of use cases. Hyperconverged solutions like Nutanix and Fujitsu’s PRIMERGY are helping agencies simplify deployment, reduce cost, improve performance, and easily scale up and scale out. Whether it’s AI, edge computing, disaster recovery, enabling a multi-cloud environment, or any other of a multitude of use cases, hyperconverged infrastructure should be considered as part of an IT modernization strategy.

1 – State of AI in the Enterprise, 2nd edition, Deloitte Insights, October 22, 2018

Get Ready for the Passwordless Future

By: Sean Frazier, Advisory Chief Information Security Officer – Federal, Duo Security

Most of us have a standard list of go-to passwords for various logins and websites – each fluctuating slightly with upper or lowercase letters, extra numbers, symbols and punctuation. Some of us keep them scribbled on a notepad, while others click “remember me” when logging onto sites, to speed up the process and relieve the stress of remembering them time and time again.

But as cyberattacks become more sophisticated, and Federal agencies work to modernize their IT systems and protect vital data, passwords are becoming a thing of the past. And the push toward a passwordless world introduces the need for new standards and technical innovation.

Everything Old Is New Again – Updating Legacy Technology

Truth be told, we have been lazy when it comes to passwords. Administrators put all the onus on the end user to manage the password lifecycle – requiring them to use longer passwords, a mix of characters/cases, etc. – making it harder and harder for users to manage the various passwords they need for different applications and sites.

The idea of a passwordless world is not entirely foreign to the Federal government. But while 80 percent to 85 percent of agencies use Personal Identity Verification (PIV) cards and/or Common Access Cards (CAC), these are not ideal solutions for agile and modern IT and application access. They are difficult to issue and replace when lost; they sometimes can’t be used to authenticate to cloud applications; and they are a non-starter from mobile devices. As such, these legacy identity verification technologies don’t lend themselves well to IT modernization, and agencies haven’t done the appropriate plumbing exercises to update federation by using newer federation technologies such as OIDC or SAML.

Agencies are also dealing with Public Key Infrastructure (PKI) stacks that are, for the most part, at least 15 years old. The financial burden of maintaining these PKI stacks over their lifecycle can be immense, and modern technology is passing them by. Government organizations need to find a balance between working with these pre-implemented legacy systems, in which they have heavily invested, and adopting new, standards-based (more flexible, more affordable) authentication technologies in the commercial technology space.

The Cresting Wave of the Authentication Future

In March 2019, the World Wide Web Consortium (W3C) announced Web Authentication (WebAuthn) as the official passwordless web standard. WebAuthn is a browser-based API that allows for web applications to create strong, public key-based credentials for the purpose of user authentication. It will enable the most convenient and secure authentication method for end users – the device that they are already using – to validate that the user is who they say they are via a biometric.
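
For a flavor of what this looks like to a web application, here is a browser-side registration sketch. The relying party, user record, and challenge handling are placeholders; in a real deployment the challenge is issued by the server, and the returned credential is sent back for verification and storage.

```typescript
// Minimal WebAuthn registration sketch (browser). Values are illustrative.
async function registerAuthenticator(): Promise<Credential | null> {
  const challenge = crypto.getRandomValues(new Uint8Array(32)); // server-issued in practice
  return navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { name: "Example Agency", id: "login.agency.example" }, // placeholder relying party
      user: {
        id: new TextEncoder().encode("employee-12345"),           // placeholder user handle
        name: "jane.doe@agency.example",
        displayName: "Jane Doe",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],        // ES256
      authenticatorSelection: { userVerification: "required" },   // biometric or PIN on the device
      timeout: 60_000,
      attestation: "none",
    },
  });
}
```

The private key never leaves the user’s device; the server stores only the public key, which is what makes the credential phishing-resistant and password-free.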

While WebAuthn is a nascent standard, it is the wave of the future. Five years ago, many organizations and individuals were wary of biometrics. No one trusted fingerprint authentication or facial identification. While these technologies are not perfect, the Apple platform, for example, proves they work at scale by processing millions of transactions per day.

Shifting from traditional passwords can seem burdensome, but a passwordless authentication method doesn’t have to start from the ground up. Apple, Google, and Microsoft have already added WebAuthn support to their products. This commercially available technology can help agencies leverage industry standards like WebAuthn to improve security and drive flexibility. Instead of building custom models, putting trust into top tech providers in the space can help agencies save money and get rid of the security baggage associated with traditional passwords.

Of course, there will always be hiccups in technology. When all else fails, passwords will be necessary as a backup for authentication systems when biometrics fall short. But shifting from the traditional passwords of the past to the authentication mechanisms of the future is the logical next step for the public and private sectors alike. It’s the PKI that we all know and love, but just done the right way with strong protection and ease of use. With government’s buy-in of updated authentication models, agencies can modernize their IT infrastructures more easily and ensure stronger, safer, and more secure protection for their data.

To learn how your agency can make the move toward a passwordless future, check out Duo Security’s website for more information.

Hyperconverged Infrastructure for Remote/Branch Offices & Edge Computing

By: Scott Aukema, Director of Digital Marketing, ViON Corporation

Hyperconverged infrastructure (HCI) is playing a significant role in building an enterprise multi-cloud environment. The benefits are well documented – you can learn more about them in a new white paper developed in collaboration with ViON, Fujitsu, and Nutanix, “Simplifying Multi-Cloud and Securing Mission Progress.” In addition to driving a cloud foundation, hyperconverged infrastructure is driving other use cases. In our first blog, we examined the impact that HCI can have in a disaster recovery solution. In this installment, we’ll discuss how HCI is changing the dynamics for remote offices and edge computing.

Edge computing moves processing power closer to end-users and their devices, in contrast to a traditional, more centralized model, where resources reside within a data center. As applications such as advanced analytics become more resource intensive and latency is an issue, having servers, storage, networking and hypervisor software close to the data source can be a significant advantage.

Currently, around 10 percent of enterprise-generated data is created and processed outside a traditional centralized data center or cloud, and Gartner predicts this figure will reach 75 percent by 2025. Gartner also estimates that 40 percent of larger enterprises will adopt edge computing as part of their IT infrastructure in this timeframe. This shift is driven largely by the growth in raw data, with volumes too massive to transmit to a centralized architecture for timely processing.

Organizations are turning to hyperconverged infrastructure to simplify complexity for both hardware and virtualized environments. The nature of hyperconverged infrastructure makes it easy to use, eliminating many of the configuration and network challenges associated with edge computing. Benefits of HCI at the edge include1:

  • High-density compute and storage resources are self-contained, easily deployable, and have a small footprint;
  • Integrated hardware & software stack come preconfigured and can be easily managed as a single system remotely;
  • Scalable architecture can easily scale up and out to support growth, and next-generation applications such as AI and IoT;
  • Faster application performance for end-users and lower network latency with reduced WAN traffic.

Hyperconverged infrastructure is well suited to keep pace with the rapid growth of data and the need to support multiple remote sites. This is especially true in environments that ingest or create massive data sets and need to conduct real-time or near real-time analysis of that data. In those instances, moving large-scale data sets to a central data center is time consuming, inefficient, and can be costly. It is in these instances that HCI is well positioned to help organizations ingest, analyze, and gain insights from data, and to act quickly on those insights when needed.

Many organizations don’t have IT support at their remote or branch offices. Edge computing is designed to run with limited or no dedicated IT staff, which means the infrastructure must be easy to implement and manage. It has to connect quickly back to the primary data center and cloud when needed. These requirements are what make HCI well suited to edge computing. For IT organizations, hyperconverged infrastructure provides the flexibility to quickly stand up infrastructure in new sites, easily manage it from a single remote location, and provide local users with high-performance compute for critical, resource-intensive applications. For users, it provides the operational autonomy to gain insights at the source of data ingestion, rather than migrating data to a centralized data center.

Finally, consider HCI’s role in a hybrid multi-cloud environment. A model that has centralized on-prem data center infrastructure integrated with public, private, and hybrid clouds and micro data centers at the edge is an architecture that delivers on multiple fronts. When aligned with a robust cloud management platform, orchestration between the various environments becomes seamless, providing governance and management through a single interface. Organizations get the flexibility and efficiency of cloud computing tightly integrated with on-prem infrastructure to deliver the right level of performance, when and where it is needed.

1 TechTarget Hyper-converged edge computing Infographic

On Trend: Tailored Technology for Customization at the Edge

A perfectly tailored suit is an investment. It’s worth it to pay for the perfect fit, high-quality material appropriate for the occasion, and a color that makes your eyes pop.

So why, when it comes to mission-critical technology solutions, are government agencies expected to buy off-the-rack?

As federal agencies expand nascent AI capabilities, deploy IoT technologies, and collect infinitely more data, missions require a customized, nuanced approach to transform edge capabilities.

To combat the data deluge resulting from AI and IoT advances, the Federal Data Strategy’s first-year action plan was released in late December. It urges the launch of a federal CDO Council, establishment of a Federal Data Policy Committee, and identification of priority data assets for open data – all by the end of January 2020. These are just the first steps to prepare for what’s already underway: government’s mass migration to the edge and the resulting proliferation of data. In just five years, Gartner projects, 75 percent of all enterprise-generated data will be processed outside of a traditional data center or cloud.

As we work to manage, analyze, and secure data collected at the edge, we need to evaluate solutions with the same standards we apply in the data center or cloud. To enable insights at the edge, federal teams need the same (or better) functionality – high compute, speed, power, storage, and security – but now in a durable, portable form. This may require equipment to tolerate a higher level of vibration, withstand extreme thermal ranges, fit precise dimensions, or incorporate specialized security requirements.

Partnering with Dell Technologies OEM | Embedded & Edge Solutions enables Federal SIs and agencies to integrate trusted Tier 1 infrastructure into solutions built for their specific mission requirements, or for those of their end users. For instance, working with our team, you might re-brand Dell Technologies hardware as part of your solution, leveraging specialized OEM-ready designs like our XR2 Rugged Server and Extended Life (XL) option. We also offer turnkey solutions designed by our customers and delivered through Dell Technologies, which allows us to further serve what we know are your very specific use cases.

As an example, our customer Tracewell Systems worked with Dell Technologies OEM | Embedded & Edge Solutions to customize the Dell EMC PowerEdge FX architecture, creating a family of products that meets the needs of their federal customer’s server sled field dimensions. Because Tracewell’s T-FX2 solution is still interoperable with standard Dell EMC server sleds, the end customer can now plug and play powerful Dell EMC compute and storage products from the field to the data center, cutting processing time from 14 to two days.

Feds at the edge need the right solution, and need that solution delivered quickly and securely. Agencies and federal systems integrators need a trusted partner that can help them compress time-to-market while ensuring regulatory compliance and providing a secure supply chain. While conducting a search for an OEM partner, agencies and systems integrators should consider vendors that will embrace challenges and engage in a deep, collaborative relationship. Moreover, dig beyond the design of the technology and ask:

  • Does the vendor have the buying power to guarantee production consistency, so the product can continue to be delivered as designed? If necessary, consider looking for a partner that will guarantee a long-life solution.
  • Are there lifecycle support services from problem identification, to customized design, to build and integration, to delivery, to experience?
  • Can the potential partner supply program management to handle all regulation and compliance complications?
  • Does the vendor have a broad portfolio for easy integration of solutions from edge to core to cloud?
  • Does the vendor have a deep focus on security – from the chip level through to delivery and support?

These critical aspects will help you design those faster, smaller, smarter solutions, and get them in the field more quickly.

With 900+ dedicated team members, the Dell Technologies OEM | Embedded & Edge Solutions group has embraced challenges for 20 years, creating more than 10,000 unique project designs. For more information about how our capabilities can provide you with the tactical advantage, click here.

About the Author:

Ron Pugh serves as VP & GM, Americas, for the Dell Technologies OEM | Embedded & Edge Solutions division. To learn more, visit DellTechnologies.com/OEM.

Hyperconverged Infrastructure for Disaster Recovery

By: Scott Aukema, Director of Digital Marketing, ViON Corporation

The benefits of hyperconverged infrastructure (HCI) as a foundation for building a cloud platform are well documented, as organizations are turning to HCI to simplify complexity for both hardware and virtualized environments. We’ve recently published a white paper in collaboration with Nutanix and Fujitsu on this topic, Simplifying Multi-Cloud and Securing Mission Progress. But recent research from MeriTalk1 highlights that HCI supports many other use cases beyond cloud. In this first installment in a series of three blogs, we’ll examine how HCI is impacting disaster recovery (DR) in the data center.

Disaster recovery incidents generally occur as a result of catastrophic loss of a data center (such as a fire or flood), equipment loss within the data center, component failure of a single server or set of devices, or loss of network access due to networking issues.2 Hyperconverged infrastructure, which combines storage, compute, and network hardware with hypervisor software to create and run virtual machines, is well suited to perform disaster recovery functions that protect against these types of incidents.

Similar to traditional disaster recovery, production environments can be regularly replicated to the HCI system. In the event the primary data center suffers a failure, the replica virtual machine can be quickly brought online and hosted on the hyperconverged system in a secondary data center. The HCI environment can host the failed-over workload until the primary location is back up and running, even running those workloads indefinitely if needed. A minimal sketch of this replication pattern follows the list below. There are a number of factors that make hyperconverged infrastructure ideally suited for a DR scenario:

  • Rapid (Instant) Recovery – Most HCI solutions include a comprehensive backup and recovery capability for short Recovery Time Objective (RTO) windows. Through the use of virtual machine snapshots, IT managers can replicate the secondary copy to an HCI system in the secondary data center or a replicated environment in the public cloud. This provides on-site and off-site copies of the latest version of all VMs in the snapshot.3 Compared with traditional infrastructures, HCI offers a faster means to failover data in a disaster.
  • Cloud Integration – The benefits of hyperconverged infrastructure are well-suited to building a hybrid cloud environment and the software defined nature of HCI means that public or private cloud can be used as a replication site for disaster recovery. In a multi-cloud world, managing public and private cloud environments, operating on-prem physical infrastructure, and moving virtual machines between these environments is essential to data protection. Hyperconverged Infrastructure, along with a cloud management platform (CMP) like ViON’s Enterprise Cloud can make it easier to orchestrate workloads across clouds and streamline the recovery process in the event of a disaster.
  • Software-Defined Infrastructure – Software defined infrastructure simplifies the automation and orchestration of data replication and can provide continuous updates of remote copies with little or no impact on local performance. The built-in snapshot capabilities of the HCI hypervisor can streamline disaster recovery by replicating data to the DR environment, providing flexibility, speed, and reliability to recover quickly during a failure.
  • Scale-Out Architecture – The inherent scale-out nature of a hyperconverged infrastructure allows IT organizations to procure additional storage, compute, networking, and virtualization resources quickly and easily to expand capacity as needed. This supports a greater volume of workloads in the same amount of time, with more resources available. Scale-out architecture not only provides the resources for backup and recovery, but also allows for architectural freedom.
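
The replication pattern referenced above can be reduced to a short loop: snapshot, copy off-site, prune. The hci object below stands in for whatever interface the chosen HCI platform exposes; no specific product API is implied.

```typescript
// Hypothetical snapshot-replication loop for one protected VM.
interface Snapshot { vm: string; takenAt: Date; }

interface HciCluster {
  snapshot(vm: string): Promise<Snapshot>;
  replicate(snap: Snapshot, targetSite: string): Promise<void>;
  prune(vm: string, keep: number): Promise<void>;
}

function protectVm(hci: HciCluster, vm: string, rpoMinutes: number): void {
  setInterval(async () => {
    const snap = await hci.snapshot(vm);          // local point-in-time copy
    await hci.replicate(snap, "secondary-site");  // off-site copy, ready for failover
    await hci.prune(vm, 24);                      // keep 24 copies to bound storage growth
  }, rpoMinutes * 60_000);                        // interval chosen to meet the RPO
}
```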

Examined independently, each of these factors delivers strong value for using HCI for DR purposes. But collectively, they represent a significant leap in efficiency, resiliency, and flexibility. While hyperconverged infrastructure may not be the right backup and recovery solution for every IT organization, it’s worthy of consideration.

In our next blog, we’ll examine how hyperconverged infrastructure is helping agencies accelerate their data analytics capabilities. In the meantime, we’d like to hear from you – what role is HCI playing in your data center?

1 – MeriTalk Infographic, “Hyper Converged Without the Compromise: How Next Gen HCI Can Modernize Fed IT.”
2 – ComputerWeekly.com – https://www.computerweekly.com/feature/Hyper-converged-infrastructure-and-disaster-recovery
3 – Network World.com – https://www.networkworld.com/article/3389396/how-to-deal-with-backup-when-you-switch-to-hyperconverged-infrastructure.html

Federal Contracting Trends: What can we Expect in 2020?

If 2019 is any indication, 2020 promises to be a very interesting year for Federal contracting. Federal contracting has experienced steady growth – an average of 6 percent year-over-year – for the last five years, according to Bloomberg Government. Spending jumped 9 percent in 2018, a 50 percent uptick over the growth rate of previous years.

A continuation of this growth next year will be driven primarily by an increase in defense spending and wider scaled deployment of new technologies across the Federal government.

This growth will present new opportunities, and of course, new challenges for those operating in this space. Here are some top-line issues that will impact Federal contracting in 2020 and beyond.

Teaming Agreements are on the Rise

Teaming arrangements have become more popular over the last several years in Federal contracting, and for good reason. Teaming helps contractors gain access to work, minimize risk, increase knowledge and offer a more competitive price point.

Small businesses view teaming as the most effective way to thrive in the competitive Federal market. There will be a significant uptick in teaming in 2020 as both smaller and larger contractors look to provide the types of capabilities needed to fulfill a wide variety of requirements on larger contracts.

Large Contracts are Becoming More Accessible to Small Businesses

The Federal government has made it a priority to award more Federal contracts to small businesses. In fact, nearly a quarter of prime contract dollars have gone to small businesses over the past five years. In 2020, small businesses will continue to have greater access to large contracts that previously went only to big contractors. A primary reason for this is the focus on HUBZone small businesses, which helps distribute contract proceeds to underutilized areas.

The Importance of Being “Employee-Centric”

It is no secret that top talent is at a premium in today’s Federal contracting market. In an effort to attract the best and the brightest, employers are offering perks such as flexible work schedules, increased benefits and telework. These efforts help attract and retain qualified candidates. However, a more “employee-centric” work environment is key for maximizing employee satisfaction. This can include processes and procedures that ensure open communication and flow of positive feedback. It also means offering flexibility in terms of the types of projects that team members can support.

Increasing Need for Best-in-Class Contracting Vehicles

For several years now, the Federal government has been pushing agencies to access best-in-class (BIC) government-wide acquisition contracts (GWACs) to increase their buying power. The overall assessment is that this increases the need for teaming across the small business community. Small businesses that are adequately prepared for these requirements should expect to expand in the coming year. The use of these vehicles will grow, which will allow government to access specific skills more efficiently.

Get Ready for 2020

With a growing economy and expanding government priorities, Federal contracting has a bright future in 2020. Being on top of trends will help contractors gain opportunities and smartly navigate whatever issues arise. While the trends discussed in this article are not all-encompassing, they will likely be in the headlines throughout the next twelve months and beyond.

Increase your awareness of these key trends and get ready to take advantage of the opportunities 2020 will bring.

By: Walter Barnes III, president and founder of PM Consulting Group

Taking Legacy Systems Off Life Support with Modern IT Service Management

As we all know too well, eighty cents of every Federal IT dollar still supports legacy IT.  Agencies miss opportunities to innovate – and more. A 2019 GAO study recognized a few of the consequences of keeping legacy systems on life support, including security risks, unmet mission needs, staffing issues, and increased costs.

Worse, the report found most agencies lack complete plans for modernizing their most critical legacy systems – plans with milestones for completing the modernization, a description of the necessary work, and details regarding the disposition of their legacy systems.

Alongside infrastructure and application modernization, IT leaders are considering opportunities to deliver seamless user experiences with modern IT Service Management (ITSM) – updated, automated processes that are intuitive and personalized, enable self-service, and provide agency leaders visibility into trends and needs.

Elevating the service experience and improving productivity

Cloud-based ITSM implementation is already contributing to modernization efforts and transforming the government service experience at both the Federal and State/Local levels.

The National Cancer Institute (NCI), for example, wanted efficient, streamlined services to enable staff members to focus on supporting cancer research and advancing scientific knowledge to help people live longer, healthier lives.

Using a ServiceNow IT Service Management solution, NCI was able to lower its number of incident, request, and change tickets with a single portal to submit, review, and address incoming items. With the new system in place between 2014 and 2018, the organization reduced incident tickets from 372,000 down to just 94,200; request tickets from 162,000 to 51,700; and change tickets from 5,400 to 900. This progress allowed the IT team to spend more time developing strategic priorities to give users a significantly better experience.

In another example, North Carolina took the step to replace legacy systems to create a more innovative platform for customers, as well as improve service rates and achieve economies of scale. The system helped manage and mitigate issues faster.  The state has since implemented the solution across nine state agencies.  On a single platform, each agency has a customized portal that meets their unique needs while providing enterprise-level views and analytics.  The best part?  On the back end, the state’s Department of IT has only one platform to maintain and upgrade, saving significant time and resources.

How can agencies ensure success?

GAO reported that over the last five years, successful modernization efforts have shared several common success factors, including actively engaging end-users and stakeholders and creating an interface that is consistent across systems.

What can your agency do to ensure a successful, seamless migration?

Lay the groundwork by digging deep into your current ITSM system – determine what works well and what needs to be changed.  Then, document the most important parts of your current legacy ITSM solutions and establish your core team.  Identifying current service delivery challenges will help prioritize successful processes, areas for improvement, and more.

When implementing, keep your systems simple, so you can scale as needed.  Communicate regularly with users before launching and through implementation.

Silo free = success

Technology leaders focused on modernization should avoid point solutions designed to do one thing. That approach only re-creates silos and complexity.

Alternatively, taking a platform approach means program teams can quickly deploy out-of-the-box service management capabilities, configure and build targeted applications, and integrate with outside applications.

These capabilities provide business and IT leaders with a platform to deliver new efficiencies and services and provide visibility and control to all stakeholders.

To learn more, download ServiceNow’s IT service management blueprint, “Why you shouldn’t be afraid of replacing your legacy ITSM system.”

Breaking Down the White House ICAM Memo: Key Steps for Federal Agencies

By: Bryan Murphy, Director, Consulting Services & Incident Response, CyberArk

Digital transformation is happening everywhere – and with increasing urgency in the Federal government. Advances in cloud technology have accelerated these initiatives; yet with those innovations come critical cybersecurity challenges, especially around identity management and data privacy.

The Federal government houses some of the most sensitive information anywhere, including Social Security numbers, medical records, and national security data – a virtual goldmine for attackers and other bad actors. The government reports tens of thousands of cyber incidents each year – numbers that are expected to grow as attacks only get more sophisticated. As government agencies modernize their digital infrastructures, new processes must be put in place to address the reality of today’s landscape.

This summer, the White House released a new policy memorandum for Identity, Credential, and Access Management (ICAM), addressing its importance in digital government operations and outlining new procedures agencies must adhere to.

There are two critically important parts:

  • Agencies of the Federal Government must be able to identify, credential, monitor, and manage subjects that access Federal resources. This includes the ability to revoke access privileges when no longer authorized in order to mitigate insider threats associated with compromised or potentially compromised credentials. It also includes the digital identity lifecycle of devices, non-person entities (NPEs), and automated technologies such as Robotic Process Automation (RPA) tools and Artificial Intelligence (AI).
  • Each agency shall define and maintain a single comprehensive ICAM policy, process, and technology solution roadmap. These items should encompass government-wide Federal Identity, Credential, and Access Management (FICAM) Architecture and CDM requirements, incorporate applicable Federal policies, standards, playbooks, and guidelines, and include roles and responsibilities for all users.

This guidance makes clear that federal agencies must now shift toward a dynamic, risk-based approach to securing federal information and infrastructure, one that requires a measurable, fully-documented risk management and technology adoption process. To ensure compliance with the new ICAM policy, agencies need to start with the following baseline essentials:

  • Understand the Breadth of “Identity”
    More than just a single user, identity encapsulates every device and application a user accesses through credentials, which present one of the greatest risk factors. An admin may be one single user, but if their credentials get compromised, an attacker can see everything they have access to – making it critical to have the right mechanisms in place to authenticate and track all of the identities within your infrastructure. Safeguards like step-up authentication and managerial approval processes help mitigate risk from privileged credential-based attacks before allowing access to critical assets and resources.
  • Manage Risk Through Privilege
    Since privileged and administrative accounts have been exploited in nearly every major attack affecting federal government agencies, the first priority needs to be securing privileged access. Security frameworks such as the Council on Cyber Security Top 20 Critical Security Controls, NIST, and others, have always maintained the importance of protecting, managing and monitoring privileged and administrative accounts and provide excellent resources for agencies on how to most effectively do so.
  • Address Common Attacks
    Attackers often harvest credentials and move laterally across the infrastructure – for example, using Pass-the-Hash techniques, in which an attacker steals account credentials from one device and uses them to authenticate across other network entry points to gain elevated permissions, or leveraging unmanaged SSH keys to log in with root access. Understand where your agency is most vulnerable and take action to fortify those weaknesses, prioritizing the most important credentials. Implement automated controls to respond when a response is necessary. (A minimal sketch of auditing for unmanaged SSH keys follows this list.)
  • Measure Continuously
    Regularly audit infrastructure to discover potentially hidden and unprotected privileged access, including cloud and DevOps environments – which Federal agencies are increasingly using. Ensure continuous reassessment and improvement in privileged access hygiene to address a changing threat environment and identify and pre-define the key indicators of malicious activity.
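
As a minimal example of the kind of audit mentioned above, the sketch below flags SSH public keys in a privileged account’s authorized_keys file that do not appear in an approved inventory. The path, the inventory, and the allow-list approach are all illustrative assumptions; a real program would sweep many hosts and feed findings into the agency’s privileged access management and monitoring tooling.

```typescript
import { readFileSync } from "node:fs";

// Hypothetical inventory of approved key entries (illustrative).
const APPROVED_KEYS = new Set<string>([
  // ...populated from the agency's privileged access inventory
]);

function unmanagedKeys(authorizedKeysPath: string): string[] {
  const entries = readFileSync(authorizedKeysPath, "utf8")
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0 && !line.startsWith("#"));
  // For this sketch, the key entry itself is the identifier; production code
  // would compare real fingerprints instead.
  return entries.filter((key) => !APPROVED_KEYS.has(key));
}

// Any hit here is privileged access that nobody is managing or monitoring.
console.log(unmanagedKeys("/root/.ssh/authorized_keys"));
```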

While the adoption of transformative technologies like cloud environments does expand an agency’s attack surface, the solution is not to eschew modern technology but rather to account for the risks that these technologies introduce and make them part of the solution. The White House’s new guidelines provide a comprehensive focus for agencies to do just that – make the most of opportunities afforded by digital transformation, while instituting a risk-based approach that protects agencies’ most important resources simultaneously.

By zeroing in on the critical area of privileged access, addressing common types of attacks, and measuring outcomes continuously, federal agencies will be well equipped to adopt the risk-based approach to security that is now required – without sacrificing the technological advancements that are integral to modern organizations.

Why Agencies Should Make Zero Trust Their Mission

By: Lisa Lorenzin, Director of Emerging Technology Solutions for the Americas, Zscaler

Federal CIOs will be working harder than ever to deploy cloud applications and infrastructure over the next year as they race to meet 2020 Data Center Optimization Initiative (DCOI) deadlines, continue to deploy shared services, and meet evolving mission requirements.

The cloud push brings new opportunities for flexibility and efficiency. But alongside this progress, federal cyber leaders need new cyber defenses to protect increasingly complex environments that now span multiple cloud providers in addition to existing data centers.

It’s not news that security concerns have stymied cloud progress. Agencies are also saddled with technical debt that makes innovation difficult and leads to slower-than-expected cloud adoption. As a result, in 2019, 80 percent of the federal IT budget went to supporting legacy systems rather than to driving innovation.

To accelerate cloud adoption, overcome technical debt, and support 21st-century missions and citizen services, agencies need flexible security solutions that provide a consistent user experience across both cloud and data center environments. Increasingly, federal agencies are considering a zero trust approach to help address these requirements.

Based on the idea that an organization should not inherently trust any user or network, zero trust helps agencies balance security and productivity. Under this model, any attempt to access a system or application is verified before the user is granted any level of access. Authorized users receive secure, fast access to private applications in the data center or cloud, regardless of whether the user is on-site or remote, an agency worker, or a third party.

Zero trust is ideal for federal agencies, given the need to protect data on a massive scale in an increasingly hybrid environment. The list of devices connected to an agency’s network continues to grow, and agencies increasingly manage assets beyond their traditional network perimeter – effectively creating a larger attack surface. Considering the variety and sensitive nature of government data, and the criticality of federal missions, agencies clearly need protection that matches this expanded exposure.

Connect the Right User to the Right Application

Zero trust prevents unauthorized users from accessing data and systems – but that’s only the beginning. The real goal is to get the right users connected to what they need to complete their mission as quickly and seamlessly as possible. Agencies that implement zero trust solutions can realize four primary advantages: security, user experience, cost, and simplicity.

From a security standpoint, agencies need a solution that provides granular, context-based access to sensitive resources. With a zero trust solution, security can follow both the application and the user consistently across the organization.

While applications are hosted in multiple environments and users will connect from diverse locations, the user experience can be consistent and transparent. Users will not have to manage added complexity if they are off-network versus on-network, or if an application is hosted in the cloud versus a physical data center.

From a cost perspective, agencies need a solution that enables them to invest at an initial level to solve an initial use case, and then expand organically as the number of use cases grows. Unlike many traditional security models that rely on network-based controls, zero trust should not require a fixed investment – making it ideal for scalable, flexible cloud environments.

Finally, agencies need simplicity. Implementing a zero trust solution should make it easy for users to access the information they need and should give administrators clear answers to basic questions: Who is using which applications, and how often? What is the user experience when accessing a specific application or connecting from a particular location?

TIC 3.0 Changes the Game

The traditional security process for remote access in federal environments is not optimal. The agency establishes a security perimeter and deploys a virtual private network (VPN) to connect endpoints to the network when the user is outside that perimeter. The user then connects to the agency data center through a stack of infrastructure devices (DMZ firewalls, load balancers, etc.) supporting the VPN appliance. If users are accessing private applications hosted on public cloud providers, their traffic is routed back out through a Trusted Internet Connection (TIC), traversing another stack of security appliances before it finally arrives at its destination.

Federal CIO Suzette Kent released the updated TIC 3.0 policy in draft form this past year. These new guidelines are more flexible than previous TIC requirements – they open the door for agencies to use modern security solutions and models like zero trust to protect data and applications in cloud environments. This is a game changer. A FedRAMP-certified zero trust solution can provide modern security, usability, and flexibility – and meet the new TIC 3.0 guidelines.

Where from Here?

TIC 3.0 is expected to accelerate cloud adoption as it enables agencies to take advantage of modern security models like zero trust. There are several steps that can help ease the learning curve for federal teams.

First, consider your current infrastructure. Many agencies have elements of zero trust in place, such as endpoint management, Continuous Diagnostics and Mitigation (CDM), application and data categorization, micro-segmentation, and cloud monitoring.

Next, consider the application landscape. Zero trust is inherently complex to implement, but zero trust network access (ZTNA) solutions like Zscaler Private Access (ZPA), a FedRAMP-authorized cloud-based service, can provide a scalable zero trust environment without placing a significant burden on the IT team. ZPA connects users to applications without placing them on the network or relying on an inbound listener, instead leveraging a global cloud platform to broker inside-out connections that carry authorized user traffic over TLS-encrypted micro-tunnels. These tunnels provide seamless connectivity to any application regardless of where it’s running, creating a secure segment of one and ensuring apps remain invisible to the internet. The approach reduces the attack surface and eliminates the risk of lateral movement within the security perimeter.
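For readers who want a feel for the inside-out pattern, here is a deliberately simplified Python sketch. It is a conceptual illustration only – the broker host, port, and single-request relay are hypothetical, and this is not Zscaler's implementation or API. The key point it shows is that the connector sitting next to the private application dials out over TLS and exposes no inbound listener.

    # Conceptual sketch of an inside-out connector: it dials OUT to a broker over TLS,
    # then relays brokered requests to a private app. No listening socket is opened,
    # so the app stays invisible to the internet. Hosts and framing are hypothetical.
    import socket
    import ssl

    BROKER_HOST, BROKER_PORT = "broker.example.gov", 443    # hypothetical broker endpoint
    APP_HOST, APP_PORT = "127.0.0.1", 8080                   # private app, reachable only locally

    def open_outbound_tunnel() -> ssl.SSLSocket:
        """Establish an outbound TLS session to the broker (no inbound listener)."""
        context = ssl.create_default_context()
        raw = socket.create_connection((BROKER_HOST, BROKER_PORT))
        return context.wrap_socket(raw, server_hostname=BROKER_HOST)

    def relay_one_request(broker_sock: ssl.SSLSocket) -> None:
        """Forward one brokered, already-authorized request to the private app."""
        request = broker_sock.recv(65536)                    # traffic arrives via the broker
        with socket.create_connection((APP_HOST, APP_PORT)) as app:
            app.sendall(request)
            broker_sock.sendall(app.recv(65536))

    if __name__ == "__main__":
        with open_outbound_tunnel() as tunnel:
            relay_one_request(tunnel)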

Finally, take advantage of federal community resources. ACT-IAC just published its Zero Trust White Paper, developed by a government/industry task force. The document shares key concepts around zero trust, recommended steps, and specific lessons learned from working within federal environments. ACT-IAC also hosted a panel discussion among industry and agency technologists that explored these concepts at its recent IT Modernization Forum.

As the National Transportation Safety Board recently demonstrated, leveraging a zero trust approach now means agency teams will gain the ability to access and share mission-critical information quickly – anywhere, anytime, from any device. As agencies build cloud confidence, they can, finally, start to shift spending. This means less legacy, more innovation, and ultimately secure, modern government services built to deliver an experience that agency teams will appreciate.

Unlocking the Security Benefits of Multicloud and Network Modernization

By: Greg Fletcher, Business Development Director, Federal Civilian Agencies, Juniper Networks

The government’s modernization effort has evolved over time with the help of policy developments, increased funding and a cultural shift toward embracing technology. Federal leaders dedicated years to planning the impending digital transformation and now, agencies are beginning to leverage innovative forms of technology to reach their diverse mission goals.

Cloud adoption continues to play a critical role in this modernization effort for agencies ranging from the U.S. Department of Homeland Security to the U.S. Department of Defense. When looking to move their critical data, many agencies are turning to a hybrid multicloud environment, which enables data sets to live both on-premise and in the cloud. Accomplishing a successful cloud adoption is no small feat – in fact, many agencies first had to retrofit the path their data travels from one environment to another: the network. There are many security benefits to modernizing federal networks and adopting a hybrid multicloud environment, but three key outcomes stand out:

Greater Visibility

With the enactment of the Modernizing Government Technology Act and the release of the Cloud Smart Strategy, the federal government’s migration to the cloud is imminent. And yet, many agencies are still concerned that their data could be compromised when migrating sensitive information to public cloud environments. Legacy networks lack the sophistication that federal agencies need to monitor for suspicious activity and uncover nefarious threats. After all, federal agencies can’t mitigate security threats if they don’t know they exist.

Using a common operating system and a single, open orchestration platform, multicloud solutions help agencies manage the complexity of operating in different environments and provide a methods-driven approach that lets them map their own path. Agencies can also operate with consistent policy and control across all places in the network, with support to launch workloads on any cloud and on any server across a multivendor environment. By adopting unified and integrated networks across public cloud and on-premise IT infrastructure, federal agencies achieve greater visibility and can therefore quickly determine whether there are holes in their security posture or whether unauthorized devices are accessing the network.

Faster Response Times

A cyberattack can take as little as a few seconds to occur – but the aftermath can cost millions and take years to overcome. Federal agencies hold the keys to citizens’ most critical data, whether it’s Social Security information or health insurance records. For this reason, it’s imperative that this data always remains secure and that agencies can mitigate potential threats quickly.

For agencies that haven’t modernized, however, agility can be a pain point, simply because older networks are prone to latency and congestion when too many devices and bandwidth-intensive applications run at the same time. As federal agencies begin to migrate some of their data to the public cloud, they can facilitate the migration and optimize the future-state multicloud environment by deploying software-defined wide-area networks (SD-WAN) and advanced threat protection (ATP) software. These tools not only transport bandwidth-intensive data between public cloud environments and on-premise IT infrastructure quickly and securely, but also respond to suspicious activity immediately.

Lays Foundation for Adoption of Emerging Tech

Technology plays a central role in the administration’s mission to achieve a 21st century government. Most recently, the Alliance for Digital Innovation released a report, which found that the federal government could have reduced IT spending by $345 billion over the last 25 years if it had invested in more commercially available solutions instead of architecting systems itself.

The high cost of custom and proprietary IT leaves federal agencies with limited resources to ensure the security of their technology platforms, networks, and applications. By modernizing their networks using state-of-the-art, commercially available products, government agencies can reduce operation and maintenance costs. In addition, a modern best-of-breed network can support secure, cloud-based applications and other forms of cutting-edge technology, such as drones, artificial intelligence, augmented reality, and virtual reality, all of which can enable government to meet its modern-day mission goals.

The administration released the “Executive Order on Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure” two years ago, and there is still much work to be done when it comes to federal agencies modernizing their networks. By overhauling its networks, government addresses a key element of migrating successfully to a hybrid multicloud environment and realizing the agility, security, and cost benefits it offers.


TIC 3.0 Will Remove a Significant Cloud Barrier

By: Stephen Kovac, Vice President of Global Government and Compliance at Zscaler

The Office of Management and Budget, in coordination with the Department of Homeland Security, recently proposed an update to the Trusted Internet Connections (TIC) policy: TIC 3.0. Still in draft form, TIC 3.0 proposes increased cloud security flexibility for federal agencies and the opportunity to use modern security capabilities to meet the spirit and intent of the original TIC policy.

During MeriTalk’s Cloud Computing Brainstorm Conference, I had the opportunity to present a session with Sean Connelly, Senior Cybersecurity Architect, CISA, DHS – or as I like to call him “Mr. TIC.” We discussed how the revised TIC 3.0 policy will remove cloud barriers and accelerate Federal cloud transformation. Connelly, who has been with DHS for the last 6 years, helped lead the TIC initiative, including recent updates to TIC 3.0.

Challenges for TIC in today’s environment

Connelly first explained that the policy originated in 2007 as a way for OMB to determine how many external connections were being used by Federal networks. The number of connections was “eye-opening” – and OMB found the security surrounding these connections wasn’t consistent, even within the same agency. The original policy required external connections to run through the TIC with a standard set of firewalls to give agencies baseline security. But today, as the number of mobile devices grows and cloud adoption expands, the perimeter is dissolving. This evolving landscape makes it difficult for agencies to determine which connections are internal or external to their network.

Where do we go from here?

When I asked Connelly how TIC 3.0 will modernize internet security, he echoed Federal CIO Suzette Kent by saying “flexibility and choice”. Instead of having two choices – internal or external – TIC 3.0 allows three different choices: low, medium, and high trust zones. He said, “it changes the game entirely.” Agencies now have a responsibility to determine the appropriate trust zone for their networks.

Connelly added, “If you look at today’s environment, you’ve gone from fixed assets and desktops – and now you have mobile assets, mobile devices, and pretty soon the platform is not even going to matter… so we have to make sure the policy and reference architecture can support all three models going forward.”

Catalog of use cases

One important aspect of the draft TIC 3.0 policy is the addition of use cases that encourage moving TIC functions away from perimeter-based, single-tenant appliances to a multi-tenant, cloud service model. As agencies develop TIC 3.0 solutions, it is vital they share them across government, providing other IT leaders the opportunity to compare their security requirements, review the viable and tested options, and avoid reinventing the wheel.

Connelly shared that the use cases will come out on a consistent basis and will result in a “catalog approach to use cases.” Agencies can propose pilot programs through the Federal CISO Council; then DHS and OMB will work with the agencies on their pilots. The pilot programs will provide agencies with the use case examples and lessons learned.

When can we expect the final policy?

The final TIC 3.0 policy will be issued later this year. Connelly confirmed the final policy will look “very similar” to the draft policy.

Increased cloud adoption across the federal space will lay the foundation for emerging technology, shared services, and the ability to meet the expectations of a federal workforce that wants simple, seamless access to applications and data.

TIC 3.0 is an important step forward to expand cloud security options and remove a significant cloud barrier. With these flexible new guidelines, we should see accelerated cloud adoption in government. I’m excited to see the innovation ahead.

Is Your AI Program Stalled? Consider “as-a-Service”

Two months ago, the President signed an executive order to accelerate the research and development of artificial intelligence tools in government. It laid out six strategic goals for federal agencies:

  1. Promote “sustained investment” in AI R&D with industry, academic and international partners.
  2. Improve access to “high-quality and fully traceable federal data.”
  3. Reduce the barriers to greater AI adoption.
  4. Ensure cybersecurity standards to “minimize vulnerability to attacks from malicious actors.”
  5. Train “the next generation” of American AI researchers.
  6. Develop a national action plan to “protect the advantage” of the United States in AI.

While the order does not provide additional funding, agency heads were directed to set aside funding in their 2020 budgets. Without additional funds, these AI programs will compete for budget allocation with operational and maintenance requirements, data center modernization, and other infrastructure requirements. One option to mitigate the financial pressure while advancing an AI strategy is to look to an as-a-Service model. The AI as-a-Service market is expected to grow at a compound annual growth rate (CAGR) of 48.2 percent, from $1.5 billion in 2018 to $10.8 billion in 2023. (Markets and Markets press release)
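As a quick sanity check on that projection, compounding the 2018 figure at the stated rate for five years reproduces roughly the cited 2023 number. The small Python calculation below uses only the figures quoted above:

    # Compound the 2018 base at 48.2% per year for five years (2018 -> 2023).
    base_2018 = 1.5          # $ billions, cited above
    cagr = 0.482
    years = 5
    projected_2023 = base_2018 * (1 + cagr) ** years
    print(f"Projected 2023 market: ${projected_2023:.1f}B")   # ~$10.7B, in line with the cited $10.8B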

AI as-a-Service can mean many different things. On one end of the spectrum, cloud-based APIs are helping organizations speed up their AI programs, analyze data, and add application features based on their requirements. On the other end, AIaaS can mean AI-ready infrastructure delivered in a consumption-based model, where hardware and software are consumed through an OpEx funding model. It could be a fully managed service, or the agency could choose to manage it itself. Agencies may also opt for professional services, with data scientists and analysts available on a contract basis, or they can leverage internal expertise and resources. How an agency balances these as-a-Service options is unique to its requirements, but it should be a consideration as the agency develops a strategy to ensure a successful AI implementation.

Information Age identifies seven steps to a successful AI implementation:

  1. Clearly define a use case.
  2. Verify the availability of data.
  3. Carry out basic data exploration.
  4. Define a model-building methodology.
  5. Define a model-validation methodology.
  6. Automation and production rollout.
  7. Continue to update the model.

While we typically associate AI with technology, a successful deployment is equal parts planning, process, and infrastructure. As agencies move forward with their AI plans, they should consider which phases of this process can be delivered as-a-Service to reduce the burden on budget, expertise, resources, and technology. This not only alleviates financial pressure, but also accelerates speed to value and provides an agile acquisition model that can pivot when necessary. When coupled with a marketplace that enables fast acquisition of technology and an easy way to manage the as-a-Service environment from end to end, agencies will be well positioned to deliver against AI mandates while maintaining their existing IT infrastructure.
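As a toy illustration of steps 4 through 6 above, the sketch below trains, validates, and persists a model on synthetic data. The library, model choice, and acceptance threshold are illustrative assumptions, not part of the Information Age guidance or any agency's pipeline.

    # Steps 4-6 in miniature: build a model, validate it on held-out data, and only
    # then hand it off for "production rollout" (here, just saving it to disk).
    import joblib
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Stand-in for verified, explored data (steps 2-3)
    X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    # Step 4: model-building methodology
    model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)

    # Step 5: model-validation methodology
    accuracy = accuracy_score(y_test, model.predict(X_test))
    print(f"Held-out accuracy: {accuracy:.2f}")

    # Step 6: roll out only if validation clears an (illustrative) acceptance bar
    if accuracy > 0.8:
        joblib.dump(model, "model.joblib")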

To learn more about how -aaS options can jumpstart agency AI progress, download the “AI and the Infrastructure Gap” Infographic.

Scott Aukema is Director of Solutions Marketing at ViON Corporation with 15 years of experience supporting public sector, commercial, and enterprise segments.

The Next Computing Wave: Ultra Powerful, Ultra Accelerated, Ultra Connected

Never in human history have we seen this much technological change in such a short period of time. As technology grows more powerful, every facet of society has raced to adapt. It is exciting (and perhaps a bit daunting) that approaching advancements will probably hit faster, and may be even more dramatic, than the changes that came before. More specifically, several emerging technologies – Bluetooth 5, 5G, WiFi 6, and quantum computing – are poised to profoundly change our lives. All but quantum computing are wireless technologies, so our wearable and carryable devices will generate and process more data much faster and help us navigate our paths more effectively than before.

While it’s still early in deployment, 5G will deliver unprecedented wireless access speeds. Case in point: It will take less than four seconds on a 5G network to download a two-hour movie, while it takes five or six minutes on a 4G network today. With that much speed and power at our fingertips, it’s difficult to fully anticipate how we will use it, but its impact will be striking, and it will undoubtedly play a critical role in emerging businesses and applications, like self-driving cars, augmented reality, or the Internet of Things (IoT).
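Those figures are easy to sanity-check with back-of-the-envelope math. Assuming a roughly 6 GB two-hour HD movie, about 150 Mbps of typical 4G throughput, and about 20 Gbps of peak 5G throughput (illustrative assumptions, not measured values), the comparison holds:

    # Back-of-the-envelope download times under assumed file size and link speeds.
    movie_bits = 6 * 8 * 10**9      # ~6 GB movie expressed in bits (assumption)
    rate_4g = 150 * 10**6           # ~150 Mbps typical 4G throughput (assumption)
    rate_5g = 20 * 10**9            # ~20 Gbps peak 5G throughput (assumption)

    print(f"4G: {movie_bits / rate_4g / 60:.1f} minutes")   # roughly 5 minutes
    print(f"5G: {movie_bits / rate_5g:.1f} seconds")        # well under 4 seconds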

Another breakthrough technology in the wireless arena is WiFi 6 (or 802.11ax). It’s coming soon, and it’s built for IoT. It will connect many, many more people to mobile devices, household appliances, and public utilities, such as the power grid and traffic lights. Transfer rates with WiFi 6 are expected to improve anywhere from four to 10 times current speeds, with a lower power draw – that is, while using less electricity. The benefits may not be as explosive as 5G’s, but its impact will be consequential.

In the hardware space, quantum computers could revolutionize everything from encryption to medicine. It’s hard to remember a time in recent history when quantum computing wasn’t considered a distant dream. After decades of research, though, we may finally see actual benefits of quantum computing. It won’t happen tomorrow – in fact, it could still be a few years off – but when it hits, it will fundamentally change how we use technology, possibly spawning new industries we couldn’t even conceive of in the 20th century. And… we’re actually testing it now, several years before we thought we could make use of it.

One key thing to note about this new technological terrain – it wouldn’t be possible without the cloud. The network revolution mentioned above is (simply put) built to handle the supernova explosion of Internet of Things (IoT) devices. These devices (aka sensors) are going to create massive amounts of data and store it in the cloud – all the time. The flexibility of the cloud allows service providers and developers, at home and in enterprises, to modify applications in near-real time. In fact, almost all AI-based applications and machine learning programs will be built in the cloud, including the wireless apps used in retail, manufacturing, transportation, and more. Two key accelerator techniques are serverless computing and edge computing. We will cover these topics in depth in a future communication.

At NASA JPL, we expect the new wave of computing technologies to have a materially positive effect on our work. Here are but a few examples: Use of GPUs has sped up processing by as much as 30 times. We use machine learning to create soil moisture data models, which are key for crop forecasts. Deep learning helps us detect spacecraft anomalies. AI has proven effective in use cases as diverse as defending the network from attacks and finding interesting rocks on Mars. IoT helps us measure particles in clean rooms, improve safety, reduce energy usage, control conference rooms, and much more. The cloud reduces the cost – but increases the speed – of experimentation, encourages innovation, and provides the flexibility that allows us to pivot when we fail. We are accelerating the speed of processing by using high performance computing and specialized ultra-efficient processors such as GPUs, TPUs, FPGAs, quantum computers, and more, all in the cloud. This increased pace of innovation is necessary because the future is barreling at us at interstellar speed, but with our work firmly planted in the cloud, we are eagerly looking forward to it.

Tom Soderstrom is the IT Chief Technology and Innovation Officer at the Jet Propulsion Laboratory (JPL). He leads a collaborative, practical, and hands-on approach with JPL and industry to investigate and rapidly infuse emerging IT technology trends that are relevant to JPL, NASA, and enterprises.


Why Cyber Security and Cloud Computing Personnel Should Be BFFs

Keeping up with hackers is no small task, particularly since some attacks can be sustained in near-perpetuity. No doubt, securing cyberspace is a daunting responsibility. But focusing on security and using emerging technologies can help us meet the challenges. In particular, when the cyber security personnel of cloud providers and cloud customers work closely together, they become a force-multiplier, proactively defending the enterprise from attacks, and reacting more quickly when breaches occur.

Thanks to advancements in artificial intelligence (AI) and supercomputing, federal networks could soon be entirely self-fortifying and self-healing. The goal is to use AI-driven algorithms to autonomously detect hackers, defend the network from cyber-attacks, and automatically respond at lightning speed to an assault. Until that day, though, we rely on a handful of tools to defend our networks from phishing, vishing, or Distributed Denial of Service (DDoS) attacks.

The burden of securing a network and its devices no longer falls exclusively on systems administrators or security professionals; it requires the active participation of every single network user. Continuous user training, especially role-based training, in which individuals receive customized security courses based on their specific tasks, is a good supplementary defense. A senior or C-level executive, for example, may need to be trained to identify suspicious emails that contain fund transfer requests; IT professionals may receive more technical training about different types of attacks and possible responses.

Role-based training isn’t enough in itself, though. We’re exploring how to incorporate emerging technologies into our overall cyber security strategy. Default encryption, in which data is encrypted by default during transit and at rest, is now possible (if not commonplace) in the cloud. We are now calling on all industry innovators to advance the industry so that memory can be encrypted and protected as well. In fact, by partnering our own cyber security teams with those of the cloud providers, we can significantly advance our cyber security defense. So, how do we combat all the scary bot attacks we hear about? Well, why not use the intelligence and data gathered from the Internet of Things (IoT) sensors and the power of AI and the cloud to predict, act, and protect our assets? Why not, indeed.

Blockchain, an incorruptible distributed-verification ledger, could be useful to ensure that the data has not been tampered with. While we’re taking a wait-and-see approach to blockchain, the potential is rich and the technology is developing at a tremendous pace. Several new database announcements from the cloud vendors may help provide many of these benefits and are worth investigating.

There are countless budding technologies that may become integral to our cyber security infrastructure in the future – everything from biometrics to quantum computing. The latter could have huge implications for both encryption and decryption. But every tool we use today requires that we have the agility and ability to move resources and experiment at a moment’s notice. Securing the network isn’t just a matter of protecting corporate secrets – in the case of federal organizations, it’s a matter of defending national interests. The cloud provides the computing power and scalability to secure our most valuable assets. We now have to step up to the plate and build cyber security into everything we do – because when the easiest path for the end user is also the most secure, we’re on the right one.

Tom Soderstrom is the IT Chief Technology and Innovation Officer at the Jet Propulsion Laboratory (JPL). He leads a collaborative, practical, and hands-on approach with JPL and industry to investigate and rapidly infuse emerging IT technology trends that are relevant to JPL, NASA, and enterprises.


Cheaper, Faster, Smarter: Welcome to the Age of Software Defined Everything

The benefits of Software Defined Everything (SDE) – in which a physical infrastructure is virtualized and delivered as services on demand – will change the way we design, build, and test new applications. In a virtual environment, operational costs drop precipitously as the pace of innovation accelerates. Containerization – a way to virtually run separate applications without dedicating specific machines to each application – allows organizations to build, fail, test, repair, improve, and deploy new apps at a breakneck pace. Application Programming Interfaces (APIs) allow us to build on and reuse other people’s software packages quickly and inexpensively.

What’s the growth path for programmers in the US? I’ve seen many numbers and predictions, and I think most are too conservative. Why? Because the nature of “programming” is expanding. JPL, NASA, Facebook, Uber, etc., all employ traditional software engineers, and we will still need even more. But what about all those who program Alexa or Google Home, or program their 3D printers, or their home automation systems, or compose music on their computers? They are using software to define their environment, and that’s the essence of SDE, where everything becomes programmable or configurable.

If programming becomes democratized, what will be the most popular languages? Today, Python seems to be growing the fastest. Tomorrow, I think it’ll be APIs that prove the most valuable, because they can access pre-written software libraries and change their own environment in minutes. This assumes that people can get to the underlying data and combine it with other data. The cloud will come to the rescue, as most data will already be stored there. The companies that are able to take advantage of this agility – and use data from inside and outside their organizations – will quickly establish a competitive advantage.

If an environment can be accessed through software, it can be automated through artificial intelligence (AI). The combination of data available in the cloud, accessible APIs, the Internet of Things, automation and AI skills, available open source code, and distributed serverless computing to inexpensively execute the automation will speed progress toward a self-configuring, self-healing environment for the companies that become the leaders.

SDE, for example, is one of the most technologically revolutionary things we’re adopting as it promises to dramatically decrease the cost and increase the agility of dealing with our networks. The cloud enables these changes, in part, because capacity and bandwidth can be sliced on the fly. It’s a departure from what we see today, where network engineers set up a physical infrastructure that can be narrowly used by specific people for specific purposes.

SDE touches upon every aspect of an organization’s technology strategy, ranging from security to storage. It will allow systems administrators to view and manage an entire network over a single screen. Increasingly, it also allows networks to run themselves, particularly in cases where there is simply not enough manpower for the job. Netflix, for example, found, in at least one region, it was making a production change every 1.1 or 1.2 seconds. The solution? The company built a self-healing network that automatically monitors production environments and makes real-time operational decisions based on problems that are identified through AI.

The move toward SDE also means that proprietary technology – such as APIs – will become a differentiating factor for businesses. In fact, it will become so important that we may see programming taught in schools, along with reading, writing, and arithmetic. Those coding skills could be used to customize anything, from home appliances to self-driving cars.

It’s reasonable to assume that in the not-too-distant future, federal agencies will run software-defined networks that are auto-scaling, self-fortifying, and self-healing. As precious resources are freed up, government organizations can better solve real-life problems, closer to agencies’ missions, and further away from the labor-intensive effort of maintaining a physical infrastructure. And, of course, it doesn’t hurt that it will save the government millions of dollars in the process.

Tom Soderstrom is the IT Chief Technology and Innovation Officer at the Jet Propulsion Laboratory (JPL). He leads a collaborative, practical, and hands-on approach with JPL and industry to investigate and rapidly infuse emerging IT technology trends that are relevant to JPL, NASA, and enterprises.

How Can We Benefit from Changing Work Habits?

It’s hard to know for certain what the U.S. economy will look like in 10 years’ time, much less what the work day will look like. One thing we do know: As younger generations enter the workforce, they will bring new habits, technology, conventions, and expectations that will likely transform business and government.

With the rise of the “gig economy” (that is, independent contractors who work on a task-by-task basis), younger generations already expect a certain amount of freedom and autonomy in their careers. And that’s probably a reasonable expectation, given that they walk around with supercomputers in their pockets, just a click or voice command away from talking with anyone – or controlling equipment – at any time.

Just as millennials eschew traditionally wired conventions such as cable TV and fixed phone lines while embracing the sharing economy – ridesharing, open work spaces – the next generation of workers may not value the same things that people want today. They may not be motivated by a base salary, corner office, or a lofty title.

Businesses may also want to provide customized work experiences for their employees in the not-too-distant future. Instead of an email notification for an upcoming meeting, an intelligent digital assistant may remind an employee of the meeting and automatically book transportation to it based on the individual’s schedule, traffic conditions, and so on. While this already exists, the scope will be greatly expanded.

Gamification strategies could play a key part in helping businesses better understand what drives their employees. When we talk about gamification, we’re not talking about turning everyday tasks into cartoonish video games, but rather figuring out how to keep employees engaged. This includes setting up structures that allow them to compete against themselves or others (individually or in teams) in the completion of certain tasks or training exercises. These structures will likely use open spaces and the Internet of Things (IoT) – wearables, augmented reality, and in-the-room sensors – to collect data. The outcomes and effectiveness will be measured and communicated in real time using analytics that employ massive amounts of data from both inside and outside the organization, all stored in the cloud. JPL has already tested this with scientists and operators navigating the Mars rovers, and it has proved to be very effective.

At JPL, we’re trying to meet the changing needs of the business by benefitting from these trends. For example, to quickly and inexpensively help solve a problem of finding parking at JPL, we held a month-long hackathon where teams of interns created prototype mobile phone solutions. There were two winning teams and their ideas were incorporated into a mobile solution now used every day by JPLers. As you approach JPL, your mobile phone will speak and tell you where there is parking available. You can also look at historical data to know what time to leave home or when to go to lunch so you can find parking when you arrive. The data is also used to predict parking during public events.

Other examples include interacting with intelligent digital assistants for many things, including finding conference rooms, hearing what’s happening on Mars, or learning when the International Space Station will be overhead.

As we approach what will likely be a rich and exciting decade, the most important thing organizations can do right now is lay the groundwork for change in two areas.

Technologically speaking, that means embracing cloud computing, IoT, and the wireless network improvement waves that are rapidly approaching. In particular, serverless computing is cost-effective and allows organizations to innovate and experiment with things like artificial intelligence or augmented reality. Edge computing allows businesses to employ these capabilities at huge scale and speed, which will further solidify the real-time gamification and wireless communication future that the next generation will expect.

Humanologically speaking (yes, I know that’s not a word, but perhaps it should be), we can set up innovation labs and experiment in our own environments to quickly decide what to abandon and where to double down. From our experience so far, you will find willing participants in the new workforce, and experiments in a safe, protected environment will serve as training opportunities and quickly evolve to produce lasting positive outcomes. And, in case you wonder, yes, it’s fun.

Tom Soderstrom is the IT Chief Technology and Innovation Officer at the Jet Propulsion Laboratory (JPL). He leads a collaborative, practical, and hands-on approach with JPL and industry to investigate and rapidly infuse emerging IT technology trends that are relevant to JPL, NASA, and enterprises.

How NASA Is Using Ubiquitous Computing to Bring Mars to Earth

When the term “ubiquitous computing” was first coined in the late 1980s, it was envisioned as a sea of electronic devices “connected by wires, radio waves, and infrared,” that are so widely used, nobody notices their presence. By that definition, the era of ubiquitous computing arrived some time ago. Everything from movie theater projectors to plumbing systems have gone digital, paving the way to meaningful, data-driven change in our daily lives.

A lot of people remain skeptical of the Internet of Things (IoT). They may not realize that they already depend upon it. While IoT often conjures images of futuristic devices – senselessly superpowered refrigerators or embarrassingly smart toilets – there are good reasons why Americans are buying thousands of programmable appliances every month. Smart devices provide an improved service to consumers, and the data we can collect from them are invaluable, with the potential to revolutionize everything from traffic management to agriculture.

It also means that the idea of a computer – as some sort of electronic box typically used at a desk – is woefully outdated. In fact, the new supercomputer is the one we have in our pockets, the SmartPhone, which is rapidly becoming our ever-present intelligent digital assistant. Ubiquitous computing is changing the way we interact with everyday objects. We don’t flip on light switches; we instruct a digital assistant to turn our lights on for us – or, in some cases, we just push a button on our phones to turn a light on. If a smoke alarm goes off, we don’t need to get out the ladder and manually turn it off – we can see the source of the smoke on our phones and turn the alarm off with a mere swipe.

The Curiosity rover, the most technologically advanced rover ever built, has 17 cameras and a robotic arm with a suite of specialized lab tools and instruments. It required the work of 7,000 people, nationwide, to design and construct it over the course of five years.

At JPL and NASA, one of countless ways ubiquitous computing has woven its way into our work is through augmented reality (AR). Today, if anyone wants an up-close look at Curiosity, they need only use their phones or a pair of smart goggles. JPL built an augmented reality app that allows you to bring Curiosity anywhere – into your own home, conference room, or even backyard. The app lets you walk around the rover and examine it from any angle, as if it were actually there. In addition, scientists can don AR glasses and meet on the surface of Mars to discuss rocks or features – all from their own homes or conference rooms scattered across Earth.

AR may feel like magic to the end user, but it’s not. It’s the culmination of decades of technological advancements. It requires an assortment of sensors (light and motion), cameras, and substantial data processing power – power that only became affordable and available via mobile devices in recent years. In fact, we are now seeing the initial swells of a major ubiquitous computing wave that will hit our shores within the next few years. The entire wireless networking industry is being revolutionized to meet the needs of exponentially more devices communicating with each other and with us all the time. That’s when we all become IoT magicians in our daily lives and when that second brain (the SmartPhone) fires on all neurons. (More about that in a future blog.)

Now we can use AR for more than just finding Pokemon in the wild – we use it to review and build spacecraft. We can get a detailed look at a vehicle’s hardware without actually taking it apart, or we can see if a hand might fit in a tight space to manually turn a screw. Among the many advantages of augmented reality: It’s cost-efficient for multiple people to work together over a virtual network and it could easily be used for hands-free safety training, testing, or maintenance.

While AR is dependent on a slew of technologies, perhaps the most critical piece is the cloud. A lot of AR applications would be cost prohibitive without the supercomputing power available over the cloud. Based on our experience at JPL, we estimate that serverless computing can be up to 100 times less expensive than N-tier server-based computing. Not surprisingly, we’re now starting to use serverless computing as often as we can. What it really means is that we don’t have to worry about how a problem is solved, we just have to worry about what problems we’re solving. And that’s a powerful position to be in.

Tom Soderstrom is the IT Chief Technology and Innovation Officer at the Jet Propulsion Laboratory (JPL). He leads a collaborative, practical, and hands-on approach with JPL and industry to investigate and rapidly infuse emerging IT technology trends that are relevant to JPL, NASA, and enterprises.

The Rise of Intelligent Digital Assistants

Before we can have a rational discussion about artificial intelligence (AI), we should probably cut through the hysteria surrounding it. Contrary to popular opinion, we’re not talking about machines that are going to rise up, steal our jobs, and render humans obsolete.

We already rely on AI in countless daily tasks. The AI embedded in your email, for example, identifies and blocks spam; it reminds you to include an attachment if you’ve forgotten it; it learns which sorts of emails are most important to you and it pesters you to respond to an email that’s been neglected in your inbox for a few days. Chatbots, robot vacuums, movie recommendation engines, and self-driving cars already have AI built into them. Right now, its impact may be subtle. But, in the future, AI could save lives by, for example, detecting cancer much earlier so treatments can be that much more effective.

Why is AI exploding right now? A unique mix of conditions makes this the right time to surf the AI technology wave. They include the availability of supercomputing power, venture capital, oceans of data, open source software, and programming skills. In this sort of environment, you can train any number of algorithms, prep them and deploy them on smart computers in a very short time. The implications for most major industries – everything from medicine to transportation – are tremendous.

At NASA JPL, we’ve integrated AI into our mission in a variety of ways. We use it to help find interesting features on Mars, predict maintenance needs for our antennas, and spot emerging problems with our spacecraft. We’ve also come to depend upon intelligent digital assistants in our day-to-day operations. After a brainstorming session about how to experiment with applied intelligent assistance, we decided to throw AI at a mundane – but time-consuming – daily challenge: finding an available conference room. Later that night, we built a chatbot.

Chatbots are fast, easy to build, and they deliver value almost immediately. We build ours to be easy to access, with natural user interfaces. We then add the deeper (and more complex) AI on the back end, so the chatbots get smarter and smarter over time. You can text them, type to them, or speak to them through Amazon Alexa or Lex, and we collect user feedback to constantly improve them. Every “thumbs up” or “thumbs down” helps improve the next iteration, and we can mine the thumbs-down votes to see which areas need the most work. We now have upwards of 10 emerging chatbots used for functions like acquisitions, AV controls, and even human resources. By thinking of this as a “system of intelligence,” we can extend the life of – and get more value from – legacy systems by teaching them how to respond to deeper questions. While applied artificial intelligence can conquer any number of menial tasks, it’s bound to have a significant effect on some of our bigger challenges, such as space exploration, creating new materials to be 3D printed, medicine, manufacturing, security, and education.
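The feedback loop described above can be illustrated in a few lines of Python. This is a deliberately tiny sketch with made-up intents and in-memory counters, not JPL's chatbot stack or its Alexa/Lex integration; it only shows how thumbs-up and thumbs-down votes can surface which intents need work.

    # Toy intent router with a thumbs-up / thumbs-down feedback log.
    from collections import defaultdict

    RESPONSES = {                                   # illustrative canned intents
        "find_room": "Conference room 180-101 is free for the next hour.",
        "hr_question": "Benefits enrollment closes on the 15th.",
    }
    feedback = defaultdict(lambda: {"up": 0, "down": 0})

    def answer(intent: str) -> str:
        """Return a canned answer; a real bot would call a back-end service here."""
        return RESPONSES.get(intent, "Sorry, I don't know that one yet.")

    def record_feedback(intent: str, thumbs_up: bool) -> None:
        """Count votes so low-scoring intents can be prioritized for rework."""
        feedback[intent]["up" if thumbs_up else "down"] += 1

    def intents_needing_work(min_votes: int = 5) -> list:
        """Intents whose downvotes outweigh upvotes are candidates for retraining."""
        return [name for name, votes in feedback.items()
                if votes["up"] + votes["down"] >= min_votes and votes["down"] > votes["up"]]

    print(answer("find_room"))
    record_feedback("find_room", thumbs_up=True)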

AI has especially rich potential in the federal government, where one small operational improvement can have an exponential effect. If you work in a large agency and you’re unsure of how to approach AI, you can start playing in the cloud with any number of cloud-based applications (such as TensorFlow and SageMaker). Chatbots are a natural starting point – they deliver value right away, and the longer you use them, the smarter and more effective they become. In the cloud, experimentation is inexpensive and somewhat effortless. The goal, after all, is to have AI work for you, not the other way around.

Tom Soderstrom is the IT Chief Technology and Innovation Officer at the Jet Propulsion Laboratory (JPL). He leads a collaborative, practical, and hands-on approach with JPL and industry to investigate and rapidly infuse emerging IT technology trends that are relevant to JPL, NASA, and enterprises.


The Next Tech Tsunami Is Here: How to Ride the Wave

Our lives are very different today than they were even 15 years ago. We walk around with computers in our pockets and do things like share our locations with loved ones via satellite technology. Still, the changes we’ve experienced so far are nothing compared to the tech tidal wave that’s approaching.

A convergence of computing advancements ranging from supercomputers to machine learning will transform our daily lives. The way we learn, communicate, work, play, and govern in the future will be very different.

The question most IT decision makers are asking themselves right now is what they should do to prepare for the technology waves that are barreling at them. The short answer: Jump in. The only way to stay in the game is to play in the game. A certain amount of failure is inevitable, which is why it’s critical to experiment now and adapt quickly. Given the rate at which technology is evolving, anyone who takes a wait-and-see approach in hopes of finding a perfect, neatly wrapped, risk-free solution will be left out entirely. Or, alternatively, they will be left with a hefty bill for the dubious pleasure of playing catch-up.

The good news: Anyone with a cloud strategy is part-way there. The future of technology is the cloud. Every significant emerging advancement is predicated on it. Without it, the tech revolution would cost too much or take too long. The only practical and affordable way to store, collect, and analyze the massive volumes of data we’ll be bombarded with–data that’s generated by everything from smart watches to self-driving cars–will be through a cloud-based computing system. The tools we will use to make sense of our modern lives will be deployed in the cloud, because it’s the simplest, fastest, and most affordable way to use them.

In this series, we’re going to highlight the six technology waves that we expect will make the biggest impacts in the coming years. They are: Accelerated Computing, Applied Artificial Intelligence, Cybersecurity, Software-Defined Everything, Ubiquitous Computing, and New Habits. Tech waves are plentiful, but not every wave is meant to be surfed by every organization. This series aims to help you distinguish which ones are right for you.

Tom Soderstrom is the IT Chief Technology and Innovation Officer at the Jet Propulsion Laboratory (JPL). He leads a collaborative, practical, and hands-on approach with JPL and industry to investigate and rapidly infuse emerging IT technology trends that are relevant to JPL, NASA, and enterprises.


A Golden Age of Government Innovation, Courtesy of The Cloud

Federal agencies are not often known as cradles of innovation, but the adoption of the cloud has helped usher in a new era of government–one that improves transparency, cost efficiency and public engagement.

Technology changes constantly, but many bureaucracies do not. In some cases, government organizations that have adopted the cloud still mainly use it as a glorified filing cabinet. Others, however, are finding new and innovative uses for the virtual infrastructure that have made meaningful changes in people’s lives.

Case in point: In the Virginia Beach metropolitan area, the average elevation hovers around 12 feet above sea level. The region, home to the world’s largest naval base, Naval Station Norfolk, is expected to see sea levels rise another foot by 2050, according to one report, and the estimated cost of flooding has already surpassed $4 billion. Given that the sea level has risen an astonishing 14 inches since 1950, it follows that flooding is a consistent problem.

These days, it doesn’t take much rain to cause nuisance flooding. While researchers and engineers are exploring long-term solutions, the City of Virginia Beach helped roll out StormSense, a cloud-based platform that uses a network of sensors to help forecast floods up to 36 hours in advance of major weather events.

It was not a minor undertaking. A related public initiative, dubbed “Catch the King,” encouraged “citizen scientists” to use their GPS-enabled devices to help map the King Tide’s maximum inundation extents. The data allows researchers to create more accurate forecast models for future floods, determine new emergency routes, reduce property damage, and identify future flood-mitigation projects.

Volunteers–all 500 of them–were directed to different flooding zones along the coast during the King Tide on Nov. 5, 2017. (A “King Tide” is a particularly high tide that tends to occur twice a year, based on the gravitational pull between the sun and moon.) As the King Tide came in, volunteers walked along designated points in 12 Virginia cities, saving data in a mobile phone app every few steps. Ultimately, there were more than 53,000 time-stamped GPS flooding extent measurements taken and 1,126 geotagged photographs of the King Tide flooding.

Not only was the initiative a scientific success, it was incredibly cost effective. Without the help of the hundreds of volunteers, it might have been cost prohibitive to tackle.

“It’s really, really amazing to see all these people out there mapping all over the region,” Skip Stiles, executive director of the Norfolk, Virginia-based Wetlands Watch, told The Virginian-Pilot. “It would take numerous tide gauges costing thousands of dollars each to gather the data collected for free by so many volunteers.”

The initiative, which was backed by seven cities, a county, and a handful of academic partners, began when Virginia Beach’s Public Works department decided to fund a USGS installation of 10 new water-level sensors and weather stations to measure water level, air pressure, wind speed and rainfall. Later, as part of the StormSense initiative, an additional 24 water-level sensors were installed, bringing the total to 40, up from the original 6 sensors that were in place in 2016.

The benefits of the data collection have not been entirely realized yet, but the region is already better prepared for future floods because StormSense operates on a cloud platform, which allows emergency responders to access flooding data from any mobile device. The City of Virginia Beach has also been working to develop an Amazon Alexa-based Application Programming Interface (API) so residents can ask an Amazon Alexa-enabled device for timely and accurate flooding forecasts or emergency routes.

Not long ago, the notion that the government could use the internet to communicate directly with citizens, in real-time, with pertinent, useful or critical information, might have seemed like a pipe dream. These days, thanks to innovative new initiatives, it’s a reality.

Derailing Old-School Asset Maintenance

Eighteen years into the 21st century, one of the most important systems contributing to our economy and quality of life is stuck in the past. Transportation infrastructure, which delivers us from point A to point B and back, has yet to catch up with the digital revolution.

It does not have to be that way. The advent of the Internet of Things (IoT) and cloud computing makes the time right to digitize crucial maintenance functions. A technology-based solution is safer and will save time and resources.

A Time article on how technology can better help maintain America’s infrastructure calls out the Washington, D.C. subway maintenance catastrophe as an illustration of the need for change. The technology exists to revolutionize transportation maintenance, and that means creating a high level of integration among data management, analysis, and deployment environments. It is not a leap to say this technology can help build a crash-free system that is always in service.

Eliminating Paper

Previously, transit agencies relied solely on paper-based tracking procedures, using forms and spreadsheets to monitor and manage critical assets. That is not sustainable, especially for Federally funded agencies that typically store seven years’ worth of data. Paper-based systems are also inefficient and costly. Trying to find a specific document quickly in the case of a safety inspection or funding review can be nearly impossible. This manual process is susceptible to inaccuracies and risk. Without a system in place to unify how data is recorded and analyzed, each inspector can have his or her own way of describing things. The resulting information is open to interpretation, enabling systematic irregularities and human error.

Saying Goodbye to Siloes

There has been exciting innovation in using wireless sensors attached to crucial parts of assets–including engines, brakes, and batteries. Those sensors then feed performance information directly into a centralized system. Thresholds and parameters are preset, while APIs collect and process the data into workable and actionable information.

Aside from eliminating error-prone and time-consuming manual processes, deploying this system in the cloud streamlines and integrates information across the organization. This real-time data helps not only to predict, but to pinpoint maintenance needs. That means avoiding removing perfectly working assets from the system.

With intelligent, real-time data, maintenance issues are addressed only when there is an actual issue or a threshold has been crossed–reducing mechanic time and parts purchases. Lastly, this optimizes traveler communication. By monitoring the state of vehicles and scheduling maintenance accordingly, agencies can keep riders informed about transport schedules and locations.
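A minimal sketch of that threshold-driven flow follows, with invented metrics and limits standing in for whatever an agency actually monitors. It is an illustration of the pattern only, not Infor EAM's data model or API.

    # Compare sensor readings against preset thresholds; only out-of-range values
    # generate a work item. Metric names and limits are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Reading:
        asset_id: str
        metric: str
        value: float

    THRESHOLDS = {                       # (minimum, maximum) acceptable values
        "brake_pad_mm": (4.0, None),
        "battery_voltage": (11.8, 14.8),
    }

    def needs_maintenance(r: Reading) -> bool:
        """Flag a reading only when it crosses its preset threshold."""
        low, high = THRESHOLDS.get(r.metric, (None, None))
        return (low is not None and r.value < low) or (high is not None and r.value > high)

    def triage(readings):
        """Turn out-of-range readings into actionable work items."""
        return [f"Open work order: {r.asset_id} {r.metric}={r.value}"
                for r in readings if needs_maintenance(r)]

    print(triage([Reading("bus-42", "brake_pad_mm", 3.2),
                  Reading("bus-42", "battery_voltage", 12.6)]))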

That is what it comes down to: living in a connected society, where immediacy and constant access to information are the norm. Technology is here to bring our aging infrastructure systems into the digital world, and our antiquated maintenance systems are a smart place to start.


Kevin Price, Technical Product Evangelist and Product Strategist, Infor EAM



DevOps in Government: Balancing Velocity and Security

The Federal government isn’t known for its progressive approach to IT infrastructure, and agencies aren’t usually early tech adopters. Yet, agencies are increasingly deploying cutting-edge DevOps methodologies to achieve agility and reduce operating costs.

The Department of Homeland Security, General Services Administration, Environmental Protection Agency and Veterans Affairs are among those breaking the mold. They’re modernizing IT infrastructures, and taking strong steps forward in the digital transformation journey.

But that doesn’t come without risk. Several government agencies and organizations–including the National Security Agency (NSA), the Pentagon, and the Republican National Committee–have experienced firsthand what can go wrong when environments aren’t properly secured.

The NSA, for example, made headlines late last year when it left top secret data publicly accessible to the world in an Amazon Web Services (AWS) storage bucket. A simple misconfiguration attributed to human error was to blame.

The truth of the matter is that DevOps tools often have interfaces designed for human users, and misconfigurations are all too easy and all too common. Some of the most notable breaches can be traced back to misconfigurations at the perimeter, making it all the more important that security controls are implemented across identities and environments.
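
Checks for this class of misconfiguration are straightforward to automate. The sketch below uses the AWS boto3 SDK to flag buckets whose ACLs grant access to public groups. It assumes valid AWS credentials with permission to list buckets and read ACLs, and it is only an illustration, not a description of how the NSA exposure was found; a real audit would also inspect bucket policies and account-level public access block settings.

```python
import boto3

# Grantee URIs that mean "anyone on the internet" or "any AWS account".
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def publicly_readable_buckets() -> list:
    """Return names of buckets whose ACLs grant access to public groups."""
    s3 = boto3.client("s3")
    exposed = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        acl = s3.get_bucket_acl(Bucket=name)
        for grant in acl["Grants"]:
            grantee = grant.get("Grantee", {})
            if grantee.get("URI") in PUBLIC_GRANTEES:
                exposed.append(name)
                break
    return exposed

if __name__ == "__main__":
    for name in publicly_readable_buckets():
        print(f"WARNING: bucket '{name}' is publicly accessible via its ACL")
```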

Introducing Risk Through An Expanded Attack Surface

As agencies adopt new cloud and DevOps environments, they expand their attack surfaces, creating heightened levels of risk. To mitigate risk against internal and external threats, agencies need to continuously monitor privileged account sessions across every aspect of their network–including DevOps.

The DevOps pipeline comprises a broad set of development, integration, testing, and deployment tools, people, and resources, so it only makes sense that the attack surface grows alongside the IT network. That growth is driven primarily by the increase in privileged account credentials and secrets created and shared across interconnected access points. Agencies need to secure these non-human identities just as they would a human identity: robotic actors can be compromised, and they need access controls just like their human counterparts.

That’s not always such an easy feat, however. The sheer scale and diversity of the DevOps ecosystem can make security challenging for three main reasons:

  1. Each development and test tool, configuration management platform and service orchestration solution has its own privileged credentials, which are usually separately maintained and administered using different systems, creating islands of security.
  2. Secrets (passwords, SSH keys, API keys, etc.) used to authenticate exchanges and encrypt transactions are scattered across machines and applications, making them nearly impossible to track and manage.
  3. Developers often hard-code secrets into executables and scripts, leaving the Federal government vulnerable to attack and to the exposure of confidential data when those secrets are stolen (a safer pattern is sketched after this list).
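
As a minimal sketch of the safer alternative to item 3 above, an application can pull its credential from the runtime environment, where a secrets manager or CI/CD integration injects it at deploy time; the variable name below is hypothetical:

```python
import os

def get_db_password() -> str:
    """Fetch the database credential from the environment at runtime.

    A secrets manager integration is assumed to inject APP_DB_PASSWORD at
    deploy time, so nothing sensitive lives in source control or the binary.
    """
    password = os.environ.get("APP_DB_PASSWORD")
    if password is None:
        raise RuntimeError(
            "APP_DB_PASSWORD not set; is the secrets manager integration configured?"
        )
    return password

# The anti-pattern the list above warns about (do NOT do this):
# DB_PASSWORD = "P@ssw0rd123"  # hard-coded secret, visible to anyone with the code
```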

Although security can be a major pain point when it comes to DevOps implementation, not all is lost. Government agencies have the potential to achieve both velocity and security. The answer lies in secrets management and collaboration.

Lifting the Curtain on Secrets Management

Secrets are integral to the DevOps workflow, but their proliferation across IT environments can have unintended, potentially catastrophic consequences if exploited by attackers.

A secrets management solution can help prevent that from happening. By implementing a tool that can seamlessly connect with DevOps tools and other enterprise security solutions, Federal agencies can get a better view of unmanaged, unprotected secrets across their networks, while still meeting important compliance regulations.
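
Part of gaining that visibility is discovering secrets already scattered through code and configuration. The sketch below shows the general idea with a naive regex scan over a source tree; the patterns are deliberately rough, and commercial secrets-discovery tools use far more sophisticated detection:

```python
import re
from pathlib import Path

# Very rough patterns for things that look like hard-coded credentials.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "password_assignment": re.compile(
        r"password\s*[:=]\s*['\"][^'\"]{6,}['\"]", re.IGNORECASE
    ),
}

def scan_tree(root: str) -> list:
    """Walk a source tree and report (file, pattern_name) hits."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip it
        for name, pattern in PATTERNS.items():
            if pattern.search(text):
                findings.append((str(path), name))
    return findings

if __name__ == "__main__":
    for file, kind in scan_tree("."):
        print(f"Possible unmanaged secret ({kind}) in {file}")
```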

By prioritizing secrets management, Federal agencies can secure and manage the secrets used across human and non-human identities while still achieving superior DevOps agility and velocity.

Eliminating Friction and Prioritizing Collaboration

Agencies and organizations alike often fail to make security easy for DevOps practitioners. Not only does that cause friction, it creates opportunity for failure.

Developers aren’t security practitioners, nor should they be expected to be. They’re responsible for features and functionality, not for figuring out how to manage and secure the credentials those assets depend on.

With that being said, it’s essential that DevOps and security teams be tightly integrated from the outset. This collaborative approach will help build a scalable security platform that is constantly improved as new iterations of tools are developed, tested and released.

Implementing and securing DevOps processes can seem daunting, but that is no reason to stick with business as usual and avoid change. When it comes to DevOps, the benefits far outweigh the risks, provided those risks are managed properly.

That’s why it’s so important that agencies prioritize secrets management and collaboration to protect every aspect of their network. Only then will they be able to achieve security and velocity.

Elizabeth Lawler is vice president of DevOps security at CyberArk. She co-founded and served as CEO of Conjur, a DevOps security company acquired by CyberArk in May 2017. Elizabeth has more than 20 years of experience working in highly regulated and sensitive data environments. Prior to founding Conjur, she was chief data officer of Generation Health and held a leadership position in research at the Department of Veterans Affairs.

It’s Not About the Machines – How IoT, AI, and Massive Automation Maximize Human Potential

There are currently more than 8 billion connected devices on the planet, and by 2031 the number of devices will grow to more than 200 billion.* While the federal government is in the early stages of adopting the Internet of Things (IoT) and Artificial Intelligence (AI), it is increasingly evident these technologies will create new opportunities to maximize human potential and modernize federal missions.

Agencies are already making demonstrable progress with emerging technology. Consider how and where IoT is in use. Federal buildings are equipped to maximize energy efficiency and employee productivity using a variety of sensors and monitors. Fleet telematics monitor the location and performance of vehicles in the field and automatically schedule service as needed. Smart devices automate agricultural data collection and track how public transportation systems handle peak travel times. And, the Department of Defense uses IoT technologies to track military supply levels, battlefield conditions, and even soldiers’ vital signs, activities, and sleep quality.

When IoT is paired with automation and AI, the potential is even more transformative – a government that can fully leverage the vast amount of data at its fingertips, in real time. Dell’s research partners at the Institute for the Future (IFTF) recently noted that we’re entering the next era of human machine partnership. AI, IoT, and automation will empower the workforce by efficiently choosing the most important information and enabling teams to quickly make better decisions that reduce costs and improve service to the citizen.

AI will also open new doors for humans, as we will need IT professionals proficient in AI training and parameter-setting, and many new roles that we have not yet imagined.

Ray O’Farrell, general manager of Dell’s new IoT division, says: “Harnessing the full potential of IoT requires adding intelligence to every stage. Smarter data gathering at the edge, smarter compute at the core, and deeper learning at the cloud. All this intelligence will drive a new level of innovation.”

Federal agencies need this innovation as they work to overcome the limits of legacy technology and meet changing citizen and employee expectations around responsive services, access to information, and many other areas. Over the next several years, the human-machine interface will evolve to the point where technology becomes a seamless extension of its users, rather than merely a functional tool. The real potential for IoT and AI is not to replace human beings, but to free human beings to do the strategic thinking and planning that has taken a back seat to managing the nuts and bolts of technology and federal missions.

By: Cameron Chehreh, Chief Operating Officer, Chief Technology Officer & VP, Dell EMC Federal

 

*Michael Dell, Dell World 2017

The Case for Innovation: Let’s Broaden Other Transaction Authority

The recent Pentagon announcement of a $950 million (ceiling) production contract awarded to nontraditional defense contractor REAN Cloud has sent reverberations through the acquisition community. This Production Other Transaction (OT) agreement follows a similar, although smaller, production award by the Defense Innovation Unit Experimental (DiUX) to the cybersecurity firm Tanium (full disclosure: Tanium is a client of mine).

These Production Other Transaction Authority (OTA) agreements (they are not technically “contracts”) have created quite a buzz in the acquisition community. The awards flowed out of the prototype development work of REAN Cloud and Tanium with the DiUX office in Silicon Valley. Both agreements reflected successful proof-of-concept work that the companies had undertaken with the Department of Defense (DoD) under specific prototyping procurement authorities that have been largely under the radar for many years but were recently expanded in the annual defense policy bill, the NDAA.

Other Transaction agreements are an interesting procurement approach, not so much for what they are as for what they are not. The key attribute of Other Transaction Authority is that OT agreements are not Federal Acquisition Regulation (FAR)-based procurements.

Legally, an OTA-based agreement is not a contract, grant, or cooperative agreement as those terms are usually understood, and there is no statutory or regulatory definition of "other transaction." OTA agreements fall outside the cumbersome and lengthy procurement processes associated with the FAR.

Folks who claim this is a "new thing" are mistaken. OT agreements have been around for a while, having originated at NASA as so-called "Space Act" agreements. They are limited in terms of who can use them–only those agencies that have been granted OTA may engage in other transactions (currently mostly DoD and NASA, although the Department of Homeland Security has had OTA on an intermittent basis for some time). Their core purpose is to accelerate these agencies’ access to, and adoption of, innovative technologies from companies that otherwise would have no interest in dealing with the red tape and compliance burden created by a standard procurement relationship.

Because they are "outside" the FAR, OT agreements do not carry cumbersome oversight and audit requirements such as those imposed by the Truth in Negotiations Act, certified cost and pricing data, or an expensive Cost Accounting Standards (CAS)-compliant financial system. CAS-compliant financial systems can cost a company millions of dollars to implement and maintain–and are therefore a significant, if not potentially fatal, barrier to government market entry for startups and small, innovative companies. These requirements tend to reinforce the "legacy advantage" of large traditional contractors, who can afford to staff them with large teams of accountants and lawyers.

OT agreements are also not protestable, at least not at the Government Accountability Office (GAO). The inability to protest an award at GAO is undoubtedly a factor in the appeal of OTAs to agency procurement officials.

Freed from adhering to the FAR, OTA agreements also allow an agency to tailor the engagement terms to the needs and circumstances of specific requirements. For example, if DoD wants to spur the development of new jetpacks, an OT agreement would allow a customized set of deal terms and intellectual property allocations that the FAR might otherwise prohibit. And those FAR terms that are irrelevant or even obstacles to developing jetpacks need not apply.

But perhaps most importantly, my Silicon Valley clients tell me that, unlike the FAR, OTA agreements do not require them to surrender control of their intellectual property rights, as is common practice under standard FAR-based procurements.

All these attributes make OTA agreements an ideal “toe in the water” for government market engagement for cloud and startup companies from Silicon Valley (or elsewhere). It’s simply a fact that many innovative start-ups will not remotely consider contracting with the government given the compliance risk, oversight burdens and loss of control over key assets, especially their intellectual property.

Traditional government contractors often complain that OT agreements are an “end run” around the FAR. And it is true that OT agreements have been abused in the past–to the point where the Government Accountability Office has expressed concerns regarding their use.

I am not one who believes that the FAR should be lightly disregarded. Recent work by such luminaries as the Section 809 Panel holds great promise for streamlining FAR-based procurements, with, for example, a reinforcement of the use and value of commercial item procurements and recommendations to open up internet purchasing for certain types of government commodity buying. We will watch with enthusiasm to see how these reforms play out.

But until those halcyon days of a streamlined FAR arrive, it is simply a fact that the FAR creates a voluminous tangle of mandates and requirements that takes entire staffs of experts to navigate. Regardless of where you stand on the value and necessity of the FAR, there is no question that it hinders access to, and adoption of, innovation and technologies from the small companies that are the source of much of the creativity for which America is famous.

For OT agreements, flexibility is the key. The best way to prevent abuse of OTs is through skilled oversight and management by DiUX and the government customer. Iterative, agile procurement approaches that curtail a program that goes off the rails–before millions are wasted–might be one way to address this concern.

Perhaps OT agreements should be restricted to short-term deliverables of minimum viable products in structured milestone segments? In that event, OT program abuse could be curtailed quickly, before money is wasted.

OT production agreements may require a new, different form of program oversight and management. Let’s talk about that. But in the final analysis, we believe that OTA agreements should be recognized not as an "end run" around normal procurement methods, but as an important supplement to regular acquisition procedures.

If new oversight standards are developed, we would go so far as to recommend expanding OT production authorities to the rest of government, and specifically the 24 CFO Act agencies. Sure, there is potential for abuse. But it is not as if the FAR has eliminated waste and fraud in government contracting.

Richard Beutel is the founder of Cyrrus Analytics. Rich is a nationally recognized expert in IT acquisition management and cloud policy with 25 years of private sector experience and more than a decade on Capitol Hill working on procurement issues.

 

MGT: A Launchpad for Tech’s Tomorrow

The Modernizing Government Technology Act (MGT) is moving forward. Signed into law in December as part of the National Defense Authorization Act, MGT now has implementation guidance: the White House just published guidelines for agencies that want to access the law’s $500 million central revolving capital fund. Agencies will submit proposals to a board of experts, which will evaluate them based on public impact, feasibility, outcomes, and security.

The law represents the correct prioritization for the federal government, where 75 percent of the $80 billion spent on IT goes toward maintaining outdated systems. With more than $7 billion in technical debt to eliminate through modernization projects, the $500 million fund represents only a starting point for the work that needs to be done.

The bill’s co-sponsors – Rep. Gerry Connolly (D-Va.) and Rep. Will Hurd (R-Texas) – are to be commended for their vision and tenacity in pushing the bill over the finish line. MGT puts a foundation in place for agencies to enact real change and transform their IT systems.

The ability to reprogram money saved through IT efforts will have a massive impact on agencies that have traditionally shied away from making bold moves that would help streamline tech and save money. Prior to MGT, money saved was money lost.

Agencies are now rewarded, rather than penalized, for taking steps to modernize their systems. Subsequently, they can move quickly to effectively improve their cyber security postures, accelerate service delivery, and save taxpayers’ money.

Agencies can start to establish a strong framework for digital transformation and begin to achieve some of the goals set forth in the Cybersecurity Executive Order, the White House’s IT Modernization Report, and the Data Center Optimization Initiative. In short, agencies need funding and budget planning authority to truly transform their IT systems, and MGT supports this much-needed progress.

As they set off on that journey, federal CIOs need two things: a comprehensive inventory of their IT assets, and strong collaboration between mission functions and the IT teams. IT modernization will not happen overnight. Agencies will still need to maintain legacy systems for some time, as they modernize data centers. Technologies like virtualization, flash, and software-defined storage will play important roles throughout this transition while helping agencies meet key mandates.

To accelerate modernization, agencies will continue to move more and more systems to the cloud as data volumes simultaneously skyrocket in the age of artificial intelligence and the Internet of Things. While there are very real efficiency, cost savings, and speed benefits with the shift to cloud, the evolution can be complicated. Flexible, software-based solutions help streamline and support tomorrow’s technology innovation while enabling agencies to continue to meet today’s mission objectives. MGT is the catalyst for much of this change – empowering agencies to make decisions that will best support their transformation. Now the real work begins.

By: Steve Harris, Senior Vice President and General Manager, Dell EMC Federal
