MeriTalk sat down with Jeff Grunewald, Force 3’s data center practice manager, to talk all things data – from migrating to the cloud to crucial monitoring protocols. With more than 35 years in the industry, Jeff knows a thing or two about data centers. Here, he shares his insight on migrating data to the cloud and the critical importance of maintaining monitoring protocols throughout the transition.
MeriTalk: How are Federal technology teams currently monitoring their on-premises data and systems? Why is this important?
Grunewald: Federal technology teams are using third-party toolsets that provide a single-pane-of-glass view of an environment. Agencies generally tailor these toolsets to monitor and identify what’s important to them.
It’s important because agencies want to know the health of their environment. These teams can use their monitoring tools to troubleshoot and identify problems more quickly. They can use them for predictive analysis. If they know what their standard usage looks like, they can identify anomalies.
Agencies can also use those tools for capacity planning, which is important when going to the cloud. In an on-premises environment, teams need a lot of lead time to add capacity, and they can’t give it back. In the cloud, they can add capacity almost instantaneously. And, if they’re doing capacity planning and monitoring correctly, they can also see when they no longer need capacity and give it back. That’s the beauty of the cloud.
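The anomaly detection Grunewald describes – knowing standard usage well enough to spot deviations – can be sketched with a simple z-score check against a baseline. This is an illustrative sketch, not any particular monitoring product’s method; the metric names and threshold are assumptions.

```python
from statistics import mean, stdev

def is_anomaly(baseline, observed, threshold=3.0):
    """Flag a reading that deviates from baseline usage by more than
    `threshold` standard deviations (a basic z-score check)."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Hypothetical hourly CPU utilization (%) collected by a monitoring tool.
baseline = [42, 45, 44, 43, 46, 41, 44, 45, 43, 44]
print(is_anomaly(baseline, 44))  # typical reading -> False
print(is_anomaly(baseline, 91))  # spike -> True
```

Real toolsets use far more sophisticated models, but the principle is the same: a documented baseline turns a raw number into a signal.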
MeriTalk: Agencies are moving to the cloud to meet Cloud Smart mandates. In many cases, they are years behind the private sector. What are some lessons agencies can take from private-sector organizations to make their transitions easier?
Grunewald: Agencies cannot move everything at once when they’re migrating. They have to migrate in manageable chunks. To do that, they’ve got to look at their inventory and discover everything that they have in their portfolio – physical hardware as well as applications. They want to determine the actual usage of the applications – it’s not unusual to find that between 10 and 20 percent of their enterprise IT is no longer being used. They can retire things they’re not using anymore, which reduces costs and minimizes security concerns that come with obsolete applications. Compiling a complete inventory will help identify everything that’s important and everything that needs removing.
Agencies should always do their homework to see if the application or dataset can benefit from the move. They can’t assume that moving will necessarily be a good thing. If an application has a large memory requirement, for example, it might not be a good candidate to move to the cloud without modifying the application. In that case, the agency should consider whether the savings from moving to the cloud justify the cost of modifying the application. Agencies should also consider availability vs. control. Moving an application to the public cloud can provide greater availability, but keeping an application on-site or moving it into a private cloud can give the agency more flexibility and control over the whole environment.
MeriTalk: Discontinuing monitoring protocols when migrating to the cloud can significantly impact agency budgets and compromise performance. Can you share an example of an agency that faced these issues and how Force 3 helped?
Grunewald: The key issue is that the team responsible for migrating may not include a voice from the operational side that’s responsible for properly executing protocols. Currently, we’re working with an agency that has requirements to move analytical and reporting applications to the cloud. As with most applications running in an on-prem environment, the development team knew the limits of the available resources, and they tuned the application for that environment. The users learned to schedule their use of the application during specific times so that they wouldn’t overload it. When the agency began migration and testing in the cloud, they found that everybody could use the application at once, because the compute resources were no longer constrained.
When we asked the agency how they intended to ensure that usage wouldn’t exceed their budget, they didn’t have an answer because they didn’t know what to look for. We provided some examples of monitoring information in their current environment, and through this exercise, we discovered they weren’t fully utilizing the tools. That’s a common occurrence. For example, we all use a word processor, and if we use 15 percent of the features, we’re considered a power user even though we’re not using the other 85 percent of the features available to us. Monitoring is the same thing – agencies are monitoring for what’s in front of them, and sometimes they don’t know all of the other capabilities they have.
We suggested that the agency examine needs in all of their environments – their current environment and their hybrid cloud environment in the future. This analysis is an example of how important it is to know the usage so that monitoring is implemented to help agencies make the best use of their budget.
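The budget question above comes down to watching the run rate, not just the bill at month’s end. A minimal sketch, with entirely illustrative numbers and thresholds, might project month-end spend from spend to date and raise an alert when the projection exceeds budget:

```python
def budget_alert(spend_to_date, monthly_budget, day_of_month, days_in_month=30):
    """Project month-end cloud spend from the run rate so far and
    compare it against the monthly budget. All figures are illustrative."""
    projected = spend_to_date / day_of_month * days_in_month
    return projected > monthly_budget, projected

# Halfway through the month, 60% of the budget is already spent.
over, projected = budget_alert(spend_to_date=6_000,
                               monthly_budget=10_000,
                               day_of_month=15)
print(over, projected)  # True 12000.0
```

A straight-line projection is crude – real usage is rarely uniform – but even this simple check gives an agency something to look for before the invoice arrives.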
MeriTalk: How can agencies apply the principles of on-prem monitoring to their cloud environments?
Grunewald: Agencies should think about what they’re doing and why they’re doing it, and what effect changes would have on operations, troubleshooting, security, or strategic and capacity planning. They may be able to monitor with the tools they have, or they may need new tools that take the cloud into account. When they’re monitoring public clouds, they may be using more than one vendor, so they want to look for tools that can be used across all of their environments. They can measure and monitor holistically and perhaps move workloads between environments.
For example, NetApp OnCommand Insight provides monitoring, capacity planning, storage resource management, and other capabilities across multiple vendors and in virtual environments on a locally installed appliance. This is basically the same appliance that NetApp uses for its cloud-based solution – Cloud Insights – but on a larger scale, and it can also manage the entire hybrid cloud infrastructure. That means agencies now have a choice: keep their hybrid cloud monitoring capabilities on prem, or take them to the cloud. In taking monitoring to the cloud, they can also repurpose or retire the on-prem infrastructure.
MeriTalk: What should agencies look for in monitoring tools? Can the same tools and processes be used on-prem and in the cloud?
Grunewald: From a process perspective, the cloud is already introducing enough change, so agencies want to maintain the processes they currently have as much as possible. When they look at tools and their use cases, it’s important to ask: What tools will allow them to monitor all parts of the environment together?
The assumption is that their on-prem environment is running as part of their hybrid cloud. How can we monitor everything to see both environments and understand the differences between the two? Moving to the cloud is not a time to introduce silos. It’s time to break down silos and look holistically across everything.
MeriTalk: Federal agencies often have challenges migrating data to the cloud. What are some common missteps?
Grunewald: It’s important to take data management policies into consideration in the beginning of the migration process. This is one of the first things we encourage agencies to look at when dealing with data in the cloud. What are the policies that ensure agencies are appropriately managing and retaining the data? Even though you’re moving the data to the cloud, it’s important to remember that you still have to comply with agency and Federal data management policies.
When they’re creating and accessing data, they need to keep in mind that data in the cloud is usually charged by volume – and not necessarily for storing it, but for how they’re using it. While it may be very cheap to store data, it can be 15 to 20 times more expensive to use the data. They have to determine how often they’ll use the data. Depending on the situation, maybe they bring it on-prem while they’re accessing it, and then store it in the cloud.
If they’re not accessing the data regularly, they put it in the least expensive, available place to store it. When they need to use it, they transfer it so they’re not continually paying to access that same data. Those are the types of things we encourage customers to look for to avoid issues when migrating.
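The storage-versus-access tradeoff Grunewald describes is simple arithmetic. The sketch below uses made-up per-GB rates (not any provider’s actual pricing) where an archival tier is cheap to store but charges roughly 20x more per GB read than a hot tier:

```python
def monthly_cost(gb_stored, storage_rate, gb_accessed, access_rate):
    """Storage charges plus data-access charges for one month.
    All rates are illustrative, not real provider pricing."""
    return gb_stored * storage_rate + gb_accessed * access_rate

data_gb = 1_000
reads_gb = 500  # half the dataset read back each month

# Archival tier: very cheap storage, expensive reads.
archive = monthly_cost(data_gb, storage_rate=0.004,
                       gb_accessed=reads_gb, access_rate=0.09)
# Hot tier: pricier storage, reads effectively free.
hot = monthly_cost(data_gb, storage_rate=0.023,
                   gb_accessed=reads_gb, access_rate=0.00)
print(f"archive tier: ${archive:.2f}/month, hot tier: ${hot:.2f}/month")
```

With regular access, the “cheap” archival tier costs roughly twice as much per month as the hot tier here; reverse the access pattern and the ranking flips. That is exactly why usage frequency, not storage price alone, should drive placement.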
Of course when you are talking about data, you also have to talk about data security and the ability to properly protect agency data whether it is on prem, in the cloud, or in transit. For example, consider NetApp ONTAP, which provides a common interface to manage storage no matter where it physically resides. ONTAP also provides the security features agencies need, and expect, to protect their data anywhere. This includes things like encryption and key management; compliance with standards like HIPAA and GDPR; cryptographic shredding and sanitizing files as well as WORM (write once read many) file locking; the ability to implement ONTAP security controls in support of a zero trust environment, and multifactor authentication, which protects against the leading cause of security breaches – weak passwords.
MeriTalk: Let’s talk a little about the Bridge to the Cloud methodology. Can you dive into how it can help organizations migrating from on-prem to the cloud?
Grunewald: Bridge to the Cloud really helps with cloud migration in that it allows agencies to begin their cloud transition with a private cloud within their own environment. If they develop in a private cloud environment, they’re controlling that environment and everything in it. If they have problems, they can troubleshoot and address them. They’re able to monitor and see how things are working while they’re in control and aren’t paying to be in the public cloud. They can then assess their applications and determine what should and shouldn’t move to the public cloud. This methodology allows them to experiment with and monitor everything as they’re building out the cloud environment, so that when they move to their full hybrid cloud, they know how things are really working, and they can move them with less difficulty.
Bridge to the Cloud is, if nothing else, a guide to migration for agencies. It consists of six steps: 1) assessment, 2) prioritization, 3) roadmap, 4) optimization, 5) build, and 6) migration. I touched on the first step – assessment, which is knowing what is in the agency environment and getting a profile for each application.
Once all the applications are assessed, prioritization can take place. Considerations here include life cycle, impact on the organization, support of the mission, and even politics. One thing I recommend is that you first identify one or two components that should be quick wins – something you can point to as being done quickly and providing value. Nothing will garner support faster than success.
The third step is to create a roadmap for migration. Priorities are not the only consideration in establishing a roadmap. Application retirement, rehosting, and sometimes replacement all need to be taken into consideration. No two applications will take the same migration path, and no two agencies will do everything in the same way or the same order. This step comes down to listening, assessing, and negotiating. Whether it’s because of a mandate, for cost savings, or to enhance the agency’s missions, everyone wants to get to the same place. It just takes time and patience.
The next step is optimization. Will moving to the cloud allow an opportunity to optimize and enhance the application? Is there anything that monitoring and logging past performance has taught us that can be used to make the application more efficient or more economical?
The last two steps – build and migrate – are what most people think of when they refer to moving to the cloud. But if you don’t take the first four steps, failure is assured.
Building is where the work to convert applications to running in the cloud begins. This is also where doing the work in a private cloud environment, as I mentioned, reaps dividends. Now is also the time to utilize containerization, which will support a multi-cloud environment and allow ease of movement between platforms.
The last step is the actual migration. Once the applications are working, optimized, tested and instrumented, they are ready to be moved to their final destination. Don’t forget to test any automation, backup and disaster recovery that may be required as well.