FITARA Nuts and Bolts: Inside the Teacher’s Gradebook

The House Oversight and Government Reform (OGR) IT subcommittee followed up last week’s release of the sixth FITARA Scorecard (Scorecard 6.0) by releasing a second, more detailed scorecard. It provides insight into each of the categories of FITARA scoring, with methodology, metrics, calculations, and detailed data points on just how well each of the 24 agencies fared.

This is the first time OGR has provided insight into how it grades agencies’ IT papers since the release of the first FITARA scorecard in 2015–and yes, the methodology has changed a lot.

Dave Powner, director of IT management issues at the Government Accountability Office (GAO), told MeriTalk that this is the document his team shares with each agency when they meet to discuss how the agency fared. By releasing the science behind the new scorecard, the FITARA overseers have provided the public with a deeper dive into the nuts and bolts of the process–the numbers that back the letters on the card.

So, we’re setting out to make sense of it all. Ever been puzzled by FITARA’s color-coded madness? Wondering why your agency’s efforts still aren’t making the grade? Curious just how much your agency improved? We’re giving you all the information by breaking down the methodology for each category.

But First–What’s New?

MeriTalk has reported on some of the changes since Scorecard 5.0, but here’s a refresher.

The big news: OGR added a Modernizing Government Technology (MGT) Act category to track progress in establishing agency working capital funds. We’ll dive into the methodology shortly, but know this: the new category was immediately factored into scoring, and agencies still have a lot of ground to cover. No agency has an MGT-specific working capital fund up and running yet.

Another new category appears on the scorecard, in preview mode–but it’s not yet factored into scoring. The Federal Information Security Modernization Act (FISMA) is the government-wide cybersecurity legislation enacted in 2014 to promote automated security tools to monitor agency networks.

This is where the DHS-led, massively important but not-quite-fully-adopted Continuous Diagnostics and Mitigation (CDM) Program gets its legs. Again, the methodology appears below, but take note: cyber monitoring will be a big part of the next FITARA scorecard, and with no agency scoring higher than a C in the preview, agencies need to get moving.

In an ongoing effort to promote CIO authorities, another new change drops agencies a full letter grade if their CIO does not report directly to the agency’s secretary or deputy secretary.

OGR also said at its hearing last week that MEGABYTE Act scoring–whether agencies had complete software licensing inventories–takes on new importance in this card, after being previewed in Scorecard 4.0 and counted for the first time in Scorecard 5.0. Now, let’s get our hands dirty: on to the nuts and bolts.

Agency CIO Authority Enhancement (Incremental)

This category tracks whether agency CIOs have certified that IT investments are implementing incremental development, a process where software projects deliver functionality every six months. OGR noted projects that use a “big-bang” approach, waiting years to deliver big functionality, are more often doomed to failure.

Agencies are graded on the percentage of their projects that display incremental development. The data comes from feeds on the Office of Management and Budget (OMB) IT Dashboard.

This is the category where the calculation is perhaps simplest, because it maps directly to school grades: 90 to 100 percent earns an A, 80 to 89 percent a B, and so on. Perhaps that’s made it easier for agencies to wrap their heads around? Fifteen agencies received an A in Scorecard 6.0–the most in any category–including eight that scored a perfect 100 percent.
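As a rough sketch, that straight school-scale mapping might look like the following. The cutoffs below follow the standard 90/80/70/60 scale the scorecard describes; the committee’s exact rounding rules for borderline percentages aren’t published, so treat this as illustrative.

```python
def incremental_grade(percent_incremental: float) -> str:
    """Map an agency's share of incrementally developed IT projects
    to a letter grade, using the standard school scale (assumed)."""
    if percent_incremental >= 90:
        return "A"
    if percent_incremental >= 80:
        return "B"
    if percent_incremental >= 70:
        return "C"
    if percent_incremental >= 60:
        return "D"
    return "F"
```

Under this sketch, the eight agencies reporting 100 percent incremental development would land squarely in the A band.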

Transparency and Risk Management (Dashboard)

This category tracks the percentage of major IT investments that agency CIOs have assessed for risk and reported to OMB, so that the information can be displayed on OMB’s IT Dashboard.

In 2016, GAO found that agencies were underreporting the level of risk in nearly two-thirds of IT investments reviewed. To create greater transparency, this category asks CIOs to codify what’s at stake in each project, from a dollars-and-cents perspective.

Calculations here map somewhat to class rank, if you will. The five agencies with the most reported risk–investments rated “red” or “yellow,” by dollar–receive an A. The next five agencies, a B; the next, a C, and so on. With 24 agencies to grade, just four received an F. So, you might say, the worse it looks, the better it is.
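The class-rank scheme described above can be sketched in a few lines of Python. The agency names and dollar figures here are hypothetical; the only logic taken from the scorecard is the ordering by reported at-risk dollars and the groups-of-five grade bands.

```python
def class_rank_grades(risk_dollars_by_agency: dict[str, float]) -> dict[str, str]:
    """Rank agencies by reported at-risk investment dollars (red/yellow,
    highest first) and hand out letter grades in groups of five:
    top five get an A, next five a B, and so on down to F."""
    grades = "ABCDF"
    ranked = sorted(risk_dollars_by_agency,
                    key=risk_dollars_by_agency.get, reverse=True)
    return {agency: grades[min(i // 5, 4)] for i, agency in enumerate(ranked)}
```

With 24 agencies, the groups of five leave exactly four in the F band–matching the count on Scorecard 6.0.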

Portfolio Review (PortfolioStat)

This category, also known as PortfolioStat, tracks agencies’ processes for reviewing their IT portfolios to increase efficiency and reduce waste and duplication. OMB helped develop the process, standardized metrics around cost savings, and reports agency cost savings to Congress each quarter.

A little math is required in the methodology: take the agency’s total PortfolioStat savings and divide by its total IT budget for the three most recent fiscal years. For reference: 23.1 percent–very good; 0.5 percent–not so much.

Like Transparency and Risk Management on the OMB Dashboard, this category uses class rank: top five get A’s, and so on.
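The savings ratio itself is simple arithmetic, sketched below with made-up numbers (billions of dollars); the real inputs come from OMB’s quarterly reporting.

```python
def portfoliostat_savings_pct(total_savings: float,
                              it_budgets_3yr: list[float]) -> float:
    """Total PortfolioStat savings divided by the agency's total IT
    budget over the three most recent fiscal years, as a percentage."""
    return 100 * total_savings / sum(it_budgets_3yr)
```

For example, a hypothetical agency with $1.5B in savings against budgets of $2.0B, $2.2B, and $2.3B would land near the 23 percent mark; those percentages are then class-ranked the same way as the Dashboard category.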

Data Center Optimization Initiative (DCOI)

OMB’s Data Center Optimization Initiative has made consolidating, scaling, and shuttering Federal data centers a strong government priority. Thousands of data centers have been closed in pursuit of billions in potential government-wide savings–with a smaller environmental footprint as an added benefit.

To track progress in FITARA, half of an agency’s DCOI grade relates to whether it met its planned savings goal determined with OMB, and half relates to performance in five key metrics: energy metering, power usage effectiveness, virtualization, server utilization/automation, and facility utilization. Agencies also had their grades adjusted up if they closed more than 50 percent of their total data centers.
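The half-and-half split plus the closure bonus could be modeled roughly as follows. The 50/10-point weights and grade cutoffs here are assumptions for illustration–the committee publishes the split and the five metrics, not a point formula.

```python
def dcoi_grade(met_savings_goal: bool, metrics_met: int,
               closed_over_half: bool) -> str:
    """Sketch: half the score from meeting the OMB savings goal, half
    from the five optimization metrics (10 assumed points each), with a
    one-letter bump for closing more than 50% of data centers."""
    score = (50 if met_savings_goal else 0) + 10 * metrics_met
    grade = next((g for cut, g in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]
                  if score >= cut), "F")
    if closed_over_half and grade != "A":
        grade = "ABCDF"["ABCDF".index(grade) - 1]  # adjust up one letter
    return grade
```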

DCOI Detailed Scorecard Shows Agency Optimization Metrics and % of Data Centers Closed.

Software Licensing (MEGABYTE)

Under the MEGABYTE Act, OMB issued a directive to CIOs to establish a comprehensive, regularly-updated inventory of their software licenses to reduce duplicative contracts and make cost-effective decisions.

Scoring is pretty black and white. If you have a comprehensive software inventory, you get a C. If you actively use that inventory to make cost-effective decisions, you get an A. If you don’t have one at all, an F. Determinations are made in meetings with GAO regarding follow-up activities once the inventories are created. OGR says each of the 14 agencies that failed has efforts underway to create and use an inventory.
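That black-and-white rubric reduces to a short conditional, sketched here exactly as the scorecard describes it:

```python
def megabyte_grade(has_inventory: bool, uses_for_decisions: bool) -> str:
    """MEGABYTE scale: no inventory is an F, a comprehensive software
    license inventory earns a C, and actively using that inventory to
    make cost-effective decisions earns an A."""
    if not has_inventory:
        return "F"
    return "A" if uses_for_decisions else "C"
```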

MGT

The MGT Act provides agencies the opportunity to establish working capital funds where one-year savings can be reprogrammed and reinvested to fund three-year IT transformation initiatives, including sunsetting legacy systems and responding to evolving threats in cyberspace.

Agencies received an A if they have an MGT-specific working capital fund with a CIO in charge of decision-making. No agency has gotten there yet. In good faith, though, agencies received a B if their efforts to implement one in 2018 or 2019 are deemed to be sincere and in progress. Three agencies are on the board.

Agencies still earned credit if another method was in place. Those with department-level working capital funds received a C; those with any other sort of IT-related funding mechanism, a D; otherwise, an F.
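The MGT ladder as described boils down to a first-match-wins sequence of conditions, sketched below (the boolean inputs are simplifications of what are really judgment calls by the graders):

```python
def mgt_grade(mgt_fund_with_cio: bool, sincere_effort: bool,
              dept_wcf: bool, other_it_funding: bool) -> str:
    """MGT ladder: an MGT-specific working capital fund with CIO decision
    authority is an A; a sincere, in-progress 2018/2019 effort, a B; a
    department-level working capital fund, a C; any other IT-related
    funding method, a D; otherwise an F."""
    if mgt_fund_with_cio:
        return "A"
    if sincere_effort:
        return "B"
    if dept_wcf:
        return "C"
    if other_it_funding:
        return "D"
    return "F"
```

Since no agency yet clears the first condition, B is currently the ceiling–and only three agencies reach it.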

FISMA

This preview of the forthcoming cybersecurity category tracks agencies’ ability to continuously monitor their networks, using commercial off-the-shelf tools to mitigate and remediate cyber threats.

The grading is split into two sections: first, the agencies’ Inspector General assessments in five key areas–the ability to identify, protect, detect, respond, and recover; and second, the agencies’ progress against cross-agency priority goal metrics–up to 10 metrics tracked quarterly as part of the President’s Management Agenda (PMA).

The grades in the two sections are averaged in order to arrive at the overall FISMA grade. It’s a good thing they didn’t count yet, as no agency scored above a C. It’s still early in the PMA lifecycle, though, and this may be just what agencies need to kickstart CDM adoption and better overall cyber hygiene.
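Averaging two letter grades implies some numeric mapping underneath. A GPA-style sketch is below; the 4.0-scale mapping and round-half-up behavior are assumptions, since the committee doesn’t publish the conversion.

```python
GPA = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}
LETTER = {v: k for k, v in GPA.items()}

def fisma_grade(ig_grade: str, cap_goal_grade: str) -> str:
    """Average the Inspector General section grade and the CAP-goal
    section grade on an assumed 4.0 scale, rounding halves up."""
    avg = (GPA[ig_grade] + GPA[cap_goal_grade]) / 2
    return LETTER[int(avg + 0.5)]  # round half up, avoiding banker's rounding
```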

Where Do We Go From Here?

Now that it’s all out in the open, agencies know exactly what to do to get their grades up. The detailed scorecard goes beyond the methodology we’ve reported here, and displays agency data in each category, going back to the first scorecard where available. If you like metrics, it’s a very colorful journey into agency modernization progress.

Join MeriTalk on July 17 when we honor the agencies on the Dean’s List. In conjunction with OGR leaders and the CIO Council, we’ll be awarding those who’ve aced the categories we’re now deeply familiar with, and sharing tips for improvements government-wide. Lessons at one agency stand to benefit all involved.

And to access MeriTalk’s wealth of FITARA resources and recaps, visit our FITARA section in the CIO Briefing Room.
