Federal officials should craft policies that support the use of AI in cybersecurity and continue to develop the AI workforce, the Information Technology Industry Council (ITI) recommends in a March 24 report.

The report recommends that Federal officials ensure policies not only support the use of AI for cybersecurity, but also encourage the public and private sectors to incorporate AI into their threat modeling and security risk management activities. It also recommends policies that develop an AI workforce that is both skilled and diverse.

“Many countries are working to harness the benefits of AI, while considering various approaches to address societal challenges that may emerge,” ITI says in its report. “To innovate and prosper responsibly and securely, governments need to make strategic decisions regarding AI research and development, regulation, and standards.”

The report pushes for policies that allow AI to be incorporated into cybersecurity practice and that support using “published algorithms as the default cryptography approach.” The latter would limit access to encryption keys used in AI systems and build on the trust those algorithms have already earned globally.
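
The report stays at the policy level, but a minimal sketch can show what defaulting to a published, widely vetted algorithm looks like in practice. The example below uses AES-256-GCM via the open-source Python cryptography package; the library, payload, and parameter choices are illustrative assumptions, not recommendations from ITI.

```python
# Illustrative only -- not from the ITI report. Shows reliance on a
# published, widely vetted algorithm (AES-256-GCM) through the
# open-source Python "cryptography" package.
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # 256-bit key for AES-GCM
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # 96-bit nonce; never reuse with the same key

# Encrypt and authenticate a payload, then confirm it round-trips.
ciphertext = aesgcm.encrypt(nonce, b"sensitive telemetry", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"sensitive telemetry"
```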

On the privacy side, one specific recommendation is to craft privacy regulations that allow AI to use personal information, such as IP addresses, to pinpoint “malicious activity.”

“Defensive cybersecurity technology can use machine learning and AI to more effectively address today’s automated, complex, and constantly evolving cyberattacks,” the report says.
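
The report does not name specific tools or models, but a hypothetical sketch can make the idea concrete. The example below trains an unsupervised anomaly detector (scikit-learn's IsolationForest, chosen here for illustration) on synthetic per-IP traffic features to flag unusual activity; the features, thresholds, and data are all invented for demonstration.

```python
# Illustrative only: a toy anomaly detector over synthetic per-IP traffic
# features (requests per minute, bytes sent, distinct destination ports).
# This is NOT from the ITI report; it sketches the kind of machine-learning-
# on-network-telemetry approach the report describes.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" traffic: modest request rates, small payloads, few ports.
normal = np.column_stack([
    rng.normal(20, 5, 500),       # requests per minute
    rng.normal(2_000, 500, 500),  # bytes sent per minute
    rng.integers(1, 5, 500),      # distinct destination ports
])

# Synthetic "suspicious" traffic: high rates, large payloads, port scanning.
suspicious = np.column_stack([
    rng.normal(300, 50, 10),
    rng.normal(50_000, 10_000, 10),
    rng.integers(50, 200, 10),
])

X = np.vstack([normal, suspicious])

# Fit an unsupervised anomaly detector; predictions of -1 mark flagged flows.
model = IsolationForest(contamination=0.02, random_state=0).fit(X)
labels = model.predict(X)
print(f"Flagged {np.sum(labels == -1)} of {len(X)} flows as anomalous")
```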

Growing and upskilling the AI workforce already has support on the Hill, with Reps. Jim Langevin, D-R.I., and Elise Stefanik, R-N.Y., calling workforce development a priority for the Fiscal Year 2022 National Defense Authorization Act earlier this week. ITI recommends that policymakers focus on modernizing the recruitment and hiring process and on establishing skilling and re-skilling programs informed by industry.

The report also notes that AI is not just a STEM function and that “the best way to ensure access to an AI workforce is to invest broadly across all relevant disciplines and teach flexible skills and problem solving from early childhood education.”

Other policy recommendations in the report center on how policymakers should approach regulation broadly, facilitate public understanding of and trust in AI, and engage globally with the AI community.

Lamar Johnson is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.