The Cybersecurity and Infrastructure Security Agency (CISA) – in partnership with the United Kingdom’s National Cyber Security Centre (NCSC) – has released guidelines to help AI developers make informed cybersecurity decisions.

The Guidelines for Secure AI System Development take a significant step forward in addressing the intersection of AI and cybersecurity. Formulated in cooperation with 21 other agencies from across the world, they are the first guidelines of their kind to be endorsed by such a broad group of governments.

The 20-page document provides essential recommendations for AI system development and emphasizes the importance of adhering to secure by design principles that CISA has long championed.

“The release of the Guidelines for Secure AI System Development marks a key milestone in our collective commitment – by governments across the world – to ensure the development and deployment of artificial intelligence capabilities that are secure by design,” CISA Director Jen Easterly said in a statement on Nov. 26.

“As nations and organizations embrace the transformative power of AI, this international collaboration, led by CISA and NCSC, underscores the global dedication to fostering transparency, accountability, and secure practices,” she said. “The domestic and international unity in advancing secure by design principles and cultivating a resilient foundation for the safe development of AI systems worldwide could not come at a more important time in our shared technology revolution. This joint effort reaffirms our mission to protect critical infrastructure and reinforces the importance of international partnership in securing our digital future.”

The guidelines are broken down into four key areas within the AI system development lifecycle: secure design, secure development, secure deployment, and secure operation and maintenance.

Secure design focuses on understanding risks and threat modeling, as well as specific topics and tradeoffs to consider in system and model design.

The secure development guidelines apply to the development stage of the AI system development life cycle, including supply chain security, documentation, and asset and technical debt management.

Secure deployment focuses on protecting infrastructure and models from compromise, threat or loss, developing incident management processes, and responsible release.

The secure operation and maintenance section of the document provides guidelines on actions particularly relevant once a system has been deployed, including logging and monitoring, update management and information sharing.

CISA notes that its new document is aimed primarily at providers of AI systems, but the organization is urging all stakeholders – including data scientists, developers, managers, decision-makers, and risk owners – to read the guidelines to help them make informed decisions about the design, development, deployment, and operation of their AI systems that use machine learning.

“We are at an inflection point in the development of artificial intelligence, which may well be the most consequential technology of our time. Cybersecurity is key to building AI systems that are safe, secure, and trustworthy,” Secretary of Homeland Security Alejandro Mayorkas said.

“The guidelines jointly issued today by CISA, NCSC, and our other international partners, provide a commonsense path to designing, developing, deploying, and operating AI with cybersecurity at its core,” Mayorkas said on Nov. 26. “By integrating ‘secure by design’ principles, these guidelines represent an historic agreement that developers must invest in, protecting customers at each step of a system’s design and development. Through global action like these guidelines, we can lead the world in harnessing the benefits while addressing the potential harms of this pioneering technology.”

CISA’s guidelines are the latest effort across the nation’s body of work supporting safe and secure AI technology development and deployment. Last month, the Biden administration unveiled its long-awaited AI executive order which directed the Department of Homeland Security to promote the adoption of AI safety standards globally, protect U.S. networks and critical infrastructure, reduce the risks that AI can be used to create weapons of mass destruction, combat AI-related intellectual property theft, and help the United States attract and retain skilled talent, among other missions.

Earlier this month, CISA released its Roadmap for Artificial Intelligence, a whole-of-agency plan aligned with the national AI strategy that seeks to promote the beneficial uses of AI to enhance cybersecurity capabilities, ensure AI systems are protected from cyber-based threats, and deter the malicious use of AI capabilities that threaten critical infrastructure.

“The new global AI guidelines announced today represent genuine efforts to deliver a much-needed global standard on secure AI design,” Vectra AI CEO Hitesh Sheth said of CISA’s new guidelines. “With AI evolving at an unprecedented rate, and businesses increasingly keen to adopt it, it’s vital that developers fully consider the importance of cybersecurity when creating AI systems at the earliest opportunity. Therefore this ‘secure by design’ approach should be welcomed.”

Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.