As part of its broad efforts to foster a secure-by-design and -default technology ecosystem, the Cybersecurity and Infrastructure Security Agency (CISA) called on AI software makers last week to build security into systems from the outset.

“Like any software system, AI must be Secure by Design,” CISA’s Christine Lai, AI security lead, and Jonathan Spring, senior technical adviser, wrote in an Aug. 18 blog post. “This means that manufacturers of AI systems must consider the security of the customers as a core business requirement, not just a technical feature, and prioritize security throughout the whole lifecycle of the product, from inception of the idea to planning for the system’s end-of-life.”

It also means, they added, that AI systems must be secure to use out of the box.

“Secure by Design ‘means that technology products are built in a way that reasonably protects against malicious cyber actors successfully gaining access to devices, data, and connected infrastructure,’” the blog post reads. “Secure by Design software is designed securely from inception to end-of-life.”

CISA unveiled its secure-by-design and -default guidelines in April. The guidelines aim to outline clear steps that technology providers can take to increase the safety of products used around the world.

“Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Security-by-Design and -Default” was written by CISA in collaboration with the FBI, the National Security Agency, and six of the agency’s international partners.

This joint guidance – billed as the first of its kind – urges software manufacturers to take the urgent steps necessary to ship products that are secure-by-design and -default. According to CISA, the document is intended to catalyze progress toward the further investments and cultural shifts necessary to achieve a safe and secure future.

The guidance fits into the broader push from CISA and the Biden administration as a whole for secure-by-design tech.

Secure-by-design is one of the main principles of the national cybersecurity strategy released this year, and CISA – in partnership with the Office of the National Cyber Director (ONCD) and other Federal agencies – recently released a request for information (RFI) seeking public comment on open-source software security and memory safe programming languages.

Responses to the RFI are due on Oct. 9.

“AI is a critical type of software, and attention on AI system assurance is well-placed,” the CISA officials wrote in the blog post. “Although AI is just one among many types of software systems, AI software has come to automate processes crucial to our society, from email spam filtering to credit scores, from internet information retrieval to assisting doctors find broken bones in x-ray images.”

“As AI grows more integrated into these software systems and the software systems automate these and other aspects of our lives, the importance of AI software that is Secure by Design grows as well,” they concluded. “This is why CISA will continue to urge technology providers to ensure AI systems are Secure by Design – every model, every system, every time.”

Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.