The Department of Veterans Affairs (VA) – which has already made extensive use of artificial intelligence (AI) technologies to provide more efficient, effective, and responsive services for its users – is also developing an AI trustworthiness framework that aims to avoid biases that can result from use of the technology, a VA official explained during a virtual GovLoop training on August 12.
AI ethics concerns right and wrong behavior by the humans and machines involved in AI systems, and the way an agency approaches those ethics can have a profound impact on outcomes from the technology. The VA has built its AI ethics framework around a December 2020 Executive Order on promoting the use of trustworthy AI in the Federal government.
“Trustworthy AI when it’s implemented is ethical. It also removes potential biases, protects privacy, and allows for increased adoption of artificial intelligence,” said Gil Alterovitz, director of the VA’s National Artificial Intelligence Institute (NAII).
In the government sphere, avoiding bias must be a top priority. Agencies that do not consider how their products or services are weighted toward specific outcomes may unfairly impact particular parts of the communities they serve. With that in mind, Alterovitz urged agencies to design AI models carefully so that prejudices do not creep into the tools' final output.
“If the training data is flawed, the models can have biased or unethical outcomes,” he said.
Additionally, AI requires vast amounts of data to operate, so the way agencies protect this information is paramount. For example, the VA has applied AI to initiatives built on health information data, "and we would not want sensitive data about our users to become public without permission," Alterovitz said.
The NAII has partnered with the VA's Cyber Innovation Program to develop a trustworthiness framework specific to AI and users' health information. According to Alterovitz, the agency released a request for information to gather feedback on the possible risks it may face as it continues to utilize AI moving forward.