Ethics: A Crucial Link in DoD’s AI Strategy


The Department of Defense’s Artificial Intelligence Strategy puts the DoD on a fast track toward developing and employing AI and machine learning to support, as the strategy’s preface states, “a force fit for our time.” The strategy outlines an accelerated, collaborative approach with industry, academia, and allies toward new technologies that will “transform all functions of the Department positively,” and help the United States keep pace with the significant investments being made by countries such as China and Russia.

Though the DoD is taking a hard-charging approach to getting third-wave AI technologies into the field, the strategy also calls for brake mechanisms in the form of ethical standards. In fact, the DoD says it plans to take the point in applying ethics to military AI operations.

The DoD “will lead in the responsible use and development of AI by articulating our vision and guiding principles for using AI in a lawful and ethical manner,” the strategy states. “We will consult with leaders from across academia, private industry, and the international community to advance AI ethics and safety in the military context.”

According to the document, ensuring ethical standards can begin by investing in systems that are resilient, robust, reliable, and secure, while also funding research into areas such as explainable AI, which is seen as essential to developing mutual trust in human-machine teams. The DoD also plans to pioneer new approaches to the testing, evaluation, verification, and validation of AI systems. “We will also seek opportunities to use AI to reduce unintentional harm and collateral damage via increased situational awareness and enhanced decision support,” the strategy states. “As we improve the technology and our use of it, we will continue to share our aims, ethical guidelines, and safety procedures to encourage responsible AI development and use by other nations.”

Ethics has been a rising issue with AI, as its power to collect and analyze vast amounts of data and reach conclusions it can’t explain in human terms has led to some doomsday warnings about threats concerning privacy, law, cybersecurity, and even life and death. And aside from the strategy’s assurance that the DoD wants to employ AI in a manner consistent with “our values,” there are also practical concerns. Given the resistance of employees at Google and other companies that are home to much of the country’s best AI talent, assuaging ethical concerns is also key to getting that talent on board with the technical development the DoD needs to stay ahead of adversaries.

“The success of our AI initiatives will rely upon robust relationships with internal and external partners. Interagency, industry, our allies, and the academic community will all play a vital role in executing our AI strategy,” DoD CIO Dana Deasy said in unveiling the AI strategy. But the use of AI has generated unease among some of those partners.

Most notably, some Google employees protested the company’s involvement in Project Maven, which is looking to develop computer vision algorithms to automate analysis of endless hours of full-motion video from drone aircraft — analysis that could then be used in targeting strikes. Google eventually decided not to renew its contract on the project. And after a White House summit on AI in May, an industry/academic group objected, saying the summit made no mention of accountability, transparency, ethics, and fairness.

The Defense Innovation Board is leading the effort to develop ethical principles for the use of AI. Among other efforts, it will hold a series of public discussions at locations around the country, starting with Carnegie Mellon University on March 14 and Stanford University on April 25. Ethics is also one of the focus areas of the DoD’s Joint Artificial Intelligence Center (JAIC), which is the fulcrum of the DoD’s AI plans.

The JAIC, officially launched in July, puts the DoD’s approximately 600 AI projects under one umbrella, with plans for more projects, and promotes collaboration with industry and academia. In announcing the strategy, Deasy emphasized the latter as key to the future. “I cannot stress enough the importance that the academic community will have for the JAIC,” he said. “Our future success not only as a department, but as a country, depends on tapping into these young minds and capturing their imagination and interest in pursuing the job within the department.”

That collaboration will rest, at least in part, on ensuring agreement on the ethical application of AI.