The Department of Defense is taking a more concerted approach to the development and use of artificial intelligence (AI) by bringing AI projects under one roof and emphasizing the importance of working with industry and academia. At the same time, DoD recognizes that it needs to give ethics a seat at the table.

Earlier this month at a quarterly public meeting of the Defense Innovation Board (DIB), DoD laid out the four areas of focus for the new Joint Artificial Intelligence Center (JAIC), which will oversee DoD’s AI projects, currently numbering about 600, with the goal of accelerating and coordinating their deployment to warfighters. In addition to getting AI technologies quickly into the field, those four areas include building partnerships with industry, academia, and allies; attracting and keeping world-class AI talent; and supporting the National Defense Strategy. But DoD also plans to address responsible use of AI.

“As we move out,” said Brendan McCord, machine learning chief at the Defense Innovation Unit Experimental (DIUx), “our focus will include ethics, humanitarian considerations, long-term and short-term AI safety.” Video of the meeting is posted on DIB’s website.

A steady stream of new developments shows AI’s potential to change how DoD does business, from predicting political unrest and maintaining equipment to military training and disaster response. The technology is also being applied to battlefield surveillance through the high-profile Project Maven, which seeks to automate the analysis of countless hours of full-motion video, and as a means of guiding long-range missiles.

But concerns over when, and when not, to use AI’s power accompany those new developments, and they cover more than just “killer robots.” The technology’s ability to crunch large data sets and reach conclusions has given rise to worries about cybersecurity, privacy, and law, as well as any area where a machine could eventually make a decision that affects humans. Tech luminaries such as Bill Gates and Elon Musk, while touting AI’s promise, have warned of its consequences, with Musk saying last year that AI would be either the “best or worst thing ever for humanity.”

Those concerns were reflected in a July 4 letter from a group led by the Electronic Privacy Information Center and the American Association for the Advancement of Science, responding to the White House’s May summit on AI and industry. The group pointed out that the summit did not address ethics, accountability, transparency, or fairness.

AI ethics is becoming more of a hot topic in the tech world. Microsoft, for instance, has said it stepped away from some deals because of ethics concerns, and the British House of Lords has proposed “ethical AI” as an opportunity for Great Britain to become a leader in the field. Some Google employees notably objected to the company’s involvement in Project Maven, prompting Google to back away from the project.

For DoD, ethical use of AI could start in the development phase. A Government Accountability Office report on AI earlier this year included a participant’s proposal for a “computational ethics system” that would seek to build ethical behavior into the software itself. The Defense Advanced Research Projects Agency’s Explainable AI program could also further the ethical use of AI by giving AI systems a means to explain, in human terms, the complex reasoning behind their conclusions.
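Neither program is tied to a public codebase, but the kind of output Explainable AI aims for is easier to see in miniature. The Python sketch below is purely illustrative: the feature names, weights, and threshold are hypothetical, and the model is a trivially simple linear scorer rather than anything DARPA has described. The point is the shape of the result: a decision accompanied by a ranked, plain-language account of which factors drove it.

```python
# Toy illustration of an "explainable" decision: a linear model whose
# per-feature contributions can be reported in human-readable terms.
# All feature names, weights, and the threshold are hypothetical.

FEATURE_WEIGHTS = {
    "object_speed": 0.9,             # positive weight pushes the score up
    "thermal_signature": 1.4,
    "proximity_to_civilians": -2.0,  # negative weight pushes against flagging
}
THRESHOLD = 1.0


def score_and_explain(observation: dict) -> tuple[bool, list[str]]:
    """Return a flag/no-flag decision plus a ranked explanation."""
    contributions = {
        name: weight * observation[name]
        for name, weight in FEATURE_WEIGHTS.items()
    }
    total = sum(contributions.values())
    decision = total > THRESHOLD

    # Rank features by the absolute size of their influence on the score.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    explanation = [
        f"{name}: contributed {value:+.2f} to the score"
        for name, value in ranked
    ]
    explanation.append(f"total score {total:.2f} vs. threshold {THRESHOLD:.2f}")
    return decision, explanation


if __name__ == "__main__":
    flagged, reasons = score_and_explain(
        {"object_speed": 0.8, "thermal_signature": 1.1,
         "proximity_to_civilians": 0.3}
    )
    print("flag for review:", flagged)
    for line in reasons:
        print(" -", line)
```

Real explainability research targets deep models whose reasoning is not a handful of inspectable weights; producing an equally legible rationale for those systems is precisely the hard problem the DARPA program is meant to address.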

The JAIC also could promote ethics through one of its other high-priority areas, building partnerships with industry and academia for developing AI technologies. DIB executive director Josh Marcuse, reacting to Google’s decision, said industry employees could help ensure an ethical approach by engaging in, rather than avoiding, AI development. Those employees “should be active participants in making sure that ethics and safety is at the forefront of what we do,” he said.
