For military analysts struggling to make proper use of millions of hours of full-motion video from drones, the cavalry will begin arriving this month, in the form of computer vision algorithms developed under the Department of Defense’s (DoD) Project Maven.
The delivery of those algorithms, created under an accelerated program launched in April, underscores not only the DoD’s attempt to keep up with the ever-advancing pace of artificial intelligence and machine learning, but also the department’s goal of solving the long-running puzzle of how to get new technologies developed, tested and deployed before they’re out of date. It’s what Lt. Gen. Jack Shanahan, director for Defense Intelligence for Warfighter Support in the Office of the Undersecretary of Defense for Intelligence, has called “prototype warfare.”
The goal “is to turn the enormous volume of data available to DoD into actionable intelligence and insights,” Shanahan said at an industry day in October for the project, also known as the Algorithmic Warfare Cross-Function Team (AWCFT), attended by more than 100 companies. It is “about moving from the hardware industrial age to a software data-driven information environment and doing it fast and at scale across the department.”
The Pentagon is far behind industry in automation. Facebook, notably, has introduced a wide range of AI applications, including one that can spot users with suicidal tendencies and one that famously caused a stir after it created its own language. Google CEO Sundar Pichai has said the world (and Google) is moving into an AI-first era. DoD, meanwhile, has been looking to enlist Silicon Valley to help make up lost ground.
Former Deputy Defense Secretary Bob Work created the AWCFT with a memo, assigning it an initial task of automating the analysis of full-motion video gathered by unmanned aerial systems in order to relieve the burden that currently overwhelms human analysts.
“Although we have taken tentative steps to explore the potential of artificial intelligence, big data and deep learning,” Work wrote in the memo, “I remain convinced that we need to do much more and move much faster across DoD to take advantage of recent and future advances in these critical areas.”
Full-motion video illustrates the kind of challenge DoD faces. Manual analysis is time-consuming and inefficient, a problem compounded by the massive volume of video collected. And as big a job as that is now, it’s only going to grow. National Geospatial-Intelligence Agency Director Robert Cardillo, speaking about a broader range of imagery that also includes satellite feeds, has said that in five years the volume of imagery data will be “a million times more” than it is now, and that NGA would need eight million imagery analysts to handle it under current methods.
An early Project Maven deployment illustrates what AI is capable of, but also the hill that DoD must climb. The project team introduced the Air Force Special Operations Command to a data tagging application that trains machines to autonomously recognize features in still images, such as a particular type of truck or a person carrying a weapon. The process can speed up analysis, but training the machine is still time-consuming, requiring that each type of feature be tagged as many as 100,000 times before the system becomes proficient.
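The tag-then-train workflow described above is, at its core, ordinary supervised learning: analysts label examples of a feature, and a classifier is fit to those labels. The sketch below is purely illustrative and is not Project Maven’s actual software; it substitutes synthetic 8-dimensional feature vectors for real imagery and a simple logistic-regression classifier for a production computer vision model, just to show how tagged examples become an automated recognizer.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_tagged_examples(n):
    """Simulate n analyst-tagged examples as two overlapping feature
    clusters: label 1 = target feature (e.g. a truck), 0 = other."""
    half = n // 2
    targets = rng.normal(loc=0.5, scale=1.0, size=(half, 8))
    other = rng.normal(loc=-0.5, scale=1.0, size=(half, 8))
    X = np.vstack([targets, other])
    y = np.array([1] * half + [0] * half)
    return X, y

def train_logreg(X, y, epochs=200, lr=0.1):
    """Fit plain logistic regression by batch gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        w -= lr * (X.T @ (p - y)) / len(y)      # gradient step on weights
        b -= lr * np.mean(p - y)                # gradient step on bias
    return w, b

def accuracy(w, b, X, y):
    """Fraction of held-out examples the trained model labels correctly."""
    return np.mean(((X @ w + b) > 0) == y)

# Held-out evaluation set, then training runs with more and more tags.
X_test, y_test = make_tagged_examples(1000)
for n_tags in (20, 200, 2000):
    X, y = make_tagged_examples(n_tags)
    w, b = train_logreg(X, y)
    print(f"{n_tags:5d} tagged examples -> test accuracy "
          f"{accuracy(w, b, X_test, y_test):.2f}")
```

The labor bottleneck the article describes lives in `make_tagged_examples`: in the real workflow that function is thousands of hours of human tagging, which is why the training burden, not the algorithm itself, is the slow part.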
With the first computer vision algorithms being delivered this month, DoD is establishing a new model of humans and machines working together. “Eventually we hope that one analyst will be able to do twice as much work, potentially three times as much, as they’re doing now,” Marine Corps Col. Drew Cukor, chief of AWCFT Intelligence, Surveillance and Reconnaissance Operations Directorate-Warfighter Support, said recently. “That’s our goal.”