The Department of Defense's many artificial intelligence programs (currently more than 600 and counting) generally share one stated goal: humans and machines working as a team, with AI systems becoming "partners in problem-solving" rather than our new overlords. Whether the job is cybersecurity, analyzing reams of data, images, and video, operating swarms of drones, or disaster assistance, the idea is to have AI and machine learning systems that augment and improve what personnel can do.

One hurdle in the way of that vision is the basic difference in how machines and humans "think," a gap the Pentagon's top research arm, the Defense Advanced Research Projects Agency (DARPA), is looking to clear with its AI Next program. AI Next will put up $2 billion over the next several years to fast-track development of what DARPA calls the third wave of AI technologies.

Two programs in their early stages outline ways for humans and machines to get on the same page. One looks to get AI systems to think and react more like a human in battlefield situations; the other seeks to connect soldiers and machines through a direct neural interface, something close to telepathy.

The agency just held a Proposers Day for the Science of Artificial Intelligence and Learning for Open-world Novelty (SAIL-ON) program, which is designed to enable AI systems to react to situations that aren't necessarily in their programming. Current systems excel in pre-defined environments, from games like chess and Go to identifying signs of cancer, but tend to get lost if the rules of the game are changed.

Ted Senator, a program manager in DARPA's Defense Sciences Office, compared it to flipping the chess board, so to speak, by changing the rules while the game is being played. "How would an AI system know if the board had become larger, or if the object of the game was no longer to checkmate your opponent's king but to capture all his pawns?" he said. "Or what if rooks could now move like bishops? Would the AI be able to figure out what had changed and be able to adapt to it?"
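To make that idea a bit more concrete, here is a minimal, hypothetical sketch (not SAIL-ON or DARPA code; the class and names are illustrative assumptions) of how a game-playing agent might notice that the moves it observes no longer fit the rules it was trained on:

```python
# Hypothetical sketch: an agent tracks how often observed play violates the
# rules it learned, and flags novelty when the game appears to have changed.

class NoveltyMonitor:
    def __init__(self, known_legal_moves, threshold=0.1):
        self.known_legal_moves = set(known_legal_moves)  # moves the training rules allow
        self.observations = 0
        self.violations = 0
        self.threshold = threshold

    def observe(self, move):
        """Record a move seen in play; return True if novelty is suspected."""
        self.observations += 1
        if move not in self.known_legal_moves:
            self.violations += 1  # e.g., a rook suddenly moving like a bishop
        return (self.violations / self.observations) > self.threshold
```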

Those are the kinds of changes that can happen on a battlefield, where unexpected moves from an adversary, a shift in environmental conditions, or a sudden turn into unfamiliar terrain can reset a mission and its goals. Current AI systems, which require endless training sessions to prepare for possible scenarios, can't yet adapt to such out-of-the-box variations. SAIL-ON would teach AI systems to function as soldiers do, following the military's OODA loop (observe, orient, decide, and act): observing a situation, orienting themselves to what they see, deciding on a course of action, and acting on that decision, all without requiring retraining on a larger data set.
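In software terms, that loop might look something like the sketch below. All of the names (sensors, world_model, planner, actuators) are hypothetical placeholders rather than parts of any DARPA system, and the novelty check simply stands in for whatever adaptation mechanism SAIL-ON research ultimately produces:

```python
# Hypothetical sketch: the military's OODA cycle expressed as an agent loop,
# with an explicit check for conditions the agent was never trained on so it
# can adjust its plan instead of failing silently.

def run_ooda_loop(sensors, world_model, planner, actuators):
    while True:
        observation = sensors.read()                 # Observe the environment
        situation = world_model.update(observation)  # Orient: fit it to what is known
        if world_model.is_novel(situation):          # Unfamiliar terrain, new tactics...
            planner.adapt(situation)                 # ...adjust goals without full retraining
        action = planner.decide(situation)           # Decide on a course of action
        actuators.execute(action)                    # Act on that decision
```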

In another third-wave effort, the agency's Artificial Intelligence Exploration (AIE) program is going for the ultimate in hands-free computer control through Intelligent Neural Interfaces (INI). AIE is enlisting industry to further the development of neurotechnology that could create a kind of telepathic connection between AI systems and humans on the battlefield, allowing soldiers to interact with those systems using their thoughts.

The INI program would expand on successful applications of brain-computer interfaces that have, for example, reanimated paralyzed muscles, controlled prosthetic limbs, and even operated three drones at once. For starters, DARPA wants ways to improve the robustness and reliability of neural interfaces, and to maximize the information they can carry by improving their bandwidth and computational capacity.

The success of these projects also depends on other factors, including the basic matter of trust between humans and machines and AI's current inability to explain its reasoning in human terms. DARPA is among the research organizations working on that problem through its Explainable Artificial Intelligence (XAI) and Competency-Aware Machine Learning (CAML) programs.

Mind-control systems as a practical battlefield tool are likely a long way off, and systems that can expect the unexpected still face significant technical hurdles. But AI Next, which employs an accelerated contracting schedule, is looking to shorten the distance between ideas and reality, to find out if AI can be as smart as we already think it is.
