Artificial intelligence (AI) machines can out-think humans on many complex, fine-grained tasks, such as detecting signs of cancer more accurately than doctors can, or finding exoplanets based on “dimming effect” data from distant solar systems. But what they don’t have is good old common sense: the ability to apply knowledge and experience to the everyday tasks humans can handle from childhood. AI is like a brilliant coworker who can decipher the company’s complex budget at a glance but can’t figure out how to work the coffee maker.

The Pentagon’s lead research arm is looking to close that gap through a new program with the common-sensical moniker of Machine Common Sense, or MCS. The goal–a long-running objective of both AI researchers and the Department of Defense (DoD)–is to get machines that learn more efficiently, act reliably in unexpected circumstances, and communicate more fluidly with their human partners. For all of AI’s computational prowess, that has proven a difficult task.

“The absence of common sense prevents an intelligent system from understanding its world, communicating naturally with people, behaving reasonably in unforeseen situations, and learning from new experiences,” Dave Gunning, a program manager in the Defense Advanced Research Projects Agency’s (DARPA) Information Innovation Office (I2O), said in announcing the program. “This absence is perhaps the most significant barrier between the narrowly focused AI applications we have today and the more general AI applications we would like to create in the future.”

The search for common sense goes back to the earliest days of AI, which date to Alan Turing’s 1950 paper “Computing Machinery and Intelligence” and his “imitation game” test. Researchers have since tried to develop logical ways for machines to acquire knowledge and reasoning, but have been held back by the limits of computing systems, among them the inability to fully grasp the nuances of language and semantics. The challenge, as Gunning said, is to get from the “specific AI” techniques–focused on particular tasks, from parsing financial data to playing chess, and limited to those tasks–to the “general AI” dream of machines that can think and function more like humans.

The MCS program aims to build on recent advances in machine learning, natural language processing, cognitive understanding, deep learning, and other AI improvements to give these highly advanced machines the kind of basic thought processes humans have from the get-go. “During the first few years of life, humans acquire the fundamental building blocks of intelligence and common sense,” said Gunning. “Developmental psychologists have found ways to map these cognitive capabilities across the developmental stages of a human’s early life, providing researchers with a set of targets and a strategy to mimic for developing a new foundation for machine common sense.”

The program will approach the challenge on two fronts, DARPA said. The first will use developmental psychology research to set up cognitive milestones that measure machines’ performance in three areas–prediction/expectation, experience learning, and problem solving. The second will build a commonsense knowledge repository, constructed by reading from the web, capable of answering natural language and image-based questions about common sense phenomena.
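To give a concrete sense of how that second track might be evaluated, here is a minimal, hypothetical Python sketch of scoring a system against multiple-choice commonsense questions. The questions, the `answer_question` stub, and every name in it are illustrative assumptions for this article, not artifacts of the MCS program itself.

```python
# Hypothetical sketch: scoring a model on multiple-choice commonsense questions.
# All questions and function names here are invented for illustration.

from dataclasses import dataclass
from typing import List


@dataclass
class CommonsenseQuestion:
    prompt: str
    choices: List[str]
    answer: int  # index of the correct choice


# Toy items in the spirit of commonsense benchmarks (invented for this sketch).
QUESTIONS = [
    CommonsenseQuestion(
        prompt="If you drop a glass on a tile floor, what is most likely to happen?",
        choices=["It bounces back into your hand", "It shatters", "It floats away"],
        answer=1,
    ),
    CommonsenseQuestion(
        prompt="Why would someone put milk in the refrigerator?",
        choices=["To keep it cold so it stays fresh", "To make it heavier", "To hide it from light"],
        answer=0,
    ),
]


def answer_question(question: CommonsenseQuestion) -> int:
    """Stand-in for the system under evaluation; here it naively picks the first choice."""
    return 0


def accuracy(questions: List[CommonsenseQuestion]) -> float:
    """Fraction of questions answered correctly."""
    correct = sum(1 for q in questions if answer_question(q) == q.answer)
    return correct / len(questions)


if __name__ == "__main__":
    print(f"Commonsense QA accuracy: {accuracy(QUESTIONS):.0%}")
```

A real evaluation would swap the stub for the system being tested and draw on a far larger, carefully curated question set; the point of the sketch is only to show the shape of the benchmark-style measurement the program describes.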

DoD–as part of its Third Offset Strategy–and agencies such as NASA see human-machine teaming as essential to their future use of AI systems. That teaming depends on trusting that systems will behave reliably in unforeseen circumstances, communicate their reasoning to humans in understandable language, and exercise common sense when the situation calls for it. The MCS program is looking to take a significant step in that direction.

The research agency has scheduled a Proposers Day on Oct. 18, during which interested parties can learn more about the MCS program.
