Stacey Dixon, director of the Intelligence Advanced Research Projects Activity (IARPA), said today at an event organized by Defense One that some of the intelligence community’s more audacious research centers on using machines to predict the future–through more accurate forecasts of geopolitical events.
Dixon also said IARPA research aims to translate neuroscience to neural networks, hoping that efforts to map the human brain can lead to massive strides in machine learning.
“We have a forecasting challenge out right now, where individuals in the public can actually not only do regular forecasts, but if they have any ideas for how to do machine-based forecasting, they can actually do that in parallel with the research program that we’re funding,” Dixon said.
The new program is run on the same platform as the Good Judgment Project–a previous four-year study sponsored by IARPA into global events forecasting.
IARPA’s previous director, Jason Matheny, said in June that the results of previous forecasting projects show that humans need to be “especially humble about” their ability to predict geopolitical events. IARPA is now looking into human-machine teaming to augment those forecasts–about things such as major changes in the world economy or political and social upheaval.
“Let’s still have the people making forecasts, we’ll also have machines involved to try to figure out where can they augment the forecast: where will the combination of the two be better, where are humans better, where are machines better, and see if we can try to tease that apart,” Dixon said, adding that the new forecasting challenge will soon be yielding data for IARPA to evaluate.
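IARPA has not published how it combines the two forecast streams, but the idea Dixon describes can be sketched in miniature: blend human and machine probabilities for a question with a tunable weight, then score each forecaster with the Brier score, a standard accuracy measure for probability forecasts. All names and numbers below are invented for illustration.

```python
# Illustrative sketch of hybrid human-machine forecast aggregation.
# Not IARPA's method; probabilities and outcomes are made up.

def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and the 0/1 outcome (lower is better)."""
    return (forecast - outcome) ** 2

def hybrid_forecast(human_p: float, machine_p: float, weight: float = 0.5) -> float:
    """Blend human and machine probabilities with a tunable weight."""
    return weight * human_p + (1 - weight) * machine_p

# Toy data: (human probability, machine probability, actual outcome)
questions = [(0.70, 0.55, 1), (0.20, 0.35, 0), (0.60, 0.80, 1)]

for label, forecaster in [
    ("human",   lambda h, m: h),
    ("machine", lambda h, m: m),
    ("hybrid",  lambda h, m: hybrid_forecast(h, m)),
]:
    avg = sum(brier_score(forecaster(h, m), y) for h, m, y in questions) / len(questions)
    print(f"{label}: mean Brier score = {avg:.3f}")
```

Teasing apart "where are humans better, where are machines better" then amounts to comparing these scores question by question and tuning the blend weight accordingly.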
“We have a program review. It’s at the end of the first phase, so it’s very early, too early to actually talk about results, but looking forward to getting the results of where we are in the next couple of weeks and figuring out how much progress we’re making,” she said.
Dixon said that among IARPA’s research projects, the hybrid forecasting competition is “probably the most relevant” to the future of human-machine teaming.
Other groundbreaking IARPA research is occurring in neuroscience through its Machine Intelligence from Cortical Networks (MICrONS) project, which “is essentially trying to reverse engineer the algorithms of the brain so that we can potentially make our computer algorithms better,” Dixon said.
“We’re doing that by understanding at the physical level, what is in a one-millimeter cubed portion of the visual cortex. Literally, all 100,000 neurons, we’re trying to map those and the billions of connections between them to create what arguably will be the world’s largest neural net ever,” she said.
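The structure Dixon describes, neurons plus the synaptic connections between them, is naturally modeled as a directed weighted graph. A minimal sketch of such a connectome representation follows; the neuron IDs and connection strengths are made up, and this is not the MICrONS data format.

```python
# Illustrative only: a connectome as a directed graph of neurons and synapses.
from collections import defaultdict

class Connectome:
    def __init__(self):
        # presynaptic neuron ID -> {postsynaptic neuron ID: connection strength}
        self.synapses = defaultdict(dict)

    def add_synapse(self, pre: int, post: int, strength: float) -> None:
        self.synapses[pre][post] = strength

    def out_degree(self, neuron: int) -> int:
        """Number of outgoing connections from a neuron."""
        return len(self.synapses[neuron])

    def total_synapses(self) -> int:
        return sum(len(targets) for targets in self.synapses.values())

c = Connectome()
c.add_synapse(1, 2, 0.8)
c.add_synapse(1, 3, 0.3)
c.add_synapse(2, 3, 0.5)
print(c.out_degree(1), c.total_synapses())
```

At the scale Dixon cites (100,000 neurons and billions of connections), the same abstraction holds, though real pipelines use sparse-matrix and database storage rather than in-memory dictionaries.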
“That program has already done groundbreaking neuroscience. There is no question about that,” said Michael Wolmetz, senior scientist at Johns Hopkins University’s Applied Physics Laboratory, whose organization supports IARPA on the MICrONS project. “The next step is: does that translate into groundbreaking machine learning?”

The goal of the project, Dixon said, is to learn how humans “are able to identify, recognize images and objects so easily, with just looking at one image.”
That’s a notable limitation of machine-based neural networks, which must be trained on massive data sets, ingesting thousands of images of related items, before they can recognize an object. Even then, how they arrive at an answer is opaque and the results are often uncertain. Not the case with humans, Dixon noted.
“You take a toddler, who’d see a picture of a giraffe. That toddler will forever know what a giraffe is, whether it’s a giraffe at the zoo, a giraffe on a shirt, a giraffe in a book,” she said. “You show a machine that, and it’s going to not quite know what the giraffe is. It’s going to be a continuous effort to see it in one image. That one-shot learning isn’t where we need it to be.”
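The giraffe example maps onto what researchers call one-shot classification. One common approach (not necessarily IARPA's) is to store a single labeled exemplar per class in an embedding space and classify new inputs by nearest neighbor. The feature vectors below are invented; a real system would produce them with a learned encoder.

```python
# Toy one-shot classifier: nearest exemplar by cosine similarity.
# Feature vectors are made up for illustration.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# One labeled example (one "shot") per class, like the toddler's single picture.
exemplars = {
    "giraffe": [0.9, 0.1, 0.8],
    "zebra":   [0.2, 0.9, 0.3],
}

def classify(features):
    return max(exemplars, key=lambda label: cosine(features, exemplars[label]))

print(classify([0.85, 0.2, 0.7]))  # nearest to the giraffe exemplar
```

The hard part, and the part the toddler solves effortlessly, is learning an embedding in which "giraffe at the zoo," "giraffe on a shirt," and "giraffe in a book" all land near that one stored example.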
Wolmetz said MICrONS can translate into groundbreaking machine learning if the detail from the mapping process produces a better, more reliable neural network, or even if it simply serves to “inspire the next generation” through some “motif” or pattern in the process.
“We’re not at the point where we’ve done the entire one-millimeter cube. We’ve done a portion of that,” Dixon said. “The next step will be to see if someone can actually program it into some sort of architecture.”