Secrets have always been the lifeblood of the spy game. Sometimes, though, it pays to know what everyone else knows. To that end, agencies are learning that crowdsourced information can be a good predictor of upcoming events.

Crowdsourcing is most familiar as drawing information from social media, websites, and other electronic sources that can indicate which way public opinion is trending. But another level of information can be crowdsourced as well, including inside sources and back-channel scuttlebutt that can put a finer point on predictions gleaned from those open sources.

The Intelligence Community’s (IC) research arm is looking to put the two together with the Geopolitical Forecasting Challenge, a seven-month competition kicking off Feb. 21 in which participants will demonstrate new ways of forecasting geopolitical events. The Intelligence Advanced Research Projects Activity (IARPA), which is holding the competition, wants to develop new ways of applying both human analysis and advanced machine processing to answer a range of questions – from who will win a foreign election to what the price of crude oil will be on a certain date.

“Accurate geopolitical forecasts are crucial to making informed, effective policy,” IARPA Program Manager Seth Goldstein said in announcing the competition, which offers a fairly modest $200,000 in total prizes but could yield new machine learning and artificial intelligence methods with applications in other areas. With this contest, “IARPA is interested in identifying the most effective ways to integrate human judgement with other types of data.”

The participants won’t just be Googling subjects and searching through Twitter. They’ll have access to a steady stream of data from human forecasters that isn’t available to just anyone – the kind of information with which CIA analysts might work. “We believe that by granting access to this data, solvers will be empowered to develop more diverse and exceptional forecasting methods than we’ve ever seen,” Goldstein said.

In addition to competing with each other, participants will be able to measure themselves against other means of forecasting by working on the same forecasting questions that are being put to teams in another IARPA contest, the Hybrid Forecasting Competition (HFC), which is testing human/machine collaboration. Part of both competitions is studying how to incorporate human judgement into machine processes.
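How human judgement might feed into a machine process can be illustrated with a toy example. The sketch below is a minimal illustration in Python, not anything IARPA has published for the challenge: it pools individual probability forecasts into a consensus and scores the result with a Brier-style rule, the kind of accuracy measure used in past forecasting tournaments. The forecaster numbers, weights, and helper names are assumptions made purely for demonstration.

```python
# Illustrative sketch only (not IARPA's methodology or scoring code):
# combine crowd probability forecasts for a yes/no question, then score them.

def aggregate(forecasts, weights=None):
    """Weighted mean of individual probability forecasts (each 0.0 to 1.0)."""
    if weights is None:
        weights = [1.0] * len(forecasts)
    total = sum(weights)
    return sum(p * w for p, w in zip(forecasts, weights)) / total

def brier_score(probability, outcome):
    """Brier score for a binary event: 0.0 is perfect, 1.0 is worst."""
    return (probability - outcome) ** 2

# Hypothetical crowd of five forecasters on "Will candidate X win?"
crowd = [0.60, 0.72, 0.55, 0.80, 0.65]
consensus = aggregate(crowd)                 # simple unweighted average
print(round(consensus, 3))                   # 0.664
print(round(brier_score(consensus, 1), 3))   # score if the event occurs: 0.113
```

A more sophisticated entry might weight forecasters by their track records or blend the crowd consensus with model-driven signals, which is exactly the kind of human/machine integration the two competitions are probing.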

IARPA is using the current competitions to build on the work of other challenges, such as Open Source Indicators and Aggregative Contingent Estimation, which it says have moved the forecasting ball forward and are still ongoing. IARPA is also looking to broaden the pool of innovators, inviting nonprofessionals to take part in its search for new forecasting methods. “This challenge is IARPA’s first attempt to gauge whether forecasting hobbyists can develop methods that are competitive with well-funded research teams,” Goldstein said. The challenge is scheduled to wrap up in September, with winners announced shortly afterward.

The IC has long emphasized the value of open source intelligence as a key element in developing geopolitical and military forecasts. The CIA’s Open Source Enterprise dates to 1941, when, as the Foreign Broadcast Monitoring Service (FBMS) run by the Federal Communications Commission, it worked on developing forecasts regarding World War II. Today, with so many more potential sources of information at hand, the IC, like other agencies, sees machine learning and analytics as essential to keeping pace.

Guessing Mr. Turing would approve.
