How Machines–and People–Can Combat Fake News

“A lie can travel halfway around the world before the truth can get its boots on.”

That’s a famous quote often attributed to Mark Twain. Except that, like a lot of quotes attributed to Twain, he never said it or wrote it. It has also been attributed to Winston Churchill, but he didn’t come up with it either. The same idea in different words has also been credited to Jonathan Swift, C. H. Spurgeon, Thomas Franklin, a Chinese proverb, an Arab proverb, and other sources. And as to the exact wording of the commonly cited quote, there are discrepancies over whether the lie travels, flies, or merely gets around the world, and whether truth is sluggishly putting on its boots, shoes, or pants in the meantime.

But regardless of who said it first in which words, the quote is frequently cited because it’s frequently true, especially these days. With the Internet for transportation, a lie can get all the way around the world before truth can blink itself awake. And that’s a challenge for those who seek to stem the proliferation of false information, be it accidentally misattributed quotes, political propaganda, or malicious “fake news.”

Artificial intelligence can help combat the problem by using machine learning algorithms to detect the patterns used in phony stories and ads designed to stir up fear or outrage, or, in the case of Russian disinformation, unsettle people’s faith in American institutions. Stanford University, for example, is conducting research into whether an AI system can detect a fake news article based on its content. The Fake News Challenge, an effort by more than 100 volunteers from industry and academia, is using competitions to develop tools that would “help human fact checkers identify hoaxes and deliberate misinformation in news stories using machine learning, natural language processing, and artificial intelligence.” Quite a few other projects are underway.
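
To make the idea concrete, here is a minimal sketch of the kind of content-based classifier these projects build on. The tiny dataset, labels, and pipeline below are invented for illustration; they are not the actual system used by Stanford or the Fake News Challenge, just a common text-classification baseline (TF-IDF features plus logistic regression) that picks up on stylistic patterns such as sensational, clickbait-style wording.

```python
# Illustrative sketch only: a toy content-based "fake news" classifier.
# The headlines and labels are invented for demonstration purposes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = fabricated story, 0 = factual report.
texts = [
    "SHOCKING: miracle cure they don't want you to know about",
    "You won't BELIEVE what this politician secretly did",
    "Scientists STUNNED by one weird trick to get rich fast",
    "City council approves budget for road repairs next year",
    "Local hospital opens new pediatric wing after fundraiser",
    "Quarterly report shows modest growth in regional exports",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF turns each headline into word-frequency features; logistic
# regression then learns which words signal a fabricated story.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Classify a new, unseen headline.
prediction = model.predict(["SHOCKING secret trick doctors don't want you to know"])[0]
```

A real system would of course train on many thousands of labeled articles and use far richer signals (source reputation, claim–headline agreement, network spread), but the underlying pattern-matching idea is the same.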

But algorithms can only tackle one part of a complex picture. As M. Mitchell Waldrop points out in the Proceedings of the National Academy of Sciences, efforts to stem the flow of fake information can run into barriers such as the First Amendment. There’s also the fact that a lot of these stories are telling people what they want to hear. They feed into people’s biases, and that accelerates the stories’ dissemination.

The Army Research Laboratory (ARL) is leading research into that aspect of false information, using modeling and analysis to see how bias – or a lack of it – affects the propagation of false information via social networks. A paper by lead researcher Dr. Jin-Hee Cho of ARL recently received a Best Paper Award at the Institute of Electrical and Electronics Engineers’ (IEEE) GLOBECOM 2017 symposium in Singapore.

ARL’s research, being conducted with BBN Technologies, the University of California, Santa Barbara, and Rensselaer Polytechnic Institute, looked at unbiased prior belief or disbelief and found that an absence of predisposed opinions helps people sort through new information, weed out the phony stuff, and make better decisions. When people had an open mind, they were able to better recognize true information even when the volume of truth was comparatively small.

“A larger number of true information propagators is more effective than the higher frequency of true information propagation,” Cho said in a statement. “But even a small fraction of true information propagators can effectively introduce the removal of false information.”

Researchers found, not surprisingly, that a higher volume of true information introduced to an unbiased mind was most effective in leading to better decision-making, but also that even a small amount of truth could make a difference, helping people to separate out the bad information. For example, she said, “Two percent of true information propagators can lead about 80 percent of people in a network to make right decisions when people in the network are at least unbiased for false information.”
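
That 2-percent figure can be illustrated with a toy simulation. The model below is invented for demonstration and is not ARL’s actual model: a network of unbiased people (belief 0.5) with a small committed fraction who know the truth (belief 1.0), where a random listener repeatedly shifts partway toward a random speaker’s belief. Even a 2-percent seed of committed truth-tellers eventually pulls most of the network toward the correct conclusion.

```python
# Illustrative sketch only: a toy opinion-propagation simulation inspired by
# the quoted finding. The update rule and parameters are invented, not ARL's.
import random

random.seed(42)

N = 1000                 # people in the network
SEEDS = int(0.02 * N)    # 2% committed true-information propagators

# Belief per node: 1.0 = recognizes the story is false (the right decision),
# 0.5 = unbiased / undecided. Per the quoted scenario, everyone starts
# unbiased except the committed truth propagators.
belief = [0.5] * N
for i in range(SEEDS):
    belief[i] = 1.0

for _ in range(200_000):
    # A random listener hears from a random speaker and moves partway
    # toward the speaker's belief; committed propagators never change.
    listener = random.randrange(N)
    if listener < SEEDS:
        continue
    speaker = random.randrange(N)
    belief[listener] += 0.3 * (belief[speaker] - belief[listener])

# Fraction of ordinary (non-seed) people who now lean toward the truth.
ordinary = belief[SEEDS:]
fraction_correct = sum(1 for b in ordinary if b > 0.5) / len(ordinary)
```

The key qualitative behavior matches the finding: because the unbiased majority has no pull of its own, the small committed minority’s influence compounds over repeated interactions until most of the network leans the right way.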

ARL’s research is aimed primarily at improving decision-making, but its findings could contribute to the larger effort of combating the spread of false information, which by any name (propaganda, fake news, disinformation) has always been a factor in political events. Information operates according to the laws of supply and demand, and as New York University journalism professor Jay Rosen told Wired, “Almost all the attention goes to the supply of fake news. How to reduce it, identify it, choke it off, label it. There is almost no interest in the demand for fake news.” Research like ARL’s can bring focus to the demand side by assessing how people treat new information.

Recent