It turns out a little knowledge can indeed be a dangerous thing.

That’s something the Army Research Laboratory (ARL) discovered when testing the value of artificial intelligence (AI) as an aid to battlefield decision-making. Researchers from ARL and the University of California, Santa Barbara, found in a series of test scenarios that people trust their own judgement more than they trust an AI’s advice. This was true even when an AI agent provided perfect guidance, and even when ignoring that advice led to negative results. People might trust an AI personal assistant to recommend a movie or the best route to the theater, but not so much when they have skin in the game, especially their own.

The researchers started with the hypothesis that people’s faith in their own abilities would affect their judgement when interacting with a computer. They created a variation of the Iterated Prisoner’s Dilemma, in which players must choose to cooperate with or turn against other players. Versions of the Prisoner’s Dilemma, which dates to work at the RAND Corp. in 1950, have been used to test ethics, trust, and levels of cooperation in military scenarios, in game theory, and in social and biological sciences such as economics, international politics, and evolutionary biology. In the original version, two prisoners, isolated from each other, have to weigh the risks and advantages of testifying against their partner in crime, with the prospects for prison terms in the balance. When the scenario is repeated, with players learning from previous actions, the game becomes “iterated.”
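The article does not spell out the exact payoffs or rules ARL used, but the structure of an iterated game of this kind is well established. The sketch below is a minimal illustration in Python using the conventional payoff ordering (temptation beats mutual cooperation, which beats mutual defection, which beats being the lone cooperator); the specific point values and the “tit for tat” strategy are illustrative assumptions, not details from the study.

```python
# Minimal sketch of an Iterated Prisoner's Dilemma, using the standard payoff
# ordering. The point values here are illustrative, not the ones ARL used.

PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation: both rewarded
    ("cooperate", "defect"):    (0, 5),  # lone cooperator loses, defector tempted
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # mutual defection: both punished
}

def play_round(move_a, move_b):
    """Return the (player A, player B) scores for a single round."""
    return PAYOFFS[(move_a, move_b)]

def play_iterated(strategy_a, strategy_b, rounds=10):
    """Play repeated rounds; each strategy sees the opponent's history so far."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)   # decide based on what the other player did
        move_b = strategy_b(history_a)
        a, b = play_round(move_a, move_b)
        score_a, score_b = score_a + a, score_b + b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

# Example strategies: "tit for tat" cooperates first, then mirrors the opponent.
tit_for_tat = lambda opp: "cooperate" if not opp else opp[-1]
always_defect = lambda opp: "defect"
print(play_iterated(tit_for_tat, always_defect))  # e.g. (9, 14) over 10 rounds
```

The iteration is what lets players (and, in the ARL variation, an AI advisor) learn from earlier rounds rather than making a one-shot choice.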

The research team created a game in which different versions of AI agents offered advice of varying accuracy in each round. One AI offered the optimal course of action at every turn. Another was programmed to be inaccurate, while another was more labor-intensive than others, requiring players to input game information manually. Still another bolstered its suggestions with rational arguments.
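As a rough illustration of that setup, the sketch below mocks up advisor agents with differing accuracy. The stand-in “optimal” policy, the 30 percent error rate, and the function names are assumptions made for illustration; the study’s actual agents and interfaces are not detailed in the article.

```python
import random

# Hypothetical advisor variants, loosely modeled on the description above.

def optimal_move(opponent_history):
    """Stand-in 'optimal' policy: mirror the opponent's last move (tit for tat)."""
    return "cooperate" if not opponent_history else opponent_history[-1]

def perfect_advisor(opponent_history):
    # Recommends the optimal course of action at every turn.
    return optimal_move(opponent_history)

def inaccurate_advisor(opponent_history, error_rate=0.3):
    # Deliberately flips the recommendation some fraction of the time.
    suggestion = optimal_move(opponent_history)
    if random.random() < error_rate:
        suggestion = "defect" if suggestion == "cooperate" else "cooperate"
    return suggestion

def arguing_advisor(opponent_history):
    # Pairs the recommendation with a short rationale for the player to read.
    suggestion = optimal_move(opponent_history)
    return suggestion, f"Recommending '{suggestion}' given the opponent's recent behavior."

print(perfect_advisor(["defect", "cooperate"]))  # -> 'cooperate'
```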

No matter.

“What was discovered might trouble some advocates of AI: two-thirds of human decisions disagreed with the AI, regardless of the number of errors in the suggestions,” said ARL scientist Dr. James Schaffer.

They found that some players trusted their own judgement more than they did an AI agent, and that the more familiar a player was with the game going in, the less they used the AI. Those unfamiliar with the game trusted the AI more. “This might be a harmless outcome if these players were really doing better,” Schaffer said, “but they were in fact performing significantly worse than their humbler peers.” As a result, the novices who leaned on the AI outscored the more experienced players.

The trust factor has been a focus of AI researchers, particularly those in the military. The Department of Defense envisions human-machine teaming as the ideal use for AI, and is devoting a substantial amount of its research money to technologies that support that approach. But the effectiveness of human-machine teams depends on trust, which is something military researchers have been trying to develop by improving the ability of an AI agent to communicate with people.

The Defense Advanced Research Projects Agency (DARPA), for instance, has a couple of projects along those lines. The agency’s Explainable Artificial Intelligence program, or XAI, is looking for ways for an AI to explain, in human terms, how it reached a certain conclusion, something machines that use complex algorithms to analyze massive amounts of data currently can’t do.

A new DARPA project, Competency-Aware Machine Learning (CAML), is taking another approach to turning AI machines into trusted partners by having the machines continuously evaluate their own performance as they work, and providing their human counterparts with ongoing reports. “If the machine can say, ‘I do well in these conditions, but I don’t have a lot of experience in those conditions,’ that will allow a better human-machine teaming,” said Jiangying Zhou, a program manager in DARPA’s Defense Sciences Office. “The partner then can make a more informed choice.”

Developing trust between humans and machines might even get humans to question their own abilities enough to listen to an AI’s suggestions.
