In today’s society, can an artificial intelligence (AI) algorithm truly be unbiased? That question sparked some serious debate among a panel of experts from the Brookings Institution last week.
During an October 31 discussion, Nicol Turner Lee, a fellow at Brookings’ Center for Technology Innovation, argued that the societal context of the people developing AI technologies makes unbiased algorithms a tall task.
“The people who are developing them are not diverse,” she noted. “When we actually develop computer systems and models, we come with our values, our norms, our assumptions about what the world looks like, and when you come with those values, norms, and assumptions, and you work in Silicon Valley … that’s what the AI starts looking like, and that’s why the bias is inherently baked into the system,” she added.
However, Robert Atkinson, president of the Information Technology and Innovation Foundation (ITIF), disagreed, arguing that conscious development could produce an algorithm without biases. He added that even imperfect algorithms can improve on the biased processes of humans.
“A good example would be facial recognition. Everybody is all up in arms about facial recognition, which is a teeny bit more biased against some groups than others … but there are studies looking at police lineups, where people only get that right 60 percent of the time,” said Atkinson. “Using the NIST [National Institute of Standards and Technology] studies on the best, most effective AI, they show a one percent error,” he said.
John Villasenor, a senior fellow at the Center for Technology Innovation, noted that while the existing legal framework on product liability has seen few algorithm cases so far, it offers a potential system for holding companies accountable for the outcomes their algorithms produce.
“One overarching conclusion I believe … is that companies need to bear responsibility for the algorithms that they create,” he said.
As policy solutions to AI’s risks, Lee proposed a rating system modeled on the Energy Star rating system for appliances to bring transparency to algorithms, along with a mechanism for user reviews to build trust. She also advocated for more diversity in tech development and for including disciplines outside of technology to bring new perspectives.
Atkinson backed a light-touch regulatory approach, noting that past proposals to more strictly regulate the internet could have stifled much of the innovation of recent decades.