Artificial intelligence (AI) is an increasingly popular tool for employers across the United States to use in their hiring processes, but the influential technology raises serious employment discrimination concerns, a federal government official said this week.
“The use and complexity of technology in employment decisions is increasing over time,” said Charlotte Burrows, chair of the U.S. Equal Employment Opportunity Commission (EEOC), during an EEOC hearing on AI on Jan. 31.
How can experts root out discrimination when it may be buried deep inside an algorithm? The answer is oversight of AI algorithms, witnesses at the hearing said.
Jordan Crenshaw, a vice president at the U.S. Chamber of Commerce, explained during his testimony that building trust in AI technology requires more than just government oversight.
“The speed and complexity of technological change … means that governments alone cannot promote trustworthy AI,” Crenshaw said. “The Chamber believes that government must partner with the private sector, academia, and civil society when addressing issues of public concern associated with AI.”
According to ReNika Moore, director of the American Civil Liberties Union’s Racial Justice Program, “EEOC’s Initiative on AI and Algorithmic Fairness, and its collaboration with the Justice Department to develop and issue guidance on anti-discrimination measures to new technologies, are critical first steps.”
However, some who have been denied employment due to the impact of AI may not connect the dots to discrimination because those biases “may be buried deep inside an algorithm,” said Moore.
“For example, older workers may be disadvantaged by AI-based tools in multiple ways,” said AARP Senior Advisor Heather Tinsley-Fix.
“Companies that use algorithms to scrape data from digital profiles in searching for ideal candidates may overlook those who have smaller digital footprints. Algorithms could also create a feedback loop that then hurts future applicants, so if an older candidate makes it past the resume screening process but gets confused by or interacts poorly with the chatbot, that data could teach the algorithm to assign lower ranks to candidates with similar profiles,” she said.
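The feedback loop Tinsley-Fix describes can be sketched in a few lines of code. The sketch below is purely illustrative; the profile attributes, similarity measure, and threshold are all hypothetical, not any vendor's actual model.

```python
# Illustrative sketch of the feedback loop described above.
# All profile attributes, thresholds, and penalties are hypothetical.

def similarity(a, b):
    """Toy similarity: fraction of attributes two profiles share."""
    a, b = set(a), set(b)
    return len(a & b) / max(len(a | b), 1)

def update_scores(scores, interaction_log, threshold=0.8, penalty=0.1):
    """After each poor chatbot interaction, lower the ranking score of
    every candidate whose profile resembles the one who struggled;
    this is how past interactions can bias future screening."""
    for profile, went_well in interaction_log:
        if went_well:
            continue
        for candidate in scores:
            if similarity(candidate, profile) >= threshold:
                scores[candidate] -= penalty
    return scores

# One older candidate passes resume screening but struggles with the chatbot...
scores = {
    ("20_yrs_experience", "small_digital_footprint"): 0.9,
    ("recent_grad", "large_digital_footprint"): 0.7,
}
log = [(("20_yrs_experience", "small_digital_footprint"), False)]

# ...and the model now ranks similar profiles lower.
update_scores(scores, log)
```

Note that nothing in the loop checks age directly; the disadvantage emerges indirectly, through correlated profile attributes, which is why such bias "may be buried deep inside an algorithm."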
Audits are necessary to ensure that the software used by companies avoids intentional or unintentional biases, the panelists unanimously agreed. But who would conduct those audits – the government, the companies themselves, or a third party – is the next significant AI issue the EEOC must consider.
According to Chair Burrows, each option comes with risks and limitations.
“A third-party auditor may be coopted into treating their clients leniently, while a government-led audit could stifle innovation. And setting standards for vendors and requiring companies to disclose what hiring tools they’re using remains to be seen in practice,” Burrows said.