
As Congress takes stabs at regulating artificial intelligence, a rise in AI-powered crime has prompted a big question: what happens when AI commits crimes without a human directing it, and what might qualify as “human” in the future?
Witnesses took their own aim at this question posed by Rep. Andy Biggs, R-Ariz., chair of the House Judiciary Subcommittee on Crime and Federal Government Surveillance, at a hearing on Wednesday to evaluate AI-enabled crime.
“At what point do we no longer see computational decision making with a human-first mover … at some point it won’t be a human that’s the first mover anymore, it’ll be the algorithm itself,” said Rep. Biggs.
The chairman asked witnesses “whether there’s probable cause or not for a search warrant or arrest warrant [if the crime] is merely algorithmically sustained, as opposed to having a human make that determination.”
While witnesses agreed with the chairman that artificial general intelligence (AGI) and artificial superintelligence (ASI) systems are fast approaching – Ari Redbord, global head of policy at TRM Labs, said he’s “never seen anything move as fast as this” in his lifetime – they noted that answering his question will require deciding what counts as human.
“The role of AI is a choice,” said Cody Venzke, senior policy counsel of the National Political Advocacy Division at the American Civil Liberties Union, noting that laws generally specify certain functions for which AI cannot be used.
“You mentioned probable cause,” Venzke told the chairman. “That strikes me as a core foundational tenet of due process – that should probably be truly a human activity. And there’s a possible choice that we make of who will be – what will be human? What will be AI and where will humans be?”
Due process ensures fair treatment under the law, and probable cause means there’s enough evidence to justify a search or arrest.
Other experts weighed the legal implications of AGI and ASI systems. Andrew Bowne, former counsel at the Department of the Air Force Artificial Intelligence Accelerator at the Massachusetts Institute of Technology, said that future legal questions may turn on how much trust and authority is placed in AI agents and how “fine-tuned” they are to ensure they act responsibly.
Zara Perumal, co-founder of Overwatch Data, said future legal decisions could hinge on how high stakes an AI system’s actions are.
“As the models can do more complex reasoning the next few years, it really should come down to how important is the decision, and then what transparency and explainability we can get from the model, and how much human oversight is necessary,” said Perumal.
Meanwhile, Redbord urged action on legal questions now rather than later.
“We need to move … quickly, building the tools [while] working with this body in order to provide the right laws,” said Redbord. “As an old school prosecutor, I’m happy judges don’t make decisions around probable cause, but I do think we really need to ensure that we also are using the tools defensively to meet this moment.”