Sen. Ted Cruz, R-Texas, is calling for a national AI regulatory framework to avoid what he describes as a fragmented and failing system of state-led oversight.

“We need one clear federal standard, because having contradictory standards from every state and every city ensures failure, and failure should not be an option,” Cruz said Tuesday at Politico’s AI & Tech Summit in Washington.

Though Cruz underscored his support for states’ rights, he argued that artificial intelligence is the kind of issue that demands a unified federal approach.

He pointed to the Constitution’s Interstate Commerce Clause – which gives Congress the authority to regulate commerce across state lines – as a key justification, saying the clause was designed to prevent the kind of conflicting state regulations that could make national compliance with AI rules unworkable.

The clause has served as the basis for major federal laws and also carries a “dormant” aspect, which courts have read to limit states from passing laws that unduly burden interstate commerce.

Citing the Commerce Clause, Cruz questioned leaving AI regulation to the states, saying conflicting rules from 50 states don’t make sense for a technology that is national – and often global – in scope.

“I am a ferocious believer in federalism and states’ rights … [but] Congress should be setting the rules for interstate commerce, and not contradictory rules that would make compliance unworkable,” he said.

The Texas Republican specifically pointed to what he described as efforts from “far-left governors and mayors” that he said would “stifle innovation and benefit foreign adversaries.”

Cruz singled out state laws like Colorado’s AI mandate, which he said requires algorithmic audits to ensure outcomes meet diversity, equity, and inclusion goals, arguing that such measures could have unintended consequences that stifle innovation and ultimately benefit foreign adversaries.

Signed into law in May 2024, the Colorado Artificial Intelligence Act (CAIA) is considered one of the most comprehensive efforts in the U.S. to regulate predictive AI.

The law requires developers and deployers of “high-risk” AI systems – used in sectors such as housing, hiring, and healthcare – to prevent algorithmic discrimination and disclose how the systems are used in major decisions. While consumer advocates support the legislation, tech industry critics argue that inconsistent state laws could hamper innovation.

However, Colorado lawmakers voted last month to delay implementation of the state’s landmark AI law after businesses and local governments raised concerns over compliance costs. During a special session, legislators moved the start date of the CAIA from February 2026 to June 30, 2026.

Colorado Gov. Jared Polis convened the special session in part to address what his office called “the impending and costly implementation” of the law, also citing fiscal uncertainty tied to the federal government’s budget outlook.

Regulatory Sandboxes

Cruz unveiled the first of several planned AI-related bills last week to create a federal framework for AI regulation, which he says would shift the United States’ focus to maintaining global leadership in AI development.

“The objective, very simply, is to win the race for AI. And I think we’re at a moment of transformation,” he said, as he advocated for the use of regulatory sandboxes – flexible regulatory environments that allow innovators to test products under modified or exempted rules.

Regulatory sandboxes have been implemented in more than 50 countries. The model provides a mechanism for regulators to avoid stifling emerging technologies while still maintaining oversight. Cruz noted that President Donald Trump included the concept in his administration’s AI strategy.

“This is not some curious idea with no precedent,” Cruz said. “It’s been implemented in many countries.”

Cruz’s bill – dubbed the Strengthening Artificial Intelligence Normalization and Diffusion By Oversight and eXperimentation (SANDBOX) Act – is the first in a planned series of proposals forming his five-pillar AI legislative framework.

Introduced during a Senate Commerce Subcommittee on Science, Manufacturing, and Competitiveness hearing on Sept. 10, the bill aims to promote safe AI development with a “light-touch” regulatory approach that reduces burdens on AI developers.

“This would let innovators go to regulators and ask for either an exception or modification to existing regulations to allow them to innovate,” he said.

Lisbeth Perez is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.