This week, Sens. Richard Blumenthal, D-Conn., and Josh Hawley, R-Mo., introduced the No Section 230 Immunity for AI Act, aiming to clarify that Section 230 immunity should not apply to generative AI.

The leaders of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law introduced their new bipartisan legislation on June 14.

“AI companies should be forced to take responsibility for business decisions as they’re developing products – without any Section 230 legal shield,” Sen. Blumenthal said in a statement. “This legislation is the first step in our effort to write the rules of AI and establish safeguards as we enter this new era. AI platform accountability is a key principle of a framework for regulation that targets risk and protects the public.”

The bipartisan bill bolsters the argument that Section 230 of the Communications Decency Act shouldn’t cover AI-generated content, promising consumers the tools they need to protect themselves from harmful content produced by the latest advancements in AI technology. It also gives lawmakers – who have vowed for years to amend Section 230 – an opening to act on that promise.

Section 230 of the Communications Decency Act – established in 1996 – is often credited as the law that allowed the internet to flourish and social media to take off. The law largely shields platforms from lawsuits over third-party content.

However, many believe the law’s protections extend too far and are not fit for today’s web, allowing social media companies and Big Tech to leave too much harmful content online.

Legal experts and lawmakers have questioned whether AI-created works would qualify for legal immunity under Section 230 – an issue made newly urgent by the explosion of generative AI over the last several years.

The senators’ No Section 230 Immunity for AI Act would amend Section 230 “by adding a clause that strips immunity from AI companies in civil claims or criminal prosecutions involving the use or provision of generative AI,” according to the text of the bill.

The legislation would also allow people to sue companies in federal or state court for alleged harm caused by generative AI models.

“We can’t make the same mistakes with generative AI as we did with Big Tech on Section 230,” Sen. Hawley said in a statement. “When these new technologies harm innocent people, the companies must be held accountable. Victims deserve their day in court and this bipartisan proposal will make that a reality.”

In May, as the leaders of the Judiciary Subcommittee on Privacy, Technology, and the Law, Blumenthal and Hawley held a hearing to conduct oversight on AI technology and appropriate safeguards.

During the hearing, witnesses agreed that oversight from Congress is needed to protect the American people. Samuel Altman, CEO of OpenAI – the company that created the famed ChatGPT AI tool – said, “We think that regulatory intervention by governments will be critical to mitigating the risks of increasingly powerful models.”

Cate Burgan
Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.