
The Senate’s draft proposal for national artificial intelligence (AI) legislation unveiled Wednesday would preempt state laws by focusing on regulations that protect the “4 Cs” – children, creators, conservatives, and communities.
Led by Sen. Marsha Blackburn, R-Tenn., the measure, dubbed the TRUMP AMERICA AI Act, is nearly 300 pages long.
The proposal follows President Donald Trump’s December executive order, which preempted state AI legislation and called for a single federal AI framework to prevent a patchwork of regulations that he said could hinder innovation.
Blackburn’s proposal is the first to be unveiled by a lawmaker since Trump’s order.
Notably, the proposal formalizes two key AI-related federal resources, which have been recent topics of conversation in Congress. The Center for AI Standards and Innovation (CAISI) at the National Institute of Standards and Technology (NIST) would be codified, and a formal governance and funding structure for the National Science Foundation’s (NSF) National Artificial Intelligence Research Resource (NAIRR) pilot would be established.
“Instead of pushing AI amnesty, President Trump rightfully called on Congress to pass federal standards and protections to solve the patchwork of state laws that has hindered AI innovation,” Blackburn said in a statement.
“Now, Congress must answer his call to establish one federal rulebook for AI to protect children, creators, conservatives, and communities across the country and ensure America triumphs over foreign adversaries in the global race for AI dominance,” she added.
Blackburn’s proposal aligns with aspects of Trump’s AI proposal, which he delivered to Congress Friday morning. However, it differs from suggestions made by other lawmakers, most prominently Rep. Jay Obernolte, R-Calif., a leading voice on AI in Congress. Obernolte has strongly supported sectoral regulation of AI, an approach Blackburn’s proposal largely does not take.
Child safety
Blackburn’s proposal would require online and social media platforms to implement tools and guardrails to protect users under the age of 17. Platforms would have to adopt safer design practices, add privacy and parental control tools, restrict research on children and teens, and give users more transparency and choice over algorithm-driven content.
The proposal would mandate age verification through a government-issued ID for underage chatbot users. Bots would need to disclose that they are not human or licensed professionals and issue reminders to that effect every 30 minutes of use. Companies also could not design chatbots that encourage sexual interactions with minors or promote suicide, self-harm, or violence. Companies could face penalties of up to $100,000 for non-compliance.
The proposal would also establish a Kids Online Safety Council to advise Congress on emerging online risks to minors and recommend safety standards and best practices. The panel would include experts, parents, youth, educators, industry representatives, and state officials.
Protections for creators
Under Blackburn’s draft proposal, copyright holders would gain a new legal tool to force transparency into how AI models are trained. Specifically, creators who suspect their work was used to train a generative AI system could request a court-issued subpoena requiring developers to disclose the training data. If developers don’t comply, the proposal would create a legal presumption of infringement.
The proposal directs federal agencies – particularly NIST – to develop standards and tools that identify, label, and track AI-generated or manipulated content. Certain content – such as the work of journalists or creatives – would need to carry information that shows whether the content is authentic or AI-generated.
Addressing AI bias
To combat what Blackburn called “the consistent pattern of bias against conservative figures demonstrated by AI systems,” she proposed that high-risk AI system developers conduct annual independent third-party audits to detect political affiliation or viewpoint discrimination.
Her proposal would also govern how federal agencies can buy and use AI. Agencies could only procure models that adhere to “unbiased artificial intelligence principles,” defined as truthfulness, historical and scientific accuracy, acknowledgment of uncertainty, and ideological neutrality.
That follows a directive from the Office of Management and Budget (OMB) in December, which told agencies they had until early March 2026 to ensure that their AI and machine learning are “truthful” and don’t favor certain “ideological dogmas.” In July, Trump signed an executive order telling agencies to only use “unbiased AI.”
Blackburn’s proposal would direct OMB to issue implementation guidance that – with some exceptions – requires agencies to build those standards into contracts and procurements.
AI safety
The TRUMP AMERICA AI Act would establish a baseline “duty of care” for AI chatbot developers that requires them to take reasonable steps to prevent and mitigate user harm that is foreseeable and tied to how the system is designed to operate.
An additional risk-based regulatory framework for AI systems would be created. That framework would require certain developers to participate in evaluation programs and submit advanced systems to the Department of Energy for testing and oversight, according to the draft.
Both developers and deployers of AI would be held legally responsible for harm caused by their systems.
To protect workers, companies and relevant federal agencies would need to submit quarterly reports to the Department of Labor on AI-related job effects, including any layoffs or job displacements. The Labor Department would then be required to make that data publicly available.
To address rising energy costs linked to AI data centers, Blackburn also proposed safeguards that ensure ratepayers are not unfairly burdened by the costs of AI-related infrastructure.
Earlier this month, Trump announced his ratepayer protection pledge, which the CEOs of large AI and data center companies signed, agreeing to cover the cost of all power needed to fuel their data centers.