
The General Services Administration (GSA) is proposing new contract guidelines that would require artificial intelligence (AI) vendors selling services to the federal government to allow agencies to use their models for “any lawful” government purpose.
According to a draft policy first reported by Politico, baseline contract terms for AI systems used across the government would require vendors to grant agencies broad operational rights to integrate and use AI tools within government systems.
Under the proposed terms, contractors would provide the government with an “irrevocable, royalty-free, non-exclusive license” to use the AI system for the duration of the contract. The draft also states that agencies must be able to integrate the technology into existing government systems “as necessary for any lawful government purpose.”
The proposal also aims to limit the ability of vendors to restrict how their models respond to government queries. According to the draft, an AI system “must not refuse to produce data outputs or conduct analyses based on the contractor’s or service provider’s discretionary policies.”
The new guidelines follow a dispute between the Department of Defense (DOD) – rebranded as the Department of War by the Trump administration – and Anthropic.
Anthropic declined to loosen safeguards that prohibit the use of its technology for applications such as fully autonomous weapons systems or mass domestic surveillance. In response, President Donald Trump barred federal agencies from using Anthropic’s AI tools.
Dario Amodei, CEO of Anthropic, said in a statement that the company believes “AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely [do].”
Defense Secretary Pete Hegseth later directed the DOD to classify Anthropic as a supply chain risk to national security. Anthropic sued the Trump administration this week over its decision, arguing the designation – and the resulting ban on its technology – was unlawful.
Beyond expanding how agencies can use AI tools, the draft guidance also seeks to address what administration officials have described as “woke AI.”
The proposal states that AI systems used by the government must prioritize historical accuracy, scientific inquiry, and objectivity in their outputs.
“The AI system must be a neutral, nonpartisan tool that does not manipulate responses in favor of ideological dogmas,” GSA wrote in the draft, citing diversity, equity, and inclusion principles as examples of such dogmas.
The guidance also outlines a “continuous improvement process” aimed at strengthening the detection and mitigation of issues related to performance, bias, trustworthiness, and the generation of illegal or prohibited outputs.
The draft would also give the federal government the authority to conduct its own automated assessments of AI systems to evaluate bias, truthfulness, safety, and ideological content.
If a contractor’s system fails to meet the proposed requirements, the government could suspend use of the AI tool until the issues are resolved. Vendors could also be responsible for decommissioning costs if their systems are found to violate the draft’s “unbiased AI principles.”
Civil liberties advocates criticized the proposal, arguing that its requirements could weaken safeguards in commercial AI systems. Quinn Anex-Ries, a senior policy analyst at the Center for Democracy and Technology, called the new policies “an overall detriment to advancing key safeguards in AI systems.”
“Instead of focusing on codifying commonsense guardrails into federal contracts, these overly broad requirements would make AI tools less safe by requiring systems to produce responses regardless of the risks and requiring them to adhere to meaningless ‘unbiased AI principles,’” Anex-Ries said. “The net effect of these measures is the worst of both worlds: forcing vendors to remove even more safeguards and scaring responsible companies away from working with the government at all.”