AI News

Trump Administration Plans Executive Order Against Anthropic Claude Tools

The White House is preparing an executive order to ban Anthropic tools across federal agencies as court proceedings continue over the AI company's supply chain risk designation.

LLMBase Editorial Updated March 11, 2026 2 min read
ai llm industry anthropic government regulation
Government Refuses to Rule Out Additional Action

At Tuesday's federal court hearing, Justice Department attorney James Harlow declined to offer any commitment that the government would not impose further penalties on Anthropic. The hearing addressed one of two federal lawsuits filed by Anthropic on Monday, challenging the administration's designation of the company as a supply chain risk.

President Trump is currently finalizing an executive order that would formally prohibit usage of Anthropic's Claude tools across government agencies, according to a White House source familiar with the matter. This represents a significant expansion beyond the existing Pentagon designation that has already disrupted the company's business relationships.

Court Schedule Accelerated Due to Business Impact

Anthropic sought an expedited hearing schedule to prevent mounting business damage from the supply chain risk designation. The company reports that billions of dollars in revenue are at risk as current and prospective customers withdraw from deals and demand new contract terms.

US District Judge Rita Lin moved the preliminary hearing to March 24 in San Francisco, earlier than originally planned but later than Anthropic requested. The judge cited the "consequential" nature of the case for both parties as requiring thorough consideration despite the expedited timeline.

Dispute Origins in Military Use Restrictions

The conflict began when Anthropic refused to sign agreements allowing the military unrestricted use of its AI technologies for any lawful purpose. The company expressed concerns about potential applications including broad surveillance of Americans and autonomous weapons systems without human oversight.

The Defense Department maintains that usage decisions for contracted technologies fall within its authority, while Anthropic argues for maintaining ethical guardrails on its AI systems. This disagreement led to the Pentagon's designation of Anthropic as a supply chain risk.

Industry Impact and Competitive Dynamics

The sanctions create uncertainty for software companies that rely on Anthropic's Claude suite of tools, forcing them to consider alternative AI providers. OpenAI and Google are advancing Pentagon partnerships to fill Anthropic's role, despite internal employee pressure at both companies to resist government demands over how their technology is used.

Legal experts suggest the administration's actions follow a pattern of using regulatory measures against perceived political opponents, including universities, media companies, and law firms. The case tests the boundaries between national security authority and constitutional protections for companies refusing government contracts.

The outcome may establish precedents for how AI companies can maintain ethical principles while engaging with government contracts, particularly relevant for European firms considering US market entry under similar regulatory pressures.

Original source: Wired
