Anthropic Claude Sabotage Claims Disputed in Court Filing

Anthropic denies Pentagon accusations about Claude AI manipulation capabilities, filing court documents that detail technical limitations on remote access during military operations.

LLMBase Editorial · Updated March 21, 2026 · 2 min read

Technical Architecture Prevents Remote Access

Thiyagu Ramasamy, Anthropic's head of public sector, stated in Friday court filings that the company lacks technical capabilities to interfere with deployed Claude instances. According to the documents, Anthropic cannot remotely disable the technology, alter model behavior, or access military systems once Claude is operational within Pentagon infrastructure.

The executive specifically denied the existence of backdoors or remote kill switches, explaining that Anthropic personnel cannot log into Department of Defense systems to modify models during operations. Updates would require approval from both the government and the cloud provider, which the court documents point to as Amazon Web Services without naming it explicitly.

The technical limitations described in the filing address core Pentagon concerns about supply chain vulnerabilities in AI systems used for battle planning, data analysis, and memo generation.

Contract Negotiations and Policy Disputes

Anthropic policy head Sarah Heck revealed that the company had proposed contract language in March addressing Pentagon operational concerns. The proposed terms would have explicitly waived Anthropic's right to control or veto Defense Department operational decisions, according to court documents.

The company also indicated willingness to accept restrictions barring Claude's use in lethal strikes without human supervision, though the negotiations ultimately collapsed. The breakdown preceded Defense Secretary Pete Hegseth's supply chain risk designation, which bars federal agencies and contractors from using Anthropic's software.

The Pentagon's position, outlined in government court filings, maintains that the Department of Defense should not tolerate risks to critical military systems during active operations, regardless of technical architecture claims.

Business Impact and Legal Timeline

The supply chain risk designation has prompted customers to cancel contracts and federal agencies to stop using Claude, according to Anthropic's court submissions. The company has filed two constitutional challenges seeking emergency orders to reverse the designation.

A federal district court hearing in San Francisco is scheduled for March 24, where a judge could decide on temporary relief. The Pentagon has indicated it is working with third-party cloud providers to mitigate perceived risks from existing Claude deployments while the legal dispute continues.

The case highlights broader tensions between AI safety policies and military procurement, with implications for how European defense agencies might evaluate similar supply chain concerns for Claude and other frontier models. Wired reported the details of the court filings and ongoing legal proceedings.
