Anthropic Defies Pentagon: Trump Bans Claude AI in Military Dispute

Anthropic, the company behind Claude AI, built its reputation on ethical AI with strong safeguards and “constitutional” principles. US Defense Secretary Pete Hegseth is now testing those limits by demanding unrestricted military AI access, sparking a high-stakes standoff that led to US President Donald Trump’s ban on federal use of Claude.

Imagine a future where AI systems scan every citizen’s daily movements, online searches, social connections, location history, and even casual purchases, piecing them together into a detailed, real-time profile of their life. Privacy evaporates as everyday behaviours become potential evidence of threat.

Now picture another reality where drones or robotic platforms identify, select, and eliminate human targets with no human in the decision loop. These are the risks of fully autonomous lethal weapons powered by frontier AI that remains unpredictable, prone to bias, and not yet reliable enough for life-and-death choices without human oversight.

Anthropic is standing firm in its standoff with the US Department of Defense. CEO Dario Amodei declared that the company would rather lose its Pentagon contracts than allow Claude AI to be used for mass domestic surveillance or fully autonomous weapons. Such uses, he said, would “undermine, rather than defend, democratic values.”

This prompted Trump to announce that all federal agencies will immediately cease using Anthropic’s AI technology, including the Claude model.

The “Anti-Woke” Vision: Pentagon Pushes Unrestricted Military AI Adoption

US Secretary of War Pete Hegseth has sharply rejected what he calls ideological limits on AI in the military. In a January 2026 speech at SpaceX, he declared: “gone are the days of equitable AI and other DEI and social justice infusions that constrain and confuse our employment of this technology.” He redefined responsible AI at the Department of War (the rebranded Defense Department under a Trump executive order) as “objectively truthful AI capabilities employed securely and within the laws.”

Hegseth emphasised: “we will not employ AI models that won’t allow you to fight wars.” He pledged to judge models solely on being factually accurate, mission-relevant, and free from “ideological constraints that limit lawful military applications,” adding that “Department of War AI will not be woke.”

These remarks frame the Pentagon’s push against Anthropic’s safeguards as part of a broader shift toward unrestricted AI adoption. Hegseth warned against lagging in the global race: “we are done running a peacetime science fair while our potential adversaries are running a wartime arms race.”

Under current U.S. law, federal agencies and intelligence services can already collect vast amounts of Americans’ personal data, including texts, emails, call records, location information, and more, often without individualised warrants. This is enabled through the routine purchase of commercial datasets from private brokers.

Artificial intelligence dramatically escalates these capabilities.

AI tools can:

  • Automatically detect patterns across enormous datasets
  • Link disparate records into detailed personal profiles
  • Assign real-time “risk scores” to individuals
  • Conduct continuous behavioral monitoring at a scale and speed impossible for human analysts

The result is a far more pervasive and intrusive form of surveillance than anything previously feasible under existing legal frameworks.
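The record-linking and risk-scoring pattern described above can be sketched in a few lines of Python. Everything here is hypothetical and purely illustrative: the field names, weights, and scoring formula are invented to show the mechanism, not drawn from any real system.

```python
# Illustrative sketch: merging disparate records into a profile and
# assigning an opaque "risk score". All names and weights are invented.
from dataclasses import dataclass

@dataclass
class Profile:
    person_id: str
    location_pings: int = 0
    flagged_searches: int = 0
    flagged_contacts: int = 0

def link_records(records):
    """Merge per-source records (dicts) into one profile per person."""
    profiles = {}
    for rec in records:
        p = profiles.setdefault(rec["person_id"], Profile(rec["person_id"]))
        p.location_pings += rec.get("location_pings", 0)
        p.flagged_searches += rec.get("flagged_searches", 0)
        p.flagged_contacts += rec.get("flagged_contacts", 0)
    return profiles

def risk_score(p, weights=(0.1, 1.0, 0.5)):
    """Weighted sum of signals, capped at 100 -- the kind of automated
    score the article warns could be assigned in real time."""
    raw = (weights[0] * p.location_pings
           + weights[1] * p.flagged_searches
           + weights[2] * p.flagged_contacts)
    return min(100.0, raw)

# Two records from different "sources" collapse into one profile.
records = [
    {"person_id": "a1", "location_pings": 40},
    {"person_id": "a1", "flagged_searches": 3, "flagged_contacts": 2},
]
profiles = link_records(records)
print(risk_score(profiles["a1"]))  # 40*0.1 + 3*1.0 + 2*0.5 = 8.0
```

The point of the sketch is how trivially mergeable the data is: once records share an identifier, behavioural signals from unrelated sources combine into a single score with no human judgment in the loop.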

Anthropic Responds to Pentagon’s Supply Chain Risk Designation

Anthropic issued a statement on February 27, 2026, rejecting Hegseth’s announcement that the Department of War would designate the company a “supply chain risk”. The announcement followed unsuccessful negotiations over ethical safeguards on Anthropic’s Claude AI model.

The firm reiterated its refusal to allow Claude’s use for mass domestic surveillance of Americans or fully autonomous weapons. It also argued that current frontier AI remains too unreliable for lethal autonomous systems, endangering warfighters and civilians alike.

Anthropic emphasised these narrow exceptions have not impacted any U.S. government missions to date and accused the Pentagon of applying an unprecedented label typically reserved for foreign adversaries to an American company.

The company vowed to challenge any formal designation in court, calling it legally unsound and a dangerous precedent for private firms negotiating with the government. It clarified that the move would only restrict Claude’s use in Department of War contracts: individual users, commercial customers, and contractor work outside those contracts will not be affected.

The AI Transatlantic Divide

Hegseth’s stance contrasts with international calls for restraint, such as UN Secretary-General António Guterres urging “guardrails” and a ban on lethal autonomous weapons without human control. For Europe, Hegseth’s approach highlights the divide on AI ethics in defense. It also reinforces the urgency of sovereign, regulated AI pathways.

  • EU AI Act (phased implementation through 2026–2027)
    • Comprehensive, binding, risk-based framework
    • Prioritises fundamental rights, privacy, non-discrimination, and precautionary measures
    • Builds directly on GDPR principles with extraterritorial reach
    • Imposes steep fines for non-compliance
    • Bans unacceptable-risk AI outright; strict rules for high-risk systems (transparency, human oversight, conformity assessments, documentation)
    • Paired with additional regulation including the proposed European Innovation Act and Digital Omnibus

  • US Approach (Trump administration, 2026)
    • Innovation-first
    • No comprehensive federal law in place
    • Governance relies on executive orders, agency guidance, voluntary frameworks and fragmented state-level rules
    • Recent federal directives seek to preempt conflicting state laws
    • Decentralised, sector-specific model
    • Emphasises national security and economic dominance

When we last covered the EU AI Act, our analysis highlighted a high-stakes balancing act: robust safeguards are needed to counter harms such as surveillance scandals and algorithmic bias, but the Act could slow Europe’s AI progress, erode competitiveness, and leave the continent reliant on US and Chinese models.

Claude AI Surges to #1 on App Store

OpenAI’s swift Pentagon deal, announced last Friday just hours after the Trump administration blacklisted Anthropic, has sparked a backlash that contrasts sharply with Claude’s surge.

While Anthropic’s defiant stand won public admiration (propelling Claude to No. 1 on the US App Store free charts that weekend, with record sign-ups), OpenAI faces accusations of opportunism.

OpenAI CEO Sam Altman initially claimed to share Anthropic’s red lines, then revealed the contract shortly afterward.

Altman admitted the rollout was rushed and “looked opportunistic and sloppy,” later amending the deal on Monday to explicitly bar intentional domestic surveillance of US persons, though autonomous weapons remained unaddressed in the posted updates.

The deal has dented OpenAI’s perceived ethical stance among some users and employees, fueling uninstall spikes for ChatGPT and highlighting how military ties can erode consumer trust when safeguards appear compromised.

Author: Ruben McCarthy

See Also:

Will the EU’s AI Act Cripple Europe’s Innovation Edge?

Biggest AI Surveillance Scandals Threatening Europe’s Privacy

What is the European Innovation Act?
