
Anthropic Pushes Back on U.S. “Department of War” Over Surveillance and Fully Autonomous Weapons

AI News | Fumi Nozawa

Feb. 26, 2026 — Anthropic CEO Dario Amodei released a public statement outlining the company’s position following discussions with the U.S. “Department of War,” clarifying that while Anthropic will continue supporting national security efforts, it will not agree to certain uses of its AI systems.

Extensive National Security Deployment

Anthropic stated that it has proactively deployed its models across U.S. government classified networks, national laboratories, and defense agencies. According to the company, its AI system Claude is currently used in mission-critical contexts including intelligence analysis, modeling and simulation, operational planning, and cyber operations.

The company also emphasized steps it has taken to protect U.S. technological leadership, including cutting off access to entities linked to the Chinese Communist Party and supporting export controls on advanced chips.

Anthropic noted that military decisions ultimately rest with the government, not private contractors, and said it has not sought to intervene in operational matters.

Two Explicit Exceptions

Despite broad cooperation, Anthropic reiterated that it will not support two specific use cases.

Mass domestic surveillance
While endorsing lawful foreign intelligence and counterintelligence applications, the company stated that AI-enabled mass domestic surveillance is incompatible with democratic values. It argued that current legal frameworks have not kept pace with AI’s capacity to aggregate disparate public and commercially available data into detailed, automated profiles of individuals at scale.

Fully autonomous weapons
Anthropic distinguished between partially autonomous systems and fully autonomous weapons that remove humans from target selection and engagement decisions. The company said today’s frontier AI systems are not reliable enough to safely power fully autonomous weapons and that adequate oversight mechanisms are not yet in place. It added that it had offered to collaborate on research to improve reliability but said the offer was not accepted.

Government Pressure Alleged

In the statement, Anthropic said the Department has indicated it will contract only with AI providers willing to support "any lawful use" and to remove safeguards in the areas described above. The company claimed it was warned it could be removed from government systems, labeled a "supply chain risk," or compelled under the Defense Production Act to eliminate its restrictions.

Anthropic characterized these potential actions as contradictory, arguing that labeling the company a security risk while simultaneously describing its technology as essential to national security sends mixed signals about the government's actual concerns.

Willing to Continue Cooperation

The company expressed a preference to continue supporting U.S. national security missions with the two safeguards in place. If removed as a contractor, Anthropic said it would work to ensure a smooth transition to alternative providers to avoid disruption to military planning or operations.

The statement underscores the growing tension between AI developers and governments over the limits of military AI deployment, particularly in areas involving civil liberties and autonomous decision-making.


Fumi Nozawa

Digital Marketer & Strategist

Following a career with global brands such as Paul Smith and Boucheron, Fumi now supports international companies with digital strategy and market expansion. By combining marketing expertise with a deep understanding of technology, he builds solutions that drive tangible brand growth.

