
US Army Eyes Cutting Ties With AI Firm Anthropic

Washington, D.C., USA — Tuesday, February 17, 2026

The Department of Defense is weighing a major shift in its partnership with AI startup Anthropic. Recent reports suggest that the Pentagon might label the company a supply chain risk, forcing other contractors to avoid working with it and potentially hurting Anthropic’s revenue.

Why the Conflict?

Anthropic has refused to let the military use its models without restriction. While tech giants like Google and OpenAI have agreed to broader military applications, Anthropic's leaders insist their tools should not be used for autonomous weapons or domestic surveillance. The company's own system, Claude, is not yet part of the military's AI platform GenAI.mil, which already hosts versions of ChatGPT and models from other providers.

A senior Pentagon official warned that removing Anthropic from the supply chain would be a painful process and that the company should face consequences for its stance. The threatened move follows last summer's high-profile deal, in which Anthropic signed a contract worth up to $200 million with the Defense Department. At the time, the company said the partnership would help it support national security.

Leadership Concerns

Anthropic’s CEO, Dario Amodei, has spoken openly about the dangers of unchecked AI in warfare. He has urged oversight of autonomous systems, noting that no one currently controls the “swarm of drones.” That caution has made the Pentagon wary of giving Anthropic a larger role.

Military’s Stance

The military’s insistence on “all lawful uses” of AI has put pressure on companies that impose usage restrictions. While OpenAI’s customized ChatGPT is already in use by millions of civilians and service members, Anthropic remains excluded. A Pentagon spokesperson said that any partnership must ultimately help troops succeed and keep Americans safe.

Implications

This development marks a sharp change from the optimism that surrounded Anthropic’s earlier contract. If the Pentagon moves forward with its supply‑chain risk designation, it could signal a broader reassessment of how the U.S. government interacts with emerging AI firms.
