In brief
Dario Amodei says Anthropic will not remove its bans on mass domestic surveillance and fully autonomous weapons.
The Pentagon has threatened contract termination and potential action under the Defense Production Act.
The standoff follows reports that the U.S. military used Claude in the operation to capture former Venezuelan President Nicolás Maduro.
Anthropic CEO Dario Amodei said Thursday the company will not remove safeguards from its Claude AI model, escalating a dispute with the U.S. Department of Defense over how the technology can be used in classified military systems.
The statement comes as the Defense Department reviews its relationship with Anthropic and weighs potential penalties, including cancellation of the company's $200 million contract and possible invocation of the Defense Production Act.
"We cannot in good conscience accede to their request," Amodei wrote, referring to the Pentagon's demand in January that AI contractors permit use of their systems for "any lawful use."
While the Pentagon has since required AI vendors to adopt standard "any lawful use" language in future agreements, Anthropic remained the only frontier AI firm to resist turning over control of its AI to the military.
On Wednesday, Axios first reported that the Pentagon had issued an ultimatum requiring unrestricted military use of Claude. The deadline is reportedly this Friday.
"It is the Department's prerogative to select contractors most aligned with their vision," Amodei continued. "But given the substantial value that Anthropic's technology provides to our armed forces, we hope they reconsider."
In his statement, Amodei framed the company's stance as aligned with U.S. national security goals.
"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries," he said.
He added that Claude is "widely deployed across the Department of War and other national security agencies for intelligence analysis, modeling and simulation, operational planning, cyber operations, and more."
War on AI
The dispute unfolds against broader concerns about how advanced AI systems behave in high-stakes military scenarios. In a recent King's College London study, OpenAI's GPT-5.2, Anthropic's Claude Sonnet 4, and Google's Gemini 3 Flash deployed nuclear weapons in 95% of simulated geopolitical crises.
During a speech at SpaceX's Starbase in Texas in January, Defense Secretary Pete Hegseth said the U.S. military plans to deploy the most advanced AI models.
That same month, reports surfaced that Claude had been used during a U.S. operation to capture former Venezuelan President Nicolás Maduro. Amodei denied claims that Anthropic had questioned any specific military operations.
"Anthropic understands that the Department of War, not private companies, makes military decisions," he said. "We have never raised objections to particular military operations nor tried to limit use of our technology in an ad hoc manner."
Despite this, Amodei said using these systems for mass domestic surveillance or autonomous weapons is incompatible with democratic values and presents serious risks.
"Today, frontier AI systems are simply not reliable enough to power fully autonomous weapons," he said. "We will not knowingly provide a product that puts America's warfighters and civilians at risk."
He also addressed the Pentagon's threat to designate Anthropic a "supply chain risk" while also potentially invoking the Defense Production Act.
"These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security," he said.
While Amodei has said the company will not comply with the Pentagon's request, Anthropic has at the same time revised its Responsible Scaling Policy, dropping a pledge to halt training of advanced systems without assured safeguards in place.
Robert Weissman, co-president of Public Citizen, said the Pentagon's posture signals broader pressure on the tech industry.
"The Pentagon is publicly bullying Anthropic, and the public part is intentional, because they want to pressure this particular company and send a message to all of big tech and all companies that we intend to do and take whatever we want, and don't get in our way," Weissman told Decrypt.
Weissman described Anthropic's guardrails as "modest" and aimed at preventing "improper surveillance of American people or to facilitate the development and deployment of killer robots, AI-enabled weaponry that could launch lethal strikes without human say so."
"These are the most sensible and modest guardrails you could come up with when it comes to this powerful new technology."
Regarding the Pentagon's threat to designate Anthropic a "supply chain risk," Weissman called it a potentially crushing penalty from the government and argued it would pressure other AI firms to avoid imposing similar limits.
"Individuals might use Claude, but none of the AI companies, particularly Anthropic, have business models based on individual use; they're looking for enterprise use," he said. "This is a potentially crushing penalty from the government."
While the Pentagon has not yet said whether it plans to go through with its threat to terminate the contract or invoke the Defense Production Act, Weissman said the department is signaling to AI companies that it expects unrestricted access to their technology once it is deployed in government systems.
"The message of the Pentagon is, 'We're not going to tolerate this, and we expect to be able to use the technology as it's invented for any purpose we want,'" Weissman said.
The Department of Defense and Anthropic did not immediately respond to Decrypt's requests for comment.