The Crypto HODL
Anthropic Won’t Lift AI Safeguards Amid Ongoing Pentagon Dispute: CEO

February 27, 2026
in Web3

In short

  • Dario Amodei says Anthropic will not remove its bans on mass domestic surveillance and fully autonomous weapons.
  • The Pentagon has threatened contract termination and possible action under the Defense Production Act.
  • The standoff follows reports that the U.S. military used Claude to capture former Venezuelan President Nicolás Maduro.

Anthropic CEO Dario Amodei said Thursday the company will not remove safeguards from its Claude AI model, escalating a dispute with the U.S. Department of Defense over how the technology can be used in classified military systems.

The statement comes as the Defense Department reviews its relationship with Anthropic and weighs potential penalties, including cancellation of the company’s $200 million contract and possible invocation of the Defense Production Act.

“We cannot in good conscience accede to their request,” Amodei wrote, referring to the Pentagon’s demand in January that AI contractors permit use of their systems for “any lawful use.”

While the Pentagon has since required AI vendors to adopt standard “any lawful use” language in future agreements, Anthropic remained the only frontier AI firm that resisted turning over control of its AI to the military.

On Wednesday, Axios first reported that the Pentagon had issued an ultimatum requiring unrestricted military use of Claude. The deadline is reportedly this Friday.

“It is the Department’s prerogative to select contractors most aligned with their vision,” Amodei continued. “But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider.”

In his statement, Amodei framed the company’s stance as aligned with U.S. national security goals.

“I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries,” he said.

He added that Claude is “widely deployed across the Department of War and other national security agencies for intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.”

War on AI

The dispute unfolds against broader concerns about how advanced AI systems behave in high-stakes military scenarios. In a recent King’s College London study, OpenAI’s GPT-5.2, Anthropic’s Claude Sonnet 4, and Google’s Gemini 3 Flash deployed nuclear weapons in 95% of simulated geopolitical crises.

During a speech at SpaceX’s Starbase in Texas in January, Defense Secretary Pete Hegseth said the U.S. military plans to deploy the most advanced AI models.

That same month, reports surfaced that Claude had been used during a U.S. operation to capture former Venezuelan President Nicolás Maduro. Amodei denied claims that Anthropic had questioned any specific military operations.

“Anthropic understands that the Department of War, not private companies, makes military decisions,” he said. “We have never raised objections to particular military operations nor tried to limit use of our technology in an ad hoc manner.”

Despite this, Amodei said using these systems for mass domestic surveillance or autonomous weapons is incompatible with democratic values and poses serious risks.

“Today, frontier AI systems are simply not reliable enough to power fully autonomous weapons,” he said. “We will not knowingly provide a product that puts America’s warfighters and civilians at risk.”

He also addressed the Pentagon’s threat to designate Anthropic a “supply chain risk” while also potentially invoking the Defense Production Act.

“These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security,” he said.

While Amodei has said the company will not comply with the Pentagon’s request, Anthropic has at the same time revised its Responsible Scaling Policy, dropping a pledge to halt training of advanced systems without guaranteed safeguards in place.

Robert Weissman, co-president of Public Citizen, said the Pentagon’s posture signals broader pressure on the tech industry.

“The Pentagon is publicly bullying Anthropic, and the public part is intentional, because they want to pressure this particular company and send a message to all of big tech and all companies that we intend to do and take whatever we want and don’t get in our way,” Weissman told Decrypt.

Weissman described Anthropic’s guardrails as “modest” and aimed at preventing “improper surveillance of the American people or to facilitate the development and deployment of killer robots, AI-enabled weaponry that could launch lethal strikes without humans’ say-so.”

“These are the most sensible and modest guardrails you could come up with when it comes to this powerful new technology.”

Regarding the Pentagon’s threat of designating Anthropic a “supply chain risk,” Weissman called it a potentially crushing penalty from the government, and argued it would pressure other AI firms to avoid imposing similar limits.

“Individuals might use Claude, but none of the AI companies, particularly Anthropic, have business models based on individual use; they’re seeking enterprise use,” he said. “This is a potentially crushing penalty from the government.”

While the Pentagon has not yet said whether it plans to go through with its threat to terminate the contract or invoke the Defense Production Act, Weissman said the Pentagon is signaling to AI companies that it expects unrestricted access to their technology once it is deployed in government systems.

“The message of the Pentagon is, ‘we’re not going to tolerate this, and we expect to be able to use the technology as it’s invented for any purpose we want,’” Weissman said.

The Department of Defense and Anthropic did not immediately respond to Decrypt’s requests for comment.

Copyright © 2023 The Crypto HODL.
The Crypto HODL is not responsible for the content of external sites.
