AI Security Fears Rise as Anthropic Holds Back Powerful Vulnerability-Finding Model
Updated · CoinDesk · Apr 10
AI company Anthropic has withheld its latest model, Mythos Preview, citing its unprecedented ability to discover software vulnerabilities.
The model is being shared only with select partners through Project Glasswing, as experts warn it could be misused by hackers and nation-states.
Industry leaders and governments are preparing for a surge in AI-powered cyberattacks, which could threaten critical infrastructure and accelerate the cybersecurity arms race.
With AI now finding flaws that eluded humans for decades, how can global cyber defenses possibly keep pace?
Can our financial systems withstand an AI that uncovers and chains together dozens of unknown vulnerabilities at once?
Does withholding advanced AI from the public truly enhance security, or does it stifle open-source defense efforts?
Is sharing a super-hacking AI with 50 partners a responsible strategy or a catastrophe in waiting?
When AI can autonomously hack websites, what happens to the concept of human accountability in cybercrime?
Anthropic’s Supply Chain Risk Designation: A Historic Clash Over AI Ethics and National Security
Overview
In early 2026, negotiations between Anthropic and the U.S. Department of Defense collapsed over Anthropic's refusal to allow its Claude AI model to be used for fully autonomous weapons and mass surveillance. President Trump then ordered federal agencies to stop using Anthropic's technology, and Defense Secretary Pete Hegseth designated the company a "supply chain risk," effectively barring it from military contracts. Anthropic responded with a lawsuit challenging the designation, which has so far produced mixed court rulings. Meanwhile, the Pentagon shifted its AI development to rivals such as OpenAI, raising concerns about operational risk. The conflict has deepened ethical divides in the AI industry and underscored the difficulty of balancing AI safety commitments against national security demands.