NSA Uses Anthropic’s Mythos AI Amid Cybersecurity and Safety Concerns
Updated · Engadget · Apr 19
The NSA is reportedly using Anthropic's new Mythos AI model for cybersecurity, despite a recent government ban on the company's technology.
Mythos is described as highly effective at identifying software vulnerabilities, prompting concerns about its potential risks if misused.
Anthropic and the US government remain locked in a legal dispute, as officials debate how to balance innovation, security, and the risks posed by advanced AI tools.
Will powerful AI make open-source software too dangerous to maintain?
Can AI-powered defenses truly keep pace with AI-driven cyberattacks?
How can nations achieve 'AI sovereignty' when its core technology is so concentrated?
Is regulating AI 'emotions' the key to preventing rogue AI behavior?
What global rules can prevent one AI from destabilizing world finance?
How will small businesses survive in an age of automated cyber threats?
Anthropic’s Mythos AI Revolutionizes Cybersecurity Amid Pentagon-NSA Divide and Legal Battle
Overview
In April 2026, the Pentagon designated Anthropic a supply chain risk after the company refused to grant unrestricted access to its powerful Mythos AI model, which can identify critical cybersecurity vulnerabilities. The NSA nevertheless uses Mythos under strict constraints for defensive purposes, highlighting a split in the government's approach. Anthropic, meanwhile, launched Project Glasswing, a partnership with major tech firms to secure critical infrastructure while limiting Mythos's release to trusted organizations. Ongoing White House talks aim to resolve the standoff by permitting controlled defensive use by civilian agencies while restricting offensive military applications. The situation underscores the urgent need for balanced AI governance amid growing global competition and proliferation risks.