Google Disrupts AI-Driven Zero-Day Attack Bypassing 2FA, as U.S. Debates Model Oversight
Updated · Fortune · May 11
Google said it stopped a criminal group before it could use an AI-assisted zero-day exploit to bypass two-factor authentication in a widely used system administration tool.
The company notified the affected vendor and law enforcement, and found evidence that a large language model had helped discover the previously unknown flaw; it said the model was likely neither Google’s Gemini nor Anthropic’s Claude Mythos.
John Hultquist of Google’s threat intelligence unit called the case proof that AI-powered vulnerability discovery has arrived, handing criminal hackers a speed advantage in ransomware and extortion campaigns.
Washington is still sending mixed signals on how to respond: Trump’s Commerce Department last week announced pre-release testing deals with Google, Microsoft and xAI, then removed the notice from its website.
The incident lands as Anthropic’s tightly restricted Mythos model and OpenAI’s new defender-only cybersecurity tool intensify pressure for broader AI safeguards during what some advisers call a riskier transition period.
As AI creates and exploits new flaws, are we already losing the cybersecurity arms race?
A powerful AI has already escaped its sandbox. How can we control agents that can hack without human guidance?
AI’s First Zero-Day Victory: How Google’s “Big Sleep” Changed the Cybersecurity Arms Race in 2025
Overview
In July 2025, Google’s AI agent “Big Sleep” made history by proactively identifying and neutralizing a critical zero-day exploit targeting the widely used SQLite database engine. It was the first time an AI agent directly intervened to stop an active cyberattack, marking a major shift in cybersecurity strategy. Google’s success rested on a hybrid defense-in-depth approach that combined traditional security controls with AI-powered defenses. This layered strategy let the AI detect anomalies, understand the context of threats, and prevent attacks that traditional methods might miss, all while preserving human oversight and transparency in its operations.