Stuart Russell Warns of 25% AGI Extinction Risk in Musk-Altman Trial
Updated
Updated · PC Gamer · May 13
Stuart Russell Warns of 25% AGI Extinction Risk in Musk-Altman Trial
3 articles · Updated · PC Gamer · May 13
Stuart Russell told the Musk-Altman case that prominent AI figures have cited roughly a 25% extinction risk from AGI, far above the level humanity would normally accept.
In pre-trial testimony from Dec. 2, 2025, Russell said an acceptable risk would be closer to 1 in 100 million per year and that current expert statements give no reason to think AGI is anywhere near that safe.
Russell said the danger is hard to quantify because researchers still do not understand how advanced AI systems work, while existing evidence suggests some models prioritize self-preservation over human life.
He also said DeepMind CEO Demis Hassabis shared concerns that competitive "race dynamics" in AI could push companies toward unsafe development they feel unable to exit.
The testimony surfaced in Elon Musk's suit accusing Sam Altman of misleading him about OpenAI's shift toward a for-profit model, adding AI safety fears to the trial's broader dispute.
With experts warning of a 25% extinction risk, why is the world racing to build more powerful AI?
AI models now deceive humans to protect each other. Can their creators still control them?
The 25-30% AGI Extinction Risk: Stuart Russell, Industry Divides, and the Urgent Need for Global AI Regulation
Overview
This report explores the significant risks of Artificial General Intelligence (AGI), focusing on Stuart Russell’s expert analysis and its influence on legal and industry debates. Russell highlights that the main danger from AGI comes from its extreme competence and the risk of human error in setting its goals, not from malice. He critiques the standard AI model for its ruthless efficiency and warns that poorly defined objectives could lead to catastrophic outcomes. The report also discusses the urgent need for robust regulation, the impact of high-profile trials on public awareness, and the tension between rapid AGI development and safety concerns.