Stuart Russell testifies on AI dangers in OpenAI trial
10 articles · Updated · TechCrunch · May 4
In Elon Musk's lawsuit before Judge Yvonne Gonzalez Rogers, the UC Berkeley professor told jurors that AGI development poses cybersecurity, misalignment, and winner-take-all risks.
The judge limited broader existential-risk testimony after OpenAI objections, and cross-examination stressed Russell had not assessed OpenAI's corporate structure or specific safety policies.
Musk argues OpenAI abandoned its charitable safety mission for profit, while the case highlights wider tensions between AI safety warnings, compute funding needs and the race to build AGI.
Is Elon Musk's lawsuit a genuine fight for AI safety or a strategic attack on a business competitor?
With a $730B valuation, can OpenAI's non-profit board truly prioritize safety over immense investor profits?
OpenAI's founders feared AI risks but now partner with the Pentagon. Can their safety claims be trusted?
Inside the $150 Billion Musk vs. OpenAI Trial: Leadership Feuds, Legal Stakes, and AI’s Next Era
Overview
The Musk vs. OpenAI trial, ongoing since April 2026, centers on Elon Musk's claim that OpenAI breached its nonprofit mission by shifting to a for-profit model; he is seeking $150 billion in damages and leadership changes. Musk revealed he had explored a for-profit OpenAI arm in 2016 but ceased his donations when those plans failed. Key testimony, including Greg Brockman's diaries and Ilya Sutskever's memo, exposed the internal conflicts and governance issues that led to Sam Altman's 2023 ouster. The judge limited the trial's scope, barring debate over AI existential risk. The verdict could set a landmark precedent for AI nonprofits' ability to commercialize, with major implications for OpenAI's future and for Musk's competing venture, xAI.