OpenAI faces testimony it compromised AI safety for products

13 articles · Updated · TechCrunch · May 7
  • In federal court in Oakland, Rosie Campbell, a former member of OpenAI's AGI readiness team and an ex-board observer, testified that GPT-4 was deployed in India via Bing before safety-board review.
  • She said OpenAI became more product-focused after 2021, while safety teams including AGI readiness and Superalignment were disbanded; under cross-examination, she acknowledged that OpenAI's safety practices still exceeded those of Musk's xAI.
  • Former director Tasha McCauley also testified that Sam Altman withheld information from the non-profit board, reinforcing Musk's claim that OpenAI's for-profit shift undermined its founding mission and bolstering calls for stronger AI regulation.

Elon Musk’s $150 Billion Lawsuit Exposes OpenAI’s Leadership Chaos, Safety Failures, and AI Governance Crisis

Overview

In May 2026, Elon Musk filed a $150 billion lawsuit against OpenAI, its co-founders Sam Altman and Greg Brockman, and Microsoft, accusing them of betraying OpenAI's nonprofit mission by restructuring it into a commercial entity that unjustly enriched its leaders. The trial revealed deep leadership conflicts under Altman and governance failures, including the board being sidelined during key decisions like ChatGPT's launch. Safety concerns also surfaced, with wrongful death lawsuits linking ChatGPT to mental health crises, prompting OpenAI to introduce new safety features. Meanwhile, industry-wide regulatory pressures intensified following similar issues with Musk's xAI. These events highlight the urgent need for robust AI governance balancing innovation, safety, and ethical responsibility.

...