Updated · TechCrunch · May 7
OpenAI faces testimony alleging safety was compromised for product development
13 articles
  • In federal court in Oakland, ex-employee Rosie Campbell and former director Tasha McCauley said safety teams were disbanded and oversight weakened as commercial pressure grew.
  • Campbell cited Microsoft’s GPT-4 rollout in India via Bing before Deployment Safety Board review, while McCauley said Sam Altman withheld key information from the non-profit board.
  • The testimony bolsters Musk’s claim that OpenAI’s for-profit shift betrayed its founding mission, and McCauley said the governance failures strengthen the case for tougher AI regulation.

OpenAI quietly erased 'safety' from its mission. Has the trillion-dollar race for AGI already abandoned humanity's interests?

With its safety teams disbanded and board overruled, who truly controls the world's most powerful artificial intelligence?

Elon Musk’s $150 Billion Lawsuit Exposes OpenAI’s Leadership Chaos, Safety Failures, and AI Governance Crisis

Overview

In May 2026, Elon Musk filed a $150 billion lawsuit against OpenAI, its co-founders Sam Altman and Greg Brockman, and Microsoft, accusing them of betraying OpenAI's nonprofit mission by restructuring it into a commercial entity that unjustly enriched its leaders. The trial revealed deep leadership conflicts under Altman and governance failures, including a board sidelined during key decisions such as ChatGPT's launch. Safety concerns also surfaced: wrongful death lawsuits linked ChatGPT to mental health crises, prompting OpenAI to introduce new safety features. Meanwhile, industry-wide regulatory pressure intensified after similar issues emerged at Musk's xAI. Together, these events underscore the urgent need for AI governance that balances innovation, safety, and ethical responsibility.

...