Barry Diller says trust is irrelevant as AGI nears
Updated · TechCrunch · May 6
Speaking at The Wall Street Journal's Future of Everything conference, Diller described OpenAI chief Sam Altman as sincere and a decent person with good values.
He argued the bigger risk is AI's unknown consequences, saying even its creators do not fully understand what could happen as systems grow more powerful.
Diller said AGI is not here yet but is approaching quickly, and warned strong guardrails are essential because once such systems are unleashed, there may be no going back.
The Urgent Need for Robust AGI Governance: Insights from Barry Diller’s 2026 Conference Address
Overview
On May 6, 2026, Barry Diller argued that trusting individual AGI leaders is beside the point: AGI's power and unpredictability create capability, technical, and societal risks regardless of who builds it. Geopolitical tensions amplify those risks, exposing gaps in current regulation and driving new governance models such as the EU AI Act. Widely divergent AGI timeline predictions add further uncertainty, raising the risk of disruption in industries such as media and travel. The governance debate pits Diller's call for external oversight against industry self-regulation, and federal against state control. To ensure safe AGI, organizations must embed ethics into development, adopt agile governance, foster international cooperation, and engage the public, while confronting unresolved challenges such as transparency and military use.