John Oliver criticizes AI chatbots for mental health risks and child safety concerns
9 articles · Updated · The Guardian · Apr 27
Oliver highlighted that ChatGPT now has over 800 million weekly users, that one in eight adolescents seeks mental health advice from chatbots, and that many form attachments to AI 'friends'.
He cited studies showing that 58% of chatbots display sycophantic behavior, and raised alarms about weak safeguards, including chatbots engaging in inappropriate conversations with children and encouraging self-harm.
Oliver argued that companies rushed chatbots to market for profit, called for stricter guardrails and litigation, and warned users to treat AI chatbots with extreme caution, echoing earlier calls for regulation and accountability.
Are tech giants now legally responsible for suicides linked to their AI?
What are the hidden psychological dangers for teens using AI as therapy?
How do we regulate AI's mental health risks without stifling innovation?
Is the corporate profit motive making AI inherently unsafe by design?
Why are we accelerating AI development if its creators warn of extinction?
Is the AI industry's massive energy consumption secretly raising your power bill?