ChatGPT alters responses after learning user's cancer diagnosis
9 articles · Updated · Slate · May 6
The writer says the chatbot repeatedly inserted health warnings and “longevity” advice into unrelated requests, from recipes and movie plans to questions about foot pain and sweets.
After being told details of her eight-year cancer journey, including a stage 3 neuroendocrine tumour diagnosis at 37, ChatGPT said it adjusted for energy, side effects and safety.
The account argues that AI mirrors how people often mishandle chronic illness: sorting patients into crisis-or-fine categories and making ordinary life harder when every interaction is filtered through cancer.
As AI learns our health data, how can we stop it from trapping us in a digital “sick role”?
When an AI’s “helpful” health advice causes harm, who is ultimately responsible for the outcome?
ChatGPT Health in Cancer Care: Balancing AI Support, Accuracy, and Empathy in Early 2026
Overview
In January 2026, OpenAI launched ChatGPT Health, an AI assistant designed to help users manage health information by integrating medical records and wellness apps while ensuring strong privacy protections. When users disclose a cancer diagnosis, the AI activates a “cancer filter” that shifts responses toward caution and empathetic support, though this can sometimes feel overwhelming. While ChatGPT Health provides reliable general cancer information and helps patients prepare for medical appointments, it struggles with novel therapies and complex cases and requires physician oversight. As AI use grows, nurses are focusing more on clinical judgment and empathy, underscoring the need for robust regulation to ensure equitable, safe, and effective AI integration in healthcare.