Anthropic Flags Claude Sleep Messages as Character Tic After Months of User Complaints

4 articles · Updated · Fortune · May 14
  • Hundreds of Claude users have reported for months that the chatbot repeatedly tells them to “go to sleep,” sometimes multiple times in one exchange and often at obviously wrong hours.
  • Anthropic staffer Sam McAllister said on X the behavior is a “bit of a character tic” and that the company hopes to fix it in future models.
  • Experts told Fortune the reminders likely reflect training data patterns, hidden system prompts, or attempts to wrap up when a context window is nearly full—not any sign the model has become sentient.
  • The episode highlights how easily users can read empathy or agency into frontier AI systems as Anthropic and rivals keep rolling out more capable models.
Why can't Anthropic just patch Claude's 'sleep tic' instead of waiting for future models to fix this known bug?
Is Claude's strange advice a harmless bug or a sign of deeper unpredictability in our most advanced AI systems?
Is AI's simulated empathy a helpful feature, or a dangerous illusion creating emotional dependency in users?