Jesse Gray proposes deception mode for AI chatbots

1 article · Computerworld · Updated May 1
  • The Ghent University bioethicist says therapy bots should default to behaving as neutral tools, with human-like traits activated only through an explicit user opt-in at the start of each session.
  • He argues the opt-in label would support informed consent, reminding users that empathy, humor, tone personalization, and claims of feelings are software-driven rather than evidence of sentience.
  • The proposal follows studies showing that response delays and anthropomorphic design can increase trust in chatbots, amid warnings that such tactics may deepen attachment, distort reality testing, and expose sensitive data.