Ruben Circelli Lists 10 Chatbot Habits to Quit for Privacy and Security

3 articles · Updated · PCMag · Apr 22
  • Circelli highlights that AI companies like OpenAI and Anthropic may share user data with authorities and monitor all chatbot interactions, warning users against discussing sensitive or illegal topics with chatbots.
  • He cautions that relying on chatbots for tasks such as job searches, email replies, or legal, medical, and financial advice can lead to privacy breaches, inaccurate results, and negative personal or professional consequences.
  • Circelli urges users to treat chatbots as limited tools, not replacements for human expertise or judgment, emphasizing the growing risks as AI becomes more integrated into daily life.
  • Since AI companies train on your chats, is any conversation with an AI truly private?
  • With chatbots giving unsafe medical advice, how can patients safely use AI for health questions?
  • Does using AI for work emails secretly damage your professional reputation?
  • Are we overlooking AI's massive benefits by focusing too much on its potential risks?
  • Is our reliance on AI assistants weakening our own critical thinking skills?
  • If your AI assistant makes a costly mistake, who is legally responsible?