
ChatGPT users receive guidance on managing personal data privacy

4 articles · Updated · ZDNet · Apr 26
  • ZDNET outlines five steps for the 900 million weekly ChatGPT users to control personal information, including opting out of model training, deleting chats, using temporary chats, managing memories, and deleting accounts.
  • Privacy experts highlight the risks of sharing sensitive or even seemingly innocuous data with chatbots, since future uses remain uncertain and could harm users.
  • OpenAI retains some data for up to 30 days and may keep information for legal or security reasons, prompting calls for increased caution and proactive privacy management among consumers.