OpenAI explains ChatGPT data use and privacy controls for model training

15 articles · Updated · OpenAI · May 6
  • OpenAI said training draws on public internet content, data partnerships, and data from users, contractors and researchers, while an internal Privacy Filter masks personal information at multiple stages.
  • Users can stop new chats being used for training by disabling “Improve the model for everyone”, while Temporary Chats are excluded, kept for 30 days for safety, then deleted.
  • OpenAI said Memory remains optional and can be reviewed, edited, deleted or turned off, as the company responds to growing use of ChatGPT for sensitive personal tasks and works to strengthen its safety safeguards.
After a damning privacy probe, is OpenAI's new 'Privacy Filter' enough to protect your data?
If your private conversations help train ChatGPT, should OpenAI be paying you for them?
Your ChatGPT conversations are not entirely private. Who can read them and why?

Balancing Privacy and Scale: ChatGPT’s 900 Million Users and the New Privacy Filter in Early 2026

Overview

By early 2026, ChatGPT had reached over 900 million weekly active users, up from 100 million shortly after launch. To address rising privacy concerns and industry-wide challenges around sensitive data exposure, OpenAI introduced the Privacy Filter on April 22, 2026. The tool processes data locally on user devices so that raw personal information does not leave the device, though OpenAI cautions against relying on it alone in high-risk settings, since some data may be missed. Alongside the filter, ChatGPT offers user controls such as Temporary Chat and opt-outs from model training, while the company faces regulatory pressure from the EU AI Act and California S.B. 53. Ethical debates have also emerged around advertising and military partnerships, highlighting ongoing tensions between innovation, privacy, and trust.
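To illustrate the general idea of on-device masking described above, here is a minimal, hypothetical sketch of a local PII-masking pass. This is not OpenAI's actual Privacy Filter (whose implementation is not public); the pattern names and the `mask_pii` function are invented for illustration, and real systems use far more sophisticated detection than these simple regexes — which is consistent with OpenAI's caution that such filters can miss data.

```python
import re

# Hypothetical illustration only: a crude local masking pass, NOT OpenAI's
# actual Privacy Filter. Regex detection like this is known to miss PII.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
}

def mask_pii(text: str) -> str:
    """Replace detected spans with placeholder tokens before text leaves the device."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_pii("Reach me at jane.doe@example.com or +1 (555) 123-4567."))
```

The key design point the Overview describes is *where* this runs: masking happens on the user's device, so only the placeholder-substituted text would ever be transmitted.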

...