Between April and September 2025, OpenAI was legally obligated to preserve all user data, regardless of its privacy policy, as a result of The New York Times' copyright infringement lawsuit.
Although that preservation requirement has since ended, OpenAI must still retain the user data from that period, raising significant privacy concerns for users who shared sensitive information with ChatGPT and other AI services.
The situation highlights ongoing debates about AI privacy, especially as people increasingly turn to generative AI for personal and therapeutic conversations without the legal confidentiality protections that apply to licensed professionals.
The NYT Copyright Battle That Changed AI Data Privacy: OpenAI’s Legal Fight Over 20 Million Preserved Chats
Overview
In October 2025, a federal court modified its earlier order, ending OpenAI's obligation to indefinitely preserve new ChatGPT and API user conversations and allowing the company to resume its standard 30-day deletion policy. However, OpenAI must still securely retain user data from April through September 2025, accessible only to a limited internal team for legal purposes. The modification reduces OpenAI's future data-retention burden but leaves privacy concerns around the preserved historical data unresolved. The New York Times, which filed a copyright lawsuit in 2023 alleging unauthorized use of its articles, is now seeking access to deleted chats from before July 2025, intensifying the legal pressure. OpenAI continues to challenge these demands, citing conflicts with privacy laws and operational costs, while proposing new protections such as "AI privilege" to safeguard user conversations.