India Cuts Social Media Takedown Window to 3 Hours Amid AI Identity Theft Surge
2 articles · Updated · Financial Times · May 15
February amendments to India’s IT Act now require platforms to remove violating content within 3 hours of official notice, and within 2 hours for intimate imagery or non-consensual deepfakes.
The tighter deadlines respond to an AI-driven rise in identity theft that has pushed Bollywood stars and cricketers into court, with more than 20 personality-rights cases filed in Delhi since 2022.
Aishwarya Rai Bachchan’s 2025 case against more than 10 defendants highlighted the trend, targeting deepfakes, unauthorized merchandise and a sexually explicit chatbot using her identity; courts have also granted dynamic injunctions for new offending sites.
Meta says it often cannot validate takedown requests within 3 hours, while free-speech advocates warn that ex parte celebrity injunctions and faster removals may chill satire and burden small users.
The new rules sharply reduce the earlier 36-hour window and extend protection beyond celebrities, showing India’s courts and regulators are building a personality-rights regime without a dedicated standalone law.
As India mandates 3-hour deepfake takedowns, is it sacrificing free speech and satire for celebrity protection?
To combat AI identity theft, is India's legal patchwork better than the new digital ownership rights emerging globally?
India’s IT Rules 2026: How New Deepfake and AI Fraud Laws Are Reshaping Digital Platforms and Free Speech
Overview
India's new IT Rules 2026, effective February 20, 2026, mark a major shift in digital regulation aimed at the growing dangers of deepfakes and manipulated media. Prompted by rising concerns over reputational harm, fraud, and public disorder, the rules require clear labeling of AI-generated visuals. An earlier draft called for large watermarks, but industry pushback led to a more flexible approach. The government's urgency is evident in the tight 10-day deadline for platforms to implement systems that trace the origin of manipulated content, aiming to boost accountability and curb the rapid spread of harmful AI-driven media.