TikTok Cuts 400 London Moderators, Facing Union-Linked Dismissal Claims
Updated · HR Magazine · May 12
Around 400 London moderators lost their jobs in late 2025 after TikTok launched a redundancy process about a week before a union-recognition vote.
A small group of former workers has now filed employment tribunal claims alleging unlawful detriment and automatic unfair dismissal tied to trade union activity.
TikTok denies the claims, saying the cuts were part of a broader global trust-and-safety reorganisation aimed at efficiency and heavier use of automated moderation.
UK tribunals are likely to scrutinise the timing, internal communications and whether the company can show the layoffs were driven solely by genuine business need.
Legal exposure is rising for large restructurings: since April 6, 2026, the maximum protective award for collective-consultation failures has doubled to 180 days' pay.
Can TikTok's AI efficiency defense overcome new UK laws designed to stop union busting?
With UK tribunals backlogged for years, can fired workers ever truly achieve justice from tech giants?
As AI replaces human moderators, are platforms like TikTok actually becoming less safe for users?
TikTok’s 400 Moderator Layoffs: AI Expansion, Union Push, and Industry-Wide Implications
Overview
In late 2025, TikTok laid off more than 400 content moderators in London, officially citing a shift toward artificial intelligence to reduce human exposure to harmful content. Critics such as John Chadfield, however, argued that the move was less about safety than about cutting costs, suggesting TikTok wanted to avoid the overhead of employing a large trust-and-safety team as the platform grew. Instead, TikTok aimed to keep only a small group of specialists directly employed while offshoring or outsourcing the rest. The controversy spurred unionisation efforts and raised concerns about worker rights and the effectiveness of AI in moderation.