The curbs exempt medical and safety uses, but firms still deploy tools from MorphCast, Microsoft Azure and Aware to analyse meetings, messages, calls and interviews.
The report says employers use the technology to track productivity, sentiment and fatigue, especially in call centres, trucking and service work, with white-collar monitoring now expanding through Zoom, Slack and HR software.
Critics say emotion AI rests on disputed science and can reproduce bias and misread behaviour, yet the global market is still forecast to reach $9bn by 2030 despite the EU ban.
EU AI Act 2026: Enforcement and €35 Million Penalties for Workplace Emotion Recognition AI
Overview
In 2026, the EU intensified enforcement of its AI Act ban on using biometric data to infer emotions in workplaces, following the European Commission's firm stance in late 2025. National authorities such as Ireland's WRC and France's CNIL are leading investigations, with major cases expected soon. The ban strictly prohibits intrusive emotion recognition AI but allows text-based sentiment analysis, as well as emotion inference for customer-facing or safety purposes. Employers face significant compliance duties, including technical measures, human oversight and due diligence, and violations risk hefty fines. Despite strong support from privacy advocates, enforcement challenges persist owing to limited regulator capacity and potential vendor circumvention. The EU's approach prioritises human dignity and sets a global standard, contrasting with lighter-touch regulation in the US.