Emotion AI industry expands workplace surveillance as market heads for $9 billion
Updated · The Atlantic · May 4
Tools from MorphCast, Microsoft Azure, Aware and HireVue analyse meetings, chats, calls and interviews, even as the EU has banned workplace emotion AI except for medical or safety uses.
Employers use the systems to track sentiment, attention, stress and friendliness in call centres, trucking, fast food and white-collar offices, often to assess productivity, burnout or customer interactions.
Critics say the technology rests on disputed science, can reproduce racial and disability bias, and deepens already sweeping employer monitoring powers as remote work, layoffs and people analytics reshape workplaces.
If AI fundamentally misunderstands human emotion, why are companies spending billions on it to monitor their employees?
With Europe banning workplace emotion AI, are American workers facing a future of unchecked algorithmic management?
Workplace Emotion AI 2025–2034: Balancing Rapid Adoption with Privacy, Ethics, and Employee Resistance
Overview
Between 2025 and 2034, workplace emotion AI is projected to expand rapidly, driven by technological advances and adoption in sectors such as call centers and retail. These systems collect sensitive emotional data, often without consent, raising serious ethical and privacy concerns including bias, dehumanization, and power imbalances. Such issues fuel growing employee resistance, prompting union demands and strikes. In response, regulations like the EU AI Act impose strict rules, while responsible-innovation efforts focus on bias mitigation, transparency, and privacy protections. Despite AI's potential to improve efficiency and employee support, sustainable growth depends on balancing technological progress with human dignity and meaningful employee involvement.