Author encounters real-time deepfake of self on Microsoft Teams call
Updated · 404 Media · May 7
Using a gaming laptop and advanced software popular with scammers, another caller morphed his own face into the author's during the live video conversation.
The fake preserved distinctive details, including the author's five o'clock shadow, grin, and under-eye bags, and it remained convincing even through facial gestures such as pinching a cheek or covering the nose.
The account highlights how consumer-accessible tools can now generate highly realistic live impersonations on workplace communication platforms, raising fraud and identity-verification concerns.
With AI fakes outpacing detection, is our digital identity becoming impossible to secure?
Should AI creators be held liable when their deepfake tools enable billion-dollar financial crimes?
From $25 Million Heists to Global Epidemic: The Rise of Real-Time Deepfake Attacks in Business
Overview
In early 2026, the author encountered a real-time deepfake during a Microsoft Teams call, echoing a landmark 2024 incident in which criminals used deepfakes to impersonate Arup executives and fraudulently transfer $25 million. These events show that video conferencing is no longer a reliable way to verify identity, especially as deepfake technology has become more advanced and accessible, enabling frequent and costly fraud worldwide. In response, organizations have adopted stronger security measures such as out-of-band verification and employee training, while governments have introduced stricter regulations to combat the growing threat. The rapid development of detection technologies and updated corporate policies reflects the urgent need to adapt to an era in which seeing and hearing someone is no longer enough to trust them.
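The out-of-band verification mentioned above can be sketched as a simple challenge flow: send a one-time code over a channel separate from the video call, then ask the person on the call to read it back. This is a minimal illustrative sketch, not any specific organization's procedure; the function names, the delivery stub, and the contact string are all assumptions.

```python
import hmac
import secrets


def generate_challenge() -> str:
    """Create a short one-time code to deliver over a second, trusted channel."""
    return secrets.token_hex(4)


def send_out_of_band(code: str, contact: str) -> None:
    """Illustrative stub: in practice, deliver the code via SMS, a phone call,
    or internal chat -- any channel separate from the video call itself."""
    print(f"Sent one-time code to {contact} via secondary channel")


def verify_response(expected: str, received: str) -> bool:
    """Compare the code read back on the call, in constant time."""
    return hmac.compare_digest(expected, received)


# Usage: challenge the caller to read back the code sent on the other channel.
code = generate_challenge()
send_out_of_band(code, "+1-555-0100")  # placeholder contact
print(verify_response(code, code))      # correct read-back passes
print(verify_response(code, "wrong"))   # an impersonator without the code fails
```

The point of the second channel is that a live video deepfake only compromises the call itself; an attacker impersonating a colleague on camera still cannot produce a code delivered to that colleague's phone.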