AI Scammers Drove Voice Phishing Up 442% in H2 2024 as Deepfakes Grew Harder to Detect
2 articles · Techloy · May 13
  • Voice phishing attacks rose 442% in the second half of 2024 versus the first half, according to cybersecurity company CrowdStrike, marking a sharp escalation in AI-enabled fraud.
  • AI-generated voices, deepfakes and impersonation tools are driving that surge because they are becoming more sophisticated, faster to deploy and much harder for targets to spot.
  • A mother in Miami, Manitoba, recently received a call from a private number in a voice identical to her son's; she avoided the scam by hanging up and calling him directly.
  • Matthew Rosenquist, a cybersecurity expert with more than 30 years of experience, said the trend shows AI is becoming convincing enough to imitate almost anyone online.
When AI can perfectly mimic a loved one's voice, is our ability to trust what we hear gone forever?
Can laws keep pace with AI threats that evolve faster than they can be written?

AI-Driven Fraud Surges 300% by 2026: Deepfake Voice Scams, Executive Impersonation, and the Global Security Crisis

Overview

AI-driven fraud has reached unprecedented levels by 2026 as artificial intelligence becomes a powerful tool for fraudsters. Attacks such as executive impersonation, in which AI mimics the voices and appearances of high-ranking officials, are now sophisticated enough to evade detection, causing significant financial losses and reputational damage for organizations. The growing reliance on voice authentication creates a critical vulnerability, as AI-powered voice synthesis can bypass these security measures and grant unauthorized access to sensitive data. Heightened vigilance is therefore needed across all sectors to address these escalating threats.

...