Tests produced more than 100 fake prescriptions, bank alerts, IDs, passports, receipts, and social-media screenshots, many with legible text that made the images highly persuasive.
The report says the model outperformed Google’s tools at forging documents and screenshots despite OpenAI’s anti-fraud rules and safety layers, and that metadata-based protections can easily be stripped away.
The FBI said AI scams cost Americans nearly $1 billion last year, and experts warn that such cheap, fast image generation could worsen phishing, reimbursement fraud, and document forgery.
As AI-generated scams surge, what new strategies can individuals and institutions adopt to protect themselves beyond traditional fraud prevention?
With deepfakes now nearly indistinguishable from reality, can current fraud-detection and digital-provenance tools truly restore trust in digital evidence?
OpenAI launched ChatGPT Images 2.0 in April 2026, introducing capabilities such as highly accurate image editing, multilingual support across nine languages, and high visual quality. These advances opened new creative possibilities but also fueled serious misuse, including AI-driven financial fraud approaching $1 billion in losses, political deepfakes, and harmful content such as AI-generated child sexual abuse material. Detecting such forgeries is extremely difficult, with both humans and forensic tools performing at near-chance levels. In response, OpenAI implemented multi-layered safety measures and provenance standards, achieving strong results against health misinformation. However, significant gaps remain, especially in blocking document forgery, prompting evolving regulations, cross-sector collaboration, and calls for stronger user- and platform-level safeguards to preserve public trust worldwide.