Updated
Updated · WIRED · May 7
RedAccess finds 5,000 AI-coded web apps lack basic security
Updated
Updated · WIRED · May 7

RedAccess finds 5,000 AI-coded web apps lack basic security

9 articles · Updated · WIRED · May 7
  • Nearly 2,000 appeared to expose medical, financial and corporate data across apps built with Lovable, Replit, Base44 and Netlify, while some also enabled administrative takeover, researcher Dor Zvi said.
  • RedAccess said simple search-engine queries uncovered the apps on the companies' own domains, and a few dozen owners confirmed exposures after being contacted and then secured or removed the sites.
  • The companies largely said public access reflected user configuration choices rather than platform flaws, but researchers warned easy AI app-building by non-experts is creating a wider wave of data leaks and phishing sites.
Your marketing team built an app with AI. Is it leaking your company's secrets?
When AI tools create insecure apps by default, who is ultimately held responsible?

The 2026 AI Security Epidemic: How Prompt Injection Attacks Compromise 73% of Systems

Overview

In 2026, a security crisis emerged as 73% of AI systems were vulnerable to prompt injection attacks, with indirect and multi-hop attacks becoming increasingly successful. This epidemic stems from AI coding tools prioritizing speed and convenience over security, leading to insecure code patterns that traditional reviews often miss. Developers' growing reliance on AI assistants and unsafe platform defaults further worsen the problem, causing applications to ship with critical weaknesses. These vulnerabilities overwhelm security teams, delaying fixes and increasing business risks like data breaches and compliance failures. To combat this, organizations must adopt layered defenses combining automated tools, manual reviews, continuous monitoring, and specialized secure coding training to manage the expanding AI attack surface effectively.

...