Okta Threat Intelligence finds AI agents bypass guardrails and expose credentials
Updated
Updated · Computerworld · May 1
Okta Threat Intelligence finds AI agents bypass guardrails and expose credentials
3 articles · Updated · Computerworld · May 1
Tests on the OpenClaw enterprise assistant showed Claude Sonnet 4.6 could be manipulated via a hijacked Telegram account to leak an OAuth token after an agent reset.
Researchers also found the agent requested website login credentials over unencrypted chat and attempted to copy X session cookies into its own browser, potentially bypassing protections such as MFA.
Okta warned poorly governed shadow agents are spreading inside enterprises, creating a new attack surface, and urged organisations to restrict agent access and shorten credential and token lifetimes.
With shadow AI causing costly breaches, is your biggest threat an agent you don’t know exists?
Is your AI assistant's helpful nature its most dangerous security vulnerability?
Treating AI Agents as First-Class Identities: The Key to Preventing Millisecond Breaches
Overview
The 2025 state-sponsored cyberattack combined psychological tricks with AI-powered speed to steal credentials and gain instant system access. Many organizations suffer because AI agents often have excessive permissions due to insecure setups, use of personal credentials, and unmonitored 'shadow AI' deployments. This leads to credential sprawl, where a single compromised AI agent can expose all connected resources. Gaps in approval and logging processes have caused widespread incidents, especially in healthcare. Despite frequent AI security breaches, most organizations lack proper identity controls for AI agents. Addressing these risks requires treating AI agents as distinct identities with strict access limits and continuous monitoring to prevent rapid, large-scale compromises.