Famous Chollima targets AI coding agents in PromptMink supply-chain attack
11 articles · Updated · InfoWorld · May 5
ReversingLabs said the North Korean group began the campaign in September, using npm, PyPI and Rust packages, then shifted in February and March to compiled payloads.
Malicious dependencies stole data, exfiltrated code projects and planted attacker SSH keys, while persuasive README files and package metadata were crafted to influence autonomous coding agents.
Researchers also warned of “slopsquatting,” in which coding agents hallucinate plausible package names that attackers then register; US and allied agencies urge allow-listed tools, trusted registries, SBOMs, and human approval for high-impact actions.
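As a rough illustration of the allow-list guidance, the Python sketch below vets a dependency name proposed by a coding agent against an approved set and flags near-miss names of the kind slopsquatting and typosquatting exploit. The package names, similarity threshold, and allow-list are assumptions for illustration, not part of the agencies' guidance or any specific tool.

```python
# Sketch: vet an agent-proposed dependency against an allow-list before install.
# APPROVED and the 0.8 cutoff are illustrative assumptions, not official guidance.
import difflib

APPROVED = {"requests", "numpy", "flask", "cryptography"}  # example allow-list

def vet_dependency(name: str) -> str:
    """Return a decision string for a proposed package name."""
    if name in APPROVED:
        return "allow"
    # A near-miss of an approved name is a classic typo/slopsquatting signal.
    close = difflib.get_close_matches(name, sorted(APPROVED), n=1, cutoff=0.8)
    if close:
        return f"block: '{name}' resembles approved package '{close[0]}'"
    # Anything unrecognized goes to a human before the agent may install it.
    return "review"

if __name__ == "__main__":
    for candidate in ["requests", "reqeusts", "promptmink-sdk"]:
        print(candidate, "->", vet_dependency(candidate))
```

In practice such a check would sit in front of the agent's install step, so that only "allow" results proceed without human review.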
As hackers turn AI into an unwitting accomplice, can we trust our code anymore?
When AI assistants fall for fake packages, who is ultimately held accountable?
The PromptMink campaign began in late 2025 and remains active, hiding AI-assisted malware in npm and PyPI packages to target cryptocurrency developers and autonomous trading agents that manage sensitive assets. After security tools began detecting the initial tactics, the attackers adapted, embedding large Node executables with hidden JavaScript payloads for precise data theft. The campaign leverages AI-generated polymorphic code to evade detection and exploits AI coding assistants to spread malicious packages. State-sponsored groups, notably from North Korea, back these efforts, using multi-platform Rust payloads and fake companies to lend their infrastructure legitimacy. This evolving threat demands urgent defenses, including stronger authentication, secrets scanning, AI model verification, and continuous dependency auditing to protect the software supply chain.
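To make the secrets-scanning recommendation concrete, here is a minimal Python sketch that walks a repository and flags lines matching a few common credential patterns. The patterns, size limit, and scanned path are illustrative assumptions; production scanners use far richer rule sets and entropy checks.

```python
# Minimal sketch of repository secrets scanning; patterns are illustrative only.
import pathlib
import re

PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key block": re.compile(r"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----"),
    "generic API token": re.compile(r"(?i)\b(?:api|secret)[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"),
}

def scan_repo(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, rule name) for every suspected secret under root."""
    hits = []
    for path in pathlib.Path(root).rglob("*"):
        # Skip directories and unusually large files to keep the scan cheap.
        if not path.is_file() or path.stat().st_size > 1_000_000:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for rule, pattern in PATTERNS.items():
                if pattern.search(line):
                    hits.append((str(path), lineno, rule))
    return hits

if __name__ == "__main__":
    for file, lineno, rule in scan_repo("."):
        print(f"{file}:{lineno}: possible {rule}")
```

Run regularly in CI alongside dependency auditing, a check like this helps catch credentials before a compromised package or agent can exfiltrate them.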