Both companies outperformed the IGV software index, which rose 10%, following hacks at startups like Mercor and Vercel and the unveiling of Anthropic's Mythos AI model.
Mythos identified thousands of new cybersecurity risks, prompting increased spending on security solutions; firms like CrowdStrike, Palo Alto Networks, and Zscaler were granted early access to the model.
Analysts expect accelerated revenue growth for these firms, driven by heightened demand and their established trust, despite competition from AI model makers such as Anthropic and OpenAI.
If Mythos's dangerous skills were an accident, what unforeseen capabilities will the next generation of AI models develop without warning?
How can small businesses defend against AI-powered attacks when the best defensive AI is available only to large corporations?
Is Anthropic's 'Project Glasswing' a genuine defense effort or the start of a new cybersecurity monopoly controlled by a few tech giants?
Anthropic has weakened its safety policy. Can we trust our global security to a company that prioritizes competition over caution?
Are thousands of new bugs a sign of AI's power, or a damning indictment of decades of human-led software development?
With AI needing immense power to train, is our electrical grid the true vulnerability in this new cyber war?