Microsoft, Google and xAI grant US access to AI models for testing
12 articles · Updated · Al Jazeera English · May 5
  • The Commerce Department's CAISI said the companies will let federal officials evaluate new models before deployment, amid fears Anthropic's Mythos could aid hackers.
  • CAISI, the government's main AI testing hub, said it has completed more than 40 evaluations, including on unreleased systems, often with safety guardrails reduced to probe national security risks.
  • The move extends 2024 deals with OpenAI and Anthropic and follows a separate Pentagon pact with seven tech firms to use AI on classified networks.

Anthropic Barred as Pentagon Pushes $200M AI Integration with Google, Microsoft, OpenAI, and Others

Overview

In early 2026, the Pentagon formed strategic partnerships with eight leading AI companies to build an "AI-first fighting force," deploying advanced AI tools to more than 1.3 million personnel through the GenAI.mil platform. The effort required vendors to accept a "lawful use" standard permitting unrestricted military applications, including surveillance and autonomous weapons. Most companies agreed, but Anthropic refused on ethical grounds, leading to its designation as a supply chain risk and a subsequent lawsuit challenging the government's action. Industry tensions grew as employees and investors pressured firms such as Google and OpenAI over ethical concerns. Meanwhile, xAI's Grok 4 gained traction in defense and enterprise, and OpenAI's shift to multi-cloud partnerships intensified competition among cloud providers, reshaping the AI landscape amid ongoing debates over military AI governance.

...