James Baker joins Anthropic to lead AI impact analysis
9 articles · Defense One · May 1
Baker, who led the Pentagon’s Office of Net Assessment from 2015 to 2025, will study AI’s effects on US institutions and competition with China.
He said the US has a narrow window to adapt to recursively self-improving systems, warning that the greatest risk is to the long-term viability of US institutions in both war and peace.
Anthropic is also navigating a six-month withdrawal from federal contracts ordered by President Trump after the company was labeled a supply-chain risk, while limiting its Mythos cyber tool to select agencies and companies.
Could Anthropic’s refusal to enable autonomous weapons and mass surveillance ultimately weaken US security or set a global standard for ethical AI?
As AI systems like Mythos become capable of autonomously finding critical cyber vulnerabilities, who should decide when and how these tools are deployed?
If recursive self-improvement is already underway, what new forms of oversight or governance are needed to prevent catastrophic AI failures?
James Baker’s Strategic Role in Bridging AI Innovation and U.S. National Security Challenges
Overview
In early 2026, James Baker, former head of the Pentagon's Office of Net Assessment, joined Anthropic to address long-term AI risks amid rising tensions with the U.S. government. The Pentagon labeled Anthropic a supply-chain risk and restricted its Claude AI, prompting Anthropic to sue. Shortly after releasing Claude Mythos, a powerful AI tool that excels at cybersecurity tasks, Anthropic's CEO met with the White House to ease the conflict. Anthropic insists on ethical limits, refusing to allow its AI to be used for mass surveillance or autonomous weapons, while the Pentagon demands full access. The standoff highlights the challenge of balancing innovation, security, and ethics as AI reshapes national defense and global power dynamics.