Commerce Takes Down AI Standards Site as White House Fights Over Spy Agencies' Role in Model Testing
Updated · The Washington Post · May 11
Commerce’s Center for AI Standards and Innovation took its AI testing website offline Friday, and it remained down Monday amid White House sensitivity over an unresolved AI security plan.
The takedown came as Trump officials debated whether to shift evaluation of advanced AI models to a new center inside the Office of the Director of National Intelligence, expanding spy agencies’ authority.
Mythos, Anthropic’s latest model, sharpened the fight by raising fears it could help even inexperienced hackers find software flaws and target grids, banks and government systems.
Commerce argues its existing center already has testing infrastructure and industry partnerships with Microsoft, Google and xAI, while national security aides want tougher scrutiny and potentially mandatory evaluations.
Trump could sign an AI security executive order as soon as Monday, but the administration remains split between a pro-industry approach and stronger controls ahead of his China summit.
Will mandatory government testing of AI models ultimately help or hinder America's technological edge over its rivals?
Can an 'FDA for AI' model protect the public without stifling the innovation needed to compete globally?
As AI learns to hack better than humans, who should be the ultimate gatekeeper: industry experts or security agencies?
U.S. AI Oversight in 2026: National Security Imperatives, Regulatory Shifts, and the CAISI Capacity Gap
Overview
In May 2026, the United States rapidly shifted its AI policy in response to national security concerns sparked by the emergence of Anthropic’s powerful Mythos model, which set off global alarms and prompted urgent government briefings on the risks and benefits of advanced AI. In its wake, government and industry increasingly agree that AI oversight must become more rigorous and collaborative, even as they dispute who should lead it. The U.S. is moving toward stricter testing, new regulatory frameworks, and deeper partnerships between agencies and tech companies to ensure AI safety and maintain national security.