Law Reform Institute urges DOJ and FTC guidance on AI information sharing
Updated · POLITICO · May 7
The call follows reports from OpenAI, Anthropic and Google, and a White House memo, alleging Chinese entities ran industrial-scale distillation attacks on US frontier AI systems.
Analysts say fear of Sherman Act liability has deterred labs from sharing threat indicators such as suspicious IP addresses, even as the Trump administration pushes for stronger private-sector coordination.
Lawyers said sharing technical security information would probably be lawful as long as it excludes pricing or market strategy, but the uncertainty also clouds broader AI safety cooperation on bioweapons and other catastrophic risks.
How Withdrawn 2000 Antitrust Guidelines Threaten AI Innovation and Safety Collaboration
Overview
In December 2024, the FTC and DOJ withdrew the 2000 Antitrust Guidelines for Collaborations Among Competitors, leaving businesses without clear federal guidance on cooperative activities, particularly those involving AI. The resulting legal uncertainty has suppressed vital AI security information sharing and hindered innovation and collaboration in sectors such as healthcare. In response, the agencies launched a Joint Public Inquiry in February 2026 and extended the comment deadline to May 21, 2026, to gather input. Stakeholders are urging clear, modern safe harbors that encourage AI collaboration for safety and innovation while preventing anticompetitive conduct. New guidelines are expected by late 2026 or early 2027.