OpenAI releases MRC protocol through Open Compute Project

4 articles · Updated · OpenAI · May 5
  • Developed with AMD, Broadcom, Intel, Microsoft and Nvidia, MRC is already deployed on OpenAI’s largest GB200 systems, including Oracle’s Abilene, Texas site and Microsoft Fairwater supercomputers.
  • OpenAI said the protocol improves resilience and performance by spraying packets across many paths, bypassing failures in microseconds and supporting networks of more than 100,000 GPUs with two switch tiers (a simplified sketch of the spraying idea follows this list).
  • The company said MRC extends RoCE with SRv6 source routing, reduces power and component needs, and has already been used to train multiple models as ChatGPT serves more than 900 million weekly users.
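
The bullets above name MRC's mechanisms (packet spraying, SRv6-style source routing, microsecond failure bypass) without showing how they fit together. The Python sketch below is an illustration under assumptions, not OpenAI's implementation: the Path and SprayingSender names, the dictionary "packet," and the four-spine topology are invented for the example. It shows the core idea of sender-chosen routes: each packet carries its own segment list, the sender sprays packets round-robin across healthy paths, and marking a path failed means the very next packet simply avoids it, with no switch-side reconvergence.

```python
from dataclasses import dataclass
from itertools import count

@dataclass
class Path:
    """One candidate route through a two-tier (leaf/spine) fabric, written as an
    explicit segment list in the spirit of SRv6 source routing: the sender, not
    the switches, decides the path."""
    segments: tuple        # e.g. ("leaf-A", "spine-3", "leaf-B")
    healthy: bool = True

class SprayingSender:
    """Toy model of per-packet spraying: each packet is stamped with a different
    healthy path, so a failure costs only the packets already in flight and the
    next packet already avoids the broken path."""
    def __init__(self, paths):
        self.paths = list(paths)
        self._counter = count()

    def mark_failed(self, node):
        # In a real NIC this would be driven by telemetry or loss detection;
        # here we simply flag every path that traverses the failed node.
        for p in self.paths:
            if node in p.segments:
                p.healthy = False

    def spray(self, payload):
        candidates = [p for p in self.paths if p.healthy]
        if not candidates:
            raise RuntimeError("no healthy path to destination")
        path = candidates[next(self._counter) % len(candidates)]
        # The segment list travels with the packet, so intermediate switches
        # keep no per-flow state and nothing has to reconverge on failure.
        return {"segments": path.segments, "payload": payload}

# Example: four spine planes between two leaves; one spine fails mid-stream.
paths = [Path(("leaf-A", f"spine-{i}", "leaf-B")) for i in range(4)]
sender = SprayingSender(paths)
before = [sender.spray(f"pkt{i}") for i in range(4)]      # spread over all four spines
sender.mark_failed("spine-2")                             # failure detected
after = [sender.spray(f"pkt{i}") for i in range(4, 8)]    # spine-2 is never chosen again
print([pkt["segments"][1] for pkt in before + after])
```

Spraying this aggressively reorders packets; the real protocol presumably absorbs that at the transport layer (the bullets say MRC extends RoCE), which this sketch ignores entirely.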
By eliminating the costly 'GPU Tax,' will MRC make building frontier AI models accessible beyond just the tech giants?
With tech giants backing the new MRC protocol, is InfiniBand’s reign in AI data centers finally over?
MRC bets on static routing for speed. Is this a brilliant simplification or a future management nightmare for supercomputers?

MRC Protocol and Microsoft Fairwater: Enabling 100,000+ GPU AI Superfactories with Low-Latency WANs

Overview

In May 2026, OpenAI, Microsoft, NVIDIA, AMD, and others released the MRC protocol as an open standard through the Open Compute Project, addressing critical network bottlenecks in large-scale AI training; NVIDIA and AMD played key roles in developing and validating the protocol. This enabled Microsoft to build the Fairwater data centers in Wisconsin and Atlanta, which feature advanced cooling and a dedicated AI WAN with 30% lower latency. The broader OCP 'Open Systems for AI' initiative, backed by major industry players and strategic alliances such as OCP-EPRI, fosters open hardware and operational standards. Together, these efforts accelerate scalable, efficient, and sustainable AI infrastructure worldwide.

...