Broadcom has unveiled a new networking chip named Thor Ultra, engineered to connect the clusters of processors commonly used in AI computing. The chip can link hundreds of thousands of GPUs into clusters that run and train large AI models, a move that intensifies Broadcom’s rivalry with Nvidia.
The company also revealed a significant deal to supply 10 gigawatts of custom silicon to OpenAI starting in 2026. Despite a 4% drop in Broadcom’s shares following the announcement, the company remains optimistic that AI-related products will continue to drive growth in its chip business.
Thor Ultra lets data center operators deploy more chips than its predecessor, Tomahawk Ultra, allowed, putting it in direct competition with Nvidia’s networking interface products, which dominate communications between GPUs in high-performance AI clusters. Networking performance is crucial for AI applications because it determines how quickly GPUs and accelerators can exchange data, and demand for faster, lower-latency links between chips continues to rise.
While Thor Ultra is primarily designed for hyperscale data centers, it also supports emerging edge and near-edge workloads, such as those found in factories, telecom networks, and regional data centers. The increasing convergence of core cloud and edge systems presents new opportunities for chip manufacturers, with networking serving as the connective tissue between the two ends of the computing spectrum.
Following the release of Tomahawk Ultra earlier in 2025, Thor Ultra extends connectivity to data center campuses, enabling operators to “scale out” rather than solely “scale up.” Broadcom also develops custom AI processors for major clients like Google and has contributed to multiple generations of the Tensor Processing Unit (TPU) that powers Google’s AI services.
Broadcom engineers have doubled Thor Ultra’s bandwidth compared with the previous generation, emphasizing collaboration between chip and hardware teams to address power, cooling, and packaging requirements. While Broadcom doesn’t sell servers, it shares reference system designs with customers to help them build optimized networks.
As enterprises increasingly deploy AI models reliant on distributed data sources, Broadcom’s networking focus could help bridge the performance gap between edge and cloud systems. The launch of Thor Ultra positions the company to play a more significant role in shaping the communication of future AI systems.