Summary:
- Moonshot Energy, QumulusAI, and Connect Nation IXP.us collaborate to deploy QAI Moon Pods across the US, expanding to 125 cities.
- The partnership focuses on creating a scalable national infrastructure for AI workloads, reducing latency and extending AI compute capabilities.
- The modular approach involves Moonshot designing 2,000 kW units while QumulusAI handles GPU infrastructure, offering low-latency AI compute at the edge.
Article:
Moonshot Energy, QumulusAI, and Connect Nation IXP.us have partnered to design and deploy QAI Moon Pods at 25 sites across the United States, with plans to eventually scale to 125 cities. The collaboration combines carrier-neutral interconnection, modular AI infrastructure, and GPU-as-a-Service to build a repeatable, scalable national infrastructure for inference and AI workloads. In doing so, the partners aim to reduce latency and extend AI compute capacity beyond the boundaries of traditional hyperscale data centers.
The first deployment is scheduled for the Wichita State University campus in Kansas by July 2026, followed by expansion to 25 additional cities. The consortium has identified up to 125 potential sites at US university research campuses and municipalities for the rollout of the AI pods. IXP.us co-CEO Hunter Newby emphasized that the project centers on internet exchange points and AI models rather than typical data center builds, with pod construction timelines measured in months rather than years.
Under the modular approach, Moonshot designs and builds the 2,000 kW (2 MW) units, while QumulusAI manages the GPU infrastructure. Together, the pods are intended to deliver low-latency AI compute at the edge, free of the constraints typically associated with large hyperscale data centers. Moonshot CEO Ethan Ellenberg expressed enthusiasm for the physical convergence of power, compute, and interconnection at the epicenter of growing AI demand.
The partnership targets the growing need for inference capacity in underserved regions that lack access to robust data center GPU resources. Mike Maniscalo, CEO of QumulusAI, pointed to the increasing prevalence of inference-driven, latency-sensitive workloads in today's distributed landscape, underscoring the importance of placing GPU compute directly at the network edge. The goal is a national platform that makes high-performance AI compute practical, scalable, and economically viable outside traditional hyperscale data centers.