Summary:
- TensorWave announced the deployment of AMD Instinct MI355X GPUs in its high-performance cloud platform.
- The new GPU is optimized for generative AI training, inference, and high-performance computing.
- TensorWave’s focus on AMD technology provides customers with an open, optimized AI software stack, scalability, and enterprise-grade SLAs.
Article:
TensorWave, a prominent provider of AMD-powered AI infrastructure solutions, recently announced the integration of AMD Instinct MI355X GPUs into its cloud platform. The move solidifies TensorWave’s position as a leader in delivering next-level performance for demanding AI workloads, coupled with white-glove onboarding and support.
The AMD Instinct MI355X GPU, based on the 4th Gen AMD CDNA architecture, offers 288GB of HBM3E memory and 8TB/s of memory bandwidth. It is optimized for generative AI training, inference, and high-performance computing applications. TensorWave’s early adoption of the technology allows customers to leverage the GPU’s compact, scalable design and advanced architecture, delivering high-density compute with efficient cooling infrastructure at scale.
Piotr Tomasik, President of TensorWave, emphasized the advantages of TensorWave’s specialization in AMD technology, highlighting efficiency gains of up to 25% and cost reductions of up to 40% that customers can expect. By exclusively deploying AMD GPUs, TensorWave offers customers an open, optimized AI software stack powered by AMD ROCm, promoting flexibility and reducing total cost of ownership. The company’s commitment to scalability, developer-friendly onboarding, and enterprise-grade SLAs positions it as a preferred partner for organizations prioritizing performance and choice.
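For readers curious what an "open AI software stack powered by AMD ROCm" looks like in practice, the sketch below shows one common pattern: a ROCm build of PyTorch exposes AMD GPUs through the familiar torch.cuda interface (backed by HIP), so existing CUDA-style code can run with little or no change. This is a minimal illustration under that assumption, not TensorWave-specific code, and the device output noted in the comments is illustrative.

```python
# Minimal sketch: verifying an AMD GPU is visible from a ROCm build of PyTorch.
# Assumes a ROCm-enabled PyTorch install; the exact device name reported for an
# Instinct MI355X is not confirmed here.
import torch

if torch.cuda.is_available():  # ROCm builds surface AMD GPUs via the torch.cuda API (HIP backend)
    device = torch.device("cuda")
    print(torch.cuda.get_device_name(0))          # e.g. an AMD Instinct accelerator
    print(getattr(torch.version, "hip", None))    # ROCm/HIP version string on ROCm builds

    # Run a small half-precision matmul to exercise the stack end to end.
    x = torch.randn(4096, 4096, device=device, dtype=torch.float16)
    y = x @ x
    torch.cuda.synchronize()
    print(y.shape)
else:
    print("No ROCm-visible GPU found")
```

Because the ROCm build reuses the torch.cuda namespace, most training and inference code written for other accelerators ports over without API changes, which is the flexibility and lower switching cost the article refers to.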
Travis Karr, Corporate Vice President of Business Development at AMD, praised the breakthrough performance of the AMD Instinct MI350 series GPUs for demanding AI and HPC workloads. Together, the AMD Instinct portfolio and the ROCm open software ecosystem enable customers to build cutting-edge platforms for generative AI, AI-driven scientific discovery, and high-performance computing applications.
In addition to deploying the AMD Instinct MI355X GPUs, TensorWave is also in the process of creating the largest AMD-specific AI training cluster in North America, furthering its goal of democratizing access to high-performance compute. By providing end-to-end support for AMD-based AI workloads, TensorWave empowers customers to seamlessly transition, optimize, and scale within an open and dynamic ecosystem.
For more information on TensorWave’s AMD-powered AI infrastructure solutions, visit their website.