AMD unveiled its vision for an end-to-end, integrated AI platform at its annual Advancing AI event in Santa Clara, California. The company introduced an open, scalable rack-scale AI infrastructure and showcased the new AMD Instinct MI350 Series accelerators, which deliver a substantial generational leap over their predecessors in AI compute and inference performance.
Lisa Su, CEO of AMD, emphasized the importance of collaboration and openness in the AI industry, in what read as a subtle dig at Nvidia. She positioned the Instinct MI350 Series GPUs as setting a new standard for performance and efficiency in generative AI and high-performance computing.
AMD also demonstrated its open-standards rack-scale AI infrastructure, which combines AMD Instinct MI350 Series accelerators, 5th Gen AMD EPYC processors, and AMD Pensando Pollara network interface cards. The company previewed its next-generation AI rack, Helios, which will pair the upcoming AMD Instinct MI400 Series GPUs with other advanced components for further performance gains.
AMD also announced the availability of the AMD Developer Cloud, which gives developers and open-source communities a platform for high-performance AI development. The company's ROCm 7 software stack has been optimized for generative AI and high-performance computing workloads, with improved support for industry-standard frameworks and enhanced developer tools.
Taken together, the new accelerators, the open rack-scale infrastructure, and the expanded software and cloud tooling underscore AMD's bid to compete on both performance and openness in the AI industry.