Summary:
- CEO Lip-Bu Tan says Intel will prioritize its limited supply for key data center and OEM customers amid capacity constraints driven by AI workloads.
- Intel is simplifying its server roadmap: it will focus on Diamond Rapids and accelerate the introduction of Coral Rapids, which reintroduces multi-threading and adds a custom Xeon co-developed with Nvidia for tighter GPU integration.
- Yield challenges on the new 18A process node are constraining supply, but Intel is targeting yield improvements of 7-8% per month.
Article:
Intel CEO Lip-Bu Tan Addresses Supply Constraints and AI Workloads
In a recent statement, CEO Lip-Bu Tan highlighted the need to prioritize Intel's limited supply for its most important customers in the data center and OEM sectors, citing surging demand for AI workloads. As AI workloads proliferate and diversify, they are straining the capacity of both traditional and new hardware infrastructure, underscoring the essential role CPUs continue to play in the AI era.
Simplifying the Server Roadmap
To address these challenges, Intel has simplified its server roadmap, focusing on the 16-channel Diamond Rapids product and accelerating the introduction of Coral Rapids. As part of this decision, Intel is reintroducing multi-threading into its data center roadmap and collaborating with Nvidia to build a custom Xeon fully integrated with NVLink technology, enabling a tighter connection between Intel Xeon processors and Nvidia GPUs.
Yield Challenges and Improvements
One of the key factors limiting supply has been the yield of the new 18A process node. Although yields are tracking Intel's internal plans, CEO Lip-Bu Tan acknowledged that they remain below desired levels and that the company has struggled to meet market demand. Intel is committed to improving yields month over month, targeting a 7-8% improvement rate.
Overall, Intel remains focused on easing supply constraints, enhancing performance, and meeting the evolving needs of its data center and OEM customers amid the growing demands of AI workloads.