A recent EMA survey found that AI workloads are distributed across several environments: private data centers account for 29.5%, traditional public cloud for 35.4%, GPU-as-a-service specialists for 18.5%, and edge compute for 16.6%. This distribution points to growing interest in running AI workloads at the corporate edge to reduce latency and improve efficiency.
Despite AI's growing popularity, businesses still face significant AI networking challenges. Respondents to the survey cited business-level concerns including security risks, cost and budget constraints, rapid technology evolution, and skills gaps on networking teams. They also flagged data center networking concerns such as integration with legacy networks, bandwidth demand, traffic flow coordination, and latency. On the WAN side, challenges included the complexity of distributing workloads, latency between workloads and data at the WAN edge, the complexity of traffic prioritization, and network congestion.
To address these challenges, enterprise leaders are planning infrastructure investments to support their AI strategies, including high-speed Ethernet, hyperconverged infrastructure, and SmartNICs/DPUs. Making a network AI-ready is not a cheap endeavor, however: it may require upgrades, new equipment, and changes to the existing network infrastructure.
Ultimately, businesses need data center networks that can handle AI workloads efficiently. By investing in the right infrastructure and tackling these networking challenges head-on, enterprises can use AI to drive innovation and growth.