Summary:
1. AI spending in Asia Pacific is increasing, but many companies struggle to derive value from their AI projects due to inadequate infrastructure.
2. Akamai is addressing this challenge with Inference Cloud, powered by NVIDIA, to enable real-time decision-making closer to users.
3. The shift towards edge infrastructure can improve AI performance, reduce costs, and support applications that require split-second responses.
Article:
AI investment in the Asia Pacific region is rising, yet many organizations still struggle to extract value from their AI initiatives. The root cause is often infrastructure: existing systems are not equipped to handle inference at the speed and scale that real-world applications demand. Despite heavy investment in GenAI tools, many projects miss their ROI targets for this fundamental reason.
Because AI infrastructure largely determines the performance, cost, and scalability of deployments, Akamai has introduced Inference Cloud in collaboration with NVIDIA. The platform aims to enable real-time decision-making close to end users rather than in distant data centers. According to Jay Jenkins, CTO of Cloud Computing at Akamai, this shift can help organizations control costs, minimize delays, and support AI services that depend on instantaneous responses.
The gap between experimental AI projects and full-scale deployment is wider than many enterprises anticipate. Jenkins notes that the move from experimentation to production often runs into high infrastructure costs, high latency, and the difficulty of operating models at scale. Traditional setups built on centralized clouds and large GPU clusters are common, but they become economically unviable as usage grows, particularly in regions far from major cloud hubs. Latency compounds when a single request requires multiple rounds of inference across long distances, degrading the user experience and eroding the business value the project was meant to deliver.
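To see how distance compounds, consider a minimal latency sketch. The round-trip times, step count, and per-call compute time below are illustrative assumptions, not figures from the article; the point is only that every inference step pays the network penalty again.

```python
# Rough latency model for a request that chains several inference calls.
# All numbers are illustrative assumptions, not measurements from Akamai.

ROUND_TRIPS_PER_REQUEST = 3   # e.g. retrieval, generation, safety check (assumed)
CENTRAL_RTT_MS = 180          # user in Southeast Asia -> distant cloud region (assumed)
EDGE_RTT_MS = 20              # user -> nearby edge location (assumed)
MODEL_COMPUTE_MS = 250        # per-call inference time, same hardware either way

def end_to_end_ms(rtt_ms: float) -> float:
    """Total latency when each inference step pays a full network round trip."""
    return ROUND_TRIPS_PER_REQUEST * (rtt_ms + MODEL_COMPUTE_MS)

print(f"centralized: {end_to_end_ms(CENTRAL_RTT_MS):.0f} ms")  # 1290 ms
print(f"edge:        {end_to_end_ms(EDGE_RTT_MS):.0f} ms")     # 810 ms
```

Under these assumptions the edge deployment saves nearly half a second per request, and the gap widens with every additional inference hop.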
As AI adoption in Asia Pacific moves into real-world applications across sectors, attention is shifting from training cycles to day-to-day inference. With organizations deploying language, vision, and multimodal models in diverse markets, demand for fast, reliable inference is escalating faster than many anticipated. Inference has consequently become a primary bottleneck in the region: models must now run in real time across different languages, regulatory frameworks, and data environments, and the strain on centralized systems ill-suited to that kind of responsiveness is increasingly evident.
Moving inference closer to users, devices, or agents changes the cost equation by shortening data travel distances and speeding up model responses. It trims the expense of routing large data volumes between major cloud zones and keeps physical AI systems that depend on millisecond decisions responsive. Jenkins points to substantial savings observed in India and Vietnam when image-generation models were deployed at the edge rather than in centralized clouds, which he attributes to better GPU utilization and reduced egress fees.
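A back-of-the-envelope model shows how those two levers interact. Every number below is a hypothetical assumption chosen for illustration, since the article reports no specific prices or utilization figures; the structure of the calculation, GPU hours scaled by utilization plus per-gigabyte egress, is what matters.

```python
# Back-of-the-envelope monthly cost comparison for serving an image-generation
# model centrally vs. at the edge. Every figure below is an assumption for
# illustration; the article does not publish Akamai's actual numbers.

REQUESTS_PER_MONTH = 10_000_000
MB_EGRESS_PER_REQUEST = 2.0          # generated image returned to the user

def monthly_cost(gpu_hour_usd: float, gpu_utilization: float,
                 requests_per_gpu_hour: float, egress_usd_per_gb: float) -> float:
    # Low utilization means more GPU hours bought for the same traffic.
    gpu_hours = REQUESTS_PER_MONTH / (requests_per_gpu_hour * gpu_utilization)
    egress_gb = REQUESTS_PER_MONTH * MB_EGRESS_PER_REQUEST / 1024
    return gpu_hours * gpu_hour_usd + egress_gb * egress_usd_per_gb

central = monthly_cost(gpu_hour_usd=2.50, gpu_utilization=0.35,
                       requests_per_gpu_hour=600, egress_usd_per_gb=0.09)
edge = monthly_cost(gpu_hour_usd=2.80, gpu_utilization=0.70,
                    requests_per_gpu_hour=600, egress_usd_per_gb=0.02)

print(f"centralized: ${central:,.0f}/month")  # ~$120,800
print(f"edge:        ${edge:,.0f}/month")     # ~$67,100
```

Even with a slightly higher assumed hourly GPU price at the edge, doubling utilization and cutting egress fees dominates the total, which is consistent with the savings pattern Jenkins describes.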
Industries where small delays cost revenue, safety, or engagement are driving demand for edge inference. Retail and e-commerce are early adopters because slow experiences measurably depress shopper engagement. Finance is another, since latency directly affects outcomes in workloads such as fraud detection and payment approvals, where AI decisions must come back in moments. By moving inference closer to where data is created, financial institutions can both speed up those decisions and satisfy regulatory data-localization requirements.
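Satisfying both constraints at once, lowest latency and in-country processing, is essentially a routing decision. The sketch below is a hypothetical illustration of that logic, with invented endpoint names and latencies; it is not Akamai's actual routing implementation.

```python
# Hypothetical routing helper: pick the nearest inference endpoint that also
# satisfies a data-residency constraint. Endpoint names and latencies are
# invented for illustration, not taken from any real deployment.

from dataclasses import dataclass

@dataclass
class Endpoint:
    name: str
    country: str
    rtt_ms: float   # round-trip time from the client (assumed)

ENDPOINTS = [
    Endpoint("edge-sin-1", "SG", 12.0),
    Endpoint("edge-bom-1", "IN", 48.0),
    Endpoint("central-us-east", "US", 210.0),
]

def pick_endpoint(allowed_countries: set[str]) -> Endpoint:
    """Return the lowest-latency endpoint whose country satisfies residency rules."""
    candidates = [e for e in ENDPOINTS if e.country in allowed_countries]
    if not candidates:
        raise RuntimeError("no compliant inference endpoint available")
    return min(candidates, key=lambda e: e.rtt_ms)

# A bank constrained to keep payment data in India:
print(pick_endpoint({"IN"}).name)   # edge-bom-1
```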
Escalating AI workloads demand infrastructure that can keep pace. Jenkins points to growing collaboration between cloud providers and GPU manufacturers to meet that need; Akamai's partnership with NVIDIA is one example, deploying GPUs, DPUs, and AI software across numerous edge locations to form an AI delivery network. This distributed approach improves performance and also eases compliance, which matters for organizations navigating different data regulations across markets.
In conclusion, the evolving AI infrastructure landscape in Asia Pacific underscores the value of edge-based deployment for performance, cost, and real-time applications. As companies make that shift, robust orchestration, visibility, data governance, and security will be essential to manage distributed AI operations effectively.