Summary:
1. Day one of the AI & Big Data Expo and Intelligent Automation Conference focused on the infrastructure needed for AI to work as a digital co-worker.
2. The transition from passive automation to agentic systems was a key topic, with speakers highlighting the importance of governance frameworks for handling non-deterministic outcomes.
3. Discussions also covered the challenges of data quality, physical safety, observability, and adoption barriers in deploying AI technologies effectively.
Article:
The first day of the AI & Big Data Expo and Intelligent Automation Conference examined the infrastructure required for AI to function as a digital co-worker. While the concept of AI as a collaborator took center stage, technical sessions emphasized that these advances depend on a robust underlying foundation.
One key theme that emerged on the exhibition floor was the evolution from passive automation to agentic systems: tools that can reason, plan, and execute tasks autonomously, moving beyond the limitations of traditional robotic process automation (RPA). Citi's Amal Makwana showed how these systems can integrate into enterprise workflows, unlocking real value by bridging the gap between intent and execution.
Scott Ivell and Ire Adewolu of DeepL described this shift as closing the “automation gap,” emphasizing the transformation of AI into a true digital co-worker rather than a mere tool. Brian Halpin from SS&C Blue Prism added that organizations must first master standard automation before venturing into the realm of agentic AI deployment.
However, the implementation of agentic AI comes with its own set of challenges, particularly in terms of governance frameworks. Steve Holyer of Informatica stressed the importance of strict oversight to control how these autonomous agents access and utilize data, preventing potential operational failures.
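The talks did not specify a concrete mechanism, but the kind of oversight described above can be illustrated with a minimal policy layer that checks every data-access request an agent makes before the request executes. The class, function, and source names below are hypothetical, chosen only for the sketch:

```python
# Illustrative governance sketch: each agent may read only the data
# sources an allowlist grants it; anything else is refused up front.
from dataclasses import dataclass


@dataclass(frozen=True)
class AccessPolicy:
    # Maps each agent ID to the data sources it may read.
    allowed_sources: dict[str, set[str]]

    def authorize(self, agent_id: str, source: str) -> bool:
        return source in self.allowed_sources.get(agent_id, set())


def fetch_for_agent(policy: AccessPolicy, agent_id: str, source: str) -> str:
    """Gate a (stubbed) data fetch behind the policy check."""
    if not policy.authorize(agent_id, source):
        raise PermissionError(f"agent {agent_id!r} may not read {source!r}")
    return f"data from {source}"  # stand-in for the real fetch


policy = AccessPolicy({"invoice-agent": {"erp.invoices"}})
print(fetch_for_agent(policy, "invoice-agent", "erp.invoices"))
```

The point of routing every fetch through one chokepoint is that an autonomous agent's behavior stays auditable even when its reasoning is non-deterministic.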
Moreover, the quality of input data emerged as a critical factor in the success of autonomous systems. Andreas Krause from SAP highlighted the necessity of accurate and contextually relevant enterprise data for AI to function effectively. Meni Meller of Gigaspaces proposed combining retrieval-augmented generation with semantic layers so that agents are grounded in factual, real-time enterprise data rather than stale or ambiguous sources.
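In broad strokes, the pattern described above works by having a semantic layer resolve business vocabulary to canonical data fields, then retrieving matching records to ground the model's prompt. The sketch below is an assumption about the shape of such a pipeline, not Gigaspaces' implementation; all mappings, records, and names are invented for illustration:

```python
# Toy retrieval-augmented generation over a semantic layer.
SEMANTIC_LAYER = {            # business term -> canonical field
    "revenue": "net_revenue_usd",
    "headcount": "active_employees",
}

RECORDS = [                   # stand-in for a live enterprise data store
    {"net_revenue_usd": 1_200_000, "quarter": "Q1"},
    {"active_employees": 480, "quarter": "Q1"},
]


def retrieve(question: str) -> list[dict]:
    """Return records whose canonical fields match terms in the question."""
    fields = {SEMANTIC_LAYER[t] for t in SEMANTIC_LAYER if t in question.lower()}
    return [r for r in RECORDS if fields & r.keys()]


def build_prompt(question: str) -> str:
    """Assemble a grounded prompt; a real system would send this to an LLM."""
    context = "\n".join(str(r) for r in retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"


print(build_prompt("What was revenue in Q1?"))
```

Because the answer is constrained to retrieved records, the model cannot silently invent figures: if retrieval finds nothing, the context is empty and the gap is visible.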
In addition to data challenges, physical safety and observability in AI deployment were also key topics of discussion. The integration of AI in physical environments introduces unique safety risks that require established protocols before human interaction. Perla Maiolino’s research into Time-of-Flight sensors and electronic skin aims to enhance robots’ self-awareness and environmental awareness, particularly in high-risk industries like manufacturing and logistics.
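As a rough illustration of the safety protocols such sensing enables (not drawn from Maiolino's work, and with a made-up threshold), a robot can refuse to move whenever any proximity reading falls below a protective separation distance:

```python
# Minimal safety interlock: halt motion if any Time-of-Flight reading
# shows an obstacle closer than the protective distance.
PROTECTIVE_STOP_M = 0.5  # hypothetical minimum separation, in metres


def safe_to_move(tof_readings_m: list[float]) -> bool:
    """True only if every sensor reports at least the protective distance."""
    return all(d >= PROTECTIVE_STOP_M for d in tof_readings_m)


print(safe_to_move([1.2, 0.9, 2.4]))   # clear workspace
print(safe_to_move([1.2, 0.3, 2.4]))   # something too close: halt
```

A production system would layer velocity scaling and certified hardware stops on top of a check like this; the sketch only shows the basic gating logic.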
Furthermore, the adoption of AI technologies faces infrastructure and cultural barriers that must be addressed for successful deployment. Julian Skeels from Expereo emphasized the need for networks designed specifically for AI workloads, ensuring high throughput and reliability. Paul Fermor from IBM Automation cautioned against underestimating the complexity of AI adoption, stressing the importance of human-centered strategies for cultural acceptance.
As the sessions from day one highlighted, while the future of technology may be moving towards autonomous agents, establishing a solid data foundation, evaluating network infrastructure, and implementing cultural adoption strategies are crucial for successful AI deployment. CIOs and technology leaders must prioritize these aspects to leverage the full potential of AI in the digital era.