Agentic interoperability is on the rise: organizations keep introducing new protocols for agent-to-agent communication as the industry works out which standards to adopt.
A team of researchers from Carnegie Mellon University has put forward a novel interoperability protocol focused on the identity, accountability, and ethics of autonomous AI agents. Known as Layered Orchestration for Knowledgeful Agents (LOKA), this protocol could potentially complement existing standards like Google’s Agent2Agent (A2A) and Anthropic’s Model Context Protocol (MCP).
In a research paper, the team emphasized the necessity of regulating AI agents as they become increasingly prevalent.
“With the expanding presence of AI agents, the establishment of a standardized framework to govern their interactions becomes essential,” the researchers stated. “Despite their widespread usage, AI agents often operate within isolated systems, lacking a universal protocol for communication, ethical reasoning, and compliance with regulatory frameworks. This fragmentation poses significant risks, such as interoperability challenges, ethical discrepancies, and accountability gaps.”
To address these concerns, they propose LOKA, an open-source protocol designed to enable agents to verify their identity, exchange ethically annotated messages, enhance accountability, and integrate ethical governance into their decision-making processes.
Building on the concept of a Universal Agent Identity Layer, LOKA assigns agents a unique and verifiable identity.
“We envision LOKA as a foundational architecture and a catalyst for reevaluating the fundamental elements—identity, intent, trust, and ethical consensus—that should underlie agent interactions. As the realm of AI agents expands, it is imperative to assess whether our existing infrastructure can responsibly facilitate this evolution,” stated Rajesh Ranjan, one of the researchers, in an interview with VentureBeat.
Layers of LOKA
LOKA operates as a layered stack. The base layer handles identity, defining who the agent is. It includes a decentralized identifier: a unique, cryptographically verifiable ID that users and other agents can use to authenticate the agent.
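The paper does not publish the identity layer's exact scheme, but the core idea of a decentralized, self-certifying identifier can be sketched. In the illustration below (all names, including the `did:loka` prefix, are hypothetical), the ID is derived from the agent's public key material, so any peer can recompute the binding locally without consulting a central registry; a real deployment would add public-key signatures on top.

```python
import hashlib

def derive_agent_id(public_key_bytes: bytes) -> str:
    """Derive a self-certifying identifier from key material.

    Illustrative scheme: the ID is a SHA-256 fingerprint of the
    agent's public key, so anyone holding the key can recompute
    and check the ID without a central registry.
    """
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    return f"did:loka:{digest[:32]}"  # "did:loka" prefix is hypothetical

def verify_agent_id(agent_id: str, public_key_bytes: bytes) -> bool:
    """Check that an ID matches the presented key material."""
    return agent_id == derive_agent_id(public_key_bytes)

# Usage: an agent presents its key; peers verify the binding locally.
key = b"agent-alpha-public-key"
aid = derive_agent_id(key)
assert verify_agent_id(aid, key)
assert not verify_agent_id(aid, b"impostor-key")
```

Because the identifier is a function of the key itself, no party can claim another agent's ID without also holding the matching key material, which is what makes the scheme workable without a central issuer.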
The layers above it cover communication, ethics, and security: how the agent signals its intentions, how it behaves ethically, and how its interactions are secured.
The ethics layer governs the agent's conduct through a flexible but robust ethical decision-making framework, one that can adapt to the ethical standards of the context in which the agent operates. Within the framework, agents use collective decision-making models to weigh their next steps against ethical and responsible AI standards.
Additionally, the security layer utilizes what the researchers term “quantum-resilient cryptography.”
Key Distinctions of LOKA
The researchers argue that LOKA matters because it gives agents the essential information they need to communicate with one another and to act autonomously across different systems.
LOKA could prove invaluable for enterprises that need to ensure deployed agents are safe and want a traceable way to understand how those agents make decisions. A persistent concern for many enterprises is that agents could reach into unauthorized systems or sensitive data, and make errors there.
Ranjan emphasized the system’s focus on defining agents’ identities, decision-making processes, and accountability.
“Our aim is to address critical questions often overshadowed in the rush to expand AI agents: How do we establish ecosystems where these agents can be trusted, held accountable, and ethically compatible across diverse systems?” Ranjan stated.
While LOKA operates independently, Ranjan expressed enthusiasm for the positive feedback received from fellow researchers and institutions regarding the expansion of the LOKA research initiative.