Many organizations are facing challenges in containing the growing risks associated with AI technology. Palo Alto Networks has predicted that 2026 may see the first major lawsuits holding executives personally accountable for rogue AI actions. The rapid and unpredictable nature of AI threats requires a more sophisticated approach to governance, beyond simply increasing budgets or headcount.
The lack of visibility into the usage and modification of AI models poses a significant vulnerability in the security landscape. Without a clear understanding of where and how AI models are being utilized, organizations are left vulnerable to potential security breaches and incidents. This visibility gap underscores the importance of enhancing governance practices to improve model traceability and data security.
Recent reports have highlighted the prevalence of security risks in AI applications, such as prompt injection, vulnerable code, and jailbreaking. These risks pose serious threats to organizations, as adversaries exploit vulnerabilities to access sensitive data and compromise AI systems. Despite substantial investments in cybersecurity solutions, many organizations struggle to detect and prevent these sophisticated attacks, which often go undetected by traditional security measures.
IBM’s Cost of a Data Breach Report for 2025 revealed that a significant number of organizations experienced breaches in their AI models or applications, with a staggering 97% lacking proper AI access controls. Unauthorized AI use, also known as shadow AI, accounted for a significant portion of breaches, resulting in substantial financial losses. The lack of visibility into AI models’ deployment and usage hinders effective incident response and mitigation efforts.
The adoption of Software Bill of Materials (SBOM) for AI models is crucial for enhancing security and risk management practices. While existing SBOM standards provide a framework for documenting software components, AI models present unique challenges due to their dynamic and evolving nature. Implementing AI-specific SBOMs can help organizations track model dependencies, mitigate risks, and improve overall security posture.
In conclusion, the evolving threat landscape in AI security necessitates a proactive approach to governance and risk management. By embracing AI-specific SBOMs, organizations can enhance visibility, traceability, and security controls to safeguard against potential threats. As AI technology continues to advance, prioritizing security measures will be essential for ensuring safe and responsible AI deployment.