The rapid evolution of artificial intelligence has introduced new challenges and vulnerabilities into cybersecurity. As Forrester analysts highlighted at the 2025 Security and Risk Summit, generative AI poses significant risk because of its high error rates and propensity for failure: research has found that AI models can be wrong as much as 60% of the time, producing more failed outcomes than successful ones.
Compounding the problem, unsanctioned AI has worked its way into daily workflows: 88% of security leaders admit to using shadow AI, meaning AI tools adopted without IT approval or oversight. That unauthorized use is especially risky for code security, where audits have found AI-generated code to contain significant vulnerabilities, including flaws from the OWASP Top 10.
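To make that concrete, the sketch below shows one of the most commonly reported OWASP Top 10 flaws in generated code, injection, alongside its fix. This is an illustrative example, not taken from the summit; the schema and function names are hypothetical.

```python
# Illustrative only: an injection flaw of the kind audits have flagged in
# AI-generated code, next to the parameterized fix. Schema is hypothetical.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # VULNERABLE: user input is interpolated directly into the SQL string,
    # so input like "x' OR '1'='1" returns every row (OWASP A03: Injection).
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # SAFE: a parameterized query hands escaping to the driver, so the
    # input is treated strictly as data, never as SQL.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

The fix costs nothing at runtime, which is why reviewers treat string-built SQL in generated code as an automatic rejection rather than a judgment call.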
The expansion of generative AI into identity management has further complicated cybersecurity efforts: every new machine identity an AI agent acquires creates a potential attack surface for malicious actors. As demand for identity and access management (IAM) solutions grows, robust governance and security controls become paramount to mitigating the risks of AI-driven chaos.
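What that governance can look like in practice is sketched below. This is a minimal illustration assuming a hypothetical in-house credential service rather than any particular IAM product: each agent gets its own identity, a narrow allow-list of actions, and a short-lived credential, so a compromised agent has a small blast radius.

```python
# Minimal sketch of least-privilege governance for agent identities.
# The credential service and names here are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentCredential:
    agent_id: str
    scopes: frozenset[str]   # explicit allow-list of actions
    expires_at: datetime     # short-lived by default

def issue_credential(agent_id: str, scopes: set[str],
                     ttl_minutes: int = 15) -> AgentCredential:
    # One identity per agent, narrowly scoped, expiring quickly.
    return AgentCredential(
        agent_id=agent_id,
        scopes=frozenset(scopes),
        expires_at=datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    )

def is_authorized(cred: AgentCredential, action: str) -> bool:
    # Deny by default: expired credentials and unscoped actions both fail.
    return datetime.now(timezone.utc) < cred.expires_at and action in cred.scopes

cred = issue_credential("agent-7", {"tickets:read"})
assert is_authorized(cred, "tickets:read")
assert not is_authorized(cred, "tickets:delete")  # outside the allow-list
```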
In light of these challenges, security and risk management professionals are advised to treat AI agents as mission-critical identities, develop AI red-team capabilities, operate under the assumption that AI will fail, implement security controls that scale, and eliminate blind trust in automation. By addressing the vulnerabilities of generative AI proactively, organizations can better protect their networks and data from the threat of weaponized AI agents.
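As one minimal illustration of eliminating blind trust in automation, the hypothetical gate below holds any destructive agent action for human approval and fails closed otherwise. The risk tiers and names are assumptions for the sketch, not a prescribed design.

```python
# Sketch of a human-in-the-loop gate: low-risk agent actions run
# automatically; anything destructive waits for a named approver.
HIGH_RISK_PREFIXES = ("delete", "deploy", "grant", "transfer")

def requires_approval(action: str) -> bool:
    # Treat potentially destructive or privilege-changing verbs as high risk.
    return action.startswith(HIGH_RISK_PREFIXES)

def execute_agent_action(action: str, approved_by: str | None = None) -> str:
    # Fail closed: a high-risk action with no human approver is refused,
    # which operationalizes "assume AI failure" rather than AI correctness.
    if requires_approval(action) and approved_by is None:
        return f"HELD: '{action}' queued for human review"
    suffix = f" (approved by {approved_by})" if approved_by else ""
    return f"EXECUTED: '{action}'{suffix}"

print(execute_agent_action("read_logs"))       # runs automatically
print(execute_agent_action("delete_backups"))  # held for review
print(execute_agent_action("delete_backups", approved_by="sre-oncall"))
```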