San Francisco’s RSA cybersecurity conference recently surpassed pre-pandemic attendance, drawing more than 43,000 people, and its discussions exposed a critical security gap around agentic AI. Bob Kalka, Global Lead for Security Sales at IBM, noted that agentic AI dominated the entire conference, yet vendors offered no cohesive approach to securing these dynamic systems; Suja Viswesan, Vice President for Security Products at IBM, observed that very few spoke of end-to-end solutions. The absence of holistic security is particularly concerning because AI agents, unlike static code, change their behavior at runtime, creating new vulnerabilities.
Organizations lacking AI-dedicated access controls face significant risk, with 97 percent reporting security incidents, while those with coordinated multi-agent strategies anticipate a 42 percent higher return on investment.
RSA Conference Highlights Agentic AI Security Gaps
The RSA Conference, drawing over 43,000 attendees, signaled a clear shift in cybersecurity priorities, with agentic AI dominating discussions on the expo floor, according to industry observers. Unlike traditional static code, AI agents dynamically alter their behavior at runtime, and as they interact with tools and other agents, the attack surface expands significantly, forcing a reevaluation of existing security protocols. Despite the widespread excitement surrounding agentic AI, a comprehensive end-to-end approach to securing these systems was absent from many presentations, noted Dave McGinnis, Vice President for Global Cyber Threat Management. The fragmented landscape is especially concerning because agentic AI introduces a novel type of identity, one that is dynamic and constantly evolving, which traditional Identity and Access Management (IAM) frameworks are ill-equipped to handle. Companies such as IBM are turning to tools like Verify and HashiCorp Vault to close these gaps.
Recent data underscores the urgency: the latest Cost of a Data Breach report found that 97 percent of organizations that experienced AI-related security incidents lacked dedicated AI access controls. Jake Lundberg, HashiCorp Field CTO, identified a common challenge among clients: many organizations do not have a clear picture of the scope of their identities, and they struggle to verify that those identities are behaving as expected. He advocates “ring-fencing” identities and workflows, particularly in highly regulated sectors such as finance and healthcare, where a single compromise could be rapidly amplified by an autonomous AI agent. He also noted that some organizations are prioritizing speed of deployment over security, a pattern reminiscent of early cloud adoption. Organizations with a coordinated multi-agent strategy are projected to see a 42 percent higher ROI than those without one, underscoring the financial stakes of getting agentic AI security right.
97% of Organizations Lack AI-Specific Access Controls
This widespread deficiency leaves organizations exposed: because AI agents evolve their behavior at runtime, they open novel avenues for attack and demand a new approach to identity management. The risk of rapid escalation is greatest in regulated industries such as finance and healthcare, where even a minor compromise can be amplified by the speed and autonomy of an AI agent. Some companies, under pressure to innovate, are taking a risky path reminiscent of early cloud adoption, prioritizing speed over security. Lundberg argued that the fundamental protections for an environment are the ability to quickly stand up identities and to change them if something goes wrong.
“Almost every one of hundreds of vendors on the expo floor were talking about agentic AI security.”
Bob Kalka, Global Lead for Security Sales at IBM
