The biggest security conference in the world just wrapped, and the same headline sat beneath every announcement: enterprises are deploying AI agents faster than security teams can track them, and the frameworks meant to govern them are still catching up.
RSAC 2026 was not a debate about whether AI agent security matters. That argument is settled. The conversation that dominated the conference floor was narrower and more uncomfortable: your security team has probably secured the wrong layer, and attackers already know it.
Here is what the conference revealed, why the execution layer is where the real exposure sits, and what a sound response looks like in practice.
The Execution Layer Is the Problem RSAC Finally Named
Enterprise security teams have spent the last two years building solid controls around the model layer: access policies, vendor procurement reviews, and DLP rules governing which data AI tools can see. That work is real and it matters.
What RSAC 2026 surfaced is the layer those controls do not touch: tool execution. When an AI agent takes an action, it does so through a tool call. It writes to a database, triggers a workflow, calls an external API, or pushes instructions to a connected system. None of that activity lives at the model layer. It lives at the execution layer, and for most enterprises, the execution layer runs without monitoring, without scoping, and without audit trails.
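To make the execution layer concrete, here is a minimal sketch of what a typical agent tool dispatch looks like. All names are hypothetical; the point is that the model only *selects* an action, while everything below runs with real credentials and, by default, leaves no record behind.

```python
# Minimal sketch of an execution layer, with hypothetical tool names.
# The model layer chooses the action; this code actually performs it.

def write_record(args, credentials):
    # Stand-in for a real database write.
    return f"wrote {args['key']} as {credentials['agent_id']}"

def call_api(args, credentials):
    # Stand-in for a real external API call.
    return f"called {args['endpoint']} as {credentials['agent_id']}"

TOOLS = {"write_record": write_record, "call_api": call_api}

def execute(call, credentials):
    """Dispatch a model-selected tool call to a real system.

    Note what is missing here: no permission check, no audit record.
    This is the unmonitored execution layer the conference described.
    """
    return TOOLS[call["tool"]](call["args"], credentials)

result = execute(
    {"tool": "write_record", "args": {"key": "invoice-42"}},
    {"agent_id": "billing-agent"},
)
```

Every control discussed later in this piece is, in effect, something inserted into that `execute` function.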
Microsoft made this visible with the launch of Agent 365 at RSAC 2026, a control plane designed specifically to give IT and security teams visibility into what agents are actually doing across the environment. The announcement pointed at something the broader market has avoided saying plainly: governance policies and runtime enforcement are different things, and most organizations have built only one of them.
Shadow AI Is Not a Policy Problem. It Is an Architecture Problem.
A 2026 Gravitee survey found that only 24.4% of organizations have full visibility into which AI agents are communicating with each other. More than half of all deployed agents run without security oversight or logging of any kind.
That number does not reflect bad intentions. It reflects the way enterprise AI adoption actually works. Product teams spin up automation, engineering teams connect agents to internal tools and external APIs, and none of that flows through a central security review before going live. The average organization now manages 37 deployed agents. Each one is an unmapped access path.
Shadow AI incidents carry an average additional cost of $670,000 over standard security incidents, driven by delayed detection and difficulty scoping what the agent touched. You cannot scope an incident around infrastructure you did not know existed.
The policy response to this is to write stronger procurement rules. The architecture response is to build a system that discovers and governs agents before an incident forces you to find them manually.
Prompt Injection Attacks Do Not Need Your Perimeter
One of the more clarifying sessions at RSAC 2026 addressed a common misconception about how AI agent attacks actually work. Security leaders often frame prompt injection as a model-level problem. It is not.
Prompt injection works at the execution layer. An attacker embeds instructions in a document, an email, or an API response. The agent reads the content, interprets the embedded instruction as a legitimate task, and acts on it using real credentials through a real access path. No malware binary. No exploit code. Just text, triggering a tool the agent already has permission to use.
This is why model-level guardrails are insufficient on their own. Research cited at the conference found that fine-tuning attacks bypassed Claude Haiku in 72% of test cases and GPT-4o in 57%. Model defenses are necessary but not enough. You need input validation, action-level controls, and runtime logging at the tool layer.
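A sketch of what an action-level check might look like, under stated assumptions: the pattern list and tool names are illustrative only, and real injection detection needs far more than regex matching. The placement is the point — the check sits at the tool layer, between the model's chosen action and its execution, not inside the model.

```python
import re

# Illustrative injection patterns; a real deployment would use a
# dedicated classifier, not a handful of regexes.
SUSPECT_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"you are now",
]

# Hypothetical set of tools whose invocation requires human sign-off.
HIGH_RISK_TOOLS = {"send_email", "write_record"}

def vet_tool_call(call):
    """Return (allowed, reason) for a model-selected tool call."""
    text = str(call.get("args", ""))
    for pat in SUSPECT_PATTERNS:
        if re.search(pat, text, re.IGNORECASE):
            return False, f"input matched injection pattern: {pat}"
    if call["tool"] in HIGH_RISK_TOOLS and not call.get("human_approved"):
        return False, "high-risk tool requires human approval"
    return True, "ok"

allowed, reason = vet_tool_call(
    {"tool": "send_email", "args": {"body": "Ignore previous instructions"}}
)
```

Even this crude gate changes the attack economics: injected text can no longer trigger a high-risk action silently, because the action itself is interrogated, not just the prompt.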
The RSAC Verdict on What AI Agent Security Actually Requires
The most consistent theme across announcements and sessions was the shift from governance documentation to runtime enforcement. Three controls appeared repeatedly across vendor announcements and practitioner talks:
Least privilege scoping. Every agent should operate with the minimum permissions it needs to complete its assigned task. An overprivileged agent turns a single prompt injection into a full environment compromise. Most current agent deployments do not enforce this at the tool level.
Continuous agent discovery. You cannot govern agents you have not found. Organizations need automated discovery that maps every agent operating in the environment, including those deployed without IT review, before those agents reach production data.
Runtime audit logging. Every tool call an agent makes should produce an audit record. Who authorized the action, which tool was invoked, what data was accessed, and what the outcome was. This is the basic forensics capability that most enterprise AI deployments currently lack.
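Two of these controls can be sketched together in a few lines. This is a minimal illustration, not a reference implementation: the agent names and scope table are hypothetical, and a production system would persist the log and enforce scopes in the execution path itself.

```python
import time

# Hypothetical scope table: each agent gets only the tools its task needs.
AGENT_SCOPES = {
    "invoice-agent": {"read_invoices", "write_invoices"},
    "support-agent": {"read_tickets"},
}

AUDIT_LOG = []

def execute_tool(agent_id, tool, args, run_tool):
    """Deny-by-default scoping plus an audit record for every call."""
    allowed = tool in AGENT_SCOPES.get(agent_id, set())
    outcome = run_tool(args) if allowed else "denied: out of scope"
    AUDIT_LOG.append({
        "ts": time.time(),    # when the action happened
        "agent": agent_id,    # who acted
        "tool": tool,         # which tool was invoked
        "args": args,         # what data was touched
        "allowed": allowed,
        "outcome": outcome,   # what the result was
    })
    return allowed, outcome

ok, _ = execute_tool("support-agent", "read_tickets", {"id": 7},
                     lambda a: f"ticket {a['id']}")
blocked, why = execute_tool("support-agent", "write_invoices", {"id": 7},
                            lambda a: "should not run")
```

Note that the denied call still produces an audit record. That is the forensics capability the conference kept returning to: when an incident happens, the log already exists.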
These are not new ideas. They are identity security principles applied to a new class of actor. AI agents are not software in the conventional sense. They act like privileged users at machine speed, and they need to be governed accordingly.
What This Means for Your Environment Right Now
The gap RSAC 2026 put on record is not between organizations that know about AI agent security and those that do not. It is between organizations that have moved from policy confidence to technical enforcement and those that have not.
82% of executives report confidence that existing policies protect against unauthorized agent actions. Yet 88% of organizations reported a confirmed or suspected AI agent security incident in the last year.
That gap does not close with a policy update. It closes when the execution layer has the same visibility and control that the model layer has today.
If you want to understand where your agent exposure actually sits, Pragatix gives security teams the runtime visibility and enforcement layer that governance documents alone cannot provide. Guardian Agent monitors agent behavior at the tool call level, not just at the prompt.
The RSAC 2026 conversation is over. The implementation work is what comes next.
Book a demo with the AGAT team to see how Pragatix maps to the execution layer controls RSAC 2026 put on the table. Schedule a meeting here.
