AI agent security is no longer a future consideration for enterprise security teams. They are in production. According to a 2026 survey of over 900 executives and practitioners, 80.9% of technical teams have moved past the planning phase into active testing or full deployment. The productivity gains are real. So is the exposure, and most enterprises have addressed only half of it.
Security teams have done solid work controlling the model layer: which AI tools employees can access, which vendors pass procurement review, what data those tools can see. That work matters. But it leaves the execution layer completely open. And in 2026, the execution layer is where AI agent attacks actually happen.
What the Execution Layer Is, and Why It Gets Ignored
When an AI agent takes an action, it does so through a tool invocation. It calls an API, writes to a database, triggers a workflow, or pushes instructions to a connected system. This is where AI reasoning meets production infrastructure.
Most enterprises have no governance here. Tool invocations are trusted by default. There is no risk scoring before execution, no policy enforcement at the connector level, and no audit trail showing what agents are actually doing across the environment. Security teams secure the model. The tool layer runs free.
This creates a structural vulnerability that attackers have already started exploiting. Prompt injection attacks do not need to breach your perimeter. They only need to manipulate an agent into using a tool it already has access to. An attacker embeds instructions in a document, an email, or an API response. The agent reads the content, interprets the embedded instruction as a legitimate task, and acts on it using real credentials through a real access path. No malware binary. No exploit code. Just text.
CrowdStrike and Cisco have both moved to address this at the execution layer specifically, with Cisco’s AI Defense solution expanding in February 2026 to add runtime protections against tool abuse and supply chain manipulation at the MCP layer. These are not fringe vendors moving on a fringe risk. This is core enterprise security infrastructure shifting to cover the execution layer because that is where the attacks are going.
Shadow AI Makes the Problem Worse
Shadow AI compounds the exposure. Many agents operating inside enterprise environments were deployed by individual product and engineering teams without going through security review. They connect to tools, MCP servers, and external APIs that the security team has never mapped, scoped, or approved.
A 2026 Gravitee survey found that only 24.4% of organizations have full visibility into which AI agents are communicating with each other. More than half of all agents run without any security oversight or logging. You cannot govern what you cannot see. And right now, most security teams cannot see most of their agents.
The average organization now manages 37 deployed agents. That number grows every quarter as individual teams spin up automation without central review. Each undiscovered agent is an unmapped access path. <Shadow AI security incidents cost an average of $670,000 more than standard incidents, driven by delayed detection and difficulty scoping the exposure.>
The Identity Problem at the Core of Agent Security
Most organizations still treat AI agents as extensions of human users, assigning them to shared service accounts or existing user credentials. That architectural decision creates accountability gaps that are very difficult to close after the fact.
45.6% of technical teams rely on shared API keys for agent-to-agent authentication. When multiple agents share credentials, attribution becomes impossible. If an agent creates and instructs another agent, which 25.5% of deployed agents can do, the chain of command quickly becomes unauditable. Your SIEM sees a series of failed transactions. It does not show you which agent started the cascade or where it was compromised.
Only 21.9% of teams treat AI agents as independent, identity-bearing entities with their own access scopes and audit trails. The organizations that do treat agents as first-class security principals have a much cleaner picture of what is happening in their environment. They can attribute actions, scope blast radius, and isolate a compromised agent without taking down entire workflows.
What an AI Agent Gateway Actually Does
To close the execution layer gap, security teams are deploying AI agent gateways between the agent and its connected tools.
An AI agent gateway intercepts every tool invocation request before execution. It evaluates the request against enterprise policy, scores the risk of the intended action, and either approves or blocks the execution in real time. High-risk actions can be routed to a human approval queue. Low-risk, high-frequency actions can be approved automatically. The point is that nothing executes without evaluation.
This shifts AI security from reactive log review to proactive enforcement. Your team stops investigating what already happened and starts controlling what happens next.
Alongside gateways, leading security teams are building three additional controls:
Agent discovery. Continuous inventory of every agent operating in the environment, including homegrown automations, browser extensions, SaaS-based agents, and MCP server connections. You need to know what exists before you can govern it.
Behavioral monitoring. Runtime visibility into what agents are doing across the environment, not just whether they were approved at deployment. Agents drift. An approved agent can be manipulated. Behavioral monitoring catches the drift.
Least privilege scoping. Every agent should operate with the minimum permissions needed to complete its task. Overprivileged agents turn a single prompt injection into a full environment compromise.
What the Data Shows About Where Enterprises Stand Right Now
The gap between executive confidence and actual controls is the defining problem of enterprise AI security in 2026.
82% of executives report confidence that their existing policies protect against unauthorized agent actions. But only 14.4% of organizations send agents to production with full security or IT approval.> Policy documentation and runtime enforcement are not the same thing.
<88% of organizations reported confirmed or suspected AI agent security incidents in the last year. In healthcare, that number is 92.7%. These are not theoretical risks that live in research papers. They are active incidents happening inside organizations that believed their existing controls were sufficient.
<Stanford’s Trustworthy AI Research Lab found that model-level guardrails alone are insufficient: fine-tuning attacks bypassed Claude Haiku in 72% of cases and GPT-4o in 57%.> Model-layer safety does not extend to the execution layer. Enterprises that treat those two things as the same control have a gap they may not discover until an incident forces the issue.
Where Pragatix Fits
Pragatix is AGAT Software’s AI Security and Enablement Platform, built specifically for the execution layer problem.
It gives security teams continuous discovery of every AI agent and MCP server connection in the environment, runtime enforcement at the tool invocation layer before execution happens, behavioral monitoring across the full agent fleet, and audit trails that attribute every agent action to a specific identity and policy decision.
For enterprises in regulated industries, Pragatix supports on-premise and private cloud deployment, removing the data sovereignty concerns that make cloud-hosted agent governance difficult to justify to your legal and compliance teams.
If your organization has AI agents in production and no execution-layer controls, request a Pragatix demo to see what your current coverage gap looks like.
FAQ
What is AI agent security? AI agent security covers the controls organizations use to govern autonomous AI agents across their full lifecycle: discovery, identity management, usage monitoring, runtime policy enforcement, and connector-level governance. It is distinct from traditional AI safety, which focuses on model outputs rather than agent actions in production systems.
What is an AI agent gateway? An AI agent gateway sits between an AI agent and its connected tools. It intercepts every tool invocation, evaluates the request against enterprise policy, scores the risk of the action, and approves or blocks execution before it happens. It is the primary control for preventing unauthorized agent actions at the execution layer.
What is AI runtime security? AI runtime security is the practice of evaluating and enforcing policy on AI agent actions in real time, at the moment of execution. It is different from model-layer security, which governs what the AI can say or generate. Runtime security governs what the AI can do.
What is shadow AI in an enterprise context? Shadow AI refers to AI agents and tools operating inside an organization without security team knowledge or approval. These agents often connect to external APIs and MCP servers that have never been reviewed, scoped, or inventoried.
What is prompt injection and why does it matter for enterprise AI? Prompt injection is an attack technique where malicious instructions are embedded in content an AI agent processes, such as a document, email, or API response. The agent interprets the embedded instructions as legitimate tasks and executes them using its own access and credentials, without the attacker needing direct system access.
What is MCP security? MCP (Model Context Protocol) security refers to the controls organizations apply to the servers and integrations that connect AI agents to tools, data sources, and external APIs. MCP servers often store credentials in plaintext and run with elevated permissions, making them a high-value target for agents that have been manipulated through prompt injection.
How does Pragatix address enterprise AI agent security? Pragatix provides execution-layer security for enterprise AI deployments: continuous agent and MCP server discovery, runtime enforcement at the tool invocation layer, behavioral monitoring, least-privilege access controls, and full audit trails. It supports on-premise deployment for regulated industries. Learn more about Pragatix.
