Your AI agent has access to everything. Your CRM. Your email. Your finance system. Your code repos.
It logs in as your top sales rep, your engineer, your CFO. It moves data, sends messages, books meetings, deploys code. And nobody on your security team can see what it’s actually doing.
That’s the agentic AI security problem in one paragraph. And it’s the reason RSAC 2026 felt less like a conference and more like a five-day warning siren.
The “Double Agent” Problem Nobody Saw Coming
Microsoft’s security team has a name for this. They call it the double agent problem.
Here’s how it works. You deploy an AI agent to do something useful. Maybe it summarises support tickets, drafts contracts, or runs your weekly reports. To do its job, that agent needs credentials. Lots of them.
Then someone slips a malicious instruction into a document, an email, or a Slack thread. The agent reads it. The agent obeys it. Suddenly your helpful assistant is exfiltrating customer data, wiring funds, or deleting files. It didn’t get hacked. It got talked into it.
The numbers back this up. 80% of Fortune 500 companies are already running agents in production. 97% of organisations that reported an AI-related security incident in the last year had no AI-dedicated access controls in place. Read those two stats together and you’ll understand why every major security vendor pivoted to agentic AI security in the last 30 days.
Why Your Old Security Stack Can’t See This Coming
Traditional security tools were built for two things. Humans clicking around. Static apps making predictable API calls.
Agents do neither.
An AI agent changes its behaviour at runtime. It picks tools on the fly. It calls other agents. It chains actions across systems your SIEM has never correlated before. Your IAM platform sees one identity. Your DLP sees one user. Your endpoint agent sees nothing at all, because the agent isn’t running on an endpoint.
Adi Shamir, the “S” in RSA, said it plainly at the conference: I’m totally terrified. When the cryptographer who literally built modern encryption is scared, your CISO probably should be too.
Brian Contos at Mitiga put the same idea another way. Adversaries don’t break in anymore. They log in. And in the agentic era, the thing logging in might not even be human.
The Three Gaps Killing Most Agentic AI Security Strategies
If you’re rolling out agents right now, here are the holes attackers are walking through.
Gap 1: Shadow Agents
Your developers spun up 14 agents last month. Your marketing team built 6. Your ops team is running another 9 from a no-code platform you’ve never heard of. Your security team knows about 2.
That’s shadow AI. And a recent CSA and Google survey found 72% of organisations lack confidence in their ability to execute a secure AI strategy. The first reason is almost always visibility. You can’t protect what you can’t see.
Gap 2: Over-Permissioned Identities
Most agents get provisioned the same way contractors used to. Give them broad access, promise to clean it up later, never clean it up. Now multiply that by hundreds of agents, each holding the keys to multiple systems. One compromised agent becomes a lateral movement playground.
Gap 3: No Runtime Guardrails
Static policy reviews don’t work for systems that decide what to do in the moment. You need controls that watch the agent while it’s working, flag risky actions before they execute, and shut things down when the agent goes off-script. Most enterprises have none of this.
What Good Agentic AI Security Actually Looks Like
A real agentic AI security programme has four moving parts. Skip any one of them and the others won’t save you.
Discovery. You need a continuous inventory of every agent running in your environment. Sanctioned, unsanctioned, internal, third-party. If it can take an action on your behalf, it goes on the list.
Identity controls built for non-humans. Agents are a new identity class. They need short-lived credentials, scoped permissions, and the ability to be revoked instantly. Treating them like service accounts is a mistake. Treating them like users is a bigger one.
Runtime enforcement. Policy needs to live next to the agent at execution time, not in a quarterly review document. That means inspecting prompts, tool calls, and outputs in real time, with automated kill switches when something looks wrong.
Audit you can actually use. Every agent action needs to be logged in a way your SOC can investigate later. Not just “agent X ran.” What did it read? What did it write? Whose data did it touch? Who told it to?
This is exactly the problem Pragatix was built to solve. The platform sits between your AI agents and the systems they touch, giving security teams the visibility, identity controls, and runtime guardrails that traditional tools were never designed to deliver.
The Window Is Closing Fast
Here’s the uncomfortable part. The attackers already adapted. Mandiant’s latest threat intel report tracked adversaries moving from experimental AI use to deploying autonomous agents that rewrite their own code in real time. The UK AI Security Institute logged a five-fold rise in real-world AI misbehaviour between October 2025 and March 2026.
Defenders are still arguing about budget.
If your agentic AI security strategy is “we’ll figure it out next quarter,” you’re already behind. Microsoft Agent 365 ships GA on May 1. Cisco, Google, Palo Alto, and IBM have all launched agent-specific security products in the last 30 days. The market is telling you the threat is here.
Where to Start This Week
You don’t need a 12-month transformation programme. You need three things on your calendar by Friday.
First, run a discovery sweep. Find every AI agent currently operating against your data, whether you authorised it or not. You’ll be surprised. Most security leaders find double what they expected.
Second, audit the identities. For every agent you find, ask one question. If an attacker took control of this agent right now, what’s the blast radius? If the answer makes you wince, that agent needs scoped permissions and short-lived credentials immediately.
Third, get a runtime control layer in place before you ship another agent to production. Build, buy, or partner. The choice is yours. The deadline isn’t.
The double agent problem isn’t a future risk. It’s a Tuesday afternoon problem that most security teams haven’t been told about yet.
Don’t be the team that finds out the hard way.
Ready to close the agentic AI security gap? Book a Pragatix demo and see how leading security teams are securing their AI agents from day one.
Further reading:
