AI Adoption Is Outpacing AI Governance
Author: Kevin Bobowski, CMO, Archer

The CEOs of 15 leading cybersecurity vendors at RSAC 2026 said it directly. AI agents are entering production faster than the controls, accountability, and audit evidence required to govern them. Regulatory Change Management, run as a continuous workflow on AI Operators, is the discipline that closes the gap.
Key takeaways
AI adoption is outpacing AI governance. AI agents are entering production while the governance layer is still on the drawing board, according to CEOs at CrowdStrike, Netskope, Saviynt, and seven other leading cybersecurity vendors interviewed by CRN at RSAC 2026.
The agent identity architecture is unsettled. “We don’t know if we are going to have one account per agent, one account per user, one account per role,” said Saviynt CEO Sachin Nayyar. The foundational layer of any governance program is not yet defined.
AI creates more risk work, not less. SentinelOne CEO Tomer Weingarten: AI in cybersecurity “is not just automating work, it’s also creating more work.” Governance, risk and compliance functions face the same multiplier.
Humans must be in the lead, not just in the loop. Huntress CEO Kyle Hanslovan’s reframe is the cleanest articulation of the governance principle for autonomous systems.
Regulatory Change Management is what closes the gap. Run as a continuous workflow on AI Operators with end-to-end lineage, RCM transforms the discipline of adapting controls to regulatory change from a quarterly project into a real-time response to EU AI Act enforcement, NIST AI RMF updates, and SEC disclosure obligations.
Why is AI adoption outpacing AI governance?
AI adoption is outpacing AI governance because business leaders are under board pressure to deploy AI agents at the speed of competitive advantage, while risk and compliance teams are governing them at the speed of legacy GRC processes designed for annual audit cycles. The two clocks are running on different timescales. That is the gap.
Sanjay Beri, CEO of Netskope, named the central tension in his RSAC 2026 interview with CRN:
“How fast can a CISO move to get the governance of their AI, while maintaining the velocity in their company of adoption of that AI?” Sanjay Beri, CEO, Netskope
Both directives are real. They are also in direct tension.
Proofpoint CEO Sumit Dhawan was blunt about the consequence. Cyber and risk teams have been pushed to the sidelines, not by choice but because adoption has outrun their ability to establish controls. AI is now a CEO priority. The risk function is not in a position to slow it down.
The result is a quiet, asymmetric reality inside most enterprises. Agents are operating in production while the governance layer is still on the drawing board.
Why is governing AI agents a risk architecture problem, not a security tool problem?
Governing AI agents is a risk architecture problem because no point security tool can connect agent behavior, access controls, policy frameworks, and audit trails into the single enterprise-wide picture that effective oversight requires. It needs GRC infrastructure built for autonomous systems, not a feature.
George Kurtz, CEO of CrowdStrike, framed the scope of exposure clearly:
“If agents are having access to shells, having access to data and workflows, how do you even know what’s going on?” George Kurtz, Co-Founder and CEO, CrowdStrike
AI agents are not passive tools. They are autonomous actors. They execute workflows, access data, traverse systems, and accumulate permissions at machine speed. When they are compromised or misconfigured, the blast radius compounds in minutes.
Art Gilliland, CEO of Delinea, said what most are thinking but few have said plainly:
“I don’t know of any cases where an AI agent was taken over and that was what caused the breach. But it’s going to happen.” Art Gilliland, CEO, Delinea
The most striking admission came from Saviynt CEO Sachin Nayyar:
“We don’t know if we are going to have one account per agent, one account per user, one account per role. The agent architecture is undecided at this time.” Sachin Nayyar, CEO, Saviynt
The identity and access architecture for AI agents, the bedrock on which any governance program is built, is not yet settled. Agents are being deployed faster than the frameworks designed to control them. Closing that gap requires GRC infrastructure that governs AI agents end-to-end, connecting agent behavior, access controls, policy frameworks, and audit trails into one enterprise-wide picture.
Is AI a new security paradigm? Mimecast’s CEO says no.
AI is not a new paradigm to secure. The fundamentals of cybersecurity and risk management have not changed. What has changed is the operating tempo at which they must run.
Mimecast CEO Marc van Zadelhoff offered the strongest counterweight at RSAC 2026 to the “AI changes everything” narrative:
“At the end of the day, you have to think about the network, the endpoint, the application, the data and the identity. And then, of course, the humans that use it. Those are the tenets of cybersecurity. They have been forever.” Marc van Zadelhoff, CEO, Mimecast
He is right about the tenets. The fundamentals of risk and compliance management have not changed either. Visibility, control, accountability, evidence. What has changed is the operating tempo. AI agents force every fundamental to operate at machine speed, simultaneously, across every system in the enterprise. That is not a paradigm shift. It is a stress test of the existing one.
The platforms that pass the stress test will be those built for both speed and depth. The ones that fail will be the legacy GRC platforms designed for annual audit cycles and static controls.
Why does AI create more risk work, not less?
AI creates more risk work because every deployed agent expands the surface area of risk: new controls to validate, new audit evidence to collect, new regulatory exposure to manage, and new third-party dependencies to assess. The assumption that AI will reduce the burden on risk and compliance functions is not supported by the operators closest to it.
Tomer Weingarten, CEO of SentinelOne, challenged the labor-saving assumption directly:
“AI for cybersecurity is not just something to help scale the workforce, as it does for every industry. It also produces more work for the cybersecurity operator. It’s not just automating work, it’s also creating more work.” Tomer Weingarten, Co-Founder and CEO, SentinelOne
The same logic applies more sharply to risk and compliance. The deadlines are concrete. EU AI Act enforcement of high-risk system obligations begins in August 2026. NIST AI RMF adoption is accelerating across federal agencies and their supply chains. Boards are increasingly treating AI oversight as a fiduciary responsibility, not a technical detail. The CCO who cannot produce evidence of agent-level controls is the CCO whose name appears in the next 8-K disclosure.
Risk teams that assumed AI would automate their workload down will be caught flat-footed. The teams that win will be those running Regulatory Change Management as a continuous workflow rather than a quarterly project.
What does “humans in the lead, not just in the loop” mean for AI governance?
“Humans in the lead” means humans define the risk appetite, set the policy boundaries, and own the accountability framework before the AI acts. “Humans in the loop” only requires a human to review or approve specific outputs. The distinction matters because human-in-the-loop review breaks down when AI operates at machine speed across thousands of decisions simultaneously.
The cleanest articulation of the principle came from Huntress CEO Kyle Hanslovan:
“AI requires humans in the lead, not just humans in the loop. They have to be guiding the AI. They have to be guiding the detection research.” Kyle Hanslovan, Co-Founder and CEO, Huntress
“Humans in the loop” has become a compliance checkbox. A human reviews. A human approves. A human signs off. That model breaks down under volume.
“Humans in the lead” is a different standard. It is governance by design rather than governance by exception. When AI acts, autonomously or otherwise, it does so within a deliberately designed structure. That is a Regulatory Change Management problem, not a security tooling problem. It requires policy management, risk assessment frameworks, control libraries, and audit workflows built to govern dynamic, autonomous systems.
How should risk leaders tier AI workflows for governance?
Risk leaders should tier AI workflows by the cost of error. Workflows where an incorrect AI decision is recoverable can be fully autonomous. Workflows where an incorrect decision creates material risk require human validation. Workflows that affect regulated outcomes require documented human accountability and audit evidence.
Arctic Wolf CEO Nick Schneider added the practical frame at RSAC 2026:
“There are certain workflows where it can be fully autonomous, and if it’s wrong, it’s not the end of the world. There are others where, if it’s wrong, it’s a big deal.” Nick Schneider, CEO, Arctic Wolf
That is exactly the kind of structured tiering a mature Regulatory Change Management program enables. Knowing which decisions can be machine-trusted, which require human validation, and which are governed by exception. It is not about slowing AI down. It is about knowing where the guardrails belong before the train leaves the station.
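The tiering Schneider describes can be made concrete. A minimal sketch in Python, using hypothetical names (`GovernanceTier`, `Workflow`, `assign_tier` are illustrative, not any vendor's API): classify each workflow by whether an error is recoverable and whether it touches a regulated outcome.

```python
from dataclasses import dataclass
from enum import Enum


class GovernanceTier(Enum):
    AUTONOMOUS = "autonomous"              # a wrong decision is recoverable
    HUMAN_VALIDATED = "human_validated"    # a wrong decision creates material risk
    HUMAN_ACCOUNTABLE = "human_accountable"  # regulated outcome: documented accountability


@dataclass
class Workflow:
    name: str
    error_recoverable: bool           # can an incorrect decision be rolled back cheaply?
    affects_regulated_outcome: bool   # does it touch a regulated decision or disclosure?


def assign_tier(wf: Workflow) -> GovernanceTier:
    """Tier a workflow by the cost of error: regulated outcomes first,
    then recoverability."""
    if wf.affects_regulated_outcome:
        return GovernanceTier.HUMAN_ACCOUNTABLE
    if wf.error_recoverable:
        return GovernanceTier.AUTONOMOUS
    return GovernanceTier.HUMAN_VALIDATED


# Alert triage can run autonomously; disclosure drafting cannot.
triage = assign_tier(Workflow("alert triage", True, False))
disclosure = assign_tier(Workflow("disclosure drafting", False, True))
```

The point of encoding the tiering is that it becomes a policy artifact: auditable, versioned, and applied uniformly, rather than a judgment call made per deployment.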
How big is the AI Governance opportunity?
Sophos CEO Joe Levy called AI-driven cybersecurity transformation “probably the biggest market opportunity that I’ve ever seen in my life,” affecting hundreds of millions of businesses globally. AI Governance is the part of that transformation that has the least mature infrastructure and the highest cost of getting it wrong.
“Hundreds of millions of businesses are about to go through this transformation. This is an economic wave that is about to splash down on the whole planet. It’s probably the biggest market opportunity that I’ve ever seen in my life.” Joe Levy, CEO, Sophos
The organizations that navigate this wave successfully will treat AI Governance as infrastructure, not an afterthought. The organizations that don’t will face a different outcome. Regulatory exposure. Audit failures. Operational disruptions. Reputational damage from AI systems operating outside any meaningful risk framework.
Why do generic AI agents fail the control function, and what do AI Operators do differently?
Generic AI agents are thin LLM wrappers built for productivity, not for the second line of defense. AI Operators are a different category: identity-aware, scope-constrained, audit-emitting, expert-supervised, and provider-independent by design. The distinction is architectural, not feature-level.
The CEOs CRN interviewed at RSAC 2026 were describing the failure modes of generic agents. Agents accessing shells without provenance. Agents accumulating permissions without audit. Agents acting at machine speed without justification. Those failure modes are not bugs. They are the predictable consequences of treating a foundation model with a prompt and a few tools as if it were an enterprise control function.
AI Operators invert that architecture. Identity is bound natively through SSO and SCIM. Action limits are policy artifacts, not token caps. Every action emits a structured, immutable audit record. Justification and confidence are calibrated and schema-bound. The model layer is abstracted and swappable. Expert review is a platform primitive, not a workflow add-on. End-to-end lineage connects every agent action back to the policy and authoritative source that authorized it.
For productivity, an agent is sufficient. For control, only an Operator is defensible.
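One of the architectural properties above, the structured, immutable audit record with lineage back to an authorizing policy, can be sketched in a few lines. This is an illustrative model only (the field names and hash-chaining scheme are assumptions, not Archer's implementation): each record is frozen after emission and carries the digest of the previous record, making tampering detectable.

```python
import hashlib
import json
from dataclasses import dataclass, asdict


@dataclass(frozen=True)  # frozen: a record cannot be mutated after emission
class AuditRecord:
    agent_id: str       # identity would be bound via SSO/SCIM in a real deployment
    action: str
    policy_ref: str     # the policy that authorized the action (lineage)
    justification: str  # schema-bound justification, per the Operator model
    prev_hash: str      # digest of the prior record, forming a tamper-evident chain

    def digest(self) -> str:
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()


genesis = AuditRecord("agent-007", "read:customer_data", "POL-114",
                      "KYC refresh", prev_hash="0" * 64)
follow = AuditRecord("agent-007", "write:case_file", "POL-114",
                     "KYC refresh", prev_hash=genesis.digest())
```

Because every action emits such a record, sampling and walkthrough testing by Internal Audit become mechanical: verify the chain, then verify each `policy_ref`.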
How does Regulatory Change Management close the governance gap?
Regulatory Change Management closes the gap between AI adoption and AI governance by transforming the discipline of adapting controls to regulatory change from a quarterly project into a continuous workflow. With end-to-end lineage from authoritative source through obligation, control, process, and AI agent action, every regulatory change can be propagated deterministically, and every AI agent decision can be traced back to the policy that authorized it.
The deadlines that matter do not arrive on quarterly cycles. EU AI Act enforcement of high-risk system obligations begins in August 2026. NIST AI RMF revisions land continuously. SEC disclosure expectations are evolving. Nor do AI agent deployments arrive on quarterly cycles: new agents enter production every week, in every business unit, with every new SaaS update.
A Regulatory Change Management program built on the right architecture handles those two clocks together. When CFR §X is amended, when NIST publishes a revision, when a new EU directive lands, an effective AI Operator identifies every affected obligation, control, business unit, process, and AI agent. When a new agent is deployed, the same lineage shows which obligations it touches and which controls it must satisfy.
That is what makes the gap closable. Not faster reviews. Not more reviewers. A continuous, lineage-driven workflow and AI Operators that run at the operating tempo of the systems they govern.
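Deterministic propagation over a lineage graph is, at bottom, a graph traversal. A minimal sketch, with an entirely hypothetical lineage (the node names and graph shape are invented for illustration): when an authoritative source changes, walk downstream to surface every affected obligation, control, and agent.

```python
from collections import deque

# Hypothetical lineage: authoritative source -> obligations -> controls -> agents.
LINEAGE = {
    "CFR-X": ["OBL-1", "OBL-2"],
    "OBL-1": ["CTRL-AML-3"],
    "OBL-2": ["CTRL-PRIV-7"],
    "CTRL-AML-3": ["agent-kyc"],
    "CTRL-PRIV-7": ["agent-dsar", "agent-kyc"],
}


def impacted(node: str) -> set:
    """Breadth-first walk: everything downstream of a changed node."""
    seen, queue = set(), deque([node])
    while queue:
        for child in LINEAGE.get(queue.popleft(), []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen


# When CFR X is amended, every affected obligation, control, and agent surfaces.
affected = impacted("CFR-X")
```

The same traversal run in reverse (agent back to source) is what lets an auditor trace any agent action to the regulation that ultimately authorized it.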
What should CISOs, CCOs, CROs, and CAEs do now?
Three priorities are urgent for risk and compliance leaders in 2026:
1. Establish AI Governance frameworks now. The agent identity architecture is unsettled. Organizations that wait for industry consensus will be governing retroactively. Map every deployed agent. Define accountability for every decision class. Write the policy before the regulator does.
2. Reframe risk appetite for autonomous systems. The question is no longer just what AI can do. It is what AI should be allowed to do, who is accountable when it acts outside those boundaries, and what evidence the organization can produce when an auditor or a regulator asks.
3. Run Regulatory Change Management as a continuous workflow. Legacy GRC platforms designed for annual audit cycles and static controls cannot keep pace with the operating tempo of AI agents or the cadence of AI regulation. The infrastructure has to match both.
For the CRO, the deliverable is a defensible AI control posture you can present to the board, the audit committee, and the regulator. For the CCO, it is a single platform that enforces policy uniformly across business units and produces evidence that survives a regulatory exam. For the CISO, it is identity-bound, scope-constrained, audit-emitting AI engineered to your existing IAM model, not in spite of it. For Internal Audit, it is sampling, walkthrough, and substantive testing programs that work because every AI action is consistently structured, justified, and reviewable.
What is GRC infrastructure that governs AI agents?
GRC infrastructure that governs AI agents is the policy, control, evidence, and accountability layer purpose-built to oversee autonomous, machine-speed systems. It runs Regulatory Change Management as a continuous workflow. It treats the foundation model as a swappable component rather than the system itself. It produces the audit-ready evidence at the agent level that boards, auditors, and regulators now expect.
The governance problem is solvable. The organizations building Regulatory Change Management programs that are AI-ready, with the right frameworks, the right data integration, and the right accountability structures, will move faster and with more confidence than those retrofitting tools that were never designed for agentic systems.
That is what modern Regulatory Change Management makes possible.
AI agents need guardrails. Enterprises need GRC infrastructure. That is the opportunity.
FAQs
What is AI Governance?
AI Governance is the framework of policies, controls, accountability structures, and audit evidence that determines what AI systems are allowed to do, what evidence the organization produces about their behavior, and who is accountable when they operate outside defined boundaries. For AI agents specifically, it is the risk and compliance layer required to oversee autonomous, machine-speed systems.
What is the difference between an AI agent and an AI Operator?
A generic AI agent is a thin layer over a foundation model: a prompt, a few tools, and a reasoning loop. It is built for productivity, not for the control function. An AI Operator is a different category of system, engineered as a governed actor inside an enterprise control fabric. AI Operators are identity-aware, scope-constrained, audit-emitting, expert-supervised, and provider-independent by design. For productivity, an agent is sufficient. For control, only an Operator is defensible.
How does Regulatory Change Management close the gap between AI adoption and AI governance?
Regulatory Change Management closes the gap by transforming the discipline of adapting controls to regulatory change from a quarterly project into a continuous workflow. With end-to-end lineage from authoritative source through obligation, control, process, and AI agent, every regulatory change propagates deterministically, and every AI agent action traces back to the policy that authorized it.
What is the difference between humans in the lead and humans in the loop?
“Humans in the loop” means a human reviews or approves specific AI outputs before action. “Humans in the lead” means humans define the risk appetite, set policy boundaries, and own the accountability framework before the AI is deployed. The first is a checkpoint. The second is governance by design. Huntress CEO Kyle Hanslovan introduced the distinction at RSAC 2026.
When does the EU AI Act take effect for high-risk AI systems?
EU AI Act enforcement of high-risk system obligations begins in August 2026. Organizations operating in the European Union or serving EU citizens must establish documented governance, risk management, and conformity assessment for AI systems classified as high-risk under the regulation.
What did cybersecurity CEOs say at RSAC 2026 about AI agents?
CRN interviewed the CEOs of 15 leading cybersecurity vendors at RSAC 2026, including CrowdStrike, SentinelOne, Netskope, Proofpoint, Mimecast, Sophos, Saviynt, Delinea, Huntress, and Arctic Wolf. The recurring themes were that the agent identity architecture is unsettled, that AI creates more work for risk and security teams rather than less, that “humans in the lead” must replace “humans in the loop” as the governance standard, and that the market opportunity is among the largest in cybersecurity history.
How does AI Governance differ from traditional GRC?
Traditional GRC was designed for annual audit cycles and static controls. AI Governance must run on Regulatory Change Management as a continuous workflow, ingesting agent behavior in real time, applying policy at decision speed, and producing audit-ready evidence at the agent level. The fundamentals of risk and compliance management are unchanged. The operating tempo and the data integration requirements are not.
Archer is the GRC infrastructure that governs AI agents. Built on 25 years of GRC depth, 1,200+ enterprise customers, and the regulatory intelligence backbone trusted by six of the top ten U.S. banks. The speed of AI. The depth of experience. Learn more at archerirm.com.








