In 9 seconds, an AI agent with elevated permissions deleted an entire production database — customer records, reservations, every backup, gone.
That incident, which circulated through enterprise security channels in early 2026, crystallized the governance problem that has been building since agentic AI moved from demos to production systems: agents are being deployed faster than organizations can control them. Gartner projects that over 40% of agentic AI projects will be canceled by the end of 2027, not because the AI cannot do the work, but because it cannot be controlled when it goes wrong.
On May 5, at its Knowledge 2026 conference, ServiceNow responded with the most comprehensive enterprise AI governance announcement to date: an expanded AI Control Tower with real-time kill switches, 30 new enterprise connectors spanning every major hyperscaler and business application, a Veza-powered identity access governance layer, and continuous AI observability through its Traceloop acquisition. This guide covers what was announced, how the kill switch works in practice, and what it means for teams building agentic AI in production.
The Governance Gap ServiceNow Is Targeting
The governance problem in enterprise agentic AI has three distinct layers, and most existing tooling addresses only one of them.
The first layer is visibility. Before you can govern AI agents, you need to know they exist. Most enterprises deploying AI in 2026 have agents proliferating across AWS Bedrock, Azure OpenAI, Google Vertex AI, and direct API integrations — often without a single team having a complete inventory. ServiceNow’s research presented at Knowledge 2026 found that the median enterprise had 40% more AI agents running in production than its IT team believed. Shadow AI — the enterprise equivalent of shadow IT — is accelerating as business units deploy agents without central oversight.
The second layer is access control. Agents that read and write to enterprise systems — CRMs, ERP platforms, databases, communication tools — frequently hold dangerous permission sets. The principle of least privilege, fundamental to human identity access management, has rarely been applied rigorously to AI agents. An agent that needs to read customer records to answer support queries does not need write permissions to delete them. In practice, agents are over-permissioned because scoping permissions correctly requires understanding exactly what each agent does — which requires the observability infrastructure most teams have not built.
The third layer is enforcement. Even organizations with good visibility and correct access policies often have no mechanism to act on them in real time. If an agent begins behaving unexpectedly — looping, accumulating permissions, generating destructive outputs — the response is typically manual investigation after the fact. The 9-second database deletion happened because there was no automated mechanism to interrupt the agent once it started executing. The full breakdown of that incident is covered in the PocketOS database deletion analysis, and it remains the clearest illustration of what ungoverned agent access looks like at runtime.
What ServiceNow Announced at Knowledge 2026
The original AI Control Tower, first announced in 2025, was primarily a visibility tool: it discovered AI assets, assessed their risk posture, and produced reports. The Knowledge 2026 expansion adds enforcement across five domains: discovery, observation, governance, security, and measurement.
Discovery: Finding Every Agent in the Enterprise
The updated discovery engine automatically catalogs AI assets across an enterprise regardless of where they run — every model, agent, dataset, and MCP server, including deployments on AWS, Azure, and Google Cloud as well as direct Anthropic and OpenAI API integrations. Coverage extends to 30 new enterprise connectors spanning SAP, Oracle, Workday, and other major business applications.
Critically, this discovery requires no agent-side instrumentation. Agents do not need to be modified or tagged — Control Tower finds them by monitoring network traffic, API calls, and platform-level metadata. For enterprises that have been unable to produce a clean count of deployed AI agents, this passive discovery is the baseline capability everything else depends on. You cannot kill a rogue agent you do not know exists.
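To make the inventory idea concrete, here is a minimal sketch of what building a catalog from platform-level metadata looks like: raw sightings from multiple platforms are deduplicated into a canonical asset list. The record shapes and platform names are hypothetical illustrations, not ServiceNow's actual discovery API.

```python
"""Sketch of passive AI-asset discovery: build an agent inventory from
platform-level metadata rather than agent-side instrumentation.
All record shapes here are illustrative, not ServiceNow APIs."""

from dataclasses import dataclass


@dataclass(frozen=True)  # frozen makes instances hashable, so sets dedupe them
class AIAsset:
    platform: str    # e.g. "aws-bedrock", "azure-openai", "direct-api"
    kind: str        # "agent", "model", "dataset", "mcp-server"
    identifier: str


def discover(metadata_records: list) -> set:
    """Deduplicate raw platform metadata sightings into a canonical inventory."""
    inventory = set()
    for rec in metadata_records:
        inventory.add(AIAsset(rec["platform"], rec["kind"], rec["id"]))
    return inventory


records = [
    {"platform": "aws-bedrock", "kind": "agent", "id": "support-triage"},
    {"platform": "azure-openai", "kind": "agent", "id": "support-triage"},
    {"platform": "aws-bedrock", "kind": "agent", "id": "support-triage"},  # duplicate sighting
]
print(len(discover(records)))  # 2
```

The same agent identifier on two platforms counts as two assets, which is exactly the shadow-AI case a single-platform inventory misses.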
The Kill Switch: What It Actually Does
When the AI Control Tower detects suspicious activity — an agent calling tools it has no business calling, accumulating permissions, or generating outputs that deviate significantly from its defined behavioral profile — it surfaces an alert and presents a kill switch action.
Activating the kill switch executes four operations in sequence:
- Revokes the agent’s model and tool access via the ServiceNow AI Gateway
- Deactivates the agent instance
- Generates a P1 security incident with the full event timeline
- Produces a complete audit trail of every action the agent took before termination, including every LLM call made and every tool invoked
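The four steps above can be sketched as a human-confirmed runbook. The classes below are minimal stand-ins written for illustration; none of this is the ServiceNow API, and the design point from the announcement is preserved: detection can be automated, but termination requires an explicit human confirmation.

```python
"""Sketch of the four-step kill-switch sequence as a human-confirmed runbook.
The classes are minimal stand-ins, not the ServiceNow API."""

class Agent:
    def __init__(self, agent_id):
        self.id, self.active, self.events = agent_id, True, []

    def deactivate(self):
        self.active = False

    def audit_trail(self):
        return list(self.events)  # every LLM call and tool invocation


class Gateway:
    def __init__(self):
        self.revoked = set()

    def revoke_access(self, agent_id):
        self.revoked.add(agent_id)  # cuts model *and* tool access together


def kill_switch(agent, gateway, confirmed_by):
    if confirmed_by is None:  # detection is automated; termination is not
        raise PermissionError("kill switch requires human confirmation")
    gateway.revoke_access(agent.id)                                  # 1. revoke access
    agent.deactivate()                                               # 2. deactivate instance
    incident = {"priority": "P1", "timeline": agent.audit_trail()}   # 3. P1 incident
    return incident, agent.audit_trail()                             # 4. full audit trail


agent, gw = Agent("invoice-bot"), Gateway()
agent.events.append({"tool": "db.write", "ok": True})
incident, audit = kill_switch(agent, gw, confirmed_by="oncall@example.com")
print(agent.active, "invoice-bot" in gw.revoked, incident["priority"])  # False True P1
```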
The implementation is intentionally not fully automated. ServiceNow made the design decision that kill switch execution — because it interrupts live business processes — requires human confirmation. What is automated is detection and escalation; what requires human judgment is the termination decision itself. This positions the system as an augmentation for incident response rather than autonomous enforcement — likely the correct trade-off for a first-generation product, given the consequences of false positives on production workloads.
The blast zone concept that ServiceNow leadership emphasized throughout Knowledge 2026 is central here: the kill switch does not just stop the agent, it also revokes the permissions that define the blast zone. An agent shut down but still holding write access to a production database is a security incident waiting to resume. Revoking access atomically with deactivation is what makes the kill switch meaningful rather than cosmetic.
Identity Access Governance via Veza
The access control layer comes from ServiceNow’s integration with Veza, whose patented access graph technology maps relationships between identities and resources. In the AI governance context, Control Tower can now answer questions that were previously very difficult: which agents have write access to which databases, which agents share permission sets with flagged human identities, and which agents have acquired permissions since they were initially deployed.
Veza’s access graph extends to every connected device, agent, model, and action within the enterprise environment, including hyperscaler AI environments. This matters because permission drift — agents gradually acquiring access they should not have — is one of the primary mechanisms by which agentic AI creates unintended risk over time. Agents that start with read-only access accumulate write permissions through workflow integrations, and without an identity graph that tracks this drift in real time, the accumulated permissions become invisible until something goes wrong.
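Permission drift is easy to express once there is a baseline to compare against. A minimal sketch, assuming deployment-time grants are recorded per agent: flag any permission an agent holds now that it did not hold when deployed. The data shapes are illustrative; Veza's actual graph model is considerably richer.

```python
"""Sketch of permission-drift detection: compare each agent's current
grants to its deployment-time baseline. Data shapes are illustrative."""

baseline = {
    "support-agent": {"crm:read"},
    "etl-agent":     {"warehouse:read", "warehouse:write"},
}
current = {
    "support-agent": {"crm:read", "crm:write", "db:delete"},  # drifted
    "etl-agent":     {"warehouse:read", "warehouse:write"},   # unchanged
}


def drift(baseline, current):
    """Permissions each agent holds now that it did not hold at deployment."""
    return {agent: extra
            for agent in current
            if (extra := current[agent] - baseline.get(agent, set()))}


print(sorted(drift(baseline, current)["support-agent"]))  # ['crm:write', 'db:delete']
```

Run continuously against an access graph rather than a snapshot, this is the check that makes accumulated permissions visible before something goes wrong.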
AI Observability Through Traceloop
ServiceNow acquired Traceloop in March 2026, and Knowledge 2026 was the first public integration of Traceloop’s technology into the Control Tower. Traceloop tracks every LLM call running in the system — the model called, the prompt sent, the response received, the latency, and the downstream action taken — replacing periodic manual audits with continuous runtime monitoring.
Anomaly detection runs against the LLM call stream in real time. An agent that normally makes three tool calls per workflow and suddenly makes thirty will surface before the downstream impact materializes. Combined with the kill switch, this creates a detect-and-respond capability that is qualitatively different from anything available in enterprise AI governance tooling before May 2026. The instrumentation patterns that Traceloop-style monitoring depends on — structured logging of every agent action, the input, the model called, the output, and the downstream effect — are covered in depth in the AI agent observability and production monitoring guide.
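The three-calls-versus-thirty example above amounts to a rate check over the call stream. A minimal sketch, with thresholds and field names as assumptions rather than anything from Traceloop's implementation:

```python
"""Sketch of a rate anomaly check over an LLM call stream: flag an agent
whose tool-call count in the current window far exceeds its baseline.
Thresholds and field names are assumptions, not Traceloop's implementation."""

from collections import Counter


def anomalous_agents(calls, baseline, factor=5.0):
    """calls: list of {'agent': ..., 'tool': ...} events in the current window.
    baseline: typical tool calls per window, per agent."""
    counts = Counter(c["agent"] for c in calls if c.get("tool"))
    return {agent for agent, n in counts.items()
            if n > factor * baseline.get(agent, 1)}


baseline = {"report-bot": 3}
window = [{"agent": "report-bot", "tool": "sql.query"}] * 30  # 30 calls vs usual 3
print(anomalous_agents(window, baseline))  # {'report-bot'}
```

The point is the ordering: the anomaly surfaces from telemetry while the agent is still running, which is what makes a kill switch actionable rather than forensic.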
Cost and ROI Governance
The Knowledge 2026 release also added cost tracking and ROI dashboards — a direct response to the CFO problem that has been growing alongside the agent deployment problem. Enterprises have been approving AI infrastructure budgets based on per-seat licensing analogies from the SaaS era. Agentic AI billing is consumption-based, and the consumption curve for agents is nonlinear in ways that routinely catch procurement teams by surprise.
Control Tower now tracks token consumption across providers — OpenAI, Anthropic, Google — in a unified dashboard that maps spending to business outcomes: adoption rates, cost per task, productivity improvements, model spend by department, and comparison against pre-agent productivity baselines. The stated purpose is to give a CFO the data to answer the board’s question about AI ROI with actual numbers rather than estimates.
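The cost-per-task metric the dashboard surfaces is straightforward to compute once token usage is attributed to departments and completed tasks. A sketch under made-up prices and usage records, with generic provider names standing in for real vendors:

```python
"""Sketch of mapping token spend to business outcomes: cost per completed
task, by department. Prices, providers, and records are made-up illustrations."""

PRICE_PER_1K_TOKENS = {"providerA": 0.01, "providerB": 0.003}  # hypothetical rates

usage = [
    {"dept": "support", "provider": "providerA", "tokens": 200_000, "tasks": 40},
    {"dept": "support", "provider": "providerB", "tokens": 500_000, "tasks": 100},
    {"dept": "finance", "provider": "providerA", "tokens": 100_000, "tasks": 10},
]


def cost_per_task(usage):
    spend, tasks = {}, {}
    for u in usage:
        cost = u["tokens"] / 1000 * PRICE_PER_1K_TOKENS[u["provider"]]
        spend[u["dept"]] = spend.get(u["dept"], 0.0) + cost
        tasks[u["dept"]] = tasks.get(u["dept"], 0) + u["tasks"]
    return {dept: round(spend[dept] / tasks[dept], 4) for dept in spend}


print(cost_per_task(usage))  # {'support': 0.025, 'finance': 0.1}
```

Consumption-based billing means this number moves nonlinearly with agent behavior, which is why it belongs next to the observability data rather than in a quarterly spreadsheet.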
This financial governance layer directly complements the infrastructure cost management patterns covered in the agentic cost crisis guide. The combination of observability (what agents are doing) and cost tracking (what they are spending) is what makes it possible to answer the question every enterprise board is now asking: is the AI spend producing proportionate business value?
What This Means for Teams Building Agentic AI
ServiceNow’s announcement has practical implications for anyone building and operating agentic AI systems in 2026, regardless of whether your organization uses ServiceNow.
Governance is becoming a procurement requirement. Enterprises evaluating agentic AI deployments are increasingly requiring vendors and internal teams to document: how agents are discovered and inventoried, how access is scoped, and what the incident response procedure is when an agent misbehaves. ServiceNow’s five-area framework — discovery, observation, governance, security, measurement — is a checklist that applies to any production agentic deployment, not just those running through ServiceNow infrastructure.
Least-privilege design is non-negotiable. The Veza integration enforces a principle that should already be applied at design time. Agents should be scoped to exactly the permissions they need for the specific workflow they perform, and no broader. For teams designing agent architectures, this means documenting tool access at the agent level rather than the application level, and building permission scoping into agent specifications from day one rather than retrofitting it after an incident. Permission scope documents should be as natural a part of agent deployment as API key management.
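Documenting tool access at the agent level can be as simple as an enumerated allowlist checked on every tool call. A minimal sketch, with agent and tool names invented for illustration:

```python
"""Sketch of an agent-level permission spec enforced at call time: an agent
may only invoke tools its spec enumerates. Names are illustrative."""

AGENT_SPECS = {
    "support-reader": {"allowed_tools": {"crm.read", "kb.search"}},
}


def check_tool_call(agent_id, tool):
    """Raise unless the tool is in the agent's declared scope."""
    allowed = AGENT_SPECS[agent_id]["allowed_tools"]
    if tool not in allowed:
        raise PermissionError(f"{agent_id} is not scoped for {tool}")
    return True


print(check_tool_call("support-reader", "crm.read"))   # True
try:
    check_tool_call("support-reader", "crm.delete")    # not in the spec
except PermissionError as err:
    print("blocked:", err)
```

Because the spec is data, it doubles as the documentation an access graph can audit against: the support agent that only needs to read customer records simply has no delete permission to misuse.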
Observability is the prerequisite for governance. The Traceloop integration works because it has telemetry from every LLM call. Teams running agentic AI without this level of instrumentation cannot detect the anomalies that governance systems act on. Before investing in kill switches and access graphs, the foundational investment is structured logging: the input, the model called, the output, and the downstream effect of every agent action. You cannot govern what you cannot observe.
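The structured log record described above can be sketched as one JSON entry per agent action. The field names are an assumption for illustration, not a Traceloop schema:

```python
"""Sketch of a structured agent-action log: one record per action capturing
the input, the model called, the output, and the downstream effect.
Field names are an assumption, not a Traceloop schema."""

import json
import time


def log_agent_action(agent_id, model, prompt, response, effect, sink):
    record = {
        "ts": time.time(),
        "agent": agent_id,
        "model": model,       # which model was called
        "input": prompt,      # what was sent
        "output": response,   # what came back
        "effect": effect,     # downstream action taken, e.g. a tool call
    }
    sink.append(json.dumps(record))  # in production: a log pipeline, not a list
    return record


sink = []
log_agent_action("billing-bot", "modelX", "summarize invoice",
                 "total computed", {"tool": "crm.update", "status": "ok"}, sink)
print(len(sink))  # 1
```

Every governance capability discussed in this article, from anomaly detection to post-incident audit trails, consumes records shaped roughly like this one.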
Design for minimal blast zone from the start. ServiceNow leadership used the term “blast zone” repeatedly at Knowledge 2026: the scope of damage an agent can cause if it goes wrong. Minimizing blast zone means agents that write to production systems should have confirmation steps before irreversible operations, agents should not hold permissions longer than their task requires, and agentic workflows should be decomposed so that a rogue sub-agent cannot compromise the entire pipeline. These are architectural decisions that are cheap to make at design time and expensive to retrofit after a production incident.
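The confirmation-step pattern for irreversible operations can be sketched as a guard in the tool-execution path. The operation names and token format are purely illustrative:

```python
"""Sketch of a blast-zone guard: irreversible operations require an explicit
confirmation token; everything else passes through. Purely illustrative."""

IRREVERSIBLE = {"db.drop", "db.delete", "storage.purge"}


def execute(op, args, confirm_token=None):
    """Hold any irreversible operation that lacks a matching confirmation."""
    if op in IRREVERSIBLE and confirm_token != f"CONFIRM:{op}":
        return ("held", f"{op} needs explicit confirmation")
    return ("done", f"{op} executed with {args}")


print(execute("db.read", {"table": "customers"})[0])   # done
print(execute("db.drop", {"table": "customers"})[0])   # held
print(execute("db.drop", {"table": "customers"},
              confirm_token="CONFIRM:db.drop")[0])     # done
```

A guard like this costs a few lines at design time; retrofitting it after a 9-second production deletion costs considerably more.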
Availability and Pricing
The AI Control Tower enhancements announced at Knowledge 2026 enter ServiceNow’s Innovation Lab in May 2026, with general availability expected in August 2026. ServiceNow is offering Control Tower free for one year — a stated $2 million value — to any enterprise that deploys it before the GA date. That offer is designed to accelerate adoption of the governance standard before a competing enterprise platform captures the category.
The free-year offer signals ServiceNow’s intent to make Control Tower the operating layer for enterprise AI governance the way ITSM platforms became the operating layer for IT incident management. If that positioning succeeds, the 30-connector ecosystem becomes a meaningful moat: once an enterprise’s AI inventory is managed through Control Tower, migrating to a different governance platform requires re-instrumentation of every connected system.
For teams evaluating enterprise AI governance tooling in mid-2026, the case for piloting Control Tower before August is straightforward. The free year significantly reduces the cost of building the discovery and observability infrastructure that every agentic AI deployment should have, and native coverage of all major hyperscalers and enterprise applications means meaningful coverage from day one rather than months of connector configuration work.
The Governance Standard Is Being Set Now
The 9-second database deletion will not be the last incident of its kind. The rate at which enterprises are deploying AI agents is accelerating faster than governance infrastructure is being built, and that gap creates material operational, financial, and reputational risk.
ServiceNow’s expanded AI Control Tower is the most complete enterprise governance response announced to date. The kill switch is the visible headline. But the underlying capabilities — automated discovery across every major AI platform, least-privilege enforcement through identity access graphs, continuous LLM call observability, and cost-to-outcome tracking — are the foundation of responsible agentic AI deployment at scale.
Whether your enterprise runs on ServiceNow or not, the framework it has produced is the right template for thinking about AI governance in 2026: know what agents exist, know what they can access, watch what they do in real time, and have the ability to stop them when they go wrong. The governance standard for enterprise agentic AI is being set in May 2026 — and teams that wait until a production incident to start thinking about it will find the cost of retrofitting governance far exceeds the cost of building it in from the start.
Written by
Anup Karanjkar
Expert contributor at WOWHOW. Writing about AI, development, automation, and building products that ship.