Chapter 5

Data governance and security 

Modern IT infrastructure is crowded with AI copilots, chatbots, assistants, and workflow agents that sit inside portals, collaboration tools, browsers, and SaaS suites. Left unchecked, this sprawl fragments data, multiplies risk, and clouds accountability. IT leaders need to weave strong governance and security into every layer of the tech stack for reliable AI operations in IT.

of U.S. IT leaders point to governance and compliance as their top barrier to AI adoption. - Atomicwork’s State of AI in IT 2025 report

Here’s an actionable framework that CIOs can adopt to reimagine data governance and security in the age of agentic AI.

Set up a central AI control portal

Shadow agents appear faster than you can audit them. Without a single registration point, you’ll soon be left with an unmanaged sprawl of AI tools.

Strategic actions:

  1. Audit existing AI agents: Take an inventory of your AI agents, identify overlapping ones, and retire those that add no unique value or that violate least-privilege data access.

  2. Mandate AI agent registration: Mirror the Okta model by requiring every new agent to authenticate through a centralized portal that applies policies, RBAC, and data-handling rules (a minimal registry sketch follows this list).

  3. Auto-discover rogue agents: Use scanning tools to flag anything that connects to your infrastructure (via collaboration tools, APIs, or browser plug-ins) without a valid identity.
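Here’s a minimal sketch of what such a registration point could look like, assuming a hypothetical in-memory registry (all names are illustrative; in production you’d back this with an identity provider such as Okta):

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    agent_id: str
    owner: str                                 # accountable team or person
    scopes: set = field(default_factory=set)   # least-privilege data scopes

class AgentRegistry:
    def __init__(self):
        self._agents: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        """Every new agent must register before touching infrastructure."""
        self._agents[record.agent_id] = record

    def is_authorized(self, agent_id: str, scope: str) -> bool:
        """RBAC-style check: unknown agents and out-of-scope requests fail."""
        record = self._agents.get(agent_id)
        return record is not None and scope in record.scopes

    def flag_rogue(self, observed_ids: list[str]) -> list[str]:
        """Auto-discovery: anything connecting without a valid ID is flagged."""
        return [a for a in observed_ids if a not in self._agents]

registry = AgentRegistry()
registry.register(AgentRecord("hr-onboarding-bot", owner="people-ops", scopes={"hr:read"}))
print(registry.is_authorized("hr-onboarding-bot", "hr:read"))       # True
print(registry.is_authorized("hr-onboarding-bot", "finance:read"))  # False: out of scope
print(registry.flag_rogue(["hr-onboarding-bot", "unknown-browser-plugin"]))
```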

Define specific roles, responsibilities & escalation paths for AI agents

When human and AI duties overlap and task delegation isn’t thought through, resolutions slow down and audit trails turn muddy.

Strategic actions:

  1. Publish a RACI-style matrix that states which tasks AI owns, when it must escalate, and how hand-offs are logged (see the sketch after this list).

  2. Integrate AI responses into the employee’s flow of work (chat, email, mobile) and notify them when issues are routed to human agents.

  3. Capture immutable audit trails of both AI and human interventions to maintain compliance, transparency, and accountability.

  4. Plan for AI failure modes and expand incident taxonomies to include unauthorized model access, data poisoning, biased output, hallucination, and agent-vs-human accountability errors.
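As an illustration of the first and third actions, here is a sketch that pairs a RACI-style ownership table with a hash-chained audit trail; the task names and fields are assumptions, not a prescribed schema:

```python
import hashlib, json, time

# Which tasks the AI owns outright vs. must hand to a human.
RACI = {
    "password_reset": {"owner": "ai",    "escalate_if": "mfa_locked"},
    "access_request": {"owner": "ai",    "escalate_if": "privileged_scope"},
    "sev1_incident":  {"owner": "human", "escalate_if": None},
}

class AuditTrail:
    """Each entry embeds the previous entry's hash, so editing any
    historical record breaks the chain and becomes detectable."""
    def __init__(self):
        self.entries = []
        self._last_hash = "genesis"

    def log(self, actor: str, task: str, action: str) -> None:
        entry = {
            "ts": time.time(), "actor": actor, "task": task,
            "action": action, "prev": self._last_hash,
        }
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)

trail = AuditTrail()
task = "access_request"
if RACI[task]["owner"] == "ai":
    trail.log("ai-agent", task, "resolved")
else:
    trail.log("ai-agent", task, "escalated-to-human")
print(trail.entries[-1]["hash"][:12])
```

Because every entry carries the previous entry’s hash, the log is tamper-evident: any after-the-fact edit invalidates all later hashes.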

Embrace secure-by-design architecture principles

Prioritize responsible AI principles while incorporating AI into your tech stack. Having worked with tech-forward CIOs for the past 3 years, we’ve put together the TRUST framework for responsible and ethical AI adoption.

Transparent: Clearly disclose to end users when they receive AI responses. Each response can surface “why & how” details (cited sources, reasoning trails) to reduce the black-box effect.

Responsible: Enforce AI guardrails to limit disallowed topics for ethical and enterprise-appropriate outputs. Run bias tests and toxic-content filters both before and after models move to production.

User-centric: Personalize user interactions at scale without compromising speed or privacy, and maintain consistent feedback loops so AI models continuously learn and deliver contextual responses. Offer employees the option to escalate to a human agent at any step.

Secure: Run strict input validation (sanitization, anomaly detection, PII masking) to shield models from handling sensitive data. Real-time output monitoring also blocks leaks, toxicity, or policy violations before they reach users (a masking sketch follows this framework).

Traceable: Tamper-proof, time-stamped logs capture every prompt, decision, and downstream action, letting IT admins trace an AI agent’s “decision path” for compliance and RCA.
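To make the “Secure” step concrete, here is a minimal input-validation sketch; the regex patterns are illustrative stand-ins for a dedicated DLP or PII-detection service:

```python
import re

# Illustrative patterns only; a real deployment would use a PII/DLP service.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(prompt: str) -> str:
    """Replace detected PII with typed placeholders before the model sees it."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()}]", prompt)
    return prompt

def validate_input(prompt: str, max_len: int = 4000) -> str:
    """Basic sanitization and anomaly check, then PII masking."""
    if len(prompt) > max_len:
        raise ValueError("prompt exceeds expected length; possible abuse")
    return mask_pii(prompt.strip())

print(validate_input("Reset access for jane.doe@example.com, SSN 123-45-6789"))
# -> "Reset access for [EMAIL], SSN [SSN]"
```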

Re-engineer your service management platforms to handle AI interactions 

While bringing AI into your tech stack, IT leaders need to account for the incoming high-velocity data (logs, decisions, prompts, embeddings) that legacy ITSM databases can’t absorb.

Strategic actions:

  • Ensure your service management platform is designed to ingest conversational logs and agent-to-agent interactions at scale.

  • Bake in data-retention and masking rules for sensitive prompts or user messages.

  • Use real-time analytics to detect drift or rising hallucination rates before they impact users (see the monitoring sketch after this list).
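As a sketch of that last point, a rolling-window monitor can surface a rising hallucination rate before users feel it; the window size, threshold, and grader verdicts below are all assumptions:

```python
from collections import deque

class QualityMonitor:
    def __init__(self, window: int = 200, threshold: float = 0.05):
        self.results = deque(maxlen=window)  # recent pass/fail verdicts
        self.threshold = threshold

    def record(self, hallucinated: bool) -> None:
        self.results.append(hallucinated)

    def alert(self) -> bool:
        """True once the rolling hallucination rate crosses the threshold."""
        if not self.results:
            return False
        return sum(self.results) / len(self.results) > self.threshold

monitor = QualityMonitor()
for verdict in [False] * 95 + [True] * 10:   # simulated grader output
    monitor.record(verdict)
print(monitor.alert())  # True: ~9.5% of the window is flagged
```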

Adopt an “Agent-of-Agents” orchestration model

Point AI agents working in silos force users to guess which bot can help, while also fragmenting user information and context.

Strategic actions:

By deploying a central (or “master”) orchestrator agent, you can:

  • Enable context sharing and lineage tracking between agents, so every decision inherits the correct permissions and data-privacy constraints.

  • Have a single pane of governance, exposing usage analytics, anomaly alerts, and license utilization across the AI fleet.

  • Process requests across IT, HR, Finance, and other domain-specific agents while enforcing security policies (a minimal routing sketch follows this list).
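A minimal routing sketch, with hypothetical domain agents, shows how permissions and context can travel with every request:

```python
from dataclasses import dataclass

@dataclass
class RequestContext:
    user: str
    permissions: set   # inherited by whichever agent handles the request
    history: list      # shared context/lineage across agent hand-offs

class Orchestrator:
    def __init__(self):
        self.domain_agents = {}   # e.g. "it", "hr", "finance"

    def register(self, domain: str, handler) -> None:
        self.domain_agents[domain] = handler

    def route(self, domain: str, query: str, ctx: RequestContext) -> str:
        if domain not in self.domain_agents:
            return "escalate-to-human: no agent for this domain"
        if domain not in ctx.permissions:
            return "denied: user lacks permission for this domain"
        ctx.history.append((domain, query))   # lineage tracking
        return self.domain_agents[domain](query, ctx)

orch = Orchestrator()
orch.register("it", lambda q, c: f"IT agent handling: {q}")
ctx = RequestContext(user="jane", permissions={"it", "hr"}, history=[])
print(orch.route("it", "laptop refresh", ctx))
print(orch.route("finance", "expense query", ctx))  # denied: not in permissions
```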
