Reflections from the AI Summit, Delhi — I spent last week at the AI Summit in Delhi listening to leaders talk about AI safety, regulation, and 'responsible innovation.' Everyone agreed we need governance. Almost nobody talked about the actual control plane that makes governance real: identity and access.
That gap is where the real risk is hiding. Most enterprises are racing to plug AI into everything: copilots, internal chatbots, agentic workflows, decision engines. The pattern is familiar: massive enthusiasm, rapid deployment, identity as an afterthought, painful reckoning.
AI isn't just another app. It's an identity multiplier.
The New Identity Explosion
Here's what's actually happening inside modern environments: every AI deployment silently creates a long tail of new non-human identities. Agents, sub-agents, orchestration workflows, API integrations, serverless functions — all of them need credentials, tokens, keys, or roles. In mature cloud shops, non-human identities already outnumber humans by tens or even hundreds to one. The fallout is predictable:
- Huge growth in non-human identities that nobody fully owns or understands
- Long-lived secrets and tokens welded into pipelines, tools, and agents
- 'Shadow access' — AI and automation using permissions nobody ever explicitly designed for them
We built our IAM programs for employees and contractors. AI just turned that model inside out.
Why Traditional IAM Breaks With AI
Most IAM programs are optimized for a simple world: a human logs in, gets a role, accesses some apps, and logs out. AI agents don't behave like that. They run 24x7, chaining calls across multiple systems; they spawn sub-agents and new workloads on demand, accumulate permissions indirectly through APIs and delegated roles, and act at machine speed — making thousands of decisions before anyone notices.
If you drop this behavior into an IAM estate built around static roles and annual access reviews, three things happen fast: privilege sprawl accelerates, visibility collapses, and incident blast radius multiplies. 'Responsible AI' without identity is just a slideware slogan.
1. Treat Agents as First-Class Identities
The first shift is conceptual: agents are not 'just scripts' or generic service accounts. They are sponsored non-human identities with real business impact. Every AI agent should have a concrete owner (a person and a team), a declared purpose and scope, and a maximum lifetime and review cadence. No anonymous execution. No shared 'bot' accounts. If it can act, it must have an identity you can see, own, and eventually decommission.
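To make "sponsored non-human identity" concrete, here is a minimal sketch of what an agent registration record could look like. The field names, values, and 90-day lifetime are illustrative assumptions, not a standard schema — the point is that owner, purpose, scope, and lifetime are mandatory, not optional metadata.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    """Hypothetical registration record for a sponsored non-human identity."""
    agent_id: str
    owner: str                 # accountable person — no anonymous execution
    owning_team: str           # accountable team
    purpose: str               # declared business purpose and scope
    scopes: list               # explicit resource scopes, nothing implicit
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    max_lifetime: timedelta = timedelta(days=90)   # assumed default TTL
    review_cadence: timedelta = timedelta(days=30)

    def is_expired(self, now: datetime = None) -> bool:
        """An agent past its maximum lifetime must be re-justified or decommissioned."""
        now = now or datetime.now(timezone.utc)
        return now >= self.created_at + self.max_lifetime

agent = AgentIdentity(
    agent_id="invoice-triage-agent",          # hypothetical example agent
    owner="a.sharma", owning_team="finance-automation",
    purpose="Classify inbound invoices and route exceptions",
    scopes=["erp:invoices:read", "ticketing:cases:create"],
)
assert not agent.is_expired()
```

A record like this makes "if it can act, it must have an identity you can see, own, and eventually decommission" enforceable: anything that cannot produce such a record simply does not get credentials.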
2. Fix How Agents Authenticate
Most organizations still run agents on long-lived API keys, broadly scoped OAuth tokens, and credentials embedded in code or config. That is completely misaligned with how fast agents operate. The move is toward short-lived, task-scoped tokens tied to specific workloads, proof-of-possession approaches so stolen tokens are useless, and clear resource scoping so agents only touch exactly what they need.
If you can't rotate it, scope it, or revoke it in real time, you shouldn't be giving it to an AI agent.
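A toy illustration of the "short-lived, task-scoped" idea, using only stdlib HMAC signing — a real deployment would use a proper token service (an STS, workload identity, or proof-of-possession tokens), so treat the key handling and token format here as assumptions for demonstration:

```python
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key"  # illustrative only — in practice fetched from a KMS

def mint_token(agent_id: str, scopes: list, ttl_seconds: int = 300) -> str:
    """Mint a short-lived token bound to explicit scopes and an expiry."""
    payload = {"sub": agent_id, "scopes": scopes,
               "exp": int(time.time()) + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify_token(token: str, required_scope: str) -> bool:
    """Reject tampered, expired, or out-of-scope tokens."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered, or signed with a different key
    payload = json.loads(base64.urlsafe_b64decode(body))
    return time.time() < payload["exp"] and required_scope in payload["scopes"]

tok = mint_token("report-agent", ["reports:read"], ttl_seconds=300)
assert verify_token(tok, "reports:read")        # within scope and lifetime
assert not verify_token(tok, "reports:write")   # scope not granted: refused
```

Because the expiry and scopes travel inside the signed token, revocation becomes a matter of waiting minutes rather than hunting down a long-lived key welded into a pipeline.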
3. Move Policy Out of Apps and Into Infrastructure
RBAC alone cannot handle agent behavior. You need externalized policy that accounts for who or what the agent is, what task it is performing, what data and action are being requested, and the current risk context. Think 'policy as infrastructure,' not configuration. Use an external policy engine and define guardrails such as maximum transaction scope, banned actions, and sensitive data boundaries. If policy still lives inside individual applications, you will not be able to govern AI at scale.
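In production this is the job of a dedicated policy engine (Open Policy Agent is a common choice), but the shape of an externalized guardrail decision can be sketched in a few lines. The guardrail values and request fields below are invented for illustration:

```python
# Hypothetical guardrails, defined outside any single application.
GUARDRAILS = {
    "banned_actions": {"payments:approve", "iam:grant_admin"},
    "sensitive_datasets": {"hr/salaries", "pii/customers"},
    "max_transaction_amount": 10_000,
}

def evaluate(request: dict) -> tuple:
    """Return (decision, reason) so every allow/deny is explainable later."""
    if request["action"] in GUARDRAILS["banned_actions"]:
        return ("deny", "action is on the banned list")
    if (request.get("dataset") in GUARDRAILS["sensitive_datasets"]
            and not request.get("human_approved")):
        return ("deny", "sensitive dataset requires human approval")
    if request.get("amount", 0) > GUARDRAILS["max_transaction_amount"]:
        return ("deny", "amount exceeds maximum transaction scope")
    return ("allow", "within guardrails")

decision, reason = evaluate({
    "agent": "ap-agent", "action": "payments:create", "amount": 50_000})
assert decision == "deny"   # blocked by the transaction-scope guardrail
```

The important property is that the application never decides for itself: it submits a request with full context and gets back a decision plus a reason it can log.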
4. Design for Agent Sprawl From Day One
Service account sprawl already hurts most organizations. Agent sprawl will hit faster and harder. Because agents can be created, forked, and abandoned in minutes, your default stance should be: every agent identity ships with a time-to-live, renewal requires justification from the owner, the agent is auto-deprovisioned when the owner leaves or the project ends, and continuous discovery runs to find 'shadow agents' created outside standard pipelines.
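The deprovisioning half of that stance can be reduced to a periodic sweep: quarantine any agent whose TTL has lapsed or whose owner is no longer active. The employee roster and agent records below are made-up examples; the logic is the point.

```python
from datetime import datetime, timezone

# Assumed inputs: an HR-sourced roster and an agent inventory.
ACTIVE_EMPLOYEES = {"a.sharma", "j.doe"}

agents = [
    {"id": "etl-agent",     "owner": "a.sharma",      "expires": "2099-01-01"},
    {"id": "old-poc-agent", "owner": "former.intern", "expires": "2099-01-01"},
    {"id": "report-bot",    "owner": "j.doe",         "expires": "2020-01-01"},
]

def sweep(agents: list, now: datetime = None) -> list:
    """Return agent IDs to quarantine: TTL lapsed or owner no longer active."""
    now = now or datetime.now(timezone.utc)
    to_quarantine = []
    for a in agents:
        expires = datetime.fromisoformat(a["expires"]).replace(
            tzinfo=timezone.utc)
        expired = expires <= now          # 'create and forget' gets caught here
        orphaned = a["owner"] not in ACTIVE_EMPLOYEES
        if expired or orphaned:
            to_quarantine.append(a["id"])
    return to_quarantine

assert sweep(agents) == ["old-poc-agent", "report-bot"]
```

Running this continuously — rather than in an annual review — is what keeps the long tail of abandoned agents from quietly accumulating privilege.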
5. Make Delegation Chains Safe by Design
AI workloads rarely act alone. One agent calls another, which calls a service, which triggers a workflow, which calls yet another agent. The rule for delegation should be simple: every hop maintains or reduces scope, never expands it; there is a hard limit on how deep the chain can go; and crossing trust boundaries requires explicit approval or stronger checks. If you can't answer 'who is ultimately responsible for this action?' by looking at the chain, you don't have governance — you have guesswork.
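Both delegation rules — scope can only shrink, and chains have a hard depth limit — are cheap to enforce mechanically at the point where a delegated credential is issued. A minimal sketch, with an assumed depth limit of 4:

```python
MAX_DELEGATION_DEPTH = 4  # assumed limit; tune to your workflows

def delegate(parent_scopes: set, requested_scopes: set, depth: int) -> set:
    """Issue a delegated scope set that can never exceed the parent's."""
    if depth > MAX_DELEGATION_DEPTH:
        raise PermissionError("delegation chain too deep")
    if not requested_scopes <= parent_scopes:
        raise PermissionError("delegation may not expand scope")
    return requested_scopes

root = {"docs:read", "docs:write"}
hop1 = delegate(root, {"docs:read"}, depth=1)   # narrowing: allowed

# A downstream sub-agent trying to regain write access must be refused.
try:
    delegate(hop1, {"docs:write"}, depth=2)
    expansion_blocked = False
except PermissionError:
    expansion_blocked = True
assert expansion_blocked
```

Because every hop's scopes are a subset of the previous hop's, walking the chain backwards always leads to a root identity with a named owner — which is exactly the 'who is ultimately responsible?' question.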
6. Build an Audit Trail You Can Actually Use
Traditional logs capture 'what call hit what API at what time.' That's not enough for AI. For agents, your audit baseline should include: agent identity, owning principal/team, transaction or correlation ID, stated or inferred intent, target system and data, and the policy decision (allow/deny) and why. You want to be in a position where, if a regulator or your board asks 'What did this agent do, and who approved it?' you can answer in minutes, not launch a six-week forensic project.
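The baseline above translates directly into a structured event. The field names here are an assumption — pick whatever schema fits your log pipeline — but every field in the list should be present on every agent action:

```python
import json
import uuid
from datetime import datetime, timezone

def audit_record(agent_id, owner, intent, target, data,
                 decision, reason, txn_id=None) -> dict:
    """One structured audit event; txn_id ties every hop of a workflow together."""
    return {
        "txn_id": txn_id or str(uuid.uuid4()),   # correlation across calls
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "owner": owner,                          # owning principal/team
        "intent": intent,                        # stated or inferred intent
        "target_system": target,
        "data": data,
        "decision": decision,                    # allow / deny
        "decision_reason": reason,               # why the policy engine said so
    }

rec = audit_record(
    "invoice-agent", "finance-automation", "close monthly books",
    "erp", "invoices/2024-06", "allow", "within guardrails")
line = json.dumps(rec)  # ship as one JSON line to the log pipeline
assert json.loads(line)["decision"] == "allow"
```

With records like this, "what did this agent do, and who approved it?" becomes a query over `agent_id` and `txn_id` rather than a forensic project.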
Start Here This Quarter
You don't need a perfect 'AI governance framework' to begin. You do need momentum. Over the next 90 days, you can take meaningful action:
- Inventory your agents and automations across cloud, SaaS, and internal platforms
- Map every agent to an accountable owner; if there is no owner, disable or quarantine it
- Enforce TTLs on new non-human identities — no more 'create and forget'
- Externalize at least your first set of guardrail policies (e.g., forbidden actions, sensitive datasets)
- Define a minimum audit schema for AI-driven activity and start propagating a transaction ID across calls
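On the last item, propagating a transaction ID does not have to mean threading an extra argument through every function. In Python, for example, `contextvars` lets you set the ID once at the workflow entry point and read it from any layer — the three-step workflow below is a hypothetical stand-in for a real call chain:

```python
import contextvars
import uuid

# One correlation ID, set at the workflow entry point,
# visible to every function called within that context.
txn_id = contextvars.ContextVar("txn_id", default=None)

def start_workflow() -> str:
    txn_id.set(str(uuid.uuid4()))
    return step_one()

def step_one() -> str:
    return step_two()  # no ID passed explicitly

def step_two() -> str:
    # Any layer can tag its logs or API calls with the same transaction ID.
    return txn_id.get()

result = start_workflow()
assert result is not None and len(result) == 36  # the UUID set at entry
```

Equivalents exist in most stacks (MDC in Java logging, OpenTelemetry trace context across service boundaries), so the audit schema's `txn_id` can survive even multi-service agent chains.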
Governance isn't a one-time project. It's a muscle. Start exercising it while agent adoption is still containable.
Avoid These Traps
- Treating agents like legacy service accounts. They have different lifecycles, behaviors, and risk profiles.
- Waiting for a perfect standard before taking action. The ecosystem is moving; your principles and controls need to move too.
- Letting business units deploy AI agents with zero IAM involvement. Shadow AI is the new shadow IT — and it moves faster.
- Over-engineering before you've fixed basics like ownership, TTLs, and least privilege. Get the foundations right first.
The Organizations That Get This Right Will Scale AI Faster
The Delhi summit made one thing crystal clear: in the age of AI, trust is not a slogan — it's an architecture choice. And identity is the layer that either makes that architecture enforceable or leaves it to chance. The organizations that treat identity as the missing governance layer — not an afterthought — will scale AI faster and with far fewer ugly surprises.


