Every identity in your infrastructure tells a story. For decades, that story was simple: a human logs in, does work, logs out. But today, the cast of characters has exploded. Service accounts, API keys, CI/CD runners, Kubernetes operators, cloud functions, and now—AI agents that reason, plan, and act autonomously. Welcome to the era of Non-Human Identity (NHI), where the machines outnumber the people, and your IAM strategy hasn’t caught up.
If you’re a DevOps engineer, security architect, or platform engineer, this isn’t theoretical. This is the attack surface you’re defending right now, whether you know it or not.
The NHI Sprawl Problem: Your Identities Are Already Out of Control
Here’s a number that should keep you up at night: in the average enterprise, non-human identities outnumber human users by 45:1. In some DevOps-heavy organizations analyzed by Entro Security’s 2025 report, that ratio has climbed to 144:1—a 44% year-over-year increase driven by AI agents, CI/CD automation, and third-party integrations.
GitGuardian’s 2025 State of Secrets Sprawl report paints an equally alarming picture: 23.77 million new secrets leaked on GitHub in 2024 alone, a 25% increase from the previous year. Repositories using AI coding assistants like GitHub Copilot show 40% higher secret leak rates. And 70% of secrets first detected in public repositories in 2022 are still active.
This is NHI sprawl: an uncontrolled proliferation of machine credentials—API keys, service account tokens, OAuth client secrets, SSH keys, database passwords—scattered across your infrastructure, your CI/CD pipelines, your Slack channels, and your Jira tickets. 43% of exposed secrets now appear outside code repositories entirely.
The scale of the problem becomes clear when you inventory what qualifies as a non-human identity:
- Service accounts in cloud providers (AWS IAM roles, GCP service accounts, Azure managed identities)
- API keys and tokens for SaaS integrations
- CI/CD runner identities (GitHub Actions, GitLab CI, Jenkins)
- Kubernetes service accounts and workload identities
- Infrastructure-as-code automation (Terraform, Pulumi state backends)
- AI agents that autonomously call APIs, deploy code, or access databases
Each one of these is an identity. Each one needs authentication, authorization, and lifecycle management. And most organizations are managing them with the same tools they built for humans in 2015.
Why Traditional IAM Fails for AI Agents
Traditional IAM was designed around a specific model: a human authenticates (usually with a password plus MFA), receives a session, performs actions within their role, and eventually logs out. The entire architecture assumes a bounded, interactive session with a human making decisions at the keyboard.
AI agents break every one of these assumptions.
Ephemeral lifecycles. An AI agent might exist for seconds—spun up to process a request, execute a multi-step workflow, and terminate. Traditional identity provisioning, which relies on onboarding workflows, approval chains, and manual deprovisioning, can’t keep up with entities that live and die in milliseconds.
Non-interactive authentication. Agents don’t type passwords. They don’t respond to MFA push notifications. They authenticate through tokens, certificates, or workload attestation—mechanisms that traditional IAM treats as second-class citizens.
Dynamic scope requirements. A human user typically has a stable role: "developer," "SRE," "database admin." An AI agent's required permissions can change from task to task, even within a single execution chain. It might need read access to a monitoring API, then write access to a deployment pipeline, then database credentials—all in one workflow.
Scale that breaks assumptions. When your environment can spin up thousands of autonomous agents concurrently—each needing unique, auditable credentials—the per-identity overhead of traditional IAM becomes a bottleneck, not a safeguard.
No human in the loop (by design). The entire value proposition of AI agents is autonomy. But traditional IAM’s risk controls assume a human is making judgment calls. When an agent autonomously decides to escalate a deployment or modify infrastructure, who approved that access?
Delegation Chains: The Trust Problem That Keeps Growing
Perhaps the most fundamental challenge with AI agent identity is delegation. In traditional systems, delegation is simple: Alice grants Bob access to a shared folder. The chain is short, auditable, and traceable.
With AI agents, delegation becomes a recursive chain. Consider this scenario:
- A developer asks an AI orchestrator to "deploy the latest release to staging"
- The orchestrator delegates to a CI/CD agent to build and test
- The CI/CD agent delegates to a security scanning agent to verify compliance
- The security agent delegates to a cloud provider API to check configurations
- Each hop requires its own credentials, and each hop moves trust further from the original user
This is a delegation chain: a sequence of authority transfers where each agent acts on behalf of the previous one. The security questions multiply at each hop: Did the original user authorize this entire chain? Can intermediate agents expand their scope? What happens when one link in the chain is compromised?
Without a formal delegation model, you get what security teams call ambient authority—agents inheriting broad permissions from their caller without explicit, auditable constraints. This is how lateral movement attacks happen in agent-driven architectures.
OpenID Connect for Agents: Standards Are Catching Up
The good news: the identity standards community has recognized this gap. The OpenID Foundation published its "Identity Management for Agentic AI" whitepaper in 2025, and work on OpenID Connect for Agents (OIDC-A) 1.0 is actively progressing.
OIDC-A extends the familiar OAuth 2.0 / OpenID Connect framework with agent-specific capabilities:
- Agent authentication: Agents receive ID Tokens with claims that identify them as non-human entities, including their type, model, provider, and capabilities
- Delegation chain validation: New claims like delegator_sub (who delegated authority), delegation_chain (full history of authority transfers), and delegation_constraints (scope and time limits) enable relying parties to validate the entire trust chain
- Scope attenuation per hop: Each delegation step can only reduce scope, never expand it—a critical safeguard against privilege escalation
- Purpose binding: The delegation_purpose claim ties access to a specific intent, supporting auditability and compliance
- Attestation verification: JWT-based attestation evidence lets relying parties verify the integrity and provenance of an agent before trusting its claims
The delegation flow works like this: a user authenticates and explicitly authorizes delegation to an agent. The authorization server issues a scoped ID Token to the agent with the delegation chain attached. The agent can then present this token to downstream services, which validate the chain—checking chronological ordering, trusted issuers, scope reduction at each hop, and constraint enforcement.
This is a fundamental shift from "the agent has a service account with broad permissions" to "the agent carries a verifiable, constrained, auditable proof of delegated authority." The difference matters enormously for security posture.
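The checks a relying party performs on a delegation chain can be sketched in a few lines. This is an illustrative sketch, not the OIDC-A wire format: the claim names (delegated_at, scope, iss, exp) and the dict layout are assumptions chosen to mirror the claims discussed above.

```python
from datetime import datetime, timezone

def validate_delegation_chain(chain: list[dict], trusted_issuers: set[str]) -> bool:
    """Validate a delegation chain: trusted issuers, chronological
    ordering, no expired hops, and scope attenuation at every hop.
    Claim names and layout are illustrative, not a standard."""
    now = datetime.now(timezone.utc).timestamp()
    prev_scopes: set[str] | None = None
    prev_time = 0.0
    for hop in chain:
        if hop["iss"] not in trusted_issuers:
            return False  # untrusted issuer anywhere in the chain
        if hop["delegated_at"] < prev_time:
            return False  # hops must be chronologically ordered
        if hop.get("exp", float("inf")) < now:
            return False  # an expired hop breaks the whole chain
        scopes = set(hop["scope"].split())
        if prev_scopes is not None and not scopes <= prev_scopes:
            return False  # scope may only shrink, never grow
        prev_scopes, prev_time = scopes, hop["delegated_at"]
    return True
```

A chain in which any intermediate agent tries to widen its scope fails validation at that hop, which is exactly the attenuation guarantee described above.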
Modern Approaches: Zero Standing Privilege and Beyond
Standards provide the protocol layer. But implementing NHI security in practice requires adopting a set of architectural principles that go beyond what traditional IAM offers.
Zero Standing Privilege (ZSP)
The single most impactful principle for NHI security is eliminating standing privileges entirely. No agent, service account, or workload should have persistent access to any resource. Instead, all access is granted just-in-time (JIT)—requested, approved (potentially automatically based on policy), and expired within a defined window.
This sounds radical, but it’s increasingly practical. Tools like Britive, Apono, and P0 Security provide JIT access platforms that can provision and deprovision cloud IAM roles, database credentials, and Kubernetes RBAC bindings in seconds. The agent requests access, the policy engine evaluates the request against contextual signals (time, identity chain, workload attestation, behavioral baseline), and temporary credentials are issued.
The result: even if an agent is compromised, there are no standing credentials to steal. The blast radius collapses from „everything the service account could ever access“ to „whatever the agent was authorized for in that specific moment.“
SPIFFE and Workload Identity
SPIFFE (Secure Production Identity Framework for Everyone) and its runtime implementation SPIRE represent the most mature approach to cryptographic workload identity. SPIFFE assigns every workload a unique, verifiable identity (SPIFFE ID) and issues short-lived credentials called SVIDs (SPIFFE Verifiable Identity Documents)—either X.509 certificates or JWTs.
For AI agents, SPIFFE provides several critical capabilities:
- Runtime attestation: Identities are bound to workload attributes (container metadata, node selectors, cloud instance tags) rather than static credentials
- Automatic rotation: SVIDs are short-lived and automatically renewed, eliminating the credential rotation problem
- Federated trust: SPIFFE trust domains can federate across organizational boundaries, enabling secure agent-to-agent communication in multi-cloud environments
- No shared secrets: Authentication uses cryptographic proof, not shared API keys or passwords
SPIFFE is already integrated with HashiCorp Vault, Istio, Envoy, and major cloud provider identity systems. An IETF draft currently profiles OAuth 2.0 to accept SPIFFE SVIDs for client authentication, bridging the gap between workload identity and application-layer authorization.
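A SPIFFE ID is a URI of the form spiffe://<trust-domain>/<path>. As a minimal sketch (production code should use an official SPIFFE library and obtain SVIDs from the SPIRE Workload API rather than parsing strings), here is what the identity structure looks like:

```python
from urllib.parse import urlparse

def parse_spiffe_id(uri: str) -> tuple[str, str]:
    """Split a SPIFFE ID into (trust_domain, workload_path).
    Minimal sketch of the spiffe:// URI scheme; real deployments
    fetch attested SVIDs from the SPIRE Workload API instead."""
    parsed = urlparse(uri)
    if parsed.scheme != "spiffe" or not parsed.netloc:
        raise ValueError(f"not a SPIFFE ID: {uri!r}")
    return parsed.netloc, parsed.path

def same_trust_domain(a: str, b: str) -> bool:
    """A coarse check: do two workloads belong to the same trust domain?"""
    return parse_spiffe_id(a)[0] == parse_spiffe_id(b)[0]
```

Authorization decisions can then key off the trust domain and path rather than off any shared secret, which is the property that makes federation across domains workable.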
Verifiable Credentials for Agents
The W3C Verifiable Credentials (VC) model, originally designed for human identity use cases, is being adapted for non-human identities. In this model, an agent carries a set of cryptographically signed credentials that attest to its capabilities, provenance, and authorization—without requiring real-time connectivity to a central authority.
This is particularly powerful for offline-capable agents and edge deployments where agents may need to prove their identity and authorization without reaching back to a central IdP. Combined with OIDC-A delegation chains, verifiable credentials create a portable, tamper-evident identity for AI agents.
Teleport: First-Class Non-Human Identities in Practice
While standards and frameworks provide the conceptual foundation, some platforms are already implementing first-class NHI support. Teleport is a notable example, offering unified identity governance that treats machine identities with the same rigor as human users.
Teleport’s approach covers the full infrastructure stack—SSH servers, RDP gateways, Kubernetes clusters, databases, internal web applications, and cloud APIs—under a single identity and access management plane. What makes it relevant for NHI is the architecture:
- Certificate-based identity: Every connection (human or machine) authenticates via short-lived certificates, not static keys or passwords
- Workload identity integration: Machine-to-machine communication uses cryptographic identity tied to workload attestation
- Unified audit trail: Human and non-human access events appear in the same audit log, enabling correlation and compliance
- Just-in-time access requests: Both humans and machines can request elevated access through the same workflow, with policy-driven approval
Similarly, vendors like Britive and P0 Security are building platforms specifically designed for the NHI challenge—providing discovery, classification, and JIT governance for the thousands of non-human identities scattered across cloud environments.
The key insight from these implementations: treating non-human identities as a governance afterthought (i.e., handing out long-lived service account keys and hoping for the best) is no longer viable. First-class NHI support means the same identity lifecycle, the same audit rigor, and the same least-privilege enforcement—applied uniformly to every identity in your infrastructure.
Practical Implementation Guidelines for NHI Security
Moving from theory to practice requires a structured approach. Here’s a roadmap for engineering teams building NHI security into their platforms.
1. Inventory and Classify Your Non-Human Identities
You can’t secure what you can’t see. Start with a comprehensive inventory of every NHI in your environment—service accounts, API keys, OAuth clients, CI/CD tokens, workload identities, and AI agent credentials. Classify them by criticality, scope, and lifecycle. Many organizations discover they have 10–50x more NHIs than they estimated.
2. Eliminate Long-Lived Credentials
Every static API key and long-lived service account token is a breach waiting to happen. Establish a migration plan to replace them with short-lived, automatically rotated credentials. Prioritize high-privilege credentials first. Use workload identity federation (GCP Workload Identity, AWS IAM Roles for Service Accounts, Azure Workload Identity) to eliminate static credentials for cloud-native workloads.
3. Implement Zero Standing Privilege for Agents
No AI agent should have permanent access to production resources. Deploy JIT access platforms that provision credentials on-demand with automatic expiration. Define policies that evaluate request context—who triggered the agent, what task it’s performing, what workload attestation it carries—before issuing credentials.
4. Adopt Cryptographic Workload Identity
Deploy SPIFFE/SPIRE or equivalent workload identity infrastructure. Issue SVIDs to your agents tied to runtime attestation. Use mTLS for agent-to-service communication and JWT-SVIDs for application-layer authorization. This eliminates shared secrets from your architecture entirely.
5. Model and Enforce Delegation Chains
For agentic workflows where AI agents delegate to other agents, implement explicit delegation tracking. Whether you adopt OIDC-A or build a custom solution, ensure that every delegation hop is recorded, scope is attenuated (never expanded), and the original authorizing identity is always traceable. Use policy engines like OPA (Open Policy Agent) to enforce delegation constraints at each service boundary.
6. Unify Human and Non-Human Audit Trails
Your SIEM shouldn’t have separate views for human and machine access. Correlation is critical—when an AI agent accesses a database after a human triggered a deployment, that causal chain must be visible in a single audit view. Ensure your identity platform emits structured logs that include delegation chains, workload attestation, and request context.
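One way to keep that causal chain visible is to emit every access event, human or machine, in a single record shape that always carries the delegation chain. The field names below are illustrative assumptions, not a logging standard:

```python
import json
import time

def audit_event(actor: str, on_behalf_of: str, action: str,
                resource: str, delegation_chain: list[str]) -> str:
    """Emit one structured audit record that keeps human and machine
    activity in the same stream. Field names are illustrative."""
    return json.dumps({
        "ts": time.time(),
        "actor": actor,                        # the identity that performed the call
        "on_behalf_of": on_behalf_of,          # the original (often human) authorizer
        "delegation_chain": delegation_chain,  # every hop, so SIEM queries can correlate
        "action": action,
        "resource": resource,
    })
```

Because the chain is embedded in each record, a single SIEM query can answer "which human ultimately caused this database read" without joining across separate human and machine log sources.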
7. Build Behavioral Baselines for Agent Activity
AI agents produce distinct behavioral patterns—API call frequencies, resource access sequences, timing distributions. Establish baselines and alert on deviations. Unlike human users, agent behavior should be relatively predictable; anomalies are a strong signal of compromise or misconfiguration.
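The simplest version of such a baseline is a z-score over a single metric, such as API calls per minute. Real systems would use richer features (access sequences, timing distributions), but the statistical idea is the same; this sketch uses only the standard library:

```python
import statistics

def is_anomalous(history: list[float], current: float, threshold: float = 3.0) -> bool:
    """Flag a deviation from an agent's behavioral baseline using a
    z-score on a single metric (e.g., API calls per minute).
    A toy sketch: production systems would model richer features."""
    if len(history) < 2:
        return False  # not enough data to form a baseline
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return current != mean  # perfectly steady baseline: any change is a signal
    return abs(current - mean) / stdev > threshold
```

Because agent workloads tend to be far steadier than human activity, even a crude threshold like this produces a usable compromise signal, whereas the same approach on human users drowns in false positives.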
The Road Ahead
Gartner predicts that 30% of enterprises will deploy autonomous AI agents by 2026. With emerging standards like OIDC-A, maturing frameworks like SPIFFE, and vendors building first-class NHI platforms, the tooling is finally catching up to the problem.
But the window for proactive implementation is closing. Organizations that wait for NHI sprawl to become a security incident—and over 50 NHI-linked breaches were reported in H1 2025 alone—will be playing catch-up from a position of compromise.
The bottom line: your AI agents are identities. They need authentication, authorization, delegation controls, lifecycle management, and audit trails—just like your human users. The difference is scale, speed, and autonomy. Build your IAM strategy accordingly, or the agents will build their own—and you won’t like the result.
