This article is part of a series examining how the field’s leading governance frameworks address identity, trust, and control for AIgentic systems. The central thesis, established in Governing AIgentic Actors: Identity, Trust and Control, is that the governance problem in AIgentic systems is not solved by verifying Actors more carefully; it is dissolved by building environments where the scope of what an Actor can do is constrained before any verification occurs. Each article in this series evaluates one framework against that thesis: what it contributes, where its verification posture reaches a structural limit, and what a security leader must add to close it.
NIST has begun work on AI agent identity governance. The National Cybersecurity Center of Excellence published “Accelerating the Adoption of Software and Artificial Intelligence Agent Identity and Authorization” in February 2026. The paper proposes to adapt existing identity frameworks to AI agents as a new class of digital principal. It is not a standard. It has no compliance force in any jurisdiction. What it provides is the clearest government-level vocabulary yet available for a board asking whether its AI agent deployments are governed.
The concept paper organizes the governance problem into five focus areas.
Identification. Distinguishing AI agents from human users and managing the information required to define and limit what an agent is permitted to do.
Authorization. Extending existing access control mechanisms to apply to agents as a new class of digital principal, rather than treating them as extensions of the users who run them.
Access Delegation. Linking user identities to AI agents in ways that maintain accountability and prevent privilege escalation through delegation chains.
Logging and Transparency. Attributing specific AI agent actions to their non-human source for audit and forensic review.
Tracking Data Flows. Maintaining provenance of user prompts and data inputs to support risk determinations and policy decisions about what actions a deployed agent may take.
The five areas compose a governance picture: how an agent gets its identity, what it is permitted to do with it, how that permission traces back to a human, how every action can be attributed afterward, and what data and prompts drove the agent’s decisions. Existing identity and authorization frameworks were designed for human users and static software services. An AI agent may autonomously access tools, query databases, and operate across multiple systems in a single task. Per-action human authorization cannot keep pace at that scale and speed.
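To make the five areas concrete, the sketch below shows one way they might surface in a machine-readable agent registration record. The schema is illustrative only; neither the field names nor the structure come from the concept paper.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical registration record for a deployed agent. Each field group
# maps to one of NIST's five focus areas; the names are assumptions, not
# anything the concept paper specifies.
@dataclass
class AgentRecord:
    # Identification: the agent is a principal in its own right.
    agent_id: str                      # e.g. "agent:invoice-reconciler-01"
    # Authorization: what the agent may do, as explicit scopes.
    allowed_scopes: list[str]          # e.g. ["erp:read", "erp:write:invoices"]
    # Access Delegation: the human the agent acts for, with bounds.
    delegated_by: str                  # e.g. "user:j.alvarez"
    delegation_expires: datetime       # access ends when the task window closes
    # Logging and Transparency: where attributable actions land.
    audit_log_stream: str              # e.g. "audit/agents/invoice-reconciler-01"
    # Tracking Data Flows: provenance of the prompts and inputs that drove it.
    input_provenance: list[str] = field(default_factory=list)
```

A record like this turns the five areas into checkable facts: a missing expiry or an empty scope list is a visible gap rather than an unknown.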
What does the NIST concept paper establish?
The concept paper proposes a demonstration. NIST’s NCCoE plans to implement and test agent identity and authorization mechanisms using commercially available technologies in a laboratory setting. The result will be implementation guidance, not a regulation or a mandatory control set.
The institutional context matters. NIST’s Center for AI Standards and Innovation launched the broader AI Agent Standards Initiative on February 17, 2026. The initiative has three pillars: facilitating industry-led standards development, supporting open-source protocol work, and advancing research in AI agent security. That research has produced results worth noting. NIST and UK AI Security Institute researchers found that optimized attack strategies against AI agents achieved an 81% success rate in structured exercises, against an 11% rate for baseline attacks.1 The attack surface NIST’s governance work addresses is not hypothetical.
NIST’s Computer Security Division is also developing Control Overlays for Securing AI Systems (COSAiS): an extension to the SP 800-53 security control framework tailored to AI agent deployments. It will address both single-agent and multi-agent systems. As of March 2026, no published overlays for agent use cases exist. When finalized, they will be the first systematic federal control set built for the agent threat model.
What do the five focus areas mean for board oversight?
The five focus areas translate directly into governance questions about each deployed agent in your organization.
The question behind Identification is whether each deployed agent has its own identity, separate from the user who runs it. Common practice is the opposite: an agent running under a user’s credentials inherits that user’s access to every system the user can reach. NIST’s direction is toward agents that authenticate as distinct non-human principals, not under user credentials.
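What that direction looks like in practice can be sketched with an OAuth 2.0 client-credentials flow, one common way to give a workload its own identity. The endpoint, client ID, and scopes below are placeholders, not anything NIST prescribes.

```python
import requests

# Hypothetical identity-provider token endpoint; substitute your IdP's URL.
TOKEN_URL = "https://idp.example.com/oauth2/token"

def agent_access_token() -> str:
    """Authenticate the agent as a distinct non-human principal."""
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "client_credentials",
        "client_id": "agent-invoice-reconciler-01",  # the agent's own identity
        "client_secret": "<fetched from a secrets manager, never hardcoded>",
        "scope": "erp:read erp:write:invoices",      # granted, not inherited
    })
    resp.raise_for_status()
    return resp.json()["access_token"]
```

The contrast with the common alternative is that no user credential appears anywhere: what the agent can reach is whatever these scopes grant, not everything its operator can touch.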
The questions behind Authorization and Access Delegation are about scope and time. NIST’s direction is toward least privilege (access limited to the specific task the agent is authorized to perform) and just-in-time access (access that expires when the task ends), with delegation chains from user to agent that are bounded in scope and duration. The governance question is whether those bounds exist in your current deployments, and who established them.
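One way those bounds might be encoded, continuing the same hypothetical schema: a delegation is a short-lived grant that ties user, agent, task scopes, and expiry together, and every access check consults it.

```python
from datetime import datetime, timedelta, timezone

def issue_delegation(user_id: str, agent_id: str, task_scopes: list[str],
                     ttl_minutes: int = 30) -> dict:
    """Hypothetical user-to-agent grant, bounded in scope and duration."""
    return {
        "delegated_by": user_id,       # the accountable human
        "agent_id": agent_id,
        "scopes": task_scopes,                               # least privilege
        "expires_at": datetime.now(timezone.utc)
                      + timedelta(minutes=ttl_minutes),      # just-in-time
    }

def is_authorized(grant: dict, requested_scope: str) -> bool:
    # Deny on expiry or on any scope outside the delegated task.
    return (datetime.now(timezone.utc) < grant["expires_at"]
            and requested_scope in grant["scopes"])
```

If a grant record like this exists, both parts of the governance question answer themselves: the bounds are in the record, and the delegated_by field names who set them.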
The question behind Logging and Transparency is accountability after the fact. An agent without an attributable audit trail cannot be investigated when something goes wrong. The question is whether your security team can trace every action a deployed agent took to the specific, non-human entity that took it.
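A sketch of what an attributable entry could contain, again with assumed field names: each record binds the non-human principal, the delegating user, and the action into a single auditable line.

```python
import json
from datetime import datetime, timezone

def audit_entry(agent_id: str, delegated_by: str, action: str, target: str) -> str:
    """Hypothetical audit record: the agent, not the user, is the actor."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "principal": agent_id,          # the non-human entity that acted
        "on_behalf_of": delegated_by,   # accountability still traces to a human
        "action": action,               # e.g. "erp.invoice.update"
        "target": target,               # e.g. "invoice/4821"
    })
```

With entries like this, the forensic question becomes a log query rather than a reconstruction.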
Who’s Running Your Organization? frames the accountability dimension precisely: a board that cannot answer who authorized each agent, on whose behalf it acts, and who owns the outcome when something goes wrong has risk assessment without governance. NIST’s five focus areas supply the technical vocabulary for those questions. They do not answer them.
What should boards ask their security and technology leadership?
The five focus areas generate specific governance questions for the board’s next technology review. No regulatory requirement currently compels them. They are useful now regardless.
For each deployed AI agent or autonomous system, the questions are:

Does this agent have its own identity, independent of the user who runs it?

Is its access scoped to a specific task, or does it inherit broader permissions from its operator?

Is the delegation from user to agent bounded in scope and time, or does the agent carry those permissions indefinitely?

Can your security team trace every action this agent took to a specific, non-human entity in an audit log?

Can your team trace what data and prompts drove the agent’s decisions, for risk review and accountability after the fact?
If the answer to any of those questions is “we don’t know” or “no,” the organization has a governance gap the board is positioned to close. Only 23% of organizations have a formal strategy for agent identity management.2 NIST’s work establishes the direction. The Agent Problem: What Boards Are Not Being Told About AI Oversight describes what happens when the board’s question goes unasked until an incident makes it unavoidable.
Frequently asked questions
Does the NIST concept paper create a compliance obligation?
No. The paper is a proposal for a laboratory demonstration project. It is not a standard, a regulation, or a mandatory control set, and it carries no compliance force. NIST’s future guidance, when published, may have compliance implications in federal contractor contexts and may inform sector-specific regulations. The concept paper itself does not.
Should boards wait for NIST’s finalized guidance before acting on agent governance?
NIST’s guidance is not yet finalized and no specific publication timeline has been announced. The governance gap the concept paper describes exists in most enterprise AI agent deployments today. The five governance questions the concept paper implies are useful now. Waiting for a standard is itself a governance decision, and it is one the board should make deliberately rather than by default.
What is the difference between the NIST concept paper and other current AI agent governance work?
The NIST concept paper addresses agent identity and authorization mechanisms: how agents are identified, how their access is scoped, and how delegation chains are encoded. Other current governance work addresses complementary layers. The IETF AIGA draft specifies enforcement infrastructure: hardware isolation requirements by risk tier, a tamper-resistant kernel architecture, and constraints that enforce governance at the hardware level. The two address different parts of the same problem. NIST establishes the authorization record; the enforcement infrastructure determines whether that record is meaningful.
Footnotes
1. NIST / UK AI Security Institute joint research, January 2025. 81% attack success rate for optimized strategies vs. 11% for baseline attacks. https://www.nist.gov/news-events/news/2025/01/technical-blog-strengthening-ai-agent-hijacking-evaluations
2. Strata Identity, “The AI Agent Identity Crisis: A 2026 Guide,” Cloud Security Alliance, 2025. Survey of 285 IT and security professionals, conducted September–October 2025. https://www.strata.io/blog/agentic-identity/the-ai-agent-identity-crisis-new-research-reveals-a-governance-gap/