DRAFT This article is not published on the public site.
Security leaders

The CoSAI Evaluation: What the Field's Most Complete Toolkit Reveals About the Verification Posture

CoSAI's Agentic IAM framework goes further on identity than any prior framework in this series — and confirms the structural gap every framework before it has shared

A

CoSAI's Agentic Identity and Access Management framework (TSC-approved March 2026) is the most operationally detailed published guidance on AIgentic identity from any body: first-class agent identities, three deployment patterns, a risk-based capability-impact matrix, and OBO delegation chains with mandatory scope narrowing.

B

CoSAI is the fourth consecutive framework in this evaluation series to converge on a verification posture: behavioral detection and policy enforcement after the Actor has acted, not architectural constraint of what the Actor can do before it acts.

C

The gap is not a criticism of CoSAI's work — it is the field's current frontier. The Actor Identity Lifecycle and topology-first enforcement are what security leaders must architect to close it.

This article is part of a series examining how the field’s leading governance frameworks address identity, trust, and control for AIgentic systems. The central thesis, established in Governing AIgentic Actors: Identity, Trust and Control, is that the governance problem in AIgentic systems is not solved by verifying Actors more carefully; it is dissolved by building environments where the scope of what an Actor can do is constrained before any verification occurs. Each article in this series evaluates one framework against that thesis: what it contributes, where its verification posture reaches a structural limit, and what a security leader must add to close it.

The Coalition for Secure AI is the most technically productive framework body this series has evaluated. Its Agentic Identity and Access Management specification is TSC-approved, operationally specific, and goes further on AIgentic identity than any document yet published by a standards or industry body. CoSAI operates under OASIS Open governance with 45+ member organizations, including Anthropic, Cisco, Google, IBM, Meta, Microsoft, NVIDIA, and OpenAI. A security leader implementing AIgentic governance today should know these outputs and use them. That is the accurate appraisal of CoSAI’s contribution. The structural limit it shares with the CSA Agentic Trust Framework, the IETF AIGA draft, and the NIST NCCoE concept paper is also the field’s current frontier: CoSAI’s security posture is organized around behavioral detection. Detection is necessary and insufficient. This article evaluates both in sequence.


CoSAI Principles for Secure-by-Design Agentic Systems: Three foundational principles establishing governance objectives for AIgentic system deployments: human-governed and accountable, bounded and resilient, and transparent and verifiable.1

CoSAI Agentic Identity and Access Management Framework (March 2026): TSC-approved operational specification covering agent identity lifecycle, three deployment patterns (Standalone, User-delegated, Crew), risk-based capability-impact matrix, OBO delegation chain requirements, and phased adoption roadmap.2

CoSAI MCP Security Taxonomy (January 2026): Nearly 40 threats across 12 categories for Model Context Protocol deployments, distinguishing traditional security concerns amplified by AI mediation from novel attack vectors unique to LLM-tool interactions.3

CoSAI AI Incident Response Framework v1.0: Adapted NIST lifecycle for AI contexts with CACAO-standard playbooks targeting training data poisoning, multi-channel prompt injection, and RAG poisoning.4

CoSAI’s outputs compose a practitioner toolkit: the principles establish governance objectives; the Agentic IAM specification addresses identity; the MCP taxonomy maps the threat surface; the IR framework operationalizes response.


CoSAI’s Agentic IAM framework is the most operationally specific published guidance on AIgentic identity, and it confirms the field’s convergence on a verification posture

What does CoSAI’s Agentic IAM framework actually specify?

The Agentic IAM framework establishes a First-Class Identity Principle: every agent receives a unique, persistent identity that survives its full lifecycle.2 This is a foundational commitment that earlier frameworks in this series stated as a requirement without operationalizing it. CoSAI operationalizes it across three deployment patterns.

Standalone agents bind identity to the runtime environment via TPM or HSM, producing a workload credential that is cryptographically tied to the execution context. User-delegated agents receive short-lived OAuth on-behalf-of (OBO) tokens per call rather than long-lived service account credentials, limiting the credential exposure window to a single task invocation. Crew deployments consist of collaborating Agentlets sharing a service-account identity, carrying group claims that distinguish individual Actors within a shared credential envelope.2
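The credential shapes implied by the three patterns can be sketched as data. This is purely illustrative: the claim names below (`cnf`, `act`, `groups`, `actor_hint`, the URI schemes) are assumptions for exposition, not fields CoSAI specifies.

```python
# Illustrative credential shapes for the three deployment patterns.
# Claim names and values are assumptions, not CoSAI-specified fields.

# Standalone: workload credential bound to the execution environment.
standalone_credential = {
    "sub": "agent://inventory-sync",                  # persistent first-class agent identity
    "cnf": {"tpm_attestation": "<runtime-measurement>"},  # TPM/HSM binding to the runtime
    "exp_seconds": 3600,
}

# User-delegated: short-lived OBO token minted per call.
user_delegated_token = {
    "sub": "user:alice",                              # the delegating principal
    "act": {"sub": "agent://travel-booker"},          # the acting agent (RFC 8693 actor claim)
    "scope": "calendar.read flights.book",
    "exp_seconds": 300,                               # exposure window: one task invocation
}

# Crew: shared service-account identity with per-Agentlet group claims.
crew_token = {
    "sub": "svc:research-crew",                       # shared credential envelope
    "groups": ["agentlet:retriever", "agentlet:summarizer"],
    "actor_hint": "agentlet:retriever",               # distinguishes the individual Actor
    "exp_seconds": 900,
}
```

The contrast to notice is the expiry fields: the user-delegated token's lifetime is an order of magnitude shorter than the standalone workload credential's, because it exists only for a single invocation.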

The framework also identifies five IAM-specific threat vectors absent from classical IAM threat models: standing privilege (persistent credentials that outlive the tasks they were issued for), loss of actor clarity (shared credentials obscuring which Actor performed which action), unsigned or swapped models (an agent running different code than the one provisioned), indirect prompt injection (an external payload redirecting an agent’s actions), and agent collusion (coordinated behavior across Actors to circumvent access controls). These are not extensions of an existing threat taxonomy. They are new threat classes that emerge from the non-deterministic, ephemeral, and multi-hop nature of AIgentic execution.

[DIAGRAM: four-quadrant risk-based capability-impact matrix — x-axis: agent impact (low to high), y-axis: agent capability (low to high); four labeled control baselines: low/low = narrowly scoped service accounts with basic rotation; high/low = short-lived tokens + anomaly detection; low/high = environment attestation + just-in-time credentials; high/high = full Agentic IAM with ephemeral IDs, OBO delegation, ABAC/PBAC, continuous evaluation — use /diagram skill]

How does the delegation chain requirement work in practice?

The delegation chain requirement is the strongest identity constraint in any published framework in this series. Delegated permissions MUST NOT expand beyond the delegating principal’s effective permissions at any hop.2 Maximum delegation depth is enforced via token TTLs and audience restrictions. Revocation at any point in the chain cascades to all downstream delegations.

The implementation uses OAuth token exchange (RFC 8693 [VERIFY: confirm RFC number before publishing]) to pass context across hops without embedding upstream credentials in downstream requests. Tokens carry both the actor identity (the agent) and the subject identity (the upstream service or end-user), preserving visibility across the full execution chain. Rich Authorization Requests (RFC 9396 [VERIFY: confirm RFC number before publishing]) carry structured authorization_details rather than broad scopes, enabling fine-grained per-call permission specification.
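The scope-narrowing invariant can be made concrete in a few lines. The sketch below assumes tokens are plain dicts; a real deployment would mint signed JWTs via RFC 8693 token exchange. The function name, the `remaining_depth` counter, and the `prior` chaining field are illustrative inventions, not CoSAI or RFC terminology.

```python
# A minimal sketch of the MUST-NOT-expand rule at a delegation hop,
# assuming unsigned dict "tokens". Names and fields are illustrative.

class DelegationError(Exception):
    pass

def delegate(parent_token: dict, agent_id: str, requested_scopes: set) -> dict:
    """Mint a downstream OBO token, enforcing scope narrowing and max depth."""
    parent_scopes = set(parent_token["scope"].split())
    if not requested_scopes <= parent_scopes:
        # Any scope the delegating principal does not hold is refused outright.
        raise DelegationError(f"scope expansion: {requested_scopes - parent_scopes}")
    if parent_token["remaining_depth"] <= 0:
        raise DelegationError("maximum delegation depth reached")
    return {
        "sub": parent_token["sub"],                  # subject identity preserved end to end
        "act": {"sub": agent_id,                     # actor identity at this hop
                "prior": parent_token.get("act")},   # visibility across the full chain
        "scope": " ".join(sorted(requested_scopes)),
        "remaining_depth": parent_token["remaining_depth"] - 1,
        "ttl_seconds": min(parent_token["ttl_seconds"], 300),  # TTL can only shrink
    }
```

Because each hop carries both the original subject and the nested chain of actors, a verifying service at the end of the chain can see every principal involved, not just the last one.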

[DIAGRAM: three-hop OBO delegation chain showing scope narrowing: human principal issues scoped credential to orchestrator AIgentic Actor; orchestrator issues narrower-scoped OBO token to task AIgentic Actor; task Actor presents token to verifying service; revocation event at orchestrator cascades to invalidate task Actor credential — use /diagram skill]

The three-phase adoption roadmap maps onto existing enterprise IAM infrastructure. Phase 1 addresses agent discovery, registration, elimination of shared accounts, and immutable logging: the most immediate governance gaps, achievable without new infrastructure. Phase 2 adds short-lived tokens and ABAC/PBAC for higher-risk Actors. Phase 3 extends to cross-domain delegation chains and continuous evaluation. Security leaders can implement Phase 1 today. That is what distinguishes CoSAI’s guidance from IETF and NIST outputs, which require waiting for protocol adoption across the ecosystem.
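Phase 1's mechanics are simple enough to sketch: a registry that refuses duplicate (shared) agent accounts and records every event in a hash-chained, tamper-evident log. CoSAI specifies the objectives, not this API; the class and field names here are assumptions.

```python
# A sketch of Phase 1: agent registration with no shared accounts and a
# hash-chained append-only log. Illustrative only; not a CoSAI-specified API.
import hashlib
import json

class AgentRegistry:
    def __init__(self):
        self._agents = {}
        self._log = []
        self._prev_hash = "0" * 64  # genesis value for the hash chain

    def register(self, agent_id: str, owner: str, scopes: list) -> None:
        if agent_id in self._agents:
            # One identity per Actor: re-registration would create a shared account.
            raise ValueError(f"{agent_id} already registered: no shared accounts")
        self._agents[agent_id] = {"owner": owner, "scopes": scopes}
        self._append({"event": "register", "agent": agent_id, "owner": owner})

    def _append(self, event: dict) -> None:
        # Each entry commits to the previous entry's digest, so any later
        # tampering with the log breaks the chain and is detectable.
        entry = {"prev": self._prev_hash, **event}
        digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._log.append((digest, entry))
        self._prev_hash = digest
```

Nothing here requires new infrastructure: a database table and a digest column are enough, which is the point of Phase 1.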

Behavioral detection is necessary and insufficient: the verification posture’s structural limit and what closes it

Where does behavioral detection reach its structural limit?

CoSAI’s Agentic IAM framework specifies what an AIgentic Actor is allowed to do through identity, delegation constraints, and policy evaluation. It specifies how deviations should be detected and how response should proceed. What it does not specify is an architectural layer that physically constrains an Actor’s action space before any policy check runs. [VERIFY: this is inferred from the Agentic IAM framework and Principles documents — confirm neither specifies a topological enforcement substrate before publishing]

This is the verification posture in precise terms: provision identity correctly, grant minimum necessary permissions, monitor behavior, respond to anomalies. Every element is correct and necessary. The structural limit is that policy evaluation and behavioral monitoring both operate at or after the point of action. An Actor making a request to a service is already holding a credential. The policy engine evaluates whether the credential justifies the request. The monitoring system logs whether the outcome was expected. Neither constrains the action space before the Actor attempts to act.

For deterministic systems, this posture is close to sufficient: well-specified policies catch prohibited requests. For non-deterministic AIgentic Actors, however, the action space is open-ended by design. The Actor’s behavior at any execution step is a function of its context window and prior tool outputs. Neither the policy engine nor the monitoring system has evaluated those inputs when the Actor forms its next request. [VERIFY: this claim is inferred from CoSAI’s Principles and Agentic IAM framework — confirm neither document directly addresses the non-determinism gap before publishing] The verification posture assumes a deterministic mapping from role to action that AIgentic execution does not provide.

What does a security leader need to add to CoSAI’s guidance?

Three additions close the structural gap CoSAI’s guidance leaves open.

The first is a topology-first enforcement substrate: an architectural layer that constrains an Actor’s action space before any identity check or policy evaluation runs. As detailed in Treat Your AI Agents Like You Treat Untrusted Code, the Semantic Proxy Pattern implements this via out-of-band semantic enforcement. The proxy evaluates the semantic content of the Actor’s request against a set of allowed operations and rejects requests that exceed scope before they reach a policy engine. Its architectural position is what makes it an enforcement layer rather than an additional detection layer: it sits alongside the request path, not in it, with no credential to steal or bypass.
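The enforcement step of such a substrate can be sketched minimally. The sketch below assumes requests arrive as structured (operation, resource) pairs and that the allowed action space is fixed at deploy time; the allowlist contents and function name are illustrative, not drawn from the Semantic Proxy Pattern article.

```python
# A minimal sketch of topology-first enforcement, assuming structured
# (operation, resource) requests. Allowlist entries are illustrative.
from fnmatch import fnmatch

ALLOWED_OPERATIONS = {
    # operation -> resource patterns this Actor may touch, fixed at deploy time
    "read":  ["inventory/*", "catalog/*"],
    "write": ["inventory/drafts/*"],
}

def enforce(operation: str, resource: str) -> bool:
    """Reject any request outside the pre-declared action space.

    This runs before identity or policy evaluation: a request the topology
    does not allow never reaches the policy engine, regardless of what
    credential the Actor holds.
    """
    patterns = ALLOWED_OPERATIONS.get(operation, [])
    return any(fnmatch(resource, pattern) for pattern in patterns)
```

The design choice worth noting is that `enforce` takes no credential argument at all: the check is about what the request is, not who is making it, which is what distinguishes constraint of the action space from policy evaluation.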

The second addition is the Actor Identity Lifecycle as an operational discipline. CoSAI’s Agentic IAM framework specifies identity structure: what claims a credential should carry, how delegation should work, what logging is required. The Actor Identity Lifecycle is the governance process that produces and maintains those credentials: provisioning with explicit scope decisions, delegation with signed audit records, regular audit of active credentials against current deployment state, and revocation when scope changes. The lifecycle is an operational commitment. No IAM specification substitutes for it.
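The audit step of that lifecycle, reconciling issued credentials against the current deployment state, can be sketched as follows. The record structure and field names are assumptions for illustration; the governance decision of who owns the findings is the part no code supplies.

```python
# A sketch of the lifecycle's audit step: reconcile active credentials
# against the deployment inventory. Field names are illustrative.
from datetime import datetime, timedelta, timezone

def audit(active_credentials: list, deployed_actors: set) -> dict:
    """Return credential IDs that should be revoked or investigated."""
    now = datetime.now(timezone.utc)
    findings = {"orphaned": [], "expired": []}
    for cred in active_credentials:
        if cred["actor"] not in deployed_actors:
            findings["orphaned"].append(cred["id"])  # Actor no longer deployed
        if cred["expires"] < now:
            findings["expired"].append(cred["id"])   # standing-privilege risk
    return findings
```

An orphaned credential is exactly the standing-privilege threat vector CoSAI names: a credential that outlived the task, and the Actor, it was issued for.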

The third addition is explicit governance of Agentlets, the spawned sub-Actors that orchestrator Agents produce at runtime. CoSAI’s Agentic IAM framework addresses agent-to-agent delegation for known, pre-provisioned identities. Agentlets are frequently not pre-provisioned: they are created dynamically at runtime, may inherit credentials from the orchestrator’s scope, and terminate when the task ends. The inheritance model is identified in CoSAI’s threat taxonomy as loss of actor clarity. The governance question of whether Agentlets receive provisioned identities at spawn or operate under inherited scope, and how the lifecycle applies to ephemeral Actors, is raised more precisely by CoSAI than any prior framework. It is not yet fully resolved.5
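The provisioned-at-spawn alternative to credential inheritance can be sketched directly. This is one possible resolution of the open question, not a CoSAI-specified mechanism; the function and field names are assumptions.

```python
# A sketch of provisioning a distinct Agentlet identity at spawn instead of
# inheriting the orchestrator's credential. Illustrative, not CoSAI-specified.
import uuid

def spawn_agentlet(orchestrator: dict, task_scopes: set) -> dict:
    """Mint a distinct, task-scoped identity for the spawned sub-Actor."""
    parent_scopes = set(orchestrator["scopes"])
    if not task_scopes <= parent_scopes:
        # Scope narrowing applies at spawn just as it does at delegation hops.
        raise PermissionError("Agentlet scope may not exceed orchestrator scope")
    return {
        "id": f"agentlet://{uuid.uuid4()}",  # distinct identity: preserves actor clarity
        "parent": orchestrator["id"],        # auditable spawn lineage
        "scopes": sorted(task_scopes),       # task-scoped, not inherited wholesale
        "ephemeral": True,                   # revoked when the task ends
    }
```

Because each Agentlet carries its own identity and a pointer to its parent, an audit log can attribute every action to a specific sub-Actor, which is precisely what the loss-of-actor-clarity threat vector says inherited shared credentials cannot do.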

Frequently asked questions

What is CoSAI and how does it differ from IETF, NIST, and CSA in this series?

CoSAI is the Coalition for Secure AI, an OASIS Open Project with 45+ member organizations including every major AI vendor. It produces implementation guidance rather than protocol specifications (IETF) or federal standards (NIST). Its outputs are immediately actionable in enterprise environments without waiting for broad protocol adoption. The CSA Agentic Trust Framework is the closest comparator: both are practitioner-facing, both include maturity guidance. [VERIFY: the research brief notes CoSAI lacks a formal maturity model while ATF has a four-tier model — confirm whether CoSAI’s phased adoption roadmap qualifies as equivalent “maturity guidance” before publishing] CoSAI’s deliverable portfolio is broader (supply chain, threat taxonomy, IR framework, and identity) and its institutional backing is deeper. Both converge on the same verification posture.

What does the Agentic IAM framework specify that is genuinely new?

Five things are new relative to existing IAM standards and prior frameworks in this series: the First-Class Identity Principle applied to ephemeral agents; three deployment patterns scoped to the agent lifecycle; the capability-impact matrix as a risk-based control baseline selector; the delegation chain scope narrowing requirement (permissions MUST NOT expand at any hop); and the five IAM-specific threat vectors that do not appear in classical IAM threat models. OAuth token exchange and Rich Authorization Requests are existing standards, reapplied to the agent context. The Agentic IAM framework is the first document to specify how they compose for multi-hop AIgentic delegation.

What does “behavioral detection posture” mean architecturally, and why is it insufficient for non-deterministic Actors?

A behavioral detection posture means: provision identity and permissions, monitor behavior for deviation from expected patterns, respond when anomalies are detected. For deterministic systems, this posture is close to sufficient. For non-deterministic AIgentic Actors, the problem is that the Actor’s action space is open-ended by design: what the Actor does next depends on its context window and prior tool outputs, not solely on its role. Detection is still necessary. It is insufficient as the primary containment mechanism because it operates after the Actor has decided to act, not before.

What is the Actor Identity Lifecycle, and does CoSAI’s framework cover it?

The Actor Identity Lifecycle is the operational discipline of provisioning, scoping, delegation, audit, and revocation applied to every AIgentic Actor in the environment. CoSAI’s Agentic IAM framework specifies the technical components: identity structure, delegation constraints, logging requirements. It does not define the operational process that produces and maintains them. The lifecycle is a governance commitment: someone owns the provisioning decision, someone audits active credentials against current deployment state, someone executes revocation when scope changes. CoSAI’s framework provides the best available technical foundation for those decisions. The decisions themselves are not a technical problem.

How does CoSAI’s supply chain workstream connect to actor identity governance?

CoSAI extends SLSA provenance principles to AI models, enabling verification that the model running in a given execution context matches the model that was provisioned and signed. This directly addresses the unsigned/swapped model threat vector. For security leaders implementing the Actor Identity Lifecycle, the practical integration is model attestation at spawn: each Actor’s credential includes a binding to its model hash, and the relying service verifies the hash before accepting the credential. CoSAI’s supply chain workstream provides the signing infrastructure; the Actor Identity Lifecycle is the governance process that makes attestation a provisioning requirement rather than an optional control.
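The attestation check itself reduces to a hash binding and a comparison. The sketch below uses a bare SHA-256 digest in place of real SLSA provenance and signature verification, and the function names are illustrative.

```python
# A sketch of model attestation at spawn: bind the credential to the model
# hash, then verify before accepting. Uses a bare digest in place of full
# SLSA provenance verification; names are illustrative.
import hashlib

def model_hash(model_bytes: bytes) -> str:
    return "sha256:" + hashlib.sha256(model_bytes).hexdigest()

def mint_credential(actor_id: str, model_bytes: bytes) -> dict:
    # At provisioning time, the credential commits to the signed model.
    return {"sub": actor_id, "model": model_hash(model_bytes)}

def accept(credential: dict, running_model: bytes) -> bool:
    """Relying service: reject if the running model differs from the provisioned one."""
    return credential["model"] == model_hash(running_model)
```

A swapped model fails the comparison even though the credential itself is otherwise valid, which is what closes the unsigned/swapped-model vector at the point of use rather than only at build time.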

Footnotes

  1. Coalition for Secure AI, “Announcing the CoSAI Principles for Secure-by-Design Agentic Systems,” OASIS Open, 2025. https://www.coalitionforsecureai.org/announcing-the-cosai-principles-for-secure-by-design-agentic-systems/

  2. Coalition for Secure AI, “Agentic Identity and Access Management,” Workstream 4, OASIS Open Project, approved March 20, 2026. https://github.com/cosai-oasis/ws4-secure-design-agentic-systems/blob/main/agentic-identity-and-access-control.md

  3. Coalition for Secure AI, “Securing the AI Agent Revolution: A Practical Guide to Model Context Protocol Security,” OASIS Open, January 2026. https://www.oasis-open.org/2026/01/27/coalition-for-secure-ai-releases-extensive-taxonomy-for-model-context-protocol-security/

  4. Coalition for Secure AI, “Defending AI Systems: A New Framework for Incident Response in the Age of Intelligent Technology,” OASIS Open. https://www.coalitionforsecureai.org/defending-ai-systems-a-new-framework-for-incident-response-in-the-age-of-intelligent-technology/

  5. [citation needed — CoSAI’s Agentic IAM framework addresses pre-provisioned agent-to-agent delegation but the ephemeral Agentlet governance case is identified as a threat (loss of actor clarity) without a fully resolved specification; no separate CoSAI publication on dynamic Agentlet identity found at time of writing]

Written by

Charles Carrington

Founder, Attribit-ID  ·  LinkedIn