This article is part of a series examining how the field’s leading governance frameworks address identity, trust, and control for AIgentic systems. The central thesis, established in Governing AIgentic Actors: Identity, Trust and Control, is that the governance problem in AIgentic systems is not solved by verifying Actors more carefully; it is dissolved by building environments where the scope of what an Actor can do is constrained before any verification occurs. Each article in this series evaluates one framework against that thesis: what it contributes, where its verification posture reaches a structural limit, and what a security leader must add to close the gap.
The Coalition for Secure AI has done something governance-significant. Forty-five organizations, including Anthropic, Cisco, Google, IBM, Meta, Microsoft, NVIDIA, and OpenAI, have formally agreed that AIgentic systems must be human-governed, accountable, bounded, and transparent. That agreement is not a technical position. It is a governance declaration. When the organizations building the systems publish requirements for meaningful human control as an architectural standard, boards that have not yet demanded those controls have a documented gap between what the field now requires and what their organizations have implemented. This is the fourth evaluation in this series to reach the same finding. The convergence is the finding.
The organizations building AIgentic systems have agreed in writing: human governance is an architectural requirement
The Coalition for Secure AI operates as an OASIS Open Project, governed through an international standards consortium with a Project Governing Board and a Technical Steering Committee. The Technical Steering Committee has approved three Principles for Secure-by-Design Agentic Systems, which define what 45+ member organizations have formally agreed AIgentic governance requires.
Human-governed and Accountable: AIgentic systems must be architected for meaningful human control, with clear and shared accountability, constrained by authority boundaries that reflect the organization’s stated risk tolerance. Governance is an architectural decision, not a policy overlay applied after deployment.
Bounded and Resilient: AIgentic systems must operate with strict, purpose-specific entitlements on what they can access and do, with continuous validation that they remain within those boundaries. An AIgentic Actor without defined, enforced boundaries is an Actor operating without governance.
Transparent and Verifiable: AIgentic systems must generate comprehensive records of inputs, plans, decisions, communications, and outputs, sufficient for real-time monitoring and forensic analysis. Accountability without traceability is not accountability.
The three principles compose a single governance requirement. For every AIgentic Actor operating in your environment, your organization must be able to answer four questions: who authorized it, to do what, within what boundaries, and what did it actually do.
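The four questions map onto a minimal pair of records. The following is an illustrative sketch only, under the assumption of a simple scope-string model; the class and field names are hypothetical, not drawn from any CoSAI schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ActorAuthorization:
    """Answers the first three questions for one AIgentic Actor."""
    actor_id: str              # unique, persistent identity of the Actor
    authorized_by: str         # who authorized it
    purpose: str               # to do what
    allowed_scopes: frozenset  # within what boundaries

@dataclass(frozen=True)
class ActionRecord:
    """Answers the fourth question: what did it actually do."""
    actor_id: str
    action: str
    scope_used: str
    timestamp: datetime

def accountable(auth: ActorAuthorization, record: ActionRecord) -> bool:
    """An action is traceable to its authorization only if the Actor
    matches and the scope it used falls inside the granted boundaries."""
    return (record.actor_id == auth.actor_id
            and record.scope_used in auth.allowed_scopes)
```

The point of the sketch is the failure mode: if `accountable()` cannot be evaluated because one of the two records was never created, that absence is itself the governance gap the principles describe.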
CoSAI’s Agentic Identity and Access Management framework, approved by its Technical Steering Committee in March 2026, is the most operationally detailed guidance any body has published on this question. It establishes that each agent must carry unique, persistent identity across its full lifecycle. It requires that delegation chains narrow in scope at every step: permissions cannot expand beyond the delegating principal’s authority, and revocation at any point cascades to all downstream delegations. It proposes a risk-based matrix that scales governance controls to each Actor’s capability and impact. This is specific operational guidance, not aspiration.
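The two delegation rules, monotonic narrowing and cascading revocation, can be sketched in a few lines. This is an illustrative model of the semantics the framework describes, not its API; the class and method names are assumptions:

```python
class Delegation:
    """One link in a delegation chain. Scopes may only narrow, never expand."""

    def __init__(self, principal, scopes, parent=None):
        scopes = frozenset(scopes)
        if parent is not None and not scopes <= parent.scopes:
            # Permissions cannot expand beyond the delegating principal's authority.
            raise ValueError("delegated scopes must be a subset of the parent's")
        self.principal = principal
        self.scopes = scopes
        self.parent = parent
        self._revoked = False

    def delegate(self, principal, scopes):
        """Issue a narrower grant to a downstream principal."""
        return Delegation(principal, scopes, parent=self)

    def revoke(self):
        self._revoked = True

    def is_valid(self):
        # Revocation at any point cascades to all downstream delegations:
        # a link is valid only if no ancestor in the chain has been revoked.
        node = self
        while node is not None:
            if node._revoked:
                return False
            node = node.parent
        return True
```

In this model, revoking an orchestrator's grant instantly invalidates every worker delegated beneath it, because validity is computed up the chain rather than stored per node.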
Four frameworks have converged on the same finding, and boards that have not responded have a documented accountability gap
The convergence across four frameworks is not accidental. The Cloud Security Alliance, the IETF, NIST, and now CoSAI have each, independently, organized their AIgentic governance guidance around the same three requirements: human control, bounded entitlements, and traceability. [VERIFY: this characterization of CSA ATF, IETF AIGA, and NIST NCCoE is sourced from prior articles in this series, not from this brief — confirm all three frameworks are accurately characterized as requiring these three elements before publishing] Each reached this position from a different institutional mandate and a different approach. The conclusion is the same.
Each framework is also organized around the same verification posture: detect what happened, monitor for deviation, respond to anomalies. [VERIFY: this is inferred from CoSAI’s Principles document and its three prior counterparts — confirm that none specifies a topological enforcement substrate before publishing] The question of what constrains an Actor’s action space before any detection occurs remains open across all four.
For boards, the convergence establishes a defensible standard. It is no longer one body’s position. It is the published conclusion of four independent bodies with different institutional mandates. When a regulator, an auditor, or a legal review asks what governance standard applied to a specific AIgentic deployment, the answer now exists in four independent documents. The question is whether your organization’s governance posture reflects those documents or predates them.
The regulatory instruments compound this. The EU AI Act establishes risk management obligations for high-risk AI system deployers throughout the AI lifecycle [VERIFY: confirm instrument text, article number, and current enforcement status]. Its record-keeping provisions require automatic logging of operational events [VERIFY: confirm instrument text, article number, and current enforcement status]. DORA establishes digital operational resilience requirements for financial entities deploying AI systems [VERIFY: confirm instrument text, article number, and current enforcement status]. NIS2 requires supply chain security and incident handling from essential and important entities [VERIFY: confirm instrument text, article number, and current enforcement status].
CoSAI’s frameworks are candidate tools for satisfying these obligations. Whether they satisfy them for a specific deployment is a legal determination. What boards can establish now, based on four frameworks converging on the same requirements, is a governance policy: no AIgentic deployment proceeds without demonstrated alignment with the three foundational principles. As Who’s Running Your Organization? established, the accountability gap in AIgentic systems is a board governance gap. An Actor operating without a formal authorization chain, defined boundaries, and a traceable record belongs to the organization that permitted the deployment. The industry has now said what governance requires. The board’s job is to demand it.
Frequently asked questions
What is CoSAI and why should a board pay attention to it?
CoSAI is the Coalition for Secure AI, an OASIS Open Project with 45+ member organizations including Anthropic, Cisco, Google, IBM, Meta, Microsoft, NVIDIA, and OpenAI. It produces vendor-neutral security standards and guidance for AI systems. Boards should pay attention because it represents the broadest industry consensus on AIgentic governance published to date. When the organizations building the systems agree on what governance requires, that agreement carries weight in procurement decisions, audit reviews, and regulatory contexts.
What does “human-governed and accountable” mean as a board accountability question?
It means your organization must be able to answer four questions for every AIgentic Actor in your environment: who authorized it, to do what, within what boundaries, and what did it actually do. If those questions cannot be answered from existing records and controls, there is a gap between CoSAI’s foundational requirement and your organization’s current posture. That gap is not a technical failure. It is a governance decision that has not been made.
Does following CoSAI’s guidance satisfy our regulatory obligations?
CoSAI’s frameworks are candidate implementation tools, not compliance certifications. Whether alignment with CoSAI satisfies EU AI Act risk management obligations, record-keeping requirements, or DORA’s ICT risk management provisions requires legal determination based on your organization’s specific deployment context [VERIFY: confirm instrument text, article number, and current enforcement status for all cited instruments]. Boards should treat CoSAI alignment as a governance signal, not a legal conclusion.
What is the board’s specific accountability here?
The board’s accountability is in the governance decision to authorize AIgentic deployments. If deployments proceed without formal governance requirements attached, without demanding the human control, bounded entitlements, and traceability that four frameworks now require, the board has accepted accountability for what those systems do. CoSAI’s convergence with three prior frameworks provides the basis for a clear board policy: no AIgentic deployment proceeds without demonstrated alignment with the three foundational principles. The question every board should be able to answer is whether that policy exists today.