
What the IETF's AI Governance Draft Means for Board Accountability

Five tiers, an Immutable Kernel, and a governance vocabulary boards can use today: what the IETF AIGA draft establishes

A. AIGA defines five risk tiers for AI agents based on potential for harm, financial impact, and the reversibility of actions. Boards can apply this vocabulary independent of AIGA's standards-track fate.

B. Tier assignment is a capability question. Accountability is a different question: who authorized this agent, on whose behalf, and who owns that decision if something goes wrong?

C. The infrastructure requirements AIGA ties to each tier are the clearest available benchmark for whether an organization's agent deployment has the governance architecture it claims.

This article is part of a series examining how the field’s leading governance frameworks address identity, trust, and control for AIgentic systems. The central thesis, established in Governing AIgentic Actors: Identity, Trust and Control, is that the governance problem in AIgentic systems is not solved by verifying Actors more carefully; it is dissolved by building environments where the scope of what an Actor can do is constrained before any verification occurs. Each article in this series evaluates one framework against that thesis: what it contributes, where its verification posture reaches a structural limit, and what a security leader must add to close it.

The IETF has published a governance specification for AI agents. It is not a standard yet. It is an individual Internet-Draft that may expire in July 2026 without ever reaching the standards track. That distinction matters for legal counsel. It does not diminish the draft’s value as a governance vocabulary. Our research shows the AI Governance and Accountability Protocol (AIGA) is the most technically specific attempt yet to answer the question boards are quietly asking: what does a well-governed AI agent deployment actually look like?

The answer involves five tiers, a tamper-resistant enforcement kernel, and infrastructure requirements that most enterprise AI deployments do not yet meet.


T0 (Minimal): Chatbots and filters. Requires software attestation, audit-only approval, and liveness check-ins every 24 hours.

T1 (Low): Code assistants and content creators. Requires software attestation plus audit, asynchronous 5-second approval for consequential actions, and check-ins every hour.

T2 (Medium): Trading agents and automated vehicles. Requires certified hardware or a Trusted Execution Environment (TEE, a hardware-isolated secure area of a processor that protects code and data even from the device's own operating system), asynchronous 1-second approval, and check-ins every 5 minutes.

T3 (High): Critical infrastructure agents. Requires multiple TEEs plus a Hardware Security Module (HSM, a dedicated processor for managing cryptographic keys), synchronous approval within 100 milliseconds for every critical operation, and continuous check-ins.

T4 (Restrict): Weaponized systems and experimental general-purpose agents. Requires air-gapped multiparty controls, human-in-the-loop approval for all consequential actions, and a real-time response from a human review board.
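
Read as a policy table, the five tiers translate directly into a machine-checkable structure. The sketch below is illustrative only: the Python names and field choices are ours, paraphrasing the tier list above, not a normative schema from the draft.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TierPolicy:
        """One AIGA risk tier, paraphrased from the draft's tier list."""
        label: str
        attestation: str                # required root of trust
        approval: str                   # approval mode and latency bound
        checkin_interval_s: int | None  # liveness check-in; None means continuous/real-time

    # Illustrative encoding of T0-T4; the values paraphrase the draft, the names are ours.
    AIGA_TIERS = {
        "T0": TierPolicy("Minimal", "software attestation", "audit-only", 24 * 3600),
        "T1": TierPolicy("Low", "software attestation + audit", "async, 5 s", 3600),
        "T2": TierPolicy("Medium", "certified hardware or TEE", "async, 1 s", 5 * 60),
        "T3": TierPolicy("High", "multiple TEEs + HSM", "sync, 100 ms per critical op", None),
        "T4": TierPolicy("Restrict", "air-gapped multiparty controls",
                         "human review board, real-time", None),
    }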

Tier assignment is based on potential for harm, financial impact, data sensitivity, autonomy, self-modification capability, and irreversibility of actions.

The Immutable Kernel enforces these requirements at the hardware level: a Trusted Computing Base the agent cannot modify, containing a Policy Enforcer, an Action Interceptor, an Audit Logger, and an Authority Client. Four Constitutional Constraints define what cannot be changed under any circumstances: the kernel code cannot be modified, logging cannot be disabled, approval cannot be bypassed for T2 agents and above, and a kill switch must be implemented.
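
Structurally, the kernel is an interceptor that sits between the agent and everything it touches. The component names below (Policy Enforcer, Action Interceptor, Audit Logger, Authority Client) come from the draft's vocabulary; the class itself and its method signatures are our own hypothetical rendering, not code from the specification.

    class ImmutableKernel:
        """Structural sketch of the draft's Trusted Computing Base; signatures are hypothetical."""

        def __init__(self, tier: str, policy_enforcer, authority_client, audit_logger):
            self.tier = tier                   # "T0" .. "T4"
            self.policy = policy_enforcer      # evaluates actions against tier policy
            self.authority = authority_client  # requests approval from the central Authority
            self.audit = audit_logger          # append-only; disabling it is forbidden

        def intercept(self, action) -> bool:
            """Action Interceptor: every consequential action passes through here."""
            self.audit.log(action)             # constraint: logging cannot be disabled
            if not self.policy.permits(action):
                return False
            if self.tier >= "T2":              # lexicographic compare is safe for "T0".."T4"
                return self.authority.approve(action)  # constraint: no approval bypass at T2+
            return True

        def kill_switch(self):
            """Constraint: a kill switch must be implemented."""
            self.audit.log("KILL")
            raise SystemExit("agent halted by kill switch")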


What does the IETF’s AIGA draft establish?

The IETF AIGA draft, published January 2026, is an individual submission classified as Informational, with no formal working group or standards-track standing. It specifies a central Authority that issues each agent a unique identity credential: an AIGA-ID binding the agent’s tier, geographic region, and cryptographic fingerprint, together with an X.509 certificate (a standard digital identity credential issued by a trusted authority) signed by that central Authority. Agents at T2 and above must hold their private keys inside certified hardware. At T3, peer consensus among other agents is required to rotate identity keys. At T4, no single party controls the approval decision.
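
Concretely, the credential can be pictured as a small signed record. AIGA specifies what the AIGA-ID binds (tier, geographic region, cryptographic fingerprint, plus an Authority-signed X.509 certificate); the structure and field names below are illustrative assumptions, not the draft's wire format.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AigaID:
        """Illustrative AIGA-ID record; field names are ours, not the draft's."""
        agent_fingerprint: str  # cryptographic fingerprint of the agent
        tier: str               # "T0" .. "T4"
        region: str             # geographic region the identity is bound to
        x509_cert: bytes        # certificate signed by the central Authority

        def requires_hardware_keys(self) -> bool:
            # T2 and above must hold private keys inside certified hardware.
            return self.tier >= "T2"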

These are not aspirational principles. They are specific requirements with enforcement mechanisms. The difference between AIGA and most governance language about AI agents is that AIGA specifies infrastructure, not intent. A board that adopts “we govern our AI agents responsibly” has made no verifiable commitment. A board that asks “what tier are our deployed agents, and do we have the infrastructure AIGA requires for that tier?” has asked a question with a specific, auditable answer.

The draft expires July 30, 2026. An expiry date on an informational document does not mean the document was wrong. It means the author must renew or allow it to lapse. The governance vocabulary AIGA introduces will not expire with the document.

What do the five tiers mean for board accountability?

Tier assignment is a technical assessment. Accountability is a governance decision. The two are not the same, and the confusion between them is where most board-level AI governance currently fails.

AIGA’s five tiers classify agents by what they can do and what harm they could cause. That is a necessary input to governance. It is not sufficient. Who’s Running Your Organization? frames the governance gap precisely: an organization that can characterize its agent population by risk tier, yet cannot answer who authorized each agent, on whose behalf it acts, and who owns accountability for its actions, has risk assessment without governance. The AIGA tier tells the security team what enforcement infrastructure is required. The accountability question tells the board who is responsible when that infrastructure fails or is absent.

The agent governance problem boards must solve is prospective: it exists before any incident surfaces it. The Identity Inheritance Model (where agents operate under the inherited permissions of the user who deployed them, with no independent identity and no scoped authorization) does not fail the AIGA tier assessment at T1. A code assistant running with a developer’s credentials, accessing the developer’s systems, looks like a well-governed T1 deployment on a tier checklist. It looks like an ungoverned AIgentic Actor on an accountability audit.

The board’s accountability question is not “what tier is this agent?” It is “who authorized this agent, what was it authorized to do, and who do we call when something goes wrong?” AIGA makes the first question answerable. The second set of questions requires a governance discipline the draft does not prescribe.

What should boards ask their security and technology leadership?

The five AIGA tiers generate specific governance questions boards can bring to their next technology review. They do not require AIGA to become a standard to be useful.

For every deployed AI agent or autonomous system, the questions are: What tier does this agent’s risk profile assign it? Does our deployment have the enforcement infrastructure that tier requires? For T2 agents, that means certified hardware or a TEE; for T3, synchronous approval gates, an HSM, and continuous monitoring; for T4, a human review board infrastructure that may not yet exist in most organizations. If the answer to any of those questions is “we don’t know” or “no,” the organization has a governance gap the board is positioned to close. Closing it requires authority and budget that the security team alone does not hold.
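
One way to turn those questions into an auditable artifact is a per-agent gap report: the infrastructure the tier requires, minus what the deployment actually has. A minimal sketch, assuming the organization maintains its own infrastructure inventory; the requirement strings paraphrase the tier list above and every name here is ours.

    # Required enforcement infrastructure per tier, paraphrased from the tier list above.
    REQUIRED_INFRA = {
        "T0": {"software attestation"},
        "T1": {"software attestation"},
        "T2": {"TEE or certified hardware"},
        "T3": {"TEE or certified hardware", "HSM",
               "synchronous approval gate", "continuous monitoring"},
        "T4": {"air gap", "multiparty controls", "human review board"},
    }

    def governance_gaps(tier: str, deployed: set[str]) -> set[str]:
        """Return the tier requirements this deployment does not meet."""
        return REQUIRED_INFRA[tier] - deployed

    # Example: a T3 agent running with only a TEE has three auditable gaps.
    print(governance_gaps("T3", {"TEE or certified hardware"}))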

The second governance question is structural: who is the central Authority in our agent deployment? AIGA specifies that a central Authority issues identity credentials and approves consequential actions. In an enterprise deployment, that Authority is an organizational role with defined accountability. If no one has been assigned that role, no one is governing the agents. The Agent Problem: What Boards Are Not Being Told About AI Oversight frames what happens when that role is empty: the oversight infrastructure is absent, and attribution after an incident stops at “the AI system did this” rather than the specific decision that authorized it.

The governance act the board must drive is not compliance with AIGA. It is the decision to make agent accountability traceable to a specific role with specific authority.


Frequently asked questions

Does AIGA compliance protect the organization legally?

No. AIGA is an individual Internet-Draft classified as Informational, with no regulatory force in any jurisdiction. What counts as legal protection from AI-related incidents is not yet well established; at minimum it requires compliance with applicable law, which varies by jurisdiction, sector, and the nature of the incident. AIGA’s value is as a governance benchmark, not a legal safe harbor.

Should boards wait for AIGA to become an IETF standard before acting?

The draft expires July 30, 2026. Whether it achieves standards-track status is a question the IETF will answer over time; boards cannot wait on that process to establish AI agent governance. The tier vocabulary and infrastructure requirements AIGA specifies are useful governance tools now, regardless of AIGA’s eventual status. Only 23% of organizations have a formal strategy for agent identity management.1 That gap does not shrink while waiting for a standard.

What is the difference between an agent’s risk tier and an accountability decision?

A risk tier classifies what an agent can do and what harm it could cause. An accountability decision assigns who authorized the agent, what it was authorized to do, and who owns the outcome if something goes wrong. AIGA answers the classification question with precision. The accountability decision is a governance act that only the organization (and ultimately the board) can make. The tier tells you how much enforcement infrastructure you need. The accountability decision tells you who signs off when that infrastructure is absent.

Footnotes

  1. Strata Identity, “The AI Agent Identity Crisis: A 2026 Guide,” Cloud Security Alliance, 2025. Survey of 285 IT and security professionals, conducted September–October 2025. https://www.strata.io/blog/agentic-identity/the-ai-agent-identity-crisis-new-research-reveals-a-governance-gap/


Written by

Charles Carrington

Founder, Attribit-ID  ·  LinkedIn