
The Hidden Compliance Exposure in Your AI Deployment

Why approving agentic AI without governing its credentials is a duty-of-care failure, not an IT gap

Key takeaways

A. AI agents generate non-human identities — credentials, tokens, and service accounts — faster than existing inventory processes can track them, creating an audit gap that is invisible until it becomes a breach.

B. DORA Article 9 and EU AI Act Article 14 impose specific obligations on access control and human oversight of AI systems — obligations that most deployed agentic architectures do not currently satisfy.

C. Approving agentic AI deployment without a parallel investment in non-human identity governance is a board-level decision with board-level regulatory consequences.

The board approved the AI strategy. It probably did not approve what came with it: a growing population of non-human identities that no existing governance structure can account for.

Every AI agent your organization deploys creates or consumes credentials — API keys, OAuth tokens, service accounts, short-lived session tokens. In a single agentic pipeline, one agent spawns Agentlets, each Agentlet calls external APIs, and each API call may generate or inherit a credential. A 451 Alliance analyst report finds that non-human identities already outnumber human identities 50 to 1 in enterprise environments. Agentic AI does not change the ratio. It changes the rate, and the rate is what breaks existing governance.

The governance problem this creates is not technical. It is a duty-of-care problem. Two regulatory instruments that many of your organization’s AI deployments already fall under are specific about what is required. The question is whether the board can demonstrate it knew.


Non-human identity (NHI): Any credential, token, or service account that represents a system, application, or agent rather than a human user. NHIs include the API keys AI agents use to call external services, the OAuth tokens that authorize one system to act on behalf of another, and the workload credentials that grant an agent access to cloud resources.

NHI sprawl: The condition in which NHIs multiply faster than any inventory process can track, leaving the organization unable to answer basic questions: which agents hold which credentials, what those credentials can access, and which of them are still active.

Human sponsorship model: A governance requirement that every NHI is formally owned by an identified human accountable for its scope, its use, and its revocation. Without this model, NHIs exist without authorization chains, and without authorization chains, no audit trail is complete.

Together, these three concepts describe the governance exposure: sprawl without sponsorship produces a population of active credentials that no one formally owns and over which no evidence of control can be shown to a regulator.
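The exposure is easiest to see as an inventory question. As an illustration only — the record fields, agent names, and scopes below are hypothetical, not drawn from any specific platform — this is a minimal sketch of the query a sprawl audit has to be able to answer: which active credentials have no accountable human owner.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class NHIRecord:
    """One non-human identity: a credential held by an agent."""
    credential_id: str
    agent: str
    scope: str                      # what the credential can access
    sponsor: Optional[str]          # accountable human, or None
    expires_at: Optional[datetime]  # None means a long-lived secret

def unaccounted_for(inventory: list[NHIRecord]) -> list[NHIRecord]:
    """Active credentials no identified human formally owns --
    the population described above as sprawl without sponsorship."""
    now = datetime.now(timezone.utc)
    return [
        r for r in inventory
        if r.sponsor is None
        and (r.expires_at is None or r.expires_at > now)
    ]

inventory = [
    NHIRecord("key-001", "invoice-agent", "erp:read", "j.doe", None),
    NHIRecord("key-002", "invoice-agent", "erp:write", None, None),
    NHIRecord("tok-003", "report-agent", "warehouse:read", None,
              datetime(2020, 1, 1, tzinfo=timezone.utc)),  # already expired
]

print([r.credential_id for r in unaccounted_for(inventory)])  # ['key-002']
```

The point of the sketch is not the data structure; it is that without a sponsor field populated at creation time, the query has nothing to return against, and the audit gap is structural.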


Boards that approve AI deployment are approving credential creation at a scale their governance structures were not built for

Docusign's CISO has stated that AI systems generate new credentials and tokens faster than inventory processes can track, creating an audit gap where the NHI population is never accurately known. That is not a vendor complaint about tooling. It is a description of a structural condition that applies to any organization running agentic AI at operational scale. A CSA and Oasis Security study found that 78% of organizations lack formal policies for creating or removing AI identities, and 92% are not confident their legacy IAM tools can manage the risks that AI and non-human identities introduce [VERIFY: confirm instrument text, article number, and current enforcement status — primary CSA/Oasis Security study URL not confirmed; cited through MSSP Alert secondary source; obtain direct CSA citation before publishing].

Nearly 75% of enterprise leaders surveyed by Deloitte plan to deploy agentic AI within two years. The governance infrastructure to manage the resulting NHI population is not yet in place at most of those organizations, meaning scale and compliance exposure will compound simultaneously.

This is the baseline condition your board should understand before the next approval decision on an agentic AI initiative.

DORA Article 9 and EU AI Act Article 14 impose obligations that current agentic architectures routinely cannot satisfy

DORA Article 9 requires EU financial entities to limit access to ICT assets to what is required for legitimate and approved functions only [VERIFY: confirm instrument text, article number, and current enforcement status — specifically whether NHI and agentic AI credentials are explicitly addressed in Article 9 or in associated Regulatory Technical Standards; prefer eur-lex.europa.eu for citation; enforcement active since January 17, 2025]. This is a least-privilege mandate. It applies to every credential an AI agent carries, not only to human user accounts. The agent that connects your workflow automation to a financial data system carries ICT access credentials. Under DORA, those credentials are in scope.

DORA Article 10 requires financial entities to implement monitoring tools capable of identifying anomalous activity across their ICT estate [VERIFY: confirm instrument text, article number, and current enforcement status — verify whether Article 10 explicitly extends to automated or agentic system behavior monitoring or applies only to human user activity; obtain primary Article 10 source separate from Article 9 URL]. In an agentic pipeline, anomalous credential use by a compromised agent may be the first and only detectable signal of a breach. Without monitoring designed for non-human identity behavior, the obligation cannot be met.

EU AI Act Article 14 requires high-risk AI systems to be designed so that natural persons can effectively oversee them, including the ability to interpret outputs, override decisions, and halt operations [VERIFY: confirm instrument text, article number, and current enforcement status — enforcement date August 2, 2026 per Annex III obligations; confirm whether autonomous AI agents used in enterprise workflows meet Annex III classification thresholds for high-risk]. [INSERT: The research brief identifies a genuine regulatory interpretation gap — no authoritative source has been found stating how Article 14 paragraph 4(d) applies to multi-agent pipelines where no human is in the decision loop. This is a gap requiring author judgment: Attribit-ID should consider whether this is a position the firm can take or a question to surface explicitly for clients before publication.]

The gap between what these instruments require and what most deployed agentic architectures currently provide is not a compliance program problem. It is an architecture problem that the board’s approval decisions either closed or left open.

What does it mean for a board to exercise duty of care on NHI governance?

Duty of care in this context means demonstrating, at the time of the approval decision, that the board understood what credentials its AI systems would carry, what access those credentials confer, what oversight controls would govern them, and what the organization’s response would be in the event of compromise. The evidence from the March 2026 Vertex AI “Double Agent” incident is instructive. In that case, an overprivileged Google-managed service account allowed a compromised agent to pivot across cloud resources, extract credentials, and reach restricted internal artifacts, exploiting excessive default permissions rather than any authentication bypass. The credential configuration that enabled the breach was a design-time decision, not an operational failure. The board that approved the deployment inherited the exposure without, in all likelihood, being shown it.

A board that cannot answer four questions has not exercised duty of care on NHI governance: which agents are authorized, on whose authority, what access do their credentials confer, and who is responsible for revoking them when the agent is decommissioned?

Governing non-human identity requires a sponsorship model, not better tooling

The instinct to solve NHI sprawl by procuring a dedicated NHI management tool is understandable. Only 41% of organizations currently use dedicated tools for NHI management, with 18% in proof-of-concept and 15% planning implementation within six months. However, tooling without a governance model produces an inventory that no one acts on. The prior question is structural: who owns each agent’s credentials, and what is their accountability for scope and revocation?

The human sponsorship model answers that question. Every non-human identity requires an identified human sponsor accountable for its scope, its authorized use, and its offboarding when the agent is retired. This is not technically complex; it requires the same governance discipline as any other access authorization: a named owner, a defined scope, and an expiration or review date. What it does demand is that the board's approval of an AI deployment is accompanied by a sponsorship assignment, not delegated to the technology team as an implementation detail.
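To make the discipline concrete, here is a minimal sketch — the function and field names are hypothetical, not any vendor's API — of the rule the sponsorship model enforces: a credential cannot come into existence without a named owner, a defined scope, and a review date attached at creation time.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Sponsorship:
    """Every NHI is minted against a record like this, never without one."""
    credential_id: str
    sponsor: str     # named accountable human
    scope: str       # defined, minimum-necessary access
    review_by: date  # expiration or scheduled review

def issue_credential(credential_id: str, sponsor: str,
                     scope: str, review_by: date) -> Sponsorship:
    """Refuse to create a credential that lacks an accountability chain."""
    if not sponsor:
        raise ValueError("credential refused: no accountable sponsor named")
    return Sponsorship(credential_id, sponsor, scope, review_by)

rec = issue_credential("agent-7-key", "j.doe",
                       "payments-api:read", date(2026, 9, 1))
```

The design choice worth noting is that the check sits at issuance, not at audit: an after-the-fact inventory can only report orphaned credentials, while a sponsorship gate prevents them from existing.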

For a board governing a regulated financial entity, as explored in Who’s Running Your Organization? The Identity Challenge of the AI Agent Era, the accountability question is existential: if a regulator asks which humans are accountable for the access decisions your agents made last quarter, the answer needs to exist in a governance record, not in someone’s memory of who built the system.

What does “zero standing privilege” mean in practice for the board?

Zero standing privilege (ZSP) means an AI agent holds no persistent credentials between tasks. Each operation requests exactly the access it needs, for exactly the duration required, and the credential expires at task completion. It is the non-human identity equivalent of requiring employees to check out building access cards rather than carry permanent badges. The IETF WIMSE working group draft on AI agent identity proposes dual-identity credentials that cryptographically bind each agent’s credential to an identified human owner, making every access decision traceable to a sponsoring human regardless of how many Agentlets a pipeline spawns [VERIFY: confirm instrument text, article number, and current enforcement status — draft-ni-wimse-ai-agent-identity-02 is an individual Internet-Draft, not an IETF-endorsed standard; expires September 1, 2026]. For the board, the significance is not the technical mechanism. It is that zero standing privilege makes the authorization chain auditable in a way that persistent, over-scoped credentials never can be.
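The check-out analogy above can be sketched in code. This is an illustrative toy, not the IETF draft's dual-identity mechanism and not any real platform's API: a broker issues a credential scoped to one task with a time-to-live, and the credential ceases to exist when the task completes or the clock runs out.

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class TaskToken:
    token: str
    scope: str
    expires_at: float  # monotonic-clock deadline

class CredentialBroker:
    """Issues per-task credentials; nothing persists between tasks."""
    def __init__(self) -> None:
        self._active: dict[str, TaskToken] = {}

    def checkout(self, scope: str, ttl_seconds: float) -> TaskToken:
        """Mint a credential for exactly one scope and one time window."""
        tok = TaskToken(secrets.token_urlsafe(16), scope,
                        time.monotonic() + ttl_seconds)
        self._active[tok.token] = tok
        return tok

    def is_valid(self, token: str, scope: str) -> bool:
        tok = self._active.get(token)
        return (tok is not None
                and tok.scope == scope
                and time.monotonic() < tok.expires_at)

    def checkin(self, token: str) -> None:
        """Task complete: the credential ceases to exist."""
        self._active.pop(token, None)

broker = CredentialBroker()
t = broker.checkout("ledger:read", ttl_seconds=30)
assert broker.is_valid(t.token, "ledger:read")       # valid during the task
assert not broker.is_valid(t.token, "ledger:write")  # and only for its scope
broker.checkin(t.token)
assert not broker.is_valid(t.token, "ledger:read")   # gone at task completion
```

For the board, the relevant property is the last line: after check-in there is no standing credential left to compromise, and every grant is a discrete, logged event that can be tied back to a sponsor.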

[INSERT: The research brief identifies a second blocking gap — no authoritative practitioner source was found for a production-tested NHI governance architecture specifically for agentic pipelines. The author should consider whether a client insight or direct experience with the Carrington system can serve as the grounding example for how zero standing privilege and human sponsorship have been implemented in practice.]

The approval decision you make next quarter is a governance decision you will be asked to account for

The NIST NCCoE published a concept paper in February 2026 specifically on software and AI agent identity and authorization, signaling that existing NIST identity guidance does not yet address agentic pipeline identity as a defined problem space. OWASP’s 2025 Non-Human Identities Top 10 identifies overprivileged NHIs, long-lived secrets, improper offboarding, and secret leakage as the four highest-impact categories, with each risk amplified when NHIs operate as autonomous AI agents rather than static service accounts. The standards bodies are telling you the map does not exist yet. That does not mean the territory is ungoverned. DORA and the EU AI Act are in force now.

The board’s governance posture on agentic AI deployments needs to shift from “has the security team approved this?” to “can we demonstrate control of the credentials this system will carry?” Those are different questions. The second one is the one a regulator will ask.

Frequently asked questions

What is the board’s specific regulatory exposure if NHI governance is absent?

Under DORA Article 9, EU financial entities must demonstrate least-privilege access controls across all ICT assets [VERIFY: confirm instrument text, article number, and current enforcement status]. AI agent credentials are ICT assets. An inability to demonstrate that agent credentials were scoped to minimum necessary access is a direct gap against the Article 9 obligation. Under EU AI Act Article 14, high-risk AI systems must support human oversight and the ability to override or halt the system [VERIFY: confirm instrument text, article number, and current enforcement status]. A fully autonomous agentic pipeline with no human in the decision loop requires legal interpretation before it can be deployed with confidence by organizations subject to the Act. The exposure is not hypothetical: the instruments are in force or entering force in 2026.

How is this different from the IAM governance the organization already does?

The existing IAM program governs human identities at human pace: an employee joins, is provisioned access, changes role, leaves, and is deprovisioned. NHI lifecycle management for agentic pipelines operates at machine pace. An agent may generate and consume dozens of credentials in a single workflow run. Existing IAM tools were not designed for this rate. The CSA and Oasis Security study finding that 92% of organizations are not confident their tools can manage AI and NHI risk is a statement about rate and volume, not about policy intent [VERIFY: confirm instrument text, article number, and current enforcement status — primary CSA/Oasis Security study URL not confirmed; obtain direct citation]. The governance model needs to account for the speed at which agents create and abandon credentials, not only the policies under which they operate.

What should the board ask management before approving the next agentic AI initiative?

Four questions establish whether the governance posture is adequate. First: is there a named human sponsor accountable for each agent’s credentials and their authorized scope? Second: what is the credential offboarding process when this agent is retired or replaced? Third: do the monitoring tools in place detect anomalous NHI behavior, or only human user behavior? Fourth: if this deployment involves a high-risk AI system under EU AI Act Annex III, has the legal team confirmed what Article 14 requires of this specific architecture [VERIFY: confirm instrument text, article number, and current enforcement status]? These questions will not slow the deployment. They will produce the governance record the board needs to demonstrate it managed the risk it took on.

Who carries the accountability if an AI agent’s credentials are compromised?

Today, in most organizations, no one does — not because no one cares, but because no formal human sponsorship model exists for agent credentials. The credentials were created by the deployment process, scoped by the platform defaults, and never formally assigned to an accountable owner. That is the NHI sprawl problem in its governance form. When a breach occurs through a compromised agent credential, the absence of a named sponsor does not protect the organization from regulatory scrutiny. It exposes the board to the additional question of why no one was accountable.


Written by

Charles Carrington

Founder, Attribit-ID  ·  LinkedIn