
The Identity Math Has Broken

Part 3 looked at how the autonomous enterprise consumes external vulnerability intelligence - the disclosure-to-patch gap, the compensating controls applied at machine speed, the validation gate that turns external signals into environment-specific certainty. All of that architecture rests on a precondition we haven’t yet examined: that the identities doing the work - the workloads ingesting the feeds, the agents running the validation, the pipelines applying the controls - are themselves trustworthy.

In 2026, that precondition is the weakest link in the whole stack.

Part 2 of this series argued that AI has collapsed the cost of vulnerability discovery while leaving verification stuck at human speed - the SecOps engineer with a 53-second triage budget per finding. Identity is the same fault line, one stack lower. Discovery happens in milliseconds. Authentication decisions happen in milliseconds. Lateral movement happens in milliseconds. Identity governance happens in days, weeks, or never. The asymmetry is the whole problem.

In 2026, the ratio of non-human identities to human users in a typical cloud-native enterprise is no longer something a security team can wave away. Industry estimates have settled into a range - 40:1 to 100:1 across most organisations, with 500:1 reported in hyper-automated environments at Gartner’s IAM Summit earlier this year (CSO Online, 2026; Protego, 2026). GitGuardian’s CEO Eric Fourrier put it in operational terms: what was once a ratio of roughly 10 non-human identities per human “has now expanded to potentially 100-to-1 through the rapid proliferation of AI agents and automation systems” (BankInfoSecurity, 2026).

The “user” in the autonomous enterprise is no longer a person behind a keyboard. It is a long-running service account, a CI/CD runner, an ephemeral container, or - increasingly - an autonomous agent making thousands of API calls per minute on behalf of a workflow that may not have been explicitly approved by anyone with security authority. Most of these identities sit outside formal IAM governance. The 2026 NHI Reality Report estimates that 71% of non-human identities haven’t been rotated within recommended timeframes, 97% carry excessive privileges beyond their function, and only 15% of organisations feel confident in their ability to prevent NHI-based attacks (Protego, 2026).

This is the perimeter the autonomous enterprise actually has. And the attackers have already worked out how to use it.


The one-line summary, for the CISO who reads nothing else:

You cannot firewall your way out of an attack that already has valid credentials. You can only out-architect it on identity.


What GTG-1002 Showed Us

In November 2025, Anthropic published a report describing what it called “the first documented case of a cyberattack largely executed without human intervention at scale” (Anthropic, 2025). The threat actor - designated GTG-1002 and assessed with high confidence as a Chinese state-sponsored group - used Anthropic’s own Claude Code tool as the central orchestration engine for a sophisticated cyber espionage campaign that targeted approximately thirty global organisations between mid- and late September 2025. A small number, reportedly four, were successfully breached.

The numbers from the report are the ones every CISO should commit to memory. AI carried out 80–90% of the tactical operations autonomously. Human operators intervened at four to six critical decision points per campaign - typically yes/continue, no/halt judgements at major escalations like reconnaissance-to-exploitation transitions, or final exfiltration approvals. Claude operated at thousands of requests per second, “an attack speed that would have been, for human hackers, simply impossible to match” (Anthropic, 2025).

The attackers did not “hack passwords” in the traditional sense. They did what Anthropic’s own threat-intelligence head Jacob Klein described as conducting attacks “literally with the click of a button, and then with minimal human interaction” (Anthropic, 2025). The AI conducted reconnaissance, mapped network topology, identified high-value internal services, harvested credentials, parsed exfiltrated data for intelligence value, and produced operational documentation - all autonomously, often in parallel across multiple targets.

The campaign succeeded not because the attackers had novel exploits but because the trust model of the modern enterprise is built around the assumption that high-volume credentialed activity from inside the perimeter is legitimate. Once Claude had a foothold and a service-account credential, it looked exactly like the system doing what it was designed to do.

The contrast with what we covered in Part 1 is the point worth holding. UK AISI’s evaluation of Claude Mythos Preview against the 32-step “Last Ones” benchmark - first model to complete the full chain end-to-end, 3 of 10 attempts, 22/32 average - was a laboratory result. Test ranges with no live defenders, no EDR, no incident response. AISI itself was explicit that they “cannot say for sure whether Mythos Preview would be able to attack well-defended systems” (UK AISI, 2026).

GTG-1002 wasn’t a laboratory result. It was a production campaign, against thirty real targets, running through real defences, executed at thousands of requests per second by an AI tool the attackers convinced was conducting authorised penetration testing. AISI showed the capability in principle. GTG-1002 showed it deployed in the wild against operational enterprises that had EDR, had IR teams, had perimeter defences - and lost anyway, because the trust model assumed credentialed activity from inside was legitimate. The window between capability demonstration and capability deployment was not measured in years. It was measured in weeks.

Why Traditional IAM Fails the Agent

The Identity and Access Management primitives most enterprises rely on were designed for human-centric access patterns and have not aged well. The mismatch is structural:

  • Long-lived sessions: Humans work in 8-hour blocks. Service accounts and agents work in milliseconds. A session-management model calibrated for the former is a permanent operational debt against the latter.
  • Static credentials: API keys, service-account tokens, and stored secrets are evergreen by default. Once leaked into a Slack channel, a Confluence page, a public repository, or an attacker’s collection - and last year GitGuardian detected 13 million secrets exposed in public GitHub repositories alone (CSO Online, 2026) - they remain valid until someone notices and rotates them, which the data suggests rarely happens within recommended timeframes.
  • Implicit trust: Once a service account is “in,” it is usually trusted across an entire VPC or service mesh. Lateral movement is then a credential-reuse problem, not a perimeter-breach problem.
  • Ownership ambiguity: Non-human identities are typically created on an ad-hoc basis by development teams, granted permissions to avoid friction, and then forgotten. Compromised NHI dwell time averages over 200 days - more than three times that of compromised human accounts (Protego, 2026) - because there is no manager to notice the access patterns.

This is the substrate GTG-1002 ran on. Not a sophisticated zero-day chain. The unmanaged web of service accounts, persistent credentials, and implicit-trust boundaries that powers modern automation, exploited at AI speed.

The Architectural Answer: Workload Identity

The substantive answer to the identity problem is to stop treating identity as a static credential and start treating it as a runtime property of the workload itself. The open standards that implement this idea are SPIFFE and SPIRE, both CNCF projects, both production-deployed across cloud-native enterprises today. They are not Sakura inventions; they are the industry-standard answer to a structural problem the enterprise cloud has been carrying for a decade.

The relevant primitives are worth understanding precisely.

SPIFFE: identity as cryptographic primitive

SPIFFE (Secure Production Identity Framework for Everyone) defines a universal identity standard for workloads (SPIFFE, 2026). Every workload - every service, every CI/CD runner, every AI agent - receives a SPIFFE ID, which is a structured URI that identifies the workload across trust domains. The cryptographic credential backing that ID is the SVID (SPIFFE Verifiable Identity Document), typically delivered as either an X.509 certificate or a JWT.
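The shape of a SPIFFE ID is easy to see in code. The sketch below is an illustrative parser only - production code should use an official SPIFFE library, and the trust domain and workload path shown are hypothetical examples, not identifiers from the SPIFFE specification:

```python
from urllib.parse import urlparse

def parse_spiffe_id(uri: str) -> tuple[str, str]:
    """Split a SPIFFE ID into (trust_domain, workload_path).

    Simplified check for illustration; real validation is stricter
    (character sets, length limits, no query/fragment components).
    """
    parsed = urlparse(uri)
    if parsed.scheme != "spiffe" or not parsed.netloc:
        raise ValueError(f"not a valid SPIFFE ID: {uri}")
    return parsed.netloc, parsed.path

# A hypothetical agent identity in a hypothetical trust domain:
domain, path = parse_spiffe_id("spiffe://prod.example.com/ns/payments/agent/reconciler")
# domain is the trust domain; path identifies the workload within it
```

The trust domain scopes the identity to an administrative boundary; the path identifies the specific workload, which is what registration entries and authorization policy key off.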

The SVID is short-lived by design - typical TTLs are on the order of an hour, configurable to minutes - and automatically rotated by the underlying runtime before expiry. A leaked SVID is therefore valid for minutes, not the months or years a stolen API key remains exploitable. The static-credential attack surface that GTG-1002 ran on simply does not exist for workloads operating under SPIFFE.
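The exposure-window difference is worth making concrete. Using the figures already cited in this post - a one-hour SVID TTL and the 200-day average dwell time for compromised NHIs (Protego, 2026) - the back-of-envelope arithmetic is:

```python
SVID_TTL_HOURS = 1    # typical SVID lifetime, as described above
NHI_DWELL_DAYS = 200  # average compromised-NHI dwell time (Protego, 2026)

api_key_exposure_hours = NHI_DWELL_DAYS * 24
ratio = api_key_exposure_hours / SVID_TTL_HOURS
print(f"A leaked static key is exploitable ~{ratio:,.0f}x longer than a leaked SVID")
# ~4,800x
```

Even before attestation enters the picture, rotation alone shrinks the exploitation window by more than three orders of magnitude.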

SPIRE: attestation as the proof of identity

The architecturally elegant property of SPIFFE is that there is no bootstrap secret. To understand why this matters, consider the structural problem with conventional secrets management.

In a vault-based architecture - HashiCorp Vault, AWS Secrets Manager, CyberArk - workloads receive credentials by authenticating to the vault using another credential. That initial credential is called Secret Zero, and it has to live somewhere: in an environment variable, a Kubernetes Secret, a configuration file, a hardware HSM. If Secret Zero is compromised, every credential it can fetch from the vault is compromised. The vault doesn’t eliminate the static-credential problem; it concentrates it. Many enterprise breaches involving stolen secrets are, at root, Secret Zero failures.

SPIFFE inverts this. Workloads do not carry an initial credential to prove they are entitled to receive their identity. Instead, the SPIRE runtime (the SPIFFE Runtime Environment, the most widely adopted SPIFFE implementation) performs attestation: it verifies the workload is what it claims to be by inspecting platform-specific signals from the environment in which the workload is running.

A SPIRE Agent runs on every node, exposing a local Workload API over a Unix domain socket. When a workload starts up and calls that socket, the agent performs node attestation (verifying the node itself is a legitimate part of the trust domain, using AWS instance metadata, GCP IIT, Kubernetes service-account tokens, or hardware roots of trust) and workload attestation (verifying which specific process is calling, using OS-level introspection - UID, binary path, container image hash, Kubernetes pod selectors). Only if both attestations succeed does the workload receive an SVID matching its registered identity.

The architectural inversion is this: identity is derived from what the workload is and where it is running, not from what secret it holds. An attacker who steals an SVID gets a credential valid for under an hour that they cannot extend, because they cannot pass the next attestation cycle from the legitimate workload’s node and process context. An attacker who steals a service-account API key gets months or years of evergreen access. The two threat models are not on the same scale - and the difference is not “better secrets management.” It is the elimination of the secret as the load-bearing artefact.

Authorization, not just authentication

SPIFFE solves authentication. It does not, on its own, solve authorization - what an attested workload is allowed to do once it has proven who it is. The complementary pattern, which is increasingly standard in production deployments, is policy-as-code authorization with intent-aware controls: OPA/Rego policies, Cedar, or equivalent engines that evaluate every workload action against declarative policy at request time.

The pattern most worth implementing is intent-aware authorization. If an agent has not accessed a particular database in 30 days, attempts to exfiltrate data to an unapproved endpoint, or starts behaving in patterns inconsistent with its registered purpose, the policy engine kills the session immediately - even if the SVID is valid. The runtime has to be capable of acting on behavioural deviation faster than an attacker can complete a kill chain. At GTG-1002 speeds - thousands of requests per second - that means policy enforcement at sub-millisecond latency, on every request, with no fallback to human approval.

This is the closed-loop pattern. SPIFFE/SPIRE provides the cryptographic identity layer. The policy engine provides the runtime authorization layer. The cloud trust foundation provides the IaC-synchronised environment that keeps both layers consistent with the production estate as it changes. The closed loop maps directly to the Sakura product family: Sentinel is the runtime governance and policy-as-code enforcement layer for autonomous agents, monitoring agent behaviour and killing sessions on deviation; Enclave is the cloud trust foundation, providing the IaC-synchronised environment in which SPIFFE attestation has stable signals to attest against; and the Sakura Proof-Point gate from Part 2 closes the verification loop on agent-surfaced findings before they reach SecOps.

The architecture itself - SPIFFE attestation, SPIRE-managed SVIDs, policy-as-code authorization - is open standards. The discipline of wiring all three layers into a single coherent identity plane and keeping that plane synchronised with production as it changes is what most enterprises haven’t yet built.

The Sakura Sky Position: Identity Is the Last Perimeter Standing

Three principles for security leaders building identity for the autonomous enterprise.

  1. Identity is the perimeter you actually have: Network segmentation, VPC boundaries, and firewall rules still do useful work, but they are no longer load-bearing. In a world where 80–90% of an attack runs on credentialed activity that looks like legitimate workload behaviour (Anthropic, 2025), the only control that decides whether the attacker succeeds is workload identity itself. Three questions on every request: who is making this, can they cryptographically prove it, and is the request consistent with what that identity is supposed to be doing right now? Everything else is theatre.
  2. Stop vaulting secrets, eliminate them: This is the principle most security teams resist. The instinct when service-account leaks keep happening is to vault more aggressively, rotate more often, add more layers to the secrets-management pipeline. That instinct is wrong. Every additional vault layer is another instance of the same Secret-Zero problem at a different altitude - you are still betting your security on a static credential, just a more expensive one. The structural answer is to remove the static credential entirely. SPIFFE-issued SVIDs derived from attestation are not a marginal improvement on a well-managed vault; they are a different threat model. A leaked SVID is a one-hour problem. A leaked API key is a 200-day problem (Protego, 2026). No amount of vault hardening closes that gap. Only architectural elimination does. If your roadmap has more vault tooling on it and no SPIFFE rollout, your roadmap is wrong.
  3. Pair identity with intent: A valid identity is the start of authorization, not the end. The runtime has to be capable of evaluating every workload action against policy that knows what that workload is supposed to do, and capable of revoking or quarantining a session when the action diverges. GTG-1002 succeeded because the trust model assumed credentialed activity from inside the perimeter was legitimate. Closing that assumption requires intent-aware authorization at sub-millisecond latency on every request - not a quarterly access review.

Coming up next in The Mythos Ledger (Series Finale): Part 5 - The Resilient Cloud Manifesto. Velocity, verification, and identity tied into a unified strategy playbook for the 2026 CISO.


References

Anthropic (2025) Disrupting the first reported AI-orchestrated cyber espionage campaign. Available at: https://www.anthropic.com/news/disrupting-AI-espionage (Accessed: 04 May 2026).

BankInfoSecurity (2026) GitGuardian Doubles Down on AI Agent Defense With $50M Raise. Available at: https://www.bankinfosecurity.com/gitguardian-doubles-down-on-ai-agent-defense-50m-raise-a-30778 (Accessed: 04 May 2026).

CSO Online (2026) Why non-human identities are your biggest security blind spot in 2026. Available at: https://www.csoonline.com/article/4125156/why-non-human-identities-are-your-biggest-security-blind-spot-in-2026.html (Accessed: 04 May 2026).

Protego (2026) Non-Human Identities (NHI): The Hidden Security Crisis Powering AI Agent Attacks in 2026. Available at: https://protego.me/blog/non-human-identities-nhi-ai-agent-security-2026 (Accessed: 04 May 2026).

SPIFFE (2026) SPIFFE Concepts. Available at: https://spiffe.io/docs/latest/spiffe-about/spiffe-concepts/ (Accessed: 04 May 2026).

UK AISI (2026) Our evaluation of Claude Mythos Preview’s cyber capabilities. Available at: https://www.aisi.gov.uk/blog/our-evaluation-of-claude-mythos-previews-cyber-capabilities (Accessed: 04 May 2026).
