The “Defensive Maxim Gun”

On 7 April 2026, Anthropic announced Project Glasswing - a coalition of twelve organisations committed to using a frontier AI model for defensive cybersecurity, backed by up to $100M in usage credits and $4M in direct donations to open-source security organisations (Anthropic, 2026).

The launch partners are AWS, Apple, Anthropic, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. Beyond that core, more than 40 additional organisations responsible for critical software infrastructure have been granted access to scan their codebases (Anthropic, 2026).

The name refers to Greta oto - a butterfly with transparent wings. The metaphor is dual-layered: the bugs hiding in plain sight inside the open-source software the world runs on, and the transparency Anthropic claims it wants to bring to AI-driven vulnerability research.

The premise is simple, and aggressive. Offensive AI capability is approaching a threshold where it will proliferate to actors not committed to deploying it safely. The window before that happens is narrow, and to stay ahead, defenders need access to the most capable model first - and the coalition has to be broad enough that the patches land before the exploits do. By committing the credits and convening the partners, Anthropic is trying to flip the economics.

The early evidence is genuinely impressive in scale. Mythos surfaced a 27-year-old vulnerability in OpenBSD - an operating system whose entire reputation is built on security focus - and a 16-year-old flaw in FFmpeg that automated testing tools had failed to detect despite executing the affected line of code five million times (CyberScoop, 2026). Bugs that have survived decades of human review, in code paths that have been fuzzed exhaustively, do not get found by accident. Whether the “AI Avengers” framing earns its capes is a separate question; the underlying capability is real, and the patches landed.


The executive summary, for the CISO who reads only the last line:

Glasswing will harden the floor. You are responsible for the ceiling.


The Antitrust Critique: Coalition or Cartel?

Not everyone is applauding. The sharpest published critique comes from Madhavi Singh in ProMarket (2026), who argues that Glasswing risks violating Section 1 of the Sherman Antitrust Act - the provision prohibiting concerted action in restraint of trade.

The argument has two specific legs, and the legal distinction between them matters.

  1. Information exchange as concerted action. US antitrust doctrine has long treated information sharing among competitors with suspicion, even where there is no explicit agreement on prices or output. The concern is that systematic information exchange among rivals can stabilise market behaviour in ways that produce the effects of collusion without the formal agreement. Recent Department of Justice guidance has made clear that information exchange alone can constitute a Section 1 violation. Glasswing’s information-sharing protocols - what gets disclosed, when, to whom, and on what timeline - operate inside a private circle of forty-plus firms, and ProMarket’s argument is that this asymmetric transparency risks aligning market behaviour in ways that suppress competition from those outside the loop (ProMarket, 2026).
  2. Concerted refusal to deal - the group boycott. This is the more aggressive of the two arguments. A “group boycott” is a coordinated refusal by competitors to deal with a particular party, and certain forms of it are per se illegal under antitrust law - meaning courts won’t even entertain pro-competitive justifications. Glasswing distributes Mythos Preview access to coalition members and approximately forty additional approved organisations. Everyone else, by definition, is excluded. Coalition members can therefore safety-proof their products before competitors can - and many of those excluded competitors are direct rivals in adjacent markets. Singh’s specific example: Google can secure its browser using Mythos while the third parties that compete with Google’s many verticals are denied the same opportunity (ProMarket, 2026).

The defenders of Glasswing aren’t naïve about either argument. Andy Hall, who helped design Meta’s Oversight Board, argues that the issue is design, not concept - that a self-governance body of this kind needs four ingredients to be credible: independent expert assessment, incentives that make non-participation costly, broad enough participation to prevent free-riding, and an external regulatory backstop (Hall, 2026). Anthropic’s own Logan Graham has indicated Glasswing could “transition very quickly into a third-party-led consortium that features all the other model providers” (Hall, 2026, citing Heath). On that reading, the current shape is a starting point, not a finished design.

For technical leaders, the legal question is interesting but secondary. Whether or not Glasswing is eventually held to be a cartel, the operational reality is here: your access to vulnerability intelligence is now tiered. Coalition members get information first. Approved critical-infrastructure organisations get it next. Everyone else waits - possibly weeks - until disclosures filter into public CVE databases. The architectural question for the rest of the industry is not whether to fight the tiering or wait for regulators to flatten it. It is how to operate effectively inside it.

Architecture for the Disclosure-to-Patch Gap

Whatever the legal outcome, the operational shift is already underway. Vulnerability intelligence in 2026 will move through tiered, partly-private channels - coalition feeds, vendor pre-disclosures, AI discovery output from your own tooling - before it reaches public CVE databases. The gap between when you know about a vulnerability and when a patch is available has always existed. AI-driven discovery widens it, because intelligence is now produced faster than maintainers can ship fixes.

That gap is where attackers actually live. Most security programmes optimise patch deployment speed, which is the wrong target - you can’t patch a vulnerability whose patch hasn’t been written. The right target is the time between signal and compensating control: the seconds or minutes between an intelligence event arriving and your environment automatically restricting the exposure of affected workloads while verification catches up.

Three architectural patterns close that gap. Any competent security architect can build them. The hard part is having them in place before you need them.

Pattern 1: Real-time intelligence ingestion

Traditional vulnerability management is pull-based. You scan, you find, you fix. Closing the disclosure-to-patch gap requires push-based ingestion: the environment subscribes to high-velocity intelligence feeds and treats new indicators as first-class events that the cloud control plane can act on directly.

The technical hard part is rarely the ingestion itself. It is normalisation. Coalition feeds, vendor advisories, ISAC channels, and your own discovery output arrive in heterogeneous formats - affected component, affected version range, severity, recommended mitigation, exploit availability. The architecture has to land them in a common schema that downstream systems can reason about without human intervention on every event. Get the schema right and the rest follows; get it wrong and you’ve built a notification system that requires a human to translate every alert.

Plan to ingest from CrowdStrike’s Falcon platform, Palo Alto’s Cortex, conventional ISACs, vendor pre-disclosure programmes, and whatever Glasswing-aligned channels become available. The architecture should treat the source as configurable; the schema should be invariant.
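A minimal sketch of what "configurable source, invariant schema" can look like in practice. The feed names, field layouts, and adapter below are hypothetical illustrations, not any vendor's actual format; the point is that every adapter emits the same schema.

```python
# Feed-agnostic normalisation layer: one adapter per source, one
# invariant schema downstream. All field names here are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class VulnSignal:
    # The invariant schema every downstream system reasons about.
    component: str           # affected package or product
    version_range: str       # e.g. ">=4.0,<4.4.1"
    severity: str            # normalised to CRITICAL/HIGH/MEDIUM/LOW
    exploit_available: bool
    mitigation: str          # recommended mitigation, free text
    source: str              # which feed produced the signal

def from_vendor_advisory(raw: dict) -> VulnSignal:
    # Hypothetical vendor format: {"pkg": ..., "affected": ..., "sev": 0-10}
    sev = "CRITICAL" if raw["sev"] >= 9 else "HIGH" if raw["sev"] >= 7 else "MEDIUM"
    return VulnSignal(
        component=raw["pkg"],
        version_range=raw["affected"],
        severity=sev,
        exploit_available=raw.get("exploited", False),
        mitigation=raw.get("workaround", ""),
        source="vendor-advisory",
    )

# The source is configurable: new feeds register an adapter; nothing
# downstream changes.
ADAPTERS: dict[str, Callable[[dict], VulnSignal]] = {
    "vendor-advisory": from_vendor_advisory,
    # "isac", "coalition-feed", ... registered the same way
}

def ingest(feed: str, raw: dict) -> VulnSignal:
    return ADAPTERS[feed](raw)
```

Downstream automation only ever sees `VulnSignal`, which is what lets a new feed be added without touching enforcement logic.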

Pattern 2: Posture enforcement decoupled from patch availability

Intelligence without enforcement is a dashboard. The second pattern is the cloud control plane’s ability to apply compensating controls automatically when a critical signal arrives - before the patch is available, and proportional to the assessed risk.

The architectural shape is the same across cloud providers: an event source carries the intelligence signal into a serverless trigger, which evaluates the signal against a policy mapping and applies the resulting control via the provider’s policy engine.

On Google Cloud, that wiring runs through Security Command Center finding-path triggers feeding Cloud Functions that apply Organization Policy constraints and tighten VPC Service Controls perimeters. On AWS, EventBridge rules pattern-match incoming intelligence events into Lambda handlers, which apply Service Control Policies, VPC endpoint policies, and IAM permission boundaries. On Azure, Event Grid routes signals into Functions or Logic Apps, which apply Azure Policy assignments and Conditional Access policy updates.
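The decision logic inside that serverless trigger has the same shape on every provider. The sketch below uses an EventBridge-style envelope, but the field names (`detail`, `severity`, `exploit_available`) are assumptions for illustration, not any provider's actual event schema, and the enforcement call is represented by the returned action rather than a real policy-engine API.

```python
# Provider-independent sketch of the trigger's decision logic.
AUTO_ACT_SEVERITIES = {"CRITICAL", "HIGH"}

def should_auto_enforce(event: dict) -> bool:
    """Decide whether a signal crosses the automatic-enforcement threshold.

    Automatic action only for high-severity signals with a known exploit;
    everything else queues for human triage. Because the human-in-the-loop
    sits on rollback rather than approval, this gate errs toward acting.
    """
    detail = event.get("detail", {})
    severity = str(detail.get("severity", "")).upper()
    exploit = detail.get("exploit_available", False)
    return severity in AUTO_ACT_SEVERITIES and bool(exploit)

def handler(event: dict, context=None) -> dict:
    # In deployment this is the Lambda / Cloud Function / Azure Function
    # body; here the policy-engine call is stubbed as a returned action.
    if should_auto_enforce(event):
        return {"action": "apply_compensating_controls",
                "detail": event.get("detail", {})}
    return {"action": "queue_for_triage"}
```

The same function body drops into any of the three providers' serverless runtimes; only the event source and the enforcement API differ.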

Whichever provider you run, the goal is bounded degradation: the affected workloads have egress permissions stripped, identity-aware proxy rules tightened, service-account scopes narrowed, and access to non-essential network paths quarantined - without taking the workload offline. The human-in-the-loop sits on rollback, not on approval.

This is operationally harder than it sounds. The control logic has to be specific enough that a critical signal triggers the right compensating controls automatically, but bounded enough that a false positive doesn’t take production down. The mitigation matrix - what controls apply for what classes of vulnerability against what classes of workload - is the artefact that takes time to build and is the actual durable investment. Buy the cloud primitives off the shelf; earn the matrix.
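One way to make the mitigation matrix concrete: a mapping from (vulnerability class, workload class) to a bounded set of controls, with an explicit deny-list that caps what automation may ever do. The class names and control names below are illustrative assumptions; populating the real table for your own estate is the durable investment the text describes.

```python
# Sketch of a mitigation matrix: (vulnerability class, workload class)
# -> compensating controls. All names here are illustrative.
MATRIX: dict[tuple[str, str], frozenset[str]] = {
    ("rce", "internet-facing"): frozenset({"strip_egress", "tighten_iap",
                                           "narrow_sa_scope"}),
    ("rce", "internal"):        frozenset({"narrow_sa_scope"}),
    ("info-leak", "internet-facing"): frozenset({"tighten_iap"}),
}

# Bounded degradation: controls automation may never apply on its own.
# Nothing in the auto-applied set is allowed to take a workload offline.
FORBIDDEN_AUTO = frozenset({"terminate_workload", "revoke_all_identity"})

def controls_for(vuln_class: str, workload_class: str) -> frozenset[str]:
    """Look up the bounded control set for a signal; empty set if unmapped."""
    controls = MATRIX.get((vuln_class, workload_class), frozenset())
    # Enforce the blast-radius bound even if the matrix is misconfigured.
    return controls - FORBIDDEN_AUTO
```

Returning an empty set for unmapped combinations is a deliberate choice: an unknown class falls through to human triage rather than to a guessed control.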

Pattern 3: Validation before response scaling

External intelligence tells you that a class of system is vulnerable. It does not tell you whether your specific deployed configuration is exploitable, which is the only question that matters operationally. Acting on a disclosure as if it were an environment-specific finding produces weeks of unnecessary degraded-functionality posture enforcement and trains your team to ignore the next signal.

The pattern that closes this gap is the bridge between external intelligence and internal certainty: pair every external intelligence signal with internal validation against an instrumented replica of the production environment, built from the same Infrastructure-as-Code as production. If the disclosed vulnerability cannot be exploited against the replica with the deployed defences in place, the compensating controls relax automatically. If it can, they stay until remediation lands.

We covered this validation pattern in detail in Part 2 and called it the Sakura Proof-Point. It is the same pattern that closes the noise gap on AI-surfaced findings; here it serves as the bridge that turns coalition-surfaced disclosures into environment-specific operational certainty.

The three patterns combine into a single property: external feeds provide breadth, validation provides specificity, IaC remediation provides speed. Each is independently useful. The combination is what lets your environment respond to intelligence at machine speed without producing weeks of false-positive posture changes.

The Sakura Sky Position: Three Principles

Project Glasswing is a meaningful piece of internet infrastructure if it survives its antitrust questions and matures into the broader self-governance body its defenders sketch. A $100M credit pool plus $4M in cash for open-source maintainers is, on the open-source supply chain alone, the most concrete defensive coordination move the industry has produced in a decade.

But it is not a security strategy. Glasswing hardens the floor of the open-source ecosystem you depend on. It does not protect your first-party proprietary code, your specific cloud configuration, or the gaps between disclosed vulnerabilities and your patch cycles. Three principles for security leaders navigating that gap:

  1. Architect for ingestion regardless of which feed wins. The pipeline that consumes a Glasswing-aligned feed today should be the same pipeline that consumes whatever the next consortium ships. Coalitions reshuffle, partner lists change, regulatory pressure rewrites disclosure terms. The intelligence ingestion layer has to be feed-agnostic to remain useful through the next round of changes - which means treating Glasswing as one source among several from day one, not as the architecture itself.
  2. Treat the disclosure-to-patch window as the actual security problem. The window between we know about this vulnerability and the patch is deployed is where attacker advantage lives. It is closeable only with infrastructure that can act on intelligence without human approval on every event. Most security programmes optimise patch deployment speed; the durable improvement is in the seconds-to-minutes window before the patch exists at all.
  3. Verify before scaling the response. External intelligence amplifies the verification gap we covered in Part 2. The temptation when a coalition feed surfaces a vulnerability is to act on the disclosure as if it were an environment-specific finding. The validation pattern above is what prevents that temptation from producing weeks of unnecessary posture enforcement and the alert-fatigue that follows.

Coming up next in The Mythos Ledger: Part 4 - Identity in the Autonomous Enterprise. Why traditional IAM fails when your most active users are non-human agents, and how SPIFFE/SPIRE and workload identity become the last perimeter standing.


References

Anthropic (2026) Project Glasswing: Securing critical software for the AI era. Available at: https://www.anthropic.com/glasswing (Accessed: 2 May 2026).

CyberScoop (2026) Tech giants launch AI-powered ‘Project Glasswing’ to identify critical software vulnerabilities. Available at: https://cyberscoop.com/project-glasswing-anthropic-ai-open-source-software-vulnerabilities/ (Accessed: 2 May 2026).

Hall, A. (2026) Can Glasswing Stop the AI Backlash? Available at: https://freesystems.substack.com/p/how-to-trust-glasswing (Accessed: 2 May 2026).

ProMarket (2026) The Antitrust Risks of Anthropic’s Project Glasswing and the ‘AI Avengers’. Available at: https://www.promarket.org/2026/04/22/the-antitrust-risks-of-anthropics-project-glasswing-and-the-ai-avengers/ (Accessed: 2 May 2026).
