Project Glasswing: AI Cybersecurity and What Businesses Need To Know

Anthropic's Glasswing initiative signals a calm before the storm of AI-supported cyberattacks. What this means for businesses, and the practical steps to take.

On 7th April 2026, Anthropic made an announcement unlike anything the AI industry has seen before, one with major implications for AI and cybersecurity.

Project Glasswing is an initiative to secure the world’s most critical software using Claude Mythos Preview: their most capable model yet. The partners include Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks.

That’s a coalition of household names, but the reason it was formed matters far more.

Anthropic formed Project Glasswing because AI models have reached a level of coding capability where they can surpass all but the most skilled cybersecurity experts at finding and exploiting software vulnerabilities.

Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of AI progress, it won’t be long before these capabilities fall into the wrong hands. The fallout for economies, public safety, and national security could be severe.

In plain terms: Anthropic built a model capable of finding and exploiting software vulnerabilities at a scale and speed no human can match. Rather than release it publicly, they have assembled the most important software companies in the world to patch as many critical systems as possible before similar capabilities reach bad actors.

The calm before the storm

Right now, the practical cyber threat to most Australian businesses remains manageable by conventional means, but the window in which conventional defences are sufficient is narrowing.

Anthropic’s own researchers acknowledge that “the transitional period may be turbulent regardless” and that by releasing Mythos initially to a limited group of critical industry partners, they aim to enable defenders to begin securing the most important systems before models with similar capabilities become broadly available.

The practical implication is that the capability gap between Project Glasswing participants and the rest of the market is temporary: likely 12 to 18 months before AI-powered vulnerability discovery at this level becomes broadly accessible.

The major tech companies inside the Glasswing coalition will be hardened first. Mid-market businesses — accounting practices, financial planners, wealth managers, insurance brokers — will navigate this transition without Mythos-class defensive tooling. That gap is where risk concentrates.

CrowdStrike’s 2026 Global Threat Report already found an 89% increase in attacks by adversaries using AI year-on-year. The offensive use of AI was accelerating before Glasswing. The existence of Mythos-class capabilities raises the ceiling of what is possible.

This is not a reason to panic, but rather a reason to act now, while the environment remains comparatively manageable.

Patch and update your IT systems. Now

The most immediate cybersecurity action any business should take is also the least glamorous: accelerate your software patching program.

In just the past few weeks, Mythos Preview has identified thousands of zero-day vulnerabilities, many of them critical and difficult to detect, including some in every major operating system and web browser. Several had existed undetected for years, the oldest being a 27-year-old bug in OpenBSD.

Less than 1% of those cybersecurity vulnerabilities have been fully patched by maintainers.

Anthropic has committed to a public report within 90 days — landing in early July 2026 — covering what Glasswing has fixed, followed by months of high-volume patch releases across operating systems.

A significant volume of patches is coming across the software your business depends on.

  • If your organisation has a habit of deferring updates, that habit needs to change now.
  • If you don’t have a documented patching schedule, create one.
  • If you do, verify it’s being followed. A quick automated check, like the sketch below, can help.
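
For illustration, a minimal script such as the sketch below can report pending updates on a single machine, using each platform's standard built-in update check. It is a starting point only: fleet-wide patch visibility properly belongs in endpoint-management tooling rather than ad-hoc scripts, and the commands shown assume the default package managers.

```python
"""Minimal sketch: report pending OS/package updates on the current machine.

Assumes the platform's default update tooling (winget, softwareupdate, apt);
a real patching program should rely on endpoint-management tooling instead.
"""
import platform
import subprocess

# Standard built-in commands that list available updates per platform.
UPDATE_CHECKS = {
    "Windows": ["winget", "upgrade"],
    "Darwin": ["softwareupdate", "--list"],
    "Linux": ["apt", "list", "--upgradable"],  # Debian/Ubuntu; other distros differ
}

def pending_updates() -> str:
    """Run the platform's update check and return its raw output."""
    cmd = UPDATE_CHECKS.get(platform.system())
    if cmd is None:
        return "Unsupported platform"
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=120)
        return result.stdout or result.stderr
    except FileNotFoundError:
        return f"{cmd[0]} is not available on this machine"

if __name__ == "__main__":
    print(pending_updates())
```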

The industries most at risk

Not all sectors face equal exposure. Several face a heightened threat profile because of the data they hold, the AI tools they have adopted, or the regulatory scrutiny they operate under.

Accounting and tax advisory firms hold tax file numbers, financial statements, and trust arrangements: high-value targets for identity fraud and business email compromise. AI productivity tools have been adopted faster than governance has kept pace.

Financial planning and wealth management firms hold investment portfolios, personal financial data, and estate planning documentation. Financial institutions have been specifically identified as targets for threat activity that is otherwise invisible to traditional signature-based detection systems.

Insurance broking firms hold policyholder data and claims histories. Business email compromise — where a fraudulent party impersonates a broker to redirect payments — is an established attack vector that AI-accelerated phishing will make considerably more effective.

Legal practices handle privileged communications and transaction records. AI agents connected to internal work systems — often unwittingly by staff using tools at home — open new doors for cybercriminals. The industry has a name for this: shadow AI.

Healthcare organisations face patient record exposure compounded by specific obligations under the My Health Records Act and the Privacy Act.

Mining and resources companies face distinct risk through operational technology systems controlling physical infrastructure — with ASX listing obligations creating an immediate disclosure requirement in the event of a successful attack.

What AI governance actually means right now

Every business that has adopted AI tools has, intentionally or not, created new data flows, new access pathways, and new dependencies on third-party systems. Most have not documented what those flows look like, what data is being accessed, or what would happen if one of those tools were compromised.

Implementing effective AI governance makes that visible and manageable. In practice, for a mid-market professional services firm, it means the following things:

Know what AI tools are in use

Staff frequently use AI tools through browser extensions, on personal devices, or via features built into existing software, without those tools appearing in any IT register. A governance review will surface this usage.
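
Even a small script can reveal part of that picture. The sketch below lists Chrome extensions installed under the default profile on one machine; the paths are the standard per-user install locations, but the profile name is an assumption, and other browsers, managed profiles, and standalone AI desktop apps sit outside this sketch and would need to be covered by a real inventory.

```python
"""Minimal sketch: list Chrome extensions installed under the default profile.

Paths are the standard per-user install locations; managed profiles, other
browsers (Edge, Firefox) and desktop AI apps are out of scope for this sketch.
"""
import json
import platform
from pathlib import Path

def chrome_extension_dir() -> Path:
    """Return the default Chrome extensions directory for this platform."""
    home = Path.home()
    system = platform.system()
    if system == "Windows":
        return home / "AppData/Local/Google/Chrome/User Data/Default/Extensions"
    if system == "Darwin":
        return home / "Library/Application Support/Google/Chrome/Default/Extensions"
    return home / ".config/google-chrome/Default/Extensions"

def list_extensions() -> None:
    root = chrome_extension_dir()
    if not root.exists():
        print(f"No Chrome extension directory found at {root}")
        return
    for ext_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        # Each extension ID folder contains one subfolder per installed version.
        for manifest in ext_dir.glob("*/manifest.json"):
            data = json.loads(manifest.read_text(encoding="utf-8"))
            # Localised names appear as __MSG_..__ placeholders, so keep the ID too.
            print(f"{ext_dir.name}  {data.get('name', '(unnamed)')}  v{data.get('version', '?')}")

if __name__ == "__main__":
    list_extensions()
```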

Understand what data those tools can access

A tool connected to a client database or document management platform has potential access to everything in those systems. If the vendor has a security incident, the question of what data was accessible becomes legally significant.

Create traceability for AI-assisted decisions

From 10th December 2026, amendments to the Privacy Act introduce new obligations around automated decision-making: decisions made by systems with limited or no human involvement that significantly affect individuals. These requirements apply to AI systems used in hiring, lending, insurance, and customer analytics. If your firm uses AI tools that influence client outcomes, traceability is becoming a legal requirement.
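
What traceability looks like in practice will vary by firm, but at minimum it means recording which tool produced an output, what it was given, who reviewed it, and what decision followed. The sketch below shows one hypothetical shape for such a record, appended to a simple JSON-lines log; the field names are illustrative, not drawn from the legislation.

```python
"""Minimal sketch of an AI-decision audit record, assuming an append-only
JSON-lines log. Field names are illustrative, not drawn from any statute."""
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from pathlib import Path

@dataclass
class AIDecisionRecord:
    timestamp: str          # when the decision was made (UTC, ISO 8601)
    tool: str               # which AI tool or model produced the output
    purpose: str            # what the output was used for
    inputs_summary: str     # what data was provided (a summary, not the data itself)
    human_reviewer: str     # who reviewed or approved the output, if anyone
    outcome: str            # the decision or recommendation that resulted

def log_decision(record: AIDecisionRecord, log_path: Path = Path("ai_decision_log.jsonl")) -> None:
    """Append one record to the JSON-lines audit log."""
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

if __name__ == "__main__":
    log_decision(AIDecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        tool="example-llm-assistant",          # hypothetical tool name
        purpose="draft client risk assessment",
        inputs_summary="anonymised portfolio summary, no identifiers",
        human_reviewer="j.smith",
        outcome="draft accepted with edits",
    ))
```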

Apply basic data hygiene

Collect only what is needed. Delete what is no longer required. The less data you hold, the less you have to lose. This is both a Privacy Act obligation under the Australian Privacy Principles and a practical risk reduction measure: data that doesn’t exist cannot be breached.
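
As one small illustration, a script like the sketch below can flag files on a shared drive that have sat untouched beyond a nominal retention period. The path and the flat seven-year period are assumptions made for the example; real retention rules vary by record type and legal obligation, and anything flagged should be reviewed by a person before it is deleted.

```python
"""Minimal sketch: flag files on a shared drive older than a flat retention period.

The path and the single seven-year period are assumptions for illustration;
real retention obligations vary by record type and should drive the rules.
"""
from datetime import datetime, timedelta
from pathlib import Path

RETENTION_YEARS = 7
ROOT = Path("/shared/client-archive")  # hypothetical path; point at your own archive

def flag_expired(root: Path, years: int) -> list[Path]:
    """Return files whose last modification predates the retention cutoff."""
    cutoff = datetime.now() - timedelta(days=365 * years)
    expired = []
    for path in root.rglob("*"):
        if path.is_file() and datetime.fromtimestamp(path.stat().st_mtime) < cutoff:
            expired.append(path)
    return expired

if __name__ == "__main__":
    for path in flag_expired(ROOT, RETENTION_YEARS):
        print(path)  # review before deleting; nothing is removed automatically
```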

Train your staff on good AI policy

Set a clear policy on which AI tools are approved, what client data should never be entered into an AI system, and how to recognise AI-generated phishing attempts. The firms that navigate this period best will not necessarily have the most sophisticated technology; they will be the ones where every staff member understands the basics.

The regulatory clock is already running

Independent of Project Glasswing, the compliance environment is tightening, and AI governance sits at the centre of it.

AI systems in Australia are regulated across multiple statutes: the Privacy Act 1988 governs personal data use and automated decision-making; the Australian Consumer Law addresses misleading or deceptive AI outputs; the Corporations Act applies to governance in financial services. Oversight is distributed across the OAIC, ACCC, ASIC, and sector-specific bodies.

For AFSL-regulated firms, ASIC’s REP 798 has already flagged AI governance as a material risk area. For tax practitioners, the TPB’s draft AI guidance creates professional obligations. And the Privacy Act’s Notifiable Data Breaches scheme means a breach is not just an operational crisis but a notification event with reputational and legal consequences.

Regulators across multiple sectors are moving in the same direction: toward documented, auditable AI governance as a baseline expectation. The question for most mid-market firms is not whether this requirement will arrive, but whether they will be ready when it does.

What to do this quarter

The practical actions that matter most are not complicated. They require commitment and a focus on the big picture more than they require resources.

  • Audit your AI tool usage: not just what IT has approved, but what staff are actually using, and what data those tools can access.
  • Review your software patching posture and address outstanding updates.
  • Review data retention practices and reduce what isn’t necessary.
  • If your leadership team hasn’t had a structured conversation about AI governance yet, have it now, before the changing AI landscape makes it urgent.

Project Glasswing is, in Anthropic’s own words, an urgent attempt to get defenders ahead of a threat that is coming regardless. The window to act before that threat reaches mid-market firms is real, and is closing in months, not years.

Not sure where your AI governance gaps are?

Start with an assessment. We’ll benchmark your current practices and give you a clear, prioritised roadmap.
