Frontier AI meets cybersecurity: threat, catalyst or both?
While new developments can push up demand for protection, the competitive landscape for security services may change in the long term
Financial authorities typically do not convene emergency meetings over software releases. Yet that is precisely what followed the limited release of Anthropic’s artificial intelligence model, Claude Mythos.
Regulators and banks across the US, UK, South Korea and Japan held urgent discussions on the potential cyber risks posed by the model.
Notably, the release of Mythos rattled more than just regulators – it also unnerved investors. Markets began to worry that AI-native firms such as Anthropic, armed with models as capable as Mythos, could eventually render traditional cybersecurity vendors obsolete.
Just how powerful is Mythos, and is the concern justified?
What is Claude Mythos?
Mythos Preview, the limited internal release of Anthropic’s latest AI model, represents a step change in what AI can do in cybersecurity, particularly on the offensive side. The model can identify and exploit zero-day vulnerabilities – security flaws unknown to developers and therefore unpatched – across all major operating systems and Web browsers.
In some cases, it has uncovered vulnerabilities that are decades old and extremely difficult to detect, including a now-patched 27-year-old bug in OpenBSD, an operating system widely regarded for its security.
Mythos is also notable for its speed and sophistication. Exploits that would take skilled human hackers weeks to develop can be generated in a matter of hours.
Even more concerning is its ability to chain together multiple low-severity vulnerabilities into a single critical exploit. In one demonstration, Mythos linked four separate vulnerabilities to bypass a browser’s security layers.
Importantly, Mythos is not alone in advancing these capabilities. Other frontier AI models, such as GPT-5.4-Cyber from OpenAI and Big Sleep from Google, are demonstrating similar potential – and more are likely to follow.
Recognising these risks, Anthropic has opted not to release Mythos publicly. Instead, it has launched Project Glasswing, a controlled partnership involving organisations such as Amazon, Microsoft, JPMorgan, Google, CrowdStrike and Palo Alto Networks, which grants access to the model to help secure critical systems.
Not an immediate threat to cybersecurity sector
Some investors have interpreted Mythos as a threat not only to software security, but also to cybersecurity companies themselves.
The concern is that AI-native firms such as Anthropic could eventually replace traditional vendors, given the capabilities of frontier models. However, we think this concern is overstated.
First, there is a fundamental difference between finding vulnerabilities and preventing breaches. Mythos – at least in its current form – can uncover and exploit weaknesses in code, but it does not provide the end-to-end protection enterprises require.
Effective cybersecurity requires real-time threat detection, rapid response and system-wide integration. This is where established cybersecurity firms continue to provide value.
Second, large language models (LLMs) still exhibit high error rates, particularly in complex, real-world environments. Palo Alto CEO Nikesh Arora has noted that false positive rates for LLMs are around 30 per cent.
In cybersecurity, such error margins are unacceptable. An AI system that incorrectly flags benign activity could disrupt operations, while one that misses a genuine threat could result in a breach. Until reliability improves materially, AI models such as Mythos are not viable replacements for established cybersecurity platforms.
Third, Mythos is more likely to act as a catalyst for increased cybersecurity spending rather than reducing it. For years, boards have tolerated chronic underinvestment in security infrastructure. That dynamic is now shifting as frontier AI models significantly enhance the capabilities of malicious actors.
Vulnerabilities that once went undetected for years can now be discovered and exploited almost instantly, compressing the time between discovery and attack. This materially increases the urgency of cybersecurity investment and strengthens board-level conviction to allocate greater budgets to mitigate what is increasingly a direct business risk.
Bain & Co estimates that many large enterprises may need to double their cybersecurity spending to defend against AI-enabled threats – far above the roughly 10 per cent annual increases most companies are currently planning.
In this sense, AI is not a substitute for cybersecurity solutions, but a powerful demand driver.
Longer-term disruption possible
That said, longer-term disruption cannot be ruled out.
Certain segments of the cybersecurity industry could come under pressure. Vulnerability management firms such as Tenable, Qualys and Rapid7, for example, focus heavily on identifying and prioritising vulnerabilities. If frontier AI models can perform these tasks faster and more effectively, parts of their value proposition may erode over time.
At the same time, AI-native players such as Anthropic and OpenAI could expand into adjacent security services, potentially competing more directly with legacy providers.
However, platform leaders are structurally well-positioned to withstand such disruptions.
Firms such as Palo Alto Networks and CrowdStrike are not single-product vendors; they offer integrated platforms spanning endpoint, cloud and identity security, among others. This breadth makes them inherently more resilient to disruption, as their value lies not in any single function, but in the coordination of multiple layers of defence.
Importantly, both companies are not only embedding AI deeply into their offerings but are also partnering with Anthropic on Claude Mythos. This positions them at the forefront of the transition and helps ensure they remain leaders as the cybersecurity landscape continues to evolve.
Crucially, these companies also possess a critical advantage: data. Effective AI-driven cybersecurity depends on vast, high-quality datasets that capture how systems behave under both normal operations and malicious activity.
These datasets have been accumulated over years of deployment and are extremely difficult to replicate. Any new entrant attempting to build a competing AI-powered defence system would face a significant data disadvantage, even if they have access to equally advanced models.
Investment implications
Taken together, frontier AI does not displace cybersecurity incumbents but reshapes the competitive landscape. The near-term impact is likely to be demand expansion, while longer-term disruption will be uneven across companies.
Investors with a higher risk tolerance may consider focusing on platform leaders such as CrowdStrike and Palo Alto Networks, which are well-positioned to integrate AI and leverage their data advantages.
More conservative investors, on the other hand, may prefer diversified exposure through cybersecurity exchange-traded funds, capturing the sector’s overall growth while mitigating company-specific risks.
The writer is a research analyst with the research and portfolio management team of FSMOne Singapore, the business-to-consumer division of iFast Financial