Dario Amodei, co-founder and chief executive officer of Anthropic, at the AI Impact Summit in New Delhi, India, on Thursday, Feb. 19, 2026.
Prakash Singh | Bloomberg | Getty Images
Global banks, tech giants and governments were sent scrambling last month to contain the risks posed by Mythos, the Anthropic model said to be so powerful that it has found thousands of previously unknown vulnerabilities in the world’s software infrastructure.
There’s just one problem: the capability they’re worried about is already here.
Cybersecurity experts and artificial intelligence researchers told CNBC that the software vulnerabilities revealed by Mythos can be found using existing models, including those from Anthropic and OpenAI.
“What we are seeing across the industry now is that people are able to reproduce the vulnerabilities found with Mythos through clever orchestration of public models to get very, very similar results,” said Ben Harris, CEO of cybersecurity firm watchTowr Labs.
Mythos has jolted executives and policymakers alike over concern that a perilous new era of AI-enabled cybercrime may be near. Anthropic limited its release to a few American companies including Apple, Amazon, JPMorgan Chase and Palo Alto Networks to reduce the risk that bad actors get their hands on it.
Even with that precaution, the release has prompted the Trump administration to consider new government oversight over future models.
It’s the latest in a string of high-profile launches from Anthropic that have intensified its rivalry with OpenAI as the two AI giants approach their highly anticipated IPOs. Weeks after the arrival of Mythos, OpenAI CEO Sam Altman announced GPT-5.5-Cyber, a model specifically tailored for cybersecurity.
OpenAI on Thursday gave vetted cybersecurity teams limited access to GPT-5.5-Cyber.
The controlled rollout of Mythos, part of a security measure called Project Glasswing, was intended to give the corporate world time to gird its cyber defenses against a coming onslaught of attacks from criminal groups and adversarial nations.
“The danger is just some enormous increase in the amount of vulnerabilities, in the amount of breaches, in the financial damage that’s done from ransomware on schools, hospitals, not to mention banks,” Anthropic CEO Dario Amodei said this week at an Anthropic event.
‘Scary enough’
But to those fighting in the trenches of cyber warfare, one of the key capabilities advertised by Anthropic — to find software vulnerabilities at scale — has been around since last year.
“The models that we have right now are powerful enough to detect zero days in a large scale, and this is scary enough,” Klaudia Kloc, CEO of cybersecurity firm Vidoc, told CNBC.
That has been the case for “a couple of months, if not a year,” she said.
The term “zero-day” refers to a previously unknown software flaw that hasn’t been patched, giving attackers a window to exploit it before defenders can respond.
Researchers at Vidoc leaned on a technique called “orchestration” to test whether they could find the same vulnerabilities that Mythos did. As the name suggests, the process involves creating workflows that split code into smaller pieces and coordinate among multiple tools or models to cross-check results.
“We ran older models against the same code base to see if we’d be able to detect the same vulnerabilities,” Kloc said. “We did, with both OpenAI and Anthropic’s older models.”
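As a rough illustration, the cross-checking workflow Kloc describes might look like the following sketch. The scanner functions here are simple pattern-matching stand-ins for real model API calls, and every name is hypothetical; the point is the structure: chunk the code, fan it out to independent scanners, and keep only findings that multiple scanners agree on.

```python
# Hypothetical sketch of an "orchestration" pipeline: split code into
# chunks, have several independent scanners (stand-ins for AI models)
# flag each chunk, and keep only findings confirmed by a quorum.

def chunk_source(source: str, max_lines: int = 20) -> list[str]:
    """Split a source file into small, independently reviewable pieces."""
    lines = source.splitlines()
    return ["\n".join(lines[i:i + max_lines])
            for i in range(0, len(lines), max_lines)]

def scanner_a(chunk: str) -> set[str]:
    # Stand-in for one model: crude pattern checks, not a real analyzer.
    findings = set()
    if "strcpy(" in chunk:
        findings.add("unbounded-copy")
    if "system(" in chunk:
        findings.add("command-injection")
    return findings

def scanner_b(chunk: str) -> set[str]:
    # A second, slightly different stand-in model.
    findings = set()
    if "strcpy(" in chunk or "sprintf(" in chunk:
        findings.add("unbounded-copy")
    if "system(" in chunk:
        findings.add("command-injection")
    return findings

def cross_check(source: str, scanners, quorum: int = 2) -> dict[int, set[str]]:
    """Run every scanner on every chunk; report findings that at least
    `quorum` scanners agree on, keyed by chunk index."""
    confirmed = {}
    for idx, chunk in enumerate(chunk_source(source)):
        votes: dict[str, int] = {}
        for scan in scanners:
            for finding in scan(chunk):
                votes[finding] = votes.get(finding, 0) + 1
        agreed = {f for f, n in votes.items() if n >= quorum}
        if agreed:
            confirmed[idx] = agreed
    return confirmed
```

In practice the scanners would be calls to different hosted models, and the quorum step is what filters out any one model's false positives, which is one plausible reading of how "clever orchestration of public models" can approach the results of a single stronger model.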
Another cybersecurity firm, AISLE, found that many of Mythos’s headline results could be reproduced using cheaper models working in parallel — suggesting that scale and coordination were more important than having the latest model.
“A thousand adequate detectives searching everywhere will find more bugs than one brilliant detective who has to guess where to look,” AISLE founder Stanislav Fort wrote in a blog post.
In comments to CNBC, Anthropic didn’t dispute that earlier models were capable of finding software vulnerabilities.
In fact, a company spokesperson said, Anthropic has been warning for months that AI’s cyber capabilities were advancing rapidly. They pointed to a February blog post showing that Claude Opus 4.6, a widely available model, found more than 500 “high severity” vulnerabilities in open-source software.
At the Anthropic event this week, Amodei affirmed this point, saying that while Mythos found far more software vulnerabilities than earlier models, the trend itself wasn’t new.
“The risks are very real. This is why we took the actions we did,” Amodei said. “But they’re also, in some sense, not that surprising. … We’ve been seeing warnings of this for a while.”
Hysteria and panic
What makes Mythos different is its ability to take the next step, developing working exploits with little or no human input, effectively automating a process that previously required skilled researchers, the Anthropic spokesperson said.
But hackers working for criminal groups and adversarial nations already have this skill set, cyber researchers say. Hackers in North Korea, China and Russia “know how to do this, with or without Anthropic,” Kloc said.
The threat of AI-enabled hacking has corporations and government regulators worried about protecting crucial systems from a new wave of ransomware and other types of attacks, according to Harris.
He described conversations with banks, insurers and regulators in recent weeks as “hysteria.”
Even before the advent of generative AI, corporations faced the problem of skilled hackers exploiting newfound vulnerabilities in hours, while patching the code often takes days or weeks. Some patches require key systems to be taken offline, complicating matters.
“The industry is panicking about the number of vulnerabilities they face now,” Harris said. “But even before Mythos is widely available, it couldn’t fix vulnerabilities fast enough.”
Before, only a tiny population of experts globally had the ability and time to find and exploit obscure software vulnerabilities, according to Harris. Now, with currently available AI models, the barriers to entry for wreaking cyber havoc have been lowered.
That means that banks and other targets will see more attacks, and that software systems that previously didn’t draw as much interest from cybercriminals will now face threats, Harris said.
Advantage: Offense
While Anthropic, OpenAI and others are working on developing cyber defense capabilities commensurate with the problems they have identified, the initial advantage goes to offense, not defense, researchers say.
“You have a significant increase in the volume of vulnerabilities discovered, but they don’t seem to have deployed a tool that helps you fix them,” said Justin Herring, partner at the law firm Mayer Brown and former executive deputy superintendent for cybersecurity at New York’s financial regulator.
“Vulnerability management is the great Sisyphean task of cybersecurity,” Herring said.
The limited group that was part of the initial Mythos release got a head start on patching vulnerabilities, but there is a downside. AI researchers haven’t been given access to Mythos to independently verify Anthropic’s claims or to begin building defenses against it.
Some say the restriction prevented the wider cyber community from being part of the solution.
It has created “tiers of haves and have-nots,” which could stunt the pace of cybersecurity innovation, said Pavel Gurvich, CEO of cybersecurity startup Tenzai, which uses Anthropic’s models.
Many cybersecurity startups are working on solutions that can help businesses in this new era of AI, he said.
“They’re trying to figure out the best way to fix the world before this becomes accessible to the world,” said Ben Seri, co-founder of cybersecurity startup Zafran Security. “It’s this kind of chicken-and-egg situation, and you’re going to break some eggs. It’s unavoidable.”

