Two years after our first report on the topic, we return to cyber security. The widespread adoption of artificial intelligence is changing the landscape and raising the threat level. AI has not only enabled more attacks; it has provided tools to devise new ways to attack, and such incursions can be more severe. As the World Economic Forum’s Global Cybersecurity Outlook 2026 puts it, AI means there are additional vectors for more potent attacks.
A prime example of the effect of AI on cyber security is Anthropic’s decision in April not to publicly release its Claude Mythos large language model, which in tests found thousands of vulnerabilities in everyday operating systems and browsers. The company has created Project Glasswing to use Mythos privately, with vetted partners, to devise and employ defences against any similar AI that could be developed by “cyber threat actors”.
The corporate world is worried — and it is right to be
Although cyber risk currently trails geopolitical risk as most businesses’ top concern, the Bank of England systemic risk report for the second half of 2025 said 86 per cent of companies put cyber risk in their top five risks, up from 72 per cent in the first-half report.
Responses from cyber security leaders suggest that companies are struggling to keep up. A survey of 1,600 chief information security officers conducted by Proofpoint, the cyber security provider, found that 66 per cent of Cisos worldwide had experienced a material loss of sensitive information in the previous year, up from 46 per cent in 2024. In India, home to much of the world’s digital outsourcing, 99 per cent of Cisos said their businesses’ systems had been compromised in the past 12 months.
Falling victim to a cyber attack is costly. According to Statista data cited by Viking Cloud, cyber crime’s toll was $10.5tn in 2025 and could reach $15.6tn by 2029. Chainalysis, the blockchain data platform, says that in 2023 victims paid $1bn to ransomware attackers. While payments stagnated after that, the number of reported attacks increased. Between 2025 and 2026, the median ransomware payment grew by 368 per cent to nearly $60,000.
The three main vulnerabilities
While surveys rank threats differently, they generally agree on the main vulnerabilities: the illegitimate use of legitimate identities; breaches through supply chains or third-party software; and access via internet-facing interfaces with the public, such as websites or databases. Respondents across the surveys agreed that AI has made the environment more dangerous.
Tech for Growth Forum

The Tech for Growth Forum looks at how technology can be used to achieve the growth goals and objectives of organisations, consumers and society as a whole.
Through a series of reports, events and sharing of expertise, it seeks to inform leaders on how they can harness technology to make real change.
Exploitation of trust, be it through the use of valid credentials, systems or software chains, makes access easier. The 2026 Global Threat Report from CrowdStrike, the cyber security company, found that 82 per cent of detected intrusions did not use malware. Instead, adversaries moved on authorised pathways and through trusted systems, blending into normal daily activity.
The cyber threat extends into people’s private lives. Scam farms that target individuals are a problem worldwide. Reports from Asian nations mention increasingly elaborate ruses to convince people to hand over their life savings. Such ploys cross into the corporate realm, with similar approaches used to persuade employees to provide logins or other sensitive information. From sophisticated deepfakes to old-fashioned phishing emails, all the reports highlight that the persistent weakness is the human in the chain.
Cyber defenders must also be alert to non-malicious risks. No incident illustrates the importance of resilient systems more than the worldwide outage in 2024, the largest ever, which ironically was caused by CrowdStrike itself. A faulty update to its Falcon Sensor software triggered crashes and boot-loop failures in which users were confronted with the “blue screen of death”. An estimated 8.5mn Windows systems were affected, with airlines, governments, hospitals and other critical sectors all disrupted.
Working from home and the human element
While AI has increased the speed and scale of attacks, the intruders’ main ways into a system are so far unchanged. The Verizon Data Breach Investigations Report 2025 found that bad actors still relied on the abuse of credentials to gain entry, accounting for 22 per cent of breaches analysed. The same report found that 60 per cent of breaches involved a human element, with employees falling victim to phishing, social engineering, poor digital hygiene or simple mistakes.
The number of access points into company systems has proliferated as more people work from home. In 2024 more than half of EU enterprises conducted online meetings and more than 80 per cent gave their employees remote access to email systems, according to Eurostat. In the UK, 40 per cent of workers have a hybrid homeworking arrangement or work fully remotely, according to government figures.
These scattered workforces present a challenge for cyber security. Companies can no longer rely on a perimeter firewall; instead they must secure each individual employee’s system. This makes identity management and zero-trust verification of every login critical to security.
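As an illustration of the zero-trust idea, the sketch below grants access only when identity and device checks all pass on every request, regardless of network location. It is a hypothetical example; the field names and checks are invented, not any particular vendor’s product.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool   # valid credentials presented
    mfa_passed: bool           # second factor verified
    device_managed: bool       # device enrolled in company management
    device_patched: bool       # endpoint reports up-to-date software

def allow(request: AccessRequest) -> bool:
    """Zero trust: every check must pass on every request.

    Being on the office network grants nothing; identity and
    device posture are evaluated each time access is requested.
    """
    return all([
        request.user_authenticated,
        request.mfa_passed,
        request.device_managed,
        request.device_patched,
    ])

# A stolen password alone is not enough: MFA and device posture still fail.
print(allow(AccessRequest(True, False, False, True)))   # False
print(allow(AccessRequest(True, True, True, True)))     # True
```

The point of the design is that no single credential is sufficient, which is exactly what blunts the credential-abuse attacks described above.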
The use of legitimate credentials gained by illegitimate means offers an easy way into and around company systems. The British-American hacking group Scattered Spider is notorious for its painstaking research into company employees, which it then uses to trick staff into providing information for a cyber attack. The group hit UK retailer Marks and Spencer in April 2025 in an attack that cost the company up to £300mn in lost profits and took £600mn off its market capitalisation. While the specific means of entry have not been disclosed, reports indicate that IT help desk workers were fooled.
Viable credentials can be obtained by theft, subterfuge or the purchase of stolen IDs but these can also be gained through poor practices at home and work. Cyber criminals take advantage of employees who use inadequately secured personal devices or personal logins for AI models, as well as employee errors. Corporate cyber security teams cannot defend against phishing attacks on employees who use personal devices for professional communications.
Even if employees are vigilant about protecting logins, AI has helped fraudulent IDs and deepfakes to become more advanced. Sumsub, the identity verification platform, says deepfakes once relied on real documents to create fake videos but today large language models can create entire identities, including deepfake videos, for verification purposes. While the better systems can detect inconsistent quality, many forgeries are realistic enough to circumvent standard verification procedures.
Advances in detection have pushed fraudsters towards more sophisticated and layered attacks. This presents a problem at account set-up for some institutions, such as those in the financial or public sectors, and it exposes the companies that are less alert than they should be. State-sponsored North Korean operatives, for instance, have gained access to US and European company systems by applying for jobs using fake and stolen identities.
External exposures
Securing one’s own environment is fundamental but a company’s security depends on the third parties it relies on. Outsiders with legitimate access, such as contractors, managed service providers or cloud operators, can become entry points if their credentials or systems are compromised. Beyond direct access, weaknesses in the broader supply chain, such as flaws in a software vendor’s updates or poor physical security at a logistics partner, can expose nearly every organisation.
This exposure is especially acute in sectors that cannot see into parts of their extended supply chains. A recent WEF survey found that limited insight into upstream suppliers was the first or second most prevalent cyber risk. Compounding this, several sectors rely heavily on a small number of critical providers, which raises the spectre of systemic risk should any one be compromised.
An example of this vulnerability occurred in 2020 when attackers infiltrated SolarWinds, the Texas-based IT group, and inserted malicious code into a pending update of its Orion software. When 18,000 customers installed the update, they opened a back door into their systems. Some reports blamed Cozy Bear, a Russian hacking group. While software developers have tightened their processes, enterprises that depend on external suppliers have limited ability to protect themselves against upstream failures.
Cyber criminals will also exploit trust in the supply chain. Employees are often more willing to share information with a familiar supplier, which makes compromised vendor accounts or impersonation attempts an effective way for attackers to gain a foothold.
Recent reports have highlighted how interconnectedness increases risk. The 2025 Verizon Data Breach Investigations Report found that 30 per cent of breaches involved a third party, double the level of the previous year. Most of the 2,360 breaches involving a third party stemmed from a system intrusion via an external supplier or interface. IBM’s 2026 X-Force Threat Intelligence Index also shows a sharp rise in attacks targeting exposed systems and weaknesses in the software supply chain. The report says that supply-chain and third-party breaches have quadrupled in five years.
Companies also expose themselves through their own internet-facing technology. The Verizon report found that the exploitation of vulnerabilities had increased significantly and accounted for 20 per cent of breaches, slightly behind the 22 per cent for credential abuse. The most commonly exploited vulnerabilities were in web applications, while exploitation of virtual private networks and edge devices has surged. This could indicate that organisations are insufficiently vigilant about the security of remote access and on-the-go technology.
This trend also showed up in the 2026 X-Force index, which found that exploitation of public-facing applications had risen sharply, driven in part by a surge in supply-chain attacks affecting development ecosystems and trusted infrastructure. Public-facing applications accounted for 40 per cent of initial accesses, a 44 per cent increase in 12 months. More than half of disclosed vulnerabilities required no authentication, which made them especially attractive to opportunistic attackers.
The consequences can be severe. In the Bybit attack in February 2025, a supply-chain compromise enabled North Korean attackers to distribute trojanised software and steal nearly $1.5bn in cryptocurrency.
Innocent mistakes
Not all incidents are caused by malicious actors. Software can be vulnerable for various reasons, for instance misconfiguration or errors introduced in development. In March 2026, the UK’s Companies House referred itself to regulators after admitting that a bug on its website could have exposed directors’ personal details for up to five months.
The CrowdStrike outage of 2024 was similar. Events such as these highlight how a single error in a widely used component can have global consequences. They also sharpen awareness of the risks inherent in software updates. Organisations increasingly recognise that even trusted suppliers can introduce systemic vulnerabilities: many have responded by adopting more rigorous monitoring and validation.
The impact of AI
AI is reshaping the landscape in two fundamental ways: it enables criminals to operate at a mass scale and it lowers the barrier to entry for attackers who previously lacked the skill to conduct a sophisticated campaign.
In April the American company Anthropic withheld its Claude Mythos software from public release after the AI exposed “thousands of zero-day vulnerabilities [unknown flaws], many of them critical, in every major operating system and every major web browser”. Anthropic described Mythos as a security danger and has made it available only to vetted organisations such as Apple, Microsoft, Broadcom and Cisco to create cyber security defences as part of Project Glasswing.
The company said in its blog: “AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.” News of the power of Mythos has led financial regulators worldwide to rush to assess the risks, their fear being that criminals could develop a similar capability.
Security teams report clear signs that adversaries are using AI: attacks are becoming more prolific, faster and more automated. According to CrowdStrike, AI-enabled adversaries increased activity by 89 per cent year on year. Average eCrime breakout time fell to 29 minutes in 2025 (down from 98 minutes in 2020) and the fastest intrusion took 27 seconds. In one case, data exfiltration began only four minutes after initial access.
This acceleration already affects real-world outcomes. AI allows threat actors to scan for weaknesses at unprecedented speed, helping them to identify exploitable systems before defenders can react. CrowdStrike reported a 42 per cent rise in the exploitation of previously unknown vulnerabilities, as well as increased activity by groups associated with China that exploit publicly disclosed vulnerabilities, getting in before a fix can be applied.
Jadee Hanson, the Ciso at Vanta, the security and compliance automation platform, says the threat has shifted beyond deepfakes and impersonation into “agent to agent actions”, which include prompt-injection attacks and architectural risks where AI has access to more data or system functions than intended.
Defenders must be alert to the shift brought about by AI, Hanson says. “The advantage will go to the organisations that can quickly adapt and pivot to understanding that the economics of cyber crime have completely changed. In this environment, speed is not just an operational metric. It’s almost a security control. If you’re not moving fast enough on AI that is a gap in your security posture and your security controls.”
Vyacheslav Zholudev, the co-founder and chief technology officer of Sumsub, says that with both sides equally able to use AI, the battle over cyber security has become a game of cat and mouse. When a new large language model is released, Sumsub immediately tests it by generating deepfakes to ensure its detection systems can still recognise them. Zholudev says: “I don’t exclude the possibility that at some point there will be a gap — say one week or a couple of days — where new types of fraud temporarily outrun detection systems.” His hope is that AI will eventually allow defenders to detect unknown threats before clients report them.
Katie Moussouris, the founder and CEO of Luta Security and a US government cyber security adviser, agrees that AI is transforming both sides of the battle. “AI is getting better at identifying vulnerabilities on its own,” she says, adding that at a recent conference the only competitor who initially tried to avoid using AI in a capture-the-flag exercise eventually had to adopt it simply to remain competitive.
What should companies do to protect themselves?
Our previous cyber security report advised putting security at the heart of systems development. The good news is that this appears to be happening — Hanson says that security is now less of an afterthought. Still, companies could go further. The X-Force index recommends that organisations treat identity systems as critical infrastructure: “Security leaders should elevate their identity systems to the same level of resilience, governance and monitoring as core infrastructure components.”
Building a robust system also requires a shift in mindset. As AI accelerates both attack and defence, organisations must ensure that their security foundations are strong enough to cope with greater speed and complexity. That means strengthening the fundamentals — access control, visibility, response times — while designing systems that withstand rapid change.
Hanson says that despite growing complexity, getting the basics right is still one of the most effective means of defence. “The real damage often comes from this complexity we have to manage.” She advises security leaders to prioritise resilience and scale over the pursuit of perfection. “We may not prevent every little incident from happening but we can absolutely reduce the likelihood and broader impact of a serious one affecting our companies.”
Recent enforcement actions from the UK’s Information Commissioner’s Office underline the point. Many of the failings cited in regulatory penalties — poor data security, weak access controls, insufficient staffing and testing and inadequate training — reflect precisely the foundational issues that Hanson highlights. Even as threats evolve, it is often the basic weaknesses that turn an incident into a breach.
• Educate
Security depends on the individual employee, yet Proofpoint’s 2025 Voice of the Ciso report says only 27 per cent of companies have education and awareness training. Ideally, training is engaging and straightforward, giving workers simple ways to take the right actions. Continuous education in digital hygiene is essential, along with an emphasis on basic measures such as ensuring employees do not blindly click email links, even those purporting to be from known business partners.
This is especially important because while humans are frequently the weakest link, they are also the last line of defence. Unlike machines or AI, they can exercise judgment, which helps to protect a company from malicious actors.
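One simple, teachable check of the kind such training covers is comparing where a link claims to point with where it actually goes. The sketch below is a minimal illustration with invented domains, not a production phishing filter:

```python
from urllib.parse import urlparse

def link_mismatch(display_text: str, href: str) -> bool:
    """Flag links whose visible text names one domain but whose
    real target is another, a classic phishing pattern."""
    shown = urlparse(display_text if "://" in display_text
                     else "https://" + display_text).hostname
    actual = urlparse(href).hostname
    return shown is not None and actual is not None and shown != actual

# The visible text says one thing; the link goes somewhere else entirely.
print(link_mismatch("www.example-bank.com", "https://login.attacker.example/"))  # True
print(link_mismatch("www.example.com", "https://www.example.com/account"))       # False
```

The same comparison can be done by eye: hover over the link and read the real destination before clicking.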
There has been improvement in how companies deploy new tools. Of the more than 800 participants in the WEF’s cyber security outlook, 64 per cent said they had processes to assess the security of AI tools before deployment, up from 37 per cent the previous year. The risks cannot be overstated: nearly three-quarters of respondents said they or someone in their personal network had been a victim of cyber-enabled fraud in the previous 12 months.
• Patch and update
Companies should assiduously patch and update to guard against known vulnerabilities. The priority has to be critical vulnerabilities and internet-facing systems, as these are the most likely to be exploited. An accurate inventory of systems is essential so that security teams know what needs patching and when. Where patching is not possible, which is often the case for older hardware that no longer receives updates, the system should be kept off the internet.
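That prioritisation can be expressed as a simple triage over an asset inventory. The sketch below uses hypothetical hosts and fields to show the ordering: internet-facing systems first, then by vulnerability severity.

```python
# Illustrative patch-triage sketch over a (hypothetical) asset inventory.
inventory = [
    {"host": "hr-portal",    "internet_facing": True,  "severity": "critical"},
    {"host": "build-server", "internet_facing": False, "severity": "high"},
    {"host": "vpn-gateway",  "internet_facing": True,  "severity": "high"},
    {"host": "print-server", "internet_facing": False, "severity": "low"},
]

SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def patch_order(assets):
    # Internet-facing systems first, then by severity: the systems
    # most likely to be exploited get patched soonest.
    return sorted(assets, key=lambda a: (not a["internet_facing"],
                                         SEVERITY_RANK[a["severity"]]))

for asset in patch_order(inventory):
    print(asset["host"])
# hr-portal, vpn-gateway, build-server, print-server
```

A real programme would feed this from an automated inventory and vulnerability scanner rather than a hand-written list, but the ordering logic is the same.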
• Monitor and have oversight
Behavioural monitoring is especially valuable. The X-Force index says the SolarWinds Orion issue was noticed because of uncharacteristic employee behaviour rather than the breach itself. Sumsub and other companies bake behavioural monitoring into their verification systems because it reveals clues that are otherwise easily missed.
Red flags that go beyond straightforward ID analysis are increasingly used to detect fraudulent bank accounts set up by real people who sell their identities for money-muling. “If a user creates an account from a VPN from a one-time email address and then they copy-paste their name into the fields instead of typing it . . . it’s already a bad signal,” Zholudev says.
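Zholudev’s example can be sketched as an additive risk score over behavioural signals: no single signal blocks a sign-up, but together they flag it for review. The signal names, weights and threshold below are invented for illustration.

```python
# Illustrative behavioural risk scoring, after Zholudev's example.
RISK_SIGNALS = {
    "vpn_connection": 2,    # account created from behind a VPN
    "disposable_email": 3,  # one-time email address
    "name_pasted": 2,       # name copy-pasted rather than typed
    "new_device": 1,        # never-before-seen device fingerprint
}

REVIEW_THRESHOLD = 5  # hypothetical cut-off for manual review

def risk_score(observed: set) -> int:
    """Sum the weights of the signals seen during sign-up."""
    return sum(RISK_SIGNALS.get(signal, 0) for signal in observed)

def needs_review(observed: set) -> bool:
    return risk_score(observed) >= REVIEW_THRESHOLD

# VPN + disposable email + pasted name: already a bad signal.
print(needs_review({"vpn_connection", "disposable_email", "name_pasted"}))  # True
print(needs_review({"new_device"}))                                          # False
```

Production systems weight and combine far more signals, often with machine-learned models, but the principle of accumulating weak indicators into a reviewable score is the same.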
Elsewhere, visibility into all the phases of a client’s lifecycle, for instance from “know your customer” checks to a money withdrawal, can alert Sumsub to a transaction carried out in a suspicious location. While such examples are especially relevant to financial companies, having a complete picture of customer or employee behaviour will highlight anomalies. Continuous monitoring is essential.
• Secure your supply chain
Risk questionnaires are often used to determine a supplier’s approach to cyber security but these cover only known areas of weakness. A lack of visibility into supply-chain exposures, highlighted in the WEF survey, makes contractual protections critical, especially given that there is no regulatory recourse for failures.
Moussouris says: “Right now we don’t have any legislation that says to software companies that they are liable if they have security holes in their products . . . people are allowed to sell software without any real security requirements by law — and they don’t get in trouble if there’s a catastrophic failure because of security bugs.”
• The importance of contracts
Given that “we’re nowhere close” to a law imposing a high level of liability, companies should protect themselves through, for instance, service-level agreements. They should require credits for outages, disclosures about the use of AI in a service, and clarity over where liability would lie if a third-party AI model was found to be responsible for a systems failure.
Contracts should also specify mandatory breach-notification timelines, require visibility into any subcontractors or fourth-party providers involved in delivering the service, and set out where data is stored and processed, including whether it can be used to train AI models.
• Get an independent audit
Where possible, companies should require evidence of security controls, such as independent audits or penetration-testing reports, rather than relying on self-reported questionnaires. Moussouris advises making vendors contractually obliged to fix security holes within a set time.
• Deploy AI and use it to your advantage
AI can also be harnessed to help companies identify fraud and other cyber crime. AI makes it easier for attackers to find weak spots, so defenders need to find them too. With unfettered access to AI, companies cannot afford to ignore it “because the people who are using it to find vulnerabilities, they are not going to stop”, Moussouris says.
AI can be the solution as well as the problem, she says. “It’s really finding things that humans have missed, which is generally a good thing — except that it is also overwhelming organisations with all these new reports of bugs.
“Where AI generates more findings than teams can process, AI may also be the tool that helps to sift through the noise. Smaller organisations with fewer staff can use AI to increase their capacity. They can use it to quickly digest things like vulnerability reports and alert logs . . . It does require some fine-tuning to get it right . . . it’s important to create workflows you can depend on.”
An advocate of keeping “humans in the loop”, Moussouris advises that AI must not be given free rein. “The one danger is treating it like it is all-knowing and all-capable when in fact it is more like an intern, and you wouldn’t give the intern all of the rights and privileges that you would give some kind of omniscient senior engineer.”
Human intervention will remain important, she says, to ensure that the AI can be trained to work more reliably.
