Cybersecurity is an endless game of cat and mouse as attackers and defenders refine their tools. Generative AI systems are now joining the fray on both sides of the battlefield.
Though cybersecurity experts and model developers have been warning about potential AI-powered cyberattacks for years, there was limited evidence hackers were widely exploiting the technology. But that’s starting to change.
Growing evidence shows hackers now routinely use the technology to turbocharge their search for vulnerabilities, develop new code exploits, and scale phishing campaigns. At the same time, AI companies are building defensive security measures directly into foundation models to keep pace with attackers.
As cybersecurity becomes more automated, companies will be forced to adapt rapidly as they grapple with the security of their products and systems in the age of AI.
A recent report by Amazon security researchers highlighted the growing sophistication of hackers’ AI use. The researchers wrote that Russian-speaking attackers used multiple commercially available generative AI services to plan, manage, and conduct cyberattacks on organizations with misconfigured firewalls in over 55 countries this January and February.
The attack targeted more than 600 systems protected by FortiGate firewalls and worked by scanning for internet-exposed login pages—these are essentially front doors leading into private company networks—and attempting to access them with commonly reused security credentials. Once inside, the attackers extracted credential databases and targeted backup infrastructure. This activity suggests they may have been planning a ransomware attack.
The researchers report the attack was largely unsuccessful but still highlighted how much AI can lower the barrier to large-scale attacks. Despite being relative amateurs, the group “achieved an operational scale that would have previously required a significantly larger and more skilled team,” they wrote.
In the most vivid demonstration of AI’s hacking potential, a research prototype called PromptLock, created by a New York University researcher, used large language models to create an entirely autonomous ransomware attack.
The malware used AI to generate custom code in real time, scour the target system for sensitive data, and write personalized ransom notes based on what it found. While the tool was only a proof of concept, it highlighted the mounting threat of fully automated malware attacks.
A recent report from security firm CrowdStrike found that AI is also making attackers significantly more nimble. Average breakout times—the window between when an attacker first breaches a network and when they move into other systems—fell to just 29 minutes in 2025, 65 percent faster than in 2024.
In November, Anthropic also claimed it had detected a Chinese state-linked group using the company’s Claude Code assistant to conduct a large-scale espionage campaign. The group used jailbreaks—prompts designed to bypass a model’s safety settings—to trick Claude into carrying out the attacks. They also broke the campaign into smaller sub-tasks that looked more innocent.
The company claimed the hackers used the tool to automate between 80 and 90 percent of the attack. “The sheer amount of work performed by the AI would have taken vast amounts of time for a human team,” the company’s researchers wrote in a blog post. “At the peak of its attack, the AI made thousands of requests, often multiple per second—an attack speed that would have been, for human hackers, simply impossible to match.”
But while AI is reshaping the offensive cybersecurity landscape, defenders are deploying the tools too. In February, Anthropic released Claude Code Security, which can scan systems for vulnerabilities and propose fixes automatically. The tool can’t perform real-time security tasks like detecting and stopping live intrusions, but the news still sent stocks in traditional cybersecurity firms plummeting, according to Reuters.
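Claude Code Security is a packaged product rather than a public API, but the underlying pattern (feeding source code to a language model and asking it to flag vulnerabilities and propose fixes) is easy to sketch. The snippet below is a minimal illustration built on Anthropic’s general-purpose Python SDK, not the product’s actual interface; the prompt, model choice, and helper function are all assumptions.

```python
# A minimal sketch of LLM-assisted vulnerability review using Anthropic's
# public Python SDK. This is NOT Claude Code Security's interface; the
# prompt, model name, and helper function are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def review_for_vulnerabilities(source_code: str) -> str:
    """Ask the model to flag likely vulnerabilities and suggest fixes."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; use any current model
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": (
                "Review the following code for security vulnerabilities "
                "(injection, path traversal, unsafe deserialization, and "
                "similar issues). For each finding, cite the line and "
                "propose a fix.\n\n" + source_code
            ),
        }],
    )
    return response.content[0].text


if __name__ == "__main__":
    with open("app.py") as f:  # hypothetical file to review
        print(review_for_vulnerabilities(f.read()))
```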
Cybersecurity vendors are also embedding AI into their defensive platforms. CrowdStrike recently launched two new AI agents, one designed to analyze malware and suggest how to defend against it and another that actively combs through systems for emerging threats. Similarly, Darktrace has introduced new AI tools designed to automate the detection of suspicious network activity.
But perhaps one of the most promising applications for the technology is using it like a hacker to proactively probe defenses. Aikido Security recently released a new tool that uses agents to simulate cyberattacks on each new piece of software a company creates—a practice known as penetration testing—and automatically identify and fix vulnerabilities.
This could be a powerful tool for defenders, Andreessen Horowitz partner Malika Aubakirova wrote in a blog post. Traditional penetration testing is a labor-intensive process relying on highly skilled experts who are in short supply. Both factors seriously constrain where and how such testing can be applied.
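Aikido’s agentic pentester is proprietary, but the automation pattern behind it, scanning every new build rather than waiting for a scheduled engagement, can be sketched with off-the-shelf parts. The snippet below is a stand-in, not Aikido’s tool: it wires bandit, a real open-source Python security linter, into a CI step that fails the build on high-severity findings. A true agentic system would go further and actually attempt exploits.

```python
# A minimal "scan every build" CI gate. Uses bandit (a real Python security
# linter) as a stand-in for an agentic pentester; run as `python ci_gate.py`.
import json
import subprocess
import sys


def scan(path: str) -> list[dict]:
    # bandit exits non-zero when it finds issues, so don't use check=True
    result = subprocess.run(
        ["bandit", "-r", path, "-f", "json"],
        capture_output=True,
        text=True,
    )
    return json.loads(result.stdout).get("results", [])


def main() -> None:
    findings = scan("src")  # assumed source directory
    high = [f for f in findings if f["issue_severity"] == "HIGH"]
    for f in high:
        print(f"{f['filename']}:{f['line_number']}: {f['issue_text']}")
    # Fail the CI job if any high-severity issue was found
    sys.exit(1 if high else 0)


if __name__ == "__main__":
    main()
```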
Whether AI ends up advantaging attackers or defenders will likely depend less on raw model capabilities and more on who adapts fastest. So, it seems the never-ending game of cat and mouse that’s characterized cybersecurity for decades will continue much the same.

