A team of researchers has uncovered what they say is the first reported use of artificial intelligence to direct a hacking campaign in a largely automated fashion.
The AI company Anthropic said this week that it disrupted a cyber operation that its researchers linked to the Chinese government. The operation involved the use of an artificial intelligence system to direct the hacking campaign, which researchers called a disturbing development that could greatly expand the reach of AI-equipped hackers.
While concerns about the use of AI to drive cyber operations are not new, what is concerning about the latest operation is the degree to which AI was able to automate some of the work, the researchers said.
“While we predicted these capabilities would continue to evolve, what has stood out to us is how quickly they’ve done so at scale,” they wrote in their report.
The operation targeted tech companies, financial institutions, chemical companies and government agencies. The researchers wrote that the hackers attacked “roughly thirty global targets and succeeded in a small number of cases.” Anthropic detected the operation in September and took steps to shut it down and notify the affected parties.
Anthropic noted that while AI systems are increasingly being used in a variety of settings for work and leisure, they can also be weaponized by hacking groups working for foreign adversaries. Anthropic, maker of the generative AI chatbot Claude, is one of many tech companies pitching AI “agents” that go beyond a chatbot’s capabilities to access computer tools and take actions on a person’s behalf.

“Agents are valuable for everyday work and productivity, but in the wrong hands they can substantially increase the viability of large-scale cyberattacks,” the researchers concluded. “These attacks are likely to only grow in their effectiveness.”
A spokesperson for China’s embassy in Washington did not immediately return a message seeking comment on the report.

Microsoft warned earlier this year that foreign adversaries were increasingly embracing AI to make their cyber campaigns more efficient and less labor-intensive. The head of OpenAI‘s safety panel, which has the authority to halt the ChatGPT maker’s AI development, recently told The Associated Press he’s watching out for new AI systems that give malicious hackers “much greater capabilities.”
America’s adversaries, as well as criminal gangs and hacking companies, have exploited AI’s potential, using it to automate and improve cyberattacks, to spread inflammatory disinformation and to penetrate sensitive systems. AI can translate poorly worded phishing emails into fluent English, for example, as well as generate digital clones of senior government officials.
Anthropic said the hackers were able to manipulate Claude using “jailbreaking” techniques that involve tricking an AI system into bypassing its guardrails against harmful behavior, in this case by claiming they were employees of a legitimate cybersecurity firm.
“This points to a big challenge with AI models, and it’s not limited to Claude, which is that the models have to be able to distinguish between what’s actually going on with the ethics of a situation and the kinds of role-play scenarios that hackers and others may want to cook up,” said John Scott-Railton, senior researcher at Citizen Lab.

The use of AI to automate or direct cyberattacks will also appeal to smaller hacking groups and lone wolf hackers, who could use it to expand the scale of their attacks, according to Adam Arellano, field CTO at Harness, a tech company that uses AI to help customers automate software development.
“The speed and automation provided by the AI is what’s a bit scary,” Arellano said. “Instead of a human with well-honed skills trying to hack into hardened systems, the AI is speeding those processes and more consistently getting past obstacles.”
AI programs will also play an increasingly important role in defending against these kinds of attacks, Arellano said, demonstrating how AI and the automation it allows will benefit both sides.
Response to Anthropic’s disclosure was mixed, with some seeing it as a marketing ploy for Anthropic’s approach to defending cybersecurity and others welcoming it as a wake-up call.
“This is going to destroy us – sooner than we think – if we don’t make AI regulation a national priority tomorrow,” wrote U.S. Sen. Chris Murphy, a Connecticut Democrat, on social media.
That led to criticism from Meta‘s chief AI scientist Yann LeCun, an advocate of the Facebook parent company’s open-source AI systems that, unlike Anthropic’s, make their key components publicly accessible in a way that some AI safety advocates deem too dangerous.
“You’re being played by people who want regulatory capture,” LeCun wrote in a reply to Murphy. “They’re scaring everyone with dubious studies so that open source models are regulated out of existence.”
© 2025 The Canadian Press