As the global artificial intelligence engine keeps accelerating, so do concerns about threats to the very infrastructure powering it.
The rise of AI agents has opened new questions about the level of security needed to manage the access they have and the actions they take. More questions are being raised about securing the protocols behind inter-agent communication and the enabling technology that could allow more rapid AI advances among nation-states such as China.
The cybersecurity community expressed concern about rising AI risks not long after OpenAI Group PBC's ChatGPT burst on the scene near the end of 2022. More than three years later, amid widespread AI adoption, the prevalence of AI agents is leading some cybersecurity researchers to wonder if the risk zone has grown even wider.
"The types of behaviors that we've started seeing in agentic AI are really changing our landscape," Dr. Margaret Cunningham, vice president of security and AI strategy for Darktrace Inc., said during a two-day virtual briefing hosted this week by the nonprofit Cloud Security Alliance. "As we're going through this adoption, it's rapidly expanding our attack surface."
MCP servers under attack
That attack surface includes some of the most widely used Model Context Protocol, or MCP, servers on the internet today. They provide large language models with the ability to connect to external data sources, other models and software applications.
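Under the hood, MCP clients and servers exchange JSON-RPC 2.0 messages: the model host sends requests such as `tools/list` and `tools/call`, and the server executes the tool and returns a result. A minimal sketch of the request a client would send to invoke a server-side tool (the tool name and arguments here are purely illustrative):

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP 'tools/call' request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool exposed by some MCP server:
msg = make_tool_call(1, "search_files", {"query": "quarterly report"})
print(msg)
```

Because any server that answers these messages can feed data or tool results straight into a model's context, the security burden falls on whoever deploys and connects the server.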
Security researchers have noted that when Anthropic PBC introduced the MCP open standard in November 2024, it put the onus on users to secure it properly. Since then, security professionals from Red Hat Inc. and IANS Research have documented security concerns with MCP in recent months. Anthropic itself released additional MCP guidance in November that referenced security techniques involving code execution when using MCP for AI agents.
"I have not found true native full-stack security in MCP," Aaron Turner, a faculty member at IANS, said in a presentation during the CSA event. "We have to be ready for some really bad things to happen."
The challenges with MCP security extend to CI pipelines, cloud workloads and employee endpoints. In an analysis of MCP server deployments across enterprise environments recently published by Clutch Security Inc., researchers found that 95% of MCP deployments were running on employee endpoints where security tools had no visibility.
"It's my opinion that you should treat MCPs as malware if they try to run on endpoints," Turner said.
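Closing that visibility gap starts with knowing which processes on an endpoint look like MCP server launches. A minimal, illustrative sketch of the kind of command-line matching an endpoint agent might apply — the signature list is an assumption for demonstration, not a vetted detection rule:

```python
import re

# Illustrative patterns only: MCP servers are commonly launched through
# npx/uvx wrappers or standalone "mcp-server" binaries; tune for your fleet.
MCP_SIGNATURES = [
    r"npx\s+.*mcp",     # e.g. Node-based servers started via npx
    r"uvx\s+.*mcp",     # Python-based servers started via uvx
    r"mcp[-_]?server",  # generic "mcp-server" binaries or scripts
]

def flag_mcp_processes(cmdlines):
    """Return the command lines that look like MCP server launches."""
    compiled = [re.compile(p, re.IGNORECASE) for p in MCP_SIGNATURES]
    return [c for c in cmdlines if any(p.search(c) for p in compiled)]

sample = [
    "/usr/bin/python3 backup.py",
    "npx -y @example/mcp-server-filesystem /home/user",  # hypothetical package
    "node app.js",
]
print(flag_mcp_processes(sample))
```

In practice the input would come from the endpoint's process table rather than a hard-coded list; the point is that MCP activity leaves recognizable traces that today's security tooling is simply not looking for.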
Dropping below the security poverty line
The challenges related to AI deployment have brought renewed focus on the ability of smaller businesses to protect their critical assets. Accenture plc has reported that while 43% of cyberattacks affect small businesses, only 14% of those firms have the ability to protect themselves.
Rich Mogull of CSA and Wendy Nather of 1Password spoke about the widening security gap during the CSA event.
This has given rise to the "security poverty line," a term attributed to Wendy Nather, senior research initiatives director at 1Password LLC. There is a growing belief within the cybersecurity community that AI could widen the divide between resource-rich firms and those that cannot afford the staff or tools to defend themselves.
"If you are a retail shop with a 1% profit margin, you will have trouble spending the money on security that you need," Nather said during an appearance for the CSA event. "Just training alone isn't going to do it."
The flip side of this dynamic is that malicious actors with fewer resources are in a better position to leverage AI today. Signs are starting to appear that they're targeting large language model infrastructure in volume.
Honeypots set up by the cybersecurity firm GreyNoise Intelligence Inc. recorded more than 91,000 attack sessions on LLM infrastructure over three months, starting in October, with nearly 81,000 taking place during an 11-day period. The attacks were designed to probe LLM endpoints such as OpenAI-compatible APIs and Google Gemini formats.
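Probes of this kind typically sweep well-known LLM API routes looking for exposed inference endpoints. A small illustrative classifier over request paths, of the sort a honeypot might use to tag sessions — the paths reflect the publicly documented OpenAI and Gemini API shapes, while the tagging scheme itself is an assumption:

```python
# Publicly documented LLM API route prefixes, keyed by endpoint format.
LLM_API_MARKERS = {
    "openai-compatible": ("/v1/chat/completions", "/v1/completions", "/v1/models"),
    "gemini": ("/v1beta/models",),  # e.g. /v1beta/models/gemini-pro:generateContent
}

def classify_probe(path: str):
    """Tag an HTTP request path that targets a known LLM endpoint format."""
    for label, prefixes in LLM_API_MARKERS.items():
        if any(path.startswith(p) for p in prefixes):
            return label
    return None  # not a recognized LLM API probe

for p in ["/v1/chat/completions",
          "/v1beta/models/gemini-pro:generateContent",
          "/admin"]:
    print(p, "->", classify_probe(p))
```

Matching on these route shapes is how a honeypot can distinguish generic web scanning from traffic specifically hunting for LLM infrastructure.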
"I'm seeing lower-resource attackers able to scale up," said Rich Mogull, chief analyst at the Cloud Security Alliance, who appeared in the same session with Nather. "They can automate a lot of processes. Everybody from script kiddies to nation-states is now using AI to develop exploits. This legitimately scares me."
Advances by China and Iran
The involvement of nation-states in developing exploits and targeting AI infrastructure is adding a new element to cybersecurity preparedness for the inevitable attack. Dr. Avi Davidi, a senior researcher at Tel Aviv University, recently published an analysis of Iran's quest to build sovereign AI capabilities that span cyberwarfare and future conflicts with Israel and Western nations.
Davidi highlighted the use of commercial AI tools by Iranian groups to scan industrial control systems and probe the defense systems of other countries. The Iranian hacker collective APT-42 attempted to trick AI systems into providing "red-team"-style attack guidance that could then be used by malicious actors.
Perhaps of greater concern among cybersecurity professionals is the expected strengthening of AI capability within China. This scenario was recently reinforced by Anthropic Chief Executive Dario Amodei, who published an essay that named China as the country with the greatest likelihood of surpassing the United States in AI capabilities.
China is also on the minds of many leading voices within the U.S. defense community. During a panel discussion organized by TED AI in San Francisco this week, one former government official voiced his concern about the balance of global AI power.

Former Department of Defense officials Maynard Holliday and Colin Kahl spoke about AI and nation-states at the TED AI event in San Francisco.
According to Colin Kahl, a senior fellow at Stanford University's Freeman Spogli Institute for International Studies and former U.S. under secretary of defense, China is gaining ground in the race to produce artificial superintelligence.
"We still have the best AI labs in the world, our models are still the best in the world," Kahl said. "But China has almost everything they need to be a very close fast follower."
Kahl noted that the previous administration had implemented progressively stricter export controls on China over a period of two years, aimed at limiting the country's ability to acquire advanced semiconductors for AI. The current administration has allowed the export of Nvidia Corp.'s H200 AI processor, with more than 2 million orders for the chip expected to come from Chinese tech firms, according to Kahl.
"We didn't want to flood totalitarian adversary states with the best technology that the U.S. made," Kahl said. "It doesn't net out from a national security perspective to allow China to close the technology gap."
Image: SiliconANGLE/ChatGPT

