The AI trust gap: Developers grapple with issues around security, memory, cost and interoperability

There’s a paradox among developers surrounding their use of artificial intelligence today: They’re willing to use AI, but trust in AI tools has dropped sharply.

That was among the many findings contained in the annual developer survey commissioned by Stack Overflow, a popular web resource in the developer community. The survey found that though 84% of developers now use AI, only 29% trust its accuracy.

Results such as these highlight the growing pains AI is experiencing as the technology becomes ingrained in enterprise operations. As questions swirl around issues such as security, memory, cost and interoperability, developers are challenging the presumption that AI will make their lives easier.

“If AI is supposed to be a revolutionary productivity tool, then why am I still doing so much of the work?” asked Tony Loehr, a solutions engineer at Cline Bot Inc.

Solving the memory problem

Loehr spoke at the Developer Week conference in San Jose, a gathering on Thursday and Friday of engineers and enterprise executives focused on independent software development and AI tools. The event provided a chance to evaluate how AI has impacted enterprise operations and what key issues have moved to the forefront. One of these involves memory.

As recently documented by SiliconANGLE Chief Executive John Furrier, AI is “memory-bound.” A typical AI server demands about eight times more memory than a conventional machine to deliver informed, accurate results.

Richmond Alake of Oracle spoke during the conference about the importance of memory for AI developers.

“The nature of what we have to do to build AI agents in production is changing,” Richmond Alake, director of AI developer experience at Oracle Corp., said during a presentation at the conference on Friday. “We need memory to be front and center.”

Part of Oracle’s solution is to offer representations of memory as tables in an Oracle database, allowing developers to build agents that can remember. The Oracle AI Database serves as an Agent Memory Core, providing unified retrieval, scalable persistence and foundations for building agents that learn and adapt over time.
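As an illustration of the memory-as-tables idea, here is a minimal sketch using Python’s built-in SQLite rather than Oracle’s product; the table schema and helper names are hypothetical, not Oracle’s API:

```python
import sqlite3

# Hypothetical illustration: persist agent "memories" as rows in a table,
# then retrieve the most recent ones to rebuild context for the next turn.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE agent_memory (
           agent_id TEXT,
           turn     INTEGER,
           role     TEXT,
           content  TEXT
       )"""
)

def remember(agent_id, turn, role, content):
    conn.execute(
        "INSERT INTO agent_memory VALUES (?, ?, ?, ?)",
        (agent_id, turn, role, content),
    )

def recall(agent_id, last_n=2):
    # Fetch the latest turns, then flip them back into chronological order
    # so they can be prepended to the agent's next prompt.
    rows = conn.execute(
        "SELECT role, content FROM agent_memory "
        "WHERE agent_id = ? ORDER BY turn DESC LIMIT ?",
        (agent_id, last_n),
    ).fetchall()
    return list(reversed(rows))

remember("support-bot", 1, "user", "My order #123 is late.")
remember("support-bot", 2, "assistant", "Order #123 ships tomorrow.")
print(recall("support-bot"))
```

The point is the persistence boundary: because the memory lives in a database rather than in the model’s context window, it survives restarts and can be queried, governed and scaled like any other data.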

“Agent Memory Core is the part of your system that sees the most data traffic,” Alake explained. “It’s a bunch of system components working together to ensure your agents adapt. Memory is not just a layer, it became a product, it became a core feature.”

Benefits of smaller models

Another way to improve AI’s accuracy and reduce the amount of memory needed involves a move toward smaller models. One of the companies involved in streamlining this process for developers is Red Hat Inc.

The goal is to speed up inference, and a leading choice to facilitate that is quantization, a technique for converting large language model weights to lower-precision formats, reducing the memory load. Red Hat cites an example where quantization allows users to run a Llama 70 billion-parameter model on a single Nvidia A100 GPU versus the four A100s needed to run the same model. The company maintains a repository of pre-quantized models for developers to access.
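The weight conversion at the heart of quantization can be sketched in miniature. This toy example (pure Python, a single shared scale factor, not any real quantizer, which would use per-channel schemes and calibration) shows why storing weights as int8-range integers cuts memory while keeping error bounded:

```python
# Toy illustration of post-training quantization: map float weights onto
# the int8 range [-127, 127] plus one shared scale factor. Stored as int8
# instead of float32, the weights take roughly a quarter of the memory.
weights = [0.8, -1.2, 0.05, 2.4]                  # toy float weights
scale = max(abs(w) for w in weights) / 127.0      # fit the range into int8
quantized = [round(w / scale) for w in weights]   # small integers on disk/GPU
dequantized = [q * scale for q in quantized]      # recovered at inference time

max_error = max(abs(w - d) for w, d in zip(weights, dequantized))
print(quantized)            # each value lies in [-127, 127]
print(max_error <= scale)   # rounding error is bounded by the scale
```

The trade-off is exactly the one Kerrison describes: a small, bounded loss of precision in exchange for a large drop in memory footprint, which in turn lowers cost and time to first token.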

Legare Kerrison of Red Hat offered a new perspective on the benefits of smaller models.

“When the LLMs are smaller, it means that our costs are also smaller and we’re moving faster,” said Legare Kerrison, a developer advocate for AI at Red Hat. “There’s less time to first token.”

Despite the promise of smaller models, most of the tools on display in the exposition hall at Developer Week were designed to facilitate access to leading AI models such as those from OpenAI Group PBC, Anthropic PBC and China’s DeepSeek. The developer community is looking closely at the merits of both small and large models, a situation that will likely become clearer over the coming year.

“Right now, it’s hard to bet against the foundation models, especially in our space,” Jody Bailey, chief product and technology officer at Stack Overflow, said in an interview with SiliconANGLE during the conference. “I do believe there are plenty of places where small models make a ton of sense.”

Building governance for MCP

Another area of focus for developers has been the increasingly influential role of MCP, or Model Context Protocol, servers in their work. MCP servers provide LLMs and AI agents with the ability to connect to external data sources, other models and software applications.
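MCP builds on JSON-RPC 2.0, so a tool invocation is just a structured message from client to server. A hedged sketch of what such a request might look like (the `get_weather` tool and its `city` argument are hypothetical, not from any real server):

```python
import json

# Sketch of the JSON-RPC 2.0 shape MCP uses: a client asks a server to
# invoke one of the tools the server advertises.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",                 # MCP method for invoking a tool
    "params": {
        "name": "get_weather",              # which tool on the server
        "arguments": {"city": "San Jose"},  # tool-specific inputs
    },
}
wire = json.dumps(request)
print(wire)
```

Because every capability is exposed through uniform messages like this, an agent can discover and call tools it has never seen before, which is precisely what makes governance of those calls so important.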

MCP governance has been a key concern in the development community. Security professionals from Red Hat and IANS Research have documented security concerns with MCP in recent months, and one research report found that nearly 2,000 of the MCP servers exposed on the internet today lacked proper authentication and access controls.

“It’s like a lot of things where you have adoption first,” said Stack Overflow’s Bailey, who noted that his firm’s own MCP server employs enterprise-grade access controls. “Anybody can get an MCP server.”

The issue is that MCP doesn’t offer the governance controls required for production systems. To address this issue, companies such as Descope Inc. and WSO2 LLC have recently announced solutions designed to facilitate safer use.
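The gateway pattern such products describe can be sketched simply: authenticate the caller first, then authorize the requested method against a per-client allowlist, and only then forward anything to the MCP server. This is a hypothetical sketch with made-up client and method names, not any vendor’s API:

```python
# Illustrative per-client allowlist: which MCP methods each known
# client identity is permitted to invoke.
ALLOWED = {"ci-agent": {"tools/list", "tools/call"}}

def gateway(client_id: str, token_valid: bool, method: str) -> dict:
    if not token_valid or client_id not in ALLOWED:
        return {"error": "unauthenticated client"}   # reject unknown callers
    if method not in ALLOWED[client_id]:
        return {"error": "method not permitted"}     # enforce least privilege
    return {"result": f"{method} forwarded"}         # only now reach the server

print(gateway("ci-agent", True, "tools/call"))    # allowed
print(gateway("anonymous", True, "tools/call"))   # blocked: no credentials
```

The design choice is that policy lives in one chokepoint rather than in each agent, which is what makes it auditable.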

Descope released Client ID Metadata Documents, or CIMD, support in January as part of its Agentic Identity Hub. CIMD addresses the client registration challenges of MCP by creating a stronger identification and verification path for server interactions.

WSO2 has approached the governance issue by building a gateway feature into its API manager and SaaS platform Bijira. The WSO2 MCP Gateway adds governance, security and operational controls to the MCP standard.

“You can’t rely on agents to govern themselves,” said Derric Gilling, vice president and general manager of the API Platform at WSO2. “Your gateway may have to evolve.”

Gateways for interoperability

Evolution of the gateway may indeed play a more important role for AI in the enterprise, as seen in the statements and actions of major players such as IBM Corp. The company’s vision of the AI Gateway as a specialized middleware platform that facilitates the integration and management of AI tools is a central part of its AI strategy. Interoperability will be key, according to Nazrul Islam, chief architect and CTO for AI and the integration platform at IBM.

Nazrul Islam of IBM outlined the importance of AI agent interoperability for Developer Week attendees.

“The problem is not the model, the problem is not the agent, the problem is the interactions,” said Islam. “We’re missing the interoperability, not the intelligence.”

IBM’s AI Gateway is a feature of the DataPower service in API Connect. It’s designed to make it easier to manage access to API endpoints used by various AI applications.

“There is the policy enforcement point,” Islam explained. “Agents recommend, the gateway authorizes. We need a control layer for agent-to-agent interaction.”

The challenge confronting developers and the technology community in general is that the frenetic activity to build infrastructure around AI carries no guarantee that it will deliver the expected results. Much as Stack Overflow has documented the trust gap among developers, open questions remain around full enterprise adoption, at least until some of the core issues such as security and interoperability are resolved.

“Just because AI can solve a problem doesn’t mean that people will adopt that solution,” said Caren Cioffi, co-founder and CEO of Agenda Hero Inc. “Agents can’t automate adoption.”

Photos: Mark Albertson/SiliconANGLE
