IBM Corp. will use its Think 2026 conference today to announce a broad expansion of its enterprise artificial intelligence portfolio, positioning a new "AI operating model" as the next stage in its customers' march toward translating early investments into measurable returns.
The announcements span agent orchestration, real-time data integration, hybrid cloud operations and digital sovereignty, reflecting what executives described as a shift away from isolated AI deployments toward systemic integration across the enterprise.
“The enterprises pulling ahead aren’t deploying more AI; they’re redesigning how their business operates,” IBM Chief Executive Arvind Krishna said during a media briefing.
IBM is framing AI as an operational transformation challenge rather than a model or tooling race, emphasizing its independence from AI models. The company is promoting a four-part architecture built around agents, data, automation and hybrid infrastructure, which it argues must work together to deliver value at scale.
Krishna emphasized that the majority of enterprise data remains internal, which favors IBM's focus on hybrid cloud. "Over 70% of all data is still sitting inside the enterprise in systems that are core and germane to them," he said. AI strategies must therefore account for where data resides.
A central piece of today's announcements is the evolution of watsonx Orchestrate — a platform for building, deploying and managing agents — into a multi-agent control plane spanning heterogeneous environments.
IBM characterizes its orchestration layer as a unifying framework that integrates agents from multiple vendors, said Rob Thomas, senior vice president of software and chief commercial officer. "It's about the best agentic technology from any company on the planet," he said.
The strategy positions IBM as an integrator rather than a builder of foundation models. While the company offers its own foundation models, called Granite, it emphasizes partnerships with model providers such as Anthropic PBC and OpenAI LLC, as well as major cloud platforms.
“We help put AI into the enterprise,” Krishna said, describing IBM’s role as orchestrating models, data and infrastructure while ensuring governance and security.
That approach reflects a broader shift in the competitive landscape. Rather than competing directly with hyperscalers on infrastructure or foundation models, IBM is focusing on what it sees as the next layer of value: operational integration.
Son of Bob
IBM also introduced new capabilities in its "Project Bob" platform, an AI-based tool system for enterprise software development lifecycles. New features are designed to support multimodel workflows across both cloud and on-premises environments.
IBM has deployed the technology internally and driven “over $5 billion of productivity improvements,” Thomas said.
Data integration is another pillar of the strategy. Following its recent acquisition of Confluent Inc., IBM is emphasizing real-time data pipelines as a prerequisite for effective AI coordination. The integration of streaming and batch data into watsonx.data is intended to provide agents with continuously updated context.
"Your AI is only as good as your data," Thomas said. "We're leveraging real-time data to inform agents that run in the enterprise."
The company is also expanding its Concert platform, which applies AI to infrastructure operations and security. Initially focused on identifying vulnerabilities, the platform now embeds security management directly into developer workflows. It identifies and prioritizes risks as code is written and can generate automatic remediations to fix or patch vulnerable code.
Executives stressed that human oversight is still needed. "Nothing is completely hands off, but it is used as augmentation," Thomas said, describing how AI-generated fixes are reviewed before deployment.
Sovereign control
Asserting that security and sovereignty are emerging as critical themes in enterprise AI, particularly in regulated industries and government environments, IBM formally announced the general availability of Sovereign Core, a platform announced earlier this year that supports AI deployments within tightly controlled, geographically bounded environments.
Thomas said early use cases center on organizations requiring air-gapped or fully localized infrastructure. The offering includes an extensible catalog that organizations can populate with their own applications or those from pre-vetted IBM, third-party and open-source partners.
Krishna framed sovereignty as a core requirement rather than an optional feature as AI becomes embedded in critical systems. "This way people can mix and match what's appropriate," he said. "That's our way to go forward on AI."
Quantum advance
Outside the enterprise realm, IBM highlighted recent advances in quantum computing, including a collaboration with the Cleveland Clinic to simulate protein complexes containing more than 12,000 atoms. The milestone reflects growing confidence that quantum systems are moving beyond experimental phases.
The work is part of a broader push toward what IBM calls "quantum-centric supercomputing," which combines quantum and classical systems to tackle complex problems in areas such as drug discovery. Marrying the two architectures is driving much of the current research into quantum processors.
"Quantum is no longer a science lab experiment," Krishna said. "People are doing real use cases of significant scale."
However, executives cautioned that large-scale commercial applications remain several years away. Krishna said meaningful enterprise impact is likely to emerge toward the end of the decade as hardware capabilities improve.
IBM executives were careful not to trumpet AI's transformational potential, choosing instead to emphasize the hard work that still must be done to make models scalable and reliable.
Krishna drew a parallel to previous technology cycles, arguing that initial innovation phases tend to center on infrastructure before moving up the stack. "The real value in every one of these comes with the applications and the deployment into enterprises," he said.
Thomas compared the current state of AI to the early days of electrification, suggesting that current AI deployments resemble incremental productivity tools rather than transformative systems.
"It's useful, but it's not really redefining how the company runs," he said. "This is about moving beyond light bulbs to things that are more fundamental to how a company operates."
Photo: Paul Gillin/SiliconANGLE

