Salesforce today detailed its plan to integrate the Informatica data management platform it acquired earlier this year with its framework for deploying artificial intelligence (AI) applications and agents.

Starting in the first quarter of 2026, organizations will be able to bidirectionally share data among the Salesforce metadata model and catalog, the real-time integration platform from the MuleSoft arm of Salesforce, and the enterprise-wide data catalog from Informatica, creating a complete data index without having to copy data.
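Conceptually, that "zero-copy" index resembles a federation layer that queries each catalog in place rather than centralizing records. The sketch below illustrates the idea only; the class and field names are hypothetical placeholders, not the actual Salesforce, MuleSoft, or Informatica APIs.

```python
# Hypothetical sketch of a zero-copy federated index: each catalog keeps
# its own entries, and the federation layer searches them in place rather
# than copying records into a central store. All names are illustrative.
from typing import Iterator


class Catalog:
    """One participating catalog (e.g., a CRM metadata model or an
    enterprise data catalog), holding entries it alone owns."""

    def __init__(self, name: str, entries: dict[str, dict]):
        self.name = name
        self._entries = entries  # asset name -> metadata

    def search(self, term: str) -> Iterator[tuple[str, dict]]:
        for asset, meta in self._entries.items():
            if term.lower() in asset.lower():
                yield asset, meta


class FederatedIndex:
    """Unified view over many catalogs; queries fan out, data stays put."""

    def __init__(self, catalogs: list[Catalog]):
        self.catalogs = catalogs

    def search(self, term: str) -> list[dict]:
        results = []
        for catalog in self.catalogs:
            for asset, meta in catalog.search(term):
                # Only a reference is returned; the record never leaves
                # the catalog that owns it.
                results.append({"catalog": catalog.name, "asset": asset, **meta})
        return results


crm = Catalog("crm_metadata", {"orders": {"owner": "sales"}})
edc = Catalog("enterprise_catalog", {"orders_raw": {"owner": "data-eng"}})
print(FederatedIndex([crm, edc]).search("orders"))
```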

Organizations will be able to ingest data from any source into a data layer that gives AI applications and agents a persistent memory, write custom code or automations at the application layer, and then use the Salesforce Agentforce framework to orchestrate both Salesforce and third-party AI agents.
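A minimal sketch of that three-layer pattern, assuming generic interfaces rather than the actual Agentforce API, might look like the following. Every name here is a placeholder for a concept in the announcement.

```python
# Hypothetical sketch: a shared data layer agents read from and remember
# into, an application-level automation hook, and a thin orchestrator
# that routes tasks to first-party or third-party agents alike.
class DataLayer:
    def __init__(self):
        self.records: list[dict] = []
        self.agent_memory: dict[str, list[str]] = {}

    def ingest(self, source: str, payload: dict) -> None:
        self.records.append({"source": source, **payload})

    def remember(self, agent: str, note: str) -> None:
        self.agent_memory.setdefault(agent, []).append(note)


class Agent:
    def __init__(self, name: str, data: DataLayer):
        self.name, self.data = name, data

    def handle(self, task: str) -> str:
        # A custom automation at the application layer could run here.
        self.data.remember(self.name, f"handled: {task}")
        return f"{self.name} completed {task}"


class Orchestrator:
    """Dispatches work across a registry of agents."""

    def __init__(self, agents: dict[str, Agent]):
        self.agents = agents

    def run(self, agent_name: str, task: str) -> str:
        return self.agents[agent_name].handle(task)


data = DataLayer()
data.ingest("erp", {"order_id": 42, "status": "shipped"})
orch = Orchestrator({"support": Agent("support", data)})
print(orch.run("support", "summarize order 42"))
print(data.agent_memory)
```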

That capability will make it possible to provide the trusted enterprise context that organizations require to successfully operationalize AI, says Rahul Auradkar, executive vice president and general manager for Unified Data Services, Data 360 and AI Foundations for Salesforce. “Models are incredibly powerful but they tend to be corporate stupid,” he says.

The overall goal is to provide the context AI applications require so organizations can realize the full value of their investments in platforms such as Salesforce Agentforce, adds Krish Vitaldevara, chief product officer for Informatica. “Context is the new currency,” he says.

Additionally, that level of integration creates a data verification layer that traces the full journey of data from origin to consumption, he adds.

The MuleSoft integration platform, meanwhile, will surface operational signals such as inventory changes, shipment delays, and order exceptions to trigger secure actions. As a result, an AI agent will know where a piece of data came from, how it was produced, and whether it’s fresh enough to trust, notes Vitaldevara.
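As a rough illustration of that trust gate, an agent-side check on lineage and freshness could take the shape below. This is a sketch under assumed names, not the Informatica or MuleSoft implementation; the fields and functions are hypothetical.

```python
# Hypothetical trust gate: before acting on a record, an agent checks
# where the data came from, how it was produced, and whether it is
# fresh enough. All identifiers here are illustrative.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class LineageRecord:
    origin: str          # system the data came from
    produced_by: str     # pipeline or job that produced it
    produced_at: datetime


def fresh_enough(lineage: LineageRecord, max_age: timedelta) -> bool:
    """Return True only if the record is younger than max_age."""
    return datetime.now(timezone.utc) - lineage.produced_at <= max_age


shipment = LineageRecord(
    origin="wms",
    produced_by="nightly_shipment_sync",
    produced_at=datetime.now(timezone.utc) - timedelta(hours=2),
)

# An agent might refuse to trigger an action on stale operational signals.
if fresh_enough(shipment, max_age=timedelta(hours=6)):
    print(f"Acting on data from {shipment.origin} via {shipment.produced_by}")
else:
    print("Data too stale; requesting a refresh before acting")
```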

Salesforce now reports 598 trillion transactions per quarter across the Informatica data management platform, along with another 32 trillion records ingested via the Salesforce Data 360 platform in the third quarter of this year. Collectively, that data provides the metadata and context that AI applications and agents require to produce more accurate results. The challenge, and the opportunity, now is to find a way to safely expose the right data to the right AI model at the right time.

Regardless of the approach to data management, it’s clear AI will require a massive modernization effort that might take years to complete. In the meantime, IT teams will be asked to surface subsets of data that will enable organizations to derive value from a few AI projects today with an eye toward steadily expanding the scope of those efforts over the months and years to come.

It’s not clear how much patience senior business leaders will have with that approach. The alternative, however, is to invest heavily in AI applications and agents that are more likely to surface inaccurate results and might ultimately do more harm than good, especially in highly deterministic workflows that require organizations to perform the same task the same way every time.
