
Teradata has partnered with Fivetran to make it simpler to move data into its cloud platform for processing and analysis at scale.

Dan Spurling, senior vice president for product management at Teradata, said a Teradata destination connector will make it simpler to pull data from more than 700 sources into the Teradata VantageCloud service using extract, transform and load (ETL) and extract, load and transform (ELT) tools provided by Fivetran.
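The ETL/ELT distinction those tools embody can be sketched in a few lines. The example below is purely illustrative: it uses SQLite as a stand-in for the warehouse and invented table names, since the article does not describe the connector's internals. The key ELT idea is that raw data lands in the warehouse unchanged and the transformation happens afterward, in SQL, inside the warehouse itself:

```python
import sqlite3

# Hypothetical ELT sketch: extract raw records, load them unchanged into
# the warehouse (SQLite stands in here), then transform with SQL inside
# the warehouse -- the pattern connector-based pipelines typically follow.

raw_events = [  # "extract": rows pulled from some source system
    {"username": "alice", "amount": "19.99"},
    {"username": "bob", "amount": "5.00"},
    {"username": "alice", "amount": "7.50"},
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (username TEXT, amount TEXT)")
conn.executemany(
    "INSERT INTO raw_events VALUES (:username, :amount)", raw_events
)  # "load": raw strings land untouched, no cleanup on the way in

# "transform": typing and aggregation happen in-warehouse, after loading
conn.execute(
    "CREATE TABLE user_totals AS "
    "SELECT username, SUM(CAST(amount AS REAL)) AS total "
    "FROM raw_events GROUP BY username"
)
totals = dict(conn.execute("SELECT username, total FROM user_totals"))
print(totals)
```

In a classic ETL pipeline, the type casting and aggregation would instead run in a separate processing tier before the insert; pushing them into the warehouse is what lets a managed connector stay simple.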

Hosted on a platform managed by Fivetran, the connector will make it possible to synchronize data between those source systems and the Teradata VantageCloud service.

That capability is especially crucial as more organizations leverage Teradata VantageCloud to aggregate data in a way that makes it simpler to build artificial intelligence (AI) applications at scale, said Spurling.

The challenge that organizations typically encounter is that setting up and maintaining the data pipelines that drive those AI applications requires a lot of time and effort. The alliance with Fivetran reduces that complexity using a platform that is managed by Fivetran on behalf of the data engineers who build those pipelines, noted Spurling.

In general, Teradata is making a case for an approach to building AI applications using a platform that already houses massive amounts of data. In addition to making it easier to ingest data, Teradata has also significantly expanded the capabilities of a vector database that it provides within the Teradata VantageCloud service, said Spurling.
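The core service a vector database provides can be shown in miniature. The sketch below is not Teradata's actual engine or API; it is a toy illustration of the underlying operation, finding the stored embedding closest to a query embedding by cosine similarity, which is what lets AI applications retrieve relevant data:

```python
import math

# Toy vector-store lookup (illustrative only -- not Teradata's actual
# vector engine): rank stored embeddings by cosine similarity to a query.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

store = {  # document id -> embedding (hypothetical 3-d toy vectors)
    "doc_pricing": [0.9, 0.1, 0.0],
    "doc_support": [0.1, 0.9, 0.2],
    "doc_legal":   [0.0, 0.2, 0.9],
}

query = [0.8, 0.2, 0.1]  # embedding of a user question
best = max(store, key=lambda doc_id: cosine(query, store[doc_id]))
print(best)  # the nearest stored document to the query
```

A production vector database does the same ranking over millions of high-dimensional embeddings with approximate indexes; the point here is only the shape of the operation.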

The overall goal is to provide IT teams with access to multiple types of data engines that process data using open data formats, preventing organizations from becoming locked into specific platforms, he added.

There is, of course, no shortage of platforms for processing massive amounts of data, so each organization will need to determine which ones best suit its specific use cases. The one certain thing is that the amount of data that will need to be collected, processed, analyzed, stored and secured in the age of AI will increase exponentially. Most organizations are not going to be able to handle that amount of data without relying on some sort of data lake platform to both unify the management of data and reduce the total cost of storing it.

Inevitably, many organizations will soon find themselves relying more on AI tools to manage all the data required to train and customize the AI models being embedded into enterprise applications, noted Spurling. After all, while the amount of data being managed has exponentially increased, the size of the teams assigned to accomplish that task has, in most cases, stayed roughly the same. More challenging still, competition for that expertise has only increased further in the age of AI.

Regardless of how data is managed, there is now a greater appreciation for a discipline that, before the rise of AI, was often taken for granted. The issue now becomes how to harness the expertise of database administrators, data engineers, data scientists and application developers in a way that reduces the amount of time required to build and deploy the next generation of AI applications.
