
Oracle has deployed thousands of NVIDIA Blackwell graphics processing units (GPUs) to expand the capacity of its Oracle Cloud Infrastructure (OCI) platform and support next-generation reasoning models and agentic artificial intelligence (AI) deployments.
Fully optimized, this deployment of liquid-cooled NVIDIA GB200 NVL72 racks reflects a close partnership between NVIDIA and OCI. The use of NVIDIA's newest Grace Blackwell chips is part of Oracle's goal to build a supercluster of more than 100,000 NVIDIA GPUs to meet the extraordinary growth of accelerated computing in general and AI inference tokens in particular. The move demonstrates the market demand for high-speed compute created by the increase in large reasoning models, including those released by Oracle partner OpenAI.
As any data center manager knows, the Grace Blackwell chips need ultra-fast networking to perform their data crunching, and for that Oracle is using NVIDIA Quantum-2 InfiniBand and NVIDIA Spectrum-X Ethernet networking products. This chip-networking configuration, equipped with a full stack of database and application integrations, is built to enable the low latency required by large enterprise clients that run AI applications.
Also supporting the Oracle deployment is NVIDIA DGX Cloud, NVIDIA's cloud-hosted AI supercomputing service, which can run on any major cloud provider. (For instance, Google Cloud and AWS also host DGX Cloud.) The DGX platform is fully managed and architected for rapid processing of AI training and inference workloads. It's serverless and, important given its complexity, provides access to NVIDIA's technical experts.
In NVIDIA's terms, this configuration of its chips, networking gear and the DGX platform is an "AI factory," which in essence is marketing-speak for a full-featured installation of the company's gear. "We're an AI factory now," said NVIDIA CEO Jensen Huang at the company's recent GTC event, and the company has promoted the buzzword ever since. Specifically, an AI factory uses the new Grace Blackwell NVL72 platform, a rack-scale product that combines 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs.
Oracle, not surprisingly, doesn't promote its new deployment as an NVIDIA AI factory. Instead, the database vendor stresses its cloud platform's strength in AI, which is supported by a massive investment in the latest NVIDIA GPUs. Oracle touts OCI as enabling flexible deployment of GPUs across public, government and private enterprise data facilities. Customers can select the OCI Dedicated Region solution, a private deployment of OCI within a customer's data center, or OCI Alloy, a full-featured cloud platform that lets customers become cloud providers and offer cloud services to their own customers.
Among the uses for Oracle's new GPU-based supercluster is its support of OpenAI, which consistently releases new and ever more complex AI reasoning models. When OpenAI launched ChatGPT in 2022, the AI app was supported exclusively by Microsoft Azure, but OpenAI has since broadened its choice of cloud platforms to include Oracle. In January 2025, Oracle CEO Larry Ellison and OpenAI CEO Sam Altman attended a White House event in which President Donald Trump announced the Stargate Project, a partnership for AI development that included the two companies and featured a potential investment of $500 billion.