The PCI-SIG consortium today officially released the PCI Express (PCIe) 7.0 specification, which will be used to build the next generation of motherboards needed to drive everything from 800G Ethernet to artificial intelligence (AI) applications.

At the same time, the PCI-SIG revealed that it is driving the development of an optical interconnect for motherboards based on the PCIe 6.4 specification and that work has begun on defining a PCIe 8.0 specification.

The PCIe 7.0 specification defines a raw bit rate of 128.0 GT/s to provide up to 512 GB/s of bi-directional I/O bandwidth across motherboards that will be used in next-generation IT infrastructure.
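For context, the headline number follows directly from the per-lane rate. The sketch below is a back-of-the-envelope check only; it assumes a full x16 link and ignores FLIT encoding and protocol overhead, none of which is spelled out in the announcement.

```python
# Back-of-the-envelope check of the headline PCIe 7.0 figures.
# Assumptions (not from the article): a full x16 link, 8 bits per byte,
# and encoding/protocol overhead ignored.

RAW_RATE_GT_PER_S = 128.0   # raw bit rate per lane, per the specification
LANES = 16                  # a full x16 slot

per_lane_gb_s = RAW_RATE_GT_PER_S / 8        # ~16 GB/s per lane, per direction
one_direction_gb_s = per_lane_gb_s * LANES   # ~256 GB/s in each direction
bidirectional_gb_s = one_direction_gb_s * 2  # ~512 GB/s total, matching the article

print(f"Per direction: {one_direction_gb_s:.0f} GB/s, "
      f"bi-directional: {bidirectional_gb_s:.0f} GB/s")
```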

PCI-SIG President and Chairperson Al Yanes said that, based on previous iterations of the PCIe specification, it will take the consortium's roughly 1,000 member companies two or more years to build those platforms.

In general, the consortium has been able to double bandwidth every three years and plans to maintain that pace with the PCIe 8.0 specification. Less clear at the moment is whether achieving that goal will require motherboards to rely solely on optical rather than existing electrical designs, he noted.

It's not clear to what degree organizations are planning to replace legacy infrastructure in the age of AI, but the more data-intensive applications become, the less likely it is that legacy infrastructure will be able to support large numbers of AI models and their associated agents. In fact, a recent Futurum Group research survey finds that one-quarter of IT decision-makers are already prioritizing the modernization of their data infrastructure.

Obviously, many of those decisions will involve platforms based on the current PCIe 6.0 specification, but as more AI applications are deployed, demand for more robust platforms built around the next generation of that specification will increase.

In the meantime, IT leaders should closely assess the ability of their existing infrastructure to support AI workloads, including the degree to which AI inference engines might require access to graphics processing units (GPUs) or might be just as well served by a less expensive class of processors.

Additionally, if an AI model is to be deployed at the network edge, the processing capabilities of the platform running it need to be evaluated.

Finally, distributed computing applications of any kind consume network bandwidth, and networks might become more congested as, for example, AI agents invoke application programming interfaces (APIs) to access data.

Ultimately, the total cost of implementing any AI initiative will need to be thoroughly evaluated. It's one thing to create a proof-of-concept, but the cost of running AI software that continuously analyzes data will add up. While smaller AI models are certainly less expensive to run than some of the larger models developed by OpenAI, Anthropic and others, the IT infrastructure required to support hundreds, or even thousands, of AI agents will demand significant investment. What remains to be determined is exactly how much appetite there is for deploying AI agents once those costs are factored into the equation.