

Leviton AI Network Infrastructure

Building the Ideal AI Data Center Cabling System

Investments in Artificial Intelligence (AI) have skyrocketed, and AI computing clusters are at the beginning of a huge ramp in growth. Data center managers face a range of challenges when deploying AI clusters, including power, cooling, geography, latency, and deployment speed. These factors drive the network architecture and the type of cabling required throughout all types of data centers, from enterprise designs to large hyperscale builds.

Leviton offers a full suite of products to support AI networks. With a wide range of pre-terminated fiber cabling for speed of deployment and global availability, Leviton can provide solutions tailored to specific network deployments. Our product portfolio includes multimode fiber APC assemblies, 16-fiber MPO-based connectivity, and end-to-end systems designed for ultra-low loss performance.

In addition, Leviton provides expertise in data center design, rack elevation, and layout optimization to support AI applications effectively for today and tomorrow.

White Paper

Get an overview of the challenges when deploying AI clusters, and the factors that drive the network architecture, cabling, and connectivity required in AI data centers.

Frequently Asked Questions

Generative AI relies on GPUs to handle high-performance computing (HPC) workloads. The GPUs are typically deployed in clusters, which require far more processing, data storage, and power density than a traditional data center.

As an alternative to direct connections, structured cabling offers significant benefits in AI clusters. It reduces congestion in the overhead trays and cuts down on cabling runs, resulting in up to 85% fewer cables to manage.

Also, the trunk cabling infrastructure can be pre-installed before active equipment is in place, so only patch cords need to be added once the equipment arrives. And structured cabling allows for smaller in-rack cables on the front side of patch panels, which helps reduce congestion and improve cable density within the rack itself.
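
To show where savings on that order can come from, here is a minimal sketch in Python. The port counts and trunk fiber counts are hypothetical values chosen for illustration, not Leviton figures; it simply compares the number of cables pulled through the pathways for point-to-point links versus pre-terminated trunks.

# Rough illustration with hypothetical numbers: cables pulled between rows
# for point-to-point links versus structured cabling with pre-terminated trunks.

links = 512            # hypothetical GPU-to-switch links between two rows
fibers_per_link = 8    # e.g. an 8-fiber parallel-optic link (DR4/SR4 style)

direct_cables = links  # point-to-point: one cable per link through the trays

trunk_fibers = 48      # hypothetical fiber count per pre-terminated trunk
trunks = -(-links * fibers_per_link // trunk_fibers)  # ceiling division

reduction = 1 - trunks / direct_cables
print(f"{direct_cables} direct cables vs {trunks} trunks "
      f"({reduction:.0%} fewer cables in the pathways)")

With these example values, the trunk design works out to roughly 83% fewer cables in the pathways; the actual savings depend on the link counts, optic types, and trunk fiber counts of a given design.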

Today, 200 Gb/s is most likely the slowest data rate in a large-scale AI cluster. In fact, 400 Gb/s and 800 Gb/s are more typical, and 1.6 Tb/s is expected to gain adoption in the near future, with 3.2 Tb/s to follow.

Connectivity at the 400 Gb/s data rate is dominated by traditional 12-fiber MPO connectors and LC connectors, with most volume on LC duplex FR4 optics and MPO-based DR4 and SR4 parallel optics. One difference at this rate is that multimode interfaces will drive the adoption of angled physical contact (APC) connectors to reduce reflectance, or return loss.

At the 800 Gb/s rate, 16-fiber MPO connectors will emerge as an option. With its key offset to the side, MPO-16 has a unique connector interface while still using eight pairs of fibers for communication. Very Small Form Factor (VSFF) connectors have also been introduced for these higher data rates. While transceivers may keep using MPO and LC interfaces, a VSFF connector such as the MMC can enable higher densities at the patch panel, helping manage the large volume of fibers.
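
As a quick reference for the interfaces mentioned above, the sketch below (Python) tabulates the lane and fiber counts behind a few common 400 Gb/s and 800 Gb/s optic types. It is a simplified summary of widely used industry examples (the 800G DR8 entry, for instance, is not named on this page), and the optics actually deployed will vary by network design.

# Simplified summary of lane and fiber counts behind common optic types;
# the optics actually deployed vary by network design.

optics = {
    # name:           (Gb/s, lanes, Gb/s per lane, fibers, typical connector)
    "400G FR4":       (400, 4, 100, 2,  "LC duplex"),
    "400G DR4 / SR4": (400, 4, 100, 8,  "12-fiber MPO"),
    "800G DR8":       (800, 8, 100, 16, "MPO-16"),
}

for name, (rate, lanes, per_lane, fibers, connector) in optics.items():
    assert lanes * per_lane == rate   # lanes must add up to the aggregate rate
    print(f"{name:16s} {lanes} x {per_lane} Gb/s lanes, "
          f"{fibers:2d} fibers, {connector}")

The eight pairs of fibers on an MPO-16 line up with the eight 100 Gb/s lanes of an 800 Gb/s parallel optic, which is why the connector becomes attractive at that rate.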