EXCERPT:
INTRODUCTION
The latest advancements in Artificial Intelligence (AI) are introducing revolutionary new ways to automate workflows, solve problems, and develop insights from large datasets. However, the intense processing, higher-data-rate transceivers, and large data storage that AI computing requires create new power and latency implications in data centers that can affect network infrastructure design.
Applications using AI are expanding every day, driving up demand for compute power in the financial, industrial, government, manufacturing, and many other sectors. Generative AI platforms like OpenAI's ChatGPT and Anthropic's Claude act as virtual assistants for research, task automation, coding, and more. Microsoft Copilot, introduced in 2023, is an AI companion that helps enterprises save time by summarizing meetings, identifying trends in spreadsheets, and more. In the biomedical sector, AlphaFold, an AI program developed by the DeepMind research lab, makes highly accurate predictions of how amino acid sequences fold into three-dimensional structures, advancing drug and vaccine development and accelerating biological research.
As new ways to harness emerging AI technology continue to appear, AI computing clusters are at the front end of a steep growth ramp. Industry analyst LightCounting expects a nearly 30% Compound Annual Growth Rate (CAGR) in fiber transceiver sales for AI clusters through 2028. Transceivers for non-AI data center applications will see a 9% CAGR, strong growth in its own right, but one that pales in comparison to the AI expansion.
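To put those two growth rates in perspective, a CAGR compounds multiplicatively year over year. The short sketch below computes the cumulative growth multiple for each rate; the four-year horizon is an assumption chosen for illustration, not a figure from the LightCounting forecast.

```python
def cagr_growth(rate: float, years: int) -> float:
    """Cumulative growth multiple after `years` of compounding at annual `rate`."""
    return (1 + rate) ** years

# Illustrative four-year horizon (an assumption, not from the forecast):
ai_multiple = cagr_growth(0.30, 4)      # AI-cluster transceivers at ~30% CAGR
non_ai_multiple = cagr_growth(0.09, 4)  # non-AI transceivers at 9% CAGR
print(f"AI: {ai_multiple:.2f}x, non-AI: {non_ai_multiple:.2f}x")
```

Over four years, a 30% CAGR compounds to roughly a 2.9x increase, while 9% compounds to about 1.4x, which is why the AI segment dominates the projected growth even though both rates are healthy.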