
AI-driven Data Center Cabling Optimization

In recent years, the rapid development of artificial intelligence (AI) has not only reshaped the boundaries of technology but also placed new demands on the infrastructure that supports it. This article examines cabling strategies for AI data centers and analyzes how performance and efficiency can be optimized to meet these new challenges.

Transition to AI-Driven Data Centers
The proliferation of AI technology, represented by innovations such as DALL-E 2 and ChatGPT, has greatly influenced the public's perception and expectations of AI. As these technologies become increasingly indispensable across industries, the infrastructure that supports them must evolve in step. AI is now the main driving force behind data center growth, necessitating changes in how these facilities are designed and operated. At the core of AI computing is its reliance on high-performance graphics processing units (GPUs), which are designed for massively parallel workloads. The processing power required to train and run AI models often exceeds the capability of a single machine, so multiple GPUs must be interconnected across servers and racks. This setup forms AI clusters within the data center, presenting unique cabling challenges and opportunities.
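To make the scale concrete, here is a minimal back-of-envelope sketch in Python. The figures it uses are illustrative assumptions, not vendor specifications: roughly 16 bytes of training state per parameter (half-precision weights and gradients plus full-precision optimizer state under an Adam-style optimizer) and 80 GB of memory per GPU.

```python
import math

# Back-of-envelope estimate of why large-model training spans many GPUs.
# Assumed figures (bytes per parameter, GPU memory) are illustrative only.

def gpus_needed(params_billions: float,
                bytes_per_param: float = 16,   # fp16 weights/grads + fp32 optimizer state
                gpu_memory_gb: float = 80) -> int:
    """Minimum GPUs needed just to hold training state in memory."""
    total_gb = params_billions * bytes_per_param  # (1e9 params * bytes) / 1e9 bytes-per-GB
    return math.ceil(total_gb / gpu_memory_gb)

# A hypothetical 175-billion-parameter model needs ~2.8 TB of training state:
print(gpus_needed(175))  # -> 35 GPUs at 80 GB each, before counting activations
```

Even this simplified estimate, which ignores activation memory entirely, lands at dozens of GPUs for a single model, and those GPUs must exchange data constantly over the cluster fabric.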

Architectural Differences: AI vs. Traditional Data Centers
Traditional data centers, especially hyperscale facilities, typically adopt a folded Clos architecture, better known as "leaf-spine." In this design, servers connect to top-of-rack (ToR) leaf switches, which in turn connect to spine switches over optical fiber. The cabling needs of AI clusters, however, are vastly different, demanding innovative approaches to satisfy their requirements for high-speed, low-latency connections. As one industry report notes: "GPU servers require more inter-server connections, but due to power and heat limitations, the number of servers per rack is usually smaller. Therefore, compared to traditional architectures, there is more rack-to-rack cabling in AI data center architectures." This added cabling complexity is necessary to support AI workloads' data rates of 100G to 400G, which copper cabling cannot carry over these distances.
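The sketch below illustrates that trend with a simple fiber-count comparison. Every quantity in it (ToR uplink counts, GPU servers per rack, fabric NICs per GPU) is an assumption chosen to show the direction of the effect, not a reference design.

```python
# Illustrative comparison of rack-to-rack fiber link counts. All
# quantities below are assumptions, not a reference design.

def traditional_rack_links(tor_uplinks: int = 8) -> int:
    # Servers stay inside the rack on short copper runs to the ToR
    # switch; only the ToR's uplinks leave the rack on fiber.
    return tor_uplinks

def ai_rack_links(gpu_servers: int = 4, gpus_per_server: int = 8,
                  fabric_nics_per_gpu: int = 1) -> int:
    # Power and heat limits keep GPU-server counts per rack low, and
    # each GPU's fabric NIC typically runs on fiber to a switch in
    # another rack, so every one of these links crosses rack boundaries.
    return gpu_servers * gpus_per_server * fabric_nics_per_gpu

print(traditional_rack_links(), ai_rack_links())  # e.g. 8 vs 32
```

Under these assumptions an AI rack drives several times more rack-to-rack fiber than a traditional rack, despite hosting far fewer servers.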

Minimizing Latency in AI Clusters
In AI and machine learning (ML) workloads, latency is a crucial performance metric because it directly affects the runtime of large training jobs. To minimize its impact, the network should be architected so that GPU-to-GPU connections are as short and direct as possible, with GPU racks ideally placed close together. Not every data center can accommodate this configuration, however; older facilities with lower per-rack power capacity may need to spread GPU servers across more racks, which lengthens links and further increases cabling requirements.
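A rough latency budget shows why rack placement matters. In the sketch below, the ~4.9 ns per meter figure follows from physics (the speed of light divided by a typical fiber refractive index of about 1.47); the per-switch-hop latency is an assumed placeholder, as real values vary by switch and mode.

```python
# Rough latency budget for a GPU-to-GPU path. The ns-per-meter figure
# is physics (c / fiber refractive index); the per-hop switch latency
# is an assumed placeholder that varies by hardware.

FIBER_NS_PER_M = 4.9      # ~c / 1.47 refractive index
SWITCH_HOP_NS = 600.0     # assumed per-switch forwarding latency

def path_latency_ns(fiber_m: float, switch_hops: int) -> float:
    return fiber_m * FIBER_NS_PER_M + switch_hops * SWITCH_HOP_NS

# Adjacent racks (10 m of fiber, 1 hop) vs spread-out racks (150 m, 3 hops):
print(path_latency_ns(10, 1))    # 649 ns
print(path_latency_ns(150, 3))   # 2535 ns
```

Spreading racks apart in this example roughly quadruples path latency, and that penalty is paid on every collective operation during training.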

Choosing the Right Transceivers and Optical Fiber Cables
When selecting optical transceivers and fiber cabling, cost, power efficiency, and future scalability must be balanced. Parallel optics are often favored for their cost-effectiveness and simpler installation, especially in AI clusters deployed at scale.
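At cluster scale, small per-port differences compound quickly, as the sketch below shows. The per-port dollar and watt figures are placeholders invented for illustration, not quotes or specifications; substitute real vendor numbers before drawing any conclusions.

```python
# Sketch of the cost/power trade-off when choosing optics at scale.
# Per-port dollar and watt figures are invented placeholders.

from dataclasses import dataclass

@dataclass
class Optic:
    name: str
    cost_usd: float   # assumed per-transceiver cost
    power_w: float    # assumed per-transceiver power draw

def fleet_totals(optic: Optic, links: int) -> tuple:
    ports = links * 2  # one transceiver at each end of every link
    return ports * optic.cost_usd, ports * optic.power_w

parallel = Optic("400G parallel-fiber optic", cost_usd=900, power_w=8)
duplex   = Optic("400G duplex WDM optic", cost_usd=1300, power_w=10)

for o in (parallel, duplex):
    cost, power = fleet_totals(o, links=2048)
    print(f"{o.name}: ${cost:,.0f}, {power/1000:.1f} kW")
```

Across a few thousand links, even a modest per-port saving in cost or wattage adds up to a meaningful difference in both capital outlay and ongoing power draw, which is why the choice of optic deserves this kind of fleet-level analysis.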
