Generative AI Impacts Data Center Architecture
Artificial Intelligence (AI) and Machine Learning (ML) are driving transformation across industries and helping address global challenges. Today, a new generation of AI, Generative AI, is emerging, leveraging deep neural networks to unlock new capabilities. Generative AI is poised to become a catalyst of the digital age, reshaping how businesses operate and societies function.
Leading enterprises are actively adopting Generative AI to gain a competitive edge, and publicly available models have stimulated market demand, driving significant changes across the data center landscape, from hyperscale facilities to enterprise-level data centers. Faced with the challenges of deploying sophisticated hardware, collecting data, and training models, a central question arises: how can we build infrastructure that supports the complex, heavy computational demands of Generative AI?
The rise of Generative AI is driving the transformation of data centers. Training Generative AI models is extremely complex, requiring massive datasets from numerous sources to be processed in parallel and enormous numbers of computations to be executed simultaneously. Traditional CPU (Central Processing Unit) servers are ill-equipped for this workload, making GPU (Graphics Processing Unit) servers, or nodes, essential.
A large-scale Generative AI cluster may consist of tens of thousands of interconnected nodes, consume roughly ten times the power of an ordinary cluster, and rely on high-speed, low-latency interconnects. Even enterprise-level clusters require multiple GPUs running at full capacity to train models, and as application scenarios expand and the benefits become increasingly apparent, cluster scale is set to grow further.
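The "ten times the power" claim can be sanity-checked with back-of-envelope arithmetic. The sketch below is illustrative only: the node counts, per-GPU wattage, and overhead factor are assumptions, not vendor specifications.

```python
# Back-of-envelope power estimate for a GPU training cluster.
# All figures below are illustrative assumptions, not measured values.

def cluster_power_kw(num_nodes: int, gpus_per_node: int = 8,
                     gpu_watts: float = 700.0, overhead: float = 1.5) -> float:
    """Estimate total draw in kW: per-GPU power scaled by a node-level
    overhead factor covering CPUs, NICs, fans, and conversion losses."""
    return num_nodes * gpus_per_node * gpu_watts * overhead / 1000.0

# Hypothetical 1,000-node GPU cluster vs. a CPU cluster of the same size,
# assuming ~0.8 kW per CPU node.
gpu_kw = cluster_power_kw(1000)   # 8,400 kW (8.4 MW)
cpu_kw = 1000 * 0.8               # 800 kW
print(f"GPU cluster draws ~{gpu_kw / cpu_kw:.1f}x a CPU cluster")
```

Under these assumed figures the ratio comes out near 10x, consistent with the order of magnitude described above; real deployments vary with GPU generation, utilization, and facility efficiency.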
To ensure the operation of Generative AI, data center infrastructure must meet the following requirements:
- Higher bandwidth and lower latency – Backend nodes must support high-speed data transmission from 100G to 800G and achieve near-real-time (under 20 milliseconds) east-west data flow, while front-end switches need to reach 800G or even 1.6T transmission rates.
- Stronger power supply and cooling efficiency – With densities climbing to 30-100 kW per rack, more efficient cooling solutions (such as liquid cooling) are needed to handle the resulting heat loads.
- Advanced communication protocols – The backend adopts the InfiniBand protocol for high-bandwidth, low-latency connections between nodes, while the frontend uses Ethernet for switching, storage, and management traffic.
- High-density, high-performance cabling – Ensuring efficient, stable connections between nodes, storage, management, and switching layers.
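The bandwidth figures above translate directly into transfer time. A minimal sketch, assuming an idealized link with no protocol overhead or congestion (the payload sizes are hypothetical examples):

```python
def transfer_ms(payload_gb: float, link_gbps: float) -> float:
    """Milliseconds to move a payload (in gigabytes) over a single link
    of the given speed (in gigabits per second), ignoring overhead."""
    return payload_gb * 8 / link_gbps * 1000

# Moving a hypothetical 10 GB data shard between nodes:
print(transfer_ms(10, 100))  # 800 ms on a 100G link
print(transfer_ms(10, 800))  # 100 ms on an 800G link

# To fit a ~20 ms east-west exchange budget, an 800G link can carry
# at most about 2 GB per transfer:
print(transfer_ms(2, 800))   # 20 ms
```

This is why link speed and latency budgets are stated together: even at 800G, payloads must be sharded across many parallel links to keep synchronization steps within tight time budgets.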