Beijing, China: Huawei has unveiled the CloudEngine 16800, which it claims is the industry’s first data center switch built for the Artificial Intelligence (AI) era.
“The data center switch built for the AI era has three characteristics: an embedded AI chip, a 48-port 400GE line card per slot, and the capability to evolve toward an autonomous driving network,” said Kevin Hu, President of Huawei’s Network Product Line.
With these three traits, according to the company, data centers will be able to incorporate AI technologies that help customers accelerate intelligent transformation.
According to Huawei’s Global Industry Vision (GIV) 2025, the AI adoption rate will increase from 16% in 2015 to 86% in 2025. The ability to leverage AI to reshape business models, make decisions, and improve customer experiences will become a key driving force.
“A fully connected, intelligent world is fast approaching. Data centers are becoming the core of new infrastructure such as 5G and AI. Huawei will be the first to introduce AI technology into data center switches, leading data center networks from the cloud era to the AI era,” said Hu.
Notably, AI computing power is constrained by the performance of data center networks, which is becoming a key bottleneck in the commercialization of AI. On a traditional Ethernet network, a packet loss rate of 1% can limit the effective AI computing power of a data center to just 50%. At the same time, the industry expects the annual volume of data generated worldwide to grow from 10 zettabytes in 2018 to 180 zettabytes (180 billion terabytes) in 2025.
Industry’s first data center switch with an embedded AI chip, reaching an AI computing power of 100%
The CloudEngine 16800 leverages an embedded high-performance AI chip and the new iLossless algorithm to automatically sense and optimize the traffic model, delivering lower latency and higher throughput with zero packet loss. This overcomes the computing power limitations caused by packet loss on traditional Ethernet, raising AI computing power from 50% to 100% and improving data storage Input/Output Operations Per Second (IOPS) by 30%.
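To see intuitively why even a small packet loss rate can roughly halve effective computing power, consider a minimal back-of-the-envelope sketch in Python. It models a synchronous distributed training job in which an iteration stalls for a retransmission timeout whenever any packet in a gradient exchange is dropped. The model and every parameter value in it (packets per synchronization, timeout, compute time) are illustrative assumptions, not figures or methods published by Huawei.

# Illustrative model (hypothetical parameters, not Huawei's methodology):
# in synchronous distributed training, each iteration waits for a full
# gradient exchange, so if any packet in that exchange is lost the whole
# iteration stalls for a retransmission timeout.

def compute_utilization(loss_rate, packets_per_sync=1000,
                        compute_us=1000.0, network_us=100.0,
                        retx_timeout_us=1000.0):
    """Fraction of an iteration spent computing rather than waiting on the network."""
    # Probability that at least one packet in the gradient exchange is dropped.
    p_stall = 1.0 - (1.0 - loss_rate) ** packets_per_sync
    iteration_us = compute_us + network_us + p_stall * retx_timeout_us
    return compute_us / iteration_us

if __name__ == "__main__":
    for loss in (0.0, 0.001, 0.01):  # 0%, 0.1%, 1% packet loss
        print(f"packet loss {loss:.1%}: ~{compute_utilization(loss):.0%} compute utilization")

With these assumed numbers, a lossless fabric keeps the accelerators busy roughly 90% of the time, while a 1% loss rate drops utilization to around 50%, which is the order of magnitude of the degradation the article describes.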
Industry’s highest-density 48-port 400GE line card per slot, meeting the requirements of fivefold traffic growth in the AI era
The CloudEngine 16800 is built on an upgraded hardware switching platform whose orthogonal architecture overcomes technical challenges in high-speed signal transmission, heat dissipation, and power supply. It provides the industry’s highest-density 48-port 400GE line card per slot and the industry’s largest switching capacity of 768 ports of 400GE (five times the industry average), meeting the traffic multiplication requirements of the AI era. In addition, power consumption per bit is reduced by 50%, ensuring greener operation.
Enabling the autonomous driving network, identifying faults in seconds, and automatically locating faults in minutes
The CloudEngine 16800’s embedded AI chip raises the intelligence of devices deployed at the network edge, enabling the switch to perform local inference and make rapid decisions in real time. Combining this local intelligence with the centralized network analyzer FabricInsight, the distributed AI O&M architecture identifies faults in seconds and automatically locates them in minutes, helping to accelerate the arrival of the autonomous driving network. This architecture also improves the flexibility and deployability of O&M systems.
“Huawei CloudEngine series data center switches have been put into commercial use by more than 6,000 customers, supporting the digital transformation of customers in industries such as finance, the Internet, and telecom carriers. Huawei has launched the CloudEngine 16800 to help customers accelerate intelligent transformation, achieve pervasive use of AI, and jointly build a fully connected, intelligent world,” said Leon Wang, General Manager of Huawei’s Data Center Network Domain.
(Image source: Huawei)