Cisco recently released two data center products optimized for artificial intelligence (AI) models, aiming to help enterprises handle AI workloads more efficiently. The two products, the UCS C885A M8 server series and the AI POD line of large data center systems, reflect Cisco's strategic push into AI and its commitment to meeting enterprises' growing demand for AI computing. Beyond stronger hardware, the new products also offer more complete software and integration options, giving enterprises a simpler and more efficient way to deploy and manage AI applications.
Cisco unveiled the two systems at a partner event held in Los Angeles. They broaden Cisco's hardware portfolio and are designed to help businesses handle AI-related workloads efficiently.
The first new product line is the UCS C885A M8 server series. Each server can host up to eight graphics processing units (GPUs), giving enterprise users substantial computing power. Cisco offers three GPU options for the series: Nvidia's H100 and H200, as well as the MI300X accelerator from rival AMD.
In addition, each GPU in the UCS C885A M8 series is paired with its own network interface controller (NIC), which acts as an intermediary between the server and the network. Cisco offers two Nvidia NIC options, the ConnectX-7 and the BlueField-3; the latter is an advanced part marketed as a "SuperNIC" that accelerates tasks such as encrypting data traffic.
Meanwhile, Cisco has integrated a BlueField-3 data processing unit (DPU), also made by Nvidia, into the new server; it improves the efficiency of managing attached storage and network infrastructure. Computing tasks that are not handled by these dedicated chips fall to an AMD central processing unit (CPU), and users can choose either AMD's latest fifth-generation server CPUs or the processor family the company introduced in 2022.
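To illustrate how software typically puts an eight-GPU node like this to work, here is a minimal PyTorch sketch, not tied to Cisco's stack: the model, dimensions, and launch command are illustrative assumptions. It starts one process per GPU and wraps a model in DistributedDataParallel, with NCCL carrying the inter-GPU traffic that per-GPU NICs are designed to handle.

```python
# Minimal sketch: one process per GPU on an 8-GPU node, e.g. launched with
#   torchrun --nproc_per_node=8 train.py
# Model, dimensions, and data are illustrative placeholders, not Cisco-specific.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun for each process
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl")      # NCCL can use GPU-attached NICs when available

    model = torch.nn.Linear(4096, 4096).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    x = torch.randn(32, 4096, device=local_rank)  # stand-in for a real data loader
    loss = model(x).square().mean()               # dummy loss for illustration
    loss.backward()                               # gradients are all-reduced across the GPUs
    optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```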
In addition to the servers, Cisco has launched four large data center systems called AI PODs. Each AI POD can combine up to 16 Nvidia GPUs with networking gear and other supporting components, and customers can optionally add storage from NetApp or Pure Storage. On the software side, every AI POD ships with a license for Nvidia AI Enterprise, a suite of pre-packaged AI models and tools that enterprises can use to train their own neural networks. The bundle also includes Nvidia's Morpheus framework for building AI-powered cybersecurity software, the HPC-X toolkit for optimizing AI cluster networking, and the Red Hat OpenShift platform for simplifying the building and deployment of containerized applications.
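As a rough illustration of the OpenShift side of that stack, the sketch below uses the standard Kubernetes Python client (OpenShift is Kubernetes-compatible) to request a single GPU for an inference container. The namespace, image tag, and resource name follow common NVIDIA and Kubernetes conventions and are assumptions here, not details published by Cisco.

```python
# Hedged sketch: submitting a GPU-backed container through the Kubernetes API,
# which OpenShift exposes. Names, namespace, and image tag are assumptions.
from kubernetes import client, config

def deploy_gpu_inference():
    config.load_kube_config()  # or config.load_incluster_config() inside the cluster

    container = client.V1Container(
        name="inference-server",
        image="nvcr.io/nvidia/tritonserver:24.08-py3",   # illustrative NGC image tag
        resources=client.V1ResourceRequirements(
            limits={"nvidia.com/gpu": "1"},              # standard NVIDIA device-plugin resource
        ),
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "inference-server"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="inference-server"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "inference-server"}),
            template=template,
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(
        namespace="ai-workloads", body=deployment
    )

if __name__ == "__main__":
    deploy_gpu_inference()
```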
"Enterprise customers are under pressure to deploy AI workloads, especially as smart workflows become a reality and AI is beginning to solve problems independently," said Jeetu Patel, chief product officer of Cisco. He added that Cisco's AI POD and GPU Server innovation improves the security, compliance and processing capabilities of these workloads.
Cisco plans to start accepting orders for the AI PODs next month, while the UCS C885A M8 server series is available to order now and is expected to begin shipping by the end of the year.
Key points:
Cisco has launched the UCS C885A M8 server series, which supports up to eight GPUs from Nvidia or AMD and delivers powerful computing for AI workloads.
The newly released AI POD systems integrate up to 16 Nvidia GPUs and support expanded storage options, helping enterprises deploy AI solutions quickly.
Cisco's AI solutions emphasize stronger security, compliance, and processing capabilities to address emerging needs in enterprise applications.
In short, Cisco's new product lines give enterprises powerful AI solutions that can help them build a competitive edge in the AI era. The launch of these devices marks a solid step forward for Cisco in the AI field and offers enterprises a more complete set of AI infrastructure choices.