With BlueField-3, Nvidia delivers a chip that not only accelerates AI but also gives data center infrastructure a boost.
While the Jetson micro-boards compute AI tasks in end devices, BlueField is their counterpart in the data center, where it ensures that the training of artificial neural networks is accelerated. At its GTC 21 tech conference, Nvidia announced BlueField-3, a data processing unit (DPU) that offers acceleration functions not only for artificial intelligence but also for networking, storage, and cybersecurity tasks in data centers.
Cloud data centers are fully virtualized. On a platform with massive computing power, network connectivity, and storage capacity, every user receives their share: a virtual machine, network bandwidth, software services, or space for storing data. The accelerator modules of the BlueField DPU handle the infrastructure: network management, segregation of the different user domains, scanning traffic for security threats, IPsec encryption, and much more. The actual applications remain on the "traditional" processors, which can now go about their business free of all administrative tasks. BlueField-3 thus separates data center infrastructure from business applications. For the infrastructure tasks, BlueField-3 has 16 Arm Cortex-A78 cores on board. Thanks to its accelerators, however, the DPU should, according to Nvidia, deliver performance equivalent to around 300 mid-range Xeon cores.
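The division of labor can be sketched as a toy model: the host CPU runs only the application, while infrastructure work such as encryption is delegated to the DPU. All class and function names below are invented for illustration (they are not Nvidia APIs), and XOR stands in for real IPsec encryption.

```python
# Toy model of DPU-style infrastructure offload (illustrative only;
# names are invented, not Nvidia APIs; XOR stands in for IPsec).

class HostCPU:
    """Runs the business application; counts cycles spent on each duty."""
    def __init__(self):
        self.app_cycles = 0    # cycles spent on the actual application
        self.infra_cycles = 0  # cycles spent on infrastructure chores

class DPU:
    """Handles infrastructure work on behalf of the host."""
    def __init__(self, key=0x5A):
        self.key = key

    def encrypt(self, packet: bytes) -> bytes:
        # Symmetric toy cipher: applying it twice restores the plaintext.
        return bytes(b ^ self.key for b in packet)

def serve_request(host: HostCPU, payload: bytes, dpu: DPU = None) -> bytes:
    # Application work always runs on the host.
    host.app_cycles += len(payload)
    if dpu is not None:
        return dpu.encrypt(payload)        # infrastructure offloaded to the DPU
    host.infra_cycles += len(payload)      # host pays the infrastructure cost itself
    return bytes(b ^ 0x5A for b in payload)

host = HostCPU()
enc = serve_request(host, b"hello", dpu=DPU())
assert host.infra_cycles == 0  # with a DPU, the host does no infrastructure work
```

Without the `dpu` argument, the same request charges the encryption cost to the host's `infra_cycles`, which is exactly the overhead BlueField is meant to absorb.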
Rental for $9,000 a month
It is less well known that Nvidia supplies not only components such as graphics cards and computer boards, but also complete systems. In the case of BlueField, Nvidia offers devices at various stages of expansion, from the DGX Station, which serves as a workstation for teams, to the DGX server, up to the "DGX Superpod", a ready-made infrastructure for data centers. Since the investment in these devices is high, Nvidia also introduced a rental model at GTC: a DGX workstation can be rented for $9,000 a month.
Nvidia also offers BlueField-3 fully integrated into complete DGX systems, from the workstation (front) via the "pod", which consists of three servers, to the "superpod" for data centers.
"The modern, hyperscale cloud fundamentally requires a new data center architecture. A new type of processor, designed to process data center infrastructure software, is needed to offload and accelerate the massive computing load of virtualization, networking, storage, security, and other cloud-native AI services," said Jensen Huang, founder and CEO of Nvidia.
Software support for “on-chip data center”
In addition to the hardware, Nvidia also provides software frameworks to put it to work. For BlueField, Nvidia offers a software development kit called DOCA, which stands for "data center on a chip architecture". "What CUDA is for GPUs, DOCA is for DPUs," is how the responsible product manager sums up its role in a nutshell. It includes a runtime environment for building, translating, and optimizing applications for BlueField DPUs; orchestration tools to deploy, update, and monitor thousands of DPUs across the data center; as well as libraries, APIs, and a growing number of applications such as deep packet inspection and load balancing.
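Deep packet inspection, one of the application classes mentioned above, boils down to matching packet payloads against threat signatures. The following is a minimal, hedged sketch of that idea in plain Python (it does not use the DOCA API; the signatures are deliberately naive examples):

```python
# Illustrative deep-packet-inspection rule set (pure Python sketch;
# this is NOT the DOCA API, and the signatures are toy examples).

import re

# Naive threat signatures: a SQL-injection-style pattern and a NOP sled.
SIGNATURES = [
    re.compile(rb"(?i)select\s+.+\s+from"),  # looks like an inline SQL query
    re.compile(rb"\x90{8,}"),                # run of x86 NOP bytes
]

def inspect(payload: bytes) -> str:
    """Return 'drop' if any signature matches the payload, else 'pass'."""
    for sig in SIGNATURES:
        if sig.search(payload):
            return "drop"
    return "pass"

print(inspect(b"GET /index.html HTTP/1.1"))  # pass
print(inspect(b"SELECT name FROM users"))    # drop
```

On a DPU, rule matching like this runs on the card's cores and accelerators, so malicious traffic can be dropped before it ever reaches the host CPU.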