NVIDIA DGX-2 increases deep learning performance 10x in six months

March 27, 2018

Product

The DGX-2 delivers two petaflops of computational power in a single server, supported by a 2x memory boost in NVIDIA Tesla V100 GPUs and the NVIDIA NVSwitch GPU interconnect fabric.

SAN JOSE, CA. NVIDIA has announced that its deep learning compute platform, the DGX-2, delivers 10x the performance of the previous generation, released six months ago. The DGX-2 provides two petaflops of computational power for deep learning workloads in a single server, supported by a 2x memory boost in NVIDIA Tesla V100 GPUs and the NVIDIA NVSwitch GPU interconnect fabric, which allows the DGX-2's GPUs to communicate at up to 2.4 terabytes per second.
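As a rough sanity check (not part of the announcement), the two-petaflop figure is consistent with the commonly cited ~125 TFLOPS of mixed-precision Tensor Core throughput per Tesla V100, assuming a fully configured system with sixteen GPUs:

```latex
% Back-of-the-envelope check, assuming ~125 TFLOPS Tensor Core throughput per V100
\[
  16\ \text{GPUs} \times 125\ \text{TFLOPS/GPU} = 2000\ \text{TFLOPS} = 2\ \text{PFLOPS}
\]
```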

The DGX-2 can be outfitted with up to sixteen 32 GB Tesla V100 GPUs that share a common memory space for high-performance computing (HPC) applications, while NVSwitch extends NVIDIA’s NVLink offering to deliver 5x the bandwidth of leading PCIe switches. The result is a platform with the processing capability of 300 servers in a package that is 60x smaller and 18x more power efficient.

The DGX-2 is also supported by updates to NVIDIA’s deep learning and HPC software stack, including new versions of CUDA, TensorRT, NCCL, and cuDNN, as well as a new Isaac software developer kit. These updates are available free of charge to members of the NVIDIA developer community.

For more information, including technical details, visit nvda.ws/2IRilLe.
