NVIDIA HAS ANNOUNCED a series of updates to its GPU-accelerated deep learning software that it claims will double training performance.
"The new automatic multi-GPU scaling capability in Digits 2 maximises the available GPU resources by automatically distributing the deep learning training workload across all of the GPUs in the system," Nvidia said.
"Using Digits 2, [our] engineers trained the well-known AlexNet neural network model more than two times faster on four Nvidia Maxwell architecture-based GPUs compared to a single GPU."
Nvidia said that the cuDNN 3 update also provides higher performance than cuDNN 2, enabling researchers to train neural networks up to two times faster on a single GPU.
"The new cuDNN 3 library is expected to be integrated into forthcoming versions of the deep learning frameworks Caffe, Minerva, Theano and Torch, which are widely used to train deep neural networks," explained the firm.
"It adds support for 16-bit floating point data storage in GPU memory, doubling the amount of data that can be stored and optimising memory bandwidth. With this capability, cuDNN 3 enables researchers to train larger and more sophisticated neural networks."
The Digits 2 Preview release is available now as a free download for registered developers, and the cuDNN 3 library is expected to be available in major deep learning frameworks "in the coming months".