
Benchmarking State-of-the-Art Deep Learning Software Tools



Abstract:

Deep learning has proven to be a successful machine learning method for a variety of tasks, and its popularity has led to numerous open-source deep learning software tools becoming publicly available. Training a deep network is usually a very time-consuming process. To address the huge computational challenge in deep learning, many tools exploit hardware features such as multi-core CPUs and many-core GPUs to shorten the training and inference time. However, different tools exhibit different features and running performance when training different types of deep networks on different hardware platforms, making it difficult for end users to select an appropriate pair of software and hardware. In this paper, we present our attempt to benchmark several state-of-the-art GPU-accelerated deep learning software tools, including Caffe, CNTK, TensorFlow, and Torch. We focus on evaluating the running time performance (i.e., speed) of these tools with three popular types of neural networks on two representative CPU platforms and three representative GPU platforms. Our contribution is two-fold. First, for end users of deep learning software tools, our benchmarking results can serve as a reference for selecting appropriate hardware platforms and software tools. Second, for developers of deep learning software tools, our in-depth analysis points out possible future directions for further optimizing the running performance.
Date of Conference: 16-18 November 2016
Date Added to IEEE Xplore: 17 July 2017
Conference Location: Macau, China

I. Introduction

In the past decade, deep learning has been successfully applied in diverse application domains, including computer vision, image classification, speech recognition, and natural language processing. The success of deep learning is attributed to its high representational ability over input data, achieved through multiple layers of artificial neurons [1]. GPUs have played a key role in the success of deep learning by significantly reducing the training time [2]. To improve the efficiency of developing new deep neural networks, many open-source deep learning toolkits have recently been developed, including Caffe from UC Berkeley [3], CNTK from Microsoft [4], TensorFlow (TF) from Google [5], Torch [6], and many other tools such as Theano [7] and MXNet [8]. All of these tools support multi-core CPUs and many-core GPUs for high performance. One of the main tasks of deep learning is to learn a huge number of weights, which can be implemented with vector or matrix operations. TensorFlow uses Eigen [9] as its accelerated matrix operation library, while Caffe, CNTK, and Torch employ OpenBLAS [10] or cuBLAS [11] to speed up matrix-related calculations.
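The weight-learning workload described above reduces to dense linear algebra, which is why these toolkits delegate to libraries such as Eigen, OpenBLAS, and cuBLAS. As a minimal illustration (not taken from the paper; NumPy is used here as a stand-in, and it likewise dispatches the matrix multiply to a BLAS backend), the forward pass of a fully connected layer is a single matrix-matrix product:

```python
import numpy as np

def dense_forward(X, W, b):
    """Forward pass of a fully connected layer: Y = X W + b.

    X: (batch, in_features) input activations
    W: (in_features, out_features) weight matrix
    b: (out_features,) bias vector
    The X @ W product is the GEMM call that BLAS libraries accelerate.
    """
    return X @ W + b

rng = np.random.default_rng(0)
batch, n_in, n_out = 64, 1024, 512
X = rng.standard_normal((batch, n_in))
W = rng.standard_normal((n_in, n_out))
b = np.zeros(n_out)

Y = dense_forward(X, W, b)
print(Y.shape)  # (64, 512)
```

Because training repeats such products billions of times over large matrices, the choice of BLAS implementation (and whether it runs on a multi-core CPU or a many-core GPU) dominates the running time differences that the paper benchmarks.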

