SDCNN: An Efficient Sparse Deconvolutional Neural Network Accelerator on FPGA


Abstract:

Generative adversarial networks (GANs) have shown excellent performance in image generation applications. A GAN typically uses a new type of neural network called a deconvolutional neural network (DCNN). To implement a DCNN in hardware, the state-of-the-art DCNN accelerator optimizes the dataflow using a DCNN-to-CNN conversion method. However, this method still incurs high computational complexity because the number of feature maps increases when the network is converted from a DCNN to a CNN. Recently, pruning has been recognized as an effective way to reduce both the high computational complexity and the large network model size. In this paper, we propose a novel sparse DCNN accelerator (SDCNN) that combines these approaches on an FPGA. First, we propose a novel dataflow suited to sparse DCNN acceleration by loop transformation. Then, we introduce a four-stage pipeline for generating the SDCNN model. Finally, we propose an efficient architecture based on the SDCNN dataflow. Experimental results on DCGAN show that SDCNN achieves up to a 2.63x speedup over the state-of-the-art DCNN accelerator.
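To illustrate the kind of DCNN-to-CNN conversion the abstract refers to, the following is a minimal, hypothetical 1-D sketch (not the paper's actual dataflow): a strided transposed convolution ("deconvolution") can be computed as an ordinary convolution over a zero-inserted input with a flipped kernel. All function names and sizes here are illustrative assumptions.

```python
def zero_insert(x, stride):
    """Insert (stride - 1) zeros between neighbouring input elements."""
    out = []
    for i, v in enumerate(x):
        out.append(v)
        if i < len(x) - 1:
            out.extend([0.0] * (stride - 1))
    return out

def conv1d(x, w, pad):
    """Plain 1-D sliding-window convolution with symmetric zero padding."""
    xp = [0.0] * pad + list(x) + [0.0] * pad
    k = len(w)
    return [sum(xp[i + j] * w[j] for j in range(k))
            for i in range(len(xp) - k + 1)]

def transposed_conv1d(x, w, stride):
    """Transposed convolution expressed as zero-insertion followed by a
    normal convolution with the flipped kernel -- a CNN-style dataflow."""
    return conv1d(zero_insert(x, stride), w[::-1], pad=len(w) - 1)

# Stride-2 transposed convolution of [1, 2] with kernel [1, 1, 1]:
print(transposed_conv1d([1.0, 2.0], [1.0, 1.0, 1.0], stride=2))
# -> [1.0, 1.0, 3.0, 2.0, 2.0]
```

Note how the zero-inserted input makes many multiplications operate on zeros; this is the structural redundancy that motivates exploiting sparsity in a converted DCNN, as the abstract argues.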
Date of Conference: 25-29 March 2019
Date Added to IEEE Xplore: 16 May 2019
Conference Location: Florence, Italy

I. Introduction

Convolutional neural networks (CNNs) have been widely used in various computer vision applications with a high level of accuracy. However, they depend on fairly large amounts of labeled training data. Generative adversarial networks (GANs), which produce novel data samples from high-dimensional data distributions, have emerged as a solution [1]. A GAN usually consists of a generator and a discriminator, which compete with each other to learn the data distribution. The generator produces samples that mimic the real data so that real and fake samples cannot be distinguished. The discriminator, on the other hand, distinguishes real samples from fake samples produced by the generator.
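The adversarial competition described above is commonly formalized as the minimax objective of the original GAN paper [1], where $D$ is the discriminator, $G$ the generator, and $z$ a noise vector drawn from a prior $p_z$:

```latex
\min_{G}\max_{D} V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_{z}(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

The discriminator maximizes this value by assigning high scores to real samples and low scores to generated ones, while the generator minimizes it by producing samples the discriminator cannot tell apart from real data.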
