Abstract:
A fast, resource-efficient implementation of a convolutional neural network (CNN) on a field-programmable gate array (FPGA) was realized using a binarized neural network (NN). We propose a set of neuron and network models, herein referred to as the sparse-LUT model, optimized for the fully binarized implementation of general NNs using the look-up tables (LUTs) in modern FPGAs. Arrayed MNIST images of more than 40 characters captured by a camera were recognized with 92.8% accuracy and classified, with colored marks overlaid on the characters in organic light-emitting diode (OLED) display images, at a 1-ms cycle time, < 1.0-ms delay in the LUT-based CNN recognition, and < 2-ms total delay. In combination with stochastic time-divided signal processing, the binarized signals in this model can be extended to process multi-bit (analogue-like) signals in an oversampling manner, increasing the CNN recognition accuracy to 98.6% on the MNIST and 58.3% on the CIFAR-10 image data sets. The source code for the binarized NN core has been released as open source.
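The following is a minimal, hypothetical sketch of the two ideas named in the abstract: a fully binarized neuron of the kind a small FPGA LUT could absorb, and a stochastic time-divided (oversampled) binary encoding of a multi-bit input. The function names and thresholds are illustrative assumptions, not the authors' sparse-LUT implementation.

```python
import numpy as np

def binarized_neuron(x_bits, w_bits, threshold):
    """Binary neuron: XNOR the input bits with the weight bits, count the
    matches (popcount), and fire if the count reaches the threshold.
    With few enough inputs, this whole truth table fits in one FPGA LUT."""
    matches = np.sum(x_bits == w_bits)  # popcount of XNOR(x, w)
    return 1 if matches >= threshold else 0

def stochastic_encode(value, n_samples, rng):
    """Oversample a value in [0, 1] as a random binary stream whose mean
    approximates the value (stochastic time-divided representation)."""
    return (rng.random(n_samples) < value).astype(np.uint8)

rng = np.random.default_rng(0)
x = stochastic_encode(0.7, n_samples=64, rng=rng)  # multi-bit value as a bit stream
w = rng.integers(0, 2, size=64)                    # illustrative binary weights
y = binarized_neuron(x, w, threshold=32)
print(x.mean(), y)  # stream mean ~0.7; neuron output is a single bit
```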
Date of Conference: 08-11 September 2019
Date Added to IEEE Xplore: 23 January 2020