Code Acceleration Using Memristor-Based Approximate Matrix Multiplier: Application to Convolutional Neural Networks | IEEE Journals & Magazine | IEEE Xplore

Abstract:

In this paper, we demonstrate the feasibility of building a memristor-based approximate accelerator to be used in cooperation with general-purpose x86 processors. First, an integrated full-system simulator is developed for simultaneous simulation of any multi-crossbar architecture as an accelerator for x86 processors, achieved by coupling the cycle-accurate MARSSx86 processor simulator with the Ngspice mixed-level/mixed-signal circuit simulator. Then, a novel mixed-signal memristor-based architecture is presented for multiplying floating-point signed complex numbers. The presented multiplier is extended to accelerate convolutional neural networks and, finally, is tightly integrated with the pipeline of a generic x86 processor. To validate the accelerator, it is first used to multiply matrices of varying size and distribution. It is then used to accelerate tiny-dnn, an open-source C++ implementation of deep learning neural networks. The memristor-based accelerator provides more than 100× speedup and energy saving for a 64 × 64 matrix-matrix multiplication, with an accuracy of 90%. Using the accelerated tiny-dnn for MNIST database classification, more than 10× speedup and energy saving are achieved, along with 95.51% pattern recognition accuracy.
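The abstract reports the accelerator's quality as a relative accuracy figure for a 64 × 64 matrix-matrix product. As a minimal sketch of how such a metric can be computed against an exact reference, the snippet below injects a placeholder multiplicative noise term standing in for crossbar non-idealities (the 5% noise level and the error model itself are assumptions for illustration, not the paper's circuit model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# Exact reference product.
exact = A @ B

# Hypothetical error injection standing in for analog crossbar
# non-idealities; the real error sources are circuit-level effects
# simulated in Ngspice, not modeled here.
approx = exact * (1.0 + 0.05 * rng.standard_normal(exact.shape))

# One common accuracy metric: 1 minus the relative Frobenius-norm error.
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
accuracy = 1.0 - rel_err
print(f"accuracy = {accuracy:.3f}")
```

Under this metric, an accuracy of 90% corresponds to a relative Frobenius-norm error of 10% between the approximate and exact products.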
Page(s): 2684 - 2695
Date of Publication: 06 June 2018

