
EVA²: Exploiting Temporal Redundancy in Live Computer Vision


Abstract:

Hardware support for deep convolutional neural networks (CNNs) is critical to advanced computer vision in mobile and embedded devices. Current designs, however, accelerate generic CNNs; they do not exploit the unique characteristics of real-time vision. We propose to use the temporal redundancy in natural video to avoid unnecessary computation on most frames. A new algorithm, activation motion compensation, detects changes in the visual input and incrementally updates a previously-computed activation. The technique takes inspiration from video compression and applies well-known motion estimation techniques to adapt to visual changes. We use an adaptive key frame rate to control the trade-off between efficiency and vision quality as the input changes. We implement the technique in hardware as an extension to state-of-the-art CNN accelerator designs. The new unit reduces the average energy per frame by 54%, 62%, and 87% for three CNNs with less than 1% loss in vision accuracy.
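The abstract's core idea can be illustrated with a small sketch. This is not the paper's implementation (which is a hardware extension to CNN accelerators); it is a minimal software analogue, assuming a simplified global-translation motion model with a sum-of-absolute-differences (SAD) search, a cached key-frame activation that is shifted to follow the motion, and a SAD threshold standing in for the adaptive key-frame decision. All names (`estimate_motion`, `compensate`, `process_frame`, `cnn_prefix`) are hypothetical.

```python
import numpy as np

def estimate_motion(key_frame, frame, radius=4):
    # Exhaustive search for the translation (dy, dx) minimizing the
    # sum of absolute differences -- a standard motion-estimation metric
    # borrowed from video compression. np.roll's wrap-around is a
    # simplification of real block-based search.
    best_sad, best_vec = float("inf"), (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(key_frame, (dy, dx), axis=(0, 1))
            sad = float(np.abs(shifted - frame).sum())
            if sad < best_sad:
                best_sad, best_vec = sad, (dy, dx)
    return best_vec, best_sad

def compensate(key_activation, motion, stride):
    # Incrementally update the cached key-frame activation: shift it by
    # the pixel-space motion vector scaled down to activation resolution.
    dy, dx = motion
    return np.roll(key_activation,
                   (round(dy / stride), round(dx / stride)), axis=(0, 1))

def process_frame(frame, state, cnn_prefix, sad_threshold, stride):
    # Run the expensive CNN prefix only on key frames; on other frames,
    # reuse the cached activation via motion compensation. Exceeding the
    # SAD threshold stands in for the adaptive key-frame-rate policy.
    if state is None:
        act = cnn_prefix(frame)
        return act, {"key_frame": frame, "key_act": act}
    motion, sad = estimate_motion(state["key_frame"], frame)
    if sad > sad_threshold:
        act = cnn_prefix(frame)  # too much visual change: new key frame
        return act, {"key_frame": frame, "key_act": act}
    return compensate(state["key_act"], motion, stride), state
```

Non-key frames skip the CNN prefix entirely, which is where the energy savings in the abstract come from; only the cheap motion search and the activation shift run per frame.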
Date of Conference: 01-06 June 2018
Date Added to IEEE Xplore: 23 July 2018
Electronic ISSN: 2575-713X
Conference Location: Los Angeles, CA, USA

