An Energy-Efficient Edge Computing Paradigm for Convolution-Based Image Upsampling


Figure: In our design paradigm (left), we introduce the blocks highlighted in yellow to use convolution-based image upsampling algorithms in their optimal context.

Abstract:

State-of-the-art deep learning solutions for image upsampling are currently trained using either resize or sub-pixel convolution to learn kernels that generate high-fidelity images with minimal artifacts. However, performing inference with these learned convolution kernels requires memory-intensive feature map transformations that dominate time and energy costs in real-time applications. To alleviate this pressure on memory bandwidth, we propose a novel energy-efficient edge computing paradigm that confines the use of resize or sub-pixel convolution to training in the cloud by transforming learned convolution kernels to deconvolution kernels before deploying them for inference as a functionally equivalent deconvolution. These kernel transformations, intended as a one-time cost when shifting from training to inference, enable a systems designer to use each algorithm in its optimal context by preserving the image fidelity learned when training in the cloud while minimizing data transfer penalties during inference at the edge. We compare the inference properties of these convolution-based image upsampling algorithms and introduce a novel deconvolution inference algorithm, which we refer to as REVD2. To demonstrate the benefits of our approach, we upsample images selected from the BSD300 dataset using a pre-trained single-image super-resolution network provided by the PyTorch model zoo. Using quantitative models of incurred time and energy costs to analyze this deep neural network, we estimate that using REVD2 for inference at the edge improves system latency by 2.1× or 2.8× and energy efficiency by 2.1× or 2.7× when respectively compared to sub-pixel or resize convolution counterparts.
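To make the kernel-transformation step concrete, the sketch below folds a sub-pixel convolution (a Conv2d followed by PixelShuffle) into a single functionally equivalent ConvTranspose2d with the same upscale factor, which is the kind of one-time training-to-inference transformation the abstract describes. It is an illustrative PyTorch reconstruction, not the authors' implementation and not the REVD2 inference algorithm itself; the helper name subpixel_to_deconv and the assumptions of an odd kernel size, 'same' padding, and no bias are mine.

import torch
import torch.nn as nn


def subpixel_to_deconv(conv: nn.Conv2d, r: int) -> nn.ConvTranspose2d:
    """Fold a sub-pixel convolution (Conv2d + PixelShuffle(r)) into a single
    functionally equivalent ConvTranspose2d with stride r.

    Assumes an odd kernel size k, 'same' padding (k - 1) // 2, and no bias;
    a per-phase bias would vary spatially after the shuffle and would need
    separate handling.
    """
    c_out_r2, c_in, k, _ = conv.weight.shape
    p = (k - 1) // 2
    assert k % 2 == 1 and conv.padding[0] == p and conv.bias is None
    c_out = c_out_r2 // (r * r)

    deconv = nn.ConvTranspose2d(c_in, c_out, kernel_size=k * r,
                                stride=r, padding=p * r, bias=False)
    w = conv.weight.detach()
    new_w = torch.zeros(c_in, c_out, k * r, k * r)
    for c in range(c_out):
        for dy in range(r):          # sub-pixel row phase
            for dx in range(r):      # sub-pixel column phase
                for i in range(k):
                    for j in range(k):
                        # Spatial flip (k-1-i, k-1-j) plus phase interleaving
                        # maps each learned tap to its deconvolution position.
                        new_w[:, c, (k - 1 - i) * r + dy, (k - 1 - j) * r + dx] = \
                            w[c * r * r + dy * r + dx, :, i, j]
    with torch.no_grad():
        deconv.weight.copy_(new_w)
    return deconv


# Sanity check with random weights: both paths yield the same upsampled output.
r, k = 3, 3
conv = nn.Conv2d(16, 1 * r * r, k, padding=(k - 1) // 2, bias=False)
shuffle = nn.PixelShuffle(r)
deconv = subpixel_to_deconv(conv, r)

x = torch.randn(1, 16, 24, 24)
assert torch.allclose(shuffle(conv(x)), deconv(x), atol=1e-5)

After this offline transformation, only the transposed convolution needs to run on the edge device, avoiding the intermediate r^2-channel feature map that the sub-pixel formulation materializes before the shuffle.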
Published in: IEEE Access (Volume: 9)
Page(s): 147967 - 147984
Date of Publication: 28 October 2021
Electronic ISSN: 2169-3536
