
Silicon Photonics Codesign for Deep Learning


Abstract:

Deep learning is revolutionizing many aspects of our society, addressing a wide variety of decision-making tasks, from image classification to autonomous vehicle control. Matrix multiplication is an essential and computationally intensive step of deep-learning calculations. The computational complexity of deep neural networks requires dedicated hardware accelerators for additional processing throughput and improved energy efficiency in order to enable scaling to larger networks in the upcoming applications. Silicon photonics is a promising platform for hardware acceleration due to recent advances in CMOS-compatible manufacturing capabilities, which enable efficient exploitation of the inherent parallelism of optics. This article provides a detailed description of recent implementations in the relatively new and promising platform of silicon photonics for deep learning. Opportunities for multiwavelength microring silicon photonic architectures codesigned with field-programmable gate array (FPGA) for pre- and postprocessing are presented. The detailed analysis of a silicon photonic integrated circuit shows that a codesigned implementation based on the decomposition of large matrix-vector multiplication into smaller instances and the use of nonnegative weights could significantly simplify the photonic implementation of the matrix multiplier and allow increased scalability. We conclude this article by presenting an overview and a detailed analysis of design parameters. Insights for ways forward are explored.
Published in: Proceedings of the IEEE ( Volume: 108, Issue: 8, August 2020)
Page(s): 1261 - 1282
Date of Publication: 10 February 2020



I. Introduction

Deep learning is an extraordinarily popular machine-learning technique that is revolutionizing many aspects of our society. Machine learning addresses a wide variety of decision-making tasks such as image classification [1], audio recognition [2], autonomous vehicle control [3], and cancer detection [4]. Matrix multiplication is an essential but time-consuming operation in deep-learning computations. It is the most time-intensive step in both the feedforward and backpropagation stages of deep neural networks (DNNs) during training and inference, and it dominates the computation time and energy for many workloads [1]–[3], [5], [6]. Deep learning uses models that are trained on large data sets using neural networks with many layers. Since DNNs have high computational complexity, recent years have seen many efforts to move beyond general-purpose processors toward dedicated accelerators that provide superior processing throughput and improved energy efficiency.
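The codesign strategy summarized in the abstract, decomposing a large matrix-vector multiplication into smaller instances that use only nonnegative weights, can be illustrated with a short numerical sketch. The function name, tile size, and splitting of the weight matrix into nonnegative parts (W = W⁺ − W⁻) below are illustrative assumptions, not the article's photonic implementation; they show only that the signed product can be recovered from small nonnegative sub-multiplications.

```python
import numpy as np

def tiled_mvm_nonnegative(W, x, tile=4):
    """Compute y = W @ x as a sum of tile-sized sub-multiplications,
    each using only nonnegative weights (hypothetical sketch)."""
    # Split the signed weight matrix into two nonnegative matrices:
    # W = W_pos - W_neg, so each small multiplier sees only W >= 0.
    W_pos = np.maximum(W, 0.0)
    W_neg = np.maximum(-W, 0.0)
    n_out, n_in = W.shape
    y = np.zeros(n_out)
    for r in range(0, n_out, tile):        # loop over output tiles
        for c in range(0, n_in, tile):     # loop over input tiles
            xs = x[c:c + tile]
            # Each small instance is a nonnegative-weight MVM; the
            # signed result is recovered by subtracting the two parts.
            y[r:r + tile] += W_pos[r:r + tile, c:c + tile] @ xs
            y[r:r + tile] -= W_neg[r:r + tile, c:c + tile] @ xs
    return y

# The tiled, nonnegative decomposition agrees with the full product.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
x = rng.standard_normal(8)
assert np.allclose(tiled_mvm_nonnegative(W, x), W @ x)
```

In a photonic setting, each small nonnegative sub-multiplication maps naturally onto an optical multiplier, since optical power is inherently nonnegative; the subtraction and accumulation would be handled in the electronic (e.g., FPGA) postprocessing stage.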

