Abstract:
Convolutional neural networks (CNNs) exhibit remarkable performance in various machine learning tasks. As sensor-equipped Internet of Things devices permeate every aspect of modern life, the ability to execute CNN inference, a computationally intensive application, on resource-constrained devices has become increasingly important. In this context, we present Cappuccino, a framework for the synthesis of efficient inference software targeting mobile systems-on-chip (SoCs). We propose techniques for efficient parallelization of CNN inference on mobile SoCs and explore the underlying tradeoffs. Experiments with different CNNs on three mobile devices demonstrate the effectiveness of our approach.
Published in: IEEE Embedded Systems Letters (Volume: 11, Issue: 1, March 2019)