Flet-Edge: A Full Life-cycle Evaluation Tool for deep learning framework on the Edge



Abstract:

Deep learning frameworks, such as TensorFlow, PyTorch, MXNet, and PaddlePaddle, are widely used and studied by industry. At the same time, AIoT (Artificial Intelligence and Internet of Things) and edge computing have brought more deep learning scenarios to the edge. To develop and deploy AIoT applications, deep learning frameworks need to be evaluated in terms of both ease of use and performance. To describe the full life-cycle performance of deep learning frameworks on the edge, this paper proposes a metric set, PDR, which includes three comprehensive sub-metrics: Programming complexity, Deployment complexity, and Runtime performance. Based on PDR, this paper designs and implements a full life-cycle evaluation tool, Flet-Edge, which can automatically collect the PDR metrics and present them visually. Finally, to verify the usability of Flet-Edge, this paper builds a heterogeneous edge device cluster and carries out three case studies. With only one configuration file as input, Flet-Edge collects the twelve metrics of training or inference tasks and outputs them as text or charts. By examining the hierarchical roofline diagram provided by Flet-Edge, this paper shows that Flet-Edge can guide software and hardware optimization for deep learning.
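The abstract states that a single configuration file drives the whole evaluation, but this excerpt does not show that file's format. The Python sketch below is therefore only an assumed illustration of what such a configuration might specify; every field name and the run_evaluation driver are hypothetical, not Flet-Edge's actual schema or API.

```python
# Hypothetical sketch only: Flet-Edge's real configuration schema and API
# are not given in this excerpt; every name below is an assumption.

config = {
    "task": "inference",                  # or "training"
    "framework": "pytorch",               # framework under evaluation
    "model": "resnet18",                  # model to run on the device
    "device": "edge-node-01",             # target device in the edge cluster
    "metrics": ["programming_complexity", # the three PDR sub-metric groups
                "deployment_complexity",
                "runtime_performance"],
    "output": {"format": ["text", "chart"],
               "roofline": "hierarchical"},
}

def run_evaluation(cfg: dict) -> None:
    """Placeholder driver: a real tool would dispatch the task to the
    target device, collect the twelve metrics, and render the reports."""
    print(f"Evaluating {cfg['model']} on {cfg['framework']} "
          f"({cfg['task']}) targeting {cfg['device']}")

run_evaluation(config)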
Date of Conference: 10-12 January 2023
Date Added to IEEE Xplore: 27 March 2023
Conference Location: Nanjing, China


I. Introduction

With the explosion of data and the increase in hardware computing power, deep neural networks have been successful in supervised learning, unsupervised learning, reinforcement learning, and blended learning [1]. At the same time, with the development of edge computing, deep learning tasks have been widely deployed on edge devices. Nowadays, edge devices can support not only inference but also training and saving models [2]. To support the development of deep learning tasks, a variety of deep learning frameworks (hereafter, frameworks), such as TensorFlow [3], PyTorch [4], MXNet [5], PaddlePaddle [6], and Caffe [7], have emerged, and new frameworks such as MindSpore [8] and OneFlow [9] are being developed. However, frameworks differ in many ways, including their implementation principles and their optimizations for different deep learning models (hereafter, models) and hardware, so different combinations of models, frameworks, and hardware produce different performance. Therefore, evaluating frameworks on the edge is especially important for finding the best combinations. Although there is some research on frameworks on the edge, the following three problems remain unsolved. First, existing studies focus mainly on runtime performance and cannot give a comprehensive evaluation of frameworks. Second, these studies were carried out by comparing the performance of hardware or programs under different conditions, without analyzing the constraints among the program, software, and hardware to derive optimization suggestions. Finally, the evaluation methods used in these studies are complex and inefficient, which makes them difficult for non-experts to reproduce.
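The constraint analysis called for above is what the hierarchical roofline diagram in the abstract provides: attainable throughput is bounded by the minimum of peak compute and each memory level's bandwidth multiplied by the workload's arithmetic intensity. The sketch below illustrates that bound only; the device numbers are hypothetical placeholders, not Flet-Edge code or measurements from the paper's edge cluster.

```python
# Illustrative hierarchical roofline calculation; all device numbers are
# hypothetical placeholders, not measurements from the paper's cluster.

def attainable_gflops(intensity, peak_gflops, bandwidth_gb_s):
    """Roofline bound: min(peak compute, bandwidth x arithmetic intensity)."""
    return min(peak_gflops, bandwidth_gb_s * intensity)

PEAK = 1000.0                          # assumed peak compute, GFLOP/s
LEVELS = {"L2": 200.0, "DRAM": 25.0}   # assumed bandwidth per memory level, GB/s

kernel_intensity = 8.0                 # measured FLOPs per byte for one kernel

for level, bw in LEVELS.items():
    ceiling = attainable_gflops(kernel_intensity, PEAK, bw)
    verdict = "compute-bound" if ceiling >= PEAK else f"memory-bound at {level}"
    print(f"{level} ceiling: {ceiling:.0f} GFLOP/s ({verdict})")
```

A kernel that sits under the DRAM roof but over the cache roof, as in this example, is limited by DRAM bandwidth rather than compute, which is the kind of diagnosis such a diagram turns into software and hardware optimization suggestions.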
