
TFormer: A Transmission-Friendly ViT Model for IoT Devices


Abstract:

Deploying high-performance vision transformer (ViT) models on ubiquitous Internet of Things (IoT) devices to provide high-quality vision services will revolutionize the way we live, work, and interact with the world. Because the limited resources of IoT devices conflict with the demands of resource-intensive ViT models, it has become mainstream to use cloud servers to assist ViT model training. However, because existing ViT models have large numbers of parameters and floating-point operations (FLOPs), the model parameters transmitted by cloud servers are bulky, and the resulting models are difficult to run on resource-constrained IoT devices. To this end, this article proposes a transmission-friendly ViT model, TFormer, for deployment on resource-constrained IoT devices with the assistance of a cloud server. TFormer owes its high performance and its small parameter and FLOP counts to two proposed components: a hybrid layer and a partially connected feed-forward network (PCS-FFN). The hybrid layer consists of nonlearnable modules and a pointwise convolution, which together capture multitype and multiscale features with only a few parameters and FLOPs, improving TFormer's performance. The PCS-FFN adopts group convolution to reduce the number of parameters. The key idea of this article is to design TFormer with few model parameters and FLOPs so that applications running on resource-constrained IoT devices can benefit from the high performance of ViT models. Experimental results on the ImageNet-1K, MS COCO, and ADE20K datasets for image classification, object detection, and semantic segmentation demonstrate that the proposed model outperforms other state-of-the-art models. Specifically, TFormer-S achieves 5% higher accuracy on ImageNet-1K than ResNet18 with 1.4× fewer parameters and FLOPs.
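The abstract credits TFormer's efficiency to two components: a hybrid layer that pairs nonlearnable modules with a pointwise convolution, and a PCS-FFN built on group convolution. The PyTorch sketch below illustrates both ideas in minimal form; the choice of average pooling as the nonlearnable module, the pool sizes, the expansion ratio, and the number of groups are illustrative assumptions rather than the authors' exact design.

import torch
import torch.nn as nn


class HybridTokenMixer(nn.Module):
    # Nonlearnable multiscale pooling followed by a pointwise (1x1) convolution.
    # Average pooling and the pool sizes (3, 5, 7) are assumptions for illustration;
    # the 1x1 convolution that fuses the pooled features is the only learnable part,
    # so the parameter and FLOP cost stays small.
    def __init__(self, dim: int, pool_sizes=(3, 5, 7)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.AvgPool2d(k, stride=1, padding=k // 2) for k in pool_sizes
        )
        self.fuse = nn.Conv2d(dim * (len(pool_sizes) + 1), dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x] + [pool(x) for pool in self.pools]  # multitype/multiscale features
        return self.fuse(torch.cat(feats, dim=1))


class GroupedFFN(nn.Module):
    # Feed-forward block built from grouped pointwise convolutions: with g groups,
    # each projection has roughly 1/g of the parameters of a dense FFN layer.
    def __init__(self, dim: int, expansion: int = 4, groups: int = 4):
        super().__init__()
        hidden = dim * expansion
        self.fc1 = nn.Conv2d(dim, hidden, kernel_size=1, groups=groups)
        self.act = nn.GELU()
        self.fc2 = nn.Conv2d(hidden, dim, kernel_size=1, groups=groups)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc2(self.act(self.fc1(x)))


if __name__ == "__main__":
    x = torch.randn(1, 64, 14, 14)   # (batch, channels, height, width) feature map
    y = GroupedFFN(64)(HybridTokenMixer(64)(x))
    print(y.shape)                   # torch.Size([1, 64, 14, 14])

Replacing the dense feed-forward projections with grouped 1x1 convolutions cuts that block's parameter count roughly by the number of groups, which is the kind of saving the abstract attributes to the PCS-FFN.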
Published in: IEEE Transactions on Parallel and Distributed Systems (Volume: 34, Issue: 2, 01 February 2023)
Page(s): 598 - 610
Date of Publication: 17 November 2022


1 Introduction

The International Data Corporation predicts that by 2025, there will be 41.6 billion connected Internet of Things (IoT) devices [1]. Meanwhile, recently proposed vision transformer (ViT) models, backed by large datasets, have surpassed the convolutional neural network models that dominated for many years across a wide range of vision tasks, such as image classification [2], [3], object detection [4], [5], and semantic segmentation [6], [7]. Deploying high-performance ViT models on ubiquitous IoT devices to provide high-quality vision services has therefore attracted great attention from both industry and academia.
