A Survey on Vision Transformer


Abstract:

Transformer, first applied to the field of natural language processing, is a type of deep neural network mainly based on the self-attention mechanism. Thanks to its strong representation capabilities, researchers are looking at ways to apply transformer to computer vision tasks. On a variety of visual benchmarks, transformer-based models perform similarly to or better than other types of networks, such as convolutional and recurrent neural networks. Given its high performance and reduced need for vision-specific inductive bias, transformer is receiving more and more attention from the computer vision community. In this paper, we review these vision transformer models by categorizing them according to different tasks and analyzing their advantages and disadvantages. The main categories we explore include the backbone network, high/mid-level vision, low-level vision, and video processing. We also include efficient transformer methods for pushing transformer into real device-based applications. Furthermore, we take a brief look at the self-attention mechanism in computer vision, as it is the base component of transformer. Toward the end of this paper, we discuss the challenges and provide several further research directions for vision transformers.
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence (Volume: 45, Issue: 1, 01 January 2023)
Page(s): 87 - 110
Date of Publication: 18 February 2022

PubMed ID: 35180075


1 Introduction

Deep neural networks (DNNs) have become the fundamental infrastructure of today's artificial intelligence (AI) systems. Different types of tasks have typically involved different types of networks. For example, the multi-layer perceptron (MLP), or fully connected (FC) network, is the classical type of neural network, composed of multiple linear layers and nonlinear activations stacked together [1], [2]. Convolutional neural networks (CNNs) introduce convolutional and pooling layers to process shift-invariant data such as images [3], [4]. Recurrent neural networks (RNNs) utilize recurrent cells to process sequential or time-series data [5], [6]. Transformer is a newer type of neural network: it mainly utilizes the self-attention mechanism [7], [8] to extract intrinsic features [9] and shows great potential for extensive use in AI applications.
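
Since self-attention is the base component of the transformer models surveyed here, a minimal NumPy sketch of single-head scaled dot-product self-attention (following the standard formulation of [9]) may help make it concrete. The function name, weight shapes, and the patch-token framing below are illustrative assumptions for this sketch, not code from the survey.

```python
import numpy as np

def scaled_dot_product_self_attention(x, w_q, w_k, w_v):
    """Minimal single-head self-attention over a token sequence.

    x:              (n, d_model) input tokens (e.g., flattened image patches)
    w_q, w_k, w_v:  (d_model, d_k) learned projection matrices (hypothetical names)
    """
    # Project inputs to queries, keys, and values.
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # Pairwise similarities, scaled by sqrt(d_k) to stabilize gradients.
    scores = q @ k.T / np.sqrt(k.shape[-1])            # (n, n)
    # Row-wise softmax turns similarities into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output token is an attention-weighted sum of all value vectors.
    return weights @ v                                  # (n, d_k)

# Illustrative usage: 16 "patch tokens" with a 64-dim embedding.
rng = np.random.default_rng(0)
x = rng.standard_normal((16, 64))
w = [rng.standard_normal((64, 64)) / 8 for _ in range(3)]
out = scaled_dot_product_self_attention(x, *w)          # shape (16, 64)
```

Note that, unlike a convolution, every output token here attends to every input token, which is the global-receptive-field property that motivates applying transformers to vision.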
