
Joint Learning of Neural Transfer and Architecture Adaptation for Image Recognition



Abstract:

Current state-of-the-art visual recognition systems usually rely on the following pipeline: 1) pretraining a neural network on a large-scale data set (e.g., ImageNet) and 2) finetuning the network weights on a smaller, task-specific data set. This pipeline assumes that weight adaptation alone can transfer the network's capability from one domain to another, resting on the strong assumption that a single fixed architecture is appropriate for all domains. However, each domain, with its distinct recognition target, may need different levels or paths of the feature hierarchy: some neurons become redundant, while others are reactivated to form new network structures. In this work, we show that dynamically adapting the network architecture to each domain task, jointly with weight finetuning, improves both efficiency and effectiveness over the existing image recognition pipeline, which tunes only the weights and leaves the architecture fixed. Our method generalizes readily to an unsupervised paradigm by replacing supernet training with self-supervised learning in the source domain tasks and performing linear evaluation in the downstream tasks, which further improves its search efficiency. Moreover, we provide principled and empirical analysis of why our approach works by investigating the ineffectiveness of existing neural architecture search; we find that preserving the joint distribution of the network architecture and weights is important. This analysis not only benefits image recognition but also provides insights for crafting neural networks. Experiments on five representative image recognition tasks, namely person re-identification, age estimation, gender recognition, image classification, and unsupervised domain adaptation, demonstrate the effectiveness of our method.
Published in: IEEE Transactions on Neural Networks and Learning Systems ( Volume: 33, Issue: 10, October 2022)
Page(s): 5401 - 5415
Date of Publication: 19 April 2021

PubMed ID: 33872158


I. Introduction

The success of ImageNet has enabled a standard paradigm of image recognition. Specifically, neural networks are often first pretrained on ImageNet to obtain a set of pretrained weights [see Fig. 1(a)]. These pretrained weights are then further finetuned on a smaller, task-specific data set to obtain the final optimal weights for each target task [see Fig. 1(a)]. Such a paradigm has led to state-of-the-art performance in almost all computer vision tasks, including person re-identification (re-ID) [1], human attribute recognition (e.g., age estimation and gender recognition) [2], and image classification [3].
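The pretrain-then-finetune pipeline described above can be sketched with a toy numpy network (all shapes, learning rates, and data here are illustrative stand-ins, not from the paper): backbone weights learned on a large source task are reused, a fresh task head is attached, and both are briefly finetuned on a much smaller target data set.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse(W1, W2, X, Y):
    """Mean squared error of the 2-layer net relu(X @ W1) @ W2."""
    return float(np.mean((np.maximum(X @ W1, 0) @ W2 - Y) ** 2))

def train(W1, W2, X, Y, lr=0.05, steps=300, freeze_backbone=False):
    """Plain gradient descent; optionally freeze the 'backbone' W1."""
    n = len(X)
    for _ in range(steps):
        H = np.maximum(X @ W1, 0)        # backbone features
        err = H @ W2 - Y                 # head output minus targets
        if not freeze_backbone:
            dH = err @ W2.T
            dH[H <= 0] = 0               # relu gradient mask
            W1 = W1 - lr * (X.T @ dH) / n
        W2 = W2 - lr * (H.T @ err) / n
    return W1, W2

# 1) "Pretrain" on a large source task (a stand-in for ImageNet).
Xs = rng.normal(size=(512, 8))
Ys = np.maximum(Xs @ rng.normal(size=(8, 4)), 0) @ rng.normal(size=(4, 1))
W1 = rng.normal(size=(8, 16)) * 0.3
W2 = rng.normal(size=(16, 1)) * 0.3
W1, W2 = train(W1, W2, Xs, Ys)

# 2) Transfer: reuse the pretrained backbone W1, reinitialize the task
#    head W2, then finetune on a much smaller target data set.
Xt = rng.normal(size=(64, 8))
Yt = np.maximum(Xt @ rng.normal(size=(8, 4)), 0) @ rng.normal(size=(4, 1))
W2_new = rng.normal(size=(16, 1)) * 0.3
loss_before = mse(W1, W2_new, Xt, Yt)
W1_ft, W2_ft = train(W1.copy(), W2_new, Xt, Yt)
loss_after = mse(W1_ft, W2_ft, Xt, Yt)
print(loss_before, loss_after)
```

Note that in this sketch the architecture (layer widths, depth) is identical in the source and target tasks; only the weights are transferred, which is exactly the WP&F assumption the paper argues against.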

Fig. 1. Comparison between (a) WP&F and (b) the proposed NTAA framework. In WP&F, only the network weights are transferred from the source task to the downstream tasks. In our NTAA, both the network weights and the architecture are transferred from the source task to the downstream tasks.

