
Hint-Dynamic Knowledge Distillation


Abstract:

Knowledge Distillation (KD) transfers the knowledge from a high-capacity teacher model to promote a smaller student model. Existing efforts guide the distillation by matching prediction logits, feature embeddings, etc., while how to efficiently utilize them in conjunction remains less explored. In this paper, we propose Hint-dynamic Knowledge Distillation, dubbed HKD, which excavates the knowledge from the teacher’s hints in a dynamic scheme. The guidance effect of the knowledge hints usually varies across instances and learning stages, which motivates us to adaptively customize a specific hint-learning manner for each instance. Specifically, a meta-weight network is introduced to generate instance-wise weight coefficients for the knowledge hints, informed by the student model’s current learning progress. We further present a weight ensembling strategy that exploits historical statistics to eliminate potential bias in the coefficient estimates. Experiments on the standard benchmarks CIFAR-100 and Tiny-ImageNet show that the proposed HKD effectively boosts knowledge distillation.
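
The mechanics described above can be pictured with a short, hypothetical sketch. The snippet below is not the authors' released code: it assumes a PyTorch setup, a single pooled student feature as the input to an illustrative MetaWeightNet, plain MSE hint losses, and an exponential moving average as one plausible reading of the "historical statistics" used for weight ensembling.

```python
# Illustrative sketch of instance-wise hint weighting (not the official HKD code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MetaWeightNet(nn.Module):
    """Predicts a per-instance weight for each knowledge hint."""

    def __init__(self, feat_dim: int, num_hints: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, num_hints),
        )

    def forward(self, student_feat: torch.Tensor) -> torch.Tensor:
        # student_feat: (B, feat_dim) pooled student features reflecting the
        # student's current learning state; output: (B, num_hints) in (0, 1).
        return torch.sigmoid(self.net(student_feat))


def hkd_hint_loss(student_hints, teacher_hints, weights):
    """Weighted sum of per-hint, per-instance distillation losses."""
    loss = 0.0
    for k, (s, t) in enumerate(zip(student_hints, teacher_hints)):
        per_sample = F.mse_loss(s, t.detach(), reduction="none")
        per_sample = per_sample.flatten(1).mean(dim=1)          # shape (B,)
        loss = loss + (weights[:, k] * per_sample).mean()
    return loss


# Toy usage: two hints, batch of 8, 64-d pooled student features.
B, D, K = 8, 64, 2
meta_net = MetaWeightNet(feat_dim=D, num_hints=K)
student_feat = torch.randn(B, D)
student_hints = [torch.randn(B, 32) for _ in range(K)]
teacher_hints = [torch.randn(B, 32) for _ in range(K)]
ema_w = torch.zeros(K)                    # running average of coefficients

w = meta_net(student_feat)                # instance-wise coefficients (B, K)
with torch.no_grad():
    ema_w = 0.9 * ema_w + 0.1 * w.mean(dim=0)
# Blend the current prediction with its historical average to reduce bias
# in the coefficient estimates (a simple stand-in for weight ensembling).
w_ens = 0.5 * w + 0.5 * ema_w.unsqueeze(0)
loss = hkd_hint_loss(student_hints, teacher_hints, w_ens)
loss.backward()
```

In this reading, the meta-network is trained jointly with the student, so the coefficients adapt as the student's learning progresses; the exact inputs, hint types, and ensembling rule used in the paper may differ.
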
Date of Conference: 04-10 June 2023
Date Added to IEEE Xplore: 05 May 2023
Conference Location: Rhodes Island, Greece


1. INTRODUCTION

Whilst deep neural networks (DNNs) have achieved remarkable success in computer vision, most of these well-performing models are difficult to deploy on edge devices in practical scenarios due to their high computational costs. To alleviate this, lightweight DNNs have been widely investigated. Typical approaches include parameter quantization [1], network pruning [2], and knowledge distillation (KD) [3]. Among them, KD has gained increasing popularity in various vision tasks because of the ease with which it can be integrated into other model compression pipelines.
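
For context, the classical KD objective of [3] matches temperature-softened teacher and student logits alongside the usual supervised loss. The following is a minimal, PyTorch-style sketch with illustrative hyper-parameter defaults, not code from this paper.

```python
# Minimal sketch of the vanilla logit-matching KD objective [3].
import torch
import torch.nn.functional as F


def kd_loss(student_logits, teacher_logits, labels, T: float = 4.0, alpha: float = 0.9):
    # Soft-target term: KL divergence between temperature-scaled distributions,
    # rescaled by T^2 so gradients stay comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: standard cross-entropy with the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```
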
