
Compact Models for Periocular Verification Through Knowledge Distillation



Abstract:

Despite the wide use of deep neural networks for periocular verification, achieving smaller deep learning models with high performance that can be deployed on devices with low computational power remains a challenge. To address computation cost, we present in this paper a lightweight deep learning model, DenseNet-20, based on the DenseNet architecture with only 1.1 million trainable parameters. Further, we present an approach to enhance the verification performance of DenseNet-20 via knowledge distillation. With experiments on the VISPI dataset, captured with two different smartphones (iPhone and Nokia), we show that introducing knowledge distillation into the DenseNet-20 training phase outperforms the same model trained without knowledge distillation, reducing the Equal Error Rate (EER) from 8.36% to 4.56% on iPhone data, from 5.33% to 4.64% on Nokia data, and from 20.98% to 15.54% on cross-smartphone data.
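
For context, the sketch below illustrates a standard knowledge-distillation loss (soft-target KL divergence against a teacher combined with hard-label cross-entropy), assuming a PyTorch setup. The abstract does not specify the paper's exact distillation formulation, teacher network, temperature, or loss weighting, so those choices here are placeholders, not the authors' method.

```python
# Minimal knowledge-distillation loss sketch (Hinton-style soft targets).
# Assumptions: PyTorch; temperature T and weight alpha are illustrative values,
# not taken from the paper.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    """Blend soft-target KL loss against the teacher with hard-label cross-entropy."""
    # Soften both output distributions with temperature T and compare via KL divergence;
    # the T*T factor keeps gradient magnitudes comparable across temperatures.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Standard cross-entropy on the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, targets)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

# Example usage with random tensors (batch of 8, 10 classes):
if __name__ == "__main__":
    student_logits = torch.randn(8, 10, requires_grad=True)  # lightweight student output
    teacher_logits = torch.randn(8, 10)                      # larger teacher output (no grad)
    targets = torch.randint(0, 10, (8,))
    loss = distillation_loss(student_logits, teacher_logits, targets)
    loss.backward()
    print(float(loss))
```

In this setup the teacher is run in inference mode during the student's training, and only the student's parameters are updated by the combined loss.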
Date of Conference: 16-18 September 2020
Date Added to IEEE Xplore: 02 October 2020
Electronic ISSN: 1617-5468
Conference Location: Darmstadt, Germany

