
Frankenstein: Learning Deep Face Representations Using Small Data


Abstract:

Deep convolutional neural networks have recently proven extremely effective for difficult face recognition problems in uncontrolled settings. To train such networks, very large training sets are needed with millions of labeled images. For some applications, such as near-infrared (NIR) face recognition, such large training data sets are not publicly available and are difficult to collect. In this paper, we propose a method to generate very large training data sets of synthetic images by compositing real face images in a given data set. We show that this method enables learning models from as few as 10 000 training images that perform on par with models trained from 500 000 images. Using our approach, we also obtain state-of-the-art results on the CASIA NIR-VIS 2.0 heterogeneous face recognition data set.
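The core idea of compositing real face images into synthetic training examples can be sketched minimally. The paper's actual compositing scheme is not described in this excerpt, so the half-face stitch below (joining complementary halves of two aligned faces) is an illustrative assumption, not the authors' method:

```python
import numpy as np

def composite_faces(face_a, face_b, axis=1):
    """Create one synthetic training image by stitching together
    complementary halves of two aligned face images.
    Illustrative sketch only: the paper may composite at a finer
    part level (e.g. eyes, nose, mouth) rather than whole halves."""
    assert face_a.shape == face_b.shape, "faces must be aligned to the same size"
    split = face_a.shape[axis] // 2
    # Take the first half from face_a and the second half from face_b.
    left = np.take(face_a, range(split), axis=axis)
    right = np.take(face_b, range(split, face_b.shape[axis]), axis=axis)
    return np.concatenate([left, right], axis=axis)

# Toy 4x4 "images": the composite's left half comes from a, right half from b.
a = np.zeros((4, 4), dtype=np.uint8)
b = np.full((4, 4), 255, dtype=np.uint8)
synthetic = composite_faces(a, b)
```

Pairing each image with many partners in this way grows the effective training set combinatorially, which is how a set of roughly 10 000 real images can stand in for a far larger one.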
Published in: IEEE Transactions on Image Processing ( Volume: 27, Issue: 1, January 2018)
Page(s): 293 - 303
Date of Publication: 25 September 2017

PubMed ID: 28952941

I. Introduction

In recent years, deep learning methods, and in particular convolutional neural networks (CNNs), have achieved considerable success in a range of computer vision applications including object recognition [25], object detection [10], semantic segmentation [37], action recognition [46], and face recognition [42]. The recent success of CNNs stems from the following facts: (i) big annotated training datasets are currently available for a variety of recognition problems to learn rich models with millions of free parameters; (ii) massively parallel GPU implementations greatly improve the training efficiency of CNNs; and (iii) new effective CNN architectures are being proposed, such as the VGG-16/19 networks [47], inception networks [55], and deep residual networks [13].
