Guaranteed Recovery of One-Hidden-Layer Neural Networks via Cross Entropy


Abstract:

We study model recovery for data classification, where the training labels are generated from a one-hidden-layer neural network with sigmoid activations, also known as a single-layer feedforward network, and the goal is to recover the weights of the neural network. We consider two network models: the fully connected network (FCN) and the non-overlapping convolutional neural network (CNN). We prove that with Gaussian inputs, the empirical risk based on cross entropy exhibits strong convexity and smoothness uniformly in a local neighborhood of the ground truth, as soon as the sample size is sufficiently large. Consequently, if initialized in this neighborhood, gradient descent converges linearly to a critical point that is provably close to the ground truth. Furthermore, we show that such an initialization can be obtained via the tensor method. Together, these results establish a global convergence guarantee for empirical risk minimization using cross entropy via gradient descent for learning one-hidden-layer neural networks, at near-optimal sample and computational complexity with respect to the network input dimension, without unrealistic assumptions such as requiring a fresh set of samples at each iteration.
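To make the setting concrete, below is a minimal NumPy sketch of the FCN case. This is illustrative code, not the authors' implementation: the dimensions d and K, the sample size n, the step size, and the small perturbation used in place of the tensor-method initialization are all assumptions made for the example. Labels are drawn as y ~ Bernoulli(f(x; W)), where f(x; W) = (1/K) Σ_k σ(w_k^T x), and gradient descent is run on the empirical cross-entropy risk.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def f(X, W):
    # Network output f(x; W) = (1/K) * sum_k sigmoid(w_k^T x).
    return sigmoid(X @ W).mean(axis=1)

# Illustrative problem sizes (assumptions, not the paper's experiments).
d, K, n = 10, 3, 50_000
W_star = rng.standard_normal((d, K))      # ground-truth weights

# Gaussian inputs; labels y ~ Bernoulli(f(x; W_star)).
X = rng.standard_normal((n, d))
y = rng.binomial(1, f(X, W_star))

def risk(W):
    # Empirical cross-entropy risk, clipped for numerical stability.
    q = np.clip(f(X, W), 1e-12, 1 - 1e-12)
    return -np.mean(y * np.log(q) + (1 - y) * np.log(1 - q))

def grad(W):
    # Gradient of the empirical risk: chain (q - y) / (q * (1 - q))
    # through the sigmoid of each hidden unit.
    q = np.clip(f(X, W), 1e-12, 1 - 1e-12)
    s = sigmoid(X @ W)                    # n x K unit activations
    coeff = (q - y) / (q * (1 - q)) / K   # per-sample scaling
    return X.T @ (coeff[:, None] * s * (1 - s)) / n

# Start near the ground truth; this perturbation stands in for the
# tensor-method initialization analyzed in the paper.
W = W_star + 0.1 * rng.standard_normal((d, K))
for _ in range(500):
    W -= 0.5 * grad(W)

print(f"risk at W*: {risk(W_star):.4f}, risk after GD: {risk(W):.4f}")

In the regime the paper analyzes, the local strong convexity and smoothness of this risk around W* are exactly what guarantee that the plain gradient loop above converges linearly once the initialization lands in the right neighborhood.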
Published in: IEEE Transactions on Signal Processing (Volume 68)
Pages: 3225–3235
Date of Publication: 7 May 2020

