Sparse Coding and Autoencoders

Abstract:

In this work we study the landscape of the squared loss of an Autoencoder when the data generative model is that of “Sparse Coding”/“Dictionary Learning”. The neural net considered is an $\mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$ mapping with a single ReLU activation layer of size $h > n$. The net has access to vectors $y \in \mathbb{R}^{n}$ obtained as $y = A^{\ast} x^{\ast}$, where $x^{\ast} \in \mathbb{R}^{h}$ is a sparse high-dimensional vector and $A^{\ast} \in \mathbb{R}^{n \times h}$ is an overcomplete incoherent matrix. Under very mild distributional assumptions on $x^{\ast}$, we prove that the norm of the expected gradient of the squared loss function is asymptotically (in the sparse-code dimension) negligible for all points in a small neighborhood of $A^{\ast}$. This is supported with experimental evidence using synthetic data. We conduct experiments suggesting that $A^{\ast}$ sits at the bottom of a well in the landscape, and we also give experiments showing that gradient descent on this loss function gets columnwise very close to the original dictionary even from initializations that are far away. Along the way we prove that a layer of ReLU gates can be set up to automatically recover the support of the sparse codes. Since this property holds independently of the loss function, we believe it could be of independent interest. A full version of this paper is accessible at: https://arxiv.org/abs/1708.03735
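The generative model and architecture described in the abstract are simple enough to simulate. The sketch below (not the authors' code) samples $k$-sparse codes $x^{\ast}$, forms $y = A^{\ast} x^{\ast}$ with a unit-norm-column dictionary, and evaluates a one-ReLU-layer autoencoder of the assumed form $\hat{y} = W^{\top}\mathrm{ReLU}(W y + b)$ at a point near $W = A^{\ast\top}$, checking how often the active set of the ReLU layer matches the support of $x^{\ast}$. All dimensions, the bias/threshold value, and the specific code distribution are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem sizes: ambient dimension n, code dimension h > n, sparsity k.
# These values are illustrative choices, not taken from the paper.
n, h, k = 512, 1024, 3
m = 500                                   # number of synthetic samples

# Overcomplete dictionary A* with unit-norm (hence roughly incoherent) columns.
A_star = rng.standard_normal((n, h))
A_star /= np.linalg.norm(A_star, axis=0, keepdims=True)

# k-sparse nonnegative codes x* in R^h (a simple stand-in distribution).
X_star = np.zeros((h, m))
for j in range(m):
    support = rng.choice(h, size=k, replace=False)
    X_star[support, j] = rng.uniform(1.0, 2.0, size=k)

Y = A_star @ X_star                       # observed data y = A* x*

def relu(z):
    return np.maximum(z, 0.0)

def autoencode(W, b, Y):
    """R^n -> R^n autoencoder with a single ReLU layer of width h."""
    H = relu(W @ Y + b[:, None])          # hidden activations, shape (h, m)
    return W.T @ H, H

def squared_loss(W, b, Y):
    Y_hat, _ = autoencode(W, b, Y)
    return 0.5 * np.mean(np.sum((Y_hat - Y) ** 2, axis=0))

# Place the encoder in a small neighborhood of A* (rows of W ~ columns of A*)
# and use a fixed negative bias as the ReLU threshold.
W = A_star.T + 0.01 * rng.standard_normal((h, n))
b = -0.5 * np.ones(h)

_, H = autoencode(W, b, Y)
exact = np.mean([set(np.flatnonzero(H[:, j])) == set(np.flatnonzero(X_star[:, j]))
                 for j in range(m)])
print(f"squared loss near A*: {squared_loss(W, b, Y):.3f}")
print(f"fraction of samples with exact support recovery: {exact:.2f}")
```

In this regime (codes well above the threshold and a sufficiently incoherent $A^{\ast}$), the hidden ReLU layer typically fires exactly on the support of $x^{\ast}$, which is the support-recovery property the abstract highlights; the squared loss reported here is only a sanity check of the forward pass, not a reproduction of the paper's landscape results.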
Date of Conference: 17-22 June 2018
Date Added to IEEE Xplore: 16 August 2018
Electronic ISSN: 2157-8117
Conference Location: Vail, CO, USA
