Iterative algorithms for learning a linear Gaussian observation model with an exponential power scale mixture prior

Author: Deng, G.; Dept. of Electron. Eng., La Trobe Univ., Bundoora, VIC, Australia

The author studies an iterative algorithm for learning a linear Gaussian observation model with an exponential power scale mixture (EPSM) prior. This is a generalisation of previous work based on the Gaussian scale mixture prior. The author uses the principle of majorisation-minimisation to derive a general iterative algorithm which is related to a reweighted lp-minimisation algorithm. The author then shows that the Gaussian and Laplacian scale mixtures are two special cases of the EPSM, and that the corresponding learning algorithms are related to the reweighted l2- and l1-minimisation algorithms, respectively. The author also studies a particular case of the EPSM, which is a Pareto distribution, and discusses Bayesian methods for parameter estimation.
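As a rough illustration of the reweighted-minimisation connection mentioned in the abstract, the sketch below runs a generic iteratively reweighted l2 update for a linear Gaussian observation model y = Ax + n with a sparsity-promoting prior. The weight rule 1/(|x| + eps), the regularisation parameter lam and the helper name reweighted_l2 are illustrative assumptions, not the EPSM-derived updates from the paper.

import numpy as np

def reweighted_l2(A, y, lam=0.1, eps=1e-6, n_iter=50):
    # Generic MM-style iteratively reweighted l2-minimisation for y = A x + Gaussian noise.
    # The weight 1/(|x_i| + eps) promotes sparsity (a Laplacian-like rule); it is an
    # assumed example, not the EPSM-derived weight from the paper.
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iter):
        w = 1.0 / (np.abs(x) + eps)  # reweighting from the current estimate
        # Solve the weighted ridge problem: min ||y - A x||^2 + lam * sum_i w_i * x_i^2
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
    return x

# Usage on synthetic data: recover a 5-sparse vector from 50 noisy measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, size=5, replace=False)] = rng.standard_normal(5)
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = reweighted_l2(A, y)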

Published in:

IET Signal Processing (Volume 5, Issue 1)