The authors study an iterative algorithm for learning a linear Gaussian observation model with an exponential power scale mixture (EPSM) prior. This generalises previous work based on the Gaussian scale mixture prior. Using the principle of majorisation minimisation, the authors derive a general iterative algorithm related to a reweighted lp-minimisation algorithm. They then show that the Gaussian and Laplacian scale mixtures are two special cases of the EPSM, and that the corresponding learning algorithms are related to the reweighted l2- and l1-minimisation algorithms, respectively. The authors also study a particular case of the EPSM involving a Pareto distribution and discuss Bayesian methods for parameter estimation.
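To make the connection to reweighted l2-minimisation concrete, the sketch below shows a generic iteratively reweighted least-squares scheme for a linear model y = Ax + noise. It is a hypothetical illustration, not the authors' exact algorithm: the weight rule 1/(x_i^2 + eps) and the regularisation parameter `lam` are assumptions chosen so that each iteration solves a weighted ridge problem that majorises a concave sparsity penalty, in the spirit of majorisation minimisation.

```python
import numpy as np

def reweighted_l2(A, y, lam=1e-4, eps=1e-8, n_iter=50):
    """Generic reweighted l2-minimisation sketch (illustrative only).

    Each iteration minimises ||y - A x||^2 + lam * sum_i w_i * x_i^2
    with weights w_i = 1 / (x_i^2 + eps) taken from the previous
    estimate, which drives small coefficients toward zero.
    """
    # start from the minimum-norm least-squares solution
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(n_iter):
        w = 1.0 / (x**2 + eps)                      # weights from current estimate
        # closed-form minimiser of the weighted ridge objective
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
    return x

# usage: recover a sparse vector from underdetermined measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 17, 60]] = [2.0, -1.5, 1.0]
y = A @ x_true
x_hat = reweighted_l2(A, y)
```

Replacing the quadratic weighting with weights on |x_i| yields the analogous reweighted l1-minimisation scheme associated with the Laplacian scale mixture case.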