Asymptotic Relative Efficiency and Exact Variance Stabilizing Transformation for the Generalized Gaussian Distribution

Author: Kai-Sheng Song, Dept. of Math., Univ. of North Texas, Denton, TX, USA

It is demonstrated that the sampling distributions of the maximum likelihood (ML) estimator and its Studentized statistic for the generalized Gaussian distribution do not pass the most powerful normality tests, even for fairly large sample sizes. This disagreement with what standard large-sample ML theory predicts, together with the computational burden of evaluating the associated polygamma functions, motivates the consideration of a competing convexity-based estimator. The asymptotic normality of this estimator is derived. It is shown that the competing estimator is almost as efficient as the ML estimator, and its asymptotic relative efficiency to the ML estimator tends to 1 as the shape parameter approaches zero. More importantly, its asymptotic distribution admits an exact variance stabilizing transformation, whereas the asymptotic variance function of the ML estimator does not have a closed-form variance stabilizing transformation. The exact transformation is a composition of the inverse hyperbolic cotangent and square root functions. Besides stabilizing the variance, this transformation is remarkably effective for symmetrizing and normalizing the sampling distribution of the estimator, and hence for improving the standard normal approximation. Furthermore, this simple transformation provides a quite accurate approximation to the non-closed-form variance stabilizing transformation of the ML estimator.
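The abstract describes the exact transformation only as a composition of the inverse hyperbolic cotangent and square root functions. A minimal sketch of that composition, assuming the form arccoth(√t) with no additional constants or argument scaling (the paper's precise transformation is not reproduced here, and the function names are illustrative):

```python
import numpy as np

def arccoth(x):
    # Inverse hyperbolic cotangent, defined for |x| > 1;
    # equivalent to arctanh(1/x) on that domain.
    return 0.5 * np.log((x + 1.0) / (x - 1.0))

def stabilizing_transform(t):
    # Hypothetical composition arccoth(sqrt(t)) of the two functions
    # named in the abstract; requires t > 1. Any scaling constants in
    # the paper's exact transformation are omitted as unknowns.
    return arccoth(np.sqrt(t))
```

For example, `stabilizing_transform(2.0)` equals `arctanh(1/sqrt(2))`, which can serve as a sanity check of the arccoth identity used above.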

Published in:

IEEE Transactions on Information Theory (Volume: 59, Issue: 7)