Rethinking Biased Estimation: Improving Maximum Likelihood and the Cramer-Rao Bound


Rethinking Biased Estimation discusses methods to improve the accuracy of unbiased estimators used in many signal processing problems. At the heart of the proposed methodology is the use of the mean-squared error (MSE) as the performance criterion. One of the prime goals of statistical estimation theory is the development of performance bounds when estimating parameters of interest in a given model, as well as the construction of estimators that achieve these limits. When the parameters to be estimated are deterministic, a popular approach is to bound the MSE achievable within the class of unbiased estimators. Although it is well known that a lower MSE can be obtained by allowing for a bias, in applications it is typically unclear how to choose an appropriate bias. Rethinking Biased Estimation introduces MSE bounds that are lower than the unbiased Cramer-Rao bound (CRB) for all values of the unknowns. It then presents a general framework for constructing biased estimators with smaller MSE than the standard maximum-likelihood (ML) approach, regardless of the true unknown values. Specializing the results to the linear Gaussian model, it derives a class of estimators that dominate least-squares in terms of MSE. It also introduces methods for choosing regularization parameters in penalized ML estimators that outperform standard techniques such as cross-validation.
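
As a concrete illustration of the central idea, the minimal Python sketch below compares ordinary least-squares (which coincides with ML here, and is unbiased) against classical positive-part James-Stein shrinkage in a linear Gaussian model with an identity model matrix. James-Stein shrinkage is a well-known example of a biased estimator that dominates the unbiased one in MSE; it is used here only as an illustration and is not necessarily the estimator class derived in the monograph. The dimension, noise level, and true parameter values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    p, sigma, trials = 10, 1.0, 20000      # illustrative dimension, noise level, run count
    x_true = np.full(p, 0.5)               # hypothetical deterministic parameter vector

    se_ls = se_js = 0.0
    for _ in range(trials):
        # Linear Gaussian model y = Hx + w with H = I, w ~ N(0, sigma^2 I)
        y = x_true + sigma * rng.standard_normal(p)
        x_ls = y                           # least-squares / ML estimate (unbiased)
        # Positive-part James-Stein shrinkage: biased, but lower MSE for p >= 3
        shrink = max(0.0, 1.0 - (p - 2) * sigma**2 / float(y @ y))
        x_js = shrink * y
        se_ls += np.sum((x_ls - x_true) ** 2)
        se_js += np.sum((x_js - x_true) ** 2)

    print(f"LS/ML MSE: {se_ls / trials:.3f}")   # approx p * sigma^2 = 10
    print(f"JS    MSE: {se_js / trials:.3f}")   # strictly smaller in this setting

In this classical setting the dominance holds for every fixed value of the true parameter once p >= 3, which mirrors the "for all values of the unknowns" property claimed for the bounds and estimators above.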