Continuing an earlier work, we further develop bounds on the mean-squared error (MSE) when estimating a deterministic parameter vector θ0 in a given estimation problem, as well as estimators that achieve the optimal performance. The traditional Cramer-Rao (CR) type bounds provide benchmarks on the variance of any estimator of θ0 under suitable regularity conditions, while requiring a priori specification of a desired bias gradient. To circumvent the need to choose the bias, which is impractical in many applications, it was suggested in our earlier work to directly treat the MSE, which is the sum of the variance and the squared norm of the bias. While previously we developed MSE bounds assuming a linear bias vector, here we study, in the same spirit, affine bias vectors. We demonstrate through several examples that allowing for an affine transformation can often improve the performance significantly over a linear approach. Using convex optimization tools, we show that in many cases we can choose an affine bias that results in an MSE bound that is smaller than the unbiased CR bound for all values of θ0. Furthermore, we explicitly construct estimators that achieve these bounds in cases where an efficient estimator exists, by performing an affine transformation of the standard maximum likelihood (ML) estimator. This leads to estimators with a smaller MSE than ML for all possible values of θ0.
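The idea that a biased transformation of the ML estimator can have uniformly smaller MSE than ML can be illustrated with a classical scalar example not taken from the paper itself: estimating a variance σ² from n zero-mean Gaussian samples. For the statistic T = Σ xᵢ², any scaling c·T has closed-form bias and variance, and the MSE-optimal scaling c = 1/(n+2) beats the ML scaling c = 1/n for every value of σ². The function names and parameter choices below are illustrative assumptions, not constructs from the paper.

```python
# Illustrative sketch (not the paper's construction): estimating sigma^2
# from x_i ~ N(0, sigma^2), i = 1..n, via the statistic T = sum(x_i^2).
# For a scaled estimator c*T one has, in closed form,
#   bias(c*T)     = (c*n - 1) * sigma^2
#   variance(c*T) = 2*n*c^2 * sigma^4
#   MSE(c*T)      = sigma^4 * ((c*n - 1)^2 + 2*n*c^2)
# ML corresponds to c = 1/n (unbiased here); minimizing the MSE over c
# gives the biased scaling c = 1/(n + 2). Since the MSE ratio does not
# depend on sigma^2, the biased choice dominates ML for ALL sigma^2,
# in the spirit of the dominance results discussed in the abstract.

def mse_scaled(c, n, sigma2=1.0):
    """Exact MSE of the estimator c*T, with T = sum of n squared samples."""
    bias_sq = (c * n - 1.0) ** 2 * sigma2 ** 2  # squared norm of the bias
    variance = 2.0 * n * c ** 2 * sigma2 ** 2   # variance of c*T
    return bias_sq + variance

n = 10
for sigma2 in (0.5, 1.0, 4.0):
    mse_ml = mse_scaled(1.0 / n, n, sigma2)         # ML / unbiased scaling
    mse_opt = mse_scaled(1.0 / (n + 2), n, sigma2)  # MSE-optimal scaling
    assert mse_opt < mse_ml  # dominance holds for every sigma^2 tested
```

For n = 10 and σ² = 1 the ML estimator attains MSE 2/n = 0.2, while the biased scaling attains 1/6 ≈ 0.167, showing how trading a small bias for a larger variance reduction lowers the total MSE uniformly.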
Date of Publication: Aug. 2008