Exploring Bona Fide Optimal Noise for Bayesian Parameter Estimation



I. INTRODUCTION
Adding some random noise to a signal before quantization has been shown to be beneficial for analog-to-digital converters, resulting in smaller signal distortion and wider system dynamic range [1]-[3]. The technique of adding noise, or dithering, was perhaps the first to recognize a beneficial role for noise in a signal processing context [1]-[5]. The term stochastic resonance was subsequently coined to describe the possible mechanism for maximizing the response of a bistable system to a small periodic force by optimizing the noise intensity to a non-zero level [6]. Stochastic resonance soon attracted much attention in physics and biology [7]-[17]. Gammaitoni [1] first pointed out that stochastic resonance, far from being limited to a resonant phenomenon, can also be interpreted as a special case of dithering and is related to the notion of noise-induced threshold crossings. Similarly, Collins et al. [18], [19] coined the term aperiodic stochastic resonance for characterizing the noise-induced behavior in excitable systems with aperiodic inputs, and Stocks [20] defined suprathreshold stochastic resonance using Shannon's average mutual information measure between the input and the output of a summing network of threshold devices. These widened concepts of stochastic resonance, which are closely related to the field of statistical signal processing, are now widely referred to as noise enhancement or noise benefit [21]-[50].
There are two main situations in which the noise benefit has been exploited in signal estimation: one is implementing suboptimal estimators in practical estimation problems to avoid optimal estimators that are in general too complex or intractable [34], [51]. The other is estimating a signal in a fusion center from the observed data of a number of low-cost sensors (e.g. quantizers). Such sensors with a few bits are often deployed over a sensing field to compose wireless sensor networks in distributed estimation problems [38], [52], [53]. For the first situation, the performance of some easily implemented suboptimal estimators was shown to be substantially improved by exploiting the benefits of added noise [22], [32], [34]-[38], [42], [46]-[49]. In the second situation, rich results from utilizing various kinds of noise have been reported for quantized observations [1], [2], [12]-[14], [20], [21], [23]-[25], [27], [30], [31], [33], [36]-[43], [45]-[49]. For instance, Papadopoulos et al. developed a methodology of adding a control input before signal quantization at the sensor to achieve the maximum possible performance for quantizer-based networks [23]. They also noticed the option of using feedback from past observations for efficient estimators in terms of mean-square error (MSE) [23]. Modeling suprathreshold stochastic resonance as stochastic quantization, McDonnell et al. systematically studied the optimal linear and nonlinear decoding schemes associated with the information bound on the MSE [12]. The optimal Bayesian estimators constructed from the quantizer outputs were also explicitly derived, and a basic mechanism was provided for the performance improvement of the optimal Bayesian estimator by increasing the noise level [24], [25], [30], [31].
Since the added noise can be artificially designed, finding the optimal probability density function (PDF) of the added noise becomes an interesting question [26], [28], [29], [32], [33], [36]-[43], [45]-[49]. In particular, Chen et al. considered all possible PDFs of added noise to optimize an arbitrary fixed or variable estimator, and proved, using the properties of the convex hull and the Carathéodory theorem, that the optimal noise, if it exists, is a randomization of a finite number (no more than two) of constant vectors [28], [29], [32], [38], [39]. This kind of optimal noise PDF then inspired a series of theoretical improvability studies of estimation under various estimation criteria [26], [33], [37], [40]-[43], [45]-[49]. An interesting question is whether optimal bona fide noise, rather than a constant bias, exists for enhancing the estimator performance. Interestingly, Uhlich [42] proposed a new estimator constructed from bagged estimators that are modified by mutually independent noise samples, and derived necessary and sufficient conditions for the existence of the optimal noise. Uhlich [42] also found that the optimal noise PDF, not limited to the noise type revealed in [28], [29], [32], [38], [39], has non-trivial complicated shapes. For minimizing the MSE of a combiner of identical estimators, we also found that solving for the optimal noise PDF is a constrained nonlinear functional optimization problem, and the approximate PDFs of the optimal noise are likewise complicated [46], [48].
Although many important results for noise benefits in estimators have been obtained, some questions remain unsolved. For instance, it is known that, for random parameter estimation, a lower bound on the MSE of any estimator, called the Bayesian Cramér-Rao bound (BCRB), is directly calculated from the primary observations [54]. Then, two interesting questions need to be addressed: after noise is artificially injected into the primary observed data, resulting in updated data, does the corresponding new BCRB calculated on the updated data increase or decrease? And can the minimum MSE (MMSE) estimator deduced from the updated data achieve a lower MSE than that of the original MMSE estimator based on the primary observations?
In this paper, we theoretically provide the solutions to the aforementioned crucial questions, and elucidate the possibility of exploiting noise benefits in some easily implemented suboptimal estimators. We argue that the noise-enhanced Bayesian estimators proposed in [22]-[43], [45]-[49] in recent years can be mainly classified into four categories: (i) the noise-modified estimator established on a single sensor [32], (ii) the linear minimum MSE (LMMSE) estimator based on a single sensor, (iii) the noise-enhanced Bayesian estimator formed as the average of the outputs of an ensemble of identical sensors [42], and (iv) the linear combination estimator executing the LMMSE transform on an array of identical or nonidentical sensors [48].
For the noise-modified estimator, it was proved that the optimal added noise is just a constant bias [32]. However, the optimized MSE achieved by the noise-modified estimator still falls far short of the MSE of the original MMSE estimator. For useful parameter estimation, it helps to incorporate both the statistical properties of the original background noise and the prior knowledge of the random parameter. This design principle leads to the LMMSE estimator with adaptively adjustable weights that depend only on the first two moments of the joint PDF. We demonstrate that, based on a single sensor, the LMMSE estimator obtains a lower MSE than the noise-modified estimator does, but the minimum MSE achieved by the LMMSE estimator is still larger than that of the MMSE estimator. Moreover, the optimal added noise for minimizing the MSE of the LMMSE estimator is still a constant bias, not bona fide random noise.
Furthermore, based on a sufficiently large number of identical sensors, it is shown that the noise-enhanced estimator can efficiently approach the MMSE estimator by means of bona fide optimal noise that is not restricted to a constant bias. However, the noise-enhanced estimator is inapplicable to an ensemble of nonidentical sensors. For the general case of nonidentical sensors, we theoretically demonstrate that the linear combination estimator always outperforms the noise-enhanced estimator, and is able to perform as efficiently as the MMSE estimator. Using observations from one-bit-quantizer sensors, we illustratively confirm the above conclusions of the performance comparison between the four considered estimators. The complicated PDFs of the bona fide optimal noise for the linear combination estimator are also presented. These interesting results on mutually independent added noise components in sensors demonstrate their potential benefits for parameter estimation problems.

II. PARAMETER ESTIMATION MODEL AND PROBLEM FORMULATION
Consider a parameter estimation scenario with the scalar observations
$x_n = s(\theta) + \xi_n$, for $n = 1, 2, \ldots, N$, (1)
where $s(\theta)$ is a function of an unknown random parameter $\theta$ with a prior PDF $f_\theta$, and the mutually independent samples $\xi_n$ of background noise, uncorrelated with $\theta$, have zero mean and common PDF $f_\xi$. The statistical characteristics of the observation data $\mathbf{x} = [x_1, x_2, \ldots, x_N]^\top$ can be described by the joint PDF $f_{\mathbf{x},\theta}(\mathbf{x}, \theta) = f_{\mathbf{x}|\theta}(\mathbf{x}|\theta) f_\theta(\theta)$. Here, $f_{\mathbf{x}|\theta}$ is the conditional PDF. It is well known [54] that the MSE $R$ of any estimator $\hat{\theta}(\mathbf{x})$ satisfies the inequality
$R = E_{\mathbf{x},\theta}(\varepsilon^2) \geq J_B^{-1}$, (2)
where the error of the estimator is $\varepsilon = \hat{\theta}(\mathbf{x}) - \theta$, and the Bayesian information $J_B$ is defined as
$J_B = E_\theta[J_F(\theta)] + J_P$, (3)
with the prior Fisher information $J_P = E_\theta[(\partial \ln f_\theta(\theta)/\partial\theta)^2]$ of the prior PDF $f_\theta$ and the Fisher information $J_F(\theta) = E_{\mathbf{x}|\theta}[(\partial \ln f_{\mathbf{x}|\theta}(\mathbf{x}|\theta)/\partial\theta)^2]$ of the observation data $\mathbf{x}$ with respect to the parameter $\theta$ [54]. The lower bound $J_B^{-1}$ in (2) on the MSE $R$ of any estimator is also called the BCRB [54]. Here, $E_{\mathbf{x},\theta}(\cdot)$, $E_{\mathbf{x}|\theta}(\cdot)$ and $E_\theta(\cdot)$ denote expectations with respect to the joint PDF $f_{\mathbf{x},\theta}$, the conditional PDF $f_{\mathbf{x}|\theta}$ and the prior PDF $f_\theta$, respectively.
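Before stating the theorems, a quick numerical illustration may help. The following minimal Python sketch checks the bound (2) by Monte Carlo in the linear-Gaussian case $s(\theta) = \theta$ with a Gaussian prior, where the posterior mean attains the BCRB exactly; the parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal sketch: Monte Carlo check of the BCRB in (2) for the assumed
# linear-Gaussian case s(theta) = theta, theta ~ N(0, s_th^2) and
# xi_n ~ N(0, s_xi^2), where the posterior mean attains the bound exactly.
rng = np.random.default_rng(0)
N, s_th, s_xi, trials = 8, 1.0, 0.5, 200_000

J_B = N / s_xi**2 + 1 / s_th**2               # Bayesian information (3)
theta = rng.normal(0.0, s_th, trials)
x = theta[:, None] + rng.normal(0.0, s_xi, (trials, N))

theta_ms = x.sum(axis=1) / s_xi**2 / J_B      # posterior mean (MMSE estimator)
mse = np.mean((theta_ms - theta) ** 2)
print(f"BCRB = {1 / J_B:.5f}, empirical MMSE = {mse:.5f}")
```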
Theorem 1: After the injection of added noise $\eta_n$ into $x_n$, the updated observation data are $\tilde{x}_n = x_n + \eta_n = s(\theta) + \xi_n + \eta_n = s(\theta) + z_n$, and the updated BCRB is not less than the original one.
Proof: Letting $j_F$ be the Fisher information of one sample $x_n$, we have
$j_F(\theta) = j_\xi\, (\partial s/\partial\theta)^2$, (4)
with the Fisher information $j_\xi = E_\xi[(\partial \ln f_\xi(x)/\partial x)^2]$ of the PDF $f_\xi$. Then, for the independent identically distributed (i.i.d.) noise samples $\xi_n$, the Fisher information of the observation vector $\mathbf{x}$ is $J_F(\theta) = N j_F(\theta)$. Similarly, the Fisher information of the updated data $\tilde{x}_n$ can be expressed as $\tilde{j}_F(\theta) = j_z\, (\partial s/\partial\theta)^2$ with the Fisher information $j_z = E_z[(\partial \ln f_z(x)/\partial x)^2]$ of the PDF $f_z = f_\xi * f_\eta$ of the composite noise $z_n = \xi_n + \eta_n$. The Fisher information quantities $j_z, j_\xi, j_\eta > 0$ satisfy the convolution inequality
$1/j_z \geq 1/j_\xi + 1/j_\eta$, (5)
so that $j_z \leq j_\xi$. Thus, for the i.i.d. noise samples $z_n$, the Fisher information of the updated data is $\tilde{J}_F(\theta) = N j_z (\partial s/\partial\theta)^2 \leq J_F(\theta)$, and the updated Bayesian information satisfies
$\tilde{J}_B = E_\theta[\tilde{J}_F(\theta)] + J_P \leq J_B$. (6)
Substituting (6) into (2) proves Theorem 1.
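The convolution inequality (5) is Stam's inequality for Fisher information. A small numerical sketch, assuming a Laplace background noise and Gaussian added noise, can verify it by evaluating the three Fisher informations on a common grid:

```python
import numpy as np

# Numerical sketch of the convolution (Stam) inequality (5), assuming a
# Laplace background noise and Gaussian added noise; the three Fisher
# informations are evaluated on a common grid.
u = np.linspace(-20.0, 20.0, 8001)
du = u[1] - u[0]

def fisher(pdf):
    """j = integral of (f'(u))^2 / f(u) du via central differences."""
    d = np.gradient(pdf, du)
    mask = pdf > 1e-300
    return np.sum(d[mask] ** 2 / pdf[mask]) * du

b, s = 1.0, 0.8                                  # Laplace scale, Gaussian std
f_xi = np.exp(-np.abs(u) / b) / (2 * b)          # exact j_xi = 1/b^2
f_eta = np.exp(-u**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))  # j_eta = 1/s^2
f_z = np.convolve(f_xi, f_eta, mode="same") * du # PDF of z = xi + eta

j_xi, j_eta, j_z = fisher(f_xi), fisher(f_eta), fisher(f_z)
print(f"1/j_z = {1/j_z:.3f} >= 1/j_xi + 1/j_eta = {1/j_xi + 1/j_eta:.3f}")
```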
Theorem 1 only tells us that the updated BCRB of the updated data vector $\tilde{\mathbf{x}}$ increases. However, based on the MSE criterion and among all estimators, the minimum MSE $R_{ms}$ is achieved by the MMSE estimator $\hat{\theta}_{ms}$ [51], [54]. Therefore, an interesting question is whether the updated MMSE estimator $\hat{\theta}_{ms}(\tilde{\mathbf{x}}) = E_{\theta|\tilde{\mathbf{x}}}(\theta|\tilde{\mathbf{x}})$ can achieve a lower MSE $\tilde{R}_{ms}$ than that of the original MMSE estimator $\hat{\theta}_{ms}$ or not. The answer is given in Theorem 2.
Theorem 2: It is impossible to design an updated MMSE estimator $\hat{\theta}_{ms}(\tilde{\mathbf{x}})$ that achieves a lower MSE $\tilde{R}_{ms}$ than the original MMSE estimator $\hat{\theta}_{ms}(\mathbf{x})$ does.
Proof of Theorem 2 is presented in Appendix A. Although this theorem reveals a negative aspect of added noise for the optimal MMSE estimator $\hat{\theta}_{ms}(\mathbf{x})$, it also indicates the possibility of noise benefits in some suboptimal estimators beyond the restricted conditions of [12], [20], [22]-[25], [27], [30], [31], [33], [36]-[43], [45]-[49]. In practice, the MMSE estimator $\hat{\theta}_{ms}(\mathbf{x})$ is usually too computationally intensive to implement [51], [54]; thus, we will exploit the optimal added noise in some easily implementable suboptimal estimators as follows.

III. NOISE BENEFITS IN SUBOPTIMAL ESTIMATORS

A. NOISE BENEFITS IN A NOISE-MODIFIED ESTIMATOR
Consider the scalar-parameter observation model $x = \theta + \xi$, where the observation $x$ plus the added noise $\eta$ is applied to a fixed sensor $g$, as shown in Fig. 1(a). The noise-modified estimator is then established on the updated sensor output as
$\hat{\theta}_{NM}(x + \eta) = g(x + \eta)$. (7)
The artificially added noise $\eta$ is optimized to minimize the MSE
$R_{NM} = E_{x,\theta,\eta}\{[g(x + \eta) - \theta]^2\}$. (8)
When the sensor $g$ is given, Chen et al. [32] proved that the minimum $\min_{f_\eta} R_{NM}$ is attained by the optimal added noise with the PDF $f^o_\eta(\eta) = \delta(\eta - \eta^*)$ and the constant [32]
$\eta^* = \arg\min_\eta E_{x,\theta}\{[g(x + \eta) - \theta]^2\}$. (9)
Thus, with this optimal bias $\eta^*$, the MSE $R_{NM}$ of the noise-modified estimator $\hat{\theta}_{NM}$ has a minimum $R^{\min}_{NM}$. (10)
Based on Theorem 2 and compared with the minimum MSEs achieved by the MMSE estimators $\hat{\theta}_{ms}(x)$ and $\hat{\theta}_{ms}(\tilde{x})$, $R^{\min}_{NM}$ in (10) satisfies
$R_{ms} \leq \tilde{R}_{ms} \leq R^{\min}_{NM}$. (11)
Example 1: Consider a uniformly distributed parameter $\theta$ with its PDF $f_\theta(\theta) = 1/a$ ($a > 0$) over the interval $(0, a)$ and zero otherwise. A one-bit quantizer sensor is given by
$g(x) = U(x - \gamma)$, (12)
where $\gamma$ is the threshold value of the quantizer and $U(\cdot)$ denotes the Heaviside step function. In Table 1, consider three background noise types with Gaussian, Rayleigh and Laplace PDFs, each with zero mean and standard deviation $\sigma_\xi > 0$. Here, the interval bound $a = 1$, the quantizer threshold $\gamma = 0$, and the standard deviations $\sigma_\xi$ take $\sqrt{0.1}$, $\sqrt{(4 - \pi)/20}$ and $1/\sqrt{5}$ for the three types of considered background noise, respectively. It is seen in Table 1 that the MSEs of $\hat{\theta}_{NM}(x)$ without the added noise are 0.2571, 0.3333 and 0.2643, respectively. With the optimal bias $\eta = \eta^*$ given in Table 1, the noise-modified quantizer $\hat{\theta}_{NM}(x + \eta^*)$ has the optimized MSEs of 0.1643, 0.1242 and 0.1771. However, compared with the MSEs of 0.0446, 0.0256 and 0.0533 achieved by the corresponding MMSE estimators $\hat{\theta}_{ms}(x)$, the improvement on the MSE of $\hat{\theta}_{NM}$ by the optimal added noise $\eta = \eta^*$ is limited.
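A minimal sketch of the Gaussian case of Example 1 follows, assuming the step-quantizer form of (12); it grid-searches the constant bias $\eta^*$ of (9) by Monte Carlo and should land near the tabulated values up to sampling error.

```python
import numpy as np

# Monte Carlo sketch of Example 1 (Gaussian background noise case): a grid
# search for the optimal constant bias eta* in (9), assuming the step
# quantizer g(x) = U(x - gamma) of (12).
rng = np.random.default_rng(1)
a, gamma, s_xi, trials = 1.0, 0.0, np.sqrt(0.1), 400_000

theta = rng.uniform(0.0, a, trials)
x = theta + rng.normal(0.0, s_xi, trials)

def mse_nm(eta):
    g = (x + eta >= gamma).astype(float)         # noise-modified output (7)
    return np.mean((g - theta) ** 2)

etas = np.linspace(-1.0, 1.0, 401)
mses = [mse_nm(e) for e in etas]
k = int(np.argmin(mses))
print(f"MSE at eta = 0: {mse_nm(0.0):.4f}")      # ~0.2571 in Table 1
print(f"eta* ~ {etas[k]:.3f}, min MSE_NM ~ {mses[k]:.4f}")  # ~0.1643
```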

B. NOISE BENEFITS IN A LMMSE ESTIMATOR
In order to further reduce the MSE of the noise-modified estimator, we perform the LMMSE transform on the sensor output $g(x + \eta)$ and establish the LMMSE estimator
$\hat{\theta}_L = w\, g(x + \eta) + w_0$, (13)
with adjustable weights $w$ and $w_0$, as shown in Fig. 1(b).
Theorem 3: For the LMMSE estimator $\hat{\theta}_L$ of (13), the optimal noise $\eta$ has the PDF $f^o_\eta(\eta) = \delta(\eta - \eta^\dagger)$ with the constant $\eta^\dagger$ given in (14). The MSE $R_L$ of $\hat{\theta}_L$ then attains the minimum
$R^{\min}_L = \mathrm{var}(\theta) - \dfrac{\mathrm{cov}^2(\theta, g(x + \eta^\dagger))}{\mathrm{var}(g(x + \eta^\dagger))}$, (15)
where $\mathrm{var}(\theta)$ is the variance of the parameter $\theta$. It is seen in Table 1 that, without the added noise, the MSEs of the LMMSE estimator $\hat{\theta}_L(x)$ are 0.0701, 0.0833 and 0.0741 for the three background noise types, respectively, which are already lower than those of the noise-modified estimator $\hat{\theta}_{NM}(x + \eta^*)$ with its optimal added noise $\eta^*$ in (9). Utilizing the optimal added noise $\eta^\dagger$ in (14), the MSEs of $\hat{\theta}_L(x + \eta^\dagger)$ can be further reduced to 0.0547, 0.0396 and 0.0589. However, the improved MSEs of $\hat{\theta}_L(x + \eta^\dagger)$ still cannot approach the MSE achieved by the MMSE estimator $\hat{\theta}_{ms}(x)$. Moreover, the optimal ''noise'' $\eta = \eta^\dagger$ is still a constant bias, rather than bona fide random noise.
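A companion sketch for (13), under the same assumed quantizer and Gaussian noise settings as above, estimates the weights from sample moments and searches the same bias grid:

```python
import numpy as np

# Companion sketch for the LMMSE estimator (13): the weights w and w_0 are
# estimated from sample moments, and a bias grid is searched; the step
# quantizer and Gaussian noise settings are the assumptions used above.
rng = np.random.default_rng(2)
a, gamma, s_xi, trials = 1.0, 0.0, np.sqrt(0.1), 400_000
theta = rng.uniform(0.0, a, trials)
x = theta + rng.normal(0.0, s_xi, trials)

def mse_lmmse(eta):
    g = (x + eta >= gamma).astype(float)
    vg = np.var(g)
    if vg < 1e-9:                                # degenerate sensor output
        return np.var(theta)                     # best affine rule is E(theta)
    w = np.cov(theta, g)[0, 1] / vg              # w = cov(theta, g) / var(g)
    w0 = theta.mean() - w * g.mean()             # w0 = E(theta) - w E(g)
    return np.mean((w * g + w0 - theta) ** 2)

etas = np.linspace(-1.0, 1.0, 401)
k = int(np.argmin([mse_lmmse(e) for e in etas]))
print(f"MSE at eta = 0: {mse_lmmse(0.0):.4f}")   # ~0.0701 in Table 1
print(f"eta_dagger ~ {etas[k]:.3f}, min MSE_L ~ {mse_lmmse(etas[k]):.4f}")
```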

C. NOISE BENEFITS IN IDENTICAL SENSORS
The configurations of both the noise-modified estimator $\hat{\theta}_{NM}$ and the LMMSE estimator $\hat{\theta}_L$ operate on only one sensor. Next, consider an ensemble of $M$ identical sensors that receive the same input data $x$, where mutually i.i.d. noise components $\eta_m$ are also fed into each sensor, as shown in Fig. 2.
Here, the $\eta_m$ have the common PDF $f_\eta$ and are mutually independent. Then, the average value of all sensor outputs forms the noise-enhanced estimator
$\hat{\theta}_{NE} = \dfrac{1}{M} \sum_{m=1}^{M} g(x + \eta_m)$. (16)
The MSE of $\hat{\theta}_{NE}$ can be computed as
$R_{NE} = \dfrac{1}{M} E_x\{E_\eta[g^2(x + \eta)]\} + \dfrac{M - 1}{M} E_x\{E^2_\eta[g(x + \eta)]\} - 2 E_{x,\theta}\{\theta E_\eta[g(x + \eta)]\} + E_\theta(\theta^2)$, (17)
where the correlations $E_\eta[g(x + \eta_m) g(x + \eta_k)] = E^2_\eta[g(x + \eta)]$ for $m \neq k$ follow from the mutual independence of the $\eta_m$.
Theorem 4: For minimizing the MSE $R_{NE}$ of the noise-enhanced estimator $\hat{\theta}_{NE}$ by mutually i.i.d. noise components $\eta_m$, the optimal noise does not have the PDF $f^o_\eta(\eta) = \delta(\eta - \eta^\ddagger)$ for a constant bias $\eta = \eta^\ddagger$. For the given background noise $\xi$ and added noise components $\eta_m$, the MSE $R_{NE}$ is a monotonically decreasing function of the sensor number $M$. Moreover, for a sufficiently large sensor number $M \to \infty$, the MSE satisfies
$\lim_{M \to \infty} R_{NE} = E_{x,\theta}\{[E_\eta(g(x + \eta)) - \theta]^2\}$, (18)
whose minimum over $f_\eta$ is not larger than $R^{\min}_{NM}$. Proof of Theorem 4 is given in Appendix C. Interestingly, as the sensor number $M \to \infty$, (16) can be asymptotically represented as
$\hat{\theta}_{NE} \to E_\eta[g(x + \eta)]$, (19)
which is just the noise-enhanced estimator proposed by Uhlich [42]. Theorem 4 only tells us that the optimal noise is not a constant bias, but which type of noise is optimal for minimizing the MSE of the noise-enhanced estimator $\hat{\theta}_{NE}$? This non-convex problem is in general intractable, because the term $E_x\{E^2_\eta[g(x + \eta)]\}$ in (17) is a nonlinear functional of the PDF $f_\eta$. Therefore, the minimization problem $\min_{f_\eta} R_{NE}$ usually employs the PDF approximation method [26], [42], [47], [48], [56] to obtain an approximate optimal solution of the form
$f^o_\eta(\eta) = \sum_{k=1}^{K} \dfrac{\lambda_k}{\sigma_k}\, \phi\!\left(\dfrac{\eta - \mu_k}{\sigma_k}\right)$, (20)
with the normalization coefficients $\lambda_k \geq 0$ satisfying the constraint $\sum_{k=1}^{K} \lambda_k = 1$, the Gaussian window function $\phi(u) = \exp(-u^2/2)/\sqrt{2\pi}$, means $\mu_k$, and standard deviations $\sigma_k \geq 0$. The approximate PDF will asymptotically converge to the existing optimal PDF $f^o_\eta$ as the number $K$ of window functions increases [26], [42], [47], [48], [56].
Example 2: Consider $M = 1000$ identical quantizers of (12), with the other parameters the same as in Example 1. Sequential quadratic programming [56] is used to numerically solve for the approximate PDF $f^o_\eta$ of (20). In Figs. 3(a), (b) and (c), the approximate PDFs $f^o_\eta$ are plotted for estimating the uniformly distributed parameter $\theta$ buried in the three background noise types, respectively. It is seen in Fig. 3 that the approximate optimal added noise PDF $f^o_\eta$ exhibits non-trivial complicated shapes and varies with the background noise type. These approximate PDFs $f^o_\eta$ imply a bona fide random noise, rather than a constant bias. Moreover, substituting the obtained approximate optimal noise PDFs $f^o_\eta$ into (17), the corresponding MSE values of $R_{NE}$ are reduced to 0.0448, 0.0267 and 0.0536, as listed in Table 1, which are almost equal to the corresponding MSEs achieved by the MMSE estimator $\hat{\theta}_{ms}(x)$.
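As an illustration of how (20) can be tuned in practice, the sketch below parameterizes $f_\eta$ as a $K$-term Gaussian mixture and minimizes the limiting MSE (18) with SciPy's SLSQP routine (a sequential quadratic programming method, in the spirit of [56]). The step-quantizer sensor and all numerical settings are assumptions; for this sensor, $E_\eta[g(x + \eta)]$ is a closed-form mixture of Gaussian CDFs.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Minimal sketch of the PDF-approximation method (20): f_eta is a K-term
# Gaussian mixture whose weights, means and widths are tuned by SLSQP.
# The step quantizer g(x) = U(x - gamma) is an assumption; for it,
# E_eta[g(x + eta)] is a closed-form mixture of Gaussian CDFs.
rng = np.random.default_rng(3)
a, gamma, s_xi, K, trials = 1.0, 0.0, np.sqrt(0.1), 4, 50_000
theta = rng.uniform(0.0, a, trials)
x = theta + rng.normal(0.0, s_xi, trials)

def r_ne_limit(p):
    """Large-M limit (18) of R_NE under the mixture noise PDF."""
    lam, mu, sig = p[:K], p[K:2*K], p[2*K:]
    lam = np.maximum(lam, 1e-9)
    lam = lam / lam.sum()                        # enforce sum(lam_k) = 1
    G = sum(l * norm.cdf((x - gamma + m) / s)    # E_eta[g(x + eta)]
            for l, m, s in zip(lam, mu, sig))
    return np.mean((G - theta) ** 2)

p0 = np.concatenate([np.full(K, 1/K), np.linspace(-0.5, 0.5, K), np.full(K, 0.3)])
bounds = [(0.0, 1.0)] * K + [(-3.0, 3.0)] * K + [(1e-3, 3.0)] * K
res = minimize(r_ne_limit, p0, method="SLSQP", bounds=bounds)
print(f"approximate min of lim R_NE ~ {res.fun:.4f}")
```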

D. NOISE BENEFITS IN NONIDENTICAL SENSORS
From (16), the noise-enhanced estimator $\hat{\theta}_{NE}$ uniformly assigns the same weight $1/M$ to the identical sensors $g$, which is inappropriate for an ensemble of nonidentical sensors $g_m$, shown in Fig. 4. Carrying out an LMMSE transform on the sensor outputs $g_m(x + \eta_m)$, a linear combination estimator is established as
$\hat{\theta}_{LC} = \mathbf{w}^\top \mathbf{g} + w_0$, (21)
where $\mathbf{g} = [g_1(x + \eta_1), g_2(x + \eta_2), \ldots, g_M(x + \eta_M)]^\top$ is the sensor output vector, $\mathbf{w}$ is the weight vector, and $w_0$ is the bias weight [48]. Then, the MSE of $\hat{\theta}_{LC}$ can be expressed as
$R_{LC} = E_{x,\theta,\eta}\{[\mathbf{w}^\top \mathbf{g} + w_0 - \theta]^2\}$. (22)
An interesting fact about $R_{LC}$ in (22) is that its minimization with respect to the weights is uncoupled from its minimization with respect to the added noise, and can be solved theoretically first. Setting the derivative $\partial R_{LC}/\partial w_0 = 0$ and the gradient $\partial R_{LC}/\partial \mathbf{w} = \mathbf{0}$ produces $w_0 = E_\theta(\theta) - \mathbf{w}^\top E_{x,\eta}(\mathbf{g})$ and $\mathbf{w} = \mathbf{C}^{-1}\mathbf{p}$, where $\mathbf{p} = E_{x,\eta}\{[\theta - E_\theta(\theta)][\mathbf{g} - E_{x,\eta}(\mathbf{g})]\}$ is the cross-correlation vector of the parameter $\theta$ and the sensor vector $\mathbf{g}$, and $\mathbf{C} = E_{x,\eta}\{[\mathbf{g} - E_{x,\eta}(\mathbf{g})][\mathbf{g} - E_{x,\eta}(\mathbf{g})]^\top\}$ is the covariance matrix of $\mathbf{g}$ [48], [57]. Then, using the optimal weights $w_0$ and $\mathbf{w}$, the linear combination estimator $\hat{\theta}_{LC}$ in the LMMSE sense can be rewritten as
$\hat{\theta}_{LC} = \mathbf{p}^\top \mathbf{C}^{-1}[\mathbf{g} - E_{x,\eta}(\mathbf{g})] + E_\theta(\theta)$, (23)
and the minimized MSE $R_{LC}$ with respect to the weights is given by
$R_{LC} = \mathrm{var}(\theta) - \mathbf{p}^\top \mathbf{C}^{-1} \mathbf{p}$. (24)
Theorem 5: The linear combination estimator $\hat{\theta}_{LC}$ is never worse than the noise-enhanced estimator $\hat{\theta}_{NE}$ in (16) in the same environment, and the MSEs satisfy $R_{LC} \leq R_{NE}$.
Theorem 5 is proved in Appendix D, and is illustrated in the following examples.
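Before the examples, a quick numerical sketch of (23) and (24): sample-based estimates of $\mathbf{p}$ and $\mathbf{C}$ give the weights $\mathbf{w} = \mathbf{C}^{-1}\mathbf{p}$ for a small array of nonidentical step quantizers (two assumed threshold groups, in the spirit of Example 4 below) with plain Gaussian added noise rather than an optimized PDF, and the equal-weight average is compared against it.

```python
import numpy as np

# Minimal sketch of (23)-(24): sample-based estimates of p and C give the
# LMMSE weights w = C^{-1} p for a small array of nonidentical step
# quantizers (two assumed threshold groups) with Gaussian added noise
# components instead of an optimized PDF.
rng = np.random.default_rng(4)
M, a, s_xi, s_eta, trials = 8, 1.0, np.sqrt(0.1), 0.5, 200_000
gammas = np.array([0.0] * (M // 2) + [1.0] * (M // 2))   # two threshold groups

theta = rng.uniform(0.0, a, trials)
x = theta + rng.normal(0.0, s_xi, trials)
eta = rng.normal(0.0, s_eta, (trials, M))                # i.i.d. added noise
g = (x[:, None] + eta >= gammas).astype(float)           # sensor outputs

gc = g - g.mean(axis=0)                                  # centered outputs
p = gc.T @ (theta - theta.mean()) / trials               # cross-correlation vector
C = gc.T @ gc / trials                                   # covariance matrix of g
w = np.linalg.solve(C, p)                                # w = C^{-1} p

mse_lc = np.var(theta) - p @ w                           # (24)
mse_ne = np.mean((g.mean(axis=1) - theta) ** 2)          # equal 1/M weights
print(f"R_LC ~ {mse_lc:.4f} <= R_NE ~ {mse_ne:.4f}")     # Theorem 5
```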
Example 3: Consider again Example 1 and minimize the MSE $R_{LC}$ in (24) by optimizing the added noise. For $M = 1000$ identical quantizers $g_m = g$ of (12) with the same threshold $\gamma = 0$, the approximate noise PDFs $f^o_\eta$ of the optimization problem $\min_{f_\eta} R_{LC}$ are also numerically solved for the three background noise types, as shown in Figs. 3(d), (e) and (f), respectively. The corresponding MSE values of $R_{LC}$ are 0.0447, 0.0266 and 0.0533, as listed in Table 1, which also approach the MSE achieved by the MMSE estimator $\hat{\theta}_{ms}$. However, compared to the noise-enhanced estimator $\hat{\theta}_{NE}$, $\hat{\theta}_{LC}$ improves the MSE only slightly when estimating parameters from the observations of a large number of identical sensors.
Example 4: Consider $M$ (an even number) quantizers $g_m$ in two groups: one group of $M/2$ quantizers has the same threshold $\gamma_1 = 0$, and the other group of $M/2$ quantizers has the same threshold $\gamma_2 = 1$. The background noise $\xi$ is Gaussian with zero mean and standard deviation $\sigma_\xi = \sqrt{0.1}$, and the other conditions are the same as in Example 1. The MSEs of the estimators $\hat{\theta}_{NE}$ and $\hat{\theta}_{LC}$ are plotted as functions of the sensor number $M$ in Fig. 5, where the MSE is minimized with respect to the added noise via the approximate PDF in (20). It is seen in Fig. 5 that, upon increasing the sensor number $M$ and dividing the sensors into two groups, the MSE of the linear combination estimator $\hat{\theta}_{LC}$ still asymptotically approaches the value 0.0446 achieved by the MMSE estimator $\hat{\theta}_{ms}$ (dashed line). Meanwhile, under the same conditions, the MSE of the noise-enhanced estimator $\hat{\theta}_{NE}$ approaches 0.0715 asymptotically at large $M$, rather than 0.0446. The reason is that the linear combination estimator $\hat{\theta}_{LC}$ allocates different weights to sensors $g_m$ with different thresholds. For instance, for $M = 200$, the weights are $w_m = 3.0976 \times 10^{-3}$ ($m = 1, 2, \ldots, 100$) for the group of quantizers with threshold $\gamma_1$, and $w_k = 5.1738 \times 10^{-3}$ ($k = 1, 2, \ldots, 100$) for the group of quantizers with threshold $\gamma_2$. In contrast, the estimator $\hat{\theta}_{NE}$ in (16) always endows all sensors with the fixed weight $1/M$, regardless of the distinct thresholds of the two groups of sensors.
For comparison, the MSEs of both $\hat{\theta}_{LC}$ and $\hat{\theta}_{NE}$ are also plotted in Fig. 5 for $M$ identical sensors $g$ with threshold $\gamma_1 = 0$; both approach the MMSE value of 0.0446 as the number $M$ increases. For a moderate sensor number (e.g. $2 \leq M \leq 10^2$), the MSE $R_{LC}$ of $\hat{\theta}_{LC}$ also clearly outperforms the MSE $R_{NE}$ of $\hat{\theta}_{NE}$. These points indicate the superiority of the linear combination estimator $\hat{\theta}_{LC}$ over the noise-enhanced estimator $\hat{\theta}_{NE}$ for Bayesian parameter estimation. In addition, for the total number $M = 200$ of sensors in two groups, the approximate noise PDF $f^o_\eta$ that minimizes the MSE $R_{LC}$ to 0.0451 is also numerically solved and shown in Fig. 6; it likewise has quite a complicated structure. For other threshold settings of $\gamma$ (not shown here), the superiority of the linear combination estimator is also confirmed.

IV. CONCLUSION
In this paper, where added noise is intentionally injected into the observed data, we first theoretically addressed the crucial question of the increase of the BCRB of the updated observations, and then proved that the updated MMSE estimator cannot provide a lower MSE than the original MMSE estimator does. We mainly investigated the noise benefits in certain types of suboptimal Bayesian estimators that are widely employed due to their ease of implementation and low cost. For the noise-modified estimator in (7) or the LMMSE estimator in (13) based on a single sensor, the optimal added noise that minimizes the estimator MSE is just a constant bias, not bona fide random noise. However, for the noise-enhanced estimator in (16) and the linear combination estimator in (21) established on an ensemble of sensors, the optimal noise that improves the estimator and makes it as efficient as the original MMSE estimator is a bona fide random signal, rather than a constant bias. In particular, for an ensemble of two groups of sensors with different settings, the linear combination estimator in (21), benefiting from the optimal added noise, can still approach the MSE of the original MMSE estimator when the sensor number is sufficiently large.
Some open questions remain. For instance, it is seen in (23) that the construction of the linear combination estimator $\hat{\theta}_{LC}$ requires the theoretical second-order moments: the cross-correlation between the parameter and the sensor outputs, and the covariance of the sensor outputs. To find these statistical quantities we need the joint PDF of the observation data and the added noise, of which, most of the time, we have no knowledge. Under this circumstance, how do we establish a practical estimator? If the observation data are stationary and ergodic, can we approximately compute these desired statistical characteristics from one sample realization of the data? Alternatively, under the least-squares error criterion, how do we establish an easily implementable least-squares estimator, and which kind of added noise is optimal for improving its MSE? In addition, in many signal estimation problems, the observations are obtained in sequential order as time progresses. Can we then present a sequential linear combination estimator in the LMMSE sense that continuously updates the weights and the added noise according to the newly incoming data? These interesting questions deserve further study.

APPENDIXES APPENDIX A PROOF OF THEOREM 2
Consider the case of $s(\theta) = \theta$ without loss of generality. For the updated observation data $\tilde{x}_n = \theta + \xi_n + \eta_n = \theta + z_n$, the joint PDF of the random parameter $\theta$ and the data vector $\tilde{\mathbf{x}}$ is described by $f_{\tilde{\mathbf{x}},\theta}(\tilde{\mathbf{x}}, \theta) = f_{\tilde{\mathbf{x}}|\theta}(\tilde{\mathbf{x}}|\theta) f_\theta(\theta)$, where the conditional PDF can be expressed as
$f_{\tilde{\mathbf{x}}|\theta}(\tilde{\mathbf{x}}|\theta) = \prod_{n=1}^{N} f_z(\tilde{x}_n - \theta)$.
Then, the updated MMSE estimator is given by $\hat{\theta}_{ms}(\tilde{\mathbf{x}}) = E_{\theta|\tilde{\mathbf{x}}}(\theta|\tilde{\mathbf{x}})$, with the MSE $\tilde{R}_{ms} = E_\theta(\theta^2) - E_{\tilde{\mathbf{x}}}[\hat{\theta}^2_{ms}(\tilde{\mathbf{x}})]$. Since $E_\theta(\theta^2)$ is given, we now find the optimal added noise vector $\boldsymbol{\eta}$ to maximize the term $E_{\tilde{\mathbf{x}}}[\hat{\theta}^2_{ms}(\tilde{\mathbf{x}})]$.
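Though the analytic proof is the point here, Theorem 2 is also easy to check numerically. A minimal sketch, assuming a uniform prior with Gaussian background and added noise, so that the posterior of $\theta$ given one observation is a truncated normal with a closed-form mean:

```python
import numpy as np
from scipy.stats import norm

# Minimal numerical check of Theorem 2, assuming theta ~ U(0,1), Gaussian
# background noise xi and Gaussian added noise eta; the posterior of theta
# given one observation is then a truncated normal with closed-form mean.
rng = np.random.default_rng(5)
trials, s_xi, s_eta = 200_000, np.sqrt(0.1), 0.4

def bayes_mse(s_z):
    """Monte Carlo MSE of the MMSE estimator for x = theta + z, z ~ N(0, s_z^2)."""
    theta = rng.uniform(0.0, 1.0, trials)
    xt = theta + rng.normal(0.0, s_z, trials)
    alpha, beta = (0.0 - xt) / s_z, (1.0 - xt) / s_z
    Z = norm.cdf(beta) - norm.cdf(alpha)
    est = xt + s_z * (norm.pdf(alpha) - norm.pdf(beta)) / Z  # posterior mean
    return np.mean((est - theta) ** 2)

print(f"original R_ms = {bayes_mse(s_xi):.4f}")                        # ~0.0446
print(f"updated  R_ms = {bayes_mse(np.sqrt(s_xi**2 + s_eta**2)):.4f}") # larger
```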

APPENDIX C PROOF OF THEOREM 4
If the optimal noise $\eta$ had the PDF $f^o_\eta(\eta) = \delta(\eta - \eta^\ddagger)$, then $M$ equal constants $\eta_m = \eta^\ddagger$ would be added to the $M$ identical sensors $g$, producing the same outputs $g(x + \eta^\ddagger)$. Thus, the noise-enhanced estimator $\hat{\theta}_{NE} = \sum_{m=1}^{M} g(x + \eta^\ddagger)/M = g(x + \eta^\ddagger)$ reduces to the output of a single sensor. Moreover, $E_\eta(\eta_m \eta_k) = (\eta^\ddagger)^2 \neq 0$ does not satisfy the mutual independence assumption on the $\eta_m$. Therefore, the optimal noise cannot be a constant bias. Using the Jensen inequality, and noting that $E_\eta[g(x + \eta^\ddagger)] = g(x + \eta^\ddagger)$ for given observation data $x$, we obtain for non-degenerate added noise the inequality $E_\eta[g^2(x + \eta)] > E^2_\eta[g(x + \eta)]$. Then, we have
$E_x\{E_\eta[g^2(x + \eta)]\} > E_x\{E^2_\eta[g(x + \eta)]\}$. (33)
Immediately, from (17) we find
$\dfrac{\partial R_{NE}}{\partial M} = -\dfrac{1}{M^2} E_x\{E_\eta[g^2(x + \eta)] - E^2_\eta[g(x + \eta)]\} < 0$. (34)
From (34), we deduce that $R_{NE}$ in (17) is a monotonically decreasing function of the sensor number $M$ when the observation data $x$ and the added noise are given. Furthermore, for a sufficiently large number $M \to \infty$, the MSE $R_{NE}$ in (17) simplifies to
$\lim_{M \to \infty} R_{NE} = E_{x,\theta}\{[E_\eta(g(x + \eta)) - \theta]^2\}$, (35)
which, by choosing $f_\eta(\eta) = \delta(\eta - \eta^*)$ with the constant $\eta^*$ given in (9), is at its minimum never larger than $R^{\min}_{NM}$. Then, Theorem 4 is proved.
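The monotone decrease in (34) can also be observed directly by simulation; a brief sketch with an assumed step quantizer and Gaussian added noise components:

```python
import numpy as np

# Brief simulation of the monotone decrease of R_NE with the sensor number M
# (cf. (34)), for an assumed step quantizer with threshold 0 and Gaussian
# added noise components of standard deviation 0.5.
rng = np.random.default_rng(6)
trials, s_xi, s_eta, gamma = 100_000, np.sqrt(0.1), 0.5, 0.0
theta = rng.uniform(0.0, 1.0, trials)
x = theta + rng.normal(0.0, s_xi, trials)

for M in (1, 2, 5, 20, 100):
    eta = rng.normal(0.0, s_eta, (trials, M))
    theta_ne = (x[:, None] + eta >= gamma).mean(axis=1)  # average sensor output
    print(f"M = {M:3d}: R_NE ~ {np.mean((theta_ne - theta) ** 2):.4f}")
```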

APPENDIX D PROOF OF THEOREM 5
For an ensemble of sensors $g_m$, the noise-enhanced estimator can be rewritten as
$\hat{\theta}_{NE} = \dfrac{1}{M} \mathbf{1}^\top \mathbf{g}$, (36)
and its minimum MSE can be expressed as
$R^{\min}_{NE} = \min_{f_\eta} R_{NE} = R_{NE}\big|_{f_\eta = f^{o*}_\eta}$, (37)
with the optimal added noise PDF $f^{o*}_\eta$. However, for any PDF $f_\eta$ of the added noise and the $M \times 1$ dimensional vector $\mathbf{1}$ of all ones, the weights $\mathbf{w} = \mathbf{1}/M$ and $w_0 = 0$ are an admissible (but generally suboptimal) choice in (21), under which $\hat{\theta}_{LC}$ coincides with $\hat{\theta}_{NE}$. Since the MSE of the designed linear combination estimator in (23) is minimized over all weights, we have
$R_{LC} \leq R_{NE}$. (38)
Of course, even when the added noise PDF $f_\eta$ takes the expression $f^{o*}_\eta$ that is optimal for the noise-enhanced estimator $\hat{\theta}_{NE}$, the inequality (38) still holds, resulting in $R_{LC}|_{f_\eta = f^{o*}_\eta} \leq R^{\min}_{NE}$. Thus, Theorem 5 holds.

FIGURE 1 .
FIGURE 1. Block diagram representations of (a) the noise-modified estimator θ̂_NM in (7) and (b) the LMMSE estimator θ̂_L in (13). The optimal noise η is intentionally injected into a given sensor g to improve the MSE of the designed estimator.

FIGURE 2 .
FIGURE 2. Block diagram representation of the noise-enhanced estimator θ̂_NE in (16). Here, M mutually i.i.d. noise components η_m in the sensors are optimized to minimize the MSE of θ̂_NE.

FIGURE 3 .
FIGURE 3. Approximate PDFs f^o_η(η) for the noise-enhanced estimator θ̂_NE with background noise types of (a) Gaussian, (b) Rayleigh and (c) Laplace distributions. For the linear combination estimator θ̂_LC in (21), approximate PDFs f^o_η(η) are also presented for (d) Gaussian, (e) Rayleigh and (f) Laplace background noise types. Here, the window number K = 10 in (20).

FIGURE 4 .
FIGURE 4. Block diagram representation of the linear combination estimator θ̂_LC in (21). Besides the injection of mutually independent added noise components η_m, each sensor is also endowed with an adjustable weight w_m.

FIGURE 5 .
FIGURE 5. MSEs of the linear combination estimator θ̂_LC in (21) and the noise-enhanced estimator θ̂_NE in (16) versus the sensor number M.

FIGURE 6 .
FIGURE 6. Approximate PDF f^o_η(η) for the linear combination estimator θ̂_LC in (21) with two groups of nonidentical sensors g_m. Here, the number of sensors is M = 200; one group of M/2 quantizers has the threshold γ_1 = 0, and the other group of M/2 quantizers has the threshold γ_2 = 1. The background Gaussian noise ξ has zero mean and standard deviation σ_ξ = √0.1. The window number K = 10 in (20).

TABLE 1 .
MSEs of various estimators with optimal added noise.