Consider a group of mobile sensors (or robots) moving in a plane and tracking the position of a moving target by processing distance measurements. In this paper, we study the case of global tracking, i.e., the position of the target is determined with respect to a fixed (global) frame of reference, instead of a relative *group-centered* one. Hence, we hereafter employ the assumption that the global position and orientation (pose) of each of the tracking sensors are known with high accuracy (e.g., from GPS and compass measurements).

In the next two sections, we present the target's state propagation equations and the sensors' measurement model.

### C. State and Covariance Update

Once the distance measurements, **z**(*k*+1), from all the sensors are available, the target's state estimate and its covariance are updated as
$$\eqalignno{\hat{\bf x}_T(k+1\vert k+1) & =\hat{\bf x}_T(k+1\vert k) + K_{k+1} \tilde{\bf z}(k+1\vert k)\cr P_{k+1\vert k+1}& = P_{k+1\vert k}- K_{k+1} S_{k+1} K_{k+1}^{\rm T}& \hbox{(10)}}$$where *K*_{k+1} = *P*_{k+1│ k} *H*_{k+1}^{T} *S*_{k+1}^{−1} is the Kalman gain, *S*_{k+1} = *H*_{k+1} *P*_{k+1│ k} *H*_{k+1}^{T}+ *R* is the measurement residual covariance, and *R* = diag(σ_{i}^{2}) is the measurement noise covariance.
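For concreteness, the update step (10) can be exercised numerically. The following sketch applies the gain and covariance equations directly; the prior covariance, measurement Jacobian, noise levels, and residual below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative instance of the update (10); all numbers are assumptions.
# State: [x_T, y_T, xdot_T, ydot_T]; M = 2 sensors give distance measurements.
P_prior = np.diag([4.0, 3.0, 1.0, 1.0])      # P_{k+1|k}
x_prior = np.array([10.0, 5.0, 0.5, -0.2])   # xhat_T(k+1|k)
H = np.array([[0.8, 0.6, 0.0, 0.0],          # H_{k+1}: unit-bearing rows in the
              [0.6, -0.8, 0.0, 0.0]])        # position columns, zeros elsewhere
R = np.diag([0.1**2, 0.2**2])                # R = diag(sigma_i^2)
z_tilde = np.array([0.3, -0.1])              # measurement residual ztilde(k+1|k)

S = H @ P_prior @ H.T + R                    # residual covariance S_{k+1}
K = P_prior @ H.T @ np.linalg.inv(S)         # Kalman gain K_{k+1}
x_post = x_prior + K @ z_tilde               # xhat_T(k+1|k+1)
P_post = P_prior - K @ S @ K.T               # P_{k+1|k+1}
```

As expected, the posterior covariance remains symmetric positive definite and its trace is smaller than the prior's.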

Our objective in this paper is to determine the active sensing strategy that minimizes the uncertainty for the *position* estimate of the target. In order to account for the impact of the prior state estimates on the motion of the sensors, we first prove the following lemma.

*Lemma 1:* The posterior (updated) covariance for the target's position estimate depends on: 1) the prior (propagated) covariance submatrix of the target's *position* (i.e., it is independent of the uncertainty in the estimates of higher order time derivatives of the position such as velocity, acceleration, etc, and hence, it is independent of the target's motion model) and 2) the measurement information matrix corresponding to the target's *position*, i.e.,
$$P_{k+1\vert k+1,11}= \left(\left(P_{k+1\vert k,11} \right)^{-1} + H_{e, k+1}^{\rm T}R^{-1}H_{e, k+1} \right)^{-1}.\eqno{\hbox{(11)}}$$

*Proof:* The covariance matrices appearing in (11) are defined based on the following partition:
$$P_{\ell\vert j}= \left[\matrix{P_{\ell\vert j,11} & P_{\ell\vert j,12} \cr P_{\ell\vert j,12}^{\rm T} & P_{\ell\vert j,22}}\right]\eqno{\hbox{(12)}}$$where the 2 × 2 matrix *P*_{ℓ│ j,11} denotes the covariance for the target's *position* estimate, $\hat{\bf p}_T(\ell\vert j)$, at time-step ℓ given measurements up to time-step *j.*

Employing the matrix inversion lemma, the covariance update equation [cf. (10)] can be written in information form as
$$P^{-1}_{k+1\vert k+1} = P^{-1}_{k+1\vert k}+H_{k+1}^{\rm T}R^{-1}H_{k+1}.\eqno{\hbox{(13)}}$$Note that if the state vector contains only the position of the target, then (11) is identical to (13).

In the general case, when the state vector also contains higher order time derivatives of the position (e.g., velocity, acceleration, etc.), substituting
$$P^{-1}_{k+1\vert k}= \left[\matrix{A_{11} & A_{12} \cr A_{12}^{\rm T} & A_{22}}\right]\eqno{\hbox{(14)}}$$and
$$H_{k+1}^{\rm T}R^{-1}H_{k+1}= \left[\matrix{H_{e, k+1}^{\rm T}R^{-1}H_{e, k+1}& {\bf 0}_{2 \times (2N-2)} \cr{\bf 0}_{(2N-2) \times 2} & {\bf 0}_{(2N-2) \times (2N-2)}}\right]$$on the right-hand side of (13) yields
$$P_{k+1\vert k+1} = \left[\matrix{A_{11}+H_{e, k+1}^{\rm T}R^{-1}H_{e, k+1} & A_{12} \cr A_{12}^{\rm T} & A_{22}}\right]^{-1}. \eqno{\hbox{(15)}}$$Employing the properties of the Schur complement [26] for the inversion of the partitioned matrices in (15) and (14), we obtain
$$\eqalignno{P_{k+1\vert k+1,11} & = \left(A_{11}+H_{e, k+1}^{\rm T}R^{-1}H_{e, k+1}-A_{12}A_{22}^{-1}A_{12}^{\rm T} \right)^{-1}\cr& =\left(\left(P_{k+1\vert k,11} \right)^{-1} + H_{e, k+1}^{\rm T}R^{-1}H_{e, k+1} \right)^{-1}.}$$
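Lemma 1 can be checked numerically: updating the full state covariance via the information form (13) and extracting the position block must agree with the position-only update (11). The prior covariance, Jacobian *H*_{e,k+1}, and noise values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Assumed 4x4 symmetric positive-definite prior (position and velocity blocks).
A = rng.normal(size=(4, 4))
P_prior = A @ A.T + 4 * np.eye(4)

# Distance-only Jacobian: nonzero only in the two position columns.
He = np.array([[0.8, 0.6],
               [0.0, 1.0],
               [-0.6, 0.8]])                 # H_{e,k+1} for M = 3 sensors
H = np.hstack([He, np.zeros((3, 2))])        # full H_{k+1}
Rinv = np.linalg.inv(np.diag([0.01, 0.04, 0.02]))

# Full-state update in information form, eq. (13).
P_post = np.linalg.inv(np.linalg.inv(P_prior) + H.T @ Rinv @ H)

# Position-only update, eq. (11).
P11 = np.linalg.inv(np.linalg.inv(P_prior[:2, :2]) + He.T @ Rinv @ He)
```

The two position covariances coincide, regardless of the cross-correlations between position and velocity in the prior.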

The importance of this lemma is that both optimization algorithms presented in Section V can be derived based on (11) for the position covariance update—instead of (10) or (13) for the whole state covariance update—regardless of the stochastic process model employed for describing the target's motion.

In the next section, we formulate the sensors' one-step-ahead *optimal motion strategy* as a constrained optimization problem and show that it can be exactly reformulated as that of minimizing the norm of the sum of a set of vectors of known length with constraints imposed on their directions.

### D. Problem Statement and Reformulation

As evident from (7)–(9) and (11), after each update step, the target's position covariance matrix depends on the next positions of all sensors, **p**_{i}(*k*+1) = [*x*_{i}(*k*+1) *y*_{i}(*k*+1)]^{T}, *i* = 1,…, *M.* Assuming that at time-step *k*, sensor-*i* is at location **p**_{i}(*k*) = [*x*_{i}(*k*) *y*_{i}(*k*)]^{T} and moves with speed *v*_{i}(*k*), at time-step *k*+1, its position will be
$$\eqalignno{x_i(k+1) & = x_i(k)+v_i(k)\delta{t}\; {\cos}\, \varphi_i(k)& \hbox{(16)}\cr y_i(k+1) & = y_i(k)+v_i(k)\delta{t}\; {\sin}\, \varphi_i(k)& \hbox{(17)}}$$where ϕ_{i}(*k*) ∊ [0,2π) is the heading direction of the sensor. We thus see that given the current sensor positions, **p**_{i}(*k*), the covariance for the target's position estimate after the update [cf. (11)] is a function of the sensors' speeds, *v*_{i}(*k*), and motion directions ϕ_{i}(*k*).
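The propagation (16)–(17) is a direct computation; a minimal sketch with assumed numerical values:

```python
import math

def propagate_sensor(x, y, v, phi, dt):
    """Sensor position propagation, eqs. (16)-(17)."""
    return x + v * dt * math.cos(phi), y + v * dt * math.sin(phi)

# Assumed values: sensor at (1, 2), speed 2, heading pi/3, delta-t 0.5.
x1, y1 = propagate_sensor(1.0, 2.0, 2.0, math.pi / 3, 0.5)
```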

The problem we address in this paper is that of determining the sensors' *optimal motion strategy*, i.e., the set {ϕ_{1}(*k*),…,ϕ_{M}(*k*), *v*_{1}(*k*),…, *v*_{M}(*k*)}, that minimizes the *trace* of the target's position estimate covariance matrix. Based on the following lemma, we first show that minimizing the trace of the (posterior) covariance matrix requires optimization with respect to the *bearing directions* of the sensors toward the estimated position of the target, while the speed of each sensor only affects the constraints imposed on this problem.

*Lemma 2:* The following two optimization problems are equivalent.

• *Optimization Problem 1 (Π*_{1}):
$$\eqalignno{& \mathop{\rm minimize}\limits_{\varphi_1(k),…,\varphi_M(k), v_1(k),…, v_M(k)} \qquad {\rm{tr}}(P_{k+1\vert k+1,11}) \cr& {\rm{s.t.}}\; 0\leq v_i(k) \leq v_{i{\rm max}} \qquad\;\qquad \forall i=1,…, M.}$$

• *Optimization Problem 2 (Π*_{2}):
$$\eqalignno{& \mathop{\rm minimize}\limits_{\theta_1(k+1),…,\theta_M(k+1)}\quad {\rm{tr}}(P_{k+1\vert k+1,11})&\hbox{(18)} \cr& {\rm{s.t.}}\; \vert \theta_i(k+1) -\theta^{\prime}_{i}(k)\vert \leq \eta_{i{\rm max}}(k) \quad \forall i=1,…, M}$$with^{(2)}
$$\eqalignno{ \eta_{i{\rm max}}(k) &= {\rm arcsin}\left({v_{i{\rm max}}\delta{t}\over \hat{d}^{\prime}_{i}(k)}\right)& \hbox{(19)} \cr \hat{d}^{\prime}_{i}(k) &= \sqrt{(\hat{x}_T(k{+}1\vert k){-}x_i(k))^2 {+} (\hat{y}_T(k+1\vert k)-y_i(k))^2}&\hbox{(20)} \cr\theta^{\prime}_{i}(k)&= {\rm Atan2}(\hat{y}_T(k{+}1\vert k){-}y_i(k),\hat{x}_T(k{+}1\vert k)-x_i(k))&\hbox{(21)}}$$where $\hat{d}^{\prime}_{i}(k)$ (cf. Fig. 3) and θ_{i}^{′}(*k*) are the distance and bearing angle from the *current* location of sensor-*i*, **p**_{i}(*k*), to the *next* (predicted) position of the target, $\hat{\bf p}_T(k+1\vert k)$.

*Proof:* Since the measurement matrix *H*_{e,k+1} [cf. (7)], and hence, the posterior covariance matrix [cf. (11)], has an explicit form in terms of the bearing angles, θ_{i}(*k*+1), toward the estimated target position, minimizing the trace of the covariance matrix can be performed using the θ_{i}(*k*+1), *i* = 1, …, *M*, as the *optimization variables*, instead of the heading direction, ϕ_{i}(*k*), or speed, *v*_{i}(*k*), of each sensor. Note, however, that although the variables {ϕ_{1}(*k*),…,ϕ_{M}(*k*)} are unconstrained, the bearing angles, {θ_{1}(*k*+1),…,θ_{M}(*k*+1)}, are constrained by the fact that the speed, *v*_{i}(*k*), of each sensor, is bounded by *v*_{i max}. Our objective here is to determine the constraints on the new optimization variables θ_{i}(*k*+1) and reveal their relation to *v*_{i max}.

Consider the geometry of this problem shown in Fig. 3. At time-step *k*, sensor-*i* is located at **p**_{i}(*k*) = [*x*_{i}(*k*) *y*_{i}(*k*)]^{T} and predicts, based on the motion model [cf. (3)], that the target will move to $\hat{\bf p}_T(k+1\vert k)=[\hat{x}_T(k+1\vert k)\;\;\hat{y}_T(k+1\vert k)]^{\rm T}$. Assume that sensor-*i* moves with speed *v*_{i} and reaches a point **p**_{i}(*k*+1) = [*x*_{i}(*k*+1) *y*_{i}(*k*+1)]^{T} located on a circle of radius *r* = *v*_{i}δ*t*, centered at its previous position **p**_{i}(*k*) (cf. Fig. 3, for *v*_{i} = *v*_{i max}), which does *not* include the target. From point *E* (i.e., the target's estimated location at time-step *k*+1), we draw two lines tangent to the circle where sensor-*i* will move to. The tangent points *A* and *B* correspond to the extreme values of the bearing angle that define the constraints on θ_{i}(*k*+1), i.e., θ_{i min}(*k*+1) ≤ θ_{i}(*k*+1) ≤ θ_{i max}(*k*+1), with
$$\eqalignno{\theta_{i{\rm min}}(k+1) & = \theta_{i}^{\prime}(k)-\eta_i(k)& \hbox{(22)}\cr\theta_{i{\rm max}}(k+1) & = \theta_{i}^{\prime}(k)+\eta_i(k)& \hbox{(23)}\cr\eta_{i}(k) & = {\rm arcsin}\left({v_{i}(k) \delta{t}\over \hat{d}^{\prime}_{i}(k)}\right)& \hbox{(24)}}$$where (24) results from the sine relation in the right triangle ADE, while (22) is derived from the relation for the angle θ_{i}^{′}(*k*), exterior to triangle ACE (and (23) can be derived in a similar manner from the geometry of the problem).

Since the inverse-sine function [cf. (24)] is monotonically increasing within the interval of concern (0 < η_{i}(*k*) < π/2), the angle η_{i}(*k*) is maximized when *r* = *r*_{i max} = *v*_{i max}δ*t*, i.e., when *v*_{i} = *v*_{i max} for sensors moving with bounded speed. For η_{i}(*k*) = η_{i max}(*k*) [cf. (19)], the range of values of the bearing angles θ_{i}(*k*+1) is maximized (i.e., the constraints on the bearing angles are most relaxed), which leads to a smaller or equal minimum value for the objective function (covariance trace) compared to when η_{i}(*k*) < η_{i max}(*k*). Therefore, the speeds of all sensors are set to their maximum values and optimization is performed with respect to the bearing angles θ_{i}(*k*+1) within the constraints defined by (22) and (23).
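The constraint construction of Lemma 2 can be sketched as follows: compute the feasible bearing cone (19)–(23) at maximum speed and confirm that every reachable next position yields a bearing inside it. All numerical values are illustrative assumptions.

```python
import math

def bearing_cone(sx, sy, tx, ty, v_max, dt):
    """Feasible bearing interval at maximum speed, eqs. (19)-(23)."""
    d = math.hypot(tx - sx, ty - sy)            # dhat'_i(k), eq. (20)
    theta_p = math.atan2(ty - sy, tx - sx)      # theta'_i(k), eq. (21)
    # min() guards the case v_max*dt >= d, where the target itself is
    # reachable and the bearing is unconstrained (assumption of this sketch).
    eta = math.asin(min(1.0, v_max * dt / d))   # eta_imax(k), eq. (19)
    return theta_p - eta, theta_p + eta         # eqs. (22)-(23)

# Assumed values: sensor at the origin, predicted target at (10, 0), r = 2.
lo, hi = bearing_cone(0.0, 0.0, 10.0, 0.0, 4.0, 0.5)

# Every reachable next position must yield a bearing inside [lo, hi].
inside = True
for k in range(360):
    phi = 2 * math.pi * k / 360
    qx, qy = 2.0 * math.cos(phi), 2.0 * math.sin(phi)
    theta = math.atan2(0.0 - qy, 10.0 - qx)
    inside = inside and (lo - 1e-9 <= theta <= hi + 1e-9)
```

The tangent points A and B of Fig. 3 correspond to the interval endpoints `lo` and `hi`.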

*Corollary 1:* Given the optimal bearing angle θ_{i}(*k*+1), the optimal heading directions, ϕ_{i}(*k*) and ϕ_{i}′(*k*), of sensor-*i* (cf. Fig. 3) are computed from the following relations:
$$\eqalignno{\varphi_i(k) & = \theta_i(k+1) + \xi_i(k)& \hbox{(25)}\cr\varphi_i^{\prime}(k) & = \theta_i(k+1) + \pi- \xi_i(k)& \hbox{(26)}}$$where
$$\xi_i(k) = {\rm arcsin} \left({(\hat{y}_T(k+1\vert k) - y_i(k))\,{\rm cos}\,\theta_i(k+1) - (\hat{x}_T(k+1\vert k) - x_i(k))\,{\rm sin}\,\theta_i(k+1)\over v_i(k)\delta{t}} \right).\eqno{\hbox{(27)}}$$Among these two equivalent solutions, sensor-*i* should choose the one that brings it closer to the target so as to increase the probability of redetection later on.

*Proof:* The proof is described in [27].
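Although the proof is given in [27], the relations (25)–(27) are easy to verify numerically: propagate an assumed heading through (16)–(17), compute the resulting bearing to the target, and recover the heading from (25)–(26). The setup values below are assumptions for illustration.

```python
import math

# Assumed setup: sensor at p = (0, 0), predicted target at E = (10, 5),
# speed v = 2, dt = 1, and a chosen true heading phi.
px, py, ex, ey, v, dt = 0.0, 0.0, 10.0, 5.0, 2.0, 1.0
phi = 0.3

# Next sensor position, eqs. (16)-(17), and resulting bearing to the target.
qx = px + v * dt * math.cos(phi)
qy = py + v * dt * math.sin(phi)
theta = math.atan2(ey - qy, ex - qx)          # theta_i(k+1)

# xi from eq. (27), then the two candidate headings (25) and (26).
xi = math.asin(((ey - py) * math.cos(theta)
                - (ex - px) * math.sin(theta)) / (v * dt))
phi_a = theta + xi                            # eq. (25)
phi_b = theta + math.pi - xi                  # eq. (26)
```

One of the two candidates matches the original heading (modulo 2π), consistent with the two equivalent solutions of the corollary.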

At this point, we should note that the preceding analysis is not limited to the case of sensors moving with constant speed during each time-step. In fact, Lemma 2 can be directly applied to any higher order sensor motion model. For example, if a second-order model with bounded acceleration *a*_{i}(*k*) ≤ *a*_{i max} was used to describe the sensors' motion, then maximizing η_{i}(*k*), or equivalently *r* = *v*_{i}(*k*) δ *t* + (1/2) *a*_{i}(*k*) δ *t*^{2}, would require that the sensors move with maximum acceleration.

From here on, we turn our attention to determining the optimal bearing angles to the estimated target position given the constraints of Lemma 2. Before showing the final result of this section, we first prove the following properties for the objective function of the optimization problem.

*Lemma 3:* In the optimal target tracking problem using distance-only measurements, minimizing the trace of the covariance matrix of the target's position estimates is equivalent to:^{(3)} 1) maximizing the determinant of its inverse; 2) maximizing the minimum eigenvalue of its inverse; and 3) minimizing the difference of its eigenvalues, i.e.,
$$
\eqalignno{
& \quad \mathop{\rm minimize}\limits_{\theta_1,…,\theta_M} {\rm{tr}}(P_{k+1\vert k+1,11}) \cr
\mathop{\rm (a)}\limits_\Leftrightarrow & \quad \mathop{\rm maximize}\limits_{\bar\theta_1,…,\bar\theta_M}
{\rm det}((P_{k+1\vert k+1,11})^{-1})\cr
\mathop{\rm (b)}\limits_\Leftrightarrow & \quad \mathop{\rm maximize}\limits_{\bar\theta_1,…,\bar\theta_M}
\mu_{\rm min}((P_{k+1\vert k+1,11})^{-1})\cr
\mathop{\rm (c)}\limits_\Leftrightarrow & \quad \mathop{\rm minimize}\limits_{\bar\theta_1,…,\bar\theta_M}
\left(\mu_{\rm max}(P_{k+1\vert k+1,11})- \mu_{\rm min}(P_{k+1\vert k+1,11}) \right)
}$$where $\bar\theta_i = \theta_i(k+1) - \theta_0$, *i* = 1, …, *M*, θ_{0} is a constant defined from the 2 × 2 unitary (rotational) matrix appearing in the singular value decomposition of *P*_{k+1│ k,11} [cf. (29) and (30)], and μ_{min}(⋅) and μ_{max}(⋅) denote the minimum and the maximum eigenvalues of their matrix arguments, respectively.

*Proof:* (a) Since *P*_{k+1│ k+1,11} is a 2 × 2 matrix, it is trivial to prove that
$${\rm{tr}}(P_{k+1\vert k+1,11}) = {{\rm{tr}}((P_{k+1\vert k+1,11})^{-1})\over {\rm{det}}((P_{k+1\vert k+1,11})^{-1})}.\eqno{\hbox{(28)}}$$Thus, to complete the proof of (a), it suffices to compute the inverse of the position covariance matrix *P*_{k+1│ k+1,11} and show that its trace is constant.

Note that since the covariance matrix *P*_{k+1│ k} for the state estimates is symmetric positive semidefinite, so is the covariance matrix *P*_{k+1│ k,11} of the target's position estimates. The singular value decomposition of (*P*_{k+1│ k,11})^{−1} yields
$$(P_{k+1\vert k,11})^{-1}=U \Sigma^{-1} U^{\rm T}\eqno{\hbox{(29)}}$$where Σ^{−1} = diag(μ_{1}^{′},μ_{2}^{′}), μ_{1}^{′}≥ μ_{2}^{′}≥ 0, and
$$U =\left[\matrix{{\cos}\, \theta_0 & -{\sin}\,\theta_0 \cr{\sin}\,\theta_0 & {\cos}\,\theta_0}\right]\; {\rm with }\; UU^{\rm T}=U^{\rm T}U=I_{2 \times 2}.\eqno{\hbox{(30)}}$$Substituting (29) in the right-hand side of (11), we have
$$\eqalignno{P_{k+1\vert k+1,11} & = (U \Sigma^{-1} U^{\rm T} + H_{e, k+1}^{\rm T}R^{-1} H_{e, k+1})^{-1} \cr& = U(\Sigma^{-1} + H_{n, k+1}^{\rm T}R^{-1}H_{n, k+1})^{-1} U^{\rm T}\cr& = U {\cal I}^{-1} U^{\rm T}}$$or equivalently
$$(P_{k+1\vert k+1,11})^{-1} = U {\cal I} U^{\rm T}\eqno{\hbox{(31)}}$$where
$$H_{n, k+1} = H_{e, k+1} U = \left[\matrix{{\rm cos}\ \bar\theta_1 & … & {\rm cos}\ \bar\theta_M \cr{\rm sin}\ \bar\theta_1 & … & {\rm sin}\ \bar\theta_M}\right]^{\rm T}$$with $\bar\theta_i = \theta_i(k+1) - \theta_0$, and
$${\cal I} = \left[\matrix{\mu_1^{\prime} +\sum_{i=1}^M\sigma_i^{-2}{\rm cos}^2\ \bar\theta_i & \sum_{i=1}^M\sigma_i^{-2}{\rm cos}\ \bar\theta_i {\rm sin}\,\bar\theta_i \cr\sum_{i=1}^M\sigma_i^{-2}{\rm cos}\ \bar\theta_i {\rm sin}\,\bar\theta_i & \mu_2^{\prime} +\sum_{i=1}^M\sigma_i^{-2}{\rm sin}^2\ \bar\theta_i}\right].\eqno{\hbox{(32)}}$$Substituting (32) in (31) and noting that similarity transformations do not change the trace of a matrix, yields
$${\rm tr}((P_{k+1\vert k+1,11})^{-1}) = {\rm tr} ({\cal I}) = \mu_1^{\prime} + \mu_2^{\prime}+\sum_{i=1}^M\sigma_i^{-2} = c\eqno{\hbox{(33)}}$$which is constant.
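The constancy of the trace in (33) is simple to confirm numerically: for assumed values of μ_{1}^{′}, μ_{2}^{′}, and σ_{i}, the trace of (32) is the same for any choice of bearings.

```python
import numpy as np

rng = np.random.default_rng(1)
mu1p, mu2p = 2.0, 0.5                   # assumed prior eigenvalues mu'_1 >= mu'_2
w = np.array([0.1, 0.2, 0.15]) ** -2.0  # sigma_i^{-2} for M = 3 assumed sensors

def info_trace(thetas):
    """tr((P_{k+1|k+1,11})^{-1}) via the matrix I of eq. (32)."""
    c, s = np.cos(thetas), np.sin(thetas)
    I = np.array([[mu1p + np.sum(w * c**2), np.sum(w * c * s)],
                  [np.sum(w * c * s), mu2p + np.sum(w * s**2)]])
    return np.trace(I)

# The trace equals the constant c of eq. (33) for any bearings.
t1 = info_trace(rng.uniform(0, 2 * np.pi, size=3))
t2 = info_trace(rng.uniform(0, 2 * np.pi, size=3))
const = mu1p + mu2p + np.sum(w)
```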

(b) Let μ_{2} ≔ μ_{min}((*P*_{k+1│ k+1,11})^{−1}) ≤ μ_{1} ≔ μ_{max} ((*P*_{k+1│ k+1,11})^{−1}) be the minimum and maximum eigenvalues of the inverse covariance matrix for the position estimates. Based on the relations
$$\eqalignno{{\rm tr}((P_{k+1\vert k+1,11})^{-1}) & = \mu_1 + \mu_2 = c& \hbox{(34)}\cr{\rm det}((P_{k+1\vert k+1,11})^{-1})& = \mu_1 \mu_2& \hbox{(35)}}$$we have
$$\eqalignno{{{\rm maximize}}\ {\rm det}((P_{k+1\vert k+1,11})^{-1})\!\!\quad & \Leftrightarrow\quad\!\! {{\rm maximize}} \ (\mu_1 \mu_2)\cr\Leftrightarrow \quad {\rm{minimize}} \quad (- 4\mu_1 \mu_2)\!\!\quad & \Leftrightarrow \quad\!\! {\rm{minimize}}\ (c^2 {-} 4\mu_1 \mu_2) \cr\Leftrightarrow \quad {\rm{minimize}} \quad(\mu_1 - \mu_2)^2 \!\!\quad & \Leftrightarrow\quad\!\! {\rm{minimize}}\ (\mu_1 {-} \mu_2) \cr\Leftrightarrow \quad {\rm{minimize}} \quad (2 \mu_1 - c)\!\!\quad & \Leftrightarrow\quad\!\! {\rm{minimize}} \ (\mu_1).}$$(c) Note that μ_{max}(*P*_{k+1│ k+1,11}) = 1/μ_{2} and μ_{min}(*P*_{k+1│ k+1,11}) = 1/μ_{1} and [cf. (34)]
$$\eqalignno{{\rm{minimize}} \quad \left({1\over \mu_2} - {1\over \mu_1}\right)\quad & \Leftrightarrow\quad {\rm{minimize}} \quad {\mu_1 - \mu_2\over \mu_1 \mu_2}\cr& \Leftrightarrow\quad {\rm{minimize}} \quad {2 \mu_1 - c\over -\mu_1^2 + c\mu_1}.}$$However, this last quantity is a monotonically increasing function of μ_{1} within the interval of concern [*c*/2, *c*] (from (34), it is μ_{2} ≤ *c*/2 ≤ μ_{1} ≤ *c*). Therefore, minimizing it is equivalent to minimizing μ_{1}, which, based on the result of (b), is equivalent to maximizing the determinant of the inverse covariance matrix.

The key result of this section is described by the following lemma.

*Lemma 4:* The optimal motions of a group of sensors estimating the position of a moving target can be determined by solving the following constrained optimization problem.

• *Optimization Problem 3 (Π*_{3}):
$$\eqalignno{& \mathop{\rm minimize}\limits_{\bar\theta_1,…,\bar\theta_M} \quad \left\Vert \lambda_0 + \sum_{i=1}^M\lambda_i{\rm exp}\left(j2\bar\theta_i\right)\right\Vert _2& \hbox{(36)}\cr& {\rm{s.t.}} \quad \bar\theta_{i{\rm min}} \leq\bar\theta_i\leq\bar\theta_{i{\rm max}}, \quad \forall i=1,…, M& \hbox{(37)}}$$with $j = \sqrt{-1}$ and [cf. (29) and (30)]
$$\eqalignno{\lambda_0&=\mu_1^{\prime}-\mu_2^{\prime}\geq0,\, \quad \lambda_i=\sigma_i^{-2} > 0,\, i=1, …, M \cr \bar\theta_{i{\rm min}} &= \theta_{i{\rm min}} - \theta_0,\, \bar\theta_{i{\rm max}} = \theta_{i{\rm max}} - \theta_0& \hbox{(38)}}$$or equivalently
$$\eqalignno{& \mathop{\rm minimize}\limits_{\bar\theta_1,…,\bar\theta_M} \quad \left\Vert \sum_{i=0}^M {\bf v}_i \right\Vert _2& \hbox{(39)}\cr& {\rm{s.t.}} \quad \bar\theta_{i{\rm min}} \leq\bar\theta_i\leq\bar\theta_{i{\rm max}} \quad \forall i=1,…, M& \hbox{(40)}}$$with (for *i* = 1, …, *M*)
$${\bf v}_0=[\lambda_0 \quad 0]^{\rm T}, \,{\bf v}_i=[\lambda_i{\rm cos}\,2\bar\theta_i \quad \lambda_i{\rm sin}\,2\bar\theta_i]^{\rm T}.$$

*Proof:* We first note that the constraints of (37) are the same as the ones for the variables θ_{i} of the second optimization problem in Lemma 2, transformed to the new variables $\bar\theta_i = \theta_i(k+1) - \theta_0$. To prove the equivalence between the objective functions in (36) and (18), we rely on the equivalence between minimizing the trace of the covariance matrix and maximizing the determinant of the inverse covariance matrix, shown in Lemma 3, and proceed as follows.

Substituting (32) in (31), and employing the trigonometric identities ${\cos}^2\bar\theta_i = (1+{\cos}\,2\bar\theta_i)/2$, ${\sin}^2\bar\theta_i = (1-{\cos}\,2\bar\theta_i)/2$, and ${\cos}\,\bar\theta_i\,{\sin}\,\bar\theta_i = ({\sin}\,2\bar\theta_i)/2$, we have
$${\rm det}((P_{k+1\vert k+1,11})^{-1}) = {\rm det} ({\cal I}) = d_c - {1\over 4} d_{\bar\theta}\eqno{\hbox{(41)}}$$where
$$d_c = \left(\mu_1^{\prime}+{1\over 2}\sum_{i=1}^M\sigma_i^{-2}\right)\left(\mu_2^{\prime}+{1\over 2}\sum_{i=1}^M\sigma_i^{-2}\right)+{1\over 4}(\mu_1^{\prime}-\mu_2^{\prime})^2$$is constant, and
$$\eqalignno{d_{\bar{\bf \theta}} & = \left(\left(\mu_1^{\prime}-\mu_2^{\prime}\right)+\sum_{i=1}^M\sigma_i^{-2}{\rm cos}\,2\bar\theta_i\right)^2 + \left(\sum_{i=1}^M\sigma_i^{-2}{\rm sin}\,2\bar\theta_i\right)^2\cr& = \left\Vert \lambda_0 + \sum_{i=1}^M\lambda_i{\rm exp}\left(j2\bar\theta_i\right)\right\Vert _2^2 = \left\Vert \sum_{i=0}^M {\bf v}_i \right\Vert _2^2.& \hbox{(42)}}$$From (41), we conclude that maximizing the determinant of the inverse covariance matrix is equivalent to minimizing the quantity $d_{\bar\theta}$, i.e., the norm of the sum of the vectors **v**_{i}, *i* = 0, …, *M.*
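The decomposition (41)–(42) can likewise be verified numerically: for assumed values of the prior eigenvalues and sensor noise, det(𝓘) must equal *d*_{c} − *d*_{θ̄}/4 for arbitrary bearings.

```python
import numpy as np

mu1p, mu2p = 2.0, 0.5                   # assumed prior eigenvalues
w = np.array([0.1, 0.2, 0.15]) ** -2.0  # sigma_i^{-2}
th = np.array([0.3, 1.1, 2.4])          # assumed rotated bearings thbar_i

# Information matrix I of eq. (32).
c, s = np.cos(th), np.sin(th)
I = np.array([[mu1p + np.sum(w * c**2), np.sum(w * c * s)],
              [np.sum(w * c * s), mu2p + np.sum(w * s**2)]])

# Constant part d_c and bearing-dependent part d_thbar, eqs. (41)-(42).
lam0 = mu1p - mu2p
d_c = (mu1p + np.sum(w) / 2) * (mu2p + np.sum(w) / 2) + lam0**2 / 4
d_th = abs(lam0 + np.sum(w * np.exp(2j * th))) ** 2
```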

We thus see that the original problem of minimizing the trace of the covariance matrix of the target's position estimate (cf. Lemma 2) is *exactly reformulated* as that of *minimizing the norm of the sum of M+1 vectors in 2-D* (cf. Lemma 4). Note that although the vector **v**_{0} = [λ_{0} 0]^{T} remains constant (fixed along the positive *x* semiaxis), each of the vectors **v**_{i}, *i* = 1, …, *M*, has fixed length λ_{i}, while its direction can vary subject to the constraints described by (37). This geometric interpretation is depicted in Fig. 4.
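As a final illustration, Π_{3} can be solved approximately for a small example by exhaustive search over the feasible bearing intervals. This is only a brute-force sketch with assumed data (the λ_{i} and the bounds), not one of the optimization algorithms of Section V.

```python
import numpy as np

# Assumed data for M = 2 sensors: vector lengths and bearing bounds.
lam = np.array([1.5, 0.8, 0.8])          # lambda_0, lambda_1, lambda_2
bounds = [(-0.5, 0.7), (1.0, 2.2)]       # [thbar_imin, thbar_imax] per sensor

def objective(th):
    """|| v_0 + sum_i v_i ||_2, the objective of (36) and (39)."""
    return abs(lam[0] + np.sum(lam[1:] * np.exp(2j * np.asarray(th))))

# Brute-force grid search over the feasible box.
g1 = np.linspace(bounds[0][0], bounds[0][1], 200)
g2 = np.linspace(bounds[1][0], bounds[1][1], 200)
best_val, best_th = min(((objective((a, b)), (a, b)) for a in g1 for b in g2),
                        key=lambda t: t[0])
```

Geometrically, the search rotates the two free vectors **v**_{1}, **v**_{2} within their allowed arcs so as to cancel the fixed vector **v**_{0} as much as possible.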