Abstract

Optimal Motion Strategies for Range-Only Constrained Multisensor Target Tracking

In this paper, we study the problem of optimal trajectory generation for a team of mobile sensors tracking a moving target using distance-only measurements. This problem is shown to be NP-Hard, in general, when constraints are imposed on the speed of the sensors. We propose two algorithms, modified Gauss–Seidel relaxation and linear programming (LP) relaxation, for determining the set of feasible locations that each sensor should move to in order to collect the most informative measurements; i.e., distance measurements that minimize the uncertainty about the position of the target. These algorithms are applicable regardless of the process model that is employed for describing the motion of the target, while their computational complexity is linear in the number of sensors. Extensive simulation results are presented demonstrating that the performance attained with the proposed methods is comparable to that obtained with grid-based exhaustive search, whose computational cost is exponential in the number of sensors, and significantly better than that of a random, toward-the-target, motion strategy.

SECTION I

Introduction

TARGET tracking has recently attracted significant interest in the research community because of its importance in a variety of applications such as environmental monitoring [1], surveillance [2], human–robot interaction [3], as well as defense applications [4]. In order to obtain increased tracking accuracy and monitor extensive areas, a large number of sensors are often utilized for tracking, while communicating over a wireless sensor network. When multiple nodes obtain measurements of a target of interest, the acquired data can be processed, either at a fusion center or in a distributed fashion, in order to estimate the target's trajectory.

As an alternative to using static sensors, the deployment of mobile sensors (i.e., robots) for tracking offers significant advantages. By providing mobility to the sensors, a larger area can be covered without the need to increase the number of nodes in the sensing network [5]. Additionally, the spatial distribution of the sensors can change dynamically in order to adapt to the motion of the target. For example, a team of sensors can actively pursue a target, to avert the target's evasion from the sensors' visibility range [6].

Regardless of the estimation algorithm employed in a given application, the processing of every new measurement by a networked tracking system incurs a penalty in terms of use of communication bandwidth and CPU time, as well as in terms of power consumption. Since these resources are inevitably limited, it is necessary to devise active sensing algorithms that guarantee their optimal utilization. Moreover, in many tracking applications, the time needed for determining the trajectory of a target is critically important (e.g., when tracking a hostile target). Sensors that actively pursue a target and move to locations where they collect the most informative measurements, can achieve optimal tracking performance. That is, they can minimize the uncertainty about the position of the target significantly faster as compared to a random motion strategy.

In this paper, we study the problem of determining optimal trajectories for a team of sensors that track a moving target using range (distance) measurements. Since the measurement model is nonlinear, the locations where distance measurements are collected have a profound effect on the estimation accuracy. Consider, for example, the simple case of a single sensor tracking a target using distance measurements corrupted by Gaussian noise [cf. Fig. 1(a) and (b)]. In this scenario, the prior uncertainty for the position of the target, P_{k+1|k}, is depicted by the solid-line 3σ ellipse shown in Fig. 1(a). If the sensor remains still and measures the distance to the target, then based solely on this measurement, the sensor believes that the target is within the dotted-line circular ring with probability 99.7%. Combining the prior estimate with this measurement, the posterior uncertainty P_{k+1|k+1} is only slightly reduced [dashed-line ellipse in Fig. 1(a)]. As evident, by remaining in the same position, the sensor's measurement provides limited information for the target's position along the x-direction. If instead, the sensor moves to a new location [cf. Fig. 1(b)], then combining this new measurement with the prior estimate will result in significant reduction of the uncertainty in both directions, but primarily along the x-axis. The improved confidence in the target-position estimate after this informative measurement is processed, is depicted by the small dashed-line ellipse in Fig. 1(b).
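This effect can be illustrated numerically. The sketch below (ours, not from the paper; the prior covariance and noise values are assumed) applies the information-form update of a single range measurement to a prior that is elongated along the x-axis, for the two sensor placements of Fig. 1:

```python
import numpy as np

def range_update(P_prior, bearing, sigma=0.1):
    """Posterior position covariance after one range measurement,
    using the information (inverse-covariance) form of the update.
    `bearing` is the sensor-to-target bearing angle (a measurement
    constrains the target only along this line-of-sight direction)."""
    h = np.array([[np.cos(bearing), np.sin(bearing)]])  # 1x2 Jacobian row
    info = np.linalg.inv(P_prior) + h.T @ h / sigma**2
    return np.linalg.inv(info)

# Prior uncertainty elongated along x, as in Fig. 1 (assumed values)
P_prior = np.diag([4.0, 0.25])

# (a) sensor stays "below" the target: line of sight along y
P_stay = range_update(P_prior, bearing=np.pi / 2)
# (b) sensor moves so the line of sight is along x
P_move = range_update(P_prior, bearing=0.0)

# Moving yields the smaller trace: the measurement now informs
# the direction in which the prior was most uncertain.
```

Here the stationary sensor leaves the x-variance essentially untouched, while the relocated sensor collapses it, matching the intuition of Fig. 1.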

Figure 1
Fig. 1. (a) Suboptimal target tracking: The sensor remains in the same location. (b) Optimal target tracking: The sensor moves to the position that minimizes the uncertainty for the target's position along the x-axis. In both plots, the prior uncertainty (3σ) is denoted by a solid-line ellipse, the posterior by a dashed-line ellipse, while the measurement uncertainty is depicted as a circular ring (dotted-line) with center the location of the sensor.

In this paper, we extend this intuitive strategy to the case of one moving target and multiple mobile sensors and determine the optimal trajectories that the sensors should follow to minimize the error in the posterior estimate of the position of the target [7]. Here, optimality is sought with respect to the accuracy of the target's localization, i.e., we seek to minimize the trace of the covariance matrix of the target's position estimate. We show that regardless of the target's motion model, this optimization problem can be exactly reformulated as that of minimizing the norm of the sum of a set of vectors of known length (cf. Section III). The motion direction of each sensor affects the direction of the corresponding vector, while the speed of motion determines the range of possible angles (constraints) for each vector. We also prove that this optimization problem is indeed NP-Hard in general (cf. Section IV) and show that minimizing the trace of the covariance matrix is equivalent to maximizing the minimum eigenvalue of its inverse (cf. Section III).

Two novel relaxation algorithms, modified Gauss–Seidel relaxation (MGSR) and linear programming (LP) relaxation (LPR), are proposed for solving this problem (cf. Section V), and it is shown through extensive simulation studies that the performance attainable with each of them is comparable to that obtained with a grid-based exhaustive search algorithm. While the computational complexity of exhaustive search is prohibitively large (exponential in the number of sensors), both proposed relaxation methods have complexity only linear in the number of sensors, and are thus well suited for real-time implementations. Additionally, the accuracy achieved by both MGSR and LPR is significantly better than that obtained when following a “random” motion strategy (cf. Section VI).

Following a brief review of related literature in Section II, we present the formulation of the target tracking problem in Section III. In Section IV, we show that the problem is NP-hard. We describe two proposed relaxation algorithms in Section V. Extensive simulation results are presented in Section VI. Finally, in Section VII, the conclusions of this paper are drawn, and future research directions are suggested.

SECTION II

Literature Review

Target tracking has received considerable attention (e.g., [8], [9]). In most cases, however, the sensors involved are static, and the emphasis is on the optimal processing of the available information rather than on the placement or repositioning of the sensors. The idea of choosing sensing locations in order to maximize information gain (also known as adaptive sensing or active perception [10]) has been applied to the problems of cooperative localization [11], simultaneous localization and mapping (SLAM) [10], parameter detection [12], [13], and optimal sensor selection [14]. In particular, for single-sensor target tracking using bearing-only measurements, significant work has been presented in [15], [16], [17], [18], [19]. In these cases, the most common optimization criterion used is the determinant of the Fisher information matrix (FIM) over a horizon of one or multiple time-steps.

Due to the key differences in the observation model when distance, instead of bearing, measurements are used and the implications on the selection process of the next best sensing location, we hereafter limit our discussion to single- and multisensor active sensing approaches that use distance and bearing or distance-only measurements to the target. The latter case is the main focus of our work.

A. Active Target Tracking—Distance and Bearing Measurements

Stroupe and Balch [20] propose an approximate tracking behavior, where the mobile sensors attempt to minimize the target's location uncertainty using distance and bearing measurements. The objective function is the determinant of the target position estimates' covariance matrix. The optimization process in this case does not consider the set of all possible trajectories. Instead, a greedy search is performed over the discretized set of candidate headings, separately for each sensor. Additionally, the expected information gain from the teammates' actions is approximated by assuming that the other sensors' measurements in the next time-step will be the same as those recorded at their current locations.

Olfati-Saber [21] addresses the problem of distributed target tracking for mobile sensor networks with a dynamic communication topology. The author tackles the network connectivity issue using a flocking-based mobility model, and presents a modified version of the distributed Kalman filter algorithm for estimating the target's state. In this case, the sensors use both distance and bearing measurements to a target that moves in 2-D with constant velocity driven by zero-mean Gaussian noise, and seek to minimize their distance to the target, while avoiding collisions. The proposed algorithm does not consider positioning information from previous time-steps.

Chung et al. [22] present a decentralized motion planning algorithm for solving the multisensor target-tracking problem using both distance and bearing measurements. The authors employ the determinant of the target's position covariance matrix as the cost function. The decentralized control law in this case is based on the gradient of the cost function with respect to each sensor's coordinates, with a constant step-size of 1. The authors, however, do not account for the speed constraints on the motion of the sensors. In addition, the convergence rate of the gradient-based method and the existence of local minima are not considered.

B. Active Target Tracking—Distance-Only Measurements

Contrary to [22], where sensors can transmit/receive information to/from all their teammates, in [23] they are confined to communicate with one-hop neighbors only. In this case, distance-only measurements are used, while both the trace and the determinant of the covariance matrix for the target's position estimates are considered as objective functions. The control law, with constant step size, is computed from the cost function's gradient with respect to each sensor's coordinates. As in [22], however, physical constraints on the motion of the sensors are not accounted for. Furthermore, the impact of the step-size selection on the convergence of the algorithm and the existence of local minima are not considered.

In [24], Martínez and Bullo address the problem of optimal sensor placement and motion coordination strategies for mobile sensor networks using distance-only measurements. In this case, all the sensors are assumed to be identical (i.e., same level of accuracy in the distance measurements). The authors consider the optimal sensor placement for (nonrandom) static target position estimation. The objective is to maximize the determinant of the FIM, or equivalently, minimize the determinant of the covariance matrix. However, the optimization process does not address the dynamic target case. Instead, the authors argue that the optimal sensor placement derived for the static target scenario is also expected to have good performance in the dynamic case. By not considering the prior estimates and assuming a homogeneous sensor team with no motion constraints, the optimal placement of the sensors can be computed analytically. The resulting control law requires that the sensors move on a polygon surrounding the static target, so that the vectors from the target to each sensor are uniformly spaced in terms of direction.

The main drawback of the previous approaches is that no constraints on the speed of the sensors are considered. Furthermore, their impact on the computational complexity of the optimization algorithm used is not examined. The only exception is the work presented in [20]. In that case, however, these constraints are used only to define the discretized region over which the heading of each sensor is optimized independently (i.e., each sensor determines its next sensing location without considering the constraints on the motion of its teammates).

In this paper, we address the problem of optimal target tracking using distance measurements collected from teams of heterogeneous sensors. In our formulation, we account for the existence of prior information, the impact of which can be appreciated from the simple example shown in Fig. 1(a) and (b). Furthermore, we consider constraints on the speed of the sensors and prove that their inclusion makes the problem NP-Hard.

SECTION III

Problem Formulation

Consider a group of mobile sensors (or robots) moving in a plane and tracking the position of a moving target by processing distance measurements. In this paper, we study the case of global tracking, i.e., the position of the target is determined with respect to a fixed (global) frame of reference, instead of a relative group-centered one. Hence, we hereafter employ the assumption that the global position and orientation (pose) of each of the tracking sensors are known with high accuracy (e.g., from GPS and compass measurements).

Furthermore, we consider the case where each sensor can move in 2-D with speed v_i, which is upper bounded by v_{i max}, i = 1,…,M, where M is the number of sensors. Therefore, at time-step k+1, sensor-i can only move within a circular region centered at its position at time-step k with radius r = v_{i max}δt (cf. Fig. 2), where δt is the time-step. Note also that since the motion of the target can be reliably predicted for the next time-step only, our objective is to determine the next best sensing locations for all sensors at every time-step.

Figure 2
Fig. 2. Sensor's and target's motion. Each sensor moves in 2-D with speed v_i, which is bounded by v_{i max}. From time-step k to k+1, the sensor can only move within a circular region centered at its position at time-step k with radius r = v_{i max}δt. ^{s_i}p_T is the target's position with respect to sensor-i. The distance measurement is the norm of ^{s_i}p_T plus noise.

In the next two sections, we present the target's state propagation equations and the sensors' measurement model.

A. State Propagation

In this paper, we employ the extended Kalman filter (EKF) for recursively estimating the target's state, x_T(k). This is defined as a vector of dimension 2N, where N−1 is the highest order time derivative of the position described by the motion model, and can include components such as position, velocity, and acceleration:

$$ {\bf x}_{T}(k)=[x_{T}(k)\; y_{T}(k)\; \dot{x}_{T}(k)\; \dot{y}_{T}(k)\; \ddot{x}_{T}(k)\; \ddot{y}_{T}(k)\; \ldots]^{\rm T}. \eqno{\hbox{(1)}} $$

We consider the case in which the target moves randomly, and assume that we know the stochastic model describing its motion (e.g., constant acceleration or constant velocity). However, as will become evident later on, neither of our sensing strategies depends on the particular selection of the target's motion model.

The discrete-time state propagation equation is

$$ {\bf x}_{T}(k+1)=\Phi_k{\bf x}_{T}(k)+G_k{\bf w}_{d}(k) \eqno{\hbox{(2)}} $$

where w_d is a zero-mean white Gaussian noise process with covariance Q_d = E[w_d(k) w_d^T(k)].

The estimate of the target's state is propagated by

$$ \hat{\bf x}_T(k+1\vert k)=\Phi_k\hat{\bf x}_T(k\vert k) \eqno{\hbox{(3)}} $$

where \hat{\bf x}_T(\ell\vert j) is the state estimate at time-step ℓ, after measurements up to time-step j have been processed.

The error-state covariance matrix is propagated as

$$ P_{k+1\vert k}=\Phi_k P_{k\vert k}\Phi_k^{\rm T}+G_kQ_dG_k^{\rm T} $$

where P_{ℓ|j} is the covariance of the error, \tilde{\bf x}_T(\ell\vert j)={\bf x}_T(\ell)-\hat{\bf x}_T(\ell\vert j), in the state estimate. The state transition matrix, Φ_k, and the process noise Jacobian, G_k, that appear in the preceding expressions depend on the motion model used [25]. In our paper, these can be arbitrary matrices, since no assumptions on their properties are imposed.
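As a concrete instance (illustrative only; the paper deliberately leaves Φ_k and G_k arbitrary), a constant-velocity model gives the familiar matrices below, with assumed values for the time-step and noise intensity:

```python
import numpy as np

dt = 0.1   # time-step delta-t (assumed)
q = 0.5    # process-noise intensity (assumed)

# Constant-velocity model: state x_T = [x_T, y_T, xdot_T, ydot_T]
Phi = np.block([[np.eye(2), dt * np.eye(2)],
                [np.zeros((2, 2)), np.eye(2)]])
G = np.vstack([0.5 * dt**2 * np.eye(2), dt * np.eye(2)])
Qd = q * np.eye(2)

def propagate(x_hat, P):
    """EKF propagation: mean through eq. (3), covariance through
    P_{k+1|k} = Phi P_{k|k} Phi^T + G Qd G^T."""
    return Phi @ x_hat, Phi @ P @ Phi.T + G @ Qd @ G.T
```

A target estimated at the origin moving with unit velocity along x propagates to [0.1, 0, 1, 0] after one step, while its covariance grows by the process-noise term.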

B. Measurement Model

At time-step k+1, each sensor of the team measures its distance to the target, as shown in Fig. 2, and therefore, the measurement equation is

$$ \eqalignno{{\bf z}(k+1)&=\left[\matrix{d_1(k+1)\cr \vdots\cr d_M(k+1)}\right]+\left[\matrix{n_1(k+1)\cr \vdots\cr n_M(k+1)}\right]\cr &={\bf d}(k+1)+{\bf n}(k+1) &\hbox{(4)}} $$

with (for i = 1,…,M)

$$ \eqalignno{d_i(k+1)&=\sqrt{^{s_i}{\bf p}_{T}(k+1)^{\rm T}\,^{s_i}{\bf p}_{T}(k+1)}\cr &=\sqrt{({\bf p}_{T}(k+1)-{\bf p}_{i}(k+1))^{\rm T}({\bf p}_{T}(k+1)-{\bf p}_{i}(k+1))}\cr &=\sqrt{(x_T(k+1)-x_i(k+1))^2+(y_T(k+1)-y_i(k+1))^2}} $$

where ^{s_i}p_T(k+1) is the position of the target with respect to sensor-i, and p_T(k+1) = [x_T(k+1) y_T(k+1)]^T, p_i(k+1) = [x_i(k+1) y_i(k+1)]^T are the positions of the target and the sensor, respectively, expressed in the global frame of reference. Note also that n_i(k+1) is the noise in the ith sensor's distance measurement, which is a zero-mean white Gaussian process, independent of the noise in other sensors, with variance E[n_i(k+1)n_j(k+1)] = σ_i^2 δ_{ij}, where δ_{ij} is the Kronecker delta.

The measurement equation (4) is a nonlinear function of the state variables. The measurement-error equation obtained by linearizing (4) is

$$ \eqalignno{\tilde{\bf z}(k+1\vert k)&={\bf z}(k+1)-\hat{\bf z}(k+1\vert k)\cr &\simeq H_{k+1}\tilde{\bf x}_T(k+1\vert k)+{\bf n}(k+1) &\hbox{(5)}} $$

where

$$ \eqalignno{\hat{\bf z}(k+1\vert k)&=[\hat{d}_{1}(k+1\vert k)\ \ldots\ \hat{d}_{M}(k+1\vert k)]^{\rm T}\cr \hat{d}_{i}(k+1\vert k)&=\sqrt{\widehat{\Delta x}_{Ti}^2(k+1\vert k)+\widehat{\Delta y}_{Ti}^2(k+1\vert k)}\cr \widehat{\Delta x}_{Ti}(k+1\vert k)&=\hat{x}_T(k+1\vert k)-x_i(k+1)\cr \widehat{\Delta y}_{Ti}(k+1\vert k)&=\hat{y}_T(k+1\vert k)-y_i(k+1)\cr \tilde{\bf x}_T(k+1\vert k)&={\bf x}_T(k+1)-\hat{\bf x}_T(k+1\vert k).} $$

Note that the measurement matrix in (5) has a block column structure, which is given by the following expression:

$$ H_{k+1}=\left[\matrix{H_{e,k+1} & {\bf 0}_{M\times(2N-2)}}\right] \eqno{\hbox{(6)}} $$

where 2N is the dimension of the state vector, and

$$ \eqalignno{H_{e,k+1}^{\rm T}&=\left[\matrix{\cos\theta_1(k+1) & \ldots & \cos\theta_M(k+1)\cr \sin\theta_1(k+1) & \ldots & \sin\theta_M(k+1)}\right] &\hbox{(7)}\cr \cos\theta_i(k+1)&={\widehat{\Delta x}_{Ti}(k+1\vert k)\over \hat{d}_{i}(k+1\vert k)} &\hbox{(8)}\cr \sin\theta_i(k+1)&={\widehat{\Delta y}_{Ti}(k+1\vert k)\over \hat{d}_{i}(k+1\vert k)}. &\hbox{(9)}} $$

The angle θ_i that appears in the preceding equations represents the bearing angle of sensor-i toward the estimated position of the target, expressed in global coordinates (cf. Fig. 3).
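A minimal sketch (ours; function and variable names are illustrative) of how H_{e,k+1} is assembled from the predicted relative positions, following (7)-(9):

```python
import numpy as np

def measurement_jacobian(p_T_hat, sensor_positions):
    """Build H_{e,k+1} of eq. (7): row i is [cos theta_i, sin theta_i],
    the unit vector from sensor-i toward the predicted target position."""
    rows = []
    for p_i in sensor_positions:
        dx = p_T_hat[0] - p_i[0]        # Delta-x_Ti, as in eq. (8)
        dy = p_T_hat[1] - p_i[1]        # Delta-y_Ti, as in eq. (9)
        d = np.hypot(dx, dy)            # predicted distance d_i
        rows.append([dx / d, dy / d])
    return np.array(rows)

# Sensor at (-3, 0) sees the target (at the origin) along +x,
# sensor at (0, -2) sees it along +y, so H_e is the 2x2 identity.
H_e = measurement_jacobian([0.0, 0.0], [[-3.0, 0.0], [0.0, -2.0]])
```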

Figure 3
Fig. 3. Geometric interpretation of the bearing angle constraints: Since the speed vi(k) of each sensor is bounded by vi max, the bearing angle θi(k+1) is constrained in the interval [θi min(k+1),θi max(k+1)].

C. State and Covariance Update

Once the distance measurements, z(k+1), from all the sensors are available, the target's state estimate and its covariance are updated as

$$ \eqalignno{\hat{\bf x}_T(k+1\vert k+1)&=\hat{\bf x}_T(k+1\vert k)+K_{k+1}\tilde{\bf z}(k+1\vert k)\cr P_{k+1\vert k+1}&=P_{k+1\vert k}-K_{k+1}S_{k+1}K_{k+1}^{\rm T} &\hbox{(10)}} $$

where K_{k+1} = P_{k+1|k} H_{k+1}^T S_{k+1}^{-1} is the Kalman gain, S_{k+1} = H_{k+1} P_{k+1|k} H_{k+1}^T + R is the measurement residual covariance, and R = diag(σ_i^2) is the measurement noise covariance.
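The update step (10) in a minimal NumPy sketch (illustrative; names are ours, and for clarity the residual covariance is inverted directly rather than solved):

```python
import numpy as np

def ekf_update(x_hat, P, z, z_hat, H, R):
    """EKF update, eq. (10): gain K = P H^T S^{-1} with residual
    covariance S = H P H^T + R; covariance shrinks by K S K^T."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x_hat + K @ (z - z_hat)
    P_new = P - K @ S @ K.T
    return x_new, P_new
```

For example, with P = I, a single measurement of the first state (H = [1 0], R = 1, residual 1) moves the estimate halfway and halves the corresponding variance.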

Our objective in this paper is to determine the active sensing strategy that minimizes the uncertainty for the position estimate of the target. In order to account for the impact of the prior state estimates on the motion of the sensors, we first prove the following lemma.

Lemma 1: The posterior (updated) covariance for the target's position estimate depends only on: 1) the prior (propagated) covariance submatrix of the target's position (i.e., it is independent of the uncertainty in the estimates of higher order time derivatives of the position, such as velocity, acceleration, etc., and hence, it is independent of the target's motion model) and 2) the measurement information matrix corresponding to the target's position, i.e.,

$$ P_{k+1\vert k+1,11}=\left(\left(P_{k+1\vert k,11}\right)^{-1}+H_{e,k+1}^{\rm T}R^{-1}H_{e,k+1}\right)^{-1}. \eqno{\hbox{(11)}} $$

Proof: The covariance matrices appearing in (11) are defined based on the following partition:

$$ P_{\ell\vert j}=\left[\matrix{P_{\ell\vert j,11} & P_{\ell\vert j,12}\cr P_{\ell\vert j,12}^{\rm T} & P_{\ell\vert j,22}}\right] \eqno{\hbox{(12)}} $$

where the 2 × 2 matrix P_{ℓ|j,11} denotes the covariance for the target's position estimate, \hat{\bf p}_T(\ell\vert j), at time-step ℓ given measurements up to time-step j.

Employing the matrix inversion lemma, the covariance update equation [cf. (10)] can be written as

$$ P^{-1}_{k+1\vert k+1}=P^{-1}_{k+1\vert k}+H_{k+1}^{\rm T}R^{-1}H_{k+1}. \eqno{\hbox{(13)}} $$

Note that if the state vector contains only the position of the target, then (11) is identical to (13).

In the general case, when the state vector also contains higher order time derivatives of the position (e.g., velocity, acceleration, etc.), substituting

$$ P^{-1}_{k+1\vert k}=\left[\matrix{A_{11} & A_{12}\cr A_{12}^{\rm T} & A_{22}}\right] \eqno{\hbox{(14)}} $$

and

$$ H_{k+1}^{\rm T}R^{-1}H_{k+1}=\left[\matrix{H_{e,k+1}^{\rm T}R^{-1}H_{e,k+1} & {\bf 0}_{2\times(2N-2)}\cr {\bf 0}_{(2N-2)\times 2} & {\bf 0}_{(2N-2)\times(2N-2)}}\right] $$

on the right-hand side of (13) yields

$$ P_{k+1\vert k+1}=\left[\matrix{A_{11}+H_{e,k+1}^{\rm T}R^{-1}H_{e,k+1} & A_{12}\cr A_{12}^{\rm T} & A_{22}}\right]^{-1}. \eqno{\hbox{(15)}} $$

Employing the properties of the Schur complement [26] for the inversion of the partitioned matrices in (15) and (14), we obtain

$$ \eqalignno{P_{k+1\vert k+1,11}&=\left(A_{11}+H_{e,k+1}^{\rm T}R^{-1}H_{e,k+1}-A_{12}A_{22}^{-1}A_{12}^{\rm T}\right)^{-1}\cr &=\left(\left(P_{k+1\vert k,11}\right)^{-1}+H_{e,k+1}^{\rm T}R^{-1}H_{e,k+1}\right)^{-1}.} $$

The importance of this lemma is that both optimization algorithms presented in Section V can be derived based on (11) for the position covariance update—instead of (10) or (13) for the whole state covariance update—regardless of the stochastic process model employed for describing the target's motion.
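Lemma 1 is also easy to check numerically. The sketch below (ours, with assumed random values) compares the position block of the full-state information-form update (13) against the position-only update (11):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random SPD 4x4 prior covariance (position + velocity) and two sensors
A = rng.standard_normal((4, 4))
P_prior = A @ A.T + 4 * np.eye(4)
thetas = np.array([0.3, 2.0])               # bearing angles to the target
H_e = np.column_stack([np.cos(thetas), np.sin(thetas)])
H = np.hstack([H_e, np.zeros((2, 2))])      # eq. (6): zero block for velocity
R = np.diag([0.04, 0.09])

# Full-state update in information form, eq. (13)
P_post = np.linalg.inv(np.linalg.inv(P_prior) + H.T @ np.linalg.inv(R) @ H)

# Position-only update of Lemma 1, eq. (11)
P11 = np.linalg.inv(np.linalg.inv(P_prior[:2, :2])
                    + H_e.T @ np.linalg.inv(R) @ H_e)

# The two agree: the posterior position covariance never sees the
# velocity entries of the prior.
```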

In the next section, we formulate the sensors' one-step-ahead optimal motion strategy as a constrained optimization problem and show that it can be exactly reformulated as that of minimizing the norm of the sum of a set of vectors of known length with constraints imposed on their directions.

D. Problem Statement and Reformulation

As evident from (7)–(9) and (11), after each update step, the target's position covariance matrix will depend on all the next sensors' positions p_i(k+1) = [x_i(k+1) y_i(k+1)]^T, i = 1,…,M. Assuming that at time-step k, sensor-i is at location p_i(k) = [x_i(k) y_i(k)]^T and moves with speed v_i(k), at time-step k+1, its position will be

$$ \eqalignno{x_i(k+1)&=x_i(k)+v_i(k)\delta t\,\cos\varphi_i(k) &\hbox{(16)}\cr y_i(k+1)&=y_i(k)+v_i(k)\delta t\,\sin\varphi_i(k) &\hbox{(17)}} $$

where φ_i(k) ∊ [0,2π) is the heading direction of the sensor. We thus see that given the current sensor positions, p_i(k), the covariance for the target's position estimate after the update [cf. (11)] is a function of the sensors' speeds, v_i(k), and motion directions, φ_i(k).

The problem we address in this paper is that of determining the sensors' optimal motion strategy, i.e., the set {φ_1(k),…,φ_M(k), v_1(k),…,v_M(k)}, that minimizes the trace of the target's position estimate covariance matrix. Based on the following lemma, we first show that minimizing the trace of the (posterior) covariance matrix requires optimization with respect to the bearing directions of the sensors toward the estimated position of the target, while the speed of each sensor only affects the constraints imposed on this problem.

Lemma 2: The following two optimization problems are equivalent.

Optimization Problem 1 (Π1):

$$ \eqalignno{&\mathop{\rm minimize}\limits_{\varphi_1(k),\ldots,\varphi_M(k),\,v_1(k),\ldots,v_M(k)}\quad {\rm tr}(P_{k+1\vert k+1,11})\cr &\quad{\rm s.t.}\quad 0\leq v_i(k)\leq v_{i{\rm max}}\qquad\forall i=1,\ldots,M.} $$

Optimization Problem 2 (Π2):

$$ \eqalignno{&\mathop{\rm minimize}\limits_{\theta_1(k+1),\ldots,\theta_M(k+1)}\quad {\rm tr}(P_{k+1\vert k+1,11}) &\hbox{(18)}\cr &\quad{\rm s.t.}\quad \vert\theta_i(k+1)-\theta^{\prime}_{i}(k)\vert\leq\eta_{i{\rm max}}(k)\qquad\forall i=1,\ldots,M} $$

with

$$ \eqalignno{\eta_{i{\rm max}}(k)&=\arcsin\left({v_{i{\rm max}}\delta t\over \hat{d}^{\prime}_{i}(k)}\right) &\hbox{(19)}\cr \hat{d}^{\prime}_{i}(k)&=\sqrt{(\hat{x}_T(k+1\vert k)-x_i(k))^2+(\hat{y}_T(k+1\vert k)-y_i(k))^2} &\hbox{(20)}\cr \theta^{\prime}_{i}(k)&={\rm Atan2}(\hat{y}_T(k+1\vert k)-y_i(k),\,\hat{x}_T(k+1\vert k)-x_i(k)) &\hbox{(21)}} $$

where (cf. Fig. 3) \hat{d}^{\prime}_{i}(k) and θ′_i(k) are the distance and bearing angle from the current location of sensor-i, p_i(k), to the next (predicted) position of the target, \hat{\bf p}_T(k+1\vert k).

Proof: Since the measurement matrix H_{e,k+1} [cf. (7)], and hence, the posterior covariance matrix [cf. (11)], has an explicit form in terms of the bearing angles, θ_i(k+1), toward the estimated target position, minimizing the trace of the covariance matrix can be performed using θ_i(k+1), i = 1,…,M, as the optimization variables, instead of the heading direction, φ_i(k), or speed, v_i(k), of each sensor. Note, however, that although the variables {φ_1(k),…,φ_M(k)} are unconstrained, the bearing angles, {θ_1(k+1),…,θ_M(k+1)}, are constrained by the fact that the speed, v_i(k), of each sensor is bounded by v_{i max}. Our objective here is to determine the constraints on the new optimization variables θ_i(k+1) and reveal their relation to v_{i max}.

Consider the geometry of this problem shown in Fig. 3. At time-step k, sensor-i is located at p_i(k) = [x_i(k) y_i(k)]^T and predicts, based on the motion model [cf. (3)], that the target will move to \hat{\bf p}_T(k+1\vert k). Assume that sensor-i moves with speed v_i and reaches a point p_i(k+1) = [x_i(k+1) y_i(k+1)]^T located on a circle of radius r = v_iδt, centered at its previous position p_i(k) (cf. Fig. 3, for v_i = v_{i max}), which does not include the target. From point E (i.e., the target's estimated location at time-step k+1), we draw two lines tangent to the circle where sensor-i will move to. The tangent points A and B correspond to the extreme values of the bearing angle that define the constraints on θ_i(k+1), i.e., θ_{i min}(k+1) ≤ θ_i(k+1) ≤ θ_{i max}(k+1), with

$$ \eqalignno{\theta_{i{\rm min}}(k+1)&=\theta_{i}^{\prime}(k)-\eta_i(k) &\hbox{(22)}\cr \theta_{i{\rm max}}(k+1)&=\theta_{i}^{\prime}(k)+\eta_i(k) &\hbox{(23)}\cr \eta_{i}(k)&=\arcsin\left({v_{i}(k)\delta t\over \hat{d}^{\prime}_{i}(k)}\right) &\hbox{(24)}} $$

where (24) results from the sine relation in the right triangle ADE, while (22) is derived from the expression for the angle θ′_i(k), external to the triangle ACE (note that (23) can be easily derived in a similar manner from the geometry of the problem).

Since the inverse-sine function [cf. (24)] is monotonically increasing within the interval of concern (0 < η_i(k) < π/2), the angle η_i(k) is maximized when r = v_{i max}δt, which corresponds to v_i = v_{i max} for sensors moving with bounded speed. For η_i(k) = η_{i max}(k) [cf. (19)], the range of values of the bearing angles θ_i(k+1) is maximized (i.e., the constraints on the bearing angles are most relaxed), which leads to a smaller or equal minimum value for the objective function (covariance trace) compared to when η_i(k) < η_{i max}(k). Therefore, the speeds of all sensors are set to their maximum values, and the optimization is performed with respect to the bearing angles θ_i(k+1) within the constraints defined by (22) and (23).
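The constraint interval of Lemma 2 can be computed directly from (19)-(23). A minimal sketch (ours, with assumed values; the clamp to 1.0 is our guard for the degenerate case v_{i max}δt ≥ d̂′_i, where the sensor can reach the predicted target position and the bearing is effectively unconstrained):

```python
import numpy as np

def bearing_constraints(p_i, p_T_pred, v_max, dt):
    """Bearing-angle interval for sensor-i: it can realize any bearing
    within theta'_i +/- eta_imax toward the predicted target position."""
    dx = p_T_pred[0] - p_i[0]
    dy = p_T_pred[1] - p_i[1]
    d = np.hypot(dx, dy)                    # d'_i(k), eq. (20)
    theta_p = np.arctan2(dy, dx)            # theta'_i(k), eq. (21)
    ratio = min(v_max * dt / d, 1.0)        # guard: reachable target
    eta_max = np.arcsin(ratio)              # eta_imax(k), eq. (19)
    return theta_p - eta_max, theta_p + eta_max  # eqs. (22)-(23)

# Sensor at the origin, predicted target 10 m away along +x, v_max*dt = 5:
# ratio = 0.5, so eta_imax = pi/6 and the interval is [-30 deg, +30 deg].
lo, hi = bearing_constraints([0.0, 0.0], [10.0, 0.0], v_max=5.0, dt=1.0)
```

Faster sensors (or closer targets) widen the interval, which is exactly why the proof sets every v_i to v_{i max}.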

Corollary 1: Given the optimal bearing angle θ_i(k+1), the optimal heading directions, φ_i(k) and φ′_i(k), of sensor-i (cf. Fig. 3) are computed from the following relations:

$$ \eqalignno{\varphi_i(k)&=\theta_i(k+1)+\xi_i(k) &\hbox{(25)}\cr \varphi_i^{\prime}(k)&=\theta_i(k+1)+\pi-\xi_i(k) &\hbox{(26)}} $$

where

$$ \xi_i(k)=\arcsin\left({(\hat{y}_T(k+1\vert k)-y_i(k))\cos\theta_i(k+1)-(\hat{x}_T(k+1\vert k)-x_i(k))\sin\theta_i(k+1)\over v_i(k)\delta t}\right). \eqno{\hbox{(27)}} $$

Of these two equivalent solutions, sensor-i should choose the one that brings it closer to the target, so as to increase the probability of redetection later on.

Proof: The proof is described in [27].

At this point, we should note that the preceding analysis is not limited to the case of sensors moving with constant speed during each time-step. In fact, Lemma 2 can be directly applied to any higher order sensor motion model. For example, if a second-order model with bounded acceleration a_i(k) ≤ a_{i max} were used to describe the sensors' motion, then maximizing η_i(k), or equivalently r = v_i(k)δt + (1/2)a_i(k)δt², would require that the sensors move with maximum acceleration.

From here on, we turn our attention to determining the optimal bearing angles to the estimated target position given the constraints of Lemma 2. Before showing the final result of this section, we first prove the following properties for the objective function of the optimization problem.

Lemma 3: In the optimal target tracking problem using distance-only measurements, minimizing the trace of the target position estimates' covariance matrix is equivalent to:

  1. maximizing the determinant of its inverse;

  2. maximizing the minimum eigenvalue of its inverse;

  3. minimizing the difference of its eigenvalues

$$ \eqalignno{&\quad \mathop{\rm minimize}\limits_{\theta_1,\ldots,\theta_M}\ {\rm tr}(P_{k+1\vert k+1,11})\cr \mathop{\Leftrightarrow}\limits^{\rm (a)}&\quad \mathop{\rm maximize}\limits_{\bar\theta_1,\ldots,\bar\theta_M}\ \det((P_{k+1\vert k+1,11})^{-1})\cr \mathop{\Leftrightarrow}\limits^{\rm (b)}&\quad \mathop{\rm maximize}\limits_{\bar\theta_1,\ldots,\bar\theta_M}\ \mu_{\rm min}((P_{k+1\vert k+1,11})^{-1})\cr \mathop{\Leftrightarrow}\limits^{\rm (c)}&\quad \mathop{\rm minimize}\limits_{\bar\theta_1,\ldots,\bar\theta_M}\ \left(\mu_{\rm max}(P_{k+1\vert k+1,11})-\mu_{\rm min}(P_{k+1\vert k+1,11})\right)} $$

where \bar\theta_i = \theta_i - \theta_0, i = 1,…,M, θ_0 is a constant defined from the 2 × 2 unitary (rotational) matrix appearing in the singular value decomposition of P_{k+1|k,11} [cf. (29) and (30)], and μ_min(⋅) and μ_max(⋅) denote the minimum and the maximum eigenvalues of their matrix arguments, respectively.

Proof: (a) Since Pk+1│ k+1,11 is a 2 × 2 matrix, it is trivial to prove that$${\rm{tr}}(P_{k+1\vert k+1,11}) = {{\rm{tr}}((P_{k+1\vert k+1,11})^{-1})\over {\rm{det}}((P_{k+1\vert k+1,11})^{-1})}.\eqno{\hbox{(28)}}$$Thus, for completing the proof of (a), it suffices to compute the inverse of the position covariance matrix Pk+1│ k+1,11 and show that its trace is constant.

Note that since the covariance matrix Pk+1│ k for the state estimates is symmetric positive semidefinite, so is the covariance matrix Pk+1│ k,11 of the target's position estimates. The singular value decomposition of (Pk+1│ k,11)−1 yields$$(P_{k+1\vert k,11})^{-1}=U \Sigma^{-1} U^{\rm T}\eqno{\hbox{(29)}}$$where Σ−1 = diag(μ1′, μ2′), μ1′ ≥ μ2′ ≥ 0, and$$U =\left[\matrix{{\cos}\, \theta_0 & -{\sin}\,\theta_0 \cr{\sin}\,\theta_0 & {\cos}\,\theta_0}\right]\; {\rm with }\; UU^{\rm T}=U^{\rm T}U=I_{2 \times 2}.\eqno{\hbox{(30)}}$$Substituting (29) in the right-hand side of (11), we have$$\eqalignno{P_{k+1\vert k+1,11} & = (U \Sigma^{-1} U^{\rm T} + H_{e, k+1}^{\rm T}R^{-1} H_{e, k+1})^{-1} \cr& = U(\Sigma^{-1} + H_{n, k+1}^{\rm T}R^{-1}H_{n, k+1})^{-1} U^{\rm T}\cr& = U {\cal I}^{-1} U^{\rm T}}$$or equivalently$$(P_{k+1\vert k+1,11})^{-1} = U {\cal I} U^{\rm T}\eqno{\hbox{(31)}}$$where$$H_{n, k+1} = H_{e, k+1} U = \left[\matrix{{\rm cos}\ \bar\theta_1 & … & {\rm cos}\ \bar\theta_M \cr{\rm sin}\ \bar\theta_1 & … & {\rm sin}\ \bar\theta_M}\right]^{\rm T}$$with θ̄i = θi − θ0, and$${\cal I} = \left[\matrix{\mu_1^{\prime} +\sum_{i=1}^M\sigma_i^{-2}{\rm cos}^2\ \bar\theta_i & \sum_{i=1}^M\sigma_i^{-2}{\rm cos}\ \bar\theta_i {\rm sin}\,\bar\theta_i \cr\sum_{i=1}^M\sigma_i^{-2}{\rm cos}\ \bar\theta_i {\rm sin}\,\bar\theta_i & \mu_2^{\prime} +\sum_{i=1}^M\sigma_i^{-2}{\rm sin}^2\ \bar\theta_i}\right].\eqno{\hbox{(32)}}$$Substituting (32) in (31) and noting that similarity transformations do not change the trace of a matrix, yields$${\rm tr}((P_{k+1\vert k+1,11})^{-1}) = {\rm tr} ({\cal I}) = \mu_1^{\prime} + \mu_2^{\prime}+\sum_{i=1}^M\sigma_i^{-2} = c\eqno{\hbox{(33)}}$$which is constant.

(b) Let μ2 ≔ μmin((Pk+1│ k+1,11)−1) ≤ μ1 ≔ μmax ((Pk+1│ k+1,11)−1) be the minimum and maximum eigenvalues of the inverse covariance matrix for the position estimates. Based on the relations$$\eqalignno{{\rm tr}((P_{k+1\vert k+1,11})^{-1}) & = \mu_1 + \mu_2 = c& \hbox{(34)}\cr{\rm det}((P_{k+1\vert k+1,11})^{-1})& = \mu_1 \mu_2& \hbox{(35)}}$$we have$$\eqalignno{{{\rm maximize}}\ {\rm det}((P_{k+1\vert k+1,11})^{-1})\quad & \Leftrightarrow\quad {{\rm maximize}} \ (\mu_1 \mu_2)\cr\Leftrightarrow \quad {\rm{minimize}} \ (- 4\mu_1 \mu_2)\quad & \Leftrightarrow \quad {\rm{minimize}}\ (c^2 - 4\mu_1 \mu_2) \cr\Leftrightarrow \quad {\rm{minimize}} \ (\mu_1 - \mu_2)^2 \quad & \Leftrightarrow\quad {\rm{minimize}}\ (\mu_1 - \mu_2) \cr\Leftrightarrow \quad {\rm{minimize}} \ (2 \mu_1 - c)\quad & \Leftrightarrow\quad {\rm{minimize}} \ (\mu_1).}$$(c) Note that μmax(Pk+1│ k+1,11) = 1/μ2 and μmin(Pk+1│ k+1,11) = 1/μ1, and [cf. (34)]$$\eqalignno{{\rm{minimize}} \quad \left({1\over \mu_2} - {1\over \mu_1}\right)\quad & \Leftrightarrow\quad {\rm{minimize}} \quad {\mu_1 - \mu_2\over \mu_1 \mu_2}\cr& \Leftrightarrow\quad {\rm{minimize}} \quad {2 \mu_1 - c\over -\mu_1^2 + c\mu_1}.}$$However, this last quantity is a monotonically increasing function of μ1 within the interval of concern [c/2, c] (from (34), it is μ2 ≤ c/2 ≤ μ1 ≤ c). Therefore, minimizing it is equivalent to minimizing μ1, which, based on the result of (b), is equivalent to maximizing the determinant of the inverse covariance matrix.
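The scalar identities underlying steps (a) and (b) are easy to check numerically. The snippet below is an illustrative sketch (the function name `trace_via_inverse` is ours, not the paper's): it verifies (28) for a random 2 × 2 symmetric positive-definite matrix, and the fixed-trace relation det = (c² − (μ1 − μ2)²)/4 that drives step (b).

```python
import numpy as np

# Numeric check of the 2x2 identities used in Lemma 3:
# tr(P) = tr(P^{-1}) / det(P^{-1})  [cf. (28)], and, for a fixed trace c of the
# inverse, det = mu_1 mu_2 = (c^2 - (mu_1 - mu_2)^2) / 4  [cf. (34)-(35)].
rng = np.random.default_rng(0)

def trace_via_inverse(P):
    """Evaluate tr(P^{-1}) / det(P^{-1}) for a 2x2 SPD matrix P."""
    P_inv = np.linalg.inv(P)
    return np.trace(P_inv) / np.linalg.det(P_inv)

A = rng.standard_normal((2, 2))
P = A @ A.T + np.eye(2)          # random SPD matrix
assert np.isclose(np.trace(P), trace_via_inverse(P))

# With mu_1 + mu_2 = c fixed, a smaller eigenvalue gap gives a larger
# determinant, which is why (b) reduces to shrinking mu_1 - mu_2.
c = 4.0
gaps = np.linspace(0.0, c, 50)
dets = (c**2 - gaps**2) / 4.0
assert np.all(np.diff(dets) < 0)  # determinant decreases as the gap grows
```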

The key result of this section is described by the following lemma.

Lemma 4: The optimal motions of a group of sensors estimating the position of a moving target can be determined by solving the following constrained optimization problem.

Optimization Problem 3 (Π3):$$\eqalignno{& \mathop{\rm minimize}\limits_{\bar\theta_1,…,\bar\theta_M} \quad \left\Vert \lambda_0 + \sum_{i=1}^M\lambda_i{\rm exp}\left(j2\bar\theta_i\right)\right\Vert _2& \hbox{(36)}\cr& {\rm{s.t.}} \quad \bar\theta_{i{\rm min}} ≤\bar\theta_i≤\bar\theta_{i{\rm max}}, \quad \forall i=1,…, M& \hbox{(37)}}$$with j = √(−1) and [cf. (29) and (30)]$$\eqalignno{\lambda_0&=\mu_1^{\prime}-\mu_2^{\prime}\geq0,\, \quad \lambda_i=\sigma_i^{-2} > 0,\, i=1, …, M \cr \bar\theta_{i{\rm min}} &= \theta_{i{\rm min}} - \theta_0,\, \bar\theta_{i{\rm max}} = \theta_{i{\rm max}} - \theta_0& \hbox{(38)}}$$or equivalently$$\eqalignno{& \mathop{\rm minimize}\limits_{\bar\theta_1,…,\bar\theta_M} \quad \left\Vert \sum_{i=0}^M {\bf v}_i \right\Vert _2& \hbox{(39)}\cr& {\rm{s.t.}} \quad \bar\theta_{i{\rm min}} ≤\bar\theta_i≤\bar\theta_{i{\rm max}} \quad \forall i=1,…, M& \hbox{(40)}}$$with (for i = 1, …, M)$${\bf v}_0=[\lambda_0 \quad 0]^{\rm T}, \,{\bf v}_i=[\lambda_i{\rm cos}\,2\bar\theta_i \quad \lambda_i{\rm sin}\,2\bar\theta_i]^{\rm T}.$$

Proof: We first note that the constraints of (37) are the same as the ones for the variables θi of the second optimization problem in Lemma 2, transformed to the new variables θ̄i = θi − θ0. To prove the equivalence between the objective functions in (36) and (18), we rely on the equivalence between minimizing the trace of the covariance matrix and maximizing the determinant of the inverse covariance matrix, shown in Lemma 3, and proceed as follows.

Substituting (32) in (31), and employing the trigonometric identities cos²θ̄i = (1 + cos 2θ̄i)/2, sin²θ̄i = (1 − cos 2θ̄i)/2, and cos θ̄i sin θ̄i = (1/2) sin 2θ̄i, we have$${\rm det}((P_{k+1\vert k+1,11})^{-1}) = {\rm det} ({\cal I}) = d_c - {1\over 4} d_{\bar\theta}\eqno{\hbox{(41)}}$$where$$d_c = \left(\mu_1^{\prime}+{1\over 2}\sum_{i=1}^M\sigma_i^{-2}\right)\left(\mu_2^{\prime}+{1\over 2}\sum_{i=1}^M\sigma_i^{-2}\right)+{1\over 4}(\mu_1^{\prime}-\mu_2^{\prime})^2$$is constant, and$$\eqalignno{d_{\bar{\bf \theta}} & = \left(\left(\mu_1^{\prime}-\mu_2^{\prime}\right)+\sum_{i=1}^M\sigma_i^{-2}{\rm cos}\,2\bar\theta_i\right)^2 + \left(\sum_{i=1}^M\sigma_i^{-2}{\rm sin}\,2\bar\theta_i\right)^2\cr& = \left\Vert \lambda_0 + \sum_{i=1}^M\lambda_i{\rm exp}\left(j2\bar\theta_i\right)\right\Vert _2^2 = \left\Vert \sum_{i=0}^M {\bf v}_i \right\Vert _2^2.& \hbox{(42)}}$$From (41), we conclude that maximizing the determinant of the inverse covariance matrix is equivalent to minimizing the quantity dθ̄, i.e., the norm of the sum of the vectors vi, i = 0, …, M.

We thus see that the original problem of minimizing the trace of the covariance matrix of the target's position estimate (cf. Lemma 2) is exactly reformulated to that of minimizing the norm of the sum of M+1 vectors in 2-D (cf. Lemma 4). Note that although the vector v0 = [λ0 0]T remains constant (affixed to the positive x semiaxis), each of the vectors vi, i = 1, …, M, has fixed length λi, but its direction can vary under the constraints described by (37). This geometric interpretation is depicted in Fig. 4.

Figure 4
Fig. 4. Geometric interpretation of the optimal motion strategy problem: The M+1 vectors shown have fixed lengths λi, i = 0, …, M. The vector v0 is affixed to the positive x semiaxis, while the direction of each of the vectors vi, i = 1, …, M can change within the interval denoted by the enclosing dashed lines, based on the motion of the corresponding sensor. The objective is to find the directions of the vectors vi, i = 1, …, M, directly related to the optimal heading directions of the sensors, that minimize the Euclidean norm of Σi=0M vi.
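As a concrete illustration of this reformulation, the objective (39) can be evaluated directly as the norm of a planar vector sum. The sketch below uses our own naming (`vector_sum_norm`) and an arbitrary choice of λ values, not ones from the paper:

```python
import numpy as np

# Sketch of the geometric objective (39): the norm of the sum of M+1 planar
# vectors, where v_0 is affixed to the +x semiaxis and each v_i points along
# the doubled bearing angle 2*theta_bar_i.
# lam holds [lambda_0, ..., lambda_M]; theta_bar holds the M free angles.
def vector_sum_norm(lam, theta_bar):
    v = np.array([lam[0], 0.0])                 # v_0: fixed direction
    for lam_i, th in zip(lam[1:], theta_bar):
        v += lam_i * np.array([np.cos(2 * th), np.sin(2 * th)])
    return np.linalg.norm(v)

# Two sensors with equal weights: placing the vectors at 2*theta = +/- 2pi/3
# cancels lambda_0 = 1 exactly, so the objective attains its global minimum 0.
assert np.isclose(vector_sum_norm([1.0, 1.0, 1.0], [np.pi / 3, -np.pi / 3]), 0.0)
```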
SECTION IV

Computational Complexity

We now analyze the complexity of the optimization problem described in Lemma 4. The main result of this section is that the problem of determining the optimal constrained motion for a team of M > 1 mobile sensors tracking a moving target using distance-only measurements is NP-Hard in general (cf. Section IV-B).

Before considering the general case of multiple sensors, however, we first focus on determining the optimal solution when only a single sensor tracks the target. The main reason for this is that the closed-form solution derived for this case is extended and generalized to form the basis of the MGSR algorithm presented in Section V.

A. Single-Sensor Target Tracking: Closed-Form Solution

For M = 1, the optimization problem described by (36) simplifies to$$\eqalignno{& \mathop{\rm minimize}\limits_{\bar\theta_1} \Vert \lambda_0+\lambda_1{\rm exp}(j2\bar\theta_1)\Vert _2 \Leftrightarrow \mathop{\rm minimize}\limits_{\bar\theta_1} \Vert {\bf v}_0+{\bf v}_1\Vert _2 \cr& \quad {\rm{s.t.}} \quad \bar\theta_{1{\rm min}}≤\bar\theta_1≤\bar\theta_{1{\rm max}}}$$with v0 = λ0 [1 0]T and v1 = [λ1 cos 2θ̄1  λ1 sin 2θ̄1]T.

This norm-minimization problem can be solved trivially by maximizing the angle between the two vectors (i.e., setting 2θ̄1 as close to an odd multiple of π as possible, while satisfying the constraints). The closed-form solution for the optimal value is$$\bar\theta_{1}^{\ast}= \left\{\matrix{\displaystyle{n \pi\over 2},\hfill &\displaystyle \hbox{if}\; {n \pi\over 2} \in \left[\bar\theta_{1{\rm min}}, \bar\theta_{1{\rm max}} \right],\, n\ \hbox{is odd} \hfill\hfill \cr\displaystyle\bar\theta_{1{\rm min}},\hfill &\displaystyle \hbox{if}\; {n \pi\over 2} \notin \left[\bar\theta_{1{\rm min}}, \bar\theta_{1{\rm max}} \right] \hbox{and}\hfill\cr&\displaystyle \quad \Big\vert {n \pi\over 2} - \bar\theta_{1{\rm min}} \Big\vert ≤ \Big\vert {n\pi\over 2} - \bar\theta_{1{\rm max}} \Big\vert \hfill \cr\bar\theta_{1{\rm max}},\hfill & \hbox{otherwise}.\hfill}\right.\eqno{\hbox{(43)}}$$

Once θ̄1∗ is determined, the optimal heading is computed as θ1∗ = θ̄1∗ + θ0.
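A minimal sketch of this closed-form rule follows. The helper name `optimal_bearing` and its midpoint-based choice of the odd multiple nπ/2 are our illustrative reading of (43), not the authors' code:

```python
import numpy as np

# Closed-form single-sensor solution sketched from (43): pick the odd multiple
# of pi/2 inside the constraint interval if one exists; otherwise take the
# endpoint closest to the nearest odd multiple of pi/2.
def optimal_bearing(theta_min, theta_max):
    mid = 0.5 * (theta_min + theta_max)
    n = 2 * round((mid / (np.pi / 2) - 1) / 2) + 1   # odd n nearest the midpoint
    target = n * np.pi / 2
    if theta_min <= target <= theta_max:
        return target
    if abs(target - theta_min) <= abs(target - theta_max):
        return theta_min
    return theta_max

# Unobstructed case: pi/2 lies inside [0, pi], so it is optimal.
assert np.isclose(optimal_bearing(0.0, np.pi), np.pi / 2)
# Constrained case: [0, pi/4] excludes pi/2, so the closer endpoint is returned.
assert np.isclose(optimal_bearing(0.0, np.pi / 4), np.pi / 4)
```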

Intuitively, the result of (43) can be explained as follows: Recall that θ0 is the direction of the eigenvector u1 = [cos θ0 sin θ0]T corresponding to the maximum eigenvalue, μ1′, of the prior information matrix (Pk+1│ k,11)−1, while (θ0+π/2) is the direction of eigenvector u2 = [cos (θ0+π/2) sin (θ0+π/2)]T corresponding to the minimum eigenvalue, μ2′, of (Pk+1│ k,11)−1 [cf. (29) and (30)]. When only one sensor is available, it should always move so that the new measurement contributes information along (or as close as possible to) the direction where the least information is available. This is best achieved when θ̄1 = π/2, and hence, θ1 = θ0 +π/2.

Notice that the solution described by (43) for one sensor can be adapted and generalized to determine the motion of multiple sensors. In such a case, the objective function [cf. (36)] is sequentially minimized over each variable θ̄i separately, while considering the remaining bearings (i.e., θ̄κ, κ ≠ i) as constant during that step. In fact, our MGSR algorithm follows this idea, and its solution at each iteration has a closed form similar to (43).

B. Multisensor Target Tracking: NP-Hardness

As shown in [27], the objective function in (18), and equivalently, in (36), is nonconvex in the optimization variables θ̄1, …, θ̄M. More importantly, in this section, we prove that the problem of determining the optimal constrained motion of multiple sensors tracking a moving target with range measurements is NP-Hard in general (cf. Theorem 1). We proceed by first considering the following well-known NP-Complete problem [28, Ch. 3].

• Partition Problem

Given M positive integers λ1, …, λM, determine whether there exist ζi ∊ {−1, +1}, i = 1, …, M, such that Σi=1M λiζi = 0.

and

• Optimization Problem 3′ (Π3′)

$$\eqalignno{\mathop{\rm minimize}\limits_{\bar\theta_1,…,\bar\theta_M} \quad & \left(\left(\sum_{i=1}^M \lambda_i{\rm cos}\ 2\bar\theta_i \right)^2+\left(\sum_{i=1}^M \lambda_i{\rm sin}\ 2\bar\theta_i \right)^2\right)^{1\over 2}\qquad& \hbox{(44)} \cr{\rm{s.t.}} \quad 0 & ≤\bar\theta_i≤\pi/2,\quad \forall i=1,…, M \cr& \quad \lambda_i\in{\bb Z}^+,\quad \forall i=1,…, M& \hbox{(45)}}$$which is an instance of the optimization problem Π3 described by (36) and (37), for λ0 = 0, θ̄i min = 0, θ̄i max = π/2, and λi ∊ ℤ+.

Proving by restriction [28, Ch. 3] that Π3 is NP-Hard, in general, requires showing that solving Π3′, which is a special case of Π3, is equivalent to solving the partition problem. Since the partition problem is NP-Complete, it will follow that the general problem Π3 is at least as hard, i.e., Π3 is NP-Hard. We first prove that the answer to the partition problem is positive ("yes") if and only if Π3′ achieves an optimal value of zero.

Lemma 5: For positive integers λ1, …, λM, there exist ζi ∊ {−1, +1}, i = 1, …, M, such that Σi=1M λiζi = 0, if and only if the optimal value of Π3′ is 0.

Proof: (Necessary): Assume ∃ ζi ∊ {−1, +1}, i = 1, …, M, such that$$\sum_{i=1}^M\lambda_i \zeta_i=0.\eqno{\hbox{(46)}}$$Based on these, consider the following choice of θ̄i∗ for Π3′$$\bar\theta_i^{\ast}= \left\{\matrix{0, \quad & {\rm if} \quad \zeta_i=1\cr\pi/2, \quad & \quad {\rm if} \quad \zeta_i=-1.} \right.\eqno{\hbox{(47)}}$$Note that θ̄i∗ ∊ {0, π/2} satisfies the constraints of Π3′ [cf. (45)]. Additionally, it is easy to verify that cos 2θ̄i∗ = ζi and sin 2θ̄i∗ = 0. Substituting in the objective function (squared) of Π3′ [cf. (44)] yields$$\left(\sum_{i=1}^M \lambda_i{\rm cos}\ 2\bar\theta_i^{\ast} \right)^2+\left(\sum_{i=1}^M \lambda_i{\rm sin}\ 2\bar\theta_i^{\ast} \right)^2 = \left(\sum_{i=1}^M \lambda_i \zeta_i \right)^2 = 0$$where the last equality follows from (46).

Since the objective function of Π3′ is always nonnegative and the choice of θ̄i∗ [cf. (47)] based on ζi achieves zero, the set {θ̄i∗, i = 1, …, M} is an optimal solution of Π3′.

(Sufficient): Suppose {θ̄i∗, i = 1, …, M}, with θ̄i∗ ∊ [0, π/2], is an optimal solution of Π3′, and$$\left(\left(\sum_{i=1}^M \lambda_i{\rm cos}\,2\bar\theta_i^\ast \right)^2+\left(\sum_{i=1}^M \lambda_i{\rm sin} \ 2\bar\theta_i^\ast \right)^2\right)^{1\over 2}=0.\eqno{\hbox{(48)}}$$This last equality for the objective function of Π3′ requires$$\eqalignno{& \sum_{i=1}^M \lambda_i{\rm sin}\ 2\bar\theta_i^\ast=0, \quad \hbox{and}& \hbox{(49)}\cr& \sum_{i=1}^M \lambda_i{\rm cos}\ 2\bar\theta_i^\ast=0.& \hbox{(50)}}$$

Note that the constraints on θ̄i∗ [cf. (45)] imply that 2θ̄i∗ ∊ [0, π], and hence sin 2θ̄i∗ ≥ 0. Additionally, since λi > 0, it follows from (49) that sin 2θ̄i∗ = 0, i.e., θ̄i∗ ∊ {0, π/2} and cos 2θ̄i∗ ∊ {−1, +1}. Thus, there exist ζi = cos 2θ̄i∗ ∊ {−1, +1}, i = 1, …, M, such that Σi=1M λiζi = 0 [cf. (50)].

Lemma 5 establishes a one-to-one correspondence between every instance of Π3′ and that of the partition problem. In particular, if we are able to solve the optimization problem Π3′, then by examining its optimal value, we can answer the partition problem, i.e., a zero (vs. positive) optimal value for the objective function of Π3′ corresponds to a positive (vs. negative) answer to the partition problem. Based on the result of Lemma 5, we hereafter state and prove the main result of this section.
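The correspondence of Lemma 5 can be exercised directly: given a signing ζ that solves a partition instance, the angle assignment (47) drives the objective (44) of Π3′ to zero. A sketch (the function name and the sample instance are ours, for illustration only):

```python
import numpy as np

# Reduction sketch from Lemma 5: a signing zeta with sum(lam * zeta) = 0 maps
# to angles theta_bar = 0 (zeta = +1) or pi/2 (zeta = -1) that zero out the
# objective (44) of Pi_3'.
def objective_pi3prime(lam, theta_bar):
    c = sum(l * np.cos(2 * t) for l, t in zip(lam, theta_bar))
    s = sum(l * np.sin(2 * t) for l, t in zip(lam, theta_bar))
    return np.hypot(c, s)

lam = [3, 1, 1, 5]                      # 3 + 1 + 1 = 5: a "yes" instance
zeta = [1, 1, 1, -1]
assert sum(l * z for l, z in zip(lam, zeta)) == 0
theta_bar = [0.0 if z == 1 else np.pi / 2 for z in zeta]
assert np.isclose(objective_pi3prime(lam, theta_bar), 0.0)
```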

Theorem 1: The problem of determining the optimal constrained motion of a team of mobile sensors tracking a moving target using distance-only measurements is NP-Hard in general.

Proof: Assume that the general problem Π3 is not NP-Hard. Then, there exists a polynomial-time algorithm that can solve all instances of Π3, and hence, of Π3′. From Lemma 5, however, the answer to the partition problem can be determined based on the optimal value of Π3′. This implies that the partition problem can be solved in polynomial time, which is a contradiction.

SECTION V

Problem Solution

As shown in the previous section, the problem of optimal trajectory generation for multiple sensors with mobility constraints that track a moving target using range-only measurements is NP-Hard in general. Hence, finding the global optimal solution for the original optimization problem, or for its equivalent formulations (cf. Π1 ⇔ Π2 ⇔ Π3), becomes extremely difficult. Ideally, the optimal solution could be determined by discretizing the space of possible heading directions of all sensors and performing an exhaustive search. This approach, however, has computational complexity exponential in the number of sensors, which makes it of limited practical use under realistic processing constraints.

In order to design algorithms that can operate in real time, appropriate relaxations of the original optimization problem become necessary. In the next two sections, we present two methods for solving the problem under consideration, namely, MGSR and LPR. Both algorithms have computational complexity linear in the number of sensors, which enables real-time implementations even for a large number of sensors. Furthermore, as shown in Section VI, they both achieve tracking accuracy indistinguishable from that of exhaustive search.

A. Modified Gauss–Seidel Relaxation

Motivated by the simplicity of the closed-form solution for the case of one sensor (cf. Section IV-A), a straightforward approach to finding a minimum of the optimization problem Π3 would be to iteratively minimize its objective function [cf. (36)] for each optimization variable separately, i.e., [29, Ch. 3]

1) Nonlinear Gauss–Seidel Algorithm

$$\eqalignno{\mathop{\rm min}\limits_{ \bar \theta_i^{(\ell +1)}} & \left\Vert \lambda_0 + \sum_{ \kappa = 1 }^{ i-1} \left(\lambda_{ \kappa } {\rm exp} \left(j2 \bar \theta_{ \kappa }^{(\ell +1)} \right) \right) \quad \right. \cr& \left. + \sum_{ \kappa = i+1 }^M \left(\lambda_{ \kappa } {\rm exp} \left(j2 \bar \theta_{ \kappa }^{(\ell)} \right) \right) +\lambda_i {\rm exp} \left(j2 \bar \theta_i^{(\ell + 1)} \right) \right\Vert _2 \cr{\rm{s.t.}}& \quad \bar\theta_{i{\rm min}} ≤ \bar\theta_i^{(\ell +1)} ≤ \bar\theta_{i{\rm max}}}$$where θ̄i(ℓ+1) is the new optimal value of θ̄i, and θ̄κ(ℓ+1), κ = 1, …, i − 1, and θ̄κ(ℓ), κ = i + 1, …, M, are the remaining vector directions, considered fixed during this step, computed sequentially during the previous iterations.
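When no bearing constraints are active, each single-variable subproblem has the closed-form minimizer 2θ̄i = ∠(−w), where w is the sum of all remaining vectors (including v0). The sweep can then be sketched as follows (an illustrative, unconstrained reading of the iteration above; `gauss_seidel` and its arguments are our naming):

```python
import numpy as np

# Unconstrained nonlinear Gauss-Seidel sweep sketch: each step points v_i
# opposite the sum w of the remaining vectors, which is the closed-form
# minimizer of ||w + lam_i * exp(j * 2 * theta_i)|| over theta_i.
def gauss_seidel(lam0, lam, theta_bar, sweeps=10):
    theta_bar = list(theta_bar)
    for _ in range(sweeps):
        for i in range(len(lam)):
            w = lam0 + sum(l * np.exp(2j * t)
                           for k, (l, t) in enumerate(zip(lam, theta_bar))
                           if k != i)
            theta_bar[i] = np.angle(-w) / 2.0    # align v_i against w
    return theta_bar

# With lambda_0 = 1 and a single sensor, the sweep recovers the single-sensor
# optimum theta_bar = pi/2 (i.e., 2*theta_bar = pi).
sol = gauss_seidel(1.0, [1.0], [0.3])
assert np.isclose(sol[0], np.pi / 2)
```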

However, it is easy to demonstrate that this sequential minimization approach is prone to being trapped in local minima. For example, consider the simple case of two sensors, with no constraints imposed on θ̄1 and θ̄2. For the vector lengths λ0, λ1, λ2 shown in Fig. 5, and initial bearing directions 2θ̄1 = 3.1416, 2θ̄2 = −2.6180, the values computed over the first three iterations are$$\eqalignno{\hbox{Initial condition}: 2\bar\theta_2 & = - 2.6180,\; 2\bar\theta_1= \quad 3.1416 \cr\hbox{First iteration}: 2\bar\theta_2 & = \quad 3.1416, \; 2\bar\theta_1 = -3.1416 \cr\hbox{Second iteration}: 2\bar\theta_2 & = -3.1416, \; 2\bar\theta_1= \quad 3.1416 \cr\hbox{Third iteration}: 2\bar\theta_2 & = -3.1416, \; 2\bar\theta_1= \quad 3.1416.}$$

Figure 5
Fig. 5. Norm-minimization example for the sum of the vectors with norms λ0, λ1, and λ2. The direction of v0 is fixed, while the directions of v1 and v2 are the optimization variables. (Top) Initial vector directions: 2θ̄1 = 3.1416 and 2θ̄2 = −2.6180. (Middle) Final vector directions computed by the nonlinear Gauss–Seidel algorithm: 2θ̄1 = π, 2θ̄2 = −π. The norm of the sum in this case is nonzero and corresponds to a local minimum. (Bottom) Final vector directions computed by the modified Gauss–Seidel relaxation algorithm: 2θ̄1 = 3π/4, 2θ̄2 = −3π/4. The norm of the sum in this case is 0, which corresponds to the global minimum.

As evident, this algorithm converges to the local minimum 2θ̄1 = π, 2θ̄2 = −π. The objective function value in this case is strictly positive, while the true global minimum is 0, obtained for 2θ̄1 = 3π/4 and 2θ̄2 = −3π/4.

To overcome this limitation, we propose the following modification.

2) Modified Gauss–Seidel Relaxation

$$\eqalignno{\mathop{\rm{min.}}\limits_{\bar \theta_i^{(\ell +1)}} & \left\Vert \lambda_0 + \sum_{ \kappa = 1 }^{ i-1} \left(\lambda_{ \kappa}{\rm exp} \left(j2 \bar \theta_{ \kappa }^{(\ell +1)} \right) \right) \quad \right. \cr& \left. + \sum_{ \kappa = i+1 }^M \left(\lambda_{ \kappa } {\rm exp} \left(j2 \bar \theta_{ \kappa }^{(\ell)} \right) \right) +\lambda_i {\rm exp} \left(j2 \bar \theta_i^{(\ell + 1)} \right) + {\bf v}_{M+1} \right\Vert _2 \cr{\rm{s.t.}}& \quad \bar\theta_{i{\rm min}} ≤ \bar\theta_i^{(\ell +1)} ≤ \bar\theta_{i{\rm max}}& \hbox{(51)} \cr\hbox{with}\quad & {\bf v}_{M+1}:= - \alpha \left(\lambda_0 + \sum_{\kappa=1}^{i-1} \left(\lambda_{\kappa} {\rm exp}\left(j2\bar\theta_{\kappa}^{(\ell +1)}\right)\right) \right. \cr& \qquad\qquad \left. + \sum_{\kappa=i}^M \left(\lambda_{\kappa}{\rm exp}\left(j2\bar\theta_{\kappa}^{(\ell)}\right)\right) \right)& \hbox{(52)}}$$where we have introduced the perturbation vector vM+1, which is proportional to the sum of the vectors computed in the previous iteration. The parameter α ∊ [0, 1] is termed the relaxation factor. When α = 0, this method becomes identical to the nonlinear Gauss–Seidel algorithm, while for α = 1, it results in θ̄i(ℓ+1) = θ̄i(ℓ), and therefore, the solution does not change between iterations. We thus see that the perturbation vector vM+1 reduces the convergence rate of the MGSR algorithm by smoothing the cost function. This makes the algorithm less sensitive to local minima at the expense of increasing the number of iterations required to converge.

This is demonstrated for the previous example (cf. Fig. 5). In this case, the optimal values computed by the MGSR algorithm are$$\eqalignno{\hbox{Initial condition}: 2\bar\theta_2 & =-2.6180, \; 2\bar\theta_1=3.1416 \cr\hbox{First iteration}: 2\bar\theta_2 & =-2.3625, \; 2\bar\theta_1=2.4927 \cr\quad … \quad\qquad & \quad… … \cr\hbox{Fourth iteration}: 2\bar\theta_2 & =-2.3562, \; 2\bar\theta_1=2.3563 \cr\hbox{Fifth iteration}: 2\bar\theta_2 & =-2.3562, \; 2\bar\theta_1=2.3562.}$$Noting that 2.3562 ≈ 3π/4, we see that the MGSR method returns the global minimum.

The optimization process in the MGSR algorithm is carried out for only one variable (i.e., θ̄i) at every step, using a closed-form solution similar to the one employed in the single-sensor case (cf. Section IV-A). Thus, the MGSR process has computational complexity, per iteration, linear in the number of sensors. Furthermore, it is easily implemented, has low memory requirements, and, as demonstrated in Section VI, achieves the same level of positioning accuracy as the exhaustive search approach. For clarity, we present the basic steps of the MGSR process in Algorithm 1.
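A minimal unconstrained sketch of the MGSR update (51)–(52) follows. It is illustrative only: the helper names, the choice α = 0.5, the sweep count, and the test configuration are ours, not taken from Algorithm 1.

```python
import numpy as np

# MGSR sketch per (51)-(52): before each single-variable update, a perturbation
# vector v_{M+1} = -alpha * (current total sum) is added, smoothing the cost.
# alpha in [0, 1] is the relaxation factor; alpha = 0 recovers plain Gauss-Seidel.
def mgsr_step(lam0, lam, theta_bar, i, alpha):
    total = lam0 + sum(l * np.exp(2j * t) for l, t in zip(lam, theta_bar))
    v_pert = -alpha * total
    # Sum of all vectors except v_i, plus the perturbation vector.
    w = total - lam[i] * np.exp(2j * theta_bar[i]) + v_pert
    return np.angle(-w) / 2.0             # closed-form unconstrained minimizer

def mgsr(lam0, lam, theta_bar, alpha=0.5, sweeps=200):
    theta_bar = list(theta_bar)
    for _ in range(sweeps):
        for i in range(len(lam)):
            theta_bar[i] = mgsr_step(lam0, lam, theta_bar, i, alpha)
    return theta_bar

# Two equal-weight sensors with lambda_0 = 1, started from the bearing
# configuration that traps plain Gauss-Seidel: MGSR reaches a configuration
# whose vector sum has (near-)zero norm, the global minimum.
th = mgsr(1.0, [1.0, 1.0], [np.pi / 2, -1.309])
total = 1.0 + sum(np.exp(2j * t) for t in th)
assert abs(total) < 1e-6
```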

B. Linear Programming Relaxation

In this section, an alternative relaxation is introduced that leads to the formulation of an LP algorithm for solving the constrained optimal motion generation problem. We start by defining the following problem.

• Optimization Problem 4 (Π4)

$$\eqalignno{\mathop{\rm maximize}\limits_{\bar\theta_1,…,\bar\theta_M} \quad & \mu_{\rm min}({\cal I})& \hbox{(53)} \cr{\rm{s.t.}} \quad & \bar\theta_{i{\rm min}} ≤\bar\theta_i≤\bar\theta_{i{\rm max}}, \quad \forall i=1,…, M}$$with 𝓘 defined in (32), which (cf. Lemma 3) is exactly equivalent to the optimization problems Π1−Π3, and proceed to show the remainder of the following relations:$$\Pi_1 ⇔ \Pi_2 ⇔ \Pi_3 ⇔ \Pi_4 ⇔ \Pi_5 \leftarrow \Pi_6 \leftarrow \Pi_7 ⇔ \Pi_8 ⇔ \Pi_9$$where Πi ← Πj denotes that the optimization problem Πj is a relaxation of Πi, i.e., the feasible set of Πi is a subset of that of Πj. The NP-Hard problem Π5, the semidefinite programming (SDP) problems Π6−Π8, and the LP problem Π9, whose solution is the basis of the LPR algorithm, are defined hereafter.

Lemma 6: The optimization problems Π4 and Π5 are equivalent.

• Optimization Problem 5 (Π5)

$$\eqalignno{{\rm maximize}& \quad \beta& \hbox{(54)} \cr{\rm{s.t.}} \quad & \Sigma^{-1}+\sum_{i=1}^M\lambda_iX_i-\beta I_{2 \times 2} \succeq 0& \hbox{(55)} \cr& X_i=\left[\matrix{x_i & z_i \cr z_i & y_i}\right]\succeq 0 \qquad \forall i=1,…, M& \hbox{(56)} \cr& {\rm{rank}}(X_i)=1 \qquad \forall i=1,…, M& \hbox{(57)} \cr& {\rm{tr}}(X_i)=1 \qquad \forall i=1,…, M& \hbox{(58)} \cr& X_{i,11}^{(l)}≤ x_i ≤ X_{i,11}^{(r)} \qquad \forall i=1,…, M& \hbox{(59)} \cr& {{\rm cos}\ 2\eta_{i{\rm max}}\over 2}≤\left[\matrix{{\rm cos}\ 2\check\theta_i \cr {\rm sin}\ 2\check\theta_i}\right]^{\rm T}\left[\matrix{x_i-1/2\cr z_i}\right]≤{1\over 2}\cr& \hfill \forall i=1,…, M\quad& \hbox{(60)}}$$with ηi max and Σ−1 defined in (19) and (29), respectively,$$\eqalignno{\check\theta_i& :={(\bar\theta_{i{\rm min}}+\bar\theta_{i{\rm max}})\over 2}& \hbox{(61)} \cr X_{i,11}^{(l)}& :=\mathop{\rm min}\limits_{\bar\theta_i\in[\bar\theta_{i{\rm min}}, \bar\theta_{i{\rm max}}]}{\rm cos}^2\bar\theta_i& \hbox{(62)} \cr X_{i,11}^{(r)}& :=\mathop{\rm max}\limits_{\bar\theta_i\in[\bar\theta_{i{\rm min}}, \bar\theta_{i{\rm max}}]}{\rm cos}^2\bar\theta_i.& \hbox{(63)}}$$

Proof: The proof proceeds in four steps.

  1. Modification of the objective function and introduction of constraint (55): Since μmin(𝓘) ≥ β ⇔ 𝓘 ≽ βI2×2, where A ≽ 0 denotes that the matrix A is positive semidefinite, it follows that$$\eqalignno{&{\rm maximize} \quad \mu_{\min}({\cal I})\quad ⇔ \quad {\rm maximize} \quad \beta \cr&\qquad\qquad\qquad {\rm{s.t.}} \quad {\cal I}-\beta I_{2 \times 2} \succeq 0.& \hbox{(64)}}$$Defining$$X_i:=\left[\matrix{{\rm cos}^2\ \bar\theta_i & {\rm cos}\ \bar\theta_i{\rm sin}\ \bar\theta_i \cr{\rm cos}\ \bar\theta_i{\rm sin}\ \bar\theta_i & {\rm sin}^2\ \bar\theta_i}\right],\qquad i=1,…, M\eqno{\hbox{(65)}}$$and substituting in (32) yields$${\cal I}=\Sigma^{-1}+\sum_{i=1}^M\lambda_iX_i\eqno{\hbox{(66)}}$$with λi ≔ σi−2. Finally, substituting (66) in (64) results in the constraint (55).

  2. Constraints (56)–(58): From (65), it is evident that Xi has the following properties:$$X_i\succeq0\qquad {\rm{rank}}(X_i)=1 \qquad\hbox{and} \qquad {\rm{tr}}(X_i)=1.\eqno{\hbox{(67)}}$$Conversely, it is easy to show that any 2 × 2 matrix Xi satisfying these constraints can be written in the form of (65). Hence, we conclude that requiring a matrix Xi to be of the form of (65) is equivalent to Xi satisfying the constraints in (56)–(58).

  3. Constraint (59): This is a direct result of the constraint θ̄i min ≤ θ̄i ≤ θ̄i max and the definition xi = cos²θ̄i [cf. (65)].

  4. Constraint (60): Since xi = cos²θ̄i and zi = cos θ̄i sin θ̄i, the constraint on zi could be determined based on the constraint (59) on xi and the trigonometric relation zi² = xi(1 − xi) between xi and zi. However, this would result in two feasible regions for zi and complicate the process of recovering θ̄i. Instead, we hereafter determine a linear inequality constraint on zi based on xi.

Substituting (56) and (65) in the following relation yields$$2 \left[\matrix{{\rm cos}\ 2\ \check\theta_i \cr{\rm sin}\ 2\ \check\theta_i}\right]^{\rm T}\left[\matrix{x_i-1/2 \cr z_i}\right] = {\rm cos}\ 2(\bar\theta_i-\check\theta_i)\eqno{\hbox{(68)}}$$with θ̌i defined in (61).

Our objective now is to determine the range of feasible values of cos 2(θ̄i − θ̌i). Subtracting (22) from (23), we have$$\eqalignno{ \eta_{i{\rm max}} &= {\bar\theta_{i{\rm max}} - \bar\theta_{i{\rm min}}\over 2} = \check\theta_i-\bar\theta_{i{\rm min}} = \bar\theta_{i{\rm max}}-\check\theta_i \cr& \Rightarrow \bar\theta_{i{\rm min}} = \check\theta_i - \eta_{i{\rm max}},\, \quad \bar\theta_{i{\rm max}} = \check\theta_i + \eta_{i{\rm max}}.\qquad& \hbox{(69)}}$$

Substituting these last two relations in both sides of the inequality θ̄i min ≤ θ̄i ≤ θ̄i max and rearranging terms yields$$0 ≤ \vert \bar\theta_i-\check\theta_i\vert ≤ \eta_{i{\rm max}} ≤ \pi/2\eqno{\hbox{(70)}}$$where the right-most inequality is due to the geometry of the problem (cf. Fig. 3). Since the cosine function is monotonically decreasing within the interval [0, π], from (70), we have$$\eqalignno{0 & ≤ 2 \vert \bar\theta_i-\check\theta_i\vert ≤ 2 \eta_{i{\rm max}} ≤ \pi \cr{\rm cos}\,2 \eta_{i{\rm max}} & ≤ {\rm cos}\, 2 \vert \bar\theta_i-\check\theta_i\vert ≤ 1.& \hbox{(71)}}$$Noting that cos 2|θ̄i − θ̌i| = cos 2(θ̄i − θ̌i) and substituting the left-hand side of (68) in (71) results in the affine constraint (60).
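The algebra behind (68), which is what makes the constraint (60) affine in (xi, zi), can be spot-checked numerically (an illustrative check; the variable names are ours):

```python
import numpy as np

# Check of identity (68): with x = cos^2(tb) and z = cos(tb) sin(tb), the
# affine expression 2 * [cos 2tc, sin 2tc]^T [x - 1/2, z] collapses, via the
# double-angle formulas, to cos 2(tb - tc).
rng = np.random.default_rng(1)
tb, tc = rng.uniform(0, np.pi / 2, size=2)   # sample theta_bar and theta_check
x, z = np.cos(tb) ** 2, np.cos(tb) * np.sin(tb)
lhs = 2 * (np.cos(2 * tc) * (x - 0.5) + np.sin(2 * tc) * z)
assert np.isclose(lhs, np.cos(2 * (tb - tc)))
```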

Note that based on the equivalence relation of Lemma 6, Π5 has the same computational complexity as Π1−Π4, and thus, it cannot be solved in polynomial time. In order to devise an efficient algorithm that will support a real-time implementation, we need to modify Π5 so that it becomes convex. Dropping the rank constraints in (57), yields the following relaxed version of Π5.

• SDP Optimization Problem 6 (Π6)

$$\eqalignno{{\rm maximize}& \quad \beta & \hbox{(72)}\cr{\rm{s.t.}} \quad & \Sigma^{-1}+\sum_{i=1}^M\lambda_iX_i-\beta I_{2 \times 2} \succeq 0 \cr& X_i\succeq 0 \qquad \forall i=1,…, M \cr& {\rm{tr}}(X_i)=1 \qquad \forall i=1,…, M \cr& X_{i,11}^{(l)}≤ x_i ≤ X_{i,11}^{(r)} \qquad \forall i=1,…, M \cr& {{\rm cos}\ 2\ \eta_{i{\rm max}}\over 2}≤\left[\matrix{{\rm cos}\ 2\check\theta_i \cr{\rm sin}\ 2\check\theta_i}\right]^{\rm T}\left[\matrix{x_i-1/2\cr z_i}\right]≤{1\over 2} \cr& \hfill \forall i=1,…, M.\qquad}$$

It is clear that Π6 is a relaxation of Π5 because the feasible set of Π5 is a subset of that of Π6. Moreover, with respect to the variables Xi, i = 1,…,M, and β, Π6 is an SDP problem, which can be solved using a polynomial-time algorithm [30, Ch. 4]. However, solving Π6 requires computations at least in the order of Formula [30, Ch. 11], which makes real-time implementations prohibitive when M is large (note that this solution approach is not considered in the results shown in Section VI). In order to further reduce the computational complexity, we make a second modification to Π5 by dropping the constraints in (57) and (60) simultaneously to obtain the following relaxed version of Π5.

• SDP Optimization Problem 7 (Π7)

$$\eqalignno{{\rm maximize}& \quad \beta& \hbox{(73)} \cr{\rm{s.t.}} \quad & \Sigma^{-1}+\sum_{i=1}^M\lambda_iX_i-\beta I_{2 \times 2} \succeq 0 & \hbox{(74)} \cr& X_i\succeq 0 \qquad \forall i=1,…, M \cr& {\rm{tr}}(X_i)=1 \qquad \forall i=1,…, M \cr& X_{i,11}^{(l)}≤ x_i ≤ X_{i,11}^{(r)} \qquad \forall i=1,…, M.}$$

Note again that the feasible sets of Π5 and Π6 are subsets of that of Π7; hence, Π7 is a relaxation of both Π5 and Π6.

Although Π7 is also an SDP problem, we will show that it is exactly equivalent to an LP problem whose solution has computational complexity linear in the number of sensors. We proceed by first proving the following lemma.

Lemma 7: The SDP optimization problems Π7 and Π8 are equivalent in the optimal value.

• SDP Optimization Problem 8 (Π8)

$$\eqalignno{{\rm maximize}& \quad \beta& \hbox{(75)} \cr{\rm{s.t.}} \quad & \Sigma^{-1}+\sum_{i=1}^M\lambda_iX_i-\beta I_{2 \times 2} \succeq 0& \hbox{(76)} \cr& X_i\succeq 0 \qquad \forall i=1,…, M& \hbox{(77)} \cr& {\rm{tr}}(X_i)=1 \qquad \forall i=1,…, M& \hbox{(78)} \cr& X_{i,11}^{(l)}≤ x_i ≤ X_{i,11}^{(r)} \qquad \forall i=1,…, M \cr& z_i=0 \qquad \forall i=1,…, M.& \hbox{(79)}}$$

Proof: In order to prove the equivalence of Π7 and Π8, it suffices to show that both problems have the same optimal value. Denoting by β7∗ and β8∗ the optimal values of Π7 and Π8, respectively, we prove the equality β7∗ = β8∗ by showing that β8∗ ≤ β7∗ and β7∗ ≤ β8∗.

  • (a) β8∗ ≤ β7∗: Note that the feasible set of Π8 is contained in that of Π7 and both problems have the same objective function; hence, β8∗ ≤ β7∗.

  • (b) β7∗ ≤ β8∗: We denote by {X1∗, …, XM∗} one of the optimal solution(s) of Π7, and define$$C^{\ast} := \Sigma^{-1}+\sum_{i=1}^M\lambda_iX_i^{\ast}=\left[\matrix{a^{\ast} & b^{\ast} \cr b^{\ast} & d^{\ast}}\right]\succeq\beta_7^{\ast}I_{2 \times 2}$$where the last inequality follows from optimality and the constraint (74), and yields$$\beta_7^{\ast}≤{\rm min}\{a^{\ast}, d^{\ast}\}.\eqno{\hbox{(80)}}$$

We further define$$ \eqalignno{ X_i^{\prime} & := {\rm diag}(X_i^{\ast}),\qquad i=1,…, M\cr C^{\prime} & := \Sigma^{-1} + \sum_{i=1}^M \lambda_i X_i^{\prime} = \left[\matrix{ a^{\ast} & 0 \cr 0 & d^{\ast}}\right] \cr \beta_8^{\prime} & := {\rm max}\; \beta& \hbox{(81)} \cr & \quad {\rm s.t.} \quad C^{\prime} - \beta I_{2 \times 2} \succeq 0 } $$where from (81), it is evident that$$ \beta_8^{\prime} = {\rm min}\{a^{\ast}, d^{\ast}\}. \eqno{\hbox{(82)}} $$

Furthermore, note that the matrices Xi′ satisfy all constraints of Π8 (i.e., {Xi′, i = 1,2,…,M} is in the feasible set of Π8), and therefore
$$\beta_8^{\prime}\leq\beta_8^{\ast}.\eqno{\hbox{(83)}}$$
Combining (80), (82), and (83), we have
$$\beta_7^{\ast} \leq {\rm min}\{a^{\ast}, d^{\ast}\} = \beta_8^{\prime} \leq \beta_8^{\ast}.$$
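The inequality chain in part (b) rests on a simple fact about symmetric 2 × 2 matrices: the smallest eigenvalue never exceeds the smallest diagonal entry [cf. (80)], while the diagonalized matrix attains it exactly [cf. (82)]. A quick numeric sanity check (an illustration with arbitrary values, not part of the proof):

```python
import numpy as np

# Any symmetric 2x2 matrix C = [[a, b], [b, d]]; values are arbitrary.
a, b, d = 2.0, 1.0, 3.0
C = np.array([[a, b], [b, d]])

# The largest beta with C - beta*I >= 0 is lambda_min(C), which can
# never exceed the smallest diagonal entry [cf. (80)].
lam_min = np.linalg.eigvalsh(C)[0]
assert lam_min <= min(a, d)

# Zeroing the off-diagonal entries attains exactly min(a, d) [cf. (82)],
# so restricting Pi8 to diagonal X_i loses nothing in the optimal value.
lam_min_diag = np.linalg.eigvalsh(np.diag(np.diag(C)))[0]
assert np.isclose(lam_min_diag, min(a, d))
```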

The final result of this section is provided by the following lemma.

Lemma 8: The SDP optimization problem Π8 is equivalent to the LP problem Π9.

• LP Optimization Problem 9 (Π9)

$$\eqalignno{{\rm maximize}& \quad \beta& \hbox{(84)} \cr{\rm{s.t.}} \quad & \mu_1^{\prime}+\sum_{i=1}^M\lambda_ix_i-\beta\geq 0& \hbox{(85)}\cr& \mu_2^{\prime}+\sum_{i=1}^M\lambda_i-\sum_{i=1}^M\lambda_ix_i-\beta\geq 0& \hbox{(86)}\cr& X_{i,11}^{(l)}\leq x_i\leq X_{i,11}^{(r)} \qquad \forall i=1,\ldots, M& \hbox{(87)}}$$
with μ1′ and μ2′ defined in (29).

Proof: Note that in the formulation of the SDP problem Π8, the off-diagonal constraints (79) are satisfied by forcing all matrices Xi to become diagonal. Hence, the linear matrix inequality constraint in (76) can be decomposed into the following two scalar linear inequalities:
$$\eqalignno{& \mu_1^{\prime}+\sum_{i=1}^M\lambda_ix_i-\beta\geq 0& \hbox{(88)} \cr& \mu_2^{\prime}+\sum_{i=1}^M\lambda_iy_i-\beta\geq 0& \hbox{(89)}}$$
where (88) is identical to the constraint in (85). Additionally, solving for yi from the constraint tr(Xi) = xi + yi = 1 [cf. (78)] and substituting into (89) yields the constraint (86).

Finally, from the definitions of Xi,11(l) and Xi,11(r) [cf. (62) and (63)], it is evident that the constraint in (87) makes the one in (77) redundant.

The LP problem Π9, which is a relaxation of the NP-Hard problems Π1 ⇔ ⋅⋅⋅ ⇔ Π5, can be solved efficiently using linprog from MATLAB [31]. Note also that the relaxations employed for deriving Π9 do not affect the feasibility of the solution (i.e., any solution of Π9 is within the feasible set of Π5). Once the optimal solution xi∗, i = 1,…,M of Π9 is computed, the optimal bearing directions Formula, i = 1,…,M are calculated from Formula [cf. (56) and (65)]. If multiple solutions exist for Formula, we choose the one that brings the sensor closer to the target.
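The paper solves Π9 with MATLAB's linprog; the same LP can be posed in any solver. Below is a sketch using SciPy's linprog, with purely illustrative values for λi, μ1′, μ2′, and the box bounds (all assumptions, not taken from the paper's experiments). The decision vector is (x1,…,xM, β); maximizing β becomes minimizing −β, and (85)–(86) are rewritten as ≤ constraints:

```python
import numpy as np
from scipy.optimize import linprog

def solve_pi9(lam, mu1p, mu2p, lo, hi):
    """Solve the LP relaxation Pi9: maximize beta subject to the
    constraints (85)-(86) and the box constraints (87) on x_i."""
    M = len(lam)
    # Decision vector z = [x_1, ..., x_M, beta]; minimize -beta.
    c = np.zeros(M + 1)
    c[-1] = -1.0
    # (85): mu1' + sum(lam_i x_i) - beta >= 0  ->  -lam.x + beta <= mu1'
    # (86): mu2' + sum(lam_i) - sum(lam_i x_i) - beta >= 0
    #                                ->  lam.x + beta <= mu2' + sum(lam_i)
    A_ub = np.vstack([np.append(-lam, 1.0), np.append(lam, 1.0)])
    b_ub = np.array([mu1p, mu2p + np.sum(lam)])
    bounds = [(l, h) for l, h in zip(lo, hi)] + [(None, None)]  # beta free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:-1], res.x[-1]

# Illustrative numbers (assumptions): two sensors with lam_i = 1/sigma_i^2 = 1,
# prior information mu1' = 2, mu2' = 1, and x_i constrained to [0, 1].
x_opt, beta_opt = solve_pi9(np.array([1.0, 1.0]), 2.0, 1.0, [0.0, 0.0], [1.0, 1.0])
# The optimum balances (85) and (86): 2 + s = 3 - s at s = x_1 + x_2 = 0.5.
```

With these numbers the two constraints intersect at β = 2.5, which the solver recovers; only the sum x1 + x2 = 0.5 is determined, as any split between the two sensors is equally informative here.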

Finally, we should note that although the computational cost for solving a general LP problem can be in the order of Formula, it can be shown that the solution of Π9 requires only O(M) operations (i.e., linear in the number of sensors) due to the special structure of the matrices involved (cf. [27]).

SECTION VI

Simulation Results

In order to evaluate the two presented constrained optimal motion strategies, MGSR and LPR, we have conducted extensive simulation experiments and compared the performance of MGSR and LPR to the following methods.

1) Grid-Based Exhaustive Search: In this case, we discretize the space of the sensors' heading directions and perform an exhaustive search over all possible combinations of these to find the one that minimizes the trace of the covariance matrix for the target's position estimates. Ideally, the grid-based exhaustive search (GBES) should return the global optimal solution, and it could be used as a benchmark for evaluating the MGSR and LPR, if the grid size is sufficiently small. However, this is difficult to guarantee in practice since its computational complexity and memory requirements are exponential in the number of sensors. Hence, implementing the GBES becomes prohibitive when the number of sensors, M, increases and/or when the size of the grid cells decreases.

2) Random Motion: This is a modification of an intuitive strategy that would require the sensors to move toward the target. In this case, however, and in order to ensure that the sensors do not converge to the same point (i.e., zero baseline), we require that at every time-step, sensor-i selects its heading direction with uniform probability toward points within the arc ACB shown in Fig. 3, i.e., each sensor is required to move toward the target at a random angle.
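The GBES baseline in item 1) can be sketched compactly. Assumptions in the sketch below: the posterior position information is modeled as Σ−1 + Σi λi ui uiT, with ui the unit vector along sensor i's bearing to the target and λi = 1/σi2, matching the cost structure used throughout the paper; the grid and prior are illustrative:

```python
import itertools
import numpy as np

def gbes(prior_info, lam, grid):
    """Exhaustively search all combinations of discretized bearing
    directions; cost is |grid|^M, i.e., exponential in the M sensors."""
    best_trace, best_angles = np.inf, None
    for angles in itertools.product(grid, repeat=len(lam)):
        info = prior_info.copy()
        for lam_i, th in zip(lam, angles):
            u = np.array([np.cos(th), np.sin(th)])
            info += lam_i * np.outer(u, u)  # range measurement along u
        tr = np.trace(np.linalg.inv(info))  # posterior position uncertainty
        if tr < best_trace:
            best_trace, best_angles = tr, angles
    return best_trace, best_angles

# Two identical sensors and a circular prior: the optimum is a pair of
# perpendicular bearings, giving trace((I + u1 u1^T + u2 u2^T)^{-1}) = 1.
grid = np.linspace(0.0, np.pi, 7)  # coarse grid for illustration
trace_opt, angles_opt = gbes(np.eye(2), [1.0, 1.0], grid)
```

Even this toy version makes the scaling visible: doubling the grid resolution or adding a sensor multiplies the number of evaluated combinations, which is why GBES becomes prohibitive for large M.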

We have also implemented gradient descent-based algorithms with: 1) constant step-size (GDC) and 2) successive step-size reduction (GDS). The simulation results demonstrate that MGSR and LPR achieve better performance as compared to GDC and GDS. Due to space limitations, the interested reader is referred to [27] for the implementation details and results of the GDC and GDS algorithms.

A. Simulation Setup

For the purposes of this simulation, we adopt a zero-acceleration target motion model
$$\dot{\bf x}_{T}(t) = F\, {\bf x}_{T}(t) + G\, {\bf w}(t)\eqno{\hbox{(90)}}$$
where
$$F = \left[\matrix{0 & 0 & 1 & 0\cr 0 & 0 & 0 & 1\cr 0 & 0 & 0 & 0\cr 0 & 0 & 0 & 0}\right]\quad G = \left[\matrix{0 & 0\cr 0 & 0\cr 1 & 0\cr 0 & 1}\right]\quad {\bf x}_{T}(t) = \left[\matrix{x_T(t) \cr y_T(t) \cr \dot{x}_T(t) \cr \dot{y}_T(t)}\right]$$
and w(t) = [wx(t) wy(t)]T is a zero-mean white Gaussian noise vector with covariance E[w(t)wT(τ)] = qI2δ(t−τ), q = 10, where δ(t−τ) is the Dirac delta. In our implementation, we discretize the continuous-time system model [cf. (90)] with time-step δt = 0.1 s.
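The discretization of (90) can be made explicit: the state transition matrix over one time-step is Φ = exp(F δt), and the discrete-time process noise covariance Qd has a standard closed form for this constant-velocity model. A sketch (the closed form is the textbook result for this model; Van Loan's matrix-exponential construction is included only as a numeric cross-check):

```python
import numpy as np
from scipy.linalg import expm

q, dt = 10.0, 0.1
F = np.zeros((4, 4)); F[0, 2] = F[1, 3] = 1.0
G = np.zeros((4, 2)); G[2, 0] = G[3, 1] = 1.0

# State transition over one time-step. F is nilpotent (F^2 = 0),
# so the exponential series terminates: Phi = I + F*dt.
Phi = expm(F * dt)

# Closed-form discrete noise covariance for the zero-acceleration model:
# Qd = integral_0^dt e^{F tau} G (q I) G^T e^{F^T tau} d tau.
Qd = q * np.array([
    [dt**3 / 3, 0,         dt**2 / 2, 0        ],
    [0,         dt**3 / 3, 0,         dt**2 / 2],
    [dt**2 / 2, 0,         dt,        0        ],
    [0,         dt**2 / 2, 0,         dt       ],
])

# Cross-check via Van Loan's method: exponentiate the block matrix
# [[-F, G q G^T], [0, F^T]] * dt and read off Phi^T and Phi^{-1} Qd.
A = np.block([[-F, G @ (q * np.eye(2)) @ G.T],
              [np.zeros((4, 4)), F.T]]) * dt
B = expm(A)
Qd_vanloan = B[4:, 4:].T @ B[:4, 4:]
assert np.allclose(Phi, np.eye(4) + F * dt)
assert np.allclose(Qd, Qd_vanloan)
```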

The initial true state of the target is xT(0) = [0, 0, −8, 4]T. The initial estimate of the target's state is x̂T(0) = [2, −2, 0, 0]T, which can be obtained by processing the first measurements from the sensors at time-step 0. At the beginning of the experiment, the sensors are randomly distributed within a circle of radius 5 m, whose center is at a distance of about 20 m from the target's initial position. The maximum speed of each sensor is set to 10 m/s, i.e., the largest distance that a sensor can travel during any time-step is 1 m. The duration of the simulations is 10 s (i.e., 100 time-steps). At every time-step, we employ the methods described [i.e., GBES, MGSR, LPR, and random motion (RM)] to calculate the next heading direction of each sensor. Throughout the simulations, we set the grid size for the GBES method to π/200 and the relaxation factor for the MGSR strategy to α = 0.5.

B. Target Tracking With Two Sensors (Homogeneous Team)

We first investigate the scenario where two identical sensors track a moving target, i.e., the covariance matrix of the noise in the distance measurements is R = σ2 I2 × 2 with σ = 1.

The time evolution of the trace of the target's position covariance in a typical simulation is shown in Fig. 6. As expected, the performance of the optimized approaches (i.e., MGSR, LPR, and GBES) is significantly better than that of the nonoptimized case (i.e., RM). Additionally, the uncertainty in the target's position estimates (trace of the covariance matrix) achieved by either of the two proposed motion strategies, MGSR and LPR, is indistinguishable from that of the GBES, at a cost linear, instead of exponential, in the number of sensors. These results are typical of all experiments conducted and are summarized, as the average over 100 trials, in Fig. 7.

Figure 6
Fig. 6. [Two-sensors case] Trace of the target's position covariance matrix. Comparison between GBES, MGSR, LPR, and random motion (RM).
Figure 7
Fig. 7. [Two-sensors case, Monte Carlo simulations] Average trace of the target's position covariance matrix in 100 experiments.

Fig. 8(a)–(d) depicts the actual and estimated trajectories of the target, along with the trajectories of the two sensors, when employing as motion strategy MGSR, LPR, GBES, and RM, respectively. As is evident, the accuracy of the target's position estimates for both MGSR and LPR is significantly better than in the case of RM, and almost identical to that of GBES. Additionally, for both MGSR and LPR, the EKF produces consistent estimates, i.e., the target's true position is within the 3σ ellipse centered at the target's estimated position. This is not the case for the RM strategy, where the inconsistency is due to the large errors in the state estimates used for approximating the measurement Jacobian.

Figure 8
Fig. 8. [Two-sensors case] Trajectories of the two sensors, and the actual and estimated trajectories of the target, when employing as motion strategy. (a) MGSR. (b) LPR. (c) GBES. (d) RM. The ellipses denote the 3 σ bounds for the target's position uncertainty at the corresponding time-steps.
Figure 9
Fig. 9. [Two-sensors case] The angle formed by sensor-1, the target, and sensor-2 vs. time. As time increases, this angle approaches 90°.

Note also that for both MGSR and LPR [cf. Fig. 8(a) and (b)], although the two sensors start close to each other, they immediately move in separate directions and eventually form a right angle with vertex at the location of the target (cf. Fig. 9). This interesting result is explained as follows: Based on Lemma 3, the optimal motion strategy for the two sensors minimizes the difference between the maximum and the minimum eigenvalues of the covariance matrix. Once this difference approaches zero, the eigenvalues of the prior covariance matrix are almost identical and the uncertainty ellipse becomes a circle. In this case, for M = 2, we have [cf. (38)]: λ0 = μ1 − μ2 ≃ 0, λ1 = λ2 = 1/σ2 = 1, v0 ≃ [0 0]T, Formula, and Formula. Hence, the optimal solution to (39) is Formula, which requires that the two sensors move so as to measure their distances to the target from perpendicular directions.
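The 90° geometry observed in Fig. 9 can also be verified numerically. Assuming a circular prior Σ = I and λ1 = λ2 = 1 (illustrative values), a sweep over the angle Δ between the two bearing directions shows the posterior trace is minimized at Δ = 90°:

```python
import numpy as np

def posterior_trace(delta, prior_info=np.eye(2)):
    """Trace of the posterior position covariance when two unit-information
    range measurements are taken at bearings 0 and delta."""
    u1 = np.array([1.0, 0.0])
    u2 = np.array([np.cos(delta), np.sin(delta)])
    info = prior_info + np.outer(u1, u1) + np.outer(u2, u2)
    return np.trace(np.linalg.inv(info))

# Sweep the relative bearing angle over (0, 180) degrees.
deltas = np.radians(np.arange(1, 180))
traces = [posterior_trace(d) for d in deltas]
best = deltas[int(np.argmin(traces))]
# Analytically, trace = 4 / (4 - cos^2(delta)): minimized at delta = 90 deg,
# consistent with the perpendicular-bearings result above.
```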

Figure 10
Fig. 10. [Four-sensors case] Trajectories of the four sensors, and the actual and estimated trajectories of the target, when employing as motion strategy. (a) MGSR. (b) LPR. (c) GBES. (d) RM. The ellipses denote the 3 σ bounds for the uncertainty of the target's position estimates at the corresponding time-steps.

C. Target Tracking With Four Sensors (Heterogeneous Team)

We hereafter examine the performance of the MGSR and LPR motion strategies for a heterogeneous team of four sensors tracking a moving target. In this case, the covariance matrix of the noise in the distance measurements is set to R = diag(σi2), with σ12 = 1 and σ22 = σ32 = σ42 = 3.

Fig. 10(a)–(d) depicts the actual and estimated trajectories of the target, along with the trajectories of the four sensors, when employing as motion strategy MGSR, LPR, GBES, and RM, respectively. As in the case of two sensors, the accuracy of the target's position estimates for both MGSR and LPR is significantly better than that of RM and almost identical to that of GBES. Furthermore, the EKF estimates from MGSR, LPR, and GBES are consistent.

Interestingly, in this case, the heterogeneous sensor team splits into two groups. Sensor-1 (the most accurate one, with distance measurement noise variance σ12 = 1) follows the target from the left, while sensors 2, 3, and 4 form a separate cluster approaching the target from the right while moving very close to each other. The reason for this is the following: As sensors 2, 3, and 4 measure their distances to the target from approximately the same location at every time-step, their independent distance measurements become equivalent, in terms of accuracy, to a single measurement with variance
$${1\over \sigma_{2,3,4}^2} \simeq {1\over \sigma_2^2} + {1\over \sigma_3^2} + {1\over \sigma_4^2} = 1, \quad\hbox{or}\quad \sigma_{2,3,4}^2 \simeq 1.$$
Hence, this problem becomes equivalent to that of two sensors with equal noise variances (cf. Section VI-B), with the difference that, in this case, the "second" sensor is realized by requiring sensors 2–4 to move close to each other.
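The equivalent-variance computation above is just the additivity of information (inverse variances) for independent measurements taken from the same vantage point; a quick check with the simulation's values σ22 = σ32 = σ42 = 3:

```python
# Independent range measurements from (approximately) the same location:
# their information contents (inverse variances) add.
sigma_squared = [3.0, 3.0, 3.0]            # sensors 2, 3, and 4
info = sum(1.0 / s2 for s2 in sigma_squared)
sigma_eq_squared = 1.0 / info              # equivalent single-sensor variance
assert abs(sigma_eq_squared - 1.0) < 1e-12 # matches sensor-1 (sigma_1^2 = 1)
```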

Finally, we should note that for this case, the time evolution of the trace of the target's position covariance matrix is almost identical to that of Fig. 6.

SECTION VII

Conclusion and Future Work

In this paper, we address the problem of constrained optimal motion strategies for heterogeneous teams of mobile sensors tracking a moving target using range-only measurements. Our objective is to determine the best locations that the sensors should move to at every time-step in order to collect the most informative distance measurements, i.e., the measurements that minimize the trace of the target's position covariance matrix. We have shown that this problem can be exactly reformulated to that of minimizing the norm of the sum of vectors of different lengths with constraints imposed on their directions. These constraints, which result from limitations on the maximum speed of each sensor, make the problem NP-Hard, in general.

In order to provide solutions that can be implemented in real time, we have introduced two algorithms for determining the optimal motion of the sensors: MGSR and LPR. In the case of MGSR, the objective function and constraints remain identical to those of the original problem, while the minimization process is a relaxation of the closed-form solution for the case of a single sensor, applied sequentially to minimize the cost function of multiple sensors. Alternatively, by relaxing the constraints on the original problem, we have derived the LPR motion strategy for the sensor team. The presented relaxation methods have computational complexity linear in the number of sensors, with MGSR performing slightly better than LPR. Additionally, both MGSR and LPR achieve accuracy significantly better than that of an RM strategy that requires the sensors to move toward the target, and indistinguishable from that of a GBES algorithm that considers all possible combinations of motions and has computational complexity exponential in the number of sensors.

A straightforward extension of our research is to include additional constraints on the motion of the sensors, imposed by more restrictive sensor kinematic models or obstacles in their surroundings [32]. In these cases, the extra constraints can be handled by appropriately modifying the expressions in Section III-D, which will further reduce the range of feasible bearing angles to the target. Additionally, we intend to investigate distributed implementations of both the MGSR and LPR algorithms using single-bit [33] or multibit [34] messages, broadcasted between the sensors, or transmitted via local (single-hop) communications [35], to account for limitations on the sensors' communication bandwidth and range.

Footnotes

Manuscript received August 06, 2007; revised March 05, 2008. First published September 19, 2008. This paper was recommended for publication by Associate Editor D. Sun and Editor L. Parker upon evaluation of the reviewers' comments. This work was supported in part by the University of Minnesota (DTC) and in part by the National Science Foundation under Grant EIA-0324864, Grant IIS-0643680, and Grant IIS-0811946.

K. Zhou is with the Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455 USA (e-mail: kezhou@cs.umn.edu).

S. I. Roumeliotis is with the Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN 55455 USA (e-mail: stergios@cs.umn.edu).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

1. In the remainder of the paper, the "hat" symbol ˆ is used to denote the estimated value of a quantity, while the "tilde" symbol ˜ is used to signify the error between the actual value of a quantity and its estimate, i.e., the relationship between a variable x and its estimate x̂ is x̃ = x − x̂.

2. Note that if Formula, then θi(k+1) ∊ [0,2π), i.e., no constraint is imposed on the bearing angle to the target. We hereafter consider the most challenging case, when all bearing angles are constrained.

3. For clarity, from here on we drop the time indices from the bearing angles θi(k+1) and Formula.

4. Interestingly, the minimization of the trace of the covariance matrix for the case of a single sensor can be shown to be exactly equivalent to the maximization of the resulting Rayleigh quotient. Due to space limitations see [27] for the derivation details.

5. Here, “solve” means to find the global optimal solution and its value.

6. Note that the parameters for both the partition problem and the optimization problem Π3 are λ1, …, λM. An instance of these two problems is obtained by specifying particular values for λ1, …, λM.

7. Note that the equivalence relations between the previously defined optimization problems Π1–Π4 have already been established based on the results of Lemmas 2–4.

References

[1] J. Polastre, "Design and implementation of wireless sensor networks for habitat monitoring," M.S. thesis, Univ. California, Berkeley, CA, May 23, 2003.

[2] R. Bodor, R. Morlok, and N. Papanikolopoulos, "Dual-camera system for multi-level activity recognition," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Sendai, Japan, Sep. 28–Oct. 2, 2004, pp. 643–648.

[3] J. Pineau, M. Montemerlo, M. Pollack, N. Roy, and S. Thrun, "Towards robotic assistants in nursing homes: Challenges and results," Robot. Auton. Syst., vol. 42, no. 3–4, pp. 271–281, Mar. 2003.

[4] G. M. Siouris, G. Chen, and J. Wang, "Tracking an incoming ballistic missile using an extended interval Kalman filter," IEEE Trans. Aerosp. Electron. Syst., vol. 33, no. 1, pp. 232–240, Jan. 1997.

[5] B. Jung and G. S. Sukhatme, "Tracking targets using multiple robots: The effect of environment occlusion," Auton. Robots, vol. 13, no. 3, pp. 191–205, Nov. 2002.

[6] R. Vidal, O. Shakernia, H. J. Kim, D. H. Shim, and S. Sastry, "Probabilistic pursuit-evasion games: Theory, implementation, and experimental evaluation," IEEE Trans. Robot. Autom., vol. 18, no. 5, pp. 662–669, Oct. 2002.

[7] K. Zhou and S. I. Roumeliotis, "Optimal motion strategies for range-only distributed target tracking," in Proc. Am. Control Conf., Minneapolis, MN, Jun. 14–16, 2006, pp. 5195–5200.

[8] R. Barsanti and M. Tummala, "Parameter estimation for target tracking with uncertain sensor positions," in Proc. IEEE Int. Symp. Circuits Syst., Sydney, Australia, May 6–9, 2001, pp. 257–260.

[9] K. C. Chang, R. K. Saha, and Y. Bar-Shalom, "On optimal track-to-track fusion," IEEE Trans. Aerosp. Electron. Syst., vol. 33, no. 4, pp. 1271–1276, Oct. 1997.

[10] H. J. S. Feder, J. J. Leonard, and C. M. Smith, "Adaptive mobile robot navigation and mapping," Int. J. Robot. Res., vol. 18, no. 7, pp. 650–668, Jul. 1999.

[11] N. Trawny and T. Barfoot, "Optimized motion strategies for cooperative localization of mobile robots," in Proc. IEEE Int. Conf. Robot. Autom., New Orleans, LA, Apr. 26–May 1, 2004, pp. 1027–1032.

[12] V. N. Christopoulos and S. I. Roumeliotis, "Adaptive sensing for instantaneous gas release parameter estimation," in Proc. IEEE Int. Conf. Robot. Autom., Barcelona, Spain, Apr. 18–22, 2005, pp. 4461–4467.

[13] V. N. Christopoulos and S. I. Roumeliotis, "Multi robot trajectory generation for single source explosion parameter estimation," in Proc. IEEE Int. Conf. Robot. Autom., Barcelona, Spain, Apr. 18–22, 2005, pp. 2803–2809.

[14] V. Isler and R. Bajcsy, "The sensor selection problem for bounded uncertainty sensing models," IEEE Trans. Autom. Sci. Eng., vol. 3, no. 4, pp. 372–381, Oct. 2006.

[15] J. P. Le Cadre, "Optimization of the observer motion for bearings-only target motion analysis," in Proc. 36th IEEE Conf. Decision Control, San Diego, CA, Dec. 10–12, 1997, pp. 3126–3131.

[16] J. M. Passerieux and D. Van Cappel, "Optimal observer maneuver for bearings-only tracking," IEEE Trans. Aerosp. Electron. Syst., vol. 34, no. 3, pp. 777–788, Jul. 1998.

[17] A. Logothetis, A. Isaksson, and R. J. Evans, "Comparison of suboptimal strategies for optimal own-ship maneuvers in bearings-only tracking," in Proc. Am. Control Conf., Philadelphia, PA, Jun. 24–26, 1998, pp. 3334–3338.

[18] E. W. Frew, "Trajectory design for target motion estimation using monocular vision," Ph.D. dissertation, Stanford Univ., Stanford, CA, Aug. 2003.

[19] J. R. Spletzer and C. J. Taylor, "Dynamic sensor planning and control for optimally tracking targets," Int. J. Robot. Res., vol. 22, no. 1, pp. 7–20, 2003.

[20] A. W. Stroupe and T. Balch, "Value-based action selection for observation with robot teams using probabilistic techniques," Robot. Auton. Syst., vol. 50, no. 2–3, pp. 85–97, Feb. 2005.

[21] R. Olfati-Saber, "Distributed tracking for mobile sensor networks with information-driven mobility," in Proc. Am. Control Conf., New York, NY, Jul. 11–13, 2007, pp. 4606–4612.

[22] T. H. Chung, J. W. Burdick, and R. M. Murray, "A decentralized motion coordination strategy for dynamic target tracking," in Proc. IEEE Int. Conf. Robot. Autom., Orlando, FL, May 15–19, 2006, pp. 2416–2422.

[23] P. Yang, R. A. Freeman, and K. M. Lynch, "Distributed cooperative active sensing using consensus filters," in Proc. IEEE Int. Conf. Robot. Autom., Rome, Italy, Apr. 10–14, 2007, pp. 405–410.

[24] S. Martínez and F. Bullo, "Optimal sensor placement and motion coordination for target tracking," Automatica, vol. 42, no. 4, pp. 661–668, Apr. 2006.

[25] Y. Bar-Shalom, X. R. Li, and T. Kirubarajan, Estimation With Applications to Tracking and Navigation. New York, NY: Wiley, Jun. 2001.

[26] R. A. Horn and C. R. Johnson, Matrix Analysis. New York, NY: Cambridge Univ. Press, Feb. 1990.

[27] K. Zhou and S. I. Roumeliotis, "Optimal motion strategies for range-only distributed target tracking," Dept. Comput. Sci. Eng., Univ. Minnesota, Minneapolis, Tech. Rep. 2006-004, Apr. 2006. [Online]. Available: http://mars.cs.umn.edu/tr/reports/Ke06.pdf

[28] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness (ser. Books in the Mathematical Sciences). San Francisco, CA: Freeman, Jan. 1979.

[29] D. P. Bertsekas and J. N. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods. Belmont, MA: Athena Scientific, Sep. 1997.

[30] S. Boyd and L. Vandenberghe, Convex Optimization. New York, NY: Cambridge Univ. Press, Mar. 2004.

[31] MATLAB documentation. [Online]. Available: http://www.mathworks.com/

[32] S. L. Laubach and J. W. Burdick, "An autonomous sensor-based path-planner for planetary microrovers," in Proc. IEEE Int. Conf. Robot. Autom., Detroit, MI, May 10–15, 1999, pp. 347–354.

[33] A. Ribeiro, G. B. Giannakis, and S. I. Roumeliotis, "SOI-KF: Distributed Kalman filtering with low-cost communications using the sign of innovations," IEEE Trans. Signal Process., vol. 54, no. 12, pp. 4782–4795, Dec. 2006.

[34] E. J. Msechu, S. I. Roumeliotis, A. Ribeiro, and G. B. Giannakis, "Distributed quantized Kalman filtering with scalable communication cost," IEEE Trans. Signal Process., vol. 56, no. 8, pp. 3727–3741, Aug. 2008.

[35] I. Schizas, G. B. Giannakis, S. I. Roumeliotis, and A. Ribeiro, "Consensus in ad hoc WSNs with noisy links—Part II: Distributed estimation and smoothing of random signals," IEEE Trans. Signal Process., vol. 56, no. 4, pp. 1650–1666, Apr. 2008.

Authors

Ke Zhou (S'06) received the B.Sc. and M.Sc. degrees in control science and engineering from Zhejiang University, Hangzhou, China, in 2001 and 2004, respectively. He is currently working toward the Ph.D. degree with the Department of Electrical and Computer Engineering (ECE), University of Minnesota, Minneapolis.

His current research interests include multirobot systems, optimization, and active sensing.

Stergios I. Roumeliotis (M'02) received the Diploma from the National Technical University, Athens, Greece, in 1995 and the M.S. and Ph.D. degrees from the University of Southern California, Los Angeles, in 1999 and 2000, respectively, all in electrical engineering.

From 2000 to 2002, he was a Postdoctoral Fellow at the California Institute of Technology, Pasadena. Between 2002 and 2008, he was an Assistant Professor and is currently an Associate Professor with the Department of Computer Science and Engineering, University of Minnesota, Minneapolis. His current research interests include inertial navigation of aerial and ground autonomous vehicles, distributed estimation under communication and processing constraints, and active sensing for networks of mobile sensors.

Dr. Roumeliotis was the recipient of the NSF CAREER Award (2006), the McKnight Land-Grant Professorship Award (2006–2008), the International Conference on Robotics and Automation (ICRA) Best Reviewer Award (2006), the One NASA Peer Award (2006), and the One NASA Center Best Award (2006). Papers he has coauthored have received the Robotics Society of Japan Best Journal Paper Award (2007), the ICASSP Best Student Paper Award (2006), the NASA Tech Briefs Award (2004), and one of them was the Finalist for the International Conference on Intelligent Robots and Systems (IROS) Best Paper Award (2006). He is currently an Associate Editor of the IEEE TRANSACTIONS ON ROBOTICS.
