Regional Variance for Multi-Object Filtering

Recent progress in multi-object filtering has led to algorithms that compute the first-order moment of multi-object distributions from sensor measurements. The number of targets in arbitrarily selected regions can be estimated using this first-order moment. In this work, we introduce explicit formulae for the computation of second-order statistics on the target number. The proposed concept of regional variance quantifies the level of confidence in target-number estimates in arbitrary regions and facilitates information-based decisions. We provide algorithms for its computation for the probability hypothesis density (PHD) and the cardinalized probability hypothesis density (CPHD) filters, and demonstrate the behaviour of the regional statistics through simulation examples.


I. INTRODUCTION
Multi-target tracking dates back to the 1970s, driven by the requirements of aerospace and ground-based surveillance applications [1], [2], and involves estimating the states of a time-varying number of targets using sensor measurements [3]. The Finite Set Statistics (FISST) methodology [4] provides an alternative to the conventional approaches [3], in which targets are described as individual tracks, by modelling the collection of target states as a (simple) point process or Random Finite Set (RFS). In particular, the collection of target states is a set whose size (the number of targets) and elements (the states) are both random.
Multi-target RFS models lead to the well-known Bayesian recursions for filtering sensor observations, thereby providing a coherent Bayesian framework. These recursions, however, are not tractable for an increasing number of targets [4]. Instead, the FISST methodology provides a systematic approach for approximating the Bayes optimal filtering distribution through its incomplete characterisations. Mahler's Probability Hypothesis Density (PHD) [5] and Cardinalized Probability Hypothesis Density (CPHD) [6] filters focus primarily on the extraction of the first moment density (also known as the intensity or the Probability Hypothesis Density) of the posterior RFS distribution, a real-valued function on the state space whose integral over any region B provides the mean target number inside B [5]. A more recent filter [7] has been developed in order to propagate the full posterior RFS distribution under specific assumptions on the target behaviour.
In this article, we are concerned with second-order information on the local target number in an arbitrary region B, which gives a measure of uncertainty associated with the mean target number. The quantification of the confidence in the first moment density is useful for problems involving information-based decisions, such as distributed sensing [8]-[10] and multi-sensor estimation and control [11]-[14]. We propose a unified description of the first- and second-order regional statistics and derive explicit formulae for the mean target number and the variance in target number. The mathematical framework we introduce builds upon recent developments in multi-object modelling and filtering [15]-[17] and has the potential of leading to the derivation of closed-form expressions for regional higher-order statistics of RFS distributions. Previous studies [6], [18] have investigated higher-order statistics in target number, but evaluated over the whole state space rather than in an arbitrary region. We provide algorithms for the computation of the regional variance using both the PHD and the CPHD filters.
The structure of the article is as follows: Section II provides background on point processes and multi-object filtering, and introduces the regional variance in target number. In Section III, we discuss the principles underpinning the PHD and CPHD filters before giving the details of constructing the regional statistics for the PHD and the CPHD filters, the main results of this article. In Section IV we demonstrate the proposed concept through simulation examples, and we then conclude (Section V). The proofs of the results in Section III are in Appendices A and B. The computational procedures are given in Appendix C.

II. POINT PROCESSES AND MULTI-OBJECT FILTERING
In this section, we introduce the background and notation used throughout this article. We first give a brief review of point processes (Section II-A) and define the regional statistics (Section II-B). In Section II-C we introduce the functional differential that is used to extract the regional statistics of point processes from their generating functionals, which are covered in Section II-D. Section II-E overviews the Bayesian framework from which the PHD and CPHD filters are constructed.

A. Point processes
In this article, the objects of interest, the targets, have individual states x in some target space X ⊂ R^{d_x}, typically consisting of position and velocity variables. The multi-object filtering framework focuses on the target population rather than on individual targets. Both the target number and the target states are unknown and (possibly) time-varying. We therefore describe the target population by a point process Φ whose number of elements and element states are random. A realisation of a point process Φ is a set of points ϕ = {x_1, . . ., x_N} depicting a specific multi-target configuration.
More formally, a point process Φ on X is a measurable mapping from some probability space (Ω, F, P) to the measurable space (E_X, B_{E_X}), where E_X is the point process state space, i.e., the space of all finite sets of points in X, and B_{E_X} is the Borel σ-algebra on E_X [19]. We describe Φ by its probability distribution on (E_X, B_{E_X}) generated by P, denoted by P_Φ (as in the study of random variables). The probability density p_Φ of the point process Φ, if it exists, is the Radon-Nikodym derivative of the probability measure P_Φ with respect to (w.r.t.) the Lebesgue measure. The Finite Set Statistics methodology for target tracking [6] considers the representation of RFSs through their multi-object density f_Φ (derived from p_Φ). This approach has the distinctive merit of producing more intuitive and accessible results, facilitating rather direct derivations of filtering algorithms such as the PHD filter [5]. However, the regional variance in target number does not necessarily admit a density in the general case. Therefore, we adopt a measure-theoretical formulation, based on more general representations of point processes [19], [20], out of practical necessity. A thorough discussion on the relation between measures and associated densities can be found in [21], [22].

B. Regional statistics: mean and variance in target number
Unlike real-valued random variables, the space of point processes is not endowed with an expectation operator from which various statistical moments could be derived. Recall from the definition (1) of a point process Φ that two realisations ϕ, ϕ′ ∈ E_X are sets of points. Since the sum ϕ + ϕ′ of two sets is ill-defined, so would be the "usual" expectation operator E[Φ] on point processes.
Nevertheless, point processes can alternatively be described by the point patterns they produce in the target state space X rather than by their realisations in the process state space E_X (see Figure 1). For any Borel set B ∈ B_X, where B_X is the Borel σ-algebra on X, the integer-valued random variable

N_Φ(B) = Σ_{x ∈ Φ} 1_B(x)

counts the number of targets falling inside B according to the point process Φ [19]. Using the well-defined statistical moments of the integer-valued random variables N_Φ(B) for any B ∈ B_X, one can define the moment measures of the point process Φ.
Fig. 1: Point process and counting measure.
For any regions B, B′ ∈ B_X, the first and second moment measures µ_Φ^(1), µ_Φ^(2) are defined by

µ_Φ^(1)(B) = E[N_Φ(B)]   and   µ_Φ^(2)(B, B′) = E[N_Φ(B) N_Φ(B′)],

where x_{1:n} denotes the set {x_1, . . ., x_n}. The first moment measure µ_Φ^(1)(B) provides the expected number of targets, or mean target number, inside B, while µ_Φ^(2)(B, B′) provides the joint expectation of the target number inside B and B′. Note that B and B′ can be selected such that they overlap, i.e., B ∩ B′ ≠ ∅. In particular, the variance var_Φ of the point process Φ [19] in any region B ∈ B_X is defined by

var_Φ(B) = µ_Φ^(2)(B, B) − (µ_Φ^(1)(B))².

Note that the variance is a function, but not a measure, on the Borel σ-algebra B_X. It does not necessarily admit a density in general, even if µ_Φ^(1) and µ_Φ^(2) do. This fact motivates the measure-theoretical approach adopted throughout this article.
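As a concrete illustration (not part of the original derivations), the regional mean and variance can be estimated empirically by averaging the counting variable N_Φ(B) over independent realisations of a point process. The process, region, and parameters below are hypothetical choices for the sketch; for a Poisson process, the estimates should satisfy var_Φ(B) ≈ µ_Φ(B).

```python
import math
import random

def sample_poisson_process(rate, rng):
    """One realisation: Poisson number of points, i.i.d. uniform on the unit square."""
    n, p, thresh = 0, 1.0, math.exp(-rate)  # Knuth's Poisson sampler
    while True:
        p *= rng.random()
        if p <= thresh:
            break
        n += 1
    return [(rng.random(), rng.random()) for _ in range(n)]

def count_in(phi, region):
    """Counting variable N_Phi(B): number of points of the realisation inside B."""
    x0, x1, y0, y1 = region
    return sum(1 for (x, y) in phi if x0 <= x < x1 and y0 <= y < y1)

def regional_stats(rate, region, runs, seed=0):
    """Empirical regional mean and variance over independent realisations."""
    rng = random.Random(seed)
    counts = [count_in(sample_poisson_process(rate, rng), region) for _ in range(runs)]
    mean = sum(counts) / runs
    var = sum((c - mean) ** 2 for c in counts) / runs
    return mean, var

mean, var = regional_stats(rate=10.0, region=(0.0, 0.5, 0.0, 1.0), runs=20000)
# For a Poisson process, both estimates should be close to 10 * area(B) = 5.
```

For the Poisson case the two statistics coincide; the point of the regional variance developed in this article is precisely that, after a data update, they no longer do.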
The regional statistics (µ_Φ(B), var_Φ(B)) provide an approximate description of N_Φ(B), i.e., the local number of targets in B according to the point process Φ:
• µ_Φ(B) is the mean target number within B;
• var_Φ(B) quantifies the dispersion of the target number within B around its mean value.
Note that higher-order moments of a point process can be defined, from the joint expectation of random variables N_Φ(B) as for the variance (4), in order to provide a more complete description of the target number inside B. The derivation of such higher-order statistics is outside the scope of this article.

C. Functional differentiation
Statistical quantities describing a point process can be extracted through differentiation of various functionals, such as its probability generating functional (PGFl) or its Laplace functional (see Section II-D). Several functional differentials may be defined. Moyal used the Gâteaux differential [23] in his early study on point processes [24]; although it is endowed with a sum and a product rule similar to ordinary differentials of real-valued functions, it lacks a chain (or composition) rule that would facilitate the derivation of multi-object filtering equations.
In this article we exploit the multi-object filtering framework in [15], [16], which considers the chain differential [25], in order to prove the results we present in Section III. A restriction of the Gâteaux differential, the chain differential admits a composition rule. The chain differential δF(h; η) of a functional F, (evaluated) at function h in the direction (or increment) η, is defined as

δF(h; η) = lim_{n→∞} [F(h + ε_n η_n) − F(h)] / ε_n,

where {η_n}_{n≥0} is a sequence of functions converging (pointwise) to η and {ε_n}_{n≥0} is a sequence of positive real numbers converging to zero, provided the limit exists and is identical for any admissible sequences {η_n}_{n≥0} and {ε_n}_{n≥0} [25]. An example of chain differentiation for multi-object filtering is given in [26].
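The limit defining the differential can be probed numerically by finite differences. The sketch below uses a hypothetical functional F(h) = ∫ h(x)² dx on [0, 1] (not one of the filtering functionals), for which one expects δF(h; η) = 2 ∫ h(x) η(x) dx.

```python
def integrate(f, n=100_000):
    """Midpoint rule on [0, 1]."""
    step = 1.0 / n
    return sum(f((i + 0.5) * step) for i in range(n)) * step

F = lambda h: integrate(lambda x: h(x) ** 2)  # hypothetical functional F(h) = int h^2

h = lambda x: x          # evaluation point
eta = lambda x: 1.0      # direction of differentiation

exact = 2.0 * integrate(lambda x: h(x) * eta(x))  # = 2 * int_0^1 x dx = 1
for eps in (1e-1, 1e-2, 1e-4):
    fd = (F(lambda x: h(x) + eps * eta(x)) - F(h)) / eps
    # fd = exact + eps * int eta^2, so it converges to `exact` as eps -> 0
```

The finite-difference quotient approaches the analytic differential linearly in ε, consistent with the limit definition above.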

D. Generating functionals
The PGFl of a point process Φ is defined by the expectation

G_Φ[h] = E[ ∏_{x ∈ Φ} h(x) ],

where h is a test function, i.e., a real-valued function belonging to the space of bounded measurable functions on X, such that 0 ≤ h(x) ≤ 1 and 1 − h vanishes outside some bounded region of X [20]. The Laplace functional [19], [20] of a point process Φ is given by the expectation

L_Φ[f] = E[ exp( − Σ_{x ∈ Φ} f(x) ) ],

where f is a non-negative bounded measurable function on X. Both functionals fully characterise the probability distribution P_Φ and are linked by the relation

L_Φ[f] = G_Φ[e^{−f}].

The probability distribution and the factorial moment measures of a point process can easily be retrieved from functional differentials of the PGFl, making the PGFl a popular tool in multi-object filtering. Mahler's original construction of the PHD [5] and CPHD [6] filters, for example, exploits the differentiated PGFl. In our derivations for the second-order moment measure, we use non-factorial moment measures, which are easily retrieved from the Laplace functional [19].
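For a Poisson process the PGFl admits the well-known closed form G_Φ[h] = exp(∫ (h(x) − 1) µ_Φ(dx)). The following sketch checks this by Monte Carlo with a hypothetical constant test function, for which only the cardinality of each realisation matters.

```python
import math
import random

def mc_pgfl(rate, h_const, runs, seed=1):
    """Monte Carlo estimate of G[h] = E[ prod_{x in Phi} h(x) ] for a Poisson
    process with total rate `rate`; h is constant, so only the count matters."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        # Knuth's Poisson sampler for the cardinality
        n, p, thresh = 0, 1.0, math.exp(-rate)
        while True:
            p *= rng.random()
            if p <= thresh:
                break
            n += 1
        total += h_const ** n
    return total / runs

rate, h_const = 3.0, 0.5
estimate = mc_pgfl(rate, h_const, runs=200_000)
closed_form = math.exp(rate * (h_const - 1.0))  # Poisson PGFl in closed form
```

The empirical expectation and the closed form agree to Monte Carlo accuracy, illustrating why generating functionals are a convenient handle on point-process distributions.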
To be precise, the factorial moment measures α^(n) have a different construction and definition than the non-factorial moment measures µ^(n) and will not be considered further in this article, with the notable exception of the first factorial moment measure α^(1), which coincides with the first (non-factorial) moment measure µ^(1). The first and second moment measures of a point process Φ in any regions B, B′ ∈ B_X are given by the differentials [19]

µ_Φ^(1)(B) = −δL_Φ(f; 1_B)|_{f=0}   and   µ_Φ^(2)(B, B′) = δ²L_Φ(f; 1_B, 1_{B′})|_{f=0},

where 1_B is the indicator function on B. For the sake of simplicity, the superscript on the first moment measure is omitted in the rest of the article and µ_Φ^(1) is denoted by µ_Φ.

E. Multi-target Bayesian filtering
In multi-object detection and tracking problems, the target process Φ k|k is a point process providing a stochastic description of the posterior distribution of the targets in the state space at time k > 0, based on the measurement history up to time k.
Bayesian filtering principles are applicable to the multi-object framework [6]. The law of the filtered state P_{Φ_{k|k}} is updated through sequences of prediction steps, according to target birth, motion, and death models, and data update steps, according to the current set of measurements z_{1:m}^k ∈ E_Z. The full multi-target Bayes' filter reads as follows [4]:

P_{Φ_{k|k−1}}(dϕ) = ∫_{E_X} T_{k|k−1}(dϕ | ϕ′) P_{Φ_{k−1|k−1}}(dϕ′),   (13)

P_{Φ_{k|k}}(dϕ) = L_k(z_{1:m}^k | ϕ) P_{Φ_{k|k−1}}(dϕ) / ∫_{E_X} L_k(z_{1:m}^k | ϕ′) P_{Φ_{k|k−1}}(dϕ′),   (14)

where T_{k|k−1} is the Markov transition kernel between time steps k − 1 and k, and L_k is the multi-measurement/multi-target likelihood at time step k (detailed later). Equivalent expressions of the multi-target Bayes' filter can be provided through generating functionals; the PGFls of the predicted Φ_{k|k−1} and updated Φ_{k|k} processes are given in [15] as (15) and (16), and, using (9), equivalent expressions can be written with the Laplace functionals. For the sake of tractability, assumptions are often made on the prior Φ_{k−1|k−1} and/or the predicted Φ_{k|k−1} processes, which subsequently lead to closed-form expressions of specific filters propagating incomplete information.

III. THE PHD AND THE CPHD FILTERS WITH REGIONAL VARIANCE IN TARGET NUMBER
In this section, we aim to provide the regional statistics of the updated target process for the CPHD and the PHD filters. We review both filters and identify the updated process from which we wish to produce the statistics in Section III-A. We then provide the expressions of its first (Section III-B) and second (Section III-C) moment measures for both filters. The main results of this article, the regional statistics for the CPHD and the PHD filters, follow in Section III-D. We discuss the procedures to extract the regional statistics for the Sequential Monte Carlo (SMC) implementations of the CPHD and PHD filters in Section III-E.
The expressions of the first moment measures are well-established results from the usual PHD [5] and CPHD [6] filters. The derivation presented in this article, however, exploits the recent framework proposed in [15]. The expression of the second moment measure, on the other hand, is a novel result presented in the authors' recent conference papers [27], [28].

A. Principle
The PHD [5] and the CPHD [6] filters are perhaps the most popular approximations to the multi-target Bayes' filter (13), (14). The predicted target process Φ_{k|k−1} is either approximated by an independent and identically distributed (i.i.d.) process (CPHD filter) or by a Poisson process (PHD filter).
An i.i.d. process [29] is completely described by 1) its cardinality distribution ρ_Φ, where ρ_Φ(n) is the probability that a realisation ϕ of the point process Φ has size n, i.e., the probability that there are exactly n targets in the surveillance scene, and 2) its first moment measure µ_Φ. Hence, the CPHD filter propagates a cardinality distribution ρ_Φ and a moment measure µ_Φ. A Poisson process is a specific case of an i.i.d. process in which the cardinality distribution is a Poisson distribution with rate µ_Φ(X) = ∫ µ_Φ(dx). Hence, a Poisson process is completely described by its first moment measure µ_Φ, propagated by the PHD filter (see Figure 2).

Note that an i.i.d. process Φ is usually described by the Radon-Nikodym derivative of its first moment measure µ_Φ w.r.t. the Lebesgue measure, also called its first moment density v_Φ, intensity, or Probability Hypothesis Density [5]. Since we are interested in producing higher-order statistics on the target number, i.i.d. processes on targets are described here by their first moment measure µ_Φ instead. I.i.d. processes on measurements, however, are still described by their intensity v_Φ or, to be precise, by their normalised intensity or spatial distribution (see Theorems 1 and 2).

The updated target process Φ_{k|k} is not, in the general case, i.i.d. (respectively Poisson) even if the predicted Φ_{k|k−1} is; that is, the updated probability distribution P_{Φ_{k|k}} is not completely described by the output of the CPHD (respectively PHD) filter. As a consequence, the computation of the variance var_{Φ_{k|k}} provides additional information on the updated process Φ_{k|k} before its collapse into an i.i.d. (respectively Poisson) process at the next time step (see Figure 2).

As shown in Figure 2, this article focuses on the generation of additional information describing the updated target process; hence, the prediction step (15) will not be further mentioned. The rest of the article describes the extraction of the information statistics (µ_{Φ_{k|k}}, var_{Φ_{k|k}}) at an arbitrary time step k > 0. For the sake of simplicity, we discard the time subscripts and denote the predicted and the updated processes by Φ and Φ_+ respectively. In addition, we denote the current set of measurements by z_{1:m}.

Fig. 2: PHD and CPHD filtering with variance.
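The distinction between an i.i.d. process (arbitrary cardinality distribution) and a Poisson process can be made concrete with a small sketch (all parameters hypothetical): with a deterministic cardinality the total count has zero variance, whereas a Poisson process with the same mean would have variance equal to its mean.

```python
import random

def sample_iid_process(card_dist, rng):
    """i.i.d. (cluster) process: draw a cardinality n from card_dist,
    then n i.i.d. points from a common spatial distribution (uniform here)."""
    u, n, acc = rng.random(), 0, 0.0
    for k, p in enumerate(card_dist):
        acc += p
        if u < acc:
            n = k
            break
    return [rng.random() for _ in range(n)]

rng = random.Random(2)
runs = 50_000

# Deterministic cardinality: exactly 4 targets, i.e. rho(4) = 1.
rho = [0.0, 0.0, 0.0, 0.0, 1.0]
counts = [len(sample_iid_process(rho, rng)) for _ in range(runs)]
mean = sum(counts) / runs
var = sum((c - mean) ** 2 for c in counts) / runs
# mean = 4.0 and var = 0.0, while a Poisson process with mean 4 has var = 4:
# the cardinality distribution carries information that the intensity alone loses.
```

This is precisely why the CPHD filter, which propagates ρ_Φ in addition to µ_Φ, can represent target-number uncertainty that the PHD filter cannot.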

B. First moment measure (CPHD and PHD updates)
Lemma 1. First moment measure (CPHD update) [6], [30]. Under the assumptions that [6]: 1) the predicted process Φ is an i.i.d. process, and 2) the clutter is an i.i.d. process, independent of the target-generated measurements, the first moment measure of the updated process Φ_+ in any region B ∈ B_X is given by the update equation with corrector terms ℓ_1(φ) and ℓ_1(z) (20), expressed (following the notation introduced by Vo et al. in [30]) through the Υ terms (21) for any region B ∈ B_X, where P is the single-measurement/single-target observation kernel.
The function e_d is the elementary symmetric function of order d, applied in (21) to the set {c(z) | z ∈ z_{1:m}} and abusively noted e_d(z_{1:m}).
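The elementary symmetric functions can be computed for all orders at once by the Vieta-style recursion below, an O(m²) sketch: the e_d are the coefficients of the polynomial ∏_z (1 + c(z) t). ([30] reports a faster m log² m scheme based on divide-and-conquer polynomial multiplication, not reproduced here.)

```python
import math
from itertools import combinations

def esf(values):
    """All elementary symmetric functions e_0..e_m of `values` in O(m^2)."""
    e = [1.0] + [0.0] * len(values)
    for v in values:
        for d in range(len(values), 0, -1):  # descending, so e[d-1] is still "old"
            e[d] += v * e[d - 1]
    return e

def esf_bruteforce(values, d):
    """Definition of e_d: sum of products over all d-subsets."""
    return sum(math.prod(c) for c in combinations(values, d)) if d else 1.0

vals = [1.0, 2.0, 3.0]
# esf(vals) -> [1.0, 6.0, 11.0, 6.0], matching the brute-force definition
```

In the CPHD update the input `values` would be the per-measurement quantities c(z) for z in z_{1:m}; here `vals` is an arbitrary illustrative input.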
The proof is given in Appendix B (Section B-B).
Corollary 1. First moment measure (PHD update) [5]. The first moment measure of the updated process Φ_+ in any region B ∈ B_X, under the assumptions given in Lemma 1 and the additional assumptions that [5]: 1) the predicted process Φ is Poisson, and 2) the clutter is Poisson, whose rate is denoted by λ_c. The proof is given in Appendix B (Section B-C).

C. Second moment measure (CPHD and PHD updates)
Lemma 2. Second moment measure (CPHD update). Under the assumptions given in Lemma 1, the second moment measure of the updated process Φ_+ in any regions B, B′ ∈ B_X is given by the update equation with corrector terms ℓ_2(φ), ℓ_2(z), and ℓ_2(z, z′) (30). The proof is given in Appendix B (Section B-D).

Corollary 2. Second moment measure (PHD update)
Under the assumptions given in Corollary 1, the second moment measure of the updated process Φ_+ is given in any regions B, B′ ∈ B_X. The proof is given in Appendix B (Section B-F).

D. Main results
The two following theorems are the main results of this article. Their proofs are given in Appendix B (Section B-G).
Theorem 1. Regional statistics (CPHD update). Under the assumptions given in Lemma 1, the regional statistics of the updated process Φ_+ in any region B ∈ B_X are given by the mean (32) and the variance (33).

Theorem 2. Regional statistics (PHD update). Under the assumptions given in Corollary 1, the regional statistics of the updated process Φ_+ in any region B ∈ B_X are given by the mean (34) and the corresponding variance.

E. Discussion on implementation
We consider SMC implementations of the PHD and the CPHD filters and equip them with regional statistics. The resulting algorithms are given in Appendix C.
The SMC-PHD filter with regional variance can easily be drawn from the usual SMC-PHD filter [21]. Indeed, the regional variance is computed using terms that are already computed to find the regional mean (34) in the SMC-PHD filter (see Algorithm 2). The computational complexity of the PHD filter with the variance is still linear w.r.t. the number of current measurements m.
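A minimal particle-based sketch of the data update with regional statistics is given below. It assumes, as a hypothetical reading of the structure of Theorem 2, that the updated mean splits into a missed-detection term µ_φ(B) plus one term µ_z(B) per measurement, and that the variance takes the Bernoulli-like form var(B) = µ_φ(B) + Σ_z µ_z(B)(1 − µ_z(B)); all functions and parameters (g, p_d, λ_c, c, in_B) are illustrative stand-ins, not the paper's exact notation.

```python
def smc_phd_update(particles, z_list, p_d, g, lam_c, c, in_B):
    """SMC-PHD data update, reporting only the regional statistics on B.

    particles: list of (weight, state) representing the predicted intensity;
    g(z, x):   single-target likelihood; c(z): clutter spatial density;
    lam_c:     clutter rate; in_B(x): membership test for the region B.
    """
    # Missed-detection contribution to the mean in B (standard PHD update term).
    mu_phi_B = sum((1.0 - p_d) * w for w, x in particles if in_B(x))
    mu_B, var_B = mu_phi_B, mu_phi_B
    for z in z_list:
        denom = lam_c * c(z) + sum(p_d * g(z, x) * w for w, x in particles)
        mu_z_B = sum(p_d * g(z, x) * w for w, x in particles if in_B(x)) / denom
        mu_B += mu_z_B
        # Assumed variance structure: each measurement contributes
        # mu_z(B) * (1 - mu_z(B)), so a confidently associated measurement
        # (mu_z close to 0 or 1) adds little uncertainty.
        var_B += mu_z_B * (1.0 - mu_z_B)
    return mu_B, var_B

# Sanity check: with p_d = 0 no measurement term survives, and the update
# behaves like a Poisson process on B, for which variance equals mean.
particles = [(0.5, 0.2), (0.5, 0.8)]
mu, var = smc_phd_update(particles, z_list=[0.5], p_d=0.0,
                         g=lambda z, x: 1.0, lam_c=1.0, c=lambda z: 1.0,
                         in_B=lambda x: True)
# mu == var == 1.0 in this degenerate case
```

The sketch makes the complexity claim visible: the per-measurement loop reuses the µ_z(B) terms already needed for the mean, so the cost remains linear in m.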
Similarly, the construction of the SMC-CPHD filter with regional variance is an extension of the well-known SMC-CPHD filter [29]. As shown in Algorithm 1, the additional corrector terms ℓ_2(φ), ℓ_2(z), and ℓ_2(z, z′) (30) are computed in parallel to the usual corrector terms ℓ_1(φ) and ℓ_1(z) (20). In the usual CPHD filter, the bulk of the computational cost stems from the computation of ℓ_1(φ) and ℓ_1(z) in the filtering equation (32) or, more specifically, the elementary symmetric functions (27) appearing in the Υ_0 and Υ_1 terms (21). The number of operations to compute e_d(z_{1:m}) is evaluated at m log² m in [30], and m + 1 elementary symmetric functions must be computed for ℓ_1(φ) and ℓ_1(z). Thus, it has been shown by Vo et al. that the computational complexity of the CPHD filter is O(m² log² m), where m is the number of current measurements [30].
The corrector terms ℓ_2(φ) and ℓ_2(z) (30), required for the computation of the regional variance (33), do not involve new elementary symmetric functions and can be found in parallel to ℓ_1(φ) and ℓ_1(z) without significant additional cost (see Algorithm 1). On the other hand, ℓ_2(z, z′) involves m(m−1)/2 different Υ_2 terms (21), with additional elementary symmetric functions e_d(z, z′) for every couple of distinct measurements z, z′. Thus, the computational complexity of the SMC-CPHD filter with regional variance is O(m³ log² m).

IV. SIMULATION EXAMPLES
In this section, we demonstrate the concept of regional variance for the PHD and the CPHD filters using the multi-target scenario illustrated in Fig. 3. A range-bearing sensor located at the origin takes measurements from five targets that appear and disappear over time in the surveillance scene. The sensor Field of View (FoV) is the circular region centred at the origin with radius 3500 m. The standard deviations in range and bearing are selected as 5 m and 1° respectively. The clutter is generated from a Poisson process with rate λ = 20, uniform over the FoV.
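The measurement generation for one scan of this scenario can be sketched as follows; this is a hypothetical reconstruction of the setup (detection with probability p_d, Gaussian noise in range and bearing, Poisson clutter uniform over the circular FoV), not the authors' simulation code.

```python
import math
import random

def scan(targets, rng, p_d=0.95, sigma_r=5.0, sigma_b=math.radians(1.0),
         lam_clutter=20.0, fov_radius=3500.0):
    """One set of range-bearing measurements from a sensor at the origin."""
    measurements = []
    for (x, y) in targets:
        if rng.random() < p_d:  # target detected with probability p_d
            r = math.hypot(x, y) + rng.gauss(0.0, sigma_r)
            b = math.atan2(y, x) + rng.gauss(0.0, sigma_b)
            measurements.append((r, b))
    # Poisson number of clutter points (Knuth's sampler), uniform over the
    # FoV disc: area-uniform radius uses the square-root transform.
    n_clutter, p, thresh = 0, 1.0, math.exp(-lam_clutter)
    while True:
        p *= rng.random()
        if p <= thresh:
            break
        n_clutter += 1
    for _ in range(n_clutter):
        r = fov_radius * math.sqrt(rng.random())
        b = rng.uniform(-math.pi, math.pi)
        measurements.append((r, b))
    return measurements

rng = random.Random(0)
measurements = scan([(1000.0, 0.0), (-500.0, 1200.0)], rng)
```

Averaged over many scans, the number of clutter-only measurements concentrates around λ = 20, matching the scenario description.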
The state of a target is described by a location [x, y] and a velocity [ẋ, ẏ] component, and the subset of R⁴ that falls in the FoV is the state space X. The state transitions follow a linear constant-velocity motion model with (slight) additive zero-mean process noise, after initialisation with the values given in Table I. The trajectories of targets 1 and 2 cross each other at time t = 55 s.
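The constant-velocity transition has the standard linear form x_{k+1} = F x_k plus noise; a minimal sketch follows, where the sampling period T and the noise level are assumed parameters (the noise here perturbs only the velocity components, a simplification of the usual acceleration-noise model).

```python
import random

def cv_step(state, T=1.0, accel_noise=0.0, rng=None):
    """Constant-velocity transition for a state [x, y, vx, vy]."""
    x, y, vx, vy = state
    nx = rng.gauss(0.0, accel_noise) if rng else 0.0
    ny = rng.gauss(0.0, accel_noise) if rng else 0.0
    # Position advances by velocity * T; (slight) zero-mean noise on velocity.
    return [x + vx * T, y + vy * T, vx + nx, vy + ny]

state = [0.0, 0.0, 10.0, 5.0]
for _ in range(3):
    state = cv_step(state)      # noise-free propagation
# state is now [30.0, 15.0, 10.0, 5.0]
```

With `accel_noise > 0` and an `rng` supplied, repeated calls generate the slightly perturbed trajectories used in the scenario.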

A. Variance as a global statistic
In this example, we consider the regional variance over the FoV under different target detection probabilities. Doing so, we demonstrate the effect of the probability of detection p_d on the uncertainty of the estimated target number. We simulate measurements with p_d = 0.95, 0.90, and 0.85, and run both the CPHD and the PHD filters. The mean and the variance of the target number within the FoV (given by the regional statistics evaluated over the whole FoV) are computed using Algorithms 2 and 1.
In Fig. 4(a)-(c), we present the mean target number in the FoV (blue line) computed using the CPHD filter, together with the ground truth (black line). The variance in target number within the FoV is used to quantify the level of uncertainty in the mean target number. Specifically, we present confidence intervals of ±1 square root of the regional variance, which thus admits a standard-deviation interpretation. We note that the uncertainty increases as we lower the probability of detection, coinciding with our intuition. The behaviour of the confidence bounds computed using the PHD filter is similar, as seen in Fig. 4(d)-(f).
The regional variances used to find the aforementioned confidence intervals are presented in Fig. 5. In Fig. 5(a), we plot the results obtained using the CPHD filter as p_d goes from 0.95 to 0.85. Similar plots for the PHD filter are provided in Fig. 5(b). The increasing uncertainty with decreasing p_d can clearly be seen. We also note that the variance over the FoV grows significantly more with the PHD than with the CPHD filter as p_d is lowered.

B. Variance as a local statistic
In this example, we illustrate the variance evaluated in regions of various sizes within the FoV. Specifically, we consider concentric circular regions of growing radius around the location of target 1 while its trajectory crosses that of target 2 (Fig. 6). We vary the radii from r = 1 m to 200 m in 1 m steps at time steps t = 51, 55 and 59 s. The distances between the targets at these time instants are 76.1, 5.4 and 78.9 m, respectively, so the regions with larger radius cover both targets. We compute both the mean target number in these concentric regions and the associated uncertainty quantified by the proposed regional variance. We expect the mean target number to be monotonically increasing as a function of the radius and to reach approximately two for the larger circles. The regional variance, on the other hand, is not necessarily monotonic, and we expect its envelope to indicate whether target 1 can be resolved, in the sense that we can identify circular regions that contain only target 1 with high confidence.
In Fig. 7(a)-(c), we present the plots of the regional mean and variance in target number (solid black lines) from the CPHD filter as a function of the radius, for a typical run. For r = 200 m, the mean target number in the region is approximately two with very small variance, suggesting that, with very high confidence, both targets are covered at t = 51, 55 and 59 s. As the radius increases from r = 1 m (and the circumferences of the regions depart from target 1), the uncertainty starts increasing until it reaches a local maximum. The behaviour of the variance curves, after the local maximum and until they reach a small steady value, is of particular interest. In both Fig. 7(a) and (c), the local minimum separating the two maxima clearly indicates that target 1 is contained with high confidence in a circle whose radius equals the value at the minimum (as the mean target number also reaches one at this minimum). When the targets are located at their closest positions (Fig. 7(b)), we cannot identify such regions.
We contrast these results with those obtained after filtering the measurements of an inferior range-bearing sensor with 12.5 m and 2.5° standard deviations in range and bearing, respectively. The regional variance for this sensor at t = 51, 55 and 59 s (solid red lines in Fig. 7(a)-(c)) stays at a high level until the expected target number reaches two, and, in turn, we are unable to select a region that contains only target 1 with high confidence. In other words, the two targets are not resolved at these time instants.
In Fig. 7(d)-(f), we present similar results obtained using the PHD filter. We note that the PHD filter performs as well as the CPHD filter in terms of the ability to resolve the two targets in this particular scenario. As a result, the regional variance computed by either filter can effectively be used to assess the level of uncertainty in the estimated number of targets in arbitrary regions.

V. CONCLUSION
The motivation of this work was to develop multi-object estimators that provide information about both the expected number of targets and the uncertainty of the target number in any arbitrary region of the surveillance scene. This level of information has not previously been available to operators through track-based multi-target estimators. Providing the regional variance in target number, alongside the regional mean target number, has the potential to give an enhanced picture of surveillance scenarios and to address sensor management and resource allocation problems.
Multi-object estimation in a surveillance scene with a challenging environment is the focus of the multi-object paradigm often known as Finite Set Statistics, which leads to filtering algorithms built upon multi-object probability densities rather than probability measures. However, since such implementations are insufficiently general to represent second-order information about the target number in an arbitrary region, this article adopts a measure-theoretical approach which enables the computation of the regional variance of multi-object estimators. A comprehensive description of the theoretical construction and the practical implementation of the regional mean and variance in target number, in the context of PHD and CPHD filtering, is provided and illustrated on simulated data.
• Π_{m,n} is the set of all the partitions of indices {i_1, . . ., i_m, j_1, . . ., j_n} solely composed of tuples of the form (i_a, j_b) (target x_{j_b} is detected and produces measurement z_{i_a}), (φ, j_b) (target x_{j_b} is not detected), or (i_a, φ) (measurement z_{i_a} is clutter);
• π_φ = #{i | (i, φ) ∈ π} is the number of clutter measurements given by partition π.
Note that both the predicted probability measure (39) and the likelihood function (40) are symmetrical w.r.t. the targets. This property will help simplify the full multi-target Bayes update (16) to tractable approximations for both the PHD and CPHD filters. Substituting (39) into (16) gives (41). Let us first fix an arbitrary target number n ∈ N and consider the inner integral of the likelihood against the predicted spatial distribution. Since the likelihood is symmetrical w.r.t. the targets, the integration variables x_{1:n} play an identical role, and using (40) yields (42). Note that, since the targets are identically distributed, measurement/target pairings (z_i, x_{j_1}) and (z_i, x_{j_2}) are equivalent for integration purposes in (42). Thus, selecting a partition π ∈ Π_{m,n} reduces to the choice of:
• a number d of detections;
• a collection of d measurements in z_1, . . ., z_m;
• an arbitrary collection of d detected targets in x_1, . . ., x_n.
Therefore, (42) simplifies to (43b), using the Υ function defined in (21). The multiplying constant in (41), found to be ∏_{z ∈ z_{1:m}} c(z), will appear as well in the expressions of the numerator of the updated PGFl (16) developed in Sections B-B and B-D, and will be omitted from now on. Finally, substituting (43b) into (41) yields the result (36).
We now move to the PHD filter. Since a Poisson process is a specific case of an i.i.d. process, we start from the CPHD result (36) with the additional assumption that the predicted process is Poisson. We may then write the chain of equalities (44), where (44f) is the factorised form of (44e).

B. Lemma 1
Proof. Using (10), the first moment measure µ_{Φ_+} in some B ∈ B_X is retrieved from the first-order differential [15] of the updated PGFl (16). The expression of the denominator in (45b) is detailed separately in Property 1 (Section A). Using Corollary 1 in [15], the numerator expands as in (56), the proof being given in Appendix B (Section B-E). Substituting (56) into the numerator of (55) gives (57). Once again, the symmetry of L(z_{1:m}|x_{1:n}) and P_Φ(dx_{1:n}) w.r.t. the targets in the case of the CPHD filter (see (39) and (40)) allows the simplification of (57). The first likelihood term in (58b), just as in the proof of Lemma 1, expands following (49). Now, considering the general expression of the likelihood (40), the second likelihood term in (58b) can be split following the partitions where none of the targets x, x′ is detected, those where only one is detected, and those where both are detected, as in (59). Substituting (59) and (49) into (58b), then substituting the result into the expression of the second moment measure (55b), finally yields µ^(2)_{Φ_+}, where the corrector terms ℓ_2(φ), ℓ_2(z), and ℓ_2(z, z′), following a similar development as shown in the proofs of Property 1 (Section B-A) and Lemma 1 (Section B-B), are as defined by (30).
Proof. Expanding the exponential gives a sum in which the multinomial coefficient over p_{1:n} appears. Then, using Corollary 1 in [15], the result follows.

F. Corollary 2
Proof. Just as the Poisson assumption simplified the expression of Υ_0, as shown in the development (44), it simplifies the expression of Υ_2.

G. Theorems 1 and 2
Proof. The first-order statistic µ_{Φ_+}(B) is given by Lemma 1. Following the definition of the variance (5), the second-order statistic var_{Φ_+}(B) is obtained from the second moment measure µ^(2)_{Φ_+}(B, B) given by Lemma 2 and the first moment measure. The proof of Theorem 2 is identical, except that Corollaries 1 and 2 are used instead of Lemmas 1 and 2.

APPENDIX C ALGORITHMS
Algorithm 1 CPHD filter with variance: data update (adapted from [29]) and information statistics
Input:
  Predicted intensity: {w^(i), x^(i)}_{i=1}^{J}
  Cardinality distribution: {ρ(n)}_{n=0}^{n_max}
  Current measurements: z_{1:m}
  Maximum cardinality: n_max
Missed detection and measurement terms:
for 1 ≤ i ≤ J do
  w^{(i),φ} ← P(φ | x^(i)) w^(i)
  for z_k ∈ z_{1:m} do
    w^{(i),z_k} ← P(z_k | x^(i)) w^(i)
  end for
end for
Compute the global missed detection term:
  µ^φ_Φ(X) ← Σ_{i=1}^{J} w^{(i),φ}
Compute the global measurement terms:
for z_k ∈ z_{1:m} do
  µ^{z_k}_Φ(X) ← Σ_{i=1}^{J} w^{(i),z_k}
end for

Fig. 5: Regional variance, integrated over the whole FoV, (a) using the CPHD filter and (b) the PHD filter, for p_d = 0.95, 0.90, and 0.85. The plots are averages over 100 Monte Carlo runs.

Fig. 7: Regional mean (plain lines) and variance (dotted lines) in circular regions centred at the position of target 1 at times t = 51, 55 and 59 s for the CPHD (a)-(c) and the PHD (d)-(f) filters, respectively. Results are given for a superior (black lines) and an inferior (red lines) range-bearing sensor.