Neural Enhanced Belief Propagation for Multiobject Tracking

Algorithmic solutions for multi-object tracking (MOT) are a key enabler for applications in autonomous navigation and applied ocean sciences. State-of-the-art MOT methods fully rely on a statistical model and typically use preprocessed sensor data as measurements. In particular, measurements are produced by a detector that extracts potential object locations from the raw sensor data collected for a discrete time step. This preparatory processing step reduces data flow and computational complexity but may result in a loss of information. State-of-the-art Bayesian MOT methods that are based on belief propagation (BP) systematically exploit graph structures of the statistical model to reduce computational complexity and improve scalability. However, as a fully model-based approach, BP can only provide suboptimal estimates when there is a mismatch between the statistical model and the true data-generating process. Existing BP-based MOT methods can further only make use of preprocessed measurements. In this paper, we introduce a variant of BP that combines model-based with data-driven MOT. The proposed neural enhanced belief propagation (NEBP) method complements the statistical model of BP with information learned from raw sensor data. The conjecture underlying this approach is that the learned information can reduce model mismatch and thus improve data association and false alarm rejection. Our NEBP method improves tracking performance compared to model-based methods. At the same time, it inherits the advantages of BP-based MOT, i.e., it scales only quadratically in the number of objects, and it can thus generate and maintain a large number of object tracks. We evaluate the performance of our NEBP approach for MOT on the nuScenes autonomous driving dataset and demonstrate its state-of-the-art performance.


I. INTRODUCTION
Multi-object tracking (MOT) [1]-[23] enables emerging applications including autonomous driving, applied ocean sciences, and indoor localization. MOT aims at estimating the states (e.g., positions and possibly other parameters) of moving objects over time, based on measurements provided by sensing technologies such as Light Detection and Ranging (LiDAR), radar, or sonar [1]-[23]. An inherent problem in MOT is measurement-origin uncertainty, i.e., the unknown association between measurements and objects. MOT is further complicated by the fact that the number of objects is unknown, i.e., for the initialization and termination of object tracks, track management schemes need to be employed.

A. Model-Based and Data-Driven MOT
Typically, MOT methods rely on measurements that have been extracted from raw sensor data in a detection process. For example, an object detector [24]-[29] can be applied to LiDAR scans or images at each time step independently, and the detected objects are then used as measurements for MOT [30], [31]. This common strategy is referred to as "detect-then-track". Based on the assumption that, at each time step, an object can generate at most one measurement and each measurement can originate from at most one object, data association can be cast as a bipartite matching problem.
The first class of MOT methods follows a global nearest neighbor association approach [1]. Here, a Hungarian [32] or a greedy matching algorithm is used to perform "hard" measurement-to-object associations [15]-[22]. To improve the reliability of hard associations, these methods often rely on discriminative shape information of objects and measurements. Shape information is extracted from raw sensor data by deep neural networks [21], [22] and used to compute pairwise distances between objects and measurements more accurately. The methods in this class typically rely on heuristics for track management.
BP, also known as the sum-product algorithm [34]-[36], provides an efficient and scalable solution to high-dimensional inference problems. BP operates by "passing messages" along the edges of the factor graph [34] that represents the statistical model of the inference problem. Important algorithms such as the Kalman filter, the particle filter [37], and the JPDA filter [1] are instances of BP. By exploiting the structure of the graph, BP-based MOT methods [3]-[6] are highly scalable. In particular, using BP, "soft" probabilistic data association can be performed for hundreds of objects. This makes it possible to generate and maintain a very large number of potential object tracks and, in turn, achieve state-of-the-art MOT performance [3]-[6].
Existing BP-based methods entirely rely on "hand-designed" statistical models. However, statistical models are often unable to accurately represent all the intricate details of the true data-generating process. This mismatch leads to suboptimal object state estimates. In addition, since BP methods rely on the detect-then-track strategy, important object-related information might be discarded by the object detector. On the other hand, learning-based methods are fully data-driven, i.e., they do not make use of any statistical model. Typically, learning-based methods rely on deep neural networks, which facilitate the extraction of all relevant information from raw sensor data [21], [22]. However, learning-based MOT typically makes use of potentially unreliable heuristics for track management and only performs well in "big data" problems.
A graph neural network (GNN) [38], [39] is a graphical model formed by neural networks. The neural networks "pass messages", i.e., exchange processing results, along the edges of the GNN. This mechanism is similar to the message passing performed by BP. It has been demonstrated that, in Bayesian estimation problems, a GNN can outperform loopy BP [40] if sufficient training data is available. Recently, neural enhanced belief propagation (NEBP) [41] was introduced. In NEBP, a GNN that matches the topology of the factor graph is established. After training, the GNN messages can complement the corresponding BP messages to correct errors introduced by cycles and model mismatch. The resulting hybrid message passing method combines the benefits of model-based and data-driven inference. In particular, NEBP can leverage the performance advantages of GNNs in big data problems. The benefits of NEBP have been demonstrated in decoding [41] and cooperative localization [42] problems.

B. Contribution and Paper Organization
In this paper, we address the fundamental question of how model-based and data-driven approaches can be combined in a hybrid inference method. In particular, we aim to develop a BP-based MOT method that augments its "hand-designed" statistical model with information learned from raw sensor data. As a result, we propose NEBP for MOT. Here, BP messages calculated for probabilistic data association are combined with the output of a GNN. The GNN uses object detections and features learned from raw sensor information as inputs. It can improve the MOT performance of BP by introducing data-driven false alarm rejection and object shape association.
False alarm rejection aims at identifying which measurements are likely false alarms. For measurements that have been identified as potential false alarms, the false alarm distribution in the statistical model used by BP is locally increased. This reduces the probability that such a measurement is associated with an existing object track or initializes a new object track. Object shape association computes improved association probabilities by also taking into account features of existing object tracks and measurements that have been learned from raw sensor data. Compared to BP for MOT, the resulting NEBP method for MOT can improve object declaration and estimation performance if annotated data is available, and consequently provide state-of-the-art performance in big data MOT problems.
The key contributions of this paper are summarized as follows.
• We introduce NEBP for MOT where probabilistic data association is enhanced by learned information provided by a GNN.
• We present the training procedure and the loss function for the GNN that enable false alarm rejection and object shape association.
• We apply the proposed method to an autonomous driving dataset and demonstrate state-of-the-art object tracking performance.

An overview of the proposed NEBP method for MOT is presented as a flow diagram in Fig. 1. Here, black boxes show the computation modules performed by both conventional BP and NEBP. The red boxes show the additional modules only performed by the proposed NEBP method. A detailed description of each module will be provided in Sections IV and V.
In modern MOT scenarios with high-resolution sensors, it is often challenging to capture object shapes and the corresponding data-generating process by a statistical model. Thus, in contrast to the extended object tracking strategy [5], [43], [44], the influence of object shapes on data generation is best learned directly from data. This paper advances over the preliminary account of our method provided in the conference publication [45] by (i) introducing a new factor graph representation, which is a more accurate description of the proposed NEBP method; (ii) presenting more details on the development and implementation of the proposed NEBP approach for MOT; (iii) conducting a comprehensive evaluation based on real data that highlights why NEBP advances BP in MOT applications; and (iv) providing a detailed complexity analysis of the proposed NEBP for MOT method. Note that the new factor graph representation does not alter the resulting NEBP method for MOT.

This paper is organized as follows. Section II reviews the general BP and NEBP algorithms. Section III describes the system model and statistical formulation. Section IV reviews the factor graph and the BP for MOT algorithm. Section V develops the proposed NEBP for MOT algorithm. Section VI introduces the loss function used for NEBP training. Section VII discusses experimental results, and Section VIII concludes the paper.

II. REVIEW OF BP AND NEBP

A. Factor Graph and Belief Propagation
A factor graph [34], [46] is a bipartite undirected graph G_f = (V_f, E_f) that consists of a set of edges E_f and a set of vertices or nodes V_f = Q ∪ F. A variable node q ∈ Q represents a random variable x_q, and a factor node s ∈ F represents a factor ψ_s(x^(s)). The argument x^(s) of a factor consists of certain random variables x_q (each x_q can appear in several x^(s)). Variable nodes and factor nodes are typically depicted by circles and boxes, respectively. The joint probability density function (PDF) represented by the factor graph reads

  f(x) ∝ ∏_{s∈F} ψ_s(x^(s))     (1)

where ∝ denotes equality up to a multiplicative constant.

Fig. 1. Flow diagram of one time step of conventional BP and the proposed NEBP method for MOT. Black boxes show the computation modules performed by both BP and NEBP. The red boxes show additional modules only performed by the proposed NEBP method. The goal of both MOT methods is to obtain estimates x̂_i and r̂_i of object state x_i and existence variable r_i ∈ {0, 1} for all objects i ∈ {1, ..., I}. First, raw sensor data Z is preprocessed by an object detector, and the resulting object detection vector z consists of the measurements used for MOT. In addition to the measurements z, approximate marginal posterior distributions ("beliefs") f̃(x_i^−, r_i^−), i ∈ {1, ..., I^−} that have been computed at the previous time step are used as input for MOT. The belief propagation (BP) method reviewed in Section II-A performs operations on the factor graph discussed in Section IV to compute updated beliefs f̃(x_i, r_i), i ∈ {1, ..., I}. Estimates x̂_i and r̂_i can be computed from the beliefs f̃(x_i, r_i) as discussed in Section III-D. Compared to conventional BP for MOT, the proposed NEBP approach introduces a shape and motion feature extraction module and a GNN, discussed in Sections V-A and V-B, respectively. The shape and motion feature extraction module computes shape and motion features h_shape and h_motion from the raw sensor data Z and Z^− of the current and previous time steps. The GNN computes NEBP messages φ̃ based on the conventional BP messages φ and the features h_shape and h_motion, to obtain more accurate beliefs f̃(x_i, r_i), i ∈ {1, ..., I} and thus more accurate estimates x̂_i and r̂_i, as discussed in Section V-C.

BP [34], also known as the sum-product algorithm, can compute marginal PDFs f(x_q), q ∈ Q efficiently. BP performs local operations on the factor graph. The local operations can be interpreted as "messages" that are passed over the edges of the graph. There are two types of messages. At message passing iteration ℓ ∈ {1, ..., L}, the messages passed from variable nodes to factor nodes are defined as

  φ^(ℓ+1)_{q→s}(x_q) = ∏_{s′∈N_F(q)\{s}} φ^(ℓ)_{s′→q}(x_q).     (2)

In addition, the messages passed from factor nodes to variable nodes are given by

  φ^(ℓ+1)_{s→q}(x_q) = ∫ ψ_s(x^(s)) ∏_{q′∈N_Q(s)\{q}} φ^(ℓ+1)_{q′→s}(x_{q′}) dx^(s)_{∼q}     (3)

where N_Q(•) ⊆ Q and N_F(•) ⊆ F denote the sets of neighboring variable and factor nodes, respectively, and dx^(s)_{∼q} denotes integration with respect to all variables in x^(s) except x_q. If ψ_s(x^(s)) is a singleton factor node in the sense that it is connected to a single variable node x_q, i.e., x^(s) = x_q, then the message from the factor node to the variable node is equal to the factor node itself, i.e., φ_{s→q}(x_q) = ψ_s(x_q).
For future use, we introduce the joint set of messages φ^(ℓ+1) = {φ^(ℓ+1)_{q→s}, φ^(ℓ+1)_{s→q}}_{q∈Q, s∈F} as well as the function φ^(ℓ+1) = BP(φ^(ℓ)) that summarizes all message computations (2) and (3) related to one iteration ℓ.
After message passing is completed, one can subsequently obtain a "belief" f̃(x_q) for each variable node x_q, computed as the product of all incoming messages, i.e.,

  f̃(x_q) ∝ ∏_{s∈N_F(q)} φ^(L)_{s→q}(x_q).

If the factor graph is a tree, then the beliefs are exactly equal to the marginal PDFs, i.e., f̃(x_q) = f(x_q). In factor graphs with loops, BP is applied in an iterative manner, and the message passing order is not unique. Different message passing orders may lead to different beliefs. The beliefs f̃(x_q) provided by this "loopy BP" scheme are only approximations of the marginal posterior PDFs f(x_q). However, the beliefs f̃(x_q) have been observed to be very accurate in many applications [4], [47], [48].
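The tree-structured case above can be sketched numerically. The following minimal sum-product example (illustrative, not from the paper) uses two binary variables, two singleton factors, and one pairwise factor; on a tree, the computed beliefs must coincide with the brute-force marginals:

```python
import numpy as np

# Minimal sum-product sketch on a tree-structured factor graph with two
# binary variables x1, x2, singleton factors psi1, psi2, and a pairwise
# factor psi12. All factor values are illustrative.
psi1 = np.array([0.7, 0.3])            # psi_1(x1)
psi2 = np.array([0.4, 0.6])            # psi_2(x2)
psi12 = np.array([[0.9, 0.1],
                  [0.2, 0.8]])         # psi_12(x1, x2)

# Messages from singleton factor nodes equal the factors themselves.
phi_psi1_to_x1 = psi1
phi_psi2_to_x2 = psi2

# Variable-to-factor messages: product of all other incoming messages.
phi_x1_to_psi12 = phi_psi1_to_x1
phi_x2_to_psi12 = phi_psi2_to_x2

# Factor-to-variable messages: multiply in the incoming message and
# sum out the other variable.
phi_psi12_to_x1 = psi12 @ phi_x2_to_psi12
phi_psi12_to_x2 = psi12.T @ phi_x1_to_psi12

# Beliefs: normalized product of all incoming messages.
belief_x1 = phi_psi1_to_x1 * phi_psi12_to_x1
belief_x1 /= belief_x1.sum()
belief_x2 = phi_psi2_to_x2 * phi_psi12_to_x2
belief_x2 /= belief_x2.sum()

# On a tree, beliefs equal the exact marginals; verify by brute force.
joint = psi1[:, None] * psi2[None, :] * psi12
joint /= joint.sum()
```

Since this factor graph has no loops, `belief_x1` and `belief_x2` match the row and column sums of the normalized joint exactly; with loops, they would only be approximations.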

B. Neural Enhanced Belief Propagation
NEBP [41] is a hybrid message passing method that combines the benefits of model-based and data-driven inference. In particular, NEBP aims at improving the BP solution by augmenting the factor graph with a GNN. While BP messages are calculated based on the statistical model represented by the factor graph, GNN messages are computed based on information learned from data. In NEBP, a GNN that matches the network topology of the factor graph is introduced. An iterative message passing procedure is performed on the GNN.
In one GNN iteration, nodes send messages to their neighboring nodes (cf. (5)-(6)), receive messages from neighboring nodes, and aggregate the received messages to update their node embeddings (cf. (7)-(8)). In particular, at message passing iteration ℓ ∈ {1, ..., L}, the equations that describe message passing of the GNN are given as follows [41]. The messages exchanged between variable nodes q ∈ Q and factor nodes s ∈ F along the edges of the GNN are given by the vectors

  m^(ℓ+1)_{q→s} = g_{Q→F}(h^(ℓ)_q, h^(ℓ)_s, e_{q→s})     (5)
  m^(ℓ+1)_{s→q} = g_{F→Q}(h^(ℓ)_s, h^(ℓ)_q, e_{s→q})     (6)

where the e_{q→s} as well as the e_{s→q} are edge attribute vectors, and g_{Q→F}(•) as well as g_{F→Q}(•) are referred to as edge functions [41]. In addition, after GNN messages have been exchanged, so-called node embedding vectors h^(ℓ+1)_s and h^(ℓ+1)_q for factor node s ∈ F and variable node q ∈ Q are computed as

  h^(ℓ+1)_s = g_F(h^(ℓ)_s, ∑_{q∈N_Q(s)} m^(ℓ+1)_{q→s})     (7)
  h^(ℓ+1)_q = g_Q(h^(ℓ)_q, ∑_{s∈N_F(q)} m^(ℓ+1)_{s→q}, e_q)     (8)

where the e_q are node attribute vectors [41], and g_F(•) as well as g_Q(•) are referred to as node functions [41].
Edge and node functions are the neural networks of the GNN. For future use, we introduce the joint set of GNN messages m^(ℓ) = {m^(ℓ)_{q→s}, m^(ℓ)_{s→q}}_{q∈Q,s∈F}, node embeddings h^(ℓ) = {h^(ℓ)_s, h^(ℓ)_q}_{q∈Q,s∈F}, and attributes e = {e_q, e_{s→q}, e_{q→s}}_{q∈Q,s∈F}, as well as the function (h^(ℓ+1), m^(ℓ+1)) = GNN(h^(ℓ), e) that summarizes all GNN computations (5)-(8) at iteration ℓ. Note that singleton factor nodes are not included in the GNN.
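A minimal numerical sketch of one GNN iteration of the form (5)-(8) follows. Randomly initialized single-layer networks stand in for the trained edge and node functions; the embedding dimension, summation-based aggregation, and the omission of the node attributes e_q in the variable-node update are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(w, b, x):
    # One-layer network with ReLU; stands in for the learned edge/node
    # functions g_{Q->F}, g_{F->Q}, g_F, g_Q (weights are random, untrained).
    return np.maximum(w @ x + b, 0.0)

D = 4                                   # embedding dimension (assumed)
Q, F = 3, 2                             # number of variable and factor nodes
h_q = rng.normal(size=(Q, D))           # variable-node embeddings h_q
h_s = rng.normal(size=(F, D))           # factor-node embeddings h_s
e = rng.normal(size=(Q, F, D))          # edge attributes (e.g., BP messages)

# Random parameters of the edge and node functions.
Wqf, bqf = rng.normal(size=(D, 3 * D)), rng.normal(size=D)
Wfq, bfq = rng.normal(size=(D, 3 * D)), rng.normal(size=D)
Wf, bf = rng.normal(size=(D, 2 * D)), rng.normal(size=D)
Wq, bq = rng.normal(size=(D, 2 * D)), rng.normal(size=D)

# One GNN iteration on a fully connected bipartite graph (cf. (5)-(6)).
m_q_to_s = np.empty((Q, F, D))
m_s_to_q = np.empty((Q, F, D))
for q in range(Q):
    for s in range(F):
        m_q_to_s[q, s] = mlp(Wqf, bqf, np.concatenate([h_q[q], h_s[s], e[q, s]]))
        m_s_to_q[q, s] = mlp(Wfq, bfq, np.concatenate([h_s[s], h_q[q], e[q, s]]))

# Node embedding updates aggregate incoming messages by summation (cf. (7)-(8)).
h_s_new = np.stack([mlp(Wf, bf, np.concatenate([h_s[s], m_q_to_s[:, s].sum(axis=0)]))
                    for s in range(F)])
h_q_new = np.stack([mlp(Wq, bq, np.concatenate([h_q[q], m_s_to_q[q].sum(axis=0)]))
                    for q in range(Q)])
```

Summation is a common permutation-invariant aggregation choice; the trained functions in the paper may differ in depth and aggregation.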
Based on the BP message passing procedure discussed in Section II-A and the GNN message passing procedure discussed above, the hybrid NEBP method can be summarized as follows. In particular, at iteration ℓ, the conventional BP messages are computed from the enhanced messages of the previous iteration, i.e.,

  φ^(ℓ+1) = BP(φ̃^(ℓ)_{F→Q})     (9)

where φ̃^(ℓ)_{F→Q} = {φ̃^(ℓ)_{s→q}}_{q∈Q,s∈F} is the set of NEBP messages from the last iteration ℓ that are passed from factor nodes to variable nodes. The BP messages φ^(ℓ+1) serve as the edge attributes for GNN message passing in (5)-(8). This can be seen as providing the GNN with a preliminary data association solution computed by conventional BP, which does not make use of the object shape information. The GNN then aims at refining this preliminary solution by also taking the object shape information into account. Providing a preliminary solution to the GNN can make training and inference more efficient and accurate [41].
Finally, the NEBP messages at the current iteration are calculated as

  φ̃^(ℓ+1)_{s→q} = g_{nebp,1}(h^(ℓ+1), m^(ℓ+1)) ⊙ φ^(ℓ+1)_{s→q} + g_{nebp,2}(h^(ℓ+1), m^(ℓ+1))     (10)

where g_{nebp,1}(h^(ℓ+1), m^(ℓ+1)) and g_{nebp,2}(h^(ℓ+1), m^(ℓ+1)) are neural networks that, in general, output a positive vector with the same dimension as φ_{s→q}, and ⊙ denotes element-wise multiplication. The BP messages passed from variable nodes to factor nodes are not neural enhanced.
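This refinement step can be sketched as follows. The GNN outputs `g1` and `g2` are random stand-ins here, and the softplus nonlinearity used to keep them positive is an illustrative choice, not prescribed by the paper:

```python
import numpy as np

# Sketch of the NEBP message refinement: the GNN supplies a multiplicative
# and an additive positive correction for a factor-to-variable BP message.
def softplus(x):
    # Keeps the corrections positive, as required for messages
    # (illustrative choice of nonlinearity).
    return np.log1p(np.exp(x))

rng = np.random.default_rng(1)
phi_bp = np.array([0.6, 0.3, 0.1])      # a BP message over 3 hypotheses

raw1 = rng.normal(size=3)               # would be computed from (h, m)
raw2 = rng.normal(size=3)
g1, g2 = softplus(raw1), softplus(raw2)

phi_nebp = g1 * phi_bp + g2             # element-wise refinement
phi_nebp /= phi_nebp.sum()              # normalize for interpretability
```

The multiplicative term can rescale individual hypotheses (e.g., down-weighting likely false alarms), while the additive term can reintroduce hypotheses the model-based messages nearly ruled out.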
After the last message passing iteration ℓ = L, the belief for each variable node x_q is calculated as

  f̃(x_q) ∝ ∏_{s∈N_F(q)} φ̃^(L)_{s→q}(x_q).     (11)

III. SYSTEM MODEL AND STATISTICAL FORMULATION
In this section, we review the system model of BP-based MOT and the multi-object declaration and state estimation problem that BP-based MOT aims to solve.

A. Potential Objects and Object States
The number of objects is unknown and time-varying. We describe this scenario by introducing N_k potential objects (POs) [3], [4], where N_k is the maximum possible number of objects.1 At time k, the existence of PO n ∈ {1, ..., N_k} is modeled by a binary random variable r_{k,n} ∈ {0, 1}. PO n exists, in the sense that it represents an actual object, if and only if r_{k,n} = 1. The kinematic state of PO n is modeled by a random vector x_{k,n} that consists of the object's position and possibly further motion information. The augmented PO state is defined as y_{k,n} = [x_{k,n}^T r_{k,n}]^T. In what follows, we will refer to augmented PO states simply as PO states. We also introduce the joint PO state vector at time k as y_k = [y_{k,1}^T ··· y_{k,N_k}^T]^T. At each time k, an object detector produces a vector z_k = [z_{k,1}^T ··· z_{k,J_k}^T]^T of preprocessed measurements from raw sensor data Z_k, i.e., z_k = g_det(Z_k). The joint measurement vector that consists of all preprocessed measurements up to time k is denoted as z_{1:k} = [z_1^T ··· z_k^T]^T. There are two types of POs:
• New POs represent objects that, for the first time, have generated a measurement at the current time step k.2 Their states are denoted as y_{k,j}, j ∈ {1, ..., J_k}, i.e., one new PO is introduced for each measurement.
• Legacy POs represent objects that already have generated at least one measurement at previous time steps k′ < k. Their states are denoted by y_{k,i}, i ∈ {1, ..., I_k}, where I_k is the total number of legacy POs.
All new POs that have been introduced at time k−1 become legacy POs at time k. Thus, the number of legacy POs at time k is I_k = N_{k−1}, and the total number of POs at time k is N_k = I_k + J_k. A pruning step that limits the growth of the number of PO states will be discussed in Section III-D. For future reference, we further define the joint legacy PO state vector and the joint new PO state vector at time k.

POs represent actual objects that already have generated at least one measurement. In addition, there may also be actual objects that have not generated any measurements yet. These objects are referred to as "unknown" objects. Unknown objects are independent and identically distributed according to f_u(•). The number of unknown objects is modeled by a Poisson distribution with mean μ_u. The statistical model for unknown objects induces a statistical model for new POs [4], as further discussed in Section IV.

1 The number of POs N_k is the maximum possible number of actual objects that have produced a measurement so far [4].
2 Introducing a new PO is equal to initializing a new potential object track [4].

B. Data Association Vector and Measurement Model
MOT is subject to measurement-origin uncertainty, i.e., it is unknown which actual object generates which measurement z_{k,j}. It is also possible that a measurement does not originate from any actual object. Such a measurement is referred to as a false alarm. Furthermore, an actual object may also not generate any measurement. This is referred to as a missed detection. We assume that an object can generate at most one measurement and a measurement can originate from at most one object; this is known as the "data association assumption". Since every actual object that has generated a measurement is represented by a PO, we can model measurement-origin uncertainty by PO-to-measurement associations. These associations are represented by multinoulli random variables. In particular, the PO-to-measurement associations at time k can be described by the "object-oriented" data association (DA) vector a_k = [a_{k,1} ··· a_{k,I_k}]^T. The case where legacy PO i generates measurement j at time k is represented by a_{k,i} = j ∈ {1, ..., J_k}. On the other hand, the case where legacy PO i does not generate any measurement at time k is represented by a_{k,i} = 0.
The computational complexity of MOT can be reduced by also introducing the "measurement-oriented" DA vector b_k = [b_{k,1} ··· b_{k,J_k}]^T [49], [50]. Here, b_{k,j} = i ∈ {1, ..., I_k} represents the case where measurement j is originated by legacy PO i. In addition, b_{k,j} = 0 represents the case where measurement j is not originated by any legacy PO. Modeling PO-to-measurement associations in terms of both a_k and b_k is redundant in that b_k can be determined from a_k and vice versa. However, the resulting hybrid representation makes it possible to check the consistency of the data association assumption based on indicators that are only a function of two scalar association variables. In particular, we introduce the indicator function Ψ_{i,j}(a_{k,i}, b_{k,j}), which is equal to 0 if a_{k,i} = j, b_{k,j} ≠ i or if a_{k,i} ≠ j, b_{k,j} = i, and is equal to 1 otherwise. If and only if a data association event can be expressed by both an object-oriented association vector a_k and a measurement-oriented association vector b_k, the event does not violate the data association assumption and all indicator functions are equal to one. Finally, we also introduce the joint DA vectors a_{1:k} and b_{1:k}.

It is assumed that an actual object generates a measurement with probability of detection p_d. If and only if a PO i represents an actual object, i.e., r_{k,i} = 1, it can generate a measurement. If measurement z_{k,j} has been generated by PO n, it is distributed according to the conditional PDF f(z_{k,j}|x_{k,n}), whose functional form is arbitrary. For example, for a measurement that is linear with respect to the PO state x_{k,n} and subject to zero-mean additive Gaussian noise, i.e., z_{k,j} = H_k x_{k,n} + v_{k,j} with v_{k,j} ∼ N(0, Σ_v), we have f(z_{k,j}|x_{k,n}) = N(z_{k,j}; H_k x_{k,n}, Σ_v).
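The redundant object-oriented/measurement-oriented representation and the indicator functions can be sketched in a few lines; the 1-based index convention and the helper names are illustrative:

```python
# Sketch of the redundant DA representation: b can be derived from a, and
# the pairwise indicator Psi_ij checks consistency of the two vectors.

def b_from_a(a, num_meas):
    # a[i-1] = j (1-based) if legacy PO i generates measurement j, 0 otherwise.
    b = [0] * num_meas
    for i, j in enumerate(a, start=1):
        if j > 0:
            b[j - 1] = i
    return b

def psi(i, j, a_i, b_j):
    # 0 iff the two association variables contradict each other, 1 otherwise.
    if (a_i == j and b_j != i) or (a_i != j and b_j == i):
        return 0
    return 1

a = [2, 0, 1]                  # PO 1 -> meas 2, PO 2 -> none, PO 3 -> meas 1
b = b_from_a(a, num_meas=3)    # meas 3 gets b = 0: a false alarm hypothesis

consistent = all(psi(i, j, a[i - 1], b[j - 1])
                 for i in range(1, 4) for j in range(1, 4))
```

Each indicator only involves two scalar variables, which is what allows the consistency check to be embedded in local message passing.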
If measurement z_{k,j} has not been generated by any PO, it is a false alarm. False alarm measurements are independent and identically distributed according to f_FA(z_{k,j}). The number of false alarm measurements is modeled by a Poisson distribution with mean μ_FA.

C. Object Dynamics
The PO states y_{k−1,i} are assumed to evolve independently and identically according to a Markovian dynamic model [1]. In addition, for each PO at time k−1, there is a legacy PO at time k. The state transition of the joint PO state y_{k−1} from time k−1 to time k can thus be expressed as

  f(y_k|y_{k−1}) = ∏_{i=1}^{N_{k−1}} f(x_{k,i}, r_{k,i}|x_{k−1,i}, r_{k−1,i})

where the state-transition PDF f(x_{k,i}, r_{k,i}|x_{k−1,i}, r_{k−1,i}) models the dynamics of individual POs and is given as follows. If PO i does not exist at time k−1, i.e., r_{k−1,i} = 0, then it cannot exist at time k either. The state-transition PDF for r_{k−1,i} = 0 is thus given by

  f(x_{k,i}, r_{k,i}|x_{k−1,i}, r_{k−1,i} = 0) = f_D(x_{k,i}) for r_{k,i} = 0, and 0 for r_{k,i} = 1

where f_D(x_{k,i}) is an arbitrary "dummy" PDF, since the states of nonexisting POs are irrelevant. If PO i exists at time k−1, i.e., r_{k−1,i} = 1, then the probability that it still exists at time k is given by the survival probability p_s. If PO i still exists at time k, its state x_{k,i} is distributed according to the state-transition PDF f(x_{k,i}|x_{k−1,i}). The state-transition PDF for r_{k−1,i} = 1 is thus given by

  f(x_{k,i}, r_{k,i}|x_{k−1,i}, r_{k−1,i} = 1) = (1 − p_s) f_D(x_{k,i}) for r_{k,i} = 0, and p_s f(x_{k,i}|x_{k−1,i}) for r_{k,i} = 1.
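A particle-based sketch of this single-PO transition model follows; the constant-velocity motion model and all parameter values are illustrative assumptions:

```python
import numpy as np

# Sketch of the single-PO state transition: a PO survives with probability
# p_s and, if it survives, its kinematic state evolves under a motion model
# (here: constant velocity with additive Gaussian noise).
rng = np.random.default_rng(2)
p_s, dt, sigma = 0.99, 0.5, 0.1
F = np.array([[1.0, 0.0, dt, 0.0],
              [0.0, 1.0, 0.0, dt],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])    # transition for [x, y, vx, vy]

def transition(x_prev, r_prev):
    if r_prev == 0:
        return x_prev, 0                # nonexisting POs stay nonexisting
    if rng.uniform() > p_s:
        return x_prev, 0                # PO did not survive
    x_new = F @ x_prev + sigma * rng.normal(size=4)
    return x_new, 1                     # PO survives, state evolves

x, r = transition(np.array([0.0, 0.0, 1.0, 0.0]), 1)
```

In a particle implementation, this function would be applied to each particle of each legacy PO belief; the dummy PDF f_D corresponds to simply leaving the particles of nonexisting POs untouched.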

D. Declaration, Estimation, Initialization, and Termination
At each time step k, our goal is to declare whether a PO n ∈ {1, ..., N_k} exists and to estimate the PO states x_{k,n} of all existing POs, based on all measurements z_{1:k}. In the Bayesian setting, object declaration and state estimation essentially amount to, respectively, calculating the marginal posterior existence probabilities p(r_{k,n} = 1|z_{1:k}) and the marginal posterior state PDFs f(x_{k,n}|r_{k,n} = 1, z_{1:k}). Then, PO n is declared to exist if p(r_{k,n} = 1|z_{1:k}) is larger than a suitably chosen threshold T_dec [51, Ch. 2]. Furthermore, for each declared PO n, an estimate of x_{k,n} is provided by the minimum mean-square error (MMSE) estimator [51, Ch. 4]

  x̂_{k,n} = ∫ x_{k,n} f(x_{k,n}|r_{k,n} = 1, z_{1:k}) dx_{k,n}.     (12)

Both p(r_{k,n} = 1|z_{1:k}) and f(x_{k,n}|r_{k,n} = 1, z_{1:k}) can be obtained from the marginal posterior PDFs of the augmented PO states, f(y_{k,n}|z_{1:k}). Thus, the problem to be solved is finding an efficient computation of f(y_{k,n}|z_{1:k}). For future reference, we introduce the notation r̂_{k,n} = p(r_{k,n} = 1|z_{1:k}).

Fig. 2. Factor graph (a) and corresponding bipartite graph neural network (GNN) (b) for a single time step k of the considered NEBP approach for MOT. BP and GNN messages are shown. The time index k is omitted. A GNN node is introduced for each legacy PO and each new PO. The topology of the GNN, which only matches the part of the factor graph that models the data-generating process, will be discussed in Section V-B. Following the topology of the data association part of the factor graph in (a), GNN edges are introduced such that the bipartite GNN shown in (b) is obtained.
Track initialization and termination can be summarized as follows. We initialize a new potential object track for each measurement. The initial existence probability of each potential object track is determined by the statistical model for unknown objects discussed in Section III-B. With this initialization approach, the number of object tracks grows linearly with time k. Therefore, we terminate ("prune") potential object tracks by removing legacy and new POs with existence probabilities below a threshold T_pru from the state space.
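Declaration, MMSE estimation, and pruning can be sketched for particle-based beliefs as follows; the track representation, thresholds, and all numbers are illustrative:

```python
import numpy as np

# Sketch of declaration, MMSE estimation, and pruning from particle-based
# beliefs f(x_n, r_n). Thresholds and particle counts are illustrative.
T_dec, T_pru = 0.5, 0.1

def declare_estimate_prune(tracks):
    # Each track: (particles, weights, existence probability r_hat).
    kept, estimates = [], []
    for particles, weights, r_hat in tracks:
        if r_hat < T_pru:
            continue                    # terminate ("prune") the track
        kept.append((particles, weights, r_hat))
        if r_hat > T_dec:               # declare the PO to exist
            estimates.append(weights @ particles)   # particle-based MMSE
    return kept, estimates

rng = np.random.default_rng(3)
p1 = rng.normal(loc=[5.0, 0.0], size=(100, 2))      # 100 particles in 2-D
w = np.full(100, 1.0 / 100)                         # uniform weights
tracks = [(p1, w, 0.9), (p1, w, 0.3), (p1, w, 0.05)]
kept, est = declare_estimate_prune(tracks)
```

Here the track with existence probability 0.9 is declared and estimated, the 0.3 track is maintained but not declared, and the 0.05 track is pruned.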

IV. CONVENTIONAL BP-BASED MOT ALGORITHM
In this section, we review the BP-based MOT approach. Contrary to the original BP-based MOT approach, we introduce an alternative factor graph which makes it easier to describe the proposed NEBP method presented in Section V.
By using common assumptions, the joint posterior PDF of the PO states and the DA vectors, conditioned on the measurements, factorizes as

  f(y_{0:k}, a_{1:k}, b_{1:k}|z_{1:k}) ∝ ( ∏_{n=1}^{N_0} f(y_{0,n}) ) ∏_{k′=1}^{k} ( ∏_{i=1}^{I_{k′}} f(y_{k′,i}|y_{k′−1,i}) q(y_{k′,i}, a_{k′,i}; z_{k′}) ∏_{j′=1}^{J_{k′}} Ψ_{i,j′}(a_{k′,i}, b_{k′,j′}) ) ∏_{j=1}^{J_{k′}} v(y_{k′,j}, b_{k′,j}; z_{k′,j}).     (13)

Note that often there are no POs at time k = 0, i.e., N_0 = 0. The factor q(y_{k,i}, a_{k,i}; z_k) = q(x_{k,i}, r_{k,i}, a_{k,i}; z_k), describing the measurement model of the sensor for legacy POs, is defined as

  q(x_{k,i}, r_{k,i} = 1, a_{k,i}; z_k) = (p_d f(z_{k,j}|x_{k,i})) / (μ_FA f_FA(z_{k,j})) for a_{k,i} = j ∈ {1, ..., J_k}, and 1 − p_d for a_{k,i} = 0

and q(x_{k,i}, r_{k,i} = 0, a_{k,i}; z_k) = 1(a_{k,i}) f_D(x_{k,i}), where 1(a) ∈ {0, 1} is the indicator function of the event a = 0, i.e., 1(a) = 1 if a = 0 and 0 otherwise. The factor v(y_{k,j}, b_{k,j}; z_{k,j}) = v(x_{k,j}, r_{k,j}, b_{k,j}; z_{k,j}), describing the measurement model of the sensor as well as prior information for new POs, is given by

  v(x_{k,j}, r_{k,j} = 1, b_{k,j}; z_{k,j}) = 0 for b_{k,j} = i ∈ {1, ..., I_k}, and (μ_u f_u(x_{k,j}) f(z_{k,j}|x_{k,j})) / (μ_FA f_FA(z_{k,j})) for b_{k,j} = 0

and v(x_{k,j}, r_{k,j} = 0, b_{k,j}; z_{k,j}) = f_D(x_{k,j}), where f_D(x_{k,j}) is an arbitrary "dummy" PDF. Here, the distribution f_u(x_{k,j}) and the mean number μ_u of unknown objects are used as prior information for new POs. Note that a detailed derivation of the factors q(y_{k,i}, a_{k,i}; z_k) and v(y_{k,j}, b_{k,j}; z_{k,j}) is provided in [4].
The factorization in (13) provides the basis for a factor graph representation. Contrary to [4], in this work, we consider an alternative factor graph where PO states and association variables are combined in joint variable nodes. In particular, legacy PO states y_{k,i} and object-oriented association variables a_{k,i} form joint nodes "y_{k,i}, a_{k,i}". In addition, new PO states y_{k,j} and measurement-oriented association variables b_{k,j} form joint nodes "y_{k,j}, b_{k,j}". This combination of variable nodes is motivated by the fact that in the original factor graph there is exactly one a_{k,i} connected to the corresponding y_{k,i} and exactly one b_{k,j} connected to the corresponding y_{k,j}. The resulting alternative factor graph leads to a presentation of the proposed method in Section V that is consistent with the BP message passing rules [34] as well as the original work on NEBP [41]. A single time step of the considered factor graph is shown in Fig. 2a.
Next, BP is applied to efficiently compute the beliefs f̃(y_{k,n}) that approximate the marginal posterior PDFs f(y_{k,n}|z_{1:k}). Since the considered factor graph in Fig. 2a has loops, a specific message passing order has to be chosen. As in [3], [4], we choose an order that is based on the following rules: (i) BP messages are only sent forward in time, and (ii) iterative message passing is only performed for data association and at each time step individually.
In what follows, we briefly discuss the calculation of BP messages on the considered factor graph shown in Fig. 2a. Note that messages sent from the singleton factor nodes "q(y_{k,i}, a_{k,i}; z_k)" to the joint variable nodes "y_{k,i}, a_{k,i}", and messages sent from the singleton factor nodes "v(y_{k,j}, b_{k,j}; z_{k,j})" to the joint variable nodes "y_{k,j}, b_{k,j}", are equal to the factors "q(y_{k,i}, a_{k,i}; z_k)" and "v(y_{k,j}, b_{k,j}; z_{k,j})" themselves. Thus, we reuse the same notation for factor nodes and the corresponding messages.
These combined messages can be further simplified [52] as follows. Because of the binary consistency constraints expressed by Ψ_{i,j}(a_{k,i}, b_{k,j}), each message comprises only two different values. In particular, ϕ^(ℓ)_{Ψ_{i,j}→b_{k,j}}(b_{k,j}) in (22) takes on one value for b_{k,j} = i and another value for all b_{k,j} ≠ i. Furthermore, ν^(ℓ)_{Ψ_{i,j}→a_{k,i}}(a_{k,i}) in (23) takes on one value for a_{k,i} = j and another value for all a_{k,i} ≠ j. Thus, each message can be represented (up to an irrelevant constant factor) by the ratio of the first value and the second value. By exchanging such simplified messages, the computational complexity of each message passing iteration only scales as O(I_k J_k) (see [4], [52] for details). Furthermore, it has been shown in [52] that iterative probabilistic data association following (22)-(23), and its simplified version discussed above, is guaranteed to converge.
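A sketch of this simplified iterative message passing follows. Here `psi[i, j]` plays the role of the model-derived likelihood ratio that legacy PO i generated measurement j, `psi0[i]` covers the missed-detection hypothesis, and the ratio updates follow the scalable BP data association scheme of [4], [52]; the initialization, values, and iteration count are illustrative:

```python
import numpy as np

# Sketch of simplified iterative BP data association with message ratios.
# One iteration costs O(I * J).
rng = np.random.default_rng(4)
I, J = 4, 5
psi = rng.uniform(0.0, 2.0, size=(I, J))    # association likelihood ratios
psi0 = np.ones(I)                           # missed-detection hypotheses

nu = np.ones((J, I))                        # measurement -> object ratios
for _ in range(50):
    # Object -> measurement message ratios phi[i, j]:
    # phi_ij = psi_ij / (psi_i0 + sum_{j' != j} psi_ij' * nu_j'i).
    denom = psi0[:, None] + (psi * nu.T).sum(axis=1, keepdims=True) - psi * nu.T
    phi = psi / denom
    # Measurement -> object message ratios nu[j, i]:
    # nu_ji = 1 / (1 + sum_{i' != i} phi_i'j).
    denom2 = 1.0 + phi.sum(axis=0)[None, :].T - phi.T
    nu = 1.0 / denom2

# Approximate posterior association probabilities for each legacy PO:
# columns j = 1..J plus missed detection (last entry), normalized per PO.
p = np.concatenate([psi * nu.T, psi0[:, None]], axis=1)
p /= p.sum(axis=1, keepdims=True)
```

Each update only sums over one index, which is where the O(I_k J_k) per-iteration complexity comes from; new-PO hypotheses are omitted here for brevity.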
3) Belief Calculation: Finally, after the last message passing iteration ℓ = L, the beliefs f̃(y_{k,i}, a_{k,i}), i ∈ {1, ..., I_k} and f̃(y_{k,j}, b_{k,j}), j ∈ {1, ..., J_k} are computed as the products of all incoming messages, normalized by constants C_{k,i} and C_{k,j} that make sure that f̃(y_{k,i}, a_{k,i}) and f̃(y_{k,j}, b_{k,j}) sum and integrate to unity. The marginal beliefs f̃(y_{k,i}), f̃(y_{k,j}), p̃(a_{k,i}), and p̃(b_{k,j}) can then be obtained from f̃(y_{k,i}, a_{k,i}) and f̃(y_{k,j}, b_{k,j}) by marginalization. In particular, the approximate marginal posterior PDFs of the augmented states, f̃(y_{k,i}) ≈ f(y_{k,i}|z_{1:k}), i ∈ {1, ..., I_k} and f̃(y_{k,j}) ≈ f(y_{k,j}|z_{1:k}), j ∈ {1, ..., J_k}, are used for object declaration and state estimation as discussed in Section III-D. Furthermore, the approximate marginal association probabilities p̃(a_{k,i}) ≈ p(a_{k,i}|z_{1:k}), i ∈ {1, ..., I_k} and p̃(b_{k,j}) ≈ p(b_{k,j}|z_{1:k}), j ∈ {1, ..., J_k} are used in a preprocessing step for performance evaluation discussed in Sections VII-A and VII-B.

V. PROPOSED NEBP-BASED MOT ALGORITHM
In this section, we first discuss how neural networks can extract features from raw sensor data by using previous state estimates and preprocessed measurements. We then introduce the proposed NEBP framework for MOT, which, compared to BP for MOT, uses these features as an additional input. Since we limit our discussion to a single time step, we will omit the time index k in what follows.

A. Feature Extraction
We consider two types of features: (i) features that represent motion information (e.g., position and velocity) and (ii) features that represent shape information.
For each legacy PO i = 1, ..., I, the shape feature h_{a_i,shape} is extracted as

  h_{a_i,shape} = g_{shape,1}(Z^−; x̂_i^−)

where x̂_i^− is the approximate MMSE state estimate of legacy PO i at the previous time step. Furthermore, Z^− is the raw sensor data at the previous time step, and g_{shape,1}(Z^−; x̂_i^−) is a neural network.
Similarly, for each preprocessed measurement z_j, j = 1, ..., J, the shape feature h_{b_j,shape} is obtained as

  h_{b_j,shape} = g_{shape,2}(Z; z_j)

where g_{shape,2}(Z; z_j) is again a neural network and Z is the raw sensor data collected at the current time step.
Finally, for each legacy PO i = 1, ..., I and each measurement j = 1, ..., J, motion features are computed according to

  h_{a_i,motion} = g_{motion,1}(x̂_i^−, r̂_i^−)
  h_{b_j,motion} = g_{motion,2}(z_j)

where r̂_i^− is the approximate existence probability of legacy PO i at the previous time step. Furthermore, g_{motion,1}(x̂_i^−, r̂_i^−) and g_{motion,2}(z_j) are again neural networks. We will discuss one particular instance of shape feature extraction in Section VII-B.
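The motion-feature computation can be sketched with single-layer stand-ins for the trained networks; the feature dimension, input layouts, and tanh nonlinearity are assumptions for illustration:

```python
import numpy as np

# Sketch of the motion-feature inputs: g_motion,1 embeds the previous state
# estimate and existence probability of a legacy PO, g_motion,2 embeds a
# measurement. The single-layer networks are random, untrained stand-ins.
rng = np.random.default_rng(5)
D = 8                                   # feature dimension (assumed)
W1, W2 = rng.normal(size=(D, 5)), rng.normal(size=(D, 4))

def g_motion_1(x_prev, r_prev):
    # Input: 4-D kinematic estimate plus scalar existence probability.
    return np.tanh(W1 @ np.concatenate([x_prev, [r_prev]]))

def g_motion_2(z):
    # Input: a 4-D preprocessed measurement (assumed layout).
    return np.tanh(W2 @ z)

h_a_motion = g_motion_1(np.array([1.0, 2.0, 0.5, 0.0]), 0.8)
h_b_motion = g_motion_2(np.array([1.1, 2.1, 0.4, 0.1]))
```

Both embeddings live in the same feature space, so the GNN can compare legacy-PO and measurement features pairwise when refining the association messages.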

B. GNN Topology and BP Message Enhancement
The conjecture of this work is that in many MOT applications (i) object dynamics and existence can be described accurately by a statistical model represented by the PDFs f(x_{k,i} | x_{k−1,i}), f_u(x_{k,j}) and the parameters p_s, µ_u; (ii) measurement detection and the resulting measurements of the objects' positions can also be described well by a statistical model represented by the PDFs f(z_{k,j} | x_{k,n}), f_FA(z_{k,j}) and the parameters p_d, µ_FA; but (iii) object shape information is difficult to represent accurately by a statistical model. Thus, we make use of the models available for (i) and (ii), but for (iii) we learn the influence of object shape information on measurement detection directly from the data itself. Consequently, contrary to the original NEBP approach, in our NEBP method only the parts of the MOT factor graph that model the data-generating process are matched by the GNN. These matched parts are highlighted in Fig. 2. All factor nodes in this part of the factor graph are either singleton or pairwise factor nodes. As discussed in Section II-B, in NEBP singleton factor nodes are not matched by GNN nodes. In addition, in the considered factor graph in Fig. 2(a), the pairwise factor nodes "Ψ_{i,j}(a_i, b_j)", j = 1, ..., J and i = 1, ..., I, represent simple binary consistency constraints. Thus, we do not explicitly model factor nodes by GNN nodes. The node embeddings of the GNN nodes introduced for the variable nodes "y_i, a_i", i = 1, ..., I, are denoted h_{a_i}, and the node embeddings introduced for the variable nodes "y_j, b_j", j = 1, ..., J, are denoted h_{b_j}. Finally, following the topology of the factor graph for data association in Fig. 2(a), GNN edges are introduced such that the bipartite GNN shown in Fig. 2(b) is obtained.
We recall from Section II-A that for singleton factor nodes, the message passed to the adjacent variable node is equal to the factor node itself. As a result, q(x_i, r_i, a_i; z) and v(x_j, r_j, b_j; z_j) not only describe factor nodes but also the messages that are enhanced. There are two challenges related to directly enhancing these messages based on the GNN according to (5)-(9): (i) the codomain of q(x_i, r_i, a_i; z) and v(x_j, r_j, b_j; z_j) can be very large, which complicates the training of the GNN [53] (see also Sections V-C and VI), and (ii) the messages q(x_i, r_i, a_i; z) and v(x_j, r_j, b_j; z_j) involve the continuous random variables x_i and x_j, which makes it impossible to enhance them by the output of a GNN for every possible value of x_i and x_j individually.
To address the first challenge, we introduce normalized versions³ of the original BP messages, q_s(x_i, r_i, a_i; z) = q(x_i, r_i, a_i; z)/C_q and v_s(x_j, r_j, b_j; z_j) = v(x_j, r_j, b_j; z_j)/C_v, where C_q = Σ_{a_i=0}^{J} Σ_{r_i ∈ {0,1}} ∫ q(x_i, r_i, a_i; z) dx_i and C_v = Σ_{b_j=0}^{I} Σ_{r_j ∈ {0,1}} ∫ v(x_j, r_j, b_j; z_j) dx_j are the normalization constants. Note that after normalization, the codomain of q_s(x_i, r_i, a_i; z) and v_s(x_j, r_j, b_j; z_j) is limited to the interval [0, 1].
The second challenge is addressed by enhancing the BP messages q_s(x_i, r_i, a_i; z), i ∈ {1, ..., I} and v_s(x_j, r_j, b_j; z_j), j ∈ {1, ..., J} as in (31) and (32) (cf. (9)). Here, ω_j ∈ (0, 1) and µ_i(j) ∈ R_+ are computed from information provided by the GNN, as discussed in Section V-C. The other entries of the messages q̃_s(x_i, r_i, a_i; z), i ∈ {1, ..., I} and ṽ_s(x_j, r_j, b_j; z_j), j ∈ {1, ..., J} remain unenhanced, i.e., q̃_s(x_i, 0, a_i; z) = q_s(x_i, 0, a_i; z), q̃_s(x_i, 1, a_i = 0; z) = q_s(x_i, 1, a_i = 0; z), ṽ_s(x_j, 0, b_j; z_j) = v_s(x_j, 0, b_j; z_j), and ṽ_s(x_j, 1, b_j = i; z_j) = v_s(x_j, 1, b_j = i; z_j), i ∈ {1, ..., I}. Note that calculating the NEBP messages according to (31) and (32) avoids enhancing q(x_i, r_i, a_i; z) and v(x_j, r_j, b_j; z_j) for every possible value of x_i and x_j. All other BP messages are not enhanced. Finally, neural enhanced data association can be performed by replacing the functions β_i(a_i) and ξ_j(b_j) in (22) and (23) with their neural enhanced counterparts β̃_{s,i}(a_i) and ξ̃_{s,j}(b_j). These counterparts are obtained by replacing, in (24) and (25), the BP messages q(x_i, r_i, a_i; z) and v(x_j, r_j, b_j; z_j) with the NEBP messages q̃_s(x_i, r_i, a_i; z) and ṽ_s(x_j, r_j, b_j; z_j), respectively. In particular, for i ∈ {1, ..., I} we obtain (33) with β̃_{s,i}(a_i = 0) = β_i(a_i = 0), and similarly, for j ∈ {1, ..., J}, we obtain (34).
The value β_i(a_i = j), j ∈ {1, ..., J} provides a likelihood ratio for the measurement with index j being associated with the legacy PO with index i [4]. In addition, ξ_j(b_j = 0) provides a likelihood ratio for the measurement with index j being generated by a new PO. The shape association term µ_i(j) ≥ 0 in (33), calculated by the GNN, implements object shape association and can be interpreted as follows. The GNN compares the shape feature extracted for legacy PO i with the shape feature extracted for measurement j and, if there is a good match, outputs a large µ_i(j) > 0. According to (33), this effectively increases the likelihood ratio that legacy PO i is associated with measurement j. Note that there is no shape association term in (34): since the shape feature extracted for new PO j would be the same as the shape feature for measurement j, comparing shape features as performed for legacy POs and measurements is not possible.
The scalar ω_j ∈ (0, 1) in (33) and (34), provided by the GNN, implements false alarm rejection. In particular, ω_j < 1 corresponds to a local increase of the false alarm distribution according to f̃_FA(z_j) = (1/ω_j) f_FA(z_j) (cf. (29), (30), (14), and (15)). In (33), this local increase makes it less likely that the measurement z_j is associated with a legacy PO. In (34), it reduces the existence probability of the new PO introduced for the measurement z_j.
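The combined effect of ω_j and µ_i(j) on the association likelihood ratios can be sketched as follows. The exact enhanced messages are given by (33) and (34), which are not reproduced here; the rule below (scale β_i(a_i = j) by ω_j and add µ_i(j)) is one plausible reading of the interpretation above, used purely for illustration.

```python
import numpy as np

def enhanced_association_weights(beta, omega, mu):
    """
    Illustrative sketch (not necessarily the paper's exact (33)-(34)):
    scale the likelihood ratios beta_i(a_i = j) by omega_j (false alarm
    rejection, equivalent to inflating f_FA(z_j) by 1/omega_j) and add
    the nonnegative shape association term mu_i(j) from the GNN.
    beta:  (I, J) likelihood ratios from conventional BP
    omega: (J,)   false alarm rejection factors in (0, 1)
    mu:    (I, J) shape association terms
    """
    return omega[None, :] * beta + mu

beta = np.array([[2.0, 0.1],
                 [0.3, 1.5]])
omega = np.array([0.9, 0.2])              # measurement 2 looks like a false alarm
mu = np.array([[1.0, 0.0],
               [0.0, 0.0]])               # shapes of PO 1 and measurement 1 match

beta_tilde = enhanced_association_weights(beta, omega, mu)
# The PO 1 / measurement 1 ratio is boosted by the shape match, while every
# ratio involving measurement 2 is damped by the small omega.
```

In a full implementation, the enhanced ratios would replace β_i(a_i) and ξ_j(b_j) in the iterative probabilistic data association of (22) and (23).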

C. Statement of the NEBP for MOT Algorithm
NEBP for MOT consists of the following steps: 1) Conventional BP: First, the conventional BP-based MOT algorithm is run until convergence, providing the BP messages to be enhanced. 2) GNN Messages: Next, GNN message passing is executed iteratively. In particular, at iteration p ∈ {1, ..., P}, the messages passed along the edges of the GNN are computed by the edge neural network g_e(·). Furthermore, the node embedding vector of each node is obtained by the node neural network g_n(·). The iterative processing scheme is initialized by setting the node embeddings equal to the stacked motion and shape features, i.e., h^(1)_{a_i} = [h^T_{a_i,motion} h^T_{a_i,shape}]^T and h^(1)_{b_j} = [h^T_{b_j,motion} h^T_{b_j,shape}]^T. 3) NEBP Messages: After computing (35)-(38) for P iterations, the refinement ω_j used in (31) and (32) is calculated by applying the sigmoid function σ(x) = 1/(1 + e^{−x}) ∈ (0, 1) to the output of a neural network g_s(·). Here, the temperature T and the bias δ are hyperparameters [54] that make it possible to calibrate the transition of the sigmoid. Finally, the refinement µ_i(j) used in (31) is obtained by applying the rectified linear unit ReLU(·) to the output of another neural network g_d(·).
4) Belief Calculation: Finally, iterative probabilistic DA is again run until convergence by replacing q_s(x_i, r_i, a_i; z) and v_s(x_j, r_j, b_j; z_j) in (29) and (30) by their neural enhanced counterparts q̃_s(x_i, r_i, a_i; z) and ṽ_s(x_j, r_j, b_j; z_j) in (31) and (32), respectively. This results in the enhanced messages φ^(L)_{Ψ_{i,j}→b_j}(b_j) and ν^(L)_{Ψ_{i,j}→a_i}(a_i) (cf. Section IV-2), which are then used for the calculation of the legacy PO beliefs f̃(y_i), i ∈ {1, ..., I} and the new PO beliefs f̃(y_j), j ∈ {1, ..., J} as discussed in Section IV-3.
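Step 2) can be sketched as follows, with g_e(·) and g_n(·) replaced by small randomly initialized networks. The weights and the embedding dimension are hypothetical; the paper's implementation uses 128-dimensional embeddings and P = 3 iterations (Section VII-B).

```python
import numpy as np

rng = np.random.default_rng(1)
D = 8  # embedding dimension (128 in the paper; small here for illustration)

# Stand-ins for the trained edge and node networks g_e and g_n.
W_e = rng.standard_normal((2 * D, D)) * 0.1
W_n = rng.standard_normal((2 * D, D)) * 0.1

def g_e(h_a, h_b):
    """Edge network: message on edge (i, j) of the bipartite GNN."""
    return np.tanh(np.concatenate([h_a, h_b]) @ W_e)

def g_n(h, m_sum):
    """Node network: update an embedding from its aggregated incoming messages."""
    return np.tanh(np.concatenate([h, m_sum]) @ W_n)

I, J, P = 3, 4, 3                   # legacy POs, measurements, GNN iterations
h_a = rng.standard_normal((I, D))   # initialized from stacked [motion; shape] features
h_b = rng.standard_normal((J, D))

for _ in range(P):
    # messages on all I*J edges of the fully connected bipartite graph
    m = np.array([[g_e(h_a[i], h_b[j]) for j in range(J)] for i in range(I)])
    h_a = np.array([g_n(h_a[i], m[i].sum(axis=0)) for i in range(I)])
    h_b = np.array([g_n(h_b[j], m[:, j].sum(axis=0)) for j in range(J)])

print(h_a.shape, h_b.shape)  # (3, 8) (4, 8)
```

The final embeddings and edge messages would then be fed to g_s(·) and g_d(·) to compute ω_j and µ_i(j) as in step 3).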

D. Complexity Analysis
In this section, we analyze and compare the computational complexity of the conventional BP and the proposed NEBP methods for MOT. Since both BP and NEBP for MOT follow the detect-then-track paradigm, the complexity of the detector has to be taken into account. We denote by |Z| the number of raw sensor data points. For example, if a LiDAR sensor is considered, this is the number of points in the LiDAR point cloud, and if a camera sensor is used, this is the number of pixels of the camera image. The number of operations needed for detection is then c_det |Z|, where c_det is a constant that depends on the size and type of neural network used as the detector g_det(·). As discussed in [3], [4], the number of operations needed for the conventional BP-based MOT algorithm is c_bp IJ, where c_bp is a constant that depends on the number of message passing iterations for DA, the number of particles, and further parameters. In total, the number of operations for BP is c_det |Z| + c_bp IJ. Thus, the computational complexity of BP scales as O(|Z| + IJ).
The additional operations of NEBP compared to conventional BP are related to feature extraction and the GNN. Feature extraction as discussed in (26)-(28) requires c_shape |Z| + c_motion (I + J) operations, where c_shape and c_motion are constants that depend on the size and type of the neural networks g_shape(·) and g_motion(·), respectively. The GNN is a fully connected bipartite graph, i.e., it consists of two sets of nodes, and each node in the first set is connected via an edge to each node in the second set. The numbers of nodes in the two sets are equal to I and J, respectively. GNN messages are exchanged on the IJ edges of the network according to (35) and (36). This is followed by GNN message aggregation in (37) and (38), as well as BP message refinement in (39) and (40). The total number of operations is hence equal to c_gnn,1 IJ + c_gnn,2 I + c_gnn,3 J, where the constants c_gnn,• depend on the size and type of the neural networks g_e(·), g_n(·), g_s(·), and g_d(·). It can thus be seen that the computational complexity of NEBP also scales as O(|Z| + IJ). Note that due to the additional operations performed by NEBP, the runtime of NEBP is longer than that of BP. Runtimes of BP and NEBP are further analyzed in Section VII-D.

VI. LOSS FUNCTION AND TRAINING
Training of the proposed NEBP method is performed in a supervised manner. It is assumed that a training set consisting of ground truth object tracks is available. A ground truth object track is a sequence of object positions, and every sequence is characterized by an object identity (ID). During the training phase, the parameters of all neural networks are updated through backpropagation, which computes the gradient of the loss function. The loss function has the form L = L_r + L_a, where the two contributions L_r and L_a are related to false alarm rejection and object shape association, respectively. For false alarm rejection, we consider the binary cross-entropy loss [55, Chapter 4.3] in (41), where ω^gt_j ∈ {0, 1} is the pseudo ground truth label for each measurement and ε ∈ R_+ is a tuning parameter. The pseudo ground truth label ω^gt_j is equal to 1 if the distance between the measurement and any ground truth position is smaller than or equal to T_dist, and 0 otherwise. The tuning parameter ε addresses the imbalance problem in learning-based binary classification (see [56] for details). This problem is caused by the fact that, since missing an object is typically more severe than producing a false alarm, object detectors produce more false alarm measurements than true measurements.
Since β̃_{s,i}(a_i = j) in (33) represents the likelihood that legacy PO i is associated with measurement j, ideally µ_i(j) is large if PO i is associated with measurement j and equal to zero otherwise. Thus, we consider the binary cross-entropy loss in (42) for object shape association, where σ(x) = 1/(1 + e^{−x}) is the sigmoid function and µ^gt_i ∈ {0, 1}^J is the pseudo ground truth association vector of legacy PO i ∈ {1, ..., I}. In each pseudo ground truth association vector µ^gt_i, at most one element is equal to one and all other elements are equal to zero. We apply µ*_i(j) instead of µ_i(j) in the binary cross-entropy loss (42) because the ReLU operation otherwise "blocks" certain gradients, i.e., the gradients ∂L_a/∂µ*_i(j) are zero for negative values of µ*_i(j). It has been observed that performing backpropagation by also making use of the gradients related to negative values of µ*_i(j) trains the GNN more efficiently.
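The two loss contributions can be sketched as follows. The placement of the tuning parameter ε in (41) is not reproduced above; the sketch assumes ε down-weights the more numerous negative (false alarm) class, which is one hedged reading of how it counters class imbalance.

```python
import numpy as np

def false_alarm_rejection_loss(omega, omega_gt, eps):
    """
    Weighted binary cross-entropy for false alarm rejection (hedged reading
    of (41)): eps scales the negative-class term to counter class imbalance.
    """
    omega = np.clip(omega, 1e-7, 1 - 1e-7)
    return -np.mean(omega_gt * np.log(omega)
                    + eps * (1 - omega_gt) * np.log(1 - omega))

def shape_association_loss(mu_star, mu_gt):
    """
    Binary cross-entropy on sigmoid(mu*), applied to the pre-ReLU values
    mu* so that gradients also flow where mu* < 0 (cf. the discussion of (42)).
    """
    p = np.clip(1.0 / (1.0 + np.exp(-mu_star)), 1e-7, 1 - 1e-7)
    return -np.mean(mu_gt * np.log(p) + (1 - mu_gt) * np.log(1 - p))

# Hypothetical GNN outputs and pseudo ground truth labels.
omega = np.array([0.9, 0.2, 0.6])
omega_gt = np.array([1.0, 0.0, 1.0])
mu_star = np.array([[2.0, -1.0], [-0.5, 1.5]])
mu_gt = np.array([[1.0, 0.0], [0.0, 1.0]])

L = false_alarm_rejection_loss(omega, omega_gt, eps=0.1) \
    + shape_association_loss(mu_star, mu_gt)
print(L > 0)  # True
```

In the paper's setup, these losses would be minimized with the Adam optimizer over the training set (Section VII-B).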
At each time step, pseudo ground truth association vectors are constructed from measurements and ground truth object tracks based on the following rules: • Get Measurement IDs: First, the Euclidean distances between all ground truth positions and measurements are computed. Next, the Hungarian algorithm [1] is used to find the best association between ground truth positions and measurements. Finally, all measurements that have been associated with a ground truth position and whose distance to that ground truth position is smaller than T_dist inherit the ID of the ground truth position. All other measurements do not have an ID.
• Update Legacy PO IDs: Legacy POs inherit their ID from the previous time step. If a legacy PO with an ID has a distance not larger than T_dist to a ground truth position with the same ID, it keeps its ID. If a legacy PO i ∈ {1, ..., I} has the same ID as measurement j ∈ {1, ..., J}, the entry µ^gt_i(j) is set to one. All other entries µ^gt_i(j), i ∈ {1, ..., I}, j ∈ {1, ..., J} are set to zero.
• Introduce New PO IDs: A new PO j ∈ {1, ..., J} inherits the ID from the corresponding measurement if the measurement has an ID that is different from the ID of any legacy PO. All other new POs do not have an ID.
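The "Get Measurement IDs" rule above can be sketched with SciPy's Hungarian solver. The object IDs, coordinates, and the helper name are made up for illustration; T_dist = 2 m follows Section VII-B.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def measurement_ids(gt_pos, gt_ids, meas, t_dist=2.0):
    """
    Assign ground truth IDs to measurements: Hungarian matching on the
    Euclidean distance matrix, then keep only matches closer than t_dist.
    Returns a dict {measurement index: ID}; unmatched measurements get no ID.
    """
    dist = np.linalg.norm(gt_pos[:, None, :] - meas[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(dist)
    return {j: gt_ids[i] for i, j in zip(rows, cols) if dist[i, j] <= t_dist}

gt_pos = np.array([[0.0, 0.0], [10.0, 10.0]])
gt_ids = ["car-7", "car-9"]
meas = np.array([[0.5, 0.3],      # close to the first ground truth object
                 [30.0, 30.0],    # far from everything: remains unlabeled
                 [10.2, 9.9]])    # close to the second ground truth object

ids = measurement_ids(gt_pos, gt_ids, meas)
print(ids)  # {0: 'car-7', 2: 'car-9'}
```

The remaining two rules then propagate these measurement IDs to legacy and new POs as described above.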

VII. NUMERICAL RESULTS
To validate the performance of our method, we present results in an autonomous driving scenario.
A. Experimental Setup
1) Dataset: Our numerical evaluation is based on the nuScenes autonomous driving dataset [57], which contains 1000 autonomous driving scenes. We use the official predefined dataset split, where 700 scenes are used for training, 150 for validation, and 150 for testing. Each scene has a length of roughly 20 seconds and contains keyframes (frames with ground truth annotations) sampled at 2 Hz. There are seven object classes, and the proposed MOT method as well as the reference techniques are run for each class individually. If not stated otherwise, all operations described in what follows are performed for each class separately. In this paper, we only consider the LiDAR data provided by the nuScenes dataset. A scene of the considered autonomous driving application is shown in Fig. 3.
2) System Model: The state x_{k,n} ∈ R^4 of a PO consists of its 2-D position and 2-D velocity. Preprocessed measurements are extracted from the LiDAR data. For the extraction of preprocessed measurements, we employ the CenterPoint [13] detector, which is based on deep learning. Any preprocessed measurement z_{k,j} consists of a 2-D position, a 2-D velocity, and a confidence score 0 < s_{k,j} ≤ 1.
Object dynamics are modeled by a constant-velocity motion model [59]. Object tracking is performed in a global reference frame that is predefined for each scene [57]. The considered region of interest (ROI) is defined by [x_{e,k} − 54, x_{e,k} + 54] × [y_{e,k} − 54, y_{e,k} + 54], where (x_{e,k}, y_{e,k}) is the 2-D position of the "ego vehicle" that is equipped with the LiDAR sensor. The PDFs that describe false alarms, f_FA(·), and unknown objects, f_u(·), are uniform over the ROI. The measurement model that defines the likelihood function f(z_{k,j} | x_{k,n}) is linear with additive Gaussian noise, i.e., z_{k,j} = H_k x_{k,n} + v_{k,j}, where v_{k,j} ~ N(0, R) and R is a diagonal covariance matrix. The probability of survival is set to p_s = 0.999, and the threshold for object declaration is T_dec = 0.5.
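A minimal sketch of the constant-velocity motion model and the linear-Gaussian likelihood f(z_{k,j} | x_{k,n}) described above. The sampling period and the entries of R are hypothetical placeholders; in the paper these parameters are extracted from training data.

```python
import numpy as np

dt = 0.5  # keyframe period in seconds for 2 Hz annotations (illustrative)

# Constant-velocity transition for the state x = [px, py, vx, vy]
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1.0]])

# Measurements contain 2-D position and 2-D velocity, so H is the identity
# here; R is a hypothetical diagonal measurement-noise covariance.
H = np.eye(4)
R = np.diag([0.5, 0.5, 1.0, 1.0])

def log_likelihood(z, x):
    """log f(z | x) for the linear-Gaussian measurement model z = Hx + v."""
    e = z - H @ x
    return -0.5 * (e @ np.linalg.solve(R, e)
                   + np.log(np.linalg.det(2 * np.pi * R)))

x = np.array([1.0, 2.0, 0.4, -0.2])
x_pred = F @ x                                 # predicted state one keyframe later
z = x_pred + np.array([0.1, -0.1, 0.0, 0.05])  # noisy measurement of the prediction
print(round(float(x_pred[0]), 2))              # 1.2
```

As expected, the measurement is more likely under the predicted state than under the previous state, which is what drives the data association weights in BP.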
The pruning threshold discussed in Section III-D is set to T_pru = 10^−3. In addition, we also prune new POs with p(b_{k,j} = 0) < 0.8 to further reduce the number of false objects and the computational complexity. All other parameters used in the system model are extracted from training data. For the BP part of the proposed NEBP method, we use the particle-based implementation introduced in [3].
3) Performance Metrics: We use the average multi-object tracking accuracy (AMOTA) metric proposed in [16] to evaluate the performance of our algorithm. In addition, we also use the widely used CLEAR metrics [60] and track quality measures [61], which include the number of identity switches (IDS) and track fragments (Frag). The number of IDSs is increased if a ground truth object is matched to an estimated object with index i at the current time step while it was matched to an estimated object with index j ≠ i at a previous time step. The number of Frags is increased if a ground truth object is matched to an estimated object at the previous time step but is not matched to any estimated object at the current time step. Note that AMOTA is the primary metric used by the nuScenes tracking challenge [57].
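The IDS and Frag definitions above can be sketched as a simple counter over per-frame matchings. The matching of ground truth objects to estimated objects (e.g., via the CLEAR procedure) is assumed given; this is an illustrative simplification, not the official evaluation code.

```python
def count_ids_and_frag(matches):
    """
    matches[k][gt_id] = index of the estimated object matched to gt_id at
    time step k (absent key = not matched at that step).
    IDS:  a ground truth object is matched to a different estimated index
          than at its most recent previous match.
    Frag: a ground truth object was matched at the previous step but is
          unmatched at the current step.
    """
    ids = frag = 0
    last_match = {}       # most recent estimated index per ground truth ID
    matched_prev = set()  # ground truth IDs matched at the previous step
    for frame in matches:
        for gt_id, est in frame.items():
            if gt_id in last_match and last_match[gt_id] != est:
                ids += 1
            last_match[gt_id] = est
        frag += sum(1 for gt_id in matched_prev if gt_id not in frame)
        matched_prev = set(frame)
    return ids, frag

frames = [{"A": 1, "B": 2},
          {"A": 1},             # B unmatched -> one fragment
          {"A": 3, "B": 2}]     # A switches from index 1 to 3 -> one ID switch
print(count_ids_and_frag(frames))  # (1, 1)
```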

B. Implementation Details
For shape feature extraction as discussed in Section V-A, a neural network g_shape(·) ≜ g_shape,1(·) = g_shape,2(·) that consists of two stages is introduced. The first stage is a VoxelNet [24], a neural network architecture used as the backbone of a variety of object detectors [13], [26]. The VoxelNet takes the LiDAR scan Z_k as input and outputs a 3-D tensor of size 180 × 180 × 512, typically referred to as a feature map. The first two dimensions of the feature map form a grid with 180 × 180 elements that covers the ROI. For each grid point, there is a feature vector with 512 elements. The second stage is a convolutional neural network (CNN) that consists of two convolutional layers and a single-hidden-layer multilayer perceptron (MLP). Here, we use a CNN since it has fewer trainable parameters than an MLP and is thus easier to train. Note that CNNs have been widely used for feature extraction [24], [62]. The CNN extracts shape features from a reduced feature map, as discussed next.
For the extraction of shape features in the second stage, first the grid point of the feature map that corresponds to the considered PO or measurement is located. Then, the feature vector at this grid point and the eight feature vectors at adjacent grid points are extracted. As a result, for each PO and each measurement, a reduced feature map of size 3 × 3 × 512 is obtained and used as the input of the CNN. Finally, the CNN computes the shape feature h_{a_i,shape} or h_{b_j,shape}. The feature map of dimension 180 × 180 × 512 has been precomputed by the CenterPoint [13] method. The same VoxelNet is shared across all seven object classes, and its parameters remain fixed during the training of the proposed method.
The other neural networks g_e(·), g_n(·), g_d(·), g_s(·), and g_motion(·) ≜ g_motion,1(·) = g_motion,2(·) are MLPs with a single hidden layer and a leaky ReLU activation function. All feature vectors, i.e., h_{a_i,motion} and h_{a_i,shape}, i ∈ {1, ..., I}, as well as h_{b_j,motion} and h_{b_j,shape}, j ∈ {1, ..., J}, consist of 128 elements. The number of GNN iterations is P = 3. Training of the proposed method is performed based on the Adam optimizer [63]. The batch size, the learning rate, and the number of "epochs", i.e., the number of times the Adam optimizer processes the entire training dataset, are set to 1, 10^−4, and 8, respectively. The hyperparameter ε in (41) is set to 0.1, and the threshold T_dist for the pseudo ground truth extraction discussed in Section VI is set to 2 meters.
Evaluation of AMOTA requires a score for each estimated object. It was observed that high AMOTA performance is obtained by calculating the estimated object score as a combination of existence probability and measurement score. In particular, for legacy PO i ∈ {1, ..., I}, we calculate an estimated object score that combines the existence probability with the scores of the associated measurements. For new PO j ∈ {1, ..., J}, the estimated object score is given by s̃_j = p(r_j = 1) + s_j.

C. Calibration
The calibration of the sigmoid introduced in (39) is performed as follows. For training, we set T = 1 and δ = 0. For inference, however, we set T > 0 and δ > 0 such that the sigmoid in (39) transitions to one more quickly and for smaller values of ω*_j. Different calibration values are necessary for inference because the loss function used for training and the AMOTA metric used for performance evaluation behave differently. In particular, for performance evaluation based on AMOTA, missing an object is significantly more severe than a false object. (Note that the AMOTA metric cannot be used directly for training because it is not differentiable [55].) The calibration values T and δ used for inference are selected based on a grid search over a set of candidate values.
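The effect of the calibration can be sketched as follows, assuming the form ω_j = σ(T(ω*_j − δ)); this is a hedged reading of (39) that is consistent with the later remark (Section VII-E) that T = +∞ corresponds to discarding all measurements with ω_j < σ(δ).

```python
import numpy as np

def sigma(x):
    """Sigmoid function sigma(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def calibrated_omega(omega_star, T=1.0, delta=0.0):
    """
    Hedged reading of the calibrated sigmoid in (39): the temperature T
    sharpens the transition, the bias delta shifts it. With T large, omega
    approaches a hard threshold at omega* = delta.
    """
    return sigma(T * (omega_star - delta))

w = np.array([-2.0, 0.5, 3.0])       # raw GNN outputs omega*_j (hypothetical)
train = calibrated_omega(w)          # training setting: T = 1, delta = 0
infer = calibrated_omega(w, T=5.0, delta=1.0)  # example inference calibration
hard = calibrated_omega(w, T=50.0, delta=1.0)  # near-hard thresholding
print(np.round(hard))  # [0. 0. 1.]
```

Larger T thus interpolates between the soft false alarm rejection used during training and the near-hard rejection evaluated in the ablation study.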

D. Performance Evaluation
For performance evaluation, we use state-of-the-art reference methods that all use measurements provided by the CenterPoint detector [13], which was the best-performing LiDAR-only object detector for the nuScenes dataset at the time of submission of this paper. In particular, BP refers to the conventional BP-based MOT method [4]. CenterPointT refers to the tracking method proposed in [13]; it uses a heuristic to create new tracks and a greedy matching algorithm based on the Euclidean distance to associate measurements provided by the CenterPoint detector. The methods in [19], [22], [64] all follow a similar strategy. The CBMOT method [64] adopts a score update function for estimated object scores. Chiu et al. [22] make use of a hybrid distance that combines the Mahalanobis distance with a proposed deep feature distance. SimpleTrack [19] uses the generalized IoU (GIoU) as the distance for measurement association; in SimpleTrack, the object detector is also applied to non-keyframes, resulting in a measurement rate of 10 Hz. The Immortal tracker [20] has a measurement rate of 2 Hz and follows the tracking approach of SimpleTrack, except that it never terminates tracks. PMB [11] implements a Poisson multi-Bernoulli filter for MOT that relies on a global nearest neighbor approach for data association. Finally, OGR3MOT [17] utilizes a network flow formulation and transforms the data association problem into a classification problem.
In Tables I and II, we present the tracking performance of the considered methods on the nuScenes validation and test sets based on measurements provided by the CenterPoint detector. The symbol "-" in Table II indicates that the metric is not reported. It can be seen that the proposed NEBP approach outperforms all reference methods in terms of AMOTA. Furthermore, BP and NEBP achieve much lower IDS and Frag values than the reference methods. This is because both BP and NEBP make use of a statistical model to determine the initialization and termination of tracks [4], which is more robust than the heuristic track management performed by the reference methods. The improved AMOTA performance of NEBP over BP comes at the cost of slightly increased IDS and Frag values. Qualitative results for a single time step of an example autonomous driving scene are shown in Fig. 4. It can be seen that, compared to CenterPointT, BP reduces the number of false objects significantly, while NEBP reduces the number of false alarms even further. We also report estimation performance results based on the generalized optimal subpattern assignment (GOSPA) metric [65]. This metric can be split into three components, namely, localization error, false estimated objects, and missed ground truth objects. NEBP outperforms BP in all three components. We also compare the performance of the proposed NEBP method with BP based on measurements provided by different object detectors. In particular, in addition to measurements provided by the CenterPoint [13] detector, we also consider measurements provided by the PointPillar [25] and Megvii [26] detectors. Results based on the nuScenes validation set are shown in Table III. For all three detectors, NEBP outperforms BP in terms of AMOTA while maintaining a similar number of IDS and Frag. These results indicate that the proposed NEBP method is robust with respect to the chosen object detector.
All experiments were executed on a single Nvidia P100 GPU. For training, eight epochs over the nuScenes training set were performed; the total training time was 30 hours. The inference times of BP and NEBP on the nuScenes validation set were measured as 658 seconds and 1137 seconds, respectively. These times do not include the execution of the object detector, which has a runtime of 822 seconds. NEBP has a higher computational complexity than BP due to the additional operations discussed in Section V-D.

E. Ablation Study
In this section, we analyze the contribution of different algorithmic components to the overall performance of our NEBP method. In particular, we analyze the degradation of NEBP performance that results from the ablation of specific algorithmic components. All ablation studies are based on the nuScenes validation set and on measurements provided by the CenterPoint detector [13].
Table V shows the tracking performance of these NEBP variants, conventional BP, and the proposed NEBP method. It can be seen that NEBP-m cannot achieve any performance improvement compared to BP. This is not surprising since object motion is already modeled accurately by the statistical model and, compared to BP, NEBP-m does not make use of any additional information. On the other hand, both NEBP-a and NEBP-r achieve an improved AMOTA performance compared to BP. This is because NEBP-a and NEBP-r incorporate additional information in the form of shape features and address the fact that the statistical model used by BP does not accurately describe the true data-generating process. In particular, in the statistical model, false alarm measurements are uniformly distributed over the ROI. Furthermore, the false alarm measurements and their number are assumed to be independent and identically distributed across time. However, these assumptions often do not hold in real-world MOT applications such as the considered autonomous driving scenario. This is because physical structures and other reflecting features in the environment can generate so-called persistent false alarm measurements. These false alarm measurements are neither uniformly distributed nor independent across time and are thus not accurately represented by the considered statistical model. This model mismatch degrades tracking performance and is addressed by the false alarm rejection performed by NEBP and NEBP-r. Object shape association as performed by NEBP and NEBP-a improves data association by using the object shape information provided by shape features. Finally, Table V also reports the performance improvements that result from the calibration process discussed in Section VII-C. In particular, "NEBP-nc" is the NEBP variant where no calibration has been performed, i.e., T = 1 and δ = 0 are used for both training and inference. It can be seen that calibration significantly improves the performance of NEBP.
The effect of the temperature parameter T > 0 for five representative object classes is shown in Table VI. For each class, the bias δ is fixed to the value provided above. Note that T = +∞ is equivalent to discarding all measurements with ω_j < σ(δ). It can be seen that NEBP does not always achieve the best AMOTA for T = +∞. In cases where it is difficult to determine whether a measurement is a false alarm, using a temperature T < +∞ can be more robust, as it does not directly discard the measurements with ω_j < σ(δ) but instead reduces the estimated object score (43) of the POs that are likely to have generated these measurements.

VIII. CONCLUSION
In this paper, we presented a neural enhanced belief propagation (NEBP) method for multi-object tracking (MOT) that enhances the solution of model-based belief propagation (BP) by making use of shape features learned from raw sensor data. Our approach conjectures that learned information can reduce model mismatch and thus improve data association and the rejection of false alarms. A graph neural network (GNN) that matches the topology of the factor graph used for model-based data association is introduced. For false alarm rejection, the GNN identifies measurements that are likely false alarms.
For object shape association, the GNN computes correction terms that result in more accurate association probabilities. The proposed approach can improve the object declaration and state estimation performance of BP while preserving its favorable computational scalability. Furthermore, the proposed NEBP method inherits the robust track management of BP-based algorithms. We employed the nuScenes autonomous driving dataset for performance evaluation and demonstrated state-of-the-art object tracking performance. Due to its robust track management, NEBP yields a much lower number of identity switches (IDS) and track fragments (Frag) than non-BP-based reference methods. A promising direction for future research is the application of the proposed NEBP approach to multipath-aided localization [66].

Fig. 3 .
Fig. 3. Top-down view of the considered autonomous driving scene. For each ground truth and estimated vehicle, we plot tracks as well as positions and bounding boxes at the last time step. This scene is part of the nuScenes autonomous driving dataset.

Fig. 4 .
Fig. 4. Top-down view and single time step of an example autonomous driving scene. Ground truth objects (a) as well as estimated objects provided by CenterPointT (b), BP (c), and NEBP (d) are shown. Note that BP and NEBP do not provide object size and orientation estimates. Thus, for each estimated object, the size and orientation of the measurement with the largest association probability are shown.