GNSS-Imaging Data Fusion for Integrity Enhancement in Autonomous Vehicles

The transport sector is experiencing a rapid and revolutionary development, moving toward the deployment of autonomous vehicles. Thus, the accuracy of the position information and the integrity of the navigation system have become key factors. In this article, we address the problem of enhancing the integrity of the position provided by a global navigation satellite system receiver by exploiting sensor fusion. To this aim, we estimate the lateral offset and heading of the vehicle with respect to a georeferenced roadway centerline from the images supplied by an on-board camera. Moreover, we perform integrity monitoring based on the implementation of the solution separation in the parity space. The numerical results indicate that the use of sensor fusion and a digital map allows us to attain a longitudinal protection level reduction with respect to the case in which sensor fusion is not exploited. More specifically, a decrease of about 70% is achieved when a single constellation is used, while the reduction is less pronounced, about 15%, when two constellations are employed.


I. INTRODUCTION
The ongoing transport evolution entails the rise of strict positioning requirements, and nowadays, global navigation satellite systems (GNSSs) are the main resource for outdoor localization. These systems, however, are vulnerable to faults (e.g., satellite and constellation failures), signal degradation (e.g., ionospheric scintillations, multipath), and external threats (e.g., jamming and spoofing), thus failing to satisfy the current safety demands. The main answers to this need are the employment of multisensor approaches and the development of fault detection and mitigation methods. Multisensor approaches can be considered the main research direction for navigation and positioning [1]. Hybrid positioning techniques often involve inertial measurement units (IMUs) [2], [3] and visual sensors [4]-[6]. For what concerns fault management methods, several works focus on integrity monitoring [1], [7], [8]. This can be defined as the prompt supply of information concerning the level of reliability of the navigation system. To deal with integrity issues, two fundamental concepts have to be introduced: the protection level (PL) and the alert limit. The former represents an estimate of the largest position error that will not be exceeded with a probability greater than the tolerable misleading information probability, whereas the latter corresponds to the maximum PL that can be allowed while ensuring safe navigation [9]. The methods that achieve this goal by exploiting redundant global positioning system (GPS) satellite measurements have been defined as receiver autonomous integrity monitoring (RAIM) techniques by the Federal Aviation Administration [10]. In addition, thanks to the development of dual-frequency and multiconstellation GNSS receivers, a new version of RAIM, known as advanced RAIM (ARAIM), has been proposed in order to meet more stringent requirements [11].
In this article, we aim at using a multisensor approach to perform integrity enhancement. We exploit the additional information provided by an on-board camera for improving the integrity of the GNSS-based position estimate in both the lateral and longitudinal directions. More specifically, based on our previous work [12], we employ the images acquired by the camera to estimate the lateral offset and heading of the vehicle with respect to a georeferenced roadway centerline and define a virtual track model for road navigation. The integrity check is then performed by projecting the GNSS estimate onto the track and evaluating the components of the difference between the constrained and unconstrained estimates in the parity space. In [12], we introduced the concept of virtual track and assessed the performances of the imaging subsystem for the estimation of the lateral offset and heading of the vehicle. The conducted evaluations confirmed the suitability of on-board cameras to perform this task. Here, we propose to use the information provided by the cameras for integrity monitoring purposes, and we evaluate the impact of sensor fusion on the resulting PL.
The rest of this article is organized as follows. In Section II, the related works are reported. In Section III, the mathematical framework is introduced. In Section IV, the vision-based track estimation is presented. Then, in Section V, the sensor fusion process is detailed. In Section VI, the integrity monitoring is described. In Section VII, the simulation results are discussed. Finally, Section VIII concludes this article.

II. RELATED WORKS
The concept of vision-aided navigation systems has been widely explored in the literature. For instance, Chen et al. [4] proposed a 2-D road representation to model the longitudinal and lateral maneuvers of a target vehicle and exploited radar and image sensor measurements for tracking it. In their setup, however, the radar provided the range and bearing of the target, while the camera (installed on a helicopter or an unmanned aerial vehicle) measured the displacement of the target from the road axis. Another possibility is to exploit visual sensors installed on-board. This has been explored for different kinds of vehicles, such as aircraft [5] and cars [6]. Venable and Raquet [5] presented an image-aided navigation solution to overcome IMU error drift in the presence of GNSS outages. More specifically, they defined an offline database creation procedure and an online navigation algorithm, which computed the aircraft position by exploiting the previously built database. Song et al. [6] proposed employing a smartphone rear camera to compute the lane-level car position. More in detail, they exploited lane marks and road boundary signs to identify the lane in which the vehicle was located and inertial sensors to detect lane switching.
Concerning GNSS integrity monitoring, a deep insight into ARAIM performance and optimization can be found in [13]-[16]. In particular, as detailed in [15], the evaluation of an upper bound for the integrity risk can be performed by resorting to the parity space representation. In fact, this bound is independent of the fault vector. The parity space concept has also been exploited in [7] and [8]. More in detail, Zhao et al. [7] demonstrated that the multihypothesis solution separation (MHSS) tests are redundant and proposed a new fault detection method that implemented the MHSS representation in the parity space. Zhao et al. [8] defined an ARAIM method, which bounded the integrity risk in the parity space. To do so, they linked the mean positioning error to the noncentrality parameter of the RAIM test statistic.
In addition to GNSS-based integrity monitoring, multisensor approaches have been proposed in [1], [17], and [18]. More specifically, in [17], cameras have been used to detect road markings and improve the positioning information. Then, an integrity check has been added to verify the consistency between GNSS, camera, and dead-reckoning measurements. The authors defined a state vector composed of the position and heading of the vehicle and declared a measurement faulty if the Mahalanobis distance between predicted and updated states exceeded a threshold defined on the basis of a target false alarm probability. The faulty measurements were then excluded from the position computation process, thus resulting in an accuracy enhancement. In [18], a similar approach was used to derive an integrity check of the position information provided by a map-matching algorithm, in the form of an "ON/OFF" characterization of the coherence with the GNSS. However, in that case, the GNSS was used as the source of the information for the integrity monitoring, and GNSS faults have not been accounted for. Moreover, fault detection was performed by thresholding the Mahalanobis distance between the map-matching-based position estimate and the GNSS one, and no indication of PL was given. Finally, Meng and Hsu [1] proposed a Kalman-filter-based integrity monitoring method for dealing with multisensor navigation systems. Moreover, they underlined that traditional RAIM techniques cannot be directly applied to multisensor integration and proposed a solution separation (SS) method with sensor exclusion instead of measurement exclusion.
Differently from the mentioned approaches, in this article, we use the information provided by the on-board camera as an external integrity source for monitoring GNSS integrity. The on-board camera is not employed for detecting the lane in which the vehicle is moving, but for identifying its location within the lane. Moreover, in the analyzed navigation system, the visual component is not the primary source of position information, but it is integrated to define a road constraint, which is exploited for integrity purposes. To assess the performances of the proposed approach, we evaluate the impact of the sensor fusion on the computed PL.

III. MATHEMATICAL FRAMEWORK
In the following, we assume that the location of the vehicle has to be expressed with respect to some application-dependent conventional terrestrial reference system (CTRS), like the Earth-centered, Earth-fixed (ECEF) frame. Nevertheless, to exploit the constraint represented by the road surface, it is more convenient to compute the vehicle's location in terms of its mileage with respect to a starting point, measured along the centerline of the roadway, and of its lateral position with respect to that line, and then convert these coordinates into the CTRS. For roadways with multiple parallel lanes, any lane centerline can be equivalently used. More formally, neglecting isolated discontinuous bumps, we can describe the road as a smooth surface, and we can adopt, as the local reference frame, the Darboux frame associated with the centerline.
To illustrate this concept, let us first consider an ideal vehicle whose motion is restricted to a single traffic lane and whose driver perfectly follows its centerline. Then, without loss of generality, let us consider the motion of the middle point Q of the axis of the first axle. When the vehicle moves, the axle's axis defines a ruled regular surface M, parallel to the road surface, while the point Q defines a curve that lies in M and whose orthogonal projection P on the road surface defines the vehicle's track that, in this case, coincides with the lane centerline. Apart from small discontinuities, the centerline is modeled by a continuously differentiable curve X_Cl(s) parameterized by the arc length s, also referred to in the following as the mileage, while the road surface is approximated by the ruled regular surface G parallel to M.
Then, as illustrated in Fig. 1, given a point P̄ of the centerline with mileage s and coordinates X_Cl(s), the Darboux trihedron {X_Cl(s); e_T(s), e_U(s), e_V(s)} with origin in P̄ can be defined as follows.
1) e_T(s) is the unit vector tangent to the curve, pointing in the direction of motion.
2) e_V(s) is the normal unit vector of the surface G in P̄, pointing downward, restricted to the curve X_Cl(s).
3) e_U(s) is the unit vector given by the cross product of e_V(s) and e_T(s).
We remark that parameterizing the track path by the arc length implies that ‖e_T‖ = 1.
Then, when P̄ moves along the centerline, the moving Darboux trihedron defines the Darboux moving frame {X_Cl(s); e_T(s), e_U(s), e_V(s)}.
As illustrated in Fig. 2, the vehicle can present a lateral offset with respect to the centerline. Moreover, its heading may differ from the centerline direction. Without loss of generality, for compliance with the SAE J2945/1 standard [19], we adopt as the vehicle's intrinsic reference system the frame {Ω; e_ξ, e_η, e_ζ}, whose origin Ω is the center of the rectangle on the road plane that encompasses the farthest forward, rearward, and side-to-side points on the vehicle. The unit vector e_ξ is parallel to the vehicle's longitudinal axis, e_ζ is orthogonal to the road plane pointing down, and e_η = e_ζ × e_ξ, as illustrated in Fig. 3.
Thus, for a given mileage s_0, let X̄ be the column vector of the coordinates of Ω with respect to the Darboux frame {X_Cl(s_0); e_T(s_0), e_U(s_0), e_V(s_0)} with origin in P_0. Then, the vector X of the coordinates of Ω with respect to the CTRS is

X = X_Cl(s_0) + D_DC(s_0) X̄

where D_DC(s_0) is the 3 × 3 transformation matrix from the Darboux frame to the CTRS. For the sake of compactness of the notation, in the following, we will omit s_0 whenever unnecessary. Incidentally, we recall that D_DC can be written in terms of the Darboux frame unit vectors, represented with respect to the CTRS, as follows:

D_DC = [e_T  e_U  e_V].

On the other hand, the GNSS measures are referred to the antenna reference point (ARP), while the measures extracted from the imaging system are referred to the camera's optical center. Therefore, it is worth considering that, under the rigid body approximation, given any point P with a displacement equal to ξ with respect to the intrinsic vehicle reference system with origin in Ω, its coordinates with respect to the CTRS can be computed as

X_P = X + D_DC T_r ξ

where T_r is the transformation matrix from the intrinsic vehicle reference system with origin in Ω to a frame with the same origin but with axes parallel to those of the Darboux frame.
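As an illustration, the frame construction and the Darboux-to-CTRS conversion above can be sketched in a few lines of NumPy (a minimal sketch; the function names and the flat-road test geometry are illustrative, not part of the original formulation):

```python
import numpy as np

def darboux_frame(tangent, surf_normal_down):
    """Darboux trihedron at a centerline point: e_T tangent to the curve,
    e_V downward-pointing surface normal, e_U = e_V x e_T."""
    e_T = tangent / np.linalg.norm(tangent)
    e_V = surf_normal_down / np.linalg.norm(surf_normal_down)
    e_U = np.cross(e_V, e_T)
    return e_T, e_U, e_V

def darboux_to_ctrs(X_cl, e_T, e_U, e_V, X_local):
    """Convert Darboux coordinates (along-track, lateral, down) to the CTRS:
    X = X_Cl(s_0) + D_DC(s_0) X_local, with D_DC = [e_T e_U e_V]."""
    D_DC = np.column_stack((e_T, e_U, e_V))  # columns are the frame unit vectors
    return X_cl + D_DC @ X_local
```

For a flat road with the centerline along the CTRS x-axis, a point 5 m ahead and 2 m to the side of the frame origin maps to the expected CTRS coordinates, which provides a quick self-check of the sign conventions.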

IV. VEHICLE'S HEADING AND LATERAL OFFSET ESTIMATION
A considerable effort has been devoted during the last decade to robust lane marking detection algorithms supporting lane departure warning and lane keeping functions for assisted driving [20]-[22]. Therefore, here, we focus our attention on the task of estimating the orientation of the vehicle with respect to the Darboux frame and the lateral offset of the vehicle with respect to the centerline, assuming that the lane markings have already been extracted from the image. Since all the measures are referred to the centerline (both in terms of shift and direction), a road digital map (RDM) containing all the georeferencing information concerning the centerline is assumed to be available.
On the other hand, since we model the vehicle as a rigid body leaning on the road surface, pitch and roll are considered null, and only the yaw angle θ has to be determined in order to specify the vehicle orientation (see Fig. 2).
Let us consider the case of a single front-facing camera whose field of view covers a sufficiently large portion of the road so that the road markings are visible. For the sake of simplicity and compactness, we assume that the camera axis is parallel to the principal axis of the vehicle. More generally, a transformation matrix would be needed in order to align the images as if the axes were parallel. In addition, we assume that the camera has been previously calibrated in order to correct lens distortion, and that the intrinsic parameters after image rectification are known. Therefore, a simple pinhole camera model can be employed to represent image formation.
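To make the geometry concrete, the vanishing-point yaw estimate and the midpoint-based lateral offset used in the following can be sketched under the pinhole model. This is a minimal sketch under assumed sign conventions (positive yaw and offset toward e_U), which may differ from those adopted in Appendix A; the focal length value and function names are purely illustrative:

```python
import numpy as np

F = 1000.0  # focal length in pixels (hypothetical calibrated value)

def yaw_from_vp(x_vp, f=F):
    """Vehicle yaw w.r.t. the centerline from the horizontal image offset
    x_vp of the lane-marking vanishing point."""
    return -np.arctan2(x_vp, f)

def lateral_offset(x_md, d, theta, f=F):
    """Lateral offset of the optical center from the centerline, given the
    image offset x_md of the lane midpoint observed at ground depth d and
    the yaw theta."""
    return -d * (x_md * np.cos(theta) / f + np.sin(theta))

def project_midpoint(u, theta, d, f=F):
    """Forward model (for self-checking): image x-coordinate of the
    centerline point at camera depth d, for offset u and yaw theta."""
    return -f * (u + d * np.sin(theta)) / (d * np.cos(theta))
```

Inverting the forward model recovers the simulated offset and yaw exactly, which is a useful consistency check before applying the relations to real detections.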
Let us indicate as vanishing point (VP) the point of the image plane corresponding to the intersection between the lane markings, and as x^c_VP the horizontal offset of the VP with respect to the camera's optical center (see Fig. 4). Then, as detailed in Appendix A, the yaw angle θ between the centerline direction and the vehicle heading can be computed as

θ = −arctan(x^c_VP / f)

where f is the focal length of the camera. In addition, given a horizontal slice of the image corresponding to a depth d, let x^c_MD be the horizontal offset of the midpoint of the segment bounded by the images of the lane markings with respect to the camera's optical center (see the yellow dotted line in Fig. 4). Then, as detailed in Appendix A, the lateral offset u_OC of the camera's optical center from the centerline can be estimated as follows:

u_OC = −d (x^c_MD cos θ / f + sin θ).

V. SENSOR FUSION

In the case of absolute GNSS positioning, the receivers solve a nonlinear system relating the measured pseudorange vector ρ(k) to the unknown receiver location X_Rx and to the receiver clock offset δt(k). To do so, an iterative procedure based on the first-order Taylor series expansion of the geometric range equations around a point corresponding to an initial guess of the receiver location is employed. For this reason, we select the point of the curve X_Cl(s) corresponding to an initial guess of the vehicle mileage, s_0.
Denoting with s, u, and v the components of the receiver position X_ARP with respect to the Darboux frame with origin in X_Cl(s_0), we obtain the following linearized system:

Δρ = H_ρ z + ε_ρ    (8)

where Δρ is the vector of the pseudorange offsets, H_ρ is the GNSS observation matrix, z is the vector of the unknowns, and ε_ρ is the measurement noise, usually modeled as a zero-mean Gaussian random variable with covariance matrix R_ρ. In more detail, Δρ is defined as the difference between the vector of the measured pseudoranges ρ and the vector ρ_0 of the error-free pseudoranges corresponding to the linearization point X_Cl(s_0), i.e.,

Δρ = ρ − ρ_0.

Moreover, z is given by

z = [s, u, v, c δt]^T

and H_ρ is defined as

H_ρ = [E_Rx^T  1_{N_Sat}].    (11)

In (11), E_Rx is the matrix composed of the satellite line-of-sight unit vectors {e_p,Rx, p = 1, ..., N_Sat} pointing toward the receiver

E_Rx = [e_1,Rx  e_2,Rx  ···  e_{N_Sat},Rx]    (12)

and 1_{N_Sat} is a column vector of size N_Sat × 1 with elements equal to 1. We remark that, since the components of the receiver position offset appearing in (8) are expressed with respect to the Darboux frame, the satellite line-of-sight unit vectors appearing in (12) need to be referred to the same frame.
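Under these definitions, assembling H_ρ from the line-of-sight unit vectors is mechanical; a minimal sketch (assuming, as in the text, unit vectors pointing from the satellite toward the receiver, expressed in the Darboux frame, and the unknown ordering [s, u, v, c·δt]):

```python
import numpy as np

def gnss_observation_matrix(los_darboux):
    """Build H_rho for the linearized system Δρ = H_rho z + ε_rho.

    los_darboux : (N_sat, 3) array of satellite line-of-sight unit vectors
        pointing toward the receiver, expressed in the Darboux frame.
    Each row of the result is [e_p^T, 1], matching z = [s, u, v, c*dt]^T.
    """
    los = np.atleast_2d(np.asarray(los_darboux, dtype=float))
    return np.hstack((los, np.ones((los.shape[0], 1))))
```

For instance, a satellite at zenith has a line of sight toward the receiver along +e_V (the downward axis of the Darboux frame), so a purely vertical displacement maps one-to-one into its pseudorange offset.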
We observe that even though the proposed model has been introduced considering the absolute GNSS positioning, it can be extended to the differential GNSS approach. In this latter case, the pseudorange offset vector ρ includes the differential corrections.
The exploitation of the road constraint and of the measurement of the lateral offset with respect to the centerline provided by the imaging system can be performed by adding two measurement equations related to u and v, respectively, as follows:

u_m = u_ARP + ε_u
v_m = v_ARP + ε_v

where u_ARP is the lateral offset of the ARP, v_ARP is the height of the ARP above the ground, and ε_u and ε_v are the measurement error components for u_m and v_m, respectively. In order to characterize the error components contributing to ε_u and ε_v, let us consider that u_ARP is related to the lateral offset u_OC of the camera's optical center from the centerline and to the vehicle's yaw angle θ provided by the imaging system as follows:

u_ARP = u_OC + [sin θ  cos θ  0] ξ_ARP

where ξ_ARP is the vector of the offset of the ARP with respect to the optical center in the vehicle's intrinsic reference frame. The imaging system introduces an error on u_ARP, which will be denoted by ε_OC. As for v_ARP, it is the height of the ARP with respect to the road plane measured after the installation of the receiver on the vehicle. The height error on v_ARP will be denoted by ε_H. Moreover, it has to be considered that the RDM may be affected by errors. Denoting the track coordinates stored in the RDM versus the mileage as X_Map(s), the error of the digital map corresponding to the mileage s is

ε_Survey(s) = X_Cl(s) − X_Map(s).
Therefore, the error contributions to the additional measurements u_m and v_m are given by

ε_u = ε_OC + e_U^T ε_Survey(s)
ε_v = ε_H + e_V^T ε_Survey(s)

so that the component of ε_Survey(s) along e_U is included in ε_u and the component along e_V in ε_v. The previous equations can be written in a compact form as follows:

q = H_q z + ε_q    (16)

where q = [u_m, v_m]^T, ε_q = [ε_u, ε_v]^T, and

H_q = [0 1 0 0; 0 0 1 0].

Thus, the full linearized system exploiting the GNSS and imaging system measures and the road constraint is the following:

[Δρ; q] = [H_ρ; H_q] z + [ε_ρ; ε_q].

Although in our tests we estimated z in a single step to optimize the computational cost, in this section, we would like to highlight the impact of adopting sensor fusion. To this aim, we show how to perform the estimation of z through a sequential approach, in which we first employ the GNSS pseudorange measures and then, through (16), the imaging system measures.
As the first step, having no a priori information, we evaluate the maximum likelihood estimate. This corresponds to the weighted least squares solution when the inverse of the GNSS measurement noise covariance matrix R_ρ is employed as the weighting matrix, so that the estimate ẑ_G based only on GNSS is

ẑ_G = (H_ρ^T R_ρ^{-1} H_ρ)^{-1} H_ρ^T R_ρ^{-1} Δρ.    (21)

Moreover, from (21), it follows that the covariance matrix of the estimation error P_G is

P_G = (H_ρ^T R_ρ^{-1} H_ρ)^{-1}.

In the second step, we combine ẑ_G with the measures q provided by the visual sensors. The estimate ẑ_SF after sensor fusion is, thus, obtained through the relationship

ẑ_SF = ẑ_G + K (q − H_q ẑ_G)    (24)

where the gain K is

K = P_G H_q^T (H_q P_G H_q^T + R_q)^{-1}

with R_q the covariance matrix of ε_q, and the covariance matrix of the sensor fusion solution is

P_SF = (I − K H_q) P_G.

We observe that the above equations represent a special case of the Kalman filter equations when the state transition matrix is set to the identity matrix and the covariance matrix of the input is null. Under these circumstances, the state of the system remains unchanged from one measure to the next.
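The two-step estimation described above can be sketched directly; a minimal NumPy sketch (function names are illustrative), with the second step written as the degenerate Kalman update mentioned in the text:

```python
import numpy as np

def wls(H, y, R):
    """First step: weighted least-squares estimate and its error covariance."""
    W = np.linalg.inv(R)
    P = np.linalg.inv(H.T @ W @ H)
    return P @ H.T @ W @ y, P

def fuse(z_g, P_g, H_q, q, R_q):
    """Second step: fold in the imaging/road measures q = H_q z + eps_q.
    A Kalman update with identity state transition and no process noise."""
    S = H_q @ P_g @ H_q.T + R_q              # innovation covariance
    K = P_g @ H_q.T @ np.linalg.inv(S)       # fusion gain
    z_sf = z_g + K @ (q - H_q @ z_g)
    P_sf = (np.eye(len(z_g)) - K @ H_q) @ P_g
    return z_sf, P_sf
```

On noise-free synthetic measurements the fused estimate coincides with the GNSS-only one while its error covariance shrinks; with noisy data, the gain K pulls the solution toward the imaging measures in proportion to the relative covariances.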
As detailed in Appendix B, when the errors affecting the digital map and the lateral offset measurement are negligible with respect to the error of the GNSS solution, the estimates of the lateral offset and of the height of the camera above ground are set to the observed values, i.e., û_SF = u_m, v̂_SF = v_m, and the vehicle mileage is set to

ŝ_SF = ŝ_G + P_{s,uv} P_{uv}^{-1} ([u_m, v_m]^T − [û_G, v̂_G]^T)    (27)

where P_{s,uv} is the cross-covariance (row) vector between the mileage component and the (u, v) components of the GNSS estimation error, and P_{uv} is the 2 × 2 covariance matrix of the latter; in scalar terms, the correction is weighted by the correlation coefficients between the components of the GNSS estimation error. That is, as demonstrated in Appendix B, the sensor fusion estimate is obtained by projecting the GNSS solution onto the virtual track, which in this case is the line parallel to the centerline, but with lateral offset u_m and height v_m with respect to the road surface (shown in yellow in Fig. 5). This projection is performed along the conjugate direction with respect to the inverse of the covariance matrix of the estimation error, P_G^{-1}. By denoting as [ẑ_G]_{1:3} the first three components of ẑ_G, Fig. 5 depicts the ellipsoid defined by the quadratic form x^T P_G^{-1} x, the conjugate direction to the virtual track with respect to that ellipsoid, and the resulting sensor fusion estimate [ẑ_SF]_{1:3}.

VI. INTEGRITY MONITORING
To assess the improvement in integrity that can be obtained through the exploitation of the information provided by the imaging system, we analyze the case of integrity monitoring that makes use of SS [23], [24]. This technique has been proposed as a fault detection method in [25]. The core idea of the SS approach is to detect a fault by comparing the distance between the navigation solutions obtained using different subsets of measurements. Among the implementations of SS methods, in the following, we employ the projection-line-based multihypothesis solution separation (PLB-MHSS) technique proposed in [7] due to its improved performances with respect to basic MHSS methods in terms of fault detection rate and PL.
For the sake of compactness, we report a general derivation of the SS technique, regardless of the fact that we are considering sensor fusion or not. The two cases of GNSS only (indicated by the subscript G) and sensor fusion (indicated by the subscript SF) estimates are reported in Table I.
First of all, it is convenient to introduce the vector of normalized measures m = W μ, obtained by premultiplying the original measures μ by the square roots of the inverses of their measurement noise covariance matrices, so that

m = H̃ z + ε̃

where H̃ is the n × m normalized observation matrix and ε̃ is now a sample of a zero-mean, uncorrelated, Gaussian random variable with unit variance, so that R_ε̃ = I. Moreover, n and m denote the total number of observations and the number of unknowns, respectively. Then, the projection p of the normalized measurement vector into the null space of the observation matrix H̃, also known in the literature as the parity vector, can be computed as follows:

p = Q m

where Q is the (n − m) × n parity matrix associated with H̃. Consequently, by adding to the GNSS measures u_m, the lateral offset provided by the imaging system, and v_m, the height of the camera above the ground, the dimension of the observation matrix, and consequently of the parity space, is increased by two. The effective gain, in terms of fault detection, depends on the mutual geometry among satellites, road, and receiver.
As detailed in [10], the parity matrix Q can be computed by first applying the singular value decomposition to H̃, so that we can write

H̃ = U Σ V^T

and then partitioning U into an n × m matrix U_1 and an n × (n − m) matrix U_2 so that U = [U_1  U_2]. Then, we can set Q = U_2^T. We recall that Q Q^T = U_2^T U_2 = I [10]. In addition, the components of p are mutually uncorrelated and have a unit variance, so that the covariance matrix of p is R_p = I.
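A sketch of this construction (assuming H̃ has full column rank, so that the SVD partition yields exactly n − m parity rows):

```python
import numpy as np

def parity_matrix(H):
    """Parity matrix Q of an n x m observation matrix H with full column
    rank: the rows of Q form an orthonormal basis of the left null space
    of H, obtained from the SVD partition U = [U_1 U_2]."""
    n, m = H.shape
    U, _, _ = np.linalg.svd(H)   # full SVD: U is n x n
    return U[:, m:].T            # Q = U_2^T, shape (n - m) x n
```

By construction, Q annihilates the observation matrix (Q H̃ = 0) and has orthonormal rows (Q Q^T = I), which is exactly what makes the parity vector insensitive to the true state z.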
As demonstrated in [7], the SSs can be directly computed in the parity space. To this aim, let us denote with H = {H_i, i = 0, 1, ..., h} the set of mutually exclusive, jointly exhaustive hypotheses concerning the healthy/faulty status of the GNSS constellation, with H_0 corresponding to the fault-free status and the remaining hypotheses corresponding to single and multiple faults. Let us define as F_i = {i_1, i_2, ..., i_{n_i}} the set of indices associated with the faulty observations involved in the ith fault hypothesis H_i, where n_i indicates the number of faults associated with the hypothesis H_i. Then, the observation matrix H̃_i^0 corresponding to the exclusion from the position, velocity, and time (PVT) computation of the faulty satellites can be obtained from the fault-free observation matrix H̃ by setting to zero the rows corresponding to F_i, as follows:

H̃_i^0 = (I − A_i A_i^T) H̃

where

A_i = [δ_{i_1}^T  δ_{i_2}^T  ···  δ_{i_{n_i}}^T]

and δ_j denotes the 1 × n row vector with the jth component equal to 1 and all other elements null. Then, assuming that the number of fault-free measurements under fault hypothesis H_i is at least equal to the number of unknowns (i.e., n − m ≥ n_i) and Q A_i is full rank, based on [7], the SS Δz_i = ẑ_0 − ẑ_i corresponding to the ith fault hypothesis can be written in terms of the parity vector as follows:

Δz_i = H̃^† A_i (A_i^T Q^T Q A_i)^{-1} A_i^T Q^T p    (33)

where H̃^† = (H̃^T H̃)^{-1} H̃^T is the Moore-Penrose left pseudoinverse of H̃. The above assumptions are necessary for A_i^T Q^T Q A_i to be invertible. They are commonly assumed in the literature (see, e.g., [13] and [15]) and are usually verified under common operating conditions, especially when using a multiconstellation receiver. Nevertheless, there might be cases in which these assumptions are violated. If n − m < n_i, rank(Q A_i) < n_i, or the condition number of A_i^T Q^T Q A_i exceeds a predefined threshold, the navigation system issues a warning.
Moreover, let us define as λ_k the m × 1 column vector with the kth component equal to 1 and all other elements null. Then, from (33), it follows that the kth component of Δz_i can be computed as

Δz_i^k = λ_k^T H̃^† A_i (A_i^T Q^T Q A_i)^{-1} A_i^T Q^T p.    (35)

Equivalently, Δz_i^k = g_{i,k}^T p, where

g_{i,k} = Q A_i (A_i^T Q^T Q A_i)^{-1} A_i^T (H̃^†)^T λ_k    (36)

defines the projection line associated with the pair (i, k). According to [7], the fault detection test statistic can be obtained as the largest projection of the parity vector onto the projection lines defined in (36), namely

τ = max_{i,k} |g_{i,k}^T p| / ‖g_{i,k}‖

and an alarm can be raised whenever τ exceeds the fault detection threshold T defined as

T = Q_Φ^{-1}(P_fa / (2 N_L))

for a certain false alarm probability P_fa, where Q_Φ(·) denotes the standard Gaussian tail probability and N_L is the number of projection lines among which P_fa is evenly allocated.
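The solution-separation computation in the parity space can be sketched as follows; a minimal sketch (names illustrative), which can be checked against the direct recomputation of the subset solution obtained by excluding the faulty observations:

```python
import numpy as np

def solution_separation(H, Q, p, fault_idx):
    """Solution separation dz_i = z_hat_0 - z_hat_i computed directly in
    the parity space (a sketch of the PLB-MHSS construction of [7]).

    H         : n x m normalized observation matrix.
    Q         : (n - m) x n parity matrix of H.
    p         : parity vector, p = Q @ m.
    fault_idx : indices of the observations assumed faulty under H_i.
    """
    n = H.shape[0]
    A = np.zeros((n, len(fault_idx)))
    A[fault_idx, range(len(fault_idx))] = 1.0    # columns are delta_j^T
    H_pinv = np.linalg.pinv(H)                   # Moore-Penrose left inverse
    M = A.T @ Q.T @ Q @ A                        # must be invertible
    return H_pinv @ A @ np.linalg.solve(M, A.T @ Q.T @ p)
```

Numerically, the parity-space expression reproduces ẑ_0 − ẑ_i obtained by deleting the faulty rows and re-solving the least-squares problem, which is the identity exploited by the PLB-MHSS method.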
In addition, from (35), for the variance σ²_ss,i,k of Δz_i^k, we obtain

σ²_ss,i,k = λ_k^T H̃^† A_i (A_i^T Q^T Q A_i)^{-1} A_i^T (H̃^†)^T λ_k.

Then, according to [13] and [14], the integrity risk constraint imposes that

P_HMI ≤ I_REQ − P_NM

where I_REQ is the integrity risk allocated to the GNSS location determination system, P_NM is the prior probability of faults that do not need to be monitored against, because they have a very low probability, and P_HMI is the hazardous misleading information probability. Then, based on [7], the protection level PL_k for the kth component of the vehicle position (where k equal to 1, 2, and 3 corresponds to the longitudinal, lateral, and downward component, respectively) can be computed by solving the following equation:

2 Q_Φ(PL_k / σ_0,k) + Σ_{i=1}^{h} P_{H_i} Q_Φ((PL_k − T σ_ss,i,k) / σ_i,k) = I^k_REQ − P^k_NM    (41)

with Q_Φ(·) the standard Gaussian tail probability, where I^k_REQ is the integrity risk allocated to the kth component of z and P^k_NM is the corresponding prior probability of not monitored faults.
Moreover, σ 0,k and σ i,k are the standard deviations of the full set estimation error and of the estimation error of the solution corresponding to the exclusion of the faulty observations of H i , for the kth component of z, and P H i is the a priori probability of H i . The PL computation procedure is summarized in Fig. 6.
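Since the left-hand side of the protection-level equation decreases monotonically in PL_k, the equation can be solved by bisection. A minimal sketch of this numerical solution, using the baseline MHSS form with illustrative inputs (the bias terms stand for the T·σ_ss contributions of the fault hypotheses; the exact PLB-MHSS weighting may differ):

```python
import math

def qfun(x):
    """Standard Gaussian right-tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def protection_level(sigma0, sigmas, biases, p_hyp, i_req, p_nm, tol=1e-6):
    """Solve 2*Q(PL/sigma0) + sum_i p_hyp[i]*Q((PL - biases[i])/sigmas[i])
    = i_req - p_nm for PL by bisection (left side decreases in PL)."""
    target = i_req - p_nm

    def f(pl):
        return 2.0 * qfun(pl / sigma0) + sum(
            p * qfun((pl - b) / s) for p, s, b in zip(p_hyp, sigmas, biases))

    lo, hi = 0.0, 1.0
    while f(hi) > target:          # grow the bracket until f(hi) <= target
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With no fault hypotheses, the PL reduces to the fault-free Gaussian quantile; adding a fault hypothesis with a nonzero bias term enlarges the PL, as expected from the allocation in (41).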

VII. EXPERIMENTAL RESULTS
In this section, we quantify the gain obtained by complementing GNSS measurements with the lateral vehicle offset provided by the imaging system and the digital map. The lateral accuracy is mainly driven by the accuracy of the imaging subsystem, and, according to [12], the latter outperforms the GNSS by at least one order of magnitude when code pseudoranges are employed. For this reason, we expect the lateral PL to undergo a significant improvement thanks to sensor fusion. Therefore, we also analyze the effects of sensor fusion on the integrity of the estimate of the longitudinal component of the vehicle position, for which an improvement is less obvious. The information on the lateral offset provided by the visual input, in fact, impacts the longitudinal position due to the cross-correlation among the GNSS error components, as shown in (27). To assess the performances, we computed both the lateral protection level PL_Lat and the longitudinal protection level PL_Long with and without sensor fusion, by solving (41).
The experiments have been performed assuming a standard deviation of 0.1 m for u_m, in accordance with the results reported in [12]. Regarding the vertical offset measurements, we observe that they are mainly affected by the quality of the RDM. For the creation of the latter, we assumed the use of satellite geodesy with RTK postprocessing or similar surveying techniques providing centimeter-level or decimeter-level accuracy. Consequently, for the performance assessment in this article, we adopt a value of 0.1 m for the standard deviation of v_m. The experimental evaluation has been carried out considering the set of sites shown in Fig. 7, which correspond to the locations of 39 International GNSS Service stations [26]. To account for the dependence of the PL on the satellite lines of sight, an observation period of 24 h has been adopted. Moreover, possible course directions in the range [0°, 180°] with a 1° step have been considered. However, for the sake of readability, in the following, some of the results have been aggregated with respect to two station subsets, the first with almost constant latitude and the second with almost constant longitude, corresponding to the stations with the orange mark on the map. Concerning the GNSS error model, we considered the reference case, in which global hazards related to signal-in-space errors, like those caused by satellite faults and by anomalous ionospheric and tropospheric conditions, are mitigated by means of a local area differential GNSS augmentation system. Therefore, for their first-order statistics, we adopted the same models employed in [27] for the evaluation of the expected performances of a GNSS receiver installed on board a train.
In essence, the code pseudoranges are affected by the ionospheric and tropospheric residual errors, by the code noise multipath (CNMP) errors affecting the receivers of the augmentation network, and by the CNMP error affecting the on-board receiver, whose statistics are summarized in Table II. More specifically, O_f is the obliquity factor, σ²_vig,n is the variance of the nominal ionosphere, c_s,τ represents the carrier smoothing time of the GNSS receiver, x_v is the distance of the vehicle from the nearest reference station, v_v is the vehicle speed, R_e is the Earth radius, el is the satellite elevation angle, and h_I is the ionosphere thin shell height. Further details can be found in [27]. We note that the CNMP error includes the receiver thermal noise. The choice of adopting the same model as in [27] is motivated by the fact that trains and road vehicles share the same electromagnetic environment, characterized by multipath as the major source of hazards. As in [27], to account for a larger multipath impact with respect to the avionic scenario, we inflated its contribution to the error budget by a factor k_infl = 3. Finally, for the PL computation, we considered a target P_HMI of 10^−7, a P_fa of 10^−3, and a single fault hypothesis corresponding to an a priori probability equal to 10^−3. Moreover, in the PVT estimate, only satellites with elevation exceeding 10° have been considered. The temporal evolution of PL_Lat versus the vehicle course for a GPS receiver located in Ondřejov (Czech Republic), before and after sensor fusion, is reported in Fig. 8. Similarly, the results obtained for the same receiver position based on the joint use of the GPS and GALILEO constellations are shown in Fig. 9. As expected, the lateral PL reduction is significant both when only GPS is employed and when a multiconstellation approach is used (it is worth noticing the different scales of the colormaps).
For this reason, we chose to analyze in more detail the behavior of $PL_{Long}$, whose improvement is less predictable. The temporal evolution of $PL_{Long}$ versus the vehicle course for the GPS receiver located in Ondřejov, before and after sensor fusion, is reported in Fig. 10. Similarly, the results obtained for the same receiver based on the joint use of GPS and GALILEO constellations are shown in Fig. 11. In the case of both single and multiple constellations, the use of sensor fusion allows us to decrease the PL. This is more evident when a single constellation is employed since, as expected, the joint use of two constellations already improves the PL. However, as shown in Fig. 11, even in the multiconstellation scenario, the benefits of sensor fusion are noticeable. We remark that, for the sake of comparison with similar results reported in the literature for the avionic case, we adopted the same parameters for the variance of the local hazards for both constellations. From Figs. 10 and 11, it appears that the performance improvement due to sensor fusion is significant for any course orientation. Since the PL variations with respect to the course direction are sufficiently smooth, for the sake of compactness, in Figs. 12 and 13, only the data related to four directions, i.e., 0°, 45°, 90°, and 135°, are reported for the whole set of stations. These figures confirm that the performance improvement occurs regardless of the course direction. Moreover, the PL assumes its minimum and maximum values at 0° and 90°, respectively. Owing to this phenomenon, the sensitivity of the performance to the latitude and the longitude of the receiver has been assessed by means of a one-day simulation for the North-South and East-West courses only. As a performance indicator, the cumulative distribution of the PL (i.e., the probability that $PL_{Long}$ does not exceed a specific level) has been adopted.
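The empirical cumulative distribution used as a performance indicator can be computed as in the following sketch, where synthetic gamma-distributed values stand in for the simulated longitudinal PLs (purely illustrative data, not the article's results).

```python
import numpy as np

# Hypothetical per-epoch longitudinal PLs (meters) for one station/course;
# in the article these come from the one-day simulation.
rng = np.random.default_rng(0)
pl_long = rng.gamma(shape=4.0, scale=2.5, size=1000)

# Empirical CDF: P(PL_Long <= level), as plotted in Figs. 14-17.
levels = np.sort(pl_long)
cdf = np.arange(1, levels.size + 1) / levels.size

# PL value at CDF = 0.8, i.e., the level not exceeded 80% of the time
# (the abscissa highlighted by the vertical lines in the figures).
pl_80 = np.quantile(pl_long, 0.8)
```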
The results related to the set of stations having almost the same latitude (station IDs: 35, 20, 1, 2) are shown in Figs. 14 and 15. They confirm that multiconstellation scenarios and sensor fusion techniques allow us to achieve a smaller PL. According to the mentioned figures, in addition, the achieved performance does not change with the longitude of the receiver. Moreover, although the PL reduction is significant regardless of the course direction, the PL assumes different values for the East-West and for the North-South courses. In more detail, the 0° course corresponds to better results, as can be clearly noticed, for instance, from the values assumed by the PL when the cumulative distribution function (CDF) equals 0.8 (highlighted by the vertical line in Figs. 14 and 15). This phenomenon occurs in both the single- and multiconstellation cases, as well as with sensor fusion. Then, we selected a set of stations having almost the same longitude (station IDs: 20, 38, 3, 36, 19, 15). The results are shown in Figs. 16 and 17, and they confirm that multiconstellation scenarios and sensor fusion techniques allow us to achieve a smaller PL. The achieved results also show that the performance improves when the latitude of the receiver increases, regardless of the course direction. However, as discussed before, the 0° course corresponds to better results (as highlighted by the vertical line in Figs. 16 and 17). The motivation for this variation resides in the nonuniform distribution of the satellites with respect to the visible portion of the sky.
On the other hand, the contribution to integrity carried by the additional information provided by the imaging subsystem and by the RDM can be appreciated by computing the ratio between the protection level $PL^{SF}_{Long}$ obtained after sensor fusion and the PL corresponding to the use of GNSS only. The latter will be denoted by $PL^{G}_{Long}$ when only GPS is employed and by $PL^{GG}_{Long}$ when both GPS and GALILEO are used. As a consequence, the PL reduction indicator $\gamma_{Long}$ can be defined as $\gamma^{G}_{Long} = PL^{SF}_{Long}/PL^{G}_{Long}$ for the GPS-only case and as $\gamma^{GG}_{Long} = PL^{SF}_{Long}/PL^{GG}_{Long}$ in the multiconstellation scenario. The results for both the GPS-only case and the GPS and GALILEO scenario are reported in Fig. 18. It is worth noticing that when the impact of sensor fusion is relevant, a significant PL reduction is obtained, corresponding to small values of $\gamma_{Long}$. Fig. 18 reports the boxplot of the temporal average of $\gamma_{Long}$ for the 39 receiver locations, considering four possible vehicle course directions. As clearly shown, for a single-constellation receiver (i.e., GPS), the PL reduction is around 70%, regardless of the course direction. On the other hand, when two constellations are employed, the PL enhancement is less relevant, leading to a $\gamma^{GG}_{Long}$ around 0.85, which corresponds to a PL reduction of 15%, regardless of the course direction.
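The temporal average of the reduction indicator can be evaluated as in the following sketch (hypothetical PL arrays; in the article, the PLs come from the solution-separation computation over one day).

```python
import numpy as np

def pl_reduction_indicator(pl_sf, pl_gnss):
    """Temporal average of gamma = PL_SF / PL_GNSS over the epochs of one
    day. pl_sf and pl_gnss are arrays of longitudinal PLs (meters) with and
    without sensor fusion (illustrative inputs)."""
    return float(np.mean(np.asarray(pl_sf) / np.asarray(pl_gnss)))

# Example: a 70% reduction corresponds to gamma = 0.3.
gamma = pl_reduction_indicator([3.0, 6.0], [10.0, 20.0])
```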

VIII. CONCLUSION
In this article, we exploit the additional information about the lateral offset of a vehicle with respect to a georeferenced roadway centerline, provided by an on-board camera, to enhance the integrity of the GNSS-based position estimate in both the lateral and longitudinal directions. In more detail, a virtual track model for road navigation has been defined. The integrity check is performed by projecting the GNSS estimate onto the track obtained by applying the offset provided by the imaging system to the centerline and by evaluating the components of the difference between the constrained and unconstrained estimates in the parity space. In order to verify, under the best circumstances, the impact of sensor fusion on the PL, some assumptions concerning the visual component have been introduced. More specifically, the camera is assumed to be calibrated and rigidly mounted on the vehicle. Moreover, the analysis performed in [12] for the shift and rotation estimation assumes favorable illumination and weather conditions and the presence of well-maintained lane markings. In real scenarios, the camera may move due to vibrations or bumpy roads, and periodic recalibration will be needed. These assumptions will be progressively relaxed in future works in order to evaluate the performance of the proposed method under less favorable conditions. The performance of the proposed method has been evaluated through simulations, and the obtained results confirm the suitability of camera sensors for enhancing position integrity. The tests have been performed for both the single- and multiple-constellation scenarios. Additional benefits are also expected from the joint use of GPS and GALILEO, thanks to the larger signal bandwidth and the different signal modulation of GALILEO, in addition to its second frequency.
Moreover, the new features not available on other constellations, such as high-accuracy positioning and synchronization information and the signal authentication service based on the encrypted codes contained in the signals, have to be considered. All these elements will contribute to improving the quality, robustness, and resilience of the positioning performance, responding to the challenging needs of automotive applications.

APPENDIX A
LATERAL OFFSET AND HEADING COMPUTATION
In order to extract the vehicle's lateral offset and orientation from a perspective view of the road, it is convenient to operate through homogeneous coordinates. However, considering that, in computer vision and computer graphics, depth is generally associated with the third coordinate, to make use of relations and algorithms in the form in which they are commonly used, it is convenient to permute the vehicle's intrinsic frame from {ξ, η, ζ} to {η, ζ, ξ}. The same permutation is applied to the Darboux frame in order to obtain the world reference system. Nevertheless, the final results directly provide the offset and orientation and do not require additional permutations.
Let us recall that the mapping from the homogeneous coordinates $X = (X_1, X_2, X_3, X_0)$ of any point $P$ of the real world into the homogeneous coordinates $x = (x_1, x_2, x_0)$ of the corresponding point of the image plane can be represented by the ordered cascade of three transformations.

1) A translation of the world reference system so that its origin coincides with the optical center of the video camera, modeled by the following partitioned matrix:
$$T_t = \begin{bmatrix} I_3 & -t \\ 0^T & 1 \end{bmatrix}$$
where $(t_1, t_2, t_3, 1)$ are the homogeneous coordinates of the camera's optical center and $t$ is the vector $t = [t_1\ t_2\ t_3]^T$.

2) A rotation of the world reference system, modeled by the matrix $T_r$, so that its axes are aligned with the camera axes. Since, in this article, we model the vehicle as a rigid body leaning on the ground, only the yaw Euler angle $\theta$ may be nonnull. Considering that, with the mentioned permutation, the axis of rotation is $X_2$, the matrix $T_r$ takes the form
$$T_r = \begin{bmatrix} \cos\theta & 0 & \sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$

3) A perspective transformation modeled by the matrix
$$T_p = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 1 \end{bmatrix}$$
where $f$ is the focal length of the camera.
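The cascade can be checked numerically. The following sketch (with hypothetical values for $f$, $t$, and $\theta$) builds the three matrices and verifies that the centerline point at infinity projects to a vanishing point with horizontal shift $f\tan\theta$; the last row of $T_p$ reflects the convention used in this appendix, in which a point at depth $d$ projects with scale $1/(1+d)$.

```python
import numpy as np

# Hypothetical camera parameters for the check.
f, theta = 1.0, np.deg2rad(5.0)
t = np.array([0.0, 0.0, 0.0])

Tt = np.block([[np.eye(3), -t[:, None]],
               [np.zeros((1, 3)), np.ones((1, 1))]])   # translation
c, s = np.cos(theta), np.sin(theta)
Tr = np.array([[c, 0, s, 0], [0, 1, 0, 0],
               [-s, 0, c, 0], [0, 0, 0, 1]])           # yaw rotation about X2
Tp = np.array([[f, 0, 0, 0], [0, f, 0, 0],
               [0, 0, 1, 0], [0, 0, 1, 1]])            # perspective

# Point at infinity of the centerline (world frame): direction of the X3 axis.
X_inf = np.array([0.0, 0.0, 1.0, 0.0])
xp = Tp @ Tr @ Tt @ X_inf
x = np.delete(xp, 2)            # discard the third coordinate
vp = x[:2] / x[2]               # Cartesian coordinates of the vanishing point
# vp[0] equals f*tan(theta): the yaw can be read off the VP's horizontal shift.
```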
Therefore, given $X$, the vector $X_p = [x_1\ x_2\ x_3\ x_0]^T$ is computed as follows:
$$X_p = T_p T_r T_t X. \qquad (49)$$
Then, in order to compute the homogeneous coordinates $x$ of the point of the image plane, the third coordinate of $X_p$ is discarded. Thus, denoting with $G_{opt}$ the matrix obtained from $T_p$ by deleting the third row, i.e.,
$$G_{opt} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 1 \end{bmatrix}$$
the whole operation can be written in compact matrix form as follows:
$$x = G_{opt} X_C. \qquad (50)$$
In (50), $X_C$ represents the homogeneous coordinates of the point $P$ with respect to the reference system centered on the camera's optical center and aligned with the camera axes, for which we have
$$X_C = T_r T_t X. \qquad (51)$$
Therefore, from (50), it immediately follows that the VP of the optical axis of the camera and of its parallels is the optical center. In fact, the homogeneous coordinates of the VP can be computed by projecting into the image plane the point at infinity of coordinates $X_{C_3} = (0, 0, 1, 0)$ corresponding to the $X_3$ axis, then obtaining
$$x_{VP} = G_{opt} X_{C_3} = (0, 0, 1). \qquad (52)$$
On the other hand, when the vehicle heading differs from the centerline direction by a yaw angle $\theta$, the homogeneous coordinates of the centerline point at infinity are $X_{C_{Cl}} = (m, 0, 1, 0)$ with $m = \tan\theta$. Therefore, the homogeneous coordinates $x_{Cl}$ of the corresponding VP are
$$x_{Cl} = G_{opt} X_{C_{Cl}} = (f\tan\theta, 0, 1). \qquad (53)$$
From (53), it follows that the yaw angle can be estimated by evaluating the shift of the intersection between the lane markings, extracted from the image, with respect to the optical center.
In fact, we have
$$\hat\theta = \arctan\!\left(\frac{\hat x_{Cl,1}}{f}\right). \qquad (54)$$
From (50), it follows that, given a point $P$ at a depth $d$, a relative height $h$, and a lateral offset $u$, for which, therefore, $X_C = (u, h, d, 1)$, the homogeneous coordinates $x_p$ are
$$x_p = (f u,\ f h,\ 1+d). \qquad (55)$$
Consequently, the Cartesian coordinates of the point of the image plane are
$$\left(\frac{f u}{1+d},\ \frac{f h}{1+d}\right). \qquad (56)$$
Let us consider that if the camera is affected by a lateral offset $u$ with respect to the centerline and a yaw angle $\theta$, then, denoting with $h$ the camera height with respect to the ground and with $W$ the lane width, the camera-frame coordinates of the lane marks at depth $d$ are
$$X_C^{\pm} = \left(\pm\frac{W}{2} - u + d\tan\theta,\ -h,\ d,\ 1\right). \qquad (57)$$
Therefore, from (56), we have that the Cartesian coordinates of the corresponding points in the image plane are
$$x^{\pm} = \frac{f\left(\pm W/2 - u + d\tan\theta\right)}{1+d}, \qquad y = -\frac{f h}{1+d}. \qquad (58)$$
Thus, the coordinates of the midpoint of the segment bounded by the images of the lane marks at depth $d$ are
$$x_{mid} = \frac{f\left(d\tan\theta - u\right)}{1+d}, \qquad y_{mid} = -\frac{f h}{1+d}. \qquad (59)$$
From (59), we have that the lateral offset $u$ can be estimated from the midpoint coordinates extracted from a slice of the image at depth $d$ as follows:
$$\hat u = d\tan\theta - (1+d)\,\frac{x_{mid}}{f}. \qquad (60)$$
Based on (54), we may also write
$$\hat u = d\tan\hat\theta - (1+d)\,\frac{x_{mid}}{f}. \qquad (61)$$
We observe that a slice of the road at depth $d$ maps into the image row with vertical offset equal to $f h/(1+d)$, $h$ being the height of the camera above the ground. On the other hand, averaging (61) with respect to $d$ allows us to reduce the estimation error. As an alternative, algorithms that fit the noisy edges corresponding to the lane markings extracted from the acquired image with straight lines, like those based on the Hough transform, directly provide the line parameters. In this case, it is convenient to apply (61) with $d = 0$, so that the estimation simplifies as follows:
$$\hat u = -\frac{x_{mid}}{f}. \qquad (62)$$
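As a numerical sanity check of the relations above (a sketch with hypothetical camera and road parameters), one can project the lane marks of a synthetic road and recover both the yaw and the lateral offset from the image coordinates.

```python
import numpy as np

# Hypothetical parameters: focal length, lateral offset (m), yaw, camera
# height (m), lane width (m).
f, u, theta, h, W = 1.0, 0.35, np.deg2rad(3.0), 1.4, 3.5

def project(d):
    """Image coordinates of the two lane marks at depth d, per eq. (58)."""
    xs = [f * (s * W / 2 - u + d * np.tan(theta)) / (1 + d) for s in (+1, -1)]
    y = -f * h / (1 + d)
    return xs, y

# Yaw from the vanishing point (d -> infinity), eq. (54):
(x_plus, x_minus), _ = project(1e9)
theta_hat = np.arctan(0.5 * (x_plus + x_minus) / f)

# Lateral offset from the midpoint of a slice at depth d, eq. (61):
d = 10.0
(x_plus, x_minus), _ = project(d)
x_mid = 0.5 * (x_plus + x_minus)
u_hat = d * np.tan(theta_hat) - (1 + d) * x_mid / f
```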

APPENDIX B
SENSOR FUSION FOR IMAGING AND DIGITAL MAP ERRORS TENDING TO ZERO
For a better understanding of the role of the information provided by the imaging system and by the RDM, let us examine the case in which the accuracy of this information significantly exceeds that of the GNSS estimate, so that the measurement errors associated with the lateral offset and with the digital map can be considered negligible. This situation can be modeled by setting the covariance matrix $R_q$ to zero in the equations providing the sensor fusion estimate and its covariance matrix. Let us denote the individual elements of the GNSS estimation error covariance matrix $P_G$ in terms of the standard deviations $\sigma_s$, $\sigma_u$, $\sigma_v$, and $\sigma_{\delta t}$ of the estimates of the along-track coordinate $s$, the lateral offset $u$, the height $v$, and the clock bias $\delta t$, and of the corresponding correlation coefficients $\rho_{(\cdot)}$. To simplify the derivation, it is useful to rewrite $P_G$ and the observation matrix $H_q$ in the following partitioned forms:
$$P_G = \begin{bmatrix} P_s & P_{sq} & P_{s\delta t} \\ P_{sq}^T & P_q & P_{\delta t q}^T \\ P_{s\delta t} & P_{\delta t q} & P_{\delta t} \end{bmatrix}, \qquad H_q = [\,0_{2\times 1}\ \ I_2\ \ 0_{2\times 1}\,]$$
where $P_s$, $P_{\delta t}$, and $P_{s\delta t}$ are scalars, $P_{sq}$ and $P_{\delta t q}$ are $1\times 2$ row vectors, and $P_q$ is the $2\times 2$ covariance matrix of the $(u, v)$ estimate. Then, we have
$$P_G H_q^T = \begin{bmatrix} P_{sq} \\ P_q \\ P_{\delta t q} \end{bmatrix}$$
and $H_q P_G H_q^T = P_q$.
Therefore, for the sensor fusion gain, we have
$$K = P_G H_q^T \left(H_q P_G H_q^T\right)^{-1} = \begin{bmatrix} P_{sq} P_q^{-1} \\ I_2 \\ P_{\delta t q} P_q^{-1} \end{bmatrix}.$$
From the previous relation, we also have that
$$H_q K = I_2. \qquad (68)$$
Note that the estimation equation can be written in the following equivalent form:
$$\hat x_{SF} = \left(I - K H_q\right)\hat x_G + K q_m \qquad (69)$$
where $q_m = [u_m\ v_m]^T$ collects the imaging and RDM measurements. Therefore, based on (68) and (69), for the sensor fusion estimate, we have
$$\hat s_{SF} = \hat s_G + P_{sq} P_q^{-1}\left(q_m - \hat q_G\right), \qquad \hat q_{SF} = q_m, \qquad \widehat{\delta t}_{SF} = \widehat{\delta t}_G + P_{\delta t q} P_q^{-1}\left(q_m - \hat q_G\right).$$
The gains $P_{sq} P_q^{-1}$ and $P_{\delta t q} P_q^{-1}$ can then be detailed by observing that
$$P_{sq} = \left[\rho_{us}\,\sigma_u \sigma_s \quad \rho_{vs}\,\sigma_v \sigma_s\right] \qquad (75)$$
$$P_{\delta t q} = \left[\rho_{u\delta t}\,\sigma_u \sigma_{\delta t} \quad \rho_{v\delta t}\,\sigma_v \sigma_{\delta t}\right] \qquad (76)$$
and
$$P_q^{-1} = \frac{1}{1-\rho_{uv}^2}\begin{bmatrix} 1/\sigma_u^2 & -\rho_{uv}/(\sigma_u \sigma_v) \\ -\rho_{uv}/(\sigma_u \sigma_v) & 1/\sigma_v^2 \end{bmatrix}.$$
Therefore, we finally have
$$\hat s_{SF} = \hat s_G + \frac{\sigma_s}{1-\rho_{uv}^2}\left[\frac{\rho_{us}-\rho_{uv}\rho_{vs}}{\sigma_u}\left(u_m - \hat u_G\right) + \frac{\rho_{vs}-\rho_{uv}\rho_{us}}{\sigma_v}\left(v_m - \hat v_G\right)\right]$$
$$\widehat{\delta t}_{SF} = \widehat{\delta t}_G + \frac{\sigma_{\delta t}}{1-\rho_{uv}^2}\left[\frac{\rho_{u\delta t}-\rho_{uv}\rho_{v\delta t}}{\sigma_u}\left(u_m - \hat u_G\right) + \frac{\rho_{v\delta t}-\rho_{uv}\rho_{u\delta t}}{\sigma_v}\left(v_m - \hat v_G\right)\right].$$
For the sensor fusion formula, the following geometrical interpretation can be given. Since the noise affecting the lateral offset $u_m$ and the height $v_m$ above the ground is considered negligible, the sensor fusion estimate is obtained by projecting the GNSS solution onto the line parallel to the centerline with lateral offset $u_m$ and height $v_m$ with respect to the road surface. The projection is performed along the direction conjugate to the unit vector $e_T$ with respect to the ellipsoid defined by the inverse of the covariance matrix $R_q^s$ of the estimation error affecting $s$, $u$, and $v$, i.e.,
$$R_q^s = \begin{bmatrix} P_s & P_{sq} \\ P_{sq}^T & P_q \end{bmatrix}.$$
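The limiting behavior can be verified numerically: with $R_q = 0$, the gain forces the fused $(u, v)$ components to coincide with the measurements, while the $s$ and $\delta t$ components are corrected through the $P_{sq} P_q^{-1}$ and $P_{\delta t q} P_q^{-1}$ gains. A minimal sketch with a random covariance (hypothetical values):

```python
import numpy as np

# Random SPD covariance standing in for P_G, state ordering (s, u, v, dt).
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
P_G = A @ A.T
H = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])    # observation matrix H_q selecting (u, v)

P_q = H @ P_G @ H.T
K = P_G @ H.T @ np.linalg.inv(P_q)      # limiting gain for R_q = 0

x_G = rng.normal(size=4)                # GNSS estimate (s, u, v, dt)
q_m = rng.normal(size=2)                # imaging/RDM measurements (u_m, v_m)
x_SF = x_G + K @ (q_m - H @ x_G)        # fused estimate
# The fused (u, v) components coincide exactly with the measurements.
```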