
Large-Scale SLAM Building Conditionally Independent Local Maps: Application to Monocular Vision

Simultaneous localization and mapping (SLAM) algorithms based on local maps have been demonstrated to be well suited for mapping large environments as they reduce the computational cost and improve the consistency of the final estimation. The main contribution of this paper is a novel submapping technique that does not require independence between maps. The technique is based on the intrinsic structure of the SLAM problem that allows the building of submaps that can share information, remaining conditionally independent. The resulting algorithm obtains local maps in constant time during the exploration of new terrain and recovers the global map in linear time after simple loop closures without introducing any approximations besides the inherent extended Kalman filter linearizations. The memory requirements are also linear with the size of the map. As the algorithm works in a covariance form, well-known data-association techniques can be used in the usual manner. We present experimental results using a handheld monocular camera, building a map along a closed-loop trajectory of 140 m in a public square, with people and other clutter. Our results show that the combination of conditional independence, which enables the system to share the camera and feature states between submaps, and local coordinates, which reduce the effects of linearization errors, allows us to obtain precise maps of large areas with pure monocular SLAM in real time.



THE SIMULTANEOUS localization and mapping (SLAM) problem consists in processing the information obtained by a sensor installed on a mobile platform to obtain an estimate of its own pose while building a map of the environment. It has been the subject of continuous attention during the last two decades (for recent reviews, see [1], [2], [3]). The first consistent solution proposed, and still a popular one, is extended Kalman filter (EKF)-SLAM [4], [5], [6], which represents the vehicle pose and the location of a set of environment features in a joint state vector that is estimated using the EKF. Under the assumption of white Gaussian noise, the EKF provides a suboptimal way to deal with the uncertainties associated with the motion and measurement processes, due to the inherent linearization errors. To clarify, in the rest of the paper, we will refer to the EKF-SLAM solution as suboptimal as opposed to other approximate techniques that introduce additional approximations besides linearization.

Despite its relative success, the EKF-SLAM algorithm suffers from two main limitations.

  1. It requires updating the full map covariance matrix after each measurement, giving a memory complexity of O(n²) and a time complexity of O(n²) per step, where n is the total number of features stored in the map [5].

  2. The EKF linearization approximations produce optimistic values for the map covariance matrix and introduce errors in the estimation, which may result in inconsistency [7], [8].

Techniques based on building submaps confront both problems at the same time. The main motivation for using submaps is clear: if a large area is split into several submaps with the number of features bounded by a constant, the submaps can be built in constant time per step. To clarify terminology, in this paper, we will use the generic term submap for a map of a small area inside a larger map. We will call absolute submap a submap expressed in global coordinates. We will use the term local submap or simply local map for a submap expressed with respect to a local coordinate frame. Although there is no formal proof, there is strong empirical evidence that using local submaps also improves the consistency of the EKF-SLAM [8]. The intuitive explanation is that, in local maps, uncertainty is small and the linearization errors introduced in the EKF remain small. Another advantage of these algorithms is that they allow direct implementation of data association methods since they work with covariance matrices.

The main contribution of this paper is a novel technique that allows the use of submap algorithms, avoiding the limitations imposed by the requirement of statistical independence between maps. The technique is based on the intrinsic structure of the SLAM problem that allows us to build submaps that can share information, remaining conditionally independent. During exploration of new terrain, it obtains local maps in constant time. After simple loop closures, it can recover the global map in linear time, without introducing any approximations besides the inherent EKF linearizations. As it works in covariance space, robust data association algorithms such as joint compatibility branch and bound (JCBB) [9] can be directly used. The technique has been implemented using absolute submaps or local submaps. In the second case, the effects of linearization errors are minimized, and the maps obtained are actually more precise and consistent than the maps obtained by the techniques based on EKF or extended information filter (EIF) that use global coordinates.

Section II discusses the related work. The basic technique for building conditionally independent submaps is introduced in Section III and particularized in Section IV for the case of Gaussian maps in covariance form. Section V presents the algorithms for exploration and loop closing. Section VI shows the application of the technique to the challenging case of pure monocular SLAM, closing a loop of 140 m with a handheld camera in a public square. Finally, in Section VII, we summarize the main characteristics of the algorithm presented and propose future work. A preliminary version of this paper was presented in [10]. Apart from a more detailed presentation and discussion of the technique, this paper adds the loop closing technique and new experiments demonstrating the performance of the method.


Related Work

In the context of Gaussian filters, several techniques have been proposed to address the computational complexity problem. Postponement [11] and the compressed EKF filter (CEKF) [12] reduce the computational cost by making updates in a local area around the robot, delaying the global map update until the vehicle moves to another area. The result obtained is suboptimal, but the global map update is still O(n²).

Techniques based on the information filter take advantage of the nearly sparse structure of the information matrix (the inverse of the map covariance matrix) to reduce the computational burden. In the sparse EIF (SEIF) [13], the information matrix of the SLAM posterior is approximated by rounding to zero the small off-diagonal elements. This prevents the interlandmark links from forming, and therefore, limits the density of the information matrix. The Thin-Junction Tree Filter (TJTF) [14] and the exactly SEIF (ESEIF) [15] maintain the sparsity by discarding some weak information such as the robot odometry. The exactly sparse delayed-state filters (ESDFs) [16] avoid the previous approximations by including the trajectory of the vehicle in the state vector, which makes the information matrix exactly sparse. Nevertheless, some approximations are still performed when portions of the mean state vector are recovered from the canonical (information) form in order to evaluate the Jacobians.

The main advantage of information filters is that both the measurement and motion steps can be performed by updating the information vector and matrix in constant time. However, to recover the estimated value of the map state, a sparse linear system has to be solved. This has been addressed using conjugate gradient [13], relaxation [17], or multilevel relaxation [18], which require quadratic or, at best, linear time to converge (see [19] for a discussion). A recent and very efficient technique that also works in information space is the treemap algorithm [20], which requires O(log n) time per step to recover a part of the state and O(n) to recover the whole map. However, it has only been tested using simulations, with known data association.

One important limitation of the techniques based on the information form is the difficulty of performing data association since the covariance matrix is not available. Most techniques resort to approximating the classical individual Mahalanobis gating, which is known to be problematic in difficult data association scenarios [9].

The problem of map consistency has motivated algorithms such as the unscented Kalman filter (UKF) [21] that achieve better consistency properties, but do not take into account the computational complexity problem.

Finally, some techniques based on building submaps confront complexity and consistency issues at the same time. The first technique using absolute submaps was decoupled stochastic mapping [22]. The main difficulty of the technique was that absolute submaps are not statistically independent, and some approximations were needed to get rid of the dependencies, introducing inconsistency in the map. In local submaps, the base reference is usually chosen to be the first robot pose when the local map was started. This allows local maps to be initialized with zero uncertainty in the robot pose. Under the assumption of white noise, and if no information is shared between maps, local maps are statistically independent and thus uncorrelated [23]. Local maps can be consistently combined using map joining [23], or the equivalent constrained local submap filter (CLSF) [24], to obtain the global map in an O(n²) operation. The more recent Divide and Conquer SLAM [25] is able to recover the global map in amortized O(n) time, provided that the overlap between maps remains small.

However, it is important to note that for a set of local maps to be independent, no information can be shared between them. This has several consequences.

  1. Features that are seen from two neighboring local maps have a different estimation in each map. If the information that both features are the same were used, the map independence would be destroyed. This information can only be used when recovering the global map with map joining, which has O(n²) cost. More efficient techniques such as constant-time SLAM (CTS) [26], Atlas [27], or hierarchical SLAM [28] discard this information, which results in weak links between maps, obtaining approximated solutions.

  2. Loop consistency can be imposed at the global map level as hierarchical SLAM does, but the corrections obtained cannot be propagated to the individual features inside the local maps because this would destroy map independence. Other techniques such as CTS and Atlas simply discard the loop constraints to remain efficient.

  3. Sensors with partial observability such as monocular vision require the integration of measurements taken from several robot poses to obtain an accurate estimation of a feature. With independent maps, features that are observed at the end of one local map and at the beginning of the next map remain quite imprecise in both maps.

  4. Vehicle states such as velocities or sensor biases that have been estimated in real time cannot be transferred between maps. For example, this precludes the use of inertial sensors. Also, sensors that give absolute measurements such as Global Positioning System (GPS) or compass cannot be used without destroying map independence.

These limitations are particularly important in the extreme case of pure monocular SLAM, where the only sensory input is a single camera, with no odometry. Under these conditions, real-time EKF-SLAM has been successfully demonstrated in small areas [29], [30], [31]. The first system able to extend the approach to large outdoor areas is based on building independent local maps that are combined using the hierarchical SLAM approach [32]. In that system, the constraint of map independence forces each local map to be started from scratch, without any information about the environment or camera velocities. This makes the system slightly unreliable, as the most critical part, map initialization, is repeated again and again along the trajectory. Furthermore, as the scale is intrinsically unobservable in monocular SLAM, the different local maps obtained have quite different scale factors.

The method proposed in this paper avoids these problems by building conditionally independent submaps, which can share information about the environment and vehicle states: the submaps are conditionally independent given the common states. The idea of conditional independence has been previously used in Rao–Blackwellized particle filter (RBPF) SLAM in a different sense: the estimations of the elements in the map are conditionally independent given the robot trajectory [33]. Recent optimizations of this approach have produced very efficient and accurate techniques for indoor and outdoor SLAM with laser data [34]. The idea of conditional independence between local maps has been recently applied in [35], but the method makes the approximation that there are no common features between submaps; this approximation is not needed in our technique.

Our method presents some similarity to the treemap algorithm [20] in the sense that flows of information are transferred between submaps to update previous map estimates. However, we use the covariance form instead of the information form, which allows us to apply effective data association algorithms such as JCBB. The second crucial difference is the use of local coordinates, which improves precision, as shown in our experiments. Finally, our technique represents the information using sequential local maps instead of ordering features in a tree structure that has to be maintained and balanced, resulting in an algorithm that is easier to implement.


Building Conditionally Independent Submaps

A. Basic Probability Concepts

For the reader's convenience, this section summarizes the basic probability concepts that will be used in the rest of the paper. More detailed presentations can be found in [1] and [36].

The conditional probability of a random variable x given the value of the random variable y is defined as

$$p(\mathbf{x} \mid \mathbf{y}) = \frac{p(\mathbf{x}, \mathbf{y})}{p(\mathbf{y})} \tag{1}$$

where p(x, y) is the joint distribution and p(y) is the marginal distribution of y. In a more general case,

$$p(\mathbf{x} \mid \mathbf{y}, \mathbf{z}) = \frac{p(\mathbf{x}, \mathbf{y} \mid \mathbf{z})}{p(\mathbf{y} \mid \mathbf{z})}. \tag{2}$$

Two random variables are independent when

$$p(\mathbf{x}, \mathbf{y}) = p(\mathbf{x})\, p(\mathbf{y}) \tag{3}$$

which is equivalent to

$$p(\mathbf{x} \mid \mathbf{y}) = p(\mathbf{x}). \tag{4}$$

Intuitively, this means that knowledge of y does not provide any information about x.

Two random variables x and y are conditionally independent given z when

$$p(\mathbf{x}, \mathbf{y} \mid \mathbf{z}) = p(\mathbf{x} \mid \mathbf{z})\, p(\mathbf{y} \mid \mathbf{z}) \tag{5}$$

which is equivalent to

$$p(\mathbf{x} \mid \mathbf{y}, \mathbf{z}) = p(\mathbf{x} \mid \mathbf{z}). \tag{6}$$

In this case, if z is known, y does not provide any additional information about x.

In the case of two random variables that are jointly Gaussian, with mean and covariance given by

$$p(\mathbf{x}, \mathbf{y}) = \mathcal{N}\left( \begin{bmatrix} \hat{\mathbf{x}} \\ \hat{\mathbf{y}} \end{bmatrix}, \begin{bmatrix} P_x & P_{xy} \\ P_{yx} & P_y \end{bmatrix} \right) \tag{7}$$

the process of marginalization consists simply in choosing the appropriate rows and columns of the mean vector and the covariance matrix

$$p(\mathbf{y}) = \int p(\mathbf{x}, \mathbf{y})\, d\mathbf{x} = \mathcal{N}(\hat{\mathbf{y}}, P_y) \tag{8}$$

and the process of conditioning is performed by [37]

$$p(\mathbf{x} \mid \mathbf{y}) = \frac{p(\mathbf{x}, \mathbf{y})}{p(\mathbf{y})} = \mathcal{N}(\hat{\mathbf{x}}', P'_x) \tag{9}$$

$$\hat{\mathbf{x}}' = \hat{\mathbf{x}} + P_{xy} P_y^{-1} (\mathbf{y} - \hat{\mathbf{y}}) \tag{10}$$

$$P'_x = P_x - P_{xy} P_y^{-1} P_{yx}. \tag{11}$$
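As a concrete numerical check of (8)-(11), the following sketch marginalizes and conditions a small joint Gaussian. The block values are illustrative only, and NumPy is assumed available:

```python
import numpy as np

# Illustrative joint Gaussian over (x, y) as in (7); all numbers are made up.
x_hat = np.array([1.0, 2.0])                     # mean of x
y_hat = np.array([0.5])                          # mean of y
P_x = np.array([[2.0, 0.3],
                [0.3, 1.5]])
P_xy = np.array([[0.4],
                 [0.2]])
P_y = np.array([[1.0]])

# Marginalization (8): p(y) is obtained by simply keeping y's blocks.
y_marg_mean, y_marg_cov = y_hat, P_y

# Conditioning (9)-(11) on an observed value of y:
y_obs = np.array([0.8])
G = P_xy @ np.linalg.inv(P_y)                    # gain P_xy P_y^{-1}
x_cond_mean = x_hat + G @ (y_obs - y_hat)        # (10)
x_cond_cov = P_x - G @ P_xy.T                    # (11)
print(x_cond_mean)                               # → [1.12 2.06]
```

Note that conditioning always shrinks the covariance of x (the subtracted term in (11) is positive semidefinite), which is the algebraic counterpart of y carrying information about x.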

B. Conditionally Independent Absolute Submaps

Figure 1
Fig. 1. Bayesian network that describes the probabilistic dependencies between SLAM variables. It also reveals the intrinsic SLAM structure.

Fig. 1 (top) shows an example of a Bayesian network that represents the probabilistic dependencies between stochastic variables involved in SLAM. Node xi represents the state of the platform at the ith time step, ui models the motion applied to the system at xi, node fj represents the jth feature of the map, and zi compactly represents all feature observations taken from the ith platform location. Without loss of generality, we will use this example to illustrate the development of the technique.

The graph describes a map in which the vehicle has moved along five different locations x1:5 and has observed five features f1:5 during the trajectory. As the inputs u1:4 and observations z1:4 are known, the probability density function (pdf) associated with the graph is given by

$$p(\mathbf{x}_{1:5}, \mathbf{f}_{1:5} \mid \mathbf{z}_{1:4}, \mathbf{u}_{1:4}). \tag{12}$$

This pdf represents the joint distribution of the whole map and the trajectory. Now assume that we want to estimate the same map by building two submaps as shown in Fig. 1. In submap 1, the vehicle starts at x1 and finishes at x3 and has observed features f1:3 through measurements z1:2. Therefore, at the end of submap 1, the pdf that describes the map estimate is given by

$$p(\mathbf{x}_{1:3}, \mathbf{f}_{1:3} \mid \mathbf{z}_{1:2}, \mathbf{u}_{1:2}). \tag{13}$$

Differences with current independent submap techniques begin now. Instead of starting submap 2 from scratch, we want to take advantage of the available estimation of features that are on the border between both submaps. In the example, feature f3, which is visible from both submaps, will be copied to the second map. In addition, if we want to build absolute submaps, we should also include in submap 2 the current vehicle estimate x3. So, the pdf that describes the initial state of submap 2 is just the result of marginalizing out the elements of the pdf (13) we are not interested in

$$p(\mathbf{x}_3, \mathbf{f}_3 \mid \mathbf{z}_{1:2}, \mathbf{u}_{1:2}) = \int p(\mathbf{x}_{1:3}, \mathbf{f}_{1:3} \mid \mathbf{z}_{1:2}, \mathbf{u}_{1:2})\, d\mathbf{x}_{1:2}\, d\mathbf{f}_{1:2}. \tag{14}$$

Then, the vehicle continues traversing the second area, building submap 2. The vehicle has been in two new positions x4:5, has reobserved feature f3 (and indirectly x3) through z3, and has observed two new features f4:5 through measurements z3:4. Therefore, the final pdf of submap 2 is given by

$$p(\mathbf{x}_{3:5}, \mathbf{f}_{3:5} \mid \mathbf{z}_{1:4}, \mathbf{u}_{1:4}). \tag{15}$$

As can be noticed in (13) and (15), both local maps share in common a robot location x3, a feature f3, and some measurements (z1:2, u1:2); hence, they are not independent.

For clarity and generality, several nodes in the Bayesian network will be grouped together, as shown in Fig. 1 (bottom). The notations used are as follows.

  1. xA: Features and robot positions that are only observed in the first submap. In the example, this corresponds to f1:2 and x1:2.

  2. xB: Features and robot positions that are only observed in the second submap, i.e., f4:5 and x4:5.

  3. xC: Common features and robot position that are observed in both the first and second submaps, i.e., f3 and x3.

  4. za: Inputs and observations in the first submap gathered from features in xA and xC, i.e., u1:2 and z1:2.

  5. zb: Inputs and observations in the second submap gathered from features in xB and xC, i.e., u3:4 and z3:4.

As can be seen in Fig. 1, the only connection between the set of nodes (xA, za) and (xB, zb) is through node xC, i.e., both subgraphs are d-separated given xC [38]. This implies that nodes xA and za are conditionally independent of nodes xB and zb given node xC. Intuitively, this means that if xC is known, submaps 1 and 2 do not carry any additional information about each other. In the following, we will call this the submap conditional independence (CI) property, which can be stated as

$$p(\mathbf{x}_A \mid \mathbf{x}_B, \mathbf{x}_C, \mathbf{z}_a, \mathbf{z}_b) = p(\mathbf{x}_A \mid \mathbf{x}_C, \mathbf{z}_a)$$

$$p(\mathbf{x}_B \mid \mathbf{x}_A, \mathbf{x}_C, \mathbf{z}_a, \mathbf{z}_b) = p(\mathbf{x}_B \mid \mathbf{x}_C, \mathbf{z}_b). \tag{16}$$
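For jointly Gaussian variables, the CI property (16) has a simple algebraic signature: conditioning on xC must cancel the cross-covariance between xA and xB. A minimal sketch verifying this on toy 1-D blocks (all numbers are our own illustrative choices):

```python
import numpy as np

# Illustrative joint covariance over (x_A, x_C, x_B); the blocks are chosen
# so that the submap CI structure holds: P_AB = P_AC P_C^{-1} P_CB.
P_A  = np.array([[2.0]])
P_C  = np.array([[1.0]])
P_B  = np.array([[1.5]])
P_AC = np.array([[0.5]])
P_CB = np.array([[0.4]])
P_AB = P_AC @ np.linalg.inv(P_C) @ P_CB

P = np.block([[P_A,    P_AC,   P_AB],
              [P_AC.T, P_C,    P_CB],
              [P_AB.T, P_CB.T, P_B]])

# Condition the joint on x_C: the A-B cross-covariance must vanish,
# i.e., x_A and x_B are conditionally independent given x_C.
iA, iC, iB = [0], [1], [2]
K_A = P[np.ix_(iA, iC)] @ np.linalg.inv(P[np.ix_(iC, iC)])
cross_given_C = P[np.ix_(iA, iB)] - K_A @ P[np.ix_(iC, iB)]
print(cross_given_C)   # → [[0.]]
```

Any deviation of P_AB from this product would make the conditional cross-covariance nonzero, which is exactly the information that independent-submap methods are forced to discard.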

C. Conditionally Independent Local Maps

The method described earlier can be easily adapted to build sequences of conditionally independent local maps, each with its own local base reference. Let us return to the moment when the first map was finished in the example of Fig. 1. With absolute maps, the last vehicle position x3 and feature f3 were chosen to initialize submap 2 in order to represent both maps with respect to the same reference and take advantage of the available estimates of the vehicle and the feature. Instead, we now want to represent submap 2 with respect to a local reference given by the current vehicle position x3, and still use the information about feature f3 in submap 2. To do so in a consistent way, a copy of feature f3 expressed in the new reference must be calculated and included in submap 1. In the following, a prime will be used to denote entities relative to the new base reference:

$$\mathbf{f}'_3 = \ominus \mathbf{x}_3 \oplus \mathbf{f}_3. \tag{17}$$

After this process, the pdf that describes submap 1 is

$$p(\mathbf{x}_{1:3}, \mathbf{f}_{1:3}, \mathbf{f}'_3 \mid \mathbf{z}_{1:2}, \mathbf{u}_{1:2}). \tag{18}$$

The new local map will start with robot position x3′ being exactly zero. Obviously, this variable is completely independent of submap 1. By marginalizing (18), we obtain the pdf that describes the initial state of submap 2

$$p(\mathbf{f}'_3 \mid \mathbf{z}_{1:2}, \mathbf{u}_{1:2}). \tag{19}$$

Once the vehicle has traversed the second submap and has incorporated all observations gathered in it, the pdf associated with the final estimate of submap 2 is

$$p(\mathbf{x}'_{4:5}, \mathbf{f}'_{3:5} \mid \mathbf{z}_{1:4}, \mathbf{u}_{1:4}). \tag{20}$$

Fig. 2 shows the Bayesian network that corresponds to the new algorithm. As can be seen, the structure of the network is the same as in Fig. 1 (bottom). The only difference is that the part shared by both maps, xC, in this case corresponds to the local representation of feature f3′. As a consequence, the submap CI property (16) is valid for local submaps as well as for absolute submaps.
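For intuition, (17) can be instantiated in 2-D, where ⊖x3 ⊕ f3 amounts to expressing a point feature in the frame of the last robot pose. A sketch under the assumption of a planar pose (x, y, θ) and a point feature; the paper's monocular system is of course 3-D, and the function name is our own:

```python
import numpy as np

def inv_compose_point(x_pose, f):
    """Express the point feature f (given in the old frame) relative to
    pose x_pose, i.e., f' = (-)x (+) f for a 2-D pose (x, y, theta).
    Illustrative 2-D instance of (17)."""
    x, y, th = x_pose
    c, s = np.cos(th), np.sin(th)
    d = np.asarray(f) - np.array([x, y])
    # Rotate the world-frame offset into the robot frame.
    return np.array([c * d[0] + s * d[1], -s * d[0] + c * d[1]])

# Example: robot at (2, 1) facing 90 deg; feature at (2, 3) lies 2 m ahead.
f3_prime = inv_compose_point((2.0, 1.0, np.pi / 2), (2.0, 3.0))
print(f3_prime)   # → approximately [2. 0.]
```

The new local map then starts with the robot pose identically zero, and f3′ (with its transformed covariance) is the only state shared with the previous map.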

Figure 2
Fig. 2. Bayesian network that illustrates the process to generate a local map with its own base reference.

D. Recovering the Global Map

The process of building the two conditionally independent submaps can be summarized in three steps.

  1. Build the first submap, obtaining

$$p(\mathbf{x}_A, \mathbf{x}_C \mid \mathbf{z}_a). \tag{21}$$

  2. In the case of local maps, add to the first map the common elements relative to the last robot pose. Start the second map with the result of marginalizing out the noncommon elements

$$p(\mathbf{x}_C \mid \mathbf{z}_a) = \int p(\mathbf{x}_A, \mathbf{x}_C \mid \mathbf{z}_a)\, d\mathbf{x}_A. \tag{22}$$

  3. Continue building the second submap, adding new features to it, obtaining

$$p(\mathbf{x}_B, \mathbf{x}_C \mid \mathbf{z}_a, \mathbf{z}_b). \tag{23}$$

Our objective now is to combine the maps in (21) and (23) to obtain the joint distribution that corresponds to the global map. For doing so, the global map can be factorized as follows:

$$p(\mathbf{x}_A, \mathbf{x}_B, \mathbf{x}_C \mid \mathbf{z}_a, \mathbf{z}_b) = p(\mathbf{x}_A \mid \mathbf{x}_B, \mathbf{x}_C, \mathbf{z}_a, \mathbf{z}_b)\, p(\mathbf{x}_B, \mathbf{x}_C \mid \mathbf{z}_a, \mathbf{z}_b) = p(\mathbf{x}_A \mid \mathbf{x}_C, \mathbf{z}_a)\, p(\mathbf{x}_B, \mathbf{x}_C \mid \mathbf{z}_a, \mathbf{z}_b) \tag{24}$$

where the first equality comes from (2) and the second from the submap CI property (16). The second term in the factorization is directly the second submap (23). The first term can be obtained from the first submap by conditioning

$$p(\mathbf{x}_A \mid \mathbf{x}_C, \mathbf{z}_a) = \frac{p(\mathbf{x}_A, \mathbf{x}_C \mid \mathbf{z}_a)}{p(\mathbf{x}_C \mid \mathbf{z}_a)}.$$

Therefore, all the information needed to recover the global map can be obtained from the information stored in each of the submaps. Notice that no assumptions have been made about the particular distribution of the probability densities. The previous factorizations only depend on general probabilistic theorems and the intrinsic structure of SLAM.


Case of Gaussian Submaps

In this section, we will focus on the case when the probability densities are Gaussians represented in covariance form. Suppose we have built two submaps

$$p(\mathbf{x}_A, \mathbf{x}_C \mid \mathbf{z}_a) = \mathcal{N}\left( \begin{bmatrix} \hat{\mathbf{x}}_{A_a} \\ \hat{\mathbf{x}}_{C_a} \end{bmatrix}, \begin{bmatrix} P_{A_a} & P_{AC_a} \\ P_{CA_a} & P_{C_a} \end{bmatrix} \right) \tag{25}$$

$$p(\mathbf{x}_C, \mathbf{x}_B \mid \mathbf{z}_a, \mathbf{z}_b) = \mathcal{N}\left( \begin{bmatrix} \hat{\mathbf{x}}_{C_{ab}} \\ \hat{\mathbf{x}}_{B_{ab}} \end{bmatrix}, \begin{bmatrix} P_{C_{ab}} & P_{CB_{ab}} \\ P_{BC_{ab}} & P_{B_{ab}} \end{bmatrix} \right) \tag{26}$$

where uppercase subindices denote state vector components, whereas lowercase subindices describe which observations z have been used to obtain the estimate. For example, in the first submap, the common elements xC have been estimated using only observations za; hence, the mean and covariance estimates are denoted by $\hat{\mathbf{x}}_{C_a}$ and $P_{C_a}$, respectively.

We are interested in recovering the global map, represented by

$$p(\mathbf{x}_A, \mathbf{x}_B, \mathbf{x}_C \mid \mathbf{z}_a, \mathbf{z}_b) = \mathcal{N}\left( \begin{bmatrix} \hat{\mathbf{x}}_{A_{ab}} \\ \hat{\mathbf{x}}_{C_{ab}} \\ \hat{\mathbf{x}}_{B_{ab}} \end{bmatrix}, \begin{bmatrix} P_{A_{ab}} & P_{AC_{ab}} & P_{AB_{ab}} \\ P_{CA_{ab}} & P_{C_{ab}} & P_{CB_{ab}} \\ P_{BA_{ab}} & P_{BC_{ab}} & P_{B_{ab}} \end{bmatrix} \right). \tag{27}$$

Comparing (26) and (27), we observe that the second local map by itself coincides exactly with the last two blocks of the global map. Only the terms related to xA in the global map will need to be computed. This is because the first submap has only been updated with the observations za, but not with the more recent observations zb. In the next sections, we will show how to backpropagate zb to update the first submap and how to compute the correlation $P_{AB_{ab}}$ between both submaps.

A. Backpropagation

From the submap CI property, we know that

$$p(\mathbf{x}_A \mid \mathbf{z}_a, \mathbf{z}_b, \mathbf{x}_C) = p(\mathbf{x}_A \mid \mathbf{z}_a, \mathbf{x}_C) = \mathcal{N}(\hat{\mathbf{x}}_{A \mid C}, P_{A \mid C}). \tag{28}$$

The conditional distribution p(xA | za, zb, xC) can be obtained from the global map by marginalizing out xB using (8) and conditioning on xC using (10) and (11)

$$\hat{\mathbf{x}}_{A \mid C} = \hat{\mathbf{x}}_{A_{ab}} + P_{AC_{ab}} P_{C_{ab}}^{-1} (\mathbf{x}_C - \hat{\mathbf{x}}_{C_{ab}}) \tag{29}$$

$$P_{A \mid C} = P_{A_{ab}} - P_{AC_{ab}} P_{C_{ab}}^{-1} P_{CA_{ab}}. \tag{30}$$

The conditional probability p(xA | za, xC) can also be obtained from the first map by conditioning on xC, which gives

$$\hat{\mathbf{x}}_{A \mid C} = \hat{\mathbf{x}}_{A_a} + P_{AC_a} P_{C_a}^{-1} (\mathbf{x}_C - \hat{\mathbf{x}}_{C_a}) \tag{31}$$

$$P_{A \mid C} = P_{A_a} - P_{AC_a} P_{C_a}^{-1} P_{CA_a}. \tag{32}$$

Equating (29) with (31) and (30) with (32) for all xC, and after some manipulations, we obtain the following backpropagation equations:

$$K = P_{AC_a} P_{C_a}^{-1} = P_{AC_{ab}} P_{C_{ab}}^{-1} \tag{33}$$

$$P_{AC_{ab}} = K\, P_{C_{ab}} \tag{34}$$

$$P_{A_{ab}} = P_{A_a} + K (P_{CA_{ab}} - P_{CA_a}) \tag{35}$$

$$\hat{\mathbf{x}}_{A_{ab}} = \hat{\mathbf{x}}_{A_a} + K (\hat{\mathbf{x}}_{C_{ab}} - \hat{\mathbf{x}}_{C_a}). \tag{36}$$

Observe that, in order to propagate the influence of the new observations zb to the first map, we only need the mean and covariance of the common elements from the second map, $\hat{\mathbf{x}}_{C_{ab}}$ and $P_{C_{ab}}$. An important property of the previous equations is that xA can be updated with the information contained in zb without having to compute the correlation $P_{AB_{ab}}$ between both maps.

An interesting property of the backpropagation equations is that they can be applied at any moment. They work correctly even if the same information is backpropagated twice: the terms inside the parentheses in (35) and (36) will be zero, and the maps will remain unchanged. This allows us to schedule the backpropagation at moments of low CPU load, or to delay it until a loop closure is detected.
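The backpropagation equations (33)-(36) and their idempotence can be sketched in a few lines. The function name, interface, and the toy 1-D blocks below are our own assumptions, not the authors' implementation:

```python
import numpy as np

def back_propagation(x_A, P_A, P_AC, x_C_old, P_C_old, x_C_new, P_C_new):
    """Fold the effect of the newer observations z_b into the first submap,
    using only the updated mean/covariance of the common elements x_C.
    Implements (33)-(36); P_CA = P_AC^T by symmetry."""
    K = P_AC @ np.linalg.inv(P_C_old)              # (33)
    P_AC_new = K @ P_C_new                         # (34)
    P_A_new = P_A + K @ (P_AC_new - P_AC).T        # (35)
    x_A_new = x_A + K @ (x_C_new - x_C_old)        # (36)
    return x_A_new, P_A_new, P_AC_new

# Toy 1-D numbers: the common estimate moved from (0.0, 1.0) to (0.2, 0.8).
xA, PA, PAC = np.array([1.0]), np.array([[2.0]]), np.array([[0.5]])
xA2, PA2, PAC2 = back_propagation(xA, PA, PAC,
                                  np.array([0.0]), np.array([[1.0]]),
                                  np.array([0.2]), np.array([[0.8]]))
# Applying it again with the same common estimate changes nothing:
xA3, PA3, PAC3 = back_propagation(xA2, PA2, PAC2,
                                  np.array([0.2]), np.array([[0.8]]),
                                  np.array([0.2]), np.array([[0.8]]))
print(xA2, PA2)   # → [1.1] [[1.95]]
```

The second call reproduces the first call's output exactly, which is the idempotence property that lets backpropagation be deferred until a loop closure or an idle CPU period.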

B. Computing the Correlation Between Submaps

To obtain the covariance matrix of the whole map, the correlation term $P_{AB_{ab}}$ must also be computed. For doing so, we first obtain the expression of the covariance of p(xA, xB | za, zb, xC) by conditioning the global map on xC:

$$\begin{bmatrix} P_{A_{ab}} - P_{AC_{ab}} P_{C_{ab}}^{-1} P_{CA_{ab}} & P_{AB_{ab}} - P_{AC_{ab}} P_{C_{ab}}^{-1} P_{CB_{ab}} \\ P_{BA_{ab}} - P_{BC_{ab}} P_{C_{ab}}^{-1} P_{CA_{ab}} & P_{B_{ab}} - P_{BC_{ab}} P_{C_{ab}}^{-1} P_{CB_{ab}} \end{bmatrix}. \tag{37}$$

Due to the submap CI property, we know that xA and xB are conditionally independent given xC, and therefore, the correlation term in (37) must be zero, which gives the following expression for the correlation term:

$$P_{AB_{ab}} = P_{AC_{ab}} P_{C_{ab}}^{-1} P_{CB_{ab}} = K\, P_{CB_{ab}}. \tag{38}$$

However, computing all the correlation blocks in the global map is an O(n²) operation and, in fact, they are never required by our method, as will be explained next.

Figure 3
Fig. 3. Schematic representation of the elements of the total map covariance matrix that are actually calculated with the proposed method. Mi represents the elements of submap i, whereas Ci are the common elements between submaps i and i+1.

EKF-SLAM With Conditionally Independent Submaps

A. Exploration

Fig. 3 shows a schematic view of the elements of the total covariance matrix that are actually calculated during the process of building up a sequence of conditionally independent submaps. Notice that the off-diagonal blocks of the matrix are not zero because the submaps are not independent. However, they are not required to obtain the global map. If the maximum size of the submaps is bounded by a constant, the process of building the CI submaps is O(1) per step. In the absence of loop closures, the last submap, including the current robot pose, is already suboptimal. The suboptimal estimation of the previous submaps can be obtained with a complete backpropagation in O(n). It is important to point out that the backpropagation operation is, in fact, delayed until a submap is revisited or a loop closure is detected, further reducing the computational cost of the algorithm.

Algorithm 1

A simple implementation of our SLAM method is shown in Algorithm 1. The implementation follows the structure of the standard EKF-SLAM algorithm but introduces two new functions: map_transition and back_propagation. Function back_propagation is implemented directly using (33)-(36). Function map_transition creates a new submap when the number of features in the current map exceeds a given threshold. When using absolute submaps, the common features are directly copied to the new map mj+1, and the last robot pose in map mj is replicated twice in the new submap. One of the copies will change as the robot moves through the new map, carrying the current position, while the other will remain as a common element with map mj to perform backpropagation. In the case of local submaps, map mj is augmented with the common features expressed relative to the last robot pose in map mj. These features are then copied to map mj+1, which is started with the robot pose equal to zero.
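A possible shape for map_transition in the absolute-submap case is sketched below. The state layout (index lists into a flat mean/covariance) and the pose-duplication bookkeeping are our own assumptions, not the authors' code:

```python
import numpy as np

def map_transition(mean, cov, robot_idx, common_feat_idx):
    """Start a new absolute submap from the current one: keep the common
    features and the robot pose, and duplicate the pose so that one copy
    evolves with the vehicle while the frozen copy remains a common
    element with the previous map (used later for backpropagation)."""
    keep = list(robot_idx) + list(common_feat_idx)
    m = mean[keep]
    P = cov[np.ix_(keep, keep)]
    # Duplicate the robot-pose entries at the front of the new state; an
    # exact copy is fully correlated, so its covariance blocks are copies.
    dup = list(range(len(robot_idx))) + list(range(len(keep)))
    return m[dup], P[np.ix_(dup, dup)]

# Toy 1-D example: robot state at index 0, one common feature at index 1.
mean = np.array([1.0, 5.0])
cov = np.array([[0.5, 0.1],
                [0.1, 2.0]])
new_mean, new_cov = map_transition(mean, cov, [0], [1])
print(new_mean)   # → [1. 1. 5.]
```

The first pose copy is then propagated by the motion model in the new submap, while the second stays fixed as the shared element xC between consecutive maps.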

When using absolute submaps, our technique is similar to postponement [11] and compressed EKF [12] in the sense that most operations are performed in a local area, and then, the results are propagated to the rest of the map, obtaining the same solution as with the basic EKF-SLAM. However, we never need to compute the covariance matrix of the whole map, which reduces the computational cost to O(1) for the local operations and to O(n) for the complete backpropagation.

B. Loop Closing

Fig. 4. Changes produced in the map structure when our loop closing algorithm is applied. (Top) Dependencies between submaps before closing the loop. (Middle) Final Bayesian network after reobserving f1 and closing the loop. (Bottom) Some nodes are grouped together to show more clearly that the CI between submaps is maintained.
Fig. 5. (Top) When the first submap is revisited, the CI is preserved by including the robot position x6 in the common elements between both submaps. (Bottom) By marginalizing x6 from submap 2 and relocating the robot in submap 1, the CI property is also guaranteed but the odometry information is lost.

Fig. 4 (top) shows the dependencies between three absolute submaps that have been built using our technique, before a loop closure. The pdfs that define the state of each submap are

$$\begin{aligned}
\hbox{Submap } 1 &\rightarrow p(\mathbf{x}_{1:3}, \mathbf{f}_{1:3} \mid \mathbf{z}_{1:2})\\
\hbox{Submap } 2 &\rightarrow p(\mathbf{x}_{3:5}, \mathbf{f}_{3:5} \mid \mathbf{z}_{1:4})\\
\hbox{Submap } 3 &\rightarrow p(\mathbf{x}_{5:7}, \mathbf{f}_{5:7} \mid \mathbf{z}_{1:6}). \qquad (39)
\end{aligned}$$

Observe that the most up-to-date map is the current one, submap 3, which takes into account all the available observations.

Now assume that the robot is at position x7 and it closes a loop by observing feature f1 through measurement z7. The algorithm used to maintain the CI between submaps is as follows.

Fig. 6. Screenshot of the features extracted and the map built. The whole process of building a sequence of CI local maps can be seen in the accompanying video. The colors used in the video are green for features predicted and matched, blue for predicted but not matched, and red for matchings rejected by JCBB.
  1. The loop closing features, in this example f1, are copied to the common parts of all the intermediate submaps belonging to the loop, including the current submap. The correlation of the copied features with the elements of each submap is calculated with (38). The pdfs of the submaps are now given by

$$\begin{aligned}
\hbox{Submap } 1 &\rightarrow p(\mathbf{x}_{1:3}, \mathbf{f}_{1:3} \mid \mathbf{z}_{1:2})\\
\hbox{Submap } 2 &\rightarrow p(\mathbf{x}_{3:5}, \mathbf{f}_{3:5}, \mathbf{f}_{1} \mid \mathbf{z}_{1:4})\\
\hbox{Submap } 3 &\rightarrow p(\mathbf{x}_{5:7}, \mathbf{f}_{5:7}, \mathbf{f}_{1} \mid \mathbf{z}_{1:6}). \qquad (40)
\end{aligned}$$

  2. The current submap (submap 3) is updated with the loop closing observations (z7) using the standard EKF equations. The state of the Bayesian network after performing the previous operations is shown in Fig. 4 (middle). In Fig. 4 (bottom), we have grouped some nodes together to clearly show that the CI property between submaps still holds. Submap 3 is now described by

$$\hbox{Submap } 3 \rightarrow p(\mathbf{x}_{5:7}, \mathbf{f}_{5:7}, \mathbf{f}_{1} \mid \mathbf{z}_{1:7}). \qquad (41)$$

  3. Due to the CI property, submaps 1 and 2 are updated using the backpropagation equations (33)–(36), obtaining

$$\begin{aligned}
\hbox{Submap } 1 &\rightarrow p(\mathbf{x}_{1:3}, \mathbf{f}_{1:3} \mid \mathbf{z}_{1:7})\\
\hbox{Submap } 2 &\rightarrow p(\mathbf{x}_{3:5}, \mathbf{f}_{3:5}, \mathbf{f}_{1} \mid \mathbf{z}_{1:7}). \qquad (42)
\end{aligned}$$

Notice that after applying this procedure, all the submaps are suboptimal (up to EKF linearization errors) because they have been updated with all the available information. The price paid to maintain conditional independence is that all the submaps belonging to the loop contain a copy of the loop closing features.
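The role of the backpropagation step can be sketched as a conditional-Gaussian update through the common elements: once a neighboring submap holds an updated estimate of the shared states, each submap in the chain absorbs it via the cross-covariance with its own common part. The Python fragment below is a reconstruction under that assumption (function names and the index bookkeeping in `commons` are hypothetical, and the paper's equations (33)–(36) may be arranged differently):

```python
import numpy as np

def back_propagate(x, P, c_idx, xc_new, Pcc_new):
    """Propagate an updated estimate of the common elements (xc_new, Pcc_new)
    into a submap (x, P) in covariance form: a conditional-Gaussian update
    through the common part (a reconstruction of the role of (33)-(36))."""
    Pcc = P[np.ix_(c_idx, c_idx)]          # prior covariance of the common part
    Pac = P[:, c_idx]                      # cross-covariance: all states vs. common
    K = Pac @ np.linalg.inv(Pcc)           # "gain" through the common elements
    x_new = x + K @ (xc_new - x[c_idx])
    P_new = P - K @ (Pcc - Pcc_new) @ K.T
    return x_new, P_new

def close_loop(submaps, commons, update_current):
    """Steps 2-3 of the loop-closing procedure: update the current submap with
    the loop-closing observations, then walk the chain backwards propagating
    the information.  commons[i] = (indices of the shared elements in submap
    i+1, indices of the same elements in submap i); hypothetical bookkeeping."""
    submaps[-1] = update_current(*submaps[-1])   # step 2: standard EKF update
    for i in range(len(submaps) - 2, -1, -1):    # step 3: backpropagate along the chain
        idx_next, idx_here = commons[i]
        x_up, P_up = submaps[i + 1]
        submaps[i] = back_propagate(submaps[i][0], submaps[i][1], idx_here,
                                    x_up[idx_next], P_up[np.ix_(idx_next, idx_next)])
    return submaps
```

Each `back_propagate` call touches only one submap and its (bounded) common part, which is what makes the total cost of recovering the global map linear in the number of submaps.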

In our algorithm, the current submap is always kept suboptimal. With the loop closing procedure described earlier, when the information is propagated to a neighboring map, that map becomes suboptimal as well. Repeating the process along the chain of submaps yields the global suboptimal map. Under the assumption that the size of the common part between submaps is bounded by a constant, the cost of each propagation is O(1), and the total cost of obtaining the global map after a loop closure is O(n). For this assumption to hold, the number of loops each local map belongs to must be bounded by a constant, regardless of the size of the environment. In extremely loopy environments, such as a Manhattan world, this requirement can easily be violated. Maintaining efficiency in such situations is the subject of future investigation.

C. Revisiting a Map

When a submap is revisited, the robot state that performs the transition to the revisited submap has to be included as a common element between both maps in order to preserve the CI property. Fig. 5 (top) shows an example in which the robot returns to the first submap when it is at x6 and reobserves feature f2. Including x6 as a common element of both maps preserves their CI, without introducing any approximation. A potential drawback of this approach is that the size of the common parts can grow without bound when revisiting the same environment indefinitely. However, if the number of times the submaps are revisited is bounded by a constant, the global map can still be obtained in O(n).

An alternative approximate solution that improves efficiency is to marginalize out the robot in the current map and relocate it in the revisited map, as shown in Fig. 5 (bottom). A similar technique is used in ESEIF [15] to maintain the sparsity of the information matrix. In this case, the odometry link is disregarded, and therefore, we lose some information (in the figure, node u5 has disappeared). Nevertheless, the information loss is minimal because we can use the features common to both maps to relocate the robot with good precision. In our pure monocular SLAM application, we do not even have odometry. Instead, we simply have a prediction of the camera location using a constant velocity model, whose accuracy is negligible compared with the accuracy of the visual observations.
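A minimal sketch of why this operation is cheap when working in covariance form, as our algorithm does: marginalizing states out of a Gaussian amounts to deleting the corresponding rows and columns, in contrast with the information form used by ESEIF, where marginalization requires a Schur complement and tends to densify the matrix (hypothetical helper names):

```python
import numpy as np

def marginalize(x, P, drop_idx):
    """Marginalize states out of a Gaussian in covariance form: simply delete
    the corresponding rows and columns of the mean and covariance (no Schur
    complement needed, unlike the information form)."""
    keep = np.setdiff1d(np.arange(len(x)), drop_idx)
    return x[keep], P[np.ix_(keep, keep)]

# toy example: drop the robot state (first two components) from a 4-state map
x = np.array([0.5, 1.0, 3.0, 4.0])
P = np.diag([0.1, 0.1, 0.2, 0.2])
xm, Pm = marginalize(x, P, drop_idx=np.array([0, 1]))
# xm = [3.0, 4.0], Pm = diag(0.2, 0.2)
```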


Experimental Results

The algorithm proposed has been tested using real data obtained in an urban environment using a handheld monocular camera. The features extracted from the images are Harris points. Data association is performed by predicting the feature locations in the image and searching for them with normalized correlation [30]. The set of matched features is further verified using the JCBB algorithm [9], which has been demonstrated to add the robustness needed to build monocular maps in urban areas [32]. The method implemented to detect loop closing is based on the map-to-map matching algorithm proposed in [32]. Basically, this method uses unary constraints, in this case the normalized correlation between feature patches, and binary constraints, the relative distances between feature points in space, to find the maximal subset of geometrically compatible matchings. To speed up the search, a specialized version of the geometric constraints branch and bound (GCBB) algorithm [39] is implemented. In the case of positive matches, we obtain the subset of features in the current map that corresponds to a subset of features in a previous map.
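The idea behind the binary constraints can be illustrated with a much-simplified stand-in (Python, hypothetical names and threshold): for every pair of candidate matches, the inter-feature distance must be approximately the same in both maps. The real GCBB search of [39] also exploits the estimated uncertainties and branches over match hypotheses rather than testing a single fixed matching.

```python
import numpy as np

def binary_compatible(pts_a, pts_b, pairs, tol=0.2):
    """Check that a candidate set of matches (pairs of indices into pts_a and
    pts_b) is geometrically consistent: every pairwise distance must agree in
    both maps up to a tolerance.  Simplified stand-in for the GCBB test."""
    for k in range(len(pairs)):
        for l in range(k + 1, len(pairs)):
            ia, ib = pairs[k]
            ja, jb = pairs[l]
            da = np.linalg.norm(pts_a[ia] - pts_a[ja])
            db = np.linalg.norm(pts_b[ib] - pts_b[jb])
            if abs(da - db) > tol:
                return False
    return True

# two maps of the same three points, related by a rigid rotation
a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
b = a @ R.T
ok = binary_compatible(a, b, [(0, 0), (1, 1), (2, 2)])    # correct matching passes
bad = binary_compatible(a, b, [(0, 1), (1, 0), (2, 2)])   # swapped matching fails
```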

Fig. 7. Comparison of the results obtained building two local maps. (Left) Classical independent local maps. (Middle) Conditionally independent local maps before backpropagation. (Right) After backpropagation.
Fig. 8. Map solutions obtained during the first 1000 steps of the experiment using a standard EKF implementation and our method with absolute submaps. Both solutions are superimposed in the figures to stress the differences before performing (left) the backpropagation and to emphasize the identity of the solutions (right) after backpropagating to all the previous maps.

The state vector of each submap Mi contains the final camera location xci and the 3-D location of all features (y1i, …, yni), with respect to the map base reference (absolute or local). For the feature representation, we use the inverse-depth model proposed in [31]

$$\begin{aligned}
\mathbf{x}^{T} &= (\mathbf{x}_{c}^{T}, \mathbf{y}_{1}^{T}, \mathbf{y}_{2}^{T}, \ldots, \mathbf{y}_{n}^{T}) & (43)\\
\mathbf{x}_{c}^{T} &= (\mathbf{r}^{T}, \boldsymbol{\Psi}^{T}, \mathbf{v}^{T}, \mathbf{w}^{T}) & (44)\\
\mathbf{y}_{i} &= (x_{i}\; y_{i}\; z_{i}\; \theta_{i}\; \phi_{i}\; \rho_{i})^{T}. & (45)
\end{aligned}$$

This feature model represents the feature state as the camera optical center location (xi, yi, zi) when the feature point was first observed, and the azimuth and elevation (θi, φi) of the ray from the camera to the feature point. Finally, the depth di along this ray is represented by its inverse ρi = 1/di. The main advantage of the inverse-depth parametrization is that it allows consistent undelayed initialization of the 3-D point features, regardless of their distance to the camera.
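A small sketch of how an inverse-depth feature maps back to a Euclidean point (Python, hypothetical function name): the point is the optical center at first observation plus the unit ray, scaled by the depth d = 1/ρ. The ray convention used here is the one of [31]; other axis conventions are possible.

```python
import numpy as np

def inverse_depth_to_point(y):
    """Convert an inverse-depth feature y = (x, y, z, theta, phi, rho) to a
    Euclidean 3-D point: the camera optical center at first observation plus
    the unit ray m(theta, phi) scaled by the depth d = 1/rho.  Ray convention
    assumed from [31]."""
    xi, yi, zi, theta, phi, rho = y
    m = np.array([np.cos(phi) * np.sin(theta),   # unit ray from azimuth/elevation
                  -np.sin(phi),
                  np.cos(phi) * np.cos(theta)])
    return np.array([xi, yi, zi]) + m / rho

# a feature first seen from the origin, straight along the optical axis, 4 m away
p = inverse_depth_to_point(np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.25]))
# p = [0, 0, 4]
```

Note how a distant (even near-infinite) feature is simply ρ → 0, which is why the parametrization supports undelayed initialization.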

Fig. 9. Comparison of the computational times required by the standard EKF and our CI method using absolute submaps after processing the first 1000 images of the experiment in a MATLAB implementation. At the end of the CI absolute time, we also show the minimal time spent on performing backpropagation to update all previous submaps.

The camera state xc contains the position of the camera in Cartesian coordinates r, its attitude in Euler angles Ψ, the linear velocity v, and its angular velocity w. The process model used for the camera motion is a constant velocity model with white Gaussian noise in the linear and angular accelerations. Using pure monocular vision, without any kind of odometry, the scale of the map is not observable. However, by choosing appropriate values for the initial velocities and the covariance of the process noise, the EKF-SLAM is able to obtain an approximate scale for the map.
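The constant velocity prediction can be sketched as follows (Python, hypothetical names). Two simplifications versus the real model: the Euler angles are composed additively, which is only valid for small inter-frame rotations, and the zero-mean acceleration noise that perturbs v and w is omitted.

```python
import numpy as np

def predict_camera(xc, dt):
    """Constant-velocity prediction for the camera state xc = (r, Psi, v, w):
    position and attitude integrate the (constant) linear and angular
    velocities; the velocities themselves are carried over unchanged.
    Simplified sketch: additive Euler-angle composition, no process noise."""
    r, Psi, v, w = xc[0:3], xc[3:6], xc[6:9], xc[9:12]
    return np.concatenate([r + v * dt, Psi + w * dt, v, w])

# camera moving at 1 m/s along x, filmed at 20 Hz (dt = 0.05 s)
xc = np.zeros(12); xc[6] = 1.0
xc1 = predict_camera(xc, dt=0.05)
# xc1[0] = 0.05: the camera advanced 5 cm between frames
```

Because v and w appear explicitly in the state, they are exactly the components that our CI submaps can share across map transitions, which is what keeps the scale consistent between consecutive submaps.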

Fig. 10. Final maps obtained with pure monocular SLAM building conditionally independent submaps in (left) absolute coordinates and (right) in local coordinates. The plots on the top show the maps obtained without backpropagation or loop closing. The ellipsoids correspond to the absolute uncertainty in the camera position at the end of each submap. The plots on the bottom show the final maps obtained after loop closing and backpropagation.

The experiment has been carried out along a public square in our hometown. The trajectory was performed with the handheld camera looking to the right, closing a loop by following approximately the same path. The sequence contains 2700 images taken at 20 Hz along a path of around 140 m. During the map building process, approximately 500 salient features are extracted and tracked from the surrounding buildings and objects. Fig. 6 shows one of the images obtained in the experiment with the corresponding features extracted and the map that is being built. The process of building a sequence of conditionally independent local submaps can be seen in the accompanying video. On the images, the features are depicted at their predicted locations.

The characteristics of the experiment make it suitable to show the benefits of sharing information between maps. Since a feature can be seen from different local maps, the technique proposed turns out to be very useful, allowing us to reuse a feature without having to reinitialize it in each local map. In addition, the linear and angular velocities of the camera, v and w, can be consistently shared between consecutive submaps, avoiding significant scale changes, a problem that needs to be addressed in techniques that build independent submaps [32]. Fig. 7 shows an example of the advantages of our technique with respect to previous local mapping techniques. By sharing information with the first map, the second submap has the same scale and is more precise than in the case of independent maps. Actually, the second submap is suboptimal (up to the EKF linearization approximations). After backpropagation, the estimation of the features in the first submap is also improved to become suboptimal.

Fig. 11. Final map and trajectory estimates superimposed to an aerial image of the real environment.

Fig. 8 compares the solutions obtained by a standard EKF algorithm and our method with absolute submaps for the first 1000 steps of the experiment. Six absolute submaps are created along this trajectory. The EKF and absolute-submap solutions are superimposed in the figures to facilitate the comparison. The left figure shows our solution before performing the backpropagation; notice that the two maps present several differences, although the last absolute submap gives the same solution as the EKF, since it is equally updated. In the right figure, we can see that, after updating the previous maps with the backpropagation, both solutions are exactly identical, as expected from the theoretical analysis.

The running times of both algorithms in a MATLAB implementation are shown in Fig. 9. Notice that our algorithm runs in constant time, since the size of the submaps is bounded, whereas the running time of the standard EKF solution grows quadratically, becoming more than ten times slower than our method after the first 1000 steps. When the backpropagation is performed to update the previous maps, the extra time required turns out to be just 0.17 s, which has little effect on the time of the last submap, as can be seen in the figure.

For comparison purposes, the whole dataset has been processed building absolute submaps and local submaps. In both cases, the maximum number of features per map has been limited to 50 and the total number of local maps created is 15. Fig. 10 presents the results obtained by both algorithms. The top plots show the maps obtained until the moment where the loop was detected by the map-to-map matching algorithm. The ellipsoids show the uncertainty in the camera position at the end of each local map, in absolute coordinates.

It can be noticed in the top left figure that the absolute submap technique gives an optimistic result. The uncertainty associated with the last camera position, around x = −8, y = 17, cannot explain the large gap that appears between the first estimate of the top wall and the new estimate obtained on the second pass. Nevertheless, the map matching algorithm used allows us to realize that both walls are indeed the same, and the loop can be closed, as shown in the bottom left figure. However, as the estimate was inconsistent, the final map has to be slightly deformed to satisfy the loop constraint.

In contrast, the local submap technique achieves better consistency, and as a consequence, a better estimate of the map and the trajectory. This is noticeable in the larger size of ellipsoids before the loop closure constraint is applied, which include the path performed during the first pass. After imposing the loop closure, the map obtained is quite precise. Fig. 11 shows the final map superimposed on a satellite image of the environment obtained from Google Earth. The scale and the absolute position and orientation of the map, which are not observable with pure monocular SLAM, were adjusted by hand to draw the figure. Notice how the feature points mapped follow the shape formed by the surrounding buildings.

Using the implementation described in [32], both algorithms are able to build the sequence of submaps of up to 60 features in real time at 20 Hz, including all image processing. For this map size, the running time of a standard EKF-SLAM implementation would increase quadratically up to about 2 s per step. In our current MATLAB implementation, the whole process of loop detection, loop closing, and backpropagation takes 1.15, 0.3, and 1.8 s, respectively. We expect that an optimized C++ implementation would take a fraction of a second. To maintain real-time performance at video frequency, loop closing can be implemented on a separate lower priority thread.
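One possible way to arrange the background loop closing, sketched in Python with a hypothetical worker: the filter thread hands immutable snapshots of the submap chain to a lower-priority thread through a queue, so the 20 Hz filter loop is never blocked by loop detection or backpropagation.

```python
import queue
import threading

def loop_closing_worker(tasks, results):
    """Background worker: runs the expensive loop detection / closing /
    backpropagation off the real-time filter loop.  Hypothetical sketch; a
    real system would guard the shared submaps with a lock or, as here, hand
    over immutable snapshots."""
    while True:
        snapshot = tasks.get()
        if snapshot is None:          # sentinel: shut down
            break
        # placeholder for detect_loop / close_loop / back_propagation
        results.put(("closed", len(snapshot)))
        tasks.task_done()

tasks, results = queue.Queue(), queue.Queue()
t = threading.Thread(target=loop_closing_worker, args=(tasks, results), daemon=True)
t.start()
tasks.put([1, 2, 3])                  # snapshot of the current submap chain
tasks.put(None)
t.join()
# results now holds ("closed", 3)
```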



In this paper, we have proposed a new technique that allows the use of submap algorithms without the constraint imposed by the requirement of probabilistic independence between submaps. Using this method, salient features of the environment or vehicle state components, such as velocity or global attitude, can be shared between local maps in a consistent manner. Our experiments show that this is extremely valuable for reducing the errors committed during the first steps of map initialization, especially for monocular vision.

Under the assumption that the size of the common parts between submaps is bounded by a constant, the backpropagation algorithm allows us to make updates from local map to local map in constant time. In addition, a loop closing algorithm that takes advantage of the structure of the conditionally independent maps has been proposed. By means of this algorithm, the loop closure can be performed with a computational cost that is linear in the number of local maps instead of quadratic in the total number of features. So, the global cost of our method is O(1) during exploration and O(n) during loop closing. Memory requirements are also O(n), because the whole covariance matrix is never computed.

Unlike many other techniques, this performance gain is not obtained by sacrificing precision. Our technique does not use sparsification or other approximations, apart from the intrinsic EKF linearizations. Using absolute submaps, the result obtained is the same as with the classical EKF-SLAM algorithm. Using local submaps, the inconsistencies introduced by the linearization errors are reduced, and the results obtained are much better, for a small fraction of the cost.

We believe that this paper opens the way for developing new efficient submapping algorithms. We plan to extend the technique to larger environments, where hierarchical map decomposition and nonlinear optimization techniques may be useful. The method presented here relies on the common part between submaps being small to achieve efficiency. Environments with more complicated topologies may require the development of new algorithms and maybe approximations. Regarding applications, we have demonstrated real-time monocular SLAM in moderately large urban environments. For reliable loop detection in larger areas, appearance-based methods [40] or image-to-map matching techniques [41] will be investigated. We are also interested in large-scale SLAM with systems that include inertial or other sensors, where the proposed technique will allow us to consistently share global information or sensor biases across submaps. A work in this line is [42] where the CI technique proposed here allows the consistent sharing of vehicle states and compass measurements between the local maps.


The authors would like to thank J. Neira, L. M. Paz, J. M. M. Montiel, and J. Civera for fruitful discussions and their help with the experimental setup.


Manuscript received August 9, 2007; revised April 15, 2008. First published September 26, 2008. This paper was recommended for publication by Associate Editor W. Burgard and Editor L. Parker upon evaluation of the reviewers' comments. This work was supported in part by the European Union under Project RAWSEEDS FP6-IST-045144 and in part by the Dirección General de Investigación of Spain under Project SLAM6DOF DPI2006-13578.

The authors are with the Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, Zaragoza E-50018, Spain.

This paper has supplementary downloadable multimedia material. In the video, we present experimental results using a handheld monocular camera in which we build a map along a closed-loop trajectory of 140 m in a public square, with people and other clutter. The whole process is based on building a sequence of CI local maps, where each local map is represented with a different color. The colors used in the camera images are green for features predicted and matched, blue for predicted but not matched, and red for matchings rejected by JCBB. The video cislam_xvid.avi has been tested on Windows Media Player with the XviD codec.

Color versions of one or more of the figures in this paper are available online.


1. Probabilistic Robotics

S. Thrun, W. Burgard, D. Fox

Cambridge, MA
MIT Press, 2005-09

2. Simultaneous localization and mapping: Part I

H. Durrant-Whyte, T. Bailey

IEEE Robot. Autom. Mag. Vol. 13, issue (2) pp. 99–110 2006-06

3. Simultaneous localization and mapping (SLAM): Part II

T. Bailey, H. Durrant-Whyte

IEEE Robot. Autom. Mag. Vol. 13, issue (3) pp. 108–117 2006-09

4. A stochastic map for uncertain spatial relationships

R. Smith, M. Self, P. Cheeseman

in Proc. Robot. Res., 4th Int. Symp., O. Faugeras and G. Giralt, Eds. Cambridge, MA: MIT Press, 1988, pp. 467–474

5. The SPmap: A probabilistic framework for simultaneous localization and map building

J. A. Castellanos, J. M. M. Montiel, J. Neira, J. D. Tardós

IEEE Trans. Robot. Autom. Vol. 15, issue (5) pp. 948–953 1999-10

6. A solution to the simultaneous localization and map building (SLAM) problem

M. W. M. G. Dissanayake, P. Newman, S. Clark, H. F. Durrant-Whyte, M. Csorba

IEEE Trans. Robot. Autom. Vol. 17, issue (3) pp. 229–241 2001-06

7. A counter example to the theory of simultaneous localization and map building

S. J. Julier, J. K. Uhlmann

Proc. IEEE Int. Conf. Robot. Autom. Seoul Korea, 2001 4, pp. 4238–4243

8. Robocentric map joining: Improving the consistency of EKF-SLAM

J. A. Castellanos, R. Martínez-Cantín, J. Neira, J. D. Tardós

Robot. Auton. Syst., Vol. 55, issue (1) pp. 21–29 2007-01

9. Data association in stochastic mapping using the joint compatibility test

J. Neira, J. D. Tardós

IEEE Trans. Robot. Autom. Vol. 17, issue (6) pp. 890–897 2001-12

10. Scalable SLAM building conditionally independent local maps

P. Piniés, J. D. Tardós

Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. San Diego, CA, 2007-10/11, pp. 3466–3471

11. Towards constant time SLAM using postponement

J. Knight, A. Davison, I. Reid

Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. 2001 1 pp. 406–412

12. Optimization of the simultaneous localization and map-building algorithm for real-time implementation

J. E. Guivant, E. M. Nebot

IEEE Trans. Robot. Autom. Vol. 17, issue (3) pp. 242–257 2001-06

13. Simultaneous localization and mapping with sparse extended information filters

S. Thrun, Y. Liu, D. Koller, A. Y. Ng, Z. Ghahramani, H. Durrant-Whyte

Int. J. Robot. Res., Vol. 23, issue (7/8) pp. 693–716 2004

14. Thin junction tree filters for simultaneous localization and mapping

M. A. Paskin

Proc. Int. Joint Conf. Artif. Intell., San Francisco, CA, 2003 pp. 1157–1164

15. Exactly sparse extended information filters for feature-based SLAM

M. R. Walter, R. M. Eustice, J. J. Leonard

Int. J. Robot. Res., Vol. 26, issue (4) pp. 335–359 2007

16. Exactly sparse delayed-state filters for view-based SLAM

R. M. Eustice, H. Singh, J. J. Leonard

IEEE Trans. Robot. Vol. 22, issue (6) pp. 1100–1114 2006-12

17. Fast, on-line learning of globally consistent maps

T. Duckett, S. Marsland, J. Shapiro

Auton. Robots, Vol. 12, issue (3) pp. 287–300 2002-05

18. A multigrid algorithm for simultaneous localization and mapping

U. Frese, P. Larsson, T. Duckett

IEEE Trans. Robot. Vol. 21, issue (2) pp. 1–12 2004-04

19. A discussion of simultaneous localization and mapping

U. Frese

Auton. Robots, Vol. 20, issue (1) pp. 25–42 2006-01

20. Treemap: An O(log n) algorithm for indoor simultaneous localization and mapping

U. Frese

Auton. Robots, Vol. 21, issue (2) pp. 103–122 2006-09

21. A new extension of the Kalman filter to nonlinear systems

S. Julier, J. Uhlmann

Proc. Int. Symp. Aerosp./Def. Sens., Simul. Control, Orlando, FL 1997 pp. 182–193

22. A computationally efficient method for large-scale concurrent mapping and localization

J. J. Leonard, H. J. S. Feder

in Proc. Robot. Res., 9th Int. Symp., D. Koditschek, J. Hollerbach, Snowbird, UT: Springer-Verlag, 2000, pp. 169–176

23. Robust mapping and localization in indoor environments using sonar data

J. D. Tardós, J. Neira, P. M. Newman, J. J. Leonard

Int. J. Robot. Res., Vol. 21, issue (4) pp. 311–330 2002

24. An efficient approach to the simultaneous localisation and mapping problem

S. B. Williams, G. Dissanayake, H. Durrant-Whyte

Proc. IEEE Int. Conf. Robot. Autom. Washington, DC, 2002, 1, pp. 406–411

25. Divide and conquer: EKF SLAM in O(n)

L. M. Paz, J. D. Tardós, J. Neira

IEEE Trans. Robot. Vol. 24, issue (5) 2008-10

26. Consistent, convergent and constant-time SLAM

J. Leonard, P. Newman

Proc. Int. Joint Conf. Artif. Intell., Acapulco, Mexico 2003-08, pp. 1143–1150

27. SLAM in large-scale cyclic environments using the atlas framework

M. Bosse, P. M. Newman, J. J. Leonard, S. Teller

Int. J. Robot. Res., Vol. 23, issue (12) pp. 1113–1139 2004-12

28. Hierarchical SLAM: Real-time accurate mapping of large environments

C. Estrada, J. Neira, J. D. Tardós

IEEE Trans. Robot. Vol. 21, issue (4) pp. 588–596 2005-08

29. Real-time simultaneous localisation and mapping with a single camera

A. J. Davison

Proc. Int. Conf. Comput. Vis., Nice, France, Oct. 13–16, 2003 2, pp. 1403–1410

30. MonoSLAM: Real-time single camera SLAM

A. J. Davison, I. D. Reid, N. D. Molton, O. Stasse

IEEE Trans. Pattern Anal. Mach. Intell. Vol. 29, issue (6) pp. 1052–1067 2007-06

31. Inverse depth parametrization for monocular SLAM

J. Civera, A. J. Davison, J. M. M. Montiel

IEEE Trans. Robot. Vol. 24, issue (5) 2008-10

32. Mapping large loops with a single hand-held camera

L. Clemente, A. J. Davison, I. D. Reid, J. Neira, J. D. Tardós

Proc. Robot.: Sci. Syst., Atlanta, GA, 2007-06

33. FastSLAM: A factored solution to the simultaneous localization and mapping problem

M. Montemerlo, S. Thrun, D. Koller, B. Wegbreit

Proc. AAAI Nat. Conf. Artif. Intell. 2002, pp. 593–598

34. Improved Techniques for Grid Mapping With Rao–Blackwellized particle filters

G. Grisetti, C. Stachniss, W. Burgard

IEEE Trans. Robot. Vol. 23, issue (1) pp. 34–46 2007-02

35. Toward a unified Bayesian approach to hybrid metric-topological SLAM

J. L. Blanco, J. A. Fernandez-Madrigal, J. González

IEEE Trans. Robot. Vol. 24, issue (2) pp. 259–270 2008-04

36. Probability, Random Variables and Stochastic Processes

A. Papoulis, S. Pillai

New York
McGraw-Hill, 2002

37. Estimation With Applications to Tracking and Navigation

Y. Bar-Shalom, X. R. Li, T. Kirubarajan

New York
Wiley, 2001

38. Pattern Recognition and Machine Learning

C. M. Bishop

New York
Springer-Verlag, 2006

39. Linear time vehicle relocation in SLAM

J. Neira, J. D. Tardós, J. A. Castellanos

Proc. IEEE Int. Conf. Robot. Autom. Taipei, Taiwan R.O.C., Sep. 10–19, 2003, pp. 427–433

40. Probabilistic appearance based navigation and loop closing

M. Cummins, P. Newman

Proc. IEEE Int. Conf. Robot. Autom. Rome, Italy Apr. 10–14, 2007, pp. 2042–2048

41. Automatic relocalisation for a single-camera simultaneous localisation and mapping system

B. Williams, P. Smith, I. Reid

Proc. IEEE Int. Conf. Robot. Autom. Roma, Italy, Apr. 10–14, 2007, pp. 2784–2790

42. Underwater SLAM in man-made structured environments

D. Rivas, P. Ridao, J. D. Tardós, J. Neira

J. Field Robot., Vol. 25, issue (8) pp. 1–24 2008-08


Pedro Piniés

Pedro Piniés (S'07) was born in Bilbao, Spain, in 1979. He received the M.S. degree in telecommunication engineering in 2004 from the University of Zaragoza, Zaragoza, Spain, where he is currently working toward the Ph.D. degree with the Robotics, Perception and Real Time Group.

His current research interests include simultaneous localization and mapping (SLAM), mobile robotics, computer vision, and probabilistic inference.

Juan D. Tardós

Juan D. Tardós (M'05) was born in Huesca, Spain, in 1961. He received the M.S. and Ph.D. degrees in electrical engineering from the University of Zaragoza, Zaragoza, Spain, in 1985 and 1991, respectively.

He is currently a Professor with the Departamento de Informática e Ingeniería de Sistemas, University of Zaragoza, where he is in charge of courses in robotics, computer vision, and artificial intelligence. His current research interests include simultaneous localization and mapping (SLAM), perception, and mobile robotics.

Published in IEEE Transactions on Robotics, vol. 24, issue 5, pp. 1094–1106, 2008.
