Cooperative SLAM for multiple-UGV navigation using the SVSF filter

ABSTRACT The aim of this paper is to present a cooperative simultaneous localization and mapping (CSLAM) solution based on a laser telemeter. The proposed solution enables a group of unmanned ground vehicles (UGVs) to construct a large map and localize themselves without any human intervention. Many solutions have been proposed for this problem, most of them sequential probabilistic approaches built around the Extended Kalman Filter (EKF) or the Rao-Blackwellized particle filter. To avoid the limitations of these methods, we propose a novel alternative based on the smooth variable structure filter (SVSF) to solve the UGV SLAM problem. This version of the SVSF-SLAM algorithm uses a boundary layer width vector and does not require covariance derivation. Our contribution is the adaptation of the SVSF to solve the CSLAM problem for multiple UGVs. The algorithms developed in this work were implemented on a swarm of Pioneer 3-AT mobile robots. Two mapping approaches, point-based and line-based, are implemented and validated experimentally using 2D laser telemeter sensors. The cooperative SVSF-SLAM algorithm achieves good results compared with the cooperative EKF-SLAM.


Introduction
In the last few years, simultaneous localization and mapping (SLAM) has become an important research topic in the robotics community. Accurate navigation can be achieved through accurate localization within an accurate map. Another technique, based on entropy minimization, is proposed in [1,2]. The foremost problems of cooperative SLAM (CSLAM) to be solved concern computational efficiency, software/network performance, and robustness to modelling errors. Multiple unmanned ground vehicle (UGV) systems are preferred for their benefits over single-robot systems. The main advantages of a cooperative system are: increased efficiency of the overall system, extended error tolerance, improved reliability and/or performance, rapidity, flexibility and reduced cost [1]. CSLAM is crucial when a global positioning system is unavailable. Solving the CSLAM problem with inexpensive sensors remains a challenge for researchers, and so far only limited effort has been made in the field of multiple-vehicle SLAM. According to Martinelli et al. [3], a number of vehicles can move within their environment and build a map of it cooperatively using sensor data fusion. Some other solutions for CSLAM, specifically cooperative visual SLAM [4], are based on an Extended Kalman filter. However, the latter is very sensitive to outliers and suffers from linearization: the lower bound for the map accuracy is violated due to errors introduced during the EKF linearization process, producing inconsistent estimates [5,6]. Solving CSLAM and CVSLAM based on particle filter estimation is presented by Thrun in [7]. Other studies have shown an important drawback related to computation time, which makes it unsuitable for hard real-time applications such as airborne navigation [8,9]. More recent developments have proposed filters that approximate the nonlinear SLAM problem, and CSLAM is presented in [10].
The work presented in [11] is part of a research effort on autonomous navigation for multiple robots based on a local submap strategy. In [12], a vision-based position drift minimization method is proposed for cooperative navigation of a quad-rotor and a ground robot. Recent research explores the use of multiple small mobile robots with only limited sensing and computational resources, where each robot uses a laser pointer and an event-based vision sensor to compute distances to its surroundings [13]. As an alternative method, another novel algorithm is considered here: the smooth variable structure filter (SVSF). The SVSF is a relatively new predictor-corrector estimation approach [14]. Introduced in 2007, it is based on variable structure theory and sliding-mode concepts [15]. As shown in several studies by Habibi, the SVSF offers robust and stable estimation in the presence of modelling uncertainties and errors [16]. To solve SLAM for multiple UGVs, two main kinds of solutions can be considered: centralized architectures and decentralized architectures. In this paper, a multiple-UGV system based on a centralized architecture is considered. The work presented in this paper is organized as follows: Section 2 depicts the process model of the UGV and the observation model. Section 3 describes the UGV-SLAM algorithm. The CSLAM of multiple UGVs is described in Section 4. Simulation and discussion are presented in Section 5. Experimentation and discussion of results are presented in Section 6. Finally, the conclusion is provided in Section 7.

Process model of UGV
The UGV used in our work is the Pioneer 3-AT (Figure 1); the P3-AT is a non-holonomic robot with four wheels. The state vector considered is $[X_r, Y_r, \theta_r]^T$, with $(X_r, Y_r)$ the coordinates of the robot in the global frame and $\theta_r$ its orientation. The non-holonomic constraint is written as follows [17]:
$$\dot{X}_r \sin\theta_r - \dot{Y}_r \cos\theta_r = 0$$
where $\omega$ is the rotation speed. When $v_y = 0$ and $v = v_x$, the kinematic robot model, obtained by discretizing expression (2), becomes
$$\begin{cases} X_{r,k+1} = X_{r,k} + \Delta T\, v_k \cos\theta_{r,k} + \epsilon_{x_r} \\ Y_{r,k+1} = Y_{r,k} + \Delta T\, v_k \sin\theta_{r,k} + \epsilon_{y_r} \\ \theta_{r,k+1} = \theta_{r,k} + \Delta T\, \omega_k + \epsilon_{\theta_r} \end{cases} \quad (3)$$
The robot evolution model reflects the relationship between the robot's previous state $X_{R,k}$ and its current state $X_{R,k+1}$. $\Delta T$ is the time step, and $\epsilon_{x_r}$, $\epsilon_{y_r}$, $\epsilon_{\theta_r}$ are noises arising from the encoders, wheel slipping, etc. The control vector of the robot is the translation and rotation velocity, $U_k = [v, \omega]^T$. In the SLAM algorithm, the state vector contains two parts: the UGV pose $X_R = [X_r, Y_r, \theta_r]^T \in \mathbb{R}^3$, and the estimated map $m = [x_1, y_1, \dots, x_M, y_M]^T \in \mathbb{R}^{2M}$, with $M$ the total number of landmarks. Equation (3) can then be written compactly as $X_{k+1} = f(X_k, U_k) + W_k$, with the landmark states remaining unchanged.
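The discrete motion update above can be sketched in a few lines. This is a minimal illustration of the standard unicycle discretization, with the noise terms omitted for clarity; the function and variable names are our own, not those of the authors' implementation.

```python
import math

def predict_pose(pose, v, w, dt):
    """Propagate (Xr, Yr, theta) one time step with translation speed v
    and rotation speed w over time step dt (noise-free sketch)."""
    x, y, th = pose
    return (x + dt * v * math.cos(th),
            y + dt * v * math.sin(th),
            th + dt * w)
```

For instance, a robot facing along the x-axis and driving straight at 1 m/s for 0.1 s advances 0.1 m in x and stays at the same heading.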

Direct observation point-based model
Perception provides landmark measurements (range and bearing) relative to the UGV frame (Figure 2(a)). We assume that there are $M$ landmarks in the environment, and that the laser range sensor is located vertically above the centre of the wheel axis. When a landmark $m_i$ is observed, the UGV uses it as a measurement. The observed quantities are nonlinear functions of the state. $X_{m_i g}$ and $Y_{m_i g}$ are the coordinates of $m_i$ in the world frame, and $r_i$ and $\beta_i$ are the range and bearing of the landmark in the robot frame. The position and orientation of the UGV in the world frame are given by $[X_r, Y_r, \theta_r]^T$. Each point of the scan is considered as a landmark and is represented by two parameters in polar coordinates, $[r_i, \beta_i]$ [16,18]. The point coordinates in the global frame are obtained from their coordinates in the local frame by
$$\begin{cases} X_{m_i g} = X_r + X_{rl}\cos\theta_r - Y_{rl}\sin\theta_r \\ Y_{m_i g} = Y_r + X_{rl}\sin\theta_r + Y_{rl}\cos\theta_r \end{cases}$$
and the overall inverse transformation to the local frame follows by inverting this rotation and translation, where $(X_{m_i g}, Y_{m_i g})$ are the coordinates of the landmark in the global frame, $(X_{rl}, Y_{rl})$ are its coordinates in the local frame, $(X_r, Y_r)$ is the position of the robot in the global frame, and $\theta_r$ is the orientation of the robot.
The superscript $i$ denotes the $i$th sample in the landmark sample set. The observation given by the laser is $Z_k = [r_{i,k}, \beta_{i,k}]^T$. The direct observation model $Z_{i,k}$ is expressed as [18]:
$$\begin{cases} r_i = \sqrt{(X_{m_i g} - X_r)^2 + (Y_{m_i g} - Y_r)^2} + \epsilon_{r_i} \\ \beta_i = \operatorname{atan2}(Y_{m_i g} - Y_r,\; X_{m_i g} - X_r) - \theta_r + \epsilon_{\beta_i} \end{cases}$$
where $\epsilon_{r_i}$ is the noise on the measured range and $\epsilon_{\beta_i}$ is the noise on the bearing in the local frame.

Inverse observation point-based model
Consider the example shown in Figure 2(b), where the robot is at position $(X_r, Y_r)$ and observes a landmark $m_{new}$ with coordinates $(X_{new}, Y_{new})$ in the world frame using a laser rangefinder. Let $Z_i = [r_{il}, \beta_{il}]$ be the observation of landmark $m_{new}$ by the UGV. The landmark mapping model is an inverse observation model $X_{m_{new}}$; using the UGV pose and the laser measurement, it can be written as follows [16,18]:
$$\begin{cases} X_{new} = X_r + r_{il}\cos(\theta_r + \beta_{il}) \\ Y_{new} = Y_r + r_{il}\sin(\theta_r + \beta_{il}) \end{cases}$$
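The direct and inverse point observation models can be sketched together; applying one after the other should recover the landmark exactly. This is a minimal illustration with hypothetical function names, not the authors' implementation.

```python
import math

def direct_obs(pose, lm):
    """Range and bearing of world-frame landmark lm = (Xg, Yg)
    as seen from pose = (Xr, Yr, theta)."""
    xr, yr, th = pose
    dx, dy = lm[0] - xr, lm[1] - yr
    return math.hypot(dx, dy), math.atan2(dy, dx) - th

def inverse_obs(pose, r, b):
    """World-frame coordinates of a landmark observed at range r and
    bearing b from pose = (Xr, Yr, theta)."""
    xr, yr, th = pose
    return xr + r * math.cos(th + b), yr + r * math.sin(th + b)
```

The round trip `inverse_obs(pose, *direct_obs(pose, lm))` returns `lm` up to floating-point error, which is a useful sanity check when implementing either model.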

Direct observation line-based model
In the literature, there are several techniques for line fitting. We use the Euler representation, in which a line is described by its polar parameters. In this case, the direct observation model $[r_{il}, \beta_{il}]$ is given by (Figure 3):
$[r_{il}, \beta_{il}]^T$: polar parameters of the $i$th line feature in the UGV frame.
$[r_{ig}, \beta_{ig}]^T$: polar parameters of the $i$th line feature in the world frame.
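One common way to realize this direct line observation model, assuming the normal-form parameterization $\rho = x\cos\beta + y\sin\beta$, is the transform below. It is a sketch under that assumption, not necessarily the authors' exact formulation.

```python
import math

def line_to_robot_frame(pose, r_g, b_g):
    """Map world-frame line parameters (r_g, b_g) into the UGV frame
    at pose = (Xr, Yr, theta): the normal distance shrinks by the
    robot's projection onto the line normal, the angle by the heading."""
    xr, yr, th = pose
    r_l = r_g - (xr * math.cos(b_g) + yr * math.sin(b_g))
    b_l = b_g - th
    return r_l, b_l
```

A robot at the world origin with zero heading sees the line parameters unchanged, which is an easy check of the convention.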

Inverse observation line-based model
The mapping model is an inverse observation model $[r_{ig}, \beta_{ig}]$ of the used features; based on knowledge of the UGV state and the observation, it can be written as follows [18]:

UGV-SLAM
On the basis of range and bearing measurements taken with respect to line and point landmarks, the UGV simultaneously builds a map of the environment and localizes itself within it. In this paper, we present two methods for associating landmarks. The first is point-to-point association, based on minimizing the Euclidean distance. Since the processing time of the point-to-point approach is considerable and it requires a large amount of memory, especially in long-term navigation, we propose a second association method that is faster and more robust. This method is based on line-to-line association, which is very efficient for line extraction [19]. The most popular tools for working with a nonlinear dynamic model are EKF-SLAM and FastSLAM. These filters make it possible to localize and build a map of an unknown environment using only observations of features detected by the robot's sensors. However, the extended Kalman filter suffers from linearization errors, and FastSLAM, which uses the Rao-Blackwellized particle filter, is not well suited to real-time implementation. We therefore propose a robust and stable predictor-corrector estimation filter based on sliding-mode theory, adapted to the UGV localization and mapping problem. Different approaches are implemented and compared: we start with the popular EKF solution, then use the robust SVSF filter. The latter makes no assumptions on the noise characteristics and is robust against modelling errors, which is crucial for real-time applications.
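The point-to-point association step described above can be sketched as a nearest-neighbour search with a gating threshold. The threshold value and function names are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def associate(observed, landmarks, gate=0.5):
    """Match each observed point (N x 2) to the nearest map landmark
    (M x 2) by Euclidean distance. Observations farther than `gate`
    from every landmark are flagged as candidate new landmarks."""
    pairs, new = [], []
    landmarks = np.asarray(landmarks, float)
    for i, p in enumerate(np.asarray(observed, float)):
        d = np.linalg.norm(landmarks - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= gate:
            pairs.append((i, j))   # matched to existing landmark j
        else:
            new.append(i)          # no match within the gate
    return pairs, new
```

This brute-force search is O(N·M) per scan, which illustrates why the text calls the point-to-point approach expensive in long-term navigation as the map grows.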

EKF algorithm
The system and nonlinear measurement models used for state estimation can be written in discrete form as follows:
$$X_{k+1} = f(X_k, U_k) + W_k, \qquad Z_{k+1} = h(X_{k+1}) + V_{k+1}$$
where $X_{k+1}$ is the state vector and $U_k$ is the input control vector. The objective of the SLAM problem is to recursively estimate the state $X_{k+1}$ from the measurement $Z_{k+1}$ at each time step. $W_k$ is an uncorrelated zero-mean white process noise with known covariance $Q_k$, and $V_k$ is a measurement noise with known covariance $R_k$. $X_{k+1/k}$ denotes the predicted state vector and $X_{k+1/k+1}$ the estimated state vector; $K_{k+1}$ is the Kalman gain matrix and $P$ is the covariance matrix. $A_k$ and $F_k$ are the Jacobian matrices of the state equation $f$ with respect to the state vector and the noise variable $W_k$, respectively; $H_{k+1}$ is the Jacobian matrix of the measurement $h$ with respect to the state vector $X_{k+1}$.
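As a concrete illustration of these equations, the following sketch runs one EKF predict/correct cycle for a state holding one robot pose and one landmark, with a range-bearing measurement. The motion model, noise values and function names are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ekf_step(x, P, u, z, Q, R, dt):
    """x = [xr, yr, th, mx, my]; u = [v, w]; z = [range, bearing]."""
    xr, yr, th, mx, my = x
    v, w = u
    # --- Prediction: unicycle motion model; the landmark is static ---
    x_pred = np.array([xr + dt * v * np.cos(th), yr + dt * v * np.sin(th),
                       th + dt * w, mx, my])
    A = np.eye(5)
    A[0, 2] = -dt * v * np.sin(th)
    A[1, 2] = dt * v * np.cos(th)
    P_pred = A @ P @ A.T + Q
    # --- Correction: range-bearing measurement of the landmark ---
    dx, dy = x_pred[3] - x_pred[0], x_pred[4] - x_pred[1]
    q = dx * dx + dy * dy
    r = np.sqrt(q)
    z_hat = np.array([r, np.arctan2(dy, dx) - x_pred[2]])
    H = np.array([[-dx / r, -dy / r, 0.0, dx / r, dy / r],
                  [dy / q, -dx / q, -1.0, -dy / q, dx / q]])
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    innov = z - z_hat
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing
    x_new = x_pred + K @ innov
    P_new = (np.eye(5) - K @ H) @ P_pred
    return x_new, P_new
```

Note how the full covariance P (and with it every state entry) participates in each update; this is the cost the SVSF-based approach later avoids.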

SVSF algorithm
The variable structure filter (VSF) was introduced in 2003 as a new predictor-corrector estimation approach [15]. The SVSF was introduced in 2007 by Saeid R. Habibi [20]. This filter is based on sliding-mode control and estimation techniques. It is formulated in a predictor-corrector form, where gain switching ensures that the estimates converge to the true state values [14]. As shown in Figure 4, the SVSF makes use of an existence subspace and of a smoothing boundary layer to keep the estimates bounded within a region of the true state trajectory. The estimation process is summarized in Equations (20)-(26). Unlike the EKF, the SVSF neither maintains nor uses a covariance matrix. The SVSF uses a switching gain to drive the estimates to within a boundary of the true state values (i.e. the existence subspace) [14,15]. The gain is calculated as a function of the error in the predicted output, using the a priori and a posteriori measurement errors [21,22]:
where $e_{z,k+1/k}$ is the a priori measurement error; $e_{z,k/k}$ is the a posteriori measurement error; $\psi$ is the vector of smoothing boundary layer widths; $\gamma$ is the convergence rate; $\circ$ denotes the Schur (element-by-element) multiplication; $^{+}$ denotes the pseudo-inverse of a matrix; and $H_{k+1,j} = h_{k+1,j}(F_{X,i})$ is the derivative of $h$ (the measurement matrix) with respect to the state vector $X_{k+1}$. We note that $h$ depends only on the robot pose $R_{k+1}$ and the location of the $i$th landmark, where $i$ is the index of the associated landmark in the system state vector $X_{k+1}$ at time $k$, and $j$ is the index of an individual landmark observation in $Z_{k+1,j}$ at time $k$. The matrix $F_{X,i}$ is calculated as follows: We can write $(H_{k+1})^{+}$ as follows: Note that $\psi^{-1}$ is constructed from the smoothing boundary layer vector $\psi$, formulated as a diagonal matrix whose dimension is the number of measurements.
$\mathrm{sat}(\psi^{-1} e_{z,k+1/k})$ denotes the saturation function, whose elements are bounded between $-1$ and $+1$. The update of the state estimates can then be calculated, and the a priori and a posteriori output error estimates [14] follow. We note that in the SVSF-SLAM algorithm, for each observation only the UGV pose and the associated observed landmark $m_i$ are updated; in contrast, the EKF-SLAM algorithm updates the entire system state vector $X_{k+1}$. For more details on SVSF-SLAM, refer to [16].
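To make the corrector concrete, here is a minimal scalar sketch of an SVSF-style update with an identity measurement model ($h(x) = x$), in which the gain is built from the a priori and a posteriori errors and the saturated, boundary-layer-scaled error. It illustrates the mechanism only; it is not the full SVSF-SLAM update, and the function names are our own.

```python
def sat(a):
    """Saturation function: clamp a to [-1, 1]."""
    return max(-1.0, min(1.0, a))

def svsf_update(x_pred, z, e_post_prev, psi, gamma):
    """One SVSF-style corrector step for a scalar state with h(x) = x.
    psi: smoothing boundary layer width; gamma: convergence rate."""
    e_prior = z - x_pred                                    # a priori error
    k = (abs(e_prior) + gamma * abs(e_post_prev)) * sat(e_prior / psi)
    x_est = x_pred + k                                      # corrected state
    e_post = z - x_est                                      # a posteriori error
    return x_est, e_post
```

When the predicted state is far outside the boundary layer, the saturation is at ±1 and the correction cancels the full a priori error; inside the layer, the correction is smoothed, which is what keeps the estimate chattering-free.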

Cooperative SLAM of multiple UGVs
This work proposes a centralized approach to solve the cooperative multiple UGV-SLAM problem based on a laser observation model (Figure 5). A robust nonlinear SVSF filter is implemented to increase the consistency of the CSLAM solution. Each UGV performs part of the computation, but a central processor is necessary to combine the computations of each UGV and determine the final outcome [6]. The proposed algorithm is robust, stable and well adapted to real-time applications. In the multiple-vehicle SLAM problem, the estimated state vector comprises the poses of the multiple vehicles and the positions of the point features observed in the environment. This process is known as multiple-UGV SLAM or CSLAM. In the probabilistic approach, the pose of robot $UGV_j$ is $[x_{r_j}, y_{r_j}, \theta_{r_j}]^T$, with $j = 1{:}N$ and $N$ the number of robots used. In this case, the state vector $X_k$ contains the poses of the UGVs as well as the $M$ landmarks $m = [m_1, \dots, m_M]$. The nonlinear discrete-time state transition equation given by Equation (3) can be extended to the multiple-UGV case as follows:
$$X_{k+1} = f(X_k, U_k) + W_k, \qquad Z_{k+1} = h(X_{k+1}) + V_{k+1}$$
Here $f$ is a discrete nonlinear model, $X_{k+1}$ is the state vector at time step $k+1$, $W_k$ is additive process noise, $Z_{k+1}$ is the observation taken at time $k+1$ by all the UGVs, and $V_k$ is additive observation noise. The principle of the filtering method is to estimate $X_{k+1}$ using the observation model $Z_{k+1}$, where $h$ is a continuous observation model. The CSLAM state vector stacks the UGV poses and the landmark positions. The Jacobian matrix of $f$ with respect to $X_{k+1}$ ($F_{sys} = \partial f / \partial X_{k+1}$) is block-diagonal, where $F_{ugv_j}$ is the Jacobian of $f$ with respect to the state of the $j$th UGV. The Jacobian is computed at each time step around the current estimated state, which linearizes the process model around the estimate.
The state vector of each UGV is described as above, and the observation vector gathers the measurements from all UGVs. $P_{k+1}$ is the global covariance matrix estimated at time $k+1$ over all the UGVs. The Jacobian matrix of $f$ with respect to $w_k$ is $F_{u,sys} = \partial f / \partial w_k$; the Jacobian matrix of $h$ with respect to $X_{k+1}$ is computed for the $j$th UGV and $i$th landmark; and the Jacobian matrix of $h$ with respect to $v_k$ completes the linearization. The off-diagonal elements of the covariance matrix represent the correlations between pairs of state variables. The main drawback of the EKF-SLAM algorithm is its computational complexity, due to the dynamic size of the SLAM state vector: the covariance matrix grows quadratically with the number of landmarks in the map as new features are observed. In a two-dimensional world, the size of the covariance matrix is $(2M + 3) \times (2M + 3)$, where $M$ is the total number of landmarks in the map. The SVSF is a good alternative with better performance than the EKF: it uses two key variables in its process, the a priori and a posteriori output error estimates, without using a covariance matrix. A CSLAM algorithm based on the SVSF is introduced in this section.
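The centralized state assembly described above can be sketched as follows: the global state stacks each UGV pose and the shared landmarks, and $F_{sys}$ is block-diagonal with one 3×3 block per UGV and identity over the (static) landmark states. The function names are illustrative assumptions.

```python
import numpy as np

def stack_state(poses, landmarks):
    """poses: list of [x, y, theta] per UGV; landmarks: list of [x, y].
    Returns the stacked CSLAM state vector."""
    parts = [np.asarray(p, float) for p in poses]
    parts += [np.asarray(m, float) for m in landmarks]
    return np.concatenate(parts)

def f_sys(pose_jacobians, n_landmarks):
    """Block-diagonal process Jacobian F_sys: one 3x3 block per UGV,
    identity over the landmark states (landmarks do not move)."""
    n = 3 * len(pose_jacobians) + 2 * n_landmarks
    F = np.eye(n)
    for j, Fj in enumerate(pose_jacobians):
        F[3 * j:3 * j + 3, 3 * j:3 * j + 3] = Fj
    return F
```

The block-diagonal structure reflects that each UGV's motion depends only on its own pose; the coupling between robots enters through the shared landmark observations, not through the process model.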

Simulation and discussion
Numerical simulations are reported to illustrate the performance of the proposed approach. In our simulations, we assume that two UGVs navigate in an unknown environment using odometry and 2D laser models. The results of our approach are compared with the cooperative EKF-SLAM algorithm. In this study, fixed sampling rates are used for each sensor and filter. The simulation results are provided in Figures 6 and 7.

First experiment: with white-centred Gaussian noise
In the first experiment, we assume zero-mean (centred) white Gaussian noise for the process and observation models. The scenario represents a simple case in which two robots move along circular trajectories and cross paths, measuring their relative positions while performing simultaneous localization and mapping with the EKF-SLAM and SVSF-SLAM approaches (Figure 6). Observing a new landmark improves the estimate of the robots' positions; consequently, part of the uncertainty on previously observed landmarks is eliminated. As can be seen from Figures 6 and 7, when the noise is centred Gaussian, reasonably good EKF-SLAM estimates are obtained for $x$, $y$ and $\theta$, compared with the SVSF-SLAM multi-robot trajectory. However, Figure 7 clearly shows that the estimated error increases significantly along $x$, $y$ and $\theta$. Moreover, from the same figure, we can observe that at iteration 300 a loop closure is detected. At this moment, we observe a significant improvement in the accuracy of both EKF-SLAM and SVSF-SLAM, especially with white Gaussian noise. When a robot closes the loop (revisiting old landmarks, or when the UGVs observe a common feature), the UGVs also correct the positions of the landmarks (the map).
The RMSE (root mean square error) of the UGV positions is given in Figure 8, which compares the RMSE of SVSF-SLAM and EKF-SLAM in the presence of white Gaussian noise. In this case, the EKF-SLAM algorithm is more accurate than the SVSF-SLAM algorithm. This can be explained by the optimality of the EKF when the process and observation models are accurately known.
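The position RMSE used for this comparison can be computed as the root mean square of the Euclidean position error over the trajectory; this is the standard definition, and the exact variant used in Figure 8 is assumed to match it.

```python
import math

def position_rmse(est, truth):
    """est, truth: equal-length lists of (x, y) positions.
    Returns the root mean square of the Euclidean position error."""
    sq = [(xe - xt) ** 2 + (ye - yt) ** 2
          for (xe, ye), (xt, yt) in zip(est, truth)]
    return math.sqrt(sum(sq) / len(sq))
```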
For j = 1:N                  % number of UGVs
    For i = 1:L              % number of features observed by UGV_j
        % Z^{i,j}_{k+1}: i-th feature observed by the j-th UGV
        Find the correspondence between Z^{i,j}_{k+1} and Ẑ_{k+1}
        If a correspondence is found:
            Compute the measurement error e_{z,k+1} = Z^{i,j}_{k+1} − Ẑ_{k+1}
            Compute the gain using the a priori and a posteriori measurement errors

Second experiment: with non-centred Gaussian noise
In this experiment, we assume white noise with a bias, using the following variance-covariance matrices for the process and observation models, respectively.
When the process and observation noises are non-centred, the performance of the EKF-SLAM estimator decreases significantly, as shown in Figure 9. In this case, the corrected values of $x$, $y$ and $\theta$ from the cooperative SVSF-SLAM are more accurate than those estimated by the cooperative EKF-SLAM. This is explained by the efficiency of the SVSF algorithm compared with the EKF: SVSF-based SLAM shows much better navigation and mapping performance than EKF-SLAM and provides accurate robot positions. Comparing the results of this experiment, the SVSF can perform much better than the EKF (see Figure 10), because this newer filter is robust and better suited to real-time implementation. The RMSE of the UGV positions in this experiment is given in Figure 11: the cooperative SVSF-SLAM achieves the best RMSE in comparison with the cooperative EKF-SLAM.

Third experiment: with coloured noise
In this experiment, in order to assess the robustness of the SVSF-SLAM algorithm compared with the EKF-SLAM, we assume coloured noise for the process and observation models. Figure 12 shows the poses of the two robots along the $x$, $y$ and $\theta$ axes. As can be seen from this figure, the $x$ position and $\theta$ orientation are estimated with significant accuracy when using cooperative SVSF-SLAM. This is explained by the fact that the SVSF filter provides precision and robustness against uncertain parameters and modelling errors. Figure 13 compares the position estimation errors of EKF-SLAM and SVSF-SLAM; as can be seen from this figure, good performance is obtained by the new filter for both robots. The SVSF-SLAM also achieves the best RMSE in comparison with the EKF-SLAM under coloured noise, as shown in Figure 14. From this experiment, we conclude that the SVSF-SLAM algorithm is more robust and more accurate, even in the presence of coloured noise in the process and observation models. Furthermore, when a loop closure is detected, a significant decrease of the RMSE is obtained with the SVSF algorithm. The SVSF-SLAM likewise achieves the best RMSE under non-centred Gaussian noise, as shown in Figures 10 and 13. These results clearly validate the advantages of the SVSF-SLAM over the EKF-SLAM, especially when the system or observation models are not accurately known and the process and observation noises are non-white or non-centred, a result confirmed by this third experiment with coloured process and observation noise.

Computational time evaluation
The structure of the cooperative SVSF-SLAM algorithm reduces the computational complexity of the algorithm. The EKF-SLAM's use of the full state vector to update the pose and map estimates is a drawback, because the computation time increases with the size of the state vector. The SVSF-SLAM, on the other hand, uses only the UGV pose and the currently observed feature to update the map and pose estimates, which makes the computation time independent of the size of the state vector. The computation time of the cooperative SVSF-SLAM remains almost constant per iteration, depending only on the number of visible features (Figure 15), whereas the computation time of the cooperative EKF-SLAM grows quadratically over time.

Experiment MobileSim simulator
In this section, we evaluate our algorithms using the MobileSim simulator. The scene dimensions are 20 m × 16 m (Figure 16). The multi-UGV SVSF/EKF-SLAM is validated with both the point-feature and line-segment-feature approaches, and the environment maps obtained from the simulation results are shown in Figure 17.

Experimentation and discussion of the results
In this experiment, we use two Pioneer P3-AT robots, each equipped with a laser mounted on top (see Figure 19). The P3-AT robots are operated in real time using the ARIA platform and a Wi-Fi network socket that maintains a data link between a fixed station PC and the mobile robots via the vncviewer.exe program (Virtual Network Computing), as can be seen in Figure 18. As the robots move, they use the laser to detect landmarks such as points or lines. In this experiment, the speed of each mobile robot is 0.4 m/s, the sampling time is 0.02 s, and the maximum range of the SICK laser is 15 m. The experimental data used in this section were collected at the experiment locations. The UGVs navigate in a rectangular area of approximately 14 m × 8 m (see Figure 19). The maps provided by the cooperative SVSF/EKF-SLAM, shown in Figure 20, are accurate and descriptive of the navigation environment; in particular, the borders of the two rooms are preserved and clearly visible. Feature association is more complicated with the line-based approach than with the point-based one, since line segments must be correctly matched to obtain the best map. For each part, simulation and experimental results have been presented to verify the effectiveness of the cooperative SVSF-SLAM algorithms for multiple UGVs.

Conclusion
In this work, we studied a number of technical problems that must be solved in order to increase UGV autonomy. First, UGV localization was investigated. Then, UGV map building was presented, followed by the implementation of a SLAM solution using 2D laser telemetry observations. Finally, the proposed solution was validated in the CSLAM case with two types of primitives: a point-based approach and a line-based approach. Our contribution is a new CSLAM solution for multiple-UGV navigation. The proposed algorithms were validated in simulation and on real data with two Pioneer P3-AT mobile robots, and satisfactory results, with good accuracy and robustness, were obtained with the SVSF filter without any assumption on the accuracy of the process and/or observation models. A real-time experiment illustrates the effectiveness of the cooperative SVSF-SLAM algorithm in solving multiple-UGV localization and mapping problems.

Disclosure statement
No potential conflict of interest was reported by the authors.