1. Wang X, Zhang H, Peng G. Evaluating and Optimizing Feature Combinations for Visual Loop Closure Detection. J Intell Robot Syst 2022. DOI: 10.1007/s10846-022-01575-7
2. Mereu R, Trivigno G, Berton G, Masone C, Caputo B. Learning Sequential Descriptors for Sequence-Based Visual Place Recognition. IEEE Robot Autom Lett 2022. DOI: 10.1109/lra.2022.3194310
Affiliation(s)
- Riccardo Mereu, Gabriele Trivigno, Gabriele Berton, Carlo Masone, Barbara Caputo: Visual and Multimodal Applied Learning Lab, Department of Control and Computer Engineering, Politecnico di Torino, Torino, Italy
3. Yu M, Zhang L, Wang W, Huang H. Loop Closure Detection by Using Global and Local Features With Photometric and Viewpoint Invariance. IEEE Trans Image Process 2021; 30:8873-8885. PMID: 34699356. DOI: 10.1109/tip.2021.3116898
Abstract
Loop closure detection plays an important role in many Simultaneous Localization and Mapping (SLAM) systems, but its main challenge lies in photometric and viewpoint variance. This paper presents a novel loop closure detection algorithm that is more robust to such variance by using both global and local features. Specifically, a global feature combining photometric and viewpoint invariance is learned by a Siamese network from the intensity, depth, gradient and normal-vector distributions. A rotation-invariant local feature is based on the histogram of relative pixel intensity and geometric information such as curvature and coplanarity. These two types of features are then jointly leveraged for robust detection of loop closures. Extensive experiments were conducted on publicly available RGB-D benchmark datasets such as TUM and KITTI. The results demonstrate that the algorithm can effectively address challenging scenarios with large photometric and viewpoint variance, and that it outperforms other state-of-the-art methods.
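The entry above does not spell out how the two feature types are combined at decision time. As a rough, hedged illustration of jointly scoring a candidate pair with a learned global descriptor and a precomputed local-feature similarity (the weighted-sum rule and the weight w are assumptions for illustration, not the authors' method):

```python
import numpy as np

def fused_loop_score(g1, g2, local_sim, w=0.5):
    """Combine a global-descriptor similarity with a local-feature score.

    g1, g2    : global descriptors (e.g. from a Siamese network)
    local_sim : precomputed local-feature similarity in [0, 1]
    w         : fusion weight (hypothetical, not from the paper)
    """
    cos = float(np.dot(g1, g2) / (np.linalg.norm(g1) * np.linalg.norm(g2) + 1e-12))
    return w * cos + (1.0 - w) * local_sim
```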
4. SVG-Loop: Semantic–Visual–Geometric Information-Based Loop Closure Detection. Remote Sensing 2021. DOI: 10.3390/rs13173520
Abstract
Loop closure detection is an important component of visual simultaneous localization and mapping (SLAM). However, most existing loop closure detection methods are vulnerable to complex environments and use only limited information from images. As higher-level image information and multi-information fusion can improve the robustness of place recognition, a semantic–visual–geometric information-based loop closure detection algorithm (SVG-Loop) is proposed in this paper. In detail, to reduce the interference of dynamic features, a semantic bag-of-words model is first constructed by connecting visual features with semantic labels. Second, to improve detection robustness in different scenes, a semantic landmark vector model is designed by encoding the geometric relationships of the semantic graph. Finally, semantic, visual, and geometric information is integrated by fusing the outputs of the two modules. Experiments on the TUM RGB-D dataset, the KITTI odometry dataset, and practical environments show that, compared with state-of-the-art methods, SVG-Loop has advantages in complex environments with varying light, changeable weather, and dynamic interference.
5. Prados Sesmero C, Villanueva Lorente S, Di Castro M. Graph SLAM Built over Point Clouds Matching for Robot Localization in Tunnels. Sensors 2021; 21:5340. PMID: 34450782. PMCID: PMC8399184. DOI: 10.3390/s21165340
Abstract
This paper presents a fully original graph SLAM algorithm developed for multiple environments, in particular for tunnel applications, where the paucity of features and the difficulty of distinguishing between different positions in the environment are problems to be solved. The algorithm is modular, generic, and extensible to all types of sensors based on point cloud generation, and it may be used for environmental reconstruction to generate precise models of the surroundings. Its structure includes three main modules: one estimates the initial position of the sensor or robot, another improves that estimate using point clouds, and the last generates an over-constrained graph that includes the point clouds, the sensor or robot trajectory, and the relations between positions in the trajectory and the loop closures.
6. Garg S, Milford M. SeqNet: Learning Descriptors for Sequence-Based Hierarchical Place Recognition. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2021.3067633
7. Sequence-based visual place recognition: a scale-space approach for boundary detection. Auton Robots 2021. DOI: 10.1007/s10514-021-09984-7
8. Zaffar M, Garg S, Milford M, Kooij J, Flynn D, McDonald-Maier K, Ehsan S. VPR-Bench: An Open-Source Visual Place Recognition Evaluation Framework with Quantifiable Viewpoint and Appearance Change. Int J Comput Vis 2021. DOI: 10.1007/s11263-021-01469-5
Abstract
Visual place recognition (VPR) is the process of recognising a previously visited place using visual information, often under varying appearance conditions and viewpoint changes and with computational constraints. VPR is related to the concepts of localisation, loop closure and image retrieval, and it is a critical component of many autonomous navigation systems ranging from autonomous vehicles to drones and computer vision systems. While the concept of place recognition has been around for many years, VPR research has grown rapidly as a field over the past decade, due to improving camera hardware and its potential for deep-learning-based techniques, and has become a widely studied topic in both the computer vision and robotics communities. This growth, however, has led to fragmentation and a lack of standardisation in the field, especially concerning performance evaluation. Moreover, the notion of viewpoint and illumination invariance of VPR techniques has largely been assessed qualitatively, and hence ambiguously, in the past. In this paper, we address these gaps through a new comprehensive open-source framework for assessing the performance of VPR techniques, dubbed "VPR-Bench" (open-sourced at https://github.com/MubarizZaffar/VPR-Bench). VPR-Bench introduces two much-needed capabilities for VPR researchers: firstly, a benchmark of 12 fully integrated datasets and 10 VPR techniques, and secondly, a comprehensive variation-quantified dataset for quantifying viewpoint and illumination invariance. We apply and analyse popular evaluation metrics for VPR from both the computer vision and robotics communities, and discuss how these different metrics complement and/or replace each other, depending on the underlying applications and system requirements. Our analysis reveals that no universal state-of-the-art (SOTA) VPR technique exists, since: (a) SOTA performance is achieved by 8 of the 10 techniques on at least one dataset, and (b) a SOTA technique in one community does not necessarily yield SOTA performance in the other, given the differences in datasets and metrics. Furthermore, we identify key open challenges, since: (c) all 10 techniques suffer greatly in perceptually aliased and less-structured environments, (d) all techniques suffer from viewpoint variance, where lateral change has less effect than 3D change, and (e) directional illumination change has more adverse effects on matching confidence than uniform illumination change. We also present detailed meta-analyses regarding the roles of varying ground truths, platforms, application requirements and technique parameters. Finally, VPR-Bench provides a unified implementation to deploy these VPR techniques, metrics and datasets, and is extensible through templates.
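Several of the metrics discussed above, such as area under the precision-recall curve and recall at 100% precision (a metric common in the loop-closure literature), can be computed with standard tooling. A minimal sketch with made-up scores follows; VPR-Bench's actual implementation lives in the linked repository:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, auc

# Hypothetical inputs: one similarity score per query, and a ground-truth
# flag saying whether the best-matching reference place is correct.
scores = np.array([0.91, 0.35, 0.78, 0.66, 0.12, 0.88])
correct = np.array([1, 0, 1, 0, 0, 1])

precision, recall, _ = precision_recall_curve(correct, scores)
print("AUC-PR:", auc(recall, precision))
# Highest recall achievable while precision stays at 100%:
print("R@P=1:", recall[precision == 1.0].max())
```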
9. Chen L, Jin S, Xia Z. Towards a Robust Visual Place Recognition in Large-Scale vSLAM Scenarios Based on a Deep Distance Learning. Sensors 2021; 21:310. PMID: 33466401. PMCID: PMC7796086. DOI: 10.3390/s21010310
Abstract
The application of deep learning is blooming in the field of visual place recognition, which plays a critical role in visual Simultaneous Localization and Mapping (vSLAM) applications. The use of convolutional neural networks (CNNs) achieves better performance than handcrafted feature descriptors. However, visual place recognition remains a challenging task due to two major problems: perceptual aliasing and perceptual variability. Designing a customized distance learning method that expresses the intrinsic distance constraints of large-scale vSLAM scenarios is therefore of great importance. Traditional deep distance learning methods usually use the triplet loss, which requires the mining of anchor images; this can result in tedious, inefficient training and anomalous distance relationships. In this paper, a novel deep distance learning framework for visual place recognition is proposed. Through in-depth analysis of the multiple constraints on distance relationships in the visual place recognition problem, a multi-constraint loss function is proposed to optimize the distance constraints in Euclidean space. The new framework can support any kind of CNN, such as AlexNet, VGGNet and other user-defined networks, to extract more distinguishing features. Compared with the traditional deep distance learning method, the proposed method improves performance by 19-28%. Additionally, compared with some contemporary visual place recognition techniques, it improves performance on average by 40%/36% and 27%/24% on VGGNet/AlexNet using the New College and TUM datasets, respectively. This verifies that the method can handle appearance changes in complex environments.
Affiliation(s)
- Liang Chen (corresponding author; Tel.: +86-185-5040-8581)
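The abstract contrasts the standard triplet loss with a multi-constraint loss but does not give the formula here. Below is a minimal PyTorch sketch of one plausible multi-constraint form, an assumed combination of a relative triplet constraint and an absolute ball constraint on matching pairs, not the paper's exact definition:

```python
import torch
import torch.nn.functional as F

def multi_constraint_loss(anchor, positive, negative, margin=0.5, eps=0.1):
    """Sketch of a multi-constraint embedding loss (assumed form).

    In addition to the usual triplet constraint d(a,p) + margin < d(a,n),
    matching pairs are also pulled inside a small ball of radius eps.
    """
    d_ap = F.pairwise_distance(anchor, positive)
    d_an = F.pairwise_distance(anchor, negative)
    triplet = F.relu(d_ap - d_an + margin)   # relative constraint
    absolute = F.relu(d_ap - eps)            # absolute constraint on matches
    return (triplet + absolute).mean()
```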
10. He J, Zhou Y, Huang L, Kong Y, Cheng H. Ground and Aerial Collaborative Mapping in Urban Environments. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2020.3032054
11. Liang Z, Zhu S, Fang F, Jin X. Simultaneous Localization and Mapping in a Hybrid Robot and Camera Network System. J Intell Robot Syst 2020. DOI: 10.1007/s10846-010-9446-3
12. Robust Loop Closure Detection Integrating Visual–Spatial–Semantic Information via Topological Graphs and CNN Features. Remote Sensing 2020. DOI: 10.3390/rs12233890
Abstract
Loop closure detection is a key module for visual simultaneous localization and mapping (SLAM). Most previous methods for this module have not made full use of the information provided by images: they have used only the visual appearance or only the spatial relationships of landmarks, without fully integrating visual, spatial and semantic information. In this paper, a robust loop closure detection approach integrating visual–spatial–semantic information is proposed, employing topological graphs and convolutional neural network (CNN) features. First, to reduce mismatches under different viewpoints, semantic topological graphs are introduced to encode the spatial relationships of landmarks, and random walk descriptors are employed to characterize the topological graphs for graph matching. Second, dynamic landmarks are eliminated using semantic information, and distinctive landmarks are selected for loop closure detection, thus alleviating the impact of dynamic scenes. Finally, to ease the effect of appearance changes, an appearance-invariant descriptor of each landmark region is extracted by a pre-trained CNN without specially designed manual features. The proposed approach weakens the influence of viewpoint changes and dynamic scenes, and extensive experiments conducted on open datasets and a mobile robot demonstrate that it performs more satisfactorily than state-of-the-art methods.
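As a sketch of the random-walk-descriptor idea mentioned above, one can characterize each node of a semantic topological graph by the label sequences observed along short random walks and match graphs by walk-set overlap. The walk length, walk count and Jaccard score below are illustrative assumptions, not the paper's settings:

```python
import random

def random_walk_descriptor(adj, labels, start, walk_len=4, n_walks=50, seed=0):
    """Describe a node by the multiset of label sequences seen on random walks.

    adj    : dict node -> list of neighbour nodes
    labels : dict node -> semantic label (e.g. 'car', 'tree')
    """
    rng = random.Random(seed)
    walks = set()
    for _ in range(n_walks):
        node, seq = start, [labels[start]]
        for _ in range(walk_len):
            if not adj[node]:
                break
            node = rng.choice(adj[node])
            seq.append(labels[node])
        walks.add(tuple(seq))
    return walks

def graph_similarity(desc_a, desc_b):
    """Jaccard overlap of two walk sets, used as a matching score."""
    return len(desc_a & desc_b) / max(1, len(desc_a | desc_b))
```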
13. RHIZOME ARCHITECTURE: An Adaptive Neurobehavioral Control Architecture for Cognitive Mobile Robots—Application in a Vision-Based Indoor Robot Navigation Context. Int J Soc Robot 2020. DOI: 10.1007/s12369-019-00602-2
14. Zaffar M, Ehsan S, Milford M, McDonald-Maier K. CoHOG: A Light-Weight, Compute-Efficient, and Training-Free Visual Place Recognition Technique for Changing Environments. IEEE Robot Autom Lett 2020. DOI: 10.1109/lra.2020.2969917
15. Han D, Li Y, Song T, Liu Z. Multi-Objective Optimization of Loop Closure Detection Parameters for Indoor 2D Simultaneous Localization and Mapping. Sensors 2020; 20:1906. PMID: 32235456. PMCID: PMC7180885. DOI: 10.3390/s20071906
Abstract
To address the tuning of loop closure detection parameters for indoor 2D graph-based simultaneous localization and mapping (SLAM), this article proposes a multi-objective optimization method for these parameters. The proposed method unifies the Karto SLAM algorithm, an efficient approach for evaluating map quality with three quantitative metrics, and a multi-objective optimization algorithm. In particular, the evaluation metrics (the proportion of occupied grids, the number of corners and the number of enclosed areas) can reflect errors such as overlaps, blurring and misalignment when mapping nested loops, even in the absence of ground truth. The method has been implemented and validated on four datasets and in two real-world environments, and in all of these tests the map quality was improved. Only loop closure detection parameters are considered in this article, but the proposed evaluation metrics and optimization method have potential applications in the automatic tuning of other SLAM parameters to improve map quality.
Affiliation(s)
- Dongxiao Han, Yuwen Li, Tao Song, Zhenyang Liu: Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 201900, China
- Yuwen Li (corresponding author) and Tao Song: also with the Shanghai Robot Industrial Technology Research Institute, Shanghai 200062, China
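The three map-quality metrics can be approximated directly on an occupancy grid. The sketch below uses crude stand-ins (thresholds, a neighbourhood corner test, connected components of free space) rather than the paper's exact definitions:

```python
import numpy as np
from scipy import ndimage

def map_quality_metrics(grid, occ_thresh=0.65, free_thresh=0.35):
    """Approximate the three metrics on a grid with cell values in [0, 1].

    Thresholds and heuristics are assumptions; lower occupied proportion,
    fewer spurious corners and the expected number of enclosed areas all
    indicate a cleaner map of nested loops.
    """
    occupied = grid > occ_thresh
    free = grid < free_thresh
    prop_occupied = occupied.mean()

    # Crude corner count: occupied cells with both a vertical and a
    # horizontal occupied neighbour (wall bends, junctions).
    up, down = np.roll(occupied, 1, 0), np.roll(occupied, -1, 0)
    left, right = np.roll(occupied, 1, 1), np.roll(occupied, -1, 1)
    corners = int((occupied & (up | down) & (left | right)).sum())

    # Enclosed areas: connected components of free space.
    _, n_enclosed = ndimage.label(free)
    return prop_occupied, corners, n_enclosed
```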
16. Neubert P, Schubert S, Protzel P. A Neurologically Inspired Sequence Processing Model for Mobile Robot Place Recognition. IEEE Robot Autom Lett 2019. DOI: 10.1109/lra.2019.2927096
17.
Abstract
This paper focuses on loop-closure detection (LCD) for a visual simultaneous localization and mapping (SLAM) system. We present a strategy that combines a Bayes filter and features from a pre-trained convolutional neural network (CNN) to perform LCD. Rather than using features from only one layer, we fuse features from multiple layers based on spatial pyramid pooling. A flexible Bayes model is then formulated to integrate the sequential information and the similarities computed from features at different scales. The introduction of a penalty factor and bidirectional propagation enables our approach to handle complex trajectories. We present extensive experiments on challenging datasets and compare our approach to state-of-the-art methods. The results show that our approach ensures remarkable performance under severe condition changes and handles trajectories with different characteristics. We also show the advantages of Bayes filters over sequence matching in the experiments, and we analyze our feature fusion strategy by visualizing the activations of the CNN.
Affiliation(s)
- Qiang Liu, Fuhai Duan: School of Mechanical Engineering, Dalian University of Technology, Dalian 116024, Liaoning, P. R. China
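A discrete Bayes filter of the kind described above maintains a posterior over loop-closure hypotheses, propagating it with a transition model before weighting by image similarity. A minimal numpy sketch, with a Gaussian transition and a likelihood floor standing in for the paper's penalty factor (both assumptions):

```python
import numpy as np

def bayes_filter_step(prior, similarities, trans_sigma=2.0, penalty=0.1):
    """One update of a discrete Bayes filter over database images.

    prior        : posterior over database images from the previous frame
    similarities : CNN-feature similarity of the current frame to each image
    """
    n = len(prior)
    idx = np.arange(n)
    # Transition: probability mass diffuses to temporally nearby images.
    trans = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / trans_sigma) ** 2)
    trans /= trans.sum(axis=1, keepdims=True)
    predicted = prior @ trans
    likelihood = np.maximum(similarities, penalty)  # floor acts as a penalty
    posterior = predicted * likelihood
    return posterior / posterior.sum()
```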
19. Sequence-based sparse optimization methods for long-term loop closure detection in visual SLAM. Auton Robots 2018. DOI: 10.1007/s10514-018-9736-3
20. Peter L, Tella-Amo M, Shakir DI, Attilakos G, Wimalasundera R, Deprest J, Ourselin S, Vercauteren T. Retrieval and registration of long-range overlapping frames for scalable mosaicking of in vivo fetoscopy. Int J Comput Assist Radiol Surg 2018; 13:713-720. PMID: 29546573. PMCID: PMC5953985. DOI: 10.1007/s11548-018-1728-4
Abstract
Purpose: The standard clinical treatment of twin-to-twin transfusion syndrome consists of the photocoagulation of undesired anastomoses located on the placenta which are responsible for blood transfer between the two twins. While being the standard-of-care procedure, fetoscopy suffers from a limited field of view of the placenta, resulting in missed anastomoses. To facilitate the task of the clinician, building a global map of the placenta providing a larger overview of the vascular network is highly desired.
Methods: To overcome the challenging visual conditions inherent to in vivo sequences (low contrast, obstructions or presence of artifacts, among others), we propose the following contributions: (1) robust pairwise registration is achieved by aligning the orientation of the image gradients, and (2) difficulties regarding long-range consistency (e.g. due to the presence of outliers) are tackled via a bag-of-words strategy, which identifies overlapping frames of the sequence to be registered regardless of their respective locations in time.
Results: In addition to visual difficulties, in vivo sequences are characterised by the intrinsic absence of a gold standard. We present mosaics that qualitatively motivate our methodological choices and demonstrate their promise. We also demonstrate semi-quantitatively, via visual inspection of registration results, the efficacy of our registration approach in comparison with two standard baselines.
Conclusion: This paper proposes the first approach for the construction of mosaics of the placenta from in vivo fetoscopy sequences. Robustness to visual challenges during registration and long-range temporal consistency are proposed, offering first positive results on in vivo data for which standard mosaicking techniques are not applicable.
Affiliation(s)
- Loïc Peter, Marcel Tella-Amo, Dzhoshkun Ismail Shakir, Sébastien Ourselin: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Jan Deprest, Tom Vercauteren: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK; Department of Development and Regeneration, Cluster Woman and Child, Centre for Surgical Technologies, KU Leuven, Leuven, Belgium
21. Zhang G, Liu H, Dong Z, Jia J, Wong TT, Bao H. Efficient Non-Consecutive Feature Tracking for Robust Structure-From-Motion. IEEE Trans Image Process 2016; 25:5957-5970. PMID: 27623586. DOI: 10.1109/tip.2016.2607425
Abstract
Structure-from-motion (SfM) relies largely on feature tracking. In image sequences, if disjointed tracks caused by objects moving in and out of the field of view, occasional occlusion, or image noise are not handled well, the resulting SfM can be affected. This problem becomes more severe for large-scale scenes, which typically require multiple sequences to cover the whole scene. In this paper, we propose an efficient non-consecutive feature tracking framework to match interrupted tracks distributed in different subsequences or even in different videos. Our framework solves the feature "dropout" problem when indistinctive structures, noise or large image distortion exist, and rapidly recognizes and joins common features located in different subsequences. In addition, we contribute an effective segment-based coarse-to-fine SfM algorithm for robustly handling large datasets. Experimental results on challenging video data demonstrate the effectiveness of the proposed system.
22. Fast Robot Localization Approach Based on Manifold Regularization with Sparse Area Features. Cognit Comput 2016. DOI: 10.1007/s12559-016-9427-3
23. Korrapati H, Mezouar Y. Multi-resolution map building and loop closure with omnidirectional images. Auton Robots 2016. DOI: 10.1007/s10514-016-9560-6
24. Lowry S, Sunderhauf N, Newman P, Leonard JJ, Cox D, Corke P, Milford MJ. Visual Place Recognition: A Survey. IEEE Trans Robot 2016. DOI: 10.1109/tro.2015.2496823
27. Song S, Xia S, Teng Z, Zhang S. A Precise and Real-Time Loop-closure Detection for SLAM Using the RSOM Tree. Int J Adv Robot Syst 2015. DOI: 10.5772/60687
Abstract
In robotic applications of visual simultaneous localization and mapping (SLAM) techniques, loop-closure detection determines whether or not a current location has previously been visited. We present an online, incremental approach that detects loops when images come from an already visited scene and learns new information from the environment. Instead of a bag-of-words model, an attributed graph model is applied to represent images and measure the similarity between pairs of images. To position a camera in visual environments in real time, the method retrieves images from the database through a clustering tree that we call the RSOM (recursive self-organizing feature map) tree. When matches are found between the current graph and graphs in the database, a threshold is used to judge whether the loop closure is accepted or rejected. The results demonstrate the method's accuracy and real-time performance on several videos collected by a vehicle-mounted camera in indoor and outdoor environments.
Affiliation(s)
- Siyang Song, Zhaosheng Teng, Shuimei Zhang: College of Electrical and Information Engineering, Hunan University, Hunan, China
- Shengping Xia: ATR Lab, School of Electronic Science and Engineering, National University of Defense Technology, Hunan, China
28. Ravari AN, Taghirad HD. Loop closure detection by algorithmic information theory: implemented on range and camera image data. IEEE Trans Cybern 2014; 44:1938-1949. PMID: 24968363. DOI: 10.1109/tcyb.2014.2300180
Abstract
In this paper, the problem of loop closing from depth or camera image information in an unknown environment is investigated. A sparse model is constructed from a parametric dictionary for every range or camera image obtained as a mobile robot observation. In contrast to high-dimensional feature-based representations, this model reduces the dimension of the sensor measurements' representations. Although loop closure detection can be viewed as a clustering problem in a high-dimensional space, existing state-of-the-art algorithms have paid little attention to the curse of dimensionality. In this paper, a representation is developed from a sparse model of images, with a lower dimension than the original sensor observations. Exploiting algorithmic information theory, the representation is developed so that it is invariant to geometric transformations in the sense of Kolmogorov complexity. A universal normalized metric is used to compare the complexity-based representations of image models. Finally, a distinctive property of the normalized compression distance is exploited to detect similar places and reject incorrect loop closure candidates. Experimental results show the efficiency and accuracy of the proposed method in comparison with state-of-the-art algorithms and some recently proposed methods.
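The universal metric referred to above is, in its computable form, the normalized compression distance, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)) for a real compressor C. A self-contained sketch using zlib; note the paper compares sparse image models rather than raw bytes, and the acceptance threshold below is hypothetical:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance, the computable stand-in for a
    Kolmogorov-complexity metric: values near 0 mean the two observations
    share most of their information (a likely loop closure)."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Hypothetical usage on two serialized observation models:
# if ncd(model_a_bytes, model_b_bytes) < 0.4: accept candidate loop closure
```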
29. Paul R, Newman P. Self-help: Seeking out perplexing images for ever improving topological mapping. Int J Rob Res 2013. DOI: 10.1177/0278364913509859
Abstract
In this work, we present a novel approach that allows a robot to improve its own navigation performance through introspection and targeted data retrieval. It is a step in the direction of life-long learning and adaptation, motivated by the desire to build robots whose competencies are plastic rather than baked in: they should react to, and benefit from, use. We consider a particular instantiation of this problem in the context of place recognition. Based on a topic-based probabilistic representation of images, we use a measure of perplexity to evaluate how well a working set of background images explains the robot's online view of the world. Offline, the robot then searches an external resource for additional background images that bolster its ability to localize in its environment when used next. In this way the robot adapts and improves performance through use. We demonstrate this approach using data collected from a mobile robot operating in outdoor workspaces.
Affiliation(s)
- Rohan Paul, Paul Newman: Oxford University Mobile Robotics Research Group, UK
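The perplexity measure driving the introspection above has a standard form: the exponential of the average negative log-likelihood that the model assigns to the observed visual words. A minimal sketch, with an assumed interface:

```python
import numpy as np

def perplexity(word_probs):
    """Perplexity of a query image's visual words under a topic model.

    word_probs : probabilities the model assigns to each observed word.
    High perplexity flags images the background set explains poorly,
    i.e. the places worth bolstering with retrieved data.
    """
    word_probs = np.asarray(word_probs, dtype=float)
    return float(np.exp(-np.mean(np.log(word_probs + 1e-12))))
```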
30. Evangelidis GD, Bauckhage C. Efficient subframe video alignment using short descriptors. IEEE Trans Pattern Anal Mach Intell 2013; 35:2371-2386. PMID: 23969383. DOI: 10.1109/tpami.2013.56
Abstract
This paper addresses the problem of video alignment. We present efficient approaches that allow for the spatiotemporal alignment of two sequences. Unlike most related work, we consider independently moving cameras that capture a 3D scene at different times. The novelty of the proposed method lies in the adaptation and extension of an efficient information retrieval framework that casts the sequences as an image database and a set of query frames, respectively. The efficient retrieval builds on the recently proposed quad descriptor. In this context, we define the 3D vote space (VS) by aggregating votes through a multi-querying (multiscale) scheme, and we present two solutions based on VS entries: a causal solution that permits online synchronization, and a global solution through multiscale dynamic programming. In addition, we extend the recently introduced ECC image-alignment algorithm to the temporal dimension, which allows for spatial registration and synchronization refinement with subframe accuracy. We investigate full-search and quantization methods for short descriptors, and we compare the proposed schemes with the state of the art. Experiments with real videos from moving or static cameras demonstrate the efficiency of the proposed method and verify its effectiveness with respect to spatiotemporal alignment accuracy.
Affiliation(s)
- Georgios D Evangelidis: Perception Team, INRIA Rhone-Alpes, 655 Avenue de l'Europe, Montbonnot Saint-Martin, Grenoble, Rhone-Alpes, France
31. Rebai K, Azouaoui O, Achour N. Fuzzy ART-based place recognition for visual loop closure detection. Biol Cybern 2013; 107:247-259. PMID: 23224495. DOI: 10.1007/s00422-012-0539-x
Abstract
Automatic place recognition is one of the key challenges in SLAM approaches for loop closure detection. Most appearance-based solutions to this problem share the idea of image feature extraction, memorization, and matching search. Their weakness is storage and computational costs that increase drastically with environment size. The major constraints to overcome are therefore the storage of visual information and the complexity of similarity computation. In this paper, a novel formulation is proposed that reduces computation time while storing and matching no visual information explicitly. The proposed solution relies on the incremental building of a bio-inspired visual memory using a Fuzzy ART network, which draws on properties discovered in the primate brain. The method has been evaluated on two datasets representing different large-scale outdoor environments and compared with the RatSLAM and FAB-MAP approaches, demonstrating reduced time and storage costs with broadly comparable precision-recall performance.
Affiliation(s)
- Karima Rebai: Centre de Développement des Technologies Avancées CDTA, Algiers, Algeria
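Fuzzy ART itself follows well-known update equations: category choice T_j = |I ∧ w_j| / (α + |w_j|), a vigilance test |I ∧ w_j| / |I| ≥ ρ, and learning w_j ← β(I ∧ w_j) + (1 − β)w_j. A compact sketch of one input presentation (parameter values are illustrative, not the paper's):

```python
import numpy as np

def fuzzy_art_step(I, W, rho=0.8, alpha=0.001, beta=1.0):
    """Present one complement-coded input I to a Fuzzy ART network.

    I : input vector in [0, 1]^(2d)
    W : list of category weight vectors, grown in place
    Returns the index of the resonating (or newly created) category.
    """
    if not W:
        W.append(I.copy())
        return 0
    # Category choice, then vigilance test in order of preference.
    scores = [np.minimum(I, w).sum() / (alpha + w.sum()) for w in W]
    for j in np.argsort(scores)[::-1]:
        match = np.minimum(I, W[j]).sum() / I.sum()
        if match >= rho:                      # resonance: update the winner
            W[j] = beta * np.minimum(I, W[j]) + (1 - beta) * W[j]
            return int(j)
    W.append(I.copy())                        # no resonance: new category
    return len(W) - 1
```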
32. Zou D, Tan P. CoSLAM: collaborative visual SLAM in dynamic environments. IEEE Trans Pattern Anal Mach Intell 2013; 35:354-366. PMID: 22547430. DOI: 10.1109/tpami.2012.104
Abstract
This paper studies the problem of vision-based simultaneous localization and mapping (SLAM) in dynamic environments with multiple cameras. These cameras move independently and can be mounted on different platforms. All cameras work together to build a global map, including the 3D positions of static background points and the trajectories of moving foreground points. We introduce intercamera pose estimation and intercamera mapping to deal with dynamic objects in the localization and mapping process. To further enhance system robustness, we maintain the position uncertainty of each map point. To facilitate intercamera operations, we cluster cameras into groups according to their view overlap, and manage the splitting and merging of camera groups in real time. Experimental results demonstrate that our system can work robustly in highly dynamic environments and produce more accurate results in static environments.
Affiliation(s)
- Danping Zou: Department of Electrical and Computer Engineering, National University of Singapore, Singapore
33. Murillo AC, Singh G, Kosecká J, Guerrero JJ. Localization in Urban Environments Using a Panoramic Gist Descriptor. IEEE Trans Robot 2013. DOI: 10.1109/tro.2012.2220211
34. Liu W, Zheng N, Xue J, Zhang X, Yuan Z. Visual Appearance-Based Unmanned Vehicle Sequential Localization. Int J Adv Robot Syst 2013. DOI: 10.5772/54899
Abstract
Localization is of vital importance for an unmanned vehicle driving on the road. Most existing algorithms are based on laser range finders, inertial equipment, artificial landmarks, distributed sensors or global positioning system (GPS) information, and localization from vision information is currently of great interest. However, vision-based localization techniques are still unavailable for practical applications. In this paper, we present a vision-based sequential probabilistic localization method. This method uses the appearance of the roadside to locate the vehicle, especially in situations where GPS information is unavailable. It is composed of two stages: first, in a recording stage, we construct a ground-truth map from the appearance of the roadside environment; then, in an online stage, we use a sequential matching approach to localize the vehicle. In the experiments, we use two independent cameras, one oriented left and the other right, to observe the environment, and SIFT and Daisy features to represent its visual appearance. The results show that the proposed method can locate the vehicle in a complicated, large environment with high reliability.
Affiliation(s)
- Wei Liu, Nanning Zheng, Jianru Xue, Xuetao Zhang, Zejian Yuan: Institute of Artificial Intelligence and Robotics, Department of Electrical Engineering, Xi'an Jiaotong University
36. Diego F, Ponsa D, Serrat J, López AM. Video alignment for change detection. IEEE Trans Image Process 2011; 20:1858-1869. PMID: 21118773. DOI: 10.1109/tip.2010.2095873
Abstract
In this work, we address the problem of aligning two video sequences. Such alignment refers to synchronization, i.e., the establishment of temporal correspondence between frames of the first and second video, followed by spatial registration of all the temporally corresponding frames. Video synchronization and alignment have been attempted before, but most often in the relatively simple cases of fixed or rigidly attached cameras and simultaneous acquisition. In addition, restrictive assumptions have been applied, including linear time correspondence or knowledge of the complete trajectories of corresponding scene points; to some extent, these assumptions limit the practical applicability of the solutions developed. We intend to solve the more general problem of aligning video sequences recorded by independently moving cameras that follow similar trajectories, based only on the fusion of image intensity and GPS information. The novelty of our approach is to pose the synchronization as a MAP inference problem on a Bayesian network that includes the observations from these two sensor types, which have been proved complementary. Alignment results are presented in the context of videos recorded from vehicles driving along the same track at different times, for different road types. In addition, we explore two applications of the proposed video alignment method, both based on change detection between aligned videos: one is the detection of vehicles, which could be of use in ADAS; the other is the online spotting of differences in videos of surveillance rounds.
Affiliation(s)
- Ferran Diego: Computer Vision Center and Computer Science Department, Edifici O, Universitat Autònoma de Barcelona, 08193 Cerdanyola del Vallés, Spain
37.
Abstract
This paper addresses the loop closure detection problem in simultaneous localization and mapping (SLAM) and presents a method for solving it using pairwise comparison of point clouds in both two and three dimensions. The point clouds are described mathematically using features that capture important geometric and statistical properties. The features are used as input to the machine learning algorithm AdaBoost, which builds a non-linear classifier capable of detecting loop closure from pairs of point clouds. Vantage-point dependency in the detection process is eliminated by using only rotation-invariant features, so loop closure can be detected from an arbitrary direction. The classifier is evaluated using publicly available data and is shown to generalize well between environments. Detection rates of 66%, 63% and 53% at a 0% false-alarm rate are achieved for 2D outdoor data, 3D outdoor data and 3D indoor data, respectively. In both two and three dimensions, experiments on publicly available data show that the proposed algorithm compares favourably with related work.
Affiliation(s)
- Karl Granström, Thomas B Schön: Division of Automatic Control, Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Juan I Nieto: Australian Centre for Field Robotics, University of Sydney, Sydney, Australia
- Fabio T Ramos: School of Information Technologies, Australian Centre for Field Robotics, University of Sydney, Sydney, Australia
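Training the non-linear classifier described above is straightforward with an off-the-shelf AdaBoost implementation; the sketch below uses random placeholder features and labels where the paper uses rotation-invariant geometric and statistical point-cloud features:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Placeholder training data: each row would hold rotation-invariant
# features of a point-cloud pair, labelled loop (1) or non-loop (0).
rng = np.random.default_rng(0)
X = rng.random((200, 20))
y = rng.integers(0, 2, 200)

clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
loop_prob = clf.predict_proba(X[:1])[0, 1]  # score a new point-cloud pair
print(loop_prob)
```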
38. Jones ES, Soatto S. Visual-inertial navigation, mapping and localization: A scalable real-time causal approach. Int J Rob Res 2011. DOI: 10.1177/0278364910388963
Abstract
We describe a model to estimate motion from monocular visual and inertial measurements. We analyze the model and characterize the conditions under which its state is observable and its parameters are identifiable. These include the unknown gravity vector and the unknown transformation between the camera coordinate frame and the inertial unit. We show that it is possible to estimate both state and parameters as part of an online procedure, but only provided that the motion sequence is 'rich enough', a condition that we characterize explicitly. We then describe an efficient implementation of a filter to estimate the state and parameters of this model, including gravity and camera-to-inertial calibration; it runs in real time on an embedded platform. We report experiments of continuous operation, without failures, re-initialization, or re-calibration, on paths up to 30 km long. We also describe an integrated approach to 'loop closure', that is, the recognition of previously seen locations and the topological re-adjustment of the traveled path. It represents visual features relative to the global orientation reference provided by the gravity vector estimated by the filter, and relative to the scale provided by their known position within the map; these features are organized into 'locations' defined by visibility constraints and represented in a topological graph, where loop closure can be performed without the need to re-compute past trajectories or perform bundle adjustment. The software infrastructure as well as the embedded platform is described in detail in a previous technical report.
39. Elibol A, Gracias N, Garcia R. Augmented state-extended Kalman filter combined framework for topology estimation in large-area underwater mapping. J Field Robot 2010. DOI: 10.1002/rob.20357
40. Kawewong A, Tongprasit N, Tangruamsub S, Hasegawa O. Online and Incremental Appearance-based SLAM in Highly Dynamic Environments. Int J Rob Res 2010. DOI: 10.1177/0278364910371855
Abstract
In this paper we present a novel method for online and incremental appearance-based localization and mapping in a highly dynamic environment. Using position-invariant robust features (PIRFs), the method can achieve a high rate of recall with 100% precision. It can handle both strong perceptual aliasing and dynamic changes of places efficiently. Its performance also extends beyond conventional images: it is applicable to omnidirectional images, for which major portions of the scenes are similar for most places. The proposed PIRF-based navigation method, PIRF-Nav, is evaluated on two standard datasets, in a similar manner to FAB-MAP, and on an additional omnidirectional image dataset that we collected. This extra dataset was collected on two days with different specific events, i.e. an open-campus event, to present challenges related to illumination variance and strong dynamic changes, and to assess the handling of dynamic scene changes. Results show that PIRF-Nav outperforms FAB-MAP: at precision 1, PIRF-Nav yields a recall rate about twice as high (approximately an 80% increase) as that of FAB-MAP. Its computation time is sufficiently short for real-time applications. The method is fully incremental and requires no offline process for dictionary creation. Additional testing using combined datasets shows that PIRF-Nav can function over the long term and can solve the kidnapped robot problem.
Affiliation(s)
- Aram Kawewong, Noppharit Tongprasit, Sirinart Tangruamsub, Osamu Hasegawa: Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama, Japan
42.
Abstract
Wide-angle images exhibit significant distortion, for which existing scale-space detectors such as the scale-invariant feature transform (SIFT) are inappropriate. The scale-space images required for feature detection are correctly obtained through the convolution of the image, mapped to the sphere, with the spherical Gaussian. A new visual key-point detector based on this principle is developed, and several computational approaches to the convolution are investigated in both the spatial and frequency domains. In particular, a close approximation is developed that has computation time comparable to conventional SIFT but with improved matching performance. Results are presented for monocular wide-angle outdoor image sequences obtained using fisheye and equiangular catadioptric cameras. We evaluate the overall matching performance (recall versus 1-precision) of these methods compared to conventional SIFT, and we demonstrate the use of the technique for variable frame-rate visual odometry and its application to place recognition.
Affiliation(s)
- Peter Hansen, Wageeh Boles: Queensland University of Technology, Brisbane, Australia
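For reference, the spherical scale space alluded to above is usually written through the spherical-harmonic expansion, where diffusion to scale t attenuates each degree-l coefficient (this is the standard form of spherical diffusion; the paper's approximations may differ):

```latex
\hat{f}_t(l,m) = \hat{f}(l,m)\, e^{-l(l+1)t},
\qquad
f_t(\eta) = \sum_{l \ge 0} \sum_{|m| \le l} \hat{f}_t(l,m)\, Y_l^m(\eta)
```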
43. Newman P, Sibley G, Smith M, Cummins M, Harrison A, Mei C, Posner I, Shade R, Schroeter D, Murphy L, Churchill W, Cole D, Reid I. Navigating, Recognizing and Describing Urban Spaces With Vision and Lasers. Int J Rob Res 2009. DOI: 10.1177/0278364909341483
Abstract
In this paper we describe a body of work aimed at extending the reach of mobile navigation and mapping. We describe how running topological and metric mapping and pose estimation processes concurrently, using vision and laser ranging, has produced a full six-degree-of-freedom outdoor navigation system. It is capable of producing intricate three-dimensional maps over many kilometers and in real time. We consider issues concerning the intrinsic quality of the built maps and describe our progress towards adding semantic labels to maps via scene de-construction and labeling. We show how our choices of representation, inference methods and use of both topological and metric techniques naturally allow us to fuse maps built from multiple sessions with no need for manual frame alignment or data association.
Affiliation(s)
- Paul Newman, Gabe Sibley, Mike Smith, Mark Cummins, Alastair Harrison, Ingmar Posner, Robbie Shade, Derik Schroeter, Liz Murphy, Winston Churchill, Dave Cole: Oxford Mobile Robotics Group, Department of Engineering Science, University of Oxford, Parks Road, Oxford, UK
- Chris Mei, Ian Reid: Active Vision Lab, Department of Engineering Science, University of Oxford, Parks Road, Oxford, UK
44. Artieda J, Sebastian JM, Campoy P, Correa JF, Mondragón IF, Martínez C, Olivares M. Visual 3-D SLAM from UAVs. J Intell Robot Syst 2009. DOI: 10.1007/s10846-008-9304-8
45. Angeli A, Filliat D, Doncieux S, Meyer JA. Fast and Incremental Method for Loop-Closure Detection Using Bags of Visual Words. IEEE Trans Robot 2008. DOI: 10.1109/tro.2008.2004514
47.
Abstract
This paper describes a probabilistic approach to the problem of recognizing places based on their appearance. The system we present is not limited to localization: it can determine that a new observation comes from a previously unseen place, and so augment its map. Effectively, this is a SLAM system in the space of appearance. Our probabilistic approach allows us to account explicitly for perceptual aliasing in the environment: identical but indistinctive observations receive a low probability of having come from the same place. We achieve this by learning a generative model of place appearance. By partitioning the learning problem into two parts, new place models can be learned online from only a single observation of a place. The algorithm's complexity is linear in the number of places in the map, and it is particularly suitable for online loop closure detection in mobile robotics.
Affiliation(s)
- Mark Cummins, Paul Newman: Mobile Robotics Group, University of Oxford, UK
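A minimal sketch of the appearance-SLAM update described above: compare the observation likelihood under each known place against a "new place" hypothesis, keeping the update linear in the number of places. The prior p_new and the averaged new-place likelihood are illustrative assumptions, not the authors' exact model:

```python
import numpy as np

def place_posterior(obs_lik, prior, p_new=0.1):
    """Posterior over known places plus one 'previously unseen' hypothesis.

    obs_lik : p(observation | place_i) for each known place
    prior   : prior over known places (sums to 1)
    p_new   : prior mass assigned to an unseen place (assumption)
    """
    obs_lik = np.asarray(obs_lik, dtype=float)
    prior = np.asarray(prior, dtype=float)
    new_lik = obs_lik.mean()  # crude stand-in for averaging over place models
    joint = np.append(obs_lik * prior * (1 - p_new), new_lik * p_new)
    return joint / joint.sum()  # last entry: probability of a new place
```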