2. Han F, Yang X, Deng Y, Rentschler M, Yang D, Zhang H. SRAL: Shared Representative Appearance Learning for Long-Term Visual Place Recognition. IEEE Robot Autom Lett 2017. [DOI: 10.1109/lra.2017.2662061]
3. Kawewong A, Tongprasit N, Tangruamsub S, Hasegawa O. Online and Incremental Appearance-based SLAM in Highly Dynamic Environments. Int J Rob Res 2010. [DOI: 10.1177/0278364910371855]
Abstract
In this paper we present a novel method for online and incremental appearance-based localization and mapping in a highly dynamic environment. Using position-invariant robust features (PIRFs), the method can achieve a high rate of recall with 100% precision. It handles both strong perceptual aliasing and dynamic changes of places efficiently. Its performance also extends beyond conventional images; it is applicable to omnidirectional images, for which the major portions of scenes are similar for most places. The proposed PIRF-based navigation method, named PIRF-Nav, is evaluated on two standard datasets in a similar manner as in FAB-MAP, and on an additional omnidirectional image dataset that we collected. This extra dataset was collected on two days with different events, including an open-campus event, to present challenges related to illumination variance and strong dynamic changes, and to test the handling of dynamic scene changes. Results show that PIRF-Nav outperforms FAB-MAP; at precision 1, PIRF-Nav yields a recall rate about twice as high as that of FAB-MAP (an increase of approximately 80%). Its computation time is sufficiently short for real-time applications. The method is fully incremental and requires no offline process for dictionary creation. Additional testing using combined datasets shows that PIRF-Nav can function over the long term and can solve the kidnapped robot problem.
Affiliation(s)
- Aram Kawewong, Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama, Japan
- Noppharit Tongprasit, Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama, Japan
- Sirinart Tangruamsub, Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama, Japan
- Osamu Hasegawa, Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama, Japan
4. Newman P, Sibley G, Smith M, Cummins M, Harrison A, Mei C, Posner I, Shade R, Schroeter D, Murphy L, Churchill W, Cole D, Reid I. Navigating, Recognizing and Describing Urban Spaces With Vision and Lasers. Int J Rob Res 2009. [DOI: 10.1177/0278364909341483]
Abstract
In this paper we describe a body of work aimed at extending the reach of mobile navigation and mapping. We describe how running topological and metric mapping and pose estimation processes concurrently, using vision and laser ranging, has produced a full six-degree-of-freedom outdoor navigation system. It is capable of producing intricate three-dimensional maps over many kilometers and in real time. We consider issues concerning the intrinsic quality of the built maps and describe our progress towards adding semantic labels to maps via scene de-construction and labeling. We show how our choices of representation, inference methods and use of both topological and metric techniques naturally allow us to fuse maps built from multiple sessions with no need for manual frame alignment or data association.
Affiliation(s)
- Paul Newman, Oxford Mobile Robotics Group, Department of Engineering Science, University of Oxford, Parks Road, Oxford, UK
- Gabe Sibley, Oxford Mobile Robotics Group, Department of Engineering Science, University of Oxford, Parks Road, Oxford, UK
- Mike Smith, Oxford Mobile Robotics Group, Department of Engineering Science, University of Oxford, Parks Road, Oxford, UK
- Mark Cummins, Oxford Mobile Robotics Group, Department of Engineering Science, University of Oxford, Parks Road, Oxford, UK
- Alastair Harrison, Oxford Mobile Robotics Group, Department of Engineering Science, University of Oxford, Parks Road, Oxford, UK
- Chris Mei, Active Vision Lab, Department of Engineering Science, University of Oxford, Parks Road, Oxford, UK
- Ingmar Posner, Oxford Mobile Robotics Group, Department of Engineering Science, University of Oxford, Parks Road, Oxford, UK
- Robbie Shade, Oxford Mobile Robotics Group, Department of Engineering Science, University of Oxford, Parks Road, Oxford, UK
- Derik Schroeter, Oxford Mobile Robotics Group, Department of Engineering Science, University of Oxford, Parks Road, Oxford, UK
- Liz Murphy, Oxford Mobile Robotics Group, Department of Engineering Science, University of Oxford, Parks Road, Oxford, UK
- Winston Churchill, Oxford Mobile Robotics Group, Department of Engineering Science, University of Oxford, Parks Road, Oxford, UK
- Dave Cole, Oxford Mobile Robotics Group, Department of Engineering Science, University of Oxford, Parks Road, Oxford, UK
- Ian Reid, Active Vision Lab, Department of Engineering Science, University of Oxford, Parks Road, Oxford, UK
5.
Abstract
This paper describes a probabilistic approach to the problem of recognizing places based on their appearance. The system we present is not limited to localization: it can determine that a new observation comes from a previously unseen place, and so augment its map. Effectively this is a SLAM system in the space of appearance. Our probabilistic approach allows us to explicitly account for perceptual aliasing in the environment: identical but indistinctive observations receive a low probability of having come from the same place. We achieve this by learning a generative model of place appearance. By partitioning the learning problem into two parts, new place models can be learned online from only a single observation of a place. The algorithm's complexity is linear in the number of places in the map, and it is particularly suitable for online loop closure detection in mobile robotics.
Affiliation(s)
- Mark Cummins, Mobile Robotics Group, University of Oxford, UK
- Paul Newman, Mobile Robotics Group, University of Oxford, UK