51. Xu M, Fischer T, Sünderhauf N, Milford M. Probabilistic Appearance-Invariant Topometric Localization With New Place Awareness. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2021.3096745
52.
53. Steiner R, Cox M, Borges PVK, Bernreiter L, Nieto J. Certainty Aware Global Localisation Using 3D Point Correspondences. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2021.3114956
54. Keetha NV, Milford M, Garg S. A Hierarchical Dual Model of Environment- and Place-Specific Utility for Visual Place Recognition. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2021.3096751
55. Xu Y, Huang J, Wang J, Wang Y, Qin H, Nan K. ESA-VLAD: A Lightweight Network Based on Second-Order Attention and NetVLAD for Loop Closure Detection. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2021.3094228
56. Qin C, Zhang Y, Liu Y, Coleman S, Du H, Kerr D. A visual place recognition approach using learnable feature map filtering and graph attention networks. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.06.038
57. Variational Bayesian Approach to Condition-Invariant Feature Extraction for Visual Place Recognition. Appl Sci (Basel) 2021. DOI: 10.3390/app11198976
Abstract
As mobile robots perform long-term operations in large-scale environments, coping with perceptual changes has become an important issue. This paper introduces a stochastic variational inference and learning architecture that can extract condition-invariant features for visual place recognition in a changing environment. Under the assumption that the latent representation of a variational autoencoder can be divided into condition-invariant and condition-sensitive features, a new structure of the variational autoencoder is proposed and a variational lower bound is derived to train the model. After training, condition-invariant features are extracted from test images to calculate a similarity matrix, and places can be recognized even under severe environmental changes. Experiments were conducted to verify the proposed method, and the results showed that the assumption is reasonable and effective for recognizing places in changing environments.
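The split-latent idea in entry 57 can be pictured in code. Below is a minimal sketch, not the authors' implementation: a toy VAE whose latent vector is partitioned into a condition-invariant and a condition-sensitive part, with only the invariant slice used to build the similarity matrix at test time. Class names, layer sizes, and dimensions are illustrative assumptions; the training loss (reconstruction plus KL divergence) is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SplitLatentVAE(nn.Module):
    """Toy VAE whose latent code is split into an invariant part (place
    identity) and a sensitive part (appearance condition)."""
    def __init__(self, in_dim=4096, inv_dim=64, cond_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU())
        self.mu = nn.Linear(512, inv_dim + cond_dim)
        self.logvar = nn.Linear(512, inv_dim + cond_dim)
        self.dec = nn.Sequential(nn.Linear(inv_dim + cond_dim, 512),
                                 nn.ReLU(), nn.Linear(512, in_dim))
        self.inv_dim = inv_dim

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

    def invariant(self, x):
        # At test time, keep only the condition-invariant slice of the mean.
        return self.mu(self.enc(x))[:, :self.inv_dim]

def similarity_matrix(model, query, reference):
    """Cosine similarities between invariant codes of two image-feature sets."""
    q = F.normalize(model.invariant(query), dim=1)
    r = F.normalize(model.invariant(reference), dim=1)
    return q @ r.t()  # shape: (n_query, n_reference)
```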
58. SVG-Loop: Semantic–Visual–Geometric Information-Based Loop Closure Detection. Remote Sens 2021. DOI: 10.3390/rs13173520
Abstract
Loop closure detection is an important component of visual simultaneous localization and mapping (SLAM). However, most existing loop closure detection methods are vulnerable to complex environments and use limited information from images. As higher-level image information and multi-information fusion can improve the robustness of place recognition, a semantic–visual–geometric information-based loop closure detection algorithm (SVG-Loop) is proposed in this paper. In detail, to reduce the interference of dynamic features, a semantic bag-of-words model is first constructed by connecting visual features with semantic labels. Second, to improve detection robustness in different scenes, a semantic landmark vector model is designed by encoding the geometric relationship of the semantic graph. Finally, semantic, visual, and geometric information is integrated by fusing the outputs of the two modules. Compared with state-of-the-art methods, experiments on the TUM RGB-D dataset, the KITTI odometry dataset, and a practical environment show that SVG-Loop has advantages in complex environments with varying light, changeable weather, and dynamic interference.
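To make the semantic bag-of-words step of entry 58 concrete, here is a hedged sketch of one plausible reading: visual words that fall on dynamic semantic segments (people, vehicles) are suppressed when building the image histogram. The label set, weighting scheme, and function names are assumptions, not SVG-Loop's actual code.

```python
import numpy as np

DYNAMIC_CLASSES = {"person", "car", "bicycle"}  # assumed dynamic label set

def semantic_bow_histogram(word_ids, labels, vocab_size, dynamic_weight=0.0):
    """Bag-of-words histogram in which visual words attached to dynamic
    semantic labels are down-weighted (0.0 removes them entirely)."""
    hist = np.zeros(vocab_size)
    for w, lab in zip(word_ids, labels):
        hist[w] += dynamic_weight if lab in DYNAMIC_CLASSES else 1.0
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```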
59. Cheng HM, Song D. Graph-Based Proprioceptive Localization Using a Discrete Heading-Length Feature Sequence Matching Approach. IEEE Trans Robot 2021. DOI: 10.1109/tro.2020.3046419
60. Xia Z, Booij O, Manfredi M, Kooij JFP. Cross-View Matching for Vehicle Localization by Learning Geographically Local Representations. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2021.3088076
61. Lu F, Chen B, Zhou XD, Song D. STA-VPR: Spatio-Temporal Alignment for Visual Place Recognition. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2021.3067623
62. Waheed M, Milford M, McDonald-Maier K, Ehsan S. Improving Visual Place Recognition Performance by Maximising Complementarity. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2021.3088779
63. Kulhanek J, Derner E, Babuska R. Visual Navigation in Real-World Indoor Environments Using End-to-End Deep Reinforcement Learning. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2021.3068106
64. Zhu S, Yang S, Hu P, Qu X. A Robust Optical Flow Tracking Method Based on Prediction Model for Visual-Inertial Odometry. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2021.3079806
65. Oh J, Han C, Lee S. Condition-Invariant Robot Localization Using Global Sequence Alignment of Deep Features. Sensors 2021; 21:4103. PMID: 34203682. DOI: 10.3390/s21124103
Abstract
Localization is one of the essential processes in robotics, as it plays an important role in autonomous navigation and simultaneous localization and mapping for mobile robots. As robots perform large-scale and long-term operations, identifying the same locations in a changing environment has become an important problem. In this paper, we describe a robust visual localization system that works under severe appearance changes. First, a robust feature extraction method based on a deep variational autoencoder is described to calculate the similarity between images. Then, a global sequence alignment is proposed to find the actual trajectory of the robot. To align sequences, local fragments are detected from the similarity matrix and connected using a rectangle chaining algorithm that considers the robot's motion constraint. Since the chained fragments provide reliable clues for finding the global path, false matches on featureless structures or partial failures during the alignment can be recovered, enabling accurate robot localization in changing environments. The presented experimental results demonstrate the benefits of the proposed method, which outperformed existing algorithms in long-term conditions.
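Entry 65 aligns two image sequences through their similarity matrix. The paper uses local-fragment detection plus rectangle chaining; as a simpler stand-in that conveys the same global-alignment idea, the sketch below runs a Needleman-Wunsch-style dynamic program over the similarity matrix. This is an illustration under that substitution, not the paper's algorithm, and the gap penalty is an assumed parameter.

```python
import numpy as np

def global_sequence_alignment(S, gap=-0.2):
    """Align a query sequence (rows of S) against a reference sequence
    (columns of S), returning the highest-scoring global path as index pairs."""
    n, m = S.shape
    D = np.full((n + 1, m + 1), -np.inf)
    D[0, :] = np.arange(m + 1) * gap
    D[:, 0] = np.arange(n + 1) * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = max(D[i - 1, j - 1] + S[i - 1, j - 1],  # match frames
                          D[i - 1, j] + gap,                  # skip a query frame
                          D[i, j - 1] + gap)                  # skip a reference frame
    path, i, j = [], n, m                                     # backtrace
    while i > 0 and j > 0:
        if D[i, j] == D[i - 1, j - 1] + S[i - 1, j - 1]:
            path.append((i - 1, j - 1)); i, j = i - 1, j - 1
        elif D[i, j] == D[i - 1, j] + gap:
            i -= 1
        else:
            j -= 1
    return path[::-1]
```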
66. Sequence-based visual place recognition: a scale-space approach for boundary detection. Auton Robots 2021. DOI: 10.1007/s10514-021-09984-7
67. Yin H, Xu X, Wang Y, Xiong R. Radar-to-Lidar: Heterogeneous Place Recognition via Joint Learning. Front Robot AI 2021; 8:661199. PMID: 34079825. DOI: 10.3389/frobt.2021.661199
Abstract
Place recognition is critical for both offline mapping and online localization. However, current single-sensor-based place recognition remains challenging in adverse conditions. In this paper, a heterogeneous-measurement-based framework is proposed for long-term place recognition, which retrieves query radar scans from an existing lidar (Light Detection and Ranging) map. To achieve this, a deep neural network is built with joint training in the learning stage; in the testing stage, shared embeddings of radar and lidar are extracted for heterogeneous place recognition. To validate the effectiveness of the proposed method, we conducted tests and generalization experiments on multi-session public datasets and compared the results to other competitive methods. The experimental results indicate that our model is able to perform multiple kinds of place recognition, lidar-to-lidar (L2L), radar-to-radar (R2R), and radar-to-lidar (R2L), while being trained only once. We also release the source code publicly: https://github.com/ZJUYH/radar-to-lidar-place-recognition.
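Entry 67's joint training can be pictured as two modality-specific encoders projecting into one shared embedding space, trained so that radar and lidar scans of the same place land close together. The sketch below is a toy version under stated assumptions (architecture, loss, and the shifted-batch negative strategy are all placeholders); the released code at the linked repository is the authoritative reference.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEmbeddingNet(nn.Module):
    """One encoder per modality, projecting (N, 1, H, W) bird's-eye grids
    into a shared, L2-normalized embedding space."""
    def __init__(self, dim=256):
        super().__init__()
        def backbone():
            return nn.Sequential(
                nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))
        self.radar_enc, self.lidar_enc = backbone(), backbone()

    def forward(self, radar_img, lidar_img):
        zr = F.normalize(self.radar_enc(radar_img), dim=1)
        zl = F.normalize(self.lidar_enc(lidar_img), dim=1)
        return zr, zl

def cross_modal_triplet_loss(zr, zl, margin=0.3):
    """Pull radar/lidar embeddings of the same place together; embeddings
    rolled by one batch position serve as cheap negatives."""
    pos = (zr - zl).pow(2).sum(1)
    neg = (zr - zl.roll(1, dims=0)).pow(2).sum(1)
    return F.relu(pos - neg + margin).mean()
```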
68.
Abstract
This paper explores current developments in evolutionary and bio-inspired approaches to autonomous robotics, concentrating on research from our group at the University of Sussex. These developments are discussed in the context of advances in the wider fields of adaptive and evolutionary approaches to AI and robotics, focusing on the exploitation of embodied dynamics to create behaviour. Four case studies highlight various aspects of such exploitation. The first exploits the dynamical properties of a physical electronic substrate, demonstrating for the first time how component-level analog electronic circuits can be evolved directly in hardware to act as robot controllers. The second develops novel, effective and highly parsimonious navigation methods inspired by the way insects exploit the embodied dynamics of innate behaviours. Combining biological experiments with robotic modeling, it is shown how rapid route learning can be achieved with the aid of navigation-specific visual information that is provided and exploited by the innate behaviours. The third study focuses on the exploitation of neuromechanical chaos in the generation of robust motor behaviours. It is demonstrated how chaotic dynamics can be exploited to power a goal-driven search for desired motor behaviours in embodied systems using a particular control architecture based around neural oscillators. The dynamics are shown to be chaotic at all levels in the system, from the neural to the embodied mechanical. The final study explores the exploitation of the dynamics of brain-body-environment interactions for efficient, agile flapping-winged flight. It is shown how a multi-objective evolutionary algorithm can be used to evolve dynamical neural controllers for a simulated flapping-wing robot with feathered wings. Results demonstrate that robust, stable, agile flight is achieved in the face of random wind gusts by exploiting complex asymmetric dynamics partly enabled by continually changing wing and tail morphologies.
69. Zaffar M, Garg S, Milford M, Kooij J, Flynn D, McDonald-Maier K, Ehsan S. VPR-Bench: An Open-Source Visual Place Recognition Evaluation Framework with Quantifiable Viewpoint and Appearance Change. Int J Comput Vis 2021. DOI: 10.1007/s11263-021-01469-5
Abstract
Visual place recognition (VPR) is the process of recognising a previously visited place using visual information, often under varying appearance conditions and viewpoint changes and with computational constraints. VPR is related to the concepts of localisation, loop closure, and image retrieval, and is a critical component of many autonomous navigation systems ranging from autonomous vehicles to drones and computer vision systems. While the concept of place recognition has been around for many years, VPR research has grown rapidly as a field over the past decade due to improving camera hardware and its potential for deep learning-based techniques, and has become a widely studied topic in both the computer vision and robotics communities. This growth however has led to fragmentation and a lack of standardisation in the field, especially concerning performance evaluation. Moreover, the notion of viewpoint and illumination invariance of VPR techniques has largely been assessed qualitatively and hence ambiguously in the past. In this paper, we address these gaps through a new comprehensive open-source framework for assessing the performance of VPR techniques, dubbed "VPR-Bench". VPR-Bench (open-sourced at: https://github.com/MubarizZaffar/VPR-Bench) introduces two much-needed capabilities for VPR researchers: firstly, it contains a benchmark of 12 fully-integrated datasets and 10 VPR techniques, and secondly, it integrates a comprehensive variation-quantified dataset for quantifying viewpoint and illumination invariance. We apply and analyse popular evaluation metrics for VPR from both the computer vision and robotics communities, and discuss how these different metrics complement and/or replace each other, depending upon the underlying applications and system requirements. Our analysis reveals that no universal SOTA VPR technique exists, since: (a) state-of-the-art (SOTA) performance is achieved by 8 out of the 10 techniques on at least one dataset, and (b) a SOTA technique in one community does not necessarily yield SOTA performance in the other, given the differences in datasets and metrics. Furthermore, we identify key open challenges since: (c) all 10 techniques suffer greatly in perceptually-aliased and less-structured environments, (d) all techniques suffer from viewpoint variance, where lateral change has less effect than 3D change, and (e) directional illumination change has more adverse effects on matching confidence than uniform illumination change. We also present detailed meta-analyses regarding the roles of varying ground-truths, platforms, application requirements and technique parameters. Finally, VPR-Bench provides a unified implementation to deploy these VPR techniques, metrics and datasets, and is extensible through templates.
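A core primitive in any VPR evaluation framework such as VPR-Bench (entry 69) is the precision-recall curve swept over match confidences. The sketch below shows one standard way to compute it; it is a generic illustration and makes no claim about VPR-Bench's actual metric implementations.

```python
import numpy as np

def precision_recall(scores, correct, n_thresholds=100):
    """scores[i]: confidence of the best reference match for query i;
    correct[i]: True if that match lies within the ground-truth tolerance.
    Returns a list of (recall, precision) points."""
    scores = np.asarray(scores, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    curve = []
    for t in np.linspace(scores.min(), scores.max(), n_thresholds):
        accepted = scores >= t
        if accepted.any():
            precision = correct[accepted].mean()
            recall = (correct & accepted).sum() / max(correct.sum(), 1)
            curve.append((recall, precision))
    return curve
```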
70. Neuland R, Rodrigues F, Pittol D, Jaulin L, Maffei R, Kolberg M, Prestes E. Interval Inspired Approach Based on Temporal Sequence Constraints to Place Recognition. J Intell Robot Syst 2021. DOI: 10.1007/s10846-021-01375-5
71. DARE-SLAM: Degeneracy-Aware and Resilient Loop Closing in Perceptually-Degraded Environments. J Intell Robot Syst 2021. DOI: 10.1007/s10846-021-01362-w
72. Jaenal A, Moreno FA, Gonzalez-Jimenez J. Appearance-Based Sequential Robot Localization Using a Patchwise Approximation of a Descriptor Manifold. Sensors 2021; 21:2483. PMID: 33918493. DOI: 10.3390/s21072483
Abstract
This paper addresses appearance-based robot localization in 2D with a sparse, lightweight map of the environment composed of descriptor-pose image pairs. Based on previous research in the field, we assume that image descriptors are samples of a low-dimensional Descriptor Manifold that is locally articulated by the camera pose. We propose a piecewise approximation of the geometry of this Descriptor Manifold through a tessellation of so-called Patches of Smooth Appearance Change (PSACs), which defines our appearance map. On this map, the presented robot localization method applies both a Gaussian Process Particle Filter (GPPF) to perform camera tracking and a Place Recognition (PR) technique for relocalization within the most likely PSACs according to the observed descriptor. A specific Gaussian Process (GP) is trained for each PSAC to regress a Gaussian distribution over the descriptor for any particle pose lying within that PSAC. Evaluating the observed descriptor under this distribution yields a likelihood, which is used as the weight for the particle. In addition, we model the impact of appearance variations on image descriptors as a white noise distribution within the GP formulation, ensuring adequate operation under lighting and scene appearance changes with respect to the conditions in which the map was constructed. A series of experiments with both real and synthetic images shows that our method outperforms state-of-the-art appearance-based localization methods in terms of robustness and accuracy, with median errors below 0.3 m and 6°.
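The particle-weighting step of entry 72 evaluates the observed descriptor under a per-patch Gaussian Process prediction. Assuming, hypothetically, that the GP regressor exposes a predicted mean and an independent per-dimension variance for any pose, the weight update could look like this sketch (the `gp_mean` and `gp_cov` callables are placeholders, not the paper's interface).

```python
import numpy as np

def weight_particles(particles, descriptor, gp_mean, gp_cov):
    """Re-weight particles by the Gaussian likelihood of the observed
    image descriptor under the GP prediction at each particle pose."""
    log_w = np.empty(len(particles))
    for k, pose in enumerate(particles):
        mu, var = gp_mean(pose), gp_cov(pose)  # per-dimension mean/variance
        log_w[k] = -0.5 * np.sum((descriptor - mu) ** 2 / var
                                 + np.log(2 * np.pi * var))
    w = np.exp(log_w - log_w.max())  # subtract max for numerical stability
    return w / w.sum()
```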
73. Xu M, Sünderhauf N, Milford M. Probabilistic Visual Place Recognition for Hierarchical Localization. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2020.3040134
74. Xu X, Yin H, Chen Z, Li Y, Wang Y, Xiong R. DiSCO: Differentiable Scan Context With Orientation. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2021.3060741
75. Alqaraleh S, Hafez AHA, Tello A. Dynamic Time Warping of Deep Features for Place Recognition in Visually Varying Conditions. Arab J Sci Eng 2021. DOI: 10.1007/s13369-020-05146-6
76. Cheng HM, Chou C, Song D. Vehicle-to-Vehicle Collaborative Graph-Based Proprioceptive Localization. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2021.3056032
77. Neubert P, Schubert S, Protzel P. Resolving Place Recognition Inconsistencies Using Intra-Set Similarities. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2021.3060729
78. Nardari GV, Cohen A, Chen SW, Liu X, Arcot V, Romero RAF, Kumar V. Place Recognition in Forests With Urquhart Tessellations. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2020.3039217
79. Molloy TL, Fischer T, Milford M, Nair GN. Intelligent Reference Curation for Visual Place Recognition Via Bayesian Selective Fusion. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2020.3047791
80. Schubert S, Neubert P, Protzel P. Graph-Based Non-Linear Least Squares Optimization for Visual Place Recognition in Changing Environments. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2021.3052446
81. Jordan MA. Progressive Underwater Exploration with a Corridor-Based Navigation System. Underwater Work 2021. DOI: 10.5772/intechopen.90934
Abstract
The present work focuses on the exploration of underwater environments by autonomous underwater vehicles (AUVs) using vision-based navigation. An approach called Corridor SLAM (C-SLAM) was developed for this purpose. It implements a global exploration strategy that consists of first creating a trunk corridor on the seabed and then branching out as far as possible in different directions to increase the explored region. The system guarantees the safe return of the vehicle to the starting point by taking into account a metric of corridor lengths related to the vehicle's energy autonomy. Experimental trials in a basin with underwater scenarios demonstrated the feasibility of the approach.
82. Ge G, Zhang Y, Jiang Q, Wang W. Visual Features Assisted Robot Localization in Symmetrical Environment Using Laser SLAM. Sensors 2021; 21:1772. PMID: 33806414. DOI: 10.3390/s21051772
Abstract
Localization, estimating the position and orientation of a robot, has been solved in asymmetrical environments by various 2D laser rangefinder simultaneous localization and mapping (SLAM) approaches. Laser-based SLAM generates an occupancy grid map, and the popular Monte Carlo Localization (MCL) method then spreads particles on the map and calculates the position of the robot with a probabilistic algorithm. However, this can be difficult, especially in symmetrical environments, because landmarks or features may not be sufficient to determine the robot's orientation; sometimes the position is not even unique if the robot does not stay at the geometric center. This paper presents a novel approach to the robot localization problem in a symmetrical environment using a visual-features-assisted method. Laser range measurements are used to estimate the robot position, while visual features determine its orientation. First, we convert raw laser range scans into coordinate data and calculate the geometric center. Second, we calculate the distances from the geometric center to all end points and find the longest. Then, we compare those distances, fit lines, extract corner points, and calculate the distance between adjacent corner points to determine whether the environment is symmetrical. Finally, if the environment is symmetrical, visual features based on the ORB keypoint detector and descriptor are added to the system to determine the orientation of the robot. The experimental results show that our approach can successfully determine the position of the robot in a symmetrical environment, where ordinary MCL and its extensions always fail.
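The final step of entry 82, using ORB features to pick the correct hypothesis among symmetric poses, can be sketched with OpenCV as follows. The distance threshold, feature count, and candidate-image interface are assumptions for illustration, not the paper's parameters.

```python
import cv2

def disambiguate_orientation(current_img, candidate_imgs, min_matches=25):
    """Return the index of the stored view (one per symmetric pose
    hypothesis) that best matches the current grayscale camera image."""
    orb = cv2.ORB_create(nfeatures=1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, des_c = orb.detectAndCompute(current_img, None)
    best, best_count = None, min_matches
    for idx, img in enumerate(candidate_imgs):
        _, des = orb.detectAndCompute(img, None)
        if des is None or des_c is None:
            continue
        matches = matcher.match(des_c, des)
        good = [m for m in matches if m.distance < 40]  # assumed threshold
        if len(good) > best_count:
            best, best_count = idx, len(good)
    return best  # None if no hypothesis passes the match threshold
```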
83. Torii A, Taira H, Sivic J, Pollefeys M, Okutomi M, Pajdla T, Sattler T. Are Large-Scale 3D Models Really Necessary for Accurate Visual Localization? IEEE Trans Pattern Anal Mach Intell 2021; 43:814-829. PMID: 31535984. DOI: 10.1109/tpami.2019.2941876
Abstract
Accurate visual localization is a key technology for autonomous navigation. 3D structure-based methods employ 3D models of the scene to estimate the full 6 degree-of-freedom (DOF) pose of a camera very accurately. However, constructing (and extending) large-scale 3D models is still a significant challenge. In contrast, 2D image retrieval-based methods only require a database of geo-tagged images, which is trivial to construct and to maintain. They are often considered inaccurate since they only approximate the positions of the cameras. Yet, the exact camera pose can theoretically be recovered when enough relevant database images are retrieved. In this paper, we demonstrate experimentally that large-scale 3D models are not strictly necessary for accurate visual localization. We create reference poses for a large and challenging urban dataset. Using these poses, we show that combining image-based methods with local reconstructions results in higher pose accuracy than state-of-the-art structure-based methods, albeit at higher run-time cost. We show that some of these run-time costs can be alleviated by exploiting known database image poses. Our results suggest that we might want to reconsider the need for large-scale 3D models in favor of more local models, but also that further research is necessary to accelerate the local reconstruction process.
84. Miranda A, Vander Hook J, Schaal C. Lamb wave-based mapping of plate structures via frontier exploration. Ultrasonics 2021; 110:106282. PMID: 33142227. DOI: 10.1016/j.ultras.2020.106282
Abstract
Substantial improvements in material processing and manufacturing techniques in recent years necessitate the introduction of effective and efficient nondestructive testing (NDT) methods that can seamlessly integrate into day-to-day aircraft and aerospace operations. Lamb wave-based methods have been identified as among the most promising candidates for the inspection of large-scale structures. At the same time, there is presently a high level of research activity in autonomous mobile robotics, especially in simultaneous localization and mapping (SLAM). Thus, this paper investigates a means to automate Lamb wave-based NDT by positioning sensors along a planar structure with mobile service robots. To this end, a generalized method for mapping plate structures using scattered Lamb waves by means of frontier exploration is presented, such that an autonomous SLAM-capable NDT system becomes realizable. The performance of this novel Lamb wave-based frontier exploration is first evaluated in simulation. It is shown that it generally outperforms random frontier exploration and may even perform near-optimally in the case of an isotropic, square panel. These findings are then validated in laboratory experiments, confirming the general feasibility of utilizing Lamb waves for SLAM. Furthermore, the versatility of the developed methodology is successfully demonstrated on a more complexly shaped stiffened panel.
85. Lassance C, Latif Y, Garg R, Gripon V, Reid I. Improved Visual Localization via Graph Filtering. J Imaging 2021; 7:20. PMID: 34460619. DOI: 10.3390/jimaging7020020
Abstract
Vision-based localization is the problem of inferring the pose of the camera given a single image. One commonly used approach relies on image retrieval, where the query input is compared against a database of localized support examples and its pose is inferred with the help of the retrieved items. This assumes that images taken from the same places contain the same landmarks and thus have similar feature representations. These representations can learn to be robust to different variations in capture conditions, such as time of day or weather. In this work, we introduce a framework that aims at enhancing the performance of such retrieval-based localization methods. It takes into account additional available information, such as GPS coordinates or temporal proximity in the acquisition of the images. More precisely, our method constructs a graph based on this additional information that is later used to improve the reliability of the retrieval process by filtering the feature representations of support and/or query images. We show that the proposed method is able to significantly improve localization accuracy on two large-scale datasets, as well as the mean average precision in classical image retrieval scenarios.
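Entry 85 filters descriptor representations over a graph built from side information such as GPS coordinates or temporal proximity. A minimal low-pass graph filter, mixing each descriptor with the mean of its neighbors, might look like the sketch below; the paper's exact filter design is not reproduced here, and `alpha` is an assumed mixing parameter.

```python
import numpy as np

def graph_filter_descriptors(X, edges, alpha=0.5):
    """X: (n, d) matrix of image descriptors; edges: list of (i, j) pairs
    connecting images that are close in GPS position or capture time."""
    n = X.shape[0]
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    X_smooth = (1 - alpha) * X + alpha * (A @ X) / deg  # neighbor averaging
    norms = np.maximum(np.linalg.norm(X_smooth, axis=1, keepdims=True), 1e-12)
    return X_smooth / norms
```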
86. Wang Z, Peng Z, Guan Y, Wu L. Two-Stage vSLAM Loop Closure Detection Based on Sequence Node Matching and Semi-Semantic Autoencoder. J Intell Robot Syst 2021. DOI: 10.1007/s10846-020-01302-0
87. Chen L, Jin S, Xia Z. Towards a Robust Visual Place Recognition in Large-Scale vSLAM Scenarios Based on a Deep Distance Learning. Sensors 2021; 21:310. PMID: 33466401. DOI: 10.3390/s21010310
Abstract
The application of deep learning is blooming in the field of visual place recognition, which plays a critical role in visual Simultaneous Localization and Mapping (vSLAM) applications. The use of convolutional neural networks (CNNs) achieves better performance than handcrafted feature descriptors. However, visual place recognition is still a challenging task due to two major problems: perceptual aliasing and perceptual variability. Therefore, designing a customized distance learning method to express the intrinsic distance constraints in large-scale vSLAM scenarios is of great importance. Traditional deep distance learning methods usually use the triplet loss, which requires the mining of anchor images; this may, however, result in tedious, inefficient training and anomalous distance relationships. In this paper, a novel deep distance learning framework for visual place recognition is proposed. Through in-depth analysis of the multiple constraints on the distance relationships in the visual place recognition problem, a multi-constraint loss function is proposed to optimize the distance constraint relationships in Euclidean space. The new framework can work with any CNN, such as AlexNet, VGGNet, and other user-defined networks, to extract more distinguishing features. Compared with the traditional deep distance learning method, the proposed method improves performance by 19–28%. Additionally, compared to some contemporary visual place recognition techniques, it improves performance on average by 40%/36% and 27%/24% on VGGNet/AlexNet using the New College and TUM datasets, respectively. This verifies that the method can handle appearance changes in complex environments.
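Entry 87 replaces the classic triplet loss, which needs anchor mining, with a multi-constraint loss over distance relationships in Euclidean space. The sketch below shows the triplet baseline and one plausible batch-wise multi-constraint reading; the paper's actual formulation may differ.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Classic baseline: same-place pairs closer than different-place
    pairs by at least `margin`, computed per mined triplet."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

def multi_constraint_loss(desc, place_ids, margin=0.5):
    """Batch-wise variant without anchor mining: every same-place distance
    should undercut every different-place distance by `margin`.
    desc: (n, d) float tensor; place_ids: (n,) long tensor."""
    d = torch.cdist(desc, desc)                       # all pairwise distances
    same = place_ids[:, None] == place_ids[None, :]
    off_diag = ~torch.eye(len(desc), dtype=torch.bool, device=desc.device)
    pos = d[same & off_diag]
    neg = d[~same]
    return F.relu(pos[:, None] - neg[None, :] + margin).mean()
```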
88. Kim G, Choi S, Kim A. Scan Context++: Structural Place Recognition Robust to Rotation and Lateral Variations in Urban Environments. IEEE Trans Robot 2021. DOI: 10.1109/tro.2021.3116424
89. Zhao J, Tang J, Zhao D, Cao H, Liu X, Shen C, Wang C, Liu J. Place recognition with deep superpixel features for brain-inspired navigation. Rev Sci Instrum 2020; 91:125110. PMID: 33379976. DOI: 10.1063/5.0027767
Abstract
Navigation in primates is generally supported by cognitive maps. Such a map endows an animal with navigational planning capabilities. Numerous methods have been proposed to mimic these natural navigation capabilities in artificial systems. Based on self-navigation and learning strategies in animals, we propose in this work a place recognition strategy for brain-inspired navigation. First, a place recognition algorithm structure based on convolutional neural networks (CNNs) is introduced, which can be applied in the field of intelligent navigation. Second, sufficient images are captured at each landmark and then stored as a reference image library. Simple linear iterative clustering (SLIC) is used to segment each image into superpixels with multi-scale viewpoint-invariant landmarks. Third, highly representative appearance-independent features are extracted from these landmarks through CNNs. In addition, spatial pyramid pooling (SPP) layers are introduced to generate a fixed-length CNN representation, regardless of the image size. This representation boosts the quality of the extracted landmark features. The proposed SLIC-SPP-CNN place recognition algorithm is evaluated on one collected dataset and two public datasets with viewpoint and appearance variations.
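The SPP layer mentioned in entry 89 turns a convolutional feature map of any spatial size into a fixed-length vector by pooling over a pyramid of grids. A minimal PyTorch sketch (pyramid levels assumed) is given below.

```python
import torch
import torch.nn.functional as F

def spatial_pyramid_pool(feat, levels=(1, 2, 4)):
    """feat: (N, C, H, W) feature map with arbitrary H, W.
    Returns a (N, C * sum(l*l for l in levels)) fixed-length vector."""
    pooled = [F.adaptive_max_pool2d(feat, l).flatten(1) for l in levels]
    return torch.cat(pooled, dim=1)
```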
90. Garforth J, Webb B. Lost in the Woods? Place Recognition for Navigation in Difficult Forest Environments. Front Robot AI 2020; 7:541770. PMID: 33501312. DOI: 10.3389/frobt.2020.541770
Abstract
Forests present one of the most challenging environments for computer vision due to traits such as complex texture, rapidly changing lighting, and high dynamicity. Loop closure by place recognition is a crucial part of successfully deploying robotic systems to map forests for the purpose of automating conservation. Modern CNN-based place recognition systems like NetVLAD have reported promising results, but the datasets used to train and test them are primarily of urban scenes. In this paper, we investigate how well NetVLAD generalizes to forest environments and find that it outperforms state-of-the-art loop closure approaches. Finally, integrating NetVLAD with ORBSLAM2 and evaluating on a novel forest dataset, we find that, although suitable locations for loop closure can be identified, the SLAM system is unable to resolve matched places with feature correspondences. We discuss additional considerations to be addressed in future work to deal with this challenging problem.
91.
Abstract
Research and development of autonomous mobile robotic solutions that can perform several active agricultural tasks (pruning, harvesting, mowing) have been growing. Robots are now used for a variety of tasks such as planting, harvesting, environmental monitoring, and the supply of water and nutrients. To do so, robots need to be able to perform online localization and, if desired, mapping. The most common approach for localization in agricultural applications is based on standalone Global Navigation Satellite System-based systems. However, in many agricultural and forest environments, satellite signals are unavailable or inaccurate, which leads to the need for advanced solutions independent of these signals. Approaches like simultaneous localization and mapping and visual odometry are the most promising for increasing localization reliability and availability. In this context, this work proposes an analysis of the current state of the art of localization and mapping approaches in agriculture and forest environments, together with an overview of the datasets available to develop and test these approaches. Finally, a critical analysis of this research field is performed, with the literature characterized using a variety of metrics. The main conclusion is that few methods can simultaneously achieve the desired goals of scalability, availability, and accuracy, due to the challenges imposed by these harsh environments. In the near future, novel contributions to this field are expected, with the development of more advanced techniques based on 3D localization and on semantic and topological mapping.
92. Xiong L, Deng Z, Huang Y, Du W, Zhao X, Lu C, Tian W. Traffic Intersection Re-Identification Using Monocular Camera Sensors. Sensors 2020; 20:6515. PMID: 33202653. DOI: 10.3390/s20226515
Abstract
Perception of road structures, especially traffic intersections, by visual sensors is an essential task for automated driving. However, compared with intersection detection or visual place recognition, intersection re-identification (intersection re-ID), which strongly affects driving behavior decisions on a given route, has long been neglected by researchers. This paper explores intersection re-ID with a monocular camera sensor. We propose a Hybrid Double-Level re-identification approach that exploits two branches of a deep convolutional neural network to accomplish multiple tasks, including classification of an intersection and its fine attributes, and global localization in topological maps. Furthermore, we propose mixed-loss training for the network to learn the similarity of two intersection images. As no public datasets are available for the intersection re-ID task, building on the RobotCar data we propose a new dataset with carefully labeled intersection attributes, called "RobotCar Intersection", which covers more than 30,000 images of eight intersections in different seasons and times of day. Additionally, we provide another dataset, called "Campus Intersection", consisting of panoramic images of eight intersections on a university campus to verify our topological-map updating strategy. Experimental results demonstrate that our proposed approach achieves promising results in re-ID of both coarse road intersections and their global poses, and is well suited for updating and completing topological maps.
93. Niu J, Qian K. Robust place recognition based on salient landmarks screening and convolutional neural network features. Int J Adv Robot Syst 2020. DOI: 10.1177/1729881420966966
Abstract
In this work, we propose a robust place recognition method for natural environments based on salient landmark screening and convolutional neural network (CNN) features. First, the salient objects in the image are segmented as candidate landmarks. Then, a category screening network is designed to remove specific object types that are not suitable for environmental modeling. Finally, a three-layer CNN is used to obtain highly representative features of the salient landmarks. For similarity measurement, a Siamese network is chosen to calculate the similarity between images. Experiments were conducted on three challenging benchmark place recognition datasets, and superior performance was achieved compared to other state-of-the-art methods, including FABMAP, SeqSLAM, SeqCNNSLAM, and PlaceCNN. Our method obtains the best results on the precision–recall curves, and its average precision reaches 78.43%, the best among the compared methods. This demonstrates that CNN features on the screened salient landmarks are robust against strong viewpoint and condition variations.
94. Beldzik E, Domagalik A, Fafrowicz M, Oginska H, Marek T. Brain networks involved in place recognition based on personal and spatial semantics. Behav Brain Res 2020; 398:112976. PMID: 33148518. DOI: 10.1016/j.bbr.2020.112976
Abstract
Have you ever been to Krakow? If so, then you may recognize the Wawel Royal Castle from a picture due to your personal semantic memory, which stores all autobiographically significant concepts and repeated events of your past. If not, then you might still recognize the Wawel Royal Castle and be able to locate it on a map due to your spatial semantic memory. When recognizing a familiar landmark, how does neural activity depend on your memory related to that place? To address this question, we combined a novel task - the Krakow paradigm - with fMRI. In this task, participants are presented with a set of pictures showing various Krakow landmarks, each followed by two questions - one about its location, and the other about seeing the place in real-life, to trigger spatial and/or personal semantic memory, respectively. Group independent component analysis of fMRI data revealed several brain networks sensitive to the task conditions. Most sensitive was the medial temporal lobe network comprising bilateral hippocampus, parahippocampal, retrosplenial, and angular gyri, as well as distinct frontal areas. In agreement with the contextual continuum perspective, this network exhibited robust stimulus-related activity when the two memory types were combined, medium for spatial memory, and the weakest for baseline condition. The medial prefrontal network showed the same, pronounced deactivation for spatial memory and baseline conditions, yet far less deactivation for places seen in real-life. This effect was interpreted as self-referential processes counterbalancing the suppression of the brain's 'default mode.' In contrast, the motor, frontoparietal, and cingulo-opercular networks exhibited the strongest response-related activity for the spatial condition. These findings indicate that recognizing places based solely on general semantic knowledge requires more evidence accumulation, additional verbal semantics, and greater top-down control. Thus, the study imparts a novel insight into the neural mechanisms of place recognition. The Krakow paradigm has the potential to become a useful tool in future longitudinal or clinical studies.
95. Oertel A, Cieslewski T, Scaramuzza D. Augmenting Visual Place Recognition With Structural Cues. IEEE Robot Autom Lett 2020. DOI: 10.1109/lra.2020.3009077
96. Mao J, Hu X, Zhang L, He X, Milford M. A Bio-Inspired Goal-Directed Visual Navigation Model for Aerial Mobile Robots. J Intell Robot Syst 2020. DOI: 10.1007/s10846-020-01190-4
97. Do H, Hong S, Kim J. Robust Loop Closure Method for Multi-Robot Map Fusion by Integration of Consistency and Data Similarity. IEEE Robot Autom Lett 2020. DOI: 10.1109/lra.2020.3010731
98. Fischer T, Milford M. Event-Based Visual Place Recognition With Ensembles of Temporal Windows. IEEE Robot Autom Lett 2020. DOI: 10.1109/lra.2020.3025505
99. Improving Image Description with Auxiliary Modality for Visual Localization in Challenging Conditions. Int J Comput Vis 2020. DOI: 10.1007/s11263-020-01363-6
100. Burguera A, Bonin-Font F. An Unsupervised Neural Network for Loop Detection in Underwater Visual SLAM. J Intell Robot Syst 2020. DOI: 10.1007/s10846-020-01235-8