1.
Jiang F, Wang W, You H, Jiang S, Meng X, Kim J, Wang S. TS-LCD: Two-Stage Loop-Closure Detection Based on Heterogeneous Data Fusion. Sensors (Basel, Switzerland) 2024; 24:3702. [PMID: 38931487] [PMCID: PMC11207695] [DOI: 10.3390/s24123702] [Received: 05/10/2024] [Revised: 05/30/2024] [Accepted: 06/01/2024] [Indexed: 06/28/2024]
Abstract
Loop-closure detection plays a pivotal role in simultaneous localization and mapping (SLAM): it minimizes cumulative error and keeps the generated map globally consistent. This paper introduces a multi-sensor fusion-based loop-closure detection scheme (TS-LCD) to address the low robustness and inaccurate loop-closure detection that single-sensor systems suffer under varying lighting conditions and in structurally similar environments. The method comprises two components: a timestamp synchronization method based on data processing and interpolation, and a two-stage loop-closure detection scheme based on fusion validation of visual and laser loops. Experimental results on the public KITTI dataset show that the proposed method outperforms the baseline algorithms, reducing the trajectory error (TE) by an average of 2.76% and the relative error (RE) by 1.381 m per 100 m, while improving loop-closure detection efficiency by an average of 15.5%, thereby enhancing the positioning accuracy of the odometry.
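To make the synchronization step concrete, here is a minimal sketch of aligning two sensor streams by interpolating one onto the other's timestamps. The sensor names, rates, and per-axis linear treatment below are illustrative assumptions, not the paper's exact procedure.

```python
# A minimal sketch of timestamp alignment by linear interpolation, in the
# spirit of the synchronization step described above. Rates and the per-axis
# treatment are assumptions for illustration.
import numpy as np

def interpolate_to_timestamps(src_times, src_values, dst_times):
    """Linearly interpolate per-axis sensor values onto target timestamps.

    src_times:  (N,) increasing timestamps of the source sensor
    src_values: (N, D) samples (e.g., x/y/z of a lidar odometry estimate)
    dst_times:  (M,) timestamps of the reference sensor (e.g., the camera)
    Returns an (M, D) array aligned with dst_times.
    """
    src_values = np.asarray(src_values, dtype=float)
    # np.interp handles one dimension at a time, so loop over the D axes.
    return np.stack(
        [np.interp(dst_times, src_times, src_values[:, d])
         for d in range(src_values.shape[1])],
        axis=1,
    )

# Example: a 10 Hz lidar stream resampled onto 30 Hz camera timestamps.
lidar_t = np.arange(0.0, 1.0, 0.1)                 # 10 Hz
lidar_xyz = np.cumsum(np.random.randn(len(lidar_t), 3) * 0.01, axis=0)
cam_t = np.arange(0.0, 0.9, 1.0 / 30.0)            # 30 Hz
aligned = interpolate_to_timestamps(lidar_t, lidar_xyz, cam_t)
```

For orientations, spherical linear interpolation (slerp) on quaternions would replace the per-axis linear step.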
Affiliation(s)
- Fangdi Jiang
- School of Optoelectronic Engineering, Changchun University of Science and Technology, Changchun 130022, China
- Wanqiu Wang
- School of Optoelectronic Engineering, Changchun University of Science and Technology, Changchun 130022, China
- Hongru You
- School of Optoelectronic Engineering, Changchun University of Science and Technology, Changchun 130022, China
- Shuhang Jiang
- School of Optoelectronic Engineering, Changchun University of Science and Technology, Changchun 130022, China
- Xin Meng
- School of Optoelectronic Engineering, Changchun University of Science and Technology, Changchun 130022, China
- Jonghyuk Kim
- Center of Excellence in Cybercrimes and Digital Forensics, Naif Arab University for Security Sciences, Riyadh 11452, Saudi Arabia
- Shifeng Wang
- School of Optoelectronic Engineering, Changchun University of Science and Technology, Changchun 130022, China
- Zhongshan Institute of Changchun University of Science and Technology, Zhongshan 528400, China
2.
Yu X, Zhou B, Chang Z, Qian K, Fang F. MMDF: Multi-Modal Deep Feature Based Place Recognition of Mobile Robots With Applications on Cross-Scene Navigation. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3176731] [Indexed: 11/10/2022]
Affiliation(s)
- Xiang Yu
- School of Automation, Southeast University, Nanjing, China
- Bo Zhou
- School of Automation, Southeast University, Nanjing, China
- Zeqing Chang
- School of Automation, Southeast University, Nanjing, China
- Kun Qian
- School of Automation, Southeast University, Nanjing, China
- Fang Fang
- School of Automation, Southeast University, Nanjing, China
3.
Manzoor S, Joo SH, Kim EJ, Bae SH, In GG, Pyo JW, Kuc TY. 3D Recognition Based on Sensor Modalities for Robotic Systems: A Survey. Sensors (Basel, Switzerland) 2021; 21:7120. [PMID: 34770429] [PMCID: PMC8587961] [DOI: 10.3390/s21217120] [Received: 08/26/2021] [Revised: 10/17/2021] [Accepted: 10/20/2021] [Indexed: 11/16/2022]
Abstract
3D visual recognition is a prerequisite for most autonomous robotic systems operating in the real world. It empowers robots to perform a variety of tasks, such as tracking, understanding the environment, and human-robot interaction. Autonomous robots equipped with 3D recognition can better fulfill their social roles by providing supportive task assistance in professional jobs and effective domestic services. For active assistance, a social robot must recognize its surroundings, including objects and places, to perform tasks efficiently. This article first highlights the value-centric role of social robots in society by presenting recently developed robots and describing their main features. Motivated by the recognition capability these robots require, we then analyze data representation methods, organized by sensor modality, for 3D object and place recognition using deep learning models. Along the way, we delineate the research gaps that remain, summarize 3D recognition datasets, and present performance comparisons. A discussion of future research directions concludes the article. The survey is intended to show how recent deep-learning-based developments in 3D visual recognition across sensor modalities can lay the groundwork for further research, and to serve as a guide for those interested in vision-based robotics applications.
Affiliation(s)
- Tae-Yong Kuc
- Department of Electrical and Computer Engineering, College of Information and Communication Engineering, Sungkyunkwan University, Suwon 16419, Korea
4.
Yin H, Xu X, Wang Y, Xiong R. Radar-to-Lidar: Heterogeneous Place Recognition via Joint Learning. Front Robot AI 2021; 8:661199. [PMID: 34079825] [PMCID: PMC8166203] [DOI: 10.3389/frobt.2021.661199] [Received: 01/30/2021] [Accepted: 03/30/2021] [Indexed: 12/04/2022]
Abstract
Place recognition is critical for both offline mapping and online localization, yet single-sensor place recognition remains challenging in adverse conditions. This paper proposes a heterogeneous-measurement framework for long-term place recognition that retrieves query radar scans from existing lidar (Light Detection and Ranging) maps. A deep neural network is built with joint training in the learning stage; in the testing stage, shared embeddings of radar and lidar are extracted for heterogeneous place recognition. To validate the approach, we conducted tests and generalization experiments on multi-session public datasets and compared against competitive methods. The results indicate that a single jointly trained model supports multiple forms of place recognition: lidar-to-lidar (L2L), radar-to-radar (R2R), and radar-to-lidar (R2L). The source code is publicly available at https://github.com/ZJUYH/radar-to-lidar-place-recognition.
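As a rough illustration of the joint-learning idea, the sketch below applies one shared-weight encoder to both modalities so that radar and lidar scans map into a common embedding space. The CNN architecture, the single-channel bird's-eye-view input format, and the contrastive pairing are assumptions for illustration; the authors' released code is the authoritative reference.

```python
# A minimal PyTorch sketch of joint radar/lidar embedding learning.
# Architecture and loss are illustrative assumptions, not the paper's model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEncoder(nn.Module):
    """One CNN applied to both modalities so embeddings live in one space."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)  # unit-length embeddings

def joint_loss(radar_emb, lidar_emb, margin=0.5):
    """Pull same-place radar/lidar pairs together, push different places apart."""
    pos = (radar_emb - lidar_emb).pow(2).sum(1)                  # matched pairs
    neg = (radar_emb - lidar_emb.roll(1, dims=0)).pow(2).sum(1)  # shifted = mismatched
    return F.relu(pos - neg + margin).mean()

enc = SharedEncoder()
radar = torch.randn(8, 1, 64, 64)   # batch of radar BEV images
lidar = torch.randn(8, 1, 64, 64)   # lidar BEV images of the same places
loss = joint_loss(enc(radar), enc(lidar))
loss.backward()
```

Because both modalities pass through the same weights, retrieval can compare a radar query directly against a lidar map by nearest-neighbor search in the embedding space.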
Affiliation(s)
- Yue Wang
- Institute of Cyber-Systems and Control, College of Control Science and Engineering, Zhejiang University, Hangzhou, China
5.
Chen L, Jin S, Xia Z. Towards a Robust Visual Place Recognition in Large-Scale vSLAM Scenarios Based on a Deep Distance Learning. Sensors (Basel, Switzerland) 2021; 21:310. [PMID: 33466401] [PMCID: PMC7796086] [DOI: 10.3390/s21010310] [Received: 12/16/2020] [Revised: 12/31/2020] [Accepted: 01/04/2021] [Indexed: 11/16/2022]
Abstract
The application of deep learning is blooming in visual place recognition, which plays a critical role in visual Simultaneous Localization and Mapping (vSLAM). Convolutional neural networks (CNNs) achieve better performance than handcrafted feature descriptors, but visual place recognition remains challenging because of two major problems: perceptual aliasing and perceptual variability. Designing a customized distance-learning method that expresses the intrinsic distance constraints of large-scale vSLAM scenarios is therefore important. Traditional deep distance learning usually relies on the triplet loss, which requires mining anchor images and can lead to tedious, inefficient training and anomalous distance relationships. This paper proposes a novel deep distance-learning framework for visual place recognition. Based on an in-depth analysis of the multiple distance constraints in the visual place recognition problem, a multi-constraint loss function is proposed to optimize the distance relationships in Euclidean space. The framework can use any CNN, such as AlexNet, VGGNet, or a user-defined network, to extract more distinguishing features. Compared with the traditional deep distance learning method, the proposed method improves performance by 19–28%. Additionally, compared with some contemporary visual place recognition techniques, it improves performance on average by 40%/36% and 27%/24% on VGGNet/AlexNet for the New College and TUM datasets, respectively. This verifies that the method can handle appearance changes in complex environments.
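To clarify the contrast the abstract draws, the sketch below shows the classical triplet loss alongside one plausible multi-constraint variant that adds absolute distance caps. The specific combination of terms is an illustrative assumption, not the paper's actual loss function.

```python
# Classical triplet loss vs. an illustrative multi-constraint variant.
# The extra absolute-distance terms are an assumption, not the paper's loss.
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Classical triplet loss: require d(a, p) + margin < d(a, n)."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

def multi_constraint_loss(anchor, positive, negative,
                          margin=0.3, pos_cap=0.5, neg_floor=1.0):
    """Triplet term plus absolute constraints: positives within pos_cap of
    the anchor, negatives at least neg_floor away (illustrative only)."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    relative = F.relu(d_pos - d_neg + margin)      # usual ordering constraint
    absolute = F.relu(d_pos - pos_cap) + F.relu(neg_floor - d_neg)
    return (relative + absolute).mean()

a, p, n = (torch.randn(16, 128) for _ in range(3))
print(triplet_loss(a, p, n), multi_constraint_loss(a, p, n))
```

The absolute terms bound the distances themselves rather than only their ordering, which is one way to discourage the anomalous distance relationships that pure triplet training can produce.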
Affiliation(s)
- Liang Chen
- Correspondence: ; Tel.: +86-185-5040-8581
6.
Grzechca D, Ziębiński A, Paszek K, Hanzel K, Giel A, Czerny M, Becker A. How Accurate Can UWB and Dead Reckoning Positioning Systems Be? Comparison to SLAM Using the RPLidar System. Sensors (Basel, Switzerland) 2020; 20:3761. [PMID: 32635591] [PMCID: PMC7374407] [DOI: 10.3390/s20133761] [Received: 05/29/2020] [Revised: 06/25/2020] [Accepted: 07/01/2020] [Indexed: 06/11/2023]
Abstract
This paper compares two positioning systems: ultra-wideband (UWB) micro-location technology combined with dead reckoning, and an RPLidar-based simultaneous localization and mapping (SLAM) solution. This approach can improve the quality of the positioning system and extend the functionality of advanced driver assistance systems (ADAS). It is achieved by placing stationary nodes in the environment and UWB tags on the vehicles, which provides localization redundancy, e.g., as a backup to onboard sensors such as RPLidar or radar. UWB micro-location additionally offers data channels that can be used for communication. Furthermore, it is shown that the regular use of correction data, which can be derived from onboard sensors, increases UWB and dead-reckoning accuracy. These results make it promising to develop a system that fuses onboard sensors with micro-localization for safety-critical tasks such as the platooning of commercial vehicles.
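As a sketch of the kind of fusion discussed here, the following minimal 2-D Kalman filter corrects a drifting dead-reckoning estimate with UWB position fixes. The motion model and noise values are assumptions for illustration, not the configuration used in the paper.

```python
# A minimal 2-D Kalman-filter sketch of UWB / dead-reckoning fusion.
# Noise levels and the displacement-based prediction are assumptions.
import numpy as np

class UwbDrFusion:
    def __init__(self, q=0.05, r=0.3):
        self.x = np.zeros(2)            # estimated position [x, y]
        self.P = np.eye(2)              # estimate covariance
        self.Q = q * np.eye(2)          # dead-reckoning process noise
        self.R = r * np.eye(2)          # UWB measurement noise

    def predict(self, dr_delta):
        """Propagate with the dead-reckoning displacement since last step."""
        self.x = self.x + dr_delta
        self.P = self.P + self.Q

    def update(self, uwb_pos):
        """Correct the drifting dead-reckoning estimate with a UWB fix."""
        K = self.P @ np.linalg.inv(self.P + self.R)   # Kalman gain
        self.x = self.x + K @ (uwb_pos - self.x)
        self.P = (np.eye(2) - K) @ self.P

f = UwbDrFusion()
f.predict(np.array([0.10, 0.02]))       # odometry says we moved ~10 cm
f.update(np.array([0.12, 0.00]))        # UWB tag reports absolute position
print(f.x)
```

Dead reckoning drifts without bound while UWB fixes are noisy but drift-free, so the update step plays the role of the "correction data" the abstract describes.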
Affiliation(s)
- Damian Grzechca
- Department of Electronics, Electrical Engineering and Microelectronics, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
- Adam Ziębiński
- Department of Distributed Systems and Informatic Devices, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
- Krzysztof Paszek
- Department of Electronics, Electrical Engineering and Microelectronics, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
- Krzysztof Hanzel
- Department of Electronics, Electrical Engineering and Microelectronics, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
- Adam Giel
- Department of Electronics, Electrical Engineering and Microelectronics, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
- Marcin Czerny
- Department of Electronics, Electrical Engineering and Microelectronics, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
- Andreas Becker
- Faculty of Information Technology, University of Applied Science and Arts, Sonnenstr. 96, 44139 Dortmund, Germany