1. Sun H, Li Y, Guo H, Luan C, Zhang L, Zheng H, Fan Y. Research on Student's T-Distribution Point Cloud Registration Algorithm Based on Local Features. Sensors (Basel) 2024; 24:4972. PMID: 39124021; PMCID: PMC11314997; DOI: 10.3390/s24154972.
Abstract
LiDAR offers a wide range of uses in autonomous driving, remote sensing, urban planning, and other areas. The laser 3D point cloud acquired by LiDAR typically encounters issues during registration, including laser speckle noise, Gaussian noise, data loss, and data disorder. This work suggests a novel Student's t-distribution point cloud registration algorithm based on the local features of point clouds to address these issues. The approach uses Student's t-distribution mixture model (SMM) to generate the probability distribution of point cloud registration, which can accurately describe the data distribution, in order to tackle the problem of the missing laser 3D point cloud data and data disorder. Owing to the disparity in the point cloud registration task, a full-rank covariance matrix is built based on the local features of the point cloud during the objective function design process. The combined penalty of point-to-point and point-to-plane distance is then added to the objective function adaptively. Simultaneously, by analyzing the imaging characteristics of LiDAR, according to the influence of the laser waveform and detector on the LiDAR imaging, the composite weight coefficient is added to improve the pertinence of the algorithm. Based on the public dataset and the laser 3D point cloud dataset acquired in the laboratory, the experimental findings demonstrate that the proposed algorithm has high practicability and dependability and outperforms the five comparison algorithms in terms of accuracy and robustness.
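The robust-weighting idea behind a Student's t mixture can be illustrated with a minimal sketch (an assumed reconstruction, not the authors' implementation: the degrees of freedom `nu`, scale `sigma`, and both function names are hypothetical). Each correspondence gets the standard t-distribution E-step weight, and a weighted Kabsch solve yields the rigid transform; the heavy tails automatically down-weight speckle-noise outliers.

```python
import numpy as np

def t_weights(residuals, nu=3.0, sigma=1.0):
    """Standard E-step weight of a Student's t model (d = 3 for points):
    large residuals get small weights, so outliers barely affect the fit."""
    d = 3
    r2 = (residuals ** 2).sum(axis=1) / sigma ** 2
    return (nu + d) / (nu + r2)

def weighted_rigid_align(src, dst, w):
    """Weighted Kabsch solve: R, t minimizing sum_i w_i ||R src_i + t - dst_i||^2."""
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(axis=0)
    mu_d = (w[:, None] * dst).sum(axis=0)
    # Weighted cross-covariance between centered source and target points.
    S = (w[:, None] * (src - mu_s)).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(S)
    # Reflection guard keeps R a proper rotation (det R = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

In an EM-style loop one would alternate `t_weights` on the current residuals with `weighted_rigid_align`, which is the sense in which the t mixture "accurately describes the data distribution" under missing and disordered data.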
Affiliation(s)
- Houpeng Sun
- Graduate School, Space Engineering University, Beijing 101416, China; (H.S.); (C.L.)
- Yingchun Li
- Space Engineering University, Beijing 101416, China; (L.Z.); (H.Z.); (Y.F.)
- Huichao Guo
- Space Engineering University, Beijing 101416, China; (L.Z.); (H.Z.); (Y.F.)
- Chenglong Luan
- Graduate School, Space Engineering University, Beijing 101416, China; (H.S.); (C.L.)
- Laixian Zhang
- Space Engineering University, Beijing 101416, China; (L.Z.); (H.Z.); (Y.F.)
- Haijing Zheng
- Space Engineering University, Beijing 101416, China; (L.Z.); (H.Z.); (Y.F.)
- Youchen Fan
- Space Engineering University, Beijing 101416, China; (L.Z.); (H.Z.); (Y.F.)

2. Gao Y, Wang Y, Tian L, Li D, Wang F. Visual Navigation Algorithms for Aircraft Fusing Neural Networks in Denial Environments. Sensors (Basel) 2024; 24:4797. PMID: 39123844; PMCID: PMC11314764; DOI: 10.3390/s24154797.
Abstract
A lightweight aircraft visual navigation algorithm that fuses neural networks is proposed to address the limited computing power issue during the offline operation of aircraft edge computing platforms in satellite-denied environments with complex working scenarios. This algorithm utilizes object detection algorithms to label dynamic objects within complex scenes and performs dynamic feature point elimination to enhance the feature point extraction quality, thereby improving navigation accuracy. The algorithm was validated using an aircraft edge computing platform, and comparisons were made with existing methods through experiments conducted on the TUM public dataset and physical flight experiments. The experimental results show that the proposed algorithm not only improves the navigation accuracy but also has high robustness compared with the monocular ORB-SLAM2 method under the premise of satisfying the real-time operation of the system.
Affiliation(s)
- Yue Wang
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China; (Y.G.); (L.T.); (D.L.); (F.W.)

3. Shao G, Lin F, Li C, Shao W, Chai W, Xu X, Zhang M, Sun Z, Li Q. Multi-Sensor-Assisted Low-Cost Indoor Non-Visual Semantic Map Construction and Localization for Modern Vehicles. Sensors (Basel) 2024; 24:4263. PMID: 39001042; PMCID: PMC11243959; DOI: 10.3390/s24134263.
Abstract
With the transformation and development of the automotive industry, low-cost and seamless indoor and outdoor positioning has become a research hotspot for modern vehicles equipped with in-vehicle infotainment systems, Internet of Vehicles, or other intelligent systems (such as Telematics Box, Autopilot, etc.). This paper analyzes modern vehicles in different configurations and proposes a low-cost, versatile indoor non-visual semantic mapping and localization solution based on low-cost sensors. Firstly, the sliding window-based semantic landmark detection method is designed to identify non-visual semantic landmarks (e.g., entrance/exit, ramp entrance/exit, road node). Then, we construct an indoor non-visual semantic map that includes the vehicle trajectory waypoints, non-visual semantic landmarks, and Wi-Fi fingerprints of RSS features. Furthermore, to estimate the position of modern vehicles in the constructed semantic maps, we propose a graph-optimized localization method based on landmark matching that exploits the correlation between non-visual semantic landmarks. Finally, field experiments are conducted in two shopping mall scenes with different underground parking layouts to verify the proposed non-visual semantic mapping and localization method. The results show that the proposed method achieves a high accuracy of 98.1% in non-visual semantic landmark detection and a low localization error of 1.31 m.
Affiliation(s)
- Guangxiao Shao
- College of Electromechanical Engineering, Qingdao University of Science and Technology, Qingdao 266061, China
- Fanyu Lin
- College of Sino-German Institute Science and Technology, Qingdao University of Science and Technology, Qingdao 266061, China
- Chao Li
- Haier College, Qingdao Technical College, Qingdao 266555, China
- Wei Shao
- College of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao 266061, China
- Wennan Chai
- College of Sino-German Institute Science and Technology, Qingdao University of Science and Technology, Qingdao 266061, China
- Xiaorui Xu
- College of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao 266061, China
- Mingyue Zhang
- College of Sino-German Institute Science and Technology, Qingdao University of Science and Technology, Qingdao 266061, China
- Zhen Sun
- College of Information Science & Technology, Qingdao University of Science and Technology, Qingdao 266061, China
- Qingdang Li
- College of Sino-German Institute Science and Technology, Qingdao University of Science and Technology, Qingdao 266061, China

4. Zhou W, Zhou R. Vision SLAM algorithm for wheeled robots integrating multiple sensors. PLoS One 2024; 19:e0301189. PMID: 38547130; PMCID: PMC10977683; DOI: 10.1371/journal.pone.0301189.
Abstract
Wheeled robots play a crucial role in driving the autonomy and intelligence of robotics. However, they often encounter challenges such as tracking loss and poor real-time performance in low-texture environments. In response to these issues, this research proposes a real-time localization and mapping algorithm based on the fusion of multiple features, utilizing point, line, surface, and matrix decomposition characteristics. Building upon this foundation, the algorithm integrates multiple sensors to design a vision-based real-time localization and mapping algorithm for wheeled robots. The study concludes with experimental validation on a two-wheeled robot platform. The results indicated that the multi-feature fusion algorithm achieved the highest average accuracy in both conventional indoor datasets (84.57%) and sparse-feature indoor datasets (82.37%). In indoor scenarios, the vision-based algorithm integrating multiple sensors achieved an average accuracy of 85.4% with a processing time of 64.4 ms. In outdoor scenarios, the proposed algorithm exhibited a 14.51% accuracy improvement over a vision-based algorithm without closed-loop detection. In summary, the proposed method demonstrated outstanding accuracy and real-time performance, exhibiting favorable application effects across various practical scenarios.
Affiliation(s)
- Weihua Zhou
- School of Computer and Information Technology (School of Big Data), Shanxi University, Taiyuan, 030002, China
- Rougang Zhou
- School of Mechanical Engineering, Hangzhou Dianzi University, Hangzhou, 310018, China

5. Malakouti-Khah H, Sadeghzadeh-Nokhodberiz N, Montazeri A. Simultaneous localization and mapping in a multi-robot system in a dynamic environment with unknown initial correspondence. Front Robot AI 2024; 10:1291672. PMID: 38283801; PMCID: PMC10811797; DOI: 10.3389/frobt.2023.1291672.
Abstract
A basic assumption in most approaches to simultaneous localization and mapping (SLAM) is the static nature of the environment. In recent years, some research has been devoted to the field of SLAM in dynamic environments. However, most of the studies conducted in this field have implemented SLAM by removing and filtering the moving landmarks. Moreover, the use of several robots in large, complex, and dynamic environments can significantly improve performance on the localization and mapping task, which has attracted many researchers to this problem more recently. In multi-robot SLAM, the robots can cooperate in a decentralized manner without the need for a central processing center to obtain their positions and a more precise map of the environment. In this article, a new decentralized approach is presented for multi-robot SLAM problems in dynamic environments with unknown initial correspondence. The proposed method applies a modified Fast-SLAM method, which implements SLAM in a decentralized manner by considering moving landmarks in the environment. Due to the unknown initial correspondence of the robots, a geographical approach is embedded in the proposed algorithm to align and merge their maps. Data association is also embedded in the algorithm; this is performed using the measurement predictions in the SLAM process of each robot. Finally, simulation results are provided to demonstrate the performance of the proposed method.

6. Zhang Y, Li Y, Chen P. TSG-SLAM: SLAM Employing Tight Coupling of Instance Segmentation and Geometric Constraints in Complex Dynamic Environments. Sensors (Basel) 2023; 23:9807. PMID: 38139653; PMCID: PMC10747090; DOI: 10.3390/s23249807.
Abstract
Although numerous effective Simultaneous Localization and Mapping (SLAM) systems have been developed, complex dynamic environments continue to present challenges, such as managing moving objects and enabling robots to comprehend environments. This paper focuses on a visual SLAM method specifically designed for complex dynamic environments. Our approach proposes a dynamic feature removal module based on the tight coupling of instance segmentation and multi-view geometric constraints (TSG). This method seamlessly integrates semantic information with geometric constraint data, using the fundamental matrix as a connecting element. In particular, instance segmentation is performed on frames to eliminate all dynamic and potentially dynamic features, retaining only reliable static features for sequential feature matching and acquiring a dependable fundamental matrix. Subsequently, based on this matrix, true dynamic features are identified and removed by capitalizing on multi-view geometry constraints while preserving reliable static features for further tracking and mapping. An instance-level semantic map of the global scenario is constructed to enhance the perception and understanding of complex dynamic environments. The proposed method is assessed on TUM datasets and in real-world scenarios, demonstrating that TSG-SLAM exhibits superior performance in detecting and eliminating dynamic feature points and obtains good localization accuracy in dynamic environments.
Affiliation(s)
- Yongchao Zhang
- School of Intelligent Manufacturing, Taizhou University, Taizhou 318000, China
- Yuanming Li
- Department of Electrical Engineering, Ganzhou Polytechnic, Ganzhou 341000, China
- School of Electrical and Automation Engineering, East China Jiaotong University, Nanchang 330013, China
- Pengzhan Chen
- School of Intelligent Manufacturing, Taizhou University, Taizhou 318000, China
- School of Electrical and Automation Engineering, East China Jiaotong University, Nanchang 330013, China

7. Hu C, Liu M, Zhang S, Xie Y, Tan L. MoTI: A Multi-Stage Algorithm for Moving Object Identification in SLAM. Sensors (Basel) 2023; 23:7911. PMID: 37765967; PMCID: PMC10537622; DOI: 10.3390/s23187911.
Abstract
Simultaneous localization and mapping (SLAM) algorithms are widely applied in fields such as autonomous driving and target tracking. However, the effect of moving objects on localization and mapping remains a challenge in natural dynamic scenarios. To overcome this challenge, this paper proposes an algorithm for dynamic point cloud detection that fuses laser and visual identification data, the multi-stage moving object identification algorithm (MoTI). The MoTI algorithm consists of two stages: rough processing and precise processing. In the rough processing stage, a statistical method is employed to preliminarily detect dynamic points based on the range image error of the point cloud. In the precise processing stage, the radius search strategy is used to statistically test the nearest neighbor points. Next, visual identification information and point cloud registration results are fused using a method of statistics and information weighting to construct a probability model for identifying whether a point cloud cluster originates from a moving object. The algorithm is integrated into the front-end of the LOAM system, which significantly improves the localization accuracy. The MoTI algorithm is evaluated on an actual indoor dynamic environment and several KITTI datasets, and the results demonstrate its ability to accurately detect dynamic targets in the background and improve the localization accuracy of the robot.
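The rough-processing stage described above — statistically testing range-image errors between aligned scans — can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the MoTI code: the exact statistic is not given in the abstract, and the function name, the Gaussian z-test form, and the threshold `k` are all invented here.

```python
import numpy as np

def flag_dynamic_by_range(range_prev, range_curr, k=3.0):
    """Rough-stage dynamic-point candidates from two aligned range images
    (HxW, metres; NaN marks pixels with no laser return).  A pixel whose
    range change deviates from the global error distribution by more than
    k standard deviations is flagged as a moving-object candidate."""
    err = range_curr - range_prev
    valid = np.isfinite(err)
    mu, sigma = err[valid].mean(), err[valid].std()
    dyn = np.zeros_like(err, dtype=bool)
    dyn[valid] = np.abs(err[valid] - mu) > k * sigma
    return dyn
```

In a full pipeline these candidates would then go to the precise stage (radius search over nearest neighbors plus fused visual identification) before a cluster is finally labeled as moving.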
Affiliation(s)
- Changqing Hu
- School of Information Engineering, Southwest University of Science and Technology, Mianyang 621010, China
- Manlu Liu
- School of Information Engineering, Southwest University of Science and Technology, Mianyang 621010, China
- Robot Technology Used for Special Environment Key Laboratory of Sichuan Province, Southwest University of Science and Technology, Mianyang 621010, China
- Su Zhang
- School of Traffic Transportation Engineering, Central South University, Changsha 410000, China
- Yu Xie
- School of Information Engineering, Southwest University of Science and Technology, Mianyang 621010, China
- Liguo Tan
- Laboratory for Space Environment and Physical Sciences, Harbin Institute of Technology, Harbin 150001, China

8. Bavle H, Sanchez-Lopez JL, Cimarelli C, Tourani A, Voos H. From SLAM to Situational Awareness: Challenges and Survey. Sensors (Basel) 2023; 23:4849. PMID: 37430762; DOI: 10.3390/s23104849.
Abstract
The capability of a mobile robot to efficiently and safely perform complex missions is limited by its knowledge of the environment, namely the situation. Advanced reasoning, decision-making, and execution skills enable an intelligent agent to act autonomously in unknown environments. Situational Awareness (SA) is a fundamental capability of humans that has been deeply studied in various fields, such as psychology, military, aerospace, and education. Nevertheless, it has yet to be considered in robotics, which has focused on single compartmentalized concepts such as sensing, spatial perception, sensor fusion, state estimation, and Simultaneous Localization and Mapping (SLAM). Hence, the present research aims to connect the broad multidisciplinary existing knowledge to pave the way for a complete SA system for mobile robotics that we deem paramount for autonomy. To this aim, we define the principal components to structure a robotic SA and their area of competence. Accordingly, this paper investigates each aspect of SA, surveying the state-of-the-art robotics algorithms that cover them, and discusses their current limitations. Remarkably, essential aspects of SA are still immature since the current algorithmic development restricts their performance to only specific environments. Nevertheless, Artificial Intelligence (AI), particularly Deep Learning (DL), has brought new methods to bridge the gap that maintains these fields apart from the deployment to real-world scenarios. Furthermore, an opportunity has been discovered to interconnect the vastly fragmented space of robotic comprehension algorithms through the mechanism of Situational Graph (S-Graph), a generalization of the well-known scene graph. Therefore, we finally shape our vision for the future of robotic situational awareness by discussing interesting recent research directions.
Affiliation(s)
- Hriday Bavle
- Interdisciplinary Center for Security Reliability and Trust (SnT), University of Luxembourg, 1855 Luxembourg, Luxembourg
- Jose Luis Sanchez-Lopez
- Interdisciplinary Center for Security Reliability and Trust (SnT), University of Luxembourg, 1855 Luxembourg, Luxembourg
- Claudio Cimarelli
- Interdisciplinary Center for Security Reliability and Trust (SnT), University of Luxembourg, 1855 Luxembourg, Luxembourg
- Ali Tourani
- Interdisciplinary Center for Security Reliability and Trust (SnT), University of Luxembourg, 1855 Luxembourg, Luxembourg
- Holger Voos
- Interdisciplinary Center for Security Reliability and Trust (SnT), University of Luxembourg, 1855 Luxembourg, Luxembourg
- Department of Engineering, Faculty of Science, Technology, and Medicine (FSTM), University of Luxembourg, 1359 Luxembourg, Luxembourg

9. Tourani A, Bavle H, Sanchez-Lopez JL, Voos H. Visual SLAM: What Are the Current Trends and What to Expect? Sensors (Basel) 2022; 22:9297. PMID: 36501998; PMCID: PMC9735432; DOI: 10.3390/s22239297.
Abstract
In recent years, Simultaneous Localization and Mapping (SLAM) systems have shown significant performance, accuracy, and efficiency gain. In this regard, Visual Simultaneous Localization and Mapping (VSLAM) methods refer to the SLAM approaches that employ cameras for pose estimation and map reconstruction and are preferred over Light Detection And Ranging (LiDAR)-based methods due to their lighter weight, lower acquisition costs, and richer environment representation. Hence, several VSLAM approaches have evolved using different camera types (e.g., monocular or stereo), and have been tested on various datasets (e.g., Technische Universität München (TUM) RGB-D or European Robotics Challenge (EuRoC)) and in different conditions (i.e., indoors and outdoors), and employ multiple methodologies to have a better understanding of their surroundings. The mentioned variations have made this topic popular for researchers and have resulted in various methods. In this regard, the primary intent of this paper is to assimilate the wide range of works in VSLAM and present their recent advances, along with discussing the existing challenges and trends. This survey is worthwhile to give a big picture of the current focuses in robotics and VSLAM fields based on the concentrated resolutions and objectives of the state-of-the-art. This paper provides an in-depth literature survey of fifty impactful articles published in the VSLAMs domain. The mentioned manuscripts have been classified by different characteristics, including the novelty domain, objectives, employed algorithms, and semantic level. The paper also discusses the current trends and contemporary directions of VSLAM techniques that may help researchers investigate them.
Affiliation(s)
- Ali Tourani
- Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg, 1855 Luxembourg, Luxembourg
- Hriday Bavle
- Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg, 1855 Luxembourg, Luxembourg
- Jose Luis Sanchez-Lopez
- Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg, 1855 Luxembourg, Luxembourg
- Holger Voos
- Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg, 1855 Luxembourg, Luxembourg
- Faculty of Science, Technology and Medicine (FSTM), Department of Engineering, University of Luxembourg, 1359 Luxembourg, Luxembourg

10. DGFlow-SLAM: A Novel Dynamic Environment RGB-D SLAM without Prior Semantic Knowledge Based on Grid Segmentation of Scene Flow. Biomimetics (Basel) 2022; 7:163. PMID: 36278720; PMCID: PMC9590065; DOI: 10.3390/biomimetics7040163.
Abstract
Currently, using semantic segmentation networks to distinguish dynamic and static key points has become a mainstream designing method for semantic SLAM systems. However, the semantic SLAM systems must have prior semantic knowledge of relevant dynamic objects, and their processing speed is inversely proportional to the recognition accuracy. To simultaneously enhance the speed and accuracy for recognizing dynamic objects in different environments, a novel SLAM system without prior semantics called DGFlow-SLAM is proposed in this paper. A novel grid segmentation method is used in the system to segment the scene flow, and then an adaptive threshold method is used to roughly detect the dynamic objects. Based on this, a depth mean clustering segmentation method is applied to find potential dynamic targets. Finally, the results of grid segmentation and depth mean clustering segmentation are jointly used to find moving objects accurately, and all the feature points of the moving objects are removed on the premise of retaining the static part of the moving object. The experimental results show that on the dynamic sequence dataset of TUM RGB-D, compared with the DynaSLAM system with the highest accuracy for detecting moderate and violent motion and the DS-SLAM with the highest accuracy for detecting slight motion, DGFlow-SLAM obtains similar accuracy results and improves the accuracy by 7.5%. In addition, DGFlow-SLAM is 10 times and 1.27 times faster than DynaSLAM and DS-SLAM, respectively.
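The grid segmentation with an adaptive threshold described above can be illustrated with a toy sketch. This is an assumption-laden reconstruction rather than the published code: the cell size `grid`, the `mean + k*std` threshold form, and the function name are all invented for illustration.

```python
import numpy as np

def flag_dynamic_cells(flow_mag, grid=8, k=1.5):
    """Rough dynamic-object detection on a scene-flow magnitude map (HxW).
    The image is split into a grid x grid array of cells; a cell whose mean
    flow exceeds an adaptive threshold (mean + k*std over all cell means)
    is flagged as a dynamic-object candidate."""
    H, W = flow_mag.shape
    gh, gw = H // grid, W // grid
    # Block-decompose the (cropped) map into grid x grid cells of gh x gw pixels.
    cells = flow_mag[:gh * grid, :gw * grid].reshape(grid, gh, grid, gw)
    cell_mean = cells.mean(axis=(1, 3))
    thresh = cell_mean.mean() + k * cell_mean.std()
    return cell_mean > thresh
```

Flagged cells would then be refined (here, by depth mean clustering) so that only the feature points on the truly moving parts are discarded.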

11. Ma Y, Zhu J, Tian Z, Li Z. Effective multiview registration of point clouds based on Student's-t mixture model. Inf Sci (N Y) 2022. DOI: 10.1016/j.ins.2022.06.006.

12. Real-Time Artificial Intelligence Based Visual Simultaneous Localization and Mapping in Dynamic Environments – a Review. J Intell Robot Syst 2022. DOI: 10.1007/s10846-022-01643-y.

13. DGS-SLAM: A Fast and Robust RGBD SLAM in Dynamic Environments Combined by Geometric and Semantic Information. Remote Sensing 2022. DOI: 10.3390/rs14030795.
Abstract
Visual Simultaneous Localization and Mapping (VSLAM) is a prerequisite for robots to accomplish fully autonomous movement and exploration in unknown environments. At present, many impressive VSLAM systems have emerged, but most of them rely on the static world assumption, which limits their application in real dynamic scenarios. To improve the robustness and efficiency of the system in dynamic environments, this paper proposes a dynamic RGBD SLAM based on a combination of geometric and semantic information (DGS-SLAM). First, a dynamic object detection module based on the multinomial residual model is proposed, which executes the motion segmentation of the scene by combining the motion residual information of adjacent frames and the potential motion information of the semantic segmentation module. Second, a camera pose tracking strategy using feature point classification results is designed to achieve robust system tracking. Finally, according to the results of dynamic segmentation and camera tracking, a semantic segmentation module based on a semantic frame selection strategy is designed for extracting potential moving targets in the scene. Extensive evaluation in public TUM and Bonn datasets demonstrates that DGS-SLAM has higher robustness and speed than state-of-the-art dynamic RGB-D SLAM systems in dynamic scenes.

14. Yadav R, Kala R. Fusion of visual odometry and place recognition for SLAM in extreme conditions. Appl Intell 2022. DOI: 10.1007/s10489-021-03050-6.

15. VINS-dimc: A Visual-Inertial Navigation System for Dynamic Environment Integrating Multiple Constraints. ISPRS Int J Geo-Inf 2022. DOI: 10.3390/ijgi11020095.
Abstract
Most visual–inertial navigation systems (VINSs) suffer from moving objects and achieve poor positioning accuracy in dynamic environments. Therefore, to improve the positioning accuracy of VINS in dynamic environments, a monocular visual–inertial navigation system, VINS-dimc, is proposed. This system integrates various constraints on the elimination of dynamic feature points, which helps to improve the positioning accuracy of VINSs in dynamic environments. First, the motion model, computed from the inertial measurement unit (IMU) data, is subjected to epipolar constraint and flow vector bound (FVB) constraint to eliminate feature matching that deviates significantly from the motion model. This algorithm then combines multiple feature point matching constraints that avoid the lack of single constraints and make the system more robust and universal. Finally, VINS-dimc was proposed, which can adapt to a dynamic environment. Experiments show that the proposed algorithm could accurately eliminate the dynamic feature points on moving objects while preserving the static feature points. It is a great help for the positioning accuracy and robustness of VINSs, whether they are from self-collected data or public datasets.
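The epipolar constraint used above to reject dynamic feature matches can be sketched in a few lines (an illustrative sketch, not the VINS-dimc implementation: the function names and pixel threshold are assumptions, and the fundamental matrix `F` is taken as given, e.g. predicted from the IMU motion model). A static-scene match must lie on its epipolar line; matches that land far from it violate the rigid motion model and are treated as dynamic.

```python
import numpy as np

def epipolar_distances(F, pts1, pts2):
    """Distance of each second-image point to the epipolar line l' = F x
    of its first-image match.  pts1, pts2: Nx2 matched coordinates;
    F: 3x3 fundamental matrix from the (assumed known) motion model."""
    h1 = np.hstack([pts1, np.ones((len(pts1), 1))])
    h2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    lines = h1 @ F.T                              # epipolar lines in image 2
    num = np.abs((h2 * lines).sum(axis=1))        # |x2 . l'|
    return num / np.linalg.norm(lines[:, :2], axis=1)

def flag_dynamic_matches(F, pts1, pts2, thresh_px=2.0):
    """Matches deviating from their epipolar line beyond the threshold
    are candidates for elimination as dynamic feature points."""
    return epipolar_distances(F, pts1, pts2) > thresh_px
```

The flow vector bound (FVB) check in the paper would be applied on top of this, bounding how far along the epipolar line a static feature may move.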

16. Wang J, Xu M, Foroughi F, Dai D, Chen Z. FasterGICP: Acceptance-Rejection Sampling Based 3D Lidar Odometry. IEEE Robot Autom Lett 2022. DOI: 10.1109/lra.2021.3124072.

17.
18. Lin S, Wang J, Xu M, Zhao H, Chen Z. Topology Aware Object-Level Semantic Mapping Towards More Robust Loop Closure. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2021.3097242.