1
Zheng F, Zhou L, Lin W, Liu J, Sun L. LRPL-VIO: A Lightweight and Robust Visual-Inertial Odometry with Point and Line Features. Sensors (Basel). 2024;24(4):1322. PMID: 38400480; PMCID: PMC10892506; DOI: 10.3390/s24041322.
Abstract
Visual-inertial odometry (VIO) algorithms that fuse multiple feature types, such as points and lines, can improve performance in challenging scenes, but their running time increases severely. In this paper, we propose a novel lightweight point-line visual-inertial odometry algorithm, called LRPL-VIO, to solve this problem. First, a fast line matching method is proposed based on the assumption that the photometric values of endpoints and midpoints are invariant between consecutive frames, which greatly reduces the time consumption of the front end. Then, an efficient filter-based state estimation framework is designed to fuse point, line, and inertial information. Fresh line measurements with good tracking quality are selected for state estimation using a unique feature selection scheme, which improves the efficiency of the algorithm. Finally, validation experiments on public datasets and in real-world tests show that LRPL-VIO outperforms other state-of-the-art algorithms, especially in terms of speed and robustness.
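The endpoint/midpoint photometric-invariance idea can be sketched as follows. This is a toy illustration, not LRPL-VIO's actual implementation: the helper names, the nearest-pixel sampling, and the simple intensity-difference gate are all hypothetical.

```python
import numpy as np

def sample_intensity(img, pt):
    """Nearest-pixel intensity lookup; pt is (x, y)."""
    r, c = int(round(pt[1])), int(round(pt[0]))
    return float(img[r, c])

def line_photometric_match(img1, img2, line1, candidates, tol=10.0):
    """Match a line from img1 to one of the candidate lines in img2 by
    comparing intensities at the two endpoints and the midpoint,
    assuming photometric invariance between consecutive frames.
    Returns the index of the best candidate, or None if all fail tol."""
    p1, p2 = np.asarray(line1[0], float), np.asarray(line1[1], float)
    ref = [sample_intensity(img1, p1),
           sample_intensity(img1, p2),
           sample_intensity(img1, 0.5 * (p1 + p2))]
    best, best_err = None, tol
    for idx, (q1, q2) in enumerate(candidates):
        q1, q2 = np.asarray(q1, float), np.asarray(q2, float)
        cur = [sample_intensity(img2, q1),
               sample_intensity(img2, q2),
               sample_intensity(img2, 0.5 * (q1 + q2))]
        # worst-case photometric change over the three sample points
        err = max(abs(a - b) for a, b in zip(ref, cur))
        if err < best_err:
            best, best_err = idx, err
    return best
```

Because only three intensities per candidate are compared, the cost per line is constant, which is the source of the front-end speedup the abstract describes.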
Affiliation(s)
- Feixiang Zheng
- College of Artificial Intelligence, Nankai University, Tianjin 300350, China
- Lu Zhou
- College of Artificial Intelligence, Nankai University, Tianjin 300350, China
- Wanbiao Lin
- Shenzhen Research Institute, Nankai University, Shenzhen 518081, China
- Jingyang Liu
- College of Artificial Intelligence, Nankai University, Tianjin 300350, China
- Lei Sun
- College of Artificial Intelligence, Nankai University, Tianjin 300350, China
2
Usman M, Ali A, Tahir A, Rahman MZU, Khan AM. Efficient Approach for Extracting High-Level B-Spline Features from LIDAR Data for Light-Weight Mapping. Sensors (Basel). 2022;22(23):9168. PMID: 36501874; PMCID: PMC9737135; DOI: 10.3390/s22239168.
Abstract
High-level feature extraction from sensor readings makes light-weight and accurate mapping possible. In this paper, high-level B-spline features are extracted from 2D LIDAR data with a faster method as a solution to the mapping problem, making it possible for the robot to interact with its environment while navigating. The computation time of feature extraction is crucial when mobile robots perform real-time tasks. In addition to the existing assessment measures for B-spline feature extraction methods, the paper introduces a new benchmark time metric for evaluating how well the extracted features perform. For point-to-point association, the most reliable vertex control points of the spline features, generated from the hints of the low-level point feature detector FALKO, were chosen. Three standard indoor data sets and one outdoor data set were used for the experiments. The experimental results based on benchmark performance metrics, specifically computation time, show that the presented approach outperforms state-of-the-art methods for extracting B-spline features. A classification of the methods used for B-spline feature detection and the corresponding algorithms are also presented in the paper.
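As a rough illustration of what a B-spline feature over one ordered LIDAR scan segment looks like, the sketch below fits a parametric cubic spline with SciPy and exposes its control points (the quantities used for association above). This is a generic sketch under assumed names (`fit_bspline`, its parameters), not the paper's pipeline.

```python
import numpy as np
from scipy import interpolate

def fit_bspline(points, smoothing=0.0, degree=3):
    """Fit a parametric B-spline to an ordered 2-D point set (e.g. one
    segment of a LIDAR scan) and return the spline representation plus
    its control points, the high-level feature used for mapping."""
    pts = np.asarray(points, float)
    # splprep parameterizes the curve and fits x(u), y(u) jointly
    tck, u = interpolate.splprep([pts[:, 0], pts[:, 1]],
                                 s=smoothing, k=degree)
    knots, ctrl, k = tck
    control_points = np.stack(ctrl, axis=1)  # (n_ctrl, 2) control polygon
    return tck, control_points
```

The fitted curve can then be evaluated anywhere along its parameter, e.g. `x_mid, y_mid = interpolate.splev(0.5, tck)`, and the control points serve as the sparse map representation.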
Affiliation(s)
- Muhammad Usman
- Department of Mechanical, Mechatronics, and Manufacturing Engineering, University of Engineering & Technology, Faisalabad Campus, Faisalabad 38000, Pakistan
- Ahmad Ali
- Department of Mechanical, Mechatronics, and Manufacturing Engineering, University of Engineering & Technology, Faisalabad Campus, Faisalabad 38000, Pakistan
- Abdullah Tahir
- Department of Mechanical, Mechatronics, and Manufacturing Engineering, University of Engineering & Technology, Faisalabad Campus, Faisalabad 38000, Pakistan
- Muhammad Zia Ur Rahman
- Department of Mechanical, Mechatronics, and Manufacturing Engineering, University of Engineering & Technology, Faisalabad Campus, Faisalabad 38000, Pakistan
- Abdul Manan Khan
- Department of Mechanical Engineering, Hanbat National University, Daejeon 34158, Republic of Korea
3
Liu X, Cao Z, Yu Y, Ren G, Yu J, Tan M. Robot Navigation Based on Situational Awareness. IEEE Trans Cogn Dev Syst. 2022. DOI: 10.1109/tcds.2021.3075862.
Affiliation(s)
- Xilong Liu
- State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Zhiqiang Cao
- State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Yingying Yu
- State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Guangli Ren
- State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Junzhi Yu
- Department of Mechanics and Engineering Science, State Key Laboratory for Turbulence and Complex System, BIC-ESAT, College of Engineering, Peking University, Beijing, China
- Min Tan
- State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
4
Dong B, Zhang K. A Tightly Coupled Visual-Inertial GNSS State Estimator Based on Point-Line Feature. Sensors (Basel). 2022;22(9):3391. PMID: 35591081; PMCID: PMC9102579; DOI: 10.3390/s22093391.
Abstract
Visual-inertial odometry (VIO) is known to suffer from drift and can only provide local coordinates. In this paper, we propose a tightly coupled GNSS-VIO system based on point-line features for robust and drift-free state estimation. Feature-based methods are not robust in complex areas such as those with weak or repeated textures; to deal with this problem, line features, which carry more environmental structure information, can be extracted. In addition, to eliminate the accumulated drift of VIO, we tightly fuse the GNSS measurements with visual and inertial information. GNSS pseudorange measurements are real-time and unambiguous but suffer from large errors, while GNSS carrier phase measurements can achieve centimeter-level positioning accuracy, although resolving the whole-cycle ambiguity is complex and time-consuming, which degrades the real-time performance of a state estimator. To combine the advantages of the two measurements, we use the carrier-phase-smoothed pseudorange instead of the raw pseudorange for state estimation. Furthermore, the presence of both a GNSS receiver and an IMU makes extrinsic parameter calibration crucial; our proposed system can calibrate the extrinsic translation between the GNSS receiver and the IMU in real time. Finally, we show that the states represented in the ECEF frame are fully observable and that the tightly coupled GNSS-VIO state estimator is consistent. Experiments conducted on public datasets demonstrate that the positioning precision of our system is improved and that the system is robust and real-time capable.
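Carrier-phase smoothing of pseudorange is commonly realized with a Hatch filter, which blends the noisy pseudorange with the precise epoch-to-epoch change in carrier phase. The sketch below shows that classic recursion; the window length and variable names are illustrative, and the paper's exact smoother may differ (e.g. in cycle-slip handling).

```python
def hatch_filter(pseudoranges, carrier_phases, window=100):
    """Carrier-phase smoothing of pseudorange (Hatch filter).
    pseudoranges and carrier_phases are per-epoch values in meters;
    the carrier phase supplies the precise range *change*, while the
    pseudorange anchors the absolute (unambiguous) range."""
    smoothed = [float(pseudoranges[0])]
    for k in range(1, len(pseudoranges)):
        n = min(k + 1, window)  # effective averaging length
        # propagate last smoothed range with the carrier-phase delta
        predicted = smoothed[-1] + (carrier_phases[k] - carrier_phases[k - 1])
        # blend: 1/n weight on the raw pseudorange, (n-1)/n on prediction
        smoothed.append(pseudoranges[k] / n + predicted * (n - 1) / n)
    return smoothed
```

The finite window bounds the influence of carrier-phase divergence (e.g. ionospheric effects) while still averaging down pseudorange noise.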
Affiliation(s)
- Bo Dong
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Kai Zhang
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Research Institute of Tsinghua, Pearl River Delta, Guangzhou 510530, China
5
Xu B, Wang P, He Y, Chen Y, Chen Y, Zhou M. Leveraging Structural Information to Improve Point Line Visual-Inertial Odometry. IEEE Robot Autom Lett. 2022. DOI: 10.1109/lra.2022.3146893.
6
Lim H, Jeon J, Myung H. UV-SLAM: Unconstrained Line-Based SLAM Using Vanishing Points for Structural Mapping. IEEE Robot Autom Lett. 2022. DOI: 10.1109/lra.2022.3140816.
7
Wen T, Jiang K, Miao J, Wijaya B, Jia P, Yang M, Yang D. Roadside HD Map Object Reconstruction Using Monocular Camera. IEEE Robot Autom Lett. 2022. DOI: 10.1109/lra.2022.3185367.
Affiliation(s)
- Tuopu Wen
- School of Vehicle and Mobility, Tsinghua University, Beijing, China
- Kun Jiang
- School of Vehicle and Mobility, Tsinghua University, Beijing, China
- Jinyu Miao
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
- Benny Wijaya
- School of Vehicle and Mobility, Tsinghua University, Beijing, China
- Peijin Jia
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
- Mengmeng Yang
- School of Vehicle and Mobility, Tsinghua University, Beijing, China
- Diange Yang
- School of Vehicle and Mobility, Tsinghua University, Beijing, China
8
Wu J, Xiong J, Guo H. Enforcing Regularities between Planes Using Key Plane for Monocular Mesh-based VIO. J Intell Robot Syst. 2021. DOI: 10.1007/s10846-021-01529-5.
9
Wang Q, Yan Z, Wang J, Xue F, Ma W, Zha H. Line Flow Based Simultaneous Localization and Mapping. IEEE Trans Robot. 2021. DOI: 10.1109/tro.2021.3061403.
10
Zhou F, Zhang L, Deng C, Fan X. Improved Point-Line Feature Based Visual SLAM Method for Complex Environments. Sensors (Basel). 2021;21(13):4604. PMID: 34283161; PMCID: PMC8272192; DOI: 10.3390/s21134604.
Abstract
Traditional visual simultaneous localization and mapping (SLAM) systems rely on point features to estimate camera trajectories. However, feature-based systems are usually not robust in complex environments such as those with weak textures or obvious brightness changes. To solve this problem, we exploited more of the environment's structural information by introducing line segment features and designed a monocular visual SLAM system. This system combines points and line segments to effectively make up for the shortcomings of traditional positioning based only on point features. First, an ORB algorithm based on a local adaptive threshold was proposed. Subsequently, we not only optimized the extracted line features but also added a screening step before the traditional descriptor matching to combine the point feature matching results with the line feature matching. Finally, a weighting scheme was introduced: when constructing the optimized cost function, we allocated weights according to the richness and dispersion of the features. Our evaluation on publicly available datasets demonstrates that the improved point-line feature method is competitive with state-of-the-art methods. In addition, the trajectory graph shows significantly reduced drift and loss, which indicates that our system increases the robustness of SLAM.
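The weight-allocation idea can be illustrated with a toy combined cost over point and line residuals. Weighting purely by relative feature counts, as below, is a hypothetical stand-in for the paper's richness-and-dispersion measure; the function name is also assumed.

```python
def combined_cost(point_residuals, line_residuals):
    """Weighted point+line least-squares cost. Weights are allocated by
    feature richness, here simply each group's share of the total
    feature count (a simplified proxy for richness/dispersion)."""
    n_p, n_l = len(point_residuals), len(line_residuals)
    total = n_p + n_l
    w_p, w_l = n_p / total, n_l / total
    # weighted sum of squared reprojection residuals, per feature type
    return (w_p * sum(float(r) ** 2 for r in point_residuals)
            + w_l * sum(float(r) ** 2 for r in line_residuals))
```

In a real system the two residual groups would be vectors of reprojection errors fed into a nonlinear solver; the point here is only how the two terms are balanced.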
Affiliation(s)
- Fei Zhou
- College of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Limin Zhang
- College of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Correspondence:
- Chaolong Deng
- College of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Xinyue Fan
- College of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Intelligent Terminal Key Laboratory of Sichuan Province, Yibin 644000, China
11
Monocular Visual SLAM with Points and Lines for Ground Robots in Particular Scenes: Parameterization for Lines on Ground. J Intell Robot Syst. 2021. DOI: 10.1007/s10846-021-01315-3.
12
Li G, Zeng Y, Huang H, Song S, Liu B, Liao X. A Multi-Feature Fusion SLAM System Attaching Semantic Invariant to Points and Lines. Sensors (Basel). 2021;21(4):1196. PMID: 33567708; PMCID: PMC7916065; DOI: 10.3390/s21041196.
Abstract
Traditional simultaneous localization and mapping (SLAM) systems use static points in the environment as features for real-time localization and mapping. When few point features are available, such systems are difficult to run reliably. A feasible solution is to introduce line features. In complex scenarios containing rich line segments, however, line segment descriptors are not strongly discriminative, which can lead to incorrect association of line segment data, introducing errors into the system and aggravating its cumulative error. To address this problem, a point-line stereo visual SLAM system incorporating semantic invariants is proposed in this paper. The system improves the accuracy of line feature matching by fusing line features with semantically invariant image information. When defining the error function, the semantic invariant is fused with the reprojection error function, and the semantic constraint is applied to reduce the cumulative pose error during long-term tracking. Experiments on the Office sequence of the TartanAir dataset and on the KITTI dataset show that the system improves the matching accuracy of line features and suppresses the cumulative error of the SLAM system to some extent, with mean relative pose errors (RPE) of 1.38 m and 0.0593 m, respectively.
Affiliation(s)
- Gang Li
- College of Electrical Engineering, Guangxi University, Nanning 530000, China
- Yawen Zeng
- College of Electrical Engineering, Guangxi University, Nanning 530000, China
- Huilan Huang
- College of Mechanical Engineering, Guangxi University, Nanning 530000, China
- Correspondence:
- Shaojian Song
- College of Electrical Engineering, Guangxi University, Nanning 530000, China
- Bin Liu
- College of Electrical Engineering, Guangxi University, Nanning 530000, China
- College of Automation, Central South University, Changsha 410083, China
- Xiang Liao
- College of Electrical Engineering, Guangxi University, Nanning 530000, China
13
Zhang X, Liu Q, Zheng B, Wang H, Wang Q. A visual simultaneous localization and mapping approach based on scene segmentation and incremental optimization. Int J Adv Robot Syst. 2020. DOI: 10.1177/1729881420977669.
Abstract
Existing visual simultaneous localization and mapping (V-SLAM) algorithms are usually sensitive to environments with sparse landmarks and to large view transformations caused by camera motion; as the matching rate of feature points decreases, they are liable to generate large pose errors that lead to tracking failures. To address these problems, this article proposes an improved V-SLAM method based on scene segmentation and an incremental optimization strategy. In the front end, we propose a scene segmentation algorithm that considers the camera's motion direction and angle. By segmenting the trajectory and adding the camera motion direction to the tracking thread, an effective prediction model of camera motion is realized for scenes with sparse landmarks and large view transformations. In the back end, we propose an incremental optimization method that combines segmentation information with an optimization method for the tracking prediction model. By incrementally adding state parameters and reusing previously computed results, high-precision estimates of the camera trajectory and feature points are obtained at a satisfactory computing speed. The performance of our algorithm is evaluated on two well-known datasets: TUM RGB-D and NYUDv2 RGB-D. The experimental results demonstrate that our method improves computational efficiency over state-of-the-art V-SLAM systems by 10.2% on a desktop platform and by 22.4% on an embedded platform, while its robustness on the TUM RGB-D dataset is better than that of ORB-SLAM2.
Affiliation(s)
- Xiaoguo Zhang
- School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Qihan Liu
- School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Bingqing Zheng
- School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Huiqing Wang
- School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Qing Wang
- School of Instrument Science and Engineering, Southeast University, Nanjing, China
14
Han J, Dong R, Kan J. A novel loop closure detection method with the combination of points and lines based on information entropy. J Field Robot. 2020. DOI: 10.1002/rob.21992.
Affiliation(s)
- Junyu Han
- School of Technology, Beijing Forestry University, Beijing, China
- Key Lab of State Forestry and Grassland Administration on Forestry Equipment and Automation, Beijing, China
- Ruifang Dong
- School of Technology, Beijing Forestry University, Beijing, China
- Key Lab of State Forestry and Grassland Administration on Forestry Equipment and Automation, Beijing, China
- Jiangming Kan
- School of Technology, Beijing Forestry University, Beijing, China
- Key Lab of State Forestry and Grassland Administration on Forestry Equipment and Automation, Beijing, China
15
Li X, Li Y, Ornek EP, Lin J, Tombari F. Co-Planar Parametrization for Stereo-SLAM and Visual-Inertial Odometry. IEEE Robot Autom Lett. 2020. DOI: 10.1109/lra.2020.3027230.
16
Liu J, Meng Z. Visual SLAM With Drift-Free Rotation Estimation in Manhattan World. IEEE Robot Autom Lett. 2020. DOI: 10.1109/lra.2020.3014648.
17
Improved Point-Line Visual-Inertial Odometry System Using Helmert Variance Component Estimation. Remote Sens. 2020;12(18):2901. DOI: 10.3390/rs12182901.
Abstract
Mobile-platform visual image sequences inevitably contain large areas with various types of weak texture, which affect the acquisition of accurate poses as the platform moves. Visual-inertial odometry (VIO) that uses both point features and line features as visual information performs well in weak-texture environments and can solve these problems to a certain extent. However, the extraction and matching of line features are time consuming, and reasonable weights between point and line features are hard to estimate, which makes it difficult to accurately track the pose of the platform in real time. To overcome this deficiency, an improved and efficient point-line visual-inertial odometry system is proposed in this paper, which uses the geometric information of line features combined with a pixel correlation coefficient to match the line features. Furthermore, the system uses the Helmert variance component estimation method to adjust the weights between point features and line features. Comprehensive experimental results on the EuRoc MAV and PennCOSYVIO datasets demonstrate that the point-line visual-inertial odometry system developed in this paper achieves significant improvements in both localization accuracy and efficiency compared with several state-of-the-art VIO systems.
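Helmert variance component estimation can be sketched for the simplest case of two observation groups sharing the same parameters: alternately solve the weighted least squares and re-estimate each group's variance factor from its residuals and redundancy. This is a generic textbook sketch of the weighting scheme, not the paper's implementation; the function name and the linear-model setup are assumptions.

```python
import numpy as np

def helmert_vce(A1, y1, A2, y2, iters=30):
    """Two-group Helmert variance component estimation for the linear
    models y_i = A_i x + noise, where group i has unknown variance
    component s_i. Returns the estimate x and the variance components
    (relative weights between the groups are 1/s_1 and 1/s_2)."""
    s1 = s2 = 1.0
    for _ in range(iters):
        N1, N2 = A1.T @ A1 / s1, A2.T @ A2 / s2   # per-group normal matrices
        N = N1 + N2
        x = np.linalg.solve(N, A1.T @ y1 / s1 + A2.T @ y2 / s2)
        v1, v2 = A1 @ x - y1, A2 @ x - y2          # residuals per group
        Ninv = np.linalg.inv(N)
        # redundancy: observations minus each group's share of the fit
        r1 = len(y1) - np.trace(N1 @ Ninv)
        r2 = len(y2) - np.trace(N2 @ Ninv)
        # a-posteriori variance factors rescale the components
        s1 *= float(v1 @ v1 / s1) / r1
        s2 *= float(v2 @ v2 / s2) / r2
    return x, s1, s2
```

In a point-line VIO this is what lets the noisier feature type (typically lines) receive a smaller weight automatically instead of a hand-tuned one.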
18
Zou Y, Eldemiry A, Li Y, Chen W. Robust RGB-D SLAM Using Point and Line Features for Low Textured Scene. Sensors (Basel). 2020;20(17):4984. PMID: 32887486; PMCID: PMC7506666; DOI: 10.3390/s20174984.
Abstract
Three-dimensional (3D) reconstruction using an RGB-D camera, which provides simultaneous color and depth information, is attractive as it can significantly reduce equipment cost and data-collection time. Point features are commonly used for aligning two RGB-D frames, but for lack of reliable point features, RGB-D simultaneous localization and mapping (SLAM) easily fails in low-textured scenes. To overcome this problem, this paper proposes a robust RGB-D SLAM system fusing both points and lines, because lines can provide robust geometric constraints when points are insufficient. To comprehensively fuse line constraints, we combine 2D and 3D line reprojection errors with the point reprojection error in a novel cost function. To solve the cost function and filter out wrong feature matches, we build a robust pose solver using the Gauss-Newton method and a chi-square test. To correct the drift of camera poses, we maintain a sliding-window framework to update the keyframe poses and related features. We evaluate the proposed system on both public datasets and real-world experiments; it is demonstrated to be comparable to or better than state-of-the-art methods in terms of both accuracy and robustness.
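A minimal sketch of the "Gauss-Newton plus chi-square gating" idea follows: at each iteration, measurement blocks whose squared residual exceeds the chi-square threshold are treated as wrong matches and dropped. The function names, the 2-DoF gate, and the toy point-fitting usage are hypothetical illustrations, not the paper's actual pose solver, and the gate presumes a reasonable initial guess.

```python
import numpy as np
from scipy.stats import chi2

def gauss_newton_with_gating(residual_fn, jac_fn, x0, dof=2,
                             iters=10, alpha=0.05):
    """Gauss-Newton solver with chi-square outlier gating.
    residual_fn(x) returns one residual vector per measurement;
    jac_fn(x) returns the matching per-measurement Jacobians."""
    x = np.asarray(x0, float)
    gate = chi2.ppf(1 - alpha, df=dof)        # about 5.99 for 2 DoF
    inliers = list(range(len(residual_fn(x))))
    for _ in range(iters):
        r_all, J_all = residual_fn(x), jac_fn(x)
        # chi-square test: discard blocks with improbably large residuals
        inliers = [i for i in inliers if r_all[i] @ r_all[i] < gate]
        r = np.concatenate([r_all[i] for i in inliers])
        J = np.vstack([J_all[i] for i in inliers])
        x = x - np.linalg.solve(J.T @ J, J.T @ r)  # Gauss-Newton step
    return x, inliers
```

In the toy usage below, three measurements of a 2-D point agree and one is a gross outlier; the gate removes the outlier and the solver converges to the inlier mean.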
Affiliation(s)
- Yajing Zou
- Shenzhen Research Institute, The Hong Kong Polytechnic University, Shenzhen 518057, China
- Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Hong Kong 999077, China
- Amr Eldemiry
- Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Hong Kong 999077, China
- Yaxin Li
- Shenzhen Research Institute, The Hong Kong Polytechnic University, Shenzhen 518057, China
- Wu Chen
- Shenzhen Research Institute, The Hong Kong Polytechnic University, Shenzhen 518057, China
- Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Hong Kong 999077, China
- Correspondence; Tel.: +852-2766-5969
19
Zhao X, Miao C, Zhang H. Multi-Feature Nonlinear Optimization Motion Estimation Based on RGB-D and Inertial Fusion. Sensors (Basel). 2020;20(17):4666. PMID: 32824978; PMCID: PMC7506712; DOI: 10.3390/s20174666.
Abstract
To achieve high-precision estimation of indoor robot motion, a tightly coupled RGB-D visual-inertial SLAM system based on multiple features is proposed herein. Most traditional visual SLAM methods rely only on points for feature matching and often underperform in low-textured scenes. Besides point features, line segments can also provide geometric structure information about the environment. This paper utilizes both points and lines in low-textured scenes to increase the robustness of the RGB-D SLAM system. In addition, we implement a fast initialization process based on the RGB-D camera to improve the real-time performance of the proposed system and design a new back-end nonlinear optimization framework. The state vector is optimized by minimizing a cost function formed by the pre-integrated IMU residuals and the reprojection errors of points and lines within sliding windows. Experiments on public datasets show that our system achieves higher accuracy and robustness in trajectories and pose estimation than several state-of-the-art visual SLAM systems.
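Pre-integrated IMU residuals rest on accumulating inertial deltas between two keyframes in a way that does not depend on the (yet-unknown) initial state. A deliberately simplified 1-D sketch of that accumulation is below; real systems integrate rotation on SO(3) and track bias Jacobians, and the function name is an assumption.

```python
def preintegrate_1d(accels, dt):
    """1-D IMU pre-integration sketch: accumulate the velocity delta
    and position delta between two keyframes from raw accelerometer
    samples, independently of the initial velocity and position."""
    dv, dp = 0.0, 0.0
    for a in accels:
        dp += dv * dt + 0.5 * a * dt * dt  # position delta this step
        dv += a * dt                       # velocity delta this step
    return dp, dv
```

At optimization time, the residual compares these pre-integrated deltas against the deltas implied by the current keyframe state estimates, so the raw IMU samples never need to be re-integrated when the states change.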
20
Wen S, Wang S, Zhang Z, Zhang X, Zhang D. Walking Human Detection Using Stereo Camera Based on Feature Classification Algorithm of Second Re-projection Error. Front Neurorobot. 2019;13:105. PMID: 31920615; PMCID: PMC6930239; DOI: 10.3389/fnbot.2019.00105.
Abstract
This paper presents a feature classification method based on a vision sensor in dynamic environments. For the detected targets, a double reprojection error based on ORB and SURF is proposed, which combines texture constraints and region constraints to achieve accurate feature classification in four different environments. For dynamic targets with different velocities, the proposed classification framework can effectively reduce the impact of large-area moving targets. The algorithm can classify static and dynamic feature objects and optimize the transformation between frames using only visual sensors. The experimental results show that the proposed algorithm is superior to other algorithms in both static and dynamic environments.
Affiliation(s)
- Shuhuan Wen
- Key Lab of Industrial Computer Control Engineering of Hebei Province, Yanshan University, Qinhuangdao, China
- Sen Wang
- Key Lab of Industrial Computer Control Engineering of Hebei Province, Yanshan University, Qinhuangdao, China
- ZhiShang Zhang
- Key Lab of Industrial Computer Control Engineering of Hebei Province, Yanshan University, Qinhuangdao, China
- Xuebo Zhang
- Institute of Robotics and Automatic Information System, Nankai University, Tianjin, China
- Dan Zhang
- Department of Mechanical Engineering, York University, Toronto, ON, Canada
21
Zhang N, Zhao Y. Fast and Robust Monocular Visual-Inertial Odometry Using Points and Lines. Sensors (Basel). 2019;19(20):4545. PMID: 31635048; PMCID: PMC6832589; DOI: 10.3390/s19204545.
Abstract
When the camera moves quickly and the image is blurred, or when texture in the scene is missing, a Simultaneous Localization and Mapping (SLAM) algorithm based on point features has difficulty tracking enough effective feature points; its positioning accuracy and robustness are poor, and it may even fail to work at all. For this problem, we propose a monocular visual odometry algorithm based on point and line features that incorporates IMU measurement data. On this basis, an environmental feature map with geometric information is constructed, and the IMU measurements provide prior and scale information for the visual localization algorithm. An initial pose estimate is then obtained from motion estimation by sparse image alignment, and feature alignment is further performed to obtain sub-pixel feature correspondences. Finally, more accurate poses and 3D landmarks are obtained by minimizing the reprojection errors of local map points and lines. Experimental results on the EuRoC public datasets show that the proposed algorithm outperforms the Open Keyframe-based Visual-Inertial SLAM (OKVIS-mono) algorithm and the Oriented FAST and Rotated BRIEF SLAM (ORB-SLAM) algorithm, demonstrating its accuracy and speed.
Affiliation(s)
- Ning Zhang
- State Key Laboratory of Virtual Reality Technology and Systems, School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China
- Yongjia Zhao
- State Key Laboratory of Virtual Reality Technology and Systems, School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China
22
Gomez-Ojeda R, Moreno FA, Zuniga-Noel D, Scaramuzza D, Gonzalez-Jimenez J. PL-SLAM: A Stereo SLAM System Through the Combination of Points and Line Segments. IEEE Trans Robot. 2019. DOI: 10.1109/tro.2019.2899783.
23
Bian J, Hui X, Zhao X, Tan M. A monocular vision-based perception approach for unmanned aerial vehicle close proximity transmission tower inspection. Int J Adv Robot Syst. 2019. DOI: 10.1177/1729881418820227.
Abstract
Employing unmanned aerial vehicles (UAVs) to conduct close-proximity inspection of transmission towers is becoming increasingly common. This article aims to solve the two key problems of close-proximity navigation: localizing the tower and simultaneously estimating the UAV's position. To this end, we propose a novel monocular vision-based environmental perception approach and implement it in a hierarchical embedded UAV system. The proposed framework comprises tower localization and an improved point-line-based simultaneous localization and mapping pipeline consisting of feature matching, frame tracking, local mapping, loop closure, and nonlinear optimization. To enhance frame association, the prominent line features of the tower are heuristically extracted and matched, and the intersections of lines are then processed as point features. The bundle adjustment optimization leverages these line intersections and the point-to-line distance to improve the accuracy of UAV localization. For tower localization, a transmission tower data set is created and a concise deep learning-based neural network is designed to perform real-time and accurate tower detection, which is combined with keyframe-based semi-dense mapping to locate the tower, with its clear line-shaped structure, in 3D space. Additionally, two reasonable paths are planned for refined inspection. In experiments, the whole UAV system, developed on the Robot Operating System framework, is evaluated along these paths both in a synthetic scene and in a real-world inspection environment. The final results show that the accuracy of UAV localization is improved and that the tower reconstruction is fast and clear. Based on our approach, safe and autonomous UAV close-proximity inspection of transmission towers can be realized.
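The point-to-line distance residual used in such a bundle adjustment can be written, for the 2-D image-plane case, as the projection of the point onto the line's unit normal. This is a generic sketch; the signed-distance form and the function name are illustrative rather than the paper's exact formulation.

```python
import numpy as np

def point_to_line_distance(p, a, b):
    """Signed distance from 2-D point p to the infinite line through
    a and b: project (p - a) onto the line's unit normal. Squaring
    this gives a residual term for bundle adjustment."""
    p, a, b = (np.asarray(v, float) for v in (p, a, b))
    d = b - a
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # unit normal
    return float(n @ (p - a))
```

Stacking one such term per observed line, alongside the reprojection errors of the line-intersection points, yields the combined cost that the optimization minimizes.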
Affiliation(s)
- Jiang Bian
- The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Xiaolong Hui
- The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Xiaoguang Zhao
- The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Min Tan
- The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
24
Wu Y, Tang F, Li H. Image-based camera localization: an overview. Vis Comput Ind Biomed Art 2018; 1:8. [PMID: 32240389 PMCID: PMC7099558 DOI: 10.1186/s42492-018-0008-z] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2018] [Accepted: 07/06/2018] [Indexed: 11/22/2022] Open
Abstract
Virtual reality, augmented reality, robotics, and autonomous driving have recently attracted much attention from both the academic and industrial communities, and image-based camera localization is a key task in all of them. However, there has been no complete review of image-based camera localization, and a survey of this topic is needed to help newcomers enter the field quickly. In this paper, an overview of image-based camera localization is presented. A new and complete classification of image-based camera localization approaches is provided, and the related techniques are introduced. Trends for future development are also discussed. This will be useful not only to researchers, but also to engineers and other individuals interested in this field.
Affiliation(s)
- Yihong Wu
- National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Fulin Tang
- National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Heping Li
- National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
25
Meier K, Chung S, Hutchinson S. Visual‐inertial curve simultaneous localization and mapping: Creating a sparse structured world without feature points. J FIELD ROBOT 2017. [DOI: 10.1002/rob.21759] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Affiliation(s)
- Kevin Meier
- University of Illinois at Urbana-Champaign, Urbana, Illinois 61801
- Soon-Jo Chung
- California Institute of Technology, 1200 East California Boulevard, MC 105-50, Pasadena, California 91125
- Seth Hutchinson
- University of Illinois at Urbana-Champaign, Urbana, Illinois 61801