1. Dong W, Lu C, Bao L, Li W, Shin K, Han C. A Planar Multi-Inertial Navigation Strategy for Autonomous Systems for Signal-Variable Environments. Sensors (Basel, Switzerland) 2024;24:1064. PMID: 38400221; PMCID: PMC10893360; DOI: 10.3390/s24041064.
Abstract
The challenge of precise dynamic positioning for mobile robots is addressed through the development of a multi-inertial navigation system (M-INS). The cumulative sensor errors inherent in traditional single inertial navigation systems (INSs) under dynamic conditions are mitigated by a novel algorithm that integrates multiple INS units in a predefined planar configuration, utilizing the fixed distances between the units as invariant constraints. An extended Kalman filter (EKF) is employed to significantly enhance positioning accuracy. Dynamic experimental validation of the proposed 3INS EKF algorithm reveals a marked improvement over individual INS units, with reduced positioning errors and increased stability, yielding an average accuracy enhancement rate exceeding 60%. This advancement is particularly critical for mobile robot applications that demand high precision, such as autonomous driving and disaster search and rescue. The findings not only demonstrate the potential of M-INSs to improve dynamic positioning accuracy but also provide a new research direction for future advancements in robotic navigation systems.
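To make the constraint-update idea concrete, here is a minimal sketch (our assumptions, not the authors' code) of an EKF whose state stacks the 2D positions of three INS units and whose known rigid inter-unit distances act as drift-correcting pseudo-measurements; the unit count, state layout, and noise matrices are illustrative.

```python
import numpy as np

PAIRS = [(0, 1), (1, 2), (0, 2)]            # the three unit pairs
D_TRUE = np.array([0.5, 0.5, 0.5])          # fixed inter-unit distances [m]

def h(x):
    """Predicted pairwise distances from the stacked 2D positions."""
    p = x.reshape(3, 2)
    return np.array([np.linalg.norm(p[i] - p[j]) for i, j in PAIRS])

def H_jac(x):
    """Jacobian of h with respect to the 6-dim stacked position state."""
    p = x.reshape(3, 2)
    H = np.zeros((3, 6))
    for k, (i, j) in enumerate(PAIRS):
        d = p[i] - p[j]
        n = np.linalg.norm(d)
        H[k, 2 * i:2 * i + 2] = d / n
        H[k, 2 * j:2 * j + 2] = -d / n
    return H

def ekf_step(x, P, ins_deltas, Q, R):
    # Predict: each unit dead-reckons with its own INS displacement.
    x = x + ins_deltas.ravel()
    P = P + Q
    # Update: pull the drifted positions back onto the rigid geometry.
    H = H_jac(x)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (D_TRUE - h(x))
    P = (np.eye(6) - K @ H) @ P
    return x, P
```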
Affiliation(s)
- Wenbin Dong: Department of Mechatronics Engineering, Hanyang University, Ansan 15588, Republic of Korea; School of Mechanical Engineering, Anhui Science and Technology University, Chuzhou 233100, China
- Cheng Lu: School of Mechanical Engineering, Anhui Science and Technology University, Chuzhou 233100, China
- Le Bao: Department of Mechatronics Engineering, Hanyang University, Ansan 15588, Republic of Korea
- Wenqi Li: Department of Mechatronics Engineering, Hanyang University, Ansan 15588, Republic of Korea
- Kyoosik Shin: Department of Mechatronics Engineering, Hanyang University, Ansan 15588, Republic of Korea
- Changsoo Han: Department of Mechatronics Engineering, Hanyang University, Ansan 15588, Republic of Korea
2. Malakouti-Khah H, Sadeghzadeh-Nokhodberiz N, Montazeri A. Simultaneous localization and mapping in a multi-robot system in a dynamic environment with unknown initial correspondence. Front Robot AI 2024;10:1291672. PMID: 38283801; PMCID: PMC10811797; DOI: 10.3389/frobt.2023.1291672.
Abstract
A basic assumption in most approaches to simultaneous localization and mapping (SLAM) is that the environment is static. In recent years, some research has been devoted to SLAM in dynamic environments, but most studies in this field implement SLAM by removing and filtering out the moving landmarks. Moreover, using several robots in large, complex, and dynamic environments can significantly improve performance on the localization and mapping task, which has recently attracted many researchers to this problem. In multi-robot SLAM, the robots can cooperate in a decentralized manner, without the need for a central processing center, to obtain their positions and a more precise map of the environment. In this article, a new decentralized approach is presented for multi-robot SLAM in dynamic environments with unknown initial correspondence. The proposed method applies a modified FastSLAM method that implements SLAM in a decentralized manner while considering moving landmarks in the environment. Because the initial correspondence of the robots is unknown, a geographical approach is embedded in the proposed algorithm to align and merge their maps. Data association is also embedded in the algorithm and is performed using the measurement predictions in each robot's SLAM process. Finally, simulation results are provided to demonstrate the performance of the proposed method.
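The moving-landmark idea can be illustrated with a per-landmark filter. The sketch below is an assumption-laden reduction of the paper's modified FastSLAM: each landmark inside a particle carries a constant-velocity EKF, so dynamic objects are tracked rather than filtered out. All dimensions and noise values are invented for illustration.

```python
import numpy as np

class LandmarkEKF:
    """One landmark inside a FastSLAM particle, with a velocity state."""
    def __init__(self, xy0):
        self.x = np.array([xy0[0], xy0[1], 0.0, 0.0])   # (x, y, vx, vy)
        self.P = np.diag([0.5, 0.5, 1.0, 1.0])

    def predict(self, dt, q=0.05):
        # Constant-velocity motion model lets the landmark move between scans.
        F = np.eye(4); F[0, 2] = F[1, 3] = dt
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + q * np.eye(4)

    def update(self, z_xy, r=0.1):
        H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0   # observe position only
        S = H @ self.P @ H.T + r * np.eye(2)
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z_xy - H @ self.x)
        self.P = (np.eye(4) - K @ H) @ self.P
```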
3. Wong CC, Feng HM, Kuo KL. Multi-Sensor Fusion Simultaneous Localization Mapping Based on Deep Reinforcement Learning and Multi-Model Adaptive Estimation. Sensors (Basel, Switzerland) 2023;24:48. PMID: 38202911; PMCID: PMC11154468; DOI: 10.3390/s24010048.
Abstract
In this study, we designed a multi-sensor fusion technique for simultaneous localization and mapping (SLAM) based on deep reinforcement learning (DRL) and multi-model adaptive estimation (MMAE). The LiDAR-based point-to-line iterative closest point (PLICP) and RGB-D camera-based ORBSLAM2 methods were utilized to estimate the localization of mobile robots. Residual-value anomaly detection was combined with a Proximal Policy Optimization (PPO)-based DRL model to optimally adjust the weights among the different localization algorithms. Two kinds of indoor simulation environments were established in the Gazebo simulator to validate the multi-model adaptive estimation localization performance. The experimental results confirmed that the proposed method can effectively fuse the localization information from multiple sensors, enabling mobile robots to obtain higher localization accuracy than the traditional PLICP and ORBSLAM2, and that it increases the localization stability of mobile robots in complex environments.
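As a rough illustration of the MMAE principle, with the paper's PPO-based weight adjustment replaced by the classical residual-likelihood rule, the following sketch weights two pose sources by the Gaussian likelihood of their recent residuals and fuses the estimates; the names, sigmas, and example numbers are assumptions.

```python
import numpy as np

def mmae_weights(residuals, sigmas):
    """residuals[i]: innovation norm of localizer i; sigmas[i]: its expected std."""
    like = np.exp(-0.5 * (np.asarray(residuals) / np.asarray(sigmas)) ** 2)
    return like / like.sum()

def fuse_poses(poses, weights):
    """Weighted average of (x, y, theta) poses; angles fused on the unit circle."""
    poses = np.asarray(poses)                      # shape (n, 3)
    xy = weights @ poses[:, :2]
    s = weights @ np.sin(poses[:, 2])
    c = weights @ np.cos(poses[:, 2])
    return np.array([xy[0], xy[1], np.arctan2(s, c)])

# Example: PLICP agrees well with its prediction, ORBSLAM2 less so.
w = mmae_weights(residuals=[0.05, 0.30], sigmas=[0.10, 0.10])
pose = fuse_poses([[1.00, 2.00, 0.10], [1.08, 2.05, 0.14]], w)
```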
Affiliation(s)
- Ching-Chang Wong: Department of Electrical and Computer Engineering, Tamkang University, New Taipei City 25137, Taiwan
- Hsuan-Ming Feng: Department of Computer Science and Information Engineering, National Quemoy University, Kinmen County 89250, Taiwan
- Kun-Lung Kuo: Department of Electrical and Computer Engineering, Tamkang University, New Taipei City 25137, Taiwan
4. Tondo GR, Riley C, Morgenthal G. Characterization of the iPhone LiDAR-Based Sensing System for Vibration Measurement and Modal Analysis. Sensors (Basel, Switzerland) 2023;23:7832. PMID: 37765888; PMCID: PMC10537187; DOI: 10.3390/s23187832.
Abstract
Portable depth sensing using time-of-flight LiDAR principles is available on the iPhone 13 Pro and similar Apple mobile devices. This study sought to characterize the LiDAR sensing system for measuring full-field vibrations to support modal analysis. A vibrating target was employed to identify the limits and quality of the sensor in terms of noise, frequency, and range, and the results were compared to a laser displacement transducer. In addition, properties such as phone-to-target distance and lighting conditions were investigated. The optimal phone-to-target distance range was determined to be between 0.30 m and 2.00 m. Despite an indicated sampling frequency equal to the 60 Hz framerate of the RGB camera, the LiDAR depth map is actually sampled at 15 Hz, which limits the utility of this sensor for vibration measurement and presents challenges if the depth map time series is not downsampled to 15 Hz before further processing. Depth maps were processed with Stochastic Subspace Identification in a Monte Carlo manner for stochastic modal parameter identification of a flexible steel cantilever. Despite significant noise and distortion, the natural frequencies were identified with an average difference of 1.9% relative to the laser displacement transducer data, and high-resolution mode shapes, including uncertainty ranges, were obtained and compared to an analytical solution. Our findings indicate that mobile LiDAR measurements can be a powerful tool for modal identification when used in combination with prior knowledge of the structural system. The technology has significant potential for structural health monitoring and diagnostics, particularly where non-contact vibration sensing is useful, such as in flexible scaled laboratory models or field scenarios where access for placing physical sensors is challenging.
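The 15 Hz finding translates into a simple preprocessing rule. The sketch below is our illustration, not the paper's SSI pipeline: it drops the duplicated frames and estimates a dominant vibration frequency from one pixel's depth history with a Welch PSD.

```python
import numpy as np
from scipy.signal import welch

def dominant_frequency(depth_series_60hz, fs_nominal=60.0, true_fs=15.0):
    step = int(round(fs_nominal / true_fs))        # = 4: drop duplicated frames
    x = np.asarray(depth_series_60hz)[::step]
    x = x - x.mean()                               # remove the static offset
    f, pxx = welch(x, fs=true_fs, nperseg=min(256, len(x)))
    return f[np.argmax(pxx)]

# Example: a 3.1 Hz cantilever vibration reported at the duplicated 60 Hz rate.
t = np.arange(0, 30, 1 / 15.0)
signal = np.sin(2 * np.pi * 3.1 * t)
duplicated = np.repeat(signal, 4)                  # mimic 60 Hz frame duplication
print(dominant_frequency(duplicated))              # ~3.1
```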
Affiliation(s)
- Gledson Rodrigo Tondo: Chair of Modelling and Simulation of Structures, Bauhaus University Weimar, Marienstr. 13, 99423 Weimar, Germany
- Charles Riley: Civil Engineering Department, Oregon Institute of Technology, 3201 Campus Drive, Klamath Falls, OR 97601, USA
- Guido Morgenthal: Chair of Modelling and Simulation of Structures, Bauhaus University Weimar, Marienstr. 13, 99423 Weimar, Germany
5. Zhang W, He L, Wang H, Yuan L, Xiao W. Multiple Self-Supervised Auxiliary Tasks for Target-Driven Visual Navigation Using Deep Reinforcement Learning. Entropy (Basel, Switzerland) 2023;25:1007. PMID: 37509957; PMCID: PMC10378290; DOI: 10.3390/e25071007.
Abstract
Visual navigation based on deep reinforcement learning requires a large amount of interaction with the environment, and because rewards are sparse, it requires substantial training time and computational resources. In this paper, we focus on sample efficiency and navigation performance and propose a framework for visual navigation based on multiple self-supervised auxiliary tasks. Specifically, we present an LSTM-based dynamics model and an attention-based image-reconstruction model as auxiliary tasks. By constructing latent representation learning, these self-supervised auxiliary tasks enable agents to learn navigation strategies directly from the original high-dimensional images without relying on ResNet features. Experimental results show that, without manually designed features or prior demonstrations, our method significantly improves training efficiency and outperforms the baseline algorithms on the simulator and on real-world image datasets.
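A hedged sketch of how such auxiliary losses can sit next to the RL objective is shown below; the frame size, latent width, and the plain-MLP stand-in for the attention-based reconstruction decoder are all assumptions, and the PPO policy loss itself is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NavWithAuxTasks(nn.Module):
    def __init__(self, latent=128, action_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, latent), nn.ReLU())
        self.dynamics = nn.LSTM(latent + action_dim, latent, batch_first=True)
        self.decoder = nn.Linear(latent, 64 * 64)    # stand-in reconstruction head
        self.policy = nn.Linear(latent, action_dim)  # trained by the (omitted) RL loss

    def aux_losses(self, obs_seq, act_seq):
        """obs_seq: (B, T, 64, 64) frames; act_seq: (B, T, action_dim) one-hot."""
        B, T = obs_seq.shape[:2]
        z = self.encoder(obs_seq.reshape(B * T, 64, 64)).reshape(B, T, -1)
        # Dynamics task: predict the next latent from latent + action.
        pred, _ = self.dynamics(torch.cat([z[:, :-1], act_seq[:, :-1]], dim=-1))
        dyn_loss = F.mse_loss(pred, z[:, 1:].detach())
        # Reconstruction task: rebuild each frame from its latent.
        recon = self.decoder(z).reshape(B, T, 64, 64)
        rec_loss = F.mse_loss(recon, obs_seq)
        return dyn_loss, rec_loss   # weighted and added to the policy loss
```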
Affiliation(s)
- Wenzhi Zhang: School of Mechanical Engineering, Xinjiang University, Urumqi 830046, China
- Li He: School of Mechanical Engineering, Xinjiang University, Urumqi 830046, China
- Hongwei Wang: School of Mechanical Engineering, Xinjiang University, Urumqi 830046, China
- Liang Yuan: School of Mechanical Engineering, Xinjiang University, Urumqi 830046, China; School of Information Science and Technology, Beijing University of Chemical Technology, Beijing 100029, China
- Wendong Xiao: School of Mechanical Engineering, Xinjiang University, Urumqi 830046, China
6. Ren G, Wu H, Bao A, Lin T, Ting KC, Ying Y. Mobile robotics platform for strawberry temporal-spatial yield monitoring within precision indoor farming systems. Frontiers in Plant Science 2023;14:1162435. PMID: 37180389; PMCID: PMC10167025; DOI: 10.3389/fpls.2023.1162435.
Abstract
Plant phenotyping and production management are emerging fields that facilitate Genetics, Environment, & Management (GEM) research and provide production guidance. Precision indoor farming systems (PIFS), in particular vertical farms with artificial light (aka plant factories), have long been suitable production scenes owing to their efficient land utilization and year-round cultivation. In this study, a mobile robotics platform (MRP) within a commercial plant factory was developed to dynamically understand plant growth and provide data support for growth model construction and production management through the periodic monitoring of individual strawberry plants and fruit. Yield monitoring, where yield is the total number of ripe strawberry fruit detected, is a critical task in plant phenotyping. The MRP consists of a multilayer perception robot (MPR) installed on top of an autonomous mobile robot (AMR). The AMR travels along the aisles between plant growing rows, while the MPR contains a data acquisition module that a lifting module can raise to the height of any growing tier of each row. Adding AprilTag observations (captured by a monocular camera) to the inertial navigation system to form an ATI navigation system enhanced MRP navigation within the repetitive and narrow physical structure of the plant factory, allowing the growth and position information of each individual strawberry plant to be captured and correlated. The MRP performed robustly at various traveling speeds with a positioning accuracy of 13.0 mm. Temporal-spatial yield monitoring across a whole plant factory can be achieved through the MRP's periodic inspections, guiding farmers to harvest strawberries on schedule. The yield monitoring error rate was 6.26% when the plants were inspected at a constant MRP traveling speed of 0.2 m/s. The MRP's functions are expected to be transferable and expandable to monitoring and cultural tasks for other crops.
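The ATI correction step can be sketched as a simple fusion of dead reckoning with tag fixes. The snippet below is an illustrative reduction (a scalar-gain update with an invented tag map and gain), not the authors' filter.

```python
import numpy as np

TAG_MAP = {7: np.array([4.0, 1.2])}   # tag id -> known position in the aisle [m]

class AtiLocalizer:
    def __init__(self, pose0, gain=0.8):
        self.p = np.asarray(pose0, dtype=float)    # (x, y) estimate
        self.gain = gain                           # trust placed in tag fixes

    def predict(self, ins_delta):
        self.p += ins_delta                        # INS displacement; drifts over time

    def correct(self, tag_id, offset_in_robot):
        # A tag sighting implies robot position = tag position - measured offset.
        fix = TAG_MAP[tag_id] - offset_in_robot
        self.p += self.gain * (fix - self.p)       # pull the estimate toward the fix

loc = AtiLocalizer([0.0, 1.2])
for _ in range(40):
    loc.predict(np.array([0.10, 0.002]))           # slow lateral drift builds up
loc.correct(7, offset_in_robot=np.array([0.0, 0.0]))
```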
Affiliation(s)
- Guoqiang Ren: College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, Zhejiang, China; Zhejiang University-University of Illinois Urbana-Champaign Institute (ZJU-UIUC), International Campus, Zhejiang University, Haining, Zhejiang, China; Key Laboratory of Intelligent Equipment and Robotics for Agriculture of Zhejiang Province, Hangzhou, China
- Hangyu Wu: College of Control Science and Engineering, Zhejiang University, Hangzhou, Zhejiang, China
- Anbo Bao: Department of Automation, Shanghai Jiao Tong University, Shanghai, China
- Tao Lin: College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, Zhejiang, China; Key Laboratory of Intelligent Equipment and Robotics for Agriculture of Zhejiang Province, Hangzhou, China
- Kuan-Chong Ting: College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, Zhejiang, China; Zhejiang University-University of Illinois Urbana-Champaign Institute (ZJU-UIUC), International Campus, Zhejiang University, Haining, Zhejiang, China; Department of Agricultural and Biological Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Yibin Ying: College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, Zhejiang, China; Key Laboratory of Intelligent Equipment and Robotics for Agriculture of Zhejiang Province, Hangzhou, China
7. Han L, Shi Z, Wang H. A Localization and Mapping Algorithm Based on Improved LVI-SAM for Vehicles in Field Environments. Sensors (Basel, Switzerland) 2023;23:3744. PMID: 37050804; PMCID: PMC10098548; DOI: 10.3390/s23073744.
Abstract
Quickly grasping information about the surrounding environment and the location of the vehicle is key to achieving automatic driving. However, accurate and robust localization and mapping remain challenging for field vehicles and robots because complex field environments are open, have changeable terrain, and are Global Navigation Satellite System (GNSS)-denied. In this study, an LVI-SAM-based simultaneous localization and mapping (SLAM) algorithm fusing lidar, inertial, and visual data was proposed to solve the localization and mapping problem for vehicles in such open, bumpy, and Global Positioning System (GPS)-denied field environments. In this method, a joint lidar front end for pose estimation and correction was designed using the Super4PCS, Iterative Closest Point (ICP), and Normal Distributions Transform (NDT) algorithms and their variants. The algorithm balances localization accuracy and real-time performance by applying lower-frequency pose corrections on top of higher-frequency pose estimates. Experimental results from a complex field environment show that, compared with LVI-SAM, the proposed method reduces the translational localization error by about 4.7% and creates a three-dimensional point cloud map of the environment in real time, achieving high-precision, high-robustness localization and mapping in complex field environments.
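The two-rate idea, in which slow corrections re-anchor fast estimates, can be sketched in SE(2) for brevity, although the paper works in 3D; the rates and names below are assumptions.

```python
import numpy as np

def se2(x, y, th):
    """Homogeneous 3x3 planar pose."""
    return np.array([[np.cos(th), -np.sin(th), x],
                     [np.sin(th),  np.cos(th), y],
                     [0.0,         0.0,        1.0]])

class TwoRateOdometry:
    """Integrates fast pose increments; re-anchors on slow scan-match fixes."""
    def __init__(self):
        self.pose = se2(0, 0, 0)              # current world pose
        self.since_fix = se2(0, 0, 0)         # motion accumulated since last fix

    def on_fast_odometry(self, delta):        # e.g. high-rate pose increments
        self.pose = self.pose @ delta
        self.since_fix = self.since_fix @ delta

    def on_slow_correction(self, anchor):     # e.g. low-rate ICP/NDT result
        # 'anchor' is the corrected world pose of the last keyframe, so the
        # current pose becomes the corrected anchor plus the motion since it.
        self.pose = anchor @ self.since_fix
        self.since_fix = se2(0, 0, 0)
```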
8. Zhang F, Zhang J, Xu Z, Tang J, Jiang P, Zhong R. Extracting Traffic Signage by Combining Point Clouds and Images. Sensors (Basel, Switzerland) 2023;23:2262. PMID: 36850860; PMCID: PMC9964076; DOI: 10.3390/s23042262.
Abstract
Recognizing traffic signs is key to achieving safe automatic driving. With the decreasing cost of LiDAR, the accurate extraction of traffic signs from point cloud data has received wide attention. In this study, we propose combining point clouds and images for traffic sign extraction. First, an improved YoloV3 model detects traffic signs in panoramic images; the specific improvements are that a convolutional block attention module is added to the framework, the traditional K-means clustering algorithm is improved, and Focal Loss is introduced as the loss function. The model shows higher accuracy on the TT100K dataset, with a 1.4% improvement over the original YoloV3. The point cloud of the area containing the traffic sign is then extracted using the image detection results, and on this basis the outline of the traffic sign is accurately extracted using reflection intensity, spatial geometry, and other information. Compared with the traditional method, the proposed method effectively reduces the missed detection rate, narrows the point cloud search range, and improves detection accuracy by 10.2%.
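The image-to-point-cloud handoff can be sketched as a projection-and-mask step. The snippet below, with illustrative intrinsics and thresholds and the frames assumed already aligned, keeps the LiDAR points that fall inside a detection box and pass a reflectance test.

```python
import numpy as np

def points_in_box(points_cam, intensities, K, box, min_intensity=0.6):
    """points_cam: (N, 3) in the camera frame; box: (u_min, v_min, u_max, v_max)."""
    front = points_cam[:, 2] > 0.1                 # only points ahead of the camera
    pts, inten = points_cam[front], intensities[front]
    uv = (K @ pts.T).T                             # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]
    u_min, v_min, u_max, v_max = box
    inside = ((uv[:, 0] >= u_min) & (uv[:, 0] <= u_max)
              & (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max)
              & (inten >= min_intensity))          # retro-reflective signs are bright
    return pts[inside]

K = np.array([[900.0,   0.0, 640.0],               # illustrative intrinsics
              [  0.0, 900.0, 360.0],
              [  0.0,   0.0,   1.0]])
```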
Affiliation(s)
- Furao Zhang: Key Laboratory of 3D Information Acquisition and Application, MOE; Base of the State Key Laboratory of Urban Environmental Process and Digital Modeling; College of Resource Environment and Tourism, Capital Normal University, Beijing 100048, China
- Jianan Zhang: Key Laboratory of 3D Information Acquisition and Application, MOE; Base of the State Key Laboratory of Urban Environmental Process and Digital Modeling; College of Resource Environment and Tourism, Capital Normal University, Beijing 100048, China
- Zhihong Xu: Key Laboratory of 3D Information Acquisition and Application, MOE; Base of the State Key Laboratory of Urban Environmental Process and Digital Modeling; College of Resource Environment and Tourism, Capital Normal University, Beijing 100048, China
- Jie Tang: Key Laboratory of 3D Information Acquisition and Application, MOE; Base of the State Key Laboratory of Urban Environmental Process and Digital Modeling; College of Resource Environment and Tourism, Capital Normal University, Beijing 100048, China
- Peiyu Jiang: Department of Statistics, Uppsala University, 75120 Uppsala, Sweden
- Ruofei Zhong: Key Laboratory of 3D Information Acquisition and Application, MOE; Base of the State Key Laboratory of Urban Environmental Process and Digital Modeling; College of Resource Environment and Tourism, Capital Normal University, Beijing 100048, China
9. Lou L, Li Y, Zhang Q, Wei H. SLAM and 3D Semantic Reconstruction Based on the Fusion of Lidar and Monocular Vision. Sensors (Basel, Switzerland) 2023;23:1502. PMID: 36772544; PMCID: PMC9920633; DOI: 10.3390/s23031502.
Abstract
The monocular camera and Lidar are the two most commonly used sensors on unmanned vehicles, and combining their advantages is a current research focus of SLAM and semantic analysis. In this paper, we propose an improved SLAM and semantic reconstruction method based on the fusion of Lidar and monocular vision. We fuse semantic images with the low-resolution 3D Lidar point clouds to generate dense semantic depth maps. Through visual odometry, ORB feature points with depth information are selected to improve positioning accuracy. Our method uses parallel threads to aggregate 3D semantic point clouds while positioning the unmanned vehicle. Experiments conducted on the public CityScapes and KITTI Visual Odometry datasets show that, compared with ORB-SLAM2 and DynaSLAM, our positioning error is reduced by approximately 87%, and compared with DEMO and DVL-SLAM, our positioning accuracy improves in most sequences. Our 3D reconstruction quality is better than that of DynaSLAM and contains semantic information. The proposed method has engineering application value in the unmanned vehicle field.
Affiliation(s)
- Lu Lou: School of Information Science and Engineering, Chongqing Jiaotong University, Chongqing 400074, China
- Yitian Li: School of Information Science and Engineering, Chongqing Jiaotong University, Chongqing 400074, China
- Qi Zhang: Guangdong Haoxing Technology Co., Ltd., Foshan 528300, China
- Hanbing Wei: School of Mechatronics and Vehicle Engineering, Chongqing Jiaotong University, Chongqing 400074, China
10. Patoliya J, Mewada H, Hassaballah M, Khan MA, Kadry S. A robust autonomous navigation and mapping system based on GPS and LiDAR data for unconstraint environment. Earth Science Informatics 2022;15:2703-2715. DOI: 10.1007/s12145-022-00791-x.
11. Laal S, Vasilyev P, Pearson S, Aboy M, McNames J. Feasibility of Tracking Human Kinematics with Simultaneous Localization and Mapping (SLAM). Sensors (Basel, Switzerland) 2022;22:9378. PMID: 36502075; PMCID: PMC9739070; DOI: 10.3390/s22239378.
Abstract
We evaluated a new wearable technology that fuses inertial sensors and cameras for tracking human kinematics. These devices use on-board simultaneous localization and mapping (SLAM) algorithms to localize the camera within the environment. The significance of this technology lies in its potential to overcome many limitations of the other dominant technologies. Our results demonstrate that the system often attains an estimated orientation error of less than 1° and a position error of less than 4 cm compared to a robotic arm, showing that SLAM's accuracy is adequate for many practical applications in human kinematic tracking.
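The reported error metrics can be computed as follows; this sketch assumes the estimated and reference trajectories are already time-aligned and expressed in a common frame (the alignment step itself is not shown).

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def position_rmse(p_est, p_ref):
    """p_est, p_ref: (N, 3) positions; root-mean-square Euclidean error."""
    return np.sqrt(np.mean(np.sum((p_est - p_ref) ** 2, axis=1)))

def orientation_errors_deg(q_est, q_ref):
    """Quaternions (N, 4, xyzw): geodesic angle of each relative rotation."""
    rel = R.from_quat(q_est) * R.from_quat(q_ref).inv()
    return np.degrees(np.linalg.norm(rel.as_rotvec(), axis=1))
```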
Affiliation(s)
- Sepehr Laal: Department of Electrical and Computer Engineering, Portland State University, Portland, OR 97201, USA
- Paul Vasilyev: Department of Electrical and Computer Engineering, Portland State University, Portland, OR 97201, USA
- Sean Pearson: APDM Wearable Technologies, Portland, OR 97201, USA
- Mateo Aboy: Centre for Law, Medicine and Life Sciences, University of Cambridge, Cambridge CB2 1TN, UK
- James McNames: Department of Electrical and Computer Engineering, Portland State University, Portland, OR 97201, USA
12. IBISCape: A Simulated Benchmark for multi-modal SLAM Systems Evaluation in Large-scale Dynamic Environments. J Intell Robot Syst 2022. DOI: 10.1007/s10846-022-01753-7.
13. Metasurface-enhanced light detection and ranging technology. Nat Commun 2022;13:5724. PMID: 36175421; PMCID: PMC9523074; DOI: 10.1038/s41467-022-33450-2.
Abstract
Deploying advanced imaging solutions in robotic and autonomous systems by mimicking human vision requires the simultaneous acquisition of multiple fields of view, namely the peripheral and fovea regions. Among 3D computer vision techniques, LiDAR is currently the industrial choice for robotic vision. Notwithstanding the efforts devoted to LiDAR integration and optimization, commercially available devices have slow frame rates and low resolution, limited notably by the performance of mechanical or solid-state deflection systems. Metasurfaces are versatile optical components that can distribute optical power into desired regions of space. Here, we report an advanced LiDAR technology that leverages ultrafast, low-FoV deflectors cascaded with large-area metasurfaces to achieve a large FoV (150°) and a high frame rate (kHz), providing simultaneous peripheral and central imaging zones. Combining this disruptive LiDAR technology with advanced learning algorithms offers prospects for improving the perception and decision-making processes of ADAS and robotic systems.
14. Shi H, Yang J, Shi J, Zhu L, Wang G. Vision-Sensor-Assisted Probabilistic Localization Method for Indoor Environment. Sensors (Basel, Switzerland) 2022;22:7114. PMID: 36236211; PMCID: PMC9572421; DOI: 10.3390/s22197114.
Abstract
Among the numerous indoor localization methods, Light-Detection-and-Ranging (LiDAR)-based probabilistic algorithms have been applied extensively thanks to their real-time performance and high accuracy. Nevertheless, these methods struggle in symmetrical environments when tackling global localization and the robot kidnapping problem. In this paper, a novel hybrid method that combines visual and probabilistic localization results is proposed. Augmented Monte Carlo Localization (AMCL) is improved for continuous position tracking: the uncertainty of the LiDAR-based measurements is evaluated in order to incorporate discrete visual-based results, so that better particle diversity can be maintained. The robot kidnapping problem can be detected and solved by preventing premature convergence of the particle filter. Extensive experiments were conducted to validate robustness and accuracy; the localization error was reduced from 30 mm to 9 mm over a 600 m tour.
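One way to realize the premature-convergence guard is sketched below: when the particle set's mean measurement likelihood collapses, a fraction of particles is re-seeded around the discrete visual fix. The thresholds, fractions, and noise levels are assumptions, not the paper's values.

```python
import numpy as np

def resample_or_reinject(particles, weights, visual_pose, rng,
                         like_floor=1e-3, inject_frac=0.3, sigma=(0.2, 0.2, 0.1)):
    """particles: (N, 3) poses (x, y, theta); weights: (N,) likelihoods."""
    if weights.mean() < like_floor and visual_pose is not None:
        # Kidnapping suspected: seed part of the swarm at the visual estimate.
        n_inject = int(inject_frac * len(particles))
        noise = rng.normal(0.0, sigma, size=(n_inject, 3))
        particles[:n_inject] = visual_pose + noise
        weights[:] = 1.0 / len(weights)            # reset to uniform
        return particles, weights
    # Otherwise, ordinary importance resampling.
    idx = rng.choice(len(particles), size=len(particles), p=weights / weights.sum())
    return particles[idx], np.full(len(weights), 1.0 / len(weights))
```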
Affiliation(s)
- Hui Shi: School of Mechanical Engineering and Automation, Northeastern University, Shenyang 110819, China
- Jianyu Yang: School of Mechanical Engineering and Automation, Northeastern University, Shenyang 110819, China
- Jiashun Shi: School of Mechanical Engineering and Automation, Northeastern University, Shenyang 110819, China
- Lida Zhu: School of Mechanical Engineering and Automation, Northeastern University, Shenyang 110819, China
- Guofa Wang: China Coal Technology and Engineering Group, Beijing 100013, China
15. Zhou H, Xu C, Tang X, Wang S, Zhang Z. A Review of Vision-Laser-Based Civil Infrastructure Inspection and Monitoring. Sensors (Basel, Switzerland) 2022;22:5882. PMID: 35957439; PMCID: PMC9371157; DOI: 10.3390/s22155882.
Abstract
Structural health and construction security are important problems in civil engineering. Routine infrastructure inspection and monitoring are still performed mostly manually. Early automatic structural health monitoring techniques were based largely on contact sensors, which are usually difficult to maintain in complex infrastructure environments. Non-contact inspection and monitoring techniques have therefore received increasing interest in recent years and are widely used throughout the infrastructure life cycle, owing to their convenience and non-destructive properties. This paper provides an overview of vision-based inspection and vision-laser-based monitoring techniques and applications. The inspection part covers image-processing algorithms, object detection, and semantic segmentation; infrastructure monitoring involves not only visual technologies but also different methods of fusing vision and lasers. Furthermore, the most important challenges for future automatic non-contact inspection and monitoring are discussed, and the paper concludes with state-of-the-art algorithms and applications for resolving them.
Affiliation(s)
- Huixing Zhou: School of Mechanical-Electronic and Vehicle Engineering, Beijing University of Civil Engineering and Architecture, Beijing 100044, China
- Chongwen Xu: School of Mechanical-Electronic and Vehicle Engineering, Beijing University of Civil Engineering and Architecture, Beijing 100044, China
- Xiuying Tang: College of Engineering, China Agricultural University, Beijing 100083, China
- Shun Wang: School of Mechanical-Electronic and Vehicle Engineering, Beijing University of Civil Engineering and Architecture, Beijing 100044, China
- Zhongyue Zhang: School of Mechanical-Electronic and Vehicle Engineering, Beijing University of Civil Engineering and Architecture, Beijing 100044, China
16. FPS: Fast Path Planner Algorithm Based on Sparse Visibility Graph and Bidirectional Breadth-First Search. Remote Sensing 2022. DOI: 10.3390/rs14153720.
Abstract
Most planning algorithms are based on occupancy grid maps, but in complicated situations occupancy grid maps incur significant search overhead. This paper proposes a path planner for mobile robots based on the visibility graph (v-graph) that uses sparse methods to speed up and simplify v-graph construction. First, a complementary grid framework is designed to reduce the iteration cost of graph updates during data collection in each frame. Second, a filter based on edge length and the number of obstacle-contour vertices is proposed to reduce redundant nodes and edges in the v-graph. Third, a bidirectional breadth-first search is incorporated into the path-searching process of the proposed fast path planner to reduce wasted exploration. Finally, simulation results indicate that the proposed sparse v-graph planner significantly improves the efficiency of building the v-graph and reduces path-search time: in highly convoluted unknown or partially known environments, our method is 40% faster than the FAR Planner and produces paths 25% shorter. Moreover, physical experiments show that the proposed planner is faster than the FAR Planner in both the v-graph update and laser processes, and that it seeks paths faster than conventional occupancy-grid-based methods.
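The search stage maps directly onto a textbook algorithm. Below is a self-contained bidirectional breadth-first search over an adjacency-list v-graph; the sparse graph construction and filtering stages of the paper are not reproduced.

```python
from collections import deque

def bidirectional_bfs(graph, start, goal):
    """graph: dict node -> list of neighbor nodes; returns a path or None."""
    if start == goal:
        return [start]
    parents = {start: {start: None}, goal: {goal: None}}
    frontiers = {start: deque([start]), goal: deque([goal])}
    while frontiers[start] and frontiers[goal]:
        # Expand the smaller frontier to keep the two search trees balanced.
        side = start if len(frontiers[start]) <= len(frontiers[goal]) else goal
        other = goal if side == start else start
        for _ in range(len(frontiers[side])):
            u = frontiers[side].popleft()
            for v in graph[u]:
                if v in parents[side]:
                    continue
                parents[side][v] = u
                if v in parents[other]:            # the two trees met at v
                    return _join(parents, start, goal, v)
                frontiers[side].append(v)
    return None                                    # no path exists

def _join(parents, start, goal, meet):
    half, u = [], meet
    while u is not None:                           # walk back to the start
        half.append(u)
        u = parents[start][u]
    path = half[::-1]
    u = parents[goal][meet]
    while u is not None:                           # walk forward to the goal
        path.append(u)
        u = parents[goal][u]
    return path
```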
17. Benchmarking Built-In Tracking Systems for Indoor AR Applications on Popular Mobile Devices. Sensors 2022;22:5382. PMID: 35891058; PMCID: PMC9320911; DOI: 10.3390/s22145382.
Abstract
As one of the most promising technologies for next-generation mobile platforms, Augmented Reality (AR) has the potential to radically change the way users interact with real environments enriched with various digital information. To achieve this potential, it is of fundamental importance to track and maintain accurate registration between real and computer-generated objects, so assessing tracking capabilities is crucial. In this paper, we present a benchmark evaluation of the tracking performance of some of the most popular AR handheld devices, which can be regarded as a representative set of devices for sale on the global market. In particular, eight different next-gen devices, including smartphones and tablets, were considered. Experiments were conducted in a laboratory using an external tracking system, following a methodology consisting of three main stages: calibration, data acquisition, and data evaluation. The results showed that the selected devices, in combination with their AR SDKs, have different tracking performances depending on the covered trajectory.
18. Marvin: An Innovative Omni-Directional Robotic Assistant for Domestic Environments. Sensors 2022;22:5261. PMID: 35890940; PMCID: PMC9322347; DOI: 10.3390/s22145261.
Abstract
Population aging and pandemics have been shown to cause the isolation of elderly people in their homes, generating the need for a reliable assistive figure. Robotic assistants are the new frontier of innovation for domestic welfare, and elderly monitoring is one of the services a robot can handle for collective well-being. Despite these emerging needs, the current landscape of robotic assistants offers no platform that successfully combines reliable mobility in cluttered domestic spaces with lightweight, offline Artificial Intelligence (AI) solutions for perception and interaction. In this work, we present Marvin, a novel assistive robotic platform developed with a modular, layer-based architecture that merges a flexible mechanical design with cutting-edge AI for perception and vocal control. We focus the design of Marvin on three target service functions: monitoring of elderly and reduced-mobility subjects, remote presence and connectivity, and night assistance. Compared to previous works, we propose a compact omnidirectional platform that enables agile mobility and effective obstacle avoidance. Moreover, we design a controllable positioning device that gives the user easy access to the interface for connectivity and extends the visual range of the camera sensor. We also carefully consider the privacy issues arising from private data collection on cloud services, a critical aspect of commercial AI-based assistants. To this end, we demonstrate how lightweight deep learning solutions for visual perception and vocal command can be adopted, running completely offline on the robot's embedded hardware.
19. Helmberger M, Morin K, Berner B, Kumar N, Cioffi G, Scaramuzza D. The Hilti SLAM Challenge Dataset. IEEE Robot Autom Lett 2022. DOI: 10.1109/lra.2022.3183759.
Affiliation(s)
- Giovanni Cioffi: Robotics and Perception Group, Department of Informatics, University of Zurich, Zurich, Switzerland
- Davide Scaramuzza: Robotics and Perception Group, Department of Informatics, University of Zurich, Zurich, Switzerland
20. Jia G, Li X, Zhang D, Xu W, Lv H, Shi Y, Cai M. Visual-SLAM Classical Framework and Key Techniques: A Review. Sensors 2022;22:4582. PMID: 35746363; PMCID: PMC9227238; DOI: 10.3390/s22124582.
Abstract
With the significant increase in demand for artificial intelligence, environmental map reconstruction has become a research hotspot for obstacle-avoidance navigation, unmanned operations, and virtual reality. The quality of the map plays a vital role in positioning, path planning, and obstacle avoidance. This review starts with the development of SLAM (Simultaneous Localization and Mapping) and proceeds to a review of V-SLAM (Visual SLAM) from its proposal to the present, summarizing its historical milestones. In this context, the five parts of the classic V-SLAM framework—visual sensor, visual odometry, backend optimization, loop detection, and mapping—are explained separately. Meanwhile, the details of the latest methods are presented, and VI-SLAM (Visual-Inertial SLAM) is reviewed and extended. The four critical techniques of V-SLAM and their technical difficulties are summarized as feature detection and matching, selection of keyframes, uncertainty technology, and map representation. Finally, the development directions and needs of the V-SLAM field are proposed.
Affiliation(s)
- Guanwei Jia: School of Physics and Electronics, Henan University, Kaifeng 475004, China
- Xiaoying Li: School of Physics and Electronics, Henan University, Kaifeng 475004, China
- Dongming Zhang: School of Physics and Electronics, Henan University, Kaifeng 475004, China (Correspondence; Tel./Fax: +86-10-82339160)
- Weiqing Xu: School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China; Pneumatic and Thermodynamic Energy Storage and Supply Beijing Key Laboratory, Beijing 100191, China (Correspondence; Tel./Fax: +86-10-82339160)
- Haojie Lv: School of Physics and Electronics, Henan University, Kaifeng 475004, China
- Yan Shi: School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China; Pneumatic and Thermodynamic Energy Storage and Supply Beijing Key Laboratory, Beijing 100191, China
- Maolin Cai: School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China; Pneumatic and Thermodynamic Energy Storage and Supply Beijing Key Laboratory, Beijing 100191, China
21.
Abstract
The ability of intelligent unmanned platforms to achieve autonomous navigation and positioning in large-scale environments is increasingly in demand, and LIDAR-based Simultaneous Localization and Mapping (SLAM) is the mainstream research scheme. However, LIDAR-based SLAM systems degenerate in extreme environments with high dynamics or sparse features, which affects localization and mapping. In recent years, a large number of LIDAR-based multi-sensor fusion SLAM works have emerged in pursuit of more stable and robust systems. This work highlights the development process of LIDAR-based multi-sensor fusion SLAM and the latest research. After summarizing the basic idea of SLAM and the necessity of multi-sensor fusion, this paper introduces the basic principles and recent work on multi-sensor fusion in detail from four aspects, based on the types of fused sensors and the data-coupling methods. Meanwhile, we review some SLAM datasets and compare the performance of five open-source algorithms on the UrbanNav dataset. Finally, the development trends and popular research directions of SLAM based on 3D LIDAR multi-sensor fusion are discussed and summarized.
22. Research on rapid location method of mobile robot based on semantic grid map in large scene similar environment. Robotica 2022. DOI: 10.1017/s026357472200073x.
Abstract
The adaptive Monte Carlo localization (AMCL) algorithm struggles to localize in large scenes and self-similar environments. This paper improves the AMCL algorithm with semantic information assistance, enabling robust robot localization in such environments. First, the 2D grid map created by lidar-based simultaneous localization and mapping provides highly accurate indoor environmental contour information. Second, semantic objects are captured using a depth camera combined with an instance segmentation algorithm. Then, a semantic grid map is created by mapping the semantic point cloud through the back-projection process of the pinhole camera. Finally, the semantic grid map is used as prior information to assist localization: it improves the initial particle-swarm distribution in the global localization stage of the AMCL algorithm and thereby solves the robot localization problem in this environment. The experimental evidence shows that the semantic grid map solves the environmental-information degradation caused by 2D lidar and improves the robot's perception of the environment. In addition, the method improves the localization robustness of the AMCL algorithm in large scenes and similar environments, achieving an average localization success rate of about 90% or higher while reducing the number of iterations. The global localization problem of robots in large scenes and similar environments is effectively solved.
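The particle-initialization step can be sketched as sampling only from semantically matching cells, which is what breaks the symmetry between look-alike places; the grid encoding and the uniform fallback below are our assumptions.

```python
import numpy as np

def semantic_initial_particles(semantic_grid, observed_class, n, resolution, rng):
    """semantic_grid: (H, W) int labels; returns (n, 3) particles (x, y, theta)."""
    rows, cols = np.nonzero(semantic_grid == observed_class)
    if len(rows) == 0:                      # fall back to a uniform global init
        rows, cols = np.nonzero(semantic_grid >= 0)
    pick = rng.integers(0, len(rows), size=n)
    # Jitter within each chosen cell and draw headings uniformly.
    x = (cols[pick] + rng.random(n)) * resolution
    y = (rows[pick] + rng.random(n)) * resolution
    theta = rng.uniform(-np.pi, np.pi, size=n)
    return np.column_stack([x, y, theta])
```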
23. Elhashash M, Albanwan H, Qin R. A Review of Mobile Mapping Systems: From Sensors to Applications. Sensors (Basel, Switzerland) 2022;22:4262. PMID: 35684883; PMCID: PMC9185250; DOI: 10.3390/s22114262.
Abstract
The evolution of mobile mapping systems (MMSs) has gained more attention in the past few decades. MMSs have been widely used to provide valuable assets in different applications. This has been facilitated by the wide availability of low-cost sensors, advances in computational resources, the maturity of mapping algorithms, and the need for accurate and on-demand geographic information system (GIS) data and digital maps. Many MMSs combine hybrid sensors to provide a more informative, robust, and stable solution by complementing each other. In this paper, we presented a comprehensive review of the modern MMSs by focusing on: (1) the types of sensors and platforms, discussing their capabilities and limitations and providing a comprehensive overview of recent MMS technologies available in the market; (2) highlighting the general workflow to process MMS data; (3) identifying different use cases of mobile mapping technology by reviewing some of the common applications; and (4) presenting a discussion on the benefits and challenges and sharing our views on potential research directions.
Affiliation(s)
- Mostafa Elhashash: Geospatial Data Analytics Lab, The Ohio State University, Columbus, OH 43210, USA; Department of Electrical and Computer Engineering, The Ohio State University, Columbus, OH 43210, USA
- Hessah Albanwan: Geospatial Data Analytics Lab, The Ohio State University, Columbus, OH 43210, USA; Department of Civil, Environmental and Geodetic Engineering, The Ohio State University, Columbus, OH 43210, USA
- Rongjun Qin: Geospatial Data Analytics Lab; Department of Electrical and Computer Engineering; Department of Civil, Environmental and Geodetic Engineering; Translational Data Analytics Institute, The Ohio State University, Columbus, OH 43210, USA
24. Saputra MRU, Lu CX, Porto Buarque de Gusmao PPB, Wang B, Markham A, Trigoni N. Graph-Based Thermal–Inertial SLAM With Probabilistic Neural Networks. IEEE Trans Robot 2022. DOI: 10.1109/tro.2021.3120036.
Affiliation(s)
- Bing Wang: Department of Computer Science, University of Oxford, Oxford, U.K.
- Andrew Markham: Department of Computer Science, University of Oxford, Oxford, U.K.
- Niki Trigoni: Department of Computer Science, University of Oxford, Oxford, U.K.
25. Automatic Measurements of Garment Sizes Using Computer Vision Deep Learning Models and Point Cloud Data. Applied Sciences-Basel 2022. DOI: 10.3390/app12105286.
Abstract
Automatic garment size measurement using computer vision algorithms has been attempted in various ways, but many limitations remain. One limitation is that working from 2D images constrains the process of determining the actual distance between the estimated points. To solve this problem, we propose an automated method for measuring garment sizes using computer vision deep learning models and point cloud data. In the proposed method, a deep learning-based keypoint estimation model first captures the clothing size measurement points in 2D images; point cloud data from a LiDAR sensor then provide real-world distance information to calculate the actual clothing sizes. As the proposed method uses a mobile device equipped with a LiDAR sensor and camera, it is also more easily configurable than extant methods, which have varied constraints. Experimental results show that our method is not only precise but also robust to the shape, direction, and design of the clothes in two different environments, with average relative errors of 1.59% and 2.08%, respectively.
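The measurement step reduces to pinhole back-projection with LiDAR depth. A minimal sketch follows, with illustrative intrinsics and the keypoints assumed already detected by the estimation model.

```python
import numpy as np

def backproject(u, v, depth, K):
    """Pixel (u, v) plus metric depth -> 3D point in the camera frame."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])

def garment_size_m(kp_a, kp_b, depth_map, K):
    """kp_*: (u, v) pixel keypoints; returns the metric distance between them."""
    pa = backproject(*kp_a, depth_map[kp_a[1], kp_a[0]], K)
    pb = backproject(*kp_b, depth_map[kp_b[1], kp_b[0]], K)
    return float(np.linalg.norm(pa - pb))
```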
26. Gonzalez P, Mora A, Garrido S, Barber R, Moreno L. Multi-LiDAR Mapping for Scene Segmentation in Indoor Environments for Mobile Robots. Sensors 2022;22:3690. PMID: 35632099; PMCID: PMC9147791; DOI: 10.3390/s22103690.
Abstract
Nowadays, most mobile robot applications use two-dimensional LiDAR for indoor mapping, navigation, and low-level scene segmentation. However, single-data-type maps are not enough in a six-degree-of-freedom world. Multi-LiDAR sensor fusion increases robots' capability to map the surrounding environment at different levels, exploiting the benefits of several data types while counteracting the weaknesses of each sensor. This research introduces several techniques for mapping and navigating indoor environments. First, a scan-matching algorithm based on ICP with a distance-threshold association counter is used as a multi-objective-like fitness function, and the results are then optimized with Harmony Search, without any initial guess or odometry. A global map is built during SLAM, reducing the accumulated error and yielding better results than LiDAR matching with odometry alone. As a novelty, both algorithms are implemented for 2D and 3D mapping, and the resulting maps are overlapped to fuse geometrical information at different heights. Finally, a room segmentation procedure is proposed that analyzes this information to avoid the occlusions that appear in 2D maps, with the benefits demonstrated by implementing a door recognition system. Experiments conducted in both simulated and real scenarios prove the performance of the proposed algorithms.
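The ICP-with-association-counter fitness that Harmony Search optimizes can be sketched as follows; the KD-tree lookup, threshold, and score combination are our illustrative choices, not the authors' exact formulation.

```python
import numpy as np
from scipy.spatial import cKDTree

def scan_match_fitness(scan, ref_tree, pose, threshold=0.15):
    """scan: (N, 2) points; pose: candidate (x, y, theta); lower is better."""
    x, y, th = pose
    Rm = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    moved = scan @ Rm.T + np.array([x, y])
    dist, _ = ref_tree.query(moved)                # nearest reference point
    matched = dist < threshold                     # the association counter
    if not matched.any():
        return np.inf
    # Few matches and large residuals both worsen the score.
    return dist[matched].mean() + (1.0 - matched.mean())

# Example reference map; Harmony Search would propose candidate poses to score.
ref_tree = cKDTree(np.random.default_rng(0).random((500, 2)) * 10.0)
```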
27. Zhao Z, Zhang Y, Long L, Lu Z, Shi J. Efficient and adaptive lidar–visual–inertial odometry for agricultural unmanned ground vehicle. Int J Adv Robot Syst 2022. DOI: 10.1177/17298806221094925.
Abstract
The accuracy of agricultural unmanned ground vehicles' localization directly affects the accuracy of their navigation. However, due to the changeable environment and scarce features of agricultural scenes, it is challenging for these vehicles to localize precisely in global-positioning-system-denied areas with a single sensor. In this article, we present an efficient and adaptive sensor-fusion odometry framework based on simultaneous localization and mapping to handle the localization of agricultural unmanned ground vehicles without the assistance of a global positioning system. The framework leverages three kinds of sub-odometry (lidar, visual, and inertial) and automatically combines them depending on the environment to provide accurate pose estimation in real time; the combination is implemented by trading off the robustness and accuracy of pose estimation. The efficiency and adaptability are mainly reflected in the novel surfel-based iterative closest point method we propose for lidar odometry, which utilizes a changeable surfel radius range and adaptive iterative-closest-point initialization to improve the accuracy of pose estimation in different environments. We tested our system in various agricultural working zones and on other open datasets, and the results prove that the proposed method performs better, mainly in accuracy, efficiency, and robustness, than state-of-the-art methods.
Affiliation(s)
- Zixu Zhao: Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Yucheng Zhang: Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- Long Long: Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- Zaiwang Lu: Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- Jinglin Shi: Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
28. A Forest Point Cloud Real-Time Reconstruction Method with Single-Line Lidar Based on Visual–IMU Fusion. Applied Sciences-Basel 2022. DOI: 10.3390/app12094442.
Abstract
In order to obtain tree growth information from a forest accurately and at low cost, this paper proposes a real-time forest point cloud reconstruction method with a single-line lidar based on visual–IMU fusion. We build a collection device based on a monocular camera, an inertial measurement unit (IMU), and a single-line lidar. Firstly, pose information is obtained using a nonlinear-optimization real-time localization method. Then, lidar data are projected into world coordinates and interpolated to form a dense spatial point cloud. Finally, an incremental iterative point cloud loop-closure detection algorithm based on visual keyframes is utilized to optimize the global point cloud and further improve precision. Experiments were conducted in a real forest. Compared with a reconstruction based on the Kalman filter, the root mean square error of the point cloud map decreases by 4.65%, and each frame takes 903 μs; the proposed method can therefore realize real-time scene reconstruction in large-scale forests.
29. Ariante G, Ponte S, Papa U, Greco A, Del Core G. Ground Control System for UAS Safe Landing Area Determination (SLAD) in Urban Air Mobility Operations. Sensors (Basel, Switzerland) 2022;22:3226. PMID: 35590916; PMCID: PMC9104420; DOI: 10.3390/s22093226.
Abstract
The use of Unmanned Aerial Vehicles (UAVs) and Unmanned Aircraft Systems (UASs) for civil, scientific, and military operations is constantly increasing, particularly in environments that are very dangerous or impossible for human action. Many tasks are currently carried out in metropolitan areas, such as urban traffic monitoring, pollution and land monitoring, security surveillance, and delivery of small packages. Estimating features around the flight path and surveilling crowded areas with a high number of vehicles and/or obstacles are extremely important for typical UAS missions, and ensuring safety and efficiency during air traffic operations in a metropolitan area is one of the conditions for Urban Air Mobility (UAM) operations. This paper focuses on the development of a ground control system capable of monitoring crowded areas or impervious sites and identifying the UAV position and a safe area for vertical take-off or landing (VTOL) maneuvers, ensuring a high level of accuracy and robustness even without GNSS-derived navigation information, with on-board terrain hazard detection and avoidance (DAA) capabilities, in particular during operations conducted BVLOS (Beyond Visual Line Of Sight). The system is composed of a mechanically rotating real-time LiDAR (Light Detection and Ranging) sensor linked to a Raspberry Pi 3 single-board computer (SBC) and interfaced to a Ground Control Station (GCS) via a wireless connection for data management and 3D information transfer.
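A flatness test is one plausible core of safe-landing-area determination. The sketch below fits a plane to each grid cell of LiDAR points and accepts flat, near-level cells; the thresholds and the gravity-aligned-frame assumption are ours, not the authors' values.

```python
import numpy as np

def cell_is_landable(points, max_slope_deg=5.0, max_roughness=0.05):
    """points: (N, 3) cell points in a gravity-aligned frame."""
    if len(points) < 10:
        return False                               # not enough evidence
    centered = points - points.mean(axis=0)
    # The plane normal is the singular vector of the smallest singular value.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[2]
    slope = np.degrees(np.arccos(abs(normal[2])))  # tilt from vertical
    roughness = np.sqrt(s[2] ** 2 / len(points))   # RMS out-of-plane residual
    return slope < max_slope_deg and roughness < max_roughness
```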
Collapse
Affiliation(s)
- Gennaro Ariante
- Department of Science and Technology, University of Naples “Parthenope”, 80133 Naples, Italy; (U.P.); (A.G.); (G.D.C.)
| | - Salvatore Ponte
- Department of Engineering, University of Campania “L. Vanvitelli”, 81031 Aversa, Italy;
| | - Umberto Papa
- Department of Science and Technology, University of Naples “Parthenope”, 80133 Naples, Italy; (U.P.); (A.G.); (G.D.C.)
| | - Alberto Greco
- Department of Science and Technology, University of Naples “Parthenope”, 80133 Naples, Italy; (U.P.); (A.G.); (G.D.C.)
| | - Giuseppe Del Core
- Department of Science and Technology, University of Naples “Parthenope”, 80133 Naples, Italy; (U.P.); (A.G.); (G.D.C.)
| |
Collapse
|
30
|
Camera, LiDAR and Multi-modal SLAM Systems for Autonomous Ground Vehicles: a Survey. J INTELL ROBOT SYST 2022. [DOI: 10.1007/s10846-022-01582-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
|
31
|
Rozsypálek Z, Broughton G, Linder P, Rouček T, Blaha J, Mentzl L, Kusumam K, Krajník T. Contrastive Learning for Image Registration in Visual Teach and Repeat Navigation. SENSORS 2022; 22:s22082975. [PMID: 35458959 PMCID: PMC9030179 DOI: 10.3390/s22082975] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/01/2022] [Revised: 04/04/2022] [Accepted: 04/11/2022] [Indexed: 12/04/2022]
Abstract
Visual teach and repeat navigation (VT&R) is popular in robotics thanks to its simplicity and versatility. It enables mobile robots equipped with a camera to traverse learned paths without the need to create globally consistent metric maps. Although teach and repeat frameworks have been reported to be relatively robust to changing environments, they still struggle with day-to-night and seasonal changes. This paper aims to find the horizontal displacement between prerecorded and currently perceived images required to steer a robot towards the previously traversed path. We employ a fully convolutional neural network to obtain dense representations of the images that are robust to changes in the environment and variations in illumination. The proposed model achieves state-of-the-art performance on multiple datasets with seasonal and day/night variations. In addition, our experiments show that it is possible to use the model to generate additional training examples that can be used to further improve the original model’s robustness. We also conducted a real-world experiment on a mobile robot to demonstrate the suitability of our method for VT&R.
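As an illustrative sketch of the registration step this abstract describes (not the authors' model: the CNN embedding is replaced by placeholder descriptors, and all names and values are assumptions), the horizontal displacement between two dense representations can be estimated by 1D cross-correlation:

```python
# Hedged sketch: find the horizontal shift that best aligns two dense
# column-wise feature maps; the learned embedding itself is omitted.
import numpy as np

def horizontal_shift(desc_a, desc_b, max_shift=32):
    """desc_a, desc_b: (C, W) column descriptors; returns the best shift."""
    best, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = desc_a[:, s:], desc_b[:, :desc_b.shape[1] - s]
        else:
            a, b = desc_a[:, :s], desc_b[:, -s:]
        score = np.sum(a * b) / a.shape[1]  # mean correlation per column
        if score > best_score:
            best, best_score = s, score
    return best

rng = np.random.default_rng(1)
ref = rng.standard_normal((16, 128))
cur = np.roll(ref, 7, axis=1)       # simulated 7-pixel displacement
print(horizontal_shift(cur, ref))   # expected: 7
```

The recovered shift is what a VT&R controller would feed to the steering correction.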
Collapse
Affiliation(s)
- Zdeněk Rozsypálek
- Artificial Intelligence Center, Faculty of Electrical Engineering, Czech Technical University in Prague, 166 27 Prague 6, Czech Republic; (G.B.); (P.L.); (T.R.); (J.B.); (L.M.); (T.K.)
- Correspondence:
| | - George Broughton
- Artificial Intelligence Center, Faculty of Electrical Engineering, Czech Technical University in Prague, 166 27 Prague 6, Czech Republic; (G.B.); (P.L.); (T.R.); (J.B.); (L.M.); (T.K.)
| | - Pavel Linder
- Artificial Intelligence Center, Faculty of Electrical Engineering, Czech Technical University in Prague, 166 27 Prague 6, Czech Republic; (G.B.); (P.L.); (T.R.); (J.B.); (L.M.); (T.K.)
| | - Tomáš Rouček
- Artificial Intelligence Center, Faculty of Electrical Engineering, Czech Technical University in Prague, 166 27 Prague 6, Czech Republic; (G.B.); (P.L.); (T.R.); (J.B.); (L.M.); (T.K.)
| | - Jan Blaha
- Artificial Intelligence Center, Faculty of Electrical Engineering, Czech Technical University in Prague, 166 27 Prague 6, Czech Republic; (G.B.); (P.L.); (T.R.); (J.B.); (L.M.); (T.K.)
| | - Leonard Mentzl
- Artificial Intelligence Center, Faculty of Electrical Engineering, Czech Technical University in Prague, 166 27 Prague 6, Czech Republic; (G.B.); (P.L.); (T.R.); (J.B.); (L.M.); (T.K.)
| | - Keerthy Kusumam
- Department of Computer Science, University of Nottingham, Jubilee Campus, 7301 Wollaton Rd, Lenton, Nottingham NG8 1BB, UK;
| | - Tomáš Krajník
- Artificial Intelligence Center, Faculty of Electrical Engineering, Czech Technical University in Prague, 166 27 Prague 6, Czech Republic; (G.B.); (P.L.); (T.R.); (J.B.); (L.M.); (T.K.)
| |
Collapse
|
32
|
Bai L, Li Y, Kirubarajan T, Gao X. Quadruple tripatch-wise modular architecture-based real-time structure from motion. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.01.071] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
33
|
Simultaneous Localization and Mapping (SLAM) and Data Fusion in Unmanned Aerial Vehicles: Recent Advances and Challenges. DRONES 2022. [DOI: 10.3390/drones6040085] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/05/2023]
Abstract
This article presents a survey of simultaneous localization and mapping (SLAM) and data fusion techniques for object detection and environmental scene perception in unmanned aerial vehicles (UAVs). We critically evaluate current SLAM implementations in robotics and autonomous vehicles and their applicability and scalability to UAVs. SLAM is envisioned as a potential technique for object detection and scene perception that enables UAV navigation through continuous state estimation. In this article, we bridge the gap between SLAM and data fusion in UAVs while comprehensively surveying related object detection techniques such as visual odometry and aerial photogrammetry. We begin with an introduction to applications where UAV localization is necessary, followed by an analysis of multimodal sensor data fusion for the information gathered from different sensors mounted on UAVs. We then discuss SLAM techniques, such as Kalman filters and extended Kalman filters, that address scene perception, mapping, and localization in UAVs. The findings are summarized to relate current and emerging SLAM and data fusion methods for UAV navigation, and some avenues for further research are discussed.
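Since the survey leans on Kalman filtering for state estimation, a compact EKF predict/update cycle is sketched below. The unicycle motion model, range-to-landmark observation, and noise levels are illustrative assumptions, not taken from the article:

```python
# Hedged EKF sketch: predict with a unicycle motion model, correct with a
# range observation to a known landmark. Models and noise are assumptions.
import numpy as np

def ekf_step(x, P, u, z, landmark, Q, R):
    px, py, th = x
    v, w = u
    x_pred = np.array([px + v * np.cos(th), py + v * np.sin(th), th + w])
    F = np.array([[1, 0, -v * np.sin(th)],
                  [0, 1,  v * np.cos(th)],
                  [0, 0, 1]])
    P_pred = F @ P @ F.T + Q
    # Update with a range measurement to the landmark.
    dx, dy = landmark - x_pred[:2]
    r = np.hypot(dx, dy)
    H = np.array([[-dx / r, -dy / r, 0.0]])
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + (K @ np.array([z - r])).ravel()
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(3), np.eye(3) * 0.1
x, P = ekf_step(x, P, u=(1.0, 0.0), z=4.05,
                landmark=np.array([5.0, 0.0]),
                Q=np.eye(3) * 0.01, R=np.array([[0.04]]))
print(x)
```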
Collapse
|
34
|
A Robust Localization System Fusion Vision-CNN Relocalization and Progressive Scan Matching for Indoor Mobile Robots. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12063007] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Map-based high-precision dynamic pose tracking and rapid relocalization from an unknown pose are very important for indoor navigation robots. This paper proposes a robust, high-precision indoor robot positioning algorithm that combines vision and laser sensor information. The algorithm comprises two parts: initialization and real-time pose tracking. The initialization component addresses the uncertainty of a robot's initial pose and the loss of pose tracking. First, laser information is added as a geometric constraint to the posenetLSTM neural network, which otherwise considers only image information, and the loss function is redesigned, thereby improving global positioning accuracy. Second, starting from the coarse visual positioning, the branch-and-bound method is used to quickly search for the high-precision pose of the robot. In the real-time tracking component, small-scale correlative sampling is performed on a high-resolution environment grid map, and the robot's pose is dynamically tracked in real time. When the score of the tracked pose falls below a threshold, nonlinear graph optimization is used to refine the pose. To demonstrate the robustness, high precision, and real-time performance of the algorithm, this article first builds a simulation environment in Gazebo for evaluation and then verifies the algorithm's performance on the Mir robot platform. Both simulations and experiments show that introducing laser information into the neural network greatly improves the accuracy of visual relocalization, and the system can quickly perform high-precision repositioning when the camera is not severely occluded. Compared with the pose tracking of the adaptive Monte Carlo localization (AMCL) algorithm, the proposed algorithm also improves both accuracy and real-time performance.
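The correlative-sampling idea behind the tracking step can be sketched as scoring candidate poses by summing occupancy values at projected scan endpoints. This is a hedged simplification: the paper's branch-and-bound search over pose space is replaced here by a brute-force candidate grid, and the toy map is an assumption:

```python
# Hedged sketch of correlative scan matching on an occupancy grid.
import numpy as np

def score_pose(grid, res, pose, ranges, angles):
    x, y, th = pose
    ex = x + ranges * np.cos(angles + th)
    ey = y + ranges * np.sin(angles + th)
    ix, iy = (ex / res).astype(int), (ey / res).astype(int)
    ok = (ix >= 0) & (ix < grid.shape[0]) & (iy >= 0) & (iy < grid.shape[1])
    return grid[ix[ok], iy[ok]].sum()  # hits on occupied cells

def best_pose(grid, res, ranges, angles, candidates):
    return max(candidates, key=lambda p: score_pose(grid, res, p, ranges, angles))

# Toy map: an occupied wall at x = 1.0 m; the true pose is the origin.
res = 0.1
grid = np.zeros((30, 30)); grid[10, :] = 1.0
angles = np.zeros(5)
ranges = np.full(5, 1.0)
cands = [(k * 0.1, 0.0, 0.0) for k in range(-3, 4)]
print(best_pose(grid, res, ranges, angles, cands))  # best x offset ~ 0.0
```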
Collapse
|
35
|
Ou J, Huang P, Zhou J, Zhao Y, Lin L. Automatic Extrinsic Calibration of 3D LIDAR and Multi-Cameras Based on Graph Optimization. SENSORS 2022; 22:s22062221. [PMID: 35336392 PMCID: PMC8954836 DOI: 10.3390/s22062221] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/12/2022] [Revised: 02/28/2022] [Accepted: 03/10/2022] [Indexed: 12/04/2022]
Abstract
In recent years, multi-sensor fusion technology has made enormous progress in 3D reconstruction, surveying and mapping, autonomous driving, and other related fields, and extrinsic calibration is a prerequisite for multi-sensor fusion applications. This paper proposes a 3D LIDAR-to-camera automatic calibration framework based on graph optimization. The system automatically identifies the position of the calibration pattern, builds a set of virtual feature point clouds, and can calibrate the LIDAR and multiple cameras simultaneously. To test this framework, a multi-sensor system was formed using a mobile robot equipped with LIDAR and monocular and binocular cameras, and the pairwise calibration of the LIDAR with the two cameras was evaluated quantitatively and qualitatively. The results show that this method produces more accurate calibration results than the state-of-the-art method: the average error on the camera normalization plane is 0.161 mm, which outperforms existing calibration methods. Because graph optimization also refines the original point cloud while optimizing the extrinsic parameters between the sensors, errors introduced during data collection can be effectively corrected, making the method robust to poor-quality data.
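The geometric core of any such extrinsic calibration is recovering a rigid transform between matched 3D features seen by two sensors. The sketch below shows only this closed-form step (Kabsch/SVD) under noise-free synthetic correspondences; the paper's graph optimization over many views and cameras is not reproduced:

```python
# Hedged sketch: least-squares rigid transform between matched 3D points.
import numpy as np

def rigid_transform(src, dst):
    """Return R, t with dst ~= R @ src + t; src, dst are (N, 3)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T          # proper rotation (no reflection)
    return R, mu_d - R @ mu_s

# Toy check: rotate points 90 degrees about z and shift them.
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
src = np.random.default_rng(2).standard_normal((10, 3))
dst = src @ R_true.T + t_true
R, t = rigid_transform(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```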
Collapse
Affiliation(s)
- Jinshun Ou
- School of Mechanical Engineering, Shandong University, Jinan 250061, China; (J.O.); (J.Z.); (Y.Z.); (L.L.)
- Key Laboratory of High Efficiency and Clean Mechanical Manufacture, Ministry of Education, Jinan 250061, China
| | - Panling Huang
- School of Mechanical Engineering, Shandong University, Jinan 250061, China; (J.O.); (J.Z.); (Y.Z.); (L.L.)
- Key Laboratory of High Efficiency and Clean Mechanical Manufacture, Ministry of Education, Jinan 250061, China
- Correspondence:
| | - Jun Zhou
- School of Mechanical Engineering, Shandong University, Jinan 250061, China; (J.O.); (J.Z.); (Y.Z.); (L.L.)
- Key Laboratory of High Efficiency and Clean Mechanical Manufacture, Ministry of Education, Jinan 250061, China
| | - Yifan Zhao
- School of Mechanical Engineering, Shandong University, Jinan 250061, China; (J.O.); (J.Z.); (Y.Z.); (L.L.)
- Key Laboratory of High Efficiency and Clean Mechanical Manufacture, Ministry of Education, Jinan 250061, China
| | - Lebin Lin
- School of Mechanical Engineering, Shandong University, Jinan 250061, China; (J.O.); (J.Z.); (Y.Z.); (L.L.)
- Key Laboratory of High Efficiency and Clean Mechanical Manufacture, Ministry of Education, Jinan 250061, China
| |
Collapse
|
36
|
Naheem K, Elsharkawy A, Koo D, Lee Y, Kim M. A UWB-Based Lighter-Than-Air Indoor Robot for User-Centered Interactive Applications. SENSORS (BASEL, SWITZERLAND) 2022; 22:2093. [PMID: 35336264 PMCID: PMC8951315 DOI: 10.3390/s22062093] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Revised: 02/21/2022] [Accepted: 03/06/2022] [Indexed: 06/14/2023]
Abstract
Features such as safety and longer flight times make lighter-than-air robots strong candidates for indoor navigation applications involving people. However, existing interactive mobility solutions using such robots cannot follow a user over long distances in relatively large indoor spaces. At the same time, the tracking data delivered to these robots are sensitive to indoor uncertainties such as varying light intensity and electromagnetic field disturbances. To address these shortcomings, we propose an ultra-wideband (UWB)-based lighter-than-air indoor robot for user-centered interactive applications. We developed a data processing scheme over a robot operating system (ROS) framework to accommodate the robot's integration needs for user-centered interactive applications. To explore long-distance user interaction with the robot, dual interactions (i.e., user footprint following and user intention recognition) were proposed by equipping the user with a hand-held UWB sensor. Finally, experiments were conducted inside a professional arena to validate the robot's pose tracking, comparing its 3D positioning against a 3D laser sensor, and to demonstrate the applicability of user-centered autonomous following according to the dual interactions.
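UWB positioning of this kind typically reduces to trilateration from anchor ranges. A minimal linearized least-squares version is sketched below (the anchor layout and noise-free ranges are assumptions; the paper's actual solver is not reproduced):

```python
# Hedged sketch: linearized trilateration. Subtracting one range equation
# from the others yields a linear system in the unknown tag position.
import numpy as np

def trilaterate(anchors, ranges):
    """anchors: (N, 3) known positions; ranges: (N,) distances; N >= 4."""
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

anchors = np.array([[0.0, 0.0, 2.5], [6.0, 0.0, 2.5],
                    [6.0, 4.0, 2.5], [0.0, 4.0, 0.5]])
tag = np.array([2.0, 1.0, 1.0])
ranges = np.linalg.norm(anchors - tag, axis=1)
print(trilaterate(anchors, ranges))  # ~ [2.0, 1.0, 1.0]
```

Note that one anchor sits at a different height; coplanar anchors would leave the vertical coordinate poorly constrained.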
Collapse
|
37
|
Text-MCL: Autonomous Mobile Robot Localization in Similar Environment Using Text-Level Semantic Information. MACHINES 2022. [DOI: 10.3390/machines10030169] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Localization is one of the most important issues in mobile robotics, especially when an autonomous mobile robot performs a navigation task. The popular occupancy grid map built by 2D LiDAR simultaneous localization and mapping (SLAM) is convenient for path planning, and the adaptive Monte Carlo localization (AMCL) method can localize the robot in most rooms of an indoor environment. However, conventional methods fail to locate the robot where similar, repeated geometric structures occur, such as long corridors. To solve this problem, we present Text-MCL, a new method for robot localization based on text information and laser scan data. A coarse-to-fine localization paradigm is used: first, we find the coarse place for global localization using text-level semantic information, and then we obtain a fine local pose estimate using the Monte Carlo localization (MCL) method based on laser data. Extensive experiments demonstrate that our approach improves the global localization speed and raises the success rate to 96.2% with few particles. In addition, a mobile robot using our approach can recover from kidnapping after a short movement, whereas conventional MCL methods converge to the wrong position.
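The fine-localization stage is a standard particle filter. The sketch below shows one MCL cycle (predict, weight, resample) in a one-dimensional corridor world; the text-level semantic cue used for coarse localization is omitted, and the wall-range measurement model is an assumption:

```python
# Hedged MCL sketch: propagate particles with a noisy motion model,
# weight by measurement likelihood, and resample. 1-D world assumed.
import numpy as np

rng = np.random.default_rng(3)
WALL_X = 5.0  # known wall position from the map

def mcl_step(particles, control, z, sigma=0.1):
    # Predict: apply the commanded motion plus process noise.
    particles = particles + control + rng.normal(0.0, 0.05, particles.size)
    # Weight: likelihood of the measured range to the wall.
    w = np.exp(-0.5 * ((z - (WALL_X - particles)) / sigma) ** 2) + 1e-300
    w /= w.sum()
    # Resample particles in proportion to their weights.
    return particles[rng.choice(particles.size, particles.size, p=w)]

particles = rng.uniform(0.0, 5.0, 500)  # global uncertainty along a corridor
true_x = 1.0
for _ in range(5):
    true_x += 0.2
    z = WALL_X - true_x + rng.normal(0.0, 0.1)
    particles = mcl_step(particles, 0.2, z)
print(true_x, particles.mean())  # the particle cloud converges near true_x
```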
Collapse
|
38
|
Yang T, Cabani A, Chafouk H. A Survey of Recent Indoor Localization Scenarios and Methodologies. SENSORS 2021; 21:s21238086. [PMID: 34884090 PMCID: PMC8662396 DOI: 10.3390/s21238086] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/12/2021] [Revised: 11/25/2021] [Accepted: 11/29/2021] [Indexed: 01/27/2023]
Abstract
Recently, various novel scenarios have been studied for indoor localization. Trilateration is the classic geometric model of indoor localization: uniform RSSI data are converted directly into distance ranges, and a trilateration solution is then obtained algebraically from these ranges to fix the user's actual location. However, the collected RSSI or other measurement data should be further processed and classified to lower the localization error rate, rather than using raw data affected by multi-path effects, multiple nonlinear interference sources, and noise. In this survey, a large number of existing techniques are presented for different indoor network structures and channel conditions, divided into LOS (line-of-sight) and NLOS (non-line-of-sight). In addition, input measurement data such as RSSI (received signal strength indication), TDOA (time difference of arrival), DOA (direction of arrival), and RTT (round-trip time) are studied for different application scenarios. Key localization techniques such as RSSI-based fingerprinting are presented using supervised machine learning methods, namely SVM (support vector machine), KNN (K nearest neighbors), and NN (neural network) methods, especially in an offline training phase. Unsupervised methods such as isolation forest, k-means, and expectation maximization are used to further improve localization accuracy in the online testing phase. For Bayesian filtering, apart from basic linear Kalman filter (LKF) methods, nonlinear stochastic filters such as the extended KF, cubature KF, unscented KF, and particle filters are introduced; these nonlinear methods are more suitable for dynamic localization models. In addition to localization accuracy, other important performance features and evaluation aspects are considered: scalability, stability, reliability, and the complexity of the proposed algorithms are compared in this survey. Our paper provides a comprehensive perspective for comparing existing techniques and related practical localization models, with the aim of improving localization accuracy and reducing system complexity.
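Of the surveyed techniques, RSSI fingerprinting with KNN is easy to illustrate: an online RSSI vector is matched against an offline radio map of reference points. The sketch below uses a synthetic log-distance path-loss radio map; the grid, access-point layout, and model constants are assumptions:

```python
# Hedged sketch: weighted-KNN fingerprinting over a synthetic radio map.
import numpy as np

def knn_locate(radio_map, positions, rssi, k=3):
    """radio_map: (M, A) offline fingerprints at M reference points;
    positions: (M, 2) their coordinates; rssi: (A,) the online reading."""
    d = np.linalg.norm(radio_map - rssi, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-9)          # inverse-distance weighting
    return (positions[nearest] * w[:, None]).sum(0) / w.sum()

rng = np.random.default_rng(4)
positions = np.array([[x, y] for x in range(5) for y in range(5)], float)
aps = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 4.0]])  # access points
# Log-distance path-loss model: RSSI = -40 - 20 * log10(range).
dist = np.linalg.norm(positions[:, None] - aps[None], axis=2) + 0.1
radio_map = -40.0 - 20.0 * np.log10(dist)
query = radio_map[12] + rng.normal(0.0, 0.5, 3)  # noisy reading at (2, 2)
print(knn_locate(radio_map, positions, query))   # ~ [2.0, 2.0]
```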
Collapse
|
39
|
Petracek P, Kratky V, Petrlik M, Baca T, Kratochvil R, Saska M. Large-Scale Exploration of Cave Environments by Unmanned Aerial Vehicles. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3098304] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
40
|
|
41
|
Abstract
Service robots are appearing more and more in our daily life. The development of service robots combines multiple fields of research, from object perception to object manipulation. The state of the art continues to improve the coupling between object perception and manipulation. This coupling is necessary for service robots not only to perform various tasks in a reasonable amount of time but also to continually adapt to new environments and safely interact with non-expert human users. Nowadays, robots are able to recognize various objects and quickly plan a collision-free trajectory to grasp a target object in predefined settings. In most cases, however, these abilities rely on large amounts of training data. The knowledge of such robots is therefore fixed after the training phase, and any change in the environment requires complicated, time-consuming, and expensive re-programming by human experts. Such approaches remain too rigid for real-life applications in unstructured environments, where a significant portion of the environment is unknown and cannot be directly sensed or controlled. In such environments, no matter how extensive the training data used for batch learning, a robot will always face new objects. Therefore, apart from batch learning, the robot should be able to continually learn new object categories and grasp affordances from very few training examples on-site. Moreover, apart from robot self-learning, non-expert users could interactively guide the process of experience acquisition by teaching new concepts or by correcting insufficient or erroneous ones. In this way, the robot constantly learns how to help humans in everyday tasks by gaining more and more experience, without the need for re-programming. In this paper, we review a set of previously published works, discuss advances in service robots from object perception to complex object manipulation, and shed light on current challenges and bottlenecks.
Collapse
|
42
|
Survey Solutions for 3D Acquisition and Representation of Artificial and Natural Caves. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app11146482] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
Abstract
A three-dimensional survey of natural caves is often difficult due to the roughness of the investigated area and problems of accessibility. Traditionally adopted techniques allow only a simplified acquisition of cave topography, characterized by an oversimplification of the geometry. Nowadays, the advent of LiDAR and Structure from Motion applications has eased three-dimensional surveys in different environments. In this paper, we present a comparison of three three-dimensional survey systems, namely a Terrestrial Laser Scanner, a SLAM-based portable instrument, and a commercial photo camera, to test their possible deployment in natural cave surveys. A comparative test was carried out in a tunnel stretch to calibrate the instrumentation at a benchmark site, chosen for its regular geometry and easy accessibility. Based on the results from the calibration site, we present a methodology founded on the Structure from Motion approach, which proved the best compromise among accuracy, feasibility, and cost-effectiveness, for the three-dimensional survey of complex natural caves from a sequence of images. The method combines two approaches: a low-resolution complete three-dimensional model of the cave and ultra-detailed models of the most peculiar cave morphological elements. The proposed system was tested in the Gazzano Cave (Piemonte region, Northwestern Italy). The result is a low-resolution three-dimensional model of the cave, a resolution imposed by the site's extent and the remarkable amount of data. Additionally, a peculiar speleothem, i.e., a stalagmite, was surveyed at high resolution to test the proposed high-resolution approach on a single object. The benchmark and cave trials allowed a better-informed choice of instrumentation for underground surveys with respect to accuracy and feasibility.
Collapse
|
43
|
Deep learning for object detection and scene perception in self-driving cars: Survey, challenges, and open issues. ARRAY 2021. [DOI: 10.1016/j.array.2021.100057] [Citation(s) in RCA: 53] [Impact Index Per Article: 17.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
|
44
|
Research on Visual Positioning of a Roadheader and Construction of an Environment Map. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app11114968] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Abstract
The autonomous positioning of tunneling equipment is key to the intelligent, robotic operation of a tunneling face. In this paper, a method based on simultaneous localization and mapping (SLAM) is presented for estimating the body pose of a roadheader and building a navigation map of a roadway. For pose estimation, an RGB-D camera collects images, a pose calculation model of the roadheader is established based on random sample consensus (RANSAC) and the iterative closest point (ICP) algorithm, and a pose graph optimization model with closed-loop constraints is constructed. An iterative equation based on Levenberg–Marquardt is derived, which achieves the optimal estimate of the body pose. For mapping, LiDAR is used to experimentally construct grid maps with open-source algorithms such as Gmapping, Cartographer, Karto, and Hector. A point cloud map, an octree map, and a compound map are constructed experimentally with the open-source library RTAB-Map. By setting parameters such as the obstacle expansion radius and the map update frequency, a cost map for roadheader navigation is established. Combined with algorithms such as Dijkstra and timed-elastic-band, simulation experiments show that the combination of the octree map and the cost map can support global path planning and local obstacle avoidance.
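ICP, the core of the pose-calculation step named here, alternates nearest-neighbor matching with closed-form rigid alignment. The following is a hedged minimal sketch on synthetic points; the RANSAC outlier rejection and pose graph optimization described in the abstract are omitted:

```python
# Hedged sketch: point-to-point ICP reduced to its two alternating stages.
import numpy as np

def icp(src, dst, iters=20):
    """Align point set src toward dst; returns the transformed src."""
    cur = src.copy()
    for _ in range(iters):
        # 1) Correspondences: nearest dst point for each current point.
        d = np.linalg.norm(cur[:, None] - dst[None], axis=2)
        matched = dst[d.argmin(axis=1)]
        # 2) Closed-form rigid alignment (Kabsch) to the matched points.
        mu_c, mu_m = cur.mean(0), matched.mean(0)
        H = (cur - mu_c).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        cur = (cur - mu_c) @ R.T + mu_m
    return cur

rng = np.random.default_rng(5)
dst = rng.standard_normal((50, 3))
th = 0.2  # small rotation about z plus a translation
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0], [0.0, 0.0, 1.0]])
src = dst @ Rz.T + np.array([0.3, -0.1, 0.2])
print(np.abs(icp(src, dst) - dst).max())  # residual shrinks toward 0
```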
Collapse
|
45
|
Oelsch M, Karimi M, Steinbach E. R-LOAM: Improving LiDAR Odometry and Mapping With Point-to-Mesh Features of a Known 3D Reference Object. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3060413] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
46
|
Wang H, Wang C, Xie L. Intensity-SLAM: Intensity Assisted Localization and Mapping for Large Scale Environment. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3059567] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|
47
|
Jiang P, Chen L, Guo H, Yu M, Xiong J. Novel indoor positioning algorithm based on Lidar/inertial measurement unit integrated system. INT J ADV ROBOT SYST 2021. [DOI: 10.1177/1729881421999923] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
As an important research field in mobile robotics, simultaneous localization and mapping (SLAM) is the core technology for realizing intelligent autonomous mobile robots. To address the low positioning accuracy of Lidar (light detection and ranging) SLAM under nonlinear, non-Gaussian noise, this article presents a mobile robot SLAM method that combines Lidar and an inertial measurement unit (IMU) in a multi-sensor integrated system and uses rank Kalman filtering to estimate the robot's trajectory from IMU and Lidar observations. Rank Kalman filtering is structurally similar to Gaussian deterministic point-sampling filters, but it does not require the assumption of a Gaussian distribution: it computes the sampling points and their weights entirely from the correlation principle of rank statistics, making it suitable for nonlinear, non-Gaussian systems. In repeated experiments on small-scale arc trajectories, compared with the Lidar-only SLAM algorithm, the new algorithm reduces the mean error of the indoor mobile robot in the X direction from 0.0928 m to 0.0451 m (a 46.39% improvement) and in the Y direction from 0.0772 m to 0.0405 m (a 48.40% improvement). Compared with the extended Kalman filter fusion algorithm, it reduces the mean X error from 0.0597 m to 0.0451 m (24.46%) and the mean Y error from 0.0537 m to 0.0405 m (24.58%). Finally, in a test on a large-scale rectangular trajectory, rank Kalman filtering improved accuracy over the extended Kalman filter by 23.84% and 25.26% in the X and Y directions, respectively, verifying the improved accuracy of the proposed algorithm.
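For orientation, the baseline being improved upon is ordinary Kalman fusion of IMU-driven prediction with Lidar position fixes. The sketch below shows that baseline only, under an assumed constant-velocity model and illustrative noise values; the rank Kalman filter's rank-statistics sampling is not reproduced here:

```python
# Hedged baseline sketch: linear KF fusing IMU acceleration (prediction)
# with Lidar-derived position fixes (correction). Values are assumptions.
import numpy as np

def kf_fuse(x, P, acc, z, dt=0.02, q=0.05, r=0.04):
    """One predict/correct cycle for state x = [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt**2, dt])
    x = F @ x + B * acc                  # predict with IMU acceleration
    P = F @ P @ F.T + q * np.eye(2)
    H = np.array([[1.0, 0.0]])           # Lidar observes position only
    S = H @ P @ H.T + r
    K = (P @ H.T) / S
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(7)
x, P = np.zeros(2), np.eye(2)
for k in range(1, 51):                   # constant 0.5 m/s^2 acceleration
    t = k * 0.02
    x, P = kf_fuse(x, P, acc=0.5, z=0.25 * t**2 + rng.normal(0.0, 0.2))
print(x)  # estimate near the true [0.25, 0.5] at t = 1 s
```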
Collapse
Affiliation(s)
- Ping Jiang
- Information Engineering School, Nanchang University, Nanchang, China
| | - Liang Chen
- Information Engineering School, Nanchang University, Nanchang, China
| | - Hang Guo
- Information Engineering School, Nanchang University, Nanchang, China
| | - Min Yu
- College of Software, Jiangxi Normal University, Nanchang, China
| | - Jian Xiong
- Information Engineering School, Nanchang University, Nanchang, China
| |
Collapse
|
48
|
Qu Y, Yang M, Zhang J, Xie W, Qiang B, Chen J. An Outline of Multi-Sensor Fusion Methods for Mobile Agents Indoor Navigation. SENSORS 2021; 21:s21051605. [PMID: 33668886 PMCID: PMC7956205 DOI: 10.3390/s21051605] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/16/2021] [Revised: 02/10/2021] [Accepted: 02/15/2021] [Indexed: 11/30/2022]
Abstract
Indoor autonomous navigation refers to the perception and exploration abilities of mobile agents in unknown indoor environments with the help of various sensors. It is one of the most basic and important functions of mobile agents. Despite the high performance of single-sensor navigation methods, multi-sensor fusion can further improve the perception and navigation abilities of mobile agents. This work summarizes multi-sensor fusion methods for mobile agent navigation by (1) analyzing and comparing the advantages and disadvantages of single sensors in navigation tasks and (2) introducing the mainstream multi-sensor fusion techniques, including various sensor combinations and several widely recognized multi-modal sensor datasets. Finally, we discuss likely technical trends in multi-sensor fusion, especially the challenges it faces in practical navigation environments.
Collapse
Affiliation(s)
- Yuanhao Qu
- Research Center for Brain-inspired Intelligence (BII), Institute of Automation, Chinese Academy of Sciences (CASIA), Beijing 100190, China;
- School of Computer and Information Security, Guilin University of Electronic Technology, Guilin 541004, China; (J.Z.); (W.X.); (B.Q.); (J.C.)
| | - Minghao Yang
- Research Center for Brain-inspired Intelligence (BII), Institute of Automation, Chinese Academy of Sciences (CASIA), Beijing 100190, China;
- Correspondence:
| | - Jiaqing Zhang
- School of Computer and Information Security, Guilin University of Electronic Technology, Guilin 541004, China; (J.Z.); (W.X.); (B.Q.); (J.C.)
| | - Wu Xie
- School of Computer and Information Security, Guilin University of Electronic Technology, Guilin 541004, China; (J.Z.); (W.X.); (B.Q.); (J.C.)
| | - Baohua Qiang
- School of Computer and Information Security, Guilin University of Electronic Technology, Guilin 541004, China; (J.Z.); (W.X.); (B.Q.); (J.C.)
| | - Jinlong Chen
- School of Computer and Information Security, Guilin University of Electronic Technology, Guilin 541004, China; (J.Z.); (W.X.); (B.Q.); (J.C.)
| |
Collapse
|
49
|
Arshad S, Kim GW. Role of Deep Learning in Loop Closure Detection for Visual and Lidar SLAM: A Survey. SENSORS 2021; 21:s21041243. [PMID: 33578695 PMCID: PMC7916334 DOI: 10.3390/s21041243] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/03/2020] [Revised: 01/27/2021] [Accepted: 02/04/2021] [Indexed: 11/16/2022]
Abstract
Loop closure detection is of vital importance in simultaneous localization and mapping (SLAM), as it helps reduce the cumulative error of the robot's estimated pose and generate a consistent global map. Many variations of this problem have been considered in the past, and existing methods differ in how query and reference views are acquired, the choice of scene representation, and the associated matching strategy. The contributions of this survey are manifold. It provides a thorough study of the existing literature on loop closure detection algorithms for visual and Lidar SLAM and discusses their insights along with their limitations. It presents a taxonomy of state-of-the-art deep learning-based loop detection algorithms with detailed comparison metrics, and it identifies the major challenges of conventional approaches. Deep learning-based methods that tackle these challenges are then reviewed, focusing on methods that provide long-term autonomy under varying conditions such as changing weather, light, seasons, viewpoint, and occlusion due to moving objects. Furthermore, open challenges and future directions are discussed.
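A common core shared by the deep methods surveyed is descriptor-based candidate retrieval: the current frame's global descriptor is compared against earlier keyframes by cosine similarity. The sketch below uses random placeholder vectors in place of a learned CNN embedding; the threshold and exclusion window are assumptions:

```python
# Hedged sketch: loop closure candidate retrieval by cosine similarity
# over global keyframe descriptors (placeholders for a CNN embedding).
import numpy as np

def loop_candidates(db, query, threshold=0.9, exclude_last=10):
    """db: (N, D) keyframe descriptors; returns indices of likely revisits."""
    db_n = db / np.linalg.norm(db, axis=1, keepdims=True)
    q_n = query / np.linalg.norm(query)
    sims = db_n @ q_n
    sims[-exclude_last:] = -1.0  # ignore temporally adjacent frames
    return np.flatnonzero(sims > threshold)

rng = np.random.default_rng(6)
db = rng.standard_normal((100, 256))
query = db[17] + 0.1 * rng.standard_normal(256)  # revisiting keyframe 17
print(loop_candidates(db, query))                # expected: [17]
```

A verified candidate would then be passed to geometric verification and pose graph optimization, which are outside this sketch.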
Collapse
|
50
|
Internal Wind Turbine Blade Inspections Using UAVs: Analysis and Design Issues. ENERGIES 2021. [DOI: 10.3390/en14020294] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Interior and exterior wind turbine blade inspections are necessary to extend the lifetime of wind turbine generators. The use of unmanned vehicles is an alternative to exterior blade inspections performed by technicians, which require cranes and ropes. Interior blade inspections are even more challenging due to confined spaces, lack of illumination, and the presence of potentially harmful internal structural components; the cost of manned interior inspections is a further major limiting factor. This paper analyses all aspects of the viability of using manually controlled or autonomous aerial vehicles for interior wind turbine blade inspections. We discuss why the size, weight, and flight time of a vehicle, together with the structure of the wind turbine blade, are the main limiting factors in performing internal blade inspections. We also describe the design issues that must be considered to make unmanned vehicles autonomous, including the control system, the sensors that can be used, and some of the algorithms for localization, obstacle avoidance, and path planning best suited to the task. Lastly, we briefly describe the non-destructive test instrumentation that can be used for this purpose.
Collapse
|