1
|
Lahemer ESF, Rad A. An Audio-Based SLAM for Indoor Environments: A Robotic Mixed Reality Presentation. Sensors (Basel, Switzerland) 2024; 24:2796. [PMID: 38732904 PMCID: PMC11086165 DOI: 10.3390/s24092796] [Received: 03/07/2024] [Revised: 04/21/2024] [Accepted: 04/25/2024] [Indexed: 05/13/2024]
Abstract
In this paper, we present a novel approach, the audio-based virtual-landmark HoloSLAM. The method leverages a single sound source and a microphone array to estimate the direction of a voice-printed speaker. It allows an autonomous robot equipped with a single microphone array to navigate indoor environments, interact with specific sound sources, and simultaneously localize itself while mapping its surroundings. The proposed method requires neither multiple audio sources in the environment nor sensor fusion to extract the pertinent information and make accurate sound-source estimates. Furthermore, the approach incorporates robotic mixed reality using Microsoft HoloLens to superimpose landmarks, effectively mitigating the landmark-related issues of conventional audio-based landmark SLAM, particularly when audio landmarks cannot be discerned, are limited in number, or are missing entirely. The paper also evaluates an active speaker detection method, demonstrating high accuracy in scenarios where audio data are the sole input. Real-time experiments validate the effectiveness of the method, emphasizing its precision and comprehensive mapping capabilities. The results showcase the accuracy and efficiency of the proposed system, surpassing the constraints of traditional audio-based SLAM techniques and ultimately yielding a more detailed and precise map of the robot's surroundings.
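The abstract does not detail how the speaker's direction is estimated from a single microphone array. A standard generic technique for this is time-difference-of-arrival estimation via GCC-PHAT between a microphone pair, followed by a far-field direction-of-arrival conversion. The sketch below illustrates that general idea only; it is not the authors' implementation, and all function names are illustrative.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the time delay between two microphone signals using
    GCC-PHAT (generalized cross-correlation with phase transform)."""
    n = sig.shape[0] + ref.shape[0]
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-12            # PHAT weighting: keep phase, drop magnitude
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    # Rearrange so negative lags precede positive lags, then find the peak.
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs                 # delay in seconds

def doa_from_tdoa(tau, mic_dist, c=343.0):
    """Far-field direction of arrival (degrees) for a two-mic pair
    separated by mic_dist meters, given delay tau and speed of sound c."""
    return np.degrees(np.arcsin(np.clip(c * tau / mic_dist, -1.0, 1.0)))
```

In a full audio-SLAM pipeline, each such bearing estimate would serve as an observation of a sound-source landmark in the robot's state estimator.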
Affiliation(s)
- Elfituri S. F. Lahemer
- Autonomous and Intelligent Systems Laboratory, School of Mechatronic Systems Engineering, Simon Fraser University, Surrey, BC V3T 0A3, Canada;
2
Peng G, Zhou Y, Hu L, Xiao L, Sun Z, Wu Z, Zhu X. VILO SLAM: Tightly Coupled Binocular Vision-Inertia SLAM Combined with LiDAR. Sensors (Basel, Switzerland) 2023; 23:4588. [PMID: 37430501 DOI: 10.3390/s23104588] [Received: 03/19/2023] [Revised: 04/14/2023] [Accepted: 04/27/2023] [Indexed: 07/12/2023]
Abstract
Existing visual-inertial SLAM algorithms suffer from low accuracy and poor robustness when the robot moves at constant speed or purely rotates and encounters scenes with insufficient visual features. To address these problems, a tightly coupled vision-IMU-2D-lidar odometry (VILO) algorithm is proposed. First, low-cost 2D lidar observations and visual-inertial observations are fused in a tightly coupled manner. Second, the low-cost 2D lidar odometry model is used to derive the Jacobian matrix of the lidar residual with respect to the state variables to be estimated, and the residual constraint equations of the vision-IMU-2D-lidar system are constructed. Third, a nonlinear optimization method is used to obtain the optimal robot pose, solving the problem of fusing 2D lidar observations with visual-inertial information in a tightly coupled manner. The results show that the algorithm retains reliable pose-estimation accuracy and robustness in many challenging environments, with greatly reduced position and yaw-angle errors. Our research improves the accuracy and robustness of multi-sensor-fusion SLAM.
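"Tightly coupled" here means the residuals of all sensors enter one joint nonlinear least-squares problem, each with its own Jacobian, rather than each sensor producing a separate pose estimate that is fused afterward. A deliberately simplified 2D toy of that structure is sketched below, fusing range residuals (lidar-like) with a prior residual (standing in for the visual-inertial constraints) via Gauss-Newton. This is an illustration of the general formulation under stated assumptions, not the paper's actual state vector or residual equations.

```python
import numpy as np

def gauss_newton_fuse(landmarks, ranges, prior, prior_weight, x0, iters=10):
    """Estimate a 2D position by jointly minimizing stacked residuals:
    range residuals to known landmarks plus a weighted prior residual,
    solved with Gauss-Newton iterations on the combined Jacobian."""
    x = x0.astype(float)
    for _ in range(iters):
        diffs = x - landmarks                   # (N, 2) vectors to landmarks
        dists = np.linalg.norm(diffs, axis=1)   # predicted ranges
        r_range = dists - ranges                # lidar-like range residuals
        J_range = diffs / dists[:, None]        # Jacobian d(range)/d(x)
        r_prior = prior_weight * (x - prior)    # prior (vision/IMU-like) residual
        J_prior = prior_weight * np.eye(2)
        # Tightly coupled: one stacked system, one update for the shared state.
        r = np.concatenate((r_range, r_prior))
        J = np.vstack((J_range, J_prior))
        dx = np.linalg.lstsq(J, -r, rcond=None)[0]
        x = x + dx
    return x
```

In the actual VILO formulation the state additionally contains orientation, velocity, and IMU biases, and the lidar residual Jacobian is derived from the 2D lidar odometry model, but the stacked-residual structure is the same.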
Affiliation(s)
- Gang Peng
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
- Key Laboratory of Image Processing and Intelligent Control, Ministry of Education, Wuhan 430074, China
- Yicheng Zhou
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
- Key Laboratory of Image Processing and Intelligent Control, Ministry of Education, Wuhan 430074, China
- Lu Hu
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
- Key Laboratory of Image Processing and Intelligent Control, Ministry of Education, Wuhan 430074, China
- Li Xiao
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
- Key Laboratory of Image Processing and Intelligent Control, Ministry of Education, Wuhan 430074, China
- Zhigang Sun
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
- Key Laboratory of Image Processing and Intelligent Control, Ministry of Education, Wuhan 430074, China
- Zhangang Wu
- Shantui Construction Machinery Co., Ltd., Jining 272073, China
- Xukang Zhu
- Shantui Construction Machinery Co., Ltd., Jining 272073, China