1
Wang J, Lin S, Liu A. Bioinspired Perception and Navigation of Service Robots in Indoor Environments: A Review. Biomimetics (Basel) 2023; 8:350. [PMID: 37622955] [PMCID: PMC10452487] [DOI: 10.3390/biomimetics8040350]
Abstract
Biological principles attract attention in service robotics because robots face problems similar to those animals solve when performing tasks. Bioinspired perception, modeled on animals' awareness of their environment, is significant for robotic perception. This paper reviews the bioinspired perception and navigation of service robots in indoor environments, a popular application of civilian robotics. The navigation approaches are classified by perception type: vision-based, remote sensing, tactile, olfactory, sound-based, inertial, and multimodal navigation. State-of-the-art techniques are trending towards multimodal navigation, which combines several of these approaches. The main challenges in indoor navigation are precise localization and coping with dynamic, complex environments containing moving objects and people.
Affiliation(s)
- Jianguo Wang
- Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia.
- Shiwei Lin
- Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia.
- Ang Liu
- Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia.
2
Alabi A, Vanderelst D, Minai AA. Rapid learning of spatial representations for goal-directed navigation based on a novel model of hippocampal place fields. Neural Netw 2023; 161:116-128. [PMID: 36745937] [DOI: 10.1016/j.neunet.2023.01.010]
Abstract
The discovery of place cells and other spatially modulated neurons in the hippocampal complex of rodents has been crucial to elucidating the neural basis of spatial cognition. More recently, the replay of neural sequences encoding previously experienced trajectories has been observed during consummatory behavior, potentially with implications for rapid learning, quick memory consolidation, and behavioral planning. Several promising models for robotic navigation and reinforcement learning have been proposed based on these and previous findings. Most of these models, however, use carefully engineered neural networks and sometimes require long learning periods. In this paper, we present a self-organizing model incorporating place cells and replay, and demonstrate its utility for rapid one-shot learning in non-trivial environments with obstacles.
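As background for the model above, place-field models commonly describe each cell's firing rate as a smooth function of the agent's distance from that cell's preferred location. Below is a minimal sketch of a Gaussian place-field population; this is a generic textbook formulation, not the authors' self-organizing model, and the cell centers and field width are illustrative assumptions.

```python
import numpy as np

def place_cell_activity(position, centers, sigma=0.5):
    # Gaussian place-field response: each cell fires most strongly when the
    # agent is at that cell's preferred location, decaying with distance.
    d2 = np.sum((centers - position) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# 100 place cells with random preferred locations in a 10 x 10 arena.
rng = np.random.default_rng(0)
centers = rng.uniform(0.0, 10.0, size=(100, 2))
activity = place_cell_activity(np.array([4.0, 7.0]), centers)
print(int(activity.argmax()))  # index of the most active cell at (4, 7)
```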
Affiliation(s)
- Adedapo Alabi
- Department of Electrical & Computer Engineering, University of Cincinnati, Cincinnati, OH 45221, USA.
- Dieter Vanderelst
- Department of Electrical & Computer Engineering, University of Cincinnati, Cincinnati, OH 45221, USA.
- Ali A Minai
- Department of Electrical & Computer Engineering, University of Cincinnati, Cincinnati, OH 45221, USA.
3
Waheed M, Milford M, McDonald-Maier K, Ehsan S. Improving Visual Place Recognition Performance by Maximising Complementarity. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3088779]
4
Mount J, Xu M, Dawes L, Milford M. Unsupervised Selection of Optimal Operating Parameters for Visual Place Recognition Algorithms Using Gaussian Mixture Models. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2020.3043171]
5
Mao J, Hu X, Zhang L, He X, Milford M. A Bio-Inspired Goal-Directed Visual Navigation Model for Aerial Mobile Robots. J Intell Robot Syst 2020. [DOI: 10.1007/s10846-020-01190-4]
6
Mount J, Dawes L, Milford MJ. Automatic Coverage Selection for Surface-Based Visual Localization. IEEE Robot Autom Lett 2019. [DOI: 10.1109/lra.2019.2928259]
7
Qiao Y, Cappelle C, Ruichek Y, Yang T. ConvNet and LSH-Based Visual Localization Using Localized Sequence Matching. Sensors (Basel) 2019; 19:2439. [PMID: 31142006] [PMCID: PMC6603665] [DOI: 10.3390/s19112439]
Abstract
Convolutional Networks (ConvNets), with their strong image representation ability, have achieved significant progress in computer vision and robotics. In this paper, we propose a visual localization approach based on place recognition that combines powerful ConvNet features with localized image sequence matching. An image distance matrix is constructed from the cosine distances between extracted ConvNet features, and a sequence search technique is then applied to this distance matrix for the final visual recognition. To improve computational efficiency, locality sensitive hashing (LSH) is applied to achieve real-time performance with minimal accuracy degradation. We present extensive experiments on four real-world datasets to evaluate each of the specific challenges in visual recognition, including a comprehensive comparison of different ConvNet layers (each defining a level of features) under both appearance and illumination changes. Compared with traditional approaches based on hand-crafted features and single-image matching, the proposed method performs well even in the presence of appearance and illumination changes.
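To make the pipeline concrete, here is a minimal sketch of the distance-matrix construction and diagonal sequence search described above; the ConvNet feature extractor and the LSH speed-up are omitted, and the function names and sequence length are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def cosine_distance_matrix(query_feats, ref_feats):
    # D[i, j] = cosine distance between query image i and reference image j.
    # Features are L2-normalized so the dot product equals cosine similarity.
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    r = ref_feats / np.linalg.norm(ref_feats, axis=1, keepdims=True)
    return 1.0 - q @ r.T

def sequence_match(D, seq_len=10):
    # Score diagonal sequences of length seq_len through the distance matrix
    # and return, for each query start, the best-matching reference start.
    n_q, n_r = D.shape
    best = np.full(n_q, -1)
    for i in range(n_q - seq_len + 1):
        scores = [D[np.arange(i, i + seq_len), np.arange(j, j + seq_len)].mean()
                  for j in range(n_r - seq_len + 1)]
        best[i] = int(np.argmin(scores))
    return best
```

Averaging distances along a diagonal rewards reference sequences that match the query's temporal order, which is what makes sequence matching more robust to appearance change than single-image matching.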
Affiliation(s)
- Yongliang Qiao
- Australian Centre for Field Robotics (ACFR), Department of Aerospace, Mechanical and Mechatronic Engineering (AMME), The University of Sydney, Sydney, NSW 2006, Australia.
- Cindy Cappelle
- Connaissance et Intelligence Artificielle Distribuées (CIAD), University Bourgogne Franche-Comté, UTBM, F-90010 Belfort, France.
- Yassine Ruichek
- Connaissance et Intelligence Artificielle Distribuées (CIAD), University Bourgogne Franche-Comté, UTBM, F-90010 Belfort, France.
- Tao Yang
- Connaissance et Intelligence Artificielle Distribuées (CIAD), University Bourgogne Franche-Comté, UTBM, F-90010 Belfort, France.
8
Multi-Process Fusion: Visual Place Recognition Using Multiple Image Processing Methods. IEEE Robot Autom Lett 2019. [DOI: 10.1109/lra.2019.2898427]
9
Jacobson A, Chen Z, Milford M. Leveraging variable sensor spatial acuity with a homogeneous, multi-scale place recognition framework. Biol Cybern 2018; 112:209-225. [PMID: 29353330] [DOI: 10.1007/s00422-017-0745-7]
Abstract
Most robot navigation systems perform place recognition using a single sensor modality and one, or at most two, heterogeneous map scales. In contrast, mammals navigate by combining sensing from a wide variety of modalities, including vision, audition, olfaction, and touch, with a multi-scale, homogeneous neural map of the environment. In this paper, we develop a multi-scale, multi-sensor system for mapping and place recognition that combines spatial localization hypotheses at different spatial scales from multiple sensors to calculate an overall place recognition estimate. We evaluate the system's performance over three repeated 1.5-km day and night journeys across a university campus spanning outdoor and multi-level indoor environments, incorporating camera, WiFi, and barometric sensory information. The system outperforms a conventional camera-only localization system, demonstrating not only that combining multiple sensing modalities improves performance, but also that combining these modalities over multiple scales improves performance beyond a single-scale approach. The multi-scale mapping framework also lets us analyze the naturally varying spatial acuity of different sensing modalities: the multi-scale approach captures each modality at its optimal operating point where a single-scale approach does not, and sensor contributions can then be weighted at each scale based on their utility for place recognition at that scale.
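As a rough illustration of the fusion step described above, the sketch below combines per-sensor, per-scale place likelihoods using scalar utility weights. This is a simplified weighted sum under assumed data structures, not the paper's actual estimator.

```python
import numpy as np

def fuse_place_hypotheses(hypotheses, weights):
    # hypotheses: dict mapping (sensor, scale) -> length-N array of place
    #             likelihoods over N candidate places.
    # weights:    dict mapping (sensor, scale) -> scalar utility weight,
    #             e.g. reflecting how well that sensor localizes at that scale.
    n_places = len(next(iter(hypotheses.values())))
    combined = np.zeros(n_places)
    for key, likelihood in hypotheses.items():
        combined += weights.get(key, 0.0) * likelihood
    return int(np.argmax(combined)), combined

# Illustrative fusion: camera is sharp at fine scales, WiFi coarse but stable.
hyps = {("camera", "fine"):   np.array([0.1, 0.7, 0.2]),
        ("wifi",   "coarse"): np.array([0.3, 0.4, 0.3])}
w = {("camera", "fine"): 0.8, ("wifi", "coarse"): 0.2}
print(fuse_place_hypotheses(hyps, w))  # best place index and combined scores
```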
Affiliation(s)
- Adam Jacobson
- School of Electrical Engineering and Computer Science, Queensland University of Technology, Brisbane, Australia.
- Zetao Chen
- School of Electrical Engineering and Computer Science, Queensland University of Technology, Brisbane, Australia.
- Michael Milford
- School of Electrical Engineering and Computer Science, Queensland University of Technology, Brisbane, Australia.
10
Ahmad Yousef KM, Mohd BJ, Al-Widyan K, Hayajneh T. Extrinsic Calibration of Camera and 2D Laser Sensors without Overlap. Sensors (Basel) 2017; 17:2346. [PMID: 29036905] [PMCID: PMC5677002] [DOI: 10.3390/s17102346]
Abstract
Extrinsic calibration of a camera and a 2D laser range finder (lidar) is crucial in sensor data fusion applications, for example the SLAM algorithms used on mobile robot platforms. The fundamental challenge of extrinsic calibration arises when the camera and lidar do not overlap or share the same field of view. In this paper we propose a novel and flexible approach for the extrinsic calibration of a camera-lidar system without overlap, which can be used for robotic platform self-calibration. The approach is based on the robot-world hand-eye (RWHE) calibration problem, which is known to have efficient and accurate solutions. First, the system was mapped to the RWHE calibration problem, modeled as the linear relationship AX = ZB, where X and Z are the unknown calibration matrices. Then, we computed the transformation matrix B, which was the main challenge in the above mapping; the computation is based on reasonable assumptions about the geometric structure of the calibration environment. The reliability and accuracy of the proposed approach are compared to a state-of-the-art method in extrinsic 2D lidar-to-camera calibration. Experimental results from real datasets indicate that the proposed approach provides better results, with L2-norm translational and rotational deviations of 314 mm and 0.12°, respectively.
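To make the RWHE formulation concrete, here is a minimal sketch of the residual that a solver for AX = ZB would minimize. The solver itself, and the construction of B from scene geometry, are the paper's contribution and are not reproduced here; the function name and the sanity check are illustrative assumptions.

```python
import numpy as np

def rwhe_residual(A_list, B_list, X, Z):
    # Residual of the robot-world hand-eye relationship A_i X = Z B_i, where
    # A_i is the camera pose w.r.t. the calibration target at step i,
    # B_i is the lidar/robot pose w.r.t. the base at step i, and
    # X, Z are the unknown fixed 4x4 homogeneous transforms being calibrated.
    # A good (X, Z) pair drives this residual towards zero across all steps.
    return sum(np.linalg.norm(A @ X - Z @ B) for A, B in zip(A_list, B_list))

# Sanity check: with identity poses, identity calibration gives zero residual.
I = np.eye(4)
print(rwhe_residual([I], [I], I, I))  # -> 0.0
```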
Affiliation(s)
- Bassam J Mohd
- Department of Computer Engineering, the Hashemite University, Zarqa 13115, Jordan.
- Khalid Al-Widyan
- Department of Mechatronics Engineering, the Hashemite University, Zarqa 13115, Jordan.
- Thaier Hayajneh
- Department of Computer and Information Sciences, Fordham University, New York, NY 10023, USA.
11
Lowry S, Sunderhauf N, Newman P, Leonard JJ, Cox D, Corke P, Milford MJ. Visual Place Recognition: A Survey. IEEE Trans Robot 2016. [DOI: 10.1109/tro.2015.2496823]