451
Lenac K, Ćesić J, Marković I, Petrović I. Exactly sparse delayed state filter on Lie groups for long-term pose graph SLAM. Int J Rob Res 2018. DOI: 10.1177/0278364918767756.
Abstract
In this paper we propose a simultaneous localization and mapping (SLAM) back-end solution called the exactly sparse delayed state filter on Lie groups (LG-ESDSF). We derive LG-ESDSF and demonstrate that it retains all the good characteristics of the classic Euclidean ESDSF, the main advantage being the exact sparsity of the information matrix. The key advantage of LG-ESDSF over the classic ESDSF lies in its ability to respect the state space geometry by handling uncertainties and applying the filtering equations directly on Lie groups. We also exploit the special structure of the information matrix to allow long-term operation while the robot moves repeatedly through the same environment. To prove the effectiveness of the proposed SLAM solution, we conducted extensive experiments on two publicly available datasets, namely the KITTI and EuRoC datasets, using two front-ends: one based on a stereo camera and the other on a 3D LIDAR. We compare LG-ESDSF with the general graph optimization framework (g2o) coupled with the same front-ends. Like g2o, the proposed LG-ESDSF is front-end agnostic, and the comparison demonstrates that our solution can match the accuracy of g2o while maintaining faster computation times. Furthermore, the proposed back-end coupled with the stereo camera front-end forms a complete visual SLAM solution dubbed LG-SLAM. Finally, we evaluated LG-SLAM using the online KITTI protocol; at the time of writing it achieved the second best result among the stereo odometry solutions and the best result among the tested SLAM algorithms.
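The exact sparsity the abstract emphasizes comes from the delayed-state augmentation step: adding a new pose only touches the information-matrix blocks of the previous pose and the new pose. A minimal Euclidean sketch of that step (the classic ESDSF structure; the paper's Lie-group version replaces F with group Jacobians, and this is not the authors' code):

```python
import numpy as np

def augment_pose(Lam, F, Qinv):
    """Augment an information matrix Lam with a new delayed state.

    Motion model: x_new = F @ x_last + w, with w ~ N(0, Q) and Qinv = Q^-1.
    In information (inverse-covariance) form, only the last-pose and
    new-pose blocks are modified, so the matrix stays exactly sparse.
    """
    n = Lam.shape[0]
    d = F.shape[0]                         # dimension of one pose block
    out = np.zeros((n + d, n + d))
    out[:n, :n] = Lam                      # all older blocks untouched
    out[n - d:n, n - d:n] += F.T @ Qinv @ F
    out[n - d:n, n:]      -= F.T @ Qinv
    out[n:, n - d:n]      -= Qinv @ F
    out[n:, n:]           += Qinv
    return out
```

Augmenting twice leaves the block linking the first and third poses exactly zero, which is the sparsity pattern long-term delayed-state filters exploit.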
Affiliation(s)
- Kruno Lenac, University of Zagreb Faculty of Electrical Engineering and Computing, Croatia
- Josip Ćesić, University of Zagreb Faculty of Electrical Engineering and Computing, Croatia
- Ivan Marković, University of Zagreb Faculty of Electrical Engineering and Computing, Croatia
- Ivan Petrović, University of Zagreb Faculty of Electrical Engineering and Computing, Croatia
452
Sünderhauf N, Brock O, Scheirer W, Hadsell R, Fox D, Leitner J, Upcroft B, Abbeel P, Burgard W, Milford M, Corke P. The limits and potentials of deep learning for robotics. Int J Rob Res 2018. DOI: 10.1177/0278364918770733.
Abstract
The application of deep learning in robotics leads to very specific problems and research questions that are typically not addressed by the computer vision and machine learning communities. In this paper we discuss a number of robotics-specific learning, reasoning, and embodiment challenges for deep learning. We explain the need for better evaluation metrics, highlight the importance and unique challenges of deep robotic learning in simulation, and explore the spectrum between purely data-driven and model-driven approaches. We hope this paper provides a motivating overview of important research directions to overcome the current limitations and helps to fulfill the promising potential of deep learning in robotics.
Affiliation(s)
- Niko Sünderhauf, Australian Centre for Robotic Vision, Queensland University of Technology (QUT), Brisbane, Australia
- Oliver Brock, Robotics and Biology Laboratory, Technische Universität Berlin, Germany
- Walter Scheirer, Department of Computer Science and Engineering, University of Notre Dame, IN, USA
- Dieter Fox, Paul G. Allen School of Computer Science & Engineering, University of Washington, WA, USA
- Jürgen Leitner, Australian Centre for Robotic Vision, Queensland University of Technology (QUT), Brisbane, Australia
- Pieter Abbeel, UC Berkeley, Department of Electrical Engineering and Computer Sciences, CA, USA
- Wolfram Burgard, Department of Computer Science, University of Freiburg, Germany
- Michael Milford, Australian Centre for Robotic Vision, Queensland University of Technology (QUT), Brisbane, Australia
- Peter Corke, Australian Centre for Robotic Vision, Queensland University of Technology (QUT), Brisbane, Australia
453
SLAMM: Visual monocular SLAM with continuous mapping using multiple maps. PLoS One 2018; 13:e0195878. PMID: 29702697; PMCID: PMC5922523; DOI: 10.1371/journal.pone.0195878.
Abstract
This paper presents the concept of Simultaneous Localization and Multi-Mapping (SLAMM), a system that ensures continuous mapping and information preservation despite tracking failures due to corrupted frames or sensor malfunction, making it suitable for real-world applications. It works with single or multiple robots. In a single-robot scenario the algorithm generates a new map at the time of tracking failure and later merges maps at the event of loop closure. Similarly, maps generated by multiple robots are merged without prior knowledge of their relative poses, which makes the algorithm flexible. The system works in real time at frame-rate speed. The proposed approach was tested on the KITTI and TUM RGB-D public datasets and showed superior results compared to the state of the art in calibrated monocular keyframe-based visual SLAM. The mean tracking time is around 22 milliseconds. The initialization is twice as fast as in ORB-SLAM, and the retrieved map can preserve up to 90 percent more information, depending on tracking loss and loop closure events. For the benefit of the community, the source code, along with a framework to be run with the Bebop drone, is available at https://github.com/hdaoud/ORBSLAMM.
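The multi-map bookkeeping the abstract describes (spawn a map on tracking failure, merge maps on loop closure) can be sketched with a union-find over map identifiers. This is a toy illustration with a made-up API, not the SLAMM source:

```python
class MapManager:
    """Toy multi-map bookkeeping: each keyframe belongs to a map; a loop
    closure between keyframes of two maps merges those maps into one."""

    def __init__(self):
        self.parent = {}          # union-find forest over map ids
        self.next_id = 0
        self.keyframe_map = {}    # keyframe id -> map id at creation time

    def new_map(self):
        """Called when tracking is lost: start a fresh map."""
        m = self.next_id
        self.next_id += 1
        self.parent[m] = m
        return m

    def find(self, m):
        """Representative map id after all merges (with path halving)."""
        while self.parent[m] != m:
            self.parent[m] = self.parent[self.parent[m]]
            m = self.parent[m]
        return m

    def add_keyframe(self, kf, map_id):
        self.keyframe_map[kf] = map_id

    def loop_closure(self, kf_a, kf_b):
        """A place recognized across maps: merge the two maps."""
        a = self.find(self.keyframe_map[kf_a])
        b = self.find(self.keyframe_map[kf_b])
        if a != b:
            self.parent[b] = a
```

In a real system the merge would also transform one map's keyframes and landmarks into the other's frame using the relative pose recovered at the loop closure; here only the identity bookkeeping is shown.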
454
Sequence-based sparse optimization methods for long-term loop closure detection in visual SLAM. Auton Robots 2018. DOI: 10.1007/s10514-018-9736-3.
455
Bai F, Vidal-Calleja T, Huang S. Robust Incremental SLAM Under Constrained Optimization Formulation. IEEE Robot Autom Lett 2018. DOI: 10.1109/lra.2018.2794610.
456
Carlone L, Calafiore GC. Convex Relaxations for Pose Graph Optimization With Outliers. IEEE Robot Autom Lett 2018. DOI: 10.1109/lra.2018.2793352.
457
An efficient cooperative exploration strategy for wireless sensor network. INTEL SERV ROBOT 2018. DOI: 10.1007/s11370-018-0249-x.
458
He Y, Chen S. Advances in sensing and processing methods for three-dimensional robot vision. INT J ADV ROBOT SYST 2018. DOI: 10.1177/1729881418760623.
Affiliation(s)
- Yu He, College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, China
- Shengyong Chen, College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, China; School of Computer Science and Engineering, Tianjin University of Technology, Tianjin, China
459
Vallivaara I, Poikselkä K, Kemppainen A, Röning J. Quadtree-based ancestry tree maps for 2D scattered data SLAM. Adv Robot 2018. DOI: 10.1080/01691864.2018.1436468.
Affiliation(s)
- Ilari Vallivaara, Computer Science and Engineering (CSE), University of Oulu, Oulu, Finland
- Katja Poikselkä, Computer Science and Engineering (CSE), University of Oulu, Oulu, Finland
- Anssi Kemppainen, Computer Science and Engineering (CSE), University of Oulu, Oulu, Finland
- Juha Röning, Computer Science and Engineering (CSE), University of Oulu, Oulu, Finland
460
Gao H, Zhang X, Fang Y, Yuan J. A line segment extraction algorithm using laser data based on seeded region growing. INT J ADV ROBOT SYST 2018. DOI: 10.1177/1729881418755245.
Abstract
This article presents a novel line segment extraction algorithm using two-dimensional (2D) laser data, composed of four main procedures: seed-segment detection, region growing, overlap region processing, and endpoint generation. Different from existing approaches, the proposed algorithm borrows the idea of seeded region growing from the field of image processing, which makes it more efficient and yields more precise endpoints for the extracted line segments. Comparative experiments against the well-known Split-and-Merge algorithm, using real 2D data taken from our hallway and laboratory, show the superior performance of the proposed approach in terms of efficiency, correctness, and precision.
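The region-growing step can be sketched as: fit a line to a few consecutive scan points (the seed), then extend the segment point by point while each new point stays within a distance threshold of the refitted line. A minimal 2D sketch with made-up thresholds, not the authors' implementation:

```python
import numpy as np

def fit_line(pts):
    """Total-least-squares line through 2D points: centroid + unit direction."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    return c, vt[0]

def point_line_dist(p, c, d):
    """Perpendicular distance of p from the line through c with direction d."""
    v = p - c
    return abs(v[0] * d[1] - v[1] * d[0])

def grow_segment(points, seed_len=3, eps=0.05):
    """Grow one segment from the start of an ordered 2D laser scan.

    Stops at the first point that deviates from the current line fit by
    more than eps, and returns the segment's two endpoints.
    """
    pts = np.asarray(points, float)
    end = seed_len
    c, d = fit_line(pts[:end])
    while end < len(pts) and point_line_dist(pts[end], c, d) < eps:
        end += 1
        c, d = fit_line(pts[:end])      # refit with the accepted inlier
    return pts[0], pts[end - 1]
```

On a scan that runs straight and then turns a corner, growth stops at the corner, which is what gives the method its precise endpoints; a full extractor would then restart seeding after the break and handle overlap regions as the article describes.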
Affiliation(s)
- Haiming Gao, Institute of Robotics and Automatic Information System, Tianjin Key Laboratory of Intelligent Robotics, Nankai University, Tianjin, People's Republic of China
- Xuebo Zhang, Institute of Robotics and Automatic Information System, Tianjin Key Laboratory of Intelligent Robotics, Nankai University, Tianjin, People's Republic of China
- Yongchun Fang, Institute of Robotics and Automatic Information System, Tianjin Key Laboratory of Intelligent Robotics, Nankai University, Tianjin, People's Republic of China
- Jing Yuan, Institute of Robotics and Automatic Information System, Tianjin Key Laboratory of Intelligent Robotics, Nankai University, Tianjin, People's Republic of China
461
Bernuy F, Ruiz-del-Solar J. Topological Semantic Mapping and Localization in Urban Road Scenarios. J INTELL ROBOT SYST 2017. DOI: 10.1007/s10846-017-0744-x.
462
463
Cvišić I, Ćesić J, Marković I, Petrović I. SOFT-SLAM: Computationally efficient stereo visual simultaneous localization and mapping for autonomous unmanned aerial vehicles. J FIELD ROBOT 2017. DOI: 10.1002/rob.21762.
464
A Node-Based Method for SLAM Navigation in Self-Similar Underwater Environments: A Case Study. ROBOTICS 2017. DOI: 10.3390/robotics6040029.
465
An L, Zhang X, Gao H, Liu Y. Semantic segmentation–aided visual odometry for urban autonomous driving. INT J ADV ROBOT SYST 2017. DOI: 10.1177/1729881417735667.
Abstract
Visual odometry plays an important role in urban autonomous driving. Feature-based visual odometry methods sample candidates randomly from all available feature points, while alignment-based methods take all pixels into account. Both approaches assume that the majority of candidate visual cues represent the true motion. In real urban traffic scenes, however, this assumption is often violated by the many dynamic traffic participants: large trucks or buses may occupy most of the image of a front-view monocular camera and cause erroneous visual odometry estimates. Finding visual cues that represent the real motion is the most important and hardest step for visual odometry in dynamic environments. Pixel-level semantic attributes are a more reasonable criterion for candidate selection in this case. This article analyzes the availability of all visual cues with the help of pixel-level semantic information and proposes a new visual odometry method that combines feature-based and alignment-based methods in a single optimization pipeline. The proposed method was compared with three open-source visual odometry algorithms on the KITTI benchmark data sets and our own data set. Experimental results confirm that the new approach improves both accuracy and robustness in complex dynamic scenes.
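The candidate-selection idea above — reject visual cues that land on potentially moving objects — can be sketched as a simple semantic filter over feature points. The class names and the pipeline here are hypothetical illustrations (real systems typically use a label set such as Cityscapes), not the authors' code:

```python
# Hypothetical label set: classes assumed static in an urban scene.
STATIC_CLASSES = {"road", "building", "pole", "vegetation", "traffic_sign"}

def filter_static_features(features, semantic_label):
    """Keep only feature points that fall on pixels of static classes.

    features       : list of (u, v) pixel coordinates
    semantic_label : 2D array-like mapping [v][u] -> class name,
                     e.g. the output of a semantic segmentation network
    """
    return [(u, v) for (u, v) in features
            if semantic_label[v][u] in STATIC_CLASSES]
```

Features on a bus or truck would then never enter the motion estimation, which is the robustness gain the abstract claims for dynamic scenes.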
Affiliation(s)
- Lifeng An, Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Xinyu Zhang, Information Technology Center, Tsinghua University, Beijing, China
- Hongbo Gao, Key Laboratory of Automotive Safety and Energy, Tsinghua University, Beijing, China
- Yuchao Liu, Department of Computer Science and Technology, Tsinghua University, Beijing, China
466
Recchiuto CT, Sgorbissa A. Post-disaster assessment with unmanned aerial vehicles: A survey on practical implementations and research approaches. J FIELD ROBOT 2017. DOI: 10.1002/rob.21756.
467
Global Registration of 3D LiDAR Point Clouds Based on Scene Features: Application to Structured Environments. REMOTE SENSING 2017. DOI: 10.3390/rs9101014.
468
de Chambrier G, Billard A. Non-Parametric Bayesian State Space Estimator for Negative Information. Front Robot AI 2017. DOI: 10.3389/frobt.2017.00040.
469
Dominguez S. Simultaneous Recognition and Relative Pose Estimation of 3D Objects Using 4D Orthonormal Moments. SENSORS 2017; 17:s17092122. PMID: 28914779; PMCID: PMC5620957; DOI: 10.3390/s17092122.
Abstract
Both three-dimensional (3D) object recognition and pose estimation are open topics in the research community. These tasks are required for a wide range of applications, sometimes separately, sometimes concurrently. Many different algorithms have been presented in the literature to solve these problems separately, and some to solve them jointly. In this paper, an algorithm to solve them simultaneously is introduced. It is based on the definition of a four-dimensional (4D) tensor that gathers and organizes the projections of a 3D object from different points of view. This 4D tensor is then represented by a set of 4D orthonormal moments. Once these moments are arranged in a matrix that can be computed off-line, recognition and pose estimation reduce to the solution of a linear least squares problem involving that matrix and the 2D moments of the observed projection of an unknown object. The abilities of this method for 3D object recognition and pose estimation are proven analytically, so the approach does not rely on experimental work to apply a generic technique to these problems. An additional strength of the algorithm is that the required projection is textureless and defined at a very low resolution. The method is computationally simple and performs very well in both tasks, allowing its use in applications with real-time constraints. Three kinds of experiments were conducted to thoroughly validate the proposed approach: recognition and pose estimation under z-axis (yaw) rotations, the same estimation with the addition of y-axis (pitch) rotations, and pose estimation of objects in real images downloaded from the Internet. In all these cases, the results are encouraging and at a similar level to those of state-of-the-art algorithms.
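The reduction to linear least squares can be shown schematically: stack one moment vector per stored object view as the columns of an off-line matrix M, then solve min ||M a − m_obs|| for the observed projection's moments and read off the dominant coefficient. This is a schematic reading of the abstract with made-up moment vectors, not the paper's actual moment construction:

```python
import numpy as np

def recognize(M, m_obs):
    """Pick the stored object view best explaining the observed moments.

    M     : (n_moments, n_views) matrix, one column per stored view,
            precomputed off-line from the 4D moment tensor
    m_obs : (n_moments,) moment vector of the observed 2D projection

    Solves the linear least-squares problem min ||M a - m_obs|| and
    returns the index of the dominant coefficient plus the coefficients.
    """
    a, *_ = np.linalg.lstsq(M, m_obs, rcond=None)
    return int(np.argmax(np.abs(a))), a
```

Because M is fixed, its pseudoinverse could equally be precomputed once, making the on-line cost a single matrix-vector product — consistent with the real-time claim in the abstract.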
Affiliation(s)
- Sergio Dominguez, Centre for Automation and Robotics UPM-CSIC, Universidad Politécnica de Madrid, Jose Gutierrez Abascal, 2, 28006 Madrid, Spain
470
Ravankar A, Ravankar AA, Kobayashi Y, Emaru T. Hitchhiking Robots: A Collaborative Approach for Efficient Multi-Robot Navigation in Indoor Environments. SENSORS 2017; 17:s17081878. PMID: 28809803; PMCID: PMC5579880; DOI: 10.3390/s17081878.
Abstract
Hitchhiking is a means of transportation gained by asking other people for a (free) ride. We developed a multi-robot system, the first of its kind to incorporate hitchhiking in robotics, and discuss its advantages. Our method allows the hitchhiker robot to skip redundant navigation computations such as path planning, localization, obstacle avoidance, and map updates by relying completely on the driver robot. The hitchhiker, which performs only visual servoing, thus saves computation while navigating the path it shares with the driver. In the proposed system the driver robot performs all the heavy navigation computations and updates the hitchhiker with the current localized positions and new obstacle positions in the map. The system is able to recover from the 'driver-lost' scenario, which occurs when visual servoing fails. We demonstrate robot hitchhiking in real environments, considering factors such as service time and task priority with different start and goal configurations of the driver and hitchhiker robots. We also discuss, through experimental results, the admissible characteristics of the hitchhiker and when hitchhiking should and should not be allowed.
Affiliation(s)
- Abhijeet Ravankar, Lab of Robotics and Dynamics, Faculty of Engineering, Hokkaido University, Sapporo 060-8628, Japan
- Ankit A Ravankar, Lab of Robotics and Dynamics, Faculty of Engineering, Hokkaido University, Sapporo 060-8628, Japan
- Yukinori Kobayashi, Lab of Robotics and Dynamics, Faculty of Engineering, Hokkaido University, Sapporo 060-8628, Japan
- Takanori Emaru, Lab of Robotics and Dynamics, Faculty of Engineering, Hokkaido University, Sapporo 060-8628, Japan
471
Autonomous robotic exploration using a utility function based on Rényi's general theory of entropy. Auton Robots 2017. DOI: 10.1007/s10514-017-9662-9.
472
Krajnik T, Fentanes JP, Santos JM, Duckett T. FreMEn: Frequency Map Enhancement for Long-Term Mobile Robot Autonomy in Changing Environments. IEEE T ROBOT 2017. DOI: 10.1109/tro.2017.2665664.
473
474
Meng Z, Qin H, Chen Z, Chen X, Sun H, Lin F, Ang MH. A Two-Stage Optimized Next-View Planning Framework for 3-D Unknown Environment Exploration, and Structural Reconstruction. IEEE Robot Autom Lett 2017. DOI: 10.1109/lra.2017.2655144.
475
López E, García S, Barea R, Bergasa LM, Molinos EJ, Arroyo R, Romera E, Pardo S. A Multi-Sensorial Simultaneous Localization and Mapping (SLAM) System for Low-Cost Micro Aerial Vehicles in GPS-Denied Environments. SENSORS 2017; 17:s17040802. PMID: 28397758; PMCID: PMC5422163; DOI: 10.3390/s17040802.
Abstract
One of the main challenges of aerial robot navigation in indoor or GPS-denied environments is position estimation using only the available onboard sensors. This paper presents a Simultaneous Localization and Mapping (SLAM) system that remotely calculates the pose and environment map of different low-cost commercial aerial platforms, whose onboard computing capacity is usually limited. The proposed system adapts to the sensory configuration of the aerial robot by integrating different state-of-the-art SLAM methods based on vision, laser, and/or inertial measurements using an Extended Kalman Filter (EKF). A minimum onboard sensory configuration is assumed, consisting of a monocular camera, an Inertial Measurement Unit (IMU), and an altimeter. This improves the results of well-known monocular visual SLAM methods (LSD-SLAM and ORB-SLAM are tested and compared in this work) by resolving scale ambiguity and providing additional information to the EKF. When payload and computational capabilities permit, a 2D laser sensor can easily be incorporated into the SLAM system, producing a local 2.5D map and a footprint estimate of the robot position that improves the 6D pose estimate through the EKF. We present experimental results with two different commercial platforms and validate the system by applying it to their position control.
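The fusion step the abstract builds on is the standard EKF measurement update: each sensor (monocular SLAM pose, IMU, altimeter, laser footprint) contributes an innovation weighted by the Kalman gain. A generic, illustrative sketch — the paper's actual state vector and measurement models differ:

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One generic EKF measurement update.

    x, P : state mean and covariance
    z    : measurement vector
    h    : measurement function, h(x) predicts z from the state
    H    : Jacobian of h at x
    R    : measurement noise covariance
    """
    y = z - h(x)                          # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ y                         # corrected state
    P = (np.eye(len(x)) - K @ H) @ P      # corrected covariance
    return x, P
```

For example, with a toy state [altitude, visual_scale], an altimeter reading updates the first component directly (H = [[1, 0]]); combined with the scaled altitude implied by the monocular SLAM pose, repeated updates of this form are what let the filter resolve the scale ambiguity the abstract mentions.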
Affiliation(s)
- Elena López, Electronics Department, University of Alcalá, Campus Universitario, 28805 Alcalá de Henares, Spain
- Sergio García, Electronics Department, University of Alcalá, Campus Universitario, 28805 Alcalá de Henares, Spain
- Rafael Barea, Electronics Department, University of Alcalá, Campus Universitario, 28805 Alcalá de Henares, Spain
- Luis M Bergasa, Electronics Department, University of Alcalá, Campus Universitario, 28805 Alcalá de Henares, Spain
- Eduardo J Molinos, Electronics Department, University of Alcalá, Campus Universitario, 28805 Alcalá de Henares, Spain
- Roberto Arroyo, Electronics Department, University of Alcalá, Campus Universitario, 28805 Alcalá de Henares, Spain
- Eduardo Romera, Electronics Department, University of Alcalá, Campus Universitario, 28805 Alcalá de Henares, Spain
- Samuel Pardo, Electronics Department, University of Alcalá, Campus Universitario, 28805 Alcalá de Henares, Spain