1
Chen Z, Liao Y, Du H, Zhang H, Xu X, Lu H, Xiong R, Wang Y. DPCN++: Differentiable Phase Correlation Network for Versatile Pose Registration. IEEE Trans Pattern Anal Mach Intell 2023; 45:14366-14384. [PMID: 37729564] [DOI: 10.1109/tpami.2023.3317501]
Abstract
Pose registration is critical in vision and robotics. This article focuses on the challenging task of initialization-free pose registration up to 7DoF for homogeneous and heterogeneous measurements. While recent learning-based methods show promise using differentiable solvers, they either rely on heuristically defined correspondences or require initialization. Phase correlation seeks solutions in the spectral domain and is correspondence-free and initialization-free. Building on this, we propose a differentiable phase-correlation solver combined with simple feature extraction networks, namely DPCN++. It can register homogeneous or heterogeneous inputs and generalizes well to unseen objects. Specifically, the feature extraction networks first learn dense feature grids from a pair of homogeneous/heterogeneous measurements. These feature grids are then transformed into a translation- and scale-invariant spectrum representation based on the Fourier transform and spherical radial aggregation, decoupling translation and scale from rotation. Next, the rotation, scale, and translation are independently and efficiently estimated in the spectrum step by step. The entire pipeline is differentiable and trained end-to-end. We evaluate DPCN++ on a wide range of tasks with different input modalities, including 2D bird's-eye-view images, 3D object and scene measurements, and medical images. Experimental results demonstrate that DPCN++ outperforms both classical and learning-based baselines, especially on partially observed and heterogeneous measurements.
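The correspondence-free step that DPCN++ builds on, classical phase correlation, can be sketched for pure 2D translation. The toy below is plain NumPy on a random test image; it is not the paper's learned pipeline, which adds feature networks and rotation/scale handling. The shift is recovered from the peak of the normalized cross-power spectrum:

```python
import numpy as np

def phase_correlation(a: np.ndarray, b: np.ndarray) -> tuple:
    """Estimate the (row, col) shift of image `b` relative to image `a`
    from the peak of the normalized cross-power spectrum."""
    Fa = np.fft.fft2(a)
    Fb = np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12           # keep phase only
    corr = np.fft.ifft2(cross).real          # correlation surface
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts (wrap-around).
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Shift a random image by (5, -3) with wrap-around and recover the offset.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, -3), axis=(0, 1))
print(phase_correlation(shifted, img))  # -> (5, -3)
```

Because the phase-only spectrum turns a circular shift into a delta peak, no correspondences or initial guess are needed, which is the property the paper's differentiable solver preserves.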
2
Tang TY, De Martini D, Newman P. Point-based metric and topological localisation between lidar and overhead imagery. Auton Robots 2023. [DOI: 10.1007/s10514-023-10085-w]
Abstract
In this paper, we present a method for localising a ground lidar using overhead imagery only. Public overhead imagery, such as Google satellite images, is a readily available resource that can serve as a map proxy for robot localisation, relaxing the requirement of a prior traversal for mapping as in traditional approaches. While prior approaches have focused on metric localisation between range sensors and overhead imagery, our method is the first to learn both place recognition and metric localisation of a ground lidar using overhead imagery, and it also outperforms prior methods on metric localisation with large initial pose offsets. To bridge the drastic domain gap between lidar data and overhead imagery, our method learns to transform an overhead image into a collection of 2D points, emulating the point cloud that a lidar sensor situated near the centre of the overhead image would produce. Once both modalities are expressed as point sets, point-based machine learning methods for localisation are applied.
3
Urban localization based on aerial imagery by correcting projection distortion. Auton Robots 2022. [DOI: 10.1007/s10514-022-10082-5]
4
Hoshi M, Hara Y, Nakamura S. Graph-based SLAM using architectural floor plans without loop closure. Adv Robot 2022. [DOI: 10.1080/01691864.2022.2081513]
Affiliation(s)
- Masahiko Hoshi
- Graduate School of Science and Engineering, Hosei University, Tokyo, Japan
- Yoshitaka Hara
- Future Robotics Technology Center (fuRo), Chiba Institute of Technology, Chiba, Japan
- Sousuke Nakamura
- Faculty of Science and Engineering, Hosei University, Tokyo, Japan
5
Seco T, Lázaro MT, Espelosín J, Montano L, Villarroel JL. Robot Localization in Tunnels: Combining Discrete Features in a Pose Graph Framework. Sensors 2022; 22:1390. [PMID: 35214292] [PMCID: PMC8962997] [DOI: 10.3390/s22041390]
Abstract
Robot localization inside tunnels is a challenging task due to the special conditions of these environments. The GPS-denied nature of these scenarios, coupled with the low visibility, slippery and irregular surfaces, and lack of distinguishable visual and structural features, make traditional robotics methods based on cameras, lasers, or wheel encoders unreliable. Fortunately, tunnels provide other types of valuable information that can be used for localization purposes. On the one hand, radio frequency signal propagation in these types of scenarios shows a predictable periodic structure (periodic fadings) under certain settings, and on the other hand, tunnels present structural characteristics (e.g., galleries, emergency shelters) that must comply with safety regulations. The solution presented in this paper consists of detecting both types of features to be introduced as discrete sources of information in an alternative graph-based localization approach. The results obtained from experiments conducted in a real tunnel demonstrate the validity and suitability of the proposed system for inspection applications.
Affiliation(s)
- Teresa Seco (corresponding author)
- Instituto Tecnológico de Aragón, 50018 Zaragoza, Spain
- María T. Lázaro
- Instituto Tecnológico de Aragón, 50018 Zaragoza, Spain
- Jesús Espelosín
- Instituto Tecnológico de Aragón, 50018 Zaragoza, Spain
- Luis Montano
- Aragón Institute for Engineering Research (I3A), University of Zaragoza, 50009 Zaragoza, Spain
- José L. Villarroel
- Aragón Institute for Engineering Research (I3A), University of Zaragoza, 50009 Zaragoza, Spain
6
Tang TY, De Martini D, Wu S, Newman P. Self-supervised learning for using overhead imagery as maps in outdoor range sensor localization. Int J Rob Res 2021; 40:1488-1509. [PMID: 34992328] [PMCID: PMC8721700] [DOI: 10.1177/02783649211045736]
Abstract
Traditional approaches to outdoor vehicle localization assume a reliable prior map is available, typically built using the same sensor suite as the on-board sensors used during localization. This work makes a different assumption: that an overhead image of the workspace is available and can serve as a map for range-based sensor localization by a vehicle, where the range-based sensors are radars and lidars. Our motivation is simple: off-the-shelf, publicly available overhead imagery such as Google satellite images can be a ubiquitous, cheap, and powerful tool for vehicle localization when a usable prior sensor map is unavailable, inconvenient, or expensive. The challenge is that overhead images are not directly comparable to data from ground range sensors because of their starkly different modalities. We present a learned metric localization method that not only handles the modality difference but is also cheap to train, learning in a self-supervised fashion without requiring metrically accurate ground truth. Evaluating across multiple real-world datasets, we demonstrate the robustness and versatility of our method for various sensor configurations in cross-modality localization, achieving localization errors on par with a prior supervised approach while requiring no pixel-wise aligned ground truth during training. We pay particular attention to the use of millimetre-wave radar, which, owing to its complex interaction with the scene and its immunity to weather and lighting conditions, makes for a compelling and valuable use case.
Affiliation(s)
- Tim Y Tang
- Mobile Robotics Group, University of Oxford, Oxford, UK
- Shangzhe Wu
- Visual Geometry Group, University of Oxford, Oxford, UK
- Paul Newman
- Mobile Robotics Group, University of Oxford, Oxford, UK
7
Terblanche J, Claassens S, Fourie D. Multimodal Navigation-Affordance Matching for SLAM. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3098788]
8
Oelsch M, Karimi M, Steinbach E. R-LOAM: Improving LiDAR Odometry and Mapping With Point-to-Mesh Features of a Known 3D Reference Object. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3060413]
9
Tang TY, De Martini D, Barnes D, Newman P. RSL-Net: Localising in Satellite Images From a Radar on the Ground. IEEE Robot Autom Lett 2020. [DOI: 10.1109/lra.2020.2965907]
10
de Paula Veronese L, Badue C, Auat Cheein F, Guivant J, De Souza AF. A single sensor system for mapping in GNSS-denied environments. Cogn Syst Res 2019. [DOI: 10.1016/j.cogsys.2019.03.018]
11
Abstract
We present a method for matching sketch maps to a corresponding metric map, with the aim of later using the sketch as an intuitive interface for human–robot interactions. While sketch maps are not metrically accurate and omit many details deemed unnecessary, they represent the topology of the environment well and are typically accurate at key locations. Thus, for sketch map interpretation and matching, one cannot rely on metric information alone. Our matching method first finds the most distinguishable, or unique, regions of two maps. The topology of the maps, the positions of the unique regions, and the size of all regions are used to build region descriptors. Finally, a sequential graph matching algorithm uses the region descriptors to find correspondences between regions of the sketch and metric maps. Our method obtained higher accuracy than both a state-of-the-art matching method for inaccurate map matching and our previous work on the subject. The state-of-the-art method was unable to match sketch maps, while our method performed only 10% worse than a human expert.
12
Abstract
Simultaneous Localization And Mapping (SLAM) usually assumes the robot starts without knowledge of the environment. While prior information, such as emergency maps or layout maps, is often available, integration is not trivial since such maps are often out of date and have uncertainty in local scale. Integration of prior map information is further complicated by sensor noise, drift in the measurements, and incorrect scan registrations in the sensor map. We present the Auto-Complete Graph (ACG), a graph-based SLAM method merging elements of sensor and prior maps into one consistent representation. After optimizing the ACG, the sensor map’s errors are corrected thanks to the prior map, while the sensor map corrects the local scale inaccuracies in the prior map. We provide three datasets with associated prior maps: two recorded in campus environments, and one from a fireman training facility. Our method handled up to 40% of noise in odometry, was robust to varying levels of details between the prior and the sensor map, and could correct local scale errors of the prior. In field tests with ACG, users indicated points of interest directly on the prior before exploration. We did not record failures in reaching them.
13
Wang S, Kobayashi Y, Ravankar AA, Ravankar A, Emaru T. A Novel Approach for Lidar-Based Robot Localization in a Scale-Drifted Map Constructed Using Monocular SLAM. Sensors 2019; 19:2230. [PMID: 31091810] [PMCID: PMC6567333] [DOI: 10.3390/s19102230]
Abstract
Scale ambiguity and drift are inherent drawbacks of a purely visual monocular simultaneous localization and mapping (SLAM) system. This poses a crucial challenge for other robots equipped with range sensors that must localize in a map previously built by a monocular camera. In this paper, a metrically inconsistent prior map is built by monocular SLAM and subsequently used for localization on another robot equipped only with a laser range finder (LRF). To tackle the metric inconsistency, this paper proposes a 2D-LRF-based localization algorithm that allows the robot to locate itself and resolve the scale of the local map simultaneously. To align the data from the 2D LRF to the map, 2D structures are extracted from the 3D point cloud map obtained by the visual SLAM process. Next, a modified Monte Carlo localization (MCL) approach is proposed to estimate the robot's state, which comprises both the robot's pose and the map's relative scale. Finally, the effectiveness of the proposed system is demonstrated in experiments on a public benchmark dataset as well as in a real-world scenario. The experimental results indicate that the proposed method is able to globally localize the robot in real time, and that successful localization can be achieved even in a badly drifted map.
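The augmented-state idea, letting each particle carry both pose and map scale, can be sketched in a deliberately simplified 1D toy. The landmark layout, range model, and all numbers below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D world: a monocular map places landmarks at these positions
# in scale-ambiguous map units; the metric world is the map times an
# unknown scale factor (1.25 here).
MAP_LANDMARKS = np.array([2.0, 5.0, 9.0])
TRUE_SCALE = 1.25

def observe(pose):
    """Metric ranges from the robot to every landmark (toy LRF stand-in)."""
    return np.abs(MAP_LANDMARKS * TRUE_SCALE - pose)

# Augmented state per particle: (pose, map scale).
N = 2000
particles = np.column_stack([rng.uniform(0, 15, N), rng.uniform(0.5, 2.0, N)])

pose, step, sigma = 1.0, 1.0, 0.1
for _ in range(8):
    pose += step                                        # robot drives forward
    particles[:, 0] += step + rng.normal(0, 0.05, N)    # motion model + noise
    z = observe(pose)
    # Predicted ranges under each particle's hypothesized pose and scale.
    pred = np.abs(MAP_LANDMARKS[None, :] * particles[:, 1:2] - particles[:, 0:1])
    weights = np.exp(-0.5 * np.sum((pred - z) ** 2, axis=1) / sigma**2)
    weights /= weights.sum()
    idx = rng.choice(N, N, p=weights)                   # resample
    particles = particles[idx] + rng.normal(0, [0.02, 0.01], (N, 2))  # roughen

est_pose, est_scale = particles.mean(axis=0)
print(round(est_pose, 2), round(est_scale, 2))  # near true pose 9.0, scale 1.25
```

As the robot moves, pose hypotheses with the wrong scale become inconsistent with the range measurements and die off, so the filter jointly recovers pose and the map's relative scale.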
Affiliation(s)
- Su Wang
- Division of Human Mechanical Systems and Design, Faculty and Graduate School of Engineering, Hokkaido University, Sapporo 060-8628, Hokkaido, Japan
- Yukinori Kobayashi
- Division of Human Mechanical Systems and Design, Faculty and Graduate School of Engineering, Hokkaido University, Sapporo 060-8628, Hokkaido, Japan
- Ankit A Ravankar
- Division of Human Mechanical Systems and Design, Faculty and Graduate School of Engineering, Hokkaido University, Sapporo 060-8628, Hokkaido, Japan
- Abhijeet Ravankar
- School of Regional Innovation and Social Design Engineering, Faculty of Engineering, Kitami Institute of Technology, Kitami 090-8507, Hokkaido, Japan
- Takanori Emaru
- Division of Human Mechanical Systems and Design, Faculty and Graduate School of Engineering, Hokkaido University, Sapporo 060-8628, Hokkaido, Japan
14
Wen J, Qian C, Tang J, Liu H, Ye W, Fan X. 2D LiDAR SLAM Back-End Optimization with Control Network Constraint for Mobile Mapping. Sensors 2018; 18:3668. [PMID: 30380621] [PMCID: PMC6263705] [DOI: 10.3390/s18113668]
Abstract
Simultaneous localization and mapping (SLAM) has been investigated in the field of robotics for two decades, as it solves positioning and mapping in a single framework. In the SLAM community, Extended Kalman Filter (EKF) SLAM and particle filter SLAM are the most mature technologies, while graph-based SLAM has become the most promising approach after years of development, with much recent progress in accuracy and efficiency. Whichever SLAM method is used, loop closure is vital for overcoming accumulated error. However, in 2D Light Detection and Ranging (LiDAR) SLAM, it is relatively difficult to extract distinctive features from LiDAR scans for loop closure detection, as 2D scans encode much less information than images; moreover, some mapping scenarios contain no loop closures at all. In this paper, instead of loop closure detection, we propose introducing an extra control network constraint (CNC) into the back-end optimization of graph-based SLAM, aligning the LiDAR scan centres with the control vertices of a presurveyed control network to optimize all scan and submap poses. Field tests were carried out in a typical urban outdoor area with weak Global Navigation Satellite System (GNSS) coverage. The position root mean square (RMS) error of selected key points is 0.3614 m, evaluated against a reference map produced by a terrestrial laser scanner (TLS); mapping accuracy is significantly improved compared with the RMS of 1.6462 m obtained without the control network constraint. Adding distance constraints from the control network to the back-end optimization is an effective and practical way to counter the drift accumulated by LiDAR front-end scan matching.
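The effect of control-network constraints in a graph back-end can be illustrated with a deliberately small linear toy: a 1D pose chain with drifting odometry, plus strongly weighted absolute constraints at two presurveyed control vertices. All numbers are hypothetical and this is not the paper's optimizer:

```python
import numpy as np

# A chain of scan poses linked by drifting odometry; two poses are
# additionally tied to presurveyed control points, playing the role of the
# control network constraint (CNC) in the graph back-end.
n = 10
true_x = np.arange(n, dtype=float)                  # ground-truth poses
odom = np.diff(true_x) + 0.1                        # each edge drifts by +0.1
control = {0: 0.0, 9: 9.0}                          # surveyed control vertices

# Dead-reckoned solution: drift accumulates along the chain.
dead_reckoned = np.concatenate([[0.0], np.cumsum(odom)])

# Linear least squares: odometry edges (x_{i+1} - x_i = odom_i) plus
# absolute constraints (x_i = control[i]), the latter weighted strongly.
rows, b, w = [], [], []
for i, d in enumerate(odom):
    r = np.zeros(n); r[i + 1], r[i] = 1.0, -1.0
    rows.append(r); b.append(d); w.append(1.0)
for i, z in control.items():
    r = np.zeros(n); r[i] = 1.0
    rows.append(r); b.append(z); w.append(100.0)    # control points trusted
A = np.array(rows) * np.array(w)[:, None]
x, *_ = np.linalg.lstsq(A, np.array(b) * np.array(w), rcond=None)

print(np.abs(dead_reckoned - true_x).max())  # ~0.9 accumulated drift
print(np.abs(x - true_x).max())              # far smaller after CNC
```

The anchors redistribute the accumulated 0.9 m of drift across the whole chain, which is the intuition behind replacing loop closures with surveyed control vertices.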
Affiliation(s)
- Jingren Wen
- GNSS Research Centre, Wuhan University, 129 Luoyu Road, Wuhan 430079, China
- Chuang Qian
- GNSS Research Centre, Wuhan University, 129 Luoyu Road, Wuhan 430079, China
- Jian Tang
- GNSS Research Centre, Wuhan University, 129 Luoyu Road, Wuhan 430079, China
- Hui Liu
- GNSS Research Centre, Wuhan University, 129 Luoyu Road, Wuhan 430079, China
- Wenfang Ye
- GNSS Research Centre, Wuhan University, 129 Luoyu Road, Wuhan 430079, China
- Xiaoyun Fan
- GNSS Research Centre, Wuhan University, 129 Luoyu Road, Wuhan 430079, China
15
Schuster MJ, Schmid K, Brand C, Beetz M. Distributed stereo vision-based 6D localization and mapping for multi-robot teams. J Field Robot 2018. [DOI: 10.1002/rob.21812]
Affiliation(s)
- Martin J. Schuster
- Department of Perception and Cognition, Robotics and Mechatronics Center (RMC), German Aerospace Center (DLR), Weßling, Germany
- Christoph Brand
- Department of Perception and Cognition, Robotics and Mechatronics Center (RMC), German Aerospace Center (DLR), Weßling, Germany
- Michael Beetz
- Institute for Artificial Intelligence and Center for Computing Technologies (TZI), Faculty of Computer Science, University Bremen, Bremen, Germany
16
Landsiedel C, Wollherr D. Global localization of 3D point clouds in building outline maps of urban outdoor environments. Int J Intell Robot Appl 2017; 1:429-441. [PMID: 29250589] [PMCID: PMC5727157] [DOI: 10.1007/s41315-017-0038-2]
Abstract
This paper presents a method to localize a robot in a global coordinate frame, without any location prior, based on a sparse 2D map containing building outlines and road network information. Its input is a single 3D laser scan of the robot's surroundings. The approach extends the generic chamfer matching template-matching technique from image processing by including visibility analysis in the cost function: the observed building planes are matched to the expected view of the corresponding map section rather than to the entire map, which enables more accurate matching. Since this formulation operates on generic edge maps from visual sensors, it can be expected to generalize to other input data, e.g., from monocular or stereo cameras. The method is evaluated on two large datasets collected in different real-world urban settings and compared to a baseline method from the literature and to the standard chamfer matching approach, where it shows considerable performance benefits, demonstrating the feasibility of global localization based on sparse building outline data.
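Plain chamfer matching, the baseline this paper extends with visibility analysis, can be sketched as scoring template edge points against the distance transform of the map's edge image. The L-shaped outline, grid size, and translation-only search below are illustrative assumptions:

```python
import numpy as np

def distance_transform(edges: np.ndarray) -> np.ndarray:
    """Brute-force Euclidean distance transform of a small binary edge map."""
    ys, xs = np.nonzero(edges)
    pts = np.stack([ys, xs], axis=1)               # (k, 2) edge pixels
    gy, gx = np.indices(edges.shape)
    grid = np.stack([gy, gx], axis=-1)             # (H, W, 2) pixel coords
    d = np.linalg.norm(grid[:, :, None, :] - pts[None, None, :, :], axis=-1)
    return d.min(axis=2)                           # distance to nearest edge

# Map: an L-shaped "building outline"; template: the same corner at origin.
H = W = 20
m = np.zeros((H, W), dtype=bool)
m[5, 5:12] = True      # horizontal wall
m[5:12, 5] = True      # vertical wall
template = np.argwhere(m) - [5, 5]

# Chamfer cost: mean distance from each placed template point to the
# nearest map edge; search translations for the minimum.
dt = distance_transform(m)
best = min(
    ((dt[template[:, 0] + dy, template[:, 1] + dx].mean(), (dy, dx))
     for dy in range(12) for dx in range(12)),
    key=lambda t: t[0],
)
cost, offset = best
print(offset)  # (5, 5): the true placement of the template in the map
```

The paper's contribution replaces the "entire map" distance transform used here with one restricted to the edges actually visible from the candidate pose.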
Affiliation(s)
- Christian Landsiedel
- Chair of Automatic Control Engineering, Technische Universität München, Munich, Germany
- Dirk Wollherr
- Chair of Automatic Control Engineering, Technische Universität München, Munich, Germany
17
Roh H, Jeong J, Kim A. Aerial Image Based Heading Correction for Large Scale SLAM in an Urban Canyon. IEEE Robot Autom Lett 2017. [DOI: 10.1109/lra.2017.2725439]
18
Towards High-Definition 3D Urban Mapping: Road Feature-Based Registration of Mobile Mapping Systems and Aerial Imagery. Remote Sens 2017. [DOI: 10.3390/rs9100975]
19
Boukas E, Gasteratos A, Visentin G. Introducing a globally consistent orbital-based localization system. J Field Robot 2017. [DOI: 10.1002/rob.21739]
Affiliation(s)
- Evangelos Boukas
- Robotics, Vision and Machine Intelligence (RVMI) Lab, Department of Materials and Production, Aalborg University Copenhagen, Denmark
- Antonios Gasteratos
- Laboratory of Robotics and Automation, Engineering School, Democritus University of Thrace, Greece
- Gianfranco Visentin
- Automation and Robotics Section (TEC-MMA), European Space Agency, The Netherlands
20
Kang J, Doh NL. Full-DOF Calibration of a Rotating 2-D LIDAR With a Simple Plane Measurement. IEEE Trans Robot 2016. [DOI: 10.1109/tro.2016.2596769]
21
Lee K, Ryu SH, Yeon S, Cho H, Jun C, Kang J, Choi H, Hyeon J, Baek I, Jung W, Kim H, Doh NL. Accurate Continuous Sweeping Framework in Indoor Spaces With Backpack Sensor System for Applications to 3-D Mapping. IEEE Robot Autom Lett 2016. [DOI: 10.1109/lra.2016.2516585]
22
Majdik AL, Verda D, Albers-Schoenberg Y, Scaramuzza D. Air-ground Matching: Appearance-based GPS-denied Urban Localization of Micro Aerial Vehicles. J Field Robot 2015. [DOI: 10.1002/rob.21585]
Affiliation(s)
- András L. Majdik
- Department of Informatics, University of Zurich, Zurich, Switzerland
- Damiano Verda
- Italian National Council of Research, CNR-IEIIT, Genova, Italy
23
Kümmerle R, Ruhnke M, Steder B, Stachniss C, Burgard W. Autonomous Robot Navigation in Highly Populated Pedestrian Zones. J Field Robot 2014. [DOI: 10.1002/rob.21534]
Affiliation(s)
- Rainer Kümmerle
- Department of Computer Science, University of Freiburg, 79110 Freiburg, Germany
- Michael Ruhnke
- Department of Computer Science, University of Freiburg, 79110 Freiburg, Germany
- Bastian Steder
- Department of Computer Science, University of Freiburg, 79110 Freiburg, Germany
- Cyrill Stachniss
- Department of Computer Science, University of Freiburg, 79110 Freiburg, Germany
- Wolfram Burgard
- Department of Computer Science, University of Freiburg, 79110 Freiburg, Germany
24
IMU and Multiple RGB-D Camera Fusion for Assisting Indoor Stop-and-Go 3D Terrestrial Laser Scanning. Robotics 2014. [DOI: 10.3390/robotics3030247]
25

26
Stop-and-go mode: sensor manipulation as essential as sensor development in terrestrial laser scanning. Sensors 2013; 13:8140-8154. [PMID: 23799493] [PMCID: PMC3758587] [DOI: 10.3390/s130708140]
Abstract
This study illustrates the significance of sensor manipulation in terrestrial laser scanning, a field now developing quickly. That rapid development has been driven mainly by the emergence of sensors with better performance, while the implications of sensor manipulation have not been fully recognized by the community. The stop-and-go mapping mode is one potential way to close this gap. Stop-and-go was first proposed to address the low efficiency of traditional static terrestrial laser scanning, and was later re-emphasized to improve the stability of sample collection for state-of-the-art mobile laser scanning. This work reviews previous efforts to apply the stop-and-go mode to improve the performance of static and mobile terrestrial laser scanning and generalizes their respective principles. It also analyzes the mode's advantages over fully static and fully kinematic terrestrial laser scanning, and suggests more automated measures for raising the efficiency of terrestrial laser scanning. Overall, this review indicates that the stop-and-go mapping mode, as a case with generic relevance, supports the presumption that sensor manipulation is as essential as sensor development.
27
28

29
Abstract
In this paper we describe a method for the automatic self-calibration of a 3D laser sensor. We wish to acquire crisp point clouds and so we adopt a measure of crispness to capture point cloud quality. We then pose the calibration problem as the task of maximizing point cloud quality. Concretely, we use Rényi Quadratic Entropy to measure the degree of organization of a point cloud. By expressing this quantity as a function of key unknown system parameters, we are able to deduce a full calibration of the sensor via an online optimization. Beyond details on the sensor design itself, we fully describe the end-to-end intrinsic parameter calibration process and the estimation of the clock skews between the constituent microprocessors. We analyse performance using real and simulated data and demonstrate robust performance over 30 test sites.
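The crispness measure can be sketched directly: the Rényi Quadratic Entropy of a point cloud modelled as a mixture of isotropic Gaussians has a closed form as the negative log of the mean pairwise Gaussian kernel. The kernel width and the planar test data below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def renyi_quadratic_entropy(points: np.ndarray, sigma: float = 0.05) -> float:
    """Renyi Quadratic Entropy of a point cloud modelled as a mixture of
    isotropic Gaussians: H2 = -log((1/N^2) * sum_ij G(xi - xj; 2*sigma^2 I)).
    Lower values mean a crisper (more organized) cloud."""
    n, d = points.shape
    sq = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    var = 2.0 * sigma**2                       # convolution of two Gaussians
    g = np.exp(-sq / (2.0 * var)) / ((2.0 * np.pi * var) ** (d / 2))
    return float(-np.log(g.mean()))

rng = np.random.default_rng(0)
# Hypothetical miscalibration stand-in: the same planar surface sampled
# crisply versus smeared along the plane normal by calibration error.
base = rng.uniform(0, 1, size=(300, 3)) * [1.0, 1.0, 0.0]     # crisp plane
smeared = base + rng.normal(0, 0.1, size=base.shape) * [0.0, 0.0, 1.0]
print(renyi_quadratic_entropy(base) < renyi_quadratic_entropy(smeared))  # True
```

A calibration search would treat the sensor parameters as unknowns, rebuild the cloud for each candidate, and minimize this entropy, which is the quality-maximization framing the abstract describes.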
Affiliation(s)
- Mark Sheehan
- Oxford University Mobile Robotics Research Group, Oxford, UK
- Paul Newman
- Oxford University Mobile Robotics Research Group, Oxford, UK