1. Francis A, Li S, Griffiths C, Sienz J. Gas source localization and mapping with mobile robots: A review. J Field Robot 2022. DOI: 10.1002/rob.22109
Affiliation(s)
- Adam Francis: Department of Mechanical Engineering, Faculty of Science and Engineering, Swansea University, Swansea, UK
- Shuai Li: Department of Mechanical Engineering, Faculty of Science and Engineering, Swansea University, Swansea, UK
- Christian Griffiths: Department of General Engineering, Faculty of Science and Engineering, Swansea University, Swansea, UK
- Johann Sienz: Department of General Engineering, Faculty of Science and Engineering, Swansea University, Swansea, UK
2. Dinaux R, Wessendorp N, Dupeyroux J, de Croon GCHE. FAITH: Fast Iterative Half-Plane Focus of Expansion Estimation Using Optic Flow. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2021.3100153
3. Elmokadem T, Savkin AV. Towards Fully Autonomous UAVs: A Survey. Sensors 2021; 21:6223. PMID: 34577430; PMCID: PMC8473245; DOI: 10.3390/s21186223
Abstract
Unmanned Aerial Vehicles have undergone rapid development in recent decades. This has made them very popular for various military and civilian applications, allowing us to reach places that were previously inaccessible while saving time and lives. A highly desirable direction when developing unmanned aerial vehicles is towards achieving fully autonomous missions and performing their dedicated tasks with minimum human interaction. Thus, this paper provides a survey of recent developments in the field of unmanned aerial vehicles related to safe autonomous navigation, a critical component of the whole system. A large part of this paper focuses on advanced methods capable of producing three-dimensional avoidance maneuvers and safe trajectories. Research challenges related to unmanned aerial vehicle development are also highlighted.
4.
Abstract
Because of their high maneuverability and fast deployment times, aerial robots have recently gained popularity for automating inspection tasks. In this paper, we address the visual inspection of vessel cargo holds, aiming at safer, more cost-efficient and more intensive visual inspections of ships by means of a multirotor-type platform. To this end, the vehicle is equipped with a sensor suite able to supply the surveyor with imagery from relevant areas, while the control software supports the operator during flight with enhanced functionality and reliable autonomy. All this has been accomplished within the supervised autonomy (SA) paradigm, by means of extensive use of behaviour-based high-level control (including obstacle detection and collision prevention), all specifically devised for visual inspection. The full system has been evaluated both in the laboratory and in real environments, on board two different vessels. Results show that the vehicle is effective for this application, in particular owing to the inspection-oriented capabilities with which it has been fitted.
5. Estimating Tree Diameters from an Autonomous Below-Canopy UAV with Mounted LiDAR. Remote Sensing 2021. DOI: 10.3390/rs13132576
Abstract
Below-canopy UAVs hold promise for automated forest surveys because their sensors can provide detailed information on below-canopy forest structures, especially in dense forests, which may be inaccessible to above-canopy UAVs, aircraft, and satellites. We present an end-to-end autonomous system for estimating tree diameters using a below-canopy UAV in parklands. We used simultaneous localization and mapping (SLAM) and LiDAR data produced at flight time as inputs to diameter-estimation algorithms in post-processing. The SLAM path was used for initial compilation of horizontal LiDAR scans into a 2D cross-sectional map, and then optimization algorithms aligned the scans for each tree within the 2D map to achieve a precision suitable for diameter measurement. The algorithms successfully identified 12 objects, 11 of which were trees and one a lamppost. For these, the estimated diameters from the autonomous survey were highly correlated with manual ground-truthed diameters (R2=0.92, root mean squared error = 30.6%, bias = 18.4%). Autonomous measurement was most effective for larger trees (>300 mm diameter) within 10 m of the UAV flight path, for medium trees (200–300 mm diameter) within 5 m, and for trees with regular cross sections. We conclude that fully automated below-canopy forest surveys are a promising, but still nascent, technology and suggest directions for future research.
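Estimating a trunk diameter from aligned 2D LiDAR returns is commonly posed as a least-squares circle fit; the paper's own estimator is not reproduced here, but a minimal sketch of the algebraic (Kåsa) fit, with all names hypothetical, illustrates the idea:

```python
import math

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit to 2D points.

    Writes the circle as x^2 + y^2 + A*x + B*y + C = 0 and solves the
    3x3 normal equations for (A, B, C) by Cramer's rule.
    """
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); syy = sum(y * y for _, y in points)
    sxy = sum(x * y for x, y in points)
    sxz = sum(x * (x * x + y * y) for x, y in points)
    syz = sum(y * (x * x + y * y) for x, y in points)
    sz = sum(x * x + y * y for x, y in points)
    M = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs = [-sxz, -syz, -sz]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(M)
    sol = []
    for i in range(3):               # Cramer's rule, one column at a time
        mi = [row[:] for row in M]
        for r in range(3):
            mi[r][i] = rhs[r]
        sol.append(det3(mi) / d)
    A, B, C = sol
    cx, cy = -A / 2.0, -B / 2.0
    r = math.sqrt(cx * cx + cy * cy - C)
    return cx, cy, r

# Noise-free returns from a partial arc of a 0.4 m diameter trunk at (2, 3)
pts = [(2 + 0.2 * math.cos(t), 3 + 0.2 * math.sin(t))
       for t in [0.1 * k for k in range(20)]]
cx, cy, r = fit_circle(pts)
```

With noisy partial arcs the Kåsa fit is known to bias the radius, which is one reason geometric refinements are often applied on top of it in practice.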
6. Cooperative Localization Using Distance Measurements for Mobile Nodes. Sensors 2021; 21:1507. PMID: 33671554; PMCID: PMC7926533; DOI: 10.3390/s21041507
Abstract
This paper considers the two-dimensional (2D) anchorless localization problem for sensor networks in global positioning system (GPS)-denied environments. We present an efficient method, based on the multidimensional scaling (MDS) algorithm, to estimate the positions of the nodes in the network from measurements of the inter-node distances. The proposed method takes advantage of the mobility of the nodes to address the location ambiguity problem, i.e., the rotation and flip ambiguity that arises in the anchorless MDS algorithm. Knowledge of the displacement of the moving node is used to produce an analytical solution for the noise-free case. Subsequently, a least-squares estimator is presented for the noisy scenario, and the associated closed-form solution is derived. Simulations show that the proposed algorithm accurately and efficiently estimates the locations of the nodes, outperforming alternative methods.
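As background to this entry: classical MDS recovers node coordinates from inter-node distances only up to a rigid transform and flip, which is exactly the ambiguity the paper resolves using node mobility. A minimal sketch (not the authors' estimator; `classical_mds` is a hypothetical helper name):

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical MDS: recover coordinates (up to rotation, translation,
    and flip) from a matrix D of pairwise Euclidean distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]          # keep the top `dim` components
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# True 2D node positions; MDS output matches them up to a rigid transform + flip
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Y = classical_mds(D)
# Pairwise distances are preserved even though absolute positions are ambiguous
D_rec = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
```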
7. Autonomous quadrotor collision avoidance and destination seeking in a GPS-denied environment. Auton Robots 2020. DOI: 10.1007/s10514-020-09949-2
8. LiDAR-Based GNSS Denied Localization for Autonomous Racing Cars. Sensors 2020; 20:3992. PMID: 32709102; PMCID: PMC7411595; DOI: 10.3390/s20143992
Abstract
Self-driving vehicles promise to bring one of the greatest technological and social revolutions of the next decade, given their potential to drastically change human mobility and goods transportation, particularly regarding efficiency and safety. Autonomous racing poses very similar technological challenges while allowing for more extreme conditions in a safe environment for humans. While the software stack driving the racing car consists of several modules, in this paper we focus on the localization problem, which outputs the estimated pose of the vehicle needed by the planning and control modules. When driving near the friction limits, localization accuracy is critical, as small errors can induce large control errors due to the nonlinearities of the vehicle's dynamic model. In this paper, we present a localization architecture for a racing car that does not rely on Global Navigation Satellite Systems (GNSS). It consists of two multi-rate Extended Kalman Filters and an extension of a state-of-the-art laser-based Monte Carlo localization approach that exploits some a priori knowledge of the environment and context. We first compare the proposed method with a solution based on a widely employed state-of-the-art implementation, outlining its strengths and limitations within our experimental scenario. The architecture is then tested both in simulation and experimentally on a full-scale autonomous electric racing car during an event of Roborace Season Alpha. The results show its robustness in avoiding the robot-kidnapping problem typical of particle-filter localization methods, while providing a smooth and high-rate pose estimate. The pose error distribution depends on the car's velocity, and spans on average from 0.1 m (at 60 km/h) to 1.48 m (at 200 km/h) laterally and from 1.9 m (at 100 km/h) to 4.92 m (at 200 km/h) longitudinally.
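The Monte Carlo localization component mentioned in this entry builds on the standard particle-filter cycle of predict, weight, and resample. The following 1D toy sketch (not the authors' architecture; all names, seeds, and noise values are illustrative) shows one such cycle with a single range measurement:

```python
import math
import random

def mcl_step(particles, motion, beacon, z, sigma):
    """One predict-weight-resample cycle of Monte Carlo localization (1D toy).

    particles : list of candidate positions
    motion    : commanded displacement
    beacon    : known landmark position
    z         : measured distance to the beacon
    sigma     : measurement noise standard deviation
    """
    rng = random.Random(0)                      # fixed seed for reproducibility
    # Predict: apply the motion with a little process noise
    pred = [p + motion + rng.gauss(0.0, 0.05) for p in particles]
    # Weight: Gaussian likelihood of the range measurement
    w = [math.exp(-0.5 * ((abs(beacon - p) - z) / sigma) ** 2) for p in pred]
    total = sum(w)
    w = [wi / total for wi in w]
    # Resample: low-variance (systematic) resampling
    out, c, i = [], w[0], 0
    step = 1.0 / len(pred)
    u = rng.uniform(0.0, step)
    for _ in range(len(pred)):
        while u > c:
            i += 1
            c += w[i]
        out.append(pred[i])
        u += step
    return out

# Particles spread widely; one range to a beacon at x=10 (measurement z=6)
# concentrates the cloud near the true position x=4
parts = [float(x) for x in range(0, 9)]
parts = mcl_step(parts, motion=0.0, beacon=10.0, z=6.0, sigma=0.5)
est = sum(parts) / len(parts)
```

The kidnapping robustness discussed in the abstract comes from mechanisms layered on top of this basic cycle, not from the cycle itself.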
9. Tian Y, Liu K, Ok K, Tran L, Allen D, Roy N, How JP. Search and rescue under the forest canopy using multiple UAVs. Int J Rob Res 2020. DOI: 10.1177/0278364920929398
Abstract
We present a multi-robot system for GPS-denied search and rescue under the forest canopy. Forests are particularly challenging environments for collaborative exploration and mapping, in large part due to the existence of severe perceptual aliasing which hinders reliable loop closure detection for mutual localization and map fusion. Our proposed system features unmanned aerial vehicles (UAVs) that perform onboard sensing, estimation, and planning. When communication is available, each UAV transmits compressed tree-based submaps to a central ground station for collaborative simultaneous localization and mapping (CSLAM). To overcome high measurement noise and perceptual aliasing, we use the local configuration of a group of trees as a distinctive feature for robust loop closure detection. Furthermore, we propose a novel procedure based on cycle consistent multiway matching to recover from incorrect pairwise data associations. The returned global data association is guaranteed to be cycle consistent, and is shown to improve both precision and recall compared with the input pairwise associations. The proposed multi-UAV system is validated both in simulation and during real-world collaborative exploration missions at NASA Langley Research Center.
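The cycle-consistency idea can be illustrated independently of the paper's multiway-matching machinery: composing pairwise landmark associations around a cycle of robots must map every landmark back to itself. A toy check (the data layout and function name are hypothetical) might look like:

```python
def is_cycle_consistent(pairwise, cycle):
    """Check that composing pairwise associations around a cycle maps every
    landmark back to itself.

    pairwise : dict keyed by (i, j) giving a dict that maps landmark ids of
               robot i to landmark ids of robot j
    cycle    : sequence of robots, e.g. ('A', 'B', 'C') meaning A->B->C->A
    """
    start = cycle[0]
    for lid in pairwise[(start, cycle[1])]:
        cur = lid
        for a, b in zip(cycle, list(cycle[1:]) + [start]):
            m = pairwise[(a, b)]
            if cur not in m:
                return False                  # association chain breaks
            cur = m[cur]
        if cur != lid:
            return False                      # loop closes on the wrong landmark
    return True

# Consistent: tree 1 on A matches tree 7 on B, which matches tree 3 on C,
# which matches tree 1 back on A
good = {('A', 'B'): {1: 7}, ('B', 'C'): {7: 3}, ('C', 'A'): {3: 1}}
# Inconsistent: the C->A match closes the loop onto the wrong tree
bad = {('A', 'B'): {1: 7}, ('B', 'C'): {7: 3}, ('C', 'A'): {3: 2}}
```

The paper goes further than this check: it repairs inconsistent associations rather than merely detecting them.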
Affiliation(s)
- Yulun Tian: Massachusetts Institute of Technology, Cambridge, MA, USA
- Katherine Liu: Massachusetts Institute of Technology, Cambridge, MA, USA
- Kyel Ok: Massachusetts Institute of Technology, Cambridge, MA, USA
- Loc Tran: NASA Langley Research Center, Hampton, VA, USA
- Nicholas Roy: Massachusetts Institute of Technology, Cambridge, MA, USA
10. Monocular Visual SLAM Based on a Cooperative UAV-Target System. Sensors 2020; 20:3531. PMID: 32580347; PMCID: PMC7378774; DOI: 10.3390/s20123531
Abstract
To achieve autonomy in applications that involve Unmanned Aerial Vehicles (UAVs), the capacity for self-localization and perception of the operational environment is a fundamental requirement. To this effect, GPS represents the typical solution for determining the position of a UAV operating in outdoor and open environments. On the other hand, GPS is not a reliable solution for other kinds of environments, such as cluttered and indoor ones. In such scenarios, a good alternative is represented by monocular SLAM (Simultaneous Localization and Mapping) methods. A monocular SLAM system allows a UAV to operate in an a priori unknown environment using an onboard camera to build a map of its surroundings while simultaneously localizing itself with respect to that map. Given the problem of an aerial robot that must follow a free-moving cooperative target in a GPS-denied environment, this work presents a monocular-based SLAM approach for cooperative UAV-target systems that addresses the state estimation problem of (i) the UAV position and velocity, (ii) the target position and velocity, and (iii) the landmark positions (the map). The proposed monocular SLAM system incorporates altitude measurements obtained from an altimeter, and an observability analysis is carried out to show that the observability properties of the system are improved by doing so. Furthermore, a novel technique to estimate the approximate depth of new visual landmarks is proposed, which takes advantage of the cooperative target. Additionally, a control system is proposed for maintaining a stable flight formation of the UAV with respect to the target; the stability of the control laws is proved using Lyapunov theory. The experimental results obtained from real data, as well as the results obtained from computer simulations, show that the proposed scheme can provide good performance.
11. Cooperative Visual-SLAM System for UAV-Based Target Tracking in GPS-Denied Environments: A Target-Centric Approach. Electronics 2020. DOI: 10.3390/electronics9050813
Abstract
Autonomous tracking of dynamic targets by Unmanned Aerial Vehicles (UAVs) is a challenging problem with practical applications in many scenarios. In this context, a fundamental aspect that must be addressed is the position estimation of the aerial robots and the target in order to control the flight formation. For non-cooperative targets, the target position must be estimated using onboard sensors. Moreover, for estimating the position of the UAVs, global position information may not always be available (GPS-denied environments). This work presents a cooperative visual-based SLAM (Simultaneous Localization and Mapping) system that allows a team of aerial robots to autonomously follow a non-cooperative target moving freely in a GPS-denied environment. One contribution of this work is to propose and investigate a target-centric SLAM configuration to solve the estimation problem, which differs from the well-known world-centric and robot-centric SLAM configurations. The proposed approach is supported by theoretical results obtained from an extensive nonlinear observability analysis. Additionally, a control system is proposed for maintaining a stable UAV flight formation with respect to the target; the stability of the control laws is proved using Lyapunov theory. In an extensive set of computer simulations, the proposed system demonstrated the potential to outperform other related approaches.
12.
13. Martz J, Al-Sabban W, Smith RN. Survey of unmanned subterranean exploration, navigation, and localisation. IET Cyber-Systems and Robotics 2020. DOI: 10.1049/iet-csr.2019.0043
Affiliation(s)
- Jeffrey Martz: Physics and Engineering, Fort Lewis College, Durango, CO, USA
- Wesam Al-Sabban: Computer and Information Systems, Umm Al-Qura University, Makkah, Saudi Arabia
- Ryan N. Smith: Physics and Engineering, Fort Lewis College, Durango, CO, USA
14. Coppola M, McGuire KN, De Wagter C, de Croon GCHE. A Survey on Swarming With Micro Air Vehicles: Fundamental Challenges and Constraints. Front Robot AI 2020; 7:18. PMID: 33501187; PMCID: PMC7806031; DOI: 10.3389/frobt.2020.00018
Abstract
This work presents a review and discussion of the challenges that must be solved in order to successfully develop swarms of Micro Air Vehicles (MAVs) for real-world operations. From the discussion, we extract constraints and links that relate the local-level MAV capabilities to the global operations of the swarm. These should be taken into account when designing swarm behaviors in order to maximize the utility of the group. At the lowest level, each MAV should operate safely. Robustness is often hailed as a pillar of swarm robotics, and a minimum level of local reliability is needed for it to propagate to the global level. An MAV must be capable of autonomous navigation within an environment with sufficient trustworthiness before the system can be scaled up. Once the operations of the single MAV are sufficiently secured for a task, the subsequent challenge is to allow the MAVs to sense one another within a neighborhood of interest. Relative localization of neighbors is a fundamental part of self-organizing robotic systems, enabling behaviors ranging from basic relative collision avoidance to higher-level coordination. This ability, at times taken for granted, must also be sufficiently reliable. Moreover, herein lies a constraint: the design choice of the relative localization sensor has a direct link to the behaviors that the swarm can (and should) perform. Vision-based systems, for instance, force MAVs to fly within the field of view of their camera. Range- or communication-based solutions, alternatively, provide omni-directional relative localization, yet can fall victim to unobservable conditions under certain flight behaviors, such as parallel flight, and require constant relative excitation. At the swarm level, the final outcome is thus intrinsically influenced by the on-board abilities and sensors of the individual.
The real-world behavior and operations of an MAV swarm intrinsically follow in a bottom-up fashion as a result of the local level limitations in cognition, relative knowledge, communication, power, and safety. Taking these local limitations into account when designing a global swarm behavior is key in order to take full advantage of the system, enabling local limitations to become true strengths of the swarm.
Affiliation(s)
- Mario Coppola: Micro Air Vehicle Laboratory (MAVLab), Department of Control and Simulation, Faculty of Aerospace Engineering, Delft University of Technology, Delft, Netherlands; Department of Space Systems Engineering, Faculty of Aerospace Engineering, Delft University of Technology, Delft, Netherlands
- Kimberly N. McGuire: Micro Air Vehicle Laboratory (MAVLab), Department of Control and Simulation, Faculty of Aerospace Engineering, Delft University of Technology, Delft, Netherlands
- Christophe De Wagter: Micro Air Vehicle Laboratory (MAVLab), Department of Control and Simulation, Faculty of Aerospace Engineering, Delft University of Technology, Delft, Netherlands
- Guido C. H. E. de Croon: Micro Air Vehicle Laboratory (MAVLab), Department of Control and Simulation, Faculty of Aerospace Engineering, Delft University of Technology, Delft, Netherlands
15. GP-SLAM: laser-based SLAM approach based on regionalized Gaussian process map reconstruction. Auton Robots 2020. DOI: 10.1007/s10514-020-09906-z
16. Zeng F, Jacobson A, Smith D, Boswell N, Peynot T, Milford M. TIMTAM: Tunnel-Image Texturally Accorded Mosaic for Location Refinement of Underground Vehicles With a Single Camera. IEEE Robot Autom Lett 2019. DOI: 10.1109/lra.2019.2932579
17. Sarkar M, Homaifar A, Erol BA, Behniapoor M, Tunstel E. PIE: a Tool for Data-Driven Autonomous UAV Flight Testing. J Intell Robot Syst 2019. DOI: 10.1007/s10846-019-01078-y
18. Pestana J, Maurer M, Muschick D, Hofer M, Fraundorfer F. Overview obstacle maps for obstacle-aware navigation of autonomous drones. J Field Robot 2019; 36:734-762. PMID: 31656453; PMCID: PMC6777497; DOI: 10.1002/rob.21863
Abstract
Achieving the autonomous deployment of aerial robots in unknown outdoor environments using only onboard computation is a challenging task. In this study, we have developed a solution to demonstrate the feasibility of autonomously deploying drones in unknown outdoor environments, with the main capability of providing an obstacle map of the area of interest in a short period of time. We focus on use cases where no obstacle maps are available beforehand, for instance, in search and rescue scenarios, and on increasing the autonomy of drones in such situations. Our vision-based mapping approach consists of two separate steps. First, the drone performs an overview flight at a safe altitude, acquiring overlapping nadir images while creating a high-quality sparse map of the environment with a state-of-the-art photogrammetry method. Second, this map is georeferenced, densified by fitting a mesh model, and converted into an Octomap obstacle map, which can be continuously updated while performing a task of interest near the ground or in the vicinity of objects. The generation of the overview obstacle map is performed in almost real time on the onboard computer of the drone; a map of size 100 m × 75 m is created in ≈2.75 min, leaving enough time for the drone to execute other tasks inside the area of interest during the same flight. We quantitatively evaluate the accuracy of the acquired map and the characteristics of the planned trajectories, and further demonstrate experimentally the safe navigation of the drone in an area mapped with our proposed approach.
Affiliation(s)
- Jesús Pestana: Institute for Computer Graphics and Vision (ICG), Graz University of Technology (TU Graz), Graz, Austria
- Michael Maurer: Institute for Computer Graphics and Vision (ICG), Graz University of Technology (TU Graz), Graz, Austria
- Manuel Hofer: Institute for Computer Graphics and Vision (ICG), Graz University of Technology (TU Graz), Graz, Austria
- Friedrich Fraundorfer: Institute for Computer Graphics and Vision (ICG), Graz University of Technology (TU Graz), Graz, Austria
19. An Orthogonal Weighted Occupancy Likelihood Map with IMU-Aided Laser Scan Matching for 2D Indoor Mapping. Sensors 2019; 19:1742. PMID: 30979020; PMCID: PMC6479394; DOI: 10.3390/s19071742
Abstract
An indoor map is a piece of infrastructure associated with location-based services, and Simultaneous Localization and Mapping (SLAM)-based mobile mapping is an efficient method for constructing one. This paper proposes a SLAM algorithm based on a laser scanner and an Inertial Measurement Unit (IMU) for 2D indoor mapping. A grid-based occupancy likelihood map, built from all previous scans, is chosen as the map representation. Scan-to-map matching is utilized to find the optimal rigid-body transformation and thus avoid the accumulation of matching errors, and map generation and update are probabilistically motivated. Based on the assumption that orthogonality is the dominant feature of indoor environments, we propose a lightweight segment extraction method derived from the orthogonal blurred segments (OBS) method. Instead of calculating the parameters of segments, we give the scan points contained in blurred segments a greater weight during the construction of the grid-based occupancy likelihood map, which we call the orthogonal feature weighted occupancy likelihood map (OWOLM). The OWOLM enhances the occupancy likelihood map by fusing orthogonal features and can filter out noisy scan points produced by objects such as glass cabinets and bookcases. Experiments were carried out in a library, a representative indoor environment consisting of orthogonal features. The experimental results show that, compared with the general occupancy likelihood map, the OWOLM effectively reduces accumulated errors and constructs a clearer indoor map.
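The scan-to-map matching step in this entry scores candidate rigid-body transforms against the occupancy likelihood map. A heavily simplified sketch (integer grid, no orthogonal weighting; all names and values are illustrative, not the paper's implementation) of such a scoring function:

```python
import math

def scan_score(grid, scan, pose):
    """Score a candidate pose by summing occupancy-likelihood values at the
    grid cells hit by the transformed scan endpoints.

    grid : dict mapping (ix, iy) cells to likelihood weights
    scan : list of (x, y) endpoints in the sensor frame
    pose : (tx, ty, theta) candidate rigid-body transform
    """
    tx, ty, th = pose
    c, s = math.cos(th), math.sin(th)
    score = 0.0
    for x, y in scan:
        # Rotate then translate the endpoint into the map frame
        wx, wy = tx + c * x - s * y, ty + s * x + c * y
        score += grid.get((round(wx), round(wy)), 0.0)
    return score

# Toy map: a wall of high-likelihood cells along y = 5
grid = {(x, 5): 1.0 for x in range(10)}
scan = [(float(x), 5.0) for x in range(5)]       # a scan of that wall
poses = [(0.0, dy, 0.0) for dy in (-1.0, 0.0, 1.0)]
best = max(poses, key=lambda p: scan_score(grid, scan, p))
```

In a real matcher the grid stores smoothed likelihoods and the search over poses is gradient-based or multi-resolution rather than exhaustive.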
20. Position Estimation Based on Grid Cells and Self-Growing Self-Organizing Map. Comput Intell Neurosci 2019; 2019:3606397. PMID: 30936912; PMCID: PMC6413409; DOI: 10.1155/2019/3606397
Abstract
As the basis of animals' natal homing behavior, path integration can continuously provide current position information relative to an initial position. Some neurons in freely moving animals' brains encode current position and the surrounding environment through distinctive firing patterns. Studies show that neurons such as grid cells (GCs) in the hippocampus of animals' brains are related to path integration, and that they might encode the coordinates of an animal's current position in the same way as a residue number system (RNS), which is based on the Chinese remainder theorem (CRT). Hence, to provide vehicles with a bionic position estimation method, we propose a model that decodes the GCs' encoded information based on an improved version of the traditional self-organizing map (SOM), making full use of the GCs' firing characteristics. The details of the model are discussed in this paper. The model is realized by computer simulation, and its performance is analyzed under different conditions. Simulation results indicate that the proposed position estimation model is effective and stable.
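The residue-number-system analogy in this entry can be made concrete with the Chinese remainder theorem: several periodic "grid modules" with pairwise-coprime spacings jointly encode a position uniquely over the product of their periods. A minimal decoder (illustrative only; the paper's model is SOM-based, not an explicit CRT computation):

```python
def crt_decode(residues, moduli):
    """Recover a position from its residues under pairwise-coprime moduli
    (Chinese remainder theorem), as in a residue number system."""
    from math import prod
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m): modular inverse of Mi
    return x % M

# Grid-cell-like encoding: three "modules" with spacings 3, 4, 5 uniquely
# encode any integer position in [0, 60)
pos = 47
residues = [pos % m for m in (3, 4, 5)]
decoded = crt_decode(residues, (3, 4, 5))
```

The three-argument `pow` with a negative exponent requires Python 3.8+.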
21. Tzoumanikas D, Li W, Grimm M, Zhang K, Kovac M, Leutenegger S. Fully autonomous micro air vehicle flight and landing on a moving target using visual-inertial estimation and model-predictive control. J Field Robot 2018. DOI: 10.1002/rob.21821
Affiliation(s)
- Dimos Tzoumanikas: Department of Computing, Imperial College London, London, United Kingdom
- Wenbin Li: Department of Computing, Imperial College London, London, United Kingdom
- Marius Grimm: Department of Computing, Imperial College London, London, United Kingdom; Department of Mechanical Engineering, Eidgenössische Technische Hochschule Zurich, Zurich, Switzerland
- Ketao Zhang: Department of Aeronautics, Imperial College London, London, United Kingdom
- Mirko Kovac: Department of Aeronautics, Imperial College London, London, United Kingdom
22.
Affiliation(s)
- S. Suzuki: Department of Mechanical Engineering and Robotics, Shinshu University, Ueda-shi, Nagano, Japan
23. An efficient RANSAC hypothesis evaluation using sufficient statistics for RGB-D pose estimation. Auton Robots 2018. DOI: 10.1007/s10514-018-9801-y
24. McLoughlin BJ, Pointon HAG, McLoughlin JP, Shaw A, Bezombes FA. Uncertainty Characterisation of Mobile Robot Localisation Techniques using Optical Surveying Grade Instruments. Sensors 2018; 18:2274. PMID: 30011874; PMCID: PMC6068590; DOI: 10.3390/s18072274
Abstract
Recent developments in localisation systems for autonomous robotic technology have been a driving factor in the deployment of robots in a wide variety of environments. Estimating sensor measurement noise is an essential step in producing uncertainty models for state-of-the-art robotic positioning systems. In this paper, a surveying-grade optical instrument, a Trimble S7 Robotic Total Station, is utilised to dynamically characterise the error of the positioning sensors of a ground-based unmanned robot. The error characteristics are used as inputs to the construction of a localisation Extended Kalman Filter, which fuses Pozyx ultra-wideband range measurements with odometry to obtain an optimal position estimate, all whilst using the path generated by the remote tracking feature of the Robotic Total Station as a ground-truth metric. Experiments show that the proposed method yields an improved position estimate compared to the Pozyx system's native firmware algorithm, as well as producing a smoother trajectory.
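The EKF described in this entry fuses odometry with ultra-wideband ranges. The paper's filter design is not reproduced here; the sketch below shows the generic structure under assumed values (identity motion model, noise-free ranges to two hypothetical anchors, made-up covariances):

```python
import numpy as np

def ekf_range_update(x, P, z, anchor, R):
    """EKF measurement update with a single UWB range to a known anchor."""
    d = x - anchor
    h = float(np.linalg.norm(d))            # predicted range h(x)
    H = (d / h).reshape(1, 2)               # Jacobian of h w.r.t. position
    S = (H @ P @ H.T).item() + R            # innovation variance (scalar)
    K = (P @ H.T / S).ravel()               # Kalman gain
    x = x + K * (z - h)
    P = (np.eye(2) - np.outer(K, H)) @ P
    return x, P

# Planar robot with biased odometry, corrected by ranges to two fixed anchors
anchors = [np.array([0.0, 10.0]), np.array([10.0, 0.0])]
x, P = np.array([0.0, 0.0]), np.eye(2)
Q, R = 0.02 * np.eye(2), 0.01
truth = np.array([0.0, 0.0])
for _ in range(10):
    truth = truth + np.array([0.5, 0.2])    # true motion
    u = np.array([0.55, 0.15])              # odometry with a constant bias
    x, P = x + u, P + Q                     # predict (identity motion Jacobian)
    for a in anchors:
        z = float(np.linalg.norm(truth - a))   # noise-free range, for the sketch
        x, P = ekf_range_update(x, P, z, a, R)
err = float(np.linalg.norm(x - truth))
```

With two well-placed anchors the position is fully observable, so the ranges keep the biased dead-reckoning estimate from drifting.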
Affiliation(s)
- Benjamin J McLoughlin: Engineering and Technology Research Institute, Liverpool John Moores University, 3 Byrom St, Liverpool L3 3AF, UK
- Harry A G Pointon: Engineering and Technology Research Institute, Liverpool John Moores University, 3 Byrom St, Liverpool L3 3AF, UK
- John P McLoughlin: Engineering and Technology Research Institute, Liverpool John Moores University, 3 Byrom St, Liverpool L3 3AF, UK
- Andy Shaw: Engineering and Technology Research Institute, Liverpool John Moores University, 3 Byrom St, Liverpool L3 3AF, UK
- Frederic A Bezombes: Engineering and Technology Research Institute, Liverpool John Moores University, 3 Byrom St, Liverpool L3 3AF, UK
25. Data Fusion Architectures for Orthogonal Redundant Inertial Measurement Units. Sensors 2018; 18:1910. PMID: 29895775; PMCID: PMC6022023; DOI: 10.3390/s18061910
Abstract
This work examines the exploitation of large numbers of orthogonal redundant inertial measurement units. Specifically, the paper analyses centralized and distributed architectures in the context of data fusion algorithms for those sensors. For both architectures, data fusion algorithms based on the Kalman filter are developed. Some of those algorithms account for sensor location, whereas the others do not, but all estimate the sensor biases. A fault detection algorithm, based on residual analysis, is also proposed. Monte Carlo simulations show better performance for the centralized architecture with an algorithm that accounts for sensor location. Owing to a better estimation of the sensor biases, the latter provides the most precise and accurate estimates and the best fault detection; however, it requires a much longer computational time. An analysis of the sensor bias correlation is also presented. Based on the simulations, the bias correlation has a small effect on the attitude rate estimation, but a very significant one on the acceleration estimation.
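Residual-based fault detection over redundant sensors, as used in this entry, can be illustrated in miniature: fuse the redundant readings robustly and flag any sensor whose residual against the fused value exceeds a gate. A sketch with hypothetical numbers (not the paper's Kalman-filter residual test):

```python
def detect_faults(readings, sigma, gate=3.0):
    """Residual-based fault detection for redundant sensors measuring the
    same quantity: flag any sensor whose residual against the median-fused
    estimate exceeds `gate` standard deviations."""
    s = sorted(readings)
    n = len(s)
    # Median fusion is robust to a single faulty outlier
    fused = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    flags = [abs(r - fused) > gate * sigma for r in readings]
    return fused, flags

# Four redundant gyros (rad/s), one with a large bias fault
readings = [0.101, 0.099, 0.100, 0.240]
fused, flags = detect_faults(readings, sigma=0.005)
```

The paper's scheme tests Kalman-filter innovations rather than raw readings, which is what lets it detect faults while simultaneously estimating the biases.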
Collapse
|
26
|
Trujillo JC, Munguia R, Guerra E, Grau A. Cooperative Monocular-Based SLAM for Multi-UAV Systems in GPS-Denied Environments. SENSORS 2018; 18:s18051351. [PMID: 29701722 PMCID: PMC5981868 DOI: 10.3390/s18051351] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/21/2018] [Revised: 04/03/2018] [Accepted: 04/19/2018] [Indexed: 11/16/2022]
Abstract
This work presents a cooperative monocular-based SLAM approach for multi-UAV systems that can operate in GPS-denied environments. The main contribution of the work is to show that, using visual information obtained from monocular cameras mounted onboard aerial vehicles flying in formation, the observability properties of the whole system are improved. This improvement is especially notable when compared with other related visual SLAM configurations. To improve the observability properties, measurements of the relative distance between the UAVs are included in the system; these relative distances are also obtained from visual information. The proposed approach is theoretically validated by means of a nonlinear observability analysis, and further validated with an extensive set of computer simulations. The numerical simulation results show that the proposed system is able to provide good position and orientation estimates of the aerial vehicles flying in formation.
Collapse
Affiliation(s)
- Juan-Carlos Trujillo
- Department of Computer Science, CUCEI, University of Guadalajara, Guadalajara 44430, Mexico.
| | - Rodrigo Munguia
- Department of Computer Science, CUCEI, University of Guadalajara, Guadalajara 44430, Mexico.
| | - Edmundo Guerra
- Department of Automatic Control, Technical University of Catalonia UPC, 08034 Barcelona, Spain.
| | - Antoni Grau
- Department of Automatic Control, Technical University of Catalonia UPC, 08034 Barcelona, Spain.
| |
Collapse
|
27
|
Affiliation(s)
- B. Deniz Ilhan
- Electrical & Systems Engineering University of Pennsylvania Philadelphia Pennsylvania 19104
| | - Aaron M. Johnson
- Mechanical Engineering Carnegie Mellon University Pittsburgh Pennsylvania 15213
| | - D. E. Koditschek
- Electrical & Systems Engineering University of Pennsylvania Philadelphia Pennsylvania 19104
| |
Collapse
|
28
|
Zhang Y, Liang W, He H, Tan J. Wearable Heading Estimation for Motion Tracking in Health Care by Adaptive Fusion of Visual-Inertial Measurements. IEEE J Biomed Health Inform 2018; 22:1732-1743. [PMID: 29994357 DOI: 10.1109/jbhi.2018.2795006] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/28/2024]
Abstract
The increasing demand for health informatics has become a far-reaching trend in the ageing society. Wearable sensors enable monitoring of senior people's daily activities in free-living environments, conveniently and effectively. Among the primary health-care sensing categories, the wearable visual-inertial modality for human motion tracking shows promising potential. In this paper, we present a novel wearable heading estimation strategy to track the movements of human limbs. It adaptively fuses inertial measurements with visual features subject to locality constraints. Body movements are classified into two types: general motion (both rotation and translation) or degenerate motion (rotation only). A specific number of feature correspondences between camera frames are adaptively chosen to satisfy both a feature descriptor similarity constraint and a locality constraint. The selected feature correspondences and inertial quaternions are employed to calculate the initial pose, followed by a coarse-to-fine procedure that iteratively removes visual outliers. Eventually, the final heading is optimized using the correct feature matches. The proposed method has been thoroughly evaluated in straight-line, rotatory and ambulatory movement scenarios. As the system is lightweight and requires few computational resources, it enables effective and unobtrusive human motion monitoring, especially for senior citizens in long-term rehabilitation.
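The joint similarity-and-locality selection of feature correspondences can be sketched as below; `select_matches`, the ratio threshold, and the displacement radius are hypothetical names and values chosen for illustration, not the paper's:

```python
import numpy as np

def select_matches(kp1, kp2, des1, des2, ratio=0.8, max_disp=40.0):
    """Pick feature correspondences that satisfy both a descriptor-similarity
    (Lowe ratio) test and a locality constraint on image displacement.

    kp*: (N, 2) keypoint pixel coordinates; des*: (N, D) descriptors.
    Hypothetical helper illustrating the two constraints from the abstract.
    """
    matches = []
    for i, d in enumerate(des1):
        dist = np.linalg.norm(des2 - d, axis=1)   # distance to all frame-2 descriptors
        j, k = np.argsort(dist)[:2]               # best and second-best candidate
        if dist[j] < ratio * dist[k]:             # similarity constraint
            if np.linalg.norm(kp2[j] - kp1[i]) < max_disp:  # locality constraint
                matches.append((i, j))
    return matches
```

Matches passing both tests would then seed the initial pose computation, with the coarse-to-fine outlier rejection applied afterwards.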
Collapse
|
29
|
Lee DH, Coltin B, Morse T, Park IW, Flückiger L, Smith T. Handrail detection and pose estimation for a free-flying robot. INT J ADV ROBOT SYST 2018. [DOI: 10.1177/1729881417753691] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
We present a handrail detection and pose estimation algorithm for the free-flying Astrobee robots that will operate inside the International Space Station. The Astrobee will be equipped with a single time-of-flight depth sensor and a compliant perching arm to grab the International Space Station handrails. Autonomous perching enables a free-flying robot to minimize power consumption by holding its position without using propulsion. Astrobee is a small robot with many competing demands on its computing, power, and volume resources. Therefore, for perching, we were limited to using a single compact sensor and a lightweight detection algorithm. Moreover, the handrails on the International Space Station are surrounded by various instruments and cables, and the lighting conditions change significantly depending on the light sources, time, and robot location. The proposed algorithm uses a time-of-flight depth sensor for handrail perception under varying lighting conditions and utilizes the geometric characteristics of the handrails for robust detection and pose estimation. We demonstrate the robustness and accuracy of the algorithm in various environment scenarios.
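The geometric stage (fitting a straight rail axis to depth points) can be approximated with a RANSAC line fit; this is a simplified stand-in for the detector described above, and the function name, tolerances, and synthetic data are all assumptions:

```python
import numpy as np

def fit_rail_axis(points, n_iter=200, tol=0.02, seed=0):
    """RANSAC fit of a 3D line (a handrail-axis candidate) to depth points.

    Returns (point_on_line, unit_direction, inlier_mask).
    """
    rng = np.random.default_rng(seed)
    best = (None, None, np.zeros(len(points), dtype=bool))
    for _ in range(n_iter):
        a, b = points[rng.choice(len(points), 2, replace=False)]
        d = b - a
        n = np.linalg.norm(d)
        if n < 1e-9:
            continue
        d = d / n
        r = points - a
        # Perpendicular distance of every point to the candidate line.
        dist = np.linalg.norm(r - np.outer(r @ d, d), axis=1)
        inliers = dist < tol
        if inliers.sum() > best[2].sum():
            best = (a, d, inliers)
    return best

# Synthetic rail: 50 noisy points along the x-axis plus 10 clutter points.
data_rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 50)
rail = np.stack([t, np.zeros(50), np.zeros(50)], axis=1)
rail += 0.005 * data_rng.standard_normal(rail.shape)
clutter = data_rng.uniform(-1.0, 1.0, (10, 3))
point_cloud = np.vstack([rail, clutter])
origin, direction, inlier_mask = fit_rail_axis(point_cloud)
```

The handrail's known radius and mounting geometry would further constrain which fitted lines are accepted as detections.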
Collapse
Affiliation(s)
- Dong-Hyun Lee
- Department of Electrical Engineering, Kumoh National Institute of Technology, Gumi, Gyeongbuk, South Korea
| | - Brian Coltin
- Stinger Ghaffarian Technologies, Inc., Moffett Field, California, USA
| | - Theodore Morse
- Stinger Ghaffarian Technologies, Inc., Moffett Field, California, USA
| | - In-Won Park
- Stinger Ghaffarian Technologies, Inc., Moffett Field, California, USA
| | - Lorenzo Flückiger
- Stinger Ghaffarian Technologies, Inc., Moffett Field, California, USA
| | - Trey Smith
- Intelligent Robotics Group, NASA Ames Research Center, Moffett Field, California, USA
| |
Collapse
|
30
|
Mohta K, Watterson M, Mulgaonkar Y, Liu S, Qu C, Makineni A, Saulnier K, Sun K, Zhu A, Delmerico J, Karydis K, Atanasov N, Loianno G, Scaramuzza D, Daniilidis K, Taylor CJ, Kumar V. Fast, autonomous flight in GPS-denied and cluttered environments. J FIELD ROBOT 2017. [DOI: 10.1002/rob.21774] [Citation(s) in RCA: 86] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Affiliation(s)
- Kartik Mohta
- GRASP Lab; University of Pennsylvania; Philadelphia PA USA
| | | | | | - Sikang Liu
- GRASP Lab; University of Pennsylvania; Philadelphia PA USA
| | - Chao Qu
- GRASP Lab; University of Pennsylvania; Philadelphia PA USA
| | | | | | - Ke Sun
- GRASP Lab; University of Pennsylvania; Philadelphia PA USA
| | - Alex Zhu
- GRASP Lab; University of Pennsylvania; Philadelphia PA USA
| | - Jeffrey Delmerico
- Robotics and Perception Group; University of Zurich; Zurich Switzerland
| | | | | | | | - Davide Scaramuzza
- Robotics and Perception Group; University of Zurich; Zurich Switzerland
| | | | | | - Vijay Kumar
- GRASP Lab; University of Pennsylvania; Philadelphia PA USA
| |
Collapse
|
31
|
Tijmons S, de Croon GCHE, Remes BDW, De Wagter C, Mulder M. Obstacle Avoidance Strategy using Onboard Stereo Vision on a Flapping Wing MAV. IEEE T ROBOT 2017. [DOI: 10.1109/tro.2017.2683530] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
|
32
|
|
33
|
Szafir D, Mutlu B, Fong T. Designing planning and control interfaces to support user collaboration with flying robots. Int J Rob Res 2017. [DOI: 10.1177/0278364916688256] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
|
34
|
Vision-Based Corrosion Detection Assisted by a Micro-Aerial Vehicle in a Vessel Inspection Application. SENSORS 2016; 16:s16122118. [PMID: 27983627 PMCID: PMC5191098 DOI: 10.3390/s16122118] [Citation(s) in RCA: 35] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/02/2016] [Revised: 11/22/2016] [Accepted: 12/07/2016] [Indexed: 11/17/2022]
Abstract
Vessel maintenance requires periodic visual inspection of the hull in order to detect typical defects of steel structures such as, among others, coating breakdown and corrosion. These inspections are typically performed by well-trained surveyors at great cost, because of the need to provide access means (e.g., scaffolding and/or cherry pickers) that allow the inspector to be at arm's reach from the structure under inspection. This paper describes a defect detection approach comprising a micro-aerial vehicle, which is used to collect images of the surfaces under inspection, particularly in remote areas where the surveyor has no visual access, and a coating breakdown/corrosion detector based on a three-layer feed-forward artificial neural network. As discussed in the paper, the success of the inspection process depends not only on the defect detection software but also on a number of assistance functions provided by the control architecture of the aerial platform, whose aim is to improve picture quality. Both aspects of the work are described in the different sections of the paper, as well as the classification performance attained.
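The classifier's structure (a three-layer feed-forward network: input, one hidden layer, output) can be sketched on synthetic color features as below; the real detector's inputs, layer sizes, and training data are not reproduced, so everything here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ThreeLayerNet:
    """Minimal three-layer feed-forward binary classifier."""
    def __init__(self, n_in, n_hidden, lr=1.0):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, (n_hidden, 1))
        self.b2 = np.zeros(1)
        self.lr = lr
    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)
        return sigmoid(self.h @ self.W2 + self.b2).ravel()
    def train_step(self, X, y):
        # Full-batch gradient descent on cross-entropy loss.
        p = self.forward(X)
        g = (p - y)[:, None] / len(y)             # dLoss/dlogit
        gh = g @ self.W2.T * (1 - self.h ** 2)    # backprop through tanh
        self.W2 -= self.lr * (self.h.T @ g)
        self.b2 -= self.lr * g.sum(0)
        self.W1 -= self.lr * (X.T @ gh)
        self.b1 -= self.lr * gh.sum(0)

# Synthetic "patch" feature vectors: [R, G, B] means; label 1 = reddish,
# corrosion-like patches (a toy stand-in for the paper's descriptors).
X = rng.uniform(0, 1, (200, 3))
y = (X[:, 0] - 0.5 * (X[:, 1] + X[:, 2]) > 0.1).astype(float)
net = ThreeLayerNet(3, 8)
for _ in range(3000):
    net.train_step(X, y)
acc = ((net.forward(X) > 0.5) == y).mean()
```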
Collapse
|
35
|
Nguyen T, Mann GKI, Gosine RG, Vardy A. Appearance-Based Visual-Teach-And-Repeat Navigation Technique for Micro Aerial Vehicle. J INTELL ROBOT SYST 2016. [DOI: 10.1007/s10846-015-0320-1] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
36
|
An Accurate and Fault-Tolerant Target Positioning System for Buildings Using Laser Rangefinders and Low-Cost MEMS-Based MARG Sensors. SENSORS 2015; 15:27060-86. [PMID: 26512672 PMCID: PMC4634413 DOI: 10.3390/s151027060] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/26/2015] [Revised: 10/17/2015] [Accepted: 10/21/2015] [Indexed: 11/16/2022]
Abstract
Target positioning systems based on MEMS gyros and laser rangefinders (LRs) have extensive prospects due to their advantages of low cost, small size and easy realization. The target positioning accuracy is mainly determined by the LR's attitude, derived from the gyros. However, the attitude error is large due to the inherent noise of isolated MEMS gyros. In this paper, both accelerometer/magnetometer and LR attitude aiding systems are introduced to aid the MEMS gyros. A no-reset Federated Kalman Filter (FKF) is employed, which consists of two local Kalman Filters (KFs) and a Master Filter (MF). The local KFs are designed using Direction Cosine Matrix (DCM)-based dynamic equations and the measurements from the two aiding systems. The KFs estimate the attitude simultaneously to limit the attitude errors resulting from the gyros. The MF then fuses the redundant attitude estimates to yield globally optimal estimates. Simulation and experimental results demonstrate that the FKF-based system can improve the target positioning accuracy effectively and provides good fault tolerance.
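The master-filter fusion step of such a federated scheme is commonly written as information-weighted averaging of the local estimates; the sketch below shows that fusion rule only, not the paper's exact no-reset implementation:

```python
import numpy as np

def federated_fuse(estimates):
    """Master-filter step of a federated KF: fuse local (x_i, P_i) pairs by
    information-weighted averaging.

    P = (sum_i inv(P_i))^-1,   x = P @ sum_i inv(P_i) @ x_i
    """
    info = sum(np.linalg.inv(P_i) for _, P_i in estimates)
    P = np.linalg.inv(info)
    x = P @ sum(np.linalg.inv(P_i) @ x_i for x_i, P_i in estimates)
    return x, P

# Two equally confident local attitude estimates disagree; the fused
# estimate lands between them with reduced covariance.
x1, P1 = np.array([1.0, 0.0]), np.eye(2)
x2, P2 = np.array([3.0, 0.0]), np.eye(2)
x, P = federated_fuse([(x1, P1), (x2, P2)])
```

In the no-reset variant the local filters keep running independently after fusion, which is what preserves the fault-tolerance property the abstract mentions.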
Collapse
|
37
|
Inspection of Pole-Like Structures Using a Visual-Inertial Aided VTOL Platform with Shared Autonomy. SENSORS 2015; 15:22003-48. [PMID: 26340631 PMCID: PMC4610434 DOI: 10.3390/s150922003] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/14/2015] [Revised: 08/24/2015] [Accepted: 08/26/2015] [Indexed: 11/16/2022]
Abstract
This paper presents an algorithm and a system for vertical infrastructure inspection using a vertical take-off and landing (VTOL) unmanned aerial vehicle and shared autonomy. Inspecting vertical structures such as light and power distribution poles is a difficult task that is time-consuming, dangerous and expensive. Recently, micro VTOL platforms (i.e., quad-, hexa- and octa-rotors) have been rapidly gaining interest in research, military and even public domains. The unmanned, low-cost and VTOL properties of these platforms make them ideal for situations where inspection would otherwise be time-consuming and/or hazardous to humans. There are, however, challenges involved with developing such an inspection system, for example flying in close proximity to a target while maintaining a fixed stand-off distance from it, being immune to wind gusts and exchanging useful information with the remote user. To overcome these challenges, accurate and high-update-rate state estimation and high-performance controllers must be implemented onboard the vehicle. Ease of control and a live video feed are required for the human operator. We demonstrate a VTOL platform that can operate at close quarters, whilst maintaining a safe stand-off distance and rejecting environmental disturbances. Two approaches are presented: Position-Based Visual Servoing (PBVS) using an Extended Kalman Filter (EKF) and estimator-free Image-Based Visual Servoing (IBVS). Both use monocular visual, inertial, and sonar data, allowing the approaches to be applied in indoor or GPS-impaired environments. We extensively compare the performance of PBVS and IBVS in terms of accuracy, robustness and computational cost. Results from simulations and indoor/outdoor (day and night) flight experiments demonstrate that the system is able to successfully inspect and circumnavigate a vertical pole.
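At its core, the estimator-free IBVS branch reduces to the classic interaction-matrix control law v = -lambda * pinv(L) @ (s - s*); a minimal sketch follows (the full controller described above also blends inertial and sonar data, which is omitted here):

```python
import numpy as np

def ibvs_velocity(L, s, s_star, lam=0.5):
    """Classic IBVS law: command a camera velocity that drives the image
    feature vector s toward its desired value s_star.

    L is the image Jacobian (interaction matrix) of the tracked features;
    pinv handles the usual non-square, possibly rank-deficient case.
    """
    e = s - s_star
    return -lam * np.linalg.pinv(L) @ e
```

With a well-conditioned L this drives the feature error exponentially to zero, which is why no explicit pose estimator is needed in this branch.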
Collapse
|
38
|
Kaul L, Zlot R, Bosse M. Continuous-Time Three-Dimensional Mapping for Micro Aerial Vehicles with a Passively Actuated Rotating Laser Scanner. J FIELD ROBOT 2015. [DOI: 10.1002/rob.21614] [Citation(s) in RCA: 39] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Affiliation(s)
- Lukas Kaul
- Autonomous Systems; CSIRO; Brisbane Australia
- Institute of Measurement and Control Systems; Karlsruhe Institute of Technology; Germany
| | - Robert Zlot
- Autonomous Systems; CSIRO; Brisbane Australia
| | - Michael Bosse
- Autonomous Systems Laboratory; ETH; Zürich Switzerland
| |
Collapse
|
39
|
Invariant Observer-Based State Estimation for Micro-Aerial Vehicles in GPS-Denied Indoor Environments Using an RGB-D Camera and MEMS Inertial Sensors. MICROMACHINES 2015. [DOI: 10.3390/mi6040487] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
40
|
Bry A, Richter C, Bachrach A, Roy N. Aggressive flight of fixed-wing and quadrotor aircraft in dense indoor environments. Int J Rob Res 2015. [DOI: 10.1177/0278364914558129] [Citation(s) in RCA: 60] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
In this paper, we describe trajectory planning and state estimation algorithms for aggressive flight of micro aerial vehicles in known, obstacle-dense environments. Finding aggressive but dynamically feasible and collision-free trajectories in cluttered environments requires trajectory optimization and state estimation in the full state space of the vehicle, which is usually computationally infeasible on realistic timescales for real vehicles and sensors. We first build on previous work by van Nieuwstadt and Murray and by Mellinger and Kumar to show how a search process can be coupled with optimization in the output space of a differentially flat vehicle model to find aggressive trajectories that utilize the full maneuvering capabilities of a quadrotor. We further extend this work to vehicles with complex, Dubins-type dynamics and present a novel trajectory representation called a “Dubins–Polynomial trajectory”, which allows us to optimize trajectories for fixed-wing vehicles. To provide accurate state estimation for aggressive flight, we show how the Gaussian particle filter can be extended to combine laser rangefinder localization with a Kalman filter. This formulation achieves estimation accuracy similar to particle filtering in the full vehicle state, but with an order of magnitude greater efficiency. We conclude with experiments demonstrating the execution of quadrotor and fixed-wing trajectories in cluttered environments. We show results of aggressive flight at speeds of up to 8 m/s for the quadrotor and 11 m/s for the fixed-wing aircraft.
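Output-space trajectory segments for differentially flat vehicles are typically built from polynomials pinned to boundary conditions; the quintic below is a standard construction of that kind, not the paper's exact Dubins–Polynomial representation:

```python
import numpy as np

def quintic_segment(p0, v0, a0, p1, v1, a1, T):
    """Coefficients c[0..5] of a quintic p(t) = sum c[i] t^i meeting position,
    velocity and acceleration boundary conditions at t = 0 and t = T.

    Solving the 6x6 linear system pins all six boundary conditions exactly,
    giving a smooth segment in the flat output space.
    """
    A = np.array([
        [1, 0, 0,    0,       0,        0],
        [0, 1, 0,    0,       0,        0],
        [0, 0, 2,    0,       0,        0],
        [1, T, T**2, T**3,    T**4,     T**5],
        [0, 1, 2*T,  3*T**2,  4*T**3,   5*T**4],
        [0, 0, 2,    6*T,     12*T**2,  20*T**3],
    ], dtype=float)
    return np.linalg.solve(A, np.array([p0, v0, a0, p1, v1, a1], dtype=float))

# Rest-to-rest segment: move one unit in 2 s, starting and ending at rest.
c = quintic_segment(0, 0, 0, 1, 0, 0, 2.0)
```

Longer trajectories concatenate such segments and optimize the free boundary derivatives, with the search process choosing which segments to connect.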
Collapse
Affiliation(s)
- Adam Bry
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA
| | - Charles Richter
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA
| | - Abraham Bachrach
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA
| | - Nicholas Roy
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA
| |
Collapse
|
41
|
Li D, Li Q, Cheng N, Song J. Sampling-based real-time motion planning under state uncertainty for autonomous micro-aerial vehicles in GPS-denied environments. SENSORS 2014; 14:21791-825. [PMID: 25412217 PMCID: PMC4279562 DOI: 10.3390/s141121791] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/31/2014] [Revised: 10/26/2014] [Accepted: 11/03/2014] [Indexed: 11/24/2022]
Abstract
This paper presents a real-time motion planning approach for autonomous vehicles with complex dynamics and state uncertainty. The approach is motivated by the motion planning problem for autonomous vehicles navigating in GPS-denied dynamic environments, which involves non-linear and/or non-holonomic vehicle dynamics, incomplete state estimates, and constraints imposed by uncertain and cluttered environments. To address this problem, we propose the closed-loop random belief trees (CL-RBT), an extension of closed-loop rapidly-exploring random trees that incorporates predictions of the position estimation uncertainty, using a factored form of the covariance provided by the Kalman filter-based estimator. The proposed motion planner operates by incrementally constructing a tree of dynamically feasible trajectories using closed-loop prediction, while selecting candidate paths with low uncertainty via efficient covariance update and propagation. The algorithm can operate in real time, continuously providing the controller with feasible paths for execution, enabling the vehicle to account for dynamic and uncertain environments. Simulation results demonstrate that the proposed approach can generate feasible trajectories that reduce the state estimation uncertainty, while handling complex vehicle dynamics and environment constraints.
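The path-ranking idea (predicting how the estimator's covariance evolves along each candidate path and preferring low-uncertainty ones) can be sketched as follows; the factored covariance form of CL-RBT is not reproduced, and all models here are assumptions:

```python
import numpy as np

def propagate_cov(P0, steps, F, Q, H, R):
    """Propagate a KF covariance along a candidate path: predict every step,
    and apply a measurement update only on steps flagged as having a fix.

    Returns the terminal covariance trace, used here as the path cost.
    """
    P = P0.copy()
    for has_fix in steps:
        P = F @ P @ F.T + Q                     # prediction inflates P
        if has_fix:                             # a measurement shrinks it
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            P = (np.eye(len(P)) - K @ H) @ P
    return np.trace(P)

F, Q = np.eye(2), 0.01 * np.eye(2)
H, R = np.eye(2), 0.05 * np.eye(2)
P0 = 0.1 * np.eye(2)
# Path A passes landmarks (updates); path B is feature-poor (dead reckoning).
cost_a = propagate_cov(P0, [True] * 10, F, Q, H, R)
cost_b = propagate_cov(P0, [False] * 10, F, Q, H, R)
```

A planner comparing the two costs would prefer path A, trading path length for localization certainty, which is the behavior the abstract describes.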
Collapse
Affiliation(s)
- Dachuan Li
- Department of Automation, Tsinghua University, Beijing 100084, China.
| | - Qing Li
- Department of Automation, Tsinghua University, Beijing 100084, China.
| | - Nong Cheng
- Department of Automation, Tsinghua University, Beijing 100084, China.
| | - Jingyan Song
- Department of Automation, Tsinghua University, Beijing 100084, China.
| |
Collapse
|
42
|
Kümmerle R, Ruhnke M, Steder B, Stachniss C, Burgard W. Autonomous Robot Navigation in Highly Populated Pedestrian Zones. J FIELD ROBOT 2014. [DOI: 10.1002/rob.21534] [Citation(s) in RCA: 72] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Affiliation(s)
- Rainer Kümmerle
- Department of Computer Science; University of Freiburg; 79110 Freiburg Germany
| | - Michael Ruhnke
- Department of Computer Science; University of Freiburg; 79110 Freiburg Germany
| | - Bastian Steder
- Department of Computer Science; University of Freiburg; 79110 Freiburg Germany
| | - Cyrill Stachniss
- Department of Computer Science; University of Freiburg; 79110 Freiburg Germany
| | - Wolfram Burgard
- Department of Computer Science; University of Freiburg; 79110 Freiburg Germany
| |
Collapse
|
43
|
Kubelka V, Oswald L, Pomerleau F, Colas F, Svoboda T, Reinstein M. Robust Data Fusion of Multimodal Sensory Information for Mobile Robots. J FIELD ROBOT 2014. [DOI: 10.1002/rob.21535] [Citation(s) in RCA: 41] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Affiliation(s)
- Vladimír Kubelka
- Center for Machine Perception; Dept. of Cybernetics, Faculty of Electrical Engineering, Czech Technical University in Prague; Technicka 2 166 27 Prague 6 Czech Republic
| | | | | | | | - Tomáš Svoboda
- Center for Machine Perception; Dept. of Cybernetics, Faculty of Electrical Engineering, Czech Technical University in Prague; Technicka 2 166 27 Prague 6 Czech Republic
| | - Michal Reinstein
- Center for Machine Perception; Dept. of Cybernetics, Faculty of Electrical Engineering, Czech Technical University in Prague; Technicka 2 166 27 Prague 6 Czech Republic
| |
Collapse
|
44
|
Tang J, Chen Y, Jaakkola A, Liu J, Hyyppä J, Hyyppä H. NAVIS-An UGV indoor positioning system using laser scan matching for large-area real-time applications. SENSORS 2014; 14:11805-24. [PMID: 24999715 PMCID: PMC4168456 DOI: 10.3390/s140711805] [Citation(s) in RCA: 40] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/29/2014] [Revised: 06/03/2014] [Accepted: 06/20/2014] [Indexed: 11/30/2022]
Abstract
Laser scan matching with grid-based maps is a promising tool for real-time indoor positioning of mobile Unmanned Ground Vehicles (UGVs). However, critical implementation problems remain, such as the ability to estimate the position by sensing an unknown indoor environment with sufficient accuracy and low enough latency for stable vehicle control, so further development work is necessary. Unfortunately, most existing methods employ heuristics for quick positioning, in which accumulated errors easily lead to loss of positioning accuracy. This severely restricts their application over large areas and lengthy periods of time. This paper introduces an efficient real-time mobile UGV indoor positioning system for large-area applications using laser scan matching with an improved probabilistically-motivated Maximum Likelihood Estimation (IMLE) algorithm, which is based on a multi-resolution patch-divided grid likelihood map. Compared with traditional methods, the improvements embodied in IMLE include: (a) Iterative Closest Point (ICP) preprocessing, which adaptively decreases the search scope; (b) a brute-force search matching method on multi-resolution map layers, based on the likelihood value between the current laser scan and the grid map within the refined search scope, adopted to obtain the globally optimal position at each scan matching; and (c) a patch-divided likelihood map supporting a large indoor area. A UGV platform called NAVIS was designed, manufactured, and tested, based on a low-cost robot integrating a LiDAR and an odometer sensor, to verify the IMLE algorithm. A series of experiments based on simulated data and field tests with NAVIS proved that the proposed IMLE algorithm is a better way to perform local scan matching, offering a quick and stable positioning solution with high accuracy that can serve as part of a large-area localization/mapping application.
The NAVIS platform reaches an update rate of 12 Hz in a feature-rich environment and 2 Hz in a feature-poor environment, so it can be utilized in real-time applications.
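The exhaustive likelihood-map search at the heart of such scan matchers can be sketched at a single resolution as below; the map, scan, and search ranges are illustrative, and the multi-resolution and patch-division machinery described above is omitted:

```python
import numpy as np

def scan_score(likelihood_map, scan_xy, pose):
    """Sum of likelihood-map values under the scan endpoints transformed by
    pose = (x, y, theta). Endpoints outside the grid contribute nothing."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    pts = scan_xy @ np.array([[c, s], [-s, c]]) + np.array([x, y])
    ij = np.round(pts).astype(int)
    h, w = likelihood_map.shape
    ok = (ij[:, 0] >= 0) & (ij[:, 0] < w) & (ij[:, 1] >= 0) & (ij[:, 1] < h)
    return likelihood_map[ij[ok, 1], ij[ok, 0]].sum()

def brute_match(likelihood_map, scan_xy, search_x, search_y, search_th):
    """Exhaustive search over a pose grid, returning the best-scoring pose:
    the globally optimal position within the (refined) search scope."""
    best, best_pose = -np.inf, None
    for x in search_x:
        for y in search_y:
            for th in search_th:
                sc = scan_score(likelihood_map, scan_xy, (x, y, th))
                if sc > best:
                    best, best_pose = sc, (x, y, th)
    return best_pose
```

In a multi-resolution scheme the same search runs first on a coarse map layer, and the winner seeds a narrower search on the next finer layer.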
Collapse
Affiliation(s)
- Jian Tang
- GNSS Research Center, Wuhan University, 129 Luoyu Road, Wuhan 430079, China.
| | - Yuwei Chen
- Department of Remote Sensing and Photogrammetry, Finnish Geodetic Institute, Kirkkonummi FI-02431, Finland.
| | - Anttoni Jaakkola
- Department of Remote Sensing and Photogrammetry, Finnish Geodetic Institute, Kirkkonummi FI-02431, Finland.
| | - Jinbing Liu
- Department of Remote Sensing and Photogrammetry, Finnish Geodetic Institute, Kirkkonummi FI-02431, Finland.
| | - Juha Hyyppä
- Department of Remote Sensing and Photogrammetry, Finnish Geodetic Institute, Kirkkonummi FI-02431, Finland.
| | - Hannu Hyyppä
- Department of Real Estate, Planning and Geoinformatics, Aalto University, Espoo FI-11000, Finland.
| |
Collapse
|
45
|
Heng L, Honegger D, Lee GH, Meier L, Tanskanen P, Fraundorfer F, Pollefeys M. Autonomous Visual Mapping and Exploration With a Micro Aerial Vehicle. J FIELD ROBOT 2014. [DOI: 10.1002/rob.21520] [Citation(s) in RCA: 63] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Affiliation(s)
- Lionel Heng
- Computer Vision and Geometry Lab; ETH Zürich; Universitätstrasse 6, 8092 Zürich Switzerland
| | - Dominik Honegger
- Computer Vision and Geometry Lab; ETH Zürich; Universitätstrasse 6, 8092 Zürich Switzerland
| | - Gim Hee Lee
- Computer Vision and Geometry Lab; ETH Zürich; Universitätstrasse 6, 8092 Zürich Switzerland
| | - Lorenz Meier
- Computer Vision and Geometry Lab; ETH Zürich; Universitätstrasse 6, 8092 Zürich Switzerland
| | - Petri Tanskanen
- Computer Vision and Geometry Lab; ETH Zürich; Universitätstrasse 6, 8092 Zürich Switzerland
| | - Friedrich Fraundorfer
- Remote Sensing Technology; Faculty of Civil Engineering and Surveying, Technische Universität München; Arcisstrasse 21, 80333 München Germany
| | - Marc Pollefeys
- Computer Vision and Geometry Lab; ETH Zürich; Universitätstrasse 6, 8092 Zürich Switzerland
| |
Collapse
|
46
|
Keshavan J, Gremillion G, Escobar-Alvarez H, Humbert JS. A μ analysis-based, controller-synthesis framework for robust bioinspired visual navigation in less-structured environments. BIOINSPIRATION & BIOMIMETICS 2014; 9:025011. [PMID: 24852145 DOI: 10.1088/1748-3182/9/2/025011] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Safe, autonomous navigation by aerial microsystems in less-structured environments is a difficult challenge to overcome with current technology. This paper presents a novel visual-navigation approach that combines bioinspired wide-field processing of optic flow information with control-theoretic tools for synthesis of closed loop systems, resulting in robustness and performance guarantees. Structured singular value analysis is used to synthesize a dynamic controller that provides good tracking performance in uncertain environments without resorting to explicit pose estimation or extraction of a detailed environmental depth map. Experimental results with a quadrotor demonstrate the vehicle's robust obstacle-avoidance behaviour in a straight line corridor, an S-shaped corridor and a corridor with obstacles distributed in the vehicle's path. The computational efficiency and simplicity of the current approach offers a promising alternative to satisfying the payload, power and bandwidth constraints imposed by aerial microsystems.
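Wide-field processing of optic flow typically projects the measured flow field onto a few low-order spatial modes whose outputs feed the controller; below is a one-dimensional sketch of that decomposition (illustrative only, not the controller synthesis described above):

```python
import numpy as np

def wfi_outputs(flow, angles):
    """Wide-field integration: project a 1D lateral optic-flow pattern onto
    the first cosine and sine modes of viewing angle.

    In corridor-centering schemes of this kind the cosine projection acts as
    a lateral-position cue and the sine projection as a heading/speed cue;
    no depth map or pose estimate is ever formed.
    """
    w = 2 * np.pi / len(angles)                   # quadrature weight
    a1 = w * np.sum(flow * np.cos(angles))        # left/right asymmetry
    b1 = w * np.sum(flow * np.sin(angles))        # fore/aft pattern
    return a1, b1
```

A static or dynamic output feedback on these few scalars is what keeps the computational cost compatible with aerial microsystems.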
Collapse
Affiliation(s)
- J Keshavan
- Autonomous Vehicles Laboratory, Department of Aerospace Engineering, University of Maryland, College Park 20742, USA
| | | | | | | |
Collapse
|
47
|
Kerns AJ, Shepard DP, Bhatti JA, Humphreys TE. Unmanned Aircraft Capture and Control Via GPS Spoofing. J FIELD ROBOT 2014. [DOI: 10.1002/rob.21513] [Citation(s) in RCA: 315] [Impact Index Per Article: 31.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Affiliation(s)
- Andrew J. Kerns
- Department of Electrical and Computer Engineering; The University of Texas at Austin; Austin Texas 78712
| | - Daniel P. Shepard
- Department of Aerospace Engineering; The University of Texas at Austin; Austin Texas 78712
| | - Jahshan A. Bhatti
- Department of Aerospace Engineering; The University of Texas at Austin; Austin Texas 78712
| | - Todd E. Humphreys
- Department of Aerospace Engineering; The University of Texas at Austin; Austin Texas 78712
| |
Collapse
|
48
|
Eich M, Bonnin-Pascual F, Garcia-Fidalgo E, Ortiz A, Bruzzone G, Koveos Y, Kirchner F. A Robot Application for Marine Vessel Inspection. J FIELD ROBOT 2014. [DOI: 10.1002/rob.21498] [Citation(s) in RCA: 44] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Affiliation(s)
- Markus Eich
- DFKI - Robotics Innovation Center; 28359 Bremen Germany
| | | | | | - Alberto Ortiz
- University of the Balearic Islands; 07122 Palma de Mallorca Spain
| | | | | | | |
Collapse
|
49
|
Abstract
This paper presents a large-scale dataset of vision (stereo and RGB-D), laser and proprioceptive data collected over an extended duration by a Willow Garage PR2 robot in the 10-story MIT Stata Center. As of September 2012 the dataset comprises over 2.3 TB, 38 h and 42 km (the length of a marathon). The dataset is of particular interest to robotics and computer vision researchers interested in long-term autonomy. It is expected to be useful in a variety of research areas: robotic mapping (long-term, visual, RGB-D or laser), change detection in indoor environments, human pattern analysis, and long-term path planning. For ease of use, the original ROS ‘bag’ log files are provided, as well as a derivative version combining human-readable data and imagery in standard formats. Of particular importance, this dataset also includes ground-truth position estimates of the robot at every instant (to a typical accuracy of 2 cm) obtained using as-built floor plans, which were carefully extracted with our software tools. The provision of ground truth for such a large dataset enables more meaningful comparison between algorithms than has previously been possible.
Collapse
Affiliation(s)
- Maurice Fallon
- Massachusetts Institute of Technology, Cambridge, MA, USA
| | | | - Michael Kaess
- Massachusetts Institute of Technology, Cambridge, MA, USA
| | - John J Leonard
- Massachusetts Institute of Technology, Cambridge, MA, USA
| |
Collapse
|
50
|
Das A, Diu M, Mathew N, Scharfenberger C, Servos J, Wong A, Zelek JS, Clausi DA, Waslander SL. Mapping, Planning, and Sample Detection Strategies for Autonomous Exploration. J FIELD ROBOT 2013. [DOI: 10.1002/rob.21490] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Affiliation(s)
- Arun Das
- Mechanical and Mechatronics Engineering; University of Waterloo; Waterloo ON Canada N2L 3G1
| | - Michael Diu
- Electrical and Computer Engineering; University of Waterloo; Waterloo ON Canada N2L 3G1
| | - Neil Mathew
- Mechanical and Mechatronics Engineering; University of Waterloo; Waterloo ON Canada N2L 3G1
| | | | - James Servos
- Mechanical and Mechatronics Engineering; University of Waterloo; Waterloo ON Canada N2L 3G1
| | - Andy Wong
- Mechanical and Mechatronics Engineering; University of Waterloo; Waterloo ON Canada N2L 3G1
| | - John S. Zelek
- Systems Design Engineering; University of Waterloo; Waterloo ON Canada N2L 3G1
| | - David A. Clausi
- Systems Design Engineering; University of Waterloo; Waterloo ON Canada N2L 3G1
| | - Steven L. Waslander
- Mechanical and Mechatronics Engineering; University of Waterloo; Waterloo ON Canada N2L 3G1
| |
Collapse
|