1. Cheng X, Liang X, Li X, Liu Z, Tan H. Autonomous Landing Strategy for Micro-UAV with Mirrored Field-of-View Expansion. Sensors 2024; 24:6889. [PMID: 39517786] [PMCID: PMC11548618] [DOI: 10.3390/s24216889]
Abstract
Positioning and autonomous landing are key technologies for implementing autonomous flight missions across various fields in unmanned aerial vehicle (UAV) systems. This research proposes a visual positioning method based on mirrored field-of-view expansion, providing a vision-based autonomous landing strategy for quadrotor micro-UAVs (MAVs). The forward-facing camera of the MAV obtains a top view through a view-transformation lens while retaining the original forward view. The MAV camera then captures the ground landing marker in real time, and the pose of the camera relative to the marker is obtained through a virtual-real image conversion technique and the R-PnP pose estimation algorithm. Next, a camera-IMU external parameter calibration method determines the pose transformation between the MAV camera and the body IMU, yielding the position of the landing marker's center point in the MAV's body coordinate system. Finally, the ground station sends guidance commands to the MAV based on this position information to execute the autonomous landing task. Indoor and outdoor landing experiments with a DJI Tello MAV demonstrate that the proposed mirrored field-of-view expansion method and the marker detection and guidance algorithm enable autonomous landing with an average accuracy of 0.06 m, meeting the high-precision landing requirements of MAVs.
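As an illustration of the guidance geometry this abstract describes — marker corners to camera-frame pose via PnP, then into the body frame via the camera-IMU extrinsics — a minimal Python/OpenCV sketch might look as follows. This is not the paper's R-PnP implementation; the function names, corner ordering, and use of SOLVEPNP_IPPE_SQUARE are assumptions.

    import cv2
    import numpy as np

    def marker_center_in_body(img_pts, marker_len, K, dist, R_bc, t_bc):
        # Marker corners in the marker frame (origin at the center), in the
        # top-left, top-right, bottom-right, bottom-left order that
        # SOLVEPNP_IPPE_SQUARE expects.
        h = marker_len / 2.0
        obj_pts = np.array([[-h,  h, 0], [ h,  h, 0],
                            [ h, -h, 0], [-h, -h, 0]], dtype=np.float64)
        ok, rvec, tvec = cv2.solvePnP(obj_pts, np.asarray(img_pts, np.float64),
                                      K, dist, flags=cv2.SOLVEPNP_IPPE_SQUARE)
        if not ok:
            return None
        # tvec is the marker center expressed in the camera frame; the calibrated
        # camera-to-body extrinsics (R_bc, t_bc) lift it into the body frame.
        return R_bc @ tvec.reshape(3) + t_bc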
Affiliation(s)
- Xiaoqi Cheng
  - School of Mechatronic Engineering and Automation, Foshan University, Foshan 528225, China
  - Guangdong Provincial Key Laboratory of Industrial Intelligent Inspection Technology, Foshan University, Foshan 528225, China
- Xinfeng Liang
  - Guangdong Provincial Key Laboratory of Industrial Intelligent Inspection Technology, Foshan University, Foshan 528225, China
  - School of Physics and Optoelectronic Engineering, Foshan University, Foshan 528225, China
- Xiaosong Li
  - Guangdong Provincial Key Laboratory of Industrial Intelligent Inspection Technology, Foshan University, Foshan 528225, China
  - School of Physics and Optoelectronic Engineering, Foshan University, Foshan 528225, China
- Zhimin Liu
  - School of Mechatronic Engineering and Automation, Foshan University, Foshan 528225, China
  - Guangdong Provincial Key Laboratory of Industrial Intelligent Inspection Technology, Foshan University, Foshan 528225, China
- Haishu Tan
  - School of Mechatronic Engineering and Automation, Foshan University, Foshan 528225, China
  - Guangdong Provincial Key Laboratory of Industrial Intelligent Inspection Technology, Foshan University, Foshan 528225, China
  - School of Physics and Optoelectronic Engineering, Foshan University, Foshan 528225, China
2. Yang L, Wang C, Wang L. Autonomous UAVs landing site selection from point cloud in unknown environments. ISA Transactions 2022; 130:610-628. [PMID: 35697539] [DOI: 10.1016/j.isatra.2022.04.005]
Abstract
Autonomous safe landing of UAVs is an important and challenging task in unknown environments, as almost no prior scene information can be leveraged for navigation. Most existing methods cannot address this issue completely, owing to terrain uncertainty and system complexity. In this paper, we present a novel and complete framework for UAV landing, built on point clouds in a coarse-to-fine manner. The framework is modular, comprising four modules: point cloud preprocessing, coarse landing site selection, fine terrain evaluation, and a landing optimization model. Specifically, a composite preprocessing scheme simultaneously filters noise, generates a 3D OctoMap, and plans the path on the raw point cloud. To balance the accuracy and real-time performance of the landing system, only promising coarse landing locations are automatically selected by the proposed multi-stage process on a grid map. Based on the result of the coarse stage, a fine-grained 3D validation is modeled from multiple terrain factors, further improving landing safety. Finally, a novel landing optimization model fuses terrain condition, fuel consumption, and a second landing validation to determine the final landing sites during descent. Extensive experiments conducted in different real-world, unknown environments verify that our method selects safe landing sites robustly. The system is further evaluated under normal, emergency, and rescue situations to highlight different landing requirements.
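The coarse selection stage described above can be illustrated with a small sketch: bin the cloud into grid cells, fit a plane per cell, and keep cells that are dense, flat, and gently sloped. This is a hypothetical reconstruction of the general idea, not the authors' code; all thresholds are placeholders.

    import numpy as np

    def coarse_landing_cells(points, cell=1.0, max_roughness=0.05,
                             max_slope_deg=10.0, min_pts=10):
        # points: Nx3 array (x, y, z); returns candidate (ix, iy) grid cells.
        ix = np.floor(points[:, 0] / cell).astype(int)
        iy = np.floor(points[:, 1] / cell).astype(int)
        candidates = []
        for key in set(zip(ix, iy)):
            pts = points[(ix == key[0]) & (iy == key[1])]
            if len(pts) < min_pts:          # too sparse to judge safely
                continue
            # Least-squares plane fit z = a*x + b*y + c
            A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
            coef, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
            roughness = (pts[:, 2] - A @ coef).std()
            slope = np.degrees(np.arctan(np.hypot(coef[0], coef[1])))
            if roughness < max_roughness and slope < max_slope_deg:
                candidates.append(key)
        return candidates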
Affiliation(s)
- Linjie Yang
  - School of Electronics and Communication Engineering, Sun Yat-sen University, Guangzhou 510006, China
- Chenglong Wang
  - School of Electronics and Communication Engineering, Sun Yat-sen University, Guangzhou 510006, China
- Luping Wang
  - School of Electronics and Communication Engineering, Sun Yat-sen University, Guangzhou 510006, China
3. Jiang Z, Jovan F, Moradi P, Richardson T, Bernardini S, Watson S, Weightman A, Hine D. A multirobot system for autonomous deployment and recovery of a blade crawler for operations and maintenance of offshore wind turbine blades. J Field Robot 2022. [DOI: 10.1002/rob.22117]
Affiliation(s)
- Zhengyi Jiang
  - Department of Electrical and Electronic Engineering, The University of Manchester, Manchester, UK
- Ferdian Jovan
  - Department of Computer Science, University of Bristol, Bristol, UK
- Peiman Moradi
  - Department of Aerospace Engineering, University of Bristol, Bristol, UK
- Tom Richardson
  - Department of Aerospace Engineering, University of Bristol, Bristol, UK
- Sara Bernardini
  - Department of Computer Science, Royal Holloway University of London, Egham, UK
- Simon Watson
  - Department of Electrical and Electronic Engineering, The University of Manchester, Manchester, UK
- Andrew Weightman
  - Department of Mechanical, Aerospace and Civil Engineering, The University of Manchester, Manchester, UK
- Duncan Hine
  - Department of Aerospace Engineering, University of Bristol, Bristol, UK
4. Ochoa-de-Eribe-Landaberea A, Zamora-Cadenas L, Peñagaricano-Muñoa O, Velez I. UWB and IMU-Based UAV’s Assistance System for Autonomous Landing on a Platform. Sensors 2022; 22:2347. [PMID: 35336532] [PMCID: PMC8948988] [DOI: 10.3390/s22062347]
Abstract
This work presents a novel landing assistance system (LAS) capable of locating a drone for a safe landing after its inspection mission. The drone is located by fusing ultra-wideband (UWB), inertial measurement unit (IMU) and magnetometer data. Unlike typical landing assistance systems, the fixed UWB sensors are placed around a 2 × 2 m landing platform and two tags are attached to the drone. Since this type of set-up is suboptimal for UWB location systems, a new positioning algorithm is proposed for correct performance. First, an extended Kalman filter (EKF) algorithm calculates the position of each tag, and then both positions are combined for a more accurate and robust localisation. As a result, the positioning errors can be reduced by 50% compared to a typical UWB-based landing assistance system. Moreover, owing to its small space requirements, the proposed landing assistance system can be used almost anywhere and is easily deployed.
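The two-tag combination step admits a simple geometric reading: if the tags are mounted symmetrically about the body center, the midpoint of the two EKF estimates gives the vehicle position and the baseline between them gives the yaw. The sketch below is an assumption-laden illustration of that idea, not the paper's fusion algorithm.

    import numpy as np

    def fuse_tag_positions(p_tag1, p_tag2):
        # p_tag1, p_tag2: per-tag 3D position estimates from the EKF stage,
        # assuming the tags sit symmetrically about the drone's body center.
        center = 0.5 * (p_tag1 + p_tag2)
        # Heading from the tag baseline, projected onto the x-y plane
        yaw = np.arctan2(p_tag2[1] - p_tag1[1], p_tag2[0] - p_tag1[0])
        return center, yaw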
Affiliation(s)
- Aitor Ochoa-de-Eribe-Landaberea
  - CEIT-Basque Research and Technology Alliance (BRTA), Manuel Lardizabal 15, 20018 San Sebastián, Spain
  - Tecnun School of Engineering, Universidad de Navarra, Manuel Lardizabal 13, 20018 San Sebastián, Spain
- Leticia Zamora-Cadenas
  - CEIT-Basque Research and Technology Alliance (BRTA), Manuel Lardizabal 15, 20018 San Sebastián, Spain
  - Tecnun School of Engineering, Universidad de Navarra, Manuel Lardizabal 13, 20018 San Sebastián, Spain
- Igone Velez
  - CEIT-Basque Research and Technology Alliance (BRTA), Manuel Lardizabal 15, 20018 San Sebastián, Spain
  - Tecnun School of Engineering, Universidad de Navarra, Manuel Lardizabal 13, 20018 San Sebastián, Spain
5. Chatzikalymnios E, Moustakas K. Landing Site Detection for Autonomous Rotor Wing UAVs Using Visual and Structural Information. J Intell Robot Syst 2022. [DOI: 10.1007/s10846-021-01544-6]
Abstract
The technology of unmanned aerial vehicles (UAVs) has increasingly become part of many civil and research applications in recent years. UAVs offer high-quality aerial imaging and the ability to perform quick, flexible and in-depth data acquisition over an area of interest. While navigating in remote environments, UAVs need to be capable of autonomously landing on complex terrains for security, safety and delivery reasons. This is extremely challenging, as the structure of these terrains is often unknown and no prior knowledge can be leveraged. In this study, we present a vision-based autonomous landing system for rotor-wing UAVs equipped with a stereo camera and an inertial measurement unit (IMU). The landing site detection algorithm introduces and evaluates several factors, including the terrain's flatness, inclination and steepness. From these features we compute map metrics that are combined into a landing-score map, from which candidate landing sites are detected. The 3D reconstruction of the scene is acquired by stereo processing, and the pose of the UAV at any given time is estimated by fusing raw data from the inertial sensors with the pose obtained from stereo ORB-SLAM2. Real-world trials demonstrate successful landing in unknown and complex terrains such as suburban and forest areas.
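A landing-score map of the kind described can be sketched as a weighted fusion of normalized terrain-metric maps; the weights and sign conventions below are illustrative assumptions, not values from the study.

    import numpy as np

    def landing_score_map(flatness, inclination, steepness, w=(0.5, 0.3, 0.2)):
        # Each argument is an HxW map of one terrain factor; higher score = safer.
        def normalize(m):
            rng = np.ptp(m)
            return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
        # Flatness rewards a cell; inclination and steepness penalize it.
        return (w[0] * normalize(flatness)
                - w[1] * normalize(inclination)
                - w[2] * normalize(steepness))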
6. Measurement of End-effector Pose Errors and the Cable Profile of Cable-Driven Robot using Monocular Camera. J Intell Robot Syst 2021. [DOI: 10.1007/s10846-021-01486-z]
7. Real-Time Monocular Vision System for UAV Autonomous Landing in Outdoor Low-Illumination Environments. Sensors 2021; 21:6226. [PMID: 34577433] [PMCID: PMC8471562] [DOI: 10.3390/s21186226]
Abstract
Landing an unmanned aerial vehicle (UAV) autonomously and safely is a challenging task. Although existing approaches have resolved the problem of precise landing by identifying a specific landing marker with the UAV's onboard vision system, the vast majority of these works are conducted in daytime or well-illuminated laboratory environments. In contrast, very few researchers have investigated landing in low-illumination conditions, and those who have employ active light sources to illuminate the markers. In this paper, a novel vision system is proposed to tackle UAV landing in extreme outdoor low-illumination environments without applying an active light source to the marker. We use a model-based enhancement scheme to improve the quality and brightness of the onboard captured images, then present a hierarchical method consisting of a decision tree with an associated lightweight convolutional neural network (CNN) for coarse-to-fine landing marker localization, where the key information of the marker is extracted and retained for post-processing, such as pose estimation and landing control. Extensive evaluations demonstrate the robustness, accuracy, and real-time performance of the proposed vision system. Field experiments across a variety of outdoor nighttime scenarios, with an average luminance of 5 lx at the marker locations, have proven the feasibility and practicability of the system.
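The paper's model-based enhancement scheme is not reproduced here, but a stand-in for the general step — brightening dark onboard frames before marker detection — can be sketched with CLAHE plus gamma correction on the luma channel:

    import cv2
    import numpy as np

    def enhance_low_light(bgr, gamma=0.5):
        # Work on luma only so chroma is preserved
        ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
        y, cr, cb = cv2.split(ycrcb)
        y = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(y)
        # Gamma < 1 lifts dark regions
        lut = np.array([(i / 255.0) ** gamma * 255 for i in range(256)],
                       dtype=np.uint8)
        y = cv2.LUT(y, lut)
        return cv2.cvtColor(cv2.merge([y, cr, cb]), cv2.COLOR_YCrCb2BGR)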
8. Application of a Vision-Based Single Target on Robot Positioning System. Sensors 2021; 21:1829. [PMID: 33807940] [PMCID: PMC7961800] [DOI: 10.3390/s21051829]
Abstract
In this paper, we propose a Circular-ring visual location marker based on a global image-matching model to improve the positioning ability of fiducial marker systems for single-target mobile robots. The marker's unique coding information is designed according to the cross-ratio invariance of projective geometry. To verify the accuracy of full 6D pose estimation using the Circular-ring marker, a 6-degree-of-freedom (DoF) robotic arm platform is used in a visual location experiment. The experimental results show that, on small-resolution images, with different marker sizes, and in long-distance tests, the proposed robot positioning method significantly outperforms AprilTag, ArUco, and Checkerboard. Furthermore, a repeatable robot positioning experiment indicated that the proposed Circular-ring marker is twice as accurate as these fiducial markers at 2–4 m. In terms of recognition speed, a frame is processed within 0.077 s. When the Circular-ring marker is used for robot positioning at 2–4 m, the maximum average translation errors are 2.19, 3.04, and 9.44 mm, and the maximum average rotation errors are 1.703°, 1.468°, and 0.782°, respectively.
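The projective fact behind the marker's coding is that the cross-ratio of four collinear points survives perspective projection, so it can carry identity information from the marker plane into the image. A minimal sketch:

    import numpy as np

    def cross_ratio(a, b, c, d):
        # Cross-ratio (AC/BC)/(AD/BD) of four collinear 2D points,
        # invariant under any projective transformation of the line.
        ac, bc = np.linalg.norm(c - a), np.linalg.norm(c - b)
        ad, bd = np.linalg.norm(d - a), np.linalg.norm(d - b)
        return (ac / bc) / (ad / bd)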
9. A Vision-Based Odometer for Localization of Omnidirectional Indoor Robots. Sensors 2020; 20:875. [PMID: 32041371] [PMCID: PMC7038713] [DOI: 10.3390/s20030875]
Abstract
In this paper we tackle the problem of indoor robot localization using a vision-based approach. Specifically, we propose a visual odometer that recovers the relative pose of an omnidirectional automatic guided vehicle (AGV) moving inside an indoor industrial environment. A monocular downward-looking camera, with its optical axis nearly perpendicular to the ground floor, is used for collecting floor images. After a preliminary image analysis detects robust point features (keypoints), descriptors associated with the keypoints are used to match the detected points across consecutive frames. A robust correspondence filter based on statistical and geometrical information is devised to reject incorrect matches, thus delivering better pose estimates. A camera pose compensation is further introduced to ensure better positioning accuracy. The effectiveness of the proposed methodology has been proven through several experiments, in the laboratory as well as in an industrial setting, with both quantitative and qualitative evaluations. Outcomes show that the method provides a final positioning percentage error of 0.21% over an average distance of 17.2 m. A longer run in an industrial context provided comparable results (a percentage error of 0.94% after about 80 m). The average relative positioning error is about 3%, in good agreement with the current state of the art.
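The frame-to-frame core of such an odometer — keypoints on the floor texture, descriptor matching, and robust rejection of bad correspondences — might be sketched as below with ORB and RANSAC. This is a generic illustration, not the paper's filter; the ratio-test threshold is an assumption.

    import cv2
    import numpy as np

    orb = cv2.ORB_create(1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

    def relative_motion(prev_gray, curr_gray):
        kp1, des1 = orb.detectAndCompute(prev_gray, None)
        kp2, des2 = orb.detectAndCompute(curr_gray, None)
        if des1 is None or des2 is None:
            return None
        pairs = matcher.knnMatch(des1, des2, k=2)
        good = [p[0] for p in pairs
                if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
        if len(good) < 4:
            return None
        src = np.float32([kp1[m.queryIdx].pt for m in good])
        dst = np.float32([kp2[m.trainIdx].pt for m in good])
        # Rotation + translation (+ scale) with RANSAC outlier rejection;
        # pixel translation maps to metres through the camera calibration.
        M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
        return M  # 2x3 transform between consecutive floor images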
10.
Abstract
Over the last few years, several researchers have been developing protocols and applications to land unmanned aerial vehicles (UAVs) autonomously. However, most of the proposed protocols rely on expensive equipment or do not satisfy the high-precision needs of some UAV applications, such as package retrieval and delivery or the compact landing of UAV swarms. Therefore, in this work, a solution for high-precision landing based on ArUco markers is presented. In the proposed solution, a UAV equipped with a low-cost camera detects ArUco markers sized 56 × 56 cm from altitudes of up to 30 m. Once the marker is detected, the UAV changes its flight behavior to land on the exact position where the marker is located. The proposal was evaluated and validated using both the ArduSim simulation platform and real UAV flights. The results show an average offset of only 11 cm from the target position, which vastly improves landing accuracy compared with traditional GPS-based landing, which typically deviates from the intended target by 1 to 3 m.
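A hedged sketch of the detection step, using OpenCV's pre-4.7 ArUco API (the dictionary, intrinsics, and everything except the 56 cm marker size are placeholders, not the article's configuration):

    import cv2
    import numpy as np

    ARUCO_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

    def marker_offset(gray, K, dist, marker_len=0.56):
        corners, ids, _ = cv2.aruco.detectMarkers(gray, ARUCO_DICT)
        if ids is None:
            return None  # no marker in view yet
        rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
            corners, marker_len, K, dist)
        return tvecs[0].ravel()  # marker position (x, y, z) in the camera frame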
11. Dufek J, Murphy R. Visual Pose Estimation of Rescue Unmanned Surface Vehicle From Unmanned Aerial System. Front Robot AI 2019; 6:42. [PMID: 33501058] [PMCID: PMC7805959] [DOI: 10.3389/frobt.2019.00042]
Abstract
This article addresses the problem of visually estimating the pose of a rescue unmanned surface vehicle (USV) using an unmanned aerial system (UAS) in marine mass casualty events. A UAS visually navigating the USV can help solve problems with teleoperation and manpower requirements. The solution has to estimate full pose (both position and orientation), work outdoors from an oblique view angle (up to 85° from nadir) at large distances (180 m) in real time (5 Hz), and handle both a moving UAS (up to 22 m s⁻¹) and a moving object (up to 10 m s⁻¹). None of the 58 reviewed studies satisfied all of those requirements. This article presents two algorithms for visual position estimation using the object's hue (thresholding and histogramming) and four techniques for visual orientation estimation using the object's shape that satisfy those requirements. Four physical experiments were performed to validate feasibility and compare the thresholding and histogramming algorithms. Histogramming had statistically significantly lower position estimation error than thresholding for all four trials (p-values ranged from ~0 to 8.23263 × 10⁻²⁹), but it had statistically significantly lower orientation estimation error for only two of the trials (p-values 3.51852 × 10⁻³⁹ and 1.32762 × 10⁻⁴⁶). The mean position estimation error ranged from 7 to 43 px, while the mean orientation estimation error ranged from 0.134 to 0.480 rad. The histogramming algorithm demonstrated feasibility across variations in environmental conditions and physical settings while requiring fewer parameters than thresholding. However, three problems were identified: the orientation estimation error was quite large for both algorithms, both required manual tuning before each trial, and neither was robust enough to recover from significant changes in illumination conditions. To reduce the orientation estimation error, inverse perspective warping will be necessary to reduce perspective distortion. To eliminate the need for tuning and increase robustness, a machine learning approach to pose estimation might ultimately be a better solution.
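The thresholding variant of the position estimator can be illustrated in a few lines: segment the vessel by hue in HSV space and take the blob centroid. The hue band below is a placeholder to be tuned per trial, which is exactly the limitation the article reports.

    import cv2
    import numpy as np

    def hue_position(bgr, hue_lo=100, hue_hi=130):
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array((hue_lo, 80, 80)),
                           np.array((hue_hi, 255, 255)))
        m = cv2.moments(mask)
        if m["m00"] == 0:
            return None  # no pixels in the hue band
        return m["m10"] / m["m00"], m["m01"] / m["m00"]  # blob centroid (px)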
Affiliation(s)
- Jan Dufek
  - Department of Computer Science and Engineering, Texas A&M University, College Station, TX, United States
- Robin Murphy
  - Department of Computer Science and Engineering, Texas A&M University, College Station, TX, United States