1. Cheng X, Liang X, Li X, Liu Z, Tan H. Autonomous Landing Strategy for Micro-UAV with Mirrored Field-of-View Expansion. Sensors (Basel, Switzerland) 2024; 24:6889. [PMID: 39517786] [PMCID: PMC11548618] [DOI: 10.3390/s24216889]
Abstract
Positioning and autonomous landing are key technologies for executing autonomous flight missions in unmanned aerial vehicle (UAV) systems across various fields. This research proposes a visual positioning method based on mirrored field-of-view expansion, providing a vision-based autonomous landing strategy for quadrotor micro-UAVs (MAVs). The forward-facing camera of the MAV obtains a top view through a view-transformation lens while retaining the original forward view. The camera then captures the ground landing marker in real time, and the pose of the camera relative to the marker is obtained through a virtual-real image conversion technique and the R-PnP pose estimation algorithm. Next, a camera-IMU extrinsic calibration method determines the pose transformation between the MAV camera and the body IMU, yielding the position of the landing marker's center point in the MAV's body coordinate system. Finally, the ground station sends guidance commands to the MAV based on this position information to execute the autonomous landing task. Indoor and outdoor landing experiments with a DJI Tello MAV demonstrate that the proposed mirrored field-of-view expansion method and the landing-marker detection and guidance algorithm enable autonomous landing with an average accuracy of 0.06 m. The results show that this strategy meets the high-precision landing requirements of MAVs.
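As an illustration of the pose-estimation and frame-transformation steps this abstract describes, here is a minimal Python sketch that uses OpenCV's generic solvePnP in place of the paper's R-PnP algorithm; the marker size, corner ordering, and the camera-to-body extrinsic T_body_cam are assumptions for the sketch, not values from the paper.

```python
import cv2
import numpy as np

MARKER_SIDE = 0.20  # m, assumed marker size (not from the paper)
# 3D corners of a square marker in its own frame (z = 0 plane)
obj_pts = np.array([[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]],
                   dtype=np.float64) * MARKER_SIDE / 2

def marker_in_body_frame(img_pts, K, dist, T_body_cam):
    """Return the marker centre in the MAV body frame.
    img_pts: 4x2 detected corner pixels (same order as obj_pts);
    K, dist: camera intrinsics; T_body_cam: 4x4 camera-to-IMU extrinsic
    from a prior calibration (hypothetical here)."""
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
    if not ok:
        return None
    R_cam_marker, _ = cv2.Rodrigues(rvec)
    T_cam_marker = np.eye(4)
    T_cam_marker[:3, :3] = R_cam_marker
    T_cam_marker[:3, 3] = tvec.ravel()
    # Chain camera->marker with body->camera to land in body coordinates
    T_body_marker = T_body_cam @ T_cam_marker
    return T_body_marker[:3, 3]
```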
Affiliation(s)
- Xiaoqi Cheng: School of Mechatronic Engineering and Automation, Foshan University, Foshan 528225, China; Guangdong Provincial Key Laboratory of Industrial Intelligent Inspection Technology, Foshan University, Foshan 528225, China
- Xinfeng Liang: Guangdong Provincial Key Laboratory of Industrial Intelligent Inspection Technology, Foshan University, Foshan 528225, China; School of Physics and Optoelectronic Engineering, Foshan University, Foshan 528225, China
- Xiaosong Li: Guangdong Provincial Key Laboratory of Industrial Intelligent Inspection Technology, Foshan University, Foshan 528225, China; School of Physics and Optoelectronic Engineering, Foshan University, Foshan 528225, China
- Zhimin Liu: School of Mechatronic Engineering and Automation, Foshan University, Foshan 528225, China; Guangdong Provincial Key Laboratory of Industrial Intelligent Inspection Technology, Foshan University, Foshan 528225, China
- Haishu Tan: School of Mechatronic Engineering and Automation, Foshan University, Foshan 528225, China; Guangdong Provincial Key Laboratory of Industrial Intelligent Inspection Technology, Foshan University, Foshan 528225, China; School of Physics and Optoelectronic Engineering, Foshan University, Foshan 528225, China
2. Choutri K, Lagha M, Meshoul S, Shaiba H, Chegrani A, Yahiaoui M. Vision-Based UAV Detection and Localization to Indoor Positioning System. Sensors (Basel, Switzerland) 2024; 24:4121. [PMID: 39000900] [PMCID: PMC11243916] [DOI: 10.3390/s24134121]
Abstract
In recent years, the widespread integration of drones across diverse sectors has reshaped the technological landscape. Comprehensive testing is essential to drone manufacturing and is typically conducted in controlled laboratory settings to uphold safety and privacy standards. A formidable challenge, however, arises from the inherent limitations of GPS signals in indoor environments, which undermine the accuracy of drone positioning. This limitation not only jeopardizes testing validity but also introduces instability and inaccuracy, compromising the assessment of drone performance. Given the pivotal role of precise GPS-derived data in drone autopilots, addressing this indoor GPS constraint is imperative to ensure the reliability and resilience of unmanned aerial vehicles (UAVs). This paper presents an Indoor Positioning System (IPS) leveraging computer vision. The proposed system detects and localizes UAVs within indoor environments through an enhanced vision-based triangulation approach, and a comparative analysis with alternative positioning methodologies assesses its efficacy. The results demonstrate the efficiency and precision of the designed system in detecting and localizing various types of UAVs, underscoring its potential to advance indoor drone navigation and testing.
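For the triangulation step at the core of such a system, the following minimal sketch recovers a UAV's 3D position from two calibrated, synchronized cameras via OpenCV's standard linear triangulation; the projection matrices P1 and P2 and the detected pixel coordinates are assumed inputs, and the paper's enhanced triangulation and detection stages are not reproduced here.

```python
import cv2
import numpy as np

def triangulate_uav(P1, P2, px1, px2):
    """Return the 3D position of a UAV seen at pixel px1 in camera 1 and
    pixel px2 in camera 2. P1, P2: 3x4 projection matrices (K[R|t])."""
    pts1 = np.asarray(px1, dtype=np.float64).reshape(2, 1)
    pts2 = np.asarray(px2, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4x1 homogeneous point
    return (X_h[:3] / X_h[3]).ravel()                # de-homogenize to 3D
```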
Affiliation(s)
- Kheireddine Choutri: Aeronautical Sciences Laboratory, Aeronautical and Spatial Studies Institute, Blida 1 University, Blida 0900, Algeria
- Mohand Lagha: Aeronautical Sciences Laboratory, Aeronautical and Spatial Studies Institute, Blida 1 University, Blida 0900, Algeria
- Souham Meshoul: Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Hadil Shaiba: Department of Computer Science, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Akram Chegrani: Aeronautical Sciences Laboratory, Aeronautical and Spatial Studies Institute, Blida 1 University, Blida 0900, Algeria
- Mohamed Yahiaoui: Aeronautical Sciences Laboratory, Aeronautical and Spatial Studies Institute, Blida 1 University, Blida 0900, Algeria
3. Ma N, Weng X, Cao Y, Wu L. Monocular-Vision-Based Precise Runway Detection Applied to State Estimation for Carrier-Based UAV Landing. Sensors (Basel, Switzerland) 2022; 22:8385. [PMID: 36366084] [PMCID: PMC9653648] [DOI: 10.3390/s22218385]
Abstract
Improving the level of autonomy during the landing phase helps promote the full-envelope autonomous flight capability of unmanned aerial vehicles (UAVs). Aiming at the identification of potential landing sites, this paper proposes an end-to-end state estimation method for the autonomous landing of carrier-based UAVs based on monocular vision, which allows them to discover landing sites in flight using onboard optical sensors and to avoid a crash or damage during normal and emergency landings. The scheme addresses two problems: the accuracy required for runway detection and the precision required for UAV state estimation. First, we design a robust runway detection framework based on YOLOv5 (you only look once, ver. 5) with four modules: a data augmentation layer, a feature extraction layer, a feature aggregation layer, and a target prediction layer. Then, a corner prediction method based on geometric features is introduced into the prediction model of the detection framework, enabling the landing-field prediction to fit the runway appearance more precisely. For the simulation experiments, we developed datasets for monocular-vision-based carrier-based UAV landing, and we implemented our method with the help of the PyTorch deep learning framework, which supports the dynamic and efficient construction of a detection network. The results show that the proposed method achieves higher precision and better performance in state estimation during carrier-based UAV landings.
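To make the corner-to-state pipeline concrete, here is a hedged sketch of the final estimation step: once a detector predicts the four runway corners in the image, a PnP solve against the known runway geometry yields the camera pose relative to the runway. The runway dimensions, corner ordering, and detector output format are illustrative assumptions, not the paper's values.

```python
import cv2
import numpy as np

RUNWAY_LEN, RUNWAY_WID = 250.0, 30.0  # m, assumed carrier runway size
# Runway corners in a runway-fixed frame (flat deck, z = 0)
runway_pts = np.array([[0, 0, 0], [RUNWAY_WID, 0, 0],
                       [RUNWAY_WID, RUNWAY_LEN, 0], [0, RUNWAY_LEN, 0]],
                      dtype=np.float64)

def estimate_state(corners_px, K):
    """corners_px: 4x2 detected runway corners (same order as runway_pts);
    K: camera intrinsics. Returns the camera position in the runway frame."""
    ok, rvec, tvec = cv2.solvePnP(runway_pts, corners_px.astype(np.float64),
                                  K, None)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    return (-R.T @ tvec).ravel()  # camera centre: C = -R^T t
```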
4. Visual Landing Based on the Human Depth Perception in Limited Visibility and Failure of Avionic Systems. Computational Intelligence and Neuroscience 2022; 2022:4320101. [PMID: 35498171] [PMCID: PMC9054408] [DOI: 10.1155/2022/4320101]
Abstract
This paper introduces a novel visual landing system for the accurate landing of commercial aircraft based on human depth perception algorithms, named the 3D Model Landing System (3DMLS). The 3DMLS uses a simulation environment for visual landing during failure of navigation aids/avionics, in adverse weather conditions, and in limited visibility. To simulate the approach path and surrounding area, the 3DMLS draws on both the inertial measurement unit (IMU) and the digital elevation model (DEM). While the aircraft is within instrument landing system (ILS) range, the 3DMLS simulates the environment in more detail and applies a depth-of-field (DOF) depth perception algorithm to provide a clear visual landing path, displayed on a multifunction display in the cockpit. Because the pilot's eye concentrates mostly on the runway location and the touch-down point, the runway becomes the center of focus in the environment simulation. To display and evaluate the performance of the 3DMLS and its depth perception, an automated landing test is also designed and implemented to guide the aircraft along the runway; the flight path is derived in real time by comparing the current aircraft position with the runway position. Unity and MATLAB are used to model the 3DMLS. The accuracy and quality of the simulated environment, in terms of resolution, field of view, frames per second, and latency, are confirmed against FSTD visual requirements. Finally, a saliency map toolbox shows that the DOF implementation increases the pilot's concentration, resulting in safe landing guidance.
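One plausible way to drive the DOF effect described here is a standard thin-lens circle-of-confusion model that blurs scene points in proportion to their distance from the focus plane (the runway). The following sketch uses standard optics quantities (focal length f, f-number N, focus distance) assumed for illustration, not values from the paper.

```python
def coc_diameter(s_subject, s_focus, f=0.05, N=2.8):
    """Circle-of-confusion diameter (m) on the sensor for a point at
    distance s_subject when the lens (focal length f, f-number N) is
    focused at distance s_focus. Standard thin-lens formula."""
    A = f / N  # aperture diameter
    return A * abs(s_subject - s_focus) / s_subject * f / (s_focus - f)

# Points far from the runway's focus distance get a larger blur kernel,
# steering the pilot's attention toward the touch-down point.
blur_m = coc_diameter(s_subject=900.0, s_focus=600.0)
```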
5. Simultaneous Localization and Mapping (SLAM) and Data Fusion in Unmanned Aerial Vehicles: Recent Advances and Challenges. Drones 2022. [DOI: 10.3390/drones6040085]
Abstract
This article presents a survey of simultaneous localization and mapping (SLAM) and data fusion techniques for object detection and environmental scene perception in unmanned aerial vehicles (UAVs). We critically evaluate some current SLAM implementations in robotics and autonomous vehicles and their applicability and scalability to UAVs. SLAM is envisioned as a potential technique for object detection and scene perception to enable UAV navigation through continuous state estimation. In this article, we bridge the gap between SLAM and data fusion in UAVs while also comprehensively surveying related object detection techniques such as visual odometry and aerial photogrammetry. We begin with an introduction to applications where UAV localization is necessary, followed by an analysis of multimodal sensor data fusion to combine the information gathered from different sensors mounted on UAVs. We then discuss SLAM techniques such as Kalman filters and extended Kalman filters to address scene perception, mapping, and localization in UAVs. The findings are summarized to relate current and emerging SLAM and data fusion approaches for UAV navigation, and some avenues for further research are discussed.
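As a reference point for the filtering techniques the survey covers, here is a minimal Kalman filter predict/update cycle for a constant-velocity UAV state with a position-only measurement; the state layout and noise matrices are illustrative assumptions, and an EKF would replace F and H with Jacobians of nonlinear motion and measurement models.

```python
import numpy as np

def kf_step(x, P, z, dt, Q, R_meas):
    """One predict/update cycle for state x = [px, py, pz, vx, vy, vz].
    z: 3D position measurement; Q, R_meas: process/measurement noise."""
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)                    # position integrates velocity
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position only
    y = z - H @ x_pred                            # innovation
    S = H @ P_pred @ H.T + R_meas
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(6) - K @ H) @ P_pred
    return x_new, P_new
```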
6. Ochoa-de-Eribe-Landaberea A, Zamora-Cadenas L, Peñagaricano-Muñoa O, Velez I. UWB and IMU-Based UAV's Assistance System for Autonomous Landing on a Platform. Sensors (Basel, Switzerland) 2022; 22:2347. [PMID: 35336532] [PMCID: PMC8948988 DOI: 10.3390/s22062347]
Abstract
This work presents a novel landing assistance system (LAS) capable of locating a drone for a safe landing after its inspection mission. The drone is located by fusing ultra-wideband (UWB), inertial measurement unit (IMU) and magnetometer data. Unlike other typical landing assistance systems, the UWB fixed sensors are placed around a 2 × 2 m landing platform and two tags are attached to the drone. Since this type of set-up is suboptimal for UWB location systems, a new positioning algorithm is proposed to ensure correct performance. First, an extended Kalman filter (EKF) algorithm calculates the position of each tag, and then both positions are combined for a more accurate and robust localisation. As a result, the positioning errors can be reduced by 50% compared with a typical UWB-based landing assistance system. Moreover, owing to its small space requirements, the proposed landing assistance system can be used almost anywhere and is easily deployed.
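A minimal sketch of the two-tag combination step, assuming each tag's position has already been estimated by its own EKF and that the tags are mounted symmetrically about the drone's centre along its lateral axis; the mounting geometry and the fusion rule are hypothetical choices, not the paper's.

```python
import numpy as np

def fuse_tags(p_left, p_right):
    """Combine the two tag position estimates into a drone position and
    a yaw estimate, expressed in the landing-platform frame."""
    p_left, p_right = np.asarray(p_left), np.asarray(p_right)
    centre = 0.5 * (p_left + p_right)   # midpoint of the tag pair
    baseline = p_right - p_left
    # If tags sit on the lateral axis, heading is 90 deg off the baseline
    yaw = np.arctan2(baseline[1], baseline[0]) - np.pi / 2
    return centre, yaw
```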
Affiliation(s)
- Aitor Ochoa-de-Eribe-Landaberea (corresponding author): CEIT-Basque Research and Technology Alliance (BRTA), Manuel Lardizabal 15, 20018 San Sebastián, Spain; Tecnun School of Engineering, Universidad de Navarra, Manuel Lardizabal 13, 20018 San Sebastián, Spain
- Leticia Zamora-Cadenas: CEIT-Basque Research and Technology Alliance (BRTA), Manuel Lardizabal 15, 20018 San Sebastián, Spain; Tecnun School of Engineering, Universidad de Navarra, Manuel Lardizabal 13, 20018 San Sebastián, Spain
- Igone Velez: CEIT-Basque Research and Technology Alliance (BRTA), Manuel Lardizabal 15, 20018 San Sebastián, Spain; Tecnun School of Engineering, Universidad de Navarra, Manuel Lardizabal 13, 20018 San Sebastián, Spain
7. Unstable Landing Platform Pose Estimation Based on Camera and Range Sensor Homogeneous Fusion (CRHF). Drones 2022. [DOI: 10.3390/drones6030060]
Abstract
Much research has been devoted to drone landing and, specifically, to pose estimation. While some of these works focus on sensor fusion using GPS or GNSS, we propose a method that uses four Time-of-Flight (ToF) range sensors together with a monocular camera. When the landing platform is unstable, for example on ships in the ocean, the uncertainty grows and tracking fails easily. We designed an algorithm that includes four ToF sensors for calibration and one for pose estimation. The landing process is divided into two main parts, the rendezvous and the final landing, with one important assumption for each phase: during the rendezvous, the landing platform's movement can be ignored, while during the landing phase, the drone is assumed to be stable and waiting for the best time to land. The current research models the landing phase as a stable drone above an unstable landing platform, a Stewart platform with a mounted AprilTag. A novel calibration algorithm is used based on color thresholding, a convex hull, and centroid extraction. Next, using the homogeneous coordinate equations of the sensors' touching points, the focal lengths in the X and Y directions are calculated. In addition, knowing the plane equation allows the Z coordinates of the landmark points to be recovered, after which the homogeneous coordinate equation yields the landmark's X and Y Cartesian coordinates. Finally, a 3D rigid-body transformation expresses the landing platform's pose in the camera frame. A Software-in-the-Loop (SIL) test bench confirms the practicality of the method. The results are promising for unstable landing-platform pose estimation and offer a significant improvement over single-camera AprilTag detection algorithms (ATDA).
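A minimal sketch of the back-projection step described above: with the camera intrinsics K and the landing-plane equation n·X = d (as recovered from the ToF readings), each landmark pixel can be lifted to a 3D point in the camera frame via its homogeneous viewing ray. Variable names and the plane parameterization are assumptions for illustration.

```python
import numpy as np

def pixel_to_plane_point(u, v, K, n, d):
    """Intersect the viewing ray of pixel (u, v) with the plane n.X = d.
    K: 3x3 intrinsics; n: unit plane normal; d: plane offset (m)."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # homogeneous pixel -> ray
    depth = d / (n @ ray)   # scale at which the ray meets the plane
    return depth * ray      # 3D landmark point in the camera frame
```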