1
Zhang J, Wolek A, Willis AR. UAV-Borne Mapping Algorithms for Low-Altitude and High-Speed Drone Applications. Sensors (Basel, Switzerland) 2024; 24:2204. [PMID: 38610416] [PMCID: PMC11014378] [DOI: 10.3390/s24072204]
Abstract
This article presents an analysis of current state-of-the-art sensors and how these sensors work with several mapping algorithms for UAV (Unmanned Aerial Vehicle) applications, focusing on low-altitude and high-speed scenarios. A new experimental construct is created using highly realistic environments made possible by integrating the AirSim simulator with Google 3D map models via the Cesium Tiles plugin. Experiments are conducted in this high-realism simulated environment to evaluate the performance of three distinct mapping algorithms: (1) Direct Sparse Odometry (DSO), (2) Stereo DSO (SDSO), and (3) DSO Lite (DSOL). Experimental results evaluate the algorithms based on their measured geometric accuracy and computational speed, providing valuable insights into the strengths and limitations of each algorithm. The findings quantify the trade-offs in UAV algorithm selection, allowing researchers to find the mapping solution best suited to their application, which often requires a compromise between computational performance and the density and accuracy of geometric map estimates. Results indicate that for UAVs with restrictive computing resources, DSOL is the best option. For systems with payload capacity and modest compute resources, SDSO is the best option. If only one camera is available, DSO is the option to choose for applications that require dense mapping results.
Affiliation(s)
- Jincheng Zhang
- Department of Electrical and Computer Engineering, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
- Artur Wolek
- Department of Mechanical Engineering and Engineering Science, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
- Andrew R. Willis
- Department of Electrical and Computer Engineering, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
2
Oblique View Selection for Efficient and Accurate Building Reconstruction in Rural Areas Using Large-Scale UAV Images. Drones 2022. [DOI: 10.3390/drones6070175]
Abstract
3D building models are widely used in many applications. The traditional image-based 3D reconstruction pipeline, which does not use semantic information, is inefficient for building reconstruction in rural areas. This paper proposes an oblique view selection methodology for efficient and accurate building reconstruction in rural areas. A Mask R-CNN model is trained on satellite datasets and used to detect building instances in nadir UAV images. The detected building instances and UAV images are then directly georeferenced. The georeferenced building instances are used to select oblique images that cover buildings via a nearest-neighbour search. Finally, precise match pairs are generated from the selected oblique images and nadir images using their georeferenced principal points. The proposed methodology is tested on a dataset containing 9775 UAV images. A total of 4441 oblique images covering 99.4% of all the buildings in the survey area are automatically selected. Experimental results show that the average precision and recall of the oblique view selection are 0.90 and 0.88, respectively. The percentages of robustly matched oblique-oblique and oblique-nadir image pairs are above 94% and 84%, respectively. The proposed methodology is evaluated for sparse and dense reconstruction. Experimental results show that sparse reconstruction based on the proposed methodology reduces data processing time by 68.9% while remaining comparably accurate and complete. Experimental results also show high consistency between the dense point clouds of buildings reconstructed by the traditional pipeline and by the pipeline based on the proposed methodology.
3
Image-Aided LiDAR Mapping Platform and Data Processing Strategy for Stockpile Volume Estimation. Remote Sensing 2022. [DOI: 10.3390/rs14010231]
Abstract
Stockpile quantity monitoring is vital for agencies and businesses that maintain inventories of bulk materials such as salt, sand, aggregate, lime, and many other materials commonly used in agriculture, highways, and industrial applications. Traditional approaches for volumetric assessment of bulk material stockpiles, e.g., truckload counting, are inaccurate and prone to cumulative errors over long periods. Modern aerial and terrestrial remote sensing platforms equipped with camera and/or light detection and ranging (LiDAR) units have become increasingly popular for conducting high-fidelity geometric measurements. Current use of these sensing technologies for stockpile volume estimation is limited by environmental conditions such as lack of global navigation satellite system (GNSS) signals, poor lighting, and/or featureless surfaces. This study addresses these limitations through a new mapping platform, denoted the Stockpile Monitoring and Reporting Technology (SMART) system, which is designed and integrated as a time-efficient, cost-effective stockpile monitoring solution. The novel mapping framework is realized through camera and LiDAR data fusion that facilitates stockpile volume estimation in challenging environmental conditions. LiDAR point clouds are derived through a sequence of data collections from different scans. To handle the sparse nature of the data collected at a given scan, an automated image-aided LiDAR coarse registration technique is developed, followed by a new segmentation approach that derives the features used for fine registration. The resulting 3D point cloud is subsequently used for accurate volume estimation. Field surveys were conducted on stockpiles of varying size and shape complexity. Independent assessment of stockpile volume using terrestrial laser scanners (TLS) shows that the developed framework achieved close to 1% relative error.
4
Abstract
Strong geometric and radiometric distortions often exist in optical wide-baseline stereo images, and some local regions can include surface discontinuities and occlusions. Digital photogrammetry and computer vision researchers have focused on automatic matching for such images. Deep convolutional neural networks, which can express high-level features and their correlation, have received increasing attention for the task of wide-baseline image matching, and learning-based methods have the potential to surpass methods based on handcrafted features. Therefore, we focus on the dynamic study of wide-baseline image matching and review the main approaches of learning-based feature detection, description, and end-to-end image matching. Moreover, we summarize the current representative research using stepwise inspection and dissection. We present the results of comprehensive experiments on actual wide-baseline stereo images, which we use to contrast and discuss the advantages and disadvantages of several state-of-the-art deep-learning algorithms. Finally, we conclude with a description of the state-of-the-art methods and forecast developing trends with unresolved challenges, providing a guide for future work.
5
Abstract
Unmanned Aerial Vehicles (UAVs) are increasingly being used in many challenging and diversified applications in both civilian and military fields. To name a few: infrastructure inspection, traffic patrolling, remote sensing, mapping, surveillance, rescuing humans and animals, environment monitoring, and Intelligence, Surveillance, Target Acquisition, and Reconnaissance (ISTAR) operations. However, the use of UAVs in these applications requires a substantial level of autonomy. In other words, UAVs should be able to accomplish planned missions in unexpected situations without human intervention. To ensure this level of autonomy, many artificial intelligence algorithms have been designed, targeting the guidance, navigation, and control (GNC) of UAVs. In this paper, we describe the state of the art of one subset of these algorithms: deep reinforcement learning (DRL) techniques. We provide a detailed description of them and deduce the current limitations in this area. We note that most of these DRL methods were designed to ensure stable and smooth UAV navigation by training in computer-simulated environments, and that further research efforts are needed to address the challenges that restrain their deployment in real-life scenarios.
6
New Orthophoto Generation Strategies from UAV and Ground Remote Sensing Platforms for High-Throughput Phenotyping. Remote Sensing 2021. [DOI: 10.3390/rs13050860]
Abstract
Remote sensing platforms have become an effective data acquisition tool for digital agriculture. Imaging sensors onboard unmanned aerial vehicles (UAVs) and tractors are providing unprecedented high-geometric-resolution data for several crop phenotyping activities (e.g., canopy cover estimation, plant localization, and flowering date identification). Among potential products, orthophotos play an important role in agricultural management. Traditional orthophoto generation strategies suffer from several artifacts (e.g., double mapping, excessive pixelation, and seamline distortions). These problems are more pronounced when dealing with mid- to late-season imagery, which is often used for establishing flowering date (e.g., tassel and panicle detection for maize and sorghum crops, respectively). In response to these challenges, this paper introduces new strategies for generating orthophotos that are conducive to straightforward detection of tassels and panicles. The orthophoto generation strategies are valid for both frame and push-broom imaging systems. The objective of these strategies is to strike a balance between the improved visual appearance of tassels/panicles and their geolocation accuracy. The new strategies are based on generating a smooth digital surface model (DSM) that maintains geolocation quality along the plant rows while reducing double mapping and pixelation artifacts. Moreover, seamline control strategies are applied to avoid seamline distortions at locations where tassels and panicles are expected. The quality of the generated orthophotos is evaluated through visual inspection as well as quantitative assessment of the degree of similarity between the generated orthophotos and the original images. Several experimental results from both UAV and ground platforms show that the proposed strategies improve the visual quality of the derived orthophotos while maintaining geolocation accuracy at tassel/panicle locations.
7
|
Development of a Miniaturized Mobile Mapping System for In-Row, Under-Canopy Phenotyping. REMOTE SENSING 2021. [DOI: 10.3390/rs13020276] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
This paper focuses on the development of a miniaturized mobile mapping platform with advantages over current agricultural phenotyping systems in terms of acquiring data that facilitate under-canopy plant trait extraction. The system is based on an unmanned ground vehicle (UGV) for in-row, under-canopy data acquisition to deliver accurately georeferenced 2D and 3D products. The paper addresses three main aspects pertaining to the UGV development: (a) architecture of the UGV mobile mapping system (MMS), (b) quality assessment of acquired data in terms of georeferencing information as well as the derived 3D point cloud, and (c) ability to derive phenotypic plant traits using data acquired by the UGV MMS. The experimental results from this study demonstrate the ability of the UGV MMS to acquire dense and accurate data over agricultural fields that facilitate highly accurate plant phenotyping (better than above-canopy platforms such as unmanned aerial systems and high-clearance tractors). Plant centers and plant counts have been derived with accuracies in the 90% range.
8
Gabrlik P, Lazna T, Jilek T, Sladek P, Zalud L. An automated heterogeneous robotic system for radiation surveys: Design and field testing. J Field Robot 2021. [DOI: 10.1002/rob.22010]
Affiliation(s)
- Petr Gabrlik
- Cybernetics in Material Science Research Group, Central European Institute of Technology, Brno University of Technology, Brno, Czech Republic
- Tomas Lazna
- Cybernetics in Material Science Research Group, Central European Institute of Technology, Brno University of Technology, Brno, Czech Republic
- Tomas Jilek
- Cybernetics in Material Science Research Group, Central European Institute of Technology, Brno University of Technology, Brno, Czech Republic
- Petr Sladek
- Chemical and Radiation Defence Department, NBC Defence Institute, University of Defence, Vyskov, Czech Republic
- Ludek Zalud
- Cybernetics in Material Science Research Group, Central European Institute of Technology, Brno University of Technology, Brno, Czech Republic
9
Nazeri B, Crawford MM, Tuinstra MR. Estimating Leaf Area Index in Row Crops Using Wheel-Based and Airborne Discrete Return Light Detection and Ranging Data. Frontiers in Plant Science 2021; 12:740322. [PMID: 34912353] [PMCID: PMC8667472] [DOI: 10.3389/fpls.2021.740322]
Abstract
Leaf area index (LAI) is an important variable for characterizing plant canopy in crop models. It is traditionally defined as the total one-sided leaf area per unit ground area and is estimated by both direct and indirect methods. This paper explores the effectiveness of using light detection and ranging (LiDAR) data to estimate LAI for sorghum and maize with different treatments at multiple times during the growing season, from both a wheeled vehicle and unmanned aerial vehicles. Linear and nonlinear regression models are investigated for prediction, utilizing statistical and plant structure-based features extracted from the LiDAR point cloud data, with ground reference obtained from an in-field plant canopy analyzer (an indirect method). Based on the coefficient of determination (R2) and root mean squared error, prediction accuracy for the models ranged from ∼0.4 in the early season to ∼0.6 for sorghum, and from ∼0.5 to 0.80 for maize, from 40 days after sowing to harvest.
Affiliation(s)
- Behrokh Nazeri
- Lyles School of Civil Engineering, Purdue University, West Lafayette, IN, United States
- *Correspondence: Behrokh Nazeri,
- Melba M. Crawford
- Lyles School of Civil Engineering, Purdue University, West Lafayette, IN, United States
- Department of Agronomy, Purdue University, West Lafayette, IN, United States
10
Multi-Temporal Predictive Modelling of Sorghum Biomass Using UAV-Based Hyperspectral and LiDAR Data. Remote Sensing 2020. [DOI: 10.3390/rs12213587]
Abstract
High-throughput phenotyping using high spatial, spectral, and temporal resolution remote sensing (RS) data has become a critical part of the plant breeding chain focused on reducing the time and cost of the selection process for the “best” genotypes with respect to the trait(s) of interest. In this paper, the potential of accurate and reliable sorghum biomass prediction using visible and near infrared (VNIR) and short-wave infrared (SWIR) hyperspectral data as well as light detection and ranging (LiDAR) data acquired by sensors mounted on UAV platforms is investigated. Predictive models are developed using classical regression-based machine learning methods for nine experiments conducted during the 2017 and 2018 growing seasons at the Agronomy Center for Research and Education (ACRE) at Purdue University, Indiana, USA. The impact of the regression method, data source, timing of RS and field-based biomass reference data acquisition, and the number of samples on the prediction results are investigated. R2 values for end-of-season biomass ranged from 0.64 to 0.89 for different experiments when features from all the data sources were included. Geometry-based features derived from the LiDAR point cloud to characterize plant structure and chemistry-based features extracted from hyperspectral data provided the most accurate predictions. Evaluation of the impact of the time of data acquisition during the growing season on the prediction results indicated that although the most accurate and reliable predictions of final biomass were achieved using remotely sensed data from mid-season to end-of-season, predictions in mid-season provided adequate results to differentiate between promising varieties for selection. The analysis of variance (ANOVA) of the accuracies of the predictive models showed that both the data source and regression method are important factors for a reliable prediction; however, the data source was more important with 69% significance, versus 28% significance for the regression method.
11
Abstract
Perishable surveying, mapping, and post-disaster damage data typically require efficient and rapid field collection techniques. Such datasets permit highly detailed site investigation and characterization of civil infrastructure systems. One of the more common methods to collect, preserve, and digitally reconstruct three-dimensional scenes is the use of an unpiloted aerial system (UAS), commonly known as a drone. Onboard photographic payloads permit scene reconstruction via structure-from-motion (SfM); however, such approaches often require direct site access and survey points for accurate and verified results, which may limit their efficiency. In this paper, the impact of the number and distribution of ground control points within a UAS SfM point cloud is evaluated in terms of error. This study is primarily motivated by the need to understand how accuracy would vary if site access is limited or not possible. The focus is on two remote sensing case studies representing two different site geometries: a 0.75 by 0.50 km region of interest that contains a bridge structure, paved and gravel roadways, and vegetation, with a moderate elevation range of 24 m, and a low-volume gravel road 1.0 km in length with a modest elevation range of 9 m. While other studies have focused primarily on accuracy at discrete locations via checkpoints, this study examines the distributed errors throughout the region of interest via complementary light detection and ranging (lidar) datasets collected at the same time. Moreover, the international roughness index (IRI), a professional roadway surface standard, is quantified to demonstrate the impact of errors on roadway quality parameters. Via quantification and comparison of the differences, guidance is provided on the optimal number of ground control points required for a time-efficient remote UAS survey.
12
|
A Double Epipolar Resampling Approach to Reliable Conjugate Point Extraction for Accurate Kompsat-3/3A Stereo Data Processing. REMOTE SENSING 2020. [DOI: 10.3390/rs12182940] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Kompsat-3/3A provides along-track and across-track stereo data for accurate three-dimensional (3D) topographic mapping. Stereo data preprocessing involves conjugate point extraction, acquisition of ground control points (GCPs), rational polynomial coefficient (RPC) bias compensation, and epipolar image resampling. Applications where absolute positional accuracy is not a top priority do not require GCPs, but do require precise conjugate points from stereo images for subsequent RPC bias compensation, i.e., relative orientation. Conjugate points are extracted from the original stereo data using image-matching methods followed by a proper outlier removal process. Inaccurate matching results and potential outliers produce geometric inconsistency in the stereo data; hence, the reliability of conjugate point extraction must be improved. For this purpose, we propose applying coarse epipolar resampling using raw RPCs before conjugate point matching. We expect epipolar images, even those generated with inaccurate RPCs, to show better stereo similarity than the original images, enabling better conjugate point extraction. To this end, we carried out a quantitative analysis of conjugate point extraction performance by comparing the proposed approach, which uses coarsely epipolar-resampled images, to the traditional approach using the original stereo images. We tested along-track Kompsat-3 stereo and across-track Kompsat-3A stereo data with four well-known image-matching methods: phase correlation (PC), mutual information (MI), speeded up robust features (SURF), and the Harris detector combined with the fast retina keypoint (FREAK) descriptor (i.e., Harris). These matching methods were applied to the original stereo images and the coarsely resampled epipolar images, and the conjugate point extraction performance was investigated. Experimental results showed that the coarse epipolar image approach was very helpful for accurate conjugate point extraction, realizing highly accurate RPC refinement and sub-pixel y-parallax through fine epipolar image resampling, which was not achievable with the traditional approach. MI and PC provided the most stable results for both along-track and across-track test data with patch sizes larger than 400 pixels.
13
|
LiDAR-Aided Interior Orientation Parameters Refinement Strategy for Consumer-Grade Cameras Onboard UAV Remote Sensing Systems. REMOTE SENSING 2020. [DOI: 10.3390/rs12142268] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Unmanned aerial vehicles (UAVs) are quickly emerging as a popular platform for 3D reconstruction/modeling in various applications such as precision agriculture, coastal monitoring, and emergency management. For such applications, LiDAR and frame cameras are the two most commonly used sensors for 3D mapping of the object space. For example, point clouds for the area of interest can be directly derived from LiDAR sensors onboard UAVs equipped with integrated global navigation satellite systems and inertial navigation systems (GNSS/INS). Imagery-based mapping, on the other hand, is considered to be a cost-effective and practical option and is often conducted by generating point clouds and orthophotos using structure from motion (SfM) techniques. Mapping with photogrammetric approaches requires accurate camera interior orientation parameters (IOPs), especially when direct georeferencing is utilized. Most state-of-the-art approaches for determining/refining camera IOPs depend on ground control points (GCPs). However, establishing GCPs is expensive and labor-intensive, and more importantly, the distribution and number of GCPs are usually less than optimal to provide adequate control for determining and/or refining camera IOPs. Moreover, consumer-grade cameras with unstable IOPs have been widely used for mapping applications. Therefore, in such scenarios, where frequent camera calibration or IOP refinement is required, GCP-based approaches are impractical. To eliminate the need for GCPs, this study uses LiDAR data as a reference surface to perform in situ refinement of camera IOPs. The proposed refinement strategy is conducted in three main steps. An image-based sparse point cloud is first generated via a GNSS/INS-assisted SfM strategy. Then, LiDAR points corresponding to the resultant image-based sparse point cloud are identified through an iterative plane fitting approach and are referred to as LiDAR control points (LCPs). Finally, IOPs of the utilized camera are refined through a GNSS/INS-assisted bundle adjustment procedure using LCPs. Seven datasets over two study sites with a variety of geomorphic features are used to evaluate the performance of the developed strategy. The results illustrate the ability of the proposed approach to achieve an object space absolute accuracy of 3–5 cm (i.e., 5–10 times the ground sampling distance) at a 41 m flying height.