1
Lei T, Graefe J, Mayanja IK, Earles M, Bailey BN. Simulation of Automatically Annotated Visible and Multi-/Hyperspectral Images Using the Helios 3D Plant and Radiative Transfer Modeling Framework. Plant Phenomics 2024; 6:0189. [PMID: 38817960; PMCID: PMC11136674; DOI: 10.34133/plantphenomics.0189]
Abstract
Deep learning and multimodal remote and proximal sensing are widely used for analyzing plant and crop traits, but many of these deep learning models are supervised and necessitate reference datasets with image annotations. Acquiring these datasets often demands experiments that are both labor-intensive and time-consuming. Furthermore, extracting traits from remote sensing data beyond simple geometric features remains a challenge. To address these challenges, we proposed a radiative transfer modeling framework based on the Helios 3-dimensional (3D) plant modeling software designed for plant remote and proximal sensing image simulation. The framework has the capability to simulate RGB, multi-/hyperspectral, thermal, and depth cameras, and produce associated plant images with fully resolved reference labels such as plant physical traits, leaf chemical concentrations, and leaf physiological traits. Helios offers a simulated environment that enables generation of 3D geometric models of plants and soil with random variation, and specification or simulation of their properties and function. This approach differs from traditional computer graphics rendering by explicitly modeling radiation transfer physics, which provides a critical link to underlying plant biophysical processes. Results indicate that the framework is capable of generating high-quality, labeled synthetic plant images under given lighting scenarios, which can lessen or remove the need for manually collected and annotated data. Two example applications are presented that demonstrate the feasibility of using the model to enable unsupervised learning by training deep learning models exclusively with simulated images and performing prediction tasks using real images.
Affiliation(s)
- Tong Lei: Department of Plant Sciences, University of California, Davis, CA, USA
- Jan Graefe: Leibniz Institute of Vegetable and Ornamental Crops e.V. (IGZ), Großbeeren, Germany
- Ismael K. Mayanja: Department of Biological and Agricultural Engineering, University of California, Davis, CA, USA
- Mason Earles: Department of Biological and Agricultural Engineering and Department of Viticulture and Enology, University of California, Davis, CA, USA
- Brian N. Bailey: Department of Plant Sciences, University of California, Davis, CA, USA
2
Sha O, Zhang H, Bai J, Zhang Y, Yang J. The analysis of the structural parameter influences on measurement errors in a binocular 3D reconstruction system: a portable 3D system. PeerJ Comput Sci 2023; 9:e1610. [PMID: 37810332; PMCID: PMC10557943; DOI: 10.7717/peerj-cs.1610]
Abstract
This study used an analytical model to investigate the factors that affect reconstruction accuracy: the baseline length, the lens focal length, the angle between the optical axis and the baseline, and the field-of-view angle. First, theoretical expressions relating these factors to the measurement errors are derived from the binocular three-dimensional reconstruction model. Then, the structural parameters' impact on the error propagation coefficient is analyzed and simulated using MATLAB. The results show that the structural parameters significantly affect the error propagation coefficient, and a reasonable range for them is identified. When the angle between the optical axis of the binocular camera and the baseline is between 30° and 55°, the ratio of the baseline length to the focal length can be reasonably reduced. In addition, keeping the field-of-view angle below 20° reduces the error propagation coefficient. When the angle between the binocular optical axis and the baseline is between 40° and 50°, the reconstruction has the highest accuracy; moving the angle out of this range increases the reconstruction error, while keeping it between 30° and 60° keeps the error propagation coefficient in a lower range. Finally, experimental verification and simulation results show that selecting reasonable structural parameters can significantly reduce measurement errors. This study proposes a model for constructing a high-precision binocular three-dimensional reconstruction system, and a portable three-dimensional reconstruction system is built to demonstrate it.
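For context, in a classical parallel-axis stereo rig the depth error propagates from the disparity error roughly as σ_Z ≈ Z²/(f·B)·σ_d, so error grows quadratically with range and shrinks with longer baselines. The sketch below illustrates this relationship numerically; it is a minimal textbook-style illustration, not the authors' analytical model, and the baseline, focal length and disparity-error values are arbitrary assumptions.

```python
# Minimal sketch of depth-error propagation in a parallel-axis stereo rig.
# Depth from disparity: Z = f * B / d. Differentiating with respect to d gives
# sigma_Z ~= Z**2 / (f * B) * sigma_d, i.e. depth error grows quadratically with range.
# All numbers below are illustrative assumptions, not values from the paper.

def depth_error(z_m, baseline_m, focal_px, disparity_err_px):
    """Approximate 1-sigma depth error at range z_m, with focal length in pixels."""
    return (z_m ** 2) / (focal_px * baseline_m) * disparity_err_px

if __name__ == "__main__":
    for z in (2.0, 5.0, 10.0):
        err = depth_error(z, baseline_m=0.3, focal_px=2500.0, disparity_err_px=0.5)
        print(f"range {z:4.1f} m -> depth error ~ {err * 100:.1f} cm")
```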
Affiliation(s)
- Ou Sha: Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, China; University of Chinese Academy of Sciences, Beijing, China
- Hongyu Zhang: Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, China
- Jing Bai: Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, China
- Yaoyu Zhang: Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, China
3
Jasińska A, Pyka K, Pastucha E, Midtiby HS. A Simple Way to Reduce 3D Model Deformation in Smartphone Photogrammetry. Sensors (Basel) 2023; 23:728. [PMID: 36679525; PMCID: PMC9860635; DOI: 10.3390/s23020728]
Abstract
Recently, the term smartphone photogrammetry has gained popularity, suggesting that photogrammetry may become a simple measurement tool available to virtually every smartphone user. This research was undertaken to clarify whether it is appropriate to use the Structure from Motion–Multi-View Stereo (SfM-MVS) procedure with self-calibration, as is done in uncrewed aerial vehicle photogrammetry. First, the geometric stability of smartphone cameras was tested: fourteen smartphones were calibrated on a checkerboard test field, and the process was repeated multiple times. Two observations were made: (1) most smartphone cameras have lower stability of the interior orientation parameters than a digital single-lens reflex (DSLR) camera, and (2) the principal distance and the position of the principal point are constantly changing. Then, based on images from two selected smartphones, 3D models of a small sculpture were developed using the SfM-MVS method in both self-calibration and pre-calibration variants. Comparing the resulting models with the reference DSLR-based model showed that using the calibration obtained on the test field, instead of self-calibration, improves the geometry of the 3D models; in particular, deformations of local concavities and convexities decreased. In conclusion, there is real potential in smartphone photogrammetry, but it also has its limits.
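As a point of reference, test-field pre-calibration of the kind described here can be reproduced with OpenCV's standard checkerboard routine. The sketch below is a generic illustration under assumed board dimensions and file paths, not the calibration pipeline used in the paper.

```python
# Minimal checkerboard pre-calibration sketch with OpenCV (generic, not the paper's pipeline).
# Assumes smartphone images of a 9x6 inner-corner checkerboard stored as ./calib/*.jpg.
import glob
import cv2
import numpy as np

pattern = (9, 6)                                  # inner corners per row/column (assumption)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # board plane, unit squares

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib/*.jpg"):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Interior orientation: camera matrix (principal distance, principal point) plus distortion.
rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, image_size, None, None)
print("reprojection RMS [px]:", rms)
print("camera matrix:\n", K)
print("distortion coefficients:", dist.ravel())
```

Repeating this over several sessions and comparing the recovered principal distance and principal point is one simple way to probe the geometric stability the study discusses.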
Affiliation(s)
- Aleksandra Jasińska: Faculty of Geo-Data Science, Geodesy, and Environmental Engineering, AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Cracow, Poland
- Krystian Pyka: Faculty of Geo-Data Science, Geodesy, and Environmental Engineering, AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Cracow, Poland
- Elżbieta Pastucha: UAS Center, The Maersk Mc-Kinney Moller Institute, University of Southern Denmark, Campusvej 55, 5230 Odense, Denmark
- Henrik Skov Midtiby: UAS Center, The Maersk Mc-Kinney Moller Institute, University of Southern Denmark, Campusvej 55, 5230 Odense, Denmark
4
Seven Different Lighting Conditions in Photogrammetric Studies of a 3D Urban Mock-Up. Energies 2021. [DOI: 10.3390/en14238002]
Abstract
One of the most important elements of a photogrammetric study is appropriate lighting of the object or area under investigation. Nevertheless, the concept of "adequate lighting" is relative. We therefore attempted, based on an experimental proof of concept (technology readiness level 3, TRL3), to verify the impact of various types of lighting emitted by LED light sources used for scene illumination, and their direct influence on the quality of the photogrammetric study of a 3D urban mock-up. An important part of this study was the measurement and evaluation of the artificial light sources used, based on illuminance (E), correlated colour temperature (CCT), colour rendering index (CRI) and spectral power distribution (SPD), together with the evaluation of the obtained point clouds (seven photogrammetric products of the same object, developed for seven different lighting conditions). The overall quality measures of the photogrammetric studies were compared. Additionally, we determined seventeen features describing the group of tie-points in the vicinity of each F-point for each type of study; these features relate to the number of tie-points in the vicinity, their luminosities and their spectral characteristics for each of the colours (red, green, blue). The dependencies between the identified features and the obtained XYZ total error were verified, and the possibility of detecting F-points depending on their luminosity was also analysed. The obtained results can be important in developing a photogrammetric method for urban lighting monitoring or in selecting additional lighting for objects that are the subject of a short-range photogrammetric study.
5
Camera Self-Calibration with GNSS Constrained Bundle Adjustment for Weakly Structured Long Corridor UAV Images. Remote Sensing 2021. [DOI: 10.3390/rs13214222]
Abstract
Camera self-calibration determines the precision and robustness of AT (aerial triangulation) for UAV (unmanned aerial vehicle) images. UAV images collected along long transmission-line corridors constitute critical configurations, which may lead to the "bowl effect" when camera self-calibration is used. To solve such problems, traditional methods rely on more than three GCPs (ground control points), whereas this study designs a new self-calibration method that uses only one GCP. First, existing camera distortion models are grouped into two categories, physical and mathematical models, and their mathematical formulations are described in detail. Second, within an incremental SfM (Structure from Motion) framework, a camera self-calibration method is designed that combines strategies for initializing the camera distortion parameters and for fusing high-precision GNSS (Global Navigation Satellite System) observations. The former is achieved with an iterative optimization algorithm that progressively refines the camera parameters; the latter is implemented through inequality-constrained BA (bundle adjustment). Finally, using four UAV datasets collected from two sites with two data acquisition modes, the proposed algorithm is comprehensively analyzed and verified. The experimental results demonstrate that the proposed method can dramatically alleviate the "bowl effect" of self-calibration for weakly structured long corridor UAV images, and the horizontal and vertical accuracy can reach 0.04 m and 0.05 m, respectively, when using one GCP. In addition, compared with open-source and commercial software, the proposed method achieves competitive or better performance.
6
Roncella R, Forlani G. UAV Block Geometry Design and Camera Calibration: A Simulation Study. Sensors (Basel) 2021; 21:6090. [PMID: 34577297; PMCID: PMC8473092; DOI: 10.3390/s21186090]
Abstract
Acknowledged guidelines and standards, such as those that formerly governed project planning in analogue aerial photogrammetry, are still missing in UAV photogrammetry. The reasons are many, from the great variety of project goals to the number of parameters involved: camera features, flight plan design, block control and georeferencing options, Structure from Motion settings, etc. Above all, perhaps, stands camera calibration, with the alternative between pre-calibration and on-the-job approaches. In this paper we present a Monte Carlo simulation study in which the accuracy of the estimated camera parameters and tie-point ground coordinates is evaluated as a function of various project parameters. A set of synthetic UAV (unmanned aerial vehicle) photogrammetric blocks was generated by varying terrain shape, surveyed area shape, block control (ground and aerial), strip type (longitudinal, cross and oblique), and image observation and control data precision, considering 144 combinations in total with on-the-job self-calibration. Bias in ground coordinates (the dome effect) due to inaccurate pre-calibration has also been investigated. Under the test scenario, the accuracy gap between different block configurations can be close to an order of magnitude. Oblique imaging is confirmed as a key requisite in flat terrain, while ground control density is not. Aerial control by accurate camera station positions is overall more accurate and efficient than GCPs in flat terrain.
7
Canopy Volume Extraction of Citrus reticulate Blanco cv. Shatangju Trees Using UAV Image-Based Point Cloud Deep Learning. Remote Sensing 2021. [DOI: 10.3390/rs13173437]
Abstract
Automatic acquisition of the canopy volume parameters of Citrus reticulate Blanco cv. Shatangju trees is of great significance for precision management of the orchard. This research combined a point cloud deep learning algorithm with volume calculation algorithms to segment the canopies of Citrus reticulate Blanco cv. Shatangju trees. The 3D (three-dimensional) point cloud model of a Citrus reticulate Blanco cv. Shatangju orchard was generated from UAV tilt photogrammetry images. The segmentation performance of three deep learning models, PointNet++, MinkowskiNet and FPConv, on Shatangju trees and the ground was compared, and three volume algorithms, convex hull by slices, a voxel-based method and the 3D convex hull, were applied to calculate the volume of the Shatangju trees. Model accuracy was evaluated using the coefficient of determination (R2) and the root mean square error (RMSE). The results show that the overall accuracy of the MinkowskiNet model (94.57%) is higher than that of the other two models, indicating the best segmentation performance. The 3D convex hull algorithm achieved the highest R2 (0.8215) and the lowest RMSE (0.3186 m3) for the canopy volume calculation, and therefore best reflects the real volume of Citrus reticulate Blanco cv. Shatangju trees. The proposed method enables rapid and automatic acquisition of the canopy volume of Citrus reticulate Blanco cv. Shatangju trees.
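To make the volume step concrete, the sketch below computes a canopy volume from a segmented point cloud with a 3D convex hull, one of the three algorithms compared, alongside a simple voxel-count alternative. It assumes the tree points are already segmented into an N×3 array and is an illustrative sketch, not the authors' implementation; the placeholder cloud and voxel size are assumptions.

```python
# Sketch: canopy volume of one segmented tree via a 3D convex hull (illustrative only).
# Assumes `tree_xyz` is an (N, 3) array of canopy points in metres, already separated from ground.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
tree_xyz = rng.normal(scale=1.0, size=(2000, 3))   # placeholder cloud; replace with real canopy points

hull = ConvexHull(tree_xyz)
print(f"3D convex hull volume: {hull.volume:.3f} m^3")

# Simple voxel-based alternative (second method mentioned in the abstract):
voxel = 0.1                                        # voxel edge length in metres (assumption)
occupied = np.unique(np.floor(tree_xyz / voxel).astype(int), axis=0)
print(f"voxel-based volume:   {occupied.shape[0] * voxel ** 3:.3f} m^3")
```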
8
Influence of Spatial Resolution for Vegetation Indices' Extraction Using Visible Bands from Unmanned Aerial Vehicles' Orthomosaics Datasets. Remote Sensing 2021. [DOI: 10.3390/rs13163238]
Abstract
The consolidation of unmanned aerial vehicle (UAV) photogrammetric techniques for campaigns at high and medium observation scales has triggered the development of new application areas. Most of these vehicles are equipped with common visible-band sensors capable of mapping areas of interest at various spatial resolutions. It is often necessary to identify vegetated areas for masking purposes during the postprocessing phase, excluding them from digital elevation model (DEM) generation or from change detection. However, vegetation is usually extracted using sensors that capture the near-infrared part of the spectrum, which visible-band (RGB) cameras cannot record. In this study, after reviewing different visible-band vegetation indices in various environments using different UAV technology, the influence of the spatial resolution of orthomosaics generated by photogrammetric processes on vegetation extraction was examined. The triangular greenness index (TGI) provided a high level of separability between vegetation and non-vegetation areas for all case studies at any spatial resolution. The efficiency of the indices remained fundamentally linked to the context of the scenario under investigation, and the correlation between spatial resolution and index effectiveness was found to be more complex than might be trivially assumed.
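For reference, the triangular greenness index can be computed directly from the red, green and blue bands; a commonly used broadband approximation is TGI = G − 0.39·R − 0.61·B. The sketch below applies that approximation to an RGB orthomosaic array and thresholds it into a vegetation mask; it is a generic illustration with an assumed threshold, not the study's processing chain.

```python
# Sketch: triangular greenness index (TGI) on an RGB orthomosaic (illustrative only).
# Uses the common broadband approximation TGI = G - 0.39*R - 0.61*B; the 0.05 threshold
# below is an arbitrary placeholder, not a value from the paper.
import numpy as np

def tgi(rgb):
    """rgb: (H, W, 3) float array scaled to [0, 1]; returns the TGI map."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return g - 0.39 * r - 0.61 * b

def vegetation_mask(rgb, threshold=0.05):
    """Binary vegetation mask from the TGI map."""
    return tgi(rgb) > threshold

if __name__ == "__main__":
    demo = np.random.rand(4, 4, 3)          # placeholder orthomosaic tile
    print(vegetation_mask(demo).astype(int))
```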
9
Polymodal Method of Improving the Quality of Photogrammetric Images and Models. Energies 2021. [DOI: 10.3390/en14123457]
Abstract
Photogrammetry using unmanned aerial vehicles has become very popular and is already in common use. The most frequent photogrammetric products are an orthoimage, a digital terrain model and a 3D object model. When executing measurement flights, lighting conditions may be unsuitable and the flight itself may be fast and not very stable. As a result, noise and blur appear in the images, and the images themselves can have too low a resolution to satisfy the quality requirements for a photogrammetric product. In such cases, the obtained images are useless or significantly reduce the quality of the end product of low-level photogrammetry. A new polymodal method of improving measurement image quality has been proposed to avoid such issues. The method discussed in this article removes degrading factors from the images and, as a consequence, improves the geometric and interpretative quality of the photogrammetric product. The author analyzed 17 different image degradation cases, developed 34 models based on degraded and recovered images, and conducted an objective analysis of the quality of the recovered images and models. The result was a significant improvement in the interpretative quality of the images themselves and a model with better geometry.
10
An Optimal Image–Selection Algorithm for Large-Scale Stereoscopic Mapping of UAV Images. Remote Sensing 2021. [DOI: 10.3390/rs13112118]
Abstract
Recently, the mapping industry has been focusing on the possibility of large-scale mapping from unmanned aerial vehicles (UAVs), owing to advantages such as easy operation and cost reduction. In order to produce large-scale maps from UAV images, it is important to obtain precise orientation parameters and to analyze the sharpness of the images themselves through image analysis. Various techniques have been developed for this and are included in most commercial UAV image processing software. For mapping, it is equally important to select images that can cover a region of interest (ROI) with the fewest possible images. Otherwise, one may have to handle too many images to map the ROI, and commercial software neither provides the information needed to select images nor explicitly explains how to select them for mapping. For these reasons, stereo mapping of UAV images in particular is time-consuming and costly. To solve these problems, this study proposes a method to select images intelligently: a minimum number of image pairs to cover the ROI with the fewest possible images, or optimal image pairs to cover the ROI with the most accurate stereo pairs. We group images by strips, generate the initial image pairs, and then apply an intelligent scheme to iteratively select optimal image pairs from the start to the end of each image strip. According to the experimental results, the number of images selected is greatly reduced by applying the proposed optimal image-selection algorithm. The selected image pairs produce a dense 3D point cloud over the ROI without any holes. For stereoscopic plotting, the selected image pairs mapped the ROI successfully on a digital photogrammetric workstation (DPW), and a digital map covering the ROI was generated. The proposed method should contribute to time and cost reductions in UAV mapping.
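The core selection idea, covering the ROI with as few stereo pairs as possible, can be framed as a greedy coverage problem. The sketch below shows that framing over simplified rectangular footprints on a coarse grid; it is a schematic reading of the approach, not the authors' per-strip algorithm, and the footprint geometry is an assumption.

```python
# Sketch: greedy selection of stereo pairs to cover an ROI, using simplified rectangular
# footprints on a coarse grid. Schematic only; not the paper's per-strip algorithm.
import numpy as np

def pair_footprint_mask(grid_shape, footprint):
    """footprint = (row0, row1, col0, col1) of the pair's ground coverage on the grid."""
    mask = np.zeros(grid_shape, dtype=bool)
    r0, r1, c0, c1 = footprint
    mask[r0:r1, c0:c1] = True
    return mask

def greedy_cover(roi_mask, candidate_footprints):
    """Pick pairs that add the most uncovered ROI cells until the ROI is covered."""
    covered = np.zeros_like(roi_mask)
    selected = []
    while not np.array_equal(covered & roi_mask, roi_mask):
        gains = [np.count_nonzero(roi_mask & ~covered & pair_footprint_mask(roi_mask.shape, fp))
                 for fp in candidate_footprints]
        best = int(np.argmax(gains))
        if gains[best] == 0:            # remaining ROI cannot be covered by any candidate
            break
        selected.append(best)
        covered |= pair_footprint_mask(roi_mask.shape, candidate_footprints[best])
    return selected

roi = np.ones((10, 10), dtype=bool)
candidates = [(0, 6, 0, 6), (4, 10, 4, 10), (0, 10, 3, 7), (2, 8, 0, 10)]
print("selected pair indices:", greedy_cover(roi, candidates))
```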
11
Digital Terrain Models Generated with Low-Cost UAV Photogrammetry: Methodology and Accuracy. ISPRS International Journal of Geo-Information 2021. [DOI: 10.3390/ijgi10050285]
Abstract
Digital terrain model (DTM) generation is essential for recreating terrain morphology once the external elements are removed. Traditional survey methods are still used to collect accurate geographic data on the land surface. Given the emergence of unmanned aerial vehicles (UAVs) equipped with low-cost digital cameras and better photogrammetric methods for digital mapping, efficient approaches are necessary to allow rapid land surveys with high accuracy. This paper provides a review, complemented by the authors' experience, of the UAV photogrammetric process and field survey parameters for DTM generation using popular commercial photogrammetric software to process images obtained with fixed-wing or multicopter UAVs. We analyzed the quality and accuracy of the DTMs based on four categories: (i) the UAV system (UAV platform and camera); (ii) flight planning and image acquisition (flight altitude, image overlap, UAV speed, orientation of the flight lines, camera configuration, and georeferencing); (iii) photogrammetric DTM generation (software, image alignment, dense point cloud generation, and ground filtering); and (iv) geomorphology and land use/cover. For flat terrain, UAV photogrammetry provided a horizontal root mean square error (RMSE) between 1 and 3 × the ground sample distance (GSD) and a vertical RMSE between 1 and 4.5 × GSD; for complex topography, it provided a horizontal RMSE between 1 and 7 × GSD and a vertical RMSE between 1.5 and 5 × GSD. Finally, we stress that UAV photogrammetry can provide DTMs with high accuracy when the variables of the photogrammetric process are optimized.
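Since the accuracy figures are reported as multiples of the ground sample distance, a quick way to place one's own survey on the same scale is to compute check-point RMSEs and divide by the GSD. The sketch below does exactly that with placeholder check-point arrays and an assumed GSD, not data from the review.

```python
# Sketch: express horizontal and vertical check-point RMSE as multiples of the GSD.
# `measured` and `reference` are placeholder (N, 3) arrays of XYZ check-point coordinates in metres.
import numpy as np

def rmse_vs_gsd(measured, reference, gsd_m):
    diff = measured - reference
    rmse_xy = np.sqrt(np.mean(np.sum(diff[:, :2] ** 2, axis=1)))   # horizontal RMSE
    rmse_z = np.sqrt(np.mean(diff[:, 2] ** 2))                     # vertical RMSE
    return rmse_xy / gsd_m, rmse_z / gsd_m

rng = np.random.default_rng(1)
reference = rng.uniform(0, 100, size=(20, 3))
measured = reference + rng.normal(scale=[0.03, 0.03, 0.05], size=(20, 3))  # assumed 3/5 cm noise
h, v = rmse_vs_gsd(measured, reference, gsd_m=0.025)                        # assumed 2.5 cm GSD
print(f"horizontal RMSE ~ {h:.1f} x GSD, vertical RMSE ~ {v:.1f} x GSD")
```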
12
Abstract
Existing reinforced concrete (RC) bridges designed in the decades between 1950 and 1990 exhibit inadequate structural safety with respect to both traffic loads and hazard conditions. Competent authorities are planning extensive inspections to collect data about these structures and to address retrofit interventions. In this context, Remotely Piloted Aircraft Systems (RPASs) represent an opportunity to facilitate in situ inspections, reducing time, cost and risk for the operators. A practice-oriented methodology to perform RPAS-based surveys is described, followed by a workflow for an in situ RPAS inspection oriented towards photogrammetric data extraction. With the aim of connecting the advantages of RPAS technologies to the seismic risk assessment of bridges, a simplified mechanics-based procedure is described, oriented to mapping structural risk in road networks and supporting prioritization strategies. A six-span RC bridge of the Basilicata road network, representing a typical Italian bridge typology, is selected to illustrate the operating steps of the RPAS inspection and of the simplified seismic risk assessment approach.
13
LiDAR-Aided Interior Orientation Parameters Refinement Strategy for Consumer-Grade Cameras Onboard UAV Remote Sensing Systems. Remote Sensing 2020. [DOI: 10.3390/rs12142268]
Abstract
Unmanned aerial vehicles (UAVs) are quickly emerging as a popular platform for 3D reconstruction/modeling in various applications such as precision agriculture, coastal monitoring, and emergency management. For such applications, LiDAR and frame cameras are the two most commonly used sensors for 3D mapping of the object space. For example, point clouds for the area of interest can be directly derived from LiDAR sensors onboard UAVs equipped with integrated global navigation satellite systems and inertial navigation systems (GNSS/INS). Imagery-based mapping, on the other hand, is considered to be a cost-effective and practical option and is often conducted by generating point clouds and orthophotos using structure from motion (SfM) techniques. Mapping with photogrammetric approaches requires accurate camera interior orientation parameters (IOPs), especially when direct georeferencing is utilized. Most state-of-the-art approaches for determining/refining camera IOPs depend on ground control points (GCPs). However, establishing GCPs is expensive and labor-intensive, and more importantly, the distribution and number of GCPs are usually less than optimal to provide adequate control for determining and/or refining camera IOPs. Moreover, consumer-grade cameras with unstable IOPs have been widely used for mapping applications. Therefore, in such scenarios, where frequent camera calibration or IOP refinement is required, GCP-based approaches are impractical. To eliminate the need for GCPs, this study uses LiDAR data as a reference surface to perform in situ refinement of camera IOPs. The proposed refinement strategy is conducted in three main steps. An image-based sparse point cloud is first generated via a GNSS/INS-assisted SfM strategy. Then, LiDAR points corresponding to the resultant image-based sparse point cloud are identified through an iterative plane fitting approach and are referred to as LiDAR control points (LCPs). Finally, IOPs of the utilized camera are refined through a GNSS/INS-assisted bundle adjustment procedure using LCPs. Seven datasets over two study sites with a variety of geomorphic features are used to evaluate the performance of the developed strategy. The results illustrate the ability of the proposed approach to achieve an object space absolute accuracy of 3–5 cm (i.e., 5–10 times the ground sampling distance) at a 41 m flying height.
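The LCP identification step rests on fitting local planes to the LiDAR neighbourhood of each image-derived point. The sketch below shows a least-squares plane fit with a point-to-plane distance test of that general kind; it is a generic illustration, not the authors' iterative procedure, and the acceptance threshold and synthetic patches are assumptions.

```python
# Sketch: accept a LiDAR neighbourhood as a planar "LiDAR control point" candidate if its points
# fit a least-squares plane within a tolerance. Generic illustration; thresholds are assumptions.
import numpy as np

def fit_plane(points):
    """Least-squares plane through (N, 3) points: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The smallest right singular vector of the centred points is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, vt[-1]

def is_planar_lcp(neighbourhood, max_rms=0.05):
    """True if the point-to-plane RMS distance is below max_rms (metres, assumed threshold)."""
    centroid, normal = fit_plane(neighbourhood)
    dist = (neighbourhood - centroid) @ normal
    return np.sqrt(np.mean(dist ** 2)) < max_rms

rng = np.random.default_rng(2)
flat_patch = np.c_[rng.uniform(0, 1, (100, 2)), rng.normal(scale=0.01, size=100)]
bumpy_patch = np.c_[rng.uniform(0, 1, (100, 2)), rng.normal(scale=0.3, size=100)]
print(is_planar_lcp(flat_patch), is_planar_lcp(bumpy_patch))   # expect True, False
```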