1
Dai Y, Yu S, Ma T, Ding J, Chen K, Zeng G, Xie A, He P, Peng S, Zhang M. Improving the estimation of rice above-ground biomass based on spatio-temporal UAV imagery and phenological stages. FRONTIERS IN PLANT SCIENCE 2024; 15:1328834. PMID: 38774220; PMCID: PMC11106403; DOI: 10.3389/fpls.2024.1328834.
Abstract
Introduction: Unmanned aerial vehicles (UAVs) equipped with visible and multispectral cameras provide reliable and efficient methods for remote crop monitoring and above-ground biomass (AGB) estimation in rice fields. However, existing research predominantly estimates AGB from canopy spectral features, or at most incorporates plant height (PH) as a parameter, giving insufficient consideration to the spatial structure and phenological stages of rice. In this study, a novel method was introduced that fully considers the three-dimensional growth dynamics of rice, integrating both horizontal (canopy cover, CC) and vertical (PH) aspects of canopy development and accounting for the growing days of rice.
Methods: To investigate the synergistic effects of combining spectral, spatial, and temporal parameters, both small-scale plot experiments and large-scale field testing were conducted in Jiangsu Province, China, from 2021 to 2022. Twenty vegetation indices (VIs) were used as spectral features, PH and CC as spatial parameters, and days after transplanting (DAT) as a temporal parameter. AGB estimation models were built with five regression methods (MSR, ENet, PLSR, RF, and SVR) using six feature combinations (VIs, PH+CC, PH+CC+DAT, VIs+PH+CC, VIs+DAT, VIs+PH+CC+DAT).
Results: The results showed a strong correlation between extracted and ground-measured PH (R2 = 0.89, RMSE = 5.08 cm). Furthermore, VIs, PH, and CC exhibited strong correlations with AGB from the mid-tillering to flowering stages. On the plot data, the best AGB estimates during these stages came from the PLSR model with VIs and DAT as inputs (R2 = 0.88, RMSE = 1111 kg/ha, NRMSE = 9.76%) and with VIs, PH, CC, and DAT all as inputs (R2 = 0.88, RMSE = 1131 kg/ha, NRMSE = 9.94%). For the field sampling data, the ENet model combined with different feature inputs gave the best estimates (error = 0.6%-13.5%), demonstrating excellent practical applicability.
Discussion: Model evaluation and feature-importance ranking demonstrated that augmenting VIs with temporal and spatial parameters significantly enhanced AGB estimation accuracy. In summary, fusing spectral and spatio-temporal features strengthened the physical grounding of the AGB estimation models and showed great potential for accurate rice AGB estimation during the main phenological stages.
Affiliation(s)
- Yan Dai
- College of Agricultural Science and Engineering, Hohai University, Nanjing, China
- Jiangsu Province Engineering Research Center for Agricultural Soil-Water Efficient Utilization, Carbon Sequestration and Emission Reduction, Nanjing, China
- Shuang’en Yu
- College of Agricultural Science and Engineering, Hohai University, Nanjing, China
- Jiangsu Province Engineering Research Center for Agricultural Soil-Water Efficient Utilization, Carbon Sequestration and Emission Reduction, Nanjing, China
- Tao Ma
- College of Agricultural Science and Engineering, Hohai University, Nanjing, China
- Jiangsu Province Engineering Research Center for Agricultural Soil-Water Efficient Utilization, Carbon Sequestration and Emission Reduction, Nanjing, China
- Jihui Ding
- College of Agricultural Science and Engineering, Hohai University, Nanjing, China
- Jiangsu Province Engineering Research Center for Agricultural Soil-Water Efficient Utilization, Carbon Sequestration and Emission Reduction, Nanjing, China
- Kaiwen Chen
- College of Agricultural Science and Engineering, Hohai University, Nanjing, China
- Jiangsu Province Engineering Research Center for Agricultural Soil-Water Efficient Utilization, Carbon Sequestration and Emission Reduction, Nanjing, China
- Guangquan Zeng
- College of Agricultural Science and Engineering, Hohai University, Nanjing, China
- Airong Xie
- College of Agricultural Science and Engineering, Hohai University, Nanjing, China
- Pingru He
- College of Agricultural Science and Engineering, Hohai University, Nanjing, China
- Suhan Peng
- College of Agricultural Science and Engineering, Hohai University, Nanjing, China
- Mengxi Zhang
- College of Innovation and Entrepreneurship, Hunan Polytechnic of Water Resources and Electric Power, Changsha, China
2
Pugh NA, Young A, Ojha M, Emendack Y, Sanchez J, Xin Z, Puppala N. Yield prediction in a peanut breeding program using remote sensing data and machine learning algorithms. FRONTIERS IN PLANT SCIENCE 2024; 15:1339864. PMID: 38444530; PMCID: PMC10912196; DOI: 10.3389/fpls.2024.1339864.
Abstract
Peanut is a critical food crop worldwide, and the development of high-throughput phenotyping techniques is essential for enhancing the crop's genetic gain rate. Given the obvious challenges of directly estimating peanut yields through remote sensing, an approach that utilizes above-ground phenotypes to estimate underground yield is necessary. To that end, this study leveraged unmanned aerial vehicles (UAVs) for high-throughput phenotyping of surface traits in peanut. Using a diverse set of peanut germplasm planted in 2021 and 2022, UAV flight missions were repeatedly conducted to capture image data that were used to construct high-resolution multitemporal sigmoidal growth curves based on apparent characteristics, such as canopy cover and canopy height. Latent phenotypes extracted from these growth curves and their first derivatives informed the development of advanced machine learning models, specifically random forest and eXtreme Gradient Boosting (XGBoost), to estimate yield in the peanut plots. The random forest model exhibited exceptional predictive accuracy (R2 = 0.93), while XGBoost was also reasonably effective (R2 = 0.88). When using confusion matrices to evaluate the classification abilities of each model, the two models proved valuable in a breeding pipeline, particularly for filtering out underperforming genotypes. In addition, the random forest model excelled in identifying top-performing material while minimizing Type I and Type II errors. Overall, these findings underscore the potential of machine learning models, especially random forests and XGBoost, in predicting peanut yield and improving the efficiency of peanut breeding programs.
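The pipeline described above (sigmoidal growth curves → latent phenotypes → random forest) can be sketched on synthetic data. The curve parameterization and the yield relationship below are invented for illustration, not taken from the study.

```python
# Sketch: fit a logistic growth curve to multitemporal canopy-cover data,
# extract latent phenotypes (asymptote K, rate r, inflection day t0),
# and feed them to a random forest yield model. Synthetic plots.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.ensemble import RandomForestRegressor

def logistic(t, K, r, t0):
    """Sigmoidal growth: K = upper asymptote, r = rate, t0 = inflection day."""
    return K / (1.0 + np.exp(-r * (t - t0)))

rng = np.random.default_rng(1)
days = np.arange(10, 120, 7, dtype=float)   # assumed UAV flight dates
plots, yields = [], []
for _ in range(60):                         # 60 synthetic plots
    K, r, t0 = rng.uniform(0.7, 1.0), rng.uniform(0.08, 0.2), rng.uniform(40, 70)
    cc = logistic(days, K, r, t0) + rng.normal(0, 0.02, days.size)
    (K_hat, r_hat, t0_hat), _ = curve_fit(logistic, days, cc, p0=[0.9, 0.1, 55])
    plots.append([K_hat, r_hat, t0_hat])    # latent phenotypes
    yields.append(5.0 * K + 10.0 * r - 0.01 * t0 + rng.normal(0, 0.1))

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(np.array(plots), np.array(yields))
print(round(rf.score(np.array(plots), np.array(yields)), 2))
```

The first derivative of the fitted curve (peak growth rate and its timing) would yield additional latent phenotypes in the same spirit.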
Affiliation(s)
- N. Ace Pugh
- United States Department of Agriculture, Crop Stress Research Laboratory, Lubbock, TX, United States
- Andrew Young
- United States Department of Agriculture, Crop Stress Research Laboratory, Lubbock, TX, United States
- Manisha Ojha
- Agricultural Science Center at Clovis, New Mexico State University, Clovis, NM, United States
- Yves Emendack
- United States Department of Agriculture, Crop Stress Research Laboratory, Lubbock, TX, United States
- Jacobo Sanchez
- United States Department of Agriculture, Crop Stress Research Laboratory, Lubbock, TX, United States
- Zhanguo Xin
- United States Department of Agriculture, Crop Stress Research Laboratory, Lubbock, TX, United States
- Naveen Puppala
- Agricultural Science Center at Clovis, New Mexico State University, Clovis, NM, United States
3
Bi L, Wally O, Hu G, Tenuta AU, Kandel YR, Mueller DS. A transformer-based approach for early prediction of soybean yield using time-series images. FRONTIERS IN PLANT SCIENCE 2023; 14:1173036. PMID: 37409295; PMCID: PMC10319415; DOI: 10.3389/fpls.2023.1173036.
Abstract
Crop yield prediction, which provides critical information for management decision-making, is of significant importance in precision agriculture. Traditional manual inspection and calculation are often laborious and time-consuming. For yield prediction from high-resolution images, existing methods, e.g., convolutional neural networks, struggle to model long-range, multi-level dependencies across image regions. This paper proposes a transformer-based approach for yield prediction using early-stage images and seed information. First, each original image is segmented into plant and soil categories. Two vision transformer (ViT) modules are designed to extract features from each category. A transformer module then handles the time-series features. Finally, the image features and seed features are combined to estimate the yield. A case study was conducted using a dataset collected during the 2020 soybean-growing season in Canadian fields. Compared with other baseline models, the proposed method reduces the prediction error by more than 40%. The impact of seed information on predictions is studied both between models and within a single model. The results show that the influence of seed information varies among plots but is particularly important for predicting low yields.
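The key ingredient here, attention across the temporal sequence of image features, can be illustrated with a minimal NumPy sketch: a single self-attention layer without learned projections, which lets every time step attend to every other (the long-range dependency that plain CNNs struggle with). The shapes are assumptions, not the paper's architecture.

```python
# Sketch: scaled dot-product self-attention over per-date image features.
import numpy as np

def self_attention(X):
    """X: (T, d) time-series features -> (T, d) attended features."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                  # (T, T) pairwise similarity
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over time steps
    return weights @ X                             # weighted mix across dates

T, d = 8, 16                                       # 8 flight dates, 16-dim features
X = np.random.default_rng(2).normal(size=(T, d))
out = self_attention(X)
print(out.shape)
```

A full transformer block adds learned query/key/value projections, multiple heads, and a feed-forward layer on top of this core operation.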
Affiliation(s)
- Luning Bi
- Department of Industrial and Manufacturing Systems Engineering, Iowa State University, Ames, IA, United States
- Owen Wally
- Agriculture and Agri-Food Canada, Harrow Research and Development Centre, Harrow, ON, Canada
- Guiping Hu
- Department of Industrial and Manufacturing Systems Engineering, Iowa State University, Ames, IA, United States
- Albert U. Tenuta
- Ontario Ministry of Agriculture, Food and Rural Affairs, Ridgetown, ON, Canada
- Yuba R. Kandel
- Department of Plant Pathology and Microbiology, Iowa State University, Ames, IA, United States
- Daren S. Mueller
- Department of Plant Pathology and Microbiology, Iowa State University, Ames, IA, United States
4
Sakeef N, Scandola S, Kennedy C, Lummer C, Chang J, Uhrig RG, Lin G. Machine learning classification of plant genotypes grown under different light conditions through the integration of multi-scale time-series data. Comput Struct Biotechnol J 2023; 21:3183-3195. PMID: 37333861; PMCID: PMC10275741; DOI: 10.1016/j.csbj.2023.05.005.
Abstract
To mitigate the effects of a changing climate, agriculture requires more effective evaluation, selection, and production of crop cultivars in order to accelerate genotype-to-phenotype connections and the selection of beneficial traits. Critically, plant growth and development are highly dependent on sunlight: light provides the energy required for photosynthesis as well as a means for plants to sense and respond to their environment as they develop. In plant analyses, machine learning and deep learning techniques have a proven ability to learn plant growth patterns, including detection of disease, plant stress, and growth, from a variety of image data. To date, however, studies have not assessed machine learning and deep learning algorithms for their ability to differentiate a large cohort of genotypes grown under several growth conditions using time-series data automatically acquired across multiple scales (daily and developmental). Here, we extensively evaluate a wide range of machine learning and deep learning algorithms for their ability to differentiate 17 well-characterized photoreceptor-deficient genotypes, differing in their light detection capabilities, grown under several different light conditions. Using precision, recall, F1-score, and accuracy, we find that a Support Vector Machine (SVM) maintains the greatest classification accuracy, while a combined ConvLSTM2D deep learning model produces the best genotype classification results across the different growth conditions. Our successful integration of time-series growth data across multiple scales, genotypes, and growth conditions sets a new foundational baseline from which more complex plant science traits can be assessed for genotype-to-phenotype connections.
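The SVM branch of the comparison above can be sketched as follows. The data are synthetic (five separable genotype classes rather than the study's 17), and the feature dimensionality stands in for flattened time-series growth measurements.

```python
# Sketch: classify genotypes from time-series growth features with an SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
n_genotypes, per_class, n_feat = 5, 40, 24   # assumed: 24 daily measurements
centers = rng.normal(scale=2.0, size=(n_genotypes, n_feat))
X = np.vstack([c + rng.normal(size=(per_class, n_feat)) for c in centers])
y = np.repeat(np.arange(n_genotypes), per_class)

Xtr, Xte, ytr, yte = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = SVC(kernel="rbf", C=10.0).fit(Xtr, ytr)
acc = accuracy_score(yte, clf.predict(Xte))
print(round(acc, 2))
```

Per-class precision, recall, and F1 (the metrics used in the paper) would come from `sklearn.metrics.classification_report` on the same predictions.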
Affiliation(s)
- Nazmus Sakeef
- Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada
- Department of Biological Sciences, University of Alberta, Edmonton, Alberta, Canada
- Sabine Scandola
- Department of Biological Sciences, University of Alberta, Edmonton, Alberta, Canada
- Curtis Kennedy
- Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada
- Department of Biological Sciences, University of Alberta, Edmonton, Alberta, Canada
- Christina Lummer
- Department of Biological Sciences, University of Alberta, Edmonton, Alberta, Canada
- Jiameng Chang
- Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada
- R. Glen Uhrig
- Department of Biological Sciences, University of Alberta, Edmonton, Alberta, Canada
- Department of Biochemistry, University of Alberta, Edmonton, Alberta, Canada
- Guohui Lin
- Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada
5
Mohd Saad NS, Neik TX, Thomas WJW, Amas JC, Cantila AY, Craig RJ, Edwards D, Batley J. Advancing designer crops for climate resilience through an integrated genomics approach. CURRENT OPINION IN PLANT BIOLOGY 2022; 67:102220. PMID: 35489163; DOI: 10.1016/j.pbi.2022.102220.
Abstract
Climate change and exponential population growth expose an immediate need to develop future crops that are highly resilient and adaptable to changing environments, in order to maintain global food security over the next decade. Rigorous selection over a long domestication history has left cultivated crops genetically disadvantaged, raising concerns about their ability to adapt to these new challenges and limiting their usefulness in breeding programmes. As a result, future crop improvement efforts must rely on integrating various genomic strategies, ranging from high-throughput sequencing to machine learning, to exploit germplasm diversity and overcome bottlenecks created by domestication, expansive multi-dimensional phenotypes, arduous breeding processes, complex traits and big data.
Affiliation(s)
- Nur Shuhadah Mohd Saad
- UWA School of Biological Sciences and the UWA Institute of Agriculture, University of Western Australia, Crawley, WA, Australia
- Ting Xiang Neik
- Sunway College Kuala Lumpur, Bandar Sunway, 47500, Selangor, Malaysia
- William J W Thomas
- UWA School of Biological Sciences and the UWA Institute of Agriculture, University of Western Australia, Crawley, WA, Australia
- Junrey C Amas
- UWA School of Biological Sciences and the UWA Institute of Agriculture, University of Western Australia, Crawley, WA, Australia
- Aldrin Y Cantila
- UWA School of Biological Sciences and the UWA Institute of Agriculture, University of Western Australia, Crawley, WA, Australia
- Ryan J Craig
- UWA School of Biological Sciences and the UWA Institute of Agriculture, University of Western Australia, Crawley, WA, Australia
- David Edwards
- UWA School of Biological Sciences and the UWA Institute of Agriculture, University of Western Australia, Crawley, WA, Australia
- Jacqueline Batley
- UWA School of Biological Sciences and the UWA Institute of Agriculture, University of Western Australia, Crawley, WA, Australia
6
Rajurkar AB, McCoy SM, Ruhter J, Mulcrone J, Freyfogle L, Leakey ADB. Installation and imaging of thousands of minirhizotrons to phenotype root systems of field-grown plants. PLANT METHODS 2022; 18:39. PMID: 35346269; PMCID: PMC8958774; DOI: 10.1186/s13007-022-00874-2.
Abstract
BACKGROUND Roots are vital to plant performance because they acquire resources from the soil and provide anchorage. However, it remains difficult to assess root system size and distribution because roots are inaccessible in the soil. Existing methods to phenotype entire root systems range from slow, often destructive methods applied to relatively small numbers of plants in the field to rapid methods that can be applied to large numbers of plants under controlled environment conditions. Much has been learned recently by extensive sampling of the root crown portion of field-grown plants, but information on large-scale genetic and environmental variation in the size and distribution of root systems in the field remains a key knowledge gap. Minirhizotrons are the only established, non-destructive technology that can address this need in a standard field trial. Prior experiments have used only modest numbers of minirhizotrons, which has limited testing to small numbers of genotypes or environmental conditions. This study addressed the need for methods to install and collect images from thousands of minirhizotrons and thereby help break the phenotyping bottleneck in the field. RESULTS Over three growing seasons, methods were developed and refined to install and collect images from up to 3038 minirhizotrons per experiment. Modifications were made to four tractors and to the hydraulic soil corers mounted on them. High-quality installation was achieved at an average rate of up to 84.4 minirhizotron tubes per tractor per day. Four commercially available minirhizotron camera systems were each transported by wheelbarrow, allowing images of mature maize root systems to be collected at an average rate of up to 65.3 tubes per day per camera. This resulted in over 300,000 images being collected in as little as 11 days for a single experiment.
CONCLUSION The scale of minirhizotron installation was increased by two orders of magnitude by simultaneously using four tractor-mounted, hydraulic soil corers with modifications to ensure high quality, rapid operation. Image collection can be achieved at the corresponding scale using commercially available minirhizotron camera systems. Along with recent advances in image analysis, these advances will allow use of minirhizotrons at unprecedented scale to address key knowledge gaps regarding genetic and environmental effects on root system size and distribution in the field.
Affiliation(s)
- Ashish B. Rajurkar
- Institute for Genomic Biology, University of Illinois at Urbana-Champaign, Urbana, IL USA
- Scott M. McCoy
- Institute for Genomic Biology, University of Illinois at Urbana-Champaign, Urbana, IL USA
- Jeremy Ruhter
- Institute for Genomic Biology, University of Illinois at Urbana-Champaign, Urbana, IL USA
- Jessica Mulcrone
- Institute for Genomic Biology, University of Illinois at Urbana-Champaign, Urbana, IL USA
- Luke Freyfogle
- Department of Plant Biology, University of Illinois at Urbana-Champaign, Urbana, IL USA
- Andrew D. B. Leakey
- Institute for Genomic Biology, University of Illinois at Urbana-Champaign, Urbana, IL USA
- Department of Plant Biology, University of Illinois at Urbana-Champaign, Urbana, IL USA
- Department of Crop Sciences, University of Illinois at Urbana-Champaign, Urbana, IL USA
7
Estimation of Above-Ground Biomass of Winter Wheat Based on Consumer-Grade Multi-Spectral UAV. REMOTE SENSING 2022. DOI: 10.3390/rs14051251.
Abstract
One of the problems of optical remote sensing of crop above-ground biomass (AGB) is that vegetation indices (VIs) often saturate from the middle to late growth stages. This study combines VIs acquired by a consumer-grade multi-spectral UAV with machine learning regression techniques to (i) determine the optimal time window for AGB estimation of winter wheat and (ii) determine the optimal combination of multi-spectral VIs and regression algorithms. UAV-based multi-spectral data and manually measured AGB of winter wheat, under five nitrogen rates, were obtained from the jointing stage until 25 days after flowering in the 2020/2021 growing season. Forty-four multi-spectral VIs were used in the linear regression (LR), partial least squares regression (PLSR), and random forest (RF) models. Results of the LR models showed that the heading stage was the most suitable stage for AGB prediction, with R2 values varying from 0.48 to 0.93. Three PLSR models based on different datasets performed differently in estimating AGB on the training dataset (R2 = 0.74~0.92, RMSE = 0.95~2.87 t/ha, MAE = 0.75~2.18 t/ha, RPD = 2.00~3.67) and the validation dataset (R2 = 0.50~0.75, RMSE = 1.56~2.57 t/ha, MAE = 1.44~2.05 t/ha, RPD = 1.45~1.89). Compared with the PLSR models, the RF models were more stable in predicting AGB on the training dataset (R2 = 0.95~0.97, RMSE = 0.58~1.08 t/ha, MAE = 0.46~0.89 t/ha, RPD = 3.95~6.35) and the validation dataset (R2 = 0.83~0.93, RMSE = 0.93~2.34 t/ha, MAE = 0.72~2.01 t/ha, RPD = 1.36~3.79). Monitoring AGB prior to flowering was found to be more effective than post-flowering. Moreover, this study demonstrates that it is feasible to estimate AGB across multiple growth stages of winter wheat by combining the optimal VIs with PLSR and RF models, which overcomes the saturation problem of individual VI-based linear regression models.
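A minimal sketch of the RF branch of this workflow, with RPD computed as the standard deviation of the observed AGB divided by the RMSE. The data are synthetic and the feature-response relationship is an assumption; only the 44-VI feature count mirrors the study.

```python
# Sketch: random forest regression of winter-wheat AGB from 44 VIs,
# reporting RMSE and RPD on a held-out split. Synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n, n_vis = 150, 44
X = rng.normal(size=(n, n_vis))
agb = 4.0 + X[:, :5].sum(axis=1) + rng.normal(0, 0.5, n)   # t/ha, synthetic

Xtr, Xte, ytr, yte = train_test_split(X, agb, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(Xtr, ytr)
pred = rf.predict(Xte)
rmse = np.sqrt(np.mean((yte - pred) ** 2))
rpd = yte.std() / rmse               # RPD > 2 is usually considered good
print(round(rmse, 2), round(rpd, 2))
```

The same split and metrics applied to a single-VI linear model would expose the saturation problem the abstract describes.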
8
Implementing Spatio-Temporal 3D-Convolution Neural Networks and UAV Time Series Imagery to Better Predict Lodging Damage in Sorghum. REMOTE SENSING 2022. DOI: 10.3390/rs14030733.
Abstract
Unmanned aerial vehicle (UAV)-based remote sensing is gaining momentum in a variety of agricultural and environmental applications. Very-high-resolution remote sensing image sets collected repeatedly throughout a crop growing season are becoming increasingly common. Analytical methods able to learn from both the spatial and temporal dimensions of the data may allow for improved estimation of crop traits, as well as of the effects of genetics and the environment on these traits. Multispectral and geometric time-series imagery was collected by UAV on 11 dates, along with ground-truth data, in a field trial of 866 genetically diverse biomass sorghum accessions. We compared the performance of Convolutional Neural Network (CNN) architectures that used image data from single dates (two spatial dimensions, 2D) versus multiple dates (two spatial dimensions + a temporal dimension, 3D) for lodging detection and severity estimation. Lodging was detected by 3D-CNN analysis of time-series imagery with 0.88 accuracy, 0.92 precision, and 0.83 recall. This outperformed the best 2D-CNN on a single date, with 0.85 accuracy, 0.84 precision, and 0.76 recall. Variation in lodging severity was estimated by the best 3D-CNN analysis with 9.4% mean absolute error (MAE), 11.9% root mean square error (RMSE), and a goodness-of-fit (R2) of 0.76. This was a significant improvement over the best 2D-CNN analysis, with 11.84% MAE, 14.91% RMSE, and 0.63 R2. The success of the improved 3D-CNN approach depended on the inclusion of "before and after" data, i.e., images collected on dates before and after the lodging event. The integration of geometric and spectral features with the 3D-CNN architecture was also key to the improved assessment of lodging severity, an important and difficult-to-assess phenomenon in bioenergy feedstocks such as biomass sorghum. This demonstrates that spatio-temporal CNN architectures based on UAV time-series imagery have significant potential to enhance plant phenotyping capabilities in crop breeding and precision agriculture applications.
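The difference between the 2D and 3D treatments of the image stack can be illustrated with plain convolution kernels. This is a toy sketch, not the trained network; the stack dimensions follow the 11 flight dates mentioned above, and the patch size is an assumption.

```python
# Sketch: a 3D kernel mixes information across dates as well as pixels,
# whereas a 2D kernel (depth 1 in time) sees one date at a time.
import numpy as np
from scipy.ndimage import convolve

T, H, W = 11, 32, 32                  # 11 flight dates, 32x32-pixel patch
stack = np.random.default_rng(5).normal(size=(T, H, W))

kernel3d = np.ones((3, 3, 3)) / 27.0  # averages a 3x3x3 spatio-temporal window
kernel2d = np.ones((1, 3, 3)) / 9.0   # averages within a single date only

feat3d = convolve(stack, kernel3d, mode="constant")
feat2d = convolve(stack, kernel2d, mode="constant")
print(feat3d.shape, feat2d.shape)
```

In a learned 3D-CNN these kernels are trainable and stacked in layers, but the temporal extent of the kernel is exactly what gives access to "before and after" information around a lodging event.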
9
Maize Yield Prediction at an Early Developmental Stage Using Multispectral Images and Genotype Data for Preliminary Hybrid Selection. REMOTE SENSING 2021. DOI: 10.3390/rs13193976.
Abstract
Assessing crop production in the field often requires breeders to wait until the end of the season to collect yield-related measurements, limiting the pace of the breeding cycle. Early prediction of crop performance can relax this constraint by allowing breeders more time to focus on the highest-performing varieties. Here, we present a multimodal deep learning model for predicting the performance of maize (Zea mays) at an early developmental stage, offering the potential to accelerate crop breeding. We employed multispectral images and eight vegetation indices, collected by an uncrewed aerial vehicle approximately 60 days after sowing, over three consecutive growing cycles (2017, 2018 and 2019). The multimodal deep learning approach was used to integrate field management and genotype information with the multispectral data, providing context for the conditions the plants experienced during the trial. Model performance was assessed using holdout data, on which the model accurately predicted yield (RMSE of 1.07 t/ha, a relative RMSE of 7.60% with respect to 16 t/ha, and an R2 score of 0.73) and identified the majority of high-yielding varieties, outperforming previously published models for early yield prediction. The inclusion of vegetation indices was important for model performance, with the normalized difference vegetation index and the green normalized difference vegetation index contributing the most. The model provides a decision support tool, identifying promising lines early in the field trial.
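One way to picture the multimodal fusion step, joining UAV-derived spectral features with categorical genotype and management context before regression, is the following sketch. A ridge regression stands in for the deep model, and all names and numbers are assumptions, not the published setup.

```python
# Sketch: fuse vegetation-index features with one-hot genotype and
# management codes, then regress yield on the combined vector.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(6)
n = 90
vi_feats = rng.normal(size=(n, 8))        # 8 vegetation indices per plot
genotype = rng.integers(0, 6, size=n)     # 6 hypothetical hybrids
management = rng.integers(0, 3, size=n)   # 3 hypothetical field treatments

geno_onehot = np.eye(6)[genotype]         # one-hot encode categorical context
mgmt_onehot = np.eye(3)[management]
X = np.hstack([vi_feats, geno_onehot, mgmt_onehot])  # fused feature vector

yield_tha = 16 + vi_feats[:, 0] + 0.5 * genotype + rng.normal(0, 0.3, n)
model = Ridge().fit(X, yield_tha)
print(X.shape, round(model.score(X, yield_tha), 2))
```

In the deep version, the image branch produces learned embeddings rather than hand-picked indices, but the concatenation-then-predict pattern is the same.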
10
Comparison of UAS-Based Structure-from-Motion and LiDAR for Structural Characterization of Short Broadacre Crops. REMOTE SENSING 2021. DOI: 10.3390/rs13193975.
Abstract
The use of small unmanned aerial system (UAS)-based structure-from-motion (SfM; photogrammetry) and LiDAR point clouds has been widely discussed in the remote sensing community. Here, we compared multiple aspects of SfM and LiDAR point clouds, collected concurrently in five UAS flights over experimental fields of a short crop (snap bean), to explore how well the SfM approach performs compared with LiDAR for crop phenotyping. The main methods include calculating cloud-to-mesh distance (C2M) maps between the preprocessed point clouds, as well as computing multiscale model-to-model cloud comparison (M3C2) distance maps between the derived digital elevation models (DEMs) and crop height models (CHMs). We also evaluated crop height and row width from the CHMs and compared them with field measurements for one of the data sets. Both SfM and LiDAR point clouds achieved an average RMSE of ~0.02 m for crop height and an average RMSE of ~0.05 m for row width. The qualitative and quantitative analyses provided proof that the SfM approach is comparable to LiDAR under the same UAS flight settings. However, its altimetric accuracy largely relied on the number and distribution of the ground control points.
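A simplified cloud-to-cloud check in the spirit of the C2M/M3C2 comparisons, using nearest-neighbor distances only. The clouds are synthetic, and the noise level merely mimics the ~2 cm accuracy reported above.

```python
# Sketch: for each SfM point, distance to its nearest LiDAR point via k-d tree.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(7)
lidar = rng.uniform(0, 10, size=(5000, 3))            # reference point cloud (m)
sfm = lidar[:2000] + rng.normal(0, 0.02, (2000, 3))   # SfM cloud, ~2 cm noise

tree = cKDTree(lidar)
dist, _ = tree.query(sfm, k=1)        # nearest-neighbor distances (m)
print(round(dist.mean(), 3))
```

The M3C2 metric refines this by measuring along locally estimated surface normals within a scale-dependent cylinder, which suppresses the bias that raw nearest-neighbor distances suffer on sloped or rough canopies.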
11
Abstract
The inference of functional vegetation traits from remotely sensed signals is key to providing efficient information for multiple plant-based applications and to solve related problems [...]