1. James C, Gu Y, Potgieter A, David E, Madec S, Guo W, Baret F, Eriksson A, Chapman S. From Prototype to Inference: A Pipeline to Apply Deep Learning in Sorghum Panicle Detection. Plant Phenomics 2023; 5:0017. [PMID: 37040294] [PMCID: PMC10076054] [DOI: 10.34133/plantphenomics.0017]
Abstract
Head (panicle) density is a major component in understanding crop yield, especially in crops that produce variable numbers of tillers, such as sorghum and wheat. Use of panicle density both in plant breeding and in the agronomy scouting of commercial crops typically relies on manual counting, which is an inefficient and tedious process. Because red-green-blue images are readily available, machine learning approaches have been applied to replace manual counting. However, much of this research focuses on detection per se under limited testing conditions and does not provide a general protocol for deep-learning-based counting. In this paper, we provide a comprehensive pipeline for deep-learning-assisted panicle yield estimation in sorghum, spanning data collection and model training through model validation and deployment in commercial fields. Accurate model training is the foundation of the pipeline. However, in natural environments the deployment dataset frequently differs from the training data (domain shift), causing the model to fail, so a robust model is essential to a reliable solution. Although we demonstrate our pipeline in a sorghum field, it can be generalized to other grain species. The pipeline produces a high-resolution head density map that can be used to diagnose agronomic variability within a field, and it is built without commercial software.
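The tile-and-aggregate idea behind such a head density map can be sketched as follows. This is illustrative only, not code from the paper: `count_fn` stands in for a trained panicle detector, and the toy "detector" below just counts nonzero pixels.

```python
def field_density_map(image, tile, count_fn):
    """Split a field mosaic (2D grid) into square tiles and aggregate
    per-tile head counts into a coarse density grid (counts per tile)."""
    rows, cols = len(image) // tile, len(image[0]) // tile
    grid = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            # Crop one tile out of the mosaic.
            patch = [row[j * tile:(j + 1) * tile]
                     for row in image[i * tile:(i + 1) * tile]]
            grid[i][j] = count_fn(patch)  # plug in a real detector here
    return grid

# Toy stand-in "detector": count nonzero pixels as heads.
demo = [[0] * 4 for _ in range(4)]
demo[0][0] = demo[3][3] = 1
dmap = field_density_map(demo, 2, lambda p: sum(map(sum, p)))
```

A real pipeline would replace the lambda with model inference per tile; the grid can then be rendered as the within-field variability map the abstract describes.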
Affiliation(s)
- Chrisbin James
- School of Agriculture and Food Sciences, The University of Queensland, Brisbane, Australia
- Yanyang Gu
- School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, Australia
- Andries Potgieter
- Queensland Alliance for Agriculture and Food Innovation, The University of Queensland, Brisbane, Australia
- Wei Guo
- Graduate School of Agricultural and Life Sciences, The University of Tokyo, Tokyo, Japan
- Frédéric Baret
- Institut National de la Recherche Agronomique, Paris, France
- Anders Eriksson
- School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, Australia
- Scott Chapman
- School of Agriculture and Food Sciences, The University of Queensland, Brisbane, Australia
2. Guo Z, Yang C, Yang W, Chen G, Jiang Z, Wang B, Zhang J. Panicle Ratio Network: streamlining rice panicle measurement by deep learning with ultra-high-definition aerial images in the field. Journal of Experimental Botany 2022; 73:6575-6588. [PMID: 35776094] [DOI: 10.1093/jxb/erac294]
Abstract
The heading date and effective tiller percentage are important traits in rice, and they directly affect plant architecture and yield. Both traits are related to the ratio of the panicle number to the maximum tiller number, referred to as the panicle ratio (PR). In this study, an automatic PR estimation model (PRNet) based on a deep convolutional neural network was developed. Ultra-high-definition unmanned aerial vehicle (UAV) images were collected from cultivated rice varieties planted in 2384 experimental plots in 2019 and 2020 and in a large field in 2021. The coefficient of determination between estimated and ground-measured PR reached 0.935, and the root mean square errors for the estimation of heading date and effective tiller percentage were 0.687 d and 4.84%, respectively. Based on an analysis of the results, various factors affecting PR estimation and strategies for improving its accuracy were investigated. The satisfactory results obtained in this study demonstrate the feasibility of using UAVs and deep learning techniques to replace ground-based manual methods to accurately extract phenotypic information on crop micro targets (such as grains per panicle and panicle flowering) for rice, and potentially other cereal crops, in future research.
Affiliation(s)
- Ziyue Guo
- Macro Agriculture Research Institute, College of Resources and Environment, Huazhong Agricultural University, Wuhan, China
- Key Laboratory of Farmland Conservation in the Middle and Lower Reaches of the Ministry of Agriculture, Wuhan, China
- Chenghai Yang
- Aerial Application Technology Research Unit, USDA-Agricultural Research Service, College Station, TX, USA
- Wanneng Yang
- College of Plant Science and Technology, Huazhong Agricultural University, Wuhan, China
- Guoxing Chen
- College of Plant Science and Technology, Huazhong Agricultural University, Wuhan, China
- Zhao Jiang
- Macro Agriculture Research Institute, College of Resources and Environment, Huazhong Agricultural University, Wuhan, China
- Key Laboratory of Farmland Conservation in the Middle and Lower Reaches of the Ministry of Agriculture, Wuhan, China
- Botao Wang
- Macro Agriculture Research Institute, College of Resources and Environment, Huazhong Agricultural University, Wuhan, China
- Key Laboratory of Farmland Conservation in the Middle and Lower Reaches of the Ministry of Agriculture, Wuhan, China
- Jian Zhang
- Macro Agriculture Research Institute, College of Resources and Environment, Huazhong Agricultural University, Wuhan, China
- Key Laboratory of Farmland Conservation in the Middle and Lower Reaches of the Ministry of Agriculture, Wuhan, China
3. Chakraborty SK, Chandel NS, Jat D, Tiwari MK, Rajwade YA, Subeesh A. Deep learning approaches and interventions for futuristic engineering in agriculture. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07744-x]
4. Comparison of Deep Learning Methods for Detecting and Counting Sorghum Heads in UAV Imagery. Remote Sensing 2022. [DOI: 10.3390/rs14133143]
Abstract
With the rapid development of remote sensing with small, lightweight unmanned aerial vehicles (UAVs), efficient and accurate crop spike counting and yield estimation methods based on deep learning (DL) have begun to emerge, greatly reducing labor costs and enabling fast and accurate counting of sorghum spikes. However, there has been no systematic, comprehensive evaluation of their applicability to cereal crop spike identification in UAV images, especially for sorghum head counting. To this end, this paper presents a comparative study of three common DL algorithms, EfficientDet, Single Shot MultiBox Detector (SSD), and You Only Look Once (YOLOv4), for sorghum head detection based on lightweight UAV remote sensing data. The paper explores the effects of the overlap ratio, confidence, and intersection over union (IoU) parameters, using precision (P), recall (R), average precision (AP), F1 score, computational efficiency, and the number of detected positive/negative samples (objects detected consistently/inconsistently with real samples) as evaluation metrics. The experimental results show the following. (1) The detection results of the three methods under dense coverage conditions were better than those under medium and sparse conditions. YOLOv4 had the most accurate detection under different coverage conditions, whereas EfficientDet had the least accurate. While SSD obtained better detection results under dense conditions, its number of over-detections was larger. (2) Although EfficientDet had a good positive-sample detection rate, it detected the fewest samples, had the smallest R and F1, and its actual precision was poor; while its training time was intermediate, it had the lowest detection efficiency, and its detection time per image was 2.82 times that of SSD. SSD had intermediate values for P, AP, and the number of detected samples, but the highest training and detection efficiency. YOLOv4 detected the largest number of positive samples, and its values for R, AP, and F1 were the highest among the three methods; although its training was the slowest, its detection efficiency was better than EfficientDet's. (3) As the overlap ratio increased, both positive and negative samples tended to increase; a threshold of 0.3 gave all three methods better detection results. As the confidence value increased, the numbers of positive and negative samples decreased significantly; a threshold of 0.3 balanced the number of detected samples against detection accuracy. Increasing the IoU threshold was accompanied by a gradual decrease in positive samples and a gradual increase in negative samples, with better detection again achieved at a threshold of 0.3. These findings can provide a methodological basis for accurately detecting and counting sorghum heads using UAVs.
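The IoU and confidence thresholds studied above can be made concrete with a minimal sketch (illustrative only, not the paper's evaluation code; the box format and field names are assumptions):

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def keep(dets, conf_thr=0.3):
    """Drop detections below the confidence threshold before matching."""
    return [d for d in dets if d["score"] >= conf_thr]
```

A detection counts as a positive sample when its IoU with a ground-truth box clears the IoU threshold; raising the confidence threshold prunes detections before that matching step, which is why both positive and negative counts fall as it increases.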
5. Woody Plant Encroachment: Evaluating Methodologies for Semiarid Woody Species Classification from Drone Images. Remote Sensing 2022. [DOI: 10.3390/rs14071665]
Abstract
Globally, native semiarid grasslands and savannas have experienced a densification of woody plant species—leading to a multitude of environmental, economic, and cultural changes. These encroached areas are unique in that the diversity of tree species is small, yet the individual species possess diverse phenological responses. The overall goal of this study was to evaluate the ability of very high resolution drone imagery to accurately map species of woody plants encroaching on semiarid grasslands. For a site in the Edwards Plateau ecoregion of central Texas, we used affordable, very high resolution drone imagery to which we applied maximum likelihood (ML), support vector machine (SVM), random forest (RF), and VGG-19 convolutional neural network (CNN) algorithms in combination with pixel-based (with and without post-processing) and object-based (small and large) classification methods. Based on test sample data (n = 1000), the VGG-19 CNN model achieved the highest overall accuracy (96.9%). SVM came second with an average classification accuracy of 91.2% across all methods, followed by RF (89.7%) and ML (86.8%). Overall, our findings show that RGB drone sensors are capable of providing highly accurate classifications of woody plant species in semiarid landscapes—comparable to, and in some respects greater than, those achieved by aerial and drone imagery using hyperspectral sensors in more diverse landscapes.
6. Xiao Q, Bai X, Zhang C, He Y. Advanced high-throughput plant phenotyping techniques for genome-wide association studies: A review. J Adv Res 2022; 35:215-230. [PMID: 35003802] [PMCID: PMC8721248] [DOI: 10.1016/j.jare.2021.05.002]
Abstract
Linking phenotypes and genotypes to identify the genetic architectures that regulate important traits is crucial for plant breeding and the development of plant genomics. In recent years, genome-wide association studies (GWAS) have been applied extensively to interpret relationships between genes and traits. Successful GWAS application requires comprehensive genomic and phenotypic data from large populations. Although multiple high-throughput DNA sequencing approaches are available for generating genomic data, the capacity to generate high-quality phenotypic data lags far behind. Traditional methods for plant phenotyping mostly rely on manual measurements, which are laborious, inaccurate, and time-consuming, greatly impairing the acquisition of phenotypic data from large populations. In contrast, high-throughput phenotyping facilitates rapid, non-destructive detection, addressing the shortcomings of traditional methods. Aim of Review: This review summarizes the current status of the integration of high-throughput phenotyping and GWAS in plants, and discusses the inherent challenges and future prospects. Key Scientific Concepts of Review: High-throughput phenotyping, which facilitates non-contact and dynamic measurements, has the potential to provide high-quality trait data for GWAS and, in turn, to enhance the unraveling of the genetic structures of complex plant traits. In conclusion, integrating high-throughput phenotyping with GWAS could help reveal the information encoded in plant genomes.
Affiliation(s)
- Qinlin Xiao
- College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou 310058, China
- Key Laboratory of Spectroscopy Sensing, Ministry of Agriculture and Rural Affairs, Hangzhou 310058, China
- Xiulin Bai
- College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou 310058, China
- Key Laboratory of Spectroscopy Sensing, Ministry of Agriculture and Rural Affairs, Hangzhou 310058, China
- Chu Zhang
- School of Information Engineering, Huzhou University, Huzhou 313000, China
- Yong He
- College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou 310058, China
- Key Laboratory of Spectroscopy Sensing, Ministry of Agriculture and Rural Affairs, Hangzhou 310058, China
7. Deep Convolutional Neural Network for Large-Scale Date Palm Tree Mapping from UAV-Based Images. Remote Sensing 2021. [DOI: 10.3390/rs13142787]
Abstract
Large-scale mapping of date palm trees is vital for their consistent monitoring and sustainable management, considering their substantial commercial, environmental, and cultural value. This study presents an automatic approach for the large-scale mapping of date palm trees from very-high-spatial-resolution (VHSR) unmanned aerial vehicle (UAV) datasets, based on a deep learning approach. A U-Shape convolutional neural network (U-Net), based on a deep residual learning framework, was developed for the semantic segmentation of date palm trees. A comprehensive set of labeled data was established to enable the training and evaluation of the proposed segmentation model and increase its generalization capability. The performance of the proposed approach was compared with those of various state-of-the-art fully convolutional networks (FCNs) with different encoder architectures, including U-Net (based on a VGG-16 backbone), pyramid scene parsing network, and two variants of DeepLab V3+. Experimental results showed that the proposed model outperformed the other FCNs on the validation and testing datasets. The generalizability evaluation of the proposed approach on a comprehensive and complex testing dataset exhibited higher classification accuracy and showed that date palm trees could be automatically mapped from VHSR UAV images with an F-score, mean intersection over union, precision, and recall of 91%, 85%, 91%, and 92%, respectively. The proposed approach provides an efficient deep learning architecture for the automatic mapping of date palm trees from VHSR UAV-based images.
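For reference, the segmentation metrics reported above all derive from per-class pixel counts; a generic sketch (a standard formulation, not the authors' code):

```python
def seg_metrics(tp, fp, fn):
    """Precision, recall, F-score, and IoU for one class from pixel-level
    true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)  # intersection over union
    return precision, recall, f_score, iou
```

Mean IoU is then just this IoU averaged over the classes in the map.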
8. Hein NT, Ciampitti IA, Jagadish SVK. Bottlenecks and opportunities in field-based high-throughput phenotyping for heat and drought stress. Journal of Experimental Botany 2021; 72:5102-5116. [PMID: 33474563] [PMCID: PMC8272563] [DOI: 10.1093/jxb/erab021]
Abstract
Flowering and grain-filling stages are highly sensitive to heat and drought stress exposure, leading to significant loss in crop yields. Therefore, phenotyping to enhance resilience to these abiotic stresses is critical for sustaining genetic gains in crop improvement programs. However, traditional methods for screening traits related to these stresses are slow, laborious, and often expensive. Remote sensing provides opportunities to introduce low-cost, less biased, high-throughput phenotyping methods to capture large genetic diversity to facilitate enhancement of stress resilience in crops. This review focuses on four key physiological traits and processes that are critical in understanding crop responses to drought and heat stress during reproductive and grain-filling periods. Specifically, these traits include: (i) time of day of flowering, to escape these stresses during flowering; (ii) optimizing photosynthetic efficiency; (iii) storage and translocation of water-soluble carbohydrates; and (iv) yield and yield components to provide in-season yield estimates. Moreover, we provide an overview of current advances in remote sensing in capturing these traits, and discuss the limitations with existing technology as well as future direction of research to develop high-throughput phenotyping approaches. In the future, phenotyping these complex traits will require sensor advancement, high-quality imagery combined with machine learning methods, and efforts in transdisciplinary science to foster integration across disciplines.
Affiliation(s)
- Nathan T Hein
- Department of Agronomy, Kansas State University, Manhattan, KS, USA
9. Hobbs J, Prakash P, Paull R, Hovhannisyan H, Markowicz B, Rose G. Large-Scale Counting and Localization of Pineapple Inflorescence Through Deep Density-Estimation. Frontiers in Plant Science 2021; 11:599705. [PMID: 33584745] [PMCID: PMC7876329] [DOI: 10.3389/fpls.2020.599705]
Abstract
Natural flowering affects fruit development and quality, and impacts the harvest of specialty plants like pineapple. Pineapple growers use chemicals to induce flowering so that most plants within a field produce fruit of high quality that is ready to harvest at the same time. Since pineapple is hand-harvested, the ability to harvest all of the fruit of a field in a single pass is critical to reduce field losses, costs, and waste, and to maximize efficiency. Traditionally, due to high planting densities, pineapple growers have been limited to gathering crop intelligence through manual inspection around the edges of the field, giving them only a limited view of their crop's status. Advances in remote sensing and computer vision enable regular inspection of the field and automated inflorescence counting, allowing growers to optimize their management practices. Our work uses a deep-learning-based density estimation approach to count the number of flowering pineapple plants in a field, with a test MAE of 11.5 and a MAPD of 6.37%. Notably, the computational complexity of this method does not depend on the number of plants present, so it scales efficiently, easily detecting over 1.6 million flowering plants in a field. We further embed this approach in an active learning framework for continual learning and model improvement.
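The scaling property claimed above follows from how density estimation counts: the predicted count is the integral of the density map, regardless of how many objects it contains. A minimal sketch (illustrative values, not the paper's model output):

```python
def count_from_density(density_map):
    """In density estimation, the count is the integral (sum) of the
    predicted density map, so cost does not grow with the object count."""
    return sum(sum(row) for row in density_map)

# Two blobs, each integrating to one flowering plant.
d = [[0.0] * 5 for _ in range(5)]
d[1][1], d[1][2] = 0.6, 0.4
d[3][3], d[3][2] = 0.7, 0.3
```

Whether the field holds a hundred plants or 1.6 million, the network makes one forward pass per image and the count is a single summation.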
Affiliation(s)
- Prajwal Prakash
- IntelinAir, Inc., Champaign, IL, United States
- Department of Electrical Engineering, Columbia University, New York, NY, United States
- Robert Paull
- Department of Tropical Plant and Soil Sciences, University of Hawaii at Manoa, Honolulu, HI, United States
- Greg Rose
- IntelinAir, Inc., Champaign, IL, United States
10. 3D Characterization of Sorghum Panicles Using a 3D Point Cloud Derived from UAV Imagery. Remote Sensing 2021. [DOI: 10.3390/rs13020282]
Abstract
Sorghum is one of the most important crops worldwide. An accurate and efficient high-throughput phenotyping method for individual sorghum panicles is needed for assessing genetic diversity, variety selection, and yield estimation. High-resolution imagery acquired using an unmanned aerial vehicle (UAV) provides a high-density 3D point cloud with color information. In this study, we developed a method for detecting and characterizing individual sorghum panicles using a 3D point cloud derived from UAV images. The RGB color ratio was used to filter out non-panicle points and select potential panicle points. Individual sorghum panicles were detected using the concept of tree identification. Panicle length and width were determined from the potential panicle points. We proposed cylinder fitting and disk stacking to estimate individual panicle volumes, which are directly related to yield. The results showed that the correlation coefficients of the average panicle length and width between the UAV-based and ground measurements were 0.61 and 0.83, respectively. The UAV-derived panicle length and diameter were more highly correlated with panicle weight than the ground measurements. Cylinder fitting and disk stacking yielded R2 values of 0.77 and 0.67 with the actual panicle weight, respectively. The experimental results showed that the 3D point cloud derived from UAV imagery can provide reliable and consistent individual sorghum panicle parameters that are highly correlated with ground measurements of panicle weight.
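The disk-stacking volume estimate can be illustrated with a small sketch. This is an interpretation of the idea described above, not the authors' code; the per-slice diameters and slice height are assumed inputs (e.g. from horizontal slices of a panicle's point cloud):

```python
import math

def disk_stack_volume(diameters, slice_height):
    """Approximate a panicle volume by stacking disks: each point-cloud
    slice contributes a cylinder of that slice's fitted diameter."""
    return sum(math.pi * (d / 2) ** 2 * slice_height for d in diameters)
```

With a constant diameter the stack reduces to a single cylinder, which is the cylinder-fitting special case; varying diameters let the stack follow the panicle's taper.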
11. Development of a Miniaturized Mobile Mapping System for In-Row, Under-Canopy Phenotyping. Remote Sensing 2021. [DOI: 10.3390/rs13020276]
Abstract
This paper focuses on the development of a miniaturized mobile mapping platform with advantages over current agricultural phenotyping systems in terms of acquiring data that facilitate under-canopy plant trait extraction. The system is based on an unmanned ground vehicle (UGV) for in-row, under-canopy data acquisition to deliver accurately georeferenced 2D and 3D products. The paper addresses three main aspects of the UGV development: (a) the architecture of the UGV mobile mapping system (MMS); (b) quality assessment of the acquired data in terms of georeferencing information as well as the derived 3D point cloud; and (c) the ability to derive phenotypic plant traits using data acquired by the UGV MMS. The experimental results demonstrate the ability of the UGV MMS to acquire dense and accurate data over agricultural fields that would facilitate highly accurate plant phenotyping (better than above-canopy platforms such as unmanned aerial systems and high-clearance tractors). Plant centers and plant counts were derived with accuracies in the 90% range.
12. PhotonLabeler: An Inter-Disciplinary Platform for Visual Interpretation and Labeling of ICESat-2 Geolocated Photon Data. Remote Sensing 2020. [DOI: 10.3390/rs12193168]
Abstract
NASA’s ICESat-2 space-borne photon-counting lidar mission is providing global elevation measurements that will significantly benefit a variety of ecosystem-related research applications. Given the novelty of the elevation measurements and derived data products from the ICESat-2 mission, the research community needs software tools that can facilitate photon-level analyses to support product validation and the development of new analysis methods. Here, we describe PhotonLabeler, a free graphical user interface (GUI) for manual labeling and visualization of ICESat-2 Geolocated Photon data (ATL03). Developed in MATLAB, the GUI facilitates the reading and display of ATL03 Hierarchical Data Format (HDF) files and the manual labeling of individual photons into target classes of choice using a number of point selection tools, and it enables saving of labeled data in ASCII format. Other capabilities include saving and loading labeling sessions to manage labeling tasks over time. We expect labeled data generated using the application to serve two main purposes: first, as reference data for validating various products from the ICESat-2 mission, especially for study sites around the world that lack existing reference datasets such as airborne lidar; and second, as training and validation data in the development of new algorithms for generating various ICESat-2 data products. We demonstrate the first use case through a validation case study for the land and vegetation product (ATL08), which provides canopy and terrain height estimates, over two sites. For the first site, located in northwestern Zambia, we used ICESat-2 ATL03 data acquired at night, and for the second site, in Texas, US, we used ATL03 data acquired during the day. ATL08 canopy and terrain height data showed good agreement (mean R2 > 0.8) with corresponding height metrics generated from manually labeled data. A comparison between PhotonLabeler and ATL08 photon labels also showed good agreement, with overall accuracies of 93.3% and 95.4% for the Texas and Zambia sites, respectively. These results, while limited in scope, show how PhotonLabeler can facilitate photon-level analyses for ICESat-2 data products beyond ATL08. The PhotonLabeler application is freely available as a compiled MATLAB binary to enable free access and use by interested researchers.
13. Object Detection and Image Segmentation with Deep Learning on Earth Observation Data: A Review—Part II: Applications. Remote Sensing 2020. [DOI: 10.3390/rs12183053]
Abstract
In Earth observation (EO), large-scale land-surface dynamics are traditionally analyzed by investigating aggregated classes. The increase in data with very high spatial resolution enables investigations at a fine-grained feature level, which can help us better understand the dynamics of land surfaces by taking object dynamics into account. To extract fine-grained features and objects, the most popular deep-learning model for image analysis is commonly used: the convolutional neural network (CNN). In this review, we provide a comprehensive overview of the impact of deep learning on EO applications by reviewing 429 studies on image segmentation and object detection with CNNs. We extensively examine the spatial distribution of study sites, the employed sensors, the datasets and CNN architectures used, and give a thorough overview of EO applications that used CNNs. Our main finding is that CNNs are in an advanced transition phase from computer vision to EO. On this basis, we argue that in the near future, investigations that analyze object dynamics with CNNs will have a significant impact on EO research. With its focus on EO applications, this Part II completes the methodological review provided in Part I.
14. Mask R-CNN Refitting Strategy for Plant Counting and Sizing in UAV Imagery. Remote Sensing 2020. [DOI: 10.3390/rs12183015]
Abstract
This work introduces a method that combines remote sensing and deep learning into a framework tailored for accurate, reliable, and efficient counting and sizing of plants in aerial images. The investigated task focuses on two low-density crops, potato and lettuce. This double objective of counting and sizing is achieved through the detection and segmentation of individual plants by fine-tuning an existing deep learning architecture, Mask R-CNN. This paper includes a thorough discussion of the optimal parametrisation for adapting the Mask R-CNN architecture to this novel task. Examining the correlation of Mask R-CNN performance with the annotation volume and granularity (coarse or refined) of remotely sensed images of plants, we conclude that transfer learning can effectively reduce the required amount of labelled data: a Mask R-CNN previously trained on one low-density crop improves performance when retrained on a new crop. Once trained for a given crop, the Mask R-CNN solution is shown to outperform a manually tuned computer vision algorithm. Model performance is assessed using intuitive metrics such as Mean Average Precision (mAP), from the Intersection over Union (IoU) of the masks, for individual plant segmentation and Multiple Object Tracking Accuracy (MOTA) for detection. The presented model reaches an mAP of 0.418 for potato plants and 0.660 for lettuces on the individual plant segmentation task. In detection, we obtain a MOTA of 0.781 for potato plants and 0.918 for lettuces.
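The MOTA figures quoted above follow the standard definition of the metric, which can be sketched in a few lines (a generic formulation, not the authors' evaluation code):

```python
def mota(misses, false_positives, mismatches, num_gt):
    """Multiple Object Tracking Accuracy: 1 minus the total error count
    (missed objects, false positives, identity mismatches) over the
    number of ground-truth objects."""
    return 1.0 - (misses + false_positives + mismatches) / num_gt
```

On still images the mismatch term is effectively zero, so MOTA reduces to a combined miss/false-positive rate; a MOTA of 0.918 for lettuces means roughly 8% of ground-truth plants' worth of errors.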
15. Lin Z, Guo W. Sorghum Panicle Detection and Counting Using Unmanned Aerial System Images and Deep Learning. Frontiers in Plant Science 2020; 11:534853. [PMID: 32983210] [PMCID: PMC7492560] [DOI: 10.3389/fpls.2020.534853]
Abstract
Machine learning and computer vision technologies based on high-resolution imagery acquired using unmanned aerial systems (UAS) provide a potential for accurate and efficient high-throughput plant phenotyping. In this study, we developed a sorghum panicle detection and counting pipeline using UAS images based on an integration of image segmentation and a convolutional neural network (CNN) model. A UAS with an RGB camera was used to acquire images (2.7 mm resolution) at 10-m height in a research field with 120 small plots. A set of 1,000 images were randomly selected, and a mask was developed for each by manually delineating sorghum panicles. These images and their corresponding masks were randomly divided into 10 training datasets, each with a different number of images and masks, ranging from 100 to 1,000 in intervals of 100. A U-Net CNN model was built using these training datasets. The sorghum panicles were detected and counted from the predicted masks. The algorithm was implemented in Python, using the TensorFlow library for the deep learning procedure and the OpenCV library for sorghum panicle counting. Results showed that accuracy generally increased with the number of training images. The algorithm performed best with 1,000 training images, with an accuracy of 95.5% and a root mean square error (RMSE) of 2.5. The results indicate that the integration of image segmentation and the U-Net CNN model is an accurate and robust method for sorghum panicle counting and offers an opportunity for enhanced sorghum breeding efficiency and accurate yield estimation.
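The counting step in such a pipeline amounts to counting connected foreground regions in the predicted binary mask (the role OpenCV plays in the paper). A dependency-free stand-in, illustrative rather than the authors' implementation:

```python
def count_blobs(mask):
    """Count 4-connected foreground regions in a binary mask via
    iterative flood fill; each region is one predicted panicle."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                blobs += 1  # new region found; flood-fill to mark it
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x),
                                  (y, x + 1), (y, x - 1)]
    return blobs
```

In practice a library routine such as OpenCV's connected-components labeling does the same job faster, and small spurious regions are usually filtered out by an area threshold before counting.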
Affiliation(s)
- Zhe Lin, Department of Plant and Soil Science, Texas Tech University, Lubbock, TX, United States
- Wenxuan Guo, Department of Plant and Soil Science, Texas Tech University, Lubbock, TX, United States; Department of Soil and Crop Sciences, Texas A&M AgriLife Research, Lubbock, TX, United States
|
16
|
Change Detection of Deforestation in the Brazilian Amazon Using Landsat Data and Convolutional Neural Networks. REMOTE SENSING 2020. [DOI: 10.3390/rs12060901] [Citation(s) in RCA: 56] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Mapping deforestation is an essential step in the process of managing tropical rainforests. It lets us understand and monitor both legal and illegal deforestation and its implications, which include the effect deforestation may have on climate change through greenhouse gas emissions. Given that there is ample room for improvement in mapping deforestation from satellite imagery, in this study we aimed to test and evaluate algorithms from the growing field of deep learning (DL), particularly convolutional neural networks (CNNs), for this task. Although studies have been using DL algorithms for a variety of remote sensing tasks for the past few years, they are still relatively unexplored for deforestation mapping. We attempted to map the deforestation between images approximately one year apart, specifically between 2017 and 2018 and between 2018 and 2019. Three CNN architectures available in the literature—SharpMask, U-Net, and ResUnet—were used to classify the change between years and were then compared to two classic machine learning (ML) algorithms—random forest (RF) and multilayer perceptron (MLP)—as points of reference. After validation, we found that the DL models were better in most performance metrics, including the Kappa index, F1 score, and mean intersection over union (mIoU), with the ResUnet model achieving the best overall results: 0.94 in all three measures in both time sequences. Visually, the DL models also produced classifications with better-defined deforestation patches and did not need any post-processing to remove noise, unlike the ML models, which required some noise removal to improve results.
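As a concrete illustration of how a two-date image pair feeds per-pixel classifiers such as the RF and MLP baselines above, here is a minimal sketch of the feature construction; the three-band images, the stacked difference feature, and the function name `stack_dates` are illustrative assumptions, not details from the paper.

```python
import numpy as np

def stack_dates(img_t1, img_t2):
    """Build per-pixel change-detection features from two co-registered
    images roughly one year apart: each pixel becomes a feature vector
    holding both dates' bands plus their per-band difference, ready for
    a classic per-pixel classifier (RF, MLP)."""
    assert img_t1.shape == img_t2.shape
    h, w, b = img_t1.shape
    feats = np.concatenate([img_t1, img_t2, img_t2 - img_t1], axis=-1)
    return feats.reshape(h * w, 3 * b)  # rows = pixels, cols = features

# Toy pair: a 4x4 scene with 3 bands at each date.
t1 = np.zeros((4, 4, 3), dtype=np.float32)
t2 = np.ones((4, 4, 3), dtype=np.float32)
X = stack_dates(t1, t2)
print(X.shape)  # → (16, 9)
```

A CNN would instead consume the same pair as a single 6-channel image, letting the network exploit spatial context — which is one reason the DL models in the abstract produced cleaner deforestation patches without post-processing.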
|
17
|
GNSS/INS-Assisted Structure from Motion Strategies for UAV-Based Imagery over Mechanized Agricultural Fields. REMOTE SENSING 2020. [DOI: 10.3390/rs12030351] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
Abstract
Imagery acquired by unmanned aerial vehicles (UAVs) has been widely used for three-dimensional (3D) reconstruction/modeling in various digital agriculture applications, such as phenotyping, crop monitoring, and yield prediction. 3D reconstruction from well-textured UAV-based images has matured, and the user community has access to several commercial and open-source tools that provide accurate products at a high level of automation. However, in some applications, such as digital agriculture, repetitive image patterns mean these approaches are not always able to produce reliable/complete products. The main limitation of these techniques is their inability to establish a sufficient number of correctly matched features among overlapping images, causing incomplete and/or inaccurate 3D reconstruction. This paper provides two structure from motion (SfM) strategies, which use trajectory information provided by an onboard survey-grade global navigation satellite system/inertial navigation system (GNSS/INS) and system calibration parameters. The main difference between the proposed strategies is that the first one—denoted as partially GNSS/INS-assisted SfM—implements the four stages of an automated triangulation procedure, namely, image matching, relative orientation parameters (ROPs) estimation, exterior orientation parameters (EOPs) recovery, and bundle adjustment (BA). The second strategy—denoted as fully GNSS/INS-assisted SfM—removes the EOPs estimation step while introducing a random sample consensus (RANSAC)-based strategy for removing matching outliers before the BA stage. Both strategies modify the image matching by restricting the search space for conjugate points. They also implement a linear procedure for ROP refinement. Finally, they use the GNSS/INS information in modified collinearity equations for a simpler BA procedure that could be used for refining system calibration parameters. Eight datasets over six agricultural fields are used to evaluate the performance of the developed strategies. In comparison with a traditional SfM framework and Pix4D Mapper Pro, the proposed strategies are able to generate denser and more accurate 3D point clouds as well as orthophotos without any gaps.
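The core idea of the GNSS/INS-assisted strategies above — predict where a ground point should appear from the trajectory-derived pose, then search for matches only near that prediction — can be sketched with a simple pinhole camera; the sign conventions, pose values, and function names here are illustrative assumptions rather than the paper's exact collinearity formulation.

```python
import numpy as np

def predict_pixel(point_w, R, C, f):
    """Pinhole sketch of the collinearity condition: predict where a
    world point lands in an image whose pose (R: world-to-camera
    rotation, C: camera centre) comes from the onboard GNSS/INS."""
    p = R @ (point_w - C)    # express the point in the camera frame
    return f * p[:2] / p[2]  # perspective division onto the image plane

def search_window(pred_xy, radius):
    """Bounding box around the predicted location; only keypoints inside
    this window are considered as candidate conjugate points, shrinking
    the correspondence search space in repetitive crop-row imagery."""
    return pred_xy - radius, pred_xy + radius

# Nadir-looking camera 10 units from the ground plane (looking along +z),
# focal length in pixel units; all values are made up for illustration.
R = np.eye(3)
C = np.array([0.0, 0.0, -10.0])
pt = np.array([1.0, 2.0, 0.0])
xy = predict_pixel(pt, R, C, f=1000.0)
print(xy)  # → [100. 200.]
window = search_window(xy, radius=25.0)
```

With reliable GNSS/INS poses, the window radius only has to absorb trajectory and calibration error, so far fewer ambiguous matches survive than with an unconstrained image-wide search.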
|