1. Okyere FG, Cudjoe D, Sadeghi-Tehran P, Virlet N, Riche AB, Castle M, Greche L, Mohareb F, Simms D, Mhada M, Hawkesford MJ. Machine Learning Methods for Automatic Segmentation of Images of Field- and Glasshouse-Based Plants for High-Throughput Phenotyping. Plants (Basel, Switzerland) 2023; 12:2035. PMID: 37653952; PMCID: PMC10224253; DOI: 10.3390/plants12102035.
Abstract
Image segmentation is a fundamental and critical step for achieving automated high-throughput phenotyping. While conventional segmentation methods perform well in homogeneous environments, their performance decreases in more complex environments. This study aimed to develop a fast and robust neural-network-based segmentation tool to phenotype plants in both field and glasshouse environments in a high-throughput manner. Digital images of cowpea (from the glasshouse) and wheat (from the field) grown under different nutrient supplies across their full growth cycle were acquired. Image patches from 20 randomly selected images in the acquired dataset were transformed from their original RGB format to multiple color spaces. The pixels in the patches were annotated as foreground or background, with each pixel represented by a feature vector of 24 color properties. A feature selection technique was applied to choose the most sensitive features, which were used to train a multilayer perceptron (MLP) network and two other traditional machine learning models: support vector machines (SVMs) and random forest (RF). The performance of these models was compared with that of two standard color-index segmentation techniques, excess green (ExG) and excess green-minus-red (ExGR). The proposed method outperformed the others, producing high-quality segmented images with over 98% pixel classification accuracy. Regression models developed from the different segmentation methods to predict Soil Plant Analysis Development (SPAD) values of cowpea and wheat showed that images from the proposed MLP method produced models with comparably high predictive power and accuracy. This method will be an essential tool in the development of a data analysis pipeline for high-throughput plant phenotyping; it is capable of learning from different environmental conditions and offers a high level of robustness.
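For context, the color-index baselines named in this abstract (ExG and ExGR) reduce to simple per-pixel formulas. The sketch below illustrates those baselines only; the chromaticity normalization and the zero threshold are assumptions rather than the authors' exact settings, and the proposed MLP (which classifies 24-feature pixel vectors) is not shown.

```python
# Illustrative ExG / ExGR colour-index segmentation (the baseline methods the
# abstract compares against). Normalisation and threshold choices are assumptions.
import numpy as np

def excess_green_indices(rgb: np.ndarray):
    """Compute ExG and ExGR from an (H, W, 3) RGB image."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=2, keepdims=True) + 1e-9
    r, g, b = np.moveaxis(rgb / total, 2, 0)   # chromaticity coordinates
    exg = 2.0 * g - r - b                      # excess green
    exr = 1.4 * r - g                          # excess red
    return exg, exg - exr                      # ExG, ExGR

def segment_plant(rgb: np.ndarray, use_exgr: bool = True) -> np.ndarray:
    """Boolean foreground mask; a zero threshold is a common heuristic."""
    exg, exgr = excess_green_indices(rgb)
    return (exgr if use_exgr else exg) > 0.0
```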
Affiliation(s)
- Frank Gyan Okyere
- Sustainable Soils and Crops, Rothamsted Research, Harpenden AL5 2JQ, UK
- School of Water, Energy and Environment, Soil, Agrifood and Biosciences, Cranfield University, Bedford MK43 0AL, UK
- Daniel Cudjoe
- Sustainable Soils and Crops, Rothamsted Research, Harpenden AL5 2JQ, UK
- School of Water, Energy and Environment, Soil, Agrifood and Biosciences, Cranfield University, Bedford MK43 0AL, UK
- Nicolas Virlet
- Sustainable Soils and Crops, Rothamsted Research, Harpenden AL5 2JQ, UK
- Andrew B. Riche
- Sustainable Soils and Crops, Rothamsted Research, Harpenden AL5 2JQ, UK
- March Castle
- Sustainable Soils and Crops, Rothamsted Research, Harpenden AL5 2JQ, UK
- Latifa Greche
- Sustainable Soils and Crops, Rothamsted Research, Harpenden AL5 2JQ, UK
- Fady Mohareb
- School of Water, Energy and Environment, Soil, Agrifood and Biosciences, Cranfield University, Bedford MK43 0AL, UK
- Daniel Simms
- School of Water, Energy and Environment, Soil, Agrifood and Biosciences, Cranfield University, Bedford MK43 0AL, UK
- Manal Mhada
- African Integrated Plant and Soil Science, Agro-Biosciences, University of Mohammed VI Polytechnic, Lot 660, Ben Guerir 43150, Morocco
2. Comparison of Deep Learning Methods for Detecting and Counting Sorghum Heads in UAV Imagery. Remote Sensing 2022. DOI: 10.3390/rs14133143.
Abstract
With the rapid development of remote sensing using small, lightweight unmanned aerial vehicles (UAVs), efficient and accurate crop spike counting and yield estimation methods based on deep learning (DL) have begun to emerge, greatly reducing labor costs and enabling fast, accurate counting of sorghum spikes. However, there has been no systematic, comprehensive evaluation of their applicability to cereal crop spike identification in UAV images, especially for sorghum head counting. To this end, this paper conducts a comparative study of three common DL algorithms, EfficientDet, Single Shot MultiBox Detector (SSD), and You Only Look Once (YOLOv4), for sorghum head detection based on lightweight UAV remote sensing data. The paper explores the effects of the overlap ratio, confidence, and intersection over union (IoU) parameters, using precision (P), recall (R), average precision (AP), F1 score, computational efficiency, and the number of detected positive/negative samples (objects detected consistent/inconsistent with real samples) as evaluation metrics. The results show the following. (1) All three methods detected better under dense coverage conditions than under medium or sparse conditions. YOLOv4 was the most accurate under all coverage conditions, whereas EfficientDet was the least accurate; SSD obtained better results under dense conditions but produced more over-detections. (2) Although EfficientDet had a good positive-sample detection rate, it detected the fewest samples, had the smallest R and F1, and its practical precision was poor; despite a medium training time, it had the lowest detection efficiency, with a per-image detection time 2.82 times that of SSD. SSD had intermediate values of P, AP, and the number of detected samples, but the highest training and detection efficiency. YOLOv4 detected the most positive samples, and its R, AP, and F1 were the highest of the three methods; although its training was the slowest, its detection efficiency was better than that of EfficientDet. (3) As the overlap ratio increased, both positive and negative samples tended to increase, and a threshold of 0.3 gave better detection results for all three methods. As the confidence threshold increased, the numbers of positive and negative samples decreased markedly; a value of 0.3 balanced the number of detected samples against detection accuracy. As the IoU threshold increased, the number of positive samples gradually decreased and the number of negative samples gradually increased; a threshold of 0.3 again gave better detection. These findings provide a methodological basis for accurately detecting and counting sorghum heads using UAVs.
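The evaluation quantities used in this comparison (IoU and P/R/F1 at chosen thresholds) can be made concrete with a short sketch; the box format, the greedy one-to-one matching, and the default 0.3 threshold below are simplifying assumptions, not the study's exact evaluation protocol.

```python
# Minimal IoU and precision/recall/F1 computation for box detections.
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

def iou(a: Box, b: Box) -> float:
    """Intersection over union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)

def precision_recall_f1(preds: List[Box], gts: List[Box], iou_thresh: float = 0.3):
    """Greedily match predictions to ground truth at the given IoU threshold."""
    matched, tp = set(), 0
    for p in preds:
        best_j, best_iou = -1, iou_thresh
        for j, g in enumerate(gts):
            if j not in matched and iou(p, g) >= best_iou:
                best_j, best_iou = j, iou(p, g)
        if best_j >= 0:
            matched.add(best_j)
            tp += 1
    fp, fn = len(preds) - tp, len(gts) - tp
    precision = tp / (tp + fp + 1e-9)
    recall = tp / (tp + fn + 1e-9)
    f1 = 2 * precision * recall / (precision + recall + 1e-9)
    return precision, recall, f1
```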
3. Wang J, Wang C, Lu X, Zhang Y, Zhao Y, Wen W, Song W, Guo X. Dissecting the Genetic Structure of Maize Leaf Sheaths at Seedling Stage by Image-Based High-Throughput Phenotypic Acquisition and Characterization. Frontiers in Plant Science 2022; 13:826875. PMID: 35837446; PMCID: PMC9274118; DOI: 10.3389/fpls.2022.826875.
Abstract
The rapid development of high-throughput phenotypic detection techniques makes it possible to obtain large amounts of crop phenotypic information quickly, efficiently, and accurately. Among these techniques, image-based phenotypic acquisition has been widely used in crop phenotype identification and characterization because it is automated, non-invasive, non-destructive, and high-throughput. In this study, we proposed a method to define and analyze leaf sheath-related traits, including morphology-related, color-related, and biomass-related traits, at the V6 stage. We then analyzed the phenotypic variation of leaf sheaths in 418 maize inbred lines based on 87 leaf sheath-related phenotypic traits. To further investigate the mechanism of leaf sheath phenotype formation, 25 key traits (2 biomass-related, 19 morphology-related, and 4 color-related traits) with heritability greater than 0.3 were analyzed by genome-wide association studies (GWAS). In total, 1,816 candidate genes for 17 whole-plant leaf sheath traits and 1,297 candidate genes for 8 sixth-leaf sheath traits were obtained. Among them, 46 genes with clear functional descriptions were annotated from single nucleotide polymorphisms (SNPs) validated by both the Top1 and multi-method approaches. Functional enrichment analysis showed that the candidate genes for leaf sheath traits were enriched in multiple pathways related to cellular component assembly and organization, cell proliferation and epidermal cell differentiation, and responses to starvation, nutrition, and extracellular stimulation. The results presented here help to further the understanding of maize leaf sheath phenotypic traits and provide a reference for revealing the genetic mechanism underlying maize leaf sheath phenotype formation.
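The GWAS step above retains traits with heritability above 0.3; the abstract does not state which estimator was used, so the following is only the commonly assumed broad-sense heritability from variance components, shown for orientation.

```latex
% Assumed estimator (not given in the abstract): broad-sense heritability,
% with \sigma_g^2 the genotypic variance, \sigma_e^2 the residual variance,
% and r the number of replicates.
H^2 = \frac{\sigma_g^2}{\sigma_g^2 + \sigma_e^2 / r}
```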
Affiliation(s)
- Jinglu Wang
- Beijing Key Lab of Digital Plant, Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
- National Engineering Research Center for Information Technology in Agriculture, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
- Chuanyu Wang
- Beijing Key Lab of Digital Plant, Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
- National Engineering Research Center for Information Technology in Agriculture, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
- Xianju Lu
- Beijing Key Lab of Digital Plant, Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
- National Engineering Research Center for Information Technology in Agriculture, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
- Ying Zhang
- Beijing Key Lab of Digital Plant, Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
- National Engineering Research Center for Information Technology in Agriculture, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
- Yanxin Zhao
- Beijing Key Laboratory of Maize DNA Fingerprinting and Molecular Breeding, Maize Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
- Weiliang Wen
- Beijing Key Lab of Digital Plant, Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
- National Engineering Research Center for Information Technology in Agriculture, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
- Wei Song
- Key Laboratory of Crop Genetics and Breeding of Hebei Province, Institute of Cereal and Oil Crops, Hebei Academy of Agriculture and Forestry Sciences, Shijiazhuang, China
- Xinyu Guo
- Beijing Key Lab of Digital Plant, Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
- National Engineering Research Center for Information Technology in Agriculture, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
4. Danilevicz MF, Bayer PE, Nestor BJ, Bennamoun M, Edwards D. Resources for image-based high-throughput phenotyping in crops and data sharing challenges. Plant Physiology 2021; 187:699-715. PMID: 34608963; PMCID: PMC8561249; DOI: 10.1093/plphys/kiab301.
Abstract
High-throughput phenotyping (HTP) platforms are capable of monitoring the phenotypic variation of plants through multiple types of sensors, such as red, green, and blue (RGB) cameras, hyperspectral sensors, and computed tomography, which can be associated with environmental and genotypic data. Because of the wide range of information provided, HTP datasets represent a valuable asset for characterizing crop phenotypes. As HTP becomes widely employed and more tools and data are released, it is important that researchers are aware of these resources and how they can be applied to accelerate crop improvement. Researchers may exploit these datasets either for phenotype comparison or as benchmarks to assess tool performance and to support the development of tools that generalize better across crops and environments. In this review, we describe the use of image-based HTP for yield prediction, root phenotyping, development of climate-resilient crops, detection of pathogen and pest infestation, and quantitative trait measurement. We emphasize the need for researchers to share phenotypic data and offer a comprehensive list of available datasets to assist crop breeders and tool developers in leveraging these resources to accelerate crop breeding.
Affiliation(s)
- Monica F. Danilevicz
- School of Biological Sciences and Institute of Agriculture, University of Western Australia, Perth, Western Australia 6009, Australia
- Philipp E. Bayer
- School of Biological Sciences and Institute of Agriculture, University of Western Australia, Perth, Western Australia 6009, Australia
- Benjamin J. Nestor
- School of Biological Sciences and Institute of Agriculture, University of Western Australia, Perth, Western Australia 6009, Australia
- Mohammed Bennamoun
- Department of Computer Science and Software Engineering, University of Western Australia, Perth, Western Australia 6009, Australia
- David Edwards
- School of Biological Sciences and Institute of Agriculture, University of Western Australia, Perth, Western Australia 6009, Australia
- Author for communication:
5. Color Calibration of Proximal Sensing RGB Images of Oilseed Rape Canopy via Deep Learning Combined with K-Means Algorithm. Remote Sensing 2019. DOI: 10.3390/rs11243001.
Abstract
Plant color is a key feature for estimating parameters of plants grown under different conditions from remote sensing images. Ideally, variation in plant color should reflect only the growing conditions and not external confounding factors such as the light source; the impact of the light source on plant color should therefore be alleviated using color calibration algorithms. This study aims to develop an efficient and robust approach for automatic color calibration of three-band (red green blue: RGB) images. Specifically, we combined a k-means model with deep learning for accurate color calibration matrix (CCM) estimation. A dataset of 3150 RGB images of oilseed rape was collected by a proximal sensing technique under varying illumination conditions and used to train, validate, and test the proposed framework. First, we manually derived CCMs by mapping the RGB color values of each patch of a color chart captured in an image to the standard RGB (sRGB) color values of that chart. Second, we grouped the images into clusters according to the CCM assigned to each image using the unsupervised k-means algorithm. Third, the images with the new cluster labels were used to train and validate a deep learning convolutional neural network (CNN) for automatic CCM estimation. Finally, the estimated CCM was applied to the input image to obtain a color-calibrated image. The performance of the model in estimating the CCM was evaluated using the Euclidean distance between the standard and estimated color values on the test dataset. The experimental results showed that the deep learning framework can efficiently extract useful low-level features for discriminating images with inconsistent colors, achieving overall training and validation accuracies of 98.00% and 98.53%, respectively. Further, the final CCM yielded an average Euclidean distance of 16.23 ΔE and outperformed previously reported methods. The proposed technique can be used in real-time plant phenotyping at multiple scales.
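The manual CCM derivation described in the first step amounts to fitting a linear map from measured chart-patch colors to their sRGB references. The sketch below illustrates only that step, under the assumption of a plain 3 × 3 least-squares model; the CNN that assigns a pre-computed CCM to each new image is not shown.

```python
# Illustrative least-squares colour-calibration-matrix (CCM) fit and application.
# The 3x3 linear model and [0, 1] colour range are assumptions for this sketch.
import numpy as np

def fit_ccm(measured_rgb: np.ndarray, reference_srgb: np.ndarray) -> np.ndarray:
    """Fit M such that measured_rgb @ M approximates reference_srgb.

    Both inputs are (n_patches, 3) arrays of chart-patch colours in [0, 1]."""
    ccm, *_ = np.linalg.lstsq(measured_rgb, reference_srgb, rcond=None)
    return ccm  # shape (3, 3)

def apply_ccm(image: np.ndarray, ccm: np.ndarray) -> np.ndarray:
    """Apply the CCM to an (H, W, 3) image and clip to the valid range."""
    calibrated = image.reshape(-1, 3) @ ccm
    return np.clip(calibrated, 0.0, 1.0).reshape(image.shape)
```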
6. A Deep Learning Semantic Segmentation-Based Approach for Field-Level Sorghum Panicle Counting. Remote Sensing 2019. DOI: 10.3390/rs11242939.
Abstract
Small unmanned aerial systems (UAS) have emerged as high-throughput platforms for collecting high-resolution image data over large crop fields to support precision agriculture and plant breeding research. At the same time, the improved efficiency of image capture is producing massive datasets, which pose analysis challenges in delivering the needed phenotypic data. To complement these high-throughput platforms, crop improvement increasingly requires robust image analysis methods capable of handling large amounts of image data. Approaches based on deep learning models are currently the most promising and show unparalleled performance on large image datasets. This study developed and applied an image analysis approach based on a SegNet deep learning semantic segmentation model to estimate sorghum panicle counts, which are critical phenotypic data in sorghum crop improvement, from UAS images over selected sorghum experimental plots. The SegNet model was trained to semantically segment UAS images into sorghum panicles, foliage, and exposed ground using 462 labeled images of 250 × 250 pixels, and was then applied to the field orthomosaic to generate a field-level semantic segmentation. Individual panicle locations were obtained after post-processing the segmentation output to remove small objects and split merged panicles. A comparison between model panicle count estimates and manually digitized panicle locations in 60 randomly selected plots showed an overall detection accuracy of 94%. A per-plot panicle count comparison also showed high agreement between estimated and reference counts (Spearman correlation ρ = 0.88, mean bias = 0.65). Panicle detection errors arose mainly from misclassifications during the semantic segmentation step and from mosaicking errors in the field orthomosaic. Overall, the deep learning semantic segmentation approach shows good promise and, with a larger labeled dataset and extensive hyper-parameter tuning, should provide an even more robust and effective characterization of sorghum panicle counts.
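The post-processing stage described above (removing small objects from the panicle class and counting the remaining regions) can be sketched with standard image-processing tools; the library choice, minimum object size, and panicle class id below are assumptions, and the splitting of merged panicles is omitted.

```python
# Rough sketch of counting panicle regions in a semantic-segmentation map.
import numpy as np
from skimage.measure import label, regionprops
from skimage.morphology import remove_small_objects

def count_panicles(segmentation: np.ndarray, panicle_class: int = 1,
                   min_size: int = 50):
    """Count connected panicle regions and return their centroid locations."""
    mask = segmentation == panicle_class              # binary panicle mask
    mask = remove_small_objects(mask, min_size=min_size)
    labelled = label(mask)
    centroids = [region.centroid for region in regionprops(labelled)]
    return len(centroids), centroids
```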
7. Quantitative Estimation of Wheat Phenotyping Traits Using Ground and Aerial Imagery. Remote Sensing 2018. DOI: 10.3390/rs10060950.