1. Gao H, Qi M, Du B, Yang S, Li H, Wang T, Zhong W, Tang Y. An accurate semantic segmentation model for bean seedlings and weeds identification based on improved ERFnet. Sci Rep 2024; 14:12288. PMID: 38811674; PMCID: PMC11136954; DOI: 10.1038/s41598-024-61981-9.
Abstract
In agricultural production, crop growth is always accompanied by competition from weeds for nutrients and sunlight. To mitigate the adverse effects of weeds on yield, we apply semantic segmentation to differentiate between seedlings and weeds, enabling precision weeding. The proposed EPAnet employs a loss function that couples cross-entropy loss and Dice loss to enhance attention to feature information. A multi-decoder cooperative module based on ERFnet is designed to enhance information transfer during feature mapping, and the SimAM attention module is introduced to enhance position recognition. DO-Conv replaces the traditional convolution in the Feature Pyramid Network (FPN) connection layer to integrate feature information, improving the model's handling of leaf edges; the resulting module is named FDPN. Compared with the baseline, overall accuracy improves by 0.65%, mean Intersection over Union (mIoU) by 1.91%, and Frequency-Weighted Intersection over Union (FWIoU) by 1.19%. Compared to other advanced methods, EPAnet demonstrates superior segmentation results in complex natural environments with uneven lighting, leaf interference, and shadows.
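A loss coupling cross-entropy and Dice terms, as described above, can be sketched as follows. This is an illustrative NumPy version for binary masks, not the paper's implementation; the weighting factor `alpha` is an assumption.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss over a probability map and a binary mask.
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def cross_entropy_loss(pred, target, eps=1e-6):
    # Binary cross-entropy averaged over pixels; clip to avoid log(0).
    pred = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def combined_loss(pred, target, alpha=0.5):
    # Weighted sum of the two terms (alpha is a hypothetical weight).
    return alpha * cross_entropy_loss(pred, target) + (1 - alpha) * dice_loss(pred, target)
```

Dice emphasises region overlap while cross-entropy penalises per-pixel errors, which is why the two are commonly coupled for class-imbalanced segmentation.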
Affiliation(s)
- Haozhang Gao
  - Electrical and Information Engineering College, Jilin Agricultural Science and Technology University, Jilin, 132101, China
  - School of Information and Control Engineering, Jilin Institute of Chemical Technology, Jilin, 132022, China
- Mingyang Qi
  - Electrical and Information Engineering College, Jilin Agricultural Science and Technology University, Jilin, 132101, China
- Baoxia Du
  - Electrical and Information Engineering College, Jilin Agricultural Science and Technology University, Jilin, 132101, China
  - School of Information and Control Engineering, Jilin Institute of Chemical Technology, Jilin, 132022, China
- Shuang Yang
  - Electrical and Information Engineering College, Jilin Agricultural Science and Technology University, Jilin, 132101, China
  - School of Information and Control Engineering, Jilin Institute of Chemical Technology, Jilin, 132022, China
- Han Li
  - Electrical and Information Engineering College, Jilin Agricultural Science and Technology University, Jilin, 132101, China
  - School of Information and Control Engineering, Jilin Institute of Chemical Technology, Jilin, 132022, China
- Tete Wang
  - R&D Department, Jilin Province Electric Innovation Information Technology Limited Company, Changchun, 130117, China
- Wenyu Zhong
  - Electrical and Information Engineering College, Jilin Agricultural Science and Technology University, Jilin, 132101, China
- You Tang
  - Electrical and Information Engineering College, Jilin Agricultural Science and Technology University, Jilin, 132101, China
  - School of Information and Control Engineering, Jilin Institute of Chemical Technology, Jilin, 132022, China
2. Bose S, Banerjee S, Kumar S, Saha A, Nandy D, Hazra S. Review of applications of artificial intelligence (AI) methods in crop research. J Appl Genet 2024; 65:225-240. PMID: 38216788; DOI: 10.1007/s13353-023-00826-z.
Abstract
Sophisticated, modern crop improvement techniques can bridge the gap in feeding an ever-increasing population. Artificial intelligence (AI) refers to the simulation of human intelligence in machines through computational algorithms, machine learning (ML), and deep learning (DL) techniques. These methods generalise patterns and relationships from historical data using various mathematical optimisation techniques, producing prediction models that facilitate the selection of superior genotypes. They are less resource-intensive and can solve problems through the analysis of large-scale datasets. ML for genomic selection (GS) uses high-throughput genotyping technologies to gather genetic information on a large number of markers across the genome; the prediction of GS models is based on the mathematical relation between genotypic and phenotypic data from the training population. ML techniques have also emerged as powerful tools for genome editing, analysing large-scale genomic data and facilitating the development of accurate prediction models. Precise phenotyping is a prerequisite for advancing crop breeding to solve agricultural production issues, and ML algorithms address it by generating predictive models from large-scale phenotypic datasets; DL models likewise show potential for reliable, precise phenotyping. This review provides a comprehensive overview of various ML and DL models, their applications, and their potential to enhance the efficiency, specificity, and safety of advanced crop improvement protocols such as genomic selection, genome editing, and phenotypic prediction to promote accelerated breeding.
Affiliation(s)
- Suvojit Bose
  - Department of Vegetables and Spice Crops, Uttar Banga Krishi Viswavidyalaya, Pundibari, Cooch Behar, 736165, West Bengal, India
- Soumya Kumar
  - School of Agricultural Sciences, JIS University, Kolkata, 700109, West Bengal, India
- Akash Saha
  - School of Agricultural Sciences, JIS University, Kolkata, 700109, West Bengal, India
- Debalina Nandy
  - School of Agricultural Sciences, JIS University, Kolkata, 700109, West Bengal, India
- Soham Hazra
  - Department of Agriculture, Brainware University, Barasat, 700125, West Bengal, India
3. Holan KL, White CH, Whitham SA. Application of a U-Net Neural Network to the Puccinia sorghi-Maize Pathosystem. Phytopathology 2024; 114:990-999. PMID: 38281155; DOI: 10.1094/phyto-09-23-0313-kc.
Abstract
Computer vision approaches to analyzing plant disease data can be both faster and more reliable than traditional manual methods. However, the need to manually annotate training data for most machine learning applications can present a challenge for pipeline development. Here, we describe a machine learning approach to quantify Puccinia sorghi incidence on maize leaves using U-Net convolutional neural network models. We analyzed several U-Net models trained with increasing amounts of image data, chosen randomly either from a large data pool or from a subset of disease time-course data. As the training dataset size increases, the models perform better, but the rate of improvement decreases. Additionally, a diverse training dataset can improve model performance and reduce the amount of annotated training data required for satisfactory performance. Models with as few as 48 whole-leaf training images are able to replicate the ground-truth results within our testing dataset. The final model, trained on our entire training dataset, performs similarly to the ground truth, with an intersection over union value of 0.5002 and an F1 score of 0.6669. This work illustrates the capacity of U-Nets to answer real-world plant pathology questions related to the quantification and estimation of plant disease symptoms. Copyright © 2024 The Author(s). This is an open access article distributed under the CC BY-NC-ND 4.0 International license.
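The IoU and F1 scores reported above are algebraically related (F1 = 2·IoU / (1 + IoU)); a minimal computation from binary masks, shown here as a generic sketch rather than the authors' code, might look like this:

```python
import numpy as np

def iou_f1(pred, target):
    # IoU (Jaccard) and F1 (Dice) for two binary masks of the same shape.
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 1.0
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    return iou, f1
```

Because F1 weights true positives twice, it always sits at or above IoU for the same masks (e.g. IoU 0.5002 pairs with F1 ≈ 0.667, matching the paper's figures).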
Affiliation(s)
- Katerina L Holan
  - Department of Plant Pathology, Entomology, and Microbiology, Iowa State University, Ames, IA 50014
- Charles H White
  - Cooperative Institute for Research in the Atmosphere, Colorado State University, Fort Collins, CO 80523
- Steven A Whitham
  - Department of Plant Pathology, Entomology, and Microbiology, Iowa State University, Ames, IA 50014
4. Liu H, Zhu H, Liu F, Deng L, Wu G, Han Z, Zhao L. From Organelle Morphology to Whole-Plant Phenotyping: A Phenotypic Detection Method Based on Deep Learning. Plants (Basel) 2024; 13:1177. PMID: 38732392; PMCID: PMC11085357; DOI: 10.3390/plants13091177.
Abstract
The analysis of plant phenotype parameters is closely related to breeding, so plant phenotype research has strong practical significance. This paper used deep learning to classify Arabidopsis thaliana from the macro (plant) to the micro (organelle) level. First, a multi-output model identifies Arabidopsis accession lines and uses regression to predict 22-day growth status. The experimental results showed that the model performed excellently at identifying Arabidopsis lines, with a classification accuracy of 99.92%, and performed well at predicting plant growth status, with a regression root mean square error (RMSE) of 1.536. Next, a new dataset was obtained by increasing the time interval between Arabidopsis images, and the model's performance was verified at different time intervals. Finally, the model was applied to classify Arabidopsis organelles to verify its generalizability. This research suggests that deep learning will broaden plant phenotype detection methods and will facilitate the design and development of a high-throughput information collection platform for plant phenotypes.
Affiliation(s)
- Hang Liu
  - College of Grassland Science, Qingdao Agricultural University, Qingdao 266109, China
- Hongfei Zhu
  - College of Computer Science and Technology, Tiangong University, Tianjin 300387, China
- Fei Liu
  - College of Science and Information, Qingdao Agricultural University, Qingdao 266109, China
- Limiao Deng
  - College of Science and Information, Qingdao Agricultural University, Qingdao 266109, China
- Guangxia Wu
  - College of Agronomy, Qingdao Agricultural University, Qingdao 266109, China
- Zhongzhi Han
  - College of Science and Information, Qingdao Agricultural University, Qingdao 266109, China
- Longgang Zhao
  - College of Grassland Science, Qingdao Agricultural University, Qingdao 266109, China
5. Dermail A, Mitchell M, Foster T, Fakude M, Chen YR, Suriharn K, Frei UK, Lübberstedt T. Haploid identification in maize. Front Plant Sci 2024; 15:1378421. PMID: 38708398; PMCID: PMC11067884; DOI: 10.3389/fpls.2024.1378421.
Abstract
Doubled haploid (DH) line production through in vivo maternal haploid induction is widely adopted in maize breeding programs. The established protocol for DH production includes four steps: in vivo maternal haploid induction, haploid identification, genome doubling of haploids, and self-fertilization of doubled haploids. Since modern haploid inducers still produce a relatively small proportion of haploids among undesirable hybrid kernels, haploid identification is typically laborious, costly, and time-consuming, making it the second major bottleneck in the DH technique. This manuscript reviews numerous methods for haploid identification from different approaches, including innate differences between haploids and diploids, biomarkers integrated into haploid inducers, and automated seed sorting. The phenotypic differentiation, genetic basis, advantages, and limitations of each biomarker system are highlighted. Several automated seed sorting approaches from different research groups are also discussed with regard to the platform or instrument used, sorting time, accuracy, advantages, limitations, and challenges ahead of commercialization. Past haploid selection focused on finding distinguishable marker systems, with effectiveness as the key criterion; current haploid selection adopts multiple reliable biomarker systems, with efficiency as the key criterion, while exploring possibilities for automation. Fully automated high-throughput haploid sorting, with robustness as the key criterion while retaining a feasible level of accuracy, is promising in the near future. The best option will be a system that balances the three major constraints (time, workforce, and budget) against the sorting scale.
Affiliation(s)
- Abil Dermail
  - Department of Agronomy, Faculty of Agriculture, Khon Kaen University, Khon Kaen, Thailand
- Mariah Mitchell
  - Department of Agronomy, Iowa State University, Ames, IA, United States
- Tyler Foster
  - Department of Agronomy, Iowa State University, Ames, IA, United States
- Mercy Fakude
  - Department of Agronomy, Iowa State University, Ames, IA, United States
- Yu-Ru Chen
  - Department of Agronomy, Iowa State University, Ames, IA, United States
- Khundej Suriharn
  - Department of Agronomy, Faculty of Agriculture, Khon Kaen University, Khon Kaen, Thailand
  - Plant Breeding Research Center for Sustainable Agriculture, Faculty of Agriculture, Khon Kaen University, Khon Kaen, Thailand
6. Chang-Brahim I, Koppensteiner LJ, Beltrame L, Bodner G, Saranti A, Salzinger J, Fanta-Jende P, Sulzbachner C, Bruckmüller F, Trognitz F, Samad-Zamini M, Zechner E, Holzinger A, Molin EM. Reviewing the essential roles of remote phenotyping, GWAS and explainable AI in practical marker-assisted selection for drought-tolerant winter wheat breeding. Front Plant Sci 2024; 15:1319938. PMID: 38699541; PMCID: PMC11064034; DOI: 10.3389/fpls.2024.1319938.
Abstract
Marker-assisted selection (MAS) plays a crucial role in crop breeding, improving the speed and precision of conventional breeding programmes by quickly and reliably identifying and selecting plants with desired traits. However, the efficacy of MAS depends on several prerequisites, with precise phenotyping being a key aspect of any plant breeding programme. Recent advancements in high-throughput remote phenotyping, facilitated by unmanned aerial vehicles coupled with machine learning, offer a non-destructive and efficient alternative to traditional, time-consuming, and labour-intensive methods. Furthermore, MAS relies on knowledge of marker-trait associations, commonly obtained through genome-wide association studies (GWAS), to understand complex traits such as drought tolerance, including yield components and phenology. However, GWAS has limitations that artificial intelligence (AI) has been shown to partially overcome. Additionally, AI and its explainable variants, which ensure transparency and interpretability, are increasingly being used as recognised problem-solving tools throughout the breeding process. Given these rapid technological advancements, this review provides an overview of the state-of-the-art methods and processes underlying each step of MAS, from phenotyping, genotyping and association analyses to the integration of explainable AI along the entire workflow. In this context, we specifically address the challenges and importance of breeding winter wheat for greater drought tolerance with stable yields, as regional droughts during critical developmental stages pose a threat to winter wheat production. Finally, we explore the transition from scientific progress to practical implementation and discuss ways to bridge the gap between cutting-edge developments and breeders, expediting MAS-based winter wheat breeding for drought tolerance.
Affiliation(s)
- Ignacio Chang-Brahim
  - Unit Bioresources, Center for Health & Bioresources, AIT Austrian Institute of Technology, Tulln, Austria
- Lorenzo Beltrame
  - Unit Assistive and Autonomous Systems, Center for Vision, Automation & Control, AIT Austrian Institute of Technology, Vienna, Austria
- Gernot Bodner
  - Department of Crop Sciences, Institute of Agronomy, University of Natural Resources and Life Sciences Vienna, Tulln, Austria
- Anna Saranti
  - Human-Centered AI Lab, Department of Forest- and Soil Sciences, Institute of Forest Engineering, University of Natural Resources and Life Sciences Vienna, Vienna, Austria
- Jules Salzinger
  - Unit Assistive and Autonomous Systems, Center for Vision, Automation & Control, AIT Austrian Institute of Technology, Vienna, Austria
- Phillipp Fanta-Jende
  - Unit Assistive and Autonomous Systems, Center for Vision, Automation & Control, AIT Austrian Institute of Technology, Vienna, Austria
- Christoph Sulzbachner
  - Unit Assistive and Autonomous Systems, Center for Vision, Automation & Control, AIT Austrian Institute of Technology, Vienna, Austria
- Felix Bruckmüller
  - Unit Assistive and Autonomous Systems, Center for Vision, Automation & Control, AIT Austrian Institute of Technology, Vienna, Austria
- Friederike Trognitz
  - Unit Bioresources, Center for Health & Bioresources, AIT Austrian Institute of Technology, Tulln, Austria
- Elisabeth Zechner
  - Verein zur Förderung einer nachhaltigen und regionalen Pflanzenzüchtung, Zwettl, Austria
- Andreas Holzinger
  - Human-Centered AI Lab, Department of Forest- and Soil Sciences, Institute of Forest Engineering, University of Natural Resources and Life Sciences Vienna, Vienna, Austria
- Eva M. Molin
  - Unit Bioresources, Center for Health & Bioresources, AIT Austrian Institute of Technology, Tulln, Austria
  - Human-Centered AI Lab, Department of Forest- and Soil Sciences, Institute of Forest Engineering, University of Natural Resources and Life Sciences Vienna, Vienna, Austria
7. Rodene E, Fernando GD, Piyush V, Ge Y, Schnable JC, Ghosh S, Yang J. Image Filtering to Improve Maize Tassel Detection Accuracy Using Machine Learning Algorithms. Sensors (Basel) 2024; 24:2172. PMID: 38610383; PMCID: PMC11013961; DOI: 10.3390/s24072172.
Abstract
Unmanned aerial vehicle (UAV)-based imagery has become widely used to collect time-series agronomic data, which are then incorporated into plant breeding programs to enhance crop improvement. To enable efficient analysis, in this study we leveraged an aerial photography dataset from a field trial of 233 inbred lines from the maize diversity panel to develop machine learning methods for automated tassel counts at the plot level. We employed both an object-based counting-by-detection (CBD) approach and a density-based counting-by-regression (CBR) approach. Using an image segmentation method that removes most of the pixels not associated with plant tassels, the results showed a dramatic improvement in the accuracy of object-based (CBD) detection, with the cross-validation prediction accuracy (r2) peaking at 0.7033 for a detector trained on images with a filter threshold of 90. The CBR approach showed the greatest accuracy when using unfiltered images, with a mean absolute error (MAE) of 7.99. However, when using bootstrapping, images filtered at a threshold of 90 showed a slightly better MAE (8.65) than the unfiltered images (8.90). These methods will allow for accurate estimates of flowering-related traits and help inform breeding decisions for crop improvement.
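The abstract does not specify the exact filtering criterion used to remove non-tassel pixels; the sketch below assumes a simple per-pixel intensity threshold purely as a hypothetical stand-in for that preprocessing step.

```python
import numpy as np

def filter_image(img, threshold=90):
    # Zero out pixels whose channel-mean intensity falls below `threshold`,
    # keeping only bright (tassel-like) regions. The paper's actual colour
    # criterion may differ; this is an illustrative stand-in.
    mean = img.mean(axis=-1)           # per-pixel mean over RGB channels
    mask = mean >= threshold           # True where the pixel is kept
    return img * mask[..., None]       # broadcast mask over channels
```

The idea is that suppressing background pixels before training reduces the detector's exposure to irrelevant texture, which is consistent with the CBD accuracy gains the study reports at a threshold of 90.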
Affiliation(s)
- Eric Rodene
  - Department of Agronomy and Horticulture, University of Nebraska-Lincoln, Lincoln, NE 68583, USA
  - Center for Plant Science Innovation, University of Nebraska-Lincoln, Lincoln, NE 68583, USA
- Ved Piyush
  - Department of Statistics, University of Nebraska-Lincoln, Lincoln, NE 68583, USA
- Yufeng Ge
  - Center for Plant Science Innovation, University of Nebraska-Lincoln, Lincoln, NE 68583, USA
  - Department of Biological Systems Engineering, University of Nebraska-Lincoln, Lincoln, NE 68583, USA
- James C. Schnable
  - Department of Agronomy and Horticulture, University of Nebraska-Lincoln, Lincoln, NE 68583, USA
  - Center for Plant Science Innovation, University of Nebraska-Lincoln, Lincoln, NE 68583, USA
- Souparno Ghosh
  - Department of Statistics, University of Nebraska-Lincoln, Lincoln, NE 68583, USA
- Jinliang Yang
  - Department of Agronomy and Horticulture, University of Nebraska-Lincoln, Lincoln, NE 68583, USA
  - Center for Plant Science Innovation, University of Nebraska-Lincoln, Lincoln, NE 68583, USA
8. Montesinos-López A, Crespo-Herrera L, Dreisigacker S, Gerard G, Vitale P, Saint Pierre C, Govindan V, Tarekegn ZT, Flores MC, Pérez-Rodríguez P, Ramos-Pulido S, Lillemo M, Li H, Montesinos-López OA, Crossa J. Deep learning methods improve genomic prediction of wheat breeding. Front Plant Sci 2024; 15:1324090. PMID: 38504889; PMCID: PMC10949530; DOI: 10.3389/fpls.2024.1324090.
Abstract
In the field of plant breeding, various machine learning models have been developed and studied to evaluate genomic prediction (GP) accuracy for unseen phenotypes, and deep learning has shown promise. However, most studies of deep learning in plant breeding have been limited to small datasets, and only a few have explored its application to moderate-sized datasets. In this study, we addressed this limitation by utilizing a moderately large dataset. We examined the performance of a deep learning (DL) model and compared it with the widely used genomic best linear unbiased prediction (GBLUP) model. The goal was to assess GP accuracy under a five-fold cross-validation strategy and when predicting complete environments with the DL model. The results revealed that the DL model outperformed the GBLUP model in GP accuracy for two of the five traits under five-fold cross-validation, with similar results for the remaining traits, indicating its superiority in predicting those specific traits. Furthermore, when predicting complete environments using the leave-one-environment-out (LOEO) approach, the DL model demonstrated competitive performance. It is worth noting that the DL model employed in this study extends a previously proposed multi-modal DL model that had been applied primarily to image data with small datasets. By utilizing a moderately large dataset, we were able to evaluate the performance and potential of the DL model in a more informative and challenging plant breeding scenario.
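GBLUP rests on a genomic relationship matrix (GRM) computed from marker data. A common construction (VanRaden's method 1), shown here as a generic sketch rather than the authors' pipeline, is:

```python
import numpy as np

def vanraden_grm(M):
    # Genomic relationship matrix (VanRaden method 1) from a marker matrix
    # M of 0/1/2 minor-allele counts, individuals in rows, markers in columns.
    p = M.mean(axis=0) / 2.0                 # estimated allele frequencies
    Z = M - 2.0 * p                          # centre genotypes by 2p
    denom = 2.0 * np.sum(p * (1.0 - p))      # scaling to an average diagonal near 1
    return Z @ Z.T / denom
```

In GBLUP this matrix replaces the pedigree relationship matrix in the mixed model, so prediction accuracy hinges entirely on how well the GRM captures genetic similarity among lines.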
Affiliation(s)
- Abelardo Montesinos-López
  - Departamento de Matemáticas, Centro Universitario de Ciencias Exactas e Ingenierías (CUCEI), Universidad de Guadalajara, Guadalajara, Jalisco, Mexico
- Leonardo Crespo-Herrera
  - International Maize and Wheat Improvement Center (CIMMYT), Texcoco, Estado de México, Mexico
- Susanna Dreisigacker
  - International Maize and Wheat Improvement Center (CIMMYT), Texcoco, Estado de México, Mexico
- Guillermo Gerard
  - International Maize and Wheat Improvement Center (CIMMYT), Texcoco, Estado de México, Mexico
- Paolo Vitale
  - International Maize and Wheat Improvement Center (CIMMYT), Texcoco, Estado de México, Mexico
- Carolina Saint Pierre
  - International Maize and Wheat Improvement Center (CIMMYT), Texcoco, Estado de México, Mexico
- Velu Govindan
  - International Maize and Wheat Improvement Center (CIMMYT), Texcoco, Estado de México, Mexico
- Moisés Chavira Flores
  - Instituto de Investigaciones en Matemáticas Aplicadas y Sistemas (IIMAS), Universidad Nacional Autónoma de México (UNAM), Ciudad Universitaria, Ciudad de México, Mexico
- Paulino Pérez-Rodríguez
  - Estudios del Desarrollo Rural, Economía, Estadística y Cómputo Aplicado, Colegio de Postgraduados, Texcoco, Estado de México, Mexico
- Sofía Ramos-Pulido
  - Departamento de Matemáticas, Centro Universitario de Ciencias Exactas e Ingenierías (CUCEI), Universidad de Guadalajara, Guadalajara, Jalisco, Mexico
- Morten Lillemo
  - Department of Plant Science, Norwegian University of Life Science (NMBU), Ås, Norway
- Huihui Li
  - State Key Laboratory of Crop Gene Resources and Breeding, Institute of Crop Sciences and CIMMYT China Office, Chinese Academy of Agricultural Sciences (CAAS), Beijing, China
- Jose Crossa
  - International Maize and Wheat Improvement Center (CIMMYT), Texcoco, Estado de México, Mexico
  - Estudios del Desarrollo Rural, Economía, Estadística y Cómputo Aplicado, Colegio de Postgraduados, Texcoco, Estado de México, Mexico
9. Ma N, Su Y, Yang L, Li Z, Yan H. Wheat Seed Detection and Counting Method Based on Improved YOLOv8 Model. Sensors (Basel) 2024; 24:1654. PMID: 38475189; DOI: 10.3390/s24051654.
Abstract
Wheat seed detection has important applications in calculating thousand-grain weight and in crop breeding. To solve the problems of seed accumulation, adhesion, and occlusion that can lead to low counting accuracy, while ensuring fast detection with high accuracy, we propose a wheat seed counting method to provide technical support for the development of an embedded seed-counter platform. This study proposes a lightweight real-time wheat seed detection model, YOLOv8-HD, based on YOLOv8. First, we introduce shared convolutional layers into the YOLOv8 detection head, reducing the number of parameters and achieving a lightweight design that improves runtime speed. Second, we incorporate a Vision Transformer with a deformable attention mechanism into the C2f module of the backbone network to enhance feature extraction and improve detection accuracy. The results show that in stacked scenes with impurities (severe seed adhesion), YOLOv8-HD achieves a mean average precision (mAP) of 77.6%, 9.1% higher than YOLOv8. Across all scenes, YOLOv8-HD achieves a mAP of 99.3%, 16.8% higher than YOLOv8. The memory size of the YOLOv8-HD model is 6.35 MB, approximately four-fifths that of YOLOv8, and its GFLOPs decrease by 16%. The inference time of YOLOv8-HD is 2.86 ms on GPU, lower than that of YOLOv8. Finally, extensive experiments showed that YOLOv8-HD outperforms other mainstream networks in terms of mAP, speed, and model size. YOLOv8-HD can therefore efficiently detect wheat seeds in various scenarios, providing technical support for the development of seed counting instruments.
Affiliation(s)
- Na Ma
  - College of Information Science and Engineering, Shanxi Agricultural University, Taigu District, Jinzhong 030801, China
- Yaxin Su
  - College of Information Science and Engineering, Shanxi Agricultural University, Taigu District, Jinzhong 030801, China
- Lexin Yang
  - College of Information Science and Engineering, Shanxi Agricultural University, Taigu District, Jinzhong 030801, China
- Zhongtao Li
  - College of Information Science and Engineering, Shanxi Agricultural University, Taigu District, Jinzhong 030801, China
- Hongwen Yan
  - College of Information Science and Engineering, Shanxi Agricultural University, Taigu District, Jinzhong 030801, China
10. Davidson SJ, Saggese T, Krajňáková J. Deep learning for automated segmentation and counting of hypocotyl and cotyledon regions in mature Pinus radiata D. Don somatic embryo images. Front Plant Sci 2024; 15:1322920. PMID: 38495377; PMCID: PMC10940415; DOI: 10.3389/fpls.2024.1322920.
Abstract
In commercial forestry and large-scale plant propagation, artificial intelligence techniques for automated somatic embryo analysis have emerged as a highly valuable tool. Notably, image segmentation plays a key role in the automated assessment of mature somatic embryos; however, the application of convolutional neural networks (CNNs) to the segmentation of mature somatic embryos has to date remained unexplored. In this study, we present a novel application of CNNs for delineating mature somatic conifer embryos from background and residual proliferating embryogenic tissue, and for differentiating morphological regions within the embryos. A semantic segmentation CNN was trained to assign pixels to cotyledon, hypocotyl, and background regions, while an instance segmentation network was trained to detect individual cotyledons for automated counting. The main dataset comprised 275 high-resolution microscopic images of mature Pinus radiata somatic embryos, with 42 images reserved for the testing and validation sets. Among the segmentation methods evaluated, semantic segmentation achieved the highest class-averaged performance, with F1 scores of 0.929 and 0.932 and IoU scores of 0.867 and 0.872 for the cotyledon and hypocotyl regions, respectively. The instance segmentation approach accurately detected and counted cotyledons, with a mean squared error (MSE) of 0.79 and a mean absolute error (MAE) of 0.60. The findings highlight the efficacy of neural-network-based methods in accurately segmenting somatic embryos and delineating individual morphological parts, providing additional information compared with previous segmentation techniques. This opens avenues for further analysis, including quantification of morphological characteristics in each region, enabling the identification of desirable embryo features in large-scale production systems. These advancements contribute to improved automated somatic embryogenesis systems, facilitating efficient and reliable plant propagation for commercial forestry applications.
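The MSE and MAE count metrics reported above are computed directly from predicted and ground-truth cotyledon counts; a minimal sketch (illustrative, not the authors' evaluation code):

```python
import numpy as np

def count_errors(pred_counts, true_counts):
    # Mean squared error and mean absolute error between predicted and
    # ground-truth per-embryo cotyledon counts.
    pred = np.asarray(pred_counts, dtype=float)
    true = np.asarray(true_counts, dtype=float)
    err = pred - true
    return np.mean(err ** 2), np.mean(np.abs(err))
```

An MAE of 0.60 means the counter is, on average, within about half a cotyledon of the true count, which is why it is the more interpretable of the two figures for a counting task.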
Affiliation(s)
- Sam J. Davidson
  - Data and Geospatial Intelligence, New Zealand Forest Research Institute (Scion), Christchurch, New Zealand
- Taryn Saggese
  - Forest Genetics and Biotechnology, New Zealand Forest Research Institute (Scion), Rotorua, New Zealand
- Jana Krajňáková
  - Forest Genetics and Biotechnology, New Zealand Forest Research Institute (Scion), Rotorua, New Zealand
Collapse
|
11
|
Zheng Y, Wang D, Jin N, Zhao X, Li F, Sun F, Dou G, Bai H. The improved stratified transformer for organ segmentation of Arabidopsis. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2024; 21:4669-4697. [PMID: 38549344 DOI: 10.3934/mbe.2024205] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/02/2024]
Abstract
Segmenting plant organs is a crucial step in extracting plant phenotypes. Despite advancements in point-based neural networks, the field of plant point cloud segmentation suffers from a lack of adequate datasets. In this study, we addressed this issue by generating Arabidopsis models using an L-system and proposing a surface-weighted sampling method. This approach enables automated point sampling and annotation, resulting in fully annotated point clouds. To create the Arabidopsis dataset, we employed Voxel Centroid Sampling and Random Sampling as point cloud downsampling methods, effectively reducing the number of points. To enhance the efficiency of semantic segmentation in plant point clouds, we introduced the Plant Stratified Transformer, an improved version of the Stratified Transformer that incorporates a Fast Downsample Layer. Our improved network was trained and tested on our dataset, and we compared its performance with PointNet++, PAConv, and the original Stratified Transformer network. For semantic segmentation, our improved network achieved a mean Precision, Recall, F1-score, and IoU of 84.20%, 83.03%, 83.61%, and 73.11%, respectively. It outperformed PointNet++ and PAConv and performed similarly to the original network. Regarding efficiency, the training time and inference time were 714.3 and 597.9 ms, respectively, reductions of 320.9 and 271.8 ms compared with the original network. The improved network significantly accelerated the feeding of point clouds into the network while maintaining segmentation performance. We demonstrated the potential of virtual plants and deep learning methods for rapidly extracting plant phenotypes, contributing to the advancement of plant phenotype research.
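Voxel centroid sampling, one of the two downsampling methods named above, replaces all points that fall into the same voxel with their centroid. A minimal sketch under that standard definition (implementation details are assumptions, not the authors' code):

```python
# Illustrative voxel-centroid downsampling for a 3D point cloud.
from collections import defaultdict

def voxel_centroid_sample(points, voxel_size):
    """Bucket points into a voxel grid; keep one centroid per voxel."""
    buckets = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        buckets[key].append((x, y, z))
    # Average each coordinate over the points sharing a voxel.
    return [
        tuple(sum(c) / len(pts) for c in zip(*pts))
        for pts in buckets.values()
    ]

pts = [(0.1, 0.1, 0.0), (0.3, 0.2, 0.1), (2.1, 0.0, 0.0)]
down = voxel_centroid_sample(pts, voxel_size=1.0)  # 3 points -> 2
```

The first two points share a voxel and collapse to their centroid, while the isolated third point is kept; larger `voxel_size` values discard more detail.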
Affiliation(s)
- Yuhui Zheng
- College of Mechanical and Electrical Engineering, Qingdao Agricultural University, Qingdao 266109, China
- Dongwei Wang
- College of Mechanical and Electrical Engineering, Qingdao Agricultural University, Qingdao 266109, China
- Ning Jin
- Graduate School, Shenyang Jianzhu University, Shenyang 110168, China
- Xueguan Zhao
- Beijing PAIDE Science and Technology Development Co., Ltd., Beijing 100097, China
- Fengmei Li
- College of Food Science and Engineering, Qingdao Agricultural University, Qingdao 266109, China
- Fengbo Sun
- China Zhongxin Construction Engineering Co., Ltd., Qingdao 266205, China
- Gang Dou
- Weichai Lovol Intelligent Agricultural Technology Co., Ltd., Weifang 261000, China
- Haoran Bai
- College of Mechanical and Electrical Engineering, Qingdao Agricultural University, Qingdao 266109, China
12
Khoroshevsky F, Zhou K, Chemweno S, Edan Y, Bar-Hillel A, Hadar O, Rewald B, Baykalov P, Ephrath JE, Lazarovitch N. Automatic Root Length Estimation from Images Acquired In Situ without Segmentation. PLANT PHENOMICS (WASHINGTON, D.C.) 2024; 6:0132. [PMID: 38230354 PMCID: PMC10790720 DOI: 10.34133/plantphenomics.0132] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/12/2023] [Accepted: 12/12/2023] [Indexed: 01/18/2024]
Abstract
Image-based root phenotyping technologies, including the minirhizotron (MR), have expanded our understanding of in situ root responses to changing environmental conditions. The conventional manual methods used to analyze MR images are time-consuming, limiting their implementation. This study presents an adaptation of our previously developed convolutional neural network-based models to estimate the total (cumulative) root length (TRL) per MR image without requiring segmentation. Training data were derived from manual annotations in Rootfly, commonly used software for MR image analysis. We compared TRL estimation with 2 models: a regression-based model and a detection-based model that detects annotated points along the roots. Notably, the detection-based model can assist in examining human annotations by providing a visual inspection of roots in MR images. The models were trained and tested with 4,015 images acquired using 2 MR system types (manual and automated) and from 4 crop species (corn, pepper, melon, and tomato) grown under various abiotic stresses. These datasets are made publicly available as part of this publication. The coefficients of determination (R2) between the measurements made using Rootfly and the suggested TRL estimation models were 0.929 to 0.986 for the main datasets, demonstrating that this tool is accurate and robust. Additional analyses examined the effects of (a) the data acquisition system, and thus image quality, on the models' performance, (b) automated differentiation between images with and without roots, and (c) the use of the transfer learning technique. These approaches can support precision agriculture by providing real-time root growth information.
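The R2 values quoted above compare manual Rootfly measurements against model estimates; the coefficient of determination is computed from residual and total sums of squares. A minimal sketch with toy numbers (not the study's data):

```python
# Hedged sketch: coefficient of determination (R^2) between reference
# measurements and model estimates of total root length per image.

def r_squared(measured, estimated):
    """R^2 = 1 - SS_res / SS_tot, with the mean of `measured` as baseline."""
    mean_m = sum(measured) / len(measured)
    ss_res = sum((m - e) ** 2 for m, e in zip(measured, estimated))
    ss_tot = sum((m - mean_m) ** 2 for m in measured)
    return 1 - ss_res / ss_tot

# Toy root lengths (e.g., mm per image): manual vs. model.
measured  = [10.0, 20.0, 30.0, 40.0]
estimated = [11.0, 19.0, 31.0, 39.0]
r2 = r_squared(measured, estimated)  # 0.992
```

An R2 near 1 means the model explains almost all of the variance in the manual measurements, which is how the 0.929-0.986 range should be read.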
Affiliation(s)
- Faina Khoroshevsky
- Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva, Israel
- Kaining Zhou
- The Jacob Blaustein Center for Scientific Cooperation, The Jacob Blaustein Institutes for Desert Research, Ben-Gurion University of the Negev, Sede Boqer, Israel
- French Associates Institute for Agriculture and Biotechnology of Drylands, The Jacob Blaustein Institutes for Desert Research, Ben-Gurion University of the Negev, Sede Boqer, Israel
- Sharon Chemweno
- The Albert Katz International School for Desert Studies, The Jacob Blaustein Institutes for Desert Research, Ben-Gurion University of the Negev, Sede Boqer, Israel
- French Associates Institute for Agriculture and Biotechnology of Drylands, The Jacob Blaustein Institutes for Desert Research, Ben-Gurion University of the Negev, Sede Boqer, Israel
- Yael Edan
- Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva, Israel
- Aharon Bar-Hillel
- Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva, Israel
- Ofer Hadar
- Department of Communication Systems Engineering, School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer Sheva, Israel
- Boris Rewald
- Institute of Forest Ecology, Department of Forest and Soil Sciences, University of Natural Resources and Life Sciences, Vienna (BOKU), Vienna, Austria
- Faculty of Forestry and Wood Technology, Mendel University in Brno, Brno, Czech Republic
- Pavel Baykalov
- Institute of Forest Ecology, Department of Forest and Soil Sciences, University of Natural Resources and Life Sciences, Vienna (BOKU), Vienna, Austria
- Vienna Scientific Instruments GmbH, Alland, Austria
- Jhonathan E. Ephrath
- French Associates Institute for Agriculture and Biotechnology of Drylands, The Jacob Blaustein Institutes for Desert Research, Ben-Gurion University of the Negev, Sede Boqer, Israel
- Naftali Lazarovitch
- French Associates Institute for Agriculture and Biotechnology of Drylands, The Jacob Blaustein Institutes for Desert Research, Ben-Gurion University of the Negev, Sede Boqer, Israel
13
Artemenko NV, Genaev MA, Epifanov RU, Komyshev EG, Kruchinina YV, Koval VS, Goncharov NP, Afonnikov DA. Image-based classification of wheat spikes by glume pubescence using convolutional neural networks. FRONTIERS IN PLANT SCIENCE 2024; 14:1336192. [PMID: 38283969 PMCID: PMC10811101 DOI: 10.3389/fpls.2023.1336192] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/10/2023] [Accepted: 12/20/2023] [Indexed: 01/30/2024]
Abstract
Introduction: Pubescence is an important phenotypic trait observed in both vegetative and generative plant organs. Pubescent plants demonstrate increased resistance to various environmental stresses such as drought, low temperatures, and pests. Pubescence serves as a significant morphological marker and aids in selecting stress-resistant cultivars, particularly in wheat. In wheat, pubescence is visible on leaves, leaf sheaths, glumes, and nodes. For glumes, the presence of pubescence plays a pivotal role in classification: it supplements other spike characteristics, aiding in distinguishing between different varieties within the wheat species. The determination of pubescence typically involves visual analysis by an expert. However, methods that do not use a binocular loupe tend to be subjective, while employing additional equipment is labor-intensive. This paper proposes an integrated approach to determine the presence of glume pubescence in spike images captured under laboratory conditions using a digital camera and convolutional neural networks. Methods: Initially, image segmentation is conducted to extract the contour of the spike body, followed by cropping of the spike images to an equal size. These images are then classified by glume pubescence (pubescent/glabrous) using various convolutional neural network architectures (ResNet-18, EfficientNet-B0, and EfficientNet-B1). The networks were trained and tested on a dataset comprising 9,719 spike images. Results: For segmentation, the U-Net model with an EfficientNet-B1 encoder was chosen, achieving a segmentation accuracy of IoU = 0.947 for the spike body and 0.777 for awns. The best-performing classification model for glume pubescence used the EfficientNet-B1 architecture. On the test sample, the model exhibited prediction accuracy of F1 = 0.85 and AUC = 0.96, while on the holdout sample it showed F1 = 0.84 and AUC = 0.89. Additionally, the study investigated the relationship between image scale, artificial distortions, and model prediction performance, revealing that higher magnification and smaller distortions yielded more accurate predictions of glume pubescence.
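The AUC values reported for the pubescence classifier can be computed from raw classifier scores without fixing a threshold, using the standard Mann-Whitney formulation: the probability that a random positive outscores a random negative. A minimal sketch (names and scores are illustrative):

```python
# Hedged sketch: AUC as the Mann-Whitney rank statistic over
# classifier scores, ties counted as one half.

def auc(scores_pos, scores_neg):
    """P(score of random positive > score of random negative)."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy scores: pubescent (positive) vs. glabrous (negative) spikes.
pos = [0.9, 0.8, 0.4]
neg = [0.7, 0.3, 0.2]
score = auc(pos, neg)  # 8/9
```

This O(n^2) form is fine for illustration; production code would sort once and use ranks.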
Affiliation(s)
- Nikita V Artemenko
- Institute of Cytology and Genetics of the Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia
- Department of Mathematics and Mechanics, Novosibirsk State University, Novosibirsk, Russia
- Mikhail A Genaev
- Institute of Cytology and Genetics of the Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia
- Kurchatov Center for Genome Research, Institute of Cytology and Genetics of the Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia
- Rostislav Ui Epifanov
- Department of Mathematics and Mechanics, Novosibirsk State University, Novosibirsk, Russia
- Evgeny G Komyshev
- Institute of Cytology and Genetics of the Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia
- Yulia V Kruchinina
- Institute of Cytology and Genetics of the Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia
- Kurchatov Center for Genome Research, Institute of Cytology and Genetics of the Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia
- Vasiliy S Koval
- Institute of Cytology and Genetics of the Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia
- Kurchatov Center for Genome Research, Institute of Cytology and Genetics of the Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia
- Nikolay P Goncharov
- Institute of Cytology and Genetics of the Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia
- Dmitry A Afonnikov
- Institute of Cytology and Genetics of the Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia
- Department of Mathematics and Mechanics, Novosibirsk State University, Novosibirsk, Russia
- Kurchatov Center for Genome Research, Institute of Cytology and Genetics of the Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia
14
Wei Y, Feng Y, Zhou X, Wang G. Attention-aided lightweight networks friendly to smart weeding robot hardware resources for crops and weeds semantic segmentation. FRONTIERS IN PLANT SCIENCE 2023; 14:1320448. [PMID: 38186601 PMCID: PMC10768065 DOI: 10.3389/fpls.2023.1320448] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/12/2023] [Accepted: 12/04/2023] [Indexed: 01/09/2024]
Abstract
Weed control is a global issue of great concern, and smart weeding robots equipped with advanced vision algorithms can perform efficient and precise weed control. Furthermore, the application of smart weeding robots has great potential for building environmentally friendly agriculture and saving human and material resources. However, most networks used in intelligent weeding robots tend to prioritize segmentation accuracy alone, disregarding the hardware constraints of embedded devices, and generalized lightweight networks are unsuitable for crop and weed segmentation tasks. Therefore, we propose an attention-aided lightweight network for crop and weed semantic segmentation, with a parameter count of 0.11M and a floating-point operations (FLOPs) count of 0.24G. Our network is based on an encoder-decoder structure, incorporating attention modules to ensure both fast inference speed and accurate segmentation while utilizing fewer hardware resources. The dual attention block is employed to explore the potential relationships within the dataset, providing powerful regularization and enhancing the generalization ability of the attention mechanism; it also facilitates information integration between channels. To enhance the acquisition and interaction of local and global semantic information, we utilize the refinement dilated conv block instead of 2D convolution within the deep network. This substitution effectively reduces the number and complexity of network parameters and improves the computation rate. To preserve spatial information, we introduce the spatial connectivity attention block, which not only acquires more precise spatial information but also uses shared-weight convolution to handle multi-stage feature maps, further reducing network complexity. The segmentation performance of the proposed network is evaluated on three publicly available datasets: the BoniRob dataset, the Rice Seeding dataset, and the WeedMap dataset. Additionally, we measure the inference time and frames per second (FPS) on the NVIDIA Jetson Xavier NX embedded system, obtaining 18.14 ms and 55.1 FPS. Experimental results demonstrate that our network maintains better inference speed on resource-constrained embedded systems and has competitive segmentation performance.
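Budgets such as 0.11M parameters and 0.24G FLOPs are obtained by summing standard per-layer formulas over the network. A back-of-envelope helper for one 2D convolution layer, under the common convention that one multiply-accumulate counts as two FLOPs (this is an illustrative sketch, not the authors' accounting, and FLOPs conventions vary between papers):

```python
# Hedged sketch: parameter and FLOPs cost of a single k x k 2D
# convolution with c_in input and c_out output channels, producing
# an h_out x w_out feature map (stride/padding already reflected
# in h_out and w_out).

def conv2d_cost(c_in, c_out, k, h_out, w_out, bias=True):
    """Return (params, flops) for one conv layer; 1 MAC = 2 FLOPs."""
    params = k * k * c_in * c_out + (c_out if bias else 0)
    flops = 2 * k * k * c_in * c_out * h_out * w_out
    return params, flops

# Example: 3 -> 16 channels, 3x3 kernel, 64x64 output map.
params, flops = conv2d_cost(c_in=3, c_out=16, k=3, h_out=64, w_out=64)
```

Summing such terms over every layer (plus normalization and attention modules) yields the totals a lightweight-network paper reports.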
Affiliation(s)
- Yifan Wei
- College of Computer Science and Engineering, Changchun University of Technology, Changchun, Jilin, China
- Yuncong Feng
- College of Computer Science and Engineering, Changchun University of Technology, Changchun, Jilin, China
- Artificial Intelligence Research Institute, Changchun University of Technology, Changchun, Jilin, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, China
- Xiaotang Zhou
- College of Computer Science and Engineering, Changchun University of Technology, Changchun, Jilin, China
- Artificial Intelligence Research Institute, Changchun University of Technology, Changchun, Jilin, China
- Guishen Wang
- College of Computer Science and Engineering, Changchun University of Technology, Changchun, Jilin, China
- Artificial Intelligence Research Institute, Changchun University of Technology, Changchun, Jilin, China
15
Yamazaki A, Takezawa A, Nagasaka K, Motoki K, Nishimura K, Nakano R, Nakazaki T. A simple method for measuring pollen germination rate using machine learning. PLANT REPRODUCTION 2023; 36:355-364. [PMID: 37278944 DOI: 10.1007/s00497-023-00472-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Accepted: 05/30/2023] [Indexed: 06/07/2023]
Abstract
The pollen germination rate decreases under various abiotic stresses, such as high-temperature stress, and this decrease is one cause of inhibited plant reproduction. Thus, measuring the pollen germination rate is vital for understanding the reproductive ability of plants. However, measuring the pollen germination rate requires considerable labor when counting pollen. Therefore, we used the YOLOv5 machine learning package to perform transfer learning and constructed a model that can separately detect germinated and non-germinated pollen. Pollen images of the chili pepper, Capsicum annuum, were used to create this model. Training with images 640 pixels wide produced a more accurate model than training with images 320 pixels wide. This model could estimate, with high accuracy, the pollen germination rate of a previously studied F2 population of C. chinense. In addition, significantly associated gene regions previously detected in genome-wide association studies of this F2 population could again be detected using the pollen germination rate predicted by this model as a trait. Moreover, the model detected rose, tomato, radish, and strawberry pollen grains with accuracy similar to that for chili pepper. The pollen germination rate could thus be estimated even for plants other than chili pepper, probably because pollen images are similar among different plant species. We obtained a model that can support the identification of genes related to pollen germination rate through genetic analyses in many plants.
Affiliation(s)
- Akira Yamazaki
- Faculty of Agriculture, Kindai University, Nara, 631-8505, Japan.
- Ao Takezawa
- Graduate School of Agriculture, Kyoto University, Kizugawa, 619-0218, Japan
- Kyoka Nagasaka
- Graduate School of Agriculture, Kyoto University, Kizugawa, 619-0218, Japan
- Ko Motoki
- Graduate School of Agriculture, Kyoto University, Kizugawa, 619-0218, Japan
- Kazusa Nishimura
- Graduate School of Agriculture, Kyoto University, Kizugawa, 619-0218, Japan
- Ryohei Nakano
- Graduate School of Agriculture, Kyoto University, Kizugawa, 619-0218, Japan
- Tetsuya Nakazaki
- Graduate School of Agriculture, Kyoto University, Kizugawa, 619-0218, Japan
16
Ming L, Zabala-Gutierrez I, Rodríguez-Sevilla P, Retama JR, Jaque D, Marin R, Ximendes E. Neural Networks Push the Limits of Luminescence Lifetime Nanosensing. ADVANCED MATERIALS (DEERFIELD BEACH, FLA.) 2023; 35:e2306606. [PMID: 37787978 DOI: 10.1002/adma.202306606] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/06/2023] [Revised: 09/18/2023] [Indexed: 10/04/2023]
Abstract
Luminescence lifetime-based sensing is ideally suited to monitoring biological systems due to its minimal invasiveness and remote working principle. Yet, its applicability is limited under conditions of low signal-to-noise ratio (SNR) induced by, e.g., short exposure times and the presence of opaque tissues. Herein, this limitation is overcome by applying a U-shaped convolutional neural network (U-NET) to improve luminescence lifetime estimation under conditions of extremely low SNR. Specifically, the prowess of the U-NET is showcased in the context of luminescence lifetime thermometry, achieving more precise thermal readouts using Ag2S nanothermometers. Compared with the traditional analysis methods of decay curve fitting and integration, the U-NET can extract average lifetimes more precisely and consistently regardless of the SNR value. The improvement in sensing performance achieved with the U-NET is demonstrated in two experiments characterized by extreme measurement conditions: thermal monitoring of free-falling droplets, and monitoring of thermal transients in suspended droplets through an opaque medium. These results broaden the applicability of luminescence lifetime-based sensing in fields including in vivo experimentation and microfluidics while, hopefully, spurring further research on the implementation of machine learning (ML) in luminescence sensing.
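One of the traditional baselines mentioned above, integration of the decay curve, estimates the average lifetime as the intensity-weighted mean of time: for a single-exponential decay I(t) = e^(-t/tau), the ratio of the integral of t*I(t) to the integral of I(t) equals tau. A minimal numeric sketch on synthetic data (illustrative only, not the paper's pipeline):

```python
# Hedged sketch: average lifetime from a sampled decay curve by
# discrete integration (intensity-weighted mean of time).
import math

def average_lifetime(times, intensities):
    """sum(t_k * I_k) / sum(I_k) over the sampled decay curve."""
    num = sum(t * i for t, i in zip(times, intensities))
    den = sum(intensities)
    return num / den

tau_true = 2.0
ts = [k * 0.01 for k in range(5000)]        # 0 .. 50, step 0.01
ys = [math.exp(-t / tau_true) for t in ts]  # noiseless exponential decay
tau_est = average_lifetime(ts, ys)          # recovers tau (~2.0)
```

With noise added and the exposure window shortened, this estimator degrades quickly, which is the low-SNR regime the U-NET is designed to handle.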
Affiliation(s)
- Liyan Ming
- Nanomaterials for Bioimaging Group (nanoBIG), Departamento de Física de Materiales, Facultad de Ciencias, Autonomous University of Madrid, Madrid, 28049, Spain
- Departamento de Química en Ciencias Farmacéuticas, Complutense University of Madrid, Madrid, 28040, Spain
- Irene Zabala-Gutierrez
- Nanomaterials for Bioimaging Group (nanoBIG), Instituto Ramón y Cajal de Investigación Sanitaria (IRYCIS), Hospital Ramón y Cajal, Madrid, 28034, Spain
- Paloma Rodríguez-Sevilla
- Nanomaterials for Bioimaging Group (nanoBIG), Departamento de Física de Materiales, Facultad de Ciencias, Autonomous University of Madrid, Madrid, 28049, Spain
- Jorge Rubio Retama
- Nanomaterials for Bioimaging Group (nanoBIG), Instituto Ramón y Cajal de Investigación Sanitaria (IRYCIS), Hospital Ramón y Cajal, Madrid, 28034, Spain
- Daniel Jaque
- Nanomaterials for Bioimaging Group (nanoBIG), Departamento de Física de Materiales, Facultad de Ciencias, Autonomous University of Madrid, Madrid, 28049, Spain
- Departamento de Química en Ciencias Farmacéuticas, Complutense University of Madrid, Madrid, 28040, Spain
- Institute for Advanced Research in Chemical Sciences (IAdChem), Autonomous University of Madrid, Madrid, 28049, Spain
- Riccardo Marin
- Nanomaterials for Bioimaging Group (nanoBIG), Departamento de Física de Materiales, Facultad de Ciencias, Autonomous University of Madrid, Madrid, 28049, Spain
- Institute for Advanced Research in Chemical Sciences (IAdChem), Autonomous University of Madrid, Madrid, 28049, Spain
- Erving Ximendes
- Nanomaterials for Bioimaging Group (nanoBIG), Departamento de Física de Materiales, Facultad de Ciencias, Autonomous University of Madrid, Madrid, 28049, Spain
- Departamento de Química en Ciencias Farmacéuticas, Complutense University of Madrid, Madrid, 28040, Spain
17
Carlier A, Dandrifosse S, Dumont B, Mercatoris B. Comparing CNNs and PLSr for estimating wheat organs biophysical variables using proximal sensing. FRONTIERS IN PLANT SCIENCE 2023; 14:1204791. [PMID: 38053768 PMCID: PMC10694231 DOI: 10.3389/fpls.2023.1204791] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/14/2023] [Accepted: 10/30/2023] [Indexed: 12/07/2023]
Abstract
Estimation of biophysical vegetation variables is of interest for diverse applications, such as monitoring crop growth and health or predicting yield. However, remote estimation of these variables remains challenging due to the inherent complexity of plant architecture, biology, and the surrounding environment, and due to the need for feature engineering. Recent advancements in deep learning, particularly convolutional neural networks (CNNs), offer promising solutions to address this challenge. Unfortunately, the limited availability of labeled data has hindered the exploration of CNNs for regression tasks, especially in the context of crop phenotyping. In this study, the effectiveness of various CNN models in predicting wheat dry matter, nitrogen uptake, and nitrogen concentration from RGB and multispectral images taken from tillering to maturity was examined. To overcome the scarcity of labeled data, a training pipeline was devised involving transfer learning, pseudo-labeling of unlabeled data, and temporal relationship correction. The results demonstrated that CNN models benefit significantly from the pseudo-labeling method, while the machine learning approach employing PLSr did not show comparable performance. Among the models evaluated, EfficientNetB4 achieved the highest accuracy for predicting above-ground biomass, with an R² value of 0.92. In contrast, ResNet50 demonstrated superior performance in predicting LAI, nitrogen uptake, and nitrogen concentration, with R² values of 0.82, 0.73, and 0.80, respectively. Moreover, the study explored multi-output models to predict the distribution of dry matter and nitrogen uptake between the stem, inferior leaves, flag leaf, and ear. The findings indicate that CNNs hold promise as accessible tools for phenotyping quantitative biophysical variables of crops. However, further research is required to harness their full potential.
Affiliation(s)
- Alexis Carlier
- Biosystems Dynamics and Exchanges, TERRA Teaching and Research Center, Gembloux Agro-Bio Tech, University of Liège, Gembloux, Belgium
- Sébastien Dandrifosse
- Biosystems Dynamics and Exchanges, TERRA Teaching and Research Center, Gembloux Agro-Bio Tech, University of Liège, Gembloux, Belgium
- Benjamin Dumont
- Plant Sciences, TERRA Teaching and Research Center, Gembloux Agro-Bio Tech, University of Liège, Gembloux, Belgium
- Benoit Mercatoris
- Biosystems Dynamics and Exchanges, TERRA Teaching and Research Center, Gembloux Agro-Bio Tech, University of Liège, Gembloux, Belgium
18
Wen T, Li JH, Wang Q, Gao YY, Hao GF, Song BA. Thermal imaging: The digital eye facilitates high-throughput phenotyping traits of plant growth and stress responses. THE SCIENCE OF THE TOTAL ENVIRONMENT 2023; 899:165626. [PMID: 37481085 DOI: 10.1016/j.scitotenv.2023.165626] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/04/2023] [Revised: 07/13/2023] [Accepted: 07/16/2023] [Indexed: 07/24/2023]
Abstract
Plant phenotyping is important for understanding how plants cope with environmental changes and for ensuring plant health. Imaging techniques are perceived as the most critical and reliable tools for studying plant phenotypes, and thermal imaging has opened up new opportunities for nondestructive imaging of plant phenotypes. However, a comprehensive summary of thermal imaging in plant phenotyping is still lacking. Here we discuss the progress and future prospects of thermal imaging for assessing plant growth and stress responses. First, we classify thermal imaging into ground-based and aerial platforms based on their adaptability to different experimental environments (laboratory, greenhouse, and field), which makes it convenient to collect phenotypic information of different dimensions. Second, to enhance the efficiency of thermal image processing, automatic algorithms based on deep learning are employed instead of traditional manual methods, greatly reducing the time cost of experiments. Considering its ease of implementation, handling, and instant response, thermal imaging has been widely used in research on environmental stress, crop yield, and seed vigor. Thermal imaging can detect thermal energy dissipation caused by living organisms (e.g., pests, viruses, bacteria, fungi, and oomycetes), enabling early disease diagnosis. It also recognizes changes in leaf surface temperatures resulting from reduced transpiration rates caused by nutrient deficiency, drought, salinity, or freezing. Furthermore, thermal imaging predicts crop yield under different water states and forecasts the viability of dormant seeds after water absorption by monitoring temperature changes in the seeds. This work will assist biologists and agronomists in studying plant phenotypes and serve as a guide for breeders to develop high-yielding, stress-tolerant, and superior crops.
Affiliation(s)
- Ting Wen
- National Key Laboratory of Green Pesticide, State Key Laboratory Breeding Base of Green Pesticide and Agricultural Bioengineering, Key Laboratory of Green Pesticide and Agricultural Bioengineering, Ministry of Education, Center for Research and Development of Fine Chemicals, Guizhou University, Guiyang 550025, PR China
- Jian-Hong Li
- National Key Laboratory of Green Pesticide, State Key Laboratory Breeding Base of Green Pesticide and Agricultural Bioengineering, Key Laboratory of Green Pesticide and Agricultural Bioengineering, Ministry of Education, Center for Research and Development of Fine Chemicals, Guizhou University, Guiyang 550025, PR China
- Qi Wang
- State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, PR China
- Yang-Yang Gao
- National Key Laboratory of Green Pesticide, State Key Laboratory Breeding Base of Green Pesticide and Agricultural Bioengineering, Key Laboratory of Green Pesticide and Agricultural Bioengineering, Ministry of Education, Center for Research and Development of Fine Chemicals, Guizhou University, Guiyang 550025, PR China
- Ge-Fei Hao
- National Key Laboratory of Green Pesticide, State Key Laboratory Breeding Base of Green Pesticide and Agricultural Bioengineering, Key Laboratory of Green Pesticide and Agricultural Bioengineering, Ministry of Education, Center for Research and Development of Fine Chemicals, Guizhou University, Guiyang 550025, PR China; Key Laboratory of Pesticide & Chemical Biology, Ministry of Education, College of Chemistry, Central China Normal University, Wuhan 430079, China
- Bao-An Song
- National Key Laboratory of Green Pesticide, State Key Laboratory Breeding Base of Green Pesticide and Agricultural Bioengineering, Key Laboratory of Green Pesticide and Agricultural Bioengineering, Ministry of Education, Center for Research and Development of Fine Chemicals, Guizhou University, Guiyang 550025, PR China
19
Sharma N, Raman H, Wheeler D, Kalenahalli Y, Sharma R. Data-driven approaches to improve water-use efficiency and drought resistance in crop plants. PLANT SCIENCE : AN INTERNATIONAL JOURNAL OF EXPERIMENTAL PLANT BIOLOGY 2023; 336:111852. [PMID: 37659733 DOI: 10.1016/j.plantsci.2023.111852] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/11/2022] [Revised: 08/23/2023] [Accepted: 08/29/2023] [Indexed: 09/04/2023]
Abstract
With the increasing population, there is a pressing demand for food, feed and fibre, while changing climatic conditions pose severe challenges for agricultural production worldwide. Water is the lifeline for crop production; thus, enhancing crop water-use efficiency (WUE) and improving drought resistance in crop varieties are crucial for overcoming these challenges. Genetically-driven improvements in yield, WUE and drought tolerance traits can buffer the worst effects of climate change on crop production in dry areas. While traditional crop breeding approaches have delivered impressive results in increasing yield, the methods remain time-consuming and are often limited by the existing allelic variation present in the germplasm. Significant advances in breeding and high-throughput omics technologies, in parallel with smart agriculture practices, have created avenues to dramatically speed up the process of trait improvement by leveraging the vast volumes of genomic and phenotypic data. For example, individual genome and pan-genome assemblies, along with transcriptomic, metabolomic and proteomic data from germplasm collections characterised at phenotypic levels, could be utilised to identify marker-trait associations and superior haplotypes for crop genetic improvement. In addition, these omics approaches enable the identification of genes involved in pathways leading to the expression of a trait, thereby providing an understanding of the genetic, physiological and biochemical basis of trait variation. These data-driven gene discoveries and validation approaches are essential for crop improvement pipelines, including genomic breeding, speed breeding and gene editing. Herein, we provide an overview of prospects presented by big data-driven approaches (including artificial intelligence and machine learning) to harness new genetic gains for breeding programs and develop drought-tolerant crop varieties with favourable WUE and high-yield potential traits.
Affiliation(s)
- Niharika Sharma
- NSW Department of Primary Industries, Orange Agricultural Institute, Orange, NSW 2800, Australia.
- Harsh Raman
- NSW Department of Primary Industries, Wagga Wagga Agricultural Institute, Wagga Wagga, NSW 2650, Australia
- David Wheeler
- NSW Department of Primary Industries, Orange Agricultural Institute, Orange, NSW 2800, Australia
- Yogendra Kalenahalli
- International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), Patancheru, Hyderabad, Telangana 502324, India
- Rita Sharma
- Department of Biological Sciences, BITS Pilani, Pilani Campus, Rajasthan 333031, India
20
Quiñones R, Samal A, Das Choudhury S, Muñoz-Arriola F. OSC-CO2: coattention and cosegmentation framework for plant state change with multiple features. FRONTIERS IN PLANT SCIENCE 2023; 14:1211409. [PMID: 38023863 PMCID: PMC10644038 DOI: 10.3389/fpls.2023.1211409] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/24/2023] [Accepted: 10/06/2023] [Indexed: 12/01/2023]
Abstract
Cosegmentation and coattention are extensions of traditional segmentation methods aimed at detecting a common object (or objects) in a group of images. Current cosegmentation and coattention methods are ineffective for objects, such as plants, that change their morphological state while being captured in different modalities and views. The Object State Change using Coattention-Cosegmentation (OSC-CO2) is an end-to-end unsupervised deep-learning framework that enhances traditional segmentation techniques by processing, analyzing, selecting, and combining suitable segmentation results that may contain most of the target object's pixels, and then producing a final segmented image. The framework leverages coattention-based convolutional neural networks (CNNs) and cosegmentation-based dense Conditional Random Fields (CRFs) to address segmentation accuracy in high-dimensional plant imagery with evolving plant objects. The efficacy of OSC-CO2 is demonstrated using plant growth sequences imaged with infrared, visible, and fluorescence cameras in multiple views on a remote-sensing, high-throughput phenotyping platform, and is evaluated using Jaccard index and precision measures. We also introduce CosegPP+, a structured dataset that provides quantitative information on the efficacy of the framework. Results show that OSC-CO2 outperformed state-of-the-art segmentation and cosegmentation methods, improving segmentation accuracy by 3% to 45%.
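The Jaccard index and precision used to evaluate OSC-CO2 are standard overlap measures between a predicted and a ground-truth segmentation mask; a minimal sketch on toy masks (not the paper's data):

```python
import numpy as np

def jaccard_index(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union of two boolean masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(inter / union) if union else 1.0

def precision(pred: np.ndarray, truth: np.ndarray) -> float:
    """Fraction of predicted foreground pixels that are truly foreground."""
    predicted = pred.sum()
    return float(np.logical_and(pred, truth).sum() / predicted) if predicted else 1.0

# Toy 1D example: 3 pixels overlap, 5 pixels in the union, 4 pixels predicted.
pred  = np.array([1, 1, 1, 1, 0, 0], dtype=bool)
truth = np.array([0, 1, 1, 1, 1, 0], dtype=bool)
print(jaccard_index(pred, truth))  # 3/5 = 0.6
print(precision(pred, truth))      # 3/4 = 0.75
```

The same two functions apply unchanged to 2D image masks, since the logical operations and sums are elementwise.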
Affiliation(s)
- Rubi Quiñones
- School of Computing, University of Nebraska-Lincoln, Lincoln, NE, United States
- Computer Science Department, Southern Illinois University Edwardsville, Edwardsville, IL, United States
- Ashok Samal
- School of Computing, University of Nebraska-Lincoln, Lincoln, NE, United States
- Sruti Das Choudhury
- School of Computing, University of Nebraska-Lincoln, Lincoln, NE, United States
- School of Natural Resources, University of Nebraska-Lincoln, Lincoln, NE, United States
- Francisco Muñoz-Arriola
- School of Natural Resources, University of Nebraska-Lincoln, Lincoln, NE, United States
- Department of Biological Systems Engineering, University of Nebraska-Lincoln, Lincoln, NE, United States
21
Zhao J, Cai Y, Wang S, Yan J, Qiu X, Yao X, Tian Y, Zhu Y, Cao W, Zhang X. Small and Oriented Wheat Spike Detection at the Filling and Maturity Stages Based on WheatNet. PLANT PHENOMICS (WASHINGTON, D.C.) 2023; 5:0109. [PMID: 37915995 PMCID: PMC10618025 DOI: 10.34133/plantphenomics.0109] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/20/2023] [Accepted: 09/27/2023] [Indexed: 11/03/2023]
Abstract
Accurate wheat spike detection is crucial in wheat field phenotyping for precision farming. Advances in artificial intelligence have enabled deep learning models to improve the accuracy of detecting wheat spikes. However, wheat growth is a dynamic process characterized by important changes in the color features of wheat spikes and the background. Existing models for wheat spike detection are typically designed for a specific growth stage, and their adaptability to other growth stages or field scenes is limited: such models cannot accurately detect wheat spikes whose color, size, and morphological features differ between growth stages. This paper proposes WheatNet to detect small and oriented wheat spikes from the filling to the maturity stage. WheatNet constructs a Transform Network to reduce the effect of differences in the color features of spikes at the filling and maturity stages on detection accuracy. Moreover, a Detection Network is designed to improve wheat spike detection capability. A Circle Smooth Label is proposed to classify wheat spike angles in drone imagery. A new micro-scale detection layer is added to the network to extract the features of small spikes. Localization loss is improved by Complete Intersection over Union to reduce the impact of the background. The results show that WheatNet achieves greater accuracy than classical detection methods: the average precision of spike detection is 90.1% at the filling stage and 88.6% at the maturity stage. This suggests that WheatNet is a promising tool for the detection of wheat spikes.
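A circular smooth label treats spike orientation as classification over angle bins, with a smoothing window that wraps around so that 179° and 0° remain neighbours. Below is a sketch of that general idea only; the bin count, window shape, and radius are illustrative choices, not the paper's settings:

```python
import numpy as np

def circular_smooth_label(angle_deg: float, n_bins: int = 180, radius: int = 6) -> np.ndarray:
    """Encode an angle as a smoothed one-hot vector with circular wraparound.

    Bins within `radius` of the true angle get a Gaussian weight, and the
    distance wraps, so bins near 0 and near n_bins-1 are treated as close.
    """
    bins = np.arange(n_bins)
    center = int(round(angle_deg)) % n_bins
    # Circular distance between each bin and the centre bin.
    d = np.minimum(np.abs(bins - center), n_bins - np.abs(bins - center))
    label = np.exp(-d.astype(float) ** 2 / (2 * (radius / 3) ** 2))
    label[d > radius] = 0.0
    return label

lbl = circular_smooth_label(179.0)
print(lbl.argmax())   # 179: the true bin keeps the peak weight
print(lbl[0] > 0)     # True: bin 0 is a circular neighbour of bin 179
```

Compared with a hard one-hot target, this keeps the loss small when the predicted angle is only a few degrees off, even across the 0°/180° boundary.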
Affiliation(s)
- Jianqing Zhao
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing 210095, China
- Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing 210095, China
- Yucheng Cai
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing 210095, China
- Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing 210095, China
- Suwan Wang
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing 210095, China
- Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing 210095, China
- Jiawei Yan
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing 210095, China
- Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing 210095, China
- Xiaolei Qiu
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing 210095, China
- Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing 210095, China
- Xia Yao
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing 210095, China
- Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing 210095, China
- Jiangsu Key Laboratory for Information Agriculture, Nanjing 210095, China
- Yongchao Tian
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing 210095, China
- Jiangsu Collaborative Innovation Center for Modern Crop Production, Nanjing 210095, China
- Yan Zhu
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing 210095, China
- Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing 210095, China
- Weixing Cao
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing 210095, China
- Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing 210095, China
- Xiaohu Zhang
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing 210095, China
- Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing 210095, China
- Jiangsu Collaborative Innovation Center for Modern Crop Production, Nanjing 210095, China
22
Mostafa S, Mondal D, Panjvani K, Kochian L, Stavness I. Explainable deep learning in plant phenotyping. Front Artif Intell 2023; 6:1203546. [PMID: 37795496 PMCID: PMC10546035 DOI: 10.3389/frai.2023.1203546] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2023] [Accepted: 08/25/2023] [Indexed: 10/06/2023] Open
Abstract
The increasing human population and variable weather conditions, due to climate change, pose a threat to the world's food security. To improve global food security, we need to provide breeders with tools to develop crop cultivars that are more resilient to extreme weather conditions and provide growers with tools to more effectively manage biotic and abiotic stresses in their crops. Plant phenotyping, the measurement of a plant's structural and functional characteristics, has the potential to inform, improve and accelerate both breeders' selections and growers' management decisions. To improve the speed, reliability and scale of plant phenotyping procedures, many researchers have adopted deep learning methods to estimate phenotypic information from images of plants and crops. Despite the successful results of these image-based phenotyping studies, the representations learned by deep learning models remain difficult to interpret, understand, and explain. For this reason, deep learning models are still considered to be black boxes. Explainable AI (XAI) is a promising approach for opening the deep learning model's black box and providing plant scientists with image-based phenotypic information that is interpretable and trustworthy. Although various fields of study have adopted XAI to advance their understanding of deep learning models, it has yet to be well-studied in the context of plant phenotyping research. In this article, we review existing XAI studies in plant shoot phenotyping, as well as related domains, to help plant researchers understand the benefits of XAI and make it easier for them to integrate XAI into their future studies. An elucidation of the representations within a deep learning model can help researchers explain the model's decisions, relate the features detected by the model to the underlying plant physiology, and enhance the trustworthiness of image-based phenotypic information used in food production systems.
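One widely used model-agnostic technique in this family is occlusion sensitivity: mask image regions one at a time and record how much the model's score drops, yielding a coarse saliency map. A minimal sketch with a stand-in scoring function (a real study would use the trained network's class score):

```python
import numpy as np

def occlusion_map(image: np.ndarray, score_fn, patch: int = 4) -> np.ndarray:
    """Score drop when each patch x patch region is zeroed out."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

# Stand-in "model": score is the mean intensity of the top-left 8x8 quadrant,
# so only occlusions inside that quadrant should register.
score = lambda img: img[:8, :8].mean()
img = np.ones((16, 16))
heat = occlusion_map(img, score)
print(heat[0, 0])  # positive: occluding inside the quadrant lowers the score
print(heat[3, 3])  # 0.0: this region does not affect the score
```

The resulting heat map highlights the pixels the model relies on, which is the kind of evidence plant scientists can check against known physiology.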
Affiliation(s)
- Sakib Mostafa
- Department of Computer Science, University of Saskatchewan, Saskatoon, SK, Canada
- Debajyoti Mondal
- Department of Computer Science, University of Saskatchewan, Saskatoon, SK, Canada
- Karim Panjvani
- Global Institute for Food Security, University of Saskatchewan, Saskatoon, SK, Canada
- Leon Kochian
- Global Institute for Food Security, University of Saskatchewan, Saskatoon, SK, Canada
- Ian Stavness
- Department of Computer Science, University of Saskatchewan, Saskatoon, SK, Canada
23
Carlier A, Dandrifosse S, Dumont B, Mercatoris B. To What Extent Does Yellow Rust Infestation Affect Remotely Sensed Nitrogen Status? PLANT PHENOMICS (WASHINGTON, D.C.) 2023; 5:0083. [PMID: 37681000 PMCID: PMC10482323 DOI: 10.34133/plantphenomics.0083] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/22/2022] [Accepted: 08/03/2023] [Indexed: 09/09/2023]
Abstract
The utilization of high-throughput in-field phenotyping systems presents new opportunities for evaluating crop stress. However, existing studies have primarily focused on individual stresses, overlooking the fact that crops in field conditions frequently encounter multiple stresses, which can display similar symptoms or interfere with the detection of other stress factors. Therefore, this study aimed to investigate the impact of wheat yellow rust on reflectance measurements and nitrogen status assessment. A multi-sensor mobile platform was utilized to capture RGB and multispectral images throughout a 2-year fertilization-fungicide trial. To identify disease-induced damage, the SegVeg approach, which combines a U-NET architecture and a pixel-wise classifier, was applied to RGB images, generating a mask capable of distinguishing between healthy and damaged areas of the leaves. The observed proportion of damage in the images demonstrated similar effectiveness to visual scoring methods in explaining grain yield. Furthermore, the study discovered that the disease not only affected reflectance through leaf damage but also influenced the reflectance of healthy areas by disrupting the overall nitrogen status of the plants. This emphasizes the importance of incorporating disease impact into reflectance-based decision support tools to account for its effects on spectral data. This effect was successfully mitigated by employing the NDRE vegetation index calculated exclusively from the healthy portions of the leaves or by incorporating the proportion of damage into the model. However, these findings also highlight the necessity for further research specifically addressing the challenges presented by multiple stresses in crop phenotyping.
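The NDRE index used to mitigate the disease effect is the normalised difference of near-infrared and red-edge reflectance; a minimal sketch (the band values below are hypothetical):

```python
import numpy as np

def ndre(nir, red_edge):
    """Normalized Difference Red Edge index: (NIR - RE) / (NIR + RE)."""
    nir = np.asarray(nir, dtype=float)
    red_edge = np.asarray(red_edge, dtype=float)
    return (nir - red_edge) / (nir + red_edge)

# Hypothetical canopy reflectances: higher NDRE is associated with
# better nitrogen status, which is why diseased pixels must be excluded.
print(ndre(0.55, 0.30))  # ~0.294
print(ndre(0.45, 0.35))  # 0.125
```

Because the inputs are broadcast NumPy arrays, the same function applies per pixel to whole multispectral images, so it can be restricted to a "healthy" mask as the authors do.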
Affiliation(s)
- Alexis Carlier
- Biosystems Dynamics and Exchanges, TERRA Teaching and Research Center, Gembloux Agro-Bio Tech, University of Liège, 5030 Gembloux, Belgium
- Sebastien Dandrifosse
- Biosystems Dynamics and Exchanges, TERRA Teaching and Research Center, Gembloux Agro-Bio Tech, University of Liège, 5030 Gembloux, Belgium
- Benjamin Dumont
- Plant Sciences, TERRA Teaching and Research Center, Gembloux Agro-Bio Tech, University of Liège, 5030 Gembloux, Belgium
- Benoît Mercatoris
- Biosystems Dynamics and Exchanges, TERRA Teaching and Research Center, Gembloux Agro-Bio Tech, University of Liège, 5030 Gembloux, Belgium
24
Lagergren J, Pavicic M, Chhetri HB, York LM, Hyatt D, Kainer D, Rutter EM, Flores K, Bailey-Bale J, Klein M, Taylor G, Jacobson D, Streich J. Few-Shot Learning Enables Population-Scale Analysis of Leaf Traits in Populus trichocarpa. PLANT PHENOMICS (WASHINGTON, D.C.) 2023; 5:0072. [PMID: 37519935 PMCID: PMC10380552 DOI: 10.34133/plantphenomics.0072] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/30/2022] [Accepted: 06/27/2023] [Indexed: 08/01/2023]
Abstract
Plant phenotyping is typically a time-consuming and expensive endeavor, requiring large groups of researchers to meticulously measure biologically relevant plant traits, and is the main bottleneck in understanding plant adaptation and the genetic architecture underlying complex traits at population scale. In this work, we address these challenges by leveraging few-shot learning with convolutional neural networks to segment the leaf body and visible venation of 2,906 Populus trichocarpa leaf images obtained in the field. In contrast to previous methods, our approach (a) does not require experimental or image preprocessing, (b) uses the raw RGB images at full resolution, and (c) requires very few samples for training (e.g., just 8 images for vein segmentation). Traits relating to leaf morphology and vein topology are extracted from the resulting segmentations using traditional open-source image-processing tools, validated using real-world physical measurements, and used to conduct a genome-wide association study to identify genes controlling the traits. In this way, the current work is designed to provide the plant phenotyping community with (a) methods for fast and accurate image-based feature extraction that require minimal training data and (b) a new population-scale dataset, including 68 different leaf phenotypes, for domain scientists and machine learning researchers. All of the few-shot learning code, data, and results are made publicly available.
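Downstream trait extraction from such binary masks needs only ordinary array operations; a toy sketch with two illustrative traits (pixel area and a vein-to-blade ratio, hypothetical definitions rather than any of the paper's 68 phenotypes):

```python
import numpy as np

def leaf_traits(body_mask: np.ndarray, vein_mask: np.ndarray) -> dict:
    """Compute simple morphology traits from boolean segmentation masks."""
    area = int(body_mask.sum())                   # leaf blade area in pixels
    vein_px = int(vein_mask.sum())                # visible vein pixels
    density = vein_px / area if area else 0.0     # vein pixels per blade pixel
    return {"area_px": area, "vein_px": vein_px, "vein_density": density}

body = np.zeros((10, 10), dtype=bool)
body[2:8, 2:8] = True            # a 36-pixel square "leaf blade"
vein = np.zeros_like(body)
vein[5, 2:8] = True              # a 6-pixel "midvein"
print(leaf_traits(body, vein))   # area 36, vein 6, density 1/6
```

Per-image traits computed this way can then be tabulated across a population, which is the form a genome-wide association study consumes.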
Affiliation(s)
- John Lagergren
- Biosciences Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA
- Mirko Pavicic
- Biosciences Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA
- Hari B Chhetri
- Biosciences Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA
- Larry M York
- Biosciences Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA
- Doug Hyatt
- Biosciences Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA
- David Kainer
- Biosciences Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA
- Erica M Rutter
- Department of Applied Mathematics, University of California, Merced, CA, USA
- Kevin Flores
- Department of Mathematics, North Carolina State University, Raleigh, NC, USA
- Jack Bailey-Bale
- Department of Plant Sciences, University of California, Davis, CA, USA
- Marie Klein
- Department of Plant Sciences, University of California, Davis, CA, USA
- Gail Taylor
- Department of Plant Sciences, University of California, Davis, CA, USA
- Daniel Jacobson
- Biosciences Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA
- Jared Streich
- Biosciences Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA
25
Xing D, Wang Y, Sun P, Huang H, Lin E. A CNN-LSTM-att hybrid model for classification and evaluation of growth status under drought and heat stress in Chinese fir (Cunninghamia lanceolata). PLANT METHODS 2023; 19:66. [PMID: 37400865 DOI: 10.1186/s13007-023-01044-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/06/2023] [Accepted: 06/22/2023] [Indexed: 07/05/2023]
Abstract
BACKGROUND Cunninghamia lanceolata (Chinese fir) is one of the most important timber trees in China. With global warming, developing new varieties resistant to drought or heat stress has become an essential task for breeders of Chinese fir. However, classification and evaluation of the growth status of Chinese fir under drought or heat stress are still labor-intensive and time-consuming. RESULTS In this study, we proposed a CNN-LSTM-att hybrid model for classification of the growth status of Chinese fir seedlings under drought and heat stress, respectively. Two RGB image datasets of Chinese fir seedlings under drought and heat stress were generated for the first time and utilized in this study. By comparing four base CNN models combined with LSTM, Resnet50-LSTM was identified as the best model for classification of growth status, and the LSTM dramatically improved classification performance. Moreover, an attention mechanism further enhanced the performance of Resnet50-LSTM, which was verified by Grad-CAM. By applying the established Resnet50-LSTM-att model, the accuracy and recall of classification reached 96.91% and 96.79% for the heat stress dataset, and 96.05% and 95.88% for the drought dataset, respectively. Accordingly, the R2 and RMSE values for evaluation of growth status were 0.957 and 0.067 under heat stress, and 0.944 and 0.076 under drought, respectively. CONCLUSION In summary, our proposed model provides an important tool for stress phenotyping in Chinese fir, which will greatly aid the selection and breeding of new resistant varieties in the future.
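The reported R2 and RMSE follow their standard definitions; a minimal sketch on made-up predictions (not the paper's data):

```python
import numpy as np

def r2_score(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root mean squared error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])
print(r2_score(y_true, y_pred))  # 0.98: predictions track the truth closely
print(rmse(y_true, y_pred))      # ~0.158
```

Values such as R2 = 0.957 with RMSE = 0.067 therefore mean the predicted growth-status scores explain almost all of the variance with small average error.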
Affiliation(s)
- Dong Xing
- State Key Laboratory of Subtropical Silviculture, Zhejiang A&F University, Hangzhou, 311300, Zhejiang, China
- Yulin Wang
- State Key Laboratory of Subtropical Silviculture, Zhejiang A&F University, Hangzhou, 311300, Zhejiang, China
- Penghui Sun
- State Key Laboratory of Subtropical Silviculture, Zhejiang A&F University, Hangzhou, 311300, Zhejiang, China
- Huahong Huang
- State Key Laboratory of Subtropical Silviculture, Zhejiang A&F University, Hangzhou, 311300, Zhejiang, China
- Erpei Lin
- State Key Laboratory of Subtropical Silviculture, Zhejiang A&F University, Hangzhou, 311300, Zhejiang, China
26
Harandi N, Vandenberghe B, Vankerschaver J, Depuydt S, Van Messem A. How to make sense of 3D representations for plant phenotyping: a compendium of processing and analysis techniques. PLANT METHODS 2023; 19:60. [PMID: 37353846 DOI: 10.1186/s13007-023-01031-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Received: 10/18/2022] [Accepted: 05/19/2023] [Indexed: 06/25/2023]
Abstract
Computer vision technology is moving more and more towards a three-dimensional approach, and plant phenotyping is following this trend. However, despite its potential, the complexity of the analysis of 3D representations has been the main bottleneck hindering the wider deployment of 3D plant phenotyping. In this review we provide an overview of typical steps for the processing and analysis of 3D representations of plants, to offer potential users of 3D phenotyping a first gateway into its application, and to stimulate its further development. We focus on plant phenotyping applications where the goal is to measure characteristics of single plants or crop canopies on a small scale in research settings, as opposed to large scale crop monitoring in the field.
Affiliation(s)
- Negin Harandi
- Center for Biosystems and Biotech Data Science, Ghent University Global Campus, 119 Songdomunhwa-ro, Yeonsu-gu, Incheon, South Korea
- Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Krijgslaan 281, S9, Ghent, Belgium
- Joris Vankerschaver
- Center for Biosystems and Biotech Data Science, Ghent University Global Campus, 119 Songdomunhwa-ro, Yeonsu-gu, Incheon, South Korea
- Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Krijgslaan 281, S9, Ghent, Belgium
- Stephen Depuydt
- Erasmus Applied University of Sciences and Arts, Campus Kaai, Nijverheidskaai 170, Anderlecht, Belgium
- Arnout Van Messem
- Department of Mathematics, Université de Liège, Allée de la Découverte 12, Liège, Belgium
27
Dwivedi SL, Heslop-Harrison P, Spillane C, McKeown PC, Edwards D, Goldman I, Ortiz R. Evolutionary dynamics and adaptive benefits of deleterious mutations in crop gene pools. TRENDS IN PLANT SCIENCE 2023; 28:685-697. [PMID: 36764870 DOI: 10.1016/j.tplants.2023.01.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Revised: 12/03/2022] [Accepted: 01/18/2023] [Indexed: 05/13/2023]
Abstract
Mutations with deleterious consequences in nature may be conditionally deleterious in crop plants. That is, while some genetic variants may reduce fitness under wild conditions and be subject to purifying selection, they can be under positive selection in domesticates. Such deleterious alleles can be plant breeding targets, particularly for complex traits. The difficulty of distinguishing favorable from unfavorable variants reduces the power of selection, while favorable trait variation and heterosis may be attributable to deleterious alleles. Here, we review the roles of deleterious mutations in crop breeding and discuss how they can be used as a new avenue for crop improvement with emerging genomic tools, including HapMaps and pangenome analysis, aiding the identification, removal, or exploitation of deleterious mutations.
Affiliation(s)
- Pat Heslop-Harrison
- Key Laboratory of Plant Resources Conservation and Sustainable Utilization, South China Botanical Garden, Chinese Academy of Sciences, Guangzhou, 510650, China; Department of Genetics and Genome Biology, University of Leicester, Leicester, LE1 7RH, UK
- Charles Spillane
- Agriculture and Bioeconomy Research Centre, Ryan Institute, University of Galway, University Road, Galway, H91 REW4, Ireland
- Peter C McKeown
- Agriculture and Bioeconomy Research Centre, Ryan Institute, University of Galway, University Road, Galway, H91 REW4, Ireland
- David Edwards
- School of Biological Sciences and Institute of Agriculture, University of Western Australia, Perth, WA 6009, Australia
- Irwin Goldman
- Department of Horticulture, College of Agricultural and Life Sciences, University of Wisconsin Madison, WI 53706, USA
- Rodomiro Ortiz
- Department of Plant Breeding, Swedish University of Agricultural Sciences, Alnarp, SE 23053, Sweden
28
Sakeef N, Scandola S, Kennedy C, Lummer C, Chang J, Uhrig RG, Lin G. Machine learning classification of plant genotypes grown under different light conditions through the integration of multi-scale time-series data. Comput Struct Biotechnol J 2023; 21:3183-3195. [PMID: 37333861 PMCID: PMC10275741 DOI: 10.1016/j.csbj.2023.05.005] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2022] [Revised: 05/04/2023] [Accepted: 05/04/2023] [Indexed: 06/20/2023] Open
Abstract
In order to mitigate the effects of a changing climate, agriculture requires more effective evaluation, selection, and production of crop cultivars to accelerate genotype-to-phenotype connections and the selection of beneficial traits. Critically, plant growth and development are highly dependent on sunlight, with light providing the energy required for photosynthesis as well as a means for plants to interact directly with their environment as they develop. In plant analyses, machine learning and deep learning techniques have a proven ability to learn plant growth patterns, including the detection of disease, plant stress, and growth from a variety of image data. To date, however, studies have not assessed machine learning and deep learning algorithms for their ability to differentiate a large cohort of genotypes grown under several growth conditions using time-series data automatically acquired across multiple scales (daily and developmentally). Here, we extensively evaluate a wide range of machine learning and deep learning algorithms for their ability to differentiate 17 well-characterized photoreceptor-deficient genotypes differing in their light detection capabilities grown under several different light conditions. Using algorithm performance measurements of precision, recall, F1-Score, and accuracy, we find that a Support Vector Machine (SVM) maintains the greatest classification accuracy, while a combined ConvLSTM2D deep learning model produces the best genotype classification results across the different growth conditions. Our successful integration of time-series growth data across multiple scales, genotypes and growth conditions sets a new foundational baseline from which more complex plant science traits can be assessed for genotype-to-phenotype connections.
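The four reported performance measures reduce to counts of true/false positives and negatives; a minimal binary sketch (the study's multi-genotype setting would average these per class):

```python
import numpy as np

def classification_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Precision, recall, F1 and accuracy for binary labels (1 = positive)."""
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = float(np.mean(y_true == y_pred))
    return {"precision": precision, "recall": recall, "f1": f1, "accuracy": accuracy}

y_true = np.array([1, 1, 1, 0, 0, 0])
y_pred = np.array([1, 1, 0, 1, 0, 0])
print(classification_metrics(y_true, y_pred))
# precision 2/3, recall 2/3, F1 2/3, accuracy 4/6
```

F1 is the harmonic mean of precision and recall, which is why it is the usual single-number summary when classes are imbalanced.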
Affiliation(s)
- Nazmus Sakeef
- Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada
- Department of Biological Sciences, University of Alberta, Edmonton, Alberta, Canada
- Sabine Scandola
- Department of Biological Sciences, University of Alberta, Edmonton, Alberta, Canada
- Curtis Kennedy
- Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada
- Department of Biological Sciences, University of Alberta, Edmonton, Alberta, Canada
- Christina Lummer
- Department of Biological Sciences, University of Alberta, Edmonton, Alberta, Canada
- Jiameng Chang
- Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada
- R. Glen Uhrig
- Department of Biological Sciences, University of Alberta, Edmonton, Alberta, Canada
- Department of Biochemistry, University of Alberta, Edmonton, Alberta, Canada
- Guohui Lin
- Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada
29
Okyere FG, Cudjoe D, Sadeghi-Tehran P, Virlet N, Riche AB, Castle M, Greche L, Mohareb F, Simms D, Mhada M, Hawkesford MJ. Machine Learning Methods for Automatic Segmentation of Images of Field- and Glasshouse-Based Plants for High-Throughput Phenotyping. PLANTS (BASEL, SWITZERLAND) 2023; 12:2035. [PMID: 37653952 PMCID: PMC10224253 DOI: 10.3390/plants12102035] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/22/2023] [Revised: 05/03/2023] [Accepted: 05/10/2023] [Indexed: 07/15/2023]
Abstract
Image segmentation is a fundamental but critical step for achieving automated high-throughput phenotyping. While conventional segmentation methods perform well in homogenous environments, their performance decreases in more complex environments. This study aimed to develop a fast and robust neural-network-based segmentation tool to phenotype plants in both field and glasshouse environments in a high-throughput manner. Digital images of cowpea (from the glasshouse) and wheat (from the field) with different nutrient supplies across their full growth cycle were acquired. Image patches from 20 randomly selected images from the acquired dataset were transformed from their original RGB format to multiple color spaces. The pixels in the patches were annotated as foreground and background, with each pixel having a feature vector of 24 color properties. A feature selection technique was applied to choose the sensitive features, which were used to train a multilayer perceptron network (MLP) and two other traditional machine learning models: support vector machines (SVMs) and random forest (RF). The performance of these models, together with two standard color-index segmentation techniques (excess green (ExG) and excess green-red (ExGR)), was compared. The proposed method outperformed the other methods in producing quality segmented images with over 98% pixel-classification accuracy. Regression models developed from the different segmentation methods to predict Soil Plant Analysis Development (SPAD) values of cowpea and wheat showed that images from the proposed MLP method produced models with comparably high predictive power and accuracy. This method will be an essential tool for the development of a data analysis pipeline for high-throughput plant phenotyping. The proposed technique is capable of learning from different environmental conditions, with a high level of robustness.
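The two colour-index baselines have closed-form definitions on chromaticity-normalised RGB: ExG = 2g - r - b, and ExGR subtracts the excess-red index ExR = 1.4r - g. A minimal sketch (the pixel values are hypothetical):

```python
import numpy as np

def _normalise(rgb: np.ndarray):
    """Split RGB into chromaticity-normalised r, g, b channels."""
    s = rgb.sum(axis=-1, keepdims=True)
    return np.moveaxis(rgb / np.where(s == 0, 1, s), -1, 0)

def excess_green(rgb: np.ndarray) -> np.ndarray:
    """ExG = 2g - r - b; positive values indicate vegetation."""
    r, g, b = _normalise(rgb)
    return 2 * g - r - b

def excess_green_red(rgb: np.ndarray) -> np.ndarray:
    """ExGR = ExG - ExR, where ExR = 1.4r - g."""
    r, g, b = _normalise(rgb)
    return (2 * g - r - b) - (1.4 * r - g)

# One green (vegetation) pixel and one grey (soil-like) pixel.
pixels = np.array([[40.0, 180.0, 30.0], [120.0, 120.0, 120.0]])
print(excess_green(pixels))      # positive for green, 0 for grey
print(excess_green_red(pixels))  # positive for green, negative for grey
```

Thresholding these index images (commonly at zero for ExGR) yields the binary vegetation masks the study compares against its learned MLP segmenter.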
Affiliation(s)
- Frank Gyan Okyere
- Sustainable Soils and Crops, Rothamsted Research, Harpenden AL5 2JQ, UK
- School of Water, Energy and Environment, Soil, Agrifood and Biosciences, Cranfield University, Bedford MK43 0AL, UK
- Daniel Cudjoe
- Sustainable Soils and Crops, Rothamsted Research, Harpenden AL5 2JQ, UK
- School of Water, Energy and Environment, Soil, Agrifood and Biosciences, Cranfield University, Bedford MK43 0AL, UK
- Nicolas Virlet
- Sustainable Soils and Crops, Rothamsted Research, Harpenden AL5 2JQ, UK
- Andrew B. Riche
- Sustainable Soils and Crops, Rothamsted Research, Harpenden AL5 2JQ, UK
- March Castle
- Sustainable Soils and Crops, Rothamsted Research, Harpenden AL5 2JQ, UK
- Latifa Greche
- Sustainable Soils and Crops, Rothamsted Research, Harpenden AL5 2JQ, UK
- Fady Mohareb
- School of Water, Energy and Environment, Soil, Agrifood and Biosciences, Cranfield University, Bedford MK43 0AL, UK
- Daniel Simms
- School of Water, Energy and Environment, Soil, Agrifood and Biosciences, Cranfield University, Bedford MK43 0AL, UK
- Manal Mhada
- African Integrated Plant and Soil Science, Agro-Biosciences, University of Mohammed VI Polytechnic, Lot 660, Ben Guerir 43150, Morocco
Collapse
|
30
|
Montesinos-López A, Rivera C, Pinto F, Piñera F, Gonzalez D, Reynolds M, Pérez-Rodríguez P, Li H, Montesinos-López OA, Crossa J. Multimodal deep learning methods enhance genomic prediction of wheat breeding. G3 (BETHESDA, MD.) 2023; 13:jkad045. [PMID: 36869747 PMCID: PMC10151399 DOI: 10.1093/g3journal/jkad045] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/14/2022] [Revised: 02/21/2023] [Accepted: 02/22/2023] [Indexed: 03/05/2023]
Abstract
While several statistical machine learning methods have been developed and studied for assessing the genomic prediction (GP) accuracy of unobserved phenotypes in plant breeding research, few methods have linked genomics and phenomics (imaging). Deep learning (DL) neural networks have been developed to increase the GP accuracy of unobserved phenotypes while simultaneously accounting for the complexity of genotype-environment interaction (GE); however, unlike conventional GP models, DL has not been investigated in settings where genomics is linked with phenomics. In this study we used 2 wheat data sets (DS1 and DS2) to compare a novel DL method with conventional GP models. Models fitted for DS1 were GBLUP, gradient boosting machine (GBM), support vector regression (SVR), and the DL method. Results indicated that for 1 year, DL provided better GP accuracy than the other models. However, GP accuracy obtained for other years indicated that the GBLUP model was slightly superior to DL. DS2 comprises only genomic data from wheat lines tested for 3 years, 2 environments (drought and irrigated), and 2-4 traits. DS2 results showed that when predicting the irrigated environment with the drought environment, DL had higher accuracy than the GBLUP model in all analyzed traits and years. When predicting the drought environment with information on the irrigated environment, the DL model and GBLUP model had similar accuracy. The DL method used in this study is novel and presents a strong degree of generalization, as several modules can potentially be incorporated and concatenated to produce an output for a multi-input data structure.
Collapse
Affiliation(s)
- Abelardo Montesinos-López
- Departamento de Matemáticas, Centro Universitario de Ciencias Exactas e Ingenierías (CUCEI), Universidad de Guadalajara, 44430, Guadalajara, Jalisco, Mexico
- Carolina Rivera
- International Maize and Wheat Improvement Center (CIMMYT), Carretera México-Veracruz Km. 45, El Batán, CP 56237, Texcoco, Edo. de México, Mexico
- Francisco Pinto
- International Maize and Wheat Improvement Center (CIMMYT), Carretera México-Veracruz Km. 45, El Batán, CP 56237, Texcoco, Edo. de México, Mexico
- Francisco Piñera
- International Maize and Wheat Improvement Center (CIMMYT), Carretera México-Veracruz Km. 45, El Batán, CP 56237, Texcoco, Edo. de México, Mexico
- David Gonzalez
- International Maize and Wheat Improvement Center (CIMMYT), Carretera México-Veracruz Km. 45, El Batán, CP 56237, Texcoco, Edo. de México, Mexico
- Mathew Reynolds
- International Maize and Wheat Improvement Center (CIMMYT), Carretera México-Veracruz Km. 45, El Batán, CP 56237, Texcoco, Edo. de México, Mexico
- Huihui Li
- Institute of Crop Sciences, The National Key Facility for Crop Gene Resources and Genetic Improvement and CIMMYT China office, Chinese Academy of Agricultural Sciences, Beijing, 100081, China
- Jose Crossa
- International Maize and Wheat Improvement Center (CIMMYT), Carretera México-Veracruz Km. 45, El Batán, CP 56237, Texcoco, Edo. de México, Mexico
- Colegio de Postgraduados, Montecillos, Edo. de México, CP 56230, Mexico
Collapse
|
31
|
Li Y, Wen W, Fan J, Gou W, Gu S, Lu X, Yu Z, Wang X, Guo X. Multi-Source Data Fusion Improves Time-Series Phenotype Accuracy in Maize under a Field High-Throughput Phenotyping Platform. PLANT PHENOMICS (WASHINGTON, D.C.) 2023; 5:0043. [PMID: 37223316 PMCID: PMC10202381 DOI: 10.34133/plantphenomics.0043] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/14/2022] [Accepted: 03/26/2023] [Indexed: 05/25/2023]
Abstract
Field phenotyping platforms that can obtain high-throughput, time-series phenotypes of plant populations at the 3-dimensional level are crucial for plant breeding and management. However, it is difficult to align point cloud data and extract accurate phenotypic traits of plant populations. In this study, high-throughput, time-series raw data of field maize populations were collected using a field rail-based phenotyping platform with light detection and ranging (LiDAR) and an RGB (red, green, and blue) camera. The orthorectified images and LiDAR point clouds were aligned via the direct linear transformation algorithm. On this basis, time-series point clouds were further registered under the guidance of the time-series images. The cloth simulation filter algorithm was then used to remove the ground points. Individual plants and plant organs were segmented from the maize population by fast displacement and region growth algorithms. The plant heights of 13 maize cultivars obtained using the multi-source fusion data were highly correlated with the manual measurements (R2 = 0.98), and the accuracy was higher than when using a single-source point cloud (R2 = 0.93). This demonstrates that multi-source data fusion can effectively improve the accuracy of time-series phenotype extraction, and that rail-based field phenotyping platforms can be a practical tool for observing plant growth dynamics at individual-plant and organ scales.
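As a simplified illustration of the height-extraction step described above, a minimal sketch of estimating plant height from a single plant's point cloud. The quantile-based ground and canopy estimates are assumptions standing in for the paper's cloth simulation filter and organ segmentation:

```python
import numpy as np

def plant_height(points, ground_quantile=0.02, top_quantile=0.995):
    """Estimate plant height from an (N, 3) point cloud with z pointing up.

    The ground level is taken as a low z-quantile (a crude stand-in for
    the cloth simulation filter) and the canopy top as a high z-quantile,
    which is more robust to stray LiDAR returns than taking the maximum.
    """
    z = np.asarray(points, dtype=np.float64)[:, 2]
    ground = np.quantile(z, ground_quantile)
    top = np.quantile(z, top_quantile)
    return float(top - ground)
```

With the default quantiles, a cloud whose z values span 0 m to 2 m yields a height slightly under 2 m, since the extreme 2% at the bottom and 0.5% at the top are trimmed.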
Collapse
Affiliation(s)
- Yinglun Li
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China
- Weiliang Wen
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China
- Jiangchuan Fan
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China
- Wenbo Gou
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China
- Shenghao Gu
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China
- Xianju Lu
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China
- Zetao Yu
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China
- Xiaodong Wang
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China
- Xinyu Guo
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China
Collapse
|
32
|
Yun C, Kim YH, Lee SJ, Im SJ, Park KR. WRA-Net: Wide Receptive Field Attention Network for Motion Deblurring in Crop and Weed Image. PLANT PHENOMICS (WASHINGTON, D.C.) 2023; 5:0031. [PMID: 37287583 PMCID: PMC10243196 DOI: 10.34133/plantphenomics.0031] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/12/2023] [Accepted: 02/16/2023] [Indexed: 06/09/2023]
Abstract
Accurately and automatically segmenting crops and weeds in camera images is essential in various agricultural technology fields, such as herbicide spraying by farming robots based on crop and weed segmentation information. However, crop and weed images taken with a camera have motion blur due to various causes (e.g., vibration or shaking of a camera on farming robots, or shaking of the crops and weeds themselves), which reduces the accuracy of crop and weed segmentation. Therefore, crop and weed segmentation that is robust to motion-blurred images is essential. However, previous crop and weed segmentation studies were performed without considering motion-blurred images. To solve this problem, this study proposed a new motion-blur image restoration method based on a wide receptive field attention network (WRA-Net), and investigated how it improves crop and weed segmentation accuracy in motion-blurred images. WRA-Net comprises a main block called a lite wide receptive field attention residual block, which consists of modified depthwise separable convolutional blocks, an attention gate, and a learnable skip connection. We conducted experiments using the proposed method with 3 open databases: the BoniRob, crop/weed field image, and rice seedling and weed datasets. According to the results, the crop and weed segmentation accuracy based on mean intersection over union was 0.7444, 0.7741, and 0.7149, respectively, demonstrating that this method outperformed state-of-the-art methods.
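The mean intersection over union (mIoU) scores reported above follow the standard definition, IoU_c = TP_c / (TP_c + FP_c + FN_c) averaged over classes. A minimal sketch computing it from a label confusion matrix; the two-class usage in the test is illustrative, not this paper's evaluation code:

```python
import numpy as np

def mean_iou(y_true, y_pred, num_classes):
    """Mean intersection over union across classes.

    y_true, y_pred: integer label arrays of any (matching) shape.
    Builds the confusion matrix with a single bincount over the
    combined labels, then computes per-class IoU from its diagonal.
    """
    y_true = np.asarray(y_true).ravel()
    y_pred = np.asarray(y_pred).ravel()
    cm = np.bincount(num_classes * y_true + y_pred,
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    tp = np.diag(cm).astype(np.float64)
    union = cm.sum(axis=0) + cm.sum(axis=1) - tp  # TP + FP + FN per class
    iou = tp / np.maximum(union, 1)  # guard against classes absent from both masks
    return float(iou.mean())
```

For example, with ground truth [0, 0, 1, 1] and prediction [0, 1, 1, 1], class 0 has IoU 1/2 and class 1 has IoU 2/3, giving an mIoU of 7/12.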
Collapse
|
33
|
Solimani F, Cardellicchio A, Nitti M, Lako A, Dimauro G, Renò V. A Systematic Review of Effective Hardware and Software Factors Affecting High-Throughput Plant Phenotyping. INFORMATION 2023. [DOI: 10.3390/info14040214] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/05/2023] Open
Abstract
Plant phenotyping studies the complex characteristics of plants, with the aim of evaluating and assessing their condition and finding better exemplars. Recently, a new branch emerged in the phenotyping field, namely, high-throughput phenotyping (HTP). Specifically, HTP exploits modern data sampling techniques to gather a high amount of data that can be used to improve the effectiveness of phenotyping. Hence, HTP combines the knowledge derived from the phenotyping domain with computer science, engineering, and data analysis techniques. In this scenario, machine learning (ML) and deep learning (DL) algorithms have been successfully integrated with noninvasive imaging techniques, playing a key role in automation, standardization, and quantitative data analysis. This study aims to systematically review two main areas of interest for HTP: hardware and software. For each of these areas, two influential factors were identified: for hardware, platforms and sensing equipment were analyzed; for software, the focus was on algorithms and new trends. The study was conducted following the PRISMA protocol, which allowed the refinement of the research on a wide selection of papers by extracting a meaningful dataset of 32 articles of interest. The analysis highlighted the diffusion of ground platforms, which were used in about 47% of reviewed methods, and RGB sensors, mainly due to their competitive costs, high compatibility, and versatility. Furthermore, DL-based algorithms accounted for the larger share (about 69%) of reviewed approaches, mainly due to their effectiveness and the focus placed on them by the scientific community over the last few years. Future research will focus on improving DL models to better handle hardware-generated data. The final aim is to create integrated, user-friendly, and scalable tools that can be directly deployed and used in the field to improve the overall crop yield.
Collapse
Affiliation(s)
- Firozeh Solimani
- Institute of Intelligent Industrial Technologies and Systems for Advanced Manufacturing, National Research Council of Italy, Via Amendola 122 D/O, 70126 Bari, Italy
- Angelo Cardellicchio
- Institute of Intelligent Industrial Technologies and Systems for Advanced Manufacturing, National Research Council of Italy, Via Amendola 122 D/O, 70126 Bari, Italy
- Massimiliano Nitti
- Institute of Intelligent Industrial Technologies and Systems for Advanced Manufacturing, National Research Council of Italy, Via Amendola 122 D/O, 70126 Bari, Italy
- Alfred Lako
- Faculty of Civil Engineering, Polytechnic University of Tirana, Bulevardi Dëshmorët e Kombit Nr. 4, 1000 Tiranë, Albania
- Giovanni Dimauro
- Department of Computer Science, University of Bari, Via E. Orabona, 4, 70125 Bari, Italy
- Vito Renò
- Institute of Intelligent Industrial Technologies and Systems for Advanced Manufacturing, National Research Council of Italy, Via Amendola 122 D/O, 70126 Bari, Italy
Collapse
|
34
|
Wu X, Fan X, Luo P, Choudhury SD, Tjahjadi T, Hu C. From Laboratory to Field: Unsupervised Domain Adaptation for Plant Disease Recognition in the Wild. PLANT PHENOMICS (WASHINGTON, D.C.) 2023; 5:0038. [PMID: 37011278 PMCID: PMC10059679 DOI: 10.34133/plantphenomics.0038] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/28/2022] [Accepted: 02/28/2023] [Indexed: 06/19/2023]
Abstract
Plant disease recognition is of vital importance for monitoring plant development and predicting crop production. However, due to data degradation caused by different conditions of image acquisition, e.g., laboratory vs. field environment, machine learning-based recognition models generated within a specific dataset (source domain) tend to lose their validity when generalized to a novel dataset (target domain). To this end, domain adaptation methods can be leveraged for the recognition by learning invariant representations across domains. In this paper, we address the domain shift issues in plant disease recognition and propose a novel unsupervised domain adaptation method via uncertainty regularization, namely, Multi-Representation Subdomain Adaptation Network with Uncertainty Regularization for Cross-Species Plant Disease Classification (MSUN). Our simple but effective MSUN makes a breakthrough in plant disease recognition in the wild by using a large amount of unlabeled data and nonadversarial training. Specifically, MSUN comprises multirepresentation and subdomain adaptation modules and auxiliary uncertainty regularization. The multirepresentation module enables MSUN to learn the overall structure of features while also capturing more details by using multiple representations of the source domain. This effectively alleviates the problem of large interdomain discrepancy. Subdomain adaptation is used to capture discriminative properties by addressing the issues of higher interclass similarity and lower intraclass variation. Finally, the auxiliary uncertainty regularization effectively suppresses the uncertainty problem due to domain transfer. MSUN was experimentally validated to achieve optimal results on the PlantDoc, Plant-Pathology, Corn-Leaf-Diseases, and Tomato-Leaf-Diseases datasets, with accuracies of 56.06%, 72.31%, 96.78%, and 50.58%, respectively, surpassing other state-of-the-art domain adaptation techniques considerably.
Collapse
Affiliation(s)
- Xinlu Wu
- College of Information Science and Technology, Nanjing Forestry University, Nanjing 210037, China
- Xijian Fan
- College of Information Science and Technology, Nanjing Forestry University, Nanjing 210037, China
- Peng Luo
- Institute of Forest Resource Information Techniques, Chinese Academy of Forestry, Beijing 100091, China
- Key Laboratory of Forestry Remote Sensing and Information System, National Forestry and Grassland Administration, Beijing 100091, China
- Sruti Das Choudhury
- Department of Computer Science and Engineering, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
- Tardi Tjahjadi
- School of Engineering, University of Warwick, Coventry CV4 7AL, UK
- Chunhua Hu
- College of Information Science and Technology, Nanjing Forestry University, Nanjing 210037, China
Collapse
|
35
|
Xu J, Yao J, Zhai H, Li Q, Xu Q, Xiang Y, Liu Y, Liu T, Ma H, Mao Y, Wu F, Wang Q, Feng X, Mu J, Lu Y. TrichomeYOLO: A Neural Network for Automatic Maize Trichome Counting. PLANT PHENOMICS (WASHINGTON, D.C.) 2023; 5:0024. [PMID: 36930773 PMCID: PMC10013788 DOI: 10.34133/plantphenomics.0024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/04/2022] [Accepted: 01/17/2023] [Indexed: 06/18/2023]
Abstract
Plant trichomes are epidermal structures with a wide variety of functions in plant development and stress responses. Although the functional importance of trichomes has been realized, the tedious and time-consuming manual phenotyping process greatly limits the progress of trichome gene cloning research. Currently, there are no fully automated methods for identifying maize trichomes. We introduce TrichomeYOLO, an automated trichome counting and measuring method that uses a deep convolutional neural network to identify the density and length of maize trichomes from scanning electron microscopy images. Our network achieved 92.1% identification accuracy on scanning electron microscopy micrographs of maize leaves, substantially outperforming the 5 currently mainstream object detection models: Faster R-CNN, YOLOv3, YOLOv5, DETR, and Cascade R-CNN. We applied TrichomeYOLO to investigate trichome variations in a natural population of maize and achieved robust trichome identification. Our method and the pretrained model are openly available on GitHub (https://github.com/yaober/trichomecounter). We believe TrichomeYOLO will enable efficient trichome identification and facilitate research on maize trichomes.
Collapse
Affiliation(s)
- Jie Xu
- Maize Research Institute, Sichuan Agricultural University, Wenjiang 611130, Sichuan, China
- State Key Laboratory of Crop Gene Exploration and Utilization in Southwest China, Sichuan Agricultural University, Wenjiang 611130, Sichuan, China
- Jia Yao
- College of Information Engineering, Sichuan Agricultural University, Yaan 625014, Sichuan, China
- Hang Zhai
- Maize Research Institute, Sichuan Agricultural University, Wenjiang 611130, Sichuan, China
- State Key Laboratory of Crop Gene Exploration and Utilization in Southwest China, Sichuan Agricultural University, Wenjiang 611130, Sichuan, China
- Qimeng Li
- Maize Research Institute, Sichuan Agricultural University, Wenjiang 611130, Sichuan, China
- State Key Laboratory of Crop Gene Exploration and Utilization in Southwest China, Sichuan Agricultural University, Wenjiang 611130, Sichuan, China
- Qi Xu
- Maize Research Institute, Sichuan Agricultural University, Wenjiang 611130, Sichuan, China
- State Key Laboratory of Crop Gene Exploration and Utilization in Southwest China, Sichuan Agricultural University, Wenjiang 611130, Sichuan, China
- Ying Xiang
- College of Information Engineering, Sichuan Agricultural University, Yaan 625014, Sichuan, China
- Yaxi Liu
- Triticeae Research Institute, Sichuan Agricultural University, Wenjiang 611130, Sichuan, China
- Tianhong Liu
- Maize Research Institute, Sichuan Agricultural University, Wenjiang 611130, Sichuan, China
- State Key Laboratory of Crop Gene Exploration and Utilization in Southwest China, Sichuan Agricultural University, Wenjiang 611130, Sichuan, China
- Huili Ma
- Maize Research Institute, Sichuan Agricultural University, Wenjiang 611130, Sichuan, China
- State Key Laboratory of Crop Gene Exploration and Utilization in Southwest China, Sichuan Agricultural University, Wenjiang 611130, Sichuan, China
- Yan Mao
- College of Chemistry and Life Sciences, Chengdu Normal University, Wenjiang 611130, Sichuan, China
- Fengkai Wu
- Maize Research Institute, Sichuan Agricultural University, Wenjiang 611130, Sichuan, China
- State Key Laboratory of Crop Gene Exploration and Utilization in Southwest China, Sichuan Agricultural University, Wenjiang 611130, Sichuan, China
- Qingjun Wang
- Maize Research Institute, Sichuan Agricultural University, Wenjiang 611130, Sichuan, China
- State Key Laboratory of Crop Gene Exploration and Utilization in Southwest China, Sichuan Agricultural University, Wenjiang 611130, Sichuan, China
- Xuanjun Feng
- Maize Research Institute, Sichuan Agricultural University, Wenjiang 611130, Sichuan, China
- State Key Laboratory of Crop Gene Exploration and Utilization in Southwest China, Sichuan Agricultural University, Wenjiang 611130, Sichuan, China
- Jiong Mu
- College of Information Engineering, Sichuan Agricultural University, Yaan 625014, Sichuan, China
- Yanli Lu
- Maize Research Institute, Sichuan Agricultural University, Wenjiang 611130, Sichuan, China
- State Key Laboratory of Crop Gene Exploration and Utilization in Southwest China, Sichuan Agricultural University, Wenjiang 611130, Sichuan, China
Collapse
|
36
|
Field‐based robotic leaf angle detection and characterization of maize plants using stereo vision and deep convolutional neural networks. J FIELD ROBOT 2023. [DOI: 10.1002/rob.22166] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/07/2023]
|
37
|
Gupta S, Kumar P, Tekchandani R. A multimodal facial cues based engagement detection system in e-learning context using deep learning approach. MULTIMEDIA TOOLS AND APPLICATIONS 2023; 82:1-27. [PMID: 36789011 PMCID: PMC9911959 DOI: 10.1007/s11042-023-14392-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/13/2022] [Revised: 11/20/2022] [Indexed: 06/18/2023]
Abstract
Due to the COVID-19 crisis, the education sector has shifted to a virtual environment. Monitoring the engagement level and providing regular feedback during e-classes is one of the major concerns, as this capability is lacking in the e-learning environment, where the teacher cannot physically observe students. The present study proposes an engagement detection system to ensure that students get immediate feedback during e-learning. Our proposed engagement system analyses the student's behaviour throughout the e-learning session. The proposed novel approach evaluates three modalities based on the student's behaviour, namely facial expression, eye blink count, and head movement, from live video streams to predict student engagement in e-learning. The proposed system is implemented using deep-learning approaches such as VGG-19 and ResNet-50 for facial emotion recognition and a facial-landmark approach for eye-blink and head-movement detection. The results from the different modalities (for which the algorithms are proposed) are combined to determine the engagement index (EI). Based on the EI value, an engaged or disengaged state is predicted. The present study suggests that the proposed facial-cues-based multimodal system accurately determines student engagement in real time. The experimental research achieved an accuracy of 92.58% and showed that the proposed engagement detection approach significantly outperforms existing approaches.
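Blink counting from facial landmarks, one of the three modalities above, is commonly implemented with the eye aspect ratio (EAR) of Soukupová and Čech. A minimal sketch; the threshold and minimum frame count are illustrative assumptions and may differ from this paper's exact formulation:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR for one eye given its six landmarks p1..p6 as a (6, 2) array.

    EAR = (|p2-p6| + |p3-p5|) / (2 |p1-p4|). The ratio drops sharply
    when the eyelid closes, so sustained low values indicate a blink.
    """
    p = np.asarray(eye, dtype=np.float64)
    a = np.linalg.norm(p[1] - p[5])
    b = np.linalg.norm(p[2] - p[4])
    c = np.linalg.norm(p[0] - p[3])
    return (a + b) / (2.0 * c)

def count_blinks(ear_series, threshold=0.21, min_frames=2):
    """Count runs of at least min_frames consecutive frames below threshold."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks
```

In practice the six landmarks per eye would come from a facial-landmark detector (e.g., dlib's 68-point model); here they are supplied directly.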
Collapse
Affiliation(s)
- Swadha Gupta
- Department of Computer Science and Engineering, Thapar Institute of Engineering and Technology, Patiala 147001, Punjab, India
- Parteek Kumar
- Department of Computer Science and Engineering, Thapar Institute of Engineering and Technology, Patiala 147001, Punjab, India
- Rajkumar Tekchandani
- Department of Computer Science and Engineering, Thapar Institute of Engineering and Technology, Patiala 147001, Punjab, India
Collapse
|
38
|
Coleman GRY, Salter WT. More eyes on the prize: open-source data, software and hardware for advancing plant science through collaboration. AOB PLANTS 2023; 15:plad010. [PMID: 37025102 PMCID: PMC10071051 DOI: 10.1093/aobpla/plad010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/15/2022] [Accepted: 03/08/2023] [Indexed: 06/19/2023]
Abstract
Automating the analysis of plants using image processing would help remove barriers to phenotyping and large-scale precision agricultural technologies, such as site-specific weed control. The combination of accessible hardware and high-performance deep learning (DL) tools for plant analysis is becoming widely recognised as a path forward for both plant science and applied precision agricultural purposes. Yet, a lack of collaboration in image analysis for plant science, despite the open-source origins of much of the technology, is hindering development. Here, we show how tools developed for specific attributes of phenotyping or weed recognition for precision weed control have substantially overlapping data structures, software/hardware requirements and outputs. An open-source approach to these tools facilitates interdisciplinary collaboration, avoiding unnecessary repetition and allowing research groups in both basic and applied sciences to capitalise on advancements and resolve respective bottlenecks. The approach mimics that of machine learning in its nascence. Three areas of collaboration are identified as critical for improving efficiency: (1) standardised, open-source, annotated dataset development with consistent metadata reporting; (2) establishment of accessible and reliable training and testing platforms for DL algorithms; and (3) sharing of all source code used in the research process. The complexity of imaging plants and the cost of annotating image datasets mean that collaboration from typically distinct fields will be necessary to capitalise on the benefits of DL for both applied and basic science purposes.
Collapse
Affiliation(s)
- Guy R Y Coleman
- School of Life and Environmental Sciences, Sydney Institute of Agriculture, The University of Sydney, Brownlow Hill, New South Wales 2570, Australia
- William T Salter
- School of Life and Environmental Sciences, Sydney Institute of Agriculture, The University of Sydney, Narrabri, New South Wales 2390, Australia
Collapse
|
39
|
Harfouche AL, Nakhle F, Harfouche AH, Sardella OG, Dart E, Jacobson D. A primer on artificial intelligence in plant digital phenomics: embarking on the data to insights journey. TRENDS IN PLANT SCIENCE 2023; 28:154-184. [PMID: 36167648 DOI: 10.1016/j.tplants.2022.08.021] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Revised: 08/22/2022] [Accepted: 08/25/2022] [Indexed: 06/16/2023]
Abstract
Artificial intelligence (AI) has emerged as a fundamental component of global agricultural research that is poised to impact on many aspects of plant science. In digital phenomics, AI is capable of learning intricate structure and patterns in large datasets. We provide a perspective and primer on AI applications to phenome research. We propose a novel human-centric explainable AI (X-AI) system architecture consisting of data architecture, technology infrastructure, and AI architecture design. We clarify the difference between post hoc models and 'interpretable by design' models. We include guidance for effectively using an interpretable by design model in phenomic analysis. We also provide directions to sources of tools and resources for making data analytics increasingly accessible. This primer is accompanied by an interactive online tutorial.
Collapse
Affiliation(s)
- Antoine L Harfouche
- Department for Innovation in Biological, Agro-Food, and Forest Systems, University of Tuscia, Viterbo, VT 01100, Italy.
- Farid Nakhle
- Department for Innovation in Biological, Agro-Food, and Forest Systems, University of Tuscia, Viterbo, VT 01100, Italy
- Antoine H Harfouche
- Unité de Formation et de Recherche en Sciences Économiques, Gestion, Mathématiques, et Informatique, Université Paris Nanterre, 92001 Nanterre, France
- Orlando G Sardella
- Department for Innovation in Biological, Agro-Food, and Forest Systems, University of Tuscia, Viterbo, VT 01100, Italy
- Eli Dart
- Energy Sciences Network (ESnet), Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Daniel Jacobson
- Biosciences Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA
Collapse
|
40
|
Du X, Chen Z, Li Q, Yang S, Jiang L, Yang Y, Li Y, Gu Z. Organoids revealed: morphological analysis of the profound next generation in-vitro model with artificial intelligence. Biodes Manuf 2023; 6:319-339. [PMID: 36713614 PMCID: PMC9867835 DOI: 10.1007/s42242-022-00226-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2022] [Accepted: 12/06/2022] [Indexed: 01/21/2023]
Abstract
In modern terminology, "organoids" refer to cells that grow in a specific three-dimensional (3D) environment in vitro, sharing similar structures with their source organs or tissues. Observing the morphology or growth characteristics of organoids through a microscope is a commonly used method of organoid analysis. However, it is difficult, time-consuming, and inaccurate to screen and analyze organoids only manually, a problem which cannot be easily solved with traditional technology. Artificial intelligence (AI) technology has proven to be effective in many biological and medical research fields, especially in the analysis of single-cell or hematoxylin/eosin-stained tissue slices. When used to analyze organoids, AI should also provide more efficient, quantitative, accurate, and fast solutions. In this review, we will first briefly outline the application areas of organoids and then discuss the shortcomings of traditional organoid measurement and analysis methods. Secondly, we will summarize the development from machine learning to deep learning and the advantages of the latter, and then describe how to utilize a convolutional neural network to solve the challenges in organoid observation and analysis. Finally, we will discuss the limitations of current AI used in organoid research, as well as opportunities and future research directions.
Collapse
Affiliation(s)
- Xuan Du
- State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing, 210096 China
- Zaozao Chen
- State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Qiwei Li
- State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Sheng Yang
- Key Laboratory of Environmental Medicine Engineering, Ministry of Education, School of Public Health, Southeast University, Nanjing 210009, China
- Lincao Jiang
- State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Yi Yang
- State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Yanhui Li
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210008, China
- Zhongze Gu
- State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
Collapse
|
41
Dong D, Nagasubramanian K, Wang R, Frei UK, Jubery TZ, Lübberstedt T, Ganapathysubramanian B. Self-supervised maize kernel classification and segmentation for embryo identification. Front Plant Sci 2023; 14:1108355. [PMID: 37123832] [PMCID: PMC10140504] [DOI: 10.3389/fpls.2023.1108355] [Received: 11/25/2022] [Accepted: 03/28/2023] [Indexed: 05/03/2023]
Abstract
Introduction: Computer vision and deep learning (DL) techniques have succeeded in a wide range of diverse fields. Recently, these techniques have been successfully deployed in plant science applications to address food security, productivity, and environmental sustainability problems for a growing global population. However, training these DL models often necessitates large-scale manual annotation of data, which frequently becomes a tedious, time- and resource-intensive process. Recent advances in self-supervised learning (SSL) methods have proven instrumental in overcoming these obstacles, using purely unlabeled datasets to pre-train DL models. Methods: Here, we implement the popular self-supervised contrastive learning methods NNCLR (Nearest-Neighbor Contrastive Learning of visual Representations) and SimCLR (Simple framework for Contrastive Learning of visual Representations) for the classification of spatial orientation and segmentation of embryos of maize kernels. Maize kernels are imaged using a commercial high-throughput imaging system. This image data is often used in multiple downstream applications across both production and breeding, for instance, sorting for oil content based on segmenting and quantifying the scutellum's size, and classifying haploid and diploid kernels. Results and discussion: We show that in both classification and segmentation problems, SSL techniques outperform their purely supervised transfer-learning-based counterparts and are significantly more annotation-efficient. Additionally, we show that a single SSL pre-trained model can be efficiently fine-tuned for both classification and segmentation, indicating good transferability across multiple downstream applications. Segmentation models with SSL-pretrained backbones produce Dice similarity coefficients of 0.81, higher than the 0.78 and 0.73 of those with ImageNet-pretrained and randomly initialized backbones, respectively. We observe that fine-tuning classification and segmentation models on as little as 1% of the annotations produces competitive results. These results show that SSL provides a meaningful step forward in data efficiency for agricultural deep learning and computer vision.
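The contrastive objective behind the SimCLR method named in the abstract above is the NT-Xent loss, which pulls the two augmented views of the same image together and pushes all other images in the batch apart. A minimal NumPy sketch follows; the function and variable names are ours and the toy vectors stand in for learned encoder outputs, so this is an illustration of the loss, not the authors' implementation:

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """NT-Xent loss over a batch of 2N embeddings, where rows 2k and
    2k+1 of z are the two augmented views of sample k."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    n = len(z)
    pos = np.arange(n) ^ 1                            # partner index: (0,1), (2,3), ...
    log_prob = sim[np.arange(n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()

# two samples, two views each: views of the same sample are identical,
# views of different samples are orthogonal
z = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
print(round(float(nt_xent_loss(z)), 4))  # → 0.2395
```

With perfectly aligned positives the loss is small but nonzero, because the denominator still contains the negative pairs; as views of the same sample drift apart the loss grows.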
Affiliation(s)
- David Dong: Ames High School, Ames, IA, United States; Translational AI Center, Iowa State University, Ames, IA, United States
- Koushik Nagasubramanian: Translational AI Center, Iowa State University, Ames, IA, United States; Department of Electrical Engineering, Iowa State University, Ames, IA, United States
- Ruidong Wang: Department of Agronomy, Iowa State University, Ames, IA, United States
- Ursula K. Frei: Department of Agronomy, Iowa State University, Ames, IA, United States
- Talukder Z. Jubery (correspondence): Translational AI Center, Iowa State University, Ames, IA, United States; Department of Mechanical Engineering, Iowa State University, Ames, IA, United States
- Baskar Ganapathysubramanian (correspondence): Translational AI Center, Iowa State University, Ames, IA, United States; Department of Electrical Engineering, Iowa State University, Ames, IA, United States; Department of Mechanical Engineering, Iowa State University, Ames, IA, United States
42
Carley CN, Zubrod MJ, Dutta S, Singh AK. Using machine learning enabled phenotyping to characterize nodulation in three early vegetative stages in soybean. Crop Sci 2023; 63:204-226. [PMID: 37503354] [PMCID: PMC10369931] [DOI: 10.1002/csc2.20861] [Received: 06/28/2022] [Accepted: 09/29/2022] [Indexed: 07/29/2023]
Abstract
The symbiotic relationship between soybean [Glycine max L. (Merr.)] roots and bacteria (Bradyrhizobium japonicum) leads to the development of nodules, important legume root structures where atmospheric nitrogen (N2) is fixed into bio-available ammonia (NH3) for plant growth and development. With the recent development of the Soybean Nodule Acquisition Pipeline (SNAP), nodules can more easily be quantified and evaluated for genetic diversity and growth patterns across unique soybean root system architectures. We explored six diverse soybean genotypes across three field-year combinations in three early vegetative stages of development and report the unique relationships between soybean nodules in the taproot and non-taproot growth zones of the diverse root system architectures of these genotypes. We found unique growth patterns in taproot nodules, with genotypic differences in how nodules grew in count, size, and total nodule area per genotype compared with non-taproot nodules. We propose that nodulation should be defined as a function of both nodule count and individual nodule area, resulting in a total nodule area per root or per growth region of the root. We also report on the relationships between nodules and total nitrogen in the seed at maturity, finding a strong correlation between taproot nodules and final seed nitrogen at maturity. These findings could enhance understanding of the plant-Bradyrhizobium relationship, and exploring these relationships could help leverage greater nitrogen use efficiency and nodulation carbon-to-nitrogen production efficiency across the soybean germplasm.
43
Fan J, Li Y, Yu S, Gou W, Guo X, Zhao C. Application of Internet of Things to Agriculture-The LQ-FieldPheno Platform: A High-Throughput Platform for Obtaining Crop Phenotypes in Field. Research (Wash DC) 2023; 6:0059. [PMID: 36951796] [PMCID: PMC10027232] [DOI: 10.34133/research.0059] [Received: 11/10/2022] [Accepted: 01/07/2023] [Indexed: 01/22/2023]
Abstract
The lack of efficient crop phenotypic measurement methods has become a bottleneck in breeding and precision cultivation, whereas high-throughput, accurate phenotypic measurement could accelerate breeding and improve existing cultivation management technology. This paper introduces a high-throughput crop phenotype measurement platform named the LQ-FieldPheno, developed by the China National Agricultural Information Engineering Technology Research Centre. The platform is a mobile, high-throughput automatic phenotype acquisition system based on a field track, which introduces the Internet of Things (IoT) into agricultural breeding. It uses a crop phenotype multisensor central imaging unit as its core and integrates different types of equipment, including an automatic control system, the field track, an intelligent navigation vehicle, and environmental sensors. Furthermore, it combines an RGB camera, a six-band multispectral camera, a thermal infrared camera, a three-dimensional laser radar, and a depth camera. Special software was developed to control motions and sensors and to design run lines. Using the wireless sensor networks and mobile communication networks of the IoT, the system can obtain phenotypic information about plants throughout their growth period automatically, with high throughput and high temporal resolution. Moreover, the LQ-FieldPheno is characterized by multi-source data acquisition, strong timeliness, remarkable expansibility, high cost performance, and flexible customization. The LQ-FieldPheno was operated in the 2020 maize growing season, and the collected point-cloud data were used to estimate maize plant height. Compared with traditional crop phenotypic measurement technology, the LQ-FieldPheno has the advantage of continuously and synchronously obtaining multi-source phenotypic data at different growth stages and extracting different plant parameters. The platform could contribute to research on crop phenotypes, remote sensing, agronomy, and related disciplines.
Affiliation(s)
- Jiangchuan Fan: Beijing Key Laboratory of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China; Beijing Research Center for Information Technology in Agriculture, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China; China National Engineering Research Center for Information Technology in Agriculture (NERCITA), Beijing 100097, China
- Yinglun Li: Beijing Key Laboratory of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China; Beijing Research Center for Information Technology in Agriculture, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China; China National Engineering Research Center for Information Technology in Agriculture (NERCITA), Beijing 100097, China
- Shuan Yu: Beijing Key Laboratory of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China; Beijing Research Center for Information Technology in Agriculture, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China; China National Engineering Research Center for Information Technology in Agriculture (NERCITA), Beijing 100097, China
- Wenbo Gou: Beijing Key Laboratory of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China; Beijing Research Center for Information Technology in Agriculture, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China; China National Engineering Research Center for Information Technology in Agriculture (NERCITA), Beijing 100097, China
- Xinyu Guo (correspondence): Beijing Key Laboratory of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China; Beijing Research Center for Information Technology in Agriculture, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China; China National Engineering Research Center for Information Technology in Agriculture (NERCITA), Beijing 100097, China
- Chunjiang Zhao (correspondence): Beijing Key Laboratory of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China; Beijing Research Center for Information Technology in Agriculture, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China; China National Engineering Research Center for Information Technology in Agriculture (NERCITA), Beijing 100097, China
44
Dong X, Wang Q, Huang Q, Ge Q, Zhao K, Wu X, Wu X, Lei L, Hao G. PDDD-PreTrain: A Series of Commonly Used Pre-Trained Models Support Image-Based Plant Disease Diagnosis. Plant Phenomics 2023; 5:0054. [PMID: 37213546] [PMCID: PMC10194370] [DOI: 10.34133/plantphenomics.0054] [Received: 01/18/2023] [Accepted: 04/25/2023] [Indexed: 05/23/2023]
Abstract
Plant diseases threaten global food security by reducing crop yield; thus, diagnosing plant diseases is critical to agricultural production. Artificial intelligence (AI) technologies are gradually replacing traditional plant disease diagnosis methods, which are time-consuming, costly, inefficient, and subjective. As a mainstream AI method, deep learning has substantially improved plant disease detection and diagnosis for precision agriculture. Meanwhile, most existing plant disease diagnosis methods adopt a pre-trained deep learning model to support the diagnosis of diseased leaves. However, the commonly used pre-trained models are built on general computer vision datasets rather than botanical datasets, so they provide little domain knowledge about plant disease; this makes it harder for the final diagnosis model to distinguish between different plant diseases and lowers diagnostic precision. To address this issue, we propose a series of commonly used pre-trained models based on plant disease images to improve the performance of disease diagnosis. In addition, we have experimented with these plant disease pre-trained models on diagnosis tasks such as plant disease identification, plant disease detection, plant disease segmentation, and other subtasks. Extensive experiments show that the plant disease pre-trained models achieve higher accuracy than existing pre-trained models with less training time, thereby supporting better diagnosis of plant diseases. Our pre-trained models are open-sourced at https://pd.samlab.cn/ and on the Zenodo platform at https://doi.org/10.5281/zenodo.7856293.
Affiliation(s)
- Xinyu Dong: State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Qi Wang (correspondence): State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Qianding Huang: State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Qinglong Ge: State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Kejun Zhao: State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Xingcai Wu: State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Xue Wu: State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Liang Lei: School of Physics and Optoelectronic Engineering, Guangdong University of Technology, Guangzhou 510006, China
- Gefei Hao (correspondence): State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China; National Key Laboratory of Green Pesticide, Key Laboratory of Green Pesticide and Agricultural Bioengineering, Ministry of Education, Guizhou University, Guiyang 550025, China
45
Thesma V, Mohammadpour Velni J. Plant Root Phenotyping Using Deep Conditional GANs and Binary Semantic Segmentation. Sensors (Basel) 2022; 23:309. [PMID: 36616905] [PMCID: PMC9823511] [DOI: 10.3390/s23010309] [Received: 11/12/2022] [Revised: 12/20/2022] [Accepted: 12/21/2022] [Indexed: 05/05/2023]
Abstract
This paper develops an approach to binary semantic segmentation of Arabidopsis thaliana root images for plant root phenotyping, using a conditional generative adversarial network (cGAN) to address pixel-wise class imbalance. Specifically, we use Pix2PixHD, an image-to-image translation cGAN, to generate realistic, high-resolution images of plant roots and annotations similar to the original dataset. We use our trained cGAN to triple the size of the original root dataset and reduce pixel-wise class imbalance, and then feed both the original and generated datasets into SegNet to semantically segment root pixels from the background. Furthermore, we postprocess the segmentation results to close small, apparent gaps along the main and lateral roots. Lastly, we compare our binary semantic segmentation approach with the state of the art in root segmentation. Our efforts demonstrate that a cGAN can produce realistic, high-resolution root images and reduce pixel-wise class imbalance, and that our segmentation model yields high testing accuracy (over 99%), low cross-entropy error (under 2%), a high Dice score (near 0.80), and inference times low enough for near real-time processing.
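The gap-closing postprocessing described in this abstract is typically implemented as morphological closing, a dilation followed by an erosion. A minimal pure-NumPy sketch with a 3x3 structuring element follows; the function names and toy mask are ours, and the authors' pipeline may use different structuring elements and border handling:

```python
import numpy as np

def _neighbourhoods(mask):
    """Stack the 9 shifted copies of a zero-padded binary mask,
    i.e. the 3x3 neighbourhood of every pixel."""
    p = np.pad(mask, 1)
    h, w = mask.shape
    return np.stack([p[i:i + h, j:j + w]
                     for i in range(3) for j in range(3)])

def binary_closing(mask):
    """Morphological closing with a 3x3 square: dilation, then erosion."""
    dilated = _neighbourhoods(mask).any(axis=0)   # any neighbour set
    return _neighbourhoods(dilated).all(axis=0)   # all neighbours set

# a one-pixel-wide 'root' with a single-pixel break at column 4
root = np.zeros((5, 9), dtype=bool)
root[2, 1:8] = True
root[2, 4] = False

closed = binary_closing(root)
print(closed[2].astype(int))  # → [0 1 1 1 1 1 1 1 0]  (gap filled)
```

The closing fills the one-pixel break while leaving the line one pixel wide, because the dilation's thickening is undone by the erosion everywhere except inside the gap.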
Affiliation(s)
- Vaishnavi Thesma: School of Electrical and Computer Engineering, University of Georgia, Athens, GA 30602, USA
- Javad Mohammadpour Velni (correspondence; Tel.: +1-864-656-0139): Department of Mechanical Engineering, Clemson University, Clemson, SC 29634, USA
46
Growth of alpine grassland will start and stop earlier under climate warming. Nat Commun 2022; 13:7398. [PMID: 36456572] [PMCID: PMC9715633] [DOI: 10.1038/s41467-022-35194-5] [Received: 07/27/2022] [Accepted: 11/22/2022] [Indexed: 12/03/2022] Open
Abstract
Alpine plants have evolved a tight seasonal cycle of growth and senescence to cope with a short growing season. The potential growing season length (GSL) is increasing because of climate warming, possibly prolonging plant growth above- and belowground. We tested whether growth dynamics in typical alpine grassland are altered when the natural GSL (2-3 months) is experimentally advanced and thus prolonged by 2-4 months. Additional summer months did not extend the growing period: canopy browning started 34-41 days after the start of the season, even when the GSL was more than doubled. Less than 10% of roots were produced during the added months, suggesting that root growth was as conservative as leaf growth. Few species showed a weak second greening under prolonged GSL, and the dominant sedge did not. A longer growing season under future climate may therefore not extend growth in this widespread alpine community, but will foster species that follow a less strict phenology.
47
Xu X, Qiu J, Zhang W, Zhou Z, Kang Y. Soybean Seedling Root Segmentation Using Improved U-Net Network. Sensors (Basel) 2022; 22:8904. [PMID: 36433500] [PMCID: PMC9698826] [DOI: 10.3390/s22228904] [Received: 09/23/2022] [Revised: 11/08/2022] [Accepted: 11/15/2022] [Indexed: 06/01/2023]
Abstract
Soybean seedling root morphology is important to genetic breeding, and root segmentation is a key technique for identifying root morphological characteristics. This paper proposes a semantic segmentation model for soybean seedling root images based on an improved U-Net network to address over-segmentation, unsmooth root edges, and root disconnection, which are easily caused by background interference such as water stains and noise, as well as by inconspicuous contrast in soybean seedling images. Soybean seedling root images in a hydroponic environment were collected for annotation and augmentation. A double attention mechanism was introduced in the downsampling process, and an Attention Gate mechanism was added to the skip connections to enhance the weight of the root region and suppress interference from background and noise. The model's predictions were then visually interpreted using feature maps and class activation maps, and the remaining background noise was removed by connected-component analysis. The experimental results showed that the Accuracy, Precision, Recall, F1-Score, and Intersection over Union of the model were 0.9962, 0.9883, 0.9794, 0.9837, and 0.9683, respectively, with a processing time of 0.153 s per image. A segmentation experiment on soybean root images in a soil-culture environment showed that the proposed model extracts more complete detail and has strong generalization ability. It achieves accurate root segmentation in soybean seedlings and provides a theoretical basis and technical support for the quantitative evaluation of root morphological characteristics in soybean seedlings.
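The Accuracy, Precision, Recall, F1-Score, and Intersection over Union reported in this abstract can all be derived from the pixel-wise confusion counts of a predicted mask against its ground truth. A minimal sketch (our own illustration with a toy 4x4 mask, not the paper's code):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise binary-segmentation metrics from two boolean masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)     # root pixels correctly predicted
    tn = np.sum(~pred & ~truth)   # background correctly predicted
    fp = np.sum(pred & ~truth)    # background predicted as root
    fn = np.sum(~pred & truth)    # root pixels missed
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "Accuracy": (tp + tn) / (tp + tn + fp + fn),
        "Precision": precision,
        "Recall": recall,
        "F1-Score": 2 * precision * recall / (precision + recall),
        "IoU": tp / (tp + fp + fn),
    }

truth = np.zeros((4, 4), dtype=bool)
truth[1:3, 1:3] = True            # a 2x2 root region
pred = truth.copy()
pred[1, 1] = False                # one missed root pixel
pred[0, 0] = True                 # one false-positive pixel

m = segmentation_metrics(pred, truth)
print({k: round(float(v), 3) for k, v in m.items()})
# → {'Accuracy': 0.875, 'Precision': 0.75, 'Recall': 0.75, 'F1-Score': 0.75, 'IoU': 0.6}
```

Note how IoU is always the strictest of the overlap metrics here: it counts both false positives and false negatives in its denominator, which is why the paper's IoU (0.9683) sits below its F1-Score (0.9837).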
Affiliation(s)
- Xiuying Xu: College of Engineering, Heilongjiang Bayi Agricultural University, Daqing 163319, China; Heilongjiang Province Conservation Tillage Engineering Technology Research Center, Daqing 163319, China
- Jinkai Qiu: College of Engineering, Heilongjiang Bayi Agricultural University, Daqing 163319, China
- Wei Zhang: College of Engineering, Heilongjiang Bayi Agricultural University, Daqing 163319, China; Heilongjiang Province Conservation Tillage Engineering Technology Research Center, Daqing 163319, China
- Zheng Zhou: College of Engineering, Heilongjiang Bayi Agricultural University, Daqing 163319, China
- Ye Kang: College of Engineering, Heilongjiang Bayi Agricultural University, Daqing 163319, China
48
Tao H, Xu S, Tian Y, Li Z, Ge Y, Zhang J, Wang Y, Zhou G, Deng X, Zhang Z, Ding Y, Jiang D, Guo Q, Jin S. Proximal and remote sensing in plant phenomics: 20 years of progress, challenges, and perspectives. Plant Commun 2022; 3:100344. [PMID: 35655429] [PMCID: PMC9700174] [DOI: 10.1016/j.xplc.2022.100344] [Received: 01/19/2022] [Revised: 05/08/2022] [Accepted: 05/27/2022] [Indexed: 06/01/2023]
Abstract
Plant phenomics (PP) has been recognized as a bottleneck in studying the interactions of genomics and environment in plants, limiting the progress of smart breeding and precise cultivation. High-throughput plant phenotyping is challenging owing to the spatio-temporal dynamics of traits. Proximal and remote sensing (PRS) techniques are increasingly used for plant phenotyping because of their advantages in multi-dimensional data acquisition and analysis. Substantial progress of PRS applications in PP has been observed over the last two decades and is analyzed here from an interdisciplinary perspective based on 2972 publications. This progress covers most aspects of PRS application in PP, including patterns of global spatial distribution and temporal dynamics, specific PRS technologies, phenotypic research fields, working environments, species, and traits. Subsequently, we demonstrate how to link PRS to multi-omics studies, including how to achieve multi-dimensional PRS data acquisition and processing, how to systematically integrate all kinds of phenotypic information and derive phenotypic knowledge with biological significance, and how to link PP to multi-omics association analysis. Finally, we identify three future perspectives for PRS-based PP: (1) strengthening the spatial and temporal consistency of PRS data, (2) exploring novel phenotypic traits, and (3) facilitating multi-omics communication.
Affiliation(s)
- Haiyu Tao: Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, National Engineering and Technology Center for Information Agriculture, Collaborative Innovation Centre for Modern Crop Production co-sponsored by Province and Ministry, Nanjing Agricultural University, No. 1 Weigang, Xuanwu District, Nanjing 210095, China
- Shan Xu: Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, National Engineering and Technology Center for Information Agriculture, Collaborative Innovation Centre for Modern Crop Production co-sponsored by Province and Ministry, Nanjing Agricultural University, No. 1 Weigang, Xuanwu District, Nanjing 210095, China
- Yongchao Tian: Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, National Engineering and Technology Center for Information Agriculture, Collaborative Innovation Centre for Modern Crop Production co-sponsored by Province and Ministry, Nanjing Agricultural University, No. 1 Weigang, Xuanwu District, Nanjing 210095, China
- Zhaofeng Li: The Key Laboratory of Oasis Eco-agriculture, Xinjiang Production and Construction Corps, Agriculture College, Shihezi University, Shihezi 832003, China
- Yan Ge: Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, National Engineering and Technology Center for Information Agriculture, Collaborative Innovation Centre for Modern Crop Production co-sponsored by Province and Ministry, Nanjing Agricultural University, No. 1 Weigang, Xuanwu District, Nanjing 210095, China
- Jiaoping Zhang: State Key Laboratory of Crop Genetics and Germplasm Enhancement, National Center for Soybean Improvement, Key Laboratory for Biology and Genetic Improvement of Soybean (General, Ministry of Agriculture), Nanjing Agricultural University, Nanjing 210095, China
- Yu Wang: Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, National Engineering and Technology Center for Information Agriculture, Collaborative Innovation Centre for Modern Crop Production co-sponsored by Province and Ministry, Nanjing Agricultural University, No. 1 Weigang, Xuanwu District, Nanjing 210095, China
- Guodong Zhou: Sanya Research Institute of Nanjing Agriculture University, Sanya 572024, China
- Xiong Deng: Key Laboratory of Plant Molecular Physiology, Institute of Botany, Chinese Academy of Sciences, Beijing 100093, China
- Ze Zhang: The Key Laboratory of Oasis Eco-agriculture, Xinjiang Production and Construction Corps, Agriculture College, Shihezi University, Shihezi 832003, China
- Yanfeng Ding: Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, National Engineering and Technology Center for Information Agriculture, Collaborative Innovation Centre for Modern Crop Production co-sponsored by Province and Ministry, Nanjing Agricultural University, No. 1 Weigang, Xuanwu District, Nanjing 210095, China; Hainan Yazhou Bay Seed Laboratory, Sanya 572025, China; Sanya Research Institute of Nanjing Agriculture University, Sanya 572024, China
- Dong Jiang: Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, National Engineering and Technology Center for Information Agriculture, Collaborative Innovation Centre for Modern Crop Production co-sponsored by Province and Ministry, Nanjing Agricultural University, No. 1 Weigang, Xuanwu District, Nanjing 210095, China; Hainan Yazhou Bay Seed Laboratory, Sanya 572025, China; Sanya Research Institute of Nanjing Agriculture University, Sanya 572024, China
- Qinghua Guo: Institute of Ecology, College of Urban and Environmental Science, Peking University, Beijing 100871, China
- Shichao Jin: Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, National Engineering and Technology Center for Information Agriculture, Collaborative Innovation Centre for Modern Crop Production co-sponsored by Province and Ministry, Nanjing Agricultural University, No. 1 Weigang, Xuanwu District, Nanjing 210095, China; Hainan Yazhou Bay Seed Laboratory, Sanya 572025, China; Sanya Research Institute of Nanjing Agriculture University, Sanya 572024, China; Jiangsu Provincial Key Laboratory of Geographic Information Science and Technology, International Institute for Earth System Sciences, Nanjing University, Nanjing, Jiangsu 210023, China
49
Sun J, Cao W, Yamanaka T. JustDeepIt: Software tool with graphical and character user interfaces for deep learning-based object detection and segmentation in image analysis. Front Plant Sci 2022; 13:964058. [PMID: 36275541] [PMCID: PMC9583140] [DOI: 10.3389/fpls.2022.964058] [Received: 06/08/2022] [Accepted: 09/20/2022] [Indexed: 06/16/2023]
Abstract
Image processing and analysis based on deep learning are becoming mainstream and increasingly accessible for solving various scientific problems in diverse fields. However, they require advanced computer programming skills and a basic familiarity with character user interfaces (CUIs), so programming beginners face a considerable technical hurdle. Because many potential users of image analysis are experimentalists who use graphical user interfaces (GUIs) in their daily work, there is a need for GUI-based, easy-to-use deep learning software to support their work. Here, we introduce JustDeepIt, a software tool written in Python that simplifies object detection and instance segmentation using deep learning. JustDeepIt provides both a GUI and a CUI. It contains various functional modules for model building and inference, and it is built upon the popular PyTorch, MMDetection, and Detectron2 libraries. The GUI is implemented using the Python library FastAPI, simplifying model building for various deep learning approaches for beginners. As practical examples of JustDeepIt, we prepared four case studies covering critical issues in plant science: (1) wheat head detection with Faster R-CNN, YOLOv3, SSD, and RetinaNet; (2) sugar beet and weed segmentation with Mask R-CNN; (3) plant segmentation with U2-Net; and (4) leaf segmentation with U2-Net. The results support the wide applicability of JustDeepIt in plant science, and we believe it has the potential to be applied to deep learning-based image analysis in various fields beyond plant science.
50
Zaji A, Liu Z, Xiao G, Sangha JS, Ruan Y. A survey on deep learning applications in wheat phenotyping. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109761] [Indexed: 11/24/2022]