1
Li Y, Wu X, Wang Q, Pei Z, Zhao K, Chen P, Hao G. CSNet: A Count-Supervised Network via Multiscale MLP-Mixer for Wheat Ear Counting. Plant Phenomics 2024; 6:0236. PMID: 39165670; PMCID: PMC11334574; DOI: 10.34133/plantphenomics.0236. Received 04/02/2024; accepted 07/23/2024.
Abstract
Wheat is the most widely grown crop in the world, and its yield is closely tied to global food security. Ear number is important for wheat breeding and yield estimation, so automated wheat ear counting techniques are essential for breeding high-yield varieties and increasing grain yield. However, all existing methods require position-level annotation for training, which demands substantial labeling labor and limits the application and development of deep learning in agriculture. To address this problem, we propose CSNet, a count-supervised multiscale perceptive wheat counting network that counts wheat ears accurately using only count information. In particular, in the absence of location information, CSNet adopts MLP-Mixer to construct a multiscale perception module with a global receptive field that learns attention maps for small targets from wheat ear features. Comparative experiments on a publicly available global wheat head detection dataset show that the proposed count-supervised strategy outperforms existing position-supervised methods in terms of mean absolute error (MAE) and root mean square error (RMSE). This superior performance indicates that the approach both improves ear counting and reduces labeling costs, demonstrating great potential for agricultural counting tasks. The code is available at http://csnet.samlab.cn.
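The MAE and RMSE used to compare counting methods can be computed directly from per-image counts. A minimal sketch (the counts below are hypothetical, not from the paper's data):

```python
import numpy as np

def count_errors(pred, true):
    """Mean absolute error and root mean square error for per-image counts."""
    pred = np.asarray(pred, dtype=float)
    true = np.asarray(true, dtype=float)
    mae = np.mean(np.abs(pred - true))
    rmse = np.sqrt(np.mean((pred - true) ** 2))
    return mae, rmse

# Hypothetical predicted vs. ground-truth ear counts for three images
mae, rmse = count_errors([48, 52, 39], [50, 50, 42])
```

RMSE penalizes large per-image miscounts more heavily than MAE, which is why both are usually reported together for counting tasks.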
Affiliation(s)
- Yaoxi Li
  - State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Xingcai Wu
  - State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Qi Wang
  - State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
  - Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
- Zhixun Pei
  - State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Kejun Zhao
  - State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Panfeng Chen
  - State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Gefei Hao
  - State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
  - National Key Laboratory of Green Pesticide, Key Laboratory of Green Pesticide and Agricultural Bioengineering, Ministry of Education, Guiyang 550025, China
2
Bai B, Wang J, Li J, Yu L, Wen J, Han Y. T-YOLO: a lightweight and efficient detection model for nutrient buds in complex tea-plantation environments. Journal of the Science of Food and Agriculture 2024; 104:5698-5711. PMID: 38372581; DOI: 10.1002/jsfa.13396. Received 12/18/2023; revised 01/29/2024; accepted 02/15/2024.
Abstract
BACKGROUND Quick and accurate detection of nutrient buds is essential for yield prediction and field management in tea plantations. However, the complexity of tea-plantation environments and the similarity in color between nutrient buds and older leaves make locating tea nutrient buds challenging. RESULTS This research presents a lightweight and efficient detection model, T-YOLO, for accurate detection of tea nutrient buds in unstructured environments. First, a lightweight module, C2fG2, and an efficient feature extraction module, DBS, are introduced into the backbone and neck of the YOLOv5 baseline model. Second, the head network of the model is pruned for further lightweighting. Finally, a dynamic detection head is integrated to mitigate the feature loss caused by lightweighting. The experimental data show that T-YOLO achieves a mean average precision (mAP) of 84.1% with 11.26 million training parameters (Params) and 17.2 giga floating-point operations (FLOPs). Compared with the baseline YOLOv5 model, T-YOLO reduces Params by 47% and FLOPs by 65%. T-YOLO also outperforms the best existing detection model, YOLOv8, by 7.5% in mAP. CONCLUSION The T-YOLO model proposed in this study performs well in detecting small tea nutrient buds and provides a decision-making basis for tea farmers managing smart tea gardens. T-YOLO also outperforms mainstream detection models on the public Global Wheat Head Detection (GWHD) dataset, offering a reference for building lightweight and efficient detection models for other small-target crops. © 2024 Society of Chemical Industry.
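The mAP scores reported for detectors like T-YOLO rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes. A minimal IoU sketch (box coordinates are illustrative, not from the paper):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned [x1, y1, x2, y2] boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to zero when boxes are disjoint
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

# A predicted bud box overlapping a ground-truth box by a 5x5 region
score = iou([0, 0, 10, 10], [5, 5, 15, 15])
```

A prediction typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold (0.5 is the common default), and mAP averages precision over recall levels and classes under that matching rule.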
Affiliation(s)
- Bingyi Bai
  - College of Electronic Engineering, South China Agricultural University, Guangzhou, China
  - Guangdong Laboratory for Lingnan Modern Agriculture, South China Agricultural University, Guangzhou, China
- Junshu Wang
  - School of Robotics, Guangdong Open University, Guangzhou, China
- Jianlong Li
  - Tea Research Institute, Guangdong Academy of Agricultural Sciences & Guangdong Provincial Key Laboratory of Tea Plant Resources Innovation and Utilization, Guangzhou, China
- Long Yu
  - College of Electronic Engineering, South China Agricultural University, Guangzhou, China
- Jiangtao Wen
  - Ningbo Institute of Digital Twin, Eastern Institute of Technology, Ningbo, China
- Yuxing Han
  - Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
3
Sadeh R, Ben-David R, Herrmann I, Peleg Z. Spectral-genomic chain-model approach enhances the wheat yield component prediction under the Mediterranean climate. Physiologia Plantarum 2024; 176:e14480. PMID: 39187437; DOI: 10.1111/ppl.14480. Received 03/27/2024; revised 06/25/2024; accepted 06/27/2024.
Abstract
In light of a changing climate that jeopardizes future food security, genomic selection is emerging as a valuable tool for breeders to enhance genetic gains and introduce high-yielding varieties. However, predicting grain yield is challenging due to the genetic and physiological complexities involved and the effect of genotype-by-environment interactions on prediction accuracy. We used a chained-model approach to address these challenges, breaking the complex prediction task into simpler steps. A diversity panel with a narrow phenological range was phenotyped across three Mediterranean environments for various morpho-physiological and yield-related traits. The results indicated that a multi-environment model outperformed a single-environment model in prediction accuracy for most traits; prediction accuracy for grain yield, however, was not improved. Thus, to improve grain yield prediction accuracy, we integrated a spectral estimate of spike number, a major wheat yield component, with genomic data. A machine learning approach was used to estimate spike number from canopy hyperspectral reflectance captured by an unmanned aerial vehicle. The spectral-based estimated spike number was used as a secondary trait in multi-trait genomic selection, significantly improving grain yield prediction accuracy. Moreover, the ability to predict spike number from previous seasons' data implies that the approach could be applied to new trials at various scales, even with small plot sizes. Overall, we demonstrate that a novel spectral-genomic chain-model workflow, which uses spectral-based phenotypes as a secondary trait, improves the predictive accuracy of wheat grain yield.
Affiliation(s)
- Roy Sadeh
  - The Robert H. Smith Institute of Plant Sciences and Genetics in Agriculture, The Hebrew University of Jerusalem, Rehovot, Israel
- Roi Ben-David
  - Institute of Plant Sciences, Agriculture Research Organization (ARO)-Volcani Institute, Rishon LeZion, Israel
- Ittai Herrmann
  - The Robert H. Smith Institute of Plant Sciences and Genetics in Agriculture, The Hebrew University of Jerusalem, Rehovot, Israel
- Zvi Peleg
  - The Robert H. Smith Institute of Plant Sciences and Genetics in Agriculture, The Hebrew University of Jerusalem, Rehovot, Israel
4
Dainelli R, Bruno A, Martinelli M, Moroni D, Rocchi L, Morelli S, Ferrari E, Silvestri M, Agostinelli S, La Cava P, Toscano P. GranoScan: an AI-powered mobile app for in-field identification of biotic threats of wheat. Frontiers in Plant Science 2024; 15:1298791. PMID: 38911980; PMCID: PMC11190326; DOI: 10.3389/fpls.2024.1298791. Received 09/22/2023; accepted 05/07/2024.
Abstract
Capitalizing on the widespread adoption of smartphones among farmers and the application of artificial intelligence to computer vision, a variety of mobile applications have recently emerged in the agricultural domain. This paper introduces GranoScan, a freely available mobile app accessible on major online platforms, designed for real-time detection and identification of over 80 threats affecting wheat in the Mediterranean region. Developed through a co-design methodology involving direct collaboration with Italian farmers, this participatory approach resulted in an app featuring: (i) a graphical interface optimized for diverse in-field lighting conditions, (ii) a user-friendly interface allowing swift selection from a predefined menu, (iii) operability even with low or no connectivity, (iv) a straightforward operational guide, and (v) the ability to specify an area of interest in the photo for targeted threat identification. Underpinning GranoScan is a deep learning architecture named efficient minimal adaptive ensembling, used to obtain accurate and robust artificial intelligence models. The method is based on an ensembling strategy whose core models are two instances of the EfficientNet-b0 architecture, selected through the weighted F1-score. In this phase, very good precision is reached, with peaks of 100% for pests, for the leaf damage and root disease tasks, and for some classes of the spike and stem disease tasks. For weeds in the post-germination phase, precision values range between 80% and 100%, while 100% is reached in all but one class for pre-flowering weeds. Regarding recognition accuracy on end-users' in-field photos, GranoScan achieved good performance, with mean accuracies of 77% for leaf diseases and 95% for spike, stem, and root diseases. Pests reached an accuracy of up to 94%, while for weeds the app shows excellent ability (100% accuracy) in recognizing whether the target weed is a dicot or monocot and 60% accuracy in distinguishing species in both the post-germination and pre-flowering stages. Our precision and accuracy results match or outperform those of other studies deploying artificial intelligence models on mobile devices, confirming that GranoScan is a valuable tool even in challenging outdoor conditions.
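The weighted F1-score used above to select the core ensemble models averages per-class F1 with weights proportional to each class's support. A hedged sketch with toy labels (the class names are hypothetical, not GranoScan's actual categories):

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """F1 per class, averaged with weights proportional to class support."""
    classes = sorted(set(y_true))
    support = Counter(y_true)
    total = 0.0
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        total += f1 * support[c] / len(y_true)
    return total

# Toy 3-class evaluation: one "rust" sample is misclassified as "mildew"
score = weighted_f1(["rust", "rust", "mildew", "pest"],
                    ["rust", "mildew", "mildew", "pest"])
```

Weighting by support makes the metric robust to class imbalance, which matters when some threat classes are far rarer than others in field photos.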
Affiliation(s)
- Riccardo Dainelli
  - Institute of BioEconomy (IBE), National Research Council (CNR), Firenze, Italy
- Antonio Bruno
  - Institute of Information Science and Technologies (ISTI), National Research Council (CNR), Pisa, Italy
- Massimo Martinelli
  - Institute of Information Science and Technologies (ISTI), National Research Council (CNR), Pisa, Italy
- Davide Moroni
  - Institute of Information Science and Technologies (ISTI), National Research Council (CNR), Pisa, Italy
- Leandro Rocchi
  - Institute of BioEconomy (IBE), National Research Council (CNR), Firenze, Italy
- Piero Toscano
  - Institute of BioEconomy (IBE), National Research Council (CNR), Firenze, Italy
5
Jiang T, Yu Q, Zhong Y, Shao M. PlantSR: Super-Resolution Improves Object Detection in Plant Images. J Imaging 2024; 10:137. PMID: 38921614; PMCID: PMC11204869; DOI: 10.3390/jimaging10060137. Received 05/12/2024; revised 06/01/2024; accepted 06/04/2024.
Abstract
Recent advancements in computer vision, especially deep learning models, have shown considerable promise in plant image object detection. However, the performance of these models relies heavily on input image quality, and low-resolution images significantly hinder it. Reconstructing high-quality images therefore helps extract features from plant images and improves model performance. In this study, we explored the value of super-resolution technology for improving object detection performance on plant images. First, we built a comprehensive dataset comprising 1030 high-resolution plant images, named the PlantSR dataset. We then developed a super-resolution model on the PlantSR dataset and benchmarked it against several state-of-the-art models designed for general image super-resolution. Our proposed model demonstrated superior performance on the PlantSR dataset, indicating its efficacy for super-resolving plant images. Furthermore, we explored the effect of super-resolution on two specific object detection tasks: apple counting and soybean seed counting. Incorporating super-resolution as a pre-processing step yielded a significant reduction in mean absolute error: with the YOLOv7 model for apple counting, the mean absolute error decreased from 13.085 to 5.71; with the P2PNet-Soy model for soybean seed counting, it decreased from 19.159 to 15.085. These findings underscore the substantial potential of super-resolution technology for improving object detection models that detect and count specific plants from images. The source code and associated datasets are available on GitHub.
Affiliation(s)
- Tianyou Jiang
  - College of Information Science and Engineering, Shandong Agricultural University, Tai’an 271018, China
- Qun Yu
  - College of Information Science and Engineering, Shandong Agricultural University, Tai’an 271018, China
  - Huanghuaihai Key Laboratory of Smart Agricultural Technology, Ministry of Agriculture and Rural Affairs, Tai’an 271018, China
- Yang Zhong
  - College of Information Science and Engineering, Shandong Agricultural University, Tai’an 271018, China
- Mingshun Shao
  - College of Information Science and Engineering, Shandong Agricultural University, Tai’an 271018, China
6
Zang H, Su X, Wang Y, Li G, Zhang J, Zheng G, Hu W, Shen H. Automatic grading evaluation of winter wheat lodging based on deep learning. Frontiers in Plant Science 2024; 15:1284861. PMID: 38726297; PMCID: PMC11079220; DOI: 10.3389/fpls.2024.1284861. Received 10/23/2023; accepted 03/26/2024.
Abstract
Lodging is a crucial factor limiting wheat yield and quality in wheat breeding. Accurate and timely grading of winter wheat lodging is therefore of great practical importance for agricultural insurance companies assessing losses and for good seed selection. However, manual field investigation of the inclination angle and lodging area of winter wheat in actual production is time-consuming, laborious, subjective, and unreliable. This study addresses these issues by designing MLP_U-Net, a classification-semantic segmentation multitasking neural network model that can accurately estimate the inclination angle and lodging area of winter wheat lodging, and can comprehensively, qualitatively, and quantitatively evaluate lodging grade. The model is based on the U-Net architecture and improves the shift-MLP module structure to achieve refined segmentation for complex tasks. It uses a shared encoder to enhance robustness, improve classification accuracy, and strengthen the segmentation network, exploiting the correlation between lodging degree and lodging area. The study used 82 winter wheat varieties from the water land group of the national winter wheat regional experiment for the southern Huang-Huai-Hai area, conducted at the Henan Modern Agriculture Research and Development Base in Xinxiang City, Henan Province. Winter wheat lodging images were collected using an unmanned aerial vehicle (UAV) remote sensing platform, and lodging datasets were created from different time sequences and UAV flight heights to aid in segmenting and classifying lodging degrees and areas. The results show that MLP_U-Net demonstrates superior detection performance on a small-sample dataset. The accuracies of lodging degree and lodging area grading were 96.1% and 92.2%, respectively, at a UAV flight height of 30 m, and 84.1% and 84.7% at a flight height of 50 m. These findings indicate that MLP_U-Net is highly robust and efficient at the winter wheat lodging-grading task and provide technical references for UAV remote sensing of winter wheat disaster severity and loss assessment.
Affiliation(s)
- Hecang Zang
  - Institute of Agricultural Information Technology, Henan Academy of Agricultural Sciences, Zhengzhou, China
  - Huanghuaihai Key Laboratory of Intelligent Agricultural Technology, Ministry of Agriculture and Rural Areas, Zhengzhou, China
- Xinqi Su
  - Institute of Agricultural Information Technology, Henan Academy of Agricultural Sciences, Zhengzhou, China
  - College of Computer and Information Engineering, Henan Normal University, Xinxiang, China
- Yanjing Wang
  - School of Life Science, Zhengzhou Normal University, Zhengzhou, China
- Guoqiang Li
  - Institute of Agricultural Information Technology, Henan Academy of Agricultural Sciences, Zhengzhou, China
  - Huanghuaihai Key Laboratory of Intelligent Agricultural Technology, Ministry of Agriculture and Rural Areas, Zhengzhou, China
- Jie Zhang
  - Institute of Agricultural Information Technology, Henan Academy of Agricultural Sciences, Zhengzhou, China
  - Huanghuaihai Key Laboratory of Intelligent Agricultural Technology, Ministry of Agriculture and Rural Areas, Zhengzhou, China
- Guoqing Zheng
  - Institute of Agricultural Information Technology, Henan Academy of Agricultural Sciences, Zhengzhou, China
  - Huanghuaihai Key Laboratory of Intelligent Agricultural Technology, Ministry of Agriculture and Rural Areas, Zhengzhou, China
- Weiguo Hu
  - Wheat Research Institution, Henan Academy of Agricultural Sciences, Zhengzhou, China
- Hualei Shen
  - College of Computer and Information Engineering, Henan Normal University, Xinxiang, China
7
Janni M, Maestri E, Gullì M, Marmiroli M, Marmiroli N. Plant responses to climate change, how global warming may impact on food security: a critical review. Frontiers in Plant Science 2024; 14:1297569. PMID: 38250438; PMCID: PMC10796516; DOI: 10.3389/fpls.2023.1297569. Received 09/20/2023; accepted 12/14/2023.
Abstract
Global agricultural production must double by 2050 to meet the demands of an increasing world population, a challenge further exacerbated by climate change. Environmental stress, heat, and drought are key drivers of food insecurity and strongly impact crop productivity. Moreover, global warming threatens the survival of many species, including those we rely on for food production, forcing the migration of cultivation areas and further impoverishing the environment and the genetic variability of crop species, with knock-on effects on food security. This review considers the relationship between climatic changes and the sustainability of natural and agricultural ecosystems, as well as the role of omics technologies: genomics, proteomics, metabolomics, phenomics, and ionomics. The use of resource-saving technologies such as precision agriculture and new fertilization technologies is discussed, with a focus on breeding plants with higher tolerance and adaptability and on their role as mitigation tools for global warming and climate change. Nevertheless, plants are exposed to multiple stresses. This study lays the basis for a novel research paradigm grounded in a holistic approach that goes beyond crop yield alone to include sustainability, the socio-economic impacts of production, commercialization, and agroecosystem management.
Affiliation(s)
- Michela Janni
  - Institute of Bioscience and Bioresources (IBBR), National Research Council (CNR), Bari, Italy
  - Institute of Materials for Electronics and Magnetism (IMEM), National Research Council (CNR), Parma, Italy
- Elena Maestri
  - Department of Chemistry, Life Sciences and Environmental Sustainability, Interdepartmental Centers SITEIA.PARMA and CIDEA, University of Parma, Parma, Italy
- Mariolina Gullì
  - Department of Chemistry, Life Sciences and Environmental Sustainability, Interdepartmental Centers SITEIA.PARMA and CIDEA, University of Parma, Parma, Italy
- Marta Marmiroli
  - Department of Chemistry, Life Sciences and Environmental Sustainability, Interdepartmental Centers SITEIA.PARMA and CIDEA, University of Parma, Parma, Italy
- Nelson Marmiroli
  - Consorzio Interuniversitario Nazionale per le Scienze Ambientali (CINSA) Interuniversity Consortium for Environmental Sciences, Parma/Venice, Italy
8
Camenzind MP, Yu K. Multi temporal multispectral UAV remote sensing allows for yield assessment across European wheat varieties already before flowering. Frontiers in Plant Science 2024; 14:1214931. PMID: 38235203; PMCID: PMC10791776; DOI: 10.3389/fpls.2023.1214931. Received 04/30/2023; accepted 11/29/2023.
Abstract
High-throughput field phenotyping techniques employing multispectral cameras allow a variety of variables and features to be extracted for predicting yield and yield-related traits, but little is known about which types of multispectral features are optimal for forecasting yield potential in the early growth phase. In this study, we aim to identify multispectral features that can accurately predict yield and aid variety classification at different growth stages throughout the season. We further hypothesize that texture features (TFs) are more suitable for variety classification than for yield prediction. In 2021 and 2022, trials involving 19 and 18 European wheat varieties, respectively, were conducted. Multispectral images encompassing visible, red-edge, and near-infrared (NIR) bands were captured at 19 and 22 time points from tillering to harvest using an unmanned aerial vehicle (UAV) in the first and second trial years. Orthomosaic images were then generated and various features extracted, including single-band reflectances, vegetation indices (VIs), and TFs derived from a gray-level co-occurrence matrix (GLCM). The performance of these features in predicting yield and classifying varieties at different growth stages was assessed using random forest models. Measurements during the flowering stage demonstrated superior performance for most features. Specifically, red reflectance achieved a root mean square error (RMSE) of 52.4 g m-2 in the first year and 64.4 g m-2 in the second year. The NDRE VI yielded the most accurate predictions, with RMSEs of 49.1 g m-2 and 60.6 g m-2, respectively. Among TFs, CONTRAST and DISSIMILARITY displayed the best yield predictions, with RMSEs of 55.5 g m-2 and 66.3 g m-2 across the two trial years. Combining data from different dates enhanced yield prediction and stabilized predictions across dates. TFs exhibited high accuracy in classifying low- and high-yielding varieties: the CORRELATION feature achieved 88% accuracy in the first year, and the HOMOGENEITY feature reached 92% in the second year. This study confirms the hypothesis that TFs are more suitable for variety classification than for yield prediction. The results underscore the potential of TFs derived from multispectral images for early yield prediction and varietal classification, offering insights for high-throughput phenotyping and precision agriculture alike.
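The two feature families compared above can be illustrated compactly: NDRE is a per-pixel normalized difference of the NIR and red-edge bands, and GLCM texture features like CONTRAST and DISSIMILARITY are moments of a gray-level co-occurrence matrix. A hedged sketch (the reflectance values and tiny quantized image are illustrative; real pipelines typically use a library such as scikit-image):

```python
import numpy as np

def ndre(nir, red_edge):
    """Normalized difference red-edge index, per pixel."""
    nir, red_edge = np.asarray(nir, float), np.asarray(red_edge, float)
    return (nir - red_edge) / (nir + red_edge)

def glcm_features(img, levels, dx=1, dy=0):
    """CONTRAST and DISSIMILARITY from a symmetric, normalized GLCM
    built over the (dx, dy) pixel-pair offset."""
    img = np.asarray(img)
    glcm = np.zeros((levels, levels), float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            i, j = img[y, x], img[y + dy, x + dx]
            glcm[i, j] += 1
            glcm[j, i] += 1  # count each pair in both directions (symmetric)
    glcm /= glcm.sum()
    i, j = np.indices((levels, levels))
    contrast = float(np.sum(glcm * (i - j) ** 2))
    dissimilarity = float(np.sum(glcm * np.abs(i - j)))
    return contrast, dissimilarity

vi = ndre([0.60], [0.30])                       # hypothetical band reflectances
img = np.array([[0, 0, 1], [1, 2, 2], [0, 1, 1]])  # toy 3-level image
contrast, dissimilarity = glcm_features(img, levels=3)
```

CONTRAST weights co-occurrences by squared gray-level difference and DISSIMILARITY by absolute difference, so both rise with local heterogeneity, which is what makes them informative about canopy texture.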
Affiliation(s)
- Moritz Paul Camenzind
  - Precision Agriculture Lab, School of Life Sciences, Technical University of Munich, Freising, Germany
- Kang Yu
  - Precision Agriculture Lab, School of Life Sciences, Technical University of Munich, Freising, Germany
  - World Agricultural Systems Center (Hans Eisenmann-Forum for Agricultural Sciences – HEF), Technical University of Munich, Freising, Germany
9
Dong J, Fuentes A, Yoon S, Kim H, Park DS. An iterative noisy annotation correction model for robust plant disease detection. Frontiers in Plant Science 2023; 14:1238722. PMID: 37941667; PMCID: PMC10628849; DOI: 10.3389/fpls.2023.1238722. Received 06/12/2023; accepted 09/22/2023.
Abstract
Previous work on plant disease detection demonstrated that object detectors generally suffer from degraded training data, and noisy annotations can cause training to fail. Well-annotated datasets are therefore crucial for building a robust detector. However, a good label set generally requires extensive expert knowledge and meticulous work, which is expensive and time-consuming. This paper aims to learn robust feature representations from inaccurate bounding boxes, thereby reducing the model's requirements on annotation quality. Specifically, we analyze the distribution of noisy annotations in the real world and propose a teacher-student learning paradigm to correct inaccurate bounding boxes: the teacher model rectifies degraded bounding boxes, and the student model extracts more robust feature representations from the corrected boxes. Furthermore, the method generalizes easily to semi-supervised learning paradigms and auto-labeling techniques. Experimental results show that applying our method to the Faster R-CNN detector achieves a 26% performance improvement on the noisy dataset. Moreover, our method achieves approximately 75% of the performance of a fully supervised object detector when only 1% of the labels are available. Overall, this work provides a robust solution to real-world location noise. It alleviates the challenges noisy data pose to precision agriculture, optimizes data-labeling technology, and encourages practitioners to further investigate plant disease detection and intelligent agriculture at lower cost. The code will be released at https://github.com/JiuqingDong/TS_OAMIL-for-Plant-disease-detection.
Affiliation(s)
- Jiuqing Dong
  - Department of Electronic Engineering, Jeonbuk National University, Jeonju, Republic of Korea
  - Core Research Institute of Intelligent Robots, Jeonbuk National University, Jeonju, Republic of Korea
- Alvaro Fuentes
  - Department of Electronic Engineering, Jeonbuk National University, Jeonju, Republic of Korea
  - Core Research Institute of Intelligent Robots, Jeonbuk National University, Jeonju, Republic of Korea
- Sook Yoon
  - Department of Computer Engineering, Mokpo National University, Muan, Republic of Korea
- Hyongsuk Kim
  - Department of Electronic Engineering, Jeonbuk National University, Jeonju, Republic of Korea
  - Core Research Institute of Intelligent Robots, Jeonbuk National University, Jeonju, Republic of Korea
- Dong Sun Park
  - Department of Electronic Engineering, Jeonbuk National University, Jeonju, Republic of Korea
  - Core Research Institute of Intelligent Robots, Jeonbuk National University, Jeonju, Republic of Korea
10
Ye J, Yu Z, Wang Y, Lu D, Zhou H. WheatLFANet: in-field detection and counting of wheat heads with high-real-time global regression network. Plant Methods 2023; 19:103. PMID: 37794515; PMCID: PMC10548667; DOI: 10.1186/s13007-023-01079-x. Received 04/20/2023; accepted 09/15/2023.
Abstract
BACKGROUND Detection and counting of wheat heads are of crucial importance in plant science, as they can be used for crop field management, yield prediction, and phenotype analysis. With the widespread application of computer vision technology in plant science, monitoring via automated high-throughput plant phenotyping platforms has become possible. Many innovative methods and new technologies have been proposed that have made significant progress in the accuracy and robustness of wheat head recognition. Nevertheless, these methods are often built on high-performance computing devices, which limits their practicality: in resource-limited situations, they may not be effectively applied and deployed, failing to meet the needs of practical applications. RESULTS In our recent research on maize tassels, we proposed TasselLFANet, an advanced neural network for detecting and counting maize tassels. Building on this work, we have now developed WheatLFANet, a high-real-time lightweight neural network for wheat head detection. WheatLFANet features a more compact encoder-decoder structure and an effective multi-dimensional information mapping fusion strategy, allowing it to run efficiently on low-end devices while maintaining high accuracy and practicality. On the Global Wheat Head Detection dataset, WheatLFANet outperforms other state-of-the-art methods, with an average precision (AP) of 0.900 and an R2 of 0.949 between predicted and ground-truth values. Moreover, it runs an order of magnitude faster than all other methods (TasselLFANet: 61 FPS). CONCLUSIONS Extensive experiments show that WheatLFANet generalizes better than other state-of-the-art methods and achieves an order-of-magnitude speedup while maintaining accuracy.
The success of this study demonstrates the feasibility of real-time, lightweight wheat head detection on low-end devices and underscores the value of simple yet powerful neural network designs.
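The R2 agreement between predicted and ground-truth head counts reported above can be computed directly as the coefficient of determination; a minimal sketch with invented toy counts (not data from the paper):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination between observed and predicted counts."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)             # total sum of squares
    return 1.0 - ss_res / ss_tot

# Toy wheat-head counts per plot (illustrative only)
truth = [52, 47, 61, 38, 55]
pred = [50, 49, 59, 40, 54]
print(round(r_squared(truth, pred), 3))  # → 0.944
```

An R2 near 1 (such as the 0.949 reported) means the predicted counts track almost all of the plot-to-plot variation in the true counts.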
Affiliation(s)
- Jianxiong Ye: College of Robotics, Guangdong Polytechnic of Science and Technology, Zhuhai, Guangdong, China
- Zhenghong Yu: College of Robotics, Guangdong Polytechnic of Science and Technology, Zhuhai, Guangdong, China
- Yangxu Wang: College of Robotics, Guangdong Polytechnic of Science and Technology, Zhuhai, Guangdong, China
- Dunlu Lu: College of Robotics, Guangdong Polytechnic of Science and Technology, Zhuhai, Guangdong, China
- Huabing Zhou: Hubei Key Laboratory of Intelligent Robot, Wuhan Institute of Technology, Wuhan, China

11
Tanaka Y, Watanabe T, Katsura K, Tsujimoto Y, Takai T, Tanaka TST, Kawamura K, Saito H, Homma K, Mairoua SG, Ahouanton K, Ibrahim A, Senthilkumar K, Semwal VK, Matute EJG, Corredor E, El-Namaky R, Manigbas N, Quilang EJP, Iwahashi Y, Nakajima K, Takeuchi E, Saito K. Deep Learning Enables Instant and Versatile Estimation of Rice Yield Using Ground-Based RGB Images. PLANT PHENOMICS (WASHINGTON, D.C.) 2023; 5:0073. [PMID: 38239736] [PMCID: PMC10795498] [DOI: 10.34133/plantphenomics.0073] [Received: 01/21/2023] [Accepted: 06/28/2023] [Indexed: 01/22/2024]
Abstract
Rice (Oryza sativa L.) is one of the most important cereals, providing 20% of the world's food energy. However, its productivity is poorly assessed, especially in the global South. Here, we present the first study to apply a deep-learning-based approach for instantly estimating rice yield from red-green-blue images. During the ripening stage and at harvest, over 22,000 digital images were captured vertically downward over the rice canopy from a distance of 0.8 to 0.9 m at 4,820 harvesting plots with yields of 0.1 to 16.1 t·ha-1 across six countries in Africa and Japan. A convolutional neural network applied to the at-harvest data explained 68% of the variation in yield with a relative root mean square error of 0.22. The developed model successfully detected genotypic differences and the impact of agronomic interventions on yield in an independent dataset. The model was also robust to images acquired at shooting angles of up to 30° from vertical, diverse light environments, and different shooting dates during the late ripening stage. Even when image resolution was reduced (from 0.2 to 3.2 cm·pixel-1 of ground sampling distance), the model could still explain 57% of the variation in yield, implying that this approach can be scaled through the use of unmanned aerial vehicles. Our work offers a low-cost, hands-on, and rapid approach to high-throughput phenotyping and can support impact assessment of productivity-enhancing interventions, detection of fields where such interventions are needed to sustainably increase crop production, and yield forecasting several weeks before harvest.
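The relative root mean square error quoted above is commonly defined as the RMSE normalized by the mean of the observed values; a minimal sketch under that assumption (the paper may normalize slightly differently, and the toy yields are invented):

```python
import math

def relative_rmse(y_true, y_pred):
    """RMSE between observed and predicted values, divided by the observed mean."""
    n = len(y_true)
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
    return rmse / (sum(y_true) / n)

# Toy rice yields in t/ha (illustrative only)
obs = [4.0, 6.5, 3.2, 8.1]
est = [4.4, 6.0, 3.9, 7.5]
print(round(relative_rmse(obs, est), 3))  # → 0.103
```

Under this definition, a relative RMSE of 0.22 corresponds to a typical prediction error of about 22% of the mean observed yield.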
Affiliation(s)
- Yu Tanaka: Graduate School of Agriculture, Kyoto University, Kitashirakawa Oiwake-chou, Sakyo-ku, Kyoto 606-8502, Japan; Graduate School of Environmental, Life, Natural Science and Technology, Okayama University, 1-1-1, Tsushima Naka, Okayama 700-8530, Japan
- Tomoya Watanabe: Graduate School of Mathematics, Kyushu University, 744, Motooka, Fukuoka Shi Nishi Ku, Fukuoka 819-0395, Japan
- Keisuke Katsura: Graduate School of Agriculture, Tokyo University of Agriculture and Technology, 3-5-8 Saiwaicho, Fuchu, Tokyo 183-8509, Japan
- Yasuhiro Tsujimoto: Japan International Research Center for Agricultural Sciences, 1-1 Ohwashi, Tsukuba, Ibaraki 305-8686, Japan
- Toshiyuki Takai: Japan International Research Center for Agricultural Sciences, 1-1 Ohwashi, Tsukuba, Ibaraki 305-8686, Japan
- Takashi Sonam Tashi Tanaka: Faculty of Applied Biological Sciences, Gifu University, 1-1 Yanagido, Gifu 501-1193, Japan; Artificial Intelligence Advanced Research Center, Gifu University, 1-1 Yanagido, Gifu 501-1193, Japan
- Kensuke Kawamura: Japan International Research Center for Agricultural Sciences, 1-1 Ohwashi, Tsukuba, Ibaraki 305-8686, Japan
- Hiroki Saito: Tropical Agriculture Research Front, Japan International Research Center for Agricultural Sciences, 1091-1 Maezato, Ishigaki, Okinawa 907-0002, Japan
- Koki Homma: Graduate School of Agricultural Science, Tohoku University, Aramaki Aza-Aoba, Aoba, Sendai, Miyagi 980-8572, Japan
- Kokou Ahouanton: Africa Rice Center (AfricaRice), 01 BP 2551 Bouaké, Côte d'Ivoire
- Ali Ibrahim: Africa Rice Center (AfricaRice), Regional Station for the Sahel, B.P. 96, Saint-Louis, Senegal
- Kalimuthu Senthilkumar: Africa Rice Center (AfricaRice), P.O. Box 1690, Ampandrianomby, Antananarivo, Madagascar
- Vimal Kumar Semwal: Africa Rice Center (AfricaRice), Nigeria Station, c/o IITA, PMB 5320, Ibadan, Nigeria
- Eduardo Jose Graterol Matute: Latin American Fund for Irrigated Rice - The Alliance of Bioversity International and CIAT, Km 17 Recta Cali-Palmira, C.P. 763537, A.A. 6713, Cali, Colombia
- Edgar Corredor: Latin American Fund for Irrigated Rice - The Alliance of Bioversity International and CIAT, Km 17 Recta Cali-Palmira, C.P. 763537, A.A. 6713, Cali, Colombia
- Raafat El-Namaky: Rice Research and Training Center, Field Crops Research Institute, ARC, Giza, Egypt
- Norvie Manigbas: Philippine Rice Research Institute (PhilRice), Maligaya, Science City of Muñoz, 3119 Nueva Ecija, Philippines
- Eduardo Jimmy P. Quilang: Philippine Rice Research Institute (PhilRice), Maligaya, Science City of Muñoz, 3119 Nueva Ecija, Philippines
- Yu Iwahashi: Graduate School of Agriculture, Kyoto University, Kitashirakawa Oiwake-chou, Sakyo-ku, Kyoto 606-8502, Japan
- Kota Nakajima: Graduate School of Agriculture, Kyoto University, Kitashirakawa Oiwake-chou, Sakyo-ku, Kyoto 606-8502, Japan
- Eisuke Takeuchi: Graduate School of Agriculture, Kyoto University, Kitashirakawa Oiwake-chou, Sakyo-ku, Kyoto 606-8502, Japan
- Kazuki Saito: Japan International Research Center for Agricultural Sciences, 1-1 Ohwashi, Tsukuba, Ibaraki 305-8686, Japan; Africa Rice Center (AfricaRice), 01 BP 2551 Bouaké, Côte d'Ivoire; International Rice Research Institute (IRRI), DAPO Box 7777, Metro Manila 1301, Philippines

12
David E, Ogidi F, Smith D, Chapman S, de Solan B, Guo W, Baret F, Stavness I. Global Wheat Head Detection Challenges: Winning Models and Application for Head Counting. PLANT PHENOMICS (WASHINGTON, D.C.) 2023; 5:0059. [PMID: 38239739] [PMCID: PMC10795497] [DOI: 10.34133/plantphenomics.0059] [Received: 06/29/2022] [Accepted: 06/01/2023] [Indexed: 01/22/2024]
Abstract
Data competitions have become a popular approach to crowdsourcing new data analysis methods for general and specialized data science problems. Data competitions have a rich history in plant phenotyping, and new outdoor field datasets have the potential to support solutions across research and commercial applications. We developed the Global Wheat Challenge as a generalization competition in 2020 and 2021 to find more robust solutions for wheat head detection using field images from different regions. We analyzed the winning challenge solutions in terms of their robustness when applied to new datasets. We found that the design of the competition influenced the selection of winning solutions, and we provide recommendations for future competitions to encourage the selection of more robust solutions.
Affiliation(s)
- Etienne David: UMR 1114 EMMAH, INRAE, Avignon, France; Arvalis – Institut du Végétal, Paris, France
- Franklin Ogidi: Department of Computer Science, University of Saskatchewan, Saskatoon, Canada
- Daniel Smith: School of Food and Agricultural Sciences, University of Queensland, Brisbane, Australia
- Scott Chapman: School of Food and Agricultural Sciences, University of Queensland, Brisbane, Australia
- Wei Guo: Graduate School of Agricultural and Life Sciences, The University of Tokyo, Tokyo, Japan
- Ian Stavness: Department of Computer Science, University of Saskatchewan, Saskatoon, Canada

13
Anderegg J, Zenkl R, Walter A, Hund A, McDonald BA. Combining High-Resolution Imaging, Deep Learning, and Dynamic Modeling to Separate Disease and Senescence in Wheat Canopies. PLANT PHENOMICS (WASHINGTON, D.C.) 2023; 5:0053. [PMID: 37363146] [PMCID: PMC10287056] [DOI: 10.34133/plantphenomics.0053] [Received: 03/02/2023] [Accepted: 04/25/2023] [Indexed: 06/28/2023]
Abstract
Maintenance of a sufficiently healthy green leaf area after anthesis is key to ensuring an adequate assimilate supply for grain filling. Tightly regulated, age-related physiological senescence and various biotic and abiotic stressors drive overall greenness decay dynamics under field conditions. Besides direct effects on green leaf area in the form of leaf damage, stressors often advance or accelerate physiological senescence, which may multiply their negative impact on grain filling. Here, we present an image processing methodology that enables the monitoring of chlorosis and necrosis separately for ears and shoots (stems + leaves), based on deep learning models for semantic segmentation and color properties of vegetation. A vegetation segmentation model was trained using semisynthetic training data generated via image composition and generative adversarial neural networks, which greatly reduced annotation effort and the risk of annotation uncertainty. Application of the models to image time series revealed temporal patterns of greenness decay as well as the relative contributions of chlorosis and necrosis. Image-based estimation of greenness decay dynamics was highly correlated with scoring-based estimation (r ≈ 0.9). Contrasting patterns were observed for plots with different levels of foliar diseases, particularly septoria tritici blotch. Our results suggest that tracking the chlorotic and necrotic fractions separately may enable (a) separate quantification of the contributions of biotic stress and physiological senescence to overall green leaf area dynamics and (b) investigation of interactions between biotic stress and physiological senescence. The high-throughput nature of our methodology paves the way for genetic studies of disease resistance and tolerance.
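The idea of splitting segmented vegetation pixels into healthy, chlorotic, and necrotic fractions by color can be illustrated with crude RGB rules; the thresholds below are invented for this sketch and are not the paper's calibrated color model:

```python
def classify_pixel(r, g, b):
    """Crude illustrative rule: green = healthy, yellowish = chlorotic,
    brownish/desaturated = necrotic. Thresholds are invented for this sketch."""
    if g > r and g > b:
        return "healthy"                   # green dominates
    if r >= g > b and g > 0.7 * r:
        return "chlorotic"                 # yellow: red and green high, blue low
    return "necrotic"                      # brown/grey: darker, less green

def tissue_fractions(pixels):
    """Fraction of vegetation pixels falling in each tissue class."""
    counts = {"healthy": 0, "chlorotic": 0, "necrotic": 0}
    for r, g, b in pixels:
        counts[classify_pixel(r, g, b)] += 1
    n = len(pixels)
    return {k: v / n for k, v in counts.items()}

# Four toy vegetation pixels (illustrative only)
px = [(40, 120, 30), (200, 180, 40), (110, 70, 50), (60, 140, 60)]
print(tissue_fractions(px))  # → {'healthy': 0.5, 'chlorotic': 0.25, 'necrotic': 0.25}
```

Tracking these two non-healthy fractions over an image time series is what lets chlorosis and necrosis dynamics be reported separately.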
Affiliation(s)
- Jonas Anderegg: Plant Pathology Group, Institute of Integrative Biology, ETH Zurich, Zurich, Switzerland
- Radek Zenkl: Plant Pathology Group, Institute of Integrative Biology, ETH Zurich, Zurich, Switzerland
- Achim Walter: Crop Science Group, Institute of Agricultural Sciences, ETH Zurich, Zurich, Switzerland
- Andreas Hund: Crop Science Group, Institute of Agricultural Sciences, ETH Zurich, Zurich, Switzerland
- Bruce A. McDonald: Plant Pathology Group, Institute of Integrative Biology, ETH Zurich, Zurich, Switzerland

14
Chen J, Zhou J, Li Q, Li H, Xia Y, Jackson R, Sun G, Zhou G, Deakin G, Jiang D, Zhou J. CropQuant-Air: an AI-powered system to enable phenotypic analysis of yield- and performance-related traits using wheat canopy imagery collected by low-cost drones. FRONTIERS IN PLANT SCIENCE 2023; 14:1219983. [PMID: 37404534] [PMCID: PMC10316027] [DOI: 10.3389/fpls.2023.1219983] [Received: 05/09/2023] [Accepted: 05/26/2023] [Indexed: 07/06/2023]
Abstract
As one of the most consumed staple foods around the world, wheat plays a crucial role in ensuring global food security. The ability to quantify key yield components under complex field conditions can help breeders and researchers assess wheat's yield performance effectively. Nevertheless, it is still challenging to conduct large-scale phenotyping of canopy-level wheat spikes and relevant performance traits in the field and in an automated manner. Here, we present CropQuant-Air, an AI-powered software system that combines state-of-the-art deep learning (DL) models and image processing algorithms to enable the detection of wheat spikes and phenotypic analysis from wheat canopy images acquired by low-cost drones. The system includes the YOLACT-Plot model for plot segmentation, an optimised YOLOv7 model for quantifying the spike number per m2 (SNpM2) trait, and performance-related trait analysis using spectral and texture features at the canopy level. Besides using our labelled dataset for model training, we also employed the Global Wheat Head Detection dataset to incorporate varietal features into the DL models, enabling reliable yield-based analysis of hundreds of varieties selected from the main wheat production regions in China. Finally, we employed the SNpM2 and performance traits to develop a yield classification model using the Extreme Gradient Boosting (XGBoost) ensemble and obtained significant positive correlations between the computational analysis results and manual scoring, indicating the reliability of CropQuant-Air. To ensure that our work reaches a wider research community, we created a graphical user interface for CropQuant-Air, so that non-expert users can readily use it.
We believe that our work represents a valuable advance in yield-based field phenotyping and phenotypic analysis, providing useful and reliable toolkits that enable breeders, researchers, growers, and farmers to assess crop-yield performance in a cost-effective manner.
Affiliation(s)
- Jiawei Chen: State Key Laboratory of Crop Genetics & Germplasm Enhancement, Academy for Advanced Interdisciplinary Studies, Nanjing Agricultural University, Nanjing, China; College of Engineering, Nanjing Agricultural University, Nanjing, China
- Jie Zhou: State Key Laboratory of Crop Genetics & Germplasm Enhancement, Academy for Advanced Interdisciplinary Studies, Nanjing Agricultural University, Nanjing, China; College of Engineering, Nanjing Agricultural University, Nanjing, China
- Qing Li: Regional Technique Innovation Center for Wheat Production, Key Laboratory of Crop Physiology and Ecology in Southern China, Ministry of Agriculture, Nanjing Agricultural University, Nanjing, China
- Hanghang Li: State Key Laboratory of Crop Genetics & Germplasm Enhancement, Academy for Advanced Interdisciplinary Studies, Nanjing Agricultural University, Nanjing, China
- Yunpeng Xia: State Key Laboratory of Crop Genetics & Germplasm Enhancement, Academy for Advanced Interdisciplinary Studies, Nanjing Agricultural University, Nanjing, China
- Robert Jackson: Cambridge Crop Research, National Institute of Agricultural Botany (NIAB), Cambridge, United Kingdom
- Gang Sun: State Key Laboratory of Crop Genetics & Germplasm Enhancement, Academy for Advanced Interdisciplinary Studies, Nanjing Agricultural University, Nanjing, China
- Guodong Zhou: State Key Laboratory of Crop Genetics & Germplasm Enhancement, Academy for Advanced Interdisciplinary Studies, Nanjing Agricultural University, Nanjing, China
- Greg Deakin: Cambridge Crop Research, National Institute of Agricultural Botany (NIAB), Cambridge, United Kingdom
- Dong Jiang: Regional Technique Innovation Center for Wheat Production, Key Laboratory of Crop Physiology and Ecology in Southern China, Ministry of Agriculture, Nanjing Agricultural University, Nanjing, China
- Ji Zhou: State Key Laboratory of Crop Genetics & Germplasm Enhancement, Academy for Advanced Interdisciplinary Studies, Nanjing Agricultural University, Nanjing, China; Cambridge Crop Research, National Institute of Agricultural Botany (NIAB), Cambridge, United Kingdom

15
Madec S, Irfan K, Velumani K, Baret F, David E, Daubige G, Samatan LB, Serouart M, Smith D, James C, Camacho F, Guo W, De Solan B, Chapman SC, Weiss M. VegAnn, Vegetation Annotation of multi-crop RGB images acquired under diverse conditions for segmentation. Sci Data 2023; 10:302. [PMID: 37208401] [DOI: 10.1038/s41597-023-02098-y] [Received: 09/30/2022] [Accepted: 03/22/2023] [Indexed: 05/21/2023]
Abstract
Applying deep learning to images of cropping systems provides new knowledge and insights for research and commercial applications. Semantic segmentation, or pixel-wise classification, of ground-level RGB images into vegetation and background is a critical step in the estimation of several canopy traits. Current state-of-the-art methodologies based on convolutional neural networks (CNNs) are trained on datasets acquired under controlled or indoor environments. These models are unable to generalize to real-world images and hence need to be fine-tuned using new labelled datasets. This motivated the creation of the VegAnn (Vegetation Annotation) dataset, a collection of 3775 multi-crop RGB images acquired at different phenological stages using different systems and platforms under diverse illumination conditions. We anticipate that VegAnn will help improve segmentation algorithm performance, facilitate benchmarking, and promote large-scale crop vegetation segmentation research.
Affiliation(s)
- Simon Madec: UMR TETIS, CIRAD, Montpellier, France; INRAE, Avignon Université, UMR EMMAH 1114, 84000, Avignon, France; Arvalis, 228, route de l'Aérodrome - CS 40509, 84914, Avignon, Cedex 9, France
- Kamran Irfan: INRAE, Avignon Université, UMR EMMAH 1114, 84000, Avignon, France; HIPHEN SAS, 120 Rue Jean Dausset, Agroparc-Batiment Technicité, 84140, Avignon, France
- Kaaviya Velumani: INRAE, Avignon Université, UMR EMMAH 1114, 84000, Avignon, France
- Frederic Baret: INRAE, Avignon Université, UMR EMMAH 1114, 84000, Avignon, France
- Etienne David: INRAE, Avignon Université, UMR EMMAH 1114, 84000, Avignon, France; Arvalis, 228, route de l'Aérodrome - CS 40509, 84914, Avignon, Cedex 9, France; HIPHEN SAS, 120 Rue Jean Dausset, Agroparc-Batiment Technicité, 84140, Avignon, France
- Gaetan Daubige: Arvalis, 228, route de l'Aérodrome - CS 40509, 84914, Avignon, Cedex 9, France
- Mario Serouart: INRAE, Avignon Université, UMR EMMAH 1114, 84000, Avignon, France; Arvalis, 228, route de l'Aérodrome - CS 40509, 84914, Avignon, Cedex 9, France
- Daniel Smith: The University of Queensland, School of Agriculture and Food Sciences, Gatton, QLD, 4343, Australia
- Chrisbin James: The University of Queensland, School of Agriculture and Food Sciences, Gatton, QLD, 4343, Australia
- Wei Guo: Graduate School of Agricultural and Life Sciences, The University of Tokyo, Tokyo, 188-0002, Japan
- Benoit De Solan: Arvalis, 228, route de l'Aérodrome - CS 40509, 84914, Avignon, Cedex 9, France
- Scott C Chapman: The University of Queensland, School of Agriculture and Food Sciences, Gatton, QLD, 4343, Australia
- Marie Weiss: INRAE, Avignon Université, UMR EMMAH 1114, 84000, Avignon, France

16
Yan J, Zhao J, Cai Y, Wang S, Qiu X, Yao X, Tian Y, Zhu Y, Cao W, Zhang X. Improving multi-scale detection layers in the deep learning network for wheat spike detection based on interpretive analysis. PLANT METHODS 2023; 19:46. [PMID: 37179312] [PMCID: PMC10183117] [DOI: 10.1186/s13007-023-01020-2] [Received: 01/16/2023] [Accepted: 04/29/2023] [Indexed: 05/15/2023]
Abstract
BACKGROUND Detecting and counting wheat spikes is essential for predicting and measuring wheat yield. However, current wheat spike detection research often directly applies new network structures. Few studies combine prior knowledge of wheat spike size characteristics to design a suitable wheat spike detection model, and it remains unclear whether the complex detection layers of the network play their intended role. RESULTS This study proposes an interpretive analysis method for quantitatively evaluating the role of the three-scale detection layers in a deep-learning-based wheat spike detection model. The attention scores in each detection layer of the YOLOv5 network are calculated using the Gradient-weighted Class Activation Mapping (Grad-CAM) algorithm, which compares the labeled wheat spike bounding boxes with the attention areas of the network. By refining the multi-scale detection layers using the attention scores, a better wheat spike detection network is obtained. Experiments on the Global Wheat Head Detection (GWHD) dataset show that the large-scale detection layer performs poorly, while the medium-scale detection layer performs best among the three. Consequently, the large-scale detection layer is removed, a micro-scale detection layer is added, and the feature extraction ability of the medium-scale detection layer is enhanced. The refined model increases detection accuracy and reduces network complexity by decreasing the number of network parameters. CONCLUSION The proposed interpretive analysis method evaluates the contribution of the different detection layers in the wheat spike detection network and provides a sound network improvement scheme. The findings of this study will offer a useful reference for future applications of deep network refinement in this field.
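A per-layer attention score of the kind described can be illustrated as the fraction of a layer's heatmap mass that falls inside the labeled spike boxes; this is a simplified stand-in for the paper's Grad-CAM-based score, with an invented toy heatmap:

```python
def attention_score(heatmap, boxes):
    """Fraction of total attention mass inside labeled bounding boxes.
    heatmap: 2D list of non-negative attention values (e.g., a Grad-CAM map);
    boxes: (x0, y0, x1, y1) half-open pixel coordinates."""
    total = sum(sum(row) for row in heatmap)
    inside = 0.0
    for y, row in enumerate(heatmap):
        for x, v in enumerate(row):
            if any(x0 <= x < x1 and y0 <= y < y1 for x0, y0, x1, y1 in boxes):
                inside += v
    return inside / total if total else 0.0

# 4x4 toy heatmap with one labeled spike box over the bright region (illustrative)
hm = [[0, 0, 0, 0],
      [0, 5, 4, 0],
      [0, 3, 2, 0],
      [0, 0, 0, 1]]
print(attention_score(hm, [(1, 1, 3, 3)]))  # → 0.9333333333333333
```

A layer whose heatmap concentrates inside the labeled boxes scores near 1; a layer attending mostly to background scores near 0, which is the kind of evidence used to prune or replace a detection scale.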
Affiliation(s)
- Jiawei Yan: National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China; Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing, 210095, China
- Jianqing Zhao: National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China; Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing, 210095, China
- Yucheng Cai: National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China; Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing, 210095, China
- Suwan Wang: National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China; Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing, 210095, China
- Xiaolei Qiu: National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China; Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing, 210095, China
- Xia Yao: National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China; Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing, 210095, China; Jiangsu Key Laboratory for Information Agriculture, Nanjing, 210095, China
- Yongchao Tian: National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China; Jiangsu Collaborative Innovation Center for Modern Crop Production, Nanjing, 210095, China
- Yan Zhu: National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China; Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing, 210095, China
- Weixing Cao: National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China; Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing, 210095, China
- Xiaohu Zhang: National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China; Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing, 210095, China; Jiangsu Collaborative Innovation Center for Modern Crop Production, Nanjing, 210095, China

17
Ogidi FC, Eramian MG, Stavness I. Benchmarking Self-Supervised Contrastive Learning Methods for Image-Based Plant Phenotyping. PLANT PHENOMICS (WASHINGTON, D.C.) 2023; 5:0037. [PMID: 37040288] [PMCID: PMC10079263] [DOI: 10.34133/plantphenomics.0037] [Received: 10/07/2022] [Accepted: 02/28/2023] [Indexed: 06/19/2023]
Abstract
The rise of self-supervised learning (SSL) methods in recent years presents an opportunity to leverage unlabeled and domain-specific datasets generated by image-based plant phenotyping platforms to accelerate plant breeding programs. Despite the surge of research on SSL, there has been a scarcity of research exploring the applications of SSL to image-based plant phenotyping tasks, particularly detection and counting tasks. We address this gap by benchmarking the performance of 2 SSL methods, momentum contrast (MoCo) v2 and dense contrastive learning (DenseCL), against the conventional supervised learning method when transferring learned representations to 4 downstream (target) image-based plant phenotyping tasks: wheat head detection, plant instance detection, wheat spikelet counting, and leaf counting. We studied the effects of the domain of the pretraining (source) dataset on downstream performance and the influence of redundancy in the pretraining dataset on the quality of learned representations. We also analyzed the similarity of the internal representations learned via the different pretraining methods. We find that supervised pretraining generally outperforms self-supervised pretraining and show that MoCo v2 and DenseCL learn different high-level representations compared to the supervised method. We also find that using a diverse source dataset in the same domain as, or a similar domain to, the target dataset maximizes performance in the downstream task. Finally, our results show that SSL methods may be more sensitive to redundancy in the pretraining dataset than the supervised pretraining method. We hope that this benchmark/evaluation study will guide practitioners in developing better SSL methods for image-based plant phenotyping.
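Both MoCo v2 and DenseCL optimize a contrastive InfoNCE objective: pull a query embedding toward its positive and away from negatives. A minimal pure-Python sketch over invented toy embeddings (no deep learning framework, purely illustrative):

```python
import math

def info_nce(query, positive, negatives, tau=0.2):
    """InfoNCE loss for one query: negative log-softmax of the positive
    similarity against all similarities, scaled by temperature tau."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    sims = [dot(query, positive)] + [dot(query, n) for n in negatives]
    exps = [math.exp(s / tau) for s in sims]
    return -math.log(exps[0] / sum(exps))

# Unit-norm toy embeddings: positive is aligned, negatives are orthogonal
q = [1.0, 0.0]
pos = [1.0, 0.0]
negs = [[0.0, 1.0], [0.0, -1.0]]
print(round(info_nce(q, pos, negs), 4))  # → 0.0134
```

The loss is small when the query matches its positive and large when it matches a negative; DenseCL applies the same objective densely, at the level of local feature vectors rather than one global embedding per image.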
Affiliation(s)
- Franklin C. Ogidi: Department of Computer Science, University of Saskatchewan, Saskatoon, Canada

18
Najafian K, Ghanbari A, Sabet Kish M, Eramian M, Shirdel GH, Stavness I, Jin L, Maleki F. Semi-Self-Supervised Learning for Semantic Segmentation in Images with Dense Patterns. PLANT PHENOMICS (WASHINGTON, D.C.) 2023; 5:0025. [PMID: 36930764] [PMCID: PMC10013790] [DOI: 10.34133/plantphenomics.0025] [Received: 08/18/2022] [Accepted: 01/17/2023] [Indexed: 06/18/2023]
Abstract
Deep learning has shown potential in domains with large-scale annotated datasets. However, manual annotation is expensive, time-consuming, and tedious. Pixel-level annotations are particularly costly for semantic segmentation in images with dense, irregular patterns of object instances, such as plant images. In this work, we propose a method for developing high-performing deep learning models for semantic segmentation of such images using little manual annotation. As a use case, we focus on wheat head segmentation. We synthesize a computationally annotated dataset, using a few annotated images, a short unannotated video clip of a wheat field, and several video clips with no wheat, to train a customized U-Net model. Considering the distribution shift between the synthesized and real images, we apply three domain adaptation steps to gradually bridge the domain gap. Using only two annotated images, we achieved a Dice score of 0.89 on the internal test set. When further evaluated on a diverse external dataset collected from 18 different domains across five countries, this model achieved a Dice score of 0.73. To expose the model to images from different growth stages and environmental conditions, we incorporated two annotated images from each of the 18 domains to further fine-tune the model. This increased the Dice score to 0.91. The result highlights the utility of the proposed approach in the absence of large annotated datasets. Although our use case is wheat head segmentation, the proposed approach can be extended to other segmentation tasks with similar characteristics of irregularly repeating patterns of object instances.
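The Dice scores quoted above measure overlap between predicted and ground-truth segmentation masks; a minimal sketch on flattened binary masks (toy data, not the paper's evaluation code):

```python
def dice(pred, truth):
    """Dice coefficient for binary masks given as flat 0/1 sequences:
    2 * |intersection| / (|pred| + |truth|)."""
    inter = sum(p * t for p, t in zip(pred, truth))
    denom = sum(pred) + sum(truth)
    return 2.0 * inter / denom if denom else 1.0

# Toy flattened wheat-head masks (illustrative only)
p = [1, 1, 1, 0, 0, 1, 0, 0]
t = [1, 1, 0, 0, 0, 1, 1, 0]
print(dice(p, t))  # → 0.75
```

A Dice score of 1.0 means the masks coincide exactly, so the jump from 0.73 to 0.91 after per-domain fine-tuning corresponds to a large reduction in disagreement between predicted and labeled wheat head pixels.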
Affiliation(s)
- Keyhan Najafian: Department of Computer Science, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Alireza Ghanbari: Mathematics Department, Faculty of Sciences, University of Qom, Qom, Iran
- Mahdi Sabet Kish: Department of Mathematics, Faculty of Mathematical Science, Shahid Beheshti University, Tehran, Iran
- Mark Eramian: Department of Computer Science, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Ian Stavness: Department of Computer Science, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Lingling Jin: Department of Computer Science, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Farhad Maleki: Department of Computer Science, University of Calgary, Calgary, Alberta, Canada

19
Harfouche AL, Nakhle F, Harfouche AH, Sardella OG, Dart E, Jacobson D. A primer on artificial intelligence in plant digital phenomics: embarking on the data to insights journey. TRENDS IN PLANT SCIENCE 2023; 28:154-184. [PMID: 36167648] [DOI: 10.1016/j.tplants.2022.08.021] [Received: 03/04/2022] [Revised: 08/22/2022] [Accepted: 08/25/2022] [Indexed: 06/16/2023]
Abstract
Artificial intelligence (AI) has emerged as a fundamental component of global agricultural research and is poised to impact many aspects of plant science. In digital phenomics, AI is capable of learning intricate structure and patterns in large datasets. We provide a perspective and primer on AI applications in phenome research. We propose a novel human-centric explainable AI (X-AI) system architecture consisting of data architecture, technology infrastructure, and AI architecture design. We clarify the difference between post hoc models and 'interpretable by design' models, and we include guidance for effectively using an interpretable-by-design model in phenomic analysis. We also point to sources of tools and resources for making data analytics increasingly accessible. This primer is accompanied by an interactive online tutorial.
Collapse
Affiliation(s)
- Antoine L Harfouche
- Department for Innovation in Biological, Agro-Food, and Forest Systems, University of Tuscia, Viterbo, VT 01100, Italy
- Farid Nakhle
- Department for Innovation in Biological, Agro-Food, and Forest Systems, University of Tuscia, Viterbo, VT 01100, Italy
- Antoine H Harfouche
- Unité de Formation et de Recherche en Sciences Économiques, Gestion, Mathématiques, et Informatique, Université Paris Nanterre, 92001 Nanterre, France
- Orlando G Sardella
- Department for Innovation in Biological, Agro-Food, and Forest Systems, University of Tuscia, Viterbo, VT 01100, Italy
- Eli Dart
- Energy Sciences Network (ESnet), Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Daniel Jacobson
- Biosciences Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA
20
James C, Gu Y, Potgieter A, David E, Madec S, Guo W, Baret F, Eriksson A, Chapman S. From Prototype to Inference: A Pipeline to Apply Deep Learning in Sorghum Panicle Detection. PLANT PHENOMICS (WASHINGTON, D.C.) 2023; 5:0017. [PMID: 37040294 PMCID: PMC10076054 DOI: 10.34133/plantphenomics.0017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/28/2022] [Accepted: 12/01/2022] [Indexed: 06/19/2023]
Abstract
Head (panicle) density is a major component in understanding crop yield, especially in crops that produce variable numbers of tillers such as sorghum and wheat. Use of panicle density both in plant breeding and in the agronomy scouting of commercial crops typically relies on manual counts by observation, which is an inefficient and tedious process. Because of the easy availability of red-green-blue images, machine learning approaches have been applied to replace manual counting. However, much of this research focuses on detection per se in limited testing conditions and does not provide a general protocol for utilizing deep-learning-based counting. In this paper, we provide a comprehensive pipeline from data collection to model deployment in deep-learning-assisted panicle yield estimation for sorghum. This pipeline spans data collection and model training through model validation and model deployment in commercial fields. Accurate model training is the foundation of the pipeline. However, in natural environments, the deployment dataset frequently differs from the training data (domain shift), causing the model to fail, so a robust model is essential to build a reliable solution. Although we demonstrate our pipeline in a sorghum field, it can be generalized to other grain species. Our pipeline provides a high-resolution head density map that can be utilized for diagnosis of agronomic variability within a field, and it is built without commercial software.
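The high-resolution head density map the pipeline produces can be pictured as spatial binning of detected panicle centers; a minimal stand-in sketch (the function name and hard-binning scheme are assumptions, not the paper's implementation):

```python
def head_density_map(detections, width, height, cell=1.0):
    """Bin detected head centers (x, y) into a grid of counts per cell.

    A toy stand-in for a high-resolution density map: real pipelines
    typically smooth counts with a kernel rather than hard-binning them.
    """
    cols, rows = int(width / cell), int(height / cell)
    grid = [[0] * cols for _ in range(rows)]
    for x, y in detections:
        c = min(cols - 1, int(x / cell))  # clamp points on the far edge
        r = min(rows - 1, int(y / cell))
        grid[r][c] += 1
    return grid
```

Summing all cells recovers the total head count, so one map can serve both counting and the diagnosis of within-field agronomic variability.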
Affiliation(s)
- Chrisbin James
- School of Agriculture and Food Sciences, The University of Queensland, Brisbane, Australia
- Yanyang Gu
- School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, Australia
- Andries Potgieter
- Queensland Alliance for Agriculture and Food Innovation, The University of Queensland, Brisbane, Australia
- Wei Guo
- Graduate School of Agricultural and Life Sciences, The University of Tokyo, Tokyo, Japan
- Frédéric Baret
- Institut National de la Recherche Agronomique, Paris, France
- Anders Eriksson
- School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, Australia
- Scott Chapman
- School of Agriculture and Food Sciences, The University of Queensland, Brisbane, Australia
21
Besson M, Alison J, Bjerge K, Gorochowski TE, Høye TT, Jucker T, Mann HMR, Clements CF. Towards the fully automated monitoring of ecological communities. Ecol Lett 2022; 25:2753-2775. [PMID: 36264848 PMCID: PMC9828790 DOI: 10.1111/ele.14123] [Citation(s) in RCA: 27] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2022] [Revised: 08/09/2022] [Accepted: 09/06/2022] [Indexed: 01/12/2023]
Abstract
High-resolution monitoring is fundamental to understanding ecosystem dynamics in an era of global change and biodiversity declines. While real-time and automated monitoring of abiotic components has been possible for some time, monitoring biotic components (for example, individual behaviours and traits, and species abundance and distribution) is far more challenging. Recent technological advancements offer potential solutions to achieve this through: (i) increasingly affordable high-throughput recording hardware, which can collect rich multidimensional data, and (ii) increasingly accessible artificial intelligence approaches, which can extract ecological knowledge from large datasets. However, automating the monitoring of facets of ecological communities via such technologies has primarily been achieved at low spatiotemporal resolutions within limited steps of the monitoring workflow. Here, we review existing technologies for data recording and processing that enable automated monitoring of ecological communities. We then present novel frameworks that combine such technologies, forming fully automated pipelines to detect, track, classify and count multiple species, and record behavioural and morphological traits, at resolutions that have previously been impossible to achieve. Based on these rapidly developing technologies, we illustrate a solution to one of the greatest challenges in ecology: the ability to rapidly generate high-resolution, multidimensional and standardised data across complex ecologies.
Affiliation(s)
- Marc Besson
- School of Biological Sciences, University of Bristol, Bristol, UK; Sorbonne Université, CNRS UMR Biologie des Organismes Marins (BIOM), Banyuls-sur-Mer, France
- Jamie Alison
- Department of Ecoscience, Aarhus University, Aarhus, Denmark; UK Centre for Ecology & Hydrology, Bangor, UK
- Kim Bjerge
- Department of Electrical and Computer Engineering, Aarhus University, Aarhus, Denmark
- Thomas E. Gorochowski
- School of Biological Sciences, University of Bristol, Bristol, UK; BrisEngBio, School of Chemistry, University of Bristol, Cantock's Close, Bristol BS8 1TS, UK
- Toke T. Høye
- Department of Ecoscience, Aarhus University, Aarhus, Denmark; Arctic Research Centre, Aarhus University, Aarhus, Denmark
- Tommaso Jucker
- School of Biological Sciences, University of Bristol, Bristol, UK
- Hjalte M. R. Mann
- Department of Ecoscience, Aarhus University, Aarhus, Denmark; Arctic Research Centre, Aarhus University, Aarhus, Denmark
22
Sun J, Cao W, Yamanaka T. JustDeepIt: Software tool with graphical and character user interfaces for deep learning-based object detection and segmentation in image analysis. FRONTIERS IN PLANT SCIENCE 2022; 13:964058. [PMID: 36275541 PMCID: PMC9583140 DOI: 10.3389/fpls.2022.964058] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/08/2022] [Accepted: 09/20/2022] [Indexed: 06/16/2023]
Abstract
Image processing and analysis based on deep learning are becoming mainstream and increasingly accessible for solving various scientific problems in diverse fields. However, it requires advanced computer programming skills and a basic familiarity with character user interfaces (CUIs). Consequently, programming beginners face a considerable technical hurdle. Because potential users of image analysis are experimentalists, who often use graphical user interfaces (GUIs) in their daily work, there is a need to develop GUI-based easy-to-use deep learning software to support their work. Here, we introduce JustDeepIt, a software written in Python, to simplify object detection and instance segmentation using deep learning. JustDeepIt provides both a GUI and a CUI. It contains various functional modules for model building and inference, and it is built upon the popular PyTorch, MMDetection, and Detectron2 libraries. The GUI is implemented using the Python library FastAPI, simplifying model building for various deep learning approaches for beginners. As practical examples of JustDeepIt, we prepared four case studies that cover critical issues in plant science: (1) wheat head detection with Faster R-CNN, YOLOv3, SSD, and RetinaNet; (2) sugar beet and weed segmentation with Mask R-CNN; (3) plant segmentation with U2-Net; and (4) leaf segmentation with U2-Net. The results support the wide applicability of JustDeepIt in plant science applications. In addition, we believe that JustDeepIt has the potential to be applied to deep learning-based image analysis in various fields beyond plant science.
Affiliation(s)
- Jianqiang Sun
- Research Center for Agricultural Information Technology, National Agriculture and Food Research Organization (NARO), Tsukuba, Japan
23
Zaji A, Liu Z, Xiao G, Sangha JS, Ruan Y. A survey on deep learning applications in wheat phenotyping. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109761] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
24
Ito R, Nobuhara H, Kato S. Transfer Learning Method for Object Detection Model Using Genetic Algorithm. JOURNAL OF ADVANCED COMPUTATIONAL INTELLIGENCE AND INTELLIGENT INFORMATICS 2022. [DOI: 10.20965/jaciii.2022.p0776] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
This paper proposes a transfer learning method for an object detection model that uses a genetic algorithm to address the difficulty of conventional transfer learning of deep learning-based object detection models. The genetic algorithm of the proposed method selects the re-learning layers automatically in the transfer learning process, instead of the trial-and-error selection of the conventional method. Transfer learning was performed using the EfficientDet-d0 model pre-trained on the COCO dataset and the Global Wheat Head Detection (GWHD) dataset, and experiments were conducted to compare fine-tuning and the proposed method. Using the training data and the validation data of the GWHD, we compare the mean average precision (mAP) of the models trained by the conventional and the proposed methods, respectively, on the test data of the GWHD. The model trained by the proposed method is confirmed to have higher performance than the model trained by the conventional method: the average mAP of the proposed method, which automatically selects the re-learning layers (≈0.603), is higher than that of the conventional method (≈0.594). Furthermore, the standard deviation of results obtained by the proposed method is smaller than that of the conventional method, indicating a more stable learning process.
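The layer-selection idea can be sketched as a genetic algorithm over binary masks, one bit per layer marking it as re-learned or frozen. The following toy sketch assumes a user-supplied fitness function (in the paper's setting this would be, e.g., validation mAP after fine-tuning the masked layers); it is an illustration, not the paper's implementation:

```python
import random

def evolve_layer_masks(n_layers, fitness, generations=20, pop_size=8, seed=0):
    """Toy GA: each individual is a tuple of 0/1 flags marking layers to re-train.

    `fitness` scores a mask (hypothetically: fine-tune the flagged layers and
    return validation mAP). Elitist selection keeps the best half each round.
    """
    rng = random.Random(seed)
    pop = [tuple(rng.randint(0, 1) for _ in range(n_layers)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitism: survivors carry over
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_layers)    # one-point crossover
            child = list(a[:cut] + b[cut:])
            i = rng.randrange(n_layers)         # single-bit mutation
            child[i] ^= 1
            children.append(tuple(child))
        pop = parents + children
    return max(pop, key=fitness)
```

With a cheap stand-in fitness such as `lambda m: sum(i * b for i, b in enumerate(m))` (which rewards re-training later layers), the search converges in a handful of generations.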
25
Zhu R, Wang X, Yan Z, Qiao Y, Tian H, Hu Z, Zhang Z, Li Y, Zhao H, Xin D, Chen Q. Exploring Soybean Flower and Pod Variation Patterns During Reproductive Period Based on Fusion Deep Learning. FRONTIERS IN PLANT SCIENCE 2022; 13:922030. [PMID: 35909768 PMCID: PMC9326440 DOI: 10.3389/fpls.2022.922030] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/17/2022] [Accepted: 06/20/2022] [Indexed: 06/15/2023]
Abstract
The soybean flower and the pod drop are important factors in soybean yield, and the use of computer vision techniques to obtain the phenotypes of flowers and pods in bulk, as well as in a quick and accurate manner, is a key aspect of the study of the soybean flower and pod drop rate (PDR). This paper compared a variety of deep learning algorithms for identifying and counting soybean flowers and pods, and found that the Faster R-CNN model had the best performance. Furthermore, the Faster R-CNN model was further improved and optimized based on the characteristics of soybean flowers and pods. The accuracy of the final model for identifying flowers and pods was increased to 94.36% and 91%, respectively. Afterward, a fusion model for soybean flower and pod recognition and counting was proposed based on the Faster R-CNN model, where the coefficient of determination (R²) between counts of soybean flowers and pods by the fusion model and manual counts reached 0.965 and 0.98, respectively. The above results show that the fusion model is a robust recognition and counting algorithm that can reduce labor intensity and improve efficiency. Its application will greatly facilitate the study of the variable patterns of soybean flowers and pods during the reproductive period. Finally, based on the fusion model, we explored the variable patterns of soybean flowers and pods during the reproductive period, the spatial distribution patterns of soybean flowers and pods, and soybean flower and pod drop patterns.
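The agreement reported above is the ordinary coefficient of determination between model and manual counts; a standard computation for reference (not code from the paper):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

An R² of 0.965 or 0.98 thus means the model's counts explain all but a few percent of the variance in the manual counts.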
Affiliation(s)
- Rongsheng Zhu
- College of Arts and Sciences, Northeast Agricultural University, Harbin, China
- Xueying Wang
- College of Engineering, Northeast Agricultural University, Harbin, China
- Zhuangzhuang Yan
- College of Engineering, Northeast Agricultural University, Harbin, China
- Yinglin Qiao
- College of Engineering, Northeast Agricultural University, Harbin, China
- Huilin Tian
- College of Agriculture, Northeast Agricultural University, Harbin, China
- Zhenbang Hu
- College of Agriculture, Northeast Agricultural University, Harbin, China
- Zhanguo Zhang
- College of Arts and Sciences, Northeast Agricultural University, Harbin, China
- Yang Li
- College of Arts and Sciences, Northeast Agricultural University, Harbin, China
- Hongjie Zhao
- College of Arts and Sciences, Northeast Agricultural University, Harbin, China
- Dawei Xin
- College of Agriculture, Northeast Agricultural University, Harbin, China
- Qingshan Chen
- College of Agriculture, Northeast Agricultural University, Harbin, China
26
Khaki S, Safaei N, Pham H, Wang L. WheatNet: A lightweight convolutional neural network for high-throughput image-based wheat head detection and counting. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.03.017] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
27
Abstract
The detection and counting of wheat ears are essential for crop field management, but the adhesion and obscuration of wheat ears limit detection accuracy, with problems such as false detection, missed detection, and insufficient feature extraction capability. Previous research has shown that most methods for detecting wheat ears fall into two types: colour and texture features extracted by classical machine learning, or convolutional neural networks. Therefore, we proposed an improved YOLO v5 algorithm based on a shallow feature layer. There are two main core ideas: (1) increasing the perceptual field by adding quadruple down-sampling in the feature pyramid to improve the detection of small targets, and (2) introducing the CBAM attention mechanism into the neural network to mitigate the problem of gradient disappearance during training. CBAM is a module that includes both spatial and channel attention, and by adding this module, the feature extraction capability of the network can be improved. Finally, to give the model better generalization ability, we proposed the Mosaic-8 data augmentation method, adjusted the loss function, and modified the regression formula for the target frame. The experimental results show that the improved algorithm has an mAP of 94.3%, an accuracy of 88.5%, and a recall of 98.1%. Compared with related models, the improvement is noticeable, showing that the model can effectively overcome the noise of the field environment to meet the practical requirements of wheat ear detection and counting.
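CBAM's channel branch reweights each feature channel using pooled descriptors. A radically simplified, dependency-free sketch of that idea follows; the real module feeds average- and max-pooled descriptors through a shared two-layer MLP, which the scalar parameters `w` and `b` merely stand in for here:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(feature_maps, w=1.0, b=0.0):
    """CBAM-style channel attention, heavily simplified.

    Each channel (a flat list of activations) is gated by
    sigmoid(w * (avg_pool + max_pool) + b); `w` and `b` are hypothetical
    stand-ins for the shared MLP of the real module.
    """
    weights = []
    for ch in feature_maps:
        avg = sum(ch) / len(ch)
        mx = max(ch)
        weights.append(sigmoid(w * (avg + mx) + b))
    return [[a * g for a in ch] for ch, g in zip(feature_maps, weights)]
```

The gating leaves channel shapes unchanged, which is why such a module can be dropped into an existing backbone like YOLO v5 without altering the rest of the network.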
28
Hybrid machine learning methods combined with computer vision approaches to estimate biophysical parameters of pastures. EVOLUTIONARY INTELLIGENCE 2022. [DOI: 10.1007/s12065-022-00736-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
29
Yuan J, Kaur D, Zhou Z, Nagle M, Kiddle NG, Doshi NA, Behnoudfar A, Peremyslova E, Ma C, Strauss SH, Li F. Robust High-Throughput Phenotyping with Deep Segmentation Enabled by a Web-Based Annotator. PLANT PHENOMICS 2022; 2022:9893639. [PMID: 36059601 PMCID: PMC9394117 DOI: 10.34133/2022/9893639] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/16/2021] [Accepted: 03/17/2022] [Indexed: 11/24/2022]
Abstract
The abilities of plant biologists and breeders to characterize the genetic basis of physiological traits are limited by their abilities to obtain quantitative data representing precise details of trait variation, and particularly to collect this data at a high-throughput scale with low cost. Although deep learning methods have demonstrated unprecedented potential to automate plant phenotyping, these methods commonly rely on large training sets that can be time-consuming to generate. Intelligent algorithms have therefore been proposed to enhance the productivity of these annotations and reduce human efforts. We propose a high-throughput phenotyping system which features a Graphical User Interface (GUI) and a novel interactive segmentation algorithm: Semantic-Guided Interactive Object Segmentation (SGIOS). By providing a user-friendly interface and intelligent assistance with annotation, this system offers potential to streamline and accelerate the generation of training sets, reducing the effort required by the user. Our evaluation shows that our proposed SGIOS model requires fewer user inputs compared to the state-of-art models for interactive segmentation. As a case study of the use of the GUI applied for genetic discovery in plants, we present an example of results from a preliminary genome-wide association study (GWAS) of in planta regeneration in Populus trichocarpa (poplar). We further demonstrate that the inclusion of a semantic prior map with SGIOS can accelerate the training process for future GWAS, using a sample of a dataset extracted from a poplar GWAS of in vitro regeneration. The capabilities of our phenotyping system surpass those of unassisted humans to rapidly and precisely phenotype our traits of interest. The scalability of this system enables large-scale phenomic screens that would otherwise be time-prohibitive, thereby providing increased power for GWAS, mutant screens, and other studies relying on large sample sizes to characterize the genetic basis of trait variation. Our user-friendly system can be used by researchers lacking a computational background, thus helping to democratize the use of deep segmentation as a tool for plant phenotyping.
Affiliation(s)
- Zheng Zhou
- Oregon State University, Corvallis, OR, USA
- Fuxin Li
- Oregon State University, Corvallis, OR, USA
30
Kuroki K, Yan K, Iwata H, Shimizu KK, Tameshige T, Nasuda S, Guo W. Development of a high-throughput field phenotyping rover optimized for size-limited breeding fields as open-source hardware. BREEDING SCIENCE 2022; 72:66-74. [PMID: 36045888 PMCID: PMC8987849 DOI: 10.1270/jsbbs.21059] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/11/2021] [Accepted: 11/30/2021] [Indexed: 06/15/2023]
Abstract
Phenotyping is a critical process in plant breeding, especially when there is an increasing demand for streamlining the selection process in a breeding program. Since manual phenotyping has limited efficiency, high-throughput phenotyping methods have recently been popularized owing to progress in sensor and image processing technologies. However, in a size-limited breeding field, which is common in Japan and other Asian countries, it is challenging to introduce large machinery into the field or fly unmanned aerial vehicles over it. In this study, we developed a ground-based high-throughput field phenotyping rover that can be easily introduced to a field regardless of the scale and location of the field, even without special facilities. We also made the field rover open-source hardware, making its system publicly available for easy modification, so that anyone can build one for their own use at low cost. The trial run of the field rover revealed that it allowed the collection of detailed remote-sensing images of plants and quantitative analyses based on the images. The results suggest that the field rover developed in this study could allow efficient phenotyping of plants, especially in small breeding fields.
Affiliation(s)
- Ken Kuroki
- Graduate School of Agriculture, Kyoto University, Kitashirakawaoiwake-cho, Sakyo, Kyoto 606-8502, Japan
- Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033, Japan
- Kai Yan
- LabRomance Inc, 1-3-29-2F Ureshino, Fujimino, Saitama 356-0056, Japan
- Hiroyoshi Iwata
- Graduate School of Agricultural and Life Sciences, The University of Tokyo, 1-1-1 Yayoi, Bunkyo, Tokyo 113-8657, Japan
- Kentaro K. Shimizu
- Department of Evolutionary Biology and Environmental Studies, University of Zurich, Zurich 8057, Switzerland
- Kihara Institute for Biological Research, Yokohama City University, 641-12 Maioka, Totsuka, Yokohama, Kanagawa 244-0813, Japan
- Toshiaki Tameshige
- Kihara Institute for Biological Research, Yokohama City University, 641-12 Maioka, Totsuka, Yokohama, Kanagawa 244-0813, Japan
- Department of Biology, Faculty of Science, Niigata University, 8050 Ikarashi 2-no-cho, Nishi, Niigata 950-2181, Japan
- Shuhei Nasuda
- Graduate School of Agriculture, Kyoto University, Kitashirakawaoiwake-cho, Sakyo, Kyoto 606-8502, Japan
- Wei Guo
- Graduate School of Agricultural and Life Sciences, The University of Tokyo, 1-1-1 Midori, Nishitokyo, Tokyo 188-0002, Japan
31
Okura F. 3D modeling and reconstruction of plants and trees: A cross-cutting review across computer graphics, vision, and plant phenotyping. BREEDING SCIENCE 2022; 72:31-47. [PMID: 36045890 PMCID: PMC8987840 DOI: 10.1270/jsbbs.21074] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Accepted: 11/26/2021] [Indexed: 06/15/2023]
Abstract
This paper reviews the past and current trends of three-dimensional (3D) modeling and reconstruction of plants and trees. These topics have been studied in multiple research fields, including computer vision, graphics, plant phenotyping, and forestry; this paper therefore provides a cross-cutting review. Representations of plant shape and structure are first summarized, as every method for plant modeling and reconstruction is based on a shape/structure representation. The methods are then categorized into 1) creating non-existent plants (modeling) and 2) creating models from real-world plants (reconstruction). This paper also discusses the limitations of current methods and possible future directions.
Affiliation(s)
- Fumio Okura
- Graduate School of Information Science and Technology, Osaka University, 1-5 Yamadaoka, Suita, Osaka 565-0871, Japan
32
Sethy PK. Identification of wheat tiller based on AlexNet-feature fusion. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:8309-8316. [DOI: 10.1007/s11042-022-12286-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/07/2021] [Revised: 11/22/2021] [Accepted: 01/14/2022] [Indexed: 08/02/2023]
33
Ninomiya S. High-throughput field crop phenotyping: current status and challenges. BREEDING SCIENCE 2022; 72:3-18. [PMID: 36045897 PMCID: PMC8987842 DOI: 10.1270/jsbbs.21069] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Accepted: 12/16/2021] [Indexed: 05/03/2023]
Abstract
In contrast to the rapid advances made in plant genotyping, plant phenotyping is considered a bottleneck in plant science. This has promoted high-throughput plant phenotyping (HTP) studies, resulting in an exponential increase in phenotyping-related publications. HTP technologies were originally developed for model plant species under controlled indoor environments, but the focus subsequently shifted to crops in the field. Although field HTP is much more difficult to conduct than HTP in controlled environments due to unstable environmental conditions, recent advances in HTP technology have allowed these difficulties to be overcome, allowing for rapid, efficient, non-destructive, non-invasive, quantitative, repeatable, and objective phenotyping. Recent HTP developments have been accelerated by advances in data analysis, sensors, and robot technologies, including machine learning, image analysis, three-dimensional (3D) reconstruction, image sensors, laser sensors, environmental sensors, and drones, along with high-speed computational resources. This article provides an overview of recent HTP technologies, focusing mainly on canopy-based phenotypes of major crops, such as canopy height, canopy coverage, canopy biomass, and canopy stressed appearance, in addition to crop organ detection and counting in the field. Current topics in field HTP are also presented, followed by a discussion of the low rates of adoption of HTP in practical breeding programs.
Affiliation(s)
- Seishi Ninomiya
- Graduate School of Agriculture and Life Sciences, The University of Tokyo, Nishitokyo, Tokyo 188-0002, Japan
- Plant Phenomics Research Center, Nanjing Agricultural University, Nanjing, China
34
Shi M, Li XY, Lu H, Cao ZG. Background-Aware Domain Adaptation for Plant Counting. FRONTIERS IN PLANT SCIENCE 2022; 13:731816. [PMID: 35185973 PMCID: PMC8850787 DOI: 10.3389/fpls.2022.731816] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/28/2021] [Accepted: 01/10/2022] [Indexed: 06/14/2023]
Abstract
Deep learning-based object counting models have recently been considered preferable choices for plant counting. However, the performance of these data-driven methods will probably deteriorate when a discrepancy exists between the training and testing data. Such a discrepancy is also known as the domain gap. One way to mitigate the performance drop is to use unlabeled data sampled from the testing environment to correct the model behavior. This problem setting is also called unsupervised domain adaptation (UDA). Although UDA has been a long-standing topic in the machine learning community, UDA methods are less studied for plant counting. In this paper, we first evaluate some frequently used UDA methods on the plant counting task, including feature-level and image-level methods. By analyzing the failure patterns of these methods, we propose a novel background-aware domain adaptation (BADA) module to address their drawbacks. We show that BADA can easily fit into object counting models to improve cross-domain plant counting performance, especially on background areas. Benefiting from learning where to count, background counting errors are reduced. We also show that BADA can work with adversarial training strategies to further enhance the robustness of counting models against the domain gap. We evaluated our method on 7 different domain adaptation settings, including different camera views, cultivars, locations, and image acquisition devices. Results demonstrate that our method achieved the lowest Mean Absolute Error on 6 out of the 7 settings. The usefulness of BADA is also supported by controlled ablation studies and visualizations.
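The evaluation metric referenced here, Mean Absolute Error over per-image counts, together with the related RMSE used throughout the counting literature, can be written down directly (standard formulations, not code from the paper):

```python
import math

def mae(pred_counts, true_counts):
    """Mean Absolute Error over per-image counts."""
    return sum(abs(p - t) for p, t in zip(pred_counts, true_counts)) / len(pred_counts)

def rmse(pred_counts, true_counts):
    """Root Mean Square Error; penalizes large per-image miscounts more heavily."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred_counts, true_counts))
                     / len(pred_counts))
```

Because RMSE squares the residuals, a method can lower MAE yet raise RMSE if it occasionally miscounts badly, which is why counting papers usually report both.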
35
Liu C, Wang K, Lu H, Cao Z. Dynamic Color Transform Networks for Wheat Head Detection. PLANT PHENOMICS (WASHINGTON, D.C.) 2022; 2022:9818452. [PMID: 35198987 PMCID: PMC8829536 DOI: 10.34133/2022/9818452] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Accepted: 01/04/2022] [Indexed: 11/06/2022]
Abstract
Wheat head detection can measure wheat traits such as head density and head characteristics. Standard wheat breeding largely relies on manual observation to detect wheat heads, yielding a tedious and inefficient procedure. The emergence of affordable camera platforms provides opportunities for deploying computer vision (CV) algorithms in wheat head detection, enabling automated measurements of wheat traits. Accurate wheat head detection, however, is challenging due to the variability of observation circumstances and the uncertainty of wheat head appearances. In this work, we propose a simple but effective idea, dynamic color transform (DCT), for accurate wheat head detection. This idea is based on the observation that modifying the color channels of an input image can significantly alleviate false negatives and therefore improve detection results. DCT follows a linear color transform and can be easily implemented as a dynamic network. A key property of DCT is that the transform parameters are data-dependent, such that illumination variations can be corrected adaptively. The DCT network can be incorporated into any existing object detector. Experimental results on the Global Wheat Head Detection (GWHD) dataset 2021 show that DCT achieves notable improvements with negligible overhead parameters. In addition, DCT played an important role in our solution to the Global Wheat Challenge (GWC) 2021, where our solution ranked first on the initial public leaderboard, with an Average Domain Accuracy (ADA) of 0.821, and obtained the runner-up reward on the final private testing set, with an ADA of 0.695.
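The linear color transform at the heart of DCT can be sketched in a few lines. Here the per-channel scale and offset are passed in directly, whereas in DCT a small network predicts them from the image itself; the function and parameter names are hypothetical:

```python
def dynamic_color_transform(pixel_rgb, scales, offsets):
    """Per-channel linear color transform y = a * x + b, clipped to [0, 1].

    In DCT, `scales` and `offsets` would be data-dependent, predicted from the
    input image by a lightweight network so illumination is corrected adaptively.
    """
    return tuple(min(1.0, max(0.0, a * x + b))
                 for x, a, b in zip(pixel_rgb, scales, offsets))
```

Since the transform is just two parameters per channel, it adds negligible overhead to whatever detector consumes the corrected image, matching the abstract's claim.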
Affiliation(s)
- Chengxin Liu
- Key Laboratory of Image Processing and Intelligent Control, Ministry of Education, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China
- Kewei Wang
- Key Laboratory of Image Processing and Intelligent Control, Ministry of Education, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China
- Hao Lu
- Key Laboratory of Image Processing and Intelligent Control, Ministry of Education, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China
- Zhiguo Cao
- Key Laboratory of Image Processing and Intelligent Control, Ministry of Education, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China
36
Zenkl R, Timofte R, Kirchgessner N, Roth L, Hund A, Van Gool L, Walter A, Aasen H. Outdoor Plant Segmentation With Deep Learning for High-Throughput Field Phenotyping on a Diverse Wheat Dataset. FRONTIERS IN PLANT SCIENCE 2022; 12:774068. [PMID: 35058948 PMCID: PMC8765702 DOI: 10.3389/fpls.2021.774068] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/10/2021] [Accepted: 11/05/2021] [Indexed: 05/25/2023]
Abstract
Robust and automated segmentation of leaves and other backgrounds is a core prerequisite of most approaches in high-throughput field phenotyping. So far, the possibilities of deep learning approaches for this purpose have not been explored adequately, partly due to a lack of publicly available, appropriate datasets. This study presents a workflow based on DeepLab v3+ and on a diverse annotated dataset of 190 RGB images (350 × 350 pixels). Images of winter wheat plants of 76 different genotypes and developmental stages were acquired over multiple years at high resolution in outdoor conditions using a nadir view, encompassing a wide range of imaging conditions. Inconsistencies of human annotators in complex images were quantified, and metadata on camera settings was included. The proposed approach achieves an intersection over union (IoU) of 0.77 and 0.90 for plants and soil, respectively. This outperforms the benchmarked machine learning methods, which use a Support Vector Classifier and/or Random Forest. The results show that a small but carefully chosen and annotated set of images can provide a good basis for a powerful segmentation pipeline. Compared to earlier methods based on machine learning, the proposed method achieves better performance on the selected dataset despite using a deep learning approach with limited data. Increasing the amount of publicly available data with high human agreement on annotations and further development of deep neural network architectures will provide high potential for robust field-based plant segmentation in the near future. This, in turn, will be a cornerstone of data-driven improvement in crop breeding and agricultural practices of global benefit.
Affiliation(s)
- Radek Zenkl, Group of Crop Science, Department of Environmental Systems Science, Institute of Agricultural Sciences, ETH Zurich, Zurich, Switzerland
- Radu Timofte, Computer Vision Lab, Department of Information Technology and Electrical Engineering, ETH Zurich, Zurich, Switzerland
- Norbert Kirchgessner, Group of Crop Science, Department of Environmental Systems Science, Institute of Agricultural Sciences, ETH Zurich, Zurich, Switzerland
- Lukas Roth, Group of Crop Science, Department of Environmental Systems Science, Institute of Agricultural Sciences, ETH Zurich, Zurich, Switzerland
- Andreas Hund, Group of Crop Science, Department of Environmental Systems Science, Institute of Agricultural Sciences, ETH Zurich, Zurich, Switzerland
- Luc Van Gool, Computer Vision Lab, Department of Information Technology and Electrical Engineering, ETH Zurich, Zurich, Switzerland
- Achim Walter, Group of Crop Science, Department of Environmental Systems Science, Institute of Agricultural Sciences, ETH Zurich, Zurich, Switzerland
- Helge Aasen, Remote Sensing Team, Division of Agroecology and Environment, Agroscope, Zurich, Switzerland
37
Dong Y, Liu Y, Kang H, Li C, Liu P, Liu Z. Lightweight and efficient neural network with SPSA attention for wheat ear detection. PeerJ Comput Sci 2022; 8:e931. [PMID: 35494849 PMCID: PMC9044259 DOI: 10.7717/peerj-cs.931] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2021] [Accepted: 03/03/2022] [Indexed: 05/10/2023]
Abstract
Advancements in deep neural networks have enabled remarkable leaps forward in crop detection. However, the detection of wheat ears is an important yet challenging task due to the complex background, dense targets, and overlaps between wheat ears. Many detectors have made significant progress in improving detection accuracy, but some of them cannot strike a good balance between computational cost and precision to meet the needs of real-world deployment. To address these issues, a lightweight and efficient wheat ear detector with Shuffle Polarized Self-Attention (SPSA) is proposed in this paper. Specifically, we first utilize a lightweight backbone network with asymmetric convolution for effective feature extraction. Next, SPSA attention is applied to adaptively select focused positions and produce a more discriminative representation of the features. This strategy introduces polarized self-attention in the spatial and channel dimensions and adopts Shuffle Units to combine those two types of attention mechanisms effectively. Finally, the TanhExp activation function is adopted to accelerate inference and reduce training time, and CIOU loss is used as the bounding-box regression loss to improve detection of occluded and overlapping targets. Experimental results on the Global Wheat Head Detection dataset show that our method achieves superior detection performance compared with other state-of-the-art approaches.
Affiliation(s)
- Yan Dong, School of Electronic and Information Engineering, Zhongyuan University of Technology, Zhengzhou, China
- Yundong Liu, School of Electronic and Information Engineering, Zhongyuan University of Technology, Zhengzhou, China
- Haonan Kang, Department of Statistics and Data Science, National University of Singapore, Singapore
- Chunlei Li, School of Electronic and Information Engineering, Zhongyuan University of Technology, Zhengzhou, China
- Pengcheng Liu, Department of Computer Science, University of York, York, United Kingdom
- Zhoufeng Liu, School of Electronic and Information Engineering, Zhongyuan University of Technology, Zhengzhou, China
38
Wen C, Wu J, Chen H, Su H, Chen X, Li Z, Yang C. Wheat Spike Detection and Counting in the Field Based on SpikeRetinaNet. FRONTIERS IN PLANT SCIENCE 2022; 13:821717. [PMID: 35310650 PMCID: PMC8928106 DOI: 10.3389/fpls.2022.821717] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/24/2021] [Accepted: 01/17/2022] [Indexed: 05/21/2023]
Abstract
The number of wheat spikes per unit area is one of the most important agronomic traits associated with wheat yield. However, quick and accurate detection and counting of wheat spikes face persistent challenges due to the complexity of wheat field conditions. This work trained a RetinaNet (SpikeRetinaNet) with several optimizations to detect and count wheat spikes efficiently. First, a weighted bidirectional feature pyramid network (BiFPN) was introduced into the feature pyramid network (FPN) of RetinaNet, fusing multiscale features to recognize wheat spikes across different varieties and complicated environments. Then, to detect objects more efficiently, focal loss and attention modules were added. Finally, soft non-maximum suppression (Soft-NMS) was used to address the occlusion problem. Based on these improvements, the new detector was created and tested on the Global Wheat Head Detection (GWHD) dataset supplemented with wheat-wheatgrass spike detection (WSD) images. The WSD images add new wheat varieties, making the mixed dataset richer in species. The method achieved an mAP50 of 0.9262, improvements of 5.59, 49.06, 2.79, 1.35, and 7.26% over the state-of-the-art RetinaNet, single-shot multibox detector (SSD), You Only Look Once version 3 (YOLOv3), You Only Look Once version 4 (YOLOv4), and faster region-based convolutional neural network (Faster R-CNN), respectively. In addition, the counting accuracy reached 0.9288, likewise higher than the other methods. Our implementation code and partial validation data are available at https://github.com/wujians122/The-Wheat-Spikes-Detecting-and-Counting.
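Soft-NMS, which this work uses for occlusion, decays the scores of boxes that overlap an already-accepted detection instead of discarding them outright, so heavily overlapped spikes can still survive. A minimal NumPy sketch of the standard Gaussian variant (an illustration of the general algorithm, not the paper's code):

```python
import numpy as np

def iou_one_to_many(box, boxes):
    """IoU between one box and an (N, 4) array; boxes are [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: decay overlapping scores instead of deleting boxes."""
    boxes = boxes.astype(float).copy()
    scores = scores.astype(float).copy()
    keep_boxes, keep_scores = [], []
    while scores.size > 0:
        i = int(scores.argmax())               # highest-scoring remaining box
        keep_boxes.append(boxes[i].copy())
        keep_scores.append(float(scores[i]))
        boxes = np.delete(boxes, i, axis=0)
        scores = np.delete(scores, i)
        if scores.size == 0:
            break
        overlaps = iou_one_to_many(keep_boxes[-1], boxes)
        scores = scores * np.exp(-(overlaps ** 2) / sigma)  # Gaussian decay
        keep_mask = scores > score_thresh      # drop only near-zero scores
        boxes, scores = boxes[keep_mask], scores[keep_mask]
    return np.array(keep_boxes), np.array(keep_scores)
```

Unlike hard NMS, an occluding neighbor is kept with a reduced score rather than suppressed, which is why it helps with densely packed spikes.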
Affiliation(s)
- Changji Wen, College of Information and Technology, Jilin Agricultural University, Changchun, China; Institute for the Smart Agriculture, Jilin Agricultural University, Changchun, China
- Jianshuang Wu, College of Food, Agricultural and Natural Resource Sciences, University of Minnesota, Saint Paul, MN, United States
- Hongrui Chen, College of Food, Agricultural and Natural Resource Sciences, University of Minnesota, Saint Paul, MN, United States
- Hengqiang Su, College of Information and Technology, Jilin Agricultural University, Changchun, China; Institute for the Smart Agriculture, Jilin Agricultural University, Changchun, China
- Xiao Chen, College of Information and Technology, Jilin Agricultural University, Changchun, China; Institute for the Smart Agriculture, Jilin Agricultural University, Changchun, China
- Zhuoshi Li, College of Information and Technology, Jilin Agricultural University, Changchun, China; Key Laboratory of Bionic Engineering, Ministry of Education, Jilin University, Changchun, China
- Ce Yang, College of Food, Agricultural and Natural Resource Sciences, University of Minnesota, Saint Paul, MN, United States
- *Correspondence: Changji Wen,
39
Zang H, Wang Y, Ru L, Zhou M, Chen D, Zhao Q, Zhang J, Li G, Zheng G. Detection method of wheat spike improved YOLOv5s based on the attention mechanism. FRONTIERS IN PLANT SCIENCE 2022; 13:993244. [PMID: 36247573 PMCID: PMC9554473 DOI: 10.3389/fpls.2022.993244] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/13/2022] [Accepted: 08/30/2022] [Indexed: 05/17/2023]
Abstract
In wheat breeding, spike number is a key indicator for evaluating wheat yield, and timely, accurate acquisition of wheat spike counts is of great practical significance for yield prediction. In actual production, counting wheat spikes by manual field survey is time-consuming and labor-intensive. Therefore, this paper proposes a method based on YOLOv5s with an improved attention mechanism, which can accurately detect small-scale wheat spikes and better handle occlusion and cross-overlapping of the spikes. The method introduces an efficient channel attention (ECA) module into the C3 module of the backbone of the YOLOv5s network; at the same time, a global attention mechanism (GAM) module is inserted between the neck and the head. These attention mechanisms more effectively extract feature information and suppress useless information. The results show that the accuracy of the improved YOLOv5s model reached 71.61% on the wheat spike counting task, 4.95% higher than that of the standard YOLOv5s model, with higher counting accuracy. The improved YOLOv5s has a similar number of parameters to YOLOv5m, while RMSE and MAE are reduced by 7.62 and 6.47, respectively, and its performance is better than YOLOv5l. Therefore, the improved YOLOv5s method is more applicable in complex field environments and provides a technical reference for automatic identification of wheat spike numbers and yield estimation. Labeled images, source code, and trained models are available at: https://github.com/228384274/improved-yolov5.
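The ECA module cited above gates each channel by sliding a small 1D convolution across a globally average-pooled channel descriptor, followed by a sigmoid. A hypothetical NumPy sketch of that mechanism (in the real module the kernel weights are learned; the weights here are placeholders):

```python
import numpy as np

def eca_attention(feature_map, kernel):
    """Efficient channel attention, sketched in NumPy.

    feature_map: array of shape (C, H, W).
    kernel: odd-length 1D weight vector (learned in the real module),
    slid across neighbouring channels of the pooled descriptor.
    """
    c = feature_map.shape[0]
    desc = feature_map.reshape(c, -1).mean(axis=1)   # global average pooling
    k = len(kernel)
    pad = k // 2
    padded = np.pad(desc, pad, mode="edge")
    conv = np.array([np.dot(padded[i:i + k], kernel) for i in range(c)])
    gate = 1.0 / (1.0 + np.exp(-conv))               # per-channel sigmoid gate
    return feature_map * gate[:, None, None]
```

The appeal of ECA is that the gate costs only a k-tap 1D convolution per channel, avoiding the fully connected bottleneck of heavier channel-attention designs.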
Affiliation(s)
- Hecang Zang, Institute of Agricultural Economics and Information, Henan Academy of Agricultural Sciences, Zhengzhou, China; Key Laboratory of Huang-Huai-Hai Smart Agricultural Technology, Ministry of Agriculture and Rural Affairs, Zhengzhou, China
- Yanjing Wang, College of Life Sciences, Zhengzhou Normal University, Zhengzhou, China
- *Correspondence: Yanjing Wang,
- Linyuan Ru, College of Computer and Information Engineering, Henan Normal University, Xinxiang, China
- Meng Zhou, Institute of Agricultural Economics and Information, Henan Academy of Agricultural Sciences, Zhengzhou, China; Key Laboratory of Huang-Huai-Hai Smart Agricultural Technology, Ministry of Agriculture and Rural Affairs, Zhengzhou, China
- Dandan Chen, Institute of Agricultural Economics and Information, Henan Academy of Agricultural Sciences, Zhengzhou, China; Key Laboratory of Huang-Huai-Hai Smart Agricultural Technology, Ministry of Agriculture and Rural Affairs, Zhengzhou, China
- Qing Zhao, Institute of Agricultural Economics and Information, Henan Academy of Agricultural Sciences, Zhengzhou, China; Key Laboratory of Huang-Huai-Hai Smart Agricultural Technology, Ministry of Agriculture and Rural Affairs, Zhengzhou, China
- Jie Zhang, Institute of Agricultural Economics and Information, Henan Academy of Agricultural Sciences, Zhengzhou, China; Key Laboratory of Huang-Huai-Hai Smart Agricultural Technology, Ministry of Agriculture and Rural Affairs, Zhengzhou, China
- Guoqiang Li, Institute of Agricultural Economics and Information, Henan Academy of Agricultural Sciences, Zhengzhou, China; Key Laboratory of Huang-Huai-Hai Smart Agricultural Technology, Ministry of Agriculture and Rural Affairs, Zhengzhou, China
- Guoqing Zheng, Institute of Agricultural Economics and Information, Henan Academy of Agricultural Sciences, Zhengzhou, China; Key Laboratory of Huang-Huai-Hai Smart Agricultural Technology, Ministry of Agriculture and Rural Affairs, Zhengzhou, China
40
Kozhekin M, Genaev M, Koval V, Slobodchikov A, Afonnikov D. Wheat yield estimation based on analysis of UAV images at low altitude. BIO WEB OF CONFERENCES 2022. [DOI: 10.1051/bioconf/20224705006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
Information about the yield of wheat crops makes it possible to correctly assess their productivity and choose appropriate agronomic procedures to maximize yield. However, determining yields based on manual ear counts is labor-intensive. Recently, UAVs have demonstrated high efficiency for rapid yield estimation. This paper presents a software package, WDS (Wheat Detection System), for counting ears in wheat crops from RGB images obtained by UAVs. WDS creates the flight plan, automatically georeferences the acquired images to the appropriate fragment of the field, counts ears using neural network models, reconstructs the density of ears in the crop, and visualizes it as a heat map in an interactive web application. Based on a field experiment, the accuracy of ear counting in plots was assessed: Spearman and Pearson correlation coefficients between ear density counted manually and using WDS were 0.618 and 0.541, respectively (p-value < 0.05). WDS is available at https://github.com/Sl07h/wheat_detection.
41
Zhang J, Min A, Steffenson BJ, Su WH, Hirsch CD, Anderson J, Wei J, Ma Q, Yang C. Wheat-Net: An Automatic Dense Wheat Spike Segmentation Method Based on an Optimized Hybrid Task Cascade Model. FRONTIERS IN PLANT SCIENCE 2022; 13:834938. [PMID: 35222491 PMCID: PMC8866238 DOI: 10.3389/fpls.2022.834938] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/14/2021] [Accepted: 01/18/2022] [Indexed: 05/12/2023]
Abstract
Precise segmentation of wheat spikes from a complex background is necessary for obtaining image-based phenotypic information on wheat traits such as yield estimation and spike morphology. A new instance segmentation method based on a Hybrid Task Cascade model was proposed to solve the wheat spike detection problem with improved detection results. In this study, wheat images were collected from fields where the environment varied both spatially and temporally. Res2Net50 was adopted as the backbone network, combined with multi-scale training, deformable convolutional networks, and a Generic ROI Extractor for rich feature learning. The proposed method was trained and validated, and the average precision (AP) obtained for the bounding box and mask was 0.904 and 0.907, respectively, with an accuracy of 99.29% for wheat spike counting. Comprehensive empirical analyses revealed that our method (Wheat-Net) performs well on challenging field-based datasets of mixed quality, particularly those with varied backgrounds and wheat spike adjacency/occlusion. These results demonstrate dense wheat spike detection with masking, which is useful not only for wheat yield estimation but also for spike morphology assessment.
Affiliation(s)
- Jiajing Zhang, College of Information and Electrical Engineering, China Agricultural University, Beijing, China; The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- An Min, Department of Bioproducts and Biosystems Engineering, University of Minnesota, Saint Paul, MN, United States
- Brian J. Steffenson, Department of Plant Pathology, University of Minnesota, Saint Paul, MN, United States
- Wen-Hao Su, College of Engineering, China Agricultural University, Beijing, China
- Cory D. Hirsch, Department of Plant Pathology, University of Minnesota, Saint Paul, MN, United States
- James Anderson, Department of Agronomy and Plant Genetics, University of Minnesota, Saint Paul, MN, United States
- Jian Wei, College of Information and Electrical Engineering, China Agricultural University, Beijing, China
- Qin Ma, College of Information and Electrical Engineering, China Agricultural University, Beijing, China
- *Correspondence: Qin Ma,
- Ce Yang, Department of Bioproducts and Biosystems Engineering, University of Minnesota, Saint Paul, MN, United States
42
Hartley ZKJ, French AP. Domain Adaptation of Synthetic Images for Wheat Head Detection. PLANTS (BASEL, SWITZERLAND) 2021; 10:plants10122633. [PMID: 34961104 PMCID: PMC8708756 DOI: 10.3390/plants10122633] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/02/2021] [Revised: 11/22/2021] [Accepted: 11/25/2021] [Indexed: 06/12/2023]
Abstract
Wheat head detection is a core computer vision problem in plant phenotyping that has seen increased interest in recent years as large-scale datasets have been made available for research. In deep learning problems with limited training data, synthetic data have been shown to improve performance by increasing the number of available training examples, but have had limited effectiveness due to domain shift. To overcome this, many adversarial approaches such as Generative Adversarial Networks (GANs) have been proposed, better aligning the distribution of synthetic data to that of real images through domain augmentation. In this paper, we examine the impact of supplementing the original dataset with synthetic data for wheat head detection on the global wheat head challenge dataset. Through our experimentation, we demonstrate the challenges of performing domain augmentation when the target domain is large and diverse. We then present a novel approach to improving scores by using heatmap regression as a support network and clustering to combat the high variation of the target domain.
Affiliation(s)
- Zane K. J. Hartley, School of Computer Science, University of Nottingham, Nottingham NG8 1BB, UK
- Andrew P. French, School of Computer Science, University of Nottingham, Nottingham NG8 1BB, UK; School of Biosciences, University of Nottingham, Loughborough LE12 5RD, UK
43
Danilevicz MF, Bayer PE, Nestor BJ, Bennamoun M, Edwards D. Resources for image-based high-throughput phenotyping in crops and data sharing challenges. PLANT PHYSIOLOGY 2021; 187:699-715. [PMID: 34608963 PMCID: PMC8561249 DOI: 10.1093/plphys/kiab301] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/04/2020] [Accepted: 05/26/2021] [Indexed: 05/06/2023]
Abstract
High-throughput phenotyping (HTP) platforms are capable of monitoring the phenotypic variation of plants through multiple types of sensors, such as red, green, and blue (RGB) cameras, hyperspectral sensors, and computed tomography, which can be associated with environmental and genotypic data. Because of the wide range of information provided, HTP datasets represent a valuable asset for characterizing crop phenotypes. As HTP becomes widely employed and more tools and data are released, it is important that researchers are aware of these resources and how they can be applied to accelerate crop improvement. Researchers may exploit these datasets either for phenotype comparison or as a benchmark to assess tool performance and to support the development of tools that generalize better between different crops and environments. In this review, we describe the use of image-based HTP for yield prediction, root phenotyping, development of climate-resilient crops, detection of pathogen and pest infestation, and quantitative trait measurement. We emphasize the need for researchers to share phenotypic data, and offer a comprehensive list of available datasets to assist crop breeders and tool developers in leveraging these resources to accelerate crop breeding.
Affiliation(s)
- Monica F. Danilevicz, School of Biological Sciences and Institute of Agriculture, University of Western Australia, Perth, Western Australia 6009, Australia
- Philipp E. Bayer, School of Biological Sciences and Institute of Agriculture, University of Western Australia, Perth, Western Australia 6009, Australia
- Benjamin J. Nestor, School of Biological Sciences and Institute of Agriculture, University of Western Australia, Perth, Western Australia 6009, Australia
- Mohammed Bennamoun, Department of Computer Science and Software Engineering, University of Western Australia, Perth, Western Australia 6009, Australia
- David Edwards, School of Biological Sciences and Institute of Agriculture, University of Western Australia, Perth, Western Australia 6009, Australia
- Author for communication:
44
Bozada T, Borden J, Workman J, Del Cid M, Malinowski J, Luechtefeld T. Sysrev: A FAIR Platform for Data Curation and Systematic Evidence Review. Front Artif Intell 2021; 4:685298. [PMID: 34423285 PMCID: PMC8374944 DOI: 10.3389/frai.2021.685298] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2021] [Accepted: 07/13/2021] [Indexed: 11/16/2022] Open
Abstract
Well-curated datasets are essential to evidence-based decision making and to the integration of artificial intelligence with human reasoning across disciplines. However, many sources of data remain siloed, unstructured, and/or unavailable for complementary and secondary research. Sysrev was developed to address these issues. First, Sysrev was built to aid in systematic evidence reviews (SER), where digital documents are evaluated according to a well-defined process, and where Sysrev provides an easy-to-access, publicly available, and free platform for collaborating on SER projects. Second, Sysrev addresses the issue of unstructured, siloed, and inaccessible data in the context of generalized data extraction, where human and machine learning algorithms are combined to extract insights and evidence for better decision making across disciplines. Sysrev uses FAIR (Findability, Accessibility, Interoperability, and Reuse of digital assets) as primary design principles. Sysrev was developed primarily because of an observed need to reduce redundancy, reduce inefficient use of human time, and increase the impact of evidence-based decision making. This publication introduces Sysrev as a novel technology, with an overview of the features, motivations, and use cases of the tool. Methods: Sysrev.com is a FAIR-motivated web platform for data curation and SER. Sysrev allows users to create data curation projects called "sysrevs" wherein users upload documents, define review tasks, recruit reviewers, perform review tasks, and automate review tasks. Conclusion: Sysrev is a web application designed to facilitate data curation and SERs. Thousands of publicly accessible Sysrev projects have been created, accommodating research in a wide variety of disciplines. Described use cases include data curation, managed reviews, and SERs.
Affiliation(s)
- Thomas Luechtefeld, Insilica LLC, Bethesda, MD, United States; Toxtrack LLC, Baltimore, MD, United States
45
Wheat Ear Recognition Based on RetinaNet and Transfer Learning. SENSORS 2021; 21:s21144845. [PMID: 34300585 PMCID: PMC8309814 DOI: 10.3390/s21144845] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/22/2021] [Revised: 07/07/2021] [Accepted: 07/12/2021] [Indexed: 02/02/2023]
Abstract
The number of wheat ears is an essential indicator for wheat production and yield estimation, but accurately counting wheat ears requires expensive manual labor and time. Meanwhile, wheat ears provide few distinguishing features and their color is consistent with the background, which makes obtaining the required counts challenging. In this paper, the ability of Faster regions with convolutional neural networks (Faster R-CNN) and RetinaNet to predict the number of wheat ears at different growth stages and under different conditions is investigated. The results show that, using the Global WHEAT dataset for recognition, the RetinaNet and Faster R-CNN methods achieve average accuracies of 0.82 and 0.72, with RetinaNet obtaining the highest recognition accuracy. Second, using the collected image data for recognition, the R2 of RetinaNet and Faster R-CNN after transfer learning is 0.9722 and 0.8702, respectively, indicating that the recognition accuracy of RetinaNet is higher across different datasets. We also tested wheat ears at both the filling and maturity stages; the proposed method proved very robust (R2 above 0.90). This study provides technical support and a reference for automatic wheat ear recognition and yield estimation.
46
EasyIDP: A Python Package for Intermediate Data Processing in UAV-Based Plant Phenotyping. REMOTE SENSING 2021. [DOI: 10.3390/rs13132622] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Unmanned aerial vehicle (UAV) and structure from motion (SfM) photogrammetry techniques are now widely used for field-based, high-throughput plant phenotyping, but some of the intermediate processes in the workflow remain manual. For example, geographic information system (GIS) software is used to manually assess the 2D/3D field reconstruction quality and to crop regions of interest (ROIs) from the whole field. In addition, extracting phenotypic traits from raw UAV images is more effective than extracting them directly from the digital orthomosaic (DOM). Currently, no easy-to-use tools are available to implement these tasks for commonly used commercial SfM software such as Pix4D and Agisoft Metashape. Hence, an open-source software package called easy intermediate data processor (EasyIDP; MIT license) was developed to decrease the workload in the intermediate data processing mentioned above. The functions of the proposed package include (1) an ROI cropping module, assisting in reconstruction quality assessment and cropping ROIs from the whole field, and (2) an ROI reversing module, projecting ROIs onto the corresponding raw images. The results show that both the cropping and reversing modules work as expected. Moreover, the effects of ROI height selection and reversed ROI position on raw images on the reverse calculation are discussed. This tool shows great potential for decreasing the workload of data annotation for machine learning applications.
47
Riera LG, Carroll ME, Zhang Z, Shook JM, Ghosal S, Gao T, Singh A, Bhattacharya S, Ganapathysubramanian B, Singh AK, Sarkar S. Deep Multiview Image Fusion for Soybean Yield Estimation in Breeding Applications. PLANT PHENOMICS (WASHINGTON, D.C.) 2021; 2021:9846470. [PMID: 34250507 PMCID: PMC8240512 DOI: 10.34133/2021/9846470] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/16/2020] [Accepted: 05/19/2021] [Indexed: 05/17/2023]
Abstract
Reliable seed yield estimation is an indispensable step in plant breeding programs geared towards cultivar development in major row crops. The objective of this study is to develop a machine learning (ML) approach adept at soybean (Glycine max L. (Merr.)) pod counting to enable genotype seed yield rank prediction from in-field video data collected by a ground robot. To meet this goal, we developed a multiview image-based yield estimation framework utilizing deep learning architectures. Plant images captured from different angles were fused to estimate yield and subsequently to rank soybean genotypes for use in breeding decisions. We used data from a controlled imaging environment in the field, as well as from plant breeding test plots, to demonstrate the efficacy of our framework by comparing its performance with manual pod counting and yield estimation. Our results demonstrate the promise of ML models for making breeding decisions with a significant reduction in time and human effort, opening new avenues in breeding methods for cultivar development.
Affiliation(s)
- Luis G. Riera, Department of Mechanical Engineering, Iowa State University, Ames, Iowa, USA
- Zhisheng Zhang, Department of Mechanical Engineering, Iowa State University, Ames, Iowa, USA
- Sambuddha Ghosal, Department of Mechanical Engineering, Iowa State University, Ames, Iowa, USA
- Tianshuang Gao, Department of Computer Science, Iowa State University, Ames, Iowa, USA
- Arti Singh, Department of Agronomy, Iowa State University, Ames, Iowa, USA
- Soumik Sarkar, Department of Mechanical Engineering, Iowa State University, Ames, Iowa, USA
48
Wang Y, Qin Y, Cui J. Occlusion Robust Wheat Ear Counting Algorithm Based on Deep Learning. FRONTIERS IN PLANT SCIENCE 2021; 12:645899. [PMID: 34177976 PMCID: PMC8226325 DOI: 10.3389/fpls.2021.645899] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/24/2020] [Accepted: 05/19/2021] [Indexed: 05/17/2023]
Abstract
Counting the number of wheat ears in images under natural light is an important way to evaluate crop yield and is thus of great significance to modern intelligent agriculture. However, the distribution of wheat ears is dense, so occlusion and overlap appear in almost every wheat image. Traditional image processing methods struggle with occlusion due to the lack of high-level semantic features, while existing deep learning based counting methods do not handle occlusion efficiently. This article proposes an improved EfficientDet-D0 object detection model for wheat ear counting, focusing on occlusion. First, transfer learning is employed to pre-train the model's backbone network to extract high-level semantic features of wheat ears. Second, an image augmentation method, Random-Cutout, is proposed, in which rectangles are selected and erased according to the number and size of the wheat ears in the image to simulate occlusion in real wheat images. Finally, a convolutional block attention module (CBAM) is added to the EfficientDet-D0 model after the backbone, making the model refine features, pay more attention to the wheat ears, and suppress useless background information. Extensive experiments feeding the features to the detection layer show that the counting accuracy of the improved EfficientDet-D0 model reaches 94%, about 2% higher than the original model, with a false detection rate of 5.8%, the lowest among the compared methods.
Affiliation(s)
- Jiali Cui
- School of Information Science and Technology, North China University of Technology, Beijing, China
|
49
|
Smith DT, Potgieter AB, Chapman SC. Scaling up high-throughput phenotyping for abiotic stress selection in the field. TAG. THEORETICAL AND APPLIED GENETICS. THEORETISCHE UND ANGEWANDTE GENETIK 2021; 134:1845-1866. [PMID: 34076731 DOI: 10.1007/s00122-021-03864-5] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/02/2021] [Accepted: 05/13/2021] [Indexed: 05/18/2023]
Abstract
High-throughput phenotyping (HTP) is in its infancy for deployment in large-scale breeding programmes. With the ability to measure correlated traits associated with physiological ideotypes, in-field phenotyping methods are available for screening abiotic stress responses. As cropping environments become more hostile and unpredictable due to the effects of climate change, the need to characterise variability across spatial and temporal scales will become increasingly important. The sensor technologies that have enabled HTP, from macroscopic through to satellite sensors, may also be used to complement spatial characterisation via envirotyping, which can improve estimates of genotypic performance across environments by better accounting for variation at the plot, trial, and inter-trial levels. Climate change is leading to increased variation at all physical and temporal scales in the cropping environment. Maintaining yield stability under greater levels of abiotic stress, while capitalising upon yield potential in good years, requires approaches to plant breeding that target the physiological limitations to crop performance in specific environments. This requires dynamic modelling of conditions within target populations of environments, G×E×M predictions, clustering of environments so breeding trajectories can be defined, and the development of screens that enable selection for genetic gain. High-throughput phenotyping, combined with related technologies used for envirotyping, can help address these challenges. Non-destructive analysis of the morphological, biochemical, and physiological qualities of plant canopies using HTP has great potential to complement whole-genome selection, which is becoming increasingly common in breeding programmes. A range of novel analytic techniques, such as machine learning and deep learning, combined with a widening range of sensors, allows rapid assessment of large breeding populations in a repeatable and objective manner. Secondary traits underlying radiation use efficiency and water use efficiency can be screened with HTP for selection at the early stages of a breeding programme. HTP and envirotyping technologies can also characterise spatial variability at trial and within-plot levels, which can be used to correct for spatial variation that confounds measurements of genotypic values. This review explores HTP for abiotic stress selection through a physiological trait lens, and additionally investigates the use of envirotyping and EC to characterise spatial variability at all physical scales in multi-environment trials (METs).
Affiliation(s)
- Daniel T Smith
- The University of Queensland, St Lucia, Brisbane, QLD, 4072, Australia
- Andries B Potgieter
- Centre for Crop Science, Queensland Alliance for Agriculture and Food Innovation, University of Queensland, Brisbane, QLD, 4072, Australia
- Scott C Chapman
- The University of Queensland, St Lucia, Brisbane, QLD, 4072, Australia.
|
50
|
Gomez AS, Aptoula E, Parsons S, Bosilj P. Deep Regression Versus Detection for Counting in Robotic Phenotyping. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3062586] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|