1
Nandudu L, Strock C, Ogbonna A, Kawuki R, Jannink JL. Genetic analysis of cassava brown streak disease root necrosis using image analysis and genome-wide association studies. Frontiers in Plant Science 2024; 15:1360729. PMID: 38562560; PMCID: PMC10982329; DOI: 10.3389/fpls.2024.1360729. Received: 12/23/2023; Accepted: 03/07/2024.
Abstract
Cassava brown streak disease (CBSD) poses a substantial threat to food security. To address this challenge, we used PlantCV to extract CBSD root necrosis image traits from 320 clones, with the aim of identifying associated genomic regions and candidate genes through genome-wide association studies (GWAS). Results revealed strong correlations among certain root necrosis image traits, such as necrotic area fraction and necrotic width fraction, as well as between the convex hull area of root necrosis and the percentage of necrosis. Low correlations were observed between CBSD scores obtained from the 1-5 scoring method and all root necrosis traits. Broad-sense heritability estimates of root necrosis image traits ranged from low to moderate, with the highest estimate of 0.42 observed for the percentage of necrosis, while narrow-sense heritability remained consistently low, ranging from 0.03 to 0.22. Leveraging data from 30,750 SNPs obtained through DArT genotyping, GWAS identified eight SNPs on chromosomes 1, 7, and 11 associated with both the ellipse eccentricity of root necrosis and the percentage of necrosis. Candidate gene analysis in the 172.2 kb region on chromosome 1 revealed 24 potential genes with diverse functions, including ubiquitin-protein ligases, DNA-binding transcription factors, and RNA metabolism proteins, among others. Although we initially expected the objectivity of image analysis to yield higher heritability estimates and stronger genomic associations than the 1-5 scoring method, the results were unexpectedly lower. Further research is needed to comprehensively understand the genetic basis of these traits and their relevance to cassava breeding and disease management.
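The broad-sense heritability estimates quoted above are conventionally derived from one-way ANOVA variance components over clone replicates; a minimal sketch under that assumption (the data below are simulated for illustration, not the study's):

```python
import numpy as np

def broad_sense_heritability(values):
    """Entry-mean broad-sense heritability H^2 = Vg / (Vg + Ve/r)
    from a balanced one-way design: values[i, j] is the phenotype of
    clone i in replicate j."""
    n_clones, r = values.shape
    clone_means = values.mean(axis=1)
    grand_mean = values.mean()
    # mean squares from one-way ANOVA
    msg = r * np.sum((clone_means - grand_mean) ** 2) / (n_clones - 1)
    mse = np.sum((values - clone_means[:, None]) ** 2) / (n_clones * (r - 1))
    vg = max((msg - mse) / r, 0.0)  # genotypic variance component
    ve = mse                        # residual variance
    return vg / (vg + ve / r)

rng = np.random.default_rng(0)
g = rng.normal(0.0, 1.0, size=(50, 1))     # clone effects (Vg = 1)
y = g + rng.normal(0.0, 1.5, size=(50, 3)) # 3 noisy replicates per clone
print(round(broad_sense_heritability(y), 2))
```

With no residual noise the estimate goes to 1; with noise dominating, it falls toward 0, mirroring the low-to-moderate range reported above.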
Affiliation(s)
- Leah Nandudu
- School of Integrative Plant Sciences, Section of Plant Breeding and Genetics, Cornell University, Ithaca, NY, United States
- Root Crops Department, National Crops Resources Research Institute (NaCRRI), Kampala, Uganda
- Christopher Strock
- School of Integrative Plant Sciences, Section of Plant Breeding and Genetics, Cornell University, Ithaca, NY, United States
- Alex Ogbonna
- School of Integrative Plant Sciences, Section of Plant Breeding and Genetics, Cornell University, Ithaca, NY, United States
- Robert Kawuki
- Root Crops Department, National Crops Resources Research Institute (NaCRRI), Kampala, Uganda
- Jean-Luc Jannink
- School of Integrative Plant Sciences, Section of Plant Breeding and Genetics, Cornell University, Ithaca, NY, United States
- US Department of Agriculture, Agricultural Research Service (USDA-ARS), Ithaca, NY, United States
2
Ma N, Su Y, Yang L, Li Z, Yan H. Wheat Seed Detection and Counting Method Based on Improved YOLOv8 Model. Sensors (Basel) 2024; 24:1654. PMID: 38475189; DOI: 10.3390/s24051654. Received: 01/29/2024; Revised: 02/29/2024; Accepted: 03/01/2024.
Abstract
Wheat seed detection has important applications in calculating thousand-grain weight and in crop breeding. To address the seed accumulation, adhesion, and occlusion that can lead to low counting accuracy, while maintaining fast detection speed and high accuracy, a wheat seed counting method is proposed to provide technical support for the development of an embedded seed-counter platform. This study proposes a lightweight real-time wheat seed detection model, YOLOv8-HD, based on YOLOv8. First, we introduce shared convolutional layers into the YOLOv8 detection head, reducing the number of parameters and achieving a lightweight design that improves runtime speed. Second, we incorporate a Vision Transformer with a Deformable Attention mechanism into the C2f module of the backbone network to enhance the network's feature extraction capability and improve detection accuracy. The results show that in stacked scenes with impurities (severe seed adhesion), the YOLOv8-HD model achieves a mean average precision (mAP) of 77.6%, which is 9.1% higher than YOLOv8. Across all scenes, YOLOv8-HD achieves an mAP of 99.3%, which is 16.8% higher than YOLOv8. The memory size of the YOLOv8-HD model is 6.35 MB, approximately four-fifths that of YOLOv8; its GFLOPs decrease by 16%; and its inference time is 2.86 ms on GPU, lower than that of YOLOv8. Finally, extensive experiments showed that YOLOv8-HD outperforms other mainstream networks in terms of mAP, speed, and model size. YOLOv8-HD can therefore efficiently detect wheat seeds in various scenarios, providing technical support for the development of seed counting instruments.
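Counting from a detector's raw output generally reduces to a confidence filter plus non-maximum suppression, which is exactly where adhered or occluded seeds are won or lost; a generic post-processing sketch (not the YOLOv8-HD code; the boxes, scores, and thresholds below are invented):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def count_seeds(boxes, scores, conf=0.25, iou_thr=0.5):
    """Count one seed per box surviving confidence filtering + greedy NMS."""
    order = np.argsort(scores)[::-1]  # highest confidence first
    keep = []
    for i in order:
        if scores[i] < conf:
            continue
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return len(keep)

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.6, 0.8])
print(count_seeds(boxes, scores))  # → 2: the two overlapping boxes collapse to one seed
```

A too-high IoU threshold merges touching seeds into one count; a too-low one double-counts them, which is why adhesion-heavy scenes are the hard case the paper targets.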
Affiliation(s)
- Na Ma
- College of Information Science and Engineering, Shanxi Agricultural University, Taigu District, Jinzhong 030801, China
- Yaxin Su
- College of Information Science and Engineering, Shanxi Agricultural University, Taigu District, Jinzhong 030801, China
- Lexin Yang
- College of Information Science and Engineering, Shanxi Agricultural University, Taigu District, Jinzhong 030801, China
- Zhongtao Li
- College of Information Science and Engineering, Shanxi Agricultural University, Taigu District, Jinzhong 030801, China
- Hongwen Yan
- College of Information Science and Engineering, Shanxi Agricultural University, Taigu District, Jinzhong 030801, China
3
Zou Y, Tian Z, Cao J, Ren Y, Zhang Y, Liu L, Zhang P, Ni J. Rice Grain Detection and Counting Method Based on TCLE-YOLO Model. Sensors (Basel) 2023; 23:9129. PMID: 38005517; PMCID: PMC10675024; DOI: 10.3390/s23229129. Received: 10/11/2023; Revised: 11/06/2023; Accepted: 11/10/2023.
Abstract
Thousand-grain weight is the main parameter for accurately estimating rice yields, and it is an important indicator for variety breeding and cultivation management. Accurate detection and counting of rice grains is an important prerequisite for thousand-grain weight measurement. However, because rice grains are small targets with high overall similarity and varying degrees of adhesion, accurately detecting and counting them during thousand-grain weight measurement remains challenging. A deep learning model based on a transformer encoder and a coordinate attention module was therefore designed for detecting and counting rice grains, named TCLE-YOLO, with YOLOv5 as the backbone network. Specifically, to improve the model's feature representation of small target regions, a coordinate attention (CA) module was introduced into the backbone module of YOLOv5. In addition, another detection head for small targets was designed based on a low-level, high-resolution feature map, and a transformer encoder was applied to the neck module to expand the receptive field of the network and enhance the extraction of key features of detected targets. This made the additional detection head more sensitive to rice grains, especially heavily adhered grains. Finally, EIoU loss was used to further improve accuracy. The experimental results show that, on the self-built rice grain dataset, the precision, recall, and mAP@0.5 of the TCLE-YOLO model were 99.20%, 99.10%, and 99.20%, respectively. Compared with several state-of-the-art models, the proposed TCLE-YOLO model achieves better detection performance. In summary, the rice grain detection method built in this study is suitable for rice grain recognition and counting, and it can provide guidance for accurate thousand-grain weight measurement and the effective evaluation of rice breeding.
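Once grains are reliably counted, the thousand-grain weight the abstract motivates is simple arithmetic: weigh the counted sample and scale the per-grain weight to 1,000 grains. A sketch (the sample weight and count below are invented):

```python
def thousand_grain_weight(sample_weight_g, grain_count):
    """Thousand-grain weight in grams: per-grain weight of a counted,
    weighed sample scaled to 1,000 grains."""
    return 1000.0 * sample_weight_g / grain_count

# e.g. a detector counts 980 grains in a sample weighing 25.4 g
print(thousand_grain_weight(25.4, 980))  # ≈ 25.92 g
```

The counting error therefore propagates directly into the weight estimate, which is why near-perfect detection (mAP@0.5 above 99%) matters for this trait.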
Affiliation(s)
- Yu Zou
- Rice Research Institute, Anhui Academy of Agricultural Sciences, Hefei 230031, China
- Zefeng Tian
- College of Engineering, Anhui Agricultural University, Hefei 230036, China
- Jiawen Cao
- College of Engineering, Anhui Agricultural University, Hefei 230036, China
- Yi Ren
- College of Agriculture, Anhui Science and Technology University, Chuzhou 239000, China
- Yaping Zhang
- Hefei Institute of Technology Innovation Engineering, Chinese Academy of Sciences, Hefei 230094, China
- Lu Liu
- Hefei Institute of Technology Innovation Engineering, Chinese Academy of Sciences, Hefei 230094, China
- Peijiang Zhang
- Rice Research Institute, Anhui Academy of Agricultural Sciences, Hefei 230031, China
- Jinlong Ni
- Rice Research Institute, Anhui Academy of Agricultural Sciences, Hefei 230031, China
4
Lu Y, Wang J, Fu L, Yu L, Liu Q. High-throughput and separating-free phenotyping method for on-panicle rice grains based on deep learning. Frontiers in Plant Science 2023; 14:1219584. PMID: 37790779; PMCID: PMC10544938; DOI: 10.3389/fpls.2023.1219584. Received: 05/09/2023; Accepted: 08/28/2023.
Abstract
Rice is a vital food crop that feeds most of the global population. Cultivating high-yielding, superior-quality rice varieties has always been a critical research direction. Rice grain-related traits can serve as crucial phenotypic evidence for assessing yield potential and quality. However, the analysis of rice grain traits is still mainly based on manual counting or various seed evaluation devices, which incur high costs in time and money. This study proposed a high-precision phenotyping method for rice panicles based on visible-light scanning imaging and deep learning, which achieves high-throughput extraction of critical panicle traits without separating and threshing the panicles. Imaging of the rice panicles was performed by visible-light scanning. Grains were detected and segmented using a Faster R-CNN-based model, and an improved Pix2Pix model cascaded with it was used to compensate for the information loss caused by natural occlusion between rice grains. An image processing pipeline was designed to calculate fifteen phenotypic traits of the on-panicle rice grains. Eight rice varieties were used to verify the reliability of the method. The R2 values between the method's extractions and manual measurements of grain number, grain length, grain width, grain length/width ratio, and grain perimeter were 0.99, 0.96, 0.83, 0.90, and 0.84, respectively, and their mean absolute percentage errors (MAPE) were 1.65%, 7.15%, 5.76%, 9.13%, and 6.51%. The average imaging time for each rice panicle was about 60 seconds, and the total time for data processing and trait extraction was less than 10 seconds. By randomly selecting one thousand grains from each of the eight varieties and analyzing their traits, certain differences between varieties were found in the distributions of grain length, grain width, and grain length/width ratio. The results show that this method is suitable for high-throughput, non-destructive, and high-precision extraction of on-panicle grain traits without separation. Its low cost and robust performance make it easy to popularize. These results will provide new ideas and methods for extracting panicle traits of rice and other crops.
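The R2 and MAPE figures above compare extracted traits against manual measurements; the metrics themselves are standard and can be sketched as follows (the sample counts below are illustrative, not the paper's data):

```python
import numpy as np

def r2(y_true, y_pred):
    """Coefficient of determination between manual and extracted values."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

manual = np.array([120.0, 98.0, 143.0, 110.0])     # grains per panicle, hand-counted
extracted = np.array([118.0, 101.0, 140.0, 112.0]) # grains per panicle, from images
print(r2(manual, extracted), mape(manual, extracted))  # ≈ 0.976, ≈ 2.16
```

R2 rewards tracking the variation between panicles, while MAPE reports the typical relative error per panicle; quoting both, as the paper does, guards against one metric flattering the method.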
Affiliation(s)
- Yuwei Lu
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou, China
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, China
- MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Jinhu Wang
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou, China
- Ling Fu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, China
- MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Department of Physics, School of Science, Hainan University, Haikou, China
- Lejun Yu
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou, China
- Qian Liu
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou, China
5
Xiang S, Wang S, Xu M, Wang W, Liu W. YOLO POD: a fast and accurate multi-task model for dense Soybean Pod counting. Plant Methods 2023; 19:8. PMID: 36709313; PMCID: PMC9883929; DOI: 10.1186/s13007-023-00985-4. Received: 09/30/2022; Accepted: 01/18/2023.
Abstract
BACKGROUND The number of soybean pods is one of the most important indicators of soybean yield, and pod counting is crucial for yield estimation, cultivation management, and variety breeding. Counting pods manually is slow and laborious. Object detection networks are commonly used for crop counting, but scattered and overlapping pods make pod detection and counting difficult. RESULTS We propose an approach, named YOLO POD, based on the YOLO X framework. On top of YOLO X, we added a block for predicting the number of pods and modified the loss function, thus constructing a multi-task model, and introduced the Convolutional Block Attention Module (CBAM). We achieve accurate identification and counting of pods without reducing inference speed. The results showed that the R2 between the number predicted by YOLO POD and the ground truth reached 0.967, an improvement of 0.049 over YOLO X, while the inference time increased by only 0.08 s. Moreover, the MAE, MAPE, and RMSE are only 4.18, 10.0%, and 6.48, respectively; the deviation is very small. CONCLUSIONS We have achieved the first accurate counting of soybean pods and proposed a new solution for the detection and counting of dense objects.
Affiliation(s)
- Shuai Xiang
- College of Agronomy, Sichuan Agricultural University, 211-Huimin Road, Wenjiang District, Chengdu, 611130, People's Republic of China
- Key Laboratory of Crop Ecophysiology and Farming System in Southwest China (Ministry of Agriculture), Sichuan Engineering Research Center for Crop Strip Intercropping System, Sichuan Agricultural University, Chengdu, 611130, People's Republic of China
- Siyu Wang
- College of Agronomy, Sichuan Agricultural University, 211-Huimin Road, Wenjiang District, Chengdu, 611130, People's Republic of China
- Key Laboratory of Crop Ecophysiology and Farming System in Southwest China (Ministry of Agriculture), Sichuan Engineering Research Center for Crop Strip Intercropping System, Sichuan Agricultural University, Chengdu, 611130, People's Republic of China
- Mei Xu
- College of Agronomy, Sichuan Agricultural University, 211-Huimin Road, Wenjiang District, Chengdu, 611130, People's Republic of China
- Key Laboratory of Crop Ecophysiology and Farming System in Southwest China (Ministry of Agriculture), Sichuan Engineering Research Center for Crop Strip Intercropping System, Sichuan Agricultural University, Chengdu, 611130, People's Republic of China
- Wenyan Wang
- College of Agronomy, Sichuan Agricultural University, 211-Huimin Road, Wenjiang District, Chengdu, 611130, People's Republic of China
- Key Laboratory of Crop Ecophysiology and Farming System in Southwest China (Ministry of Agriculture), Sichuan Engineering Research Center for Crop Strip Intercropping System, Sichuan Agricultural University, Chengdu, 611130, People's Republic of China
- Weiguo Liu
- College of Agronomy, Sichuan Agricultural University, 211-Huimin Road, Wenjiang District, Chengdu, 611130, People's Republic of China
- Key Laboratory of Crop Ecophysiology and Farming System in Southwest China (Ministry of Agriculture), Sichuan Engineering Research Center for Crop Strip Intercropping System, Sichuan Agricultural University, Chengdu, 611130, People's Republic of China
6
James C, Gu Y, Potgieter A, David E, Madec S, Guo W, Baret F, Eriksson A, Chapman S. From Prototype to Inference: A Pipeline to Apply Deep Learning in Sorghum Panicle Detection. Plant Phenomics 2023; 5:0017. PMID: 37040294; PMCID: PMC10076054; DOI: 10.34133/plantphenomics.0017. Received: 08/28/2022; Accepted: 12/01/2022.
Abstract
Head (panicle) density is a major component in understanding crop yield, especially in crops that produce variable numbers of tillers, such as sorghum and wheat. The use of panicle density both in plant breeding and in agronomic scouting of commercial crops typically relies on manual counting, which is an inefficient and tedious process. Because red-green-blue images are readily available, machine learning approaches have been applied to replace manual counting. However, much of this research focuses on detection per se in limited testing conditions and does not provide a general protocol for utilizing deep-learning-based counting. In this paper, we provide a comprehensive pipeline, from data collection to model deployment, for deep-learning-assisted panicle yield estimation in sorghum. The pipeline spans data collection and model training through model validation and model deployment in commercial fields. Accurate model training is the foundation of the pipeline. However, in natural environments the deployment dataset frequently differs from the training data (domain shift), causing the model to fail, so a robust model is essential to build a reliable solution. Although we demonstrate our pipeline in a sorghum field, it can be generalized to other grain species. Our pipeline provides a high-resolution head density map that can be utilized for diagnosis of agronomic variability within a field, and it was built without commercial software.
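A head-density map of the kind the pipeline outputs can be built by binning detected panicle centres over the field; a minimal sketch (the coordinates, field size, and cell size below are invented for illustration):

```python
import numpy as np

def head_density_map(centers, field_shape, cell=5.0):
    """Bin detected panicle centres (x, y in metres) into a grid of
    cell x cell metre cells; dividing by cell area gives heads per m^2."""
    h = int(np.ceil(field_shape[1] / cell))  # rows along y
    w = int(np.ceil(field_shape[0] / cell))  # columns along x
    grid = np.zeros((h, w))
    for x, y in centers:
        grid[int(y // cell), int(x // cell)] += 1
    return grid / (cell * cell)  # counts -> density (heads per m^2)

centers = [(1.0, 1.0), (2.0, 3.0), (12.0, 1.0)]  # three detected heads
dmap = head_density_map(centers, field_shape=(20.0, 10.0), cell=5.0)
print(dmap.shape)  # (2, 4)
```

Mapping density per cell rather than reporting one field-level count is what makes the output usable for diagnosing within-field agronomic variability.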
Affiliation(s)
- Chrisbin James
- School of Agriculture and Food Sciences, The University of Queensland, Brisbane, Australia
- Yanyang Gu
- School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, Australia
- Andries Potgieter
- Queensland Alliance for Agriculture and Food Innovation, The University of Queensland, Brisbane, Australia
- Wei Guo
- Graduate School of Agricultural and Life Sciences, The University of Tokyo, Tokyo, Japan
- Frédéric Baret
- Institut National de la Recherche Agronomique, Paris, France
- Anders Eriksson
- School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, Australia
- Scott Chapman
- School of Agriculture and Food Sciences, The University of Queensland, Brisbane, Australia
7
Huang C, Li W, Zhang Z, Hua X, Yang J, Ye J, Duan L, Liang X, Yang W. An Intelligent Rice Yield Trait Evaluation System Based on Threshed Panicle Compensation. Frontiers in Plant Science 2022; 13:900408. PMID: 35937323; PMCID: PMC9354939; DOI: 10.3389/fpls.2022.900408. Received: 03/20/2022; Accepted: 06/22/2022.
Abstract
High-throughput phenotyping of yield-related traits is meaningful and necessary for rice breeding and genetic study. Conventional rice yield-related trait evaluation faces the problems of difficult rice threshing, complex measurement processes, and low efficiency. To solve these problems, a novel intelligent system, comprising an integrated threshing unit, grain conveyor-imaging units, a threshed-panicle conveyor-imaging unit, and specialized image analysis software, is proposed to evaluate rice yield traits with high throughput and high accuracy. To improve threshed panicle detection accuracy, Region of Interest Align, a Convolution-Batch normalization-activation with Leaky ReLU module, a Squeeze-and-Excitation unit, and an optimal anchor size were adopted to optimize the Faster R-CNN architecture, termed 'TPanicle-RCNN'; the new model achieved an F1 score of 0.929, an increase of 0.044, and was robust to indica and japonica varieties. Additionally, AI cloud computing was adopted, which dramatically reduced the system cost and improved flexibility. To evaluate system accuracy and efficiency, 504 panicle samples were tested, and the total spikelet measurement error decreased from 11.44% to 2.99% with threshed panicle compensation. The average measurement time was approximately 40 s per sample, about twenty times more efficient than manual measurement. In this study, an automatic and intelligent system for rice yield-related trait evaluation was developed, providing an efficient and reliable tool for rice breeding and genetic research.
Affiliation(s)
- Wanneng Yang
- National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research (Wuhan), College of Engineering, Huazhong Agricultural University, Wuhan, China
8
Zhu R, Wang X, Yan Z, Qiao Y, Tian H, Hu Z, Zhang Z, Li Y, Zhao H, Xin D, Chen Q. Exploring Soybean Flower and Pod Variation Patterns During Reproductive Period Based on Fusion Deep Learning. Frontiers in Plant Science 2022; 13:922030. PMID: 35909768; PMCID: PMC9326440; DOI: 10.3389/fpls.2022.922030. Received: 04/17/2022; Accepted: 06/20/2022.
Abstract
Flower and pod drop are important factors in soybean yield, and using computer vision techniques to obtain flower and pod phenotypes in bulk, quickly and accurately, is key to studying the soybean flower and pod drop rate (PDR). This paper compared a variety of deep learning algorithms for identifying and counting soybean flowers and pods and found that the Faster R-CNN model performed best. The Faster R-CNN model was then further improved and optimized based on the characteristics of soybean flowers and pods, increasing the accuracy of the final model for identifying flowers and pods to 94.36% and 91%, respectively. Afterward, a fusion model for soybean flower and pod recognition and counting was proposed based on the Faster R-CNN model, for which the coefficients of determination (R2) between the fusion model's counts of soybean flowers and pods and manual counts reached 0.965 and 0.98, respectively. These results show that the fusion model is a robust recognition and counting algorithm that can reduce labor intensity and improve efficiency. Its application will greatly facilitate the study of the variation patterns of soybean flowers and pods during the reproductive period. Finally, based on the fusion model, we explored the variation patterns of soybean flowers and pods during the reproductive period, their spatial distribution patterns, and flower and pod drop patterns.
Affiliation(s)
- Rongsheng Zhu
- College of Arts and Sciences, Northeast Agricultural University, Harbin, China
- Xueying Wang
- College of Engineering, Northeast Agricultural University, Harbin, China
- Zhuangzhuang Yan
- College of Engineering, Northeast Agricultural University, Harbin, China
- Yinglin Qiao
- College of Engineering, Northeast Agricultural University, Harbin, China
- Huilin Tian
- College of Agriculture, Northeast Agricultural University, Harbin, China
- Zhenbang Hu
- College of Agriculture, Northeast Agricultural University, Harbin, China
- Zhanguo Zhang
- College of Arts and Sciences, Northeast Agricultural University, Harbin, China
- Yang Li
- College of Arts and Sciences, Northeast Agricultural University, Harbin, China
- Hongjie Zhao
- College of Arts and Sciences, Northeast Agricultural University, Harbin, China
- Dawei Xin
- College of Agriculture, Northeast Agricultural University, Harbin, China
- Qingshan Chen
- College of Agriculture, Northeast Agricultural University, Harbin, China
9
Deng R, Qi L, Pan W, Wang Z, Fu D, Yang X. Automatic estimation of rice grain number based on a convolutional neural network. Journal of the Optical Society of America A: Optics, Image Science, and Vision 2022; 39:1034-1044. PMID: 36215533; DOI: 10.1364/josaa.459580. Received: 03/29/2022; Accepted: 04/26/2022.
Abstract
The grain number on the rice panicle, which directly determines rice yield, is a very important agronomic trait in rice breeding and yield-related research. However, manually counting the grain number per rice panicle is time-consuming, error-prone, and laborious. In this study, a novel prototype, dubbed the "GN-System," was developed for the automatic calculation of grain number per rice panicle based on a deep convolutional neural network. First, a whole-panicle grain detection (WPGD) model was established using the Cascade R-CNN method embedded with a feature pyramid network for grain recognition and localization. Then, the GN-System integrating the WPGD model was developed to automatically calculate grain number per rice panicle. The performance of the GN-System was evaluated in terms of estimation stability and accuracy. One hundred twenty-four panicle samples were tested to evaluate the estimation stability of the GN-System. The results showed that the coefficient of determination (R2) was 0.810, the mean absolute percentage error was 8.44%, and the root mean square error was 16.73. Another 12 panicle samples were tested to further evaluate the estimation accuracy of the GN-System; the results revealed a mean accuracy of 90.6%. The GN-System, which can quickly and accurately predict grain number per rice panicle, provides an effective, convenient, and low-cost tool for yield evaluation, crop breeding, and genetic research, and it has great potential for assisting phenotypic research.
10
Xu Y, Liu X, Cao X, Huang C, Liu E, Qian S, Liu X, Wu Y, Dong F, Qiu CW, Qiu J, Hua K, Su W, Wu J, Xu H, Han Y, Fu C, Yin Z, Liu M, Roepman R, Dietmann S, Virta M, Kengara F, Zhang Z, Zhang L, Zhao T, Dai J, Yang J, Lan L, Luo M, Liu Z, An T, Zhang B, He X, Cong S, Liu X, Zhang W, Lewis JP, Tiedje JM, Wang Q, An Z, Wang F, Zhang L, Huang T, Lu C, Cai Z, Wang F, Zhang J. Artificial intelligence: A powerful paradigm for scientific research. The Innovation 2021; 2:100179. PMID: 34877560; PMCID: PMC8633405; DOI: 10.1016/j.xinn.2021.100179. Received: 08/29/2021; Accepted: 10/26/2021.
Abstract
Artificial intelligence (AI), coupled with promising machine learning (ML) techniques from computer science, is broadly affecting many aspects of various fields, including science and technology, industry, and even our day-to-day life. ML techniques have been developed to analyze high-throughput data and obtain useful insights, categorize, predict, and make evidence-based decisions in novel ways, which will promote the growth of novel applications and fuel the sustained growth of AI. This paper undertakes a comprehensive survey of the development and application of AI in different aspects of the fundamental sciences, including information science, mathematics, medical science, materials science, geoscience, life science, physics, and chemistry. The challenges each scientific discipline faces, and the potential of AI techniques to address them, are discussed in detail. Moreover, we shed light on new research trends entailing the integration of AI into each scientific discipline. The aim of this paper is to provide a broad research guideline for the fundamental sciences with the potential infusion of AI, to help motivate researchers to deeply understand state-of-the-art AI applications in the fundamental sciences, and thereby to help promote their continuous development.
Collapse
Affiliation(s)
- Yongjun Xu: Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Xin Liu: Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Xin Cao: Zhongshan Hospital Institute of Clinical Science, Fudan University, Shanghai 200032, China
- Changping Huang: Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Enke Liu: Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China; Songshan Lake Materials Laboratory, Dongguan, Guangdong 523808, China
- Sen Qian: Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China
- Xingchen Liu: Institute of Coal Chemistry, Chinese Academy of Sciences, Taiyuan 030001, China
- Yanjun Wu: Institute of Software, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Fengliang Dong: National Center for Nanoscience and Technology, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Cheng-Wei Qiu: Department of Electrical and Computer Engineering, National University of Singapore, Singapore 117583, Singapore
- Junjun Qiu: Department of Gynaecology, Obstetrics and Gynaecology Hospital, Fudan University, Shanghai 200011, China; Shanghai Key Laboratory of Female Reproductive Endocrine-Related Diseases, Shanghai 200011, China
- Keqin Hua: Department of Gynaecology, Obstetrics and Gynaecology Hospital, Fudan University, Shanghai 200011, China; Shanghai Key Laboratory of Female Reproductive Endocrine-Related Diseases, Shanghai 200011, China
- Wentao Su: School of Food Science and Technology, Dalian Polytechnic University, Dalian 116034, China
- Jian Wu: Second Affiliated Hospital School of Medicine, and School of Public Health, Zhejiang University, Hangzhou 310058, China
- Huiyu Xu: Department of Obstetrics and Gynecology, Peking University Third Hospital, Beijing 100191, China
- Yong Han: Zhejiang Provincial People’s Hospital, Hangzhou 310014, China
- Chenguang Fu: School of Materials Science and Engineering, Zhejiang University, Hangzhou 310027, China
- Zhigang Yin: Fujian Institute of Research on the Structure of Matter, Chinese Academy of Sciences, Fuzhou 350002, China
- Miao Liu: Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China; Songshan Lake Materials Laboratory, Dongguan, Guangdong 523808, China
- Ronald Roepman: Medical Center, Radboud University, 6500 Nijmegen, the Netherlands
- Sabine Dietmann: Institute for Informatics, Washington University School of Medicine, St. Louis, MO 63110, USA
- Marko Virta: Department of Microbiology, University of Helsinki, 00014 Helsinki, Finland
- Fredrick Kengara: School of Pure and Applied Sciences, Bomet University College, Bomet 20400, Kenya
- Ze Zhang: Agriculture College of Shihezi University, Xinjiang 832000, China
- Lifu Zhang: Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China; Agriculture College of Shihezi University, Xinjiang 832000, China
- Taolan Zhao: Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, Beijing 100101, China
- Ji Dai: The Brain Cognition and Brain Disease Institute, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; University of Chinese Academy of Sciences, Beijing 100049, China; Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen 518055, China
- Liang Lan: Department of Communication Studies, Hong Kong Baptist University, Hong Kong, China
- Ming Luo: South China Botanical Garden, Chinese Academy of Sciences, Guangzhou 510650, China; Center of Economic Botany, Core Botanical Gardens, Chinese Academy of Sciences, Guangzhou 510650, China
- Zhaofeng Liu: Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Tao An: Shanghai Astronomical Observatory, Chinese Academy of Sciences, Shanghai 200030, China
- Bin Zhang: Institute of Coal Chemistry, Chinese Academy of Sciences, Taiyuan 030001, China
- Xiao He: Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China
- Shan Cong: Suzhou Institute of Nano-Tech and Nano-Bionics, Chinese Academy of Sciences, Suzhou 215123, China
- Xiaohong Liu: Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China
- Wei Zhang: Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China
- James P. Lewis: Institute of Coal Chemistry, Chinese Academy of Sciences, Taiyuan 030001, China
- James M. Tiedje: Center for Microbial Ecology, Department of Plant, Soil and Microbial Sciences, Michigan State University, East Lansing, MI 48824, USA
- Qi Wang: Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China; Zhejiang Lab, Hangzhou 311121, China
- Zhulin An: Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Fei Wang: Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Libo Zhang: Institute of Software, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Tao Huang: Shanghai Institute of Nutrition and Health, Chinese Academy of Sciences, Shanghai 200031, China
- Chuan Lu: Department of Computer Science, Aberystwyth University, Aberystwyth, Ceredigion SY23 3FL, UK
- Zhipeng Cai: Department of Computer Science, Georgia State University, Atlanta, GA 30303, USA
- Fang Wang: Institute of Soil Science, Chinese Academy of Sciences, Nanjing 210008, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Jiabao Zhang: Institute of Soil Science, Chinese Academy of Sciences, Nanjing 210008, China; University of Chinese Academy of Sciences, Beijing 100049, China
11
Computer Vision and Machine Learning Analysis of Commercial Rice Grains: A Potential Digital Approach for Consumer Perception Studies. SENSORS 2021; 21:s21196354. [PMID: 34640673 PMCID: PMC8513047 DOI: 10.3390/s21196354] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/22/2021] [Revised: 09/16/2021] [Accepted: 09/22/2021] [Indexed: 01/05/2023]
Abstract
Rice quality assessment is essential for meeting high-quality standards and consumer demands. However, challenges remain in developing cost-effective and rapid techniques to assess commercial rice grain quality traits. This paper presents the application of computer vision (CV) and machine learning (ML) to classify commercial rice samples based on dimensionless morphometric parameters and color parameters extracted with CV algorithms from digital images obtained with a smartphone camera. An artificial neural network (ANN) model was developed using nine morpho-colorimetric parameters to classify rice samples into 15 commercial rice types. Furthermore, the ANN models were deployed and evaluated on a different imaging system to simulate their practical application under different conditions. Results showed that the best classification accuracy was obtained using the Bayesian Regularization (BR) training algorithm with ten hidden neurons: 91.6% (MSE < 0.01) and 88.5% (MSE = 0.01) for the training and testing stages, respectively, with an overall accuracy of 90.7% (Model 2). Deployment also showed high accuracy (93.9%) in classifying the rice samples. Industry adoption of rapid, reliable, and accurate methods, such as those presented here, may allow different morpho-colorimetric rice traits to be incorporated into consumer perception studies.
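The dimensionless morphometric descriptors this entry relies on can be computed from a binary grain mask with a few array operations. The sketch below is a toy stand-in for the paper's CV feature-extraction step, not the authors' pipeline: the mask, the crude boundary-pixel perimeter estimate, and the two descriptors shown (aspect ratio and circularity; the paper uses nine morpho-colorimetric parameters) are illustrative assumptions.

```python
import numpy as np

def morphometric_features(mask: np.ndarray) -> dict:
    """Dimensionless shape descriptors from a binary grain mask."""
    ys, xs = np.nonzero(mask)
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    area = mask.sum()
    # Crude perimeter: foreground pixels with at least one 4-neighbour
    # background pixel (interior pixels have all four neighbours set).
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int((mask & ~interior).sum())
    return {
        "aspect_ratio": max(height, width) / min(height, width),
        "circularity": 4 * np.pi * area / perimeter**2,  # 1.0 for an ideal disc
    }

# Synthetic 20 x 40 "grain" on a 60 x 60 canvas
grain = np.zeros((60, 60), dtype=bool)
grain[20:40, 10:50] = True
feats = morphometric_features(grain)
```

Because both descriptors are ratios, they are independent of image scale, which is what makes them comparable across the smartphone and deployment imaging systems described above.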
12
Automated Counting Grains on the Rice Panicle Based on Deep Learning Method. SENSORS 2021; 21:s21010281. [PMID: 33406615 PMCID: PMC7795532 DOI: 10.3390/s21010281] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/25/2020] [Revised: 12/30/2020] [Accepted: 12/30/2020] [Indexed: 11/17/2022]
Abstract
Grain number per rice panicle, which directly determines grain yield, is an important agronomic trait for rice breeding and yield-related research. However, manually counting grains per panicle is time-consuming, laborious, and error-prone. In this research, a grain detection model was proposed to automatically recognize and count grains on the primary branches of a rice panicle. The model used deep-learning-based image analysis with a convolutional neural network (CNN), integrating a feature pyramid network (FPN) into the Faster R-CNN architecture. The grain detection model was compared with the original Faster R-CNN model and the SSD model and was found to be more reliable and accurate. Its accuracy was not affected by the lighting conditions under which images of rice primary branches were taken, and the model worked well for branches with various numbers of grains. By applying the model to images of fresh and dry branches, it was also found that performance was not affected by grain moisture conditions. The overall accuracy of the grain detection model was 99.4%. These results demonstrate that the model is accurate, reliable, and suitable for detecting grains on rice panicles under a variety of conditions.
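The counting stage in a detect-then-count pipeline like this one reduces to thresholding detector confidences. The sketch below shows only that final step under hypothetical detections; the boxes and scores are invented for illustration and do not come from the authors' Faster R-CNN + FPN model.

```python
def count_grains(detections, score_threshold=0.5):
    """Count detector outputs above a confidence threshold.

    `detections` is a list of dicts with a "score" key, the shape most
    object-detection APIs produce after non-maximum suppression.
    """
    return sum(1 for det in detections if det["score"] >= score_threshold)

# Hypothetical detector output for one primary-branch image
detections = [
    {"box": (12, 30, 58, 80), "score": 0.98},
    {"box": (60, 28, 104, 77), "score": 0.91},
    {"box": (110, 25, 150, 70), "score": 0.32},  # low score: likely a false positive
]
n_grains = count_grains(detections)  # counts the two confident detections
```

The threshold trades missed grains against false positives; the high reported accuracy suggests the detector's score distribution separates the two cleanly.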
13
Guo Y, Li S, Zhang Z, Li Y, Hu Z, Xin D, Chen Q, Wang J, Zhu R. Automatic and Accurate Calculation of Rice Seed Setting Rate Based on Image Segmentation and Deep Learning. FRONTIERS IN PLANT SCIENCE 2021; 12:770916. [PMID: 34970287 PMCID: PMC8712771 DOI: 10.3389/fpls.2021.770916] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/05/2021] [Accepted: 11/23/2021] [Indexed: 05/03/2023]
Abstract
The rice seed setting rate (RSSR) is an important component in calculating rice yields and a key phenotype for genetic analysis. Automatic calculation of RSSR through computer vision technology has great significance for rice yield prediction. The basic premise for calculating RSSR is accurate, high-throughput identification of rice grains. In this study, we propose a method based on image segmentation and deep learning to automatically identify rice grains and calculate RSSR. By collecting information on the rice panicle, our proposed automatic image segmentation method can detect full and empty grains, after which RSSR can be calculated by our proposed rice seed setting rate optimization algorithm (RSSROA). Finally, the proposed method was used to predict RSSR; during this process, the average identification accuracy reached 99.43%. The method has therefore proven to be an effective, non-invasive approach for high-throughput identification and calculation of RSSR. It is also applicable to soybean, wheat, and other crops with similar characteristics.
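Once full and empty grains have been identified, the seed setting rate itself is a simple ratio. The sketch below shows that final calculation only; the counts are made-up example values, and the upstream segmentation and recognition stages (and the authors' RSSROA refinement) are not reproduced here.

```python
def seed_setting_rate(full_grains: int, empty_grains: int) -> float:
    """RSSR as the fraction of filled grains among all grains on a panicle.

    The two counts would come from the upstream segmentation and
    grain-recognition stages; here they are supplied directly.
    """
    total = full_grains + empty_grains
    if total == 0:
        raise ValueError("no grains detected")
    return full_grains / total

# Hypothetical counts for one panicle: 112 filled, 8 empty
rate = seed_setting_rate(full_grains=112, empty_grains=8)
```

Because the rate is a per-panicle ratio, small per-grain classification errors largely cancel, which is consistent with the high average accuracy reported above.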
Affiliation(s)
- Yixin Guo: College of Engineering, Northeast Agricultural University, Harbin, China
- Shuai Li: College of Engineering, Northeast Agricultural University, Harbin, China
- Zhanguo Zhang: College of Arts and Sciences, Northeast Agricultural University, Harbin, China
- Yang Li: College of Arts and Sciences, Northeast Agricultural University, Harbin, China
- Zhenbang Hu: Agricultural College, Northeast Agricultural University, Harbin, China
- Dawei Xin: Agricultural College, Northeast Agricultural University, Harbin, China
- Qingshan Chen: Agricultural College, Northeast Agricultural University, Harbin, China
- Jingguo Wang: Agricultural College, Northeast Agricultural University, Harbin, China
- Rongsheng Zhu: College of Arts and Sciences, Northeast Agricultural University, Harbin, China
- *Correspondence: Qingshan Chen, Jingguo Wang, Rongsheng Zhu
14
Hu W, Zhang C, Jiang Y, Huang C, Liu Q, Xiong L, Yang W, Chen F. Nondestructive 3D Image Analysis Pipeline to Extract Rice Grain Traits Using X-Ray Computed Tomography. PLANT PHENOMICS (WASHINGTON, D.C.) 2020; 2020:3414926. [PMID: 33313550 PMCID: PMC7706343 DOI: 10.34133/2020/3414926] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/10/2020] [Accepted: 03/27/2020] [Indexed: 05/11/2023]
Abstract
The traits of rice panicles play important roles in yield assessment, variety classification, rice breeding, and cultivation management. Most traditional grain phenotyping methods require threshing and thus are time-consuming and labor-intensive; moreover, these methods cannot obtain 3D grain traits. In this work, based on X-ray computed tomography, we proposed an image analysis method to extract twenty-two 3D grain traits. After 104 samples were tested, the R² values between the extracted and manual measurements of the grain number and grain length were 0.980 and 0.960, respectively. We also found a high correlation between the total grain volume and weight. In addition, the extracted 3D grain traits were used to classify the rice varieties, and the support vector machine classifier had a higher recognition accuracy than the stepwise discriminant analysis and random forest classifiers. In conclusion, we developed a 3D image analysis pipeline to extract rice grain traits using X-ray computed tomography that can provide more 3D grain information and could benefit future research on rice functional genomics and rice breeding.
Affiliation(s)
- Weijuan Hu: Crop Phenomics Joint Research Center, Wuhan 430070, China; Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, Beijing 100101, China
- Can Zhang: Crop Phenomics Joint Research Center, Wuhan 430070, China; Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, and Key Laboratory of Ministry of Education for Biomedical Photonics, Department of Biomedical Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
- Yuqiang Jiang: Crop Phenomics Joint Research Center, Wuhan 430070, China; Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, Beijing 100101, China
- Chenglong Huang: Crop Phenomics Joint Research Center, Wuhan 430070, China; National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research, Agricultural Bioinformatics Key Laboratory of Hubei Province, and College of Engineering, Huazhong Agricultural University, Wuhan 430070, China
- Qian Liu: Crop Phenomics Joint Research Center, Wuhan 430070, China; Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, and Key Laboratory of Ministry of Education for Biomedical Photonics, Department of Biomedical Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
- Lizhong Xiong: Crop Phenomics Joint Research Center, Wuhan 430070, China; National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research, Agricultural Bioinformatics Key Laboratory of Hubei Province, and College of Engineering, Huazhong Agricultural University, Wuhan 430070, China
- Wanneng Yang: Crop Phenomics Joint Research Center, Wuhan 430070, China; National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research, Agricultural Bioinformatics Key Laboratory of Hubei Province, and College of Engineering, Huazhong Agricultural University, Wuhan 430070, China
- Fan Chen: Crop Phenomics Joint Research Center, Wuhan 430070, China; Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, Beijing 100101, China