1.
Morales-Martín A, Mesas-Carrascosa FJ, Gutiérrez PA, Pérez-Porras FJ, Vargas VM, Hervás-Martínez C. Deep Ordinal Classification in Forest Areas Using Light Detection and Ranging Point Clouds. Sensors (Basel) 2024; 24:2168. [PMID: 38610379] [PMCID: PMC11014040] [DOI: 10.3390/s24072168] [Received: 02/21/2024] [Revised: 03/20/2024] [Accepted: 03/26/2024] [Indexed: 04/14/2024]
Abstract
Recent advances in Deep Learning and aerial Light Detection And Ranging (LiDAR) offer the possibility of refining the classification and segmentation of 3D point clouds to contribute to the monitoring of complex environments. In this context, the present study focuses on developing an ordinal classification model for forest areas in which LiDAR point clouds can be classified into four ordered classes: ground, low vegetation, medium vegetation, and high vegetation. To do so, an effective soft-labeling technique based on a novel generalized exponential function (CE-GE) is applied to the PointNet network architecture. Statistical analyses based on the Kolmogorov-Smirnov and Student's t-tests reveal that the CE-GE method achieves the best results for all evaluation metrics compared with the other methodologies. A comparison of the confusion matrices of the best alternative and the standard categorical cross-entropy method shows that the smoothed ordinal classification is more consistent than the nominal approach. Thus, the proposed methodology significantly improves the point-by-point classification of PointNet, reducing errors in distinguishing between the middle classes (low vegetation and medium vegetation).
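The soft-labeling idea described in this abstract can be sketched as follows: instead of a one-hot target, probability mass is spread over neighboring ordinal classes, decaying with ordinal distance from the true class, and the network is trained with cross-entropy against that soft distribution. The simple exponential decay below is an illustrative stand-in, not the authors' exact generalized exponential (CE-GE) function, and the function names are hypothetical:

```python
import numpy as np

def exponential_soft_labels(true_class: int, n_classes: int, tau: float = 1.0) -> np.ndarray:
    """Ordinal soft labels: probability mass decays exponentially with
    ordinal distance from the true class. A simplified stand-in for the
    paper's generalized exponential soft-labeling (CE-GE)."""
    distances = np.abs(np.arange(n_classes) - true_class)
    weights = np.exp(-distances / tau)
    return weights / weights.sum()

def soft_cross_entropy(soft_target: np.ndarray, predicted_probs: np.ndarray) -> float:
    """Cross-entropy between a soft target distribution and model output."""
    eps = 1e-12
    return float(-np.sum(soft_target * np.log(predicted_probs + eps)))

# Four ordinal classes: ground, low, medium, high vegetation;
# true class is "low vegetation" (index 1).
target = exponential_soft_labels(true_class=1, n_classes=4, tau=0.5)
```

Because neighboring classes receive nonzero target probability, confusing "low" with "medium" vegetation is penalized less than confusing "low" with "high", which is the intended ordinal behavior.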
Affiliation(s)
- Alejandro Morales-Martín
- Department of Computer Science and Numerical Analysis, University of Córdoba, Campus de Rabanales, 14071 Córdoba, Spain
- Francisco-Javier Mesas-Carrascosa
- Department of Graphic Engineering and Geomatics, University of Córdoba, Campus de Rabanales, 14071 Córdoba, Spain
- Pedro Antonio Gutiérrez
- Department of Computer Science and Numerical Analysis, University of Córdoba, Campus de Rabanales, 14071 Córdoba, Spain
- Fernando-Juan Pérez-Porras
- Department of Graphic Engineering and Geomatics, University of Córdoba, Campus de Rabanales, 14071 Córdoba, Spain
- Víctor Manuel Vargas
- Department of Computer Science and Numerical Analysis, University of Córdoba, Campus de Rabanales, 14071 Córdoba, Spain
- César Hervás-Martínez
- Department of Computer Science and Numerical Analysis, University of Córdoba, Campus de Rabanales, 14071 Córdoba, Spain
2.
Wang J, Jia J, Zhang Y, Wang H, Zhu S. RAAWC-UNet: an apple leaf and disease segmentation method based on residual attention and atrous spatial pyramid pooling improved UNet with weight compression loss. Frontiers in Plant Science 2024; 15:1305358. [PMID: 38529067] [PMCID: PMC10961398] [DOI: 10.3389/fpls.2024.1305358] [Received: 10/01/2023] [Accepted: 02/15/2024] [Indexed: 03/27/2024]
Abstract
Introduction: Early detection of leaf diseases is necessary to control the spread of plant diseases, and one of the important steps is the segmentation of leaf and disease images. Uneven light and leaf overlap in complex scenes make segmentation of leaves and diseases difficult, and the large difference between the proportions of leaf and disease pixels makes disease identification challenging. Methods: To address these issues, RAAWC-UNet is proposed: a UNet improved with a residual attention mechanism, atrous spatial pyramid pooling, and a weight compression loss. Firstly, the weight compression loss introduces a modulation factor in front of the cross-entropy loss, aimed at solving the imbalance between foreground and background pixels. Secondly, the residual network and the convolutional block attention module are combined to form Res_CBAM, which accurately localizes pixels at the edge of the disease and alleviates the vanishing of gradient and semantic information during downsampling. Finally, in the last downsampling layer, atrous spatial pyramid pooling replaces two convolutions to address insufficient spatial context information. Results: The experimental results show that the proposed RAAWC-UNet increases the intersection over union in leaf and disease segmentation by 1.91% and 5.61%, respectively, and the pixel accuracy of disease by 4.65% compared with UNet. Discussion: The effectiveness of the proposed method was further verified by better results in comparison with deep learning methods of similar network architecture.
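The loss design mentioned above, a modulation factor placed in front of the cross-entropy, can be sketched as follows. The abstract does not give the exact form of the weight compression loss, so the focal-style factor (1 - p_t)**gamma used here is an assumption for illustration only:

```python
import numpy as np

def modulated_bce(y_true: np.ndarray, y_pred: np.ndarray, gamma: float = 2.0) -> float:
    """Pixel-wise binary cross-entropy with a modulation factor that
    down-weights easy (well-classified) pixels, easing the foreground/
    background imbalance. The (1 - p_t)**gamma factor is a focal-style
    stand-in, not the paper's exact weight compression loss."""
    eps = 1e-7
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    # p_t is the predicted probability assigned to the true class of each pixel.
    p_t = np.where(y_true == 1, y_pred, 1.0 - y_pred)
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))
```

A confidently correct pixel (p_t near 1) contributes almost nothing, so the abundant easy background pixels no longer dominate the gradient over the rare disease pixels.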
Affiliation(s)
- Jianlong Wang
- School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo, China
- Junhao Jia
- School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo, China
- Yake Zhang
- School of Computer and Information Engineering, Henan Normal University, Xinxiang, China
- Haotian Wang
- School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo, China
- Shisong Zhu
- School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo, China
3.
Fryčák T, Fürst T, Koprna R, Špíšek Z, Miřijovský J, Humplík JF. Crop growth dynamics: Fast automatic analysis of LiDAR images in field-plot experiments by specialized software ALFA. PLoS One 2024; 19:e0297153. [PMID: 38236942] [PMCID: PMC10796001] [DOI: 10.1371/journal.pone.0297153] [Received: 09/05/2023] [Accepted: 12/28/2023] [Indexed: 01/22/2024]
Abstract
Repeated measurements of crop height to observe plant growth dynamics in real field conditions represent a challenging task. Although data can be collected using sensors on UAV systems, proper data processing and analysis are the key to reliable results. As there is a need for specialized software solutions for agricultural research and breeding purposes, we present here ALFA, a fast algorithm for processing UAV LiDAR-derived point clouds to extract crop height at many individual cereal field plots at multiple time points. Seven scanning flights were performed over 3 blocks of experimental barley field plots between April and June 2021, and the resulting point clouds were processed by ALFA. The software converts point-cloud data into a digital image and extracts the trait of interest: the median crop height at individual field plots. The entire analysis of 144 field plots of dimension 80 × 33 meters measured at 7 time points (approx. 100 million LiDAR points) takes about 3 minutes on a standard PC. The root mean square deviation of the software-computed crop height from manual measurement is 5.7 cm. A logistic growth model is fitted to the measured data by means of nonlinear regression. The software provides three different visualizations of the crop-height data to enable further analysis of the variability in growth parameters. We show that the presented software solution is a fast and reliable tool for automatic extraction of plant height from LiDAR images of individual field plots. We offer this tool freely to the scientific community for non-commercial use.
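The growth-curve step in this abstract, fitting a logistic model to per-plot median heights over time, can be sketched as below. The paper's actual nonlinear-regression routine is not specified, so this coarse grid search (and all parameter grids) are illustrative stand-ins:

```python
import numpy as np

def logistic(t, K, r, t0):
    """Logistic growth curve: height saturates at K, grows at rate r,
    with inflection point at time t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

def fit_logistic(t, h):
    """Fit the logistic model by a coarse grid search over (K, r, t0).
    A stand-in for the nonlinear regression mentioned in the abstract
    (e.g. a Levenberg-Marquardt solver); grids are illustrative."""
    best, best_sse = None, np.inf
    for K in np.linspace(h.max(), 1.15 * h.max(), 16):
        for r in np.linspace(0.05, 0.5, 46):
            for t0 in np.linspace(t.min(), t.max(), 61):
                sse = np.sum((h - logistic(t, K, r, t0)) ** 2)
                if sse < best_sse:
                    best, best_sse = (K, r, t0), sse
    return best

# Synthetic per-plot median crop heights (cm) at 7 scanning dates (days),
# mimicking the 7 flights described in the abstract.
t_obs = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
h_obs = logistic(t_obs, K=80.0, r=0.2, t0=30.0)
K_hat, r_hat, t0_hat = fit_logistic(t_obs, h_obs)
```

In practice a proper least-squares solver converges far faster and more precisely; the grid search is only meant to make the model-fitting idea concrete and dependency-free.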
Affiliation(s)
- Tadeáš Fryčák
- Department of Mathematical Analysis and Applications of Mathematics, Faculty of Science, Palacký University, Olomouc, Czech Republic
- Tomáš Fürst
- Department of Mathematical Analysis and Applications of Mathematics, Faculty of Science, Palacký University, Olomouc, Czech Republic
- Radoslav Koprna
- Department of Chemical Biology, Faculty of Science, Palacký University, Olomouc, Czech Republic
- Zdeněk Špíšek
- Department of Chemical Biology, Faculty of Science, Palacký University, Olomouc, Czech Republic
- Jakub Miřijovský
- Department of Geoinformatics, Faculty of Science, Palacký University, Olomouc, Czech Republic
- Jan F. Humplík
- Department of Chemical Biology, Faculty of Science, Palacký University, Olomouc, Czech Republic
4.
Batin MA, Islam M, Hasan MM, Azad AKM, Alyami SA, Hossain MA, Miklavcic SJ. WheatSpikeNet: an improved wheat spike segmentation model for accurate estimation from field imaging. Frontiers in Plant Science 2023; 14:1226190. [PMID: 37692423] [PMCID: PMC10485698] [DOI: 10.3389/fpls.2023.1226190] [Received: 05/20/2023] [Accepted: 07/19/2023] [Indexed: 09/12/2023]
Abstract
Phenotyping is used in plant breeding to identify genotypes with desirable characteristics, such as drought tolerance, disease resistance, and high yield potential. It may also be used to evaluate the effect of environmental conditions, such as drought, heat, and salt, on plant growth and development. Wheat spike density is one of the most important agronomic factors in wheat phenotyping. Nonetheless, due to the diversity of wheat field environments, fast and accurate detection and counting of wheat spikes remains challenging. This study proposes a meticulously curated and annotated dataset, named SPIKE-segm, derived from the publicly accessible SPIKE dataset, and an instance segmentation approach named WheatSpikeNet for segmenting and counting wheat spikes in field imagery. The proposed method is based on the well-known Cascade Mask R-CNN architecture, with model enhancements and hyperparameter tuning to provide state-of-the-art detection and segmentation performance. A comprehensive ablation analysis covering several architectural components was performed to determine the most efficient version, and the model's hyperparameters were fine-tuned through empirical tests. The final instance segmentation model comprises ResNet50 with Deformable Convolution Networks (DCN) as the backbone for feature extraction, a Generic RoI Extractor (GRoIE) for RoI pooling, and Side Aware Boundary Localization (SABL) for wheat spike localization. With bbox and mask mean average precision (mAP) scores of 0.9303 and 0.9416, respectively, on the test set, the proposed model achieved superior performance on the challenging SPIKE dataset. Furthermore, compared with existing state-of-the-art methods, the proposed model achieved up to a 0.41% mAP improvement in spike detection and a substantial 3.46% mAP improvement in segmentation, which should support more accurate yield estimation from wheat plants.
Affiliation(s)
- M. A. Batin
- Department of Robotics and Mechatronics Engineering, University of Dhaka, Dhaka, Bangladesh
- Muhaiminul Islam
- Department of Robotics and Mechatronics Engineering, University of Dhaka, Dhaka, Bangladesh
- Md Mehedi Hasan
- Department of Robotics and Mechatronics Engineering, University of Dhaka, Dhaka, Bangladesh
- AKM Azad
- Department of Mathematics and Statistics, College of Science, Imam Mohammad Ibn Saud Islamic University, Riyadh, Saudi Arabia
- Salem A. Alyami
- Department of Mathematics and Statistics, College of Science, Imam Mohammad Ibn Saud Islamic University, Riyadh, Saudi Arabia
- Md Azam Hossain
- Department of Computer Science and Engineering, Islamic University of Technology, Gazipur, Bangladesh
- Stanley J. Miklavcic
- Phenomics and Bioinformatics Research Centre, University of South Australia, Adelaide, SA, Australia
5.
Harandi N, Vandenberghe B, Vankerschaver J, Depuydt S, Van Messem A. How to make sense of 3D representations for plant phenotyping: a compendium of processing and analysis techniques. Plant Methods 2023; 19:60. [PMID: 37353846] [DOI: 10.1186/s13007-023-01031-z] [Received: 10/18/2022] [Accepted: 05/19/2023] [Indexed: 06/25/2023]
Abstract
Computer vision technology is moving more and more towards a three-dimensional approach, and plant phenotyping is following this trend. However, despite its potential, the complexity of the analysis of 3D representations has been the main bottleneck hindering the wider deployment of 3D plant phenotyping. In this review we provide an overview of typical steps for the processing and analysis of 3D representations of plants, to offer potential users of 3D phenotyping a first gateway into its application, and to stimulate its further development. We focus on plant phenotyping applications where the goal is to measure characteristics of single plants or crop canopies on a small scale in research settings, as opposed to large scale crop monitoring in the field.
Affiliation(s)
- Negin Harandi
- Center for Biosystems and Biotech Data Science, Ghent University Global Campus, 119 Songdomunhwa-ro, Yeonsu-gu, Incheon, South Korea
- Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Krijgslaan 281, S9, Ghent, Belgium
- Joris Vankerschaver
- Center for Biosystems and Biotech Data Science, Ghent University Global Campus, 119 Songdomunhwa-ro, Yeonsu-gu, Incheon, South Korea
- Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Krijgslaan 281, S9, Ghent, Belgium
- Stephen Depuydt
- Erasmus Applied University of Sciences and Arts, Campus Kaai, Nijverheidskaai 170, Anderlecht, Belgium
- Arnout Van Messem
- Department of Mathematics, Université de Liège, Allée de la Découverte 12, Liège, Belgium
6.
Li H, Wu G, Tao S, Yin H, Qi K, Zhang S, Guo W, Ninomiya S, Mu Y. Automatic Branch-Leaf Segmentation and Leaf Phenotypic Parameter Estimation of Pear Trees Based on Three-Dimensional Point Clouds. Sensors (Basel) 2023; 23:4572. [PMID: 37177776] [PMCID: PMC10181666] [DOI: 10.3390/s23094572] [Received: 03/21/2023] [Revised: 04/30/2023] [Accepted: 05/05/2023] [Indexed: 05/15/2023]
Abstract
The leaf phenotypic traits of plants have a significant impact on the efficiency of canopy photosynthesis. However, traditional methods such as destructive sampling hinder the continuous monitoring of plant growth, while manual measurements in the field are both time-consuming and laborious. Nondestructive and accurate measurement of leaf phenotypic parameters can be achieved through 3D canopy models and object segmentation techniques. This paper proposes an automatic branch-leaf segmentation pipeline based on LiDAR point clouds and automatically measures leaf inclination angle, length, width, and area, using pear canopies as an example. Firstly, a three-dimensional model of the canopy was established from the LiDAR point cloud using SCENE software. Next, 305 pear tree branches were manually divided into branch points and leaf points, and 45 branch samples were selected as test data; the leaf points of the test data were further labeled as 572 leaf instances. The PointNet++ model was trained on 260 point clouds to carry out semantic segmentation of branches and leaves. Using the leaf point clouds in the test dataset as input, single leaf instances were extracted by a mean shift clustering algorithm. Finally, based on the single-leaf point clouds, the leaf inclination angle was calculated by plane fitting, while the leaf length, width, and area were calculated by midrib fitting and triangulation. The semantic segmentation model was tested on 45 branches, with a mean Precision_sem, mean Recall_sem, mean F1-score, and mean intersection over union (IoU) for branches and leaves of 0.93, 0.94, 0.93, and 0.88, respectively. For single leaf extraction, the Precision_ins, Recall_ins, and mean coverage (mCoV) were 0.89, 0.92, and 0.87, respectively. Using the proposed method, the estimated leaf inclination, length, width, and area of pear leaves showed high correlation with manual measurements, with correlation coefficients of 0.94 (root mean squared error: 4.44°), 0.94 (RMSE: 0.43 cm), 0.91 (RMSE: 0.39 cm), and 0.93 (RMSE: 5.21 cm²), respectively. These results demonstrate that the method can automatically and accurately measure the phenotypic parameters of pear leaves, which is of great significance for monitoring pear tree growth, simulating canopy photosynthesis, and optimizing orchard management.
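The plane-fitting step for leaf inclination described in this abstract can be sketched as follows: fit a plane to a single-leaf point cloud via SVD and take the angle between the plane normal and the vertical axis. The function name and interface below are illustrative, not the authors' code:

```python
import numpy as np

def leaf_inclination_deg(points: np.ndarray) -> float:
    """Fit a plane to a leaf point cloud (n x 3) by SVD and return the
    angle (degrees) between the plane normal and the vertical z-axis,
    i.e. the leaf inclination. A minimal sketch of the plane-fitting
    step mentioned in the abstract."""
    centered = points - points.mean(axis=0)
    # The plane normal is the right-singular vector associated with the
    # smallest singular value (the direction of least variance).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    cos_angle = abs(normal[2]) / np.linalg.norm(normal)
    return float(np.degrees(np.arccos(np.clip(cos_angle, 0.0, 1.0))))
```

A horizontal leaf yields 0°, a vertical leaf 90°; real leaf point clouds are noisy and curved, so the SVD plane is a least-squares approximation.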
Affiliation(s)
- Haitao Li
- Academy for Advanced Interdisciplinary Studies, Collaborative Innovation Center for Modern Crop Production Co-Sponsored by Province and Ministry, Nanjing Agricultural University, Nanjing 210095, China
- College of Artificial Intelligence, Nanjing Agricultural University, Nanjing 210095, China
- Gengchen Wu
- Academy for Advanced Interdisciplinary Studies, Collaborative Innovation Center for Modern Crop Production Co-Sponsored by Province and Ministry, Nanjing Agricultural University, Nanjing 210095, China
- Shutian Tao
- Centre of Pear Engineering Technology Research, State Key Laboratory of Crop Genetics & Germplasm Enhancement and Utilization, Nanjing Agricultural University, Nanjing 210095, China
- Hao Yin
- Centre of Pear Engineering Technology Research, State Key Laboratory of Crop Genetics & Germplasm Enhancement and Utilization, Nanjing Agricultural University, Nanjing 210095, China
- Kaijie Qi
- Centre of Pear Engineering Technology Research, State Key Laboratory of Crop Genetics & Germplasm Enhancement and Utilization, Nanjing Agricultural University, Nanjing 210095, China
- Shaoling Zhang
- Centre of Pear Engineering Technology Research, State Key Laboratory of Crop Genetics & Germplasm Enhancement and Utilization, Nanjing Agricultural University, Nanjing 210095, China
- Wei Guo
- Graduate School of Agricultural and Life Sciences, The University of Tokyo, 1-1-1 Midori-cho, Tokyo 188-0002, Japan
- Seishi Ninomiya
- Academy for Advanced Interdisciplinary Studies, Collaborative Innovation Center for Modern Crop Production Co-Sponsored by Province and Ministry, Nanjing Agricultural University, Nanjing 210095, China
- Graduate School of Agricultural and Life Sciences, The University of Tokyo, 1-1-1 Midori-cho, Tokyo 188-0002, Japan
- Yue Mu
- Academy for Advanced Interdisciplinary Studies, Collaborative Innovation Center for Modern Crop Production Co-Sponsored by Province and Ministry, Nanjing Agricultural University, Nanjing 210095, China
7.
Li Y, Wen W, Fan J, Gou W, Gu S, Lu X, Yu Z, Wang X, Guo X. Multi-Source Data Fusion Improves Time-Series Phenotype Accuracy in Maize under a Field High-Throughput Phenotyping Platform. Plant Phenomics 2023; 5:0043. [PMID: 37223316] [PMCID: PMC10202381] [DOI: 10.34133/plantphenomics.0043] [Received: 12/14/2022] [Accepted: 03/26/2023] [Indexed: 05/25/2023]
Abstract
Field phenotyping platforms that can obtain high-throughput, time-series phenotypes of plant populations in three dimensions are crucial for plant breeding and management. However, it is difficult to align the point cloud data and extract accurate phenotypic traits for plant populations. In this study, high-throughput, time-series raw data of field maize populations were collected using a field rail-based phenotyping platform with light detection and ranging (LiDAR) and an RGB (red, green, and blue) camera. The orthorectified images and LiDAR point clouds were aligned via the direct linear transformation algorithm; on this basis, time-series point clouds were further registered under time-series image guidance. The cloth simulation filter algorithm was then used to remove the ground points. Individual plants and plant organs were segmented from the maize population by fast displacement and region growing algorithms. The plant heights of 13 maize cultivars obtained from the multi-source fused data were highly correlated with manual measurements (R² = 0.98), and the accuracy was higher than that achieved with a single-source point cloud (R² = 0.93). This demonstrates that multi-source data fusion can effectively improve the accuracy of time-series phenotype extraction and that rail-based field phenotyping platforms can be a practical tool for observing plant growth dynamics at the individual-plant and organ scales.
Affiliation(s)
- Yinglun Li
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China
- Weiliang Wen
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China
- Jiangchuan Fan
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China
- Wenbo Gou
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China
- Shenghao Gu
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China
- Xianju Lu
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China
- Zetao Yu
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China
- Xiaodong Wang
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China
- Xinyu Guo
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China
8.
Zang J, Jin S, Zhang S, Li Q, Mu Y, Li Z, Li S, Wang X, Su Y, Jiang D. Field-measured canopy height may not be as accurate and heritable as believed: evidence from advanced 3D sensing. Plant Methods 2023; 19:39. [PMID: 37009892] [PMCID: PMC10069135] [DOI: 10.1186/s13007-023-01012-2] [Received: 12/31/2022] [Accepted: 03/21/2023] [Indexed: 06/19/2023]
Abstract
Canopy height (CH) is an important trait for crop breeding and production. The rapid development of 3D sensing technologies has shed new light on high-throughput height measurement, but a systematic comparison of the accuracy and heritability of different 3D sensing technologies is lacking. Moreover, it is questionable whether field-measured height is as reliable as believed. This study addressed these issues by comparing traditional height measurement with four advanced 3D sensing technologies: terrestrial laser scanning (TLS), backpack laser scanning (BLS), gantry laser scanning (GLS), and digital aerial photogrammetry (DAP). A total of 1920 plots covering 120 varieties were selected for comparison, and cross-comparisons of the data sources were performed to evaluate CH estimation across different CH, leaf area index (LAI), and growth stage (GS) groups. Results showed that (1) all 3D sensing data sources were highly correlated with field measurement (r > 0.82), while the correlations between different 3D sensing data sources were even higher (r > 0.87); (2) the prediction accuracy between data sources decreased within subgroups of CH, LAI, and GS; and (3) canopy height showed high heritability in all datasets, with the 3D sensing datasets giving even higher heritability (H² = 0.79-0.89) than field measurement (H² = 0.77). Finally, outliers of the different datasets are analyzed. The results provide novel insights into canopy height measurement methods that may ensure the high-quality application of this important trait.
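The heritability comparison above rests on variance components estimated across varieties and replicated plots. A minimal sketch of such an estimate for a balanced trial is given below; the paper's exact statistical model (covariates, unbalanced design, spatial effects) is not stated in the abstract, so this one-way ANOVA form is an assumption:

```python
import numpy as np

def broad_sense_heritability(heights: np.ndarray) -> float:
    """Estimate broad-sense heritability H^2 = Vg / (Vg + Ve / r) from a
    (varieties x replicates) matrix of canopy heights, using one-way
    ANOVA expected mean squares: Vg = (MS_between - MS_within) / r.
    A generic variance-components sketch for a balanced design."""
    g, r = heights.shape
    grand_mean = heights.mean()
    # Mean square between varieties and mean square within (residual).
    ms_between = r * np.sum((heights.mean(axis=1) - grand_mean) ** 2) / (g - 1)
    ms_within = np.sum((heights - heights.mean(axis=1, keepdims=True)) ** 2) / (g * (r - 1))
    vg = max((ms_between - ms_within) / r, 0.0)  # genetic variance, clipped at 0
    ve = ms_within                               # residual (error) variance
    denom = vg + ve / r
    return vg / denom if denom > 0 else 0.0
```

Large, stable differences between varieties relative to within-variety noise push H² toward 1, which is why a more precise sensing technology can report higher heritability for the same trait.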
Affiliation(s)
- Jingrong Zang
- Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, Collaborative Innovation Centre for Modern Crop Production Co-Sponsored By Province and Ministry, College of Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
- Shichao Jin
- Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, Collaborative Innovation Centre for Modern Crop Production Co-Sponsored By Province and Ministry, College of Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
- Songyin Zhang
- Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, Collaborative Innovation Centre for Modern Crop Production Co-Sponsored By Province and Ministry, College of Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
- Qing Li
- Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, Collaborative Innovation Centre for Modern Crop Production Co-Sponsored By Province and Ministry, College of Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
- Yue Mu
- Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, Collaborative Innovation Centre for Modern Crop Production Co-Sponsored By Province and Ministry, College of Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
- Ziyu Li
- Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, Collaborative Innovation Centre for Modern Crop Production Co-Sponsored By Province and Ministry, College of Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
- Shaochen Li
- Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, Collaborative Innovation Centre for Modern Crop Production Co-Sponsored By Province and Ministry, College of Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
- Xiao Wang
- Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, Collaborative Innovation Centre for Modern Crop Production Co-Sponsored By Province and Ministry, College of Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
- Yanjun Su
- State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing, 100093, China
- Dong Jiang
- Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, Collaborative Innovation Centre for Modern Crop Production Co-Sponsored By Province and Ministry, College of Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
9.
Liu K, Zhang X. PiTLiD: Identification of Plant Disease From Leaf Images Based on Convolutional Neural Network. IEEE/ACM Transactions on Computational Biology and Bioinformatics 2023; 20:1278-1288. [PMID: 35914052] [DOI: 10.1109/tcbb.2022.3195291] [Indexed: 05/04/2023]
Abstract
With the development of plant phenomics, the identification of plant diseases from leaf images has become an effective and economical approach in plant disease science. Among plant disease identification methods, the convolutional neural network (CNN) is the most popular owing to its superior performance. However, the representation power of CNNs remains a challenge on small datasets, which greatly limits their adoption. In this work, we propose a new method, PiTLiD, based on a pretrained Inception-V3 convolutional neural network and transfer learning, to identify plant leaf diseases from leaf phenotype data with small sample sizes. To evaluate the robustness of the proposed method, experiments on several datasets with small-scale samples were performed. The results show that PiTLiD outperforms the compared methods. This study provides a plant disease identification tool based on a deep learning algorithm for plant phenomics. All source data and code are accessible at https://github.com/zhanglab-wbgcas/PiTLiD.
10.
Deng L, Fan Z, Chen B, Zhai H, He H, He C, Sun Y, Wang Y, Ma H. A Dual-Modality Imaging Method Based on Polarimetry and Second Harmonic Generation for Characterization and Evaluation of Skin Tissue Structures. Int J Mol Sci 2023; 24:4206. [PMID: 36835613] [PMCID: PMC9966533] [DOI: 10.3390/ijms24044206] [Received: 01/30/2023] [Revised: 02/15/2023] [Accepted: 02/17/2023] [Indexed: 02/22/2023]
Abstract
The characterization and evaluation of skin tissue structures are crucial for dermatological applications. Recently, Mueller matrix polarimetry and second harmonic generation microscopy have been widely used in skin tissue imaging due to their unique advantages. However, the features of layered skin tissue structures are too complicated for a single imaging modality to provide a comprehensive evaluation. In this study, we propose a dual-modality imaging method combining Mueller matrix polarimetry and second harmonic generation microscopy for quantitative characterization of skin tissue structures. It is demonstrated that the dual-modality method can reliably divide images of mouse tail skin specimens into three layers: stratum corneum, epidermis, and dermis. To quantitatively analyze the structural features of the different skin layers, the gray-level co-occurrence matrix (GLCM) is adopted to provide various evaluation parameters after image segmentation. Finally, to quantitatively measure the structural differences between damaged and normal skin areas, an index named Q-Health is defined based on the cosine similarity of the GLCM parameters of the imaging results. The experiments confirm the effectiveness of the dual-modality imaging parameters for discriminating and assessing skin tissue structures, show the potential of the proposed method for dermatological practice, and lay the foundation for further in-depth evaluation of the health status of human skin.
Collapse
Affiliation(s)
- Liangyu Deng
- Guangdong Research Center of Polarization Imaging and Measurement Engineering Technology, Shenzhen Key Laboratory for Minimal Invasive Medical Technologies, Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
| | - Zhipeng Fan
- Guangdong Research Center of Polarization Imaging and Measurement Engineering Technology, Shenzhen Key Laboratory for Minimal Invasive Medical Technologies, Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
| | - Binguo Chen
- Guangdong Research Center of Polarization Imaging and Measurement Engineering Technology, Shenzhen Key Laboratory for Minimal Invasive Medical Technologies, Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Department of Biomedical Engineering, Tsinghua University, Beijing 100084, China
| | - Haoyu Zhai
- Guangdong Research Center of Polarization Imaging and Measurement Engineering Technology, Shenzhen Key Laboratory for Minimal Invasive Medical Technologies, Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Department of Biomedical Engineering, Tsinghua University, Beijing 100084, China
| | - Honghui He
- Guangdong Research Center of Polarization Imaging and Measurement Engineering Technology, Shenzhen Key Laboratory for Minimal Invasive Medical Technologies, Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Correspondence: (H.H.); (C.H.)
| | - Chao He
- Department of Engineering Science, University of Oxford, Parks Road, Oxford OX1 3PJ, UK
- Correspondence: (H.H.); (C.H.)
| | - Yanan Sun
- Experimental Research Center, China Academy of Chinese Medical Sciences, Beijing 100700, China
| | - Yi Wang
- Experimental Research Center, China Academy of Chinese Medical Sciences, Beijing 100700, China
| | - Hui Ma
- Guangdong Research Center of Polarization Imaging and Measurement Engineering Technology, Shenzhen Key Laboratory for Minimal Invasive Medical Technologies, Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Department of Physics, Tsinghua University, Beijing 100084, China
| |
Collapse
|
11
|
Wang D, Song Z, Miao T, Zhu C, Yang X, Yang T, Zhou Y, Den H, Xu T. DFSP: A fast and automatic distance field-based stem-leaf segmentation pipeline for point cloud of maize shoot. FRONTIERS IN PLANT SCIENCE 2023; 14:1109314. [PMID: 36798707 PMCID: PMC9927642 DOI: 10.3389/fpls.2023.1109314] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/27/2022] [Accepted: 01/10/2023] [Indexed: 06/18/2023]
Abstract
3D point cloud data are used to analyze plant morphological structure, and organ segmentation of a single plant directly determines the accuracy and reliability of organ-level phenotypic estimation in point-cloud studies. However, high-precision, automatic, and fast plant point cloud segmentation remains difficult, and few methods can integrate the global structural features and local morphological features of point clouds at a reasonable cost. In this paper, a distance-field-based segmentation pipeline (DFSP), which encodes the global spatial structure and local connectivity of a plant, was developed to achieve rapid organ localization and segmentation. During stem-leaf segmentation, the terminal point clouds of the different plant organs were first extracted via DFSP, followed by identification of the low-end point cloud of the maize stem based on local geometric features. Region growing was then applied to obtain the stem point cloud. Finally, instance segmentation of the leaf point clouds was performed using DFSP. The method was tested on 420 maize plants and compared against manually obtained ground truth. Notably, DFSP had an average processing time of 1.52 s for maize plant data of about 15,000 points. The mean precision, recall, and micro F1 score of the DFSP segmentation algorithm were 0.905, 0.899, and 0.902, respectively. These findings suggest that DFSP can accurately, rapidly, and automatically perform maize stem-leaf segmentation and could be effective in maize phenotype research. The source code can be found at https://github.com/syau-miao/DFSP.git.
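The region-growing step mentioned above (expanding from a low-end stem point to collect the connected stem cloud) can be sketched as a plain breadth-first search over radius neighbors. This is a generic illustration, not the DFSP distance-field coding itself; the point spacing, radius, and toy clusters are assumptions:

```python
import numpy as np
from collections import deque

def region_grow(points, seed_idx, radius):
    """Collect all points connected to the seed through links shorter than `radius` (BFS)."""
    n = len(points)
    in_region = np.zeros(n, dtype=bool)
    in_region[seed_idx] = True
    queue = deque([seed_idx])
    while queue:
        i = queue.popleft()
        d = np.linalg.norm(points - points[i], axis=1)   # distances to the current point
        for j in np.where((d < radius) & ~in_region)[0]:
            in_region[j] = True
            queue.append(j)
    return np.where(in_region)[0]

# a vertical "stem" of 10 points at 10 cm spacing, plus a detached cluster 5 m away
stem = np.c_[np.zeros(10), np.zeros(10), np.linspace(0.0, 0.9, 10)]
leaf = np.c_[np.full(5, 5.0), np.zeros(5), np.linspace(0.0, 0.4, 5)]
cloud = np.vstack([stem, leaf])
stem_idx = region_grow(cloud, seed_idx=0, radius=0.15)
```

Growing from the lowest stem point gathers exactly the 10 connected stem points and leaves the distant cluster untouched, which is the behavior the stem-extraction step depends on.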
Collapse
Affiliation(s)
- Dabao Wang
- College of Information and Electrical Engineering, Shenyang Agricultural University, Shenyang, China
| | - Zhi Song
- College of Science, Shenyang Agricultural University, Shenyang, China
| | - Teng Miao
- College of Information and Electrical Engineering, Shenyang Agricultural University, Shenyang, China
| | - Chao Zhu
- School of Mathematics and Computer Science, Zhejiang Agriculture and Forestry University, Hangzhou, China
| | - Xin Yang
- College of Information and Electrical Engineering, Shenyang Agricultural University, Shenyang, China
| | - Tao Yang
- School of Information and Intelligence Engineering, University of Sanya, Sanya, China
| | - Yuncheng Zhou
- College of Information and Electrical Engineering, Shenyang Agricultural University, Shenyang, China
| | - Hanbing Den
- College of Information and Electrical Engineering, Shenyang Agricultural University, Shenyang, China
| | - Tongyu Xu
- College of Information and Electrical Engineering, Shenyang Agricultural University, Shenyang, China
| |
Collapse
|
12
|
Guo C, Zhang X, Li Y, Xie J, Gao P, Hao P, Han L, Zhang J, Wang W, Liu P, Ding J, Chang Y. Whole-genome resequencing reveals genetic differences and the genetic basis of parapodium number in Russian and Chinese Apostichopus japonicus. BMC Genomics 2023; 24:25. [PMID: 36647018 PMCID: PMC9843871 DOI: 10.1186/s12864-023-09113-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2022] [Accepted: 01/04/2023] [Indexed: 01/18/2023] Open
Abstract
BACKGROUND Apostichopus japonicus is an economically important species in the global aquaculture industry. Russian A. japonicus, mainly harvested in the Vladivostok region, exhibits significant phenotypic differentiation from Chinese A. japonicus, including in many economically important traits, owing to differences in habitat. However, both the genetic basis of this phenotypic divergence and the population genetic structure of Russian and Chinese A. japonicus are unknown. RESULTS In this study, 210 individuals from seven Russian and Chinese A. japonicus populations were sampled for whole-genome resequencing. Genetic structure analysis differentiated the Russian and Chinese A. japonicus into two groups. Population genetic analyses indicated that the Russian population showed a high degree of allelic linkage and had undergone stronger positive selection than the Chinese populations. Gene ontology terms enriched among candidate genes from the selection analysis were mainly involved in immunity, including the inflammatory response, antimicrobial peptides, humoral immunity, and apoptosis. Genome-wide association analysis (GWAS) yielded eight single-nucleotide polymorphism (SNP) loci significantly associated with parapodium number; these loci are located in regions of high genomic differentiation between the Chinese and Russian populations and were associated with five genes. Gene expression validation revealed that three of these genes were significantly differentially expressed between individuals differing in parapodium number. AJAP08772 and AJAP08773 may directly affect parapodium production by promoting endothelial cell proliferation and metabolism, whereas AJAP07248 affects it indirectly by participating in immune responses. CONCLUSIONS In this study, we performed population genetic structure and GWAS analyses on Chinese and Russian A. japonicus and identified three candidate genes related to parapodium number.
The results provide an in-depth understanding of the differences in the genetic structure of A. japonicus populations in China and Russia, and provide important information for subsequent genetic analysis and breeding of this species.
Collapse
Affiliation(s)
- Chao Guo
- grid.410631.10000 0001 1867 7333Key Laboratory of Mariculture & Stock Enhancement in North China’s Sea, Ministry of Agriculture and Rural Affairs, Dalian Ocean University, Dalian, Liaoning 116023 People’s Republic of China
| | - Xianglei Zhang
- grid.410631.10000 0001 1867 7333Key Laboratory of Mariculture & Stock Enhancement in North China’s Sea, Ministry of Agriculture and Rural Affairs, Dalian Ocean University, Dalian, Liaoning 116023 People’s Republic of China
| | - Yuanxin Li
- grid.410631.10000 0001 1867 7333Key Laboratory of Mariculture & Stock Enhancement in North China’s Sea, Ministry of Agriculture and Rural Affairs, Dalian Ocean University, Dalian, Liaoning 116023 People’s Republic of China
| | - Jiahui Xie
- grid.410631.10000 0001 1867 7333Key Laboratory of Mariculture & Stock Enhancement in North China’s Sea, Ministry of Agriculture and Rural Affairs, Dalian Ocean University, Dalian, Liaoning 116023 People’s Republic of China
| | - Pingping Gao
- grid.410631.10000 0001 1867 7333Key Laboratory of Mariculture & Stock Enhancement in North China’s Sea, Ministry of Agriculture and Rural Affairs, Dalian Ocean University, Dalian, Liaoning 116023 People’s Republic of China
| | - Pengfei Hao
- grid.410631.10000 0001 1867 7333Key Laboratory of Mariculture & Stock Enhancement in North China’s Sea, Ministry of Agriculture and Rural Affairs, Dalian Ocean University, Dalian, Liaoning 116023 People’s Republic of China
| | - Lingshu Han
- grid.410631.10000 0001 1867 7333Key Laboratory of Mariculture & Stock Enhancement in North China’s Sea, Ministry of Agriculture and Rural Affairs, Dalian Ocean University, Dalian, Liaoning 116023 People’s Republic of China ,grid.203507.30000 0000 8950 5267Ningbo University, Ningbo, Zhejiang 315211 People’s Republic of China
| | - Jinyuan Zhang
- grid.410631.10000 0001 1867 7333Key Laboratory of Mariculture & Stock Enhancement in North China’s Sea, Ministry of Agriculture and Rural Affairs, Dalian Ocean University, Dalian, Liaoning 116023 People’s Republic of China
| | - Wenpei Wang
- grid.410631.10000 0001 1867 7333Key Laboratory of Mariculture & Stock Enhancement in North China’s Sea, Ministry of Agriculture and Rural Affairs, Dalian Ocean University, Dalian, Liaoning 116023 People’s Republic of China
| | - Peng Liu
- grid.410631.10000 0001 1867 7333Key Laboratory of Mariculture & Stock Enhancement in North China’s Sea, Ministry of Agriculture and Rural Affairs, Dalian Ocean University, Dalian, Liaoning 116023 People’s Republic of China
| | - Jun Ding
- grid.410631.10000 0001 1867 7333Key Laboratory of Mariculture & Stock Enhancement in North China’s Sea, Ministry of Agriculture and Rural Affairs, Dalian Ocean University, Dalian, Liaoning 116023 People’s Republic of China
| | - Yaqing Chang
- grid.410631.10000 0001 1867 7333Key Laboratory of Mariculture & Stock Enhancement in North China’s Sea, Ministry of Agriculture and Rural Affairs, Dalian Ocean University, Dalian, Liaoning 116023 People’s Republic of China
| |
Collapse
|
13
|
Li B, Guo C. MASPC_Transform: A Plant Point Cloud Segmentation Network Based on Multi-Head Attention Separation and Position Code. SENSORS (BASEL, SWITZERLAND) 2022; 22:s22239225. [PMID: 36501926 PMCID: PMC9740736 DOI: 10.3390/s22239225] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/28/2022] [Revised: 11/24/2022] [Accepted: 11/25/2022] [Indexed: 06/12/2023]
Abstract
Plant point cloud segmentation is an important step in 3D plant phenotype research. Because the stems, leaves, flowers, and other organs of plants are often intertwined and small in size, plant point cloud segmentation is more challenging than other segmentation tasks. In this paper, we propose MASPC_Transform, a novel plant point cloud segmentation network based on multi-head attention separation and position code. The proposed MASPC_Transform establishes connections between similar point clouds scattered in different areas of the point cloud space through multiple attention heads. To prevent the multiple attention heads from aggregating, we propose a multi-head attention separation loss based on spatial similarity, so that the attention positions of the different heads are dispersed as much as possible. To reduce the impact of point cloud disorder and irregularity on feature extraction, we propose a new point cloud position coding method and use a position coding network based on this method in the local and global feature extraction modules of MASPC_Transform. We evaluate MASPC_Transform on the ROSE_X dataset; compared with state-of-the-art approaches, the proposed MASPC_Transform achieves better segmentation results.
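One plausible reading of the attention separation loss described above is a penalty on the overlap between the heads' attention distributions, e.g. their mean pairwise cosine similarity. This is a hypothetical sketch of that idea, not the paper's actual loss formulation:

```python
import numpy as np

def separation_loss(attn):
    """attn: (heads, n_points) rows of non-negative attention weights.
    Returns the mean pairwise cosine similarity between heads: low when heads
    attend to disjoint point sets, high when they collapse onto the same points."""
    a = attn / np.linalg.norm(attn, axis=1, keepdims=True)   # L2-normalize each head
    sim = a @ a.T                                            # pairwise cosine similarities
    h = len(attn)
    return float(sim[~np.eye(h, dtype=bool)].mean())         # average off-diagonal entries

disjoint = np.array([[0.5, 0.5, 0.0, 0.0],    # heads look at different points
                     [0.0, 0.0, 0.5, 0.5]])
collapsed = np.array([[0.5, 0.5, 0.0, 0.0],   # heads have aggregated
                      [0.5, 0.5, 0.0, 0.0]])
loss_disjoint = separation_loss(disjoint)
loss_collapsed = separation_loss(collapsed)
```

Minimizing such a term alongside the segmentation loss would push the heads toward the dispersed configuration, which is the stated goal of the separation loss.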
Collapse
Affiliation(s)
- Bin Li
- School of Computer Science, Northeast Electric Power University, Jilin 132012, China
- Gongqing Institute of Science and Technology, No. 1 Gongqing Road, Gongqing 332020, China
| | - Chenhua Guo
- School of Computer Science, Northeast Electric Power University, Jilin 132012, China
| |
Collapse
|
14
|
Tao H, Xu S, Tian Y, Li Z, Ge Y, Zhang J, Wang Y, Zhou G, Deng X, Zhang Z, Ding Y, Jiang D, Guo Q, Jin S. Proximal and remote sensing in plant phenomics: 20 years of progress, challenges, and perspectives. PLANT COMMUNICATIONS 2022; 3:100344. [PMID: 35655429 PMCID: PMC9700174 DOI: 10.1016/j.xplc.2022.100344] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/19/2022] [Revised: 05/08/2022] [Accepted: 05/27/2022] [Indexed: 06/01/2023]
Abstract
Plant phenomics (PP) has been recognized as a bottleneck in studying the interactions of genomics and environment on plants, limiting the progress of smart breeding and precise cultivation. High-throughput plant phenotyping is challenging owing to the spatio-temporal dynamics of traits. Proximal and remote sensing (PRS) techniques are increasingly used for plant phenotyping because of their advantages in multi-dimensional data acquisition and analysis. Substantial progress of PRS applications in PP has been observed over the last two decades and is analyzed here from an interdisciplinary perspective based on 2972 publications. This progress covers most aspects of PRS application in PP, including patterns of global spatial distribution and temporal dynamics, specific PRS technologies, phenotypic research fields, working environments, species, and traits. Subsequently, we demonstrate how to link PRS to multi-omics studies, including how to achieve multi-dimensional PRS data acquisition and processing, how to systematically integrate all kinds of phenotypic information and derive phenotypic knowledge with biological significance, and how to link PP to multi-omics association analysis. Finally, we identify three future perspectives for PRS-based PP: (1) strengthening the spatial and temporal consistency of PRS data, (2) exploring novel phenotypic traits, and (3) facilitating multi-omics communication.
Collapse
Affiliation(s)
- Haiyu Tao
- Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, National Engineering and Technology Center for Information Agriculture, Collaborative Innovation Centre for Modern Crop Production co-sponsored by Province and Ministry, Nanjing Agricultural University, Address: No. 1 Weigang, Xuanwu District, Nanjing 210095, China
| | - Shan Xu
- Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, National Engineering and Technology Center for Information Agriculture, Collaborative Innovation Centre for Modern Crop Production co-sponsored by Province and Ministry, Nanjing Agricultural University, Address: No. 1 Weigang, Xuanwu District, Nanjing 210095, China
| | - Yongchao Tian
- Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, National Engineering and Technology Center for Information Agriculture, Collaborative Innovation Centre for Modern Crop Production co-sponsored by Province and Ministry, Nanjing Agricultural University, Address: No. 1 Weigang, Xuanwu District, Nanjing 210095, China
| | - Zhaofeng Li
- The Key Laboratory of Oasis Eco-agriculture, Xinjiang Production and Construction Corps, Agriculture College, Shihezi University, Shihezi 832003, China
| | - Yan Ge
- Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, National Engineering and Technology Center for Information Agriculture, Collaborative Innovation Centre for Modern Crop Production co-sponsored by Province and Ministry, Nanjing Agricultural University, Address: No. 1 Weigang, Xuanwu District, Nanjing 210095, China
| | - Jiaoping Zhang
- State Key Laboratory of Crop Genetics and Germplasm Enhancement, National Center for Soybean Improvement, Key Laboratory for Biology and Genetic Improvement of Soybean (General, Ministry of Agriculture), Nanjing Agricultural University, Nanjing 210095, China
| | - Yu Wang
- Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, National Engineering and Technology Center for Information Agriculture, Collaborative Innovation Centre for Modern Crop Production co-sponsored by Province and Ministry, Nanjing Agricultural University, Address: No. 1 Weigang, Xuanwu District, Nanjing 210095, China
| | - Guodong Zhou
- Sanya Research Institute of Nanjing Agriculture University, Sanya 572024, China
| | - Xiong Deng
- Key Laboratory of Plant Molecular Physiology, Institute of Botany, Chinese Academy of Sciences, Beijing 100093, China
| | - Ze Zhang
- The Key Laboratory of Oasis Eco-agriculture, Xinjiang Production and Construction Corps, Agriculture College, Shihezi University, Shihezi 832003, China
| | - Yanfeng Ding
- Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, National Engineering and Technology Center for Information Agriculture, Collaborative Innovation Centre for Modern Crop Production co-sponsored by Province and Ministry, Nanjing Agricultural University, Address: No. 1 Weigang, Xuanwu District, Nanjing 210095, China; Hainan Yazhou Bay Seed Laboratory, Sanya 572025, China; Sanya Research Institute of Nanjing Agriculture University, Sanya 572024, China
| | - Dong Jiang
- Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, National Engineering and Technology Center for Information Agriculture, Collaborative Innovation Centre for Modern Crop Production co-sponsored by Province and Ministry, Nanjing Agricultural University, Address: No. 1 Weigang, Xuanwu District, Nanjing 210095, China; Hainan Yazhou Bay Seed Laboratory, Sanya 572025, China; Sanya Research Institute of Nanjing Agriculture University, Sanya 572024, China
| | - Qinghua Guo
- Institute of Ecology, College of Urban and Environmental Science, Peking University, Beijing 100871, China
| | - Shichao Jin
- Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, National Engineering and Technology Center for Information Agriculture, Collaborative Innovation Centre for Modern Crop Production co-sponsored by Province and Ministry, Nanjing Agricultural University, Address: No. 1 Weigang, Xuanwu District, Nanjing 210095, China; Hainan Yazhou Bay Seed Laboratory, Sanya 572025, China; Sanya Research Institute of Nanjing Agriculture University, Sanya 572024, China; Jiangsu Provincial Key Laboratory of Geographic Information Science and Technology, International Institute for Earth System Sciences, Nanjing University, Nanjing, Jiangsu 210023, China.
| |
Collapse
|
15
|
Individual Maize Location and Height Estimation in Field from UAV-Borne LiDAR and RGB Images. REMOTE SENSING 2022. [DOI: 10.3390/rs14102292] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
Crop height is an essential parameter for monitoring overall crop growth, forecasting crop yield, and estimating crop biomass in precision agriculture. However, individual maize segmentation is the prerequisite for precision field monitoring, and it is a challenging task because maize stalks are usually occluded by the leaves of adjacent plants, especially at later growth stages. In this study, we propose a novel method that combines seedling detection and clustering algorithms to segment individual maize plants from UAV-borne LiDAR and RGB images. As seedlings emerged, images collected by an RGB camera mounted on a UAV platform were processed to generate a digital orthophoto map. Based on this orthophoto, the location of each maize seedling was identified by excess-green detection and morphological filtering. The resulting seed point set was then used as input to a fuzzy C-means clustering algorithm, which segmented the individual maize plants. For individual plant height estimation, we computed the difference between the maximum elevation of the LiDAR point cloud and the average elevation of the bare-earth digital terrain model (DTM) over each corresponding area. The results revealed that our height estimation approach, tested on two cultivars, achieved R2 greater than 0.95, with root mean square errors (RMSE) of 4.55 cm, 3.04 cm, and 3.29 cm and mean absolute percentage errors (MAPE) of 3.75%, 0.91%, and 0.98% at three different growth stages, respectively. Our approach, utilizing UAV-borne LiDAR and an RGB camera, demonstrated promising performance for estimating maize height and field position.
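The height computation described in this abstract (maximum LiDAR elevation minus the average bare-earth DTM elevation over the plant's area) reduces to a one-line difference. The elevation values below are made up for illustration:

```python
import numpy as np

def plant_height(plant_points_z, dtm_cells):
    """Max LiDAR point elevation minus mean bare-ground DTM elevation under the plant."""
    return float(plant_points_z.max() - np.mean(dtm_cells))

z = np.array([102.10, 103.45, 103.80, 101.95])   # LiDAR returns on one plant (m a.s.l.)
dtm = np.array([101.50, 101.55, 101.45])         # bare-earth DTM cells under its footprint
h = plant_height(z, dtm)                         # 103.80 - 101.50 ≈ 2.30 m
```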
Collapse
|
16
|
Transgenic Rice Plants Expressing Artificial miRNA Targeting the Rice Stripe Virus MP Gene Are Highly Resistant to the Virus. BIOLOGY 2022; 11:biology11020332. [PMID: 35205198 PMCID: PMC8869529 DOI: 10.3390/biology11020332] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/24/2021] [Revised: 02/15/2022] [Accepted: 02/16/2022] [Indexed: 11/29/2022]
Abstract
Simple Summary Rice stripe virus causes a disastrous viral disease responsible for significant yield losses in rice production in South, Southeast, and East Asian countries. To decrease the use of chemical insecticides, genetic engineering has become a pivotal strategy for combating the virus. In this study, we constructed a dimeric artificial microRNA (amiRNA) precursor expression vector targeting the viral MP gene, based on the structure of the rice osa-MIR528 precursor. Marker-free transgenic plants successfully expressing the MP amiRNAs were obtained and were highly resistant to RSV infection. The novel rice germplasms generated are promising for RSV control. Abstract Rice stripe virus (RSV) causes one of the most serious viral diseases of rice, and RNA interference is one of the most efficient ways to control viral disease. In this study, we constructed an amiRNA targeting the RSV MP gene (amiR MP) based on the backbone sequence of the osa-MIR528 precursor and obtained marker-free transgenic rice plants constitutively expressing amiR MP by Agrobacterium tumefaciens-mediated transformation. A transient expression assay demonstrated that the dimeric amiR MP could be effectively recognized and could cleave at the target MP gene in plants. Northern blotting of miRNA indicated that the amiR MP-mediated viral resistance could be stably inherited. The transgenic rice plants were highly resistant to RSV (73-90%). Our research provides novel rice germplasm for RSV control.
Collapse
|
17
|
Phenotypic Traits Extraction and Genetic Characteristics Assessment of Eucalyptus Trials Based on UAV-Borne LiDAR and RGB Images. REMOTE SENSING 2022. [DOI: 10.3390/rs14030765] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
Phenotype describes the physical, physiological, and biochemical characteristics of organisms that are determined or influenced by genes and the environment. Accurate extraction of phenotypic data is a prerequisite for comprehensive forest phenotyping aimed at improving the growth and development of forest plantations. Combined with assessments of genetic characteristics, forest phenotyping can help accelerate the breeding process, improve stress resistance, and enhance the quality of planted forests. In this study, we conducted Eucalyptus trials within the Gaofeng forest farm (a typical Eucalyptus plantation site in southern China) for high-throughput phenotypic trait extraction and genetic characteristics analysis based on high-density point clouds (acquired by a UAV-borne LiDAR sensor) and high-resolution RGB images (acquired by a UAV-borne camera), aiming to develop a high-resolution, high-throughput UAV-based phenotyping approach for tree breeding. First, we compared CHM-based marker-controlled watershed segmentation (MWS) and point-cloud-based cluster segmentation (PCS) for extracting individual trees. Then, the phenotypic traits (tree height, diameter at breast height, and crown width), structural metrics (n = 19), and spectral indices (n = 9) of individual trees were extracted and assessed. Finally, a genetic characteristics analysis was carried out based on the above results, and we compared high-throughput UAV-based phenotyping against manual measurements. The results showed that in the lowest-stem-density site of the trial (760 n/ha), the overall accuracies of MWS and PCS were similar, while in the higher-stem-density sites (982 n/ha, 1239 n/ha), the overall accuracy of MWS (F(2) = 0.93, F(3) = 0.86) was higher than that of PCS (F(2) = 0.84, F(3) = 0.74); the gap between the two methods widened as stem density increased.
Both the UAV-LiDAR-extracted phenotypic traits and the manual measurements differed significantly across the Eucalyptus clones (P < 0.05), as did most of the structural metrics (47/57) and spectral indices (26/27), revealing genetic divergence between the clones. The ranking of clones demonstrated that the pure clones (of E. urophylla), the hybrid clones (with E. urophylla as the female parent), and the hybrid clones (of E. wetarensis and E. grandis) showed superior growth. This study demonstrates that UAV-based fine-resolution remote sensing can be an efficient, accurate, and precise phenotyping technology for genetic analysis in tree breeding.
Collapse
|
18
|
Lu W, Du R, Niu P, Xing G, Luo H, Deng Y, Shu L. Soybean Yield Preharvest Prediction Based on Bean Pods and Leaves Image Recognition Using Deep Learning Neural Network Combined With GRNN. FRONTIERS IN PLANT SCIENCE 2022; 12:791256. [PMID: 35095964 PMCID: PMC8792930 DOI: 10.3389/fpls.2021.791256] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/08/2021] [Accepted: 12/08/2021] [Indexed: 06/14/2023]
Abstract
Soybean yield is a highly complex trait determined by multiple factors such as genotype, environment, and their interactions, and the earlier in the growing season it can be predicted, the better. Accurate soybean yield prediction is important for germplasm innovation and for improving planting environment factors. Until now, however, soybean yield has been determined by manual weight measurement after harvest, which is time-consuming, costly, and imprecise. This paper proposes an in-field soybean yield prediction method based on bean pod and leaf image recognition using a deep learning algorithm combined with a generalized regression neural network (GRNN). A faster region-based convolutional neural network (Faster R-CNN), feature pyramid network (FPN), single shot multibox detector (SSD), and You Only Look Once (YOLOv3) were evaluated for bean pod recognition, with recognition precisions of 86.2, 89.8, 80.1, and 87.4% at 13, 7, 24, and 39 frames per second (FPS), respectively; YOLOv3 was therefore selected as the best trade-off between precision and speed. To enhance detection performance, YOLOv3 was improved by changing the IoU loss function, using an anchor-frame clustering algorithm, and utilizing a partial neural network structure, which increased recognition precision to 90.3%. To further improve yield prediction precision, leaves were identified and counted, and pods were classified by the improved YOLOv3 into single-, double-, triple-, four-, and five-seed types, because seed weight varies by pod type. Soybean seed number prediction models for each planter were then built using PLSR, BP, and GRNN, taking the counts of each pod type and the leaf count as inputs, with prediction accuracies of 96.24, 96.97, and 97.5%, respectively. Finally, the soybean yield of each planter was obtained by accumulating the weight of all soybean pod types, with an average accuracy of up to 97.43%.
The results show that it is feasible to predict in situ soybean yield with high precision by fusing leaf counts and the counts of different pod types recognized by a deep neural network combined with a GRNN, which can speed up germplasm innovation and the optimization of planting environmental factors.
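A GRNN of the kind used in this entry is essentially a Gaussian-kernel-weighted average of training targets (a Nadaraya-Watson estimator). A minimal sketch follows; the pod/leaf counts, seed counts, and smoothing parameter are invented for illustration, not the paper's data:

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=1.0):
    """GRNN prediction: Gaussian-kernel-weighted average of the training targets."""
    d2 = np.sum((X_train - x) ** 2, axis=1)      # squared distances to each training sample
    w = np.exp(-d2 / (2.0 * sigma ** 2))         # pattern-layer activations
    return float(np.dot(w, y_train) / np.sum(w)) # normalized weighted sum

# hypothetical inputs: [single-, double-, triple-seed pod counts, leaf count] -> seed count
X = np.array([[10, 20,  5, 40],
              [12, 25,  8, 45],
              [ 8, 15,  3, 35],
              [15, 30, 10, 50]], dtype=float)
y = np.array([84, 110, 60, 135], dtype=float)
pred = grnn_predict(X, y, np.array([12, 25, 8, 45], dtype=float), sigma=2.0)
```

Querying a plant identical to a training sample returns (almost exactly) that sample's seed count; unseen inputs are interpolated between neighbors, with `sigma` controlling how local the averaging is.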
Collapse
Affiliation(s)
- Wei Lu
- College of Artificial Intelligence, Nanjing Agricultural University, Nanjing, China
| | - Rongting Du
- College of Artificial Intelligence, Nanjing Agricultural University, Nanjing, China
| | - Pengshuai Niu
- College of Artificial Intelligence, Nanjing Agricultural University, Nanjing, China
| | - Guangnan Xing
- College of Agriculture, Nanjing Agricultural University, Nanjing, China
| | - Hui Luo
- College of Artificial Intelligence, Nanjing Agricultural University, Nanjing, China
| | - Yiming Deng
- College of Engineering, Michigan State University, East Lansing, MI, United States
| | - Lei Shu
- College of Artificial Intelligence, Nanjing Agricultural University, Nanjing, China
| |
Collapse
|
19
|
Automatic Liver Segmentation in CT Images with Enhanced GAN and Mask Region-Based CNN Architectures. BIOMED RESEARCH INTERNATIONAL 2021; 2021:9956983. [PMID: 34957310 PMCID: PMC8702320 DOI: 10.1155/2021/9956983] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/29/2021] [Revised: 09/22/2021] [Accepted: 11/26/2021] [Indexed: 01/10/2023]
Abstract
Liver image segmentation is increasingly employed for key medical purposes, including liver function assessment, disease diagnosis, and treatment planning. In this work, we introduce a liver image segmentation method based on generative adversarial networks (GANs) and mask region-based convolutional neural networks (Mask R-CNN). First, since most of the resulting images have noisy features, we explored combining Mask R-CNN with GANs to enhance pixel-wise classification. Second, k-means clustering was used to select anchor aspect ratios that better fit the image data, which helps boost segmentation performance. Finally, we propose a GAN Mask R-CNN algorithm that achieved superior performance compared with the conventional Mask R-CNN, Mask-CNN, and k-means algorithms in terms of the Dice similarity coefficient (DSC) and the MICCAI metrics, and compared with ten state-of-the-art algorithms in terms of six Boolean indicators. We hope that this work can be used to optimize the segmentation and classification of liver anomalies.
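Anchor selection by k-means, as mentioned in this entry, is often done on box shapes with 1 - IoU as the distance (the YOLO-style variant). The sketch below is that generic variant under assumed toy boxes, not necessarily the exact clustering this paper used:

```python
import numpy as np

def iou_wh(box, anchors):
    """IoU between a (w, h) box and each (w, h) anchor, all corner-aligned at the origin."""
    inter = np.minimum(box[0], anchors[:, 0]) * np.minimum(box[1], anchors[:, 1])
    union = box[0] * box[1] + anchors[:, 0] * anchors[:, 1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=20):
    """k-means on box shapes using 1 - IoU as the distance metric."""
    anchors = boxes[:k].copy()                   # simple deterministic initialization
    for _ in range(iters):
        # assign each box to the anchor it overlaps most
        assign = np.array([np.argmax(iou_wh(b, anchors)) for b in boxes])
        for c in range(k):                       # recompute each cluster's mean shape
            if np.any(assign == c):
                anchors[c] = boxes[assign == c].mean(axis=0)
    return anchors

boxes = np.array([[10., 12.], [11., 13.], [50., 60.], [48., 55.]])  # (w, h) of labeled boxes
anchors = kmeans_anchors(boxes, k=2)             # one small and one large anchor
```

The IoU distance groups boxes by shape rather than by absolute size difference, so the resulting anchors track the aspect ratios actually present in the annotations.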
Collapse
|
20
|
Li W, Guo S, Zhai Y, Liu F, Lai Z, Han S. Target classification of multislit streak tube imaging lidar based on deep learning. APPLIED OPTICS 2021; 60:8809-8817. [PMID: 34613107 DOI: 10.1364/ao.437470] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/09/2021] [Accepted: 08/27/2021] [Indexed: 06/13/2023]
Abstract
To reduce the impact of the image reconstruction process and improve the identification efficiency of the multislit streak tube imaging lidar (MS-STIL) system, an object classification method based on the echo of the MS-STIL system is proposed. A streak image data set is constructed that contains a total of 240 common outdoor targets in 6 categories. Additionally, a deep-learning network model based on ResNet is chosen to implement streak image classification. The effects of two classification methods based on streak images and reconstructed depth images are compared. To verify the maximum classification capability of the proposed method, the recognition performance is investigated under 6 and 20 classes. The results show that the classification accuracy decreases from 99.42% to 67.64%. After the data set is expanded, the classification accuracy improves to 85.35% when the number of target classes is 20.
Collapse
|
21
|
Liu F, Song Q, Zhao J, Mao L, Bu H, Hu Y, Zhu XG. Canopy occupation volume as an indicator of canopy photosynthetic capacity. THE NEW PHYTOLOGIST 2021; 232:941-956. [PMID: 34245568 DOI: 10.1111/nph.17611] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/19/2021] [Accepted: 07/03/2021] [Indexed: 06/13/2023]
Abstract
Leaf angle and leaf area index together influence canopy light interception and canopy photosynthesis. However, so far, there is no effective method to identify the optimal combination of these two parameters for canopy photosynthesis. In this study, first a robust high-throughput method for accurate segmentation of maize organs based on 3D point cloud data was developed, then the segmented plant organs were used to generate new 3D point clouds for canopies of altered architectures. With this, we simulated the synergistic effect of leaf area and leaf angle on canopy photosynthesis. The results show that, compared to the traditional parameters describing canopy photosynthesis, including leaf area index, facet angle and canopy coverage, a new parameter, the canopy occupation volume (COV), can better explain the variation in canopy photosynthetic capacity. Specifically, COV can explain > 79% of the variation in canopy photosynthesis generated by changing leaf angle and > 84% of the variation generated by changing leaf area. As COV can be calculated in a high-throughput manner from canopy point clouds, it can be used to evaluate canopy architecture in breeding and agronomic research.
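A canopy occupation volume of the kind described here can be approximated from a point cloud by counting occupied voxels. This is a minimal sketch, assuming a simple voxel-occupancy definition of COV (the paper's exact definition may differ); the voxel size and the random cloud are hypothetical:

```python
import numpy as np

def canopy_occupation_volume(points, voxel=0.05):
    """Approximate COV as the number of occupied voxels times
    the volume of one voxel (coordinates and voxel size in metres)."""
    idx = np.floor(points / voxel).astype(int)    # voxel index per point
    occupied = np.unique(idx, axis=0).shape[0]    # distinct occupied voxels
    return occupied * voxel ** 3

# hypothetical canopy point cloud: 1000 points inside a 1 m cube
rng = np.random.default_rng(1)
pts = rng.random((1000, 3))
cov = canopy_occupation_volume(pts, voxel=0.25)
```

With a dense cloud filling the unit cube, the estimate approaches 1 m³; for a real canopy, the voxel size trades off resolution against sensitivity to point density.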
Collapse
Affiliation(s)
- Fusang Liu
- National Key Laboratory of Plant Molecular Genetics, CAS Center for Excellence in Molecular Plant Sciences, Shanghai Institute of Plant Physiology and Ecology, Chinese Academy of Sciences, Shanghai, 200031, China
| | - Qingfeng Song
- National Key Laboratory of Plant Molecular Genetics, CAS Center for Excellence in Molecular Plant Sciences, Shanghai Institute of Plant Physiology and Ecology, Chinese Academy of Sciences, Shanghai, 200031, China
| | - Jinke Zhao
- National Key Laboratory of Plant Molecular Genetics, CAS Center for Excellence in Molecular Plant Sciences, Shanghai Institute of Plant Physiology and Ecology, Chinese Academy of Sciences, Shanghai, 200031, China
| | - Linxiong Mao
- National Key Laboratory of Plant Molecular Genetics, CAS Center for Excellence in Molecular Plant Sciences, Shanghai Institute of Plant Physiology and Ecology, Chinese Academy of Sciences, Shanghai, 200031, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
| | - Hongyi Bu
- Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai, 200083, China
| | - Yong Hu
- Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai, 200083, China
| | - Xin-Guang Zhu
- National Key Laboratory of Plant Molecular Genetics, CAS Center for Excellence in Molecular Plant Sciences, Shanghai Institute of Plant Physiology and Ecology, Chinese Academy of Sciences, Shanghai, 200031, China
| |
Collapse
|
22
|
Jin S, Su Y, Zhang Y, Song S, Li Q, Liu Z, Ma Q, Ge Y, Liu L, Ding Y, Baret F, Guo Q. Exploring Seasonal and Circadian Rhythms in Structural Traits of Field Maize from LiDAR Time Series. PLANT PHENOMICS (WASHINGTON, D.C.) 2021; 2021:9895241. [PMID: 34557676 PMCID: PMC8441379 DOI: 10.34133/2021/9895241] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/18/2021] [Accepted: 07/27/2021] [Indexed: 06/02/2023]
Abstract
Plant growth rhythm in structural traits is important for better understanding plant response to the ever-changing environment. Terrestrial laser scanning (TLS) is a well-suited tool to study structural rhythm under field conditions. Recent studies have used TLS to describe the structural rhythm of trees, but no consistent patterns have been drawn. Meanwhile, whether TLS can capture structural rhythm in crops is unclear. Here, we aim to explore the seasonal and circadian rhythms in maize structural traits at both the plant and leaf levels from time-series TLS. The seasonal rhythm was studied using TLS data collected at four key growth periods, including jointing, bell-mouthed, heading, and maturity periods. Circadian rhythms were explored by using TLS data acquired around every 2 hours in a whole day under standard and cold stress conditions. Results showed that TLS can quantify the seasonal and circadian rhythm in structural traits at both plant and leaf levels. (1) Leaf inclination angle decreased significantly between the jointing stage and bell-mouthed stage. Leaf azimuth was stable after the jointing stage. (2) Some individual-level structural rhythms (e.g., azimuth and projected leaf area/PLA) were consistent with leaf-level structural rhythms. (3) The circadian rhythms of some traits (e.g., PLA) were not consistent under standard and cold stress conditions. (4) Environmental factors showed better correlations with leaf traits under cold stress than standard conditions. Temperature was the most important factor that significantly correlated with all leaf traits except leaf azimuth. This study highlights the potential of time-series TLS in studying outdoor agricultural chronobiology.
Collapse
Affiliation(s)
- Shichao Jin
- Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, Collaborative Innovation Centre for Modern Crop Production Co-Sponsored by Province and Ministry, Jiangsu Key Laboratory for Information Agriculture, Nanjing Agricultural University, Nanjing 210095, China
- Jiangsu Provincial Key Laboratory of Geographic Information Science and Technology, International Institute for Earth System Sciences, Nanjing University, Nanjing, Jiangsu 210023, China
- State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing 100093, China
- University of Chinese Academy of Sciences, Beijing 100049, China
| | - Yanjun Su
- State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing 100093, China
- University of Chinese Academy of Sciences, Beijing 100049, China
| | - Yongguang Zhang
- Jiangsu Provincial Key Laboratory of Geographic Information Science and Technology, International Institute for Earth System Sciences, Nanjing University, Nanjing, Jiangsu 210023, China
| | - Shilin Song
- State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing 100093, China
- University of Chinese Academy of Sciences, Beijing 100049, China
| | - Qing Li
- National Technique Innovation Center for Regional Wheat Production/Key Laboratory of Crop Ecophysiology, Ministry of Agriculture, Nanjing Agricultural University, Nanjing, 210095 Jiangsu, China
| | - Zhonghua Liu
- State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing 100093, China
- University of Chinese Academy of Sciences, Beijing 100049, China
| | - Qin Ma
- Department of Forestry, Mississippi State University, Mississippi State 39759, USA
| | - Yan Ge
- Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, Collaborative Innovation Centre for Modern Crop Production Co-Sponsored by Province and Ministry, Jiangsu Key Laboratory for Information Agriculture, Nanjing Agricultural University, Nanjing 210095, China
| | - LingLi Liu
- State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing 100093, China
- University of Chinese Academy of Sciences, Beijing 100049, China
| | - Yanfeng Ding
- Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, Collaborative Innovation Centre for Modern Crop Production Co-Sponsored by Province and Ministry, Jiangsu Key Laboratory for Information Agriculture, Nanjing Agricultural University, Nanjing 210095, China
| | - Frédéric Baret
- Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, Collaborative Innovation Centre for Modern Crop Production Co-Sponsored by Province and Ministry, Jiangsu Key Laboratory for Information Agriculture, Nanjing Agricultural University, Nanjing 210095, China
- Environnement Méditerranéen et Modélisation des Agro-Hydrosystèmes (EMMAH), Institut National de la Recherche Agronomique, Unité Mixte de Recherche 1114 Domaine Saint-Paul, Avignon Cedex 84914, France
| | - Qinghua Guo
- Department of Ecology, College of Environmental Sciences, and Key Laboratory of Earth Surface Processes of the Ministry of Education, Peking University, Beijing 100871, China
| |
Collapse
|
23
|
Miao T, Wen W, Li Y, Wu S, Zhu C, Guo X. Label3DMaize: toolkit for 3D point cloud data annotation of maize shoots. Gigascience 2021; 10:6272094. [PMID: 33963385 PMCID: PMC8105162 DOI: 10.1093/gigascience/giab031] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2020] [Revised: 03/10/2021] [Accepted: 04/12/2021] [Indexed: 01/31/2023] Open
Abstract
Background The 3D point cloud is the most direct and effective data form for studying plant structure and morphology. In point cloud studies, the segmentation of individual plants into organs directly determines the accuracy of organ-level phenotype estimation and the reliability of the 3D plant reconstruction. However, highly accurate, automatic, and robust point cloud segmentation approaches for plants are unavailable. Thus, the high-throughput segmentation of many shoots is challenging. Although deep learning can feasibly solve this issue, software tools for 3D point cloud annotation to construct the training dataset are lacking. Results We propose a top-down point cloud segmentation algorithm using optimal transportation distance for maize shoots. We apply our point cloud annotation toolkit for maize shoots, Label3DMaize, to achieve semi-automatic point cloud segmentation and annotation of maize shoots at different growth stages, through a series of operations, including stem segmentation, coarse segmentation, fine segmentation, and sample-based segmentation. The toolkit takes ∼4–10 minutes to segment a maize shoot and consumes 10–20% of the total time if only coarse segmentation is required. Fine segmentation is more detailed than coarse segmentation, especially at the organ connection regions. The accuracy of coarse segmentation can reach 97.2% of that of fine segmentation. Conclusion Label3DMaize integrates point cloud segmentation algorithms and manual interactive operations, realizing semi-automatic point cloud segmentation of maize shoots at different growth stages. The toolkit provides a practical data annotation tool for further online segmentation research based on deep learning and is expected to promote automatic point cloud processing of various plants.
Collapse
Affiliation(s)
- Teng Miao
- College of Information and Electrical Engineering, Shenyang Agricultural University, Dongling Road, Shenhe District, Liaoning Province, Shenyang 110161, China
| | - Weiliang Wen
- Beijing Research Center for Information Technology in Agriculture, 11#Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China.,National Engineering Research Center for Information Technology in Agriculture, 11#Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China.,Beijing Key Lab of Digital Plant, 11#Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
| | - Yinglun Li
- National Engineering Research Center for Information Technology in Agriculture, 11#Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China.,Beijing Key Lab of Digital Plant, 11#Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
| | - Sheng Wu
- Beijing Research Center for Information Technology in Agriculture, 11#Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China.,National Engineering Research Center for Information Technology in Agriculture, 11#Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China.,Beijing Key Lab of Digital Plant, 11#Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
| | - Chao Zhu
- College of Information and Electrical Engineering, Shenyang Agricultural University, Dongling Road, Shenhe District, Liaoning Province, Shenyang 110161, China
| | - Xinyu Guo
- Beijing Research Center for Information Technology in Agriculture, 11#Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China.,National Engineering Research Center for Information Technology in Agriculture, 11#Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China.,Beijing Key Lab of Digital Plant, 11#Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
| |
Collapse
|
24
|
Ghahremani M, Williams K, Corke FMK, Tiddeman B, Liu Y, Doonan JH. Deep Segmentation of Point Clouds of Wheat. FRONTIERS IN PLANT SCIENCE 2021; 12:608732. [PMID: 33841454 PMCID: PMC8025700 DOI: 10.3389/fpls.2021.608732] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/21/2020] [Accepted: 02/24/2021] [Indexed: 05/31/2023]
Abstract
The 3D analysis of plants has become increasingly effective in modeling the relative structure of organs and other traits of interest. In this paper, we introduce a novel pattern-based deep neural network, Pattern-Net, for segmentation of point clouds of wheat. This study is the first to segment the point clouds of wheat into defined organs and to analyse their traits directly in 3D space. Point clouds have no regular grid and thus their segmentation is challenging. Pattern-Net creates a dynamic link among neighbors to seek stable patterns from a 3D point set across several levels of abstraction using the K-nearest neighbor algorithm. To this end, different layers are connected to each other to create complex patterns from the simple ones, strengthen dynamic link propagation, alleviate the vanishing-gradient problem, encourage link reuse and substantially reduce the number of parameters. The proposed deep network is capable of analysing and decomposing unstructured complex point clouds into semantically meaningful parts. Experiments on a wheat dataset verify the effectiveness of our approach for segmentation of wheat in 3D space.
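The dynamic links that Pattern-Net builds among neighbours rest on a K-nearest-neighbour query over the point set. The following is a minimal brute-force sketch of that query, not the network itself; the toy points and `k` are hypothetical:

```python
import numpy as np

def knn_indices(points, k):
    """Indices of each point's k nearest neighbours (excluding itself),
    the kind of neighbourhood a KNN-based point network links at each layer."""
    # pairwise Euclidean distance matrix, N x N
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbour
    return np.argsort(d, axis=1)[:, :k]  # k closest indices per point

pts = np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 2.], [5., 5., 5.]])
nbrs = knn_indices(pts, k=2)
```

For large clouds a spatial index (e.g. a k-d tree) would replace the O(N²) distance matrix, but the neighbourhood structure fed to the network is the same.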
Collapse
Affiliation(s)
- Morteza Ghahremani
- National Plant Phenomics Centre, Institute of Biological, Environmental and Rural Sciences, Aberystwyth University, Aberystwyth, United Kingdom
- Department of Computer Science, Aberystwyth University, Aberystwyth, United Kingdom
| | - Kevin Williams
- National Plant Phenomics Centre, Institute of Biological, Environmental and Rural Sciences, Aberystwyth University, Aberystwyth, United Kingdom
| | - Fiona M. K. Corke
- National Plant Phenomics Centre, Institute of Biological, Environmental and Rural Sciences, Aberystwyth University, Aberystwyth, United Kingdom
| | - Bernard Tiddeman
- Department of Computer Science, Aberystwyth University, Aberystwyth, United Kingdom
| | - Yonghuai Liu
- Department of Computer Science, Edge Hill University, Ormskirk, United Kingdom
| | - John H. Doonan
- National Plant Phenomics Centre, Institute of Biological, Environmental and Rural Sciences, Aberystwyth University, Aberystwyth, United Kingdom
| |
Collapse
|
25
|
Wu D, Wu D, Feng H, Duan L, Dai G, Liu X, Wang K, Yang P, Chen G, Gay AP, Doonan JH, Niu Z, Xiong L, Yang W. A deep learning-integrated micro-CT image analysis pipeline for quantifying rice lodging resistance-related traits. PLANT COMMUNICATIONS 2021; 2:100165. [PMID: 33898978 PMCID: PMC8060729 DOI: 10.1016/j.xplc.2021.100165] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/27/2020] [Revised: 11/07/2020] [Accepted: 01/26/2021] [Indexed: 05/20/2023]
Abstract
Lodging is a common problem in rice, reducing its yield and mechanical harvesting efficiency. Rice architecture is a key aspect of its domestication and a major factor that limits its high productivity. The ideal rice culm structure, including major_axis_culm, minor_axis_culm, and wall_thickness_culm, is critical for improving lodging resistance. However, the traditional method of measuring rice culms is destructive, time consuming, and labor intensive. In this study, we used a high-throughput micro-CT-RGB imaging system and deep learning (SegNet) to develop a high-throughput micro-CT image analysis pipeline that can extract 24 rice culm morphological traits and lodging resistance-related traits. When manual and automatic measurements were compared at the mature stage, the mean absolute percentage errors for major_axis_culm, minor_axis_culm, and wall_thickness_culm in 104 indica rice accessions were 6.03%, 5.60%, and 9.85%, respectively, and the R2 values were 0.799, 0.818, and 0.623. We also built models of bending stress using culm traits at the mature and tillering stages, and the R2 values were 0.722 and 0.544, respectively. The modeling results indicated that this method can quantify lodging resistance nondestructively, even at an early growth stage. In addition, we also evaluated the relationships of bending stress to shoot dry weight, culm density, and drought-related traits and found that plants with greater resistance to bending stress had slightly higher biomass, culm density, and culm area but poorer drought resistance. In conclusion, we developed a deep learning-integrated micro-CT image analysis pipeline to accurately quantify the phenotypic traits of rice culms in ∼4.6 min per plant; this pipeline will assist in future high-throughput screening of large rice populations for lodging resistance.
Collapse
Affiliation(s)
- Di Wu
- National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research, Hubei Key Laboratory of Agricultural Bioinformatics and College of Engineering, Huazhong Agricultural University, Wuhan 430070, PR China
- School of Information Engineering, Wuhan Technology and Business University, Wuhan 430065, PR China
| | - Dan Wu
- National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research, Hubei Key Laboratory of Agricultural Bioinformatics and College of Engineering, Huazhong Agricultural University, Wuhan 430070, PR China
| | - Hui Feng
- National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research, Hubei Key Laboratory of Agricultural Bioinformatics and College of Engineering, Huazhong Agricultural University, Wuhan 430070, PR China
| | - Lingfeng Duan
- National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research, Hubei Key Laboratory of Agricultural Bioinformatics and College of Engineering, Huazhong Agricultural University, Wuhan 430070, PR China
| | - Guoxing Dai
- National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research, Hubei Key Laboratory of Agricultural Bioinformatics and College of Engineering, Huazhong Agricultural University, Wuhan 430070, PR China
| | - Xiao Liu
- National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research, Hubei Key Laboratory of Agricultural Bioinformatics and College of Engineering, Huazhong Agricultural University, Wuhan 430070, PR China
| | - Kang Wang
- National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research, Hubei Key Laboratory of Agricultural Bioinformatics and College of Engineering, Huazhong Agricultural University, Wuhan 430070, PR China
| | - Peng Yang
- National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research, Hubei Key Laboratory of Agricultural Bioinformatics and College of Engineering, Huazhong Agricultural University, Wuhan 430070, PR China
| | - Guoxing Chen
- National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research, Hubei Key Laboratory of Agricultural Bioinformatics and College of Engineering, Huazhong Agricultural University, Wuhan 430070, PR China
| | - Alan P. Gay
- The National Plant Phenomics Centre, Institute of Biological, Environmental and Rural Sciences, Aberystwyth University, Aberystwyth, UK
| | - John H. Doonan
- The National Plant Phenomics Centre, Institute of Biological, Environmental and Rural Sciences, Aberystwyth University, Aberystwyth, UK
| | - Zhiyou Niu
- National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research, Hubei Key Laboratory of Agricultural Bioinformatics and College of Engineering, Huazhong Agricultural University, Wuhan 430070, PR China
| | - Lizhong Xiong
- National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research, Hubei Key Laboratory of Agricultural Bioinformatics and College of Engineering, Huazhong Agricultural University, Wuhan 430070, PR China
| | - Wanneng Yang
- National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research, Hubei Key Laboratory of Agricultural Bioinformatics and College of Engineering, Huazhong Agricultural University, Wuhan 430070, PR China
- Corresponding author
| |
Collapse
|
26
|
Individual Tree Crown Segmentation Directly from UAV-Borne LiDAR Data Using the PointNet of Deep Learning. FORESTS 2021. [DOI: 10.3390/f12020131] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
Accurate individual tree crown (ITC) segmentation from scanned point clouds is a fundamental task in forest biomass monitoring and forest ecology management. Light detection and ranging (LiDAR), as a mainstream tool for forest surveys, is advancing the pattern of forest data acquisition. In this study, we applied a novel deep learning framework that directly processes forest point clouds from four forest types (i.e., the nursery base, the monastery garden, the mixed forest, and the defoliated forest) to realize ITC segmentation. The specific steps of our approach were as follows: first, a voxelization strategy was conducted to subdivide the collected point clouds, covering various tree species from various forest types, into many voxels. These voxels containing point clouds were taken as training samples for the PointNet deep learning framework to identify tree crowns at the voxel scale. Second, based on the initial segmentation results, we used height-related gradient information to accurately delineate the boundaries of each tree crown. Meanwhile, the retrieved crown breadths of individual trees were compared with field measurements to verify the effectiveness of our approach. Among the four forest types, our results revealed the best performance for the nursery base (tree crown detection rate r = 0.90; crown breadth estimation R2 > 0.94 and root mean squared error (RMSE) < 0.2 m). A sound performance was also achieved for the monastery garden and mixed forest, which had complex forest structures, complicated intersections of branches and different building types, with r = 0.85, R2 > 0.88 and RMSE < 0.6 m for the monastery garden and r = 0.80, R2 > 0.85 and RMSE < 0.8 m for the mixed forest. For the fourth forest plot type, with crown defoliation distributed across the woodland, we achieved r = 0.82, R2 > 0.79 and RMSE < 0.7 m. Our method presents a robust framework, inspired by deep learning technology and computer graphics theory, that solves the ITC segmentation problem and retrieves forest parameters under various forest conditions.
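The voxelization step this abstract describes, subdividing a forest cloud into voxels whose points become per-voxel training samples, can be sketched as below. This is an illustrative sketch under assumed choices (a 2-D x-y grid and a hypothetical voxel size), not the authors' implementation:

```python
import numpy as np
from collections import defaultdict

def voxelize(points, voxel=2.0):
    """Subdivide a point cloud into grid cells over x and y; each cell's
    points form one training sample for a per-voxel classifier."""
    samples = defaultdict(list)
    for p in points:
        key = tuple(np.floor(p[:2] / voxel).astype(int))  # 2-D grid index
        samples[key].append(p)
    return {k: np.asarray(v) for k, v in samples.items()}

# hypothetical points: x, y, z in metres
pts = np.array([[0.5, 0.5, 3.0], [1.0, 1.5, 7.0], [3.5, 0.2, 4.0]])
vox = voxelize(pts, voxel=2.0)
```

Each value in `vox` is an (n, 3) array of points that would be normalized and fed to the point-based network as a single sample.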
Collapse
|
27
|
Zhu B, Liu F, Xie Z, Guo Y, Li B, Ma Y. Quantification of light interception within image-based 3-D reconstruction of sole and intercropped canopies over the entire growth season. ANNALS OF BOTANY 2020; 126:701-712. [PMID: 32179920 PMCID: PMC7489074 DOI: 10.1093/aob/mcaa046] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/05/2019] [Accepted: 03/12/2020] [Indexed: 05/27/2023]
Abstract
BACKGROUND AND AIMS Light interception is closely related to canopy architecture. Few studies based on multi-view photography have been conducted in a field environment, particularly studies that link 3-D plant architecture with a radiation model to quantify the dynamic canopy light interception. In this study, we combined realistic 3-D plant architecture with a radiation model to quantify and evaluate the effect of differences in planting patterns and row orientations on canopy light interception. METHODS The 3-D architectures of maize and soybean plants were reconstructed for sole crops and intercrops based on multi-view images obtained at five growth dates in the field. We evaluated the accuracy of the calculated leaf length, maximum leaf width, plant height and leaf area according to the measured data. The light distribution within the 3-D plant canopy was calculated with a 3-D radiation model. Finally, we evaluated canopy light interception in different row orientations. KEY RESULTS There was good agreement between the measured and calculated phenotypic traits, with an R2 > 0.97. The light distribution was more uniform for intercropped maize and more concentrated for sole maize. At the maize silking stage, 85 % of radiation was intercepted by approx. 55 % of the upper canopy region for maize and by approx. 33 % of the upper canopy region for soybean. There was no significant difference in daily light interception between the different row orientations for the entire intercropping and sole systems. However, for intercropped maize, near east-west orientations showed approx. 19 % higher daily light interception than near south-north orientations. For intercropped soybean, daily light interception showed the opposite trend. It was approx. 49 % higher for near south-north orientations than for near east-west orientations.
CONCLUSIONS The accurate reconstruction of 3-D plants grown in the field based on multi-view images provides the possibility for high-throughput 3-D phenotyping in the field and allows a better understanding of the relationship between canopy architecture and the light environment.
Collapse
Affiliation(s)
- Binglin Zhu
- Key Laboratory of Arable Land Conservation (North China), Ministry of Agriculture, College of Land Science and Technology, China Agricultural University, Beijing, China
| | - Fusang Liu
- Key Laboratory of Arable Land Conservation (North China), Ministry of Agriculture, College of Land Science and Technology, China Agricultural University, Beijing, China
| | - Ziwen Xie
- Key Laboratory of Arable Land Conservation (North China), Ministry of Agriculture, College of Land Science and Technology, China Agricultural University, Beijing, China
| | - Yan Guo
- Key Laboratory of Arable Land Conservation (North China), Ministry of Agriculture, College of Land Science and Technology, China Agricultural University, Beijing, China
| | - Baoguo Li
- Key Laboratory of Arable Land Conservation (North China), Ministry of Agriculture, College of Land Science and Technology, China Agricultural University, Beijing, China
| | - Yuntao Ma
- Key Laboratory of Arable Land Conservation (North China), Ministry of Agriculture, College of Land Science and Technology, China Agricultural University, Beijing, China
| |
Collapse
|
28
|
Maize Kernel Abortion Recognition and Classification Using Binary Classification Machine Learning Algorithms and Deep Convolutional Neural Networks. AI 2020. [DOI: 10.3390/ai1030024] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/16/2023] Open
Abstract
Maize kernel traits such as kernel length, kernel width, and kernel number determine the total kernel weight and, consequently, maize yield. Therefore, the measurement of kernel traits is important for maize breeding and the evaluation of maize yield. There are a few methods that allow the extraction of ear and kernel features through image processing. We evaluated the potential of deep convolutional neural networks and binary machine learning (ML) algorithms (logistic regression (LR), support vector machine (SVM), AdaBoost (ADB), classification tree (CART), and k-nearest neighbor (kNN)) for accurate maize kernel abortion detection and classification. The algorithms were trained using 75% of 66 total images, and the remaining 25% was used for testing their performance. Confusion matrix, classification accuracy, and precision were the major metrics in evaluating the performance of the algorithms. The SVM and LR algorithms were highly accurate and precise (100%) under all the abortion statuses, while the remaining algorithms had a performance greater than 95%. Deep convolutional neural networks were further evaluated using different activation and optimization techniques. The best performance (100% accuracy) was reached using the rectified linear unit (ReLU) activation function and the Adam optimization technique. Maize ears with abortion were accurately detected by all tested algorithms, with minimum training and testing time compared to ears without abortion. The findings suggest that deep convolutional neural networks, supplemented with the binary machine learning algorithms, can be used to detect the maize ear abortion status in maize breeding programs. By using a convolutional neural network (CNN) method, more data (big data) can be collected and processed for hundreds of maize ears, accelerating the phenotyping process.
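One of the binary classifiers evaluated here, logistic regression, is simple enough to sketch end to end. This is a minimal from-scratch trainer on toy data, not the study's pipeline; the two features and the labels are hypothetical stand-ins for image-derived kernel features:

```python
import numpy as np

def train_logreg(X, y, lr=0.5, epochs=500):
    """Minimal logistic-regression trainer for a binary label
    (e.g. aborted vs. non-aborted ear) on pre-extracted features."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probabilities
        w -= lr * X.T @ (p - y) / len(y)        # gradient step on weights
        b -= lr * np.mean(p - y)                # gradient step on bias
    return w, b

# hypothetical 2-D features; label 1 = aborted
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y = np.array([0, 0, 1, 1])
w, b = train_logreg(X, y)
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
```

In practice a library implementation (e.g. from a standard ML toolkit) would be used, but the decision rule is the same thresholded sigmoid.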
Collapse
|
29
|
Yang Z, Gao S, Xiao F, Li G, Ding Y, Guo Q, Paul MJ, Liu Z. Leaf to panicle ratio (LPR): a new physiological trait indicative of source and sink relation in japonica rice based on deep learning. PLANT METHODS 2020; 16:117. [PMID: 32863854 PMCID: PMC7449046 DOI: 10.1186/s13007-020-00660-y] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/23/2020] [Accepted: 08/18/2020] [Indexed: 05/17/2023]
Abstract
BACKGROUND Identification and characterization of new traits with a sound physiological foundation is essential for crop breeding and production management. Deep learning has been widely used in image data analysis to explore spatial and temporal information on crop growth and development, thus strengthening the power of identification of physiological traits. Taking advantage of deep learning, this study aims to develop a novel canopy-structure trait that integrates source and sink in japonica rice. RESULTS We applied a deep learning approach to accurately segment leaf and panicle, and subsequently developed the GvCrop procedure to calculate the leaf to panicle ratio (LPR) of the rice canopy during the grain-filling stage. Images of the training dataset were captured in field experiments, with large variations in camera shooting angle, the elevation and azimuth angles of the sun, rice genotype, and plant phenological stage. Accurately labeled by manually annotating the panicle and leaf regions, the resulting dataset was used to train FPN-Mask (Feature Pyramid Network Mask) models, consisting of a backbone network and a task-specific sub-network. The model with the highest accuracy was then selected to examine the variation in LPR among 192 rice germplasms and among agronomic practices. Despite the challenging field conditions, the FPN-Mask models achieved high detection accuracy, with pixel accuracy of 0.99 for panicles and 0.98 for leaves. The calculated LPR displayed large spatial and temporal variation as well as genotypic differences. In addition, it was responsive to agronomic practices such as nitrogen fertilization and spraying of plant growth regulators. CONCLUSION Deep learning techniques can achieve high accuracy in the simultaneous detection of panicle and leaf data from complex rice field images. The proposed FPN-Mask model is applicable for detecting and quantifying crop performance under field conditions. The newly identified trait of LPR should provide a high-throughput protocol for breeders to select superior rice cultivars, as well as for agronomists to precisely manage field crops toward a good balance of source and sink.
Affiliation(s)
- Zongfeng Yang
- College of Agriculture, Nanjing Agricultural University, Nanjing, 210095 China
- Shang Gao
- State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing, 100093 China
- Feng Xiao
- College of Agriculture, Nanjing Agricultural University, Nanjing, 210095 China
- Ganghua Li
- College of Agriculture, Nanjing Agricultural University, Nanjing, 210095 China
- Yangfeng Ding
- College of Agriculture, Nanjing Agricultural University, Nanjing, 210095 China
- Qinghua Guo
- State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing, 100093 China
- Matthew J. Paul
- Plant Science, Rothamsted Research, Harpenden, Hertfordshire AL5 2JQ UK
- Zhenghui Liu
- College of Agriculture, Nanjing Agricultural University, Nanjing, 210095 China
- Collaborative Innovation Center for Modern Crop Production, Nanjing Agricultural University, Nanjing, 210095 China
30
Tausen M, Clausen M, Moeskjær S, Shihavuddin ASM, Dahl AB, Janss L, Andersen SU. Greenotyper: Image-Based Plant Phenotyping Using Distributed Computing and Deep Learning. FRONTIERS IN PLANT SCIENCE 2020; 11:1181. [PMID: 32849731 PMCID: PMC7427585 DOI: 10.3389/fpls.2020.01181] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/17/2019] [Accepted: 07/21/2020] [Indexed: 05/07/2023]
Abstract
Image-based phenotype data with high temporal resolution offers advantages over end-point measurements in plant quantitative genetics experiments, because growth dynamics can be assessed and analysed for genotype-phenotype association. Recently, network-based camera systems have been deployed as customizable, low-cost phenotyping solutions. Here, we implemented a large, automated image-capture system based on distributed computing using 180 networked Raspberry Pi units that could simultaneously monitor 1,800 white clover (Trifolium repens) plants. The camera system proved stable with an average uptime of 96% across all 180 cameras. For analysis of the captured images, we developed the Greenotyper image analysis pipeline. It detected the location of the plants with a bounding box accuracy of 97.98%, and the U-net-based plant segmentation had an intersection over union accuracy of 0.84 and a pixel accuracy of 0.95. We used Greenotyper to analyze a total of 355,027 images, which required 24-36 h. Automated phenotyping using a large number of static cameras and plants thus proved a cost-effective alternative to systems relying on conveyor belts or mobile cameras.
Affiliation(s)
- Marni Tausen
- Bioinformatics Research Centre, Aarhus University, Aarhus, Denmark
- Department of Molecular Biology and Genetics, Aarhus University, Aarhus, Denmark
- Marc Clausen
- Department of Molecular Biology and Genetics, Aarhus University, Aarhus, Denmark
- Sara Moeskjær
- Department of Molecular Biology and Genetics, Aarhus University, Aarhus, Denmark
- ASM Shihavuddin
- Image Analysis & Computer Graphics, DTU Compute, Lyngby, Denmark
- EEE Department, Green University of Bangladesh (GUB), Dhaka, Bangladesh
- Luc Janss
- Department of Molecular Biology and Genetics, Aarhus University, Aarhus, Denmark
31
Jin S, Su Y, Song S, Xu K, Hu T, Yang Q, Wu F, Xu G, Ma Q, Guan H, Pang S, Li Y, Guo Q. Non-destructive estimation of field maize biomass using terrestrial lidar: an evaluation from plot level to individual leaf level. PLANT METHODS 2020; 16:69. [PMID: 32435271 PMCID: PMC7222476 DOI: 10.1186/s13007-020-00613-5] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/21/2020] [Accepted: 05/05/2020] [Indexed: 06/02/2023]
Abstract
BACKGROUND Precision agriculture is an emerging research field that relies on monitoring and managing field variability in phenotypic traits. An important phenotypic trait is biomass, a comprehensive indicator that reflects crop yield. However, non-destructive biomass estimation at fine levels remains challenging due to the lack of accurate, high-throughput phenotypic data and algorithms. RESULTS In this study, we evaluated the capability of terrestrial light detection and ranging (lidar) data for estimating field maize biomass at the plot, individual-plant, leaf-group, and individual-organ (i.e., individual leaf or stem) levels. Terrestrial lidar data for 59 maize plots with more than 1000 maize plants were collected and used to calculate phenotypes through a deep learning-based pipeline; these phenotypes were then used to predict maize biomass through simple regression (SR), stepwise multiple regression (SMR), artificial neural networks (ANN), and random forests (RF). The results showed that terrestrial lidar data were useful for estimating maize biomass at all levels (at each level, R2 was greater than 0.80), and biomass estimation at the leaf-group level was the most precise (R2 = 0.97, RMSE = 2.22 g) of the four levels. All four regression techniques performed similarly at all levels. However, considering the transferability and interpretability of the model itself, SR is the suggested method for estimating maize biomass from terrestrial lidar-derived phenotypes. Moreover, height-related variables proved to be the most important and robust predictors of maize biomass from terrestrial lidar at all levels, and some two-dimensional variables (e.g., leaf area) and three-dimensional variables (e.g., volume) showed great potential as well. CONCLUSION We believe that this study is a unique effort in evaluating the capability of terrestrial lidar for estimating maize biomass at different levels, and it can serve as a useful resource for selecting the phenotypes and models required to estimate maize biomass in precision agriculture practices.
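As a hedged sketch of the modelling step described above (not the study's pipeline), two of the four regression techniques, simple regression (SR) and random forest (RF), can be compared on synthetic stand-ins for lidar-derived phenotypes, scored by R2 and RMSE as in the abstract:

```python
# Illustrative only: synthetic "lidar-derived" phenotypes regressed on biomass.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
height = rng.uniform(0.5, 2.5, 200)       # plant height (m), synthetic stand-in
leaf_area = rng.uniform(0.1, 1.0, 200)    # leaf area (m2), synthetic stand-in
X = np.column_stack([height, leaf_area])
# Assumed linear ground truth plus noise, purely for illustration:
biomass = 40 * height + 15 * leaf_area + rng.normal(0, 2, 200)  # grams

for name, model in [("SR", LinearRegression()),
                    ("RF", RandomForestRegressor(random_state=0))]:
    model.fit(X, biomass)
    pred = model.predict(X)
    rmse = mean_squared_error(biomass, pred) ** 0.5
    print(name, "R2=%.2f" % r2_score(biomass, pred), "RMSE=%.2f g" % rmse)
```

The study's preference for SR rests on exactly this kind of comparison: when the techniques score similarly, the simplest and most interpretable model transfers best.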
Affiliation(s)
- Shichao Jin
- State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing, 100093 China
- University of Chinese Academy of Sciences, No. 19A Yuquan Road, Beijing, 100049 China
- Yanjun Su
- State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing, 100093 China
- Shilin Song
- State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing, 100093 China
- University of Chinese Academy of Sciences, No. 19A Yuquan Road, Beijing, 100049 China
- Kexin Xu
- State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing, 100093 China
- University of Chinese Academy of Sciences, No. 19A Yuquan Road, Beijing, 100049 China
- Tianyu Hu
- State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing, 100093 China
- Qiuli Yang
- State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing, 100093 China
- University of Chinese Academy of Sciences, No. 19A Yuquan Road, Beijing, 100049 China
- Fangfang Wu
- State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing, 100093 China
- University of Chinese Academy of Sciences, No. 19A Yuquan Road, Beijing, 100049 China
- Guangcai Xu
- State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing, 100093 China
- Qin Ma
- State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing, 100093 China
- University of Chinese Academy of Sciences, No. 19A Yuquan Road, Beijing, 100049 China
- Hongcan Guan
- State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing, 100093 China
- University of Chinese Academy of Sciences, No. 19A Yuquan Road, Beijing, 100049 China
- Shuxin Pang
- State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing, 100093 China
- Yumei Li
- State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing, 100093 China
- Qinghua Guo
- State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing, 100093 China
- University of Chinese Academy of Sciences, No. 19A Yuquan Road, Beijing, 100049 China
32
An Efficient Processing Approach for Colored Point Cloud-Based High-Throughput Seedling Phenotyping. REMOTE SENSING 2020. [DOI: 10.3390/rs12101540] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
Plant height and leaf area are important morphological properties of leafy vegetable seedlings, and they can be particularly useful for plant growth and health research. The traditional measurement scheme is time-consuming and not suitable for continuously monitoring plant growth and health. Quick segmentation of individual vegetable seedlings is the prerequisite for high-throughput extraction of seedling phenotype data at the individual-seedling level. This paper proposes an efficient learning- and model-free 3D point cloud data processing pipeline to measure the plant height and leaf area of every single seedling in a plug tray. The 3D point clouds are obtained by a low-cost red-green-blue (RGB)-Depth (RGB-D) camera. Firstly, noise reduction is performed on the original point clouds with usable-area, depth cut-off, and neighbor-count filters. Secondly, a surface-feature-histogram-based approach is used to automatically remove the complicated natural background. Then, the Voxel Cloud Connectivity Segmentation (VCCS) and Locally Convex Connected Patches (LCCP) algorithms are employed to partition individual vegetable seedlings. Finally, the height and projected leaf area of the respective seedlings are calculated from the segmented point clouds and validated. Critically, we also demonstrate the robustness of our method across different growth conditions and species. The experimental results show that the proposed method can quickly calculate the morphological parameters of each seedling and is practical for high-throughput seedling phenotyping.
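The final step of the pipeline above can be sketched in a few lines of NumPy. This is a hedged illustration, not the paper's implementation: it assumes the filtering and VCCS/LCCP stages have already produced one segmented seedling's points, and estimates projected leaf area with a simple x-y occupancy grid:

```python
# Minimal sketch: plant height and projected leaf area from one segmented
# seedling's point cloud (x, y, z in metres). Grid-based area is an assumption.
import numpy as np

def seedling_traits(points: np.ndarray, cell: float = 0.002):
    """points: (N, 3) array; cell: grid size (m) for the projected-area estimate."""
    z = points[:, 2]
    height = z.max() - z.min()            # plant height above the lowest point
    # Projected leaf area: count occupied cells of an x-y occupancy grid.
    xy = np.floor(points[:, :2] / cell).astype(int)
    occupied = len(np.unique(xy, axis=0))
    return height, occupied * cell * cell

# Toy seedling: a 10 cm tall column of points over a 2 cm x 2 cm footprint.
pts = np.random.default_rng(0).uniform([0, 0, 0], [0.02, 0.02, 0.10], (5000, 3))
h, a = seedling_traits(pts)
print("height = %.3f m, projected area = %.6f m2" % (h, a))
```

A finer `cell` tightens the area estimate at the cost of sensitivity to sensor noise, which is why the pipeline filters the cloud first.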
33
Jiang Y, Li C. Convolutional Neural Networks for Image-Based High-Throughput Plant Phenotyping: A Review. PLANT PHENOMICS (WASHINGTON, D.C.) 2020; 2020:4152816. [PMID: 33313554 PMCID: PMC7706326 DOI: 10.34133/2020/4152816] [Citation(s) in RCA: 104] [Impact Index Per Article: 26.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/30/2019] [Accepted: 03/12/2020] [Indexed: 05/19/2023]
Abstract
Plant phenotyping has been recognized as a bottleneck for improving the efficiency of breeding programs, understanding plant-environment interactions, and managing agricultural systems. In the past five years, imaging approaches have shown great potential for high-throughput plant phenotyping, resulting in more attention paid to imaging-based plant phenotyping. With this increased amount of image data, it has become urgent to develop robust analytical tools that can extract phenotypic traits accurately and rapidly. The goal of this review is to provide a comprehensive overview of the latest studies using deep convolutional neural networks (CNNs) in plant phenotyping applications. We specifically review the use of various CNN architectures for plant stress evaluation, plant development, and postharvest quality assessment. We systematically organize the studies based on the technical developments resulting from image classification, object detection, and image segmentation, thereby identifying state-of-the-art solutions for certain phenotyping applications. Finally, we provide several directions for future research in the use of CNN architectures for plant phenotyping purposes.
Affiliation(s)
- Yu Jiang
- Horticulture Section, School of Integrative Plant Science, Cornell AgriTech, Cornell University, USA
- School of Electrical and Computer Engineering, College of Engineering, The University of Georgia, USA
- Phenomics and Plant Robotics Center, The University of Georgia, USA
- Changying Li
- School of Electrical and Computer Engineering, College of Engineering, The University of Georgia, USA
- Phenomics and Plant Robotics Center, The University of Georgia, USA
34
The Delineation and Grading of Actual Crop Production Units in Modern Smallholder Areas Using RS Data and Mask R-CNN. REMOTE SENSING 2020. [DOI: 10.3390/rs12071074] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The extraction and evaluation of crop production units are important foundations for agricultural production and management in modern smallholder regions and are very significant to the regulation and sustainable development of agriculture. Crop areas can be recognized efficiently and accurately via remote sensing (RS) and machine learning (ML), especially deep learning (DL), but the resulting delineations are too coarse for modern smallholder production. In this paper, a delimitation-grading method for actual crop production units (ACPUs) based on RS images was explored using a combination of a mask region-based convolutional neural network (Mask R-CNN), spatial analysis, comprehensive index evaluation, and cluster analysis. Da'an City, Jilin Province, China, was chosen as the study region to satisfy the agro-production demands of modern smallholder areas. Firstly, the ACPUs were interpreted from perspectives such as production mode, spatial form, and actual productivity. Secondly, cultivated land plots (C-plots) were extracted by Mask R-CNN from high-resolution RS images and used to delineate contiguous cultivated land plots (CC-plots) on the basis of auxiliary data correction. Then, the refined delimitation-grading results of the ACPUs were obtained through a comprehensive evaluation of spatial characteristics and real-productivity clustering. In conclusion, the effectiveness of the Mask R-CNN model in C-plot recognition (loss = 0.16, mean average precision (mAP) = 82.29%) and a reasonable distance threshold (20 m) for CC-plot delimitation were verified. The spatial features were evaluated on the scale-shape dimensions with nine specific indicators. Real productivities were clustered by combining two-step clustering and K-means clustering. Furthermore, most of the ACPUs in the study area were of a reasonable scale and an appropriate shape, holding real productivities at a medium level or above. The proposed method can be adjusted flexibly to changes in the study area to assist agro-supervision in many modern smallholder regions.
35
An Improved Convolution Neural Network-Based Model for Classifying Foliage and Woody Components from Terrestrial Laser Scanning Data. REMOTE SENSING 2020. [DOI: 10.3390/rs12061010] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Separating foliage and woody components can effectively improve the accuracy of simulating forest eco-hydrological processes. It remains challenging to use deep learning models to classify canopy components from point cloud data collected in forests by terrestrial laser scanning (TLS). In this study, we developed a convolutional neural network (CNN)-based model to separate foliage and woody components (FWCNN) by combining the geometrical and laser return intensity (LRI) information of local point sets in TLS datasets. Meanwhile, we corrected the LRI information and proposed a contribution-score evaluation method to objectively determine the hyper-parameters (learning rate, batch size, and validation split rate) of the FWCNN model. Our results show that: (1) Correcting the LRI information improved the overall classification accuracy (OA) of foliage and woody points in the tested broadleaf (from 95.05% to 96.20%) and coniferous (from 93.46% to 94.98%) TLS datasets (Kappa ≥ 0.86). (2) Optimizing hyper-parameters was essential to enhance the running efficiency of the FWCNN model, and the determined hyper-parameter set was suitable for classifying all tested TLS data. (3) The FWCNN model has great potential for classifying TLS data in mixed forests, with OA > 84.26% (Kappa ≥ 0.67). This work provides a foundation for retrieving the structural features of woody materials within the forest canopy.
36
Chaudhury A, Barron JL. 3D Phenotyping of Plants. 3D IMAGING, ANALYSIS AND APPLICATIONS 2020:699-732. [DOI: 10.1007/978-3-030-44070-1_14] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/19/2023]
37
Huang L, Guo H, Rao Q, Hou Z, Li S, Qiu S, Fan X, Wang H. Body Dimension Measurements of Qinchuan Cattle with Transfer Learning from LiDAR Sensing. SENSORS 2019; 19:s19225046. [PMID: 31752400 PMCID: PMC6891291 DOI: 10.3390/s19225046] [Citation(s) in RCA: 42] [Impact Index Per Article: 8.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/07/2019] [Revised: 11/10/2019] [Accepted: 11/18/2019] [Indexed: 02/07/2023]
Abstract
Measuring the body dimensions of Qinchuan cattle is time-consuming and stressful for both cattle and farmers, so the demand for automatic measurement of body dimensions has become increasingly urgent. It is necessary to explore automatic measurement with deep learning to improve breeding efficiency and promote the development of the industry. In this paper, a novel approach to measuring the body dimensions of live Qinchuan cattle based on transfer learning is proposed. A deep Kd-network was trained on the classical three-dimensional (3D) point cloud datasets (PCD) of ShapeNet. After a series of processing steps on the PCD sensed by a light detection and ranging (LiDAR) sensor, the cattle silhouettes could be extracted and, after augmentation, applied as an input layer to the Kd-network. With the output of a convolutional layer of the trained deep model, the output layer of the deep model could be applied to pre-train the fully connected network. The TrAdaBoost algorithm was employed to transfer the pre-trained convolutional and fully connected layers of the deep model. In classifying and recognizing the PCD of the cattle silhouette, the average accuracy after training with transfer learning reached up to 93.6%. On the basis of silhouette extraction, the candidate region of the feature surface shape could be extracted using the mean and Gaussian curvatures. After computing the fast point feature histogram (FPFH) of the surface shape, the center of the feature surface could be recognized and the body dimensions of the cattle finally calculated. The experimental results showed that the comprehensive error in body dimensions was close to 2%, which provides a feasible approach to non-contact measurement of large livestock without any human intervention.
Affiliation(s)
- Lvwen Huang
- College of Information Engineering, Northwest A&F University, Yangling, Xianyang 712100, China
- Key Laboratory of Agricultural Internet of Things, Ministry of Agriculture and Rural Affairs, Yangling, Xianyang 712100, China
- Correspondence: (L.H.); (S.L.); Tel.: +86-137-0922-3117 (L.H.); +86-137-5997-2183 (S.L.)
- Han Guo
- College of Information Engineering, Northwest A&F University, Yangling, Xianyang 712100, China
- Qinqin Rao
- College of Information Engineering, Northwest A&F University, Yangling, Xianyang 712100, China
- Zixia Hou
- College of Information Engineering, Northwest A&F University, Yangling, Xianyang 712100, China
- Shuqin Li
- College of Information Engineering, Northwest A&F University, Yangling, Xianyang 712100, China
- Key Laboratory of Agricultural Internet of Things, Ministry of Agriculture and Rural Affairs, Yangling, Xianyang 712100, China
- Correspondence: (L.H.); (S.L.); Tel.: +86-137-0922-3117 (L.H.); +86-137-5997-2183 (S.L.)
- Shicheng Qiu
- College of Information Engineering, Northwest A&F University, Yangling, Xianyang 712100, China
- Xinyun Fan
- College of Computer Science, Wuhan University, Wuhan 430072, China
- Hongyan Wang
- Western E-commerce Co., Ltd., Yinchuan 750004, China
38
Neupane B, Horanont T, Hung ND. Deep learning based banana plant detection and counting using high-resolution red-green-blue (RGB) images collected from unmanned aerial vehicle (UAV). PLoS One 2019; 14:e0223906. [PMID: 31622450 PMCID: PMC6797093 DOI: 10.1371/journal.pone.0223906] [Citation(s) in RCA: 53] [Impact Index Per Article: 10.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2019] [Accepted: 10/01/2019] [Indexed: 11/19/2022] Open
Abstract
The production of banana, one of the most highly consumed fruits, is strongly affected by the loss of a certain number of banana plants in an early phase of vegetation. This affects the ability of farmers to forecast and estimate banana production. In this paper, we propose a deep learning (DL) based method to precisely detect and count banana plants on a farm, exclusive of other plants, using high-resolution RGB aerial images collected from an unmanned aerial vehicle (UAV). An attempt to detect the plants on the normal RGB images resulted in less than 78.8% recall for our sample images of a commercial banana farm in Thailand. To improve this result, we use three image processing methods (linear contrast stretch, synthetic color transform, and triangular greenness index) to enhance the vegetative properties of the orthomosaic, generating multiple orthomosaic variants. We then separately train a parameter-optimized convolutional neural network (CNN) on manually interpreted banana plant samples from each image variant to produce multiple detection results over our region of interest. 96.4%, 85.1%, and 75.8% of plants were correctly detected in three datasets collected at altitudes of 40, 50, and 60 meters over the same farm. Combinations of the multiple-altitude variants are also discussed, in an attempt to find a better altitude combination for UAV data collection for banana plant detection. The results showed that merging the detection results of the 40 m and 50 m datasets could recover the plants missed by each, increasing recall up to 99%.
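One of the three enhancement steps named above, the triangular greenness index (TGI), is simple enough to sketch. This is a hedged illustration, not the paper's code: it uses the common broadband simplification TGI ≈ G − 0.39·R − 0.61·B, which may differ from the authors' exact formulation:

```python
# Illustrative TGI computation on RGB pixels (broadband simplification assumed).
import numpy as np

def tgi(rgb: np.ndarray) -> np.ndarray:
    """rgb: (H, W, 3) float array scaled to [0, 1]; returns per-pixel TGI."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return g - 0.39 * r - 0.61 * b

# Vegetation pixels score higher than bare-soil pixels:
veg = np.array([[[0.2, 0.6, 0.1]]])   # green pixel
soil = np.array([[[0.5, 0.4, 0.3]]])  # brownish pixel
print(tgi(veg), tgi(soil))
```

Applied to a whole orthomosaic, an index like this amplifies the vegetation signal that the plain RGB detector misses, which is the motivation for training one CNN per image variant.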
Affiliation(s)
- Bipul Neupane
- School of Information, Computer and Communication Technology, Sirindhorn International Institute of Technology, Pathum Thani, Thailand
- Teerayut Horanont
- School of Information, Computer and Communication Technology, Sirindhorn International Institute of Technology, Pathum Thani, Thailand
- Nguyen Duy Hung
- School of Information, Computer and Communication Technology, Sirindhorn International Institute of Technology, Pathum Thani, Thailand
39
Individual Rubber Tree Segmentation Based on Ground-Based LiDAR Data and Faster R-CNN of Deep Learning. FORESTS 2019. [DOI: 10.3390/f10090793] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Rubber trees in southern China are often impacted by natural disturbances that can result in a tilted tree body. Accurate crown segmentation of individual rubber trees from scanned point clouds is an essential prerequisite for accurate tree parameter retrieval. In this paper, three plots of different rubber tree clones, PR107, CATAS 7-20-59, and CATAS 8-7-9, were taken as the study subjects. Using data collected by ground-based mobile light detection and ranging (LiDAR), a voxelisation method based on the scanned tree trunk data was proposed, and deep images (i.e., images normally used for deep learning) were generated through frontal and lateral projection transforms of the point clouds in each voxel with a length of 8 m and a width of 3 m. These images provided the training and testing samples for a faster region-based convolutional neural network (Faster R-CNN). The Faster R-CNN was trained on 802 deep images with pre-marked trunk locations to automatically recognize the trunk locations in the 359 deep images of the testing samples. Finally, the point clouds for the lower part of each trunk were extracted through a back-projection transform from the recognized trunk locations in the testing samples and used as the seed points for a region-growing algorithm to accomplish individual rubber tree crown segmentation. Compared with the visual inspection results, the recognition rate of our method reached 100% for the deep images of the testing samples when the images contained one or two trunks or the trunk information was slightly occluded by leaves. For the complicated cases, i.e., multiple trunks or overlapping trunks in one deep image or a trunk appearing in two adjacent deep images, the recognition accuracy of our method was greater than 90%. Our work represents a new method that combines a deep learning framework with point cloud processing for individual rubber tree crown segmentation based on ground-based mobile LiDAR scanned data.
40
Furbank RT, Jimenez-Berni JA, George-Jaeggli B, Potgieter AB, Deery DM. Field crop phenomics: enabling breeding for radiation use efficiency and biomass in cereal crops. THE NEW PHYTOLOGIST 2019; 223:1714-1727. [PMID: 30937909 DOI: 10.1111/nph.15817] [Citation(s) in RCA: 86] [Impact Index Per Article: 17.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/02/2018] [Accepted: 03/02/2019] [Indexed: 05/21/2023]
Abstract
Plant phenotyping forms the core of crop breeding, allowing breeders to build on physiological traits and mechanistic science to inform their selection of material for crossing and genetic gain. Recent rapid progress in high-throughput techniques based on machine vision, robotics, and computing (plant phenomics) enables crop physiologists and breeders to quantitatively measure complex and previously intractable traits. By combining these techniques with affordable genomic sequencing and genotyping, machine learning, and genome selection approaches, breeders have an opportunity to make rapid genetic progress. This review focuses on how field-based plant phenomics can enable next-generation physiological breeding in cereal crops for traits related to radiation use efficiency, photosynthesis, and crop biomass. These traits have previously been regarded as difficult and laborious to measure but have recently become a focus as cereal breeders find that genetic progress from 'Green Revolution' traits such as harvest index has become exhausted. Application of LiDAR, thermal imaging, leaf and canopy spectral reflectance, Chl fluorescence, and machine learning are discussed using wheat and sorghum phenotyping as case studies. A vision of how crop genomics and high-throughput phenotyping could enable the next generation of crop research and breeding is presented.
Collapse
Affiliation(s)
- Robert T Furbank
- ARC Centre of Excellence for Translational Photosynthesis, Division of Plant Science, Australian National University, Canberra, 2601, ACT, Australia
- CSIRO Agriculture and Food, Canberra, 2601, ACT, Australia
- Jose A Jimenez-Berni
- CSIRO Agriculture and Food, Canberra, 2601, ACT, Australia
- Institute for Sustainable Agriculture (IAS), CSIC, Cordoba, 14004, Spain
- Barbara George-Jaeggli
- Queensland Alliance for Agriculture & Food Innovation, Centre for Crop Science, The University of Queensland, Hermitage Research Station, Warwick, 4370, QLD, Australia
- Agri-Science Queensland, Queensland Department of Agriculture & Fisheries, Hermitage Research Facility, Warwick, 4370, QLD, Australia
- Andries B Potgieter
- Queensland Alliance for Agriculture & Food Innovation, Centre for Crop Science, The University of Queensland, Tor Street, Toowoomba, 4350, QLD, Australia
- David M Deery
- CSIRO Agriculture and Food, Canberra, 2601, ACT, Australia
41
Zhou C, Ye H, Hu J, Shi X, Hua S, Yue J, Xu Z, Yang G. Automated Counting of Rice Panicle by Applying Deep Learning Model to Images from Unmanned Aerial Vehicle Platform. SENSORS 2019; 19:s19143106. [PMID: 31337086 PMCID: PMC6679257 DOI: 10.3390/s19143106] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/03/2019] [Revised: 07/08/2019] [Accepted: 07/11/2019] [Indexed: 12/03/2022]
Abstract
The number of panicles per unit area is a common indicator of rice yield and is of great significance to yield estimation, breeding, and phenotype analysis. Traditional counting methods have various drawbacks, such as long delay times and high subjectivity, and they are easily perturbed by noise. To improve the accuracy of rice detection and counting in the field, we developed and implemented a panicle detection and counting system based on improved region-based fully convolutional networks, and we use the system to automate rice-phenotype measurements. Field experiments were conducted in target areas to train and test the system, using a light rotor-wing unmanned aerial vehicle equipped with a high-definition RGB camera to collect images. The trained model achieved a precision of 0.868 on a held-out test set, which demonstrates the feasibility of this approach. The algorithm can deal with the irregular edges of rice panicles, the significantly different appearance between varieties and growing periods, the interference due to color overlap between panicles and leaves, and the variations in illumination intensity and shading effects in the field. The result is more accurate and efficient recognition of rice panicles, which facilitates rice breeding.
Affiliation(s)
- Chengquan Zhou
- Institute of Agricultural Equipment, Zhejiang Academy of Agricultural Sciences (ZAAS), Hangzhou 310000, China
- Hongbao Ye
- Institute of Agricultural Equipment, Zhejiang Academy of Agricultural Sciences (ZAAS), Hangzhou 310000, China
- Jun Hu
- Institute of Agricultural Equipment, Zhejiang Academy of Agricultural Sciences (ZAAS), Hangzhou 310000, China
- Xiaoyan Shi
- Institute of Agricultural Equipment, Zhejiang Academy of Agricultural Sciences (ZAAS), Hangzhou 310000, China
- Shan Hua
- Institute of Agricultural Equipment, Zhejiang Academy of Agricultural Sciences (ZAAS), Hangzhou 310000, China
- Jibo Yue
- Key Laboratory of Quantitative Remote Sensing in Agriculture of Ministry of Agriculture P. R. China, Beijing Research Center for Information Technology in Agriculture, Beijing 100089, China
- Key Laboratory of Agri-informatics, Ministry of Agriculture, Beijing 100089, China
- Zhifu Xu
- Institute of Agricultural Equipment, Zhejiang Academy of Agricultural Sciences (ZAAS), Hangzhou 310000, China
- Guijun Yang
- Key Laboratory of Quantitative Remote Sensing in Agriculture of Ministry of Agriculture P. R. China, Beijing Research Center for Information Technology in Agriculture, Beijing 100089, China
- Key Laboratory of Agri-informatics, Ministry of Agriculture, Beijing 100089, China
42
Hao H, Li W, Zhao X, Chang Q, Zhao P. Estimating the Aboveground Carbon Density of Coniferous Forests by Combining Airborne LiDAR and Allometry Models at Plot Level. FRONTIERS IN PLANT SCIENCE 2019; 10:917. [PMID: 31354780 PMCID: PMC6636660 DOI: 10.3389/fpls.2019.00917] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Received: 03/07/2019] [Accepted: 06/28/2019] [Indexed: 06/10/2023]
Abstract
Forest carbon density is an important indicator for evaluating forest carbon sink capacities. Accurate carbon density estimation is the basis for studying the response mechanisms of forest ecosystems to global climate change. Airborne light detection and ranging (LiDAR) technology can acquire the vertical structure parameters of forests with higher precision and penetration ability than traditional optical remote sensing. Combining the top-of-canopy height model (TCH) and allometry models, this paper constructed two prediction models of aboveground carbon density (ACD) from 94 square plots in northwestern China: one is a power model based on plot-averaged height and the other is a plot-averaged daisy-chain model. The coefficients of determination (R²) were 0.6725 and 0.6761, respectively, significantly higher than that of the traditional percentile model (R² = 0.5910). In addition, the correlation between TCH and ACD was significantly better than that between plot-averaged height (AvgH) and ACD, and Lorey's height (LorH) had no significant correlation with ACD. We also found that plot-level basal area (BA) was a dominant factor in ACD prediction, with a coefficient of determination reaching 0.9182, although obtaining BA requires field investigation. The two models proposed in this study provide a simple and easy approach for estimating ACD in coniferous forests, which can completely replace the traditional LiDAR percentile method.
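A plot-level power model of the kind described above (ACD as a power function of TCH) is typically fitted by ordinary least squares on a log-log transform. A minimal sketch with NumPy, using synthetic, hypothetical plot data; the authors' field data, coefficients, and exact model form are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic plot data (hypothetical, for illustration only):
# top-of-canopy height (TCH, m) and aboveground carbon density (ACD, Mg C/ha).
tch = rng.uniform(5.0, 25.0, size=94)
acd = 0.8 * tch**1.4 * np.exp(rng.normal(0.0, 0.1, size=94))

# Fit the power model ACD = a * TCH^b by least squares on the
# log-log transform: ln(ACD) = ln(a) + b * ln(TCH).
b, ln_a = np.polyfit(np.log(tch), np.log(acd), 1)
a = np.exp(ln_a)

# Coefficient of determination in the original (untransformed) scale.
pred = a * tch**b
ss_res = np.sum((acd - pred) ** 2)
ss_tot = np.sum((acd - acd.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"a={a:.3f}, b={b:.3f}, R^2={r2:.3f}")
```

The log-log fit is the standard trick for allometric power laws: it turns a nonlinear fit into a one-line linear regression.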
Affiliation(s)
- Hongke Hao
- College of Natural Resources and Environment, Northwest A&F University, Yangling, China
- College of Forestry, Northwest A&F University, Yangling, China
- Weizhong Li
- College of Forestry, Northwest A&F University, Yangling, China
- Xuan Zhao
- College of Landscape Architecture and Arts, Northwest A&F University, Yangling, China
- Qingrui Chang
- College of Natural Resources and Environment, Northwest A&F University, Yangling, China
- Pengxiang Zhao
- College of Forestry, Northwest A&F University, Yangling, China
43
Ramstein GP, Jensen SE, Buckler ES. Breaking the curse of dimensionality to identify causal variants in Breeding 4. THEORETICAL AND APPLIED GENETICS 2019; 132:559-567. [PMID: 30547185 PMCID: PMC6439136 DOI: 10.1007/s00122-018-3267-3] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Received: 11/15/2018] [Accepted: 12/07/2018] [Indexed: 05/18/2023]
Abstract
In the past, plant breeding has undergone three major transformations and is currently transitioning to a new technological phase, Breeding 4. This phase is characterized by the development of methods for the biological design of plant varieties, including transformation and gene-editing techniques directed toward causal loci. The application of such technologies will require the ability to reliably estimate the effects of loci in plant genomes, despite the number of loci assayed (p) far surpassing the number of plant genotypes (n). Here, we discuss approaches to overcome this curse of dimensionality (n ≪ p), which will involve analyzing intermediate phenotypes such as molecular traits and component traits related to plant morphology or physiology. Because these approaches will rely on novel data types such as DNA sequences and high-throughput phenotyping images, Breeding 4 will call for analyses that are complementary to traditional quantitative genetic studies, based on machine learning techniques that make efficient use of sequence and image data. In this article, we present some of these techniques and their application to prioritizing causal loci and developing improved varieties in Breeding 4.
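The n ≪ p estimation problem described above is commonly handled with shrinkage estimators such as ridge regression, whose dual form only requires inverting an n × n matrix. A minimal sketch on hypothetical marker data; this illustrates the generic statistical setting, not the authors' specific method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical genomic toy problem: n genotypes, p markers, n << p.
n, p = 50, 2000
X = rng.choice([0.0, 1.0, 2.0], size=(n, p))   # marker dosage matrix
X -= X.mean(axis=0)                             # center each marker column
true_beta = np.zeros(p)
true_beta[:10] = rng.normal(0.0, 1.0, size=10)  # 10 causal loci, rest zero
y = X @ true_beta + rng.normal(0.0, 0.5, size=n)

# Ridge (shrinkage) estimate of all p marker effects at once.
# The dual form needs only an n x n solve, so n << p stays cheap:
#   beta_hat = X^T (X X^T + lam * I_n)^(-1) y
lam = 10.0
beta_hat = X.T @ np.linalg.solve(X @ X.T + lam * np.eye(n), y)

# Shrunken effects can still be used to rank candidate causal loci.
top = np.argsort(np.abs(beta_hat))[::-1][:20]
print("causal loci ranked in top 20:", np.sum(top < 10))
```

The point of the dual form is purely computational: with p = 2000 the primal solve would need a 2000 × 2000 inverse, while the dual needs only 50 × 50.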
Affiliation(s)
- Guillaume P Ramstein
- Institute for Genomic Diversity, Institute of Biotechnology, Cornell University, 175 Biotechnology Building, Ithaca, NY, 14853, USA
- Sarah E Jensen
- Section of Plant Breeding and Genetics, Cornell University, Ithaca, NY, 14853, USA
- Edward S Buckler
- Institute for Genomic Diversity, Institute of Biotechnology, Cornell University, 175 Biotechnology Building, Ithaca, NY, 14853, USA
- Section of Plant Breeding and Genetics, Cornell University, Ithaca, NY, 14853, USA
- United States Department of Agriculture, Agricultural Research Service, Ithaca, NY, 14853, USA
44
Mochida K, Koda S, Inoue K, Hirayama T, Tanaka S, Nishii R, Melgani F. Computer vision-based phenotyping for improvement of plant productivity: a machine learning perspective. GIGASCIENCE 2019; 8:giy153. [PMID: 30520975 PMCID: PMC6312910 DOI: 10.1093/gigascience/giy153] [Citation(s) in RCA: 45] [Impact Index Per Article: 9.0] [Received: 06/12/2018] [Revised: 09/06/2018] [Accepted: 11/24/2018] [Indexed: 11/29/2022] Open
Abstract
Employing computer vision to extract useful information from images and videos is becoming a key technique for identifying phenotypic changes in plants. Here, we review the emerging aspects of computer vision for automated plant phenotyping. Recent advances in image analysis empowered by machine learning-based techniques, including convolutional neural network-based modeling, have expanded their application to assist high-throughput plant phenotyping. Combinatorial use of multiple sensors to acquire various spectra has allowed us to noninvasively obtain a series of datasets, including those related to the development and physiological responses of plants throughout their life. Automated phenotyping platforms accelerate the elucidation of gene functions associated with traits in model plants under controlled conditions. Remote sensing techniques with image collection platforms, such as unmanned vehicles and tractors, are also emerging for large-scale field phenotyping for crop breeding and precision agriculture. Computer vision-based phenotyping will play significant roles in both the nowcasting and forecasting of plant traits through modeling of genotype/phenotype relationships.
Affiliation(s)
- Keiichi Mochida
- Bioproductivity Informatics Research Team, RIKEN Center for Sustainable Resource Science, 1-7-22 Suehiro-cho, Tsurumi-ku, Yokohama, Kanagawa 230-0045, Japan
- Microalgae Production Control Technology Laboratory, RIKEN Baton Zone Program, RIKEN Cluster for Science, Technology and Innovation Hub, 1-7-22 Suehiro-cho, Tsurumi-ku, Yokohama, Kanagawa 230-0045, Japan
- Institute of Plant Science and Resources, Okayama University, 2-20-1 Chuo, Kurashiki, Okayama 710-0046, Japan
- Kihara Institute for Biological Research, Yokohama City University, 641-12 Maioka-cho, Totsuka-ku, Yokohama, Kanagawa 244-0813, Japan
- Graduate School of Nanobioscience, Yokohama City University, 22-2 Seto, Kanazawa-ku, Yokohama, Kanagawa 236-0027, Japan
- Satoru Koda
- Graduate School of Mathematics, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka 819-0395, Japan
- Komaki Inoue
- Bioproductivity Informatics Research Team, RIKEN Center for Sustainable Resource Science, 1-7-22 Suehiro-cho, Tsurumi-ku, Yokohama, Kanagawa 230-0045, Japan
- Takashi Hirayama
- Institute of Plant Science and Resources, Okayama University, 2-20-1 Chuo, Kurashiki, Okayama 710-0046, Japan
- Shojiro Tanaka
- Hiroshima University of Economics, 5-37-1 Gion, Asaminami, Hiroshima-shi, Hiroshima 731-0138, Japan
- Ryuei Nishii
- Institute of Mathematics for Industry, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka 819-0395, Japan
- Farid Melgani
- Department of Information Engineering and Computer Science, University of Trento, Via Sommarive 9, 38123 Trento, Italy
46
Su Y, Wu F, Ao Z, Jin S, Qin F, Liu B, Pang S, Liu L, Guo Q. Evaluating maize phenotype dynamics under drought stress using terrestrial lidar. PLANT METHODS 2019; 15:11. [PMID: 30740137 PMCID: PMC6360786 DOI: 10.1186/s13007-019-0396-x] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Received: 10/26/2018] [Accepted: 01/25/2019] [Indexed: 05/18/2023]
Abstract
BACKGROUND Maize (Zea mays L.) is the third most consumed grain in the world, and improving maize yield is of great importance to world food security, especially under global climate change and increasingly frequent severe droughts. Due to the limitations of phenotyping methods, most current studies have focused only on phenotype responses at certain key growth stages. Although light detection and ranging (lidar) technology has shown great potential for acquiring three-dimensional (3D) vegetation information, it has rarely been used to monitor maize phenotype dynamics at the individual plant level. RESULTS In this study, we used a terrestrial laser scanner to collect lidar data at six growth stages for 20 maize varieties under drought stress. Three drought-related phenotypes, i.e., plant height, plant area index (PAI), and projected leaf area (PLA), were calculated from the lidar point clouds at the individual plant level. The results showed that terrestrial lidar data can be used to estimate plant height, PAI, and PLA with accuracies of 96%, 70%, and 92%, respectively. All three phenotypes showed a pattern of first increasing and then decreasing over the growth period. The high drought tolerance group tended to keep lower plant height and PAI without losing PLA during the tasseling stage. Moreover, the high drought tolerance group tended to have lower plant area density in the upper canopy than the low drought tolerance group. CONCLUSION The results demonstrate the feasibility of using terrestrial lidar to monitor 3D maize phenotypes under drought stress in the field and may provide new insights into identifying the key phenotypes and growth stages influenced by drought stress.
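Simple versions of two of these lidar phenotypes can be computed directly from a point cloud segmented to a single plant: plant height from z-percentiles and a projected-area proxy from a 2-D convex hull. A minimal sketch on a synthetic point cloud (the authors' actual algorithms, especially for PAI, may differ):

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(2)

# Hypothetical single-plant point cloud (x, y, z in metres); in practice
# this would come from a terrestrial laser scan segmented to one plant.
points = rng.uniform([-0.3, -0.3, 0.0], [0.3, 0.3, 2.0], size=(5000, 3))

# Plant height: a high z-percentile minus the ground level, which is
# more robust to stray returns than the raw min/max.
ground_z = np.percentile(points[:, 2], 1)
height = np.percentile(points[:, 2], 99) - ground_z

# Projected leaf area proxy: area of the convex hull of the points
# projected onto the ground (x, y) plane.
hull = ConvexHull(points[:, :2])
pla = hull.volume  # for a 2-D hull, .volume is the enclosed area
print(f"height = {height:.2f} m, projected area = {pla:.3f} m^2")
```

Note the SciPy convention: for a 2-D `ConvexHull`, `.volume` is the enclosed area and `.area` is the perimeter.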
Affiliation(s)
- Yanjun Su
- State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing, 100093 China
- University of Chinese Academy of Sciences, Beijing, 100049 China
- Fangfang Wu
- State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing, 100093 China
- University of Chinese Academy of Sciences, Beijing, 100049 China
- Zurui Ao
- Guangdong Key Laboratory for Urbanization and Geo-simulation, School of Geography and Planning, Sun Yat-sen University, Guangzhou, 510275 China
- Shichao Jin
- State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing, 100093 China
- University of Chinese Academy of Sciences, Beijing, 100049 China
- Feng Qin
- College of Biological Sciences, China Agricultural University, Beijing, 100091 China
- Boxin Liu
- State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing, 100093 China
- University of Chinese Academy of Sciences, Beijing, 100049 China
- Shuxin Pang
- State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing, 100093 China
- Lingli Liu
- State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing, 100093 China
- University of Chinese Academy of Sciences, Beijing, 100049 China
- Qinghua Guo
- State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing, 100093 China
- University of Chinese Academy of Sciences, Beijing, 100049 China