1
Korth N, Yang Q, Van Haute MJ, Tross MC, Peng B, Shrestha N, Zwiener-Malcom M, Mural RV, Schnable JC, Benson AK. Genomic co-localization of variation affecting agronomic and human gut microbiome traits in a meta-analysis of diverse sorghum. G3 (Bethesda) 2024; 14:jkae145. [PMID: 38979923 PMCID: PMC11373648 DOI: 10.1093/g3journal/jkae145] [Received: 03/26/2024] [Revised: 03/26/2024] [Accepted: 06/05/2024] [Indexed: 07/10/2024]
Abstract
Substantial functional metabolic diversity exists within species of cultivated grain crops that directly or indirectly provide more than half of all calories consumed by humans around the globe. While such diversity is the molecular currency used for improving agronomic traits, diversity is poorly characterized for its effects on human nutrition and utilization by gut microbes. Moreover, we know little about agronomic traits' potential tradeoffs and pleiotropic effects on human nutritional traits. Here, we applied a quantitative genetics approach using a meta-analysis and parallel genome-wide association studies of Sorghum bicolor traits describing changes in the composition and function of human gut microbe communities, and any of 200 sorghum seed and agronomic traits across a diverse sorghum population to identify associated genetic variants. A total of 15 multiple-effect loci (MEL) were initially found where different alleles in the sorghum genome produced changes in seed that affected the abundance of multiple bacterial taxa across 2 human microbiomes in automated in vitro fermentations. Next, parallel genome-wide studies conducted for seed, biochemical, and agronomic traits in the same population identified significant associations within the boundaries of 13/15 MEL for microbiome traits. In several instances, the colocalization of variation affecting gut microbiome and agronomic traits provided hypotheses for causal mechanisms through which variation could affect both agronomic traits and human gut microbes. This work demonstrates that genetic factors affecting agronomic traits in sorghum seed can also drive significant effects on human gut microbes, particularly bacterial taxa considered beneficial. Understanding these pleiotropic relationships will inform future strategies for crop improvement toward yield, sustainability, and human health.
Affiliation(s)
- Nate Korth
- Nebraska Food for Health Center, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
- Complex Biosystems Graduate Program, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
- Department of Food Science and Technology, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
- Qinnan Yang
- Nebraska Food for Health Center, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
- Department of Food Science and Technology, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
- Mallory J Van Haute
- Nebraska Food for Health Center, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
- Department of Food Science and Technology, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
- Michael C Tross
- Complex Biosystems Graduate Program, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
- Center for Plant Science Innovation, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
- Bo Peng
- Nebraska Food for Health Center, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
- Department of Food Science and Technology, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
- Nikee Shrestha
- Complex Biosystems Graduate Program, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
- Center for Plant Science Innovation, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
- Mackenzie Zwiener-Malcom
- Center for Plant Science Innovation, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
- Department of Agronomy and Horticulture, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
- Ravi V Mural
- Center for Plant Science Innovation, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
- Department of Agronomy and Horticulture, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
- Department of Agronomy, Horticulture, and Plant Science, South Dakota State University, Brookings, SD 57007, USA
- James C Schnable
- Nebraska Food for Health Center, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
- Center for Plant Science Innovation, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
- Department of Agronomy and Horticulture, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
- Andrew K Benson
- Nebraska Food for Health Center, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
- Department of Food Science and Technology, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
2
Makkar J, Flores J, Matich M, Duong TT, Thompson SM, Du Y, Busch I, Phan QM, Wang Q, Delevich K, Broughton-Neiswanger L, Driskell IM, Driskell RR. Deep Hair Phenomics: Implications in Endocrinology, Development, and Aging. J Invest Dermatol 2024:S0022-202X(24)02079-7. [PMID: 39236901 DOI: 10.1016/j.jid.2024.08.014] [Received: 04/02/2024] [Revised: 08/08/2024] [Accepted: 08/11/2024] [Indexed: 09/07/2024]
Abstract
Hair quality is an important indicator of health in humans and other animals. Current approaches to assess hair quality are generally nonquantitative or are low throughput owing to technical limitations of splitting hairs. We developed a deep learning-based computer vision approach for the high-throughput quantification of individual hair fibers at a high resolution. Our innovative computer vision tool can distinguish and extract overlapping fibers for quantification of multivariate features, including length, width, and color, to generate single-hair phenomes of diverse conditions across the lifespan of mice. Using our tool, we explored the effects of hormone signaling, genetic modifications, and aging on hair follicle output. Our analyses revealed hair phenotypes resulting from endocrinological, developmental, and aging-related alterations in the fur coats of mice. These results demonstrate the efficacy of our deep hair phenomics tool for characterizing factors that modulate the hair follicle and developing, to our knowledge, previously unreported diagnostic methods for detecting disease through the hair fiber. Finally, we have generated a searchable, interactive web tool for the exploration of our hair fiber data at skinregeneration.org.
Affiliation(s)
- Jasson Makkar
- School of Molecular Biosciences, Washington State University, Pullman, Washington, USA
- Jorge Flores
- School of Molecular Biosciences, Washington State University, Pullman, Washington, USA
- Mason Matich
- School of Molecular Biosciences, Washington State University, Pullman, Washington, USA
- Tommy T Duong
- School of Molecular Biosciences, Washington State University, Pullman, Washington, USA
- Sean M Thompson
- School of Molecular Biosciences, Washington State University, Pullman, Washington, USA
- Yiqing Du
- School of Molecular Biosciences, Washington State University, Pullman, Washington, USA
- Isabelle Busch
- School of Molecular Biosciences, Washington State University, Pullman, Washington, USA
- Quan M Phan
- School of Molecular Biosciences, Washington State University, Pullman, Washington, USA
- Qing Wang
- Department of Integrative Physiology and Neuroscience, College of Veterinary Medicine, Washington State University, Pullman, Washington, USA
- Kristen Delevich
- Department of Integrative Physiology and Neuroscience, College of Veterinary Medicine, Washington State University, Pullman, Washington, USA; Center for Reproductive Biology, College of Veterinary Medicine, Washington State University, Pullman, Washington, USA
- Liam Broughton-Neiswanger
- Washington Animal Disease Diagnostic Laboratory, College of Veterinary Medicine, Washington State University, Pullman, Washington, USA
- Iwona M Driskell
- School of Molecular Biosciences, Washington State University, Pullman, Washington, USA
- Ryan R Driskell
- School of Molecular Biosciences, Washington State University, Pullman, Washington, USA; Center for Reproductive Biology, College of Veterinary Medicine, Washington State University, Pullman, Washington, USA
3
Kronenwett F, Maier G, Leiss N, Gruna R, Thome V, Längle T. Sensor-based characterization of construction and demolition waste at high occupancy densities using synthetic training data and deep learning. Waste Manag Res 2024; 42:788-796. [PMID: 38385439 PMCID: PMC11367798 DOI: 10.1177/0734242X241231410] [Received: 05/05/2023] [Accepted: 01/20/2024] [Indexed: 02/23/2024]
Abstract
Sensor-based monitoring of construction and demolition waste (CDW) streams plays an important role in recycling (RC). Extracted knowledge about the composition of a material stream helps to identify RC paths, optimize processing plants, and forms the basis for sorting. To enable economical use, it is necessary to ensure robust detection of individual objects even with high material throughput. Conventional algorithms struggle with the resulting high occupancy densities and object overlap, making deep learning object detection methods more promising. In this study, different deep learning architectures for object detection (Faster Region-based Convolutional Neural Network (Faster R-CNN), You Only Look Once (YOLOv3), Single Shot MultiBox Detector (SSD)) are investigated with respect to their suitability for CDW characterization. A mixture of brick and sand-lime brick is considered as an exemplary waste stream. Particular attention is paid to detection performance with increasing occupancy density and particle overlap. A method for the generation of synthetic training images is presented, which avoids time-consuming manual labelling. By testing the models trained on synthetic data on real images, the success of the method is demonstrated. Requirements for synthetic training data composition, potential improvements, and simplifications of different architecture approaches are discussed based on the characteristics of the detection task. In addition, the required inference time of the presented models is investigated to ensure their suitability for use under real-time conditions.
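Detection performance under increasing particle overlap, as studied here, is conventionally scored by matching predicted and ground-truth bounding boxes via their intersection over union (IoU). The following is a minimal sketch of the standard box-IoU computation, not code from the paper; the function name is illustrative:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes, each given as (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two 2x2 boxes overlapping in a 1x1 corner: IoU = 1/7 ≈ 0.143
print(box_iou((0, 0, 2, 2), (1, 1, 3, 3)))
```

A predicted box is typically counted as a true positive when its IoU with an unmatched ground-truth box exceeds a threshold (commonly 0.5), which is how overlapping particles make the evaluation harder.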
Affiliation(s)
- Felix Kronenwett
- Fraunhofer IOSB, Institute of Optronics, System Technologies and Image Exploitation, Karlsruhe, Germany
- Georg Maier
- Fraunhofer IOSB, Institute of Optronics, System Technologies and Image Exploitation, Karlsruhe, Germany
- Norbert Leiss
- Fraunhofer IBP, Institute for Building Physics, Holzkirchen, Germany
- Robin Gruna
- Fraunhofer IOSB, Institute of Optronics, System Technologies and Image Exploitation, Karlsruhe, Germany
- Volker Thome
- Fraunhofer IBP, Institute for Building Physics, Holzkirchen, Germany
- Thomas Längle
- Fraunhofer IOSB, Institute of Optronics, System Technologies and Image Exploitation, Karlsruhe, Germany
4
Zhou L, Zhang H, Bian L, Tian Y, Zhou H. Phenotyping of Drought-Stressed Poplar Saplings Using Exemplar-Based Data Generation and Leaf-Level Structural Analysis. Plant Phenomics 2024; 6:0205. [PMID: 39077119 PMCID: PMC11283870 DOI: 10.34133/plantphenomics.0205] [Received: 02/12/2024] [Accepted: 06/05/2024] [Indexed: 07/31/2024]
Abstract
Drought stress is one of the main threats to poplar plant growth and has a negative impact on plant yield. Currently, high-throughput plant phenotyping has been widely studied as a rapid and nondestructive tool for analyzing the growth status of plants, such as water and nutrient content. In this study, a combination of computer vision and deep learning was used for drought-stressed poplar sapling phenotyping. Four varieties of poplar saplings were cultivated, and 5 different irrigation treatments were applied. Color images of the plant samples were captured for analysis. Two tasks, including leaf posture calculation and drought stress identification, were conducted. First, instance segmentation was used to extract the regions of the leaf, petiole, and midvein. A dataset augmentation method was created for reducing manual annotation costs. The horizontal angles of the fitted lines of the petiole and midvein were calculated for leaf posture digitization. Second, multitask learning models were proposed for simultaneously determining the stress level and poplar variety. The mean absolute errors of the angle calculations were 10.7° and 8.2° for the petiole and midvein, respectively. Drought stress increased the horizontal angle of leaves. Moreover, using raw images as the input, the multitask MobileNet achieved the highest accuracy (99% for variety identification and 76% for stress level classification), outperforming widely used single-task deep learning models (stress level classification accuracies of <70% on the prediction dataset). The plant phenotyping methods presented in this study could be further used for drought-stress-resistant poplar plant screening and precise irrigation decision-making.
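The leaf-posture digitization step described above — fitting a straight line to the segmented petiole or midvein and taking its angle from the horizontal — can be sketched as follows. This is a hedged illustration, not the authors' code; `horizontal_angle` is a hypothetical helper, and a least-squares fit like this assumes the structure is not near-vertical in the image:

```python
import numpy as np

def horizontal_angle(points):
    """Fit a line to (x, y) pixel coordinates of a petiole or midvein
    mask and return its unsigned angle from the horizontal, in degrees."""
    pts = np.asarray(points, dtype=float)
    # Least-squares fit of y = slope * x + intercept
    slope, _intercept = np.polyfit(pts[:, 0], pts[:, 1], deg=1)
    return abs(np.degrees(np.arctan(slope)))

# Hypothetical midvein pixels rising 1 px in y per px in x: ~45 degrees
print(horizontal_angle([(0, 0), (10, 10), (20, 20)]))
```

Comparing such angles between the petiole and midvein fits against manual measurements would yield error statistics like the 10.7° and 8.2° mean absolute errors reported.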
Affiliation(s)
- Lei Zhou
- College of Mechanical and Electronic Engineering, Nanjing Forestry University, Nanjing 210037, P. R. China
- Jiangsu Co-Innovation Center of Efficient Processing and Utilization of Forest Resources, Nanjing Forestry University, Nanjing 210037, P. R. China
- Huichun Zhang
- College of Mechanical and Electronic Engineering, Nanjing Forestry University, Nanjing 210037, P. R. China
- Jiangsu Co-Innovation Center of Efficient Processing and Utilization of Forest Resources, Nanjing Forestry University, Nanjing 210037, P. R. China
- Liming Bian
- State Key Laboratory of Tree Genetics and Breeding, Co-Innovation Center for Sustainable Forestry in Southern China, Key Laboratory of Forest Genetics & Biotechnology of Ministry of Education, Nanjing Forestry University, Nanjing 210037, P. R. China
- Ye Tian
- College of Forestry and Grassland, Nanjing Forestry University, Nanjing 210037, P. R. China
- Haopeng Zhou
- College of Mechanical and Electronic Engineering, Nanjing Forestry University, Nanjing 210037, P. R. China
5
Ouyang Z, Fu X, Zhong Z, Bai R, Cheng Q, Gao G, Li M, Zhang H, Zhang Y. An exploration of the influence of ZnO NPs treatment on germination of radish seeds under salt stress based on the YOLOv8-R lightweight model. Plant Methods 2024; 20:110. [PMID: 39044226 PMCID: PMC11267839 DOI: 10.1186/s13007-024-01238-8] [Received: 06/05/2024] [Accepted: 07/14/2024] [Indexed: 07/25/2024]
Abstract
BACKGROUND: Since traditional germination test methods have drawbacks such as slow efficiency, proneness to error, and damage to seeds, a non-destructive testing method is proposed for full-process germination of radish seeds, which improves the monitoring efficiency of seed quality.
RESULTS: Based on YOLOv8n, a lightweight test model, YOLOv8-R, is proposed, in which the number of parameters, the amount of computation, and the size of the weights are significantly reduced by replacing the backbone network with PP-LCNet, the neck with CCFM, the C2f of the neck with OREPA, the SPPF with FocalModulation, and the Detect head with LADH. Ablation and comparative tests confirm the performance of the model. With germination rate, germination index, and germination potential adopted as the three vitality indicators, the seed germination phenotype collection system and the YOLOv8-R model are used to analyze the full time-series effects of different ZnO NPs concentrations on germination of radish seeds under varying degrees of salt stress.
CONCLUSIONS: The results show that salt stress inhibits the germination of radish seeds and that the inhibition is more pronounced at higher NaCl concentrations; in cultivation with deionized water, the germination rate of radish seeds does not change significantly with increased ZnO NPs concentration, but the germination index and germination potential first increase and then decline; in cultivation with NaCl solution, the germination rate, germination potential, and germination index of radish seeds first increase and then decline with increased ZnO NPs concentration.
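The three vitality indicators can be computed directly from the daily germination counts that the detection model produces. The sketch below uses definitions common in seed-vigour work (germination rate as the final percentage germinated, germination index as a day-weighted sum, germination potential as the percentage germinated by an early cutoff day); the paper may use variant formulas, and the cutoff day here is an assumption:

```python
def germination_indicators(daily_germinated, total_seeds, potential_day=3):
    """Vitality indicators from daily counts of newly germinated seeds.

    daily_germinated[t] = seeds newly germinated on day t + 1.
    Definitions follow common usage and may differ from the paper's.
    """
    # Germination rate: final cumulative percentage germinated
    rate = 100.0 * sum(daily_germinated) / total_seeds
    # Germination index: sum over days of (germinated on day t) / t
    index = sum(g / (t + 1) for t, g in enumerate(daily_germinated))
    # Germination potential: cumulative % germinated by `potential_day`
    potential = 100.0 * sum(daily_germinated[:potential_day]) / total_seeds
    return rate, index, potential

# 50 seeds, counts over four days: rate 90%, potential 80% by day 3
print(germination_indicators([10, 20, 10, 5], total_seeds=50))
```

Running these indicators over the full time series for each treatment is what allows trends such as "first increase and then decline" to be quantified.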
Affiliation(s)
- Zhiqian Ouyang
- College of Engineering, Nanjing Agricultural University, Nanjing, 210031, China
- Xiuqing Fu
- College of Engineering, Nanjing Agricultural University, Nanjing, 210031, China
- Zhibo Zhong
- Institute of Farmland Water Conservancy and Soil-Fertilizer, Xinjiang Academy of Agricultural and Reclamation Science, Shihezi, 832000, Xinjiang, China
- Ruxiao Bai
- Institute of Farmland Water Conservancy and Soil-Fertilizer, Xinjiang Academy of Agricultural and Reclamation Science, Shihezi, 832000, Xinjiang, China
- Qianzhe Cheng
- College of Engineering, Nanjing Agricultural University, Nanjing, 210031, China
- Ge Gao
- College of Engineering, Nanjing Agricultural University, Nanjing, 210031, China
- Meng Li
- College of Engineering, Nanjing Agricultural University, Nanjing, 210031, China
- Haolun Zhang
- College of Engineering, Nanjing Agricultural University, Nanjing, 210031, China
- Yaben Zhang
- College of Engineering, Nanjing Agricultural University, Nanjing, 210031, China
6
Davidson SJ, Saggese T, Krajňáková J. Deep learning for automated segmentation and counting of hypocotyl and cotyledon regions in mature Pinus radiata D. Don somatic embryo images. Front Plant Sci 2024; 15:1322920. [PMID: 38495377 PMCID: PMC10940415 DOI: 10.3389/fpls.2024.1322920] [Received: 10/17/2023] [Accepted: 02/12/2024] [Indexed: 03/19/2024]
Abstract
In commercial forestry and large-scale plant propagation, the utilization of artificial intelligence techniques for automated somatic embryo analysis has emerged as a highly valuable tool. Notably, image segmentation plays a key role in the automated assessment of mature somatic embryos. However, to date, the application of Convolutional Neural Networks (CNNs) for segmentation of mature somatic embryos remains unexplored. In this study, we present a novel application of CNNs for delineating mature somatic conifer embryos from background and residual proliferating embryogenic tissue and differentiating various morphological regions within the embryos. A semantic segmentation CNN was trained to assign pixels to cotyledon, hypocotyl, and background regions, while an instance segmentation network was trained to detect individual cotyledons for automated counting. The main dataset comprised 275 high-resolution microscopic images of mature Pinus radiata somatic embryos, with 42 images reserved for testing and validation sets. The evaluation of different segmentation methods revealed that semantic segmentation achieved the highest performance averaged across classes, achieving F1 scores of 0.929 and 0.932, with IoU scores of 0.867 and 0.872 for the cotyledon and hypocotyl regions, respectively. The instance segmentation approach demonstrated proficiency in accurate detection and counting of the number of cotyledons, as indicated by a mean squared error (MSE) of 0.79 and mean absolute error (MAE) of 0.60. The findings highlight the efficacy of neural network-based methods in accurately segmenting somatic embryos and delineating individual morphological parts, providing additional information compared to previous segmentation techniques. This opens avenues for further analysis, including quantification of morphological characteristics in each region, enabling the identification of features of desirable embryos in large-scale production systems. These advancements contribute to the improvement of automated somatic embryogenesis systems, facilitating efficient and reliable plant propagation for commercial forestry applications.
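The per-class F1 and IoU scores reported above follow standard pixel-wise definitions, which can be computed directly from boolean masks. A minimal sketch (not the study's code; the empty-mask convention is an assumption):

```python
import numpy as np

def iou_and_f1(pred, target):
    """Pixel-wise IoU and F1 (Dice) for one class, given boolean masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    # Convention assumed here: two empty masks count as a perfect match
    iou = inter / union if union else 1.0
    f1 = 2 * inter / total if total else 1.0
    return float(iou), float(f1)

# Half-overlapping 2x2 masks: IoU = 1/3, F1 = 0.5
print(iou_and_f1([[1, 1], [0, 0]], [[0, 1], [1, 0]]))
```

Averaging these scores per class over the test images gives class-level figures comparable to the 0.929/0.867 (cotyledon) and 0.932/0.872 (hypocotyl) pairs reported.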
Affiliation(s)
- Sam J. Davidson
- Data and Geospatial Intelligence, New Zealand Forest Research Institute (Scion), Christchurch, New Zealand
- Taryn Saggese
- Forest Genetics and Biotechnology, New Zealand Forest Research Institute (Scion), Rotorua, New Zealand
- Jana Krajňáková
- Forest Genetics and Biotechnology, New Zealand Forest Research Institute (Scion), Rotorua, New Zealand
7
Walsh CL, Berg M, West H, Holroyd NA, Walker-Samuel S, Shipley RJ. Reconstructing microvascular network skeletons from 3D images: What is the ground truth? Comput Biol Med 2024; 171:108140. [PMID: 38422956 DOI: 10.1016/j.compbiomed.2024.108140] [Received: 10/18/2023] [Revised: 01/29/2024] [Accepted: 02/12/2024] [Indexed: 03/02/2024]
Abstract
Structural changes to microvascular networks are increasingly highlighted as markers of pathogenesis in a wide range of diseases, e.g. Alzheimer's disease, vascular dementia, and tumour growth. This has motivated the development of dedicated 3D imaging techniques, alongside the creation of computational modelling frameworks capable of using 3D reconstructed networks to simulate functional behaviours such as blood flow or transport processes. Extraction of 3D networks from imaging data broadly consists of two image processing steps: segmentation followed by skeletonisation. Much research effort has been devoted to the segmentation field, and there are standard, widely applied methodologies for creating and assessing gold standards or ground truths produced by manual annotation or automated algorithms. The skeletonisation field, however, lacks widely applied, simple-to-compute metrics for the validation or optimisation of the numerous algorithms that exist to extract skeletons from binary images. This is particularly problematic as 3D imaging datasets increase in size and visual inspection becomes an insufficient validation approach. In this work, we first demonstrate the extent of the problem by applying 4 widely used skeletonisation algorithms to 3 different imaging datasets. In doing so, we show significant variability between reconstructed skeletons of the same segmented imaging dataset. Moreover, we show that such structural variability propagates to simulated metrics such as blood flow. To mitigate this variability, we introduce a new, fast, and easy-to-compute super metric that compares the volume, connectivity, medialness, bifurcation point identification, and homology of the reconstructed skeletons to the original segmented data. We then show that such a metric can be used to select the best-performing skeletonisation algorithm for a given dataset, as well as to optimise its parameters. Finally, we demonstrate that the super metric can also be used to quickly identify how a particular skeletonisation algorithm could be improved, becoming a powerful tool for understanding the complex implications of small structural changes in a network.
Affiliation(s)
- Claire L Walsh
- Department of Mechanical Engineering, University College London, United Kingdom
- Maxime Berg
- Department of Mechanical Engineering, University College London, United Kingdom
- Hannah West
- Department of Mechanical Engineering, University College London, United Kingdom
- Natalie A Holroyd
- Centre for Computational Medicine, Division of Medicine, University College London, United Kingdom
- Simon Walker-Samuel
- Centre for Computational Medicine, Division of Medicine, University College London, United Kingdom
- Rebecca J Shipley
- Department of Mechanical Engineering, University College London, United Kingdom; Centre for Computational Medicine, Division of Medicine, University College London, United Kingdom
8
Zhang M, Zhao J, Hoshino Y. Deep learning-based high-throughput detection of in vitro germination to assess pollen viability from microscopic images. J Exp Bot 2023; 74:6551-6562. [PMID: 37584205 PMCID: PMC10662222 DOI: 10.1093/jxb/erad315] [Received: 04/14/2023] [Accepted: 08/12/2023] [Indexed: 08/17/2023]
Abstract
In vitro pollen germination is considered the most efficient method to assess pollen viability. The pollen germination frequency and pollen tube length, which are key indicators of pollen viability, should be accurately measured during in vitro culture. In this study, a Mask R-CNN model trained using microscopic images of tree peony (Paeonia suffruticosa) pollen has been proposed to rapidly detect the pollen germination rate and pollen tube length. To reduce the workload during image acquisition, images of synthesized crossed pollen tubes were added to the training dataset, significantly improving the model accuracy in recognizing crossed pollen tubes. At an Intersection over Union threshold of 50%, a mean average precision of 0.949 was achieved. The performance of the model was verified using 120 testing images. The R2 value of the linear regression model using detected pollen germination frequency against the ground truth was 0.909 and that using average pollen tube length was 0.958. Further, the model was successfully applied to two other plant species, indicating a good generalizability and potential to be applied widely.
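The R² values reported above come from ordinary least-squares linear regression of detected measurements against ground truth. A compact sketch of that computation (illustrative, not the authors' pipeline):

```python
import numpy as np

def r_squared(detected, ground_truth):
    """Coefficient of determination for a least-squares linear fit of
    detected values against manually measured ground truth."""
    x = np.asarray(ground_truth, dtype=float)
    y = np.asarray(detected, dtype=float)
    slope, intercept = np.polyfit(x, y, deg=1)
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals ** 2)       # unexplained variation
    ss_tot = np.sum((y - y.mean()) ** 2)  # total variation
    return 1.0 - ss_res / ss_tot

# Hypothetical perfectly linear detections give R^2 very close to 1.0
print(r_squared([0.1, 0.2, 0.4], [10, 20, 40]))
```

Applied to the 120 test images, an R² of 0.909 for germination frequency and 0.958 for tube length indicates the detections track the ground truth closely.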
Affiliation(s)
- Mengwei Zhang
- Division of Biosphere Science, Graduate School of Environmental Science, Hokkaido University, Kita 11, Nishi 10, Kita-ku, Sapporo 060-0811, Japan
- Jianxiang Zhao
- Division of Biosphere Science, Graduate School of Environmental Science, Hokkaido University, Kita 11, Nishi 10, Kita-ku, Sapporo 060-0811, Japan
- Yoichiro Hoshino
- Division of Biosphere Science, Graduate School of Environmental Science, Hokkaido University, Kita 11, Nishi 10, Kita-ku, Sapporo 060-0811, Japan
- Field Science Center for Northern Biosphere, Hokkaido University, Kita 11, Nishi 10, Kita-ku, Sapporo 060-0811, Japan
9
Colliard-Granero A, Jitsev J, Eikerling MH, Malek K, Eslamibidgoli MJ. UTILE-Gen: Automated Image Analysis in Nanoscience Using Synthetic Dataset Generator and Deep Learning. ACS Nanosci Au 2023; 3:398-407. [PMID: 37868222 PMCID: PMC10588433 DOI: 10.1021/acsnanoscienceau.3c00020] [Received: 05/24/2023] [Revised: 07/20/2023] [Accepted: 07/20/2023] [Indexed: 10/24/2023]
Abstract
This work presents the development and implementation of a deep learning-based workflow for autonomous image analysis in nanoscience. A versatile, agnostic, and configurable tool was developed to generate instance-segmented imaging datasets of nanoparticles. The synthetic generator tool employs domain randomization to expand the image/mask pairs dataset for training supervised deep learning models. The approach eliminates tedious manual annotation and allows training of high-performance models for microscopy image analysis based on convolutional neural networks. We demonstrate how the expanded training set can significantly improve the performance of the classification and instance segmentation models for a variety of nanoparticle shapes, ranging from spherical-, cubic-, to rod-shaped nanoparticles. Finally, the trained models were deployed in a cloud-based analytics platform for the autonomous particle analysis of microscopy images.
Affiliation(s)
- André Colliard-Granero
- Theory and Computation of Energy Materials (IEK-13), Institute of Energy and Climate Research, Forschungszentrum Jülich GmbH, 52425 Jülich, Germany
- Centre for Advanced Simulation and Analytics (CASA), Simulation and Data Science Lab for Energy Materials (SDL-EM), Forschungszentrum Jülich GmbH, 52425 Jülich, Germany
- Chair of Theory and Computation of Energy Materials, Faculty of Georesources and Materials Engineering, RWTH Aachen University, 52062 Aachen, Germany
- Jenia Jitsev
- Centre for Advanced Simulation and Analytics (CASA), Simulation and Data Science Lab for Energy Materials (SDL-EM), Forschungszentrum Jülich GmbH, 52425 Jülich, Germany
- Jülich Supercomputing Center, Forschungszentrum Jülich, 52425 Jülich, Germany
- Michael H. Eikerling
- Theory and Computation of Energy Materials (IEK-13), Institute of Energy and Climate Research, Forschungszentrum Jülich GmbH, 52425 Jülich, Germany
- Centre for Advanced Simulation and Analytics (CASA), Simulation and Data Science Lab for Energy Materials (SDL-EM), Forschungszentrum Jülich GmbH, 52425 Jülich, Germany
- Chair of Theory and Computation of Energy Materials, Faculty of Georesources and Materials Engineering, RWTH Aachen University, 52062 Aachen, Germany
- Kourosh Malek
- Theory and Computation of Energy Materials (IEK-13), Institute of Energy and Climate Research, Forschungszentrum Jülich GmbH, 52425 Jülich, Germany
- Centre for Advanced Simulation and Analytics (CASA), Simulation and Data Science Lab for Energy Materials (SDL-EM), Forschungszentrum Jülich GmbH, 52425 Jülich, Germany
- Mohammad J. Eslamibidgoli
- Theory and Computation of Energy Materials (IEK-13), Institute of Energy and Climate Research, Forschungszentrum Jülich GmbH, 52425 Jülich, Germany
- Centre for Advanced Simulation and Analytics (CASA), Simulation and Data Science Lab for Energy Materials (SDL-EM), Forschungszentrum Jülich GmbH, 52425 Jülich, Germany
10
Choi JH, Jang W, Lim YJ, Mun SJ, Bong KW. Highly Flexible Deep-Learning-Based Automatic Analysis for Graphically Encoded Hydrogel Microparticles. ACS Sens 2023; 8:3158-3166. [PMID: 37489756 DOI: 10.1021/acssensors.3c00857] [Indexed: 07/26/2023]
Abstract
Graphically encoded hydrogel microparticle (HMP)-based bioassay is a diagnostic tool characterized by exceptional multiplex detectability and robust sensitivity and specificity. Specifically, deep learning enables highly fast and accurate analyses of HMPs with diverse graphical codes. However, previous related studies have found the use of plain particles as data to be disadvantageous for accurate analyses of HMPs loaded with functional nanomaterials. Furthermore, the manual data annotation method used in existing approaches is highly labor-intensive and time-consuming. In this study, we present an efficient deep-learning-based analysis of encoded HMPs with diverse graphical codes and functional nanomaterials, utilizing the auto-annotation and synthetic data mixing methods for model training. The auto-annotation enhanced the throughput of dataset preparation up to 0.11 s/image. Using synthetic data mixing, a mean average precision of 0.88 was achieved in the analysis of encoded HMPs with magnetic nanoparticles, representing an approximately twofold improvement over the standard method. To evaluate the practical applicability of the proposed automatic analysis strategy, a single-image analysis was performed after the triplex immunoassay for the preeclampsia-related protein biomarkers. Finally, we accomplished a processing throughput of 0.353 s per sample for analyzing the result image.
Affiliation(s)
- Jun Hee Choi
- Department of Chemical and Biological Engineering, Korea University, Seoul 02841, South Korea
- Wookyoung Jang
- Department of Chemical and Biological Engineering, Korea University, Seoul 02841, South Korea
- Yong Jun Lim
- Department of Chemical and Biological Engineering, Korea University, Seoul 02841, South Korea
- Seok Joon Mun
- Department of Chemical and Biological Engineering, Korea University, Seoul 02841, South Korea
- Ki Wan Bong
- Department of Chemical and Biological Engineering, Korea University, Seoul 02841, South Korea
11. Tanaka Y, Watanabe T, Katsura K, Tsujimoto Y, Takai T, Tanaka TST, Kawamura K, Saito H, Homma K, Mairoua SG, Ahouanton K, Ibrahim A, Senthilkumar K, Semwal VK, Matute EJG, Corredor E, El-Namaky R, Manigbas N, Quilang EJP, Iwahashi Y, Nakajima K, Takeuchi E, Saito K. Deep Learning Enables Instant and Versatile Estimation of Rice Yield Using Ground-Based RGB Images. Plant Phenomics 2023; 5:0073. PMID: 38239736; PMCID: PMC10795498; DOI: 10.34133/plantphenomics.0073.
Abstract
Rice (Oryza sativa L.) is one of the most important cereals, providing 20% of the world's food energy. However, its productivity is poorly assessed, especially in the global South. Here, we present the first study applying a deep-learning-based approach to instantaneous estimation of rice yield from red-green-blue (RGB) images. During the ripening stage and at harvest, over 22,000 digital images were captured vertically downward over the rice canopy from a distance of 0.8 to 0.9 m at 4,820 harvest plots with yields of 0.1 to 16.1 t·ha-1 across six countries in Africa and in Japan. A convolutional neural network applied to the at-harvest data explained 68% of the variation in yield with a relative root mean square error of 0.22. The developed model successfully detected genotypic differences and the impact of agronomic interventions on yield in an independent dataset. The model was also robust to images acquired at shooting angles up to 30° from nadir, to diverse light environments, and to shooting date during the late ripening stage. Even when image resolution was reduced (from 0.2 to 3.2 cm·pixel-1 ground sampling distance), the model explained 57% of the variation in yield, implying that this approach can be scaled by the use of unmanned aerial vehicles. Our work offers a low-cost, hands-on, and rapid approach to high-throughput phenotyping, and can enable impact assessment of productivity-enhancing interventions, detection of fields where such interventions are needed to sustainably increase crop production, and yield forecasting several weeks before harvest.
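The accuracy figures quoted in this abstract (share of yield variation explained, relative RMSE of 0.22) are standard regression metrics and can be computed from any paired observed/predicted yields. A minimal stdlib sketch with hypothetical yield values, not the paper's data:

```python
import math

def yield_metrics(observed, predicted):
    """Coefficient of determination (R^2) and relative RMSE,
    defined here as RMSE divided by the mean observed yield."""
    n = len(observed)
    mean_obs = sum(observed) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    r2 = 1.0 - ss_res / ss_tot
    rrmse = math.sqrt(ss_res / n) / mean_obs
    return r2, rrmse

# Hypothetical yields in t/ha, spanning a range like the study's plots
obs = [2.1, 4.8, 7.5, 10.2, 13.0]
pred = [2.6, 4.1, 8.0, 9.5, 12.4]
r2, rrmse = yield_metrics(obs, pred)
```

Normalizing RMSE by the mean observed yield makes the error comparable across plots with very different productivity levels, which matters for a dataset spanning 0.1 to 16.1 t·ha-1.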
Affiliation(s)
- Yu Tanaka
- Graduate School of Agriculture, Kyoto University, Kitashirakawa Oiwake-chou, Sakyo-ku, Kyoto 606-8502, Japan
- Graduate School of Environmental, Life, Natural Science and Technology, Okayama University, 1-1-1, Tsushima Naka, Okayama 700-8530, Japan
- Tomoya Watanabe
- Graduate School of Mathematics, Kyushu University, 744, Motooka, Fukuoka Shi Nishi Ku, Fukuoka 819-0395, Japan
- Keisuke Katsura
- Graduate School of Agriculture, Tokyo University of Agriculture and Technology, 3-5-8 Saiwaicho, Fuchu, Tokyo 183-8509, Japan
- Yasuhiro Tsujimoto
- Japan International Research Center for Agricultural Sciences, 1-1 Ohwashi, Tsukuba, Ibaraki 305-8686, Japan
- Toshiyuki Takai
- Japan International Research Center for Agricultural Sciences, 1-1 Ohwashi, Tsukuba, Ibaraki 305-8686, Japan
- Takashi Sonam Tashi Tanaka
- Faculty of Applied Biological Sciences, Gifu University, 1-1 Yanagido, Gifu 501-1193, Japan
- Artificial Intelligence Advanced Research Center, Gifu University, 1-1 Yanagido, Gifu 501-1193, Japan
- Kensuke Kawamura
- Japan International Research Center for Agricultural Sciences, 1-1 Ohwashi, Tsukuba, Ibaraki 305-8686, Japan
- Hiroki Saito
- Tropical Agriculture Research Front, Japan International Research Center for Agricultural Sciences, 1091-1 Maezato, Ishigaki, Okinawa 907-0002, Japan
- Koki Homma
- Graduate School of Agricultural Science, Tohoku University, Aramaki Aza-Aoba, Aoba, Sendai, Miyagi 980-8572, Japan
- Kokou Ahouanton
- Africa Rice Center (AfricaRice), 01 BP 2551 Bouaké, Côte d'Ivoire
- Ali Ibrahim
- Africa Rice Center (AfricaRice), Regional Station for the Sahel, B.P. 96, Saint-Louis, Senegal
- Kalimuthu Senthilkumar
- Africa Rice Center (AfricaRice), P.O. Box 1690, Ampandrianomby, Antananarivo, Madagascar
- Vimal Kumar Semwal
- Africa Rice Center (AfricaRice), Nigeria Station, c/o IITA, PMB 5320, Ibadan, Nigeria
- Eduardo Jose Graterol Matute
- Latin American Fund for Irrigated Rice - The Alliance of Bioversity International and CIAT, Km 17 Recta Cali-Palmira, C.P. 763537, A.A. 6713, Cali, Colombia
- Edgar Corredor
- Latin American Fund for Irrigated Rice - The Alliance of Bioversity International and CIAT, Km 17 Recta Cali-Palmira, C.P. 763537, A.A. 6713, Cali, Colombia
- Raafat El-Namaky
- Rice Research and Training Center, Field Crops Research Institute, ARC, Giza, Egypt
- Norvie Manigbas
- Philippine Rice Research Institute (PhilRice), Maligaya, Science City of Muñoz, 3119 Nueva Ecija, Philippines
- Eduardo Jimmy P. Quilang
- Philippine Rice Research Institute (PhilRice), Maligaya, Science City of Muñoz, 3119 Nueva Ecija, Philippines
- Yu Iwahashi
- Graduate School of Agriculture, Kyoto University, Kitashirakawa Oiwake-chou, Sakyo-ku, Kyoto 606-8502, Japan
- Kota Nakajima
- Graduate School of Agriculture, Kyoto University, Kitashirakawa Oiwake-chou, Sakyo-ku, Kyoto 606-8502, Japan
- Eisuke Takeuchi
- Graduate School of Agriculture, Kyoto University, Kitashirakawa Oiwake-chou, Sakyo-ku, Kyoto 606-8502, Japan
- Kazuki Saito
- Japan International Research Center for Agricultural Sciences, 1-1 Ohwashi, Tsukuba, Ibaraki 305-8686, Japan
- Africa Rice Center (AfricaRice), 01 BP 2551 Bouaké, Côte d'Ivoire
- International Rice Research Institute (IRRI), DAPO Box 7777, Metro Manila 1301, Philippines
12. Abebe AM, Kim Y, Kim J, Kim SL, Baek J. Image-Based High-Throughput Phenotyping in Horticultural Crops. Plants (Basel) 2023; 12:2061. PMID: 37653978; PMCID: PMC10222289; DOI: 10.3390/plants12102061.
Abstract
Plant phenotyping is the primary task of any plant breeding program, and accurate measurement of plant traits is essential to select genotypes with better quality, high yield, and climate resilience. The majority of currently used phenotyping techniques are destructive and time-consuming. Recently, the development of various sensors and imaging platforms for rapid and efficient quantitative measurement of plant traits has become the mainstream approach in plant phenotyping studies. Here, we review trends in image-based high-throughput phenotyping methods applied to horticultural crops. High-throughput phenotyping is carried out using various types of imaging platforms developed for indoor or field conditions. We highlight the applications of different imaging platforms in the horticulture sector, along with their advantages and limitations. Furthermore, the principles and applications of commonly used imaging techniques for high-throughput plant phenotyping are discussed: visible-light (RGB) imaging, thermal imaging, chlorophyll fluorescence, hyperspectral imaging, and tomographic imaging. High-throughput phenotyping has been widely used to phenotype various horticultural traits, including morphological, physiological, biochemical, and yield traits as well as biotic and abiotic stress responses. Moreover, high-throughput phenotyping with various optical sensors is expected to uncover new phenotypic traits that remain to be explored. We summarize the applications of image analysis for the quantitative evaluation of various traits, with several examples of horticultural crops from the literature. Finally, we outline the current trends in high-throughput phenotyping of horticultural crops and highlight future perspectives.
Affiliation(s)
- Jeongho Baek
- Department of Agricultural Biotechnology, National Institute of Agricultural Science, Rural Development Administration, Jeonju 54874, Republic of Korea
13. Dirr J, Gebauer D, Yao J, Daub R. Automatic Image Generation Pipeline for Instance Segmentation of Deformable Linear Objects. Sensors (Basel) 2023; 23:3013. PMID: 36991728; PMCID: PMC10058460; DOI: 10.3390/s23063013.
Abstract
Robust detection of deformable linear objects (DLOs) is a crucial challenge for automating the handling and assembly of cables and hoses. A lack of training data limits deep-learning-based detection of DLOs. In this context, we propose an automatic image generation pipeline for instance segmentation of DLOs, in which a user sets boundary conditions to generate training data for industrial applications automatically. A comparison of different ways of replicating DLOs shows that modeling them as rigid bodies with versatile deformations is most effective. Further, reference scenarios for the arrangement of DLOs are defined to generate scenes in simulation automatically, which allows the pipeline to be quickly transferred to new applications. Validation of models trained on synthetic images and tested on real-world images shows the feasibility of the proposed data generation approach for DLO segmentation. Finally, we show that the pipeline yields results comparable to the state of the art while reducing manual effort and improving transferability to new use cases.
14. Wang Z, Guan B, Tang W, Wu S, Ma X, Niu H, Wan X, Zang Y. Classification of Fluorescently Labelled Maize Kernels Using Convolutional Neural Networks. Sensors (Basel) 2023; 23:2840. PMID: 36905044; PMCID: PMC10007198; DOI: 10.3390/s23052840.
Abstract
Accurate real-time classification of fluorescently labelled maize kernels is important for the industrial application of advanced maize breeding techniques, making a real-time classification device and recognition algorithm necessary. In this study, a machine vision (MV) system capable of identifying fluorescent maize kernels in real time was designed using a fluorescent protein excitation light source and a filter to achieve optimal detection. A high-precision method for identifying fluorescent maize kernels based on a YOLOv5s convolutional neural network (CNN) was developed, and the kernel sorting performance of the improved YOLOv5s model, as well as that of other YOLO models, was analysed and compared. The results show that a yellow LED excitation light source combined with an industrial camera filter with a central wavelength of 645 nm achieves the best recognition of fluorescent maize kernels, and that the improved YOLOv5s algorithm increases recognition accuracy to 96%. This study provides a feasible technical solution for high-precision, real-time classification of fluorescent maize kernels and has general technical value for the efficient identification and classification of various fluorescently labelled plant seeds.
Affiliation(s)
- Zilong Wang
- School of Mechanical Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Ben Guan
- School of Mechanical Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Beijing Zhong Zhi International Institute of Agricultural Biosciences, Beijing 101200, China
- Shunde Innovation School, University of Science and Technology Beijing, Beijing 528300, China
- Wenbo Tang
- School of Mechanical Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Suowei Wu
- Beijing Zhong Zhi International Institute of Agricultural Biosciences, Beijing 101200, China
- School of Chemistry and Biological Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Xuejie Ma
- School of Mechanical Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Hao Niu
- School of Mechanical Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Xiangyuan Wan
- Beijing Zhong Zhi International Institute of Agricultural Biosciences, Beijing 101200, China
- Shunde Innovation School, University of Science and Technology Beijing, Beijing 528300, China
- School of Chemistry and Biological Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Yong Zang
- School of Mechanical Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Beijing Zhong Zhi International Institute of Agricultural Biosciences, Beijing 101200, China
- Shunde Innovation School, University of Science and Technology Beijing, Beijing 528300, China
15. Seki K, Toda Y. QTL mapping for seed morphology using the instance segmentation neural network in Lactuca spp. Front Plant Sci 2022; 13:949470. PMID: 36311127; PMCID: PMC9606697; DOI: 10.3389/fpls.2022.949470.
Abstract
Wild species of lettuce (Lactuca spp.) are thought to have first been domesticated for their oilseed content, providing seed oil for human consumption. Although seed morphology is an important trait contributing to oilseed use in lettuce, the underlying genetic mechanisms remain elusive. Because lettuce seeds are small, the manual phenotyping required for genetic dissection of such traits is challenging. In this study, we built and applied an instance segmentation-based seed morphology quantification pipeline to automatically measure traits in seeds generated from a cross between the domesticated oilseed-type cultivar 'Oilseed' and the wild species 'UenoyamaMaruba'. Quantitative trait locus (QTL) mapping following ddRAD-seq revealed 11 QTLs linked to 7 seed traits (area, width, length, length-to-width ratio, eccentricity, perimeter length, and circularity). Remarkably, the three QTLs with the highest LOD scores, qLWR-3.1, qECC-3.1, and qCIR-3.1, for length-to-width ratio, eccentricity, and circularity, respectively, mapped to linkage group 3 (LG3) at approximately 161.5 to 214.6 Mb, a region previously reported to be associated with domestication traits from wild species. These results suggest that the oilseed cultivar harbors genes, acquired during domestication, that control seed shape in this genomic region. This study also provides genetic evidence that domestication arose, at least in part, through selection for the oilseed type from wild species, and demonstrates the effectiveness of image-based phenotyping for accelerating discovery of the genetic basis of small morphological features such as seed size and shape.
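The seven seed traits named in this abstract are classical shape descriptors computable from a segmented seed outline. A minimal stdlib sketch of such descriptors from a closed 2-D contour, not the authors' pipeline (which obtains the masks with an instance-segmentation network):

```python
import math

def seed_descriptors(contour):
    """Shape traits from a closed 2-D contour [(x, y), ...]:
    area (shoelace formula), perimeter, circularity = 4*pi*A/P^2,
    length/width as extents along/across the principal axis,
    length-to-width ratio, and eccentricity sqrt(1 - (w/l)^2)."""
    n = len(contour)
    closed = list(zip(contour, contour[1:] + contour[:1]))
    # Shoelace area and polygon perimeter
    area = 0.5 * abs(sum(x1 * y2 - x2 * y1
                         for (x1, y1), (x2, y2) in closed))
    perim = sum(math.dist(p, q) for p, q in closed)
    # Principal axis from the covariance of the contour points
    cx = sum(x for x, _ in contour) / n
    cy = sum(y for _, y in contour) / n
    sxx = sum((x - cx) ** 2 for x, _ in contour) / n
    syy = sum((y - cy) ** 2 for _, y in contour) / n
    sxy = sum((x - cx) * (y - cy) for x, y in contour) / n
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    c, s = math.cos(theta), math.sin(theta)
    u = [((x - cx) * c + (y - cy) * s, -(x - cx) * s + (y - cy) * c)
         for x, y in contour]
    length = max(a for a, _ in u) - min(a for a, _ in u)
    width = max(b for _, b in u) - min(b for _, b in u)
    length, width = max(length, width), min(length, width)
    return {
        "area": area,
        "perimeter": perim,
        "circularity": 4 * math.pi * area / perim ** 2,
        "length": length,
        "width": width,
        "lw_ratio": length / width,
        "eccentricity": math.sqrt(1 - (width / length) ** 2),
    }

# Toy contour: a unit square (circularity pi/4, eccentricity 0)
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
traits = seed_descriptors(square)
```

Circularity equals 1 only for a perfect circle and decreases for elongated or irregular outlines, which is why it separates round-seeded from slender-seeded genotypes.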
Affiliation(s)
- Kousuke Seki
- Nagano Vegetable and Ornamental Crops Experiment Station, Shiojiri, Japan
- Yosuke Toda
- Phytometrics Co., Ltd., Shizuoka, Japan
- Bioscience and Biotechnology Center, Nagoya University, Nagoya, Japan
- Institute of Transformative Bio-Molecules (WPI-ITbM), Nagoya University, Nagoya, Japan
16. Hussein BR, Malik OA, Ong WH, Slik JWF. Applications of computer vision and machine learning techniques for digitized herbarium specimens: A systematic literature review. Ecol Inform 2022. DOI: 10.1016/j.ecoinf.2022.101641.
17. Unlersen MF, Sonmez ME, Aslan MF, Demir B, Aydin N, Sabanci K, Ropelewska E. CNN–SVM hybrid model for varietal classification of wheat based on bulk samples. Eur Food Res Technol 2022. DOI: 10.1007/s00217-022-04029-4.
18. Majewski P, Zapotoczny P, Lampa P, Burduk R, Reiner J. Multipurpose monitoring system for edible insect breeding based on machine learning. Sci Rep 2022; 12:7892. PMID: 35551215; PMCID: PMC9098436; DOI: 10.1038/s41598-022-11794-5.
Abstract
Tenebrio molitor has become the first insect added to the catalogue of novel foods by the European Food Safety Authority, owing to its rich nutritional value and the low carbon footprint of its breeding. The large scale of Tenebrio molitor breeding makes automation of the process, supported by a monitoring system, essential. The present research develops a three-module system for monitoring Tenebrio molitor breeding. The instance segmentation module (ISM) detects individual growth stages (larvae, pupae, beetles) of Tenebrio molitor and also identifies anomalies: dead larvae and pests. The semantic segmentation module (SSM) extracts feed, chitin, and frass from the acquired image. The larvae phenotyping module (LPM) calculates features for both individual larvae (length, curvature, mass, division into segments, and their classification) and the whole population (length distribution). The modules were developed using machine learning models (Mask R-CNN, U-Net, LDA) and validated on different samples of real data. Synthetic image generation from a collection of labelled objects was used, which significantly reduced model development time and mitigated the problems of dense scenes and class imbalance. The obtained results (average [Formula: see text] for ISM and average [Formula: see text] for SSM) confirm the great potential of the proposed system.
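The synthetic image generation described above composes scenes by pasting labelled object crops, so annotations come for free. A toy stdlib sketch of the placement step only (random positions with an overlap-rejection rule); the function, parameters, and class label are illustrative, not the authors' implementation:

```python
import random

def synthesize_scene(canvas_w, canvas_h, object_sizes,
                     max_iou=0.2, max_tries=100, seed=0):
    """Place labelled object boxes (label, (w, h)) at random positions
    on a canvas, rejecting candidates whose IoU with any already
    placed box exceeds max_iou. Returns auto-generated annotations
    (label, x, y, w, h) -- the 'free' labels of synthetic data."""
    rng = random.Random(seed)

    def iou(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
        iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
        inter = ix * iy
        union = aw * ah + bw * bh - inter
        return inter / union if union else 0.0

    placed, annotations = [], []
    for label, (w, h) in object_sizes:
        for _ in range(max_tries):
            cand = (rng.uniform(0, canvas_w - w),
                    rng.uniform(0, canvas_h - h), w, h)
            if all(iou(cand, b) <= max_iou for b in placed):
                placed.append(cand)
                annotations.append((label, *cand))
                break
    return annotations

# Hypothetical scene: ten 40x10 "larva" crops on a 640x480 canvas
annotations = synthesize_scene(640, 480,
                               [("larva", (40, 10)) for _ in range(10)])
```

Raising `max_iou` produces the denser, partially occluded scenes that the paper notes are otherwise hard to annotate by hand.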
Affiliation(s)
- Paweł Majewski
- Faculty of Information and Communication Technology, Wrocław University of Science and Technology, Wrocław, Poland
- Piotr Zapotoczny
- Department of Systems Engineering, University of Warmia and Mazury in Olsztyn, Olsztyn, Poland
- Piotr Lampa
- Faculty of Mechanical Engineering, Wrocław University of Science and Technology, Wrocław, Poland
- Robert Burduk
- Faculty of Information and Communication Technology, Wrocław University of Science and Technology, Wrocław, Poland
- Jacek Reiner
- Faculty of Mechanical Engineering, Wrocław University of Science and Technology, Wrocław, Poland
19. Wang P, Meng F, Donaldson P, Horan S, Panchy NL, Vischulis E, Winship E, Conner JK, Krysan PJ, Shiu S, Lehti-Shiu MD. High-throughput measurement of plant fitness traits with an object detection method using Faster R-CNN. New Phytol 2022; 234:1521-1533. PMID: 35218008; PMCID: PMC9310946; DOI: 10.1111/nph.18056.
Abstract
Revealing the contributions of genes to plant phenotype is frequently challenging because loss-of-function effects may be subtle or masked by varying degrees of genetic redundancy. Such effects can potentially be detected by measuring plant fitness, which reflects the cumulative effects of genetic changes over the lifetime of a plant. However, fitness is challenging to measure accurately, particularly in species with high fecundity and relatively small propagule sizes, such as Arabidopsis thaliana. An image segmentation-based method using the software ImageJ and an object detection-based method using the Faster Region-based Convolutional Neural Network (Faster R-CNN) algorithm were used to measure two Arabidopsis fitness traits: seed and fruit counts. The segmentation-based method was error-prone (correlation between true and predicted seed counts, r2 = 0.849) because seeds touching each other were undercounted. By contrast, the object detection-based algorithm yielded near-perfect seed counts (r2 = 0.9996) and highly accurate fruit counts (r2 = 0.980). Comparing seed counts for the wild type and 12 mutant lines revealed fitness effects for three genes; fruit counts revealed the same effects for two genes. Our study provides analysis pipelines and models to facilitate the investigation of Arabidopsis fitness traits and demonstrates the importance of examining fitness traits when studying gene functions.
Affiliation(s)
- Peipei Wang
- Department of Plant Biology, Michigan State University, East Lansing, MI 48824, USA
- DOE Great Lakes Bioenergy Research Center, Michigan State University, East Lansing, MI 48824, USA
- Fanrui Meng
- Department of Plant Biology, Michigan State University, East Lansing, MI 48824, USA
- DOE Great Lakes Bioenergy Research Center, Michigan State University, East Lansing, MI 48824, USA
- Paityn Donaldson
- Department of Plant Biology, Michigan State University, East Lansing, MI 48824, USA
- Sarah Horan
- Department of Plant Biology, Michigan State University, East Lansing, MI 48824, USA
- Nicholas L. Panchy
- National Institute for Mathematical and Biological Synthesis, University of Tennessee, 1122 Volunteer Blvd, Suite 106, Knoxville, TN 37996-3410, USA
- Elyse Vischulis
- Genetics and Genome Sciences Graduate Program, Michigan State University, East Lansing, MI 48824, USA
- Eamon Winship
- Department of Biochemistry and Molecular Biology, Michigan State University, East Lansing, MI 48824, USA
- Jeffrey K. Conner
- Department of Plant Biology, Michigan State University, East Lansing, MI 48824, USA
- W.K. Kellogg Biological Station, Michigan State University, 3700 E. Gull Lake Drive, Hickory Corners, MI 49060, USA
- Ecology, Evolution, and Behavior Graduate Program, Michigan State University, East Lansing, MI 48824, USA
- Patrick J. Krysan
- Department of Horticulture, University of Wisconsin-Madison, Madison, WI 53705, USA
- Shin-Han Shiu
- Department of Plant Biology, Michigan State University, East Lansing, MI 48824, USA
- DOE Great Lakes Bioenergy Research Center, Michigan State University, East Lansing, MI 48824, USA
- Genetics and Genome Sciences Graduate Program, Michigan State University, East Lansing, MI 48824, USA
- Ecology, Evolution, and Behavior Graduate Program, Michigan State University, East Lansing, MI 48824, USA
- Department of Computational Mathematics, Science, and Engineering, Michigan State University, East Lansing, MI 48824, USA
20. Ta QB, Huynh TC, Pham QQ, Kim JT. Corroded Bolt Identification Using Mask Region-Based Deep Learning Trained on Synthesized Data. Sensors (Basel) 2022; 22:3340. PMID: 35591032; PMCID: PMC9104359; DOI: 10.3390/s22093340.
Abstract
The performance of a neural network depends on the availability of datasets, and most deep learning techniques lack accuracy and generalization when trained on limited datasets. Using synthesized training data is an effective way to overcome this limitation. Moreover, previous corroded bolt detection methods have classified only two classes, clean and fully rusted bolts, and their performance for detecting partially rusted bolts remains questionable. This study presents a deep learning method to identify corroded bolts in steel structures using a mask region-based convolutional neural network (Mask R-CNN) trained on synthesized data. A ResNet50 integrated with a feature pyramid network is used as the backbone for feature extraction in the Mask R-CNN-based corroded bolt detector. A four-step data synthesis procedure is proposed to autonomously generate training datasets of corroded bolts with different severities. The proposed detector is then trained on the synthesized datasets, and its robustness is demonstrated by detecting corroded bolts in a lab-scale steel structure under varying capturing distances and perspectives. The results show that the proposed method detects corroded bolts well and identifies their corrosion levels, with an overall accuracy of 96.3% at a 1.0 m capturing distance and 97.5% at a 15° perspective angle.
Affiliation(s)
- Quoc-Bao Ta
- Department of Ocean Engineering, Pukyong National University, Busan 48513, Korea
- Thanh-Canh Huynh
- Institute of Research and Development, Duy Tan University, Danang 550000, Vietnam
- Faculty of Civil Engineering, Duy Tan University, Danang 550000, Vietnam
- Quang-Quang Pham
- Department of Ocean Engineering, Pukyong National University, Busan 48513, Korea
- Jeong-Tae Kim
- Department of Ocean Engineering, Pukyong National University, Busan 48513, Korea
21. Du J, Li B, Lu X, Yang X, Guo X, Zhao C. Quantitative phenotyping and evaluation for lettuce leaves of multiple semantic components. Plant Methods 2022; 18:54. PMID: 35468831; PMCID: PMC9036747; DOI: 10.1186/s13007-022-00890-2.
Abstract
BACKGROUND Classification and phenotype identification of lettuce leaves urgently require fine quantification of their multi-semantic traits. Different components of lettuce leaves perform specific physiological functions and can be quantitatively described and interpreted through their observable properties. In particular, the petiole and veins determine the mechanical support and material transport performance of leaves, while other components may be closely related to photosynthesis. Currently, lettuce leaf phenotyping does not accurately differentiate leaf components, and there is no comparative evaluation of the front and back sides of the same lettuce leaf. In addition, although a few traits of leaf components can be measured manually, doing so is time-consuming, laborious, and inaccurate. Although several studies have addressed image-based phenotyping of leaves, robust methods to automatically extract and validate multi-semantic traits of lettuce leaves at scale are still lacking.
RESULTS In this study, we developed an automated phenotyping pipeline to recognize the components of detached lettuce leaves and calculate multi-semantic traits for phenotype identification. Six semantic segmentation models were constructed to extract leaf components from visible-light images of lettuce leaves. A leaf normalization technique was then used to rotate and scale leaves of different sizes into a "size-free" space for consistent phenotyping. A novel lamina-based approach was also used to determine the petiole, first-order vein, and second-order veins. The proposed pipeline contributed 30 geometry-, 20 venation-, and 216 color-based traits to characterize each lettuce leaf. Eleven manually measured traits were evaluated and showed high correlations with the computed results. Further, front-and-back image pairs of leaves were used to verify the accuracy of the proposed method and to evaluate trait differences.
CONCLUSIONS The proposed method provides an effective strategy for quantitative analysis of the fine structure and components of detached lettuce leaves. Geometry, color, and vein traits of the lettuce leaf and its components can be comprehensively used for phenotype identification and breeding of lettuce. This study offers valuable perspectives for developing automated high-throughput phenotyping applications for lettuce leaves and for improving agronomic traits such as effective photosynthetic area and vein configuration.
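The "size-free" normalization mentioned in the abstract can be illustrated with a standard shape-normalization recipe: translate the outline to its centroid, rotate the principal axis onto the x-axis, and rescale to unit size. A stdlib sketch of that idea, not the authors' exact procedure (here the scale factor is the RMS radius of the outline points):

```python
import math

def normalize_leaf(points):
    """Map a leaf outline [(x, y), ...] into a 'size-free' space:
    centroid at the origin, principal axis along x, RMS radius 1."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    pts = [(x - cx, y - cy) for x, y in points]
    # Principal-axis angle from second moments of the outline
    sxx = sum(x * x for x, _ in pts) / n
    syy = sum(y * y for _, y in pts) / n
    sxy = sum(x * y for x, y in pts) / n
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    c, s = math.cos(theta), math.sin(theta)
    rot = [(x * c + y * s, -x * s + y * c) for x, y in pts]
    # Scale so the root-mean-square radius equals 1
    scale = math.sqrt(sum(x * x + y * y for x, y in rot) / n)
    return [(x / scale, y / scale) for x, y in rot]

# Toy outline: an axis-aligned 4x2 rectangle
norm = normalize_leaf([(0, 0), (4, 0), (4, 2), (0, 2)])
```

After this mapping, leaves of different absolute sizes occupy a common coordinate frame, so geometry traits computed on them are directly comparable.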
Affiliation(s)
- Jianjun Du
- Beijing Key Lab of Digital Plant, Research Center of Information Technology, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
- Bo Li
- Beijing Key Laboratory of Agricultural Genetic Resources and Biotechnology, Beijing Agro-Biotechnology Research Center, Beijing, China
- Xianju Lu
- Beijing Key Lab of Digital Plant, Research Center of Information Technology, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
- Xiaozeng Yang
- Beijing Key Laboratory of Agricultural Genetic Resources and Biotechnology, Beijing Agro-Biotechnology Research Center, Beijing, China
- Xinyu Guo
- Beijing Key Lab of Digital Plant, Research Center of Information Technology, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
- Chunjiang Zhao
- Beijing Key Lab of Digital Plant, Research Center of Information Technology, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
22. Rodríguez-López CE, Jiang Y, Kamileen MO, Lichman BR, Hong B, Vaillancourt B, Buell CR, O'Connor SE. Phylogeny-aware chemoinformatic analysis of chemical diversity in the Lamiaceae enables iridoid pathway assembly and discovery of aucubin synthase. Mol Biol Evol 2022; 39:6550147. PMID: 35298643; PMCID: PMC9048965; DOI: 10.1093/molbev/msac057.
Abstract
Countless reports describe the isolation and structural characterization of natural products, yet this information remains disconnected and under-utilized. Using a cheminformatics approach, we leverage reported observations of iridoid glucosides, together with the known phylogeny of a large iridoid-producing plant family (Lamiaceae), to generate a set of biosynthetic pathways that best explain the extant iridoid chemical diversity. We developed a pathway reconstruction algorithm that connects iridoid reports via reactions and prunes this solution space by considering phylogenetic relationships between genera. We formulated a model that emulates the evolution of iridoid glucosides to create a synthetic dataset, used it to select the parameters that best reconstruct the pathways, and applied them to the iridoid dataset to generate pathway hypotheses. These computationally generated pathways were then used as the basis for selecting and screening biosynthetic enzyme candidates. Our model was successfully applied to discover a cytochrome P450 enzyme from Callicarpa americana that catalyzes the oxidation of bartsioside to aucubin, a reaction predicted by our model despite neither molecule having been observed in the genus. We also demonstrated aucubin synthase activity in orthologues of Vitex agnus-castus and the outgroup Paulownia tomentosa, further strengthening the model-enabled hypothesis that the reaction was present in the ancestral biosynthetic pathway. This is the first systematic hypothesis on epi-iridoid glucoside biosynthesis in 25 years and sets the stage for streamlined work on the iridoid pathway. This work highlights how curation and computational analysis of widely available structural data can facilitate hypothesis-based gene discovery.
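At its core, connecting compound reports via single reaction steps is a search over a compound graph. A toy stdlib sketch of that idea (the pruning by phylogeny and parameter fitting are omitted); the mini-network below is illustrative, with only the bartsioside-to-aucubin oxidation mirroring a reaction named in the abstract:

```python
from collections import deque

def reconstruct_path(start, target, reactions):
    """Breadth-first search over a compound graph whose directed
    edges are single biosynthetic transformations. `reactions` maps
    a compound to the products one reaction step away; returns the
    shortest chain of compounds from start to target, or None."""
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in reactions.get(path[-1], ()):
            if nxt not in path:  # avoid revisiting compounds
                queue.append(path + [nxt])
    return None

# Hypothetical mini-network of iridoid intermediates, illustrative only
net = {
    "iridodial": ["iridotrial"],
    "iridotrial": ["7-deoxyloganetic acid"],
    "7-deoxyloganetic acid": ["bartsioside"],
    "bartsioside": ["aucubin"],
}
path = reconstruct_path("iridodial", "aucubin", net)
```

In the paper's setting, each inferred edge becomes a testable hypothesis: a missing transformation on a reconstructed path points at an enzyme to screen for, which is how the bartsioside-to-aucubin oxidation led to the aucubin synthase candidate.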
Affiliation(s)
- Carlos E Rodríguez-López: Department of Natural Product Biosynthesis, Max Planck Institute for Chemical Ecology, 07745 Jena, Germany; Escuela de Ingenieria y Ciencias, Tecnologico de Monterrey, 64849 Monterrey, Mexico
- Yindi Jiang: Department of Natural Product Biosynthesis, Max Planck Institute for Chemical Ecology, 07745 Jena, Germany
- Mohamed O Kamileen: Department of Natural Product Biosynthesis, Max Planck Institute for Chemical Ecology, 07745 Jena, Germany
- Benjamin R Lichman: Department of Biology, University of York, YO10 5DD York, United Kingdom
- Benke Hong: Department of Natural Product Biosynthesis, Max Planck Institute for Chemical Ecology, 07745 Jena, Germany
- Brieanne Vaillancourt: Center for Applied Genetic Technologies, University of Georgia, Athens, GA 30602, USA
- C Robin Buell: Department of Crop & Soil Sciences, University of Georgia, Athens, GA 30602, USA
- Sarah E O'Connor: Department of Natural Product Biosynthesis, Max Planck Institute for Chemical Ecology, 07745 Jena, Germany

23
Ninomiya S. High-throughput field crop phenotyping: current status and challenges. Breeding Science 2022; 72:3-18. [PMID: 36045897 PMCID: PMC8987842 DOI: 10.1270/jsbbs.21069]
Abstract
In contrast to the rapid advances made in plant genotyping, plant phenotyping is considered a bottleneck in plant science, which has prompted high-throughput plant phenotyping (HTP) studies and an exponential increase in phenotyping-related publications. HTP was originally developed as an indoor technology for model plant species under controlled environments, but its focus has since shifted to crops in the field. Although field HTP is much more difficult to conduct than HTP in controlled environments because of unstable environmental conditions, recent advances in HTP technology have overcome these difficulties, allowing rapid, efficient, non-destructive, non-invasive, quantitative, repeatable, and objective phenotyping. Recent HTP developments have been accelerated by advances in data analysis, sensors, and robot technologies, including machine learning, image analysis, three-dimensional (3D) reconstruction, image sensors, laser sensors, environmental sensors, and drones, along with high-speed computational resources. This article provides an overview of recent HTP technologies, focusing mainly on canopy-based phenotypes of major crops, such as canopy height, canopy coverage, canopy biomass, and canopy stressed appearance, in addition to crop organ detection and counting in the field. Current topics in field HTP are also presented, followed by a discussion of the low rates of adoption of HTP in practical breeding programs.
Affiliation(s)
- Seishi Ninomiya: Graduate School of Agricultural and Life Sciences, The University of Tokyo, Nishitokyo, Tokyo 188-0002, Japan; Plant Phenomics Research Center, Nanjing Agricultural University, Nanjing, China

24
Teramoto S, Uga Y. Improving the efficiency of plant root system phenotyping through digitization and automation. Breeding Science 2022; 72:48-55. [PMID: 36045896 PMCID: PMC8987843 DOI: 10.1270/jsbbs.21053]
Abstract
Root system architecture (RSA) determines a plant's access to water and nutrients, which are unevenly distributed in soil; genetic improvement of RSA is therefore relevant to crop production. However, RSA phenotyping has been carried out far less frequently than above-ground phenotyping because measuring roots in soil is difficult and labor intensive. Recent advances have led to the digitalization of plant measurements, and this digital phenotyping is now widely used for both above-ground and RSA traits, although digital phenotyping of RSA remains slower and more difficult because the roots are hidden underground. In this review, we summarize recent trends in digital phenotyping of RSA traits, classifying the sample types into three categories: soil blocks containing roots, sections of soil blocks, and root samples. Examples of digital phenotyping are presented for each category, and we discuss room for improvement in each.
Affiliation(s)
- Shota Teramoto: Institute of Crop Science, National Agriculture and Food Research Organization, Tsukuba, Ibaraki 305-8518, Japan
- Yusaku Uga (corresponding author): Institute of Crop Science, National Agriculture and Food Research Organization, Tsukuba, Ibaraki 305-8518, Japan

25
Kruitbosch HT, Mzayek Y, Omlor S, Guerra P, Milias-Argeitis A. A convolutional neural network for segmentation of yeast cells without manual training annotations. Bioinformatics 2022; 38:1427-1433. [PMID: 34893817 PMCID: PMC8825468 DOI: 10.1093/bioinformatics/btab835]
Abstract
MOTIVATION: Single-cell time-lapse microscopy is a ubiquitous tool for studying the dynamics of complex cellular processes. While imaging can be automated to generate very large volumes of data, processing the resulting movies to extract high-quality single-cell information remains a challenging task. The development of software tools that automatically identify and track cells is essential for realizing the full potential of time-lapse microscopy data. Convolutional neural networks (CNNs) are ideally suited for such applications but require large amounts of manually annotated data for training, a time-consuming and tedious process. RESULTS: We developed a new approach to CNN training for yeast cell segmentation based on synthetic data and present (i) a software tool for generating synthetic images that mimic brightfield images of budding yeast cells and (ii) a convolutional neural network (Mask R-CNN) for yeast segmentation trained on a fully synthetic dataset. The Mask R-CNN performed excellently at segmenting actual microscopy images of budding yeast cells, and a density-based spatial clustering algorithm (DBSCAN) was able to track the detected cells across the frames of microscopy movies. Our synthetic data creation tool completely bypasses the laborious generation of manually annotated training datasets and can easily be adjusted to produce images with many different features. Incorporating synthetic data creation into the development pipeline of CNN-based tools for budding yeast microscopy is a critical step toward more powerful, widely applicable, and user-friendly image processing tools for this microorganism. AVAILABILITY AND IMPLEMENTATION: The synthetic data generation code can be found at https://github.com/prhbrt/synthetic-yeast-cells. The Mask R-CNN, as well as the tuning and benchmarking scripts, can be found at https://github.com/ymzayek/yeastcells-detection-maskrcnn. We also provide Google Colab scripts that reproduce all the results of this work. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
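The tracking step described above, clustering detections across frames with DBSCAN, can be sketched with a minimal stdlib implementation. This is an illustration of the idea, not the authors' code; the detection tuple format, `eps`, and the time-scaling factor are assumptions:

```python
import math

def dbscan(points, eps=10.0, min_pts=2):
    """Minimal DBSCAN: returns one cluster label per point (-1 = noise)."""
    labels = [None] * len(points)

    def neighbors(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # provisional noise
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:     # noise reachable from a core point
                labels[j] = cluster  # becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = neighbors(j)
            if len(j_nbrs) >= min_pts:
                seeds.extend(j_nbrs)
    return labels

def track(detections, eps=10.0, time_scale=1.0):
    """Cluster (x, y, scaled-frame-index) centroids so that detections of
    the same cell in consecutive frames fall into one cluster = one track."""
    pts = [(x, y, t * time_scale) for (t, x, y) in detections]
    return dbscan(pts, eps=eps, min_pts=2)
```

For example, two detections a few pixels apart in consecutive frames receive the same track label, while a cell on the far side of the image starts its own track.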
Affiliation(s)
- Herbert T Kruitbosch: Center for Information Technology, University of Groningen, 9747 AJ Groningen, The Netherlands
- Yasmin Mzayek: Center for Information Technology, University of Groningen, 9747 AJ Groningen, The Netherlands
- Sara Omlor: Center for Information Technology, University of Groningen, 9747 AJ Groningen, The Netherlands
- Paolo Guerra: Groningen Biomolecular Sciences and Biotechnology Institute, University of Groningen, 9747 AG Groningen, The Netherlands
- Andreas Milias-Argeitis: Groningen Biomolecular Sciences and Biotechnology Institute, University of Groningen, 9747 AG Groningen, The Netherlands

26
Classification of Fruit Flies by Gender in Images Using Smartphones and the YOLOv4-Tiny Neural Network. Mathematics 2022. [DOI: 10.3390/math10030295]
Abstract
The fruit fly Drosophila melanogaster is a classic research object in genetics and systems biology. In the genetic analysis of flies, a routine task is to determine the offspring size and gender ratio in their populations. Currently, these estimates are made manually, a very time-consuming process. Counting and gender determination of flies can be automated using image analysis with deep learning neural networks on mobile devices. We propose an algorithm based on the YOLOv4-tiny network that identifies Drosophila flies and determines their gender, following a protocol of photographing the insects on a white sheet of paper with a cell phone camera. Three training strategies with different types of augmentation were used; the best performance (F1 = 0.838) was achieved using synthetic images with mosaic generation. Gender determination is less accurate for females than for males, and among the factors influencing recognition accuracy, the fly's position on the paper was the most important. Increased light intensity and higher-quality device cameras also improve recognition accuracy. We implemented our method in the FlyCounter Android app, which performs all image processing steps using the device's processors only; the YOLOv4-tiny algorithm takes less than 4 s to process one image.
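Once the detector has emitted per-fly class labels, the bookkeeping that turns them into the offspring counts and sex ratio a geneticist needs is simple. A sketch of that aggregation step (the label strings, confidence threshold, and function name are illustrative assumptions, not the FlyCounter API):

```python
from collections import Counter

def summarize_detections(labels, conf_scores, conf_threshold=0.5):
    """Aggregate per-image detections (class label + confidence score)
    into population counts and the male fraction."""
    kept = [lab for lab, c in zip(labels, conf_scores) if c >= conf_threshold]
    counts = Counter(kept)
    males, females = counts.get("male", 0), counts.get("female", 0)
    total = males + females
    fraction = males / total if total else float("nan")
    return {"male": males, "female": females,
            "total": total, "male_fraction": fraction}
```

Low-confidence detections are dropped before counting, so the reported ratio reflects only detections the network is reasonably sure about.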
27
Colliard-Granero A, Batool M, Jankovic J, Jitsev J, Eikerling MH, Malek K, Eslamibidgoli MJ. Deep learning for the automation of particle analysis in catalyst layers for polymer electrolyte fuel cells. Nanoscale 2021; 14:10-18. [PMID: 34846412 DOI: 10.1039/d1nr06435e]
Abstract
The rapidly growing use of imaging infrastructure in the energy materials domain is driving significant data accumulation in both amount and complexity. Routine image-processing techniques in materials research are often applied ad hoc, indiscriminately, and empirically, which obscures the crucial task of obtaining reliable quantification metrics; moreover, they are expensive, slow, and often involve several preprocessing steps. This paper presents a novel deep learning-based approach for the high-throughput analysis of particle size distributions from transmission electron microscopy (TEM) images of carbon-supported catalysts for polymer electrolyte fuel cells. A dataset of 40 high-resolution TEM images at different magnification levels, from 10 to 100 nm scales, was annotated manually and used to train a U-Net model, with the StarDist formulation for the loss function, on the nanoparticle segmentation task. StarDist reached a precision of 86%, recall of 85%, and an F1-score of 85% when trained on datasets as small as thirty images. The segmentation maps outperform models reported in the literature for a similar problem, and the particle size analyses agree well with manual particle size measurements, albeit at a significantly lower cost.
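Precision, recall, and F1 as quoted above come from matching predicted particles to ground-truth particles one-to-one. A minimal sketch of that evaluation, using greedy centroid matching as an illustrative stand-in for the actual mask-overlap criterion (the `max_dist` threshold is an assumption):

```python
import math

def match_particles(pred, truth, max_dist=5.0):
    """Greedy one-to-one matching of predicted and ground-truth particle
    centroids within max_dist; returns (tp, fp, fn) counts."""
    unmatched = set(range(len(truth)))
    tp = 0
    for p in pred:
        best, best_d = None, max_dist
        for j in unmatched:
            d = math.dist(p, truth[j])
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            unmatched.discard(best)
            tp += 1
    fp = len(pred) - tp          # predictions with no ground-truth partner
    fn = len(unmatched)          # ground-truth particles never matched
    return tp, fp, fn

def prf1(tp, fp, fn):
    """Precision, recall, and F1 from matched/unmatched counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

For instance, three predictions against two ground-truth centroids, with one spurious prediction, give precision 2/3, recall 1.0, and F1 0.8.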
Affiliation(s)
- André Colliard-Granero: Theory and Computation of Energy Materials (IEK-13), Institute of Energy and Climate Research, Forschungszentrum Jülich GmbH, 52425 Jülich, Germany; Department of Chemistry, University of Cologne, Greinstr. 4-6, 50939 Cologne, Germany
- Mariah Batool: Department of Materials Science and Engineering, University of Connecticut, 97 North Eagleville Road, Unit 3136, Storrs, CT 06269-3136, USA
- Jasna Jankovic: Department of Materials Science and Engineering, University of Connecticut, 97 North Eagleville Road, Unit 3136, Storrs, CT 06269-3136, USA
- Jenia Jitsev: Jülich Supercomputing Centre, Forschungszentrum Jülich, 52425 Jülich, Germany
- Michael H Eikerling: Theory and Computation of Energy Materials (IEK-13), Institute of Energy and Climate Research, Forschungszentrum Jülich GmbH, 52425 Jülich, Germany; Chair of Theory and Computation of Energy Materials, Faculty of Georesources and Materials Engineering, RWTH Aachen University, 52062 Aachen, Germany
- Kourosh Malek: Theory and Computation of Energy Materials (IEK-13), Institute of Energy and Climate Research, Forschungszentrum Jülich GmbH, 52425 Jülich, Germany; Centre for Advanced Simulation and Analytics (CASA), Simulation and Data Science Lab for Energy Materials (SDL-EM), Forschungszentrum Jülich GmbH, 52425 Jülich, Germany
- Mohammad J Eslamibidgoli: Theory and Computation of Energy Materials (IEK-13), Institute of Energy and Climate Research, Forschungszentrum Jülich GmbH, 52425 Jülich, Germany

28
Chomiak T, Rasiah NP, Molina LA, Hu B, Bains JS, Füzesi T. A versatile computational algorithm for time-series data analysis and machine-learning models. NPJ Parkinsons Dis 2021; 7:97. [PMID: 34753948 PMCID: PMC8578326 DOI: 10.1038/s41531-021-00240-4]
Abstract
Here we introduce Local Topological Recurrence Analysis (LoTRA), a simple computational approach for analyzing time-series data. Its versatility is elucidated using simulated data, Parkinsonian gait, and in vivo brain dynamics. We also show that this algorithm can be used to build a remarkably simple machine-learning model capable of outperforming deep-learning models in detecting Parkinson’s disease from a single digital handwriting test.
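The recurrence idea underlying such methods can be illustrated with a plain recurrence-plot computation: two time points are "recurrent" when their values are close. The following is a generic construction for context, not the LoTRA algorithm itself (its local-topological refinements are described in the paper):

```python
def recurrence_matrix(series, threshold):
    """Binary recurrence matrix: R[i][j] = 1 when the values at time
    points i and j differ by no more than `threshold`."""
    n = len(series)
    return [[1 if abs(series[i] - series[j]) <= threshold else 0
             for j in range(n)] for i in range(n)]

def recurrence_rate(series, threshold):
    """Fraction of recurrent point pairs, a basic recurrence statistic."""
    R = recurrence_matrix(series, threshold)
    n = len(series)
    return sum(sum(row) for row in R) / (n * n)
```

A perfectly periodic series such as `[0, 1, 0, 1]` has recurrence rate 0.5 at a tight threshold, whereas irregular (e.g., Parkinsonian gait) signals yield sparser, more fragmented recurrence structure.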
Affiliation(s)
- Taylor Chomiak: Division of Translational Neuroscience, Department of Clinical Neurosciences, Hotchkiss Brain Institute, Alberta Children's Hospital Research Institute, Cumming School of Medicine, University of Calgary, 3330 Hospital Drive, Calgary, AB, T2N 4N1, Canada; CSM Optogenetics Facility, University of Calgary, 3330 Hospital Drive, Calgary, AB, T2N 4N1, Canada
- Neilen P Rasiah: Department of Physiology & Pharmacology, Cumming School of Medicine, University of Calgary, 3330 Hospital Drive NW, Calgary, AB, T2N 4N1, Canada
- Leonardo A Molina: CSM Optogenetics Facility, University of Calgary, 3330 Hospital Drive, Calgary, AB, T2N 4N1, Canada
- Bin Hu: Division of Translational Neuroscience, Department of Clinical Neurosciences, Hotchkiss Brain Institute, Alberta Children's Hospital Research Institute, Cumming School of Medicine, University of Calgary, 3330 Hospital Drive, Calgary, AB, T2N 4N1, Canada
- Jaideep S Bains: Department of Physiology & Pharmacology, Cumming School of Medicine, University of Calgary, 3330 Hospital Drive NW, Calgary, AB, T2N 4N1, Canada
- Tamás Füzesi: CSM Optogenetics Facility, University of Calgary, 3330 Hospital Drive, Calgary, AB, T2N 4N1, Canada; Department of Physiology & Pharmacology, Cumming School of Medicine, University of Calgary, 3330 Hospital Drive NW, Calgary, AB, T2N 4N1, Canada

29
Boost Precision Agriculture with Unmanned Aerial Vehicle Remote Sensing and Edge Intelligence: A Survey. Remote Sensing 2021. [DOI: 10.3390/rs13214387]
Abstract
In recent years, unmanned aerial vehicles (UAVs) have emerged as a popular and cost-effective technology for capturing high spatial and temporal resolution remote sensing (RS) images for a wide range of precision agriculture applications, helping reduce costs and environmental impacts by providing detailed agricultural information to optimize field practices. Furthermore, deep learning (DL) has been successfully applied as an intelligent tool in agricultural applications such as weed detection and crop pest and disease detection. However, most DL-based methods place high computation, memory, and network demands on resources. Cloud computing can increase processing efficiency with high scalability and low cost, but it results in high latency and puts great pressure on network bandwidth. The emergence of edge intelligence, although still in its early stages, offers a promising solution for artificial intelligence (AI) applications on intelligent edge devices at the edge of the network, close to the data sources; such devices have built-in processors enabling onboard analytics or AI (e.g., UAVs and Internet of Things gateways). Therefore, this paper conducts, for the first time, a comprehensive survey of the latest developments in precision agriculture with UAV RS and edge intelligence.
The major insights are as follows: (a) in terms of UAV systems, small or light, fixed-wing or industrial rotor-wing UAVs are widely used in precision agriculture; (b) sensors on UAVs can provide multi-source datasets, but only a few public UAV datasets exist for intelligent precision agriculture, mainly from RGB sensors and a few from multispectral and hyperspectral sensors; (c) DL-based UAV RS methods can be categorized into classification, object detection, and segmentation tasks, with convolutional neural networks and recurrent neural networks the most commonly used architectures; (d) cloud computing is a common solution for UAV RS data processing, while edge computing brings the computation close to the data sources; and (e) edge intelligence is the convergence of artificial intelligence and edge computing, in which model compression, especially parameter pruning and quantization, is currently the most important and widely used technique, and typical edge resources include central processing units, graphics processing units, and field-programmable gate arrays.
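The two compression techniques named in insight (e) are easy to sketch on a flat weight list: magnitude pruning zeroes the smallest-magnitude fraction of weights, and uniform quantization snaps each weight to one of 2^bits evenly spaced levels. A minimal illustration, assuming pure-Python lists rather than any particular DL framework:

```python
def prune(weights, sparsity=0.5):
    """Magnitude pruning: zero out the `sparsity` fraction of weights
    with the smallest absolute values."""
    k = int(len(weights) * sparsity)                  # how many to zero
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    drop = set(order[:k])
    return [0.0 if i in drop else w for i, w in enumerate(weights)]

def quantize(weights, bits=8):
    """Uniform affine quantization to 2**bits levels over [min, max],
    then dequantization back to floats (simulated quantization)."""
    lo, hi = min(weights), max(weights)
    if hi == lo:
        return list(weights)
    scale = (hi - lo) / ((1 << bits) - 1)             # step between levels
    return [lo + round((w - lo) / scale) * scale for w in weights]
```

Pruning yields sparse tensors that skip multiply-accumulates; quantization shrinks storage and lets integer arithmetic replace floating point, both of which matter on CPU/GPU/FPGA edge hardware.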
30
Xie S, Hu C, Bagavathiannan M, Song D. Toward Robotic Weed Control: Detection of Nutsedge Weed in Bermudagrass Turf Using Inaccurate and Insufficient Training Data. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3098012]
31
Warman C, Fowler JE. Deep learning-based high-throughput phenotyping can drive future discoveries in plant reproductive biology. Plant Reproduction 2021; 34:81-89. [PMID: 33725183 PMCID: PMC8128740 DOI: 10.1007/s00497-021-00407-2]
Abstract
Advances in deep learning are providing a powerful set of image analysis tools that are readily accessible for high-throughput phenotyping applications in plant reproductive biology. High-throughput phenotyping systems are becoming critical for answering biological questions on a large scale. These systems have historically relied on traditional computer vision techniques. However, neural networks and specifically deep learning are rapidly becoming more powerful and easier to implement. Here, we examine how deep learning can drive phenotyping systems and be used to answer fundamental questions in reproductive biology. We describe previous applications of deep learning in the plant sciences, provide general recommendations for applying these methods to the study of plant reproduction, and present a case study in maize ear phenotyping. Finally, we highlight several examples where deep learning has enabled research that was previously out of reach and discuss the future outlook of these methods.
Affiliation(s)
- Cedar Warman: Department of Botany and Plant Pathology, Oregon State University, Corvallis, OR, USA; School of Plant Sciences, University of Arizona, Tucson, AZ, USA
- John E Fowler: Department of Botany and Plant Pathology, Oregon State University, Corvallis, OR, USA

32
Marsh JI, Hu H, Gill M, Batley J, Edwards D. Crop breeding for a changing climate: integrating phenomics and genomics with bioinformatics. Theor Appl Genet 2021; 134:1677-1690. [PMID: 33852055 DOI: 10.1007/s00122-021-03820-3]
Abstract
Safeguarding crop yields in a changing climate requires bioinformatics advances that harness vast phenomics and genomics datasets to translate research findings into climate-smart crops in the field. Climate change and an additional 3 billion mouths to feed by 2050 raise serious concerns over global food security, and crop breeding and land management strategies will need to evolve to maximize the utilization of finite resources in the coming years. High-throughput phenotyping and genomics technologies are providing researchers with the information required to guide and inform the breeding of climate-smart crops adapted to their environment. Bioinformatics has a fundamental role to play in integrating and exploiting this fast-accumulating wealth of data, through association studies that detect genomic targets underlying key adaptive climate-resilient traits. These data provide tools for breeders to tailor crops to their environment, and desirable variants can be introduced using advanced selection or genome editing methods. To effectively translate research into the field, genomic and phenomic information will need to be integrated into comprehensive clade-specific databases and platforms, alongside accessible tools that breeders can use to inform the selection of climate-adaptive traits. Here we discuss the role of bioinformatics in extracting, analysing, integrating and managing genomic and phenomic data to improve climate resilience in crops, including current, emerging and potential approaches, applications and bottlenecks in the research and breeding pipeline.
Affiliation(s)
- Jacob I Marsh: School of Biological Sciences and Institute of Agriculture, The University of Western Australia, Perth, 6009, Australia
- Haifei Hu: School of Biological Sciences and Institute of Agriculture, The University of Western Australia, Perth, 6009, Australia
- Mitchell Gill: School of Biological Sciences and Institute of Agriculture, The University of Western Australia, Perth, 6009, Australia
- Jacqueline Batley: School of Biological Sciences and Institute of Agriculture, The University of Western Australia, Perth, 6009, Australia
- David Edwards: School of Biological Sciences and Institute of Agriculture, The University of Western Australia, Perth, 6009, Australia

33
Yang S, Zheng L, He P, Wu T, Sun S, Wang M. High-throughput soybean seeds phenotyping with convolutional neural networks and transfer learning. Plant Methods 2021; 17:50. [PMID: 33952294 PMCID: PMC8097802 DOI: 10.1186/s13007-021-00749-y]
Abstract
BACKGROUND: Effective soybean seed phenotyping demands accurate, large-scale quantification of morphological parameters. Traditional manual acquisition of soybean seed morphological phenotype information is error-prone and time-consuming, and is not feasible for large-scale collection. Segmentation of individual soybean seeds is the prerequisite step for obtaining phenotypic traits such as seed length and seed width, yet traditional image-based methods for high-throughput soybean seed phenotyping are neither robust nor practical. Although deep learning-based algorithms can achieve accurate training and strong generalization, they require a large amount of ground-truth data, which is often the limiting step. RESULTS: We present a novel synthetic image generation and augmentation method based on domain randomization. Our method automatically synthesizes a large labeled image dataset for training an instance segmentation network for high-throughput soybean seed segmentation, markedly decreasing the cost of manual annotation and facilitating the preparation of training data; a convolutional neural network trained purely on this synthetic dataset achieves good performance. In training Mask R-CNN, we propose a transfer learning method that significantly reduces computing costs by fine-tuning pre-trained model weights. We demonstrate the robustness and generalization ability of our method on synthetic test datasets of different resolutions and on a real-world soybean seed test dataset. CONCLUSION: The experimental results show that the proposed method achieves effective segmentation of individual soybean seeds and efficient calculation of each seed's morphological parameters, and that the approach is practical for high-throughput instance segmentation and seed phenotyping.
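Domain randomization as described above samples scene parameters at random and keeps the sampled geometry as the ground-truth label, so no manual annotation is needed. A minimal sketch of the layout-sampling step, modeling seeds as non-overlapping circles (the image size, radius range, and rejection-sampling scheme are illustrative assumptions, not the authors' pipeline):

```python
import random

def random_scene(width=640, height=480, n_seeds=20, r_range=(8, 14),
                 max_tries=1000, rng=None):
    """Domain-randomized layout: place up to n_seeds non-overlapping
    circles ('seeds') with random radii fully inside the frame. The
    sampled (x, y, r) tuples double as the ground-truth annotations."""
    rng = rng or random.Random()
    seeds = []
    tries = 0
    while len(seeds) < n_seeds and tries < max_tries:
        tries += 1
        r = rng.uniform(*r_range)
        x = rng.uniform(r, width - r)       # keep the circle in-frame
        y = rng.uniform(r, height - r)
        # reject candidates that overlap an already-placed seed
        if all((x - sx) ** 2 + (y - sy) ** 2 > (r + sr) ** 2
               for sx, sy, sr in seeds):
            seeds.append((x, y, r))
    return seeds
```

In a full pipeline, each sampled layout would be rendered (with randomized textures, lighting, and background) into an image, while the layout itself becomes the instance mask labels for Mask R-CNN training.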
Affiliation(s)
- Si Yang: College of Information and Electrical Engineering, China Agricultural University, Beijing, 100083, China; Key Laboratory of Agricultural Informatization Standardization, Ministry of Agriculture and Rural Affairs, China Agricultural University, Beijing, 100083, China
- Lihua Zheng: College of Information and Electrical Engineering, China Agricultural University, Beijing, 100083, China; Key Laboratory of Modern Precision Agriculture System Integration Research, Ministry of Education, China Agricultural University, Beijing, 100083, China
- Peng He: College of Information Engineering, Northwest A&F University, Yangling, 712100, China
- Tingting Wu: Institute of Crop Sciences, Chinese Academy of Agricultural Sciences, Beijing, 100081, China
- Shi Sun: Institute of Crop Sciences, Chinese Academy of Agricultural Sciences, Beijing, 100081, China
- Minjuan Wang: College of Information and Electrical Engineering, China Agricultural University, Beijing, 100083, China; College of Information Science and Engineering, Shandong Agriculture and Engineering University, Jinan, 251100, China; Key Laboratory of Agricultural Informatization Standardization, Ministry of Agriculture and Rural Affairs, China Agricultural University, Beijing, 100083, China

34
Williamson HF, Brettschneider J, Caccamo M, Davey RP, Goble C, Kersey PJ, May S, Morris RJ, Ostler R, Pridmore T, Rawlings C, Studholme D, Tsaftaris SA, Leonelli S. Data management challenges for artificial intelligence in plant and agricultural research. F1000Res 2021; 10:324. [PMID: 36873457 PMCID: PMC9975417 DOI: 10.12688/f1000research.52204.1]
Abstract
Artificial Intelligence (AI) is increasingly used within plant science, yet it is far from being routinely and effectively implemented in this domain. Particularly relevant to the development of novel food and agricultural technologies is the development of validated, meaningful and usable ways to integrate, compare and visualise large, multi-dimensional datasets from different sources and scientific approaches. After a brief summary of the reasons for the interest in data science and AI within plant science, the paper identifies and discusses eight key challenges in data management that must be addressed to further unlock the potential of AI in crop and agronomic research, particularly the application of Machine Learning (ML), which holds much promise for this domain.
Affiliation(s)
- Hugh F. Williamson: Exeter Centre for the Study of the Life Sciences & Institute for Data Science and Artificial Intelligence, University of Exeter, Exeter, UK
- Mario Caccamo: NIAB (National Institute of Agricultural Botany), East Malling, UK
- Carole Goble: Department of Computer Science, University of Manchester, Manchester, UK
- Sean May: School of Biosciences, University of Nottingham, Loughborough, UK
- Richard Ostler: Department of Computational and Analytical Sciences, Rothamsted Research, Harpenden, UK
- Tony Pridmore: School of Computer Science, University of Nottingham, Nottingham, UK
- Chris Rawlings: Department of Computational and Analytical Sciences, Rothamsted Research, Harpenden, UK
- Sotirios A. Tsaftaris: Institute of Digital Communications, University of Edinburgh, Edinburgh, UK; Alan Turing Institute, London, UK
- Sabina Leonelli: Exeter Centre for the Study of the Life Sciences & Institute for Data Science and Artificial Intelligence, University of Exeter, Exeter, UK; Alan Turing Institute, London, UK

35
Williamson HF, Brettschneider J, Caccamo M, Davey RP, Goble C, Kersey PJ, May S, Morris RJ, Ostler R, Pridmore T, Rawlings C, Studholme D, Tsaftaris SA, Leonelli S. Data management challenges for artificial intelligence in plant and agricultural research. F1000Res 2021; 10:324 (version 2). [PMID: 36873457 PMCID: PMC9975417 DOI: 10.12688/f1000research.52204.2]
Abstract
Artificial Intelligence (AI) is increasingly used within plant science, yet it is far from being routinely and effectively implemented in this domain. Particularly relevant to the development of novel food and agricultural technologies is the development of validated, meaningful and usable ways to integrate, compare and visualise large, multi-dimensional datasets from different sources and scientific approaches. After a brief summary of the reasons for the interest in data science and AI within plant science, the paper identifies and discusses eight key challenges in data management that must be addressed to further unlock the potential of AI in crop and agronomic research, and particularly the application of Machine Learning (AI) which holds much promise for this domain.
Collapse
Affiliation(s)
- Hugh F. Williamson
- Exeter Centre for the Study of the Life Sciences & Institute for Data Science and Artificial Intelligence, University of Exeter, Exeter, UK
- Mario Caccamo
- NIAB (National Institute of Agricultural Botany), East Malling, UK
- Carole Goble
- Department of Computer Science, University of Manchester, Manchester, UK
- Sean May
- School of Biosciences, University of Nottingham, Loughborough, UK
- Richard Ostler
- Department of Computational and Analytical Sciences, Rothamsted Research, Harpenden, UK
- Tony Pridmore
- School of Computer Science, University of Nottingham, Nottingham, UK
- Chris Rawlings
- Department of Computational and Analytical Sciences, Rothamsted Research, Harpenden, UK
- Sotirios A. Tsaftaris
- Institute of Digital Communications, University of Edinburgh, Edinburgh, UK
- Alan Turing Institute, London, UK
- Sabina Leonelli
- Exeter Centre for the Study of the Life Sciences & Institute for Data Science and Artificial Intelligence, University of Exeter, Exeter, UK
- Alan Turing Institute, London, UK
36
Arent I, Schmidt FP, Botsch M, Dürr V. Marker-Less Motion Capture of Insect Locomotion With Deep Neural Networks Pre-trained on Synthetic Videos. Front Behav Neurosci 2021; 15:637806. [PMID: 33967713 PMCID: PMC8100444 DOI: 10.3389/fnbeh.2021.637806] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2020] [Accepted: 03/23/2021] [Indexed: 11/13/2022] Open
Abstract
Motion capture of unrestrained moving animals is a major analytic tool in neuroethology and behavioral physiology. At present, several motion capture methodologies have been developed, all of which have particular limitations regarding experimental application. Whereas marker-based motion capture systems are very robust and easily adjusted to suit different setups, tracked species, or body parts, they cannot be applied in experimental situations where markers obstruct the natural behavior (e.g., when tracking delicate, elastic, and/or sensitive body structures). On the other hand, marker-less motion capture systems typically require setup- and animal-specific adjustments, for example by means of tailored image processing, decision heuristics, and/or machine learning of specific sample data. Among the latter, deep-learning approaches have become very popular because of their applicability to virtually any sample of video data. Nevertheless, concise evaluation of their training requirements has rarely been done, particularly with regard to the transfer of trained networks from one application to another. To address this issue, the present study uses insect locomotion as a showcase example for systematic evaluation of variation and augmentation of the training data. For that, we use artificially generated video sequences with known combinations of observed, real animal postures and randomized body position, orientation, and size. Moreover, we evaluate the generalization ability of networks that have been pre-trained on synthetic videos to video recordings of real walking insects, and estimate the benefit in terms of reduced requirement for manual annotation. We show that tracking performance is only slightly affected by scaling factors ranging from 0.5 to 1.5. As expected from convolutional networks, translation of the animal has no effect. On the other hand, we show that sufficient variation of rotation in the training data is essential for performance, and we make concise suggestions about how much variation is required. Our results on transfer from synthetic to real videos show that pre-training reduces the amount of necessary manual annotation by about 50%.
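The pose augmentation the abstract describes — randomizing rotation, scale (0.5–1.5), and translation of each training frame — can be sketched roughly as below. This is a minimal illustration using SciPy's interpolation routines, not the authors' pipeline; the function name `augment_frame` and the translation range are hypothetical.

```python
import numpy as np
from scipy.ndimage import rotate, shift, zoom

rng = np.random.default_rng(0)

def augment_frame(frame, max_rotation=180.0, scale_range=(0.5, 1.5)):
    """Apply a random rotation, scale, and translation to one frame.

    Ranges mirror those evaluated in the study: scaling factors
    between 0.5 and 1.5 and arbitrary rotation; translation is
    included even though convolutional networks are largely
    invariant to it.
    """
    angle = rng.uniform(-max_rotation, max_rotation)
    scale = rng.uniform(*scale_range)
    dy, dx = rng.uniform(-5, 5, size=2)

    out = rotate(frame, angle, reshape=False, mode="nearest")
    out = shift(out, (dy, dx), mode="nearest")
    out = zoom(out, scale, mode="nearest")  # output size changes with scale
    return out, {"angle": angle, "scale": scale, "shift": (dy, dx)}

frame = rng.random((64, 64))
aug, params = augment_frame(frame)
```

In a real training pipeline the scaled frame would additionally be cropped or padded back to a fixed network input size, and the known transform parameters would be applied to the keypoint annotations as well.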
Affiliation(s)
- Ilja Arent
- Biological Cybernetics, Faculty of Biology, Bielefeld University, Bielefeld, Germany
- Florian P. Schmidt
- Biological Cybernetics, Faculty of Biology, Bielefeld University, Bielefeld, Germany
- Center for Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany
- Mario Botsch
- Center for Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany
- Computer Graphics, TU Dortmund University, Dortmund, Germany
- Volker Dürr
- Biological Cybernetics, Faculty of Biology, Bielefeld University, Bielefeld, Germany
- Center for Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany
37
High-throughput image segmentation and machine learning approaches in the plant sciences across multiple scales. Emerg Top Life Sci 2021; 5:239-248. [DOI: 10.1042/etls20200273] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2020] [Revised: 02/09/2021] [Accepted: 02/11/2021] [Indexed: 01/12/2023]
Abstract
Agriculture has benefited greatly from the rise of big data and high-performance computing. The acquisition and analysis of data across biological scales have resulted in strategies modeling interactions between plant genotype and environment, models of root architecture that provide insight into resource utilization, and the elucidation of cell-to-cell communication mechanisms that are instrumental in plant development. Image segmentation and machine learning approaches for interpreting plant image data are among the many computational methodologies that have evolved to address challenging agricultural and biological problems. These approaches have led to contributions such as the accelerated identification of genes that modulate stress responses in plants and automated high-throughput phenotyping for early detection of plant diseases. The continued acquisition of high-throughput imaging across multiple biological scales provides opportunities to push the boundaries of our understanding faster than ever before. In this review, we explore current state-of-the-art methodologies in plant image segmentation and machine learning at the agricultural, organ, and cellular scales in plants. We show how the methodologies for segmentation and classification differ due to the diversity of physical characteristics found at these different scales. We also discuss the hardware technologies most commonly used at these different scales, the types of quantitative metrics that can be extracted from these images, and how the biological mechanisms by which plants respond to abiotic/biotic stresses or genotypic modifications can be extracted from these approaches.
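The simplest form of the image segmentation the review surveys — separating plant structures from background and counting them — can be sketched as a classical threshold-plus-labeling baseline. This is an illustrative toy, not a method from the review; the function name and threshold are hypothetical.

```python
import numpy as np
from scipy.ndimage import label

def segment_bright_regions(image, threshold=0.5):
    """Classical segmentation baseline: threshold the image, then
    label 4-connected components and measure their pixel areas."""
    mask = image > threshold
    labeled, n_regions = label(mask)
    sizes = np.bincount(labeled.ravel())[1:]  # drop background count
    return labeled, n_regions, sizes

# Tiny synthetic image with two bright "organs"
img = np.zeros((8, 8))
img[1:3, 1:3] = 1.0   # 4-pixel region
img[5:7, 5:8] = 1.0   # 6-pixel region
labeled, n, sizes = segment_bright_regions(img)
print(n, sorted(sizes.tolist()))  # → 2 [4, 6]
```

The learning-based methods the review covers replace the fixed threshold with a trained per-pixel or per-instance classifier, but the downstream quantification (region counts, areas, shapes) follows the same pattern.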
38
Loh DR, Yong WX, Yapeter J, Subburaj K, Chandramohanadas R. A deep learning approach to the screening of malaria infection: Automated and rapid cell counting, object detection and instance segmentation using Mask R-CNN. Comput Med Imaging Graph 2021; 88:101845. [PMID: 33582593 DOI: 10.1016/j.compmedimag.2020.101845] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2020] [Revised: 12/01/2020] [Accepted: 12/11/2020] [Indexed: 10/22/2022]
Abstract
Accurate and early diagnosis is critical to proper malaria treatment and hence death prevention. Several computer vision technologies have emerged in recent years as alternatives to traditional microscopy and rapid diagnostic tests. In this work, we used a deep learning model called Mask R-CNN that is trained on uninfected and Plasmodium falciparum-infected red blood cells. Our predictive model produced reports at a rate 15 times faster than manual counting without compromising on accuracy. Another unique feature of our model is its ability to generate segmentation masks on top of bounding box classifications for immediate visualization, making it superior to existing models. Furthermore, with greater standardization, it holds much potential to reduce errors arising from manual counting and save a significant amount of human resources, time, and cost.
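The automated counting step described above — turning per-cell instance detections into an infection report — can be sketched as follows. This is a hedged illustration of the post-processing only, not the authors' code: the function `count_report`, the label names, and the confidence threshold are all hypothetical, standing in for the filtered output of a Mask R-CNN detection head.

```python
def count_report(predictions, score_threshold=0.5):
    """Summarise per-cell instance predictions into a count report.

    `predictions` is a list of (label, confidence) pairs, one per
    detected cell, as an instance-detection head would emit after
    non-maximum suppression; labels are 'infected' or 'uninfected'.
    """
    kept = [label for label, score in predictions if score >= score_threshold]
    infected = kept.count("infected")
    total = len(kept)
    parasitemia = infected / total if total else 0.0
    return {"total_cells": total, "infected": infected,
            "parasitemia_pct": round(100 * parasitemia, 2)}

detections = [("uninfected", 0.98), ("infected", 0.91),
              ("uninfected", 0.87), ("infected", 0.42)]  # last box is dropped
print(count_report(detections))
# → {'total_cells': 3, 'infected': 1, 'parasitemia_pct': 33.33}
```

Because each detection also carries a segmentation mask, the same loop can overlay masks on the source image for the immediate visualization the abstract highlights.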
Affiliation(s)
- De Rong Loh
- Pillar of Information Systems Technology and Design, Singapore University of Technology and Design, Singapore, Singapore.
- Wen Xin Yong
- Pillar of Engineering Product Development, Singapore University of Technology and Design, Singapore, Singapore.
- Jullian Yapeter
- Faculty of Engineering, University of Waterloo, Ontario, Canada.
- Karupppasamy Subburaj
- Pillar of Engineering Product Development, Singapore University of Technology and Design, Singapore, Singapore.
- Rajesh Chandramohanadas
- Pillar of Engineering Product Development, Singapore University of Technology and Design, Singapore, Singapore; Department of Microbiology and Immunology, National University of Singapore, Singapore.