1. Ong S, Høye TT. Trap colour strongly affects the ability of deep learning models to recognize insect species in images of sticky traps. Pest Management Science 2025;81:654-666. [PMID: 39377441; PMCID: PMC11716339; DOI: 10.1002/ps.8464]
Abstract
BACKGROUND: The use of computer vision and deep learning models to automatically classify insect species on sticky traps has proven to be a cost- and time-efficient approach to pest monitoring. Because different species are attracted to different colours, the variety of sticky trap colours poses a challenge to model performance, yet how well deep learning classifies pests across differently coloured sticky traps has not been sufficiently explored. In this study, we investigate the influence of sticky trap colour and imaging device on the performance of deep learning models in classifying pests on sticky traps. RESULTS: Using the MobileNetV2 architecture trained on images of transparent sticky traps, the model predicted pest species on transparent sticky traps with an accuracy of at least 0.95 and on other sticky trap colours with an F1 score of at least 0.85. Using a generalised linear model (GLM) and the Boruta feature selection algorithm, we also showed that sticky trap colour and model architecture significantly influenced performance. CONCLUSION: Our results support the development of automatic pest classification on sticky traps, which should take both trap colour and deep learning architecture into account to achieve good results. Future studies could incorporate the trap system into pest monitoring, providing more accurate and cost-effective results in pest management programmes.
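As a concrete illustration of the transfer-learning setup described above, here is a minimal Keras sketch that fine-tunes a frozen MobileNetV2 backbone on sticky-trap crops; the directory layout, image size, and ten-class head are assumptions for illustration, not details from the paper.

```python
import tensorflow as tf

# Hypothetical dataset layout: train_dir/<species>/*.jpg (transparent-trap images only).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "train_dir", image_size=(224, 224), batch_size=32)

# MobileNetV2 backbone pretrained on ImageNet, frozen for feature extraction.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 pest classes (assumed)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```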
Affiliation(s)
- Song-Quan Ong: Department of Ecoscience, Aarhus University, Aarhus, Denmark; Institute for Tropical Biology and Conservation, Universiti Malaysia Sabah, Kota Kinabalu, Malaysia
- Toke Thomas Høye: Department of Ecoscience, Aarhus University, Aarhus, Denmark; Arctic Research Centre, Aarhus University, Aarhus, Denmark
2. Kim JI, Baek JW, Kim CB. Hierarchical image classification using transfer learning to improve deep learning model performance for Amazon parrots. Sci Rep 2025;15:3790. [PMID: 39885290; PMCID: PMC11782500; DOI: 10.1038/s41598-025-88103-3]
Abstract
Numerous studies have demonstrated the potential of deep learning models for classifying wildlife. Such models can reduce the workload of experts by automating species classification for monitoring wild populations and global trade. Although deep learning models typically perform better with more input data, available wildlife data are often limited, particularly for rare or endangered species. Citizen science programs have recently helped accumulate valuable wildlife data, but these data are still insufficient for deep learning models to reach the performance achieved on benchmark datasets. Recent studies have applied hierarchical classification to wildlife datasets to improve model performance and classification accuracy. This study applied hierarchical classification by transfer learning to classify Amazon parrot species, with a hierarchy built on diagnostic morphological features. Upon evaluation, the hierarchical model outperformed the non-hierarchical model in detecting and classifying Amazon parrots, achieving a mean Average Precision (mAP) of 0.944 versus 0.908 for the non-hierarchical model. The hierarchical model also improved classification accuracy between morphologically similar species. These outcomes may facilitate the monitoring of wild populations and the global trade of Amazon parrots for conservation purposes.
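The paper's hierarchy is built from diagnostic morphology; below is a minimal PyTorch sketch of one way to wire a two-level (group, then species) head over shared backbone features. The feature dimension, group count, and equal loss weighting are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class HierarchicalHead(nn.Module):
    """Two-level classifier: coarse morphological group, then species."""
    def __init__(self, feat_dim=512, n_groups=5, n_species=26):
        super().__init__()
        self.group_head = nn.Linear(feat_dim, n_groups)
        # Species head also sees the group logits, so group evidence guides species.
        self.species_head = nn.Linear(feat_dim + n_groups, n_species)

    def forward(self, feats):
        g = self.group_head(feats)
        s = self.species_head(torch.cat([feats, g], dim=1))
        return g, s

# Joint loss: supervise both levels; equal weighting is an assumption.
head = HierarchicalHead()
feats = torch.randn(8, 512)            # backbone features (stand-in)
g_true = torch.randint(0, 5, (8,))
s_true = torch.randint(0, 26, (8,))
g_logits, s_logits = head(feats)
loss = nn.functional.cross_entropy(g_logits, g_true) \
     + nn.functional.cross_entropy(s_logits, s_true)
loss.backward()
```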
Affiliation(s)
- Jung-Il Kim: Biotechnology Major, Sangmyung University, Seoul 03016, South Korea
- Jong-Won Baek: Biotechnology Major, Sangmyung University, Seoul 03016, South Korea
- Chang-Bae Kim: Biotechnology Major, Sangmyung University, Seoul 03016, South Korea
3. Chiranjeevi S, Saadati M, Deng ZK, Koushik J, Jubery TZ, Mueller DS, O'Neal M, Merchant N, Singh A, Singh AK, Sarkar S, Singh A, Ganapathysubramanian B. InsectNet: Real-time identification of insects using an end-to-end machine learning pipeline. PNAS Nexus 2025;4:pgae575. [PMID: 39895677; PMCID: PMC11783291; DOI: 10.1093/pnasnexus/pgae575]
Abstract
Insect pests significantly impact global agricultural productivity and crop quality. Effective integrated pest management strategies require identifying insects, both beneficial and harmful. Automated identification under real-world conditions presents several challenges, including intraspecies dissimilarity and interspecies similarity, life-cycle stages, camouflage, diverse imaging conditions, and variability in insect orientation. An end-to-end approach for training deep-learning models, InsectNet, is proposed to address these challenges. Our approach has the following key features: (i) it uses a large dataset of insect images collected through citizen science, along with label-free self-supervised learning, to train a global model; (ii) it fine-tunes this global model on smaller, expert-verified regional datasets to create a local insect identification model that (iii) provides high prediction accuracy even for species with small sample sizes, (iv) is designed to enhance model trustworthiness, and (v) democratizes access through streamlined machine learning operations. This global-to-local model strategy offers a more scalable and economically viable solution for implementing advanced insect identification systems across diverse agricultural ecosystems. We report accurate identification (>96% accuracy) of numerous agriculturally and ecologically relevant insect species, including pollinators, parasitoids, predators, and harmful insects. InsectNet provides fine-grained species identification, works effectively against challenging backgrounds, and avoids making predictions when uncertain, increasing its utility and trustworthiness. The model and associated workflows are available through a web-based portal accessible from a computer or mobile device. We envision InsectNet complementing existing approaches as part of a growing suite of AI technologies for addressing agricultural challenges.
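The abstain-when-uncertain behaviour can be implemented as a simple confidence threshold over the classifier's softmax output; the sketch below is a generic illustration of that idea, not InsectNet's actual decision rule, and the threshold value is an assumption.

```python
import numpy as np

def predict_with_abstention(probs, threshold=0.9):
    """Return the predicted class index, or None when the model is not
    confident enough -- the 'avoid predicting when uncertain' behaviour
    described for InsectNet (threshold value is an assumption)."""
    top = int(np.argmax(probs))
    return top if probs[top] >= threshold else None

softmax_scores = np.array([0.45, 0.40, 0.15])   # ambiguous example
print(predict_with_abstention(softmax_scores))  # -> None (abstain)
```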
Affiliation(s)
- Shivani Chiranjeevi: Department of Mechanical Engineering, Iowa State University, Ames, IA 50011, USA
- Mojdeh Saadati: Department of Computer Science, Iowa State University, Ames, IA 50011, USA
- Zi K Deng: Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ 85721, USA
- Jayanth Koushik: Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Talukder Z Jubery: Department of Mechanical Engineering, Iowa State University, Ames, IA 50011, USA
- Daren S Mueller: Department of Plant Pathology, Entomology and Microbiology, Iowa State University, Ames, IA 50011, USA
- Matthew O'Neal: Department of Plant Pathology, Entomology and Microbiology, Iowa State University, Ames, IA 50011, USA
- Nirav Merchant: Data Science Institute, University of Arizona, Tucson, AZ 85721, USA
- Aarti Singh: Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Asheesh K Singh: Department of Agronomy, Iowa State University, Ames, IA 50011, USA
- Soumik Sarkar: Department of Mechanical Engineering, Iowa State University, Ames, IA 50011, USA
- Arti Singh: Department of Agronomy, Iowa State University, Ames, IA 50011, USA
4. Marulanda Lopez JF, de Brito Neto WB, dos Santos Ferreira R. Machine Learning Approach to Support Taxonomic Discrimination of Mayflies Species Based on Morphologic Data. Neotropical Entomology 2024;53:1196-1203. [PMID: 39320425; DOI: 10.1007/s13744-024-01200-2]
Abstract
Artificial intelligence (AI) and machine learning (ML) offer objective support for elaborating taxonomic keys: they can process large numbers of samples, aid species identification, and reduce the time this process requires. We used ML to study morphological data for eight species of Americabaetis Kluge 1992, a diverse genus in South American freshwater environments. Decision trees were employed, examining specimens from the Museu de Entomologia da Universidade Federal de Viçosa (UFVB/Brazil) and literature data. Eleven morphological traits of taxonomic importance from the literature, including the frontal keel, mouthpart shape, and abdominal colour pattern, were analyzed. The decision tree obtained with the Gini criterion effectively differentiates eight species (40% of the known species) using only eight morphological characters. Our analysis revealed distinct groups within Americabaetis alphus Lugo-Ortiz and McCafferty 1996a, based on variations in abdominal tracheae pigmentation. This study introduces a novel approach integrating AI techniques, biological collections, and literature data to aid Americabaetis species identification, providing a valuable tool for taxonomic research on contemporary and extinct mayflies.
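A decision tree of this kind is a few lines in scikit-learn; the sketch below trains on random stand-in data (real inputs would be the eleven coded traits per specimen) and prints the fitted tree as a textual key. All values are hypothetical.

```python
from sklearn.tree import DecisionTreeClassifier, export_text
import numpy as np

# Toy stand-in for the coded morphological traits (frontal keel, mouthpart
# shape, abdominal colour pattern, ...); values and labels are hypothetical.
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(40, 11))      # 40 specimens x 11 coded traits
y = rng.integers(0, 8, size=40)            # 8 Americabaetis species

tree = DecisionTreeClassifier(criterion="gini", max_depth=8).fit(X, y)
# export_text renders the fitted tree as a readable key, one split per line.
print(export_text(tree, feature_names=[f"trait_{i}" for i in range(11)]))
```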
5. Hofmann M, Kiel S, Kösters LM, Wäldchen J, Mäder P. Inferring Taxonomic Affinities and Genetic Distances Using Morphological Features Extracted from Specimen Images: A Case Study with a Bivalve Data Set. Syst Biol 2024;73:920-940. [PMID: 39046773; DOI: 10.1093/sysbio/syae042]
Abstract
Reconstructing the tree of life and understanding the relationships of taxa are core questions in evolutionary and systematic biology. The main advances in this field in recent decades have come from molecular phylogenetics; however, for most species, molecular data are not available. Here, we explore the applicability of two deep learning methods, supervised classification and unsupervised similarity learning, to infer organism relationships from specimen images. As a basis, we assembled an image data set covering 4144 bivalve species from 74 families across all orders and subclasses of the extant Bivalvia, with molecular phylogenetic data available for all families and a complete taxonomic hierarchy for all species. The suitability of this data set for deep learning experiments was evidenced by an ablation study achieving almost 80% accuracy for species-level identification. Three sets of experiments were performed. First, we included the taxonomic hierarchy and genetic distances in a supervised learning approach to obtain predictions on several taxonomic levels simultaneously. Here, we encouraged the model to treat features shared between closely related taxa as more critical for their classification than features shared with distantly related taxa, imprinting phylogenetic and taxonomic affinities into the architecture and training procedure. Second, we used transfer learning and similarity learning in zero-shot experiments to identify the higher-level taxonomic affinities of test species the models had not been trained on; the models assigned unknown species to their respective genera with approximately 48% and 67% accuracy. Lastly, we used unsupervised similarity learning to infer the relatedness of the images without prior knowledge of their taxonomic or phylogenetic affinities. The results clearly showed similarities between visual appearance and genetic relationships at higher taxonomic levels: the correlation was 0.6 for the most species-rich subclass (Imparidentia), ranged from 0.5 to 0.7 for the orders with the most images, and reached 0.78 between visual similarity and genetic distances at the family level. However, fine-grained reconstructions based on these observed correlations, such as sister-taxa relationships, require further work. Overall, our results broaden the applicability of automated taxon identification systems and provide a new avenue for estimating phylogenetic relationships from specimen images.
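The final correlation analysis boils down to comparing pairwise distances in the image-embedding space against genetic distances; a sketch with random stand-in data follows (all shapes, metrics, and values are assumptions, not the paper's pipeline).

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical inputs: one embedding per taxon from the similarity-learning
# model, and a matching condensed matrix of pairwise genetic distances.
embeddings = np.random.rand(30, 128)          # 30 taxa x 128-d visual features
genetic = np.random.rand(30 * 29 // 2)        # condensed pairwise distances

visual = pdist(embeddings, metric="cosine")   # pairwise visual dissimilarity
rho, p = spearmanr(visual, genetic)
print(f"visual-genetic correlation: rho={rho:.2f} (p={p:.3g})")
```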
Affiliation(s)
- Martin Hofmann: Data-intensive Systems and Visualization Group (dAI.SY), Technical University Ilmenau, Ilmenau 98693, Germany
- Steffen Kiel: Department of Palaeobiology, Swedish Museum of Natural History, Stockholm 104 05, Sweden
- Lara M Kösters: Department of Biogeochemical Integration, Max Planck Institute for Biogeochemistry, Jena 07745, Germany
- Jana Wäldchen: Department of Biogeochemical Integration, Max Planck Institute for Biogeochemistry, Jena 07745, Germany; German Centre for Integrative Biodiversity Research (iDiv), Halle-Jena-Leipzig, Germany
- Patrick Mäder: Data-intensive Systems and Visualization Group (dAI.SY), Technical University Ilmenau, Ilmenau 98693, Germany; German Centre for Integrative Biodiversity Research (iDiv), Halle-Jena-Leipzig, Germany; Faculty of Biological Sciences, Friedrich Schiller University, Jena 07745, Germany
6. Baek JW, Kim JI, Kim CB. Deep learning-based image classification of sea turtles using object detection and instance segmentation models. PLoS One 2024;19:e0313323. [PMID: 39585892; PMCID: PMC11588218; DOI: 10.1371/journal.pone.0313323]
Abstract
Sea turtles exhibit high migratory rates and occupy a broad range of habitats, which makes monitoring these taxa challenging. Applying deep learning (DL) models to the vast image datasets collected by citizen science programs offers a promising way to monitor wide-ranging wildlife, particularly sea turtles. Among DL models, object detection models such as the You Only Look Once (YOLO) series have been widely employed for wildlife classification. Despite their success in this domain, detecting objects in images with complex backgrounds, including underwater environments, remains a significant challenge. Instance segmentation models have recently been developed to address this issue, classifying complex images more accurately than traditional object detection models. This study compared two state-of-the-art DL methods, an object detection model (YOLOv5) and an instance segmentation model (YOLOv5-seg), for detecting and classifying sea turtles. Images were collected from iNaturalist and Google and divided into training (64%), validation (16%), and test (20%) sets. Model performance during and after training was evaluated with loss functions and various indices, respectively. Based on the loss functions, YOLOv5-seg showed a lower error rate than YOLOv5, more so for detecting than for classifying sea turtles. According to mean Average Precision (mAP) values, which reflect precision and recall, YOLOv5-seg outperformed YOLOv5: mAP0.5 and mAP0.5:0.95 were 0.885 and 0.795 for YOLOv5, versus 0.918 and 0.831 for YOLOv5-seg. The results of this study may help improve sea turtle monitoring in the future.
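For orientation, loading a YOLOv5 model through the public torch.hub entry point and reading back its detections looks roughly like this; the image path is a placeholder, and a real application would load the turtle-trained checkpoint rather than the generic yolov5s weights.

```python
import torch

# Minimal inference sketch with the ultralytics/yolov5 hub entry point.
model = torch.hub.load("ultralytics/yolov5", "yolov5s")
results = model("turtle_photo.jpg")      # runs detection on one image (placeholder path)
results.print()                          # per-class counts and timing summary
boxes = results.xyxy[0]                  # tensor rows: [x1, y1, x2, y2, conf, cls]
```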
Affiliation(s)
- Jong-Won Baek: Department of Biotechnology, Sangmyung University, Seoul, Korea
- Jung-Il Kim: Department of Biotechnology, Sangmyung University, Seoul, Korea
- Chang-Bae Kim: Department of Biotechnology, Sangmyung University, Seoul, Korea
7. Varga-Szilay Z, Szövényi G, Pozsgai G. Flower Visitation through the Lens: Exploring the Foraging Behaviour of Bombus terrestris with a Computer Vision-Based Application. Insects 2024;15:729. [PMID: 39336697; PMCID: PMC11432343; DOI: 10.3390/insects15090729]
Abstract
To understand the processes behind pollinator declines, and to conserve pollination services, we need to understand the fundamental drivers of pollinator behaviour. Here, we aimed to elucidate how wild bumblebees interact with three plant species and investigated their foraging behaviour under varying flower densities. We video-recorded Bombus terrestris in 60 × 60 cm quadrats of Lotus creticus, Persicaria capitata, and Trifolium pratense in urban areas of Terceira (Azores, Portugal). For automated bumblebee detection and counting, we created deep learning-based computer vision models with custom datasets, achieving accuracies of 0.88 for Lotus and Persicaria and 0.95 for Trifolium. Flower cover was the only factor that influenced the attractiveness of flower patches, with a significant positive effect; plant species had no effect. The time spent per unit of inflorescence surface area was longer on Trifolium than on Lotus and Persicaria, although the differences among the three plant species were not significant. We also validate computer vision-based analysis as a reliable tool for studying pollinator behavioural ecology.
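Turning detections into foraging metrics is mostly a frame-sampling loop; the sketch below assumes a hypothetical detect_bees() wrapper around the trained detector and counts bees every Nth frame of a quadrat video.

```python
import cv2

def count_visits(video_path, detect_bees, every_n=15):
    """Count detected bumblebees in sampled frames of a quadrat video.
    detect_bees() is a hypothetical wrapper around the trained detector
    that returns one bounding box per bee found in a frame."""
    cap = cv2.VideoCapture(video_path)
    counts = []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:                # subsample frames to save compute
            counts.append(len(detect_bees(frame)))
        idx += 1
    cap.release()
    return counts  # per-sampled-frame bee counts for the flower patch
```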
Affiliation(s)
- Zsófia Varga-Szilay: Doctoral School of Biology, Institute of Biology, ELTE Eötvös Loránd University, 1117 Budapest, Hungary
- Gergely Szövényi: Department of Systematic Zoology and Ecology, ELTE Eötvös Loránd University, 1117 Budapest, Hungary
- Gábor Pozsgai: Ce3C-Centre for Ecology, Evolution and Environmental Changes, Azorean Biodiversity Group, CHANGE-Global Change and Sustainability Institute, University of the Azores, 9700-042 Angra do Heroísmo, Portugal
8. Truong MXA, Van der Wal R. Exploring the landscape of automated species identification apps: Development, promise, and user appraisal. BioScience 2024;74:601-613. [PMID: 39421010; PMCID: PMC11480699; DOI: 10.1093/biosci/biae077]
Abstract
Two decades ago, Gaston and O'Neill (2004) deliberated on why automated species identification had not become widely employed. We no longer have to wonder: this AI-based technology is here, embedded in numerous web and mobile apps used by large audiences interested in nature. Now that automated species identification tools are available, popular, and efficient, it is time to look at how the apps are developed, what they promise, and how users appraise them. Delving into the automated species identification app landscape, we found that free and paid apps differ fundamentally in presentation, experience, and the use of biodiversity and personal data, yet the two business models are deeply intertwined. Going forward, although big tech companies will eventually take over the landscape, citizen science programs will likely continue to maintain their own identification tools because of their specific purpose and their ability to create a strong sense of belonging among naturalist communities.
9. Spiesman BJ, Gratton C, Gratton E, Hines H. Deep learning for identifying bee species from images of wings and pinned specimens. PLoS One 2024;19:e0303383. [PMID: 38805521; PMCID: PMC11132477; DOI: 10.1371/journal.pone.0303383]
Abstract
One of the most challenging aspects of bee ecology and conservation is species-level identification, which is costly, time-consuming, and requires taxonomic expertise. Recent advances in deep learning and computer vision have shown promise for identifying large bumble bee (Bombus) species. However, most bees, such as sweat bees in the genus Lasioglossum, are much smaller and can be difficult even for trained taxonomists to identify, so the great majority of bees are poorly represented in the crowdsourced image datasets often used to train computer vision models. Even larger bees, such as bumble bees from the B. vagans complex, can be difficult to separate morphologically. Using images of specimens from our research collections, we assessed how deep learning classification models perform on these more challenging taxa, qualitatively comparing models trained on images of whole pinned specimens or on images of bee forewings. The pinned specimen and wing image datasets represent 20 and 18 species from 6 and 4 genera, respectively, and were used to train the EfficientNetV2L convolutional neural network. Mean test precision was 94.9% for pinned images and 98.1% for wing images. The results show that computer vision holds great promise for classifying smaller, harder-to-identify bees that are poorly represented in crowdsourced datasets. Images from research and museum collections will be valuable for expanding classification models to additional species, which will be essential for large-scale conservation monitoring efforts.
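Reporting mean test precision across classes is a one-liner with scikit-learn; the species labels below are fabricated stand-ins just to show the computation, not results from the paper.

```python
from sklearn.metrics import precision_score, classification_report

# Hypothetical test-set labels and predictions from a fine-tuned classifier.
y_true = ["B_vagans", "B_sandersoni", "B_vagans", "L_zephyrus", "L_zephyrus"]
y_pred = ["B_vagans", "B_sandersoni", "B_vagans", "L_zephyrus", "B_vagans"]

print(classification_report(y_true, y_pred, zero_division=0))
# Macro average = unweighted mean of per-class precision.
mean_precision = precision_score(y_true, y_pred, average="macro", zero_division=0)
print(f"mean precision: {mean_precision:.3f}")
```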
Affiliation(s)
- Brian J. Spiesman: Department of Entomology, Kansas State University, Manhattan, Kansas, USA
- Claudio Gratton: Department of Entomology, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Elena Gratton: Department of Entomology, University of Illinois Urbana-Champaign, Champaign, Illinois, USA
- Heather Hines: Department of Entomology, Penn State University, State College, Pennsylvania, USA
10. Smith CD, Cornman RS, Fike JA, Kraus JM, Oyler-McCance SJ, Givens CE, Hladik ML, Vandever MW, Kolpin DW, Smalling KL. Comparing modern identification methods for wild bees: Metabarcoding and image-based morphological taxonomic assignment. PLoS One 2024;19:e0301474. [PMID: 38564614; PMCID: PMC10986983; DOI: 10.1371/journal.pone.0301474]
Abstract
With the decline of bee populations worldwide, studies determining current wild bee distributions and diversity are increasingly important. Wild bee identification is often completed by experienced taxonomists or by genetic analysis. The current study compared two identification methods: (1) morphological identification by experienced taxonomists using images of field-collected wild bees and (2) genetic analysis of composite bee legs (multiple taxa) using metabarcoding. Bees were collected from conservation grasslands in eastern Iowa in summer 2019 and identified to the lowest taxonomic unit by both methods. Sanger sequencing of individual wild bee legs served as a positive control for metabarcoding. Morphological identification from images resulted in 36 unique taxa among 22 genera, and >80% of Bombus specimens were identified to species. Metabarcoding was limited to genus-level assignments among 18 genera but resolved some morphologically similar genera. Metabarcoding did not consistently detect all genera in the composite samples, including kleptoparasitic bees. Sanger sequencing showed similar presence/absence results to metabarcoding but provided species-level identifications for cryptic species (i.e., Lasioglossum). Genus-specific detections were more frequent with morphological identification than with metabarcoding, but certain genera, such as Ceratina and Halictus, were identified equally well by both. Genera with proportionately less tissue in a composite sample were less likely to be detected by metabarcoding. Image-based methods were limited by image quality and visible morphological features, while genetic methods were limited by databases, primers, and amplification at target loci. This study shows how an image-based identification method compares with genetic techniques and how, in combination, the methods provide valuable genus- and species-level information for wild bees while preserving tissue for other analyses. These methods could be improved and transferred to a field setting to advance our understanding of wild bee distributions and to expedite conservation research.
Affiliation(s)
- Cassandra D. Smith: Oregon Water Science Center, U.S. Geological Survey, Bend, Oregon, USA
- Robert S. Cornman: Fort Collins Science Center, U.S. Geological Survey, Fort Collins, Colorado, USA
- Jennifer A. Fike: Fort Collins Science Center, U.S. Geological Survey, Fort Collins, Colorado, USA
- Johanna M. Kraus: Columbia Environmental Research Center, U.S. Geological Survey, Columbia, Missouri, USA
- Sara J. Oyler-McCance: Fort Collins Science Center, U.S. Geological Survey, Fort Collins, Colorado, USA
- Carrie E. Givens: Upper Midwest Water Science Center, U.S. Geological Survey, Lansing, Michigan, USA
- Michelle L. Hladik: California Water Science Center, U.S. Geological Survey, Sacramento, California, USA
- Mark W. Vandever: Fort Collins Science Center, U.S. Geological Survey, Fort Collins, Colorado, USA
- Dana W. Kolpin: Central Midwest Water Science Center, U.S. Geological Survey, Iowa City, Iowa, USA
- Kelly L. Smalling: New Jersey Water Science Center, U.S. Geological Survey, Lawrenceville, New Jersey, USA
11. Sauer FG, Werny M, Nolte K, Villacañas de Castro C, Becker N, Kiel E, Lühken R. A convolutional neural network to identify mosquito species (Diptera: Culicidae) of the genus Aedes by wing images. Sci Rep 2024;14:3094. [PMID: 38326355; PMCID: PMC10850211; DOI: 10.1038/s41598-024-53631-x]
Abstract
Accurate species identification is crucial for assessing the medical relevance of a mosquito specimen, but requires extensive observer experience and well-equipped laboratories. In this proof-of-concept study, we developed a convolutional neural network (CNN) to identify seven Aedes species from wing images alone. While previous studies used images of the whole mosquito body, the nearly two-dimensional wings may facilitate standardized image capture and reduce the complexity of the CNN implementation. Mosquitoes were sampled from different sites in Germany. Their wings were mounted and photographed with a professional stereomicroscope. The data set consisted of 1155 wing images from seven Aedes species and 554 wings from other, non-Aedes mosquitoes. A CNN was trained to differentiate between Aedes and non-Aedes mosquitoes and to classify the seven Aedes species from grayscale and RGB images. Image processing, data augmentation, training, validation, and testing were conducted in Python using the deep learning framework PyTorch. Our best-performing CNN configuration achieved a macro F1 score of 99% for discriminating Aedes from non-Aedes mosquitoes. The mean macro F1 score for predicting the Aedes species was 90% for grayscale images and 91% for RGB images. In conclusion, wing images are sufficient for CNNs to identify mosquito species.
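The macro F1 score used throughout is the unweighted mean of per-class F1, so rare species weigh as much as common ones; with scikit-learn it is a single call (the labels below are toy values, not the study's data).

```python
from sklearn.metrics import f1_score

# Toy integer class labels standing in for Aedes species predictions.
y_true = [0, 0, 1, 1, 2, 2, 2, 3]
y_pred = [0, 1, 1, 1, 2, 2, 0, 3]
# average='macro' gives each class equal weight regardless of frequency.
print(f"macro F1: {f1_score(y_true, y_pred, average='macro'):.2f}")
```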
Affiliation(s)
- Felix G Sauer: Bernhard Nocht Institute for Tropical Medicine, Hamburg, Germany
- Kristopher Nolte: Bernhard Nocht Institute for Tropical Medicine, Hamburg, Germany; Faculty of Life Sciences, HAW Hamburg, Hamburg, Germany
- Norbert Becker: Faculty of Biosciences, University Heidelberg, Im Neuenheimer Feld 230, 69120 Heidelberg, Germany; Institute of Dipterology (IfD)/KABS, Georg-Peter-Süß-Str. 3, 67346 Speyer, Germany
- Ellen Kiel: Carl von Ossietzky University, Oldenburg, Germany
- Renke Lühken: Bernhard Nocht Institute for Tropical Medicine, Hamburg, Germany
12. Stark T, Ştefan V, Wurm M, Spanier R, Taubenböck H, Knight TM. YOLO object detection models can locate and classify broad groups of flower-visiting arthropods in images. Sci Rep 2023;13:16364. [PMID: 37773202; PMCID: PMC10541899; DOI: 10.1038/s41598-023-43482-3]
Abstract
Development of image recognition AI algorithms for flower-visiting arthropods has the potential to revolutionize the way we monitor pollinators. Ecologists need lightweight models that can be deployed in a field setting and classify with high accuracy. We tested the performance of three lightweight deep learning models, YOLOv5nano, YOLOv5small, and YOLOv7tiny, at real-time object recognition and classification of eight groups of flower-visiting arthropods using open-source image data. These eight groups contained four insect orders known to perform the majority of pollination services in Europe (Hymenoptera, Diptera, Coleoptera, Lepidoptera) as well as other arthropod groups that can be seen on flowers but are not typically considered pollinators (e.g., spiders, Araneae). All three models had high accuracy, ranging from 93% to 97%. Intersection over union (IoU) depended on the relative area of the bounding box: the models performed best when a single arthropod comprised a large portion of the image and worst when multiple small arthropods appeared together in a single image. The models could accurately distinguish flies in the family Syrphidae from the Hymenoptera they are known to mimic. These results reveal the capability of existing YOLO models to contribute to pollination monitoring.
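IoU, the overlap score referred to above, is straightforward to compute from corner coordinates; a plain-Python sketch:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2):
    the overlap measure used to score predicted boxes against annotations."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A slightly offset box over a large arthropod still scores reasonably high.
print(iou((0, 0, 100, 100), (10, 10, 110, 110)))  # ~0.68
```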
Affiliation(s)
- Thomas Stark: German Remote Sensing Data Center (DFD), German Aerospace Center (DLR), Oberpfaffenhofen, Germany
- Valentin Ştefan: Department of Community Ecology, Helmholtz Centre for Environmental Research - UFZ, Halle (Saale), Germany; German Centre for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig, Leipzig, Germany
- Michael Wurm: German Remote Sensing Data Center (DFD), German Aerospace Center (DLR), Oberpfaffenhofen, Germany
- Robin Spanier: German Remote Sensing Data Center (DFD), German Aerospace Center (DLR), Oberpfaffenhofen, Germany
- Hannes Taubenböck: German Remote Sensing Data Center (DFD), German Aerospace Center (DLR), Oberpfaffenhofen, Germany; Institute of Geography and Geology, University of Würzburg, Würzburg, Germany
- Tiffany M Knight: Department of Community Ecology, Helmholtz Centre for Environmental Research - UFZ, Halle (Saale), Germany; German Centre for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig, Leipzig, Germany; Institute of Biology, Martin Luther University Halle-Wittenberg, Halle (Saale), Germany
13. Zhang T, Li K, Chen X, Zhong C, Luo B, Grijalva I, McCornack B, Flippo D, Sharda A, Wang G. Aphid cluster recognition and detection in the wild using deep learning models. Sci Rep 2023;13:13410. [PMID: 37591898; PMCID: PMC10435548; DOI: 10.1038/s41598-023-38633-5]
Abstract
Aphid infestation poses a significant threat to crop production, rural communities, and global food security. While chemical pest control is crucial for maximizing yields, applying chemicals across entire fields is both environmentally unsustainable and costly; precise localization and management of aphids are therefore essential for targeted pesticide application. This paper focuses on deep learning models for detecting aphid clusters and proposes a novel approach for estimating infection levels from those detections. To facilitate this research, we captured a large-scale dataset from sorghum fields, manually selected 5447 images containing aphids, and annotated each individual aphid cluster within these images. To make the images usable by machine learning models, we further processed them by cropping them into patches, resulting in a labeled dataset of 151,380 image patches. We then implemented and compared four state-of-the-art object detection models (VFNet, GFLV2, PAA, and ATSS) on the aphid dataset. Extensive experiments show that all models yield similarly stable performance in terms of average precision and recall. We then merged close neighboring clusters and removed tiny clusters caused by cropping, which boosted performance by around 17%. The study demonstrates the feasibility of automatically detecting and managing insects using machine learning models. The labeled dataset will be made openly available to the research community.
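The post-processing step, dropping tiny crop-induced clusters and merging near neighbours, can be sketched in plain Python; the area and distance thresholds below are illustrative assumptions, not the paper's values.

```python
def postprocess(boxes, min_area=100.0, merge_dist=20.0):
    """Drop tiny clusters created by patch cropping, then merge clusters
    whose centers lie close together. Boxes are (x1, y1, x2, y2);
    thresholds are illustrative assumptions."""
    boxes = [b for b in boxes
             if (b[2] - b[0]) * (b[3] - b[1]) >= min_area]
    merged = []
    for b in boxes:
        cx, cy = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
        for i, m in enumerate(merged):
            mx, my = (m[0] + m[2]) / 2, (m[1] + m[3]) / 2
            if abs(cx - mx) < merge_dist and abs(cy - my) < merge_dist:
                # Merge: take the union of the two boxes.
                merged[i] = (min(b[0], m[0]), min(b[1], m[1]),
                             max(b[2], m[2]), max(b[3], m[3]))
                break
        else:
            merged.append(tuple(b))
    return merged

# Tiny box is dropped; the two overlapping clusters merge into one.
print(postprocess([(0, 0, 30, 30), (10, 10, 40, 40), (0, 0, 5, 5)]))
# -> [(0, 0, 40, 40)]
```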
Affiliation(s)
- Tianxiao Zhang: Department of Electrical Engineering and Computer Science, University of Kansas, Lawrence, KS 66045, USA
- Kaidong Li: Department of Electrical Engineering and Computer Science, University of Kansas, Lawrence, KS 66045, USA
- Xiangyu Chen: Department of Electrical Engineering and Computer Science, University of Kansas, Lawrence, KS 66045, USA
- Cuncong Zhong: Department of Electrical Engineering and Computer Science, University of Kansas, Lawrence, KS 66045, USA
- Bo Luo: Department of Electrical Engineering and Computer Science, University of Kansas, Lawrence, KS 66045, USA
- Ivan Grijalva: Department of Entomology, Kansas State University, Manhattan, KS 66506, USA
- Brian McCornack: Department of Entomology, Kansas State University, Manhattan, KS 66506, USA
- Daniel Flippo: Department of Biological and Agricultural Engineering, Kansas State University, Manhattan, KS 66506, USA
- Ajay Sharda: Department of Biological and Agricultural Engineering, Kansas State University, Manhattan, KS 66506, USA
- Guanghui Wang: Department of Computer Science, Toronto Metropolitan University, Toronto, ON M5B 2K3, Canada
14. Kulyukin VA, Kulyukin AV. Accuracy vs. Energy: An Assessment of Bee Object Inference in Videos from On-Hive Video Loggers with YOLOv3, YOLOv4-Tiny, and YOLOv7-Tiny. Sensors (Basel) 2023;23:6791. [PMID: 37571576; PMCID: PMC10422429; DOI: 10.3390/s23156791]
Abstract
A continuing trend in precision apiculture is to use computer vision methods to quantify characteristics of bee traffic in managed colonies at the hive's entrance. Since traffic at the hive's entrance is a contributing factor to the hive's productivity and health, we assessed the potential of three open-source convolutional network models, YOLOv3, YOLOv4-tiny, and YOLOv7-tiny, to quantify omnidirectional traffic in videos from on-hive video loggers on regular, unmodified one- and two-super Langstroth hives, and compared their accuracies, energy efficacies, and operational energy footprints. We trained and tested the models with a 70/30 split on a dataset of 23,173 flying bees manually labeled in 5819 images from 10 randomly selected videos, and manually evaluated the trained models on 3600 images from 120 randomly selected videos from different apiaries, years, and queen races. We designed a new energy efficacy metric as a ratio of performance units per energy unit required to make a model operational in a continuous hive monitoring data pipeline. In terms of accuracy, YOLOv3 ranked first, YOLOv7-tiny second, and YOLOv4-tiny third. All models underestimated the true amount of traffic due to false negatives. YOLOv3 was the only model with no false positives, but it had the lowest energy efficacy and the highest operational energy footprint in a deployed hive monitoring data pipeline. YOLOv7-tiny had the highest energy efficacy and the lowest operational energy footprint in the same pipeline. Consequently, YOLOv7-tiny is a model worth considering for training on larger bee datasets if a primary objective is the discovery of non-invasive computer vision models of traffic quantification with higher energy efficacies and lower operational energy footprints.
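The energy efficacy metric is a simple ratio; the sketch below shows the computation with made-up accuracy and energy figures (the paper defines the metric, but these numbers are not its results).

```python
def energy_efficacy(performance, energy_kwh):
    """Performance units delivered per unit of energy required to keep a
    model operational in a continuous monitoring pipeline. Units and the
    values below are illustrative assumptions."""
    return performance / energy_kwh

models = {"YOLOv3": (0.95, 12.0), "YOLOv4-tiny": (0.88, 4.0),
          "YOLOv7-tiny": (0.91, 3.0)}          # (accuracy, kWh) -- assumed
for name, (acc, kwh) in models.items():
    print(f"{name}: {energy_efficacy(acc, kwh):.3f} accuracy/kWh")
```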
15. Wang C, Grijalva I, Caragea D, McCornack B. Detecting common coccinellids found in sorghum using deep learning models. Sci Rep 2023;13:9748. [PMID: 37328502; PMCID: PMC10276038; DOI: 10.1038/s41598-023-36738-5]
Abstract
Increased global production of sorghum has the potential to meet many of the demands of a growing human population, and developing automation technologies for field scouting is crucial for long-term, low-cost production. Since 2013, the sugarcane aphid (SCA), Melanaphis sacchari (Zehntner), has become an important economic pest causing significant yield loss across the sorghum production region of the United States. Adequate management of SCA depends on costly field scouting to determine pest presence and economic threshold levels for spraying insecticides. However, given the impact of insecticides on natural enemies, there is an urgent need for automated detection technologies that support their conservation. Natural enemies, primarily coccinellids, prey on SCA and help reduce unnecessary insecticide applications, but detecting and classifying these insects during field scouting is time-consuming and inefficient in lower-value crops like sorghum. Advanced deep learning software provides a means to automate laborious agricultural tasks, including insect detection and classification; however, deep learning models for coccinellids in sorghum had not been developed. Our objective was therefore to develop and train machine learning models to detect coccinellids commonly found in sorghum and classify them at the genus, species, and subfamily levels. We trained a two-stage object detection model, the Faster Region-based Convolutional Neural Network (Faster R-CNN) with a Feature Pyramid Network (FPN), as well as one-stage models in the YOLO (You Only Look Once) family (YOLOv5 and YOLOv7), to detect and classify seven coccinellids commonly found in sorghum (Coccinella septempunctata, Coleomegilla maculata, Cycloneda sanguinea, Harmonia axyridis, Hippodamia convergens, Olla v-nigrum, Scymninae). Training and evaluation used images extracted from iNaturalist, a web platform that publishes citizen scientists' observations of living organisms. Experimental evaluation with standard object detection metrics, such as average precision (AP) and AP@0.50, showed that YOLOv7 performs best on the coccinellid images, with AP@0.50 as high as 97.3 and AP as high as 74.6. Our research contributes automated deep learning software to integrated pest management, making it easier to detect natural enemies in sorghum.
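For the two-stage detector, the standard torchvision fine-tuning recipe instantiates Faster R-CNN with an FPN backbone and swaps the box predictor for the task's classes; the sketch below assumes 7 coccinellid classes plus background, and its settings are not taken from the paper.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Faster R-CNN with a ResNet-50 FPN backbone, pretrained on COCO.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box predictor: 7 coccinellid classes + 1 background class.
in_feats = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, 8)

# Smoke test on one dummy image; real training would use labeled iNaturalist crops.
model.eval()
with torch.no_grad():
    out = model([torch.rand(3, 480, 640)])
print(out[0]["boxes"].shape, out[0]["labels"].shape)
```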
Affiliation(s)
- Chaoxin Wang: Department of Computer Science, Kansas State University, Manhattan, KS 66506, USA
- Ivan Grijalva: Department of Entomology, Kansas State University, Manhattan, KS 66506, USA
- Doina Caragea: Department of Computer Science, Kansas State University, Manhattan, KS 66506, USA
- Brian McCornack: Department of Entomology, Kansas State University, Manhattan, KS 66506, USA
16. Otto CRV, Schrage AC, Bailey LL, Mola JM, Smith TA, Pearse I, Simanonok S, Grundel R. Addressing Detection Uncertainty in Bombus affinis (Hymenoptera: Apidae) Surveys Can Improve Inferences Made From Monitoring. Environmental Entomology 2023;52:108-118. [PMID: 36412052; DOI: 10.1093/ee/nvac090]
Abstract
The U.S. Fish and Wildlife Service developed national guidelines to track species recovery of the endangered rusty patched bumble bee [Bombus affinis Cresson (Hymenoptera: Apidae)] and to investigate changes in species occupancy across space and time. As with other native bee monitoring efforts, managers have specifically acknowledged the need to address species detection uncertainty and to determine the sampling effort required to infer species absence within sites. We used single-season, single-species occupancy models fit to field data collected in four states to estimate imperfect detection of B. affinis and to determine the survey effort required to achieve high confidence of detection. Our analysis revealed a precipitous seasonal decline in B. affinis detection probability across the July-September sampling window in 2021. We estimated that six 30-min surveys conducted in early July are required to achieve a 95% cumulative detection probability, whereas >10 surveys would be required in early August for the same level of confidence. Our analysis also showed that B. affinis was less likely to be detected on hot, humid days and at patches of reduced habitat quality. Bombus affinis was frequently observed on Monarda fistulosa (Lamiales: Lamiaceae), followed by Pycnanthemum virginianum Rob. and Fernald (Lamiales: Lamiaceae), Eutrochium maculatum Lamont (Asterales: Asteraceae), and Veronicastrum virginicum Farw. (Lamiales: Plantaginaceae). Although our research is focused on B. affinis, it is relevant for monitoring other bumble bees of conservation concern, such as B. occidentalis Greene (Hymenoptera: Apidae) and B. terricola Kirby (Hymenoptera: Apidae), for which monitoring efforts have recently been initiated and occupancy is a variable of conservation interest.
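The survey-effort numbers follow from the cumulative detection probability 1 - (1 - p)^n for n independent surveys with per-survey detection probability p; the helper below inverts that formula (the p values shown are illustrative, not the paper's estimates).

```python
import math

def surveys_needed(p_single, target=0.95):
    """Smallest n with cumulative detection 1 - (1 - p)^n >= target."""
    return math.ceil(math.log(1 - target) / math.log(1 - p_single))

print(surveys_needed(0.40))  # -> 6: ~six surveys when per-survey detection is 0.40
print(surveys_needed(0.25))  # -> 11: >10 surveys as detection declines late-season
```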
Affiliation(s)
- Clint R V Otto: U.S. Geological Survey, Northern Prairie Wildlife Research Center, Jamestown, ND 58401, USA
- Alma C Schrage: U.S. Geological Survey, Great Lakes Science Center, 1574 N 300E, Chesterton, IN 46304, USA
- Larissa L Bailey: Department of Fish, Wildlife and Conservation Biology, Graduate Degree Program in Ecology, 1474 Campus Delivery, Colorado State University, Fort Collins, CO 80523, USA
- John M Mola: U.S. Geological Survey, Fort Collins Science Center, Fort Collins, CO, USA; Forest & Rangeland Stewardship, Warner College of Natural Resources, Colorado State University, Fort Collins, CO, USA
- Tamara A Smith: U.S. Fish and Wildlife Service, Minnesota-Wisconsin Ecological Services Field Office, 3815 American Boulevard East, Bloomington, MN 55425, USA
- Ian Pearse: U.S. Geological Survey, Fort Collins Science Center, Fort Collins, CO, USA
- Stacy Simanonok: U.S. Geological Survey, Northern Prairie Wildlife Research Center, Jamestown, ND 58401, USA
- Ralph Grundel: U.S. Geological Survey, Great Lakes Science Center, 1574 N 300E, Chesterton, IN 46304, USA
17. Wang J, Shi Z, Shi W, Wang H. The Detection of Yarn Roll's Margin in Complex Background. Sensors (Basel) 2023;23:1993. [PMID: 36850588; PMCID: PMC9961418; DOI: 10.3390/s23041993]
Abstract
Online detection of a yarn roll's margin is one of the key issues in textile automation, as it governs the speed and scheduling of bobbin (empty yarn roll) replacement. The industrial setting is characterized by uneven lighting, restricted shooting angles, diverse yarn colours and cylinder yarn types, and complex backgrounds; under these conditions, neural network detection error is large and the edge accuracy of contour extraction is low. This paper proposes an improved neural network algorithm that integrates an improved YOLO detector with contour detection. First, the image is fed into the YOLO model to detect each yarn roll and its dimensions; second, the contour and dimensions of each yarn roll are accurately extracted based on the YOLO output; third, the yarn roll diameters from YOLO and from contour detection are fused, and the yarn roll length and margin are calculated as measurements; finally, to eliminate erroneous detections, the yarn consumption speed is used to estimate the residual yarn volume, and the measured and estimated values are fused with a Kalman filter. This method overcomes the effects of complex backgrounds and illumination while remaining applicable to different types of yarn rolls. Experiments verify that the average measurement error of the cylinder yarn diameter is less than 8.6 mm and that the measurement error of the cylinder yarn length does not exceed 3 cm.
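In one dimension, the fusion step reduces to the scalar Kalman update: predict from the consumption-speed model, then correct with the vision measurement. A sketch with assumed noise variances (not the paper's tuning):

```python
def kalman_fuse(x_est, p_est, z_meas, q=0.01, r=0.25):
    """One scalar Kalman update fusing the speed-based estimate of remaining
    yarn (x_est, variance p_est) with the vision measurement z_meas.
    Process and measurement noise q, r are illustrative assumptions."""
    p_pred = p_est + q                 # predict: estimate uncertainty grows
    k = p_pred / (p_pred + r)          # Kalman gain: model vs measurement trust
    x_new = x_est + k * (z_meas - x_est)
    p_new = (1 - k) * p_pred
    return x_new, p_new

x, p = 120.0, 1.0                      # initial margin estimate (mm)
for z in [118.4, 117.9, 116.8]:        # successive vision measurements
    x, p = kalman_fuse(x, p, z)
    print(f"fused margin: {x:.1f} mm (var {p:.3f})")
```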
18. Reis HC, Turk V. Transfer Learning Approach and Nucleus Segmentation with MedCLNet Colon Cancer Database. J Digit Imaging 2023;36:306-325. [PMID: 36127531; PMCID: PMC9984669; DOI: 10.1007/s10278-022-00701-z]
Abstract
Machine learning has recently seen wide use in the medical field. In the diagnosis of serious diseases such as cancer, deep learning techniques can reduce the workload of experts and produce quick solutions. The nuclei found in histopathology datasets are an essential parameter in disease detection. In this study, nucleus segmentation was performed on the colorectal histology MNIST dataset using graph-theory, PSO, watershed, and random walker algorithms. In addition, we present the 10-class MedCLNet visual dataset, consisting of the NCT-CRC-HE-100K, LC25000, and GlaS datasets, for use in transfer learning studies with deep learning techniques, and we propose a transfer learning technique using the MedCLNet database. Deep neural networks pre-trained with the proposed method were then used for classification on the colorectal histology MNIST dataset. DenseNet201, DenseNet169, InceptionResNetV2, InceptionV3, ResNet152V2, ResNet101V2, and Xception were used in the transfer learning and classification studies, and the approach was analyzed before and after transfer learning with different methods (DenseNet169 + SVM, DenseNet169 + GRU). On the colorectal histology MNIST dataset, the DenseNet169 model initialized with random weights achieved 94.29% accuracy in multi-class classification, rising to 95.00% after transfer learning was applied. Compared with results from prior empirical studies, the proposed method produced satisfactory outcomes. The application is expected to provide a secondary evaluation for physicians in colon cancer detection and segmentation.
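One of the compared variants pairs a pretrained CNN feature extractor with a classic classifier; below is a minimal sketch of that DenseNet169 + SVM pattern, with random stand-in data in place of histology patches and no claim to reproduce the paper's preprocessing.

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

# Frozen DenseNet169 backbone with global average pooling as feature extractor.
base = tf.keras.applications.DenseNet169(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(224, 224, 3))

x = np.random.rand(16, 224, 224, 3).astype("float32")   # 16 fake patches
y = np.random.randint(0, 8, size=16)                    # 8 tissue classes (assumed)
feats = base.predict(tf.keras.applications.densenet.preprocess_input(x * 255))

# Classic SVM fitted on the pooled CNN features.
clf = SVC(kernel="rbf").fit(feats, y)
print(clf.predict(feats[:4]))
```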
Affiliation(s)
- Hatice Catal Reis: Department of Geomatics Engineering, Gumushane University, Gumushane 2900, Turkey
- Veysel Turk: Department of Computer Engineering, University of Harran, Sanliurfa, Turkey
19. Desai B, Patel A, Patel V, Shah S, Raval MS, Ghosal R. Identification of free-ranging mugger crocodiles by applying deep learning methods on UAV imagery. Ecol Inform 2022. [DOI: 10.1016/j.ecoinf.2022.101874]
20. Blair J, Weiser MD, de Beurs K, Kaspari M, Siler C, Marshall KE. Embracing imperfection: Machine-assisted invertebrate classification in real-world datasets. Ecol Inform 2022. [DOI: 10.1016/j.ecoinf.2022.101896]
21. Phan TTH, Nguyen-Doan D, Nguyen-Huu D, Nguyen-Van H, Pham-Hong T. Investigation on new Mel frequency cepstral coefficients features and hyper-parameters tuning technique for bee sound recognition. Soft Comput 2022. [DOI: 10.1007/s00500-022-07596-6]
22. Artificial intelligence versus natural selection: Using computer vision techniques to classify bees and bee mimics. iScience 2022;25:104924. [PMID: 36060073; PMCID: PMC9437854; DOI: 10.1016/j.isci.2022.104924]
Abstract
Many groups of stingless insects have independently evolved mimicry of bees to fool would-be predators. To investigate this mimicry, we trained artificial intelligence (AI) algorithms, specifically computer vision, to classify citizen scientist images of bees, bumble bees, and diverse bee mimics. For detecting bees and bumble bees, our models achieved accuracies of 91.71% and 88.86%, respectively. As a proxy for a natural predator, our models were poorest at detecting bee mimics that exhibit both aggressive and defensive mimicry. Using the explainable AI method of class activation maps, we validated that our models learn from appropriate components within the image, which in turn provided anatomical insights. Our t-SNE plot yielded perfect within-group clustering, as well as between-group clustering that grossly replicated the phylogeny. Ultimately, the transdisciplinary approaches herein can enhance global citizen science efforts as well as investigations of the mimicry and morphology of bees and other insects.

Highlights:
- AI models for classifying bees and bumble bees achieved 92% and 89% accuracy
- AI models were fooled most by bee mimics exhibiting both aggressive and defensive mimicry
- Class activation maps explained the anatomical reasoning of AI model classifications
- The t-SNE plot exhibited perfect phylogenetic clustering within and between groups
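The t-SNE step is a standard projection of the network's embeddings into two dimensions for plotting; a scikit-learn sketch with random stand-in features (shapes and perplexity are assumptions):

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in for penultimate-layer activations of the trained classifier.
features = np.random.rand(200, 256)            # 200 images x 256-d embeddings
coords = TSNE(n_components=2, perplexity=30,
              random_state=0).fit_transform(features)
print(coords.shape)                            # (200, 2): plot, colour by taxon
```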
23. Image Classification of Amazon Parrots by Deep Learning: A Potentially Useful Tool for Wildlife Conservation. Biology (Basel) 2022;11:1303. [PMID: 36138782; PMCID: PMC9495850; DOI: 10.3390/biology11091303]
Abstract
Simple Summary: Most parrot species are threatened with extinction because of habitat loss and commercial trade. Parrot conservation is vital because parrots play an important role in the ecosystem, and Amazon parrots are among the most endangered parrots. Monitoring their wild populations and global trade is essential for their conservation, but is becoming more challenging because it requires manual analysis of large-scale image data, and morphological identification of Amazon parrots can be difficult because they share similar features. Deep learning-based object detection models are useful tools for monitoring wild populations and global trade. In this study, 26 Amazon parrot species were classified using eight object detection models; the most accurate model classified the 26 species at 90.7% on average. Continued development of deep learning models for classifying Amazon parrots may help improve monitoring of their wild populations and global trade.

Abstract: Parrots play a crucial role in the ecosystem by performing various roles, such as consuming the reproductive structures of plants and dispersing plant seeds. However, most are threatened because of habitat loss and commercial trade. Amazon parrots are among the most traded parrots, both legally and illegally, so monitoring their wild populations and global trade is crucial for their conservation. Yet monitoring wild populations is becoming more challenging because manual analysis of the large-scale image datasets obtained from camera traps is labor-intensive and time-consuming, monitoring the wildlife trade is difficult because of the sheer volume of trade, and Amazon parrots can be hard to identify because of their morphological similarity. Object detection models have been widely used for automatic and accurate species classification. In this study, 8 Single Shot MultiBox Detector models were assessed for classifying 26 Amazon parrot species. Among the eight models, the DenseNet121 model showed the highest mean average precision, at 88.9%, and classified the 26 Amazon parrot species at 90.7% on average. Continuous improvement of deep learning models for classifying Amazon parrots may support monitoring wild populations and the global trade of these species.
Collapse
|
24
|
Carney RM, Mapes C, Low RD, Long A, Bowser A, Durieux D, Rivera K, Dekramanjian B, Bartumeus F, Guerrero D, Seltzer CE, Azam F, Chellappan S, Palmer JRB. Integrating Global Citizen Science Platforms to Enable Next-Generation Surveillance of Invasive and Vector Mosquitoes. INSECTS 2022; 13:675. [PMID: 36005301 PMCID: PMC9409379 DOI: 10.3390/insects13080675] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/03/2022] [Revised: 06/29/2022] [Accepted: 07/01/2022] [Indexed: 11/29/2022]
Abstract
Mosquito-borne diseases continue to ravage humankind with >700 million infections and nearly one million deaths every year. Yet only a small percentage of the >3500 mosquito species transmit diseases, necessitating both extensive surveillance and precise identification. Unfortunately, such efforts are costly, time-consuming, and require entomological expertise. As envisioned by the Global Mosquito Alert Consortium, citizen science can provide a scalable solution. However, disparate data standards across existing platforms have thus far precluded truly global integration. Here, utilizing Open Geospatial Consortium standards, we harmonized four data streams from three established mobile apps—Mosquito Alert, iNaturalist, and GLOBE Observer’s Mosquito Habitat Mapper and Land Cover—to facilitate interoperability and utility for researchers, mosquito control personnel, and policymakers. We also launched coordinated media campaigns that generated unprecedented numbers and types of observations, including successfully capturing the first images of targeted invasive and vector species. Additionally, we leveraged pooled image data to develop a toolset of artificial intelligence algorithms for future deployment in taxonomic and anatomical identification. Ultimately, by harnessing the combined powers of citizen science and artificial intelligence, we establish a next-generation surveillance framework to serve as a united front to combat the ongoing threat of mosquito-borne diseases worldwide.
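Harmonizing disparate data streams of this kind typically means mapping each platform's records onto one shared schema. The sketch below is a hypothetical illustration only; the field names and source-record layouts are assumptions for demonstration and do not reproduce the Open Geospatial Consortium standard the authors adopted.

```python
# Hypothetical sketch: map heterogeneous citizen-science records onto one
# shared observation schema (field names are illustrative assumptions).
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Observation:
    source: str              # originating platform
    taxon: str               # reported species or genus
    lat: float
    lon: float
    observed_at: datetime
    photo_url: str | None = None

def from_mosquito_alert(rec: dict) -> Observation:
    # Assumed raw-record keys; real exports may differ.
    return Observation("Mosquito Alert", rec["species"],
                       rec["lat"], rec["lon"],
                       datetime.fromisoformat(rec["timestamp"]),
                       rec.get("photo"))

def from_inaturalist(rec: dict) -> Observation:
    # Assumed raw-record keys; real exports may differ.
    coords = rec["geojson"]["coordinates"]      # [lon, lat]
    photos = rec.get("photos") or [{}]
    return Observation("iNaturalist", rec["taxon"]["name"],
                       coords[1], coords[0],
                       datetime.fromisoformat(rec["observed_on"]),
                       photos[0].get("url"))
```

Once every stream is reduced to the same `Observation` type, downstream consumers (researchers, control personnel, policymakers) can query one pooled table instead of four.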
Collapse
Affiliation(s)
- Ryan M. Carney
- Department of Integrative Biology, University of South Florida (USF), Tampa, FL 33620, USA; (C.M.); (D.D.); (K.R.)
| | - Connor Mapes
- Department of Integrative Biology, University of South Florida (USF), Tampa, FL 33620, USA; (C.M.); (D.D.); (K.R.)
- Woodrow Wilson International Center for Scholars, Washington, DC 20007, USA; (A.L.); (A.B.)
| | - Russanne D. Low
- Institute for Global Environmental Strategies, Arlington, VA 22202, USA;
| | - Alex Long
- Woodrow Wilson International Center for Scholars, Washington, DC 20007, USA; (A.L.); (A.B.)
| | - Anne Bowser
- Woodrow Wilson International Center for Scholars, Washington, DC 20007, USA; (A.L.); (A.B.)
| | - David Durieux
- Department of Integrative Biology, University of South Florida (USF), Tampa, FL 33620, USA; (C.M.); (D.D.); (K.R.)
| | - Karlene Rivera
- Department of Integrative Biology, University of South Florida (USF), Tampa, FL 33620, USA; (C.M.); (D.D.); (K.R.)
| | - Berj Dekramanjian
- Department of Political and Social Sciences, Universitat Pompeu Fabra, 08005 Barcelona, Spain; (B.D.); (J.R.B.P.)
| | - Frederic Bartumeus
- Centre d’Estudis Avançats de Blanes (CEAB-CSIC), 17300 Blanes, Spain;
- Centre de Recerca Ecològica i Aplicacions Forestals (CREAF), 08193 Cerdanyola del Vallès, Spain
- Institució Catalana de Recerca i Estudis Avançats (ICREA), 08010 Barcelona, Spain
| | - Daniel Guerrero
- Centre d’Estudis Avançats de Blanes (CEAB-CSIC), 17300 Blanes, Spain;
| | - Carrie E. Seltzer
- iNaturalist, California Academy of Sciences, San Francisco, CA 94118, USA;
| | - Farhat Azam
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA; (F.A.); (S.C.)
| | - Sriram Chellappan
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA; (F.A.); (S.C.)
| | - John R. B. Palmer
- Department of Political and Social Sciences, Universitat Pompeu Fabra, 08005 Barcelona, Spain; (B.D.); (J.R.B.P.)
| |
Collapse
|
25
|
van Klink R, August T, Bas Y, Bodesheim P, Bonn A, Fossøy F, Høye TT, Jongejans E, Menz MHM, Miraldo A, Roslin T, Roy HE, Ruczyński I, Schigel D, Schäffler L, Sheard JK, Svenningsen C, Tschan GF, Wäldchen J, Zizka VMA, Åström J, Bowler DE. Emerging technologies revolutionise insect ecology and monitoring. Trends Ecol Evol 2022; 37:872-885. [PMID: 35811172 DOI: 10.1016/j.tree.2022.06.001] [Citation(s) in RCA: 46] [Impact Index Per Article: 15.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2022] [Revised: 05/26/2022] [Accepted: 06/07/2022] [Indexed: 12/30/2022]
Abstract
Insects are the most diverse group of animals on Earth, but their small size and high diversity have always made them challenging to study. Recent technological advances have the potential to revolutionise insect ecology and monitoring. We describe the state of the art of four technologies (computer vision, acoustic monitoring, radar, and molecular methods), and assess their advantages, current limitations, and future potential. We discuss how these technologies can adhere to modern standards of data curation and transparency, their implications for citizen science, and their potential for integration among different monitoring programmes and technologies. We argue that they provide unprecedented possibilities for insect ecology and monitoring, but it will be important to foster international standards via collaboration.
Collapse
Affiliation(s)
- Roel van Klink
- German Centre for Integrative Biodiversity Research (iDiv) Halle Jena Leipzig, Puschstrasse 4, 04103, Leipzig, Germany; Martin Luther University Halle-Wittenberg, Department of Computer Science, 06099, Halle (Saale), Germany.
| | - Tom August
- UK Centre for Ecology & Hydrology, Benson Lane, Wallingford, OX10 8BB, UK
| | - Yves Bas
- Centre d'Écologie et des Sciences de la Conservation, Muséum National d'Histoire Naturelle, Paris, France; CEFE, Université Montpellier, CNRS, EPHE, IRD, Montpellier, France
| | - Paul Bodesheim
- Friedrich Schiller University Jena, Computer Vision Group, Ernst-Abbe-Platz 2, 07743, Jena, Germany
| | - Aletta Bonn
- German Centre for Integrative Biodiversity Research (iDiv) Halle Jena Leipzig, Puschstrasse 4, 04103, Leipzig, Germany; Helmholtz - Centre for Environmental Research - UFZ, Permoserstrasse 15, 04318, Leipzig, Germany; Friedrich Schiller University Jena, Institute of Biodiversity, Dornburger Strasse 159, 07743, Jena, Germany
| | - Frode Fossøy
- Norwegian Institute for Nature Research, P.O. Box 5685 Torgarden, 7485, Trondheim, Norway
| | - Toke T Høye
- Aarhus University, Department of Ecoscience and Arctic Research Centre, C.F. Møllers Allé 8, 8000, Aarhus, Denmark
| | - Eelke Jongejans
- Radboud University, Animal Ecology and Physiology, Heyendaalseweg 135, 6525, AJ, Nijmegen, The Netherlands; Netherlands Institute of Ecology, Animal Ecology, Droevendaalsesteeg 10, 6708 PB, Wageningen, The Netherlands
| | - Myles H M Menz
- Max Planck Institute for Animal Behaviour, Department of Migration, Am Obstberg 1, 78315, Radolfzell, Germany; College of Science and Engineering, James Cook University, Townsville, Qld, Australia
| | - Andreia Miraldo
- Swedish Museum of Natural History, Department of Bioinformatics and Genetics, Frescativägen 40, 114 18, Stockholm, Sweden
| | - Tomas Roslin
- Swedish University of Agricultural Sciences (SLU), Department of Ecology, Ulls väg 18B, 75651, Uppsala, Sweden
| | - Helen E Roy
- UK Centre for Ecology & Hydrology, Benson Lane, Wallingford, OX10 8BB, UK
| | - Ireneusz Ruczyński
- Mammal Research Institute, Polish Academy of Sciences, Stoczek 1, 17-230, Białowieża, Poland
| | - Dmitry Schigel
- Global Biodiversity Information Facility (GBIF), Universitetsparken 15, 2100, Copenhagen, Denmark
| | - Livia Schäffler
- Leibniz Institute for the Analysis of Biodiversity Change, Museum Koenig Bonn, Adenauerallee 127, 53113, Bonn, Germany
| | - Julie K Sheard
- German Centre for Integrative Biodiversity Research (iDiv) Halle Jena Leipzig, Puschstrasse 4, 04103, Leipzig, Germany; Helmholtz - Centre for Environmental Research - UFZ, Permoserstrasse 15, 04318, Leipzig, Germany; Friedrich Schiller University Jena, Institute of Biodiversity, Dornburger Strasse 159, 07743, Jena, Germany; University of Copenhagen, Centre for Macroecology, Evolution and Climate, Globe Institute, Universitetsparken 15, bld. 3, 2100, Copenhagen, Denmark
| | - Cecilie Svenningsen
- University of Copenhagen, Natural History Museum of Denmark, Øster Voldgade 5-7, 1350, Copenhagen, Denmark
| | - Georg F Tschan
- Leibniz Institute for the Analysis of Biodiversity Change, Museum Koenig Bonn, Adenauerallee 127, 53113, Bonn, Germany
| | - Jana Wäldchen
- German Centre for Integrative Biodiversity Research (iDiv) Halle Jena Leipzig, Puschstrasse 4, 04103, Leipzig, Germany; Max Planck Institute for Biogeochemistry, Department of Biogeochemical Integration, Hans-Knoell-Str. 10, 07745, Jena, Germany
| | - Vera M A Zizka
- Leibniz Institute for the Analysis of Biodiversity Change, Museum Koenig Bonn, Adenauerallee 127, 53113, Bonn, Germany
| | - Jens Åström
- Norwegian Institute for Nature Research, P.O. Box 5685 Torgarden, 7485, Trondheim, Norway
| | - Diana E Bowler
- German Centre for Integrative Biodiversity Research (iDiv) Halle Jena Leipzig, Puschstrasse 4, 04103, Leipzig, Germany; UK Centre for Ecology & Hydrology, Benson Lane, Wallingford, OX10 8BB, UK; Helmholtz - Centre for Environmental Research - UFZ, Permoserstrasse 15, 04318, Leipzig, Germany; Friedrich Schiller University Jena, Institute of Biodiversity, Dornburger Strasse 159, 07743, Jena, Germany
| |
Collapse
|
26
|
Wang J, Tian Y, Zhang R, Liu Z, Tian Y, Dai S. Multi-Information Model for Large-Flowered Chrysanthemum Cultivar Recognition and Classification. FRONTIERS IN PLANT SCIENCE 2022; 13:806711. [PMID: 35734255 PMCID: PMC9208330 DOI: 10.3389/fpls.2022.806711] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/01/2021] [Accepted: 03/16/2022] [Indexed: 06/15/2023]
Abstract
The traditional Chinese large-flowered chrysanthemum is one of the cultivar groups of chrysanthemum (Chrysanthemum × morifolium Ramat.), exhibiting great morphological variation across its many cultivars. Experts have established several large-flowered chrysanthemum classification systems using comparative morphology; however, accurate recognition and classification remain a problem for many cultivars. Drawing on the comparative morphological traits of selected samples, we propose a multi-information model based on deep learning to recognize and classify large-flowered chrysanthemum. In this study, we collected images of 213 large-flowered chrysanthemum cultivars in two consecutive years, 2018 and 2019. Based on the 2018 dataset, we constructed a multi-information classification model using a non-pre-trained ResNet18 as the backbone network; the model achieves 70.62% top-5 test accuracy on the 2019 dataset. We also explored how well the image features represent the characteristics of large-flowered chrysanthemum: affinity propagation (AP) clustering shows that the features are sufficient to discriminate flower colors, and principal component analysis (PCA) shows that petal type is better interpreted than flower type. The training-sample processing, model training scheme, and learning-rate adjustment method all affected the convergence and generalization of the model. Notably, the non-pre-trained model avoids the tendency, observed with the ImageNet pre-trained model, to focus on texture while ignoring color. These results lay a foundation for the automated, image-based recognition and classification of large-flowered chrysanthemum cultivars.
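The feature-space analyses mentioned above (AP clustering and PCA) are straightforward to reproduce. Below is a minimal sketch, assuming a random array in place of the model's learned embeddings.

```python
# Minimal sketch: affinity propagation clustering and a PCA view of image
# features, as in the study's feature analysis; the feature array here is a
# random stand-in for the model's learned embeddings.
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.decomposition import PCA

feats = np.random.rand(213, 512)          # one feature vector per cultivar
ap = AffinityPropagation(random_state=0).fit(feats)
print("clusters found:", len(ap.cluster_centers_indices_))

# Two leading principal components, e.g. to compare how well petal type vs
# flower type separates along them.
pcs = PCA(n_components=2).fit_transform(feats)
```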
Collapse
Affiliation(s)
- Jue Wang
- Beijing Key Laboratory of Ornamental Plants Germplasm Innovation and Molecular Breeding, Beijing Laboratory of Urban and Rural Ecological Environment, Key Laboratory of Genetics and Breeding in Forest Trees and Ornamental Plants of Ministry of Education, National Engineering Research Center for Floriculture, School of Landscape Architecture, Beijing Forestry University, Beijing, China
| | - Yuankai Tian
- Beijing Key Laboratory of Ornamental Plants Germplasm Innovation and Molecular Breeding, Beijing Laboratory of Urban and Rural Ecological Environment, Key Laboratory of Genetics and Breeding in Forest Trees and Ornamental Plants of Ministry of Education, National Engineering Research Center for Floriculture, School of Landscape Architecture, Beijing Forestry University, Beijing, China
| | - Ruisong Zhang
- College of Technology, Beijing Forestry University, Beijing, China
| | - Zhilan Liu
- Beijing Key Laboratory of Ornamental Plants Germplasm Innovation and Molecular Breeding, Beijing Laboratory of Urban and Rural Ecological Environment, Key Laboratory of Genetics and Breeding in Forest Trees and Ornamental Plants of Ministry of Education, National Engineering Research Center for Floriculture, School of Landscape Architecture, Beijing Forestry University, Beijing, China
| | - Ye Tian
- College of Technology, Beijing Forestry University, Beijing, China
| | - Silan Dai
- Beijing Key Laboratory of Ornamental Plants Germplasm Innovation and Molecular Breeding, Beijing Laboratory of Urban and Rural Ecological Environment, Key Laboratory of Genetics and Breeding in Forest Trees and Ornamental Plants of Ministry of Education, National Engineering Research Center for Floriculture, School of Landscape Architecture, Beijing Forestry University, Beijing, China
| |
Collapse
|
27
|
Montero‐Castaño A, Koch JBU, Lindsay TT, Love B, Mola JM, Newman K, Sharkey JK. Pursuing best practices for minimizing wild bee captures to support biological research. CONSERVATION SCIENCE AND PRACTICE 2022. [DOI: 10.1111/csp2.12734] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022] Open
Affiliation(s)
| | - Jonathan Berenguer Uhuad Koch
- U.S. Department of Agriculture‐Agricultural Research Service Pollinating Insect‐Biology, Management, and Systematics Research Unit Logan Utah USA
| | - Thuy‐Tien Thai Lindsay
- U.S. Department of Agriculture‐Agricultural Research Service Pollinating Insect‐Biology, Management, and Systematics Research Unit Logan Utah USA
| | - Byron Love
- U.S. Department of Agriculture‐Agricultural Research Service Pollinating Insect‐Biology, Management, and Systematics Research Unit Logan Utah USA
| | - John M. Mola
- U.S. Geological Survey Fort Collins Science Center Fort Collins Colorado USA
| | - Kiera Newman
- School of Environmental Sciences University of Guelph Guelph Ontario Canada
| | - Janean K. Sharkey
- School of Environmental Sciences University of Guelph Guelph Ontario Canada
| |
Collapse
|
28
|
Crone MK, Biddinger DJ, Grozinger CM. Wild Bee Nutritional Ecology: Integrative Strategies to Assess Foraging Preferences and Nutritional Requirements. FRONTIERS IN SUSTAINABLE FOOD SYSTEMS 2022. [DOI: 10.3389/fsufs.2022.847003] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022] Open
Abstract
Bees depend on flowering plants for their nutrition, and reduced availability of floral resources is a major driver of declines in both managed and wild bee populations. Understanding the nutritional needs of different bee species, and how these needs are met by the varying nutritional resources provided by different flowering plant taxa, can greatly inform land management recommendations to support bee populations and their associated ecosystem services. However, most bee nutrition research has focused on the three most commonly managed and commercially reared bee taxa (honey bees, bumble bees, and mason bees), with fewer studies focused on wild bees and other managed species, such as leafcutting bees, stingless bees, and alkali bees. Thus, we have limited information about the nutritional requirements and foraging preferences of the vast majority of bee species. Here, we discuss the approaches traditionally used to understand bee nutritional ecology: identifying the floral visitors of selected focal plant species, evaluating the foraging preferences of adults of selected focal bee species, and evaluating the nutritional requirements of focal bee species (larvae or adults) in controlled settings; we then examine how these methods may be adapted to study a wider range of bee species. We also highlight emerging technologies that have the potential to greatly facilitate studies of the nutritional ecology of wild bee species and to evaluate bee nutritional ecology at significantly larger spatio-temporal scales than was previously feasible. While the focus of this review is on bee species, many of these techniques can be applied to other pollinator taxa as well.
Collapse
|
29
|
Assessment of deep convolutional neural network models for species identification of forensically-important fly maggots based on images of posterior spiracles. Sci Rep 2022; 12:4753. [PMID: 35306517 PMCID: PMC8934339 DOI: 10.1038/s41598-022-08823-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2021] [Accepted: 03/14/2022] [Indexed: 12/02/2022] Open
Abstract
Forensic entomology is the branch of forensic science concerned with the use of arthropod evidence in legal investigations. Fly maggots are among the crucial pieces of such evidence and are used worldwide for estimating post-mortem intervals. However, species-level identification of fly maggots is difficult, time-consuming, and requires specialized taxonomic training. In this work, a novel method for identifying forensically important fly species is proposed using convolutional neural networks (CNNs). The data used for the experiment were obtained from a digital camera connected to a compound microscope. To evaluate the trade-off between accuracy and speed for species classification, we compared the performance of four widely used models that vary in architectural complexity: ResNet-101, DenseNet161, VGG19_bn, and AlexNet. In the validation step, all of the studied models provided 100% accuracy for identifying maggots of four species, Chrysomya megacephala (Diptera: Calliphoridae), Chrysomya (Achoetandrus) rufifacies (Diptera: Calliphoridae), Lucilia cuprina (Diptera: Calliphoridae), and Musca domestica (Diptera: Muscidae), based on images of posterior spiracles. AlexNet ran fastest while maintaining this performance, offering a good balance between accuracy and speed, and was therefore selected for the testing step. Its confusion matrix showed misclassification between C. megacephala and C. (Achoetandrus) rufifacies, as well as between C. megacephala and L. cuprina; no misclassification was found for M. domestica. In addition, we created a web-application platform called thefly.ai to help users identify fly maggot species in their own images using our classification model. The results of this study can be extended to further species using other types of images, and the model can underpin identification features in mobile applications. This study is a crucial step toward integrating biology and AI technology in a novel platform for forensic investigation.
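A comparison of this kind hinges on giving each architecture the same classification head. The sketch below shows one hedged way to set that up with torchvision; the four-class head matches the study's species count, but the training loop, data, and hyperparameters are omitted.

```python
# Sketch: the four architectures compared in the study, each fitted with a
# common 4-class transfer-learning head (training loop omitted).
import torch.nn as nn
import torchvision.models as tvm

NUM_CLASSES = 4  # the four fly species

def build(name: str) -> nn.Module:
    if name == "resnet101":
        m = tvm.resnet101(weights="DEFAULT")
        m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)
    elif name == "densenet161":
        m = tvm.densenet161(weights="DEFAULT")
        m.classifier = nn.Linear(m.classifier.in_features, NUM_CLASSES)
    elif name == "vgg19_bn":
        m = tvm.vgg19_bn(weights="DEFAULT")
        m.classifier[6] = nn.Linear(m.classifier[6].in_features, NUM_CLASSES)
    else:  # alexnet, the fastest model in the study's comparison
        m = tvm.alexnet(weights="DEFAULT")
        m.classifier[6] = nn.Linear(m.classifier[6].in_features, NUM_CLASSES)
    return m

candidates = {n: build(n)
              for n in ("resnet101", "densenet161", "vgg19_bn", "alexnet")}
```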
Collapse
|
30
|
Luo CY, Pearson P, Xu G, Rich SM. A Computer Vision-Based Approach for Tick Identification Using Deep Learning Models. INSECTS 2022; 13:116. [PMID: 35206690 PMCID: PMC8879515 DOI: 10.3390/insects13020116] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/06/2021] [Revised: 01/18/2022] [Accepted: 01/19/2022] [Indexed: 12/21/2022]
Abstract
A wide range of pathogens, including bacteria, viruses, and parasites, can be transmitted by ticks and can cause diseases such as Lyme disease, anaplasmosis, and Rocky Mountain spotted fever. Landscape and climate changes are driving the geographic range expansion of important tick species. The morphological identification of ticks is critical for assessing disease risk; however, this process is time-consuming, costly, and requires qualified taxonomic specialists. To address this issue, we constructed a tick identification tool that can differentiate the most commonly encountered human-biting ticks, Amblyomma americanum, Dermacentor variabilis, and Ixodes scapularis, by implementing artificial intelligence methods with deep learning algorithms. Many convolutional neural network (CNN) models (such as VGG, ResNet, or Inception) have been used for image recognition, but their application to tick identification remains very limited. Here, we describe modified CNN-based models that were trained on a large-scale, molecularly verified dataset to identify tick species. The best CNN model achieved 99.5% accuracy on the test set. These results demonstrate that a computer vision system is a potential alternative tool for prescreening ticks for identification and earlier diagnosis of disease risk, and as such could be a valuable resource for health professionals.
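Reporting a test-set accuracy such as the 99.5% above is usually paired with a per-species confusion matrix. The sketch below is a generic evaluation loop, assuming a PyTorch classifier and data loader; it is not the authors' code.

```python
# Sketch: test-set accuracy and a per-species confusion matrix for a
# three-class tick classifier (model and loader are placeholders).
import torch
from sklearn.metrics import accuracy_score, confusion_matrix

def evaluate(model, loader, device="cpu"):
    """Returns (accuracy, confusion_matrix) over the whole loader."""
    model.eval()
    y_true, y_pred = [], []
    with torch.no_grad():
        for images, labels in loader:
            logits = model(images.to(device))
            y_pred.extend(logits.argmax(dim=1).cpu().tolist())
            y_true.extend(labels.tolist())
    return accuracy_score(y_true, y_pred), confusion_matrix(y_true, y_pred)
```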
Collapse
Affiliation(s)
| | | | | | - Stephen M. Rich
- Department of Microbiology, University of Massachusetts, Amherst, MA 01003, USA; (C.-Y.L.); (P.P.); (G.X.)
| |
Collapse
|
31
|
Reimer LC, Sardà Carbasse J, Koblitz J, Ebeling C, Podstawka A, Overmann J. BacDive in 2022: the knowledge base for standardized bacterial and archaeal data. Nucleic Acids Res 2022; 50:D741-D746. [PMID: 34718743 PMCID: PMC8728306 DOI: 10.1093/nar/gkab961] [Citation(s) in RCA: 91] [Impact Index Per Article: 30.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2021] [Revised: 09/30/2021] [Accepted: 10/05/2021] [Indexed: 11/24/2022] Open
Abstract
The bacterial metadatabase BacDive (https://bacdive.dsmz.de) has developed into a leading database for standardized prokaryotic data at the strain level. With its current release (07/2021), the database offers information for 82,892 bacterial and archaeal strains, covering taxonomy, morphology, cultivation, metabolism, origin, and sequence information within 1,048 data fields. By integrating high-quality data from additional culture collections as well as detailed information from species descriptions, the amount of data provided has increased by 30% over the past three years. A newly developed query builder tool in the advanced search now allows complex database queries: bacterial strains can be searched systematically by combinations of their attributes, e.g. growth and metabolic features for biotechnological applications, or to identify gaps in present knowledge about bacteria. A new interactive dashboard provides a statistical overview of the most important data fields. Additional new features are improved genomic sequence data, integrated NCBI TaxIDs, and links to BacMedia, the new sister database on cultivation media. To improve the findability and interpretation of data through search engines, data in BacDive are annotated with bioschemas.org terms.
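Programmatic access of the sort the advanced search enables is also possible from scripts. The sketch below assumes the DSMZ's Python client (`pip install bacdive`) and its search/retrieve interface; the exact method names and credential handling may differ between releases, so treat this as a hypothetical illustration rather than a definitive recipe.

```python
# Hypothetical sketch of querying BacDive via the DSMZ Python client;
# interface details are an assumption and may vary by client version.
from bacdive import BacdiveClient

# A registered BacDive account is required; credentials are placeholders.
client = BacdiveClient("user@example.org", "password")

count = client.search(taxonomy="Bacillus subtilis")   # strain-level query
print(f"{count} strains matched")

for strain in client.retrieve():                       # iterate result records
    # Each record is a nested dict of the standardized data fields.
    print(strain.get("General", {}).get("BacDive-ID"))
```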
Collapse
Affiliation(s)
- Lorenz Christian Reimer
- Leibniz Institute DSMZ-German Collection of Microorganisms and Cell Cultures, Braunschweig, Germany
| | - Joaquim Sardà Carbasse
- Leibniz Institute DSMZ-German Collection of Microorganisms and Cell Cultures, Braunschweig, Germany
| | - Julia Koblitz
- Leibniz Institute DSMZ-German Collection of Microorganisms and Cell Cultures, Braunschweig, Germany
| | - Christian Ebeling
- Leibniz Institute DSMZ-German Collection of Microorganisms and Cell Cultures, Braunschweig, Germany
| | - Adam Podstawka
- Leibniz Institute DSMZ-German Collection of Microorganisms and Cell Cultures, Braunschweig, Germany
| | - Jörg Overmann
- Leibniz Institute DSMZ-German Collection of Microorganisms and Cell Cultures, Braunschweig, Germany
| |
Collapse
|
32
|
Identification of Oil Tea (Camellia oleifera C.Abel) Cultivars Using EfficientNet-B4 CNN Model with Attention Mechanism. FORESTS 2021. [DOI: 10.3390/f13010001] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/02/2023]
Abstract
Cultivar identification is a basic task in oil tea (Camellia oleifera C.Abel) breeding, quality analysis, and the adjustment of industrial structure. However, because the differences in texture, shape, and color among oil tea cultivars are usually inconspicuous and subtle, identifying oil tea cultivars can be a significant challenge. The main goal of this study is to propose an automatic and accurate method for identifying oil tea cultivars. We build a new deep learning model, called EfficientNet-B4-CBAM, for this purpose. First, 4,725 images containing four cultivars were collected to build an oil tea cultivar identification dataset. EfficientNet-B4 was selected as the base model, and a Convolutional Block Attention Module (CBAM) was integrated into it to form EfficientNet-B4-CBAM, improving the model's ability to focus on fruit areas and to express the information they contain. Finally, the cultivar identification capability of EfficientNet-B4-CBAM was tested on the test dataset and compared with InceptionV3, VGG16, ResNet50, EfficientNet-B4, and EfficientNet-B4-SE. The experimental results showed that the EfficientNet-B4-CBAM model achieves an overall accuracy of 97.02% and a kappa coefficient of 0.96, higher than the other methods in the comparative experiments. In addition, gradient-weighted class activation mapping (Grad-CAM) visualization showed that EfficientNet-B4-CBAM pays more attention to the fruit areas that play a key role in cultivar identification. This study provides new, effective strategies and a theoretical basis for applying deep learning to oil tea cultivar identification, and offers technical support for the automatic identification and non-destructive testing of oil tea cultivars.
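For orientation, the sketch below implements a generic Convolutional Block Attention Module of the kind inserted into EfficientNet-B4 here; the reduction ratio and spatial kernel size follow common CBAM defaults, not necessarily the paper's settings.

```python
# Minimal PyTorch sketch of a CBAM block: channel attention followed by
# spatial attention. Defaults (reduction=16, kernel=7) are common choices,
# not necessarily those used in the study.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16, kernel: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(               # shared channel MLP
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1))
        self.spatial = nn.Conv2d(2, 1, kernel, padding=kernel // 2)

    def forward(self, x):
        # Channel attention: pool over space, reweight each channel.
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention: pool over channels, reweight each location.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```

Inserting such a block after a backbone stage is what lets Grad-CAM-style visualizations concentrate on the fruit regions, as the abstract describes.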
Collapse
|
33
|
The Detection of Thread Roll's Margin Based on Computer Vision. SENSORS 2021; 21:s21196331. [PMID: 34640651 PMCID: PMC8512785 DOI: 10.3390/s21196331] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/22/2021] [Revised: 09/11/2021] [Accepted: 09/17/2021] [Indexed: 11/16/2022]
Abstract
The automatic detection of a thread roll's margin is one of the key problems in the textile field. Because the traditional detection method, based on the thread's tension, has the disadvantages of high cost and low reliability, this paper proposes a technology that installs a camera on a mobile robot and uses computer vision to detect the margin, defined as the difference between the thread roll's radius and the bobbin's radius. First, we capture images of the thread roll's end surface. Second, we obtain the bobbin's image coordinates by convolving the image with a Circle Gradient Operator. Third, we fit the thread roll's and bobbin's contours to ellipses, then delete false detections according to the bobbin's image coordinates. Finally, we rectify every sub-image of the thread roll by a perspective transformation and establish the conversion between actual size and pixel size; the difference between the two concentric circles' radii is the thread roll's margin. However, false detections occur, and the error can exceed 19.4 mm when the margin is small. To improve precision and remove false detections, we also use deep learning to detect the thread roll's and bobbin's radii, from which the margin is calculated, and we fuse the two results. Because the deep learning method produces some false detections as well, we additionally estimate the margin from the thread consumption speed to eliminate false detections completely. Lastly, we use a Kalman filter to fuse the measured and estimated values; the average error is less than 5.7 mm.
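The final fusion step reduces to a one-dimensional Kalman filter whose prediction comes from the thread consumption speed and whose measurement comes from vision. The sketch below illustrates that recursion; all noise parameters and initial values are illustrative assumptions, not the paper's calibrated settings.

```python
# Sketch: 1-D Kalman filter fusing vision-measured margins with a margin
# predicted from thread consumption speed. x0/p0/q/r are illustrative.
def kalman_fuse(margins_measured, consumption_per_step,
                x0=50.0, p0=25.0, q=0.5, r=9.0):
    """margins_measured: per-step vision estimates (mm);
    consumption_per_step: expected margin decrease per step (mm)."""
    x, p = x0, p0
    fused = []
    for z in margins_measured:
        # Predict: margin shrinks as thread is consumed.
        x -= consumption_per_step
        p += q
        # Update with the vision measurement z.
        k = p / (p + r)                 # Kalman gain
        x += k * (z - x)
        p *= (1.0 - k)
        fused.append(x)
    return fused

# Example: noisy measurements around a margin shrinking by 0.2 mm per step.
print(kalman_fuse([49.8, 49.5, 49.7, 49.1], consumption_per_step=0.2))
```

Because the gain k weights the measurement by its reliability relative to the prediction, occasional vision false detections are damped rather than propagated, which is how the reported average error stays below 5.7 mm.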
Collapse
|