1. Otesteanu CF, Caldelari R, Heussler V, Sznitman R. Machine learning for predicting Plasmodium liver stage development in vitro using microscopy imaging. Comput Struct Biotechnol J 2024; 24:334-342. [PMID: 38690550] [PMCID: PMC11059334] [DOI: 10.1016/j.csbj.2024.04.029]
Abstract
Malaria, a significant global health challenge, is caused by Plasmodium parasites. The Plasmodium liver stage plays a pivotal role in the establishment of the infection. This study focuses on the liver stage development of the model organism Plasmodium berghei, employing fluorescent microscopy imaging and convolutional neural networks (CNNs) for analysis. Convolutional neural networks have recently been proposed as a viable option for tasks such as malaria detection, prediction of host-pathogen interactions, and drug discovery. Our research aimed to predict the transition of Plasmodium-infected liver cells to the merozoite stage, a key developmental phase, 15 hours in advance. We collected and analyzed hourly imaging data over a span of at least 38 hours from 400 sequences, encompassing 502 parasites. Our method was compared to human annotations to validate its efficacy. Performance metrics, including the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity, were evaluated on an independent test dataset. The outcomes revealed an AUC of 0.873, a sensitivity of 84.6%, and a specificity of 83.3%, underscoring the potential of our CNN-based framework to predict the liver stage development of P. berghei. These findings not only demonstrate the feasibility of our methodology but could also contribute to a broader understanding of parasite biology.
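The reported evaluation metrics can be computed for any binary classifier of this kind; the short sketch below (with hypothetical labels and predicted probabilities, not data from the study) shows one standard way to obtain AUC, sensitivity, and specificity with scikit-learn.

```python
# Minimal sketch: AUC, sensitivity and specificity for a binary classifier.
# y_true and y_prob are hypothetical placeholders, not data from the paper.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                   # 1 = develops to merozoite stage
y_prob = np.array([0.1, 0.4, 0.8, 0.7, 0.9, 0.3, 0.6, 0.2])   # CNN output probabilities

auc = roc_auc_score(y_true, y_prob)                 # area under the ROC curve
y_pred = (y_prob >= 0.5).astype(int)                # threshold at 0.5
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                        # true-positive rate
specificity = tn / (tn + fp)                        # true-negative rate
print(f"AUC={auc:.3f}  sensitivity={sensitivity:.1%}  specificity={specificity:.1%}")
```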
Affiliation(s)
- Corin F. Otesteanu
- Artificial Intelligence in Medicine group, University of Bern, Switzerland
- Reto Caldelari
- Institute of Cell Biology, University of Bern, Switzerland
- Raphael Sznitman
- Artificial Intelligence in Medicine group, University of Bern, Switzerland
2. Luo Y, Xu Y, Wang C, Li Q, Fu C, Jiang H. ResNeXt-CC: a novel network based on cross-layer deep-feature fusion for white blood cell classification. Sci Rep 2024; 14:18439. [PMID: 39117714] [PMCID: PMC11310349] [DOI: 10.1038/s41598-024-69076-1]
Abstract
Accurate diagnosis of white blood cells from cytopathological images is a crucial step in evaluating leukaemia. In recent years, image classification methods based on fully convolutional networks have drawn extensive attention and achieved competitive performance in medical image classification. In this paper, we propose a white blood cell classification network called ResNeXt-CC for cytopathological images. First, we transform cytopathological images from the RGB color space to the HSV color space so as to precisely extract the texture features, color changes, and other details of white blood cells. Second, since cell classification primarily relies on distinguishing local characteristics, we design a cross-layer deep-feature fusion module to enhance our ability to extract discriminative information. Third, an efficient attention mechanism based on the ECANet module is used to strengthen the extraction of fine cell details. Finally, we combine a modified softmax loss function with the center loss function to train the network, thereby effectively addressing the problem of class imbalance and improving network performance. Experimental results on the C-NMC 2019 dataset show that the proposed method offers clear advantages over existing classification methods (ResNet-50, Inception-V3, DenseNet121, VGG16, CrossViT, Token-to-Token ViT, Deep ViT, and Simple ViT), improving accuracy by 5.5-20.43%, F1-score by 3.6-23.56%, AUROC by 3.5-25.71%, and specificity by 8.1-36.98%, respectively.
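The ECANet-style channel attention mentioned here is typically a very small module: global average pooling, a 1-D convolution across channels, and a sigmoid gate. The PyTorch sketch below follows that generic ECA design; the kernel size and shapes are assumptions, and it is not the ResNeXt-CC implementation itself.

```python
import torch
import torch.nn as nn

class ECAAttention(nn.Module):
    """Generic ECA-style channel attention: global average pooling, a 1-D
    convolution across channels, and a sigmoid gate (a sketch, not ResNeXt-CC)."""
    def __init__(self, k_size: int = 3):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.avg_pool(x)                                 # (b, c, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(1, 2))         # (b, 1, c): conv across channels
        y = self.sigmoid(y.transpose(1, 2).unsqueeze(-1))    # back to (b, c, 1, 1)
        return x * y.expand_as(x)                            # re-weight each channel

feat = torch.randn(2, 64, 32, 32)        # hypothetical feature map
print(ECAAttention()(feat).shape)        # torch.Size([2, 64, 32, 32])
```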
Affiliation(s)
- Yang Luo
- School of Artificial Intelligence, Anshan Normal University, Anshan, 114007, Liaoning, China
- Ying Xu
- Anshan Central Hospital, Anshan, 114000, Liaoning, China
- Changbin Wang
- School of Artificial Intelligence, Anshan Normal University, Anshan, 114007, Liaoning, China
- Qiuju Li
- School of Artificial Intelligence, Anshan Normal University, Anshan, 114007, Liaoning, China
- Chong Fu
- School of Computer Science and Engineering, Northeastern University, Shenyang, 110819, Liaoning, China
- Engineering Research Center of Security Technology of Complex Network System, Ministry of Education, Shenyang, 110819, Liaoning, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, 110819, Liaoning, China
- Hongyang Jiang
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, Guangdong, China
3. Moysis E, Brown BJ, Shokunbi W, Manescu P, Fernandez-Reyes D. Leveraging deep learning for detecting red blood cell morphological changes in blood films from children with severe malaria anaemia. Br J Haematol 2024; 205:699-710. [PMID: 38894606] [DOI: 10.1111/bjh.19599]
Abstract
In sub-Saharan Africa, acute-onset severe malaria anaemia (SMA) is a critical challenge, particularly affecting children under five. The acute drop in haematocrit in SMA is thought to be driven by an increased phagocytotic pathological process in the spleen, leading to the presence of distinct red blood cells (RBCs) with altered morphological characteristics. We hypothesized that these RBCs could be detected systematically and at scale in peripheral blood films (PBFs) by harnessing the capabilities of deep learning models. Assessment of PBFs by a microscopist does not scale for this task and is subject to variability. Here we introduce a deep learning model, leveraging a weakly supervised Multiple Instance Learning framework, to Identify SMA (MILISMA) through the presence of morphologically changed RBCs. MILISMA achieved a classification accuracy of 83% (receiver operating characteristic area under the curve [AUC] of 87%; precision-recall AUC of 76%). More importantly, MILISMA's capabilities extend to identifying statistically significant morphological distinctions (p < 0.01) in RBC descriptors. Our findings are enriched by visual analyses, which underscore the unique morphological features of SMA-affected RBCs when compared to non-SMA cells. This model-aided detection and characterization of RBC alterations could enhance the understanding of SMA's pathology and refine SMA diagnostic and prognostic evaluation processes at scale.
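Weakly supervised multiple-instance learning of this kind is commonly implemented with attention pooling over per-cell embeddings, so the blood film (bag) is classified from its constituent RBC crops (instances). The PyTorch sketch below illustrates that general construction; the dimensions are arbitrary and it is not the MILISMA architecture.

```python
import torch
import torch.nn as nn

class AttentionMILHead(nn.Module):
    """Sketch of attention-based multiple-instance pooling: each RBC crop yields an
    embedding, an attention weight is learned per instance, and the weighted mean
    embedding is classified at the blood-film (bag) level."""
    def __init__(self, emb_dim: int = 128, att_dim: int = 64):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(emb_dim, att_dim), nn.Tanh(), nn.Linear(att_dim, 1)
        )
        self.classifier = nn.Linear(emb_dim, 1)   # SMA vs. non-SMA logit

    def forward(self, instance_embeddings: torch.Tensor):
        # instance_embeddings: (num_cells, emb_dim) for one blood film
        a = torch.softmax(self.attention(instance_embeddings), dim=0)  # (num_cells, 1)
        bag = (a * instance_embeddings).sum(dim=0)                     # (emb_dim,)
        return self.classifier(bag), a                                 # bag logit + weights

embeddings = torch.randn(500, 128)    # hypothetical embeddings of 500 RBC crops
logit, weights = AttentionMILHead()(embeddings)
```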
Affiliation(s)
- Ezer Moysis
- Department of Computer Science, Faculty of Engineering Sciences, University College London, London, UK
- Biobele J Brown
- Department of Paediatrics, College of Medicine University of Ibadan, University College Hospital, Ibadan, Nigeria
- Childhood Malaria Research Group, College of Medicine University of Ibadan, University College Hospital, Ibadan, Nigeria
- African Computational Sciences Centre for Health and Development, University of Ibadan, Ibadan, Nigeria
- Wuraola Shokunbi
- Childhood Malaria Research Group, College of Medicine University of Ibadan, University College Hospital, Ibadan, Nigeria
- Department of Haematology, College of Medicine University of Ibadan, University College Hospital, Ibadan, Nigeria
- Petru Manescu
- Department of Computer Science, Faculty of Engineering Sciences, University College London, London, UK
- Delmiro Fernandez-Reyes
- Department of Computer Science, Faculty of Engineering Sciences, University College London, London, UK
- Department of Paediatrics, College of Medicine University of Ibadan, University College Hospital, Ibadan, Nigeria
- Childhood Malaria Research Group, College of Medicine University of Ibadan, University College Hospital, Ibadan, Nigeria
- African Computational Sciences Centre for Health and Development, University of Ibadan, Ibadan, Nigeria
4. Sun M, Zou W, Wang Z, Wang S, Sun Z. An Automated Framework for Histopathological Nucleus Segmentation With Deep Attention Integrated Networks. IEEE/ACM Trans Comput Biol Bioinform 2024; 21:995-1006. [PMID: 37018302] [DOI: 10.1109/tcbb.2022.3233400]
Abstract
Clinical management and accurate disease diagnosis are evolving from the qualitative to the quantitative stage, particularly at the cellular level. However, the manual process of histopathological analysis is labor-intensive and time-consuming, and its accuracy is limited by the experience of the pathologist. Therefore, deep learning-empowered computer-aided diagnosis (CAD) is emerging as an important topic in digital pathology to streamline the standard process of automatic tissue analysis. Automated, accurate nucleus segmentation can not only help pathologists make more accurate diagnoses while saving time and labor, but also achieve consistent and efficient diagnostic results. However, nucleus segmentation is susceptible to staining variation, uneven nucleus intensity, background noise, and nucleus-tissue differences in biopsy specimens. To solve these problems, we propose Deep Attention Integrated Networks (DAINets), built mainly on a self-attention based spatial attention module and a channel attention module. In addition, we introduce a feature fusion branch to fuse high-level representations with low-level features for multi-scale perception, and employ the marker-based watershed algorithm to refine the predicted segmentation maps. Furthermore, in the testing phase, we design Individual Color Normalization (ICN) to settle the dyeing variation problem in specimens. Quantitative evaluations on the multi-organ nucleus dataset demonstrate the strength of our automated nucleus segmentation framework.
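Marker-based watershed refinement of a predicted nucleus probability map can be sketched with SciPy and scikit-image as below; the threshold and peak-distance values are illustrative assumptions, and this is a generic post-processing step rather than the DAINets pipeline.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def refine_with_watershed(prob_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Split touching nuclei in a binary mask derived from a probability map."""
    mask = prob_map > threshold                       # foreground from the network output
    distance = ndi.distance_transform_edt(mask)       # distance to background
    peaks = peak_local_max(distance, min_distance=5, labels=mask.astype(int))  # seed points
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=mask)   # labelled instance map

labels = refine_with_watershed(np.random.rand(256, 256))   # hypothetical probability map
```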
5. Tekle E, Dese K, Girma S, Adissu W, Krishnamoorthy J, Kwa T. DeepLeish: a deep learning based support system for the detection of Leishmaniasis parasite from Giemsa-stained microscope images. BMC Med Imaging 2024; 24:152. [PMID: 38890604] [PMCID: PMC11186139] [DOI: 10.1186/s12880-024-01333-1]
Abstract
BACKGROUND: Leishmaniasis is a vector-borne neglected parasitic disease caused by protozoa of the genus Leishmania. Of the 30 Leishmania species, 21 cause human infections that affect the skin and internal organs. Around 700,000 to 1,000,000 new cases and 26,000 to 65,000 deaths are reported worldwide annually. The disease has three clinical presentations, namely cutaneous, muco-cutaneous, and visceral leishmaniasis, which affect the skin, mucosal membranes, and internal organs, respectively. The relapsing behavior of the disease limits the efficiency of its diagnosis and treatment. Common diagnostic approaches follow subjective, error-prone, repetitive processes. Despite an ever-pressing need for accurate detection of Leishmaniasis, research conducted so far is scarce. The main aim of the current research is therefore to develop an artificial intelligence-based detection tool for Leishmaniasis from Giemsa-stained microscopic images using deep learning. METHODS: Stained microscopic images were acquired locally and labeled by experts. The images were augmented using different methods to prevent overfitting and improve the generalizability of the system. Fine-tuned Faster R-CNN, SSD, and YOLOv5 models were used for object detection, and mean average precision (mAP), precision, and recall were calculated to evaluate and compare their performance. RESULTS: The fine-tuned YOLOv5 outperformed the other models, Faster R-CNN and SSD, with mAP scores of 73%, 54%, and 57%, respectively. CONCLUSION: The developed YOLOv5 model can be tested in clinics to assist laboratorists in diagnosing Leishmaniasis from microscopic images. Particularly in low-resourced healthcare facilities with few qualified medical professionals or hematologists, our AI support system can help reduce diagnosis time, workload, and misdiagnosis. Furthermore, the dataset we collected will be shared with other researchers who seek to improve upon the detection system for the parasite. The current model detects the parasites even in the presence of monocytes, but accuracy sometimes decreases owing to size differences between parasite cells and the surrounding blood cells. Incorporating cascaded networks and quantifying the parasite load in future work should overcome the limitations of the current system.
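Inference with a fine-tuned YOLOv5 detector of the kind described usually reduces to loading custom weights and filtering detections by confidence. The sketch below uses the public Ultralytics YOLOv5 hub interface with a hypothetical weights file (leishmania_best.pt) and a placeholder image path; it is not the authors' released model.

```python
import torch

# Load a custom-trained YOLOv5 model through the public Ultralytics hub API.
# "leishmania_best.pt" is a hypothetical weights file standing in for the fine-tuned model.
model = torch.hub.load("ultralytics/yolov5", "custom", path="leishmania_best.pt")
model.conf = 0.25                                   # detection confidence threshold

results = model("giemsa_field_of_view.jpg")         # hypothetical microscope image
detections = results.pandas().xyxy[0]               # boxes, confidences, class names
print(detections[["xmin", "ymin", "xmax", "ymax", "confidence", "name"]])
```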
Affiliation(s)
- Eden Tekle
- School of Biomedical Engineering, Jimma University, Jimma, Ethiopia
- Kokeb Dese
- School of Biomedical Engineering, Jimma University, Jimma, Ethiopia
- Department of Chemical and Biomedical Engineering, West Virginia University, Morgantown, WV, 26505, USA
- Selfu Girma
- Pathology Unit, Armauer Hansen Research Institute, Addis Ababa, Ethiopia
- Wondimagegn Adissu
- School of Medical Laboratory Sciences, Institute of Health, Jimma University, Jimma, Ethiopia
- Clinical Trial Unit, Jimma University, Jimma, Ethiopia
- Timothy Kwa
- School of Biomedical Engineering, Jimma University, Jimma, Ethiopia
- Medtronic MiniMed, 18000 Devonshire St. Northridge, Los Angeles, CA, USA
6. Asghar R, Kumar S, Shaukat A, Hynds P. Classification of white blood cells (leucocytes) from blood smear imagery using machine and deep learning models: A global scoping review. PLoS One 2024; 19:e0292026. [PMID: 38885231] [PMCID: PMC11182552] [DOI: 10.1371/journal.pone.0292026]
Abstract
Machine learning (ML) and deep learning (DL) models are being increasingly employed for medical imagery analyses, with both approaches used to enhance the accuracy of classification/prediction in the diagnoses of various cancers, tumors and bloodborne diseases. To date however, no review of these techniques and their application(s) within the domain of white blood cell (WBC) classification in blood smear images has been undertaken, representing a notable knowledge gap with respect to model selection and comparison. Accordingly, the current study sought to comprehensively identify, explore and contrast ML and DL methods for classifying WBCs. Following development and implementation of a formalized review protocol, a cohort of 136 primary studies published between January 2006 and May 2023 were identified from the global literature, with the most widely used techniques and best-performing WBC classification methods subsequently ascertained. Studies derived from 26 countries, with the highest numbers from high-income countries, including the United States (n = 32) and the Netherlands (n = 26). While WBC classification was originally rooted in conventional ML, there has been a notable shift toward the use of DL, and particularly convolutional neural networks (CNN), with 54.4% of identified studies (n = 74) including the use of CNNs, particularly in concurrence with larger datasets and bespoke features (e.g., parallel data pre-processing, feature selection, and extraction). While some conventional ML models achieved up to 99% accuracy, accuracy was shown to decrease with decreasing dataset size. Deep learning models exhibited improved performance for more extensive datasets and higher levels of accuracy in concurrence with increasingly large datasets. Availability of appropriate datasets remains a primary challenge, potentially resolvable using data augmentation techniques. Moreover, medical training of computer science researchers is recommended to improve current understanding of leucocyte structure and subsequent selection of appropriate classification models. Likewise, it is critical that future health professionals be made aware of the power, efficacy, precision and applicability of computer science, soft computing and artificial intelligence contributions to medicine, and particularly in areas like medical imaging.
Affiliation(s)
- Rabia Asghar
- Spatiotemporal Environmental Epidemiology Research (STEER) Group, Technological University Dublin, Dublin, Ireland
- Sanjay Kumar
- National University of Sciences and Technology (NUST), Islamabad, Pakistan
- Arslan Shaukat
- National University of Sciences and Technology (NUST), Islamabad, Pakistan
- Paul Hynds
- Spatiotemporal Environmental Epidemiology Research (STEER) Group, Technological University Dublin, Dublin, Ireland
7. Bai T, Luo J, Zhou S, Lu Y, Wang Y. Vehicle-Type Recognition Method for Images Based on Improved Faster R-CNN Model. Sensors (Basel) 2024; 24:2650. [PMID: 38676267] [PMCID: PMC11053705] [DOI: 10.3390/s24082650]
Abstract
The rapid increase in the number of vehicles has led to increasing traffic congestion, traffic accidents, and motor vehicle crime rates, and the management of various parking lots has also become increasingly challenging. Vehicle-type recognition technology can reduce the human workload in vehicle management operations, so the application of image technology for vehicle-type recognition is of great significance for integrated traffic management. In this paper, an improved Faster Region-based Convolutional Neural Network (Faster R-CNN) model was proposed for vehicle-type recognition. Firstly, the output features of different convolution layers were combined to improve recognition accuracy. Then, the average precision (AP) of the recognition model was improved through the contextual features of the original image and an object bounding-box optimization strategy. Finally, a comparison experiment used a vehicle image dataset of three vehicle types: cars, sports utility vehicles (SUVs), and vans. The experimental results show that the improved recognition model can effectively identify vehicle types in images. The AP for the three vehicle types is 83.2%, 79.2%, and 78.4%, respectively, and the mean average precision (mAP) is 1.7% higher than that of the traditional Faster R-CNN model.
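Adapting a stock Faster R-CNN to a small set of vehicle classes typically starts by swapping the box-predictor head, as in the torchvision sketch below (three vehicle types plus background). This is only a generic starting point; the paper's improvements (multi-layer feature combination, contextual features, and bounding-box optimization) are not included.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Start from a COCO-pretrained Faster R-CNN and replace the classification head
# for 3 vehicle classes (car, SUV, van) plus the implicit background class.
num_classes = 4
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

model.eval()  # training would instead iterate over (images, targets) batches
```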
Grants
- 62171073, 61971079, U21A20447 National Natural Science Foundation of China
- 2020YFQ0025, 2020YJ0151 Department of Science and Technology of Sichuan Province
- 210022-01SZ, 200020-01SZ, 200028-01SZ, 200027-01SZ Project of Central Nervous System Drug Key Laboratory of Sichuan Province
- CSTB2022NSCQ-MSX1523, cstc2019jcyj-msxmX0275, cstc2020jcyj-cxttX0002, cstc2019jcyjmsxmX0666, cstc2021jscx-gksbx0051, cstc2021jcyj-bsh0221 Nature Science Foundation of Chongqing
- 2022MD713702 China Postdoctoral Science Foundation
- CSTB2022TIAD-KPX0062 Chongqing Technical Innovation and Application Development Special Project
- cstc2022jxj120036, CSTB2023JXJL-YFX0027 Chongqing Scientific Institution Incentive Performance Guiding Special Projects
- KJZD-k202000604, KJQN202100602, KJQN202100602, KJQN202000604 Science and Technology Research Project of Chongqing Education Commission
- 2022MK105 SAMR Science and Technology Program
- 2021ZKZD019 Key Research Project of Southwest Medical University
- 2021XM3010, 2021XM2051 Special support for Chongqing Postdoctoral Research Project
Affiliation(s)
- Tong Bai
- School of Optoelectronic Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Jiasai Luo
- School of Optoelectronic Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Sen Zhou
- Chongqing Academy of Metrology and Quality Inspection, Chongqing 401121, China
- Yi Lu
- School of Optoelectronic Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Yuanfa Wang
- School of Optoelectronic Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
8. Guemas E, Routier B, Ghelfenstein-Ferreira T, Cordier C, Hartuis S, Marion B, Bertout S, Varlet-Marie E, Costa D, Pasquier G. Automatic patient-level recognition of four Plasmodium species on thin blood smear by a real-time detection transformer (RT-DETR) object detection algorithm: a proof-of-concept and evaluation. Microbiol Spectr 2024; 12:e0144023. [PMID: 38171008] [PMCID: PMC10846087] [DOI: 10.1128/spectrum.01440-23]
Abstract
Malaria remains a global health problem, with 247 million cases and 619,000 deaths in 2021. Diagnosis of the Plasmodium species is important for administering the appropriate treatment, and the gold-standard diagnosis for accurate species identification remains the thin blood smear. Nevertheless, this method is time-consuming and requires highly skilled and trained microscopists. To overcome these issues, new diagnostic tools based on deep learning are emerging. This study aimed to evaluate the performance of a real-time detection transformer (RT-DETR) object detection algorithm in discriminating Plasmodium species on thin blood smear images. The algorithm was trained and validated on a data set consisting of 24,720 images from 475 thin blood smears, corresponding to 2,002,597 labels. Performance was calculated with a test data set of 4,508 images from 170 smears, corresponding to 358,825 labels, coming from six French university hospitals. At the patient level, the RT-DETR algorithm exhibited an overall accuracy of 79.4% (135/170), with a recall of 74% (40/54) and 81.9% (95/116) for negative and positive smears, respectively. Among Plasmodium-positive smears, the global accuracy was 82.7% (91/110), with a recall of 90% (38/42), 81.8% (18/22), and 76.1% (35/46) for P. falciparum, P. malariae, and P. ovale/vivax, respectively. The RT-DETR model achieved World Health Organization (WHO) competence level 2 for species identification. Moreover, the RT-DETR algorithm can run in real time on low-cost devices such as a smartphone and could be suitable for deployment in low-resource settings lacking microscopy experts.
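Patient-level figures such as these come from aggregating per-image detections into one call per smear and scoring the calls against the reference diagnosis. The sketch below illustrates such an aggregation with hypothetical detection counts and an assumed "most detected species wins" rule, which is not necessarily the decision rule used in the study.

```python
def smear_level_call(species_detections: dict, min_detections: int = 1) -> str:
    """Aggregate per-image parasite detections into a single call for one smear.
    The detection threshold and 'most detected species wins' rule are illustrative
    assumptions, not the decision rule reported in the paper."""
    positives = {sp: n for sp, n in species_detections.items() if n >= min_detections}
    return max(positives, key=positives.get) if positives else "negative"

# Hypothetical smears: predicted call vs. reference diagnosis
predictions = [smear_level_call({"P. falciparum": 42}),
               smear_level_call({}),
               smear_level_call({"P. ovale/vivax": 3, "P. malariae": 1})]
reference = ["P. falciparum", "negative", "P. malariae"]
accuracy = sum(p == r for p, r in zip(predictions, reference)) / len(reference)
print(f"patient-level accuracy = {accuracy:.1%}")
```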
Affiliation(s)
- Emilie Guemas
- Department of Parasitology and Mycology, Academic Hospital (CHU) of Toulouse, Toulouse, France
- Toulouse Institute for Infectious and Inflammatory Diseases (Infinity), CNRS UMR5051, INSERM UMR1291, UPS, Toulouse, France
- Baptiste Routier
- Laboratory of Parasitology-Mycology, EA7510 ESCAPE, University Hospital of Rouen, University of Rouen Normandie, Normandie, France
- Théo Ghelfenstein-Ferreira
- Université de Paris Cité, Laboratoire de Parasitologie-Mycologie, Groupe Hospitalier Saint-Louis-Lariboisière-Fernand-Widal, Assistance Publique-Hôpitaux de Paris (AP-HP), Paris, France
- Camille Cordier
- Laboratory of Parasitology-Mycology, INSERM U1285, Unité de Glycobiologie Structurale et Fonctionnelle (CNRS UMR 8576), University Hospital (CHU) of Lille, University of Lille, Lille, France
- Sophie Hartuis
- Nantes University, Academic Hospital (CHU) of Nantes, Cibles et Médicaments des Infections et de l'Immunité, IICiMed, UR1155, Nantes, France
- Bénédicte Marion
- Department of Physical Chemistry and Biophysics, Academic Hospital (CHU) of Montpellier, University of Montpellier, National Reference Centre (CNR) for Paludism, Montpellier, France
- Department of Parasitology/Mycology, Academic Hospital (CHU) of Montpellier, University of Montpellier, National Reference Centre (CNR) for Paludism, Montpellier, France
- Sébastien Bertout
- Laboratory of Parasitology/Mycology, UMI 233 TransVIHMI, University of Montpellier, IRD, INSERM U1175, Montpellier, France
- Emmanuelle Varlet-Marie
- Department of Physical Chemistry and Biophysics, Academic Hospital (CHU) of Montpellier, University of Montpellier, National Reference Centre (CNR) for Paludism, Montpellier, France
- Department of Parasitology/Mycology, Academic Hospital (CHU) of Montpellier, University of Montpellier, National Reference Centre (CNR) for Paludism, Montpellier, France
- Damien Costa
- Laboratory of Parasitology-Mycology, EA7510 ESCAPE, University Hospital of Rouen, University of Rouen Normandie, Normandie, France
- Grégoire Pasquier
- Department of Parasitology/Mycology, Academic Hospital (CHU) of Montpellier, University of Montpellier, National Reference Centre (CNR) for Paludism, Montpellier, France
9. Hassini H, Dorizzi B, Thellier M, Klossa J, Gottesman Y. Investigating the Joint Amplitude and Phase Imaging of Stained Samples in Automatic Diagnosis. Sensors (Basel) 2023; 23:7932. [PMID: 37765989] [PMCID: PMC10536387] [DOI: 10.3390/s23187932]
Abstract
The diagnosis of many diseases relies, at least as a first-line approach, on the analysis of blood smears acquired with a microscope. However, image quality is often insufficient for the automation of such processing. A promising improvement concerns the acquisition of enriched information on samples. In particular, Quantitative Phase Imaging (QPI) techniques, which allow the digitization of the phase in complement to the intensity, are attracting growing interest. Such imaging allows transparent objects that are not visible in the intensity image to be explored using the phase image alone. Another direction proposes using stained images to reveal some characteristics of the cells in the intensity image; in this case, the phase information is not exploited. In this paper, we examine the value of the bi-modal information provided by intensity and phase in a QPI acquisition when the samples are stained. We consider the problem of detecting parasitized red blood cells for diagnosing malaria from stained blood smears using a Deep Neural Network (DNN). Fourier Ptychographic Microscopy (FPM) is used as the computational microscopy framework to produce QPI images. We show that the bi-modal information improves detection performance by 4% over the intensity image alone when the convolutions in the DNN are implemented through a complex-valued formalism. This proves that the DNN can benefit from the enriched bi-modal information. We conjecture that these results should extend to other applications processed through QPI acquisition.
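A "complex-based" convolution over a QPI field can be realised with two real-valued convolutions applied to the real and imaginary parts, following the usual complex multiplication rule. The PyTorch sketch below illustrates that formalism only; it is not the network evaluated in the paper.

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """(a + ib) * (w + iv) = (a*w - b*v) + i(a*v + b*w), realised with two real convs."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3, padding: int = 1):
        super().__init__()
        self.conv_re = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        self.conv_im = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)

    def forward(self, real: torch.Tensor, imag: torch.Tensor):
        out_re = self.conv_re(real) - self.conv_im(imag)
        out_im = self.conv_re(imag) + self.conv_im(real)
        return out_re, out_im

# A QPI field: amplitude A and phase phi give real = A*cos(phi), imag = A*sin(phi).
amplitude, phase = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
real, imag = amplitude * torch.cos(phase), amplitude * torch.sin(phase)
out_re, out_im = ComplexConv2d(1, 16)(real, imag)
```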
Affiliation(s)
- Houda Hassini
- Samovar, Télécom SudParis, Institut Polytechnique de Paris, 91120 Palaiseau, France
- TRIBVN/T-Life, 92800 Puteaux, France
- Bernadette Dorizzi
- Samovar, Télécom SudParis, Institut Polytechnique de Paris, 91120 Palaiseau, France
- Marc Thellier
- AP-HP, Centre National de Référence du Paludisme, 75013 Paris, France
- Institut Pierre-Louis d’Épidémiologie et de Santé Publique, Sorbonne Université, INSERM, 75013 Paris, France
- Yaneck Gottesman
- Samovar, Télécom SudParis, Institut Polytechnique de Paris, 91120 Palaiseau, France
10. Yuan H, Wang Z, Wang Z, Zhang F, Guan D, Zhao R. Trends in forensic microbiology: From classical methods to deep learning. Front Microbiol 2023; 14:1163741. [PMID: 37065115] [PMCID: PMC10098119] [DOI: 10.3389/fmicb.2023.1163741]
Abstract
Forensic microbiology has been widely used in the diagnosis of causes and manner of death, identification of individuals, detection of crime locations, and estimation of the postmortem interval. However, the traditional method, microbial culture, has low efficiency, high resource consumption, and a low degree of quantitative analysis. With the development of high-throughput sequencing technology, advanced bioinformatics, and fast-evolving artificial intelligence, numerous machine learning models, such as RF, SVM, ANN, DNN, regression, PLS, ANOSIM, and ANOVA, have been established with the advancement of microbiome and metagenomic studies. Recently, deep learning models, including the convolutional neural network (CNN) model and CNN-derived models, have improved the accuracy of forensic prognosis using object detection techniques in microorganism image analysis. This review summarizes the application and development of forensic microbiology, as well as the research progress of machine learning (ML) and deep learning (DL) based on microbial genome sequencing and microbial images, and provides a future outlook on forensic microbiology.
Affiliation(s)
- Huiya Yuan
- Department of Forensic Analytical Toxicology, China Medical University School of Forensic Medicine, Shenyang, China
- Liaoning Province Key Laboratory of Forensic Bio-Evidence Science, Shenyang, China
- Ziwei Wang
- Department of Forensic Pathology, China Medical University School of Forensic Medicine, Shenyang, China
- Zhi Wang
- Department of Forensic Pathology, China Medical University School of Forensic Medicine, Shenyang, China
- Fuyuan Zhang
- Department of Forensic Pathology, China Medical University School of Forensic Medicine, Shenyang, China
- Dawei Guan
- Liaoning Province Key Laboratory of Forensic Bio-Evidence Science, Shenyang, China
- Department of Forensic Pathology, China Medical University School of Forensic Medicine, Shenyang, China
- Correspondence: Dawei Guan
- Rui Zhao
- Liaoning Province Key Laboratory of Forensic Bio-Evidence Science, Shenyang, China
- Department of Forensic Pathology, China Medical University School of Forensic Medicine, Shenyang, China
11. Kumar S, Arif T, Alotaibi AS, Malik MB, Manhas J. Advances Towards Automatic Detection and Classification of Parasites Microscopic Images Using Deep Convolutional Neural Network: Methods, Models and Research Directions. Arch Comput Methods Eng 2022; 30:2013-2039. [PMID: 36531561] [PMCID: PMC9734923] [DOI: 10.1007/s11831-022-09858-w]
Abstract
In the developing world, parasites are responsible for causing several serious health problems, with relatively high infection rates in human beings. The traditional manual light microscopy process of parasite recognition remains the gold-standard approach for the diagnosis of parasitic species, but it is time-consuming, highly tedious, and makes consistency difficult to maintain, even though it remains essential in parasitological classification for carrying out experimental observations. Therefore, it is meaningful to apply deep learning to address these challenges. Convolutional neural networks and digital slide scanning show promising results that can revolutionize the clinical parasitology laboratory by automating the process of classification and detection of parasites. Image analysis using deep learning methods has the potential to achieve high efficiency and accuracy. For this review, we conducted a thorough investigation in the field of image detection and classification of various parasites based on deep learning. Online databases and digital libraries such as ACM, IEEE, ScienceDirect, Springer, and Wiley Online Library were searched to identify sufficient related paper collections. After screening 200 research papers, 70 met our filtering criteria and became part of this study. This paper presents a comprehensive review of existing parasite classification and detection methods and models in chronological order, from traditional machine learning based techniques to deep learning based techniques. In this review, we also summarize the machine learning and deep learning methods along with dataset details, evaluation metrics, method limitations, and future scope over the past decade. The majority of the technical publications from 2012 to the present have been examined and summarized. In addition, we discuss the future directions and challenges of parasite classification and detection to help researchers understand the existing research gaps. Further, this review provides support to researchers who require an effective and comprehensive understanding of deep learning development techniques, research, and future trends in the field of parasite detection and classification.
Affiliation(s)
- Satish Kumar
- Department of Information Technology, BGSB University Rajouri, Rajouri, J&K 185131 India
- Tasleem Arif
- Department of Information Technology, BGSB University Rajouri, Rajouri, J&K 185131 India
- Majid B. Malik
- Department of Computer Sciences, BGSB University Rajouri, Rajouri, J&K 185131 India
- Jatinder Manhas
- Department of Computer Sciences & IT, University of Jammu, Jammu, J&K India
12. Diao Z, Kan L, Zhao Y, Yang H, Song J, Wang C, Liu Y, Zhang F, Xu T, Chen R, Ji Y, Wang X, Jing X, Xu J, Li Y, Ma B. Artificial intelligence-assisted automatic and index-based microbial single-cell sorting system for One-Cell-One-Tube. mLife 2022; 1:448-459. [PMID: 38818483] [PMCID: PMC10989846] [DOI: 10.1002/mlf2.12047]
Abstract
Identification, sorting, and sequencing of individual cells directly from in situ samples have great potential for in-depth analysis of the structure and function of microbiomes. In this work, based on an artificial intelligence (AI)-assisted object detection model for cell phenotype screening and a cross-interface contact method for single-cell exporting, we developed an automatic and index-based system called EasySort AUTO, where individual microbial cells are sorted and then packaged in a microdroplet and automatically exported in a precisely indexed, "One-Cell-One-Tube" manner. The target cell is automatically identified based on an AI-assisted object detection model and then mobilized via an optical tweezer for sorting. Then, a cross-interface contact microfluidic printing method that we developed enables the automated transfer of cells from the chip to the tube, which leads to coupling with subsequent single-cell culture or sequencing. The efficiency of the system for single-cell printing is >93%. The throughput of the system for single-cell printing is ~120 cells/h. Moreover, >80% of single cells of both yeast and Escherichia coli are culturable, suggesting the superior preservation of cell viability during sorting. Finally, AI-assisted object detection supports automated sorting of target cells with high accuracy from mixed yeast samples, which was validated by downstream single-cell proliferation assays. The automation, index maintenance, and vitality preservation of EasySort AUTO suggest its excellent application potential for single-cell sorting.
Affiliation(s)
- Zhidian Diao
- CAS Key Laboratory of Biofuels, Shandong Key Laboratory of Energy Genetics, Single-Cell Center, Qingdao Institute of Bioenergy and Bioprocess Technology, Chinese Academy of Sciences, Qingdao, China
- University of Chinese Academy of Sciences, Beijing, China
- Shandong Energy Institute, Qingdao, China
- Qingdao New Energy Shandong Laboratory, Qingdao, China
- Lingyan Kan
- CAS Key Laboratory of Biofuels, Shandong Key Laboratory of Energy Genetics, Single-Cell Center, Qingdao Institute of Bioenergy and Bioprocess Technology, Chinese Academy of Sciences, Qingdao, China
- Shandong Energy Institute, Qingdao, China
- Qingdao New Energy Shandong Laboratory, Qingdao, China
- Yilong Zhao
- CAS Key Laboratory of Biofuels, Shandong Key Laboratory of Energy Genetics, Single-Cell Center, Qingdao Institute of Bioenergy and Bioprocess Technology, Chinese Academy of Sciences, Qingdao, China
- Shandong Energy Institute, Qingdao, China
- Qingdao New Energy Shandong Laboratory, Qingdao, China
- Huaibo Yang
- Qingdao Single-Cell Biotechnology Co. Ltd., Qingdao, China
- Jingyun Song
- CAS Key Laboratory of Biofuels, Shandong Key Laboratory of Energy Genetics, Single-Cell Center, Qingdao Institute of Bioenergy and Bioprocess Technology, Chinese Academy of Sciences, Qingdao, China
- Shandong Energy Institute, Qingdao, China
- Qingdao New Energy Shandong Laboratory, Qingdao, China
- Chen Wang
- CAS Key Laboratory of Biofuels, Shandong Key Laboratory of Energy Genetics, Single-Cell Center, Qingdao Institute of Bioenergy and Bioprocess Technology, Chinese Academy of Sciences, Qingdao, China
- Shandong Energy Institute, Qingdao, China
- Qingdao New Energy Shandong Laboratory, Qingdao, China
- Yang Liu
- CAS Key Laboratory of Biofuels, Shandong Key Laboratory of Energy Genetics, Single-Cell Center, Qingdao Institute of Bioenergy and Bioprocess Technology, Chinese Academy of Sciences, Qingdao, China
- Shandong Energy Institute, Qingdao, China
- Qingdao New Energy Shandong Laboratory, Qingdao, China
- Fengli Zhang
- CAS Key Laboratory of Biofuels, Shandong Key Laboratory of Energy Genetics, Single-Cell Center, Qingdao Institute of Bioenergy and Bioprocess Technology, Chinese Academy of Sciences, Qingdao, China
- Shandong Energy Institute, Qingdao, China
- Qingdao New Energy Shandong Laboratory, Qingdao, China
- Teng Xu
- CAS Key Laboratory of Biofuels, Shandong Key Laboratory of Energy Genetics, Single-Cell Center, Qingdao Institute of Bioenergy and Bioprocess Technology, Chinese Academy of Sciences, Qingdao, China
- University of Chinese Academy of Sciences, Beijing, China
- Shandong Energy Institute, Qingdao, China
- Qingdao New Energy Shandong Laboratory, Qingdao, China
- Rongze Chen
- CAS Key Laboratory of Biofuels, Shandong Key Laboratory of Energy Genetics, Single-Cell Center, Qingdao Institute of Bioenergy and Bioprocess Technology, Chinese Academy of Sciences, Qingdao, China
- University of Chinese Academy of Sciences, Beijing, China
- Shandong Energy Institute, Qingdao, China
- Qingdao New Energy Shandong Laboratory, Qingdao, China
- Yuetong Ji
- CAS Key Laboratory of Biofuels, Shandong Key Laboratory of Energy Genetics, Single-Cell Center, Qingdao Institute of Bioenergy and Bioprocess Technology, Chinese Academy of Sciences, Qingdao, China
- Qingdao Single-Cell Biotechnology Co. Ltd., Qingdao, China
- Xixian Wang
- CAS Key Laboratory of Biofuels, Shandong Key Laboratory of Energy Genetics, Single-Cell Center, Qingdao Institute of Bioenergy and Bioprocess Technology, Chinese Academy of Sciences, Qingdao, China
- University of Chinese Academy of Sciences, Beijing, China
- Shandong Energy Institute, Qingdao, China
- Qingdao New Energy Shandong Laboratory, Qingdao, China
- Xiaoyan Jing
- CAS Key Laboratory of Biofuels, Shandong Key Laboratory of Energy Genetics, Single-Cell Center, Qingdao Institute of Bioenergy and Bioprocess Technology, Chinese Academy of Sciences, Qingdao, China
- University of Chinese Academy of Sciences, Beijing, China
- Shandong Energy Institute, Qingdao, China
- Qingdao New Energy Shandong Laboratory, Qingdao, China
- Jian Xu
- CAS Key Laboratory of Biofuels, Shandong Key Laboratory of Energy Genetics, Single-Cell Center, Qingdao Institute of Bioenergy and Bioprocess Technology, Chinese Academy of Sciences, Qingdao, China
- University of Chinese Academy of Sciences, Beijing, China
- Shandong Energy Institute, Qingdao, China
- Qingdao New Energy Shandong Laboratory, Qingdao, China
- Yuandong Li
- CAS Key Laboratory of Biofuels, Shandong Key Laboratory of Energy Genetics, Single-Cell Center, Qingdao Institute of Bioenergy and Bioprocess Technology, Chinese Academy of Sciences, Qingdao, China
- University of Chinese Academy of Sciences, Beijing, China
- Shandong Energy Institute, Qingdao, China
- Qingdao New Energy Shandong Laboratory, Qingdao, China
- Bo Ma
- CAS Key Laboratory of Biofuels, Shandong Key Laboratory of Energy Genetics, Single-Cell Center, Qingdao Institute of Bioenergy and Bioprocess Technology, Chinese Academy of Sciences, Qingdao, China
- University of Chinese Academy of Sciences, Beijing, China
- Shandong Energy Institute, Qingdao, China
- Qingdao New Energy Shandong Laboratory, Qingdao, China
13. Maturana CR, de Oliveira AD, Nadal S, Bilalli B, Serrat FZ, Soley ME, Igual ES, Bosch M, Lluch AV, Abelló A, López-Codina D, Suñé TP, Clols ES, Joseph-Munné J. Advances and challenges in automated malaria diagnosis using digital microscopy imaging with artificial intelligence tools: A review. Front Microbiol 2022; 13:1006659. [PMID: 36458185] [PMCID: PMC9705958] [DOI: 10.3389/fmicb.2022.1006659]
Abstract
Malaria is an infectious disease caused by parasites of the genus Plasmodium. It is transmitted to humans by the bite of an infected female Anopheles mosquito. It is the most common disease in resource-poor settings, with 241 million malaria cases reported in 2020 according to the World Health Organization. Optical microscopy examination of blood smears is the gold-standard technique for malaria diagnosis; however, it is a time-consuming method and a well-trained microscopist is needed to perform the microbiological diagnosis. New techniques based on digital image analysis using deep learning and other artificial intelligence methods are an emerging alternative tool for the diagnosis of infectious diseases. In particular, systems based on convolutional neural networks for image detection of the malaria parasites emulate the microscopy visualization of an expert. Microscope automation provides a fast and low-cost diagnosis, requiring less supervision. Smartphones are a suitable option for microscopic diagnosis, allowing image capture and software identification of parasites. In addition, image analysis techniques could be a fast and optimal solution for the diagnosis of malaria, tuberculosis, or Neglected Tropical Diseases in endemic areas with low resources. The implementation of automated diagnosis by using smartphone applications and new digital imaging technologies in low-income areas is a challenge to achieve. Moreover, automating the movement of the microscope slide and image autofocusing of the samples by hardware implementation would systematize the procedure. These new diagnostic tools would join the global effort to fight against pandemic malaria and other infectious and poverty-related diseases.
Affiliation(s)
- Carles Rubio Maturana
- Microbiology Department, Vall d’Hebron Research Institute, Vall d’Hebron Hospital Campus, Barcelona, Spain
- Universitat Autònoma de Barcelona (UAB), Barcelona, Spain
- Allisson Dantas de Oliveira
- Computational Biology and Complex Systems Group, Physics Department, Universitat Politècnica de Catalunya (UPC), Castelldefels, Spain
- Sergi Nadal
- Data Base Technologies and Information Group, Engineering Services and Information Systems Department, Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
- Besim Bilalli
- Data Base Technologies and Information Group, Engineering Services and Information Systems Department, Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
- Francesc Zarzuela Serrat
- Microbiology Department, Vall d’Hebron Research Institute, Vall d’Hebron Hospital Campus, Barcelona, Spain
- Mateu Espasa Soley
- Universitat Autònoma de Barcelona (UAB), Barcelona, Spain
- Clinical Laboratories, Microbiology Department, Hospital Universitari Parc Taulí, Sabadell, Spain
- Elena Sulleiro Igual
- Microbiology Department, Vall d’Hebron Research Institute, Vall d’Hebron Hospital Campus, Barcelona, Spain
- Universitat Autònoma de Barcelona (UAB), Barcelona, Spain
- CIBERINFEC, ISCIII-CIBER de Enfermedades Infecciosas, Instituto de Salud Carlos III, Madrid, Spain
- Alberto Abelló
- Data Base Technologies and Information Group, Engineering Services and Information Systems Department, Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
- Daniel López-Codina
- Computational Biology and Complex Systems Group, Physics Department, Universitat Politècnica de Catalunya (UPC), Castelldefels, Spain
- Tomàs Pumarola Suñé
- Microbiology Department, Vall d’Hebron Research Institute, Vall d’Hebron Hospital Campus, Barcelona, Spain
- Universitat Autònoma de Barcelona (UAB), Barcelona, Spain
- Elisa Sayrol Clols
- Image Processing Group, Telecommunications and Signal Theory Group, Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
- Joan Joseph-Munné
- Microbiology Department, Vall d’Hebron Research Institute, Vall d’Hebron Hospital Campus, Barcelona, Spain
14. Wahab F, Ullah I, Shah A, Khan RA, Choi A, Anwar MS. Design and implementation of real-time object detection system based on single-shoot detector and OpenCV. Front Psychol 2022; 13:1039645. [PMID: 36405169] [PMCID: PMC9666404] [DOI: 10.3389/fpsyg.2022.1039645]
Abstract
Computer vision (CV) and human-computer interaction (HCI) are essential in many technological fields. Researchers in CV are particularly interested in real-time object detection techniques, which have a wide range of applications, including inspection systems. In this study, we design and implement real-time object detection and recognition systems using the single-shoot detector (SSD) algorithm and deep learning techniques with pre-trained models. The system can detect static and moving objects in real time and recognize the object's class. The primary goal of this research was to investigate and develop a real-time object detection and recognition system that employs deep learning and neural networks. In addition, we evaluated freely available pre-trained models with the SSD algorithm on various types of datasets to determine which models offer high accuracy and speed when detecting an object. Moreover, the system is required to be operational on modest, commodity hardware. We evaluated several deep learning structures and techniques during development and propose a highly accurate and efficient object detection system. The system utilizes freely available datasets such as MS Common Objects in Context (COCO), PASCAL VOC, and KITTI. We evaluated our system's accuracy using various metrics such as precision and recall. The proposed system achieved a high accuracy of 97% while detecting and recognizing real-time objects.
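A minimal SSD inference loop through OpenCV's DNN module looks like the sketch below. The MobileNet-SSD Caffe files and the image path are placeholders for whatever pre-trained model is available locally, and the confidence threshold is an arbitrary choice.

```python
import cv2
import numpy as np

# Load a pre-trained MobileNet-SSD (Caffe format); the file names are placeholders
# for whatever pre-trained SSD model is available locally.
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")

frame = cv2.imread("test_frame.jpg")                        # hypothetical input image
h, w = frame.shape[:2]
blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 0.007843,
                             (300, 300), 127.5)             # SSD300 preprocessing
net.setInput(blob)
detections = net.forward()                                  # shape (1, 1, N, 7)

for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.5:                                    # arbitrary threshold
        class_id = int(detections[0, 0, i, 1])
        box = (detections[0, 0, i, 3:7] * np.array([w, h, w, h])).astype(int)
        print(f"class {class_id}, confidence {confidence:.2f}, box {box.tolist()}")
```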
Affiliation(s)
- Fazal Wahab
- College of Computer Science and Technology, Northeastern University, Shenyang, China
- Inam Ullah
- BK21 Chungbuk Information Technology Education and Research Center, Chungbuk National University, Cheongju, South Korea
- Anwar Shah
- School of Computing, National University of Computer and Emerging Sciences, Faisalabad, Pakistan
- Rehan Ali Khan
- Department of Electrical Engineering, University of Science and Technology, Bannu, Pakistan
- Ahyoung Choi
- Department of AI and Software, Gachon University, Seongnam, South Korea
15. Barboza MFX, Monteiro KHDC, Rodrigues IR, Santos GL, Monteiro WM, Figueira EAG, Sampaio VDS, Lynn T, Endo PT. Prediction of malaria using deep learning models: A case study on city clusters in the state of Amazonas, Brazil, from 2003 to 2018. Rev Soc Bras Med Trop 2022; 55:e0420. [PMID: 35946631] [PMCID: PMC9344950] [DOI: 10.1590/0037-8682-0420-2021]
Abstract
Background: Malaria is curable. Nonetheless, over 229 million cases of malaria were recorded in 2019, along with 409,000 deaths. Although over 42 million Brazilians are at risk of contracting malaria, 99% of all malaria cases in Brazil are located in or around the Amazon rainforest. Despite declining cases and deaths, malaria remains a major public health issue in Brazil. Accurate spatiotemporal prediction of malaria propagation may enable improved resource allocation to support efforts to eradicate the disease. Methods: In response to calls for novel research on malaria elimination strategies that suit local conditions, in this study, we propose machine learning (ML) and deep learning (DL) models to predict the probability of malaria cases in the state of Amazonas. Using a dataset of approximately 6 million records (January 2003 to December 2018), we applied k-means clustering to group cities based on the similarity of their malaria incidence. We evaluated random forest, long short-term memory (LSTM), and gated recurrent unit (GRU) models and compared their performance. Results: The LSTM architecture achieved better performance in clusters with less variability in the number of cases, whereas the GRU presented better results in clusters with high variability. Although Diebold-Mariano testing suggested that the LSTM and GRU performed comparably, the GRU can be trained significantly faster, which could prove advantageous in practice. Conclusions: All models showed satisfactory accuracy and strong performance in predicting new cases of malaria, and each could serve as a supplemental tool to support regional policies and strategies.
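A GRU forecaster of the kind compared in this study can be sketched in a few lines of PyTorch: a recurrent layer reads a window of past case counts for a city cluster and a linear head predicts the next value. The window length and layer sizes below are arbitrary assumptions, not the configuration reported by the authors.

```python
import torch
import torch.nn as nn

class GRUForecaster(nn.Module):
    """Predict the next malaria case count from a window of past counts."""
    def __init__(self, input_size: int = 1, hidden_size: int = 32):
        super().__init__()
        self.gru = nn.GRU(input_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window_length, 1) of (normalised) historical case counts
        _, h_n = self.gru(x)               # h_n: (1, batch, hidden_size)
        return self.head(h_n[-1])          # (batch, 1) next-step prediction

window = torch.rand(8, 12, 1)              # hypothetical: 8 clusters, 12 past time steps
next_cases = GRUForecaster()(window)       # one prediction per cluster
```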
Affiliation(s)
- Guto Leoni Santos
- Universidade Federal de Pernambuco, Centro de Informática, Recife, PE, Brasil
- Wuelton Marcelo Monteiro
- Universidade do Estado do Amazonas, Manaus, AM, Brasil
- Fundação de Medicina Tropical Doutor Heitor Vieira Dourado, Manaus, AM, Brasil
- Elder Augusto Guimaraes Figueira
- Fundação de Vigilância em Saúde Rosemary Costa Pinto, Manaus, AM, Brasil
- Instituto Oswaldo Cruz, Programa de Pós-graduação Stricto Sensu em Medicina Tropical, Rio de Janeiro, RJ, Brasil
- Vanderson de Souza Sampaio
- Fundação de Medicina Tropical Doutor Heitor Vieira Dourado, Manaus, AM, Brasil
- Fundação de Vigilância em Saúde Rosemary Costa Pinto, Manaus, AM, Brasil
- Instituto Todos pela Saúde, São Paulo, SP, Brasil
- Theo Lynn
- Dublin City University, Dublin, Ireland
- Patricia Takako Endo
- Universidade de Pernambuco, Programa de Pós-Graduação em Engenharia da Computação, Recife, PE, Brasil
16. Deep Learning and Transfer Learning for Malaria Detection. Comput Intell Neurosci 2022; 2022:2221728. [PMID: 35814548] [PMCID: PMC9259269] [DOI: 10.1155/2022/2221728]
Abstract
Malaria is a devastating infectious disease that claims the lives of more than 500,000 people worldwide every year. Most of these deaths occur as a result of a delayed or incorrect diagnosis. At the moment, manual microscopy is considered to be the most effective approach for diagnosing malaria. It is, on the other hand, time-consuming and prone to human error. Because malaria is such a serious global health issue, it is important that the evaluation process be automated. The objective of this article is to advocate for the automation of the diagnosis process in order to eliminate the need for human intervention. Convolutional neural networks (CNNs) and other deep learning technologies, together with image processing, are utilized to evaluate parasitemia in microscopic blood slides in order to enhance diagnostic accuracy. The approach is based on the intensity characteristics of Plasmodium parasites and erythrocytes, which are both known to be variable. Images of infected and non-infected erythrocytes are gathered and fed into the CNN models ResNet50, ResNet34, VGG-16, and VGG-19, which are all trained on the same dataset. The techniques of transfer learning and fine-tuning are employed, and the outcomes are contrasted. The VGG-19 model obtained the best overall performance given the parameters and dataset that were evaluated.
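Transfer learning as described here (an ImageNet-pretrained backbone with a new binary head, optionally freezing the convolutional features before fine-tuning) can be sketched with torchvision as below; this generic recipe is not the authors' exact training setup.

```python
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained VGG-19 and adapt it to 2 classes
# (parasitized vs. uninfected erythrocyte).
model = models.vgg19(weights="IMAGENET1K_V1")

for param in model.features.parameters():   # freeze the convolutional backbone
    param.requires_grad = False

in_features = model.classifier[-1].in_features
model.classifier[-1] = nn.Linear(in_features, 2)   # new head; fine-tuning would
                                                   # instead unfreeze deeper layers
```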
17. Ma P, Li C, Rahaman MM, Yao Y, Zhang J, Zou S, Zhao X, Grzegorzek M. A state-of-the-art survey of object detection techniques in microorganism image analysis: from classical methods to deep learning approaches. Artif Intell Rev 2022; 56:1627-1698. [PMID: 35693000] [PMCID: PMC9170564] [DOI: 10.1007/s10462-022-10209-1]
Abstract
Microorganisms play a vital role in human life, so microorganism detection is of great significance to human beings. However, traditional manual microscopic detection methods suffer from long detection cycles, low detection accuracy for large sample volumes, and great difficulty in detecting uncommon microorganisms. It is therefore meaningful to apply computer image analysis technology to the field of microorganism detection, as it can realize high-precision and high-efficiency detection. In this review, we first analyse the existing microorganism detection methods in chronological order, from traditional image processing and classical machine learning to deep learning methods. Then, we analyse and summarize these existing methods and introduce some potential approaches, including visual transformers. Finally, the future development directions and challenges of microorganism detection are discussed. In total, we have summarized 142 related technical papers from 1985 to the present. This review will help researchers gain a more comprehensive understanding of the development process, research status, and future trends in the field of microorganism detection and provide a reference for researchers in other fields.
Collapse
Affiliation(s)
- Pingli Ma
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Md Mamunur Rahaman
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Yudong Yao
- Department of Electrical and Computer Engineering, Stevens Institute of Technology,
Hoboken, NJ USA
| | - Jiawei Zhang
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Shuojia Zou
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Xin Zhao
- School of Resources and Civil Engineering, Northeastern University, Shenyang, China
| | - Marcin Grzegorzek
- Biomedical Information College, University of Luebeck, Luebeck, Germany
| |
Collapse
|
18
|
Jindal N, Singh H, Rana PS. Face mask detection in COVID-19: a strategic review. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:40013-40042. [PMID: 35528282 PMCID: PMC9069221 DOI: 10.1007/s11042-022-12999-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/14/2021] [Revised: 01/12/2022] [Accepted: 03/27/2022] [Indexed: 06/14/2023]
Abstract
With the outbreak of Coronavirus Disease in 2019, life seemed to have come to a standstill. To combat the transmission of the virus, the World Health Organization (WHO) declared the wearing of face masks an imperative way to limit its spread. However, manually ensuring that people in a public area are wearing face masks is a cumbersome task. The exigency of monitoring face mask use necessitated building an automatic system. Currently, various machine learning and deep learning methods can be used effectively for this purpose. In this paper, all the essential requirements for such a model are reviewed. The need for and structural outline of the proposed model are discussed extensively, followed by a comprehensive study of the various available techniques and a comparative analysis of their performance. The pros and cons of each method are analyzed in depth, and sources for multiple datasets are provided. The software needed for implementation is also discussed. Finally, the various use cases, limitations, and observations for the system are considered, and the paper concludes with several directions for future research.
Collapse
Affiliation(s)
- Neeru Jindal
- Faculty, ECED, Thapar Institute of Engineering and Technology, Patiala, Punjab India
| | - Harpreet Singh
- Faculty, CSED, Thapar Institute of Engineering and Technology, Patiala, Punjab India
| | - Prashant Singh Rana
- Faculty, CSED, Thapar Institute of Engineering and Technology, Patiala, Punjab India
| |
Collapse
|
19
|
Zhang J, Li C, Yin Y, Zhang J, Grzegorzek M. Applications of artificial neural networks in microorganism image analysis: a comprehensive review from conventional multilayer perceptron to popular convolutional neural network and potential visual transformer. Artif Intell Rev 2022; 56:1013-1070. [PMID: 35528112 PMCID: PMC9066147 DOI: 10.1007/s10462-022-10192-7] [Citation(s) in RCA: 42] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/02/2023]
Abstract
Microorganisms are widely distributed in the human living environment. They play an essential role in environmental pollution control, disease prevention and treatment, and food and drug production, and their analysis is essential for making full use of them. Conventional analysis methods are laborious and time-consuming, so automatic image analysis based on artificial neural networks has been introduced to optimize the process. However, automatic microorganism image analysis faces many challenges, such as the need for robust algorithms across diverse applications, insignificant features and easy under-segmentation caused by image characteristics, and a variety of analysis tasks. We therefore conducted this review to comprehensively discuss the characteristics of microorganism image analysis based on artificial neural networks. In this review, the background and motivation are introduced first. Then, the development of artificial neural networks and representative networks are presented. After that, papers on microorganism image analysis based on classical and deep neural networks are reviewed from the perspective of different tasks. Finally, the methodology is analyzed and potential directions are discussed.
Collapse
Affiliation(s)
- Jinghua Zhang
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Institute for Medical Informatics, University of Luebeck, Luebeck, Germany
| | - Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Yimin Yin
- School of Mathematics and Statistics, Hunan First Normal University, Changsha, China
| | - Jiawei Zhang
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Marcin Grzegorzek
- Institute for Medical Informatics, University of Luebeck, Luebeck, Germany
| |
Collapse
|
20
|
Muta K, Takata S, Utsumi Y, Matsumura A, Iwamura M, Kise K. TAIM: Tool for Analyzing Root Images to Calculate the Infection Rate of Arbuscular Mycorrhizal Fungi. FRONTIERS IN PLANT SCIENCE 2022; 13:881382. [PMID: 35592584 PMCID: PMC9111841 DOI: 10.3389/fpls.2022.881382] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/22/2022] [Accepted: 03/31/2022] [Indexed: 06/15/2023]
Abstract
Arbuscular mycorrhizal fungi (AMF) infect plant roots and are hypothesized to improve plant growth. Recently, AMF has become available for axenic culture and is therefore expected to be used as a microbial fertilizer. To evaluate the usefulness of AMF as a microbial fertilizer, the relationship between the degree of AMF root colonization and plant growth must be investigated. The method commonly used to calculate the degree of root colonization, the magnified intersections method, is performed manually and is too labor-intensive to allow extensive surveys. We therefore automated the magnified intersections method by developing an application named "Tool for Analyzing root images to calculate the Infection rate of arbuscular Mycorrhizal fungi" (TAIM). TAIM is a web-based application that calculates the degree of AMF colonization from images using computer vision and pattern recognition techniques. Experimental results showed that TAIM correctly detected sampling areas for calculating the degree of infection and classified those areas with 87.4% accuracy. TAIM is publicly accessible at http://taim.imlab.jp/.
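The sketch below illustrates, under stated assumptions, one simple way to turn a trained binary patch classifier into a colonization-rate estimate: classify fixed-size crops of a root image and report the fraction judged colonized. The colonization_rate helper, patch size, stride, and threshold are hypothetical and do not reproduce TAIM's actual magnified-intersections logic.

# Illustrative sketch only (not TAIM's implementation): estimate a
# colonization rate by classifying fixed-size patches of a root image
# with a trained binary classifier and counting positives.
import torch
from torchvision import transforms
from PIL import Image

def colonization_rate(image_path, model, patch=224, stride=224, thresh=0.5):
    img = transforms.ToTensor()(Image.open(image_path).convert("RGB"))
    _, h, w = img.shape
    infected, total = 0, 0
    model.eval()
    with torch.no_grad():
        for top in range(0, h - patch + 1, stride):
            for left in range(0, w - patch + 1, stride):
                crop = img[:, top:top + patch, left:left + patch].unsqueeze(0)
                # Assumes a two-class model; index 1 = "colonized".
                prob = torch.softmax(model(crop), dim=1)[0, 1].item()
                total += 1
                infected += int(prob >= thresh)
    return infected / max(total, 1)   # fraction of patches judged colonized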
Collapse
Affiliation(s)
- Kaoru Muta
- Graduate School of Engineering, Osaka Prefecture University, Osaka, Japan
| | - Shiho Takata
- Graduate School of Life and Environmental Sciences, Osaka Prefecture University, Osaka, Japan
| | - Yuzuko Utsumi
- Graduate School of Engineering, Osaka Prefecture University, Osaka, Japan
| | - Atsushi Matsumura
- Graduate School of Life and Environmental Sciences, Osaka Prefecture University, Osaka, Japan
| | - Masakazu Iwamura
- Graduate School of Engineering, Osaka Prefecture University, Osaka, Japan
| | - Koichi Kise
- Graduate School of Engineering, Osaka Prefecture University, Osaka, Japan
| |
Collapse
|
21
|
Bilodeau A, Delmas CVL, Parent M, De Koninck P, Durand A, Lavoie-Cardinal F. Microscopy analysis neural network to solve detection, enumeration and segmentation from image-level annotations. NAT MACH INTELL 2022. [DOI: 10.1038/s42256-022-00472-w] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
|
22
|
A dataset and benchmark for malaria life-cycle classification in thin blood smear images. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06602-6] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
23
|
Automated detection and staging of malaria parasites from cytological smears using convolutional neural networks. BIOLOGICAL IMAGING 2022; 1:e2. [PMID: 35036920 PMCID: PMC8724263 DOI: 10.1017/s2633903x21000015] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/17/2021] [Revised: 06/24/2021] [Accepted: 07/14/2021] [Indexed: 12/14/2022]
Abstract
Microscopic examination of blood smears remains the gold standard for laboratory inspection and diagnosis of malaria. Smear inspection is, however, time-consuming and dependent on trained microscopists, with results varying in accuracy. We sought to develop an automated image analysis method to improve accuracy and standardization of smear inspection that retains capacity for expert confirmation and image archiving. Here, we present a machine learning method that achieves red blood cell (RBC) detection, differentiation between infected and uninfected cells, and parasite life stage categorization from unprocessed, heterogeneous smear images. Based on a pretrained Faster Region-Based Convolutional Neural Network (R-CNN) model for RBC detection, our model performs accurately, with an average precision of 0.99 at an intersection-over-union threshold of 0.5. Application of a residual neural network-50 model to infected cells also performs accurately, with an area under the receiver operating characteristic curve of 0.98. Finally, combining our method with a regression model successfully recapitulates the intraerythrocytic developmental cycle with accurate life-cycle stage categorization. Combined with a mobile-friendly web-based interface, called PlasmoCount, our method permits rapid navigation through and review of results for quality assurance. By standardizing assessment of Giemsa smears, our method markedly improves inspection reproducibility and presents a realistic route to both routine lab and future field-based automated malaria diagnosis.
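Below is a hedged sketch of a two-stage pipeline in the spirit of this abstract: an object detector proposes cell boxes, and a second network classifies each crop. The COCO-pretrained Faster R-CNN and ImageNet ResNet-50 used here are stand-ins; in practice both would be fine-tuned on annotated smear images, and the file name and 0.5 score threshold are assumptions.

# Two-stage sketch (detect cells, then classify each crop); not the
# published PlasmoCount code.
import torch
from torchvision import models, transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from PIL import Image

detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()   # stand-in detector
classifier = models.resnet50(weights="DEFAULT").eval()         # stand-in classifier

img = transforms.ToTensor()(Image.open("smear.jpg").convert("RGB"))  # hypothetical file

with torch.no_grad():
    det = detector([img])[0]
    keep = det["scores"] > 0.5                  # confidence filter (assumed)
    for box in det["boxes"][keep]:
        x1, y1, x2, y2 = box.int().tolist()
        crop = img[:, y1:y2, x1:x2].unsqueeze(0)
        crop = torch.nn.functional.interpolate(crop, size=(224, 224))
        label = classifier(crop).argmax(dim=1).item()
        print(box.tolist(), label)              # box plus predicted class index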
Collapse
|
24
|
Artificial Intelligence and Malaria. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_273] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
25
|
Albuquerque C, Vanneschi L, Henriques R, Castelli M, Póvoa V, Fior R, Papanikolaou N. Object detection for automatic cancer cell counting in zebrafish xenografts. PLoS One 2021; 16:e0260609. [PMID: 34843603 PMCID: PMC8629215 DOI: 10.1371/journal.pone.0260609] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Accepted: 11/13/2021] [Indexed: 12/12/2022] Open
Abstract
Cell counting is a frequent task in medical research studies. However, it is often performed manually; thus, it is time-consuming and prone to human error. Even so, cell counting automation can be challenging to achieve, especially when dealing with crowded scenes and overlapping cells of different shapes and sizes. In this paper, we introduce a deep learning-based cell detection and quantification methodology to automate the cell counting process in the zebrafish xenograft cancer model, an innovative technique for studying tumor biology and for personalizing medicine. First, we implemented a fine-tuned architecture based on the Faster R-CNN using the Inception ResNet V2 feature extractor. Second, we performed several adjustments to optimize the process, paying attention to constraints such as the presence of overlapping cells, the high number of objects to detect, the heterogeneity of the cells' size and shape, and the small size of the dataset. This method resulted in a median error of approximately 1% of the total number of cell units. These results demonstrate the potential of our novel approach for quantifying cells in poorly labeled images. Compared to traditional Faster R-CNN, our method improved the average precision from 71% to 85% on the studied dataset.
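The short sketch below shows, for illustration only, how per-image counts from any object detector can be turned into the median relative counting error of the kind quoted in this abstract; the score threshold, data structures, and toy numbers are assumptions.

# Counting sketch under stated assumptions: turn detector output and manual
# counts into per-image relative errors and report their median.
import statistics

def count_cells(detections, score_thresh=0.5):
    """detections: list of (box, score) tuples from any object detector."""
    return sum(1 for _, score in detections if score >= score_thresh)

def median_relative_error(per_image_detections, manual_counts):
    errors = []
    for dets, truth in zip(per_image_detections, manual_counts):
        pred = count_cells(dets)
        errors.append(abs(pred - truth) / truth)   # relative counting error
    return statistics.median(errors)

# Toy example (illustrative only): two images with 99 and 205 detections
# against manual counts of 100 and 200 cells.
fake_dets = [[((0, 0, 5, 5), 0.9)] * 99, [((0, 0, 5, 5), 0.8)] * 205]
print(median_relative_error(fake_dets, [100, 200]))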
Collapse
Affiliation(s)
- Carina Albuquerque
- Nova Information Management School (NOVA IMS), Universidade Nova de Lisboa, Lisboa, Portugal
| | - Leonardo Vanneschi
- Nova Information Management School (NOVA IMS), Universidade Nova de Lisboa, Lisboa, Portugal
| | - Roberto Henriques
- Nova Information Management School (NOVA IMS), Universidade Nova de Lisboa, Lisboa, Portugal
| | - Mauro Castelli
- Nova Information Management School (NOVA IMS), Universidade Nova de Lisboa, Lisboa, Portugal
| | - Vanda Póvoa
- Computational Clinical Imaging Group, Center for the Unknown, Champalimaud Foundation, Lisboa, Portugal
| | - Rita Fior
- Computational Clinical Imaging Group, Center for the Unknown, Champalimaud Foundation, Lisboa, Portugal
| | - Nickolas Papanikolaou
- Computational Clinical Imaging Group, Center for the Unknown, Champalimaud Foundation, Lisboa, Portugal
| |
Collapse
|
26
|
Zhao Y, Tian S, Yu L, Zhang Z, Zhang W. Analysis and Classification of Hepatitis Infections Using Raman Spectroscopy and Multiscale Convolutional Neural Networks. JOURNAL OF APPLIED SPECTROSCOPY 2021; 88:441-451. [PMID: 33972806 PMCID: PMC8099702 DOI: 10.1007/s10812-021-01192-6] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/14/2023]
Abstract
Hepatitis infections represent a major health concern worldwide, and numerous computer-aided approaches have been devised for the early detection of hepatitis. In this study, we propose a method for the analysis and classification of hepatitis B virus (HBV) infection, hepatitis C virus (HCV) infection, and healthy subjects using Raman spectroscopy and a multiscale convolutional neural network (MSCNN). In particular, serum samples of HBV-infected patients (435 cases), HCV-infected patients (374 cases), and healthy persons (499 cases) are analyzed via Raman spectroscopy. The differences between Raman peaks in the measured serum spectra indicate specific biomolecular differences among the three classes. The dimensionality of the spectral data is reduced through principal component analysis, after which features are extracted and normalized. The extracted features are then used to train different classifiers, namely the MSCNN, a single-scale convolutional neural network, and other traditional classifiers. Among these, the MSCNN model achieved the best outcomes, with a precision of 98.89%, sensitivity of 97.44%, specificity of 94.54%, and accuracy of 94.92%. Overall, the results demonstrate that Raman spectral analysis and MSCNN can be effectively utilized for rapid screening of hepatitis B and C cases.
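The following sketch reproduces only the shape of the pipeline described here, PCA for dimensionality reduction followed by a multiscale 1D CNN with parallel branches of different kernel widths; the component count, layer sizes, and kernel widths are assumptions rather than the published MSCNN configuration.

# PCA + multiscale 1D CNN sketch for three-class spectral classification
# (HBV / HCV / healthy); architecture details are assumed.
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

class MultiScale1DCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        # Parallel branches with different receptive fields over the spectrum.
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv1d(1, 8, k, padding=k // 2), nn.ReLU(),
                          nn.AdaptiveAvgPool1d(1))
            for k in (3, 7, 15)
        ])
        self.fc = nn.Linear(8 * 3, n_classes)

    def forward(self, x):                      # x: (batch, 1, n_components)
        feats = [b(x).flatten(1) for b in self.branches]
        return self.fc(torch.cat(feats, dim=1))

# Reduce each spectrum to 50 principal components (assumed value), then
# feed the scores to the network as a one-channel signal.
spectra = torch.randn(64, 1024).numpy()        # placeholder Raman spectra
scores = PCA(n_components=50).fit_transform(spectra)
x = torch.tensor(scores, dtype=torch.float32).unsqueeze(1)
logits = MultiScale1DCNN()(x)                  # shape (64, 3)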
Collapse
Affiliation(s)
- Y. Zhao
- Key Laboratory of Software Engineering Technology, Xinjiang University, Urumqi, 830000 China
| | - Sh. Tian
- Key Laboratory of Software Engineering Technology, Xinjiang University, Urumqi, 830000 China
| | - L. Yu
- College of Software Engineering at Xin Jiang University, Urumqi, 830000 China
| | - Zh. Zhang
- The First Affiliated Hospital of Xinjiang Medical University, Urumqi, 830000 China
| | - W. Zhang
- Key Laboratory of Software Engineering Technology, Xinjiang University, Urumqi, 830000 China
| |
Collapse
|
27
|
Das AK, Kalam S, Kumar C, Sinha D. TLCoV- An automated Covid-19 screening model using Transfer Learning from chest X-ray images. CHAOS, SOLITONS, AND FRACTALS 2021; 144:110713. [PMID: 33526961 PMCID: PMC7825894 DOI: 10.1016/j.chaos.2021.110713] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/08/2020] [Revised: 01/05/2021] [Accepted: 01/19/2021] [Indexed: 05/09/2023]
Abstract
The Coronavirus disease (Covid-19) has been declared a pandemic by the World Health Organisation (WHO) and has to date caused 585,727 deaths worldwide. The only way to minimize the number of deaths is to quarantine patients who test positive. The rapid spread of the disease can be curbed by automatic screening, which compensates for the shortage of radiologists. Although researchers have already designed pioneering deep learning models for Covid-19 screening, most of them achieve low accuracy, and overfitting makes it difficult for these models to learn from existing Covid-19 datasets. In this paper, an automated Covid-19 screening model is designed to identify patients suffering from this disease using their chest X-ray images. The model classifies the images into three categories: Covid-19 positive, other pneumonia infection, and no infection. Three learning schemes, a plain CNN, VGG-16, and ResNet-50, are used separately to train the model. A standard Covid-19 radiography dataset from the Kaggle repository provides the chest X-ray images. The performance of the model with all three learning schemes has been evaluated, showing that VGG-16 performs better than the plain CNN and ResNet-50. The model with VGG-16 achieves an accuracy of 97.67%, a precision of 96.65%, a recall of 96.54%, and an F1 score of 96.59%. The evaluation also shows that our model outperforms two existing Covid-19 screening models.
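As a small illustration of the evaluation step reported in this abstract, the sketch below computes accuracy, precision, recall, and F1 for a three-class screening task with scikit-learn; the class names, toy labels, and macro averaging are assumptions.

# Metric computation sketch for a three-class screening model.
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

labels = ["covid", "other_pneumonia", "normal"]          # assumed class names
y_true = ["covid", "normal", "other_pneumonia", "covid", "normal"]
y_pred = ["covid", "normal", "other_pneumonia", "normal", "normal"]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, labels=labels,
                                     average="macro", zero_division=0))
print("recall   :", recall_score(y_true, y_pred, labels=labels,
                                 average="macro", zero_division=0))
print("F1 score :", f1_score(y_true, y_pred, labels=labels,
                             average="macro", zero_division=0))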
Collapse
Affiliation(s)
- Ayan Kumar Das
- Birla Institute of Technology, Mesra, Patna Campus, Patna-800014, India
| | - Sidra Kalam
- Birla Institute of Technology, Mesra, Patna Campus, Patna-800014, India
| | - Chiranjeev Kumar
- Birla Institute of Technology, Mesra, Patna Campus, Patna-800014, India
| | | |
Collapse
|
28
|
Fatima T, Farid MS. Automatic detection of Plasmodium parasites from microscopic blood images. J Parasit Dis 2020; 44:69-78. [PMID: 32174707 PMCID: PMC7046825 DOI: 10.1007/s12639-019-01163-x] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2019] [Accepted: 09/17/2019] [Indexed: 12/18/2022] Open
Abstract
Malaria is caused by Plasmodium parasites and is transmitted through the bite of female Anopheles mosquitoes. Thick and thin blood smears of the patient are manually examined under a microscope by an expert pathologist to diagnose the disease. Such expert pathologists may not be available in many parts of the world due to poor health facilities. Moreover, manual inspection requires the pathologist's full concentration and is a tedious and time-consuming way to detect malaria. The development of automated systems is therefore important for quick and reliable detection: it can reduce the false-negative rate and help detect the disease at early stages, when it can be cured effectively. In this paper, we present a computer-aided method to automatically detect malaria parasites in microscopic blood images. The proposed method uses bilateral filtering to remove noise and enhance image quality. Adaptive thresholding and morphological image processing algorithms are then used to detect the malaria parasites inside individual cells. To measure the efficiency of the proposed algorithm, we tested our method on the NIH Malaria dataset and compared the results with existing similar methods. Our method achieved a detection accuracy of more than 91%, outperforming the competing methods. The results show that the proposed algorithm is reliable and can be of great assistance to pathologists and hematologists for accurate malaria parasite detection.
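The sketch below strings together the processing steps named in this abstract, bilateral filtering, adaptive thresholding, and morphological cleanup, using OpenCV; every parameter value, the input file name, and the area filter are illustrative assumptions rather than the published configuration.

# Classical image-processing sketch: denoise, threshold locally, clean up,
# then report candidate parasite regions. Parameters are illustrative.
import cv2

img = cv2.imread("blood_smear.png")                 # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Edge-preserving denoising, then local (adaptive) thresholding.
smooth = cv2.bilateralFilter(gray, d=9, sigmaColor=75, sigmaSpace=75)
binary = cv2.adaptiveThreshold(smooth, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 31, 5)

# Morphological opening removes small speckles; remaining blobs are
# candidate parasite regions.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
clean = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
candidates = [c for c in contours if cv2.contourArea(c) > 20]  # area filter (assumed)
print(f"{len(candidates)} candidate parasite regions detected")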
Collapse
Affiliation(s)
- Tehreem Fatima
- Punjab University College of Information Technology, University of the Punjab, Lahore, Pakistan
| | - Muhammad Shahid Farid
- Punjab University College of Information Technology, University of the Punjab, Lahore, Pakistan
| |
Collapse
|
29
|
Gupta A, Harrison PJ, Wieslander H, Pielawski N, Kartasalo K, Partel G, Solorzano L, Suveer A, Klemm AH, Spjuth O, Sintorn I, Wählby C. Deep Learning in Image Cytometry: A Review. Cytometry A 2019; 95:366-380. [PMID: 30565841 PMCID: PMC6590257 DOI: 10.1002/cyto.a.23701] [Citation(s) in RCA: 86] [Impact Index Per Article: 17.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2018] [Revised: 11/07/2018] [Accepted: 11/29/2018] [Indexed: 12/18/2022]
Abstract
Artificial intelligence, deep convolutional neural networks, and deep learning are all niche terms that are increasingly appearing in scientific presentations as well as in the general media. In this review, we focus on deep learning and how it is applied to microscopy image data of cells and tissue samples. Starting with an analogy to neuroscience, we aim to give the reader an overview of the key concepts of neural networks, and an understanding of how deep learning differs from more classical approaches for extracting information from image data. We aim to increase the understanding of these methods, while highlighting considerations regarding input data requirements, computational resources, challenges, and limitations. We do not provide a full manual for applying these methods to your own data, but rather review previously published articles on deep learning in image cytometry, and guide the readers toward further reading on specific networks and methods, including new methods not yet applied to cytometry data. © 2018 The Authors. Cytometry Part A published by Wiley Periodicals, Inc. on behalf of International Society for Advancement of Cytometry.
Collapse
Affiliation(s)
- Anindya Gupta
- Centre for Image Analysis, Uppsala University, Uppsala 75124, Sweden
| | - Philip J. Harrison
- Department of Pharmaceutical Biosciences, Uppsala University, Uppsala 75124, Sweden
| | | | - Kimmo Kartasalo
- Faculty of Medicine and Life Sciences, University of Tampere, Tampere 33014, Finland
- Faculty of Biomedical Sciences and Engineering, Tampere University of Technology, Tampere 33720, Finland
| | - Gabriele Partel
- Centre for Image Analysis, Uppsala University, Uppsala 75124, Sweden
| | | | - Amit Suveer
- Centre for Image Analysis, Uppsala University, Uppsala 75124, Sweden
| | - Anna H. Klemm
- Centre for Image Analysis, Uppsala University, Uppsala 75124, Sweden
- BioImage Informatics Facility of SciLifeLab, Uppsala 75124, Sweden
| | - Ola Spjuth
- Department of Pharmaceutical Biosciences, Uppsala University, Uppsala 75124, Sweden
| | | | - Carolina Wählby
- Centre for Image Analysis, Uppsala University, Uppsala 75124, Sweden
- BioImage Informatics Facility of SciLifeLab, Uppsala 75124, Sweden
| |
Collapse
|