1
Elsafty A, Soliman A, Ahmed Y. 1 Million Segmented Red Blood Cells With 240 K Classified in 9 Shapes and 47 K Patches of 25 Manual Blood Smears. Sci Data 2024; 11:722. PMID: 38956115; PMCID: PMC11220077; DOI: 10.1038/s41597-024-03570-z
Abstract
Around 20% of complete blood count samples require visual review using light microscopes or digital pathology scanners. There is currently no technological alternative to visual examination of red blood cell (RBC) morphology/shapes. True (non-artifact) teardrop-shaped RBCs and schistocytes (fragmented RBCs) are commonly associated with serious, potentially fatal medical conditions, and increased ovalocytes are associated with almost all types of anemia. Twenty-five distinct blood smears, each from a different patient, were manually prepared, stained, and sorted into four groups. Each group was imaged using different cameras integrated into light microscopes with 40X microscopic lenses, yielding more than 47,000 field images/patches in total. Two hematologists worked cell by cell to produce more than one million segmented RBCs with their XYWH coordinates and classified more than 240,000 RBCs into nine shapes. This dataset (Elsafty_RBCs_for_AI) enables the development and testing of deep learning (DL)-based automation of RBC morphology/shape examination, including specific normalization of blood smear stains (different from histopathology stains), detection/counting, segmentation, and classification. Two codes are provided (Elsafty_Codes_for_AI): one for semi-automated image processing and another for training and testing a DL-based image classifier.
Affiliation(s)
- Ahmed Soliman
- PathOlOgics, LLC, Cairo, Egypt
- Department of Computer Science and Artificial Intelligence, Faculty of Engineering and IT, British University in Dubai (BUiD), Dubai, United Arab Emirates
2
Khan S, Sajjad M, Abbas N, Escorcia-Gutierrez J, Gamarra M, Muhammad K. Efficient leukocytes detection and classification in microscopic blood images using convolutional neural network coupled with a dual attention network. Comput Biol Med 2024; 174:108146. PMID: 38608320; DOI: 10.1016/j.compbiomed.2024.108146
Abstract
Leukocytes, also called white blood cells (WBCs), play a pivotal role in human health and are vital indicators of diseases such as malaria, leukemia, AIDS, and other viral infections. WBC detection and classification in blood smears offers insights to pathologists, aiding diagnosis across medical conditions. Traditional techniques, including manual counting, detection, classification, and visual inspection of microscopic images by medical professionals, are labor-intensive, time-consuming, and susceptible to error. Here, we propose a high-performance convolutional neural network (CNN) coupled with a dual-attention network that efficiently detects and classifies WBCs in microscopic thick smear images. The main aim of this study was to enhance clinical hematology systems and expedite medical diagnostic processes. In the proposed technique, we utilized a deep convolutional generative adversarial network (DCGAN) to overcome the limitations imposed by limited training data and employed a dual attention mechanism to improve accuracy, efficiency, and generalization. The proposed technique achieved overall accuracy rates of 99.83%, 99.35%, and 99.60% on the peripheral blood cell (PBC), leukocyte images for segmentation and classification (LISC), and Raabin-WBC benchmark datasets, respectively. Our approach outperforms state-of-the-art methods in terms of accuracy, highlighting the effectiveness of the strategies employed and their potential to enhance diagnostic capabilities and advance real-world healthcare practices and diagnostic systems.
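The abstract does not specify the dual-attention architecture; as a hedged illustration of the general idea only (channel gating from pooled per-channel statistics followed by spatial gating from pooled per-location statistics, in the spirit of squeeze-and-excitation/CBAM-style modules, not the authors' exact network), a minimal NumPy sketch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(fmap):
    """Gate each channel by a squashed global-average-pooled statistic."""
    # fmap: (C, H, W) feature map
    pooled = fmap.mean(axis=(1, 2))          # (C,) per-channel descriptor
    weights = sigmoid(pooled)                # (C,) gate in (0, 1)
    return fmap * weights[:, None, None]

def spatial_attention(fmap):
    """Gate each spatial location by its cross-channel mean response."""
    pooled = fmap.mean(axis=0)               # (H, W) per-location descriptor
    weights = sigmoid(pooled)                # (H, W) gate in (0, 1)
    return fmap * weights[None, :, :]

def dual_attention(fmap):
    """Apply channel gating followed by spatial gating."""
    return spatial_attention(channel_attention(fmap))
```

Real implementations learn the gating weights (e.g., via small fully connected or convolutional layers); this sketch uses parameter-free pooling purely to show where the two attention branches act.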
Affiliation(s)
- Siraj Khan
- Digital Image Processing Laboratory (DIP Lab), Department of Computer Science, Islamia College University, Peshawar, 25120, Pakistan
- Muhammad Sajjad
- Digital Image Processing Laboratory (DIP Lab), Department of Computer Science, Islamia College University, Peshawar, 25120, Pakistan
- Naveed Abbas
- Digital Image Processing Laboratory (DIP Lab), Department of Computer Science, Islamia College University, Peshawar, 25120, Pakistan
- José Escorcia-Gutierrez
- Department of Computational Science and Electronics, Universidad de la Costa, CUC, Barranquilla, 080002, Colombia
- Margarita Gamarra
- Department of System Engineering, Universidad del Norte, Puerto Colombia, 081007, Colombia
- Khan Muhammad
- Visual Analytics for Knowledge Laboratory (VIS2KNOW Lab), Department of Applied Artificial Intelligence, School of Convergence, College of Computing and Informatics, Sungkyunkwan University, Seoul, 03063, South Korea
3
Rubio JMB, Moyà-Alcover G, Jaume-I-Capó A, Petrović N. Crowdsourced human-based computational approach for tagging peripheral blood smear sample images from Sickle Cell Disease patients using non-expert users. Sci Rep 2024; 14:1201. PMID: 38216623; PMCID: PMC10786843; DOI: 10.1038/s41598-024-51591-w
Abstract
In this paper, we present a human-based computation approach for the analysis of peripheral blood smear (PBS) images in patients with Sickle Cell Disease (SCD). We used the Mechanical Turk microtask market to crowdsource the labeling of PBS images, and the expert-tagged erythrocytesIDB dataset to assess the accuracy and reliability of our proposal. Our results showed that when a robust consensus is achieved among the Mechanical Turk workers, the probability of error is very low, based on comparison with expert analysis. This suggests that our approach can be used to annotate datasets of PBS images, which can then be used to train automated methods for the diagnosis of SCD. In future work, we plan to explore integrating our findings with outcomes obtained through automated methodologies, which could lead to more accurate and reliable methods for the diagnosis of SCD.
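The "robust consensus" idea can be sketched as a majority vote with an agreement threshold; the threshold value and function name below are illustrative assumptions, not the paper's exact criterion:

```python
from collections import Counter

def consensus_label(worker_labels, min_agreement=0.8):
    """Return the majority label if enough workers agree, else None.

    worker_labels: list of labels, one per crowd worker.
    min_agreement: fraction of workers that must choose the same label
                   for the consensus to be considered robust.
    """
    if not worker_labels:
        return None
    label, count = Counter(worker_labels).most_common(1)[0]
    if count / len(worker_labels) >= min_agreement:
        return label
    return None
```

Cells that fail the threshold are left unlabeled rather than given a noisy tag, which is the behavior the reported low error rate depends on.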
Affiliation(s)
- José María Buades Rubio
- UGiVIA Research Group, Department of Mathematics and Computer Science, University of the Balearic Islands, 07122, Palma, Spain
- Gabriel Moyà-Alcover
- UGiVIA Research Group, Department of Mathematics and Computer Science, University of the Balearic Islands, 07122, Palma, Spain
- Laboratory for Artificial Intelligence Applications (LAIA@UIB), University of the Balearic Islands, 07122, Palma, Spain
- Antoni Jaume-I-Capó
- UGiVIA Research Group, Department of Mathematics and Computer Science, University of the Balearic Islands, 07122, Palma, Spain
- Laboratory for Artificial Intelligence Applications (LAIA@UIB), University of the Balearic Islands, 07122, Palma, Spain
- Nataša Petrović
- UGiVIA Research Group, Department of Mathematics and Computer Science, University of the Balearic Islands, 07122, Palma, Spain
4
Chai B, Efstathiou C, Yue H, Draviam VM. Opportunities and challenges for deep learning in cell dynamics research. Trends Cell Biol 2023:S0962-8924(23)00228-3. PMID: 38030542; DOI: 10.1016/j.tcb.2023.10.010
Abstract
The growth of artificial intelligence (AI) has led to an increase in the adoption of computer vision and deep learning (DL) techniques for the evaluation of microscopy images and movies. This adoption has not only addressed hurdles in quantitative analysis of dynamic cell biological processes but has also started to support advances in drug development, precision medicine, and genome-phenome mapping. We survey existing AI-based techniques and tools, as well as open-source datasets, with a specific focus on the computational tasks of segmentation, classification, and tracking of cellular and subcellular structures and dynamics. We summarise long-standing challenges in microscopy video analysis from a computational perspective and review emerging research frontiers and innovative applications for DL-guided automation in cell dynamics research.
Affiliation(s)
- Binghao Chai
- School of Biological and Behavioural Sciences, Queen Mary University of London (QMUL), London E1 4NS, UK
- Christoforos Efstathiou
- School of Biological and Behavioural Sciences, Queen Mary University of London (QMUL), London E1 4NS, UK
- Haoran Yue
- School of Biological and Behavioural Sciences, Queen Mary University of London (QMUL), London E1 4NS, UK
- Viji M Draviam
- School of Biological and Behavioural Sciences, Queen Mary University of London (QMUL), London E1 4NS, UK; The Alan Turing Institute, London NW1 2DB, UK
5
Jennifer SS, Shamim MH, Reza AW, Siddique N. Sickle cell disease classification using deep learning. Heliyon 2023; 9:e22203. PMID: 38045118; PMCID: PMC10692811; DOI: 10.1016/j.heliyon.2023.e22203
Abstract
This paper presents a transfer learning and deep learning based approach to the classification of Sickle Cell Disease (SCD). Five transfer learning models (ResNet-50, AlexNet, MobileNet, VGG-16, and VGG-19) and a sequential convolutional neural network (CNN) were implemented for SCD classification, with the erythrocytesIDB dataset used for training and testing. To make up for the data insufficiency of the erythrocytesIDB dataset, advanced image augmentation techniques were employed to ensure the robustness of the dataset, enhance its diversity, and improve the accuracy of the models. An ablation experiment using Random Forest and Support Vector Machine (SVM) classifiers, along with various hyperparameter tweaks, was carried out to determine the contribution of different model elements to predictive accuracy. A rigorous statistical analysis was carried out for evaluation, and an adversarial attack test was conducted to further evaluate model robustness. The experimental results demonstrate compelling performance across all models. The statistical tests showed that MobileNet achieved a significant improvement (p = 0.0229), while the other models (ResNet-50, AlexNet, VGG-16, VGG-19) did not (p > 0.05). Notably, the ResNet-50 model achieves precision, recall, and F1-score values of 100% for circular, elongated, and other cell shapes when experimented with on a smaller dataset. The AlexNet model achieves balanced precision (98%) and recall (99%) for circular and elongated shapes, while the other models showcase competitive performance.
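The augmentation step can be illustrated with basic geometric transforms; this is a minimal sketch of flip/rotation augmentation, not the paper's full "advanced" pipeline (which may also include intensity and elastic transforms):

```python
import numpy as np

def augment(image):
    """Return simple geometric variants of an image array (H, W[, C])."""
    variants = [image,
                np.fliplr(image),    # horizontal flip
                np.flipud(image)]    # vertical flip
    for k in (1, 2, 3):              # 90, 180, 270 degree rotations
        variants.append(np.rot90(image, k))
    return variants
```

Applying these label-preserving transforms to every training image multiplies the effective dataset size sixfold, which is the usual remedy for the data insufficiency mentioned above.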
Affiliation(s)
- Sanjeda Sara Jennifer
- Department of Computer Science and Engineering, East West University, Dhaka, Bangladesh
- Mahbub Hasan Shamim
- Department of Computer Science and Engineering, East West University, Dhaka, Bangladesh
- Ahmed Wasif Reza
- Department of Computer Science and Engineering, East West University, Dhaka, Bangladesh
- Nazmul Siddique
- School of Computing, Engineering and Intelligent Systems, Ulster University, UK
6
Foy BH, Stefely JA, Bendapudi PK, Hasserjian RP, Al-Samkari H, Louissaint A, Fitzpatrick MJ, Hutchison B, Mow C, Collins J, Patel HR, Patel CH, Patel N, Ho SN, Kaufman RM, Dzik WH, Higgins JM, Makar RS. Computer vision quantitation of erythrocyte shape abnormalities provides diagnostic, prognostic, and mechanistic insight. Blood Adv 2023; 7:4621-4630. PMID: 37146262; PMCID: PMC10448422; DOI: 10.1182/bloodadvances.2022008967
Abstract
Examination of red blood cell (RBC) morphology in peripheral blood smears can help diagnose hematologic diseases, even in resource-limited settings, but this analysis remains subjective and semiquantitative with low throughput. Prior attempts to develop automated tools have been hampered by poor reproducibility and limited clinical validation. Here, we present a novel, open-source machine-learning approach (denoted RBC-diff) to quantify abnormal RBCs in peripheral smear images and generate an RBC morphology differential. RBC-diff cell counts showed high accuracy for single-cell classification (mean AUC, 0.93) and quantitation across smears (mean R2 of 0.76 compared with experts; inter-expert R2, 0.75). RBC-diff counts were concordant with clinical morphology grading for more than 300,000 images and recovered the expected pathophysiologic signals in diverse clinical cohorts. Criteria using RBC-diff counts distinguished thrombotic thrombocytopenic purpura and hemolytic uremic syndrome from other thrombotic microangiopathies, providing greater specificity than clinical morphology grading (72% vs 41%; P < .001) while maintaining high sensitivity (94% to 100%). Elevated RBC-diff schistocyte counts were associated with increased 6-month all-cause mortality in a cohort of 58,950 inpatients (9.5% mortality for schistocytes >1% vs 4.7% for schistocytes <0.5%; P < .001) after controlling for comorbidities, demographics, clinical morphology grading, and blood count indices. RBC-diff also enabled estimation of single-cell volume-morphology distributions, providing insight into the influence of morphology on routine blood count measures. Our codebase and expert-annotated images are included here to spur further advancement. These results illustrate that computer vision can enable rapid and accurate quantitation of RBC morphology, which may provide value in both clinical and research contexts.
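As a reminder of what the single-cell AUC reported above measures: it is the probability that a randomly chosen positive cell outscores a randomly chosen negative one. A minimal sketch of that computation (the Mann-Whitney form of ROC AUC; names are illustrative, this is not the authors' evaluation code):

```python
def roc_auc(scores, labels):
    """Probability that a random positive outscores a random negative,
    with ties counting half -- the Mann-Whitney form of ROC AUC."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # count pairwise "wins" of positives over negatives
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```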
Affiliation(s)
- Brody H. Foy
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Center for Systems Biology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Department of Systems Biology, Harvard Medical School, Boston, MA
- Jonathan A. Stefely
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Blood Transfusion Service, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Pavan K. Bendapudi
- Blood Transfusion Service, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Division of Hematology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Robert P. Hasserjian
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Hanny Al-Samkari
- Division of Hematology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Abner Louissaint
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Megan J. Fitzpatrick
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Bailey Hutchison
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Christopher Mow
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Mass General Brigham Enterprise Research IS, Boston, MA
- Julia Collins
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Hasmukh R. Patel
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Center for Systems Biology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Chhaya H. Patel
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Center for Systems Biology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Nikita Patel
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Center for Systems Biology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Samantha N. Ho
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Center for Systems Biology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Richard M. Kaufman
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Walter H. Dzik
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Blood Transfusion Service, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- John M. Higgins
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Center for Systems Biology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Department of Systems Biology, Harvard Medical School, Boston, MA
- Robert S. Makar
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Blood Transfusion Service, Massachusetts General Hospital, Harvard Medical School, Boston, MA
7
Treepong P, Theera-Ampornpunt N. Early bread mold detection through microscopic images using convolutional neural network. Curr Res Food Sci 2023; 7:100574. PMID: 37664007; PMCID: PMC10474362; DOI: 10.1016/j.crfs.2023.100574
Abstract
Mold on bread in the early stages of growth is difficult to discern with the naked eye. Visual inspection and expiration dates are imprecise approaches that consumers rely on to detect bread spoilage. Existing methods for detecting microbial contamination, such as inspection through a microscope and hyperspectral imaging, are unsuitable for consumer use. This paper proposes a novel early bread mold detection method through microscopic images taken using clip-on lenses. These low-cost lenses are used together with a smartphone to capture images of bread at 50× magnification. The microscopic images are automatically classified using state-of-the-art convolutional neural networks (CNNs) with transfer learning. We extensively compared image preprocessing methods, CNN models, and data augmentation methods to determine the best configuration in terms of classification accuracy. The top models achieved near-perfect F1 scores of 0.9948 for white sandwich bread and 0.9972 for whole wheat bread.
Affiliation(s)
- Panisa Treepong
- College of Computing, Prince of Songkla University, Phuket, Thailand
8
Bakasa W, Viriri S. VGG16 Feature Extractor with Extreme Gradient Boost Classifier for Pancreas Cancer Prediction. J Imaging 2023; 9:138. PMID: 37504815; PMCID: PMC10381878; DOI: 10.3390/jimaging9070138
Abstract
The prognosis of patients with pancreatic ductal adenocarcinoma (PDAC) is greatly improved by an early and accurate diagnosis. Several studies have created automated methods to forecast PDAC development utilising various medical imaging modalities; these give a general overview of the classification, segmentation, or grading of many cancer types, including pancreatic cancer, using conventional machine learning techniques and hand-engineered features. This study uses cutting-edge deep learning techniques to identify PDAC from computerised tomography (CT) imaging. This work proposes the hybrid model VGG16-XGBoost (a VGG16 backbone feature extractor with an Extreme Gradient Boosting classifier) for PDAC images. In our experiments, the proposed hybrid model performs better than alternatives, obtaining an accuracy of 0.97 and a weighted F1 score of 0.97 on the dataset under study. The experimental validation of the VGG16-XGBoost model uses the Cancer Imaging Archive (TCIA) public-access dataset of pancreas CT images. The results of this study can be extremely helpful for PDAC diagnosis from CT pancreas images, categorising them into five tumour (T), node (N), metastasis (M) (TNM) staging system class labels: T0, T1, T2, T3, and T4.
Affiliation(s)
- Wilson Bakasa
- School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban 4041, South Africa
- Serestina Viriri
- School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban 4041, South Africa
9
Ain QU, Khan MA, Yaqoob MM, Khattak UF, Sajid Z, Khan MI, Al-Rasheed A. Privacy-Aware Collaborative Learning for Skin Cancer Prediction. Diagnostics (Basel) 2023; 13:2264. PMID: 37443658; DOI: 10.3390/diagnostics13132264
Abstract
Cancer, including the highly dangerous melanoma, is marked by uncontrolled cell growth and the possibility of spreading to other parts of the body. The conventional approach to machine learning relies on centralized training data, posing challenges for data privacy in healthcare systems driven by artificial intelligence. The collection of data from diverse sensors leads to increased computing costs, while privacy restrictions make it challenging to employ traditional machine learning methods. Researchers are therefore confronted with the formidable task of developing a skin cancer prediction technique that takes privacy concerns into account while simultaneously improving accuracy. In this work, we propose a decentralized privacy-aware learning mechanism to accurately predict melanoma skin cancer, applying federated learning to a skin cancer database. The results show that the proposed method achieved 92% accuracy, higher than baseline algorithms.
Affiliation(s)
- Qurat Ul Ain
- Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad 22060, Pakistan
- Muhammad Amir Khan
- Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad 22060, Pakistan
- Muhammad Mateen Yaqoob
- Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad 22060, Pakistan
- Umar Farooq Khattak
- School of Information Technology, UNITAR International University, Kelana Jaya, Petaling Jaya 47301, Selangor, Malaysia
- Zohaib Sajid
- Computer Science Department, Faculty of Computer Sciences, ILMA University, Karachi 75190, Pakistan
- Muhammad Ijaz Khan
- Institute of Computing and Information Technology, Gomal University, Dera Ismail Khan 29220, Pakistan
- Amal Al-Rasheed
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
10
Sadafi A, Bordukova M, Makhro A, Navab N, Bogdanova A, Marr C. RedTell: an AI tool for interpretable analysis of red blood cell morphology. Front Physiol 2023; 14:1058720. PMID: 37304818; PMCID: PMC10250619; DOI: 10.3389/fphys.2023.1058720
Abstract
Introduction: Hematologists analyze microscopic images of red blood cells to study their morphology and functionality, detect disorders, and search for drugs. However, accurate analysis of large numbers of red blood cells requires automated computational approaches that rely on annotated datasets, expensive computational resources, and computer science expertise. We introduce RedTell, an AI tool for the interpretable analysis of red blood cell morphology comprising four single-cell modules: segmentation, feature extraction, assistance in data annotation, and classification. Methods: Cell segmentation is performed by a trained Mask R-CNN that works robustly on a wide range of datasets, requiring no or minimal fine-tuning. More than 130 features that are regularly used in research are extracted for every detected red blood cell. If required, users can train task-specific, highly accurate decision tree-based classifiers to categorize cells, requiring a minimal number of annotations and providing interpretable feature importance. Results: We demonstrate RedTell's applicability and power in three case studies. In the first, we analyze differences in the extracted features between cells from patients with different diseases. In the second, we analyze control samples and use the extracted features to classify cells into echinocytes, discocytes, and stomatocytes. In the third, we distinguish sickle cells in sickle cell disease patients. Discussion: We believe that RedTell can accelerate and standardize red blood cell research and help gain new insights into the mechanisms, diagnosis, and treatment of red blood cell-associated disorders.
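The kind of interpretable per-cell features such a tool extracts can be illustrated on a binary cell mask; the area and covariance-based elongation measure below are a hedged sketch of two common shape descriptors, not RedTell's actual 130-feature set:

```python
import numpy as np

def shape_features(mask):
    """Compute area and elongation of a binary cell mask.

    Elongation is the ratio of the principal axes derived from the
    eigenvalues of the pixel-coordinate covariance matrix
    (1.0 = isotropic, larger = more elongated).
    """
    ys, xs = np.nonzero(mask)
    area = len(xs)
    coords = np.stack([xs, ys], axis=0).astype(float)
    cov = np.cov(coords)                       # 2x2 second-moment matrix
    evals = np.sort(np.linalg.eigvalsh(cov))   # ascending eigenvalues
    elongation = float(np.sqrt(evals[1] / max(evals[0], 1e-12)))
    return {"area": area, "elongation": elongation}
```

Such features feed naturally into the decision-tree classifiers mentioned above, because each split ("elongation > 2.5") remains readable to a hematologist.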
Affiliation(s)
- Ario Sadafi
- Institute of AI for Health, Helmholtz Zentrum München—German Research Center for Environmental Health, Neuherberg, Germany
- Chair for Computer Aided Medical Procedures & Augmented Reality, Technical University of Munich, Garching, Germany
- Maria Bordukova
- Institute of AI for Health, Helmholtz Zentrum München—German Research Center for Environmental Health, Neuherberg, Germany
- Asya Makhro
- Red Blood Cell Research Group, Institute of Veterinary Physiology, Vetsuisse Faculty and the Zurich Center for Integrative Human Physiology, University of Zurich, Zurich, Switzerland
- Nassir Navab
- Chair for Computer Aided Medical Procedures & Augmented Reality, Technical University of Munich, Garching, Germany
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, United States
- Anna Bogdanova
- Red Blood Cell Research Group, Institute of Veterinary Physiology, Vetsuisse Faculty and the Zurich Center for Integrative Human Physiology, University of Zurich, Zurich, Switzerland
- Carsten Marr
- Institute of AI for Health, Helmholtz Zentrum München—German Research Center for Environmental Health, Neuherberg, Germany
11
Zhu Z, Ren Z, Lu S, Wang S, Zhang Y. DLBCNet: A Deep Learning Network for Classifying Blood Cells. Big Data and Cognitive Computing 2023; 7:75. PMID: 38560757; PMCID: PMC7615784; DOI: 10.3390/bdcc7020075
Abstract
Background: Blood delivers nutrients to various organs and carries important information about the health of the human body, so the analysis of blood can indirectly help doctors judge a person's physical state. Recently, researchers have applied deep learning (DL) to the automatic analysis of blood cells, but existing models still have deficiencies. Methods: To cope with these issues, we propose a novel network for the multi-classification of blood cells, called DLBCNet. A new model specific to blood cells (BCGAN) is designed to generate synthetic images. The pre-trained ResNet50 is implemented as the backbone model, serving as the feature extractor. The extracted features are fed to the proposed ETRN to improve the multi-classification performance on blood cells. Results: The average accuracy, sensitivity, precision, specificity, and F1-score of the proposed model are 95.05%, 93.25%, 97.75%, 93.72%, and 95.38%, respectively. Conclusions: The performance of the proposed model surpasses other state-of-the-art methods in reported classification results.
Affiliation(s)
- Ziquan Zhu
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Zeyu Ren
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Siyuan Lu
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Shuihua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo 454000, China
- Yudong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo 454000, China
12
Panchbhai A, Savash Ishanzadeh MC, Sidali A, Solaiman N, Pankanti S, Kanagaraj R, Murphy JJ, Surendranath K. A deep learning workflow for quantification of micronuclei in DNA damage studies in cultured cancer cell lines: A proof of principle investigation. Computer Methods and Programs in Biomedicine 2023; 232:107447. PMID: 36889248; DOI: 10.1016/j.cmpb.2023.107447
Abstract
The cytokinesis-block micronucleus assay is widely used for scoring micronuclei, a marker of genome instability, in cultured and primary cells. Though a gold-standard method, it is a laborious and time-consuming process, with person-to-person variation observed in the quantification of micronuclei. We report in this study the use of a new deep learning workflow for the detection of micronuclei in DAPI-stained nuclear images. The proposed deep learning framework achieved an average precision of >90% in the detection of micronuclei. This proof-of-principle investigation in a DNA damage studies laboratory supports the idea of deploying AI-powered tools in a cost-effective manner for repetitive and laborious tasks, given relevant computational expertise. Such systems will also help improve the quality of data and the wellbeing of researchers.
Collapse
Affiliation(s)
- Anand Panchbhai
- Logy.AI, Machine Learning Research Division, Indian Institute of Technology Bhilai, Raipur, India.
| | | | - Ahmed Sidali
- Genome engineering laboratory, University of Westminster, London W1W 6UW, United Kingdom
| | - Nadeen Solaiman
- Genome engineering laboratory, University of Westminster, London W1W 6UW, United Kingdom
| | - Smarana Pankanti
- Logy.AI, Machine Learning Research Division, Indian Institute of Technology Bhilai, Raipur, India
| | - Radhakrishnan Kanagaraj
- Genome engineering laboratory, University of Westminster, London W1W 6UW, United Kingdom; School of Life Sciences, University of Bedfordshire, Park Square, Luton LU1 3JU, United Kingdom
| | - John J Murphy
- Genome engineering laboratory, University of Westminster, London W1W 6UW, United Kingdom
| | - Kalpana Surendranath
- Genome engineering laboratory, University of Westminster, London W1W 6UW, United Kingdom.
| |
Collapse
|
13
|
Robust Classification and Detection of Big Medical Data Using Advanced Parallel K-Means Clustering, YOLOv4, and Logistic Regression. Life (Basel) 2023; 13:life13030691. [PMID: 36983845 PMCID: PMC10056696 DOI: 10.3390/life13030691] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2023] [Revised: 02/24/2023] [Accepted: 02/28/2023] [Indexed: 03/08/2023] Open
Abstract
Big-medical-data classification and image detection are crucial tasks in the field of healthcare, as they can assist with diagnosis, treatment planning, and disease monitoring. Logistic regression and YOLOv4 are popular algorithms that can be used for these tasks. However, these techniques have limitations and performance issues with big medical data. In this study, we presented a robust approach for big-medical-data classification and image detection using logistic regression and YOLOv4, respectively. To improve the performance of these algorithms, we proposed the use of advanced parallel k-means pre-processing, a clustering technique that identifies patterns and structures in the data. Additionally, we leveraged the acceleration capabilities of a neural engine processor to further enhance the speed and efficiency of our approach. We evaluated our approach on several large medical datasets and showed that it could accurately classify large amounts of medical data and detect medical images. Our results demonstrated that the combination of advanced parallel k-means pre-processing and the neural engine processor resulted in a significant improvement in the performance of logistic regression and YOLOv4, making them more reliable for use in medical applications. This new approach offers a promising solution for medical data classification and image detection and may have significant implications for the field of healthcare.
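The pre-processing stage this abstract describes (k-means clustering run before a classifier or detector sees the data) can be illustrated with a minimal, stdlib-only sketch. The function and toy data below are this sketch's own, not the authors' code, and the parallelization across a neural engine processor is omitted, since it changes only speed, not results.

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: returns centroids and a cluster label per point.
    In the paper's pipeline this grouping is the pre-processing stage
    run before the downstream classifier/detector."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # random initial centroids
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: each point goes to its nearest centroid
        labels = [min(range(k), key=lambda c: math.dist(p, centroids[c]))
                  for p in points]
        # update step: each centroid becomes the mean of its members
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = tuple(sum(v) / len(members)
                                     for v in zip(*members))
    return centroids, labels

# Toy data: two well-separated groups of records.
data = [(0.0, 0.1), (0.2, 0.0), (10.0, 10.1), (10.2, 9.9)]
cents, labs = kmeans(data, k=2)
```

In the study's setting, the structure found at this stage would then feed logistic regression (classification) or YOLOv4 (detection) rather than being inspected directly.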
Collapse
|
14
|
Wang Y, Ma R, Ma X, Cui H, Xiao Y, Wu X, Zhou Y. Shape-aware fine-grained classification of erythroid cells. APPL INTELL 2023. [DOI: 10.1007/s10489-023-04465-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/25/2023]
|
15
|
Chen YM, Tsai JT, Ho WH. Automatic identifying and counting blood cells in smear images by using single shot detector and Taguchi method. BMC Bioinformatics 2022; 22:635. [PMID: 36482316 PMCID: PMC9732976 DOI: 10.1186/s12859-022-05074-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2022] [Accepted: 11/22/2022] [Indexed: 12/13/2022] Open
Abstract
BACKGROUND Researchers have used deep learning methods of artificial intelligence to identify and count different blood cells in microscopic smear images, a highly time-consuming task. RESULTS The three types of blood cells are platelets, red blood cells, and white blood cells. This study used the ResNet50 network as the backbone of a single shot detector (SSD) for automatically identifying and counting the different blood cells and, in addition, proposed a systematic method for finding a better combination of algorithm hyperparameters of the ResNet50 network to improve identification and counting accuracy. The ResNet50 backbone of the SSD with its optimized hyperparameters, called the ResNet50-SSD model, was developed to enhance feature extraction for identifying and counting blood cells, and its hyperparameters were optimized with the Taguchi experimental method to improve detection accuracy. The experimental results show that the detection accuracy of the ResNet50-SSD model with 512 × 512 × 3 input images was better than that with 300 × 300 × 3 input images on the blood cell test set. Additionally, the ResNet50-SSD model using the hyperparameter combination obtained by the Taguchi method was more accurate than the model using the combination given by the MATLAB example. CONCLUSION On blood cell images acquired from the BCCD dataset, the proposed ResNet50-SSD model had higher accuracy in identifying and counting blood cells, especially white blood cells and red blood cells.
Collapse
Affiliation(s)
- Yao-Mei Chen
- School of Nursing, Kaohsiung Medical University, Kaohsiung 807, Taiwan; Superintendent Office, Kaohsiung Medical University Hospital, Kaohsiung 807, Taiwan
| | - Jinn-Tsong Tsai
- Department of Computer Science and Artificial Intelligence, National Pingtung University, Pingtung 900, Taiwan; Department of Healthcare Administration and Medical Informatics, Kaohsiung Medical University, Kaohsiung 807, Taiwan
| | - Wen-Hsien Ho
- Department of Healthcare Administration and Medical Informatics, Kaohsiung Medical University, Kaohsiung 807, Taiwan; Department of Medical Research, Kaohsiung Medical University Hospital, Kaohsiung 807, Taiwan
| |
Collapse
|
16
|
Chakraborty S, Mali K. Fuzzy and elitist cuckoo search based microscopic image segmentation approach. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109671] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
17
|
Zhang R, Han X, Lei Z, Jiang C, Gul I, Hu Q, Zhai S, Liu H, Lian L, Liu Y, Zhang Y, Dong Y, Zhang CY, Lam TK, Han Y, Yu D, Zhou J, Qin P. RCMNet: A deep learning model assists CAR-T therapy for leukemia. Comput Biol Med 2022; 150:106084. [PMID: 36155267 DOI: 10.1016/j.compbiomed.2022.106084] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Revised: 08/16/2022] [Accepted: 09/03/2022] [Indexed: 11/30/2022]
Abstract
Acute leukemia is a type of blood cancer with a high mortality rate. Current therapeutic methods include bone marrow transplantation, supportive therapy, and chemotherapy. Although a satisfactory remission of the disease can be achieved, the risk of recurrence is still high, so novel treatments are needed. Chimeric antigen receptor-T (CAR-T) therapy has emerged as a promising approach to treating and curing acute leukemia. To harness the therapeutic potential of CAR-T cell therapy for blood diseases, reliable cell morphological identification is crucial. Nevertheless, the identification of CAR-T cells is a major challenge posed by their phenotypic similarity with other blood cells. To address this substantial clinical challenge, we first construct a CAR-T dataset with 500 original microscopy images after staining. We then create a novel integrated model called RCMNet (ResNet18 with Convolutional Block Attention Module and Multi-Head Self-Attention) that combines a convolutional neural network (CNN) and a Transformer. The model shows 99.63% top-1 accuracy on the public dataset, a satisfactory image classification result compared with previous reports. When tested directly on the CAR-T cell dataset, only a decent performance is observed, which is attributed to the limited size of that dataset. Transfer learning is therefore adopted for RCMNet, and a maximum accuracy of 83.36% is achieved, higher than that of other state-of-the-art models. This study evaluates the effectiveness of RCMNet on a large public dataset and translates it to a clinical dataset for diagnostic applications.
Collapse
Affiliation(s)
- Ruitao Zhang
- Institute of Biopharmaceutical and Health Engineering, Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong 518055, China; Precision Medicine and Public Health, Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, Guangdong 518055, China
| | - Xueying Han
- The First Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang 150001, China
| | - Zhengyang Lei
- Institute of Biopharmaceutical and Health Engineering, Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong 518055, China; Precision Medicine and Public Health, Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, Guangdong 518055, China
| | - Chenyao Jiang
- Institute of Biopharmaceutical and Health Engineering, Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong 518055, China; Precision Medicine and Public Health, Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, Guangdong 518055, China
| | - Ijaz Gul
- Institute of Biopharmaceutical and Health Engineering, Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong 518055, China; Precision Medicine and Public Health, Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, Guangdong 518055, China
| | - Qiuyue Hu
- Institute of Biopharmaceutical and Health Engineering, Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong 518055, China; Precision Medicine and Public Health, Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, Guangdong 518055, China
| | - Shiyao Zhai
- Institute of Biopharmaceutical and Health Engineering, Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong 518055, China; Precision Medicine and Public Health, Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, Guangdong 518055, China
| | - Hong Liu
- Animal and Plant Inspection and Quarantine Technical Centre, Shenzhen Customs District, Shenzhen, Guangdong 518045, China
| | - Lijin Lian
- Institute of Biopharmaceutical and Health Engineering, Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong 518055, China; Precision Medicine and Public Health, Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, Guangdong 518055, China
| | - Ying Liu
- Animal and Plant Inspection and Quarantine Technical Centre, Shenzhen Customs District, Shenzhen, Guangdong 518045, China
| | - Yongbing Zhang
- Department of Computer Science, Harbin Institute of Technology, Shenzhen, Guangdong 518055, China
| | - Yuhan Dong
- Institute of Biopharmaceutical and Health Engineering, Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong 518055, China; Precision Medicine and Public Health, Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, Guangdong 518055, China
| | - Can Yang Zhang
- Institute of Biopharmaceutical and Health Engineering, Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong 518055, China; Precision Medicine and Public Health, Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, Guangdong 518055, China
| | - Tsz Kwan Lam
- Institute of Biopharmaceutical and Health Engineering, Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong 518055, China; Precision Medicine and Public Health, Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, Guangdong 518055, China
| | - Yuxing Han
- Precision Medicine and Public Health, Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, Guangdong 518055, China
| | - Dongmei Yu
- School of Mechanical, Electrical & Information Engineering, Shandong University, Weihai, Shandong 264209, China
| | - Jin Zhou
- The First Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang 150001, China
| | - Peiwu Qin
- Institute of Biopharmaceutical and Health Engineering, Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong 518055, China; Precision Medicine and Public Health, Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, Guangdong 518055, China.
| |
Collapse
|
18
|
Srinivasan R, Eugene Christo VR, Nambannor Kunnath R, Katare P, Venukumar A, Nambison NKM, Gorthi SS. Optical absorbance-based rapid test for the detection of sickle cell trait and sickle cell disease at the point-of-care. SPECTROCHIMICA ACTA. PART A, MOLECULAR AND BIOMOLECULAR SPECTROSCOPY 2022; 279:121394. [PMID: 35660146 DOI: 10.1016/j.saa.2022.121394] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/23/2022] [Revised: 05/10/2022] [Accepted: 05/11/2022] [Indexed: 06/15/2023]
Abstract
People afflicted with sickle cell disease (SCD) experience severe deterioration in quality of life. The disease is characterized by debilitating pain, anemia, and increased susceptibility to life-threatening infections. This genetic disorder is endemic to many parts of the world. Extensive and accurate screening of individuals with sickle cell trait (SCT) in the population, coupled with genetic counselling, can inhibit the propagation of the disease. The gold-standard techniques for the detection of sickle hemoglobin, such as capillary electrophoresis, HPLC, and genetic testing, are prohibitively expensive and time-consuming. Mass screening is usually conducted with a low-cost test called the solubility test, which does not offer high specificity. This study proposes a game-changing single-step low-cost method for rapidly yet accurately screening and diagnosing SCD and SCT. This method relies on the hitherto unexplored differences in the optical absorbance between diseased, trait, and normal blood samples under deoxygenated conditions. The proposed method was tested in two phases of clinical validation: a pilot study and a blind study. A total of 438 patient samples were tested using the proposed method across the two phases. The proposed method offers an average accuracy, sensitivity, and specificity of 97.6%, 96.9%, and 98.6%, respectively. The proposed test has the potential to obviate the conventional two-step process of screening and diagnostic tests, as it can be used at the point-of-care with minimal training and yet yield results reliable enough to assess disability benefit claims.
Collapse
Affiliation(s)
- Rajesh Srinivasan
- Department of Instrumentation and Applied Physics, Indian Institute of Science, Bangalore, India
| | - V R Eugene Christo
- Department of Instrumentation and Applied Physics, Indian Institute of Science, Bangalore, India
| | | | - Prateek Katare
- Department of Instrumentation and Applied Physics, Indian Institute of Science, Bangalore, India
| | - Aravind Venukumar
- Department of Instrumentation and Applied Physics, Indian Institute of Science, Bangalore, India
| | - Nisanth K M Nambison
- Government Homeopathic Medical College and Hospital, Bhopal, Madhya Pradesh, India
| | - Sai Siva Gorthi
- Department of Instrumentation and Applied Physics, Indian Institute of Science, Bangalore, India.
| |
Collapse
|
19
|
Alagu S, Ganesan K, Bhoopathy Bagan K. A novel deep learning approach for sickle cell anemia detection in human RBCs using an improved wrapper-based feature selection technique in microscopic blood smear images. BIOMED ENG-BIOMED TE 2022; 68:175-185. [PMID: 36197949 DOI: 10.1515/bmt-2021-0127] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2021] [Accepted: 09/13/2022] [Indexed: 11/15/2022]
Abstract
Sickle Cell Anemia (SCA) is a disorder of the Red Blood Cells (RBCs) in human blood. Children under five years of age and pregnant women are most affected by SCA, and early diagnosis of this ailment can save lives. In recent years, computer-aided diagnosis of SCA has been preferred to address this issue. A novel and effective deep learning approach for the identification of sickle cell anemia is proposed in this work. Around nine hundred microscopic images of human red blood cells were obtained from the public database 'erythrocytesIDB' and resized uniformly. A total of 2048 deep features were extracted from the fully connected layer of the pre-trained InceptionV3 model. These features were then classified using optimization-based methods: an improved wrapper-based feature selection technique implemented with Multi-Objective Binary Grey Wolf Optimization (MO-BGWO) and KNN and SVM classifiers. Detection of sickle cells was also performed with the typical InceptionV3 model using its SoftMax layer. The performance of the proposed system is high compared with classification using the original InceptionV3 model. The results are validated by various evaluation metrics such as accuracy, precision, sensitivity, specificity, and F1-score; the SVM classifier yields a high accuracy of about 96%. The optimal subset of deep features along with SVM enhances system performance, making the proposed approach appropriate for pathologists to take early clinical decisions on the detection of sickle cells.
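The wrapper idea described in this abstract (the classifier itself scores candidate feature subsets) can be sketched with a greedy forward-selection loop around a leave-one-out 1-NN classifier. This is a single-objective stand-in for the paper's Multi-Objective Binary Grey Wolf Optimization, and all names and toy data below are invented for the example:

```python
import math

def knn_accuracy(X, y, feats):
    """Leave-one-out 1-NN accuracy using only the feature indices in
    `feats` -- the 'wrapper' evaluation: the classifier scores the subset."""
    correct = 0
    for i, xi in enumerate(X):
        nearest = min((j for j in range(len(X)) if j != i),
                      key=lambda j: math.dist([xi[f] for f in feats],
                                              [X[j][f] for f in feats]))
        correct += y[nearest] == y[i]
    return correct / len(X)

def forward_select(X, y):
    """Greedy forward wrapper selection: keep adding the feature that most
    improves wrapped-classifier accuracy; stop when nothing improves."""
    chosen, remaining, score = [], set(range(len(X[0]))), 0.0
    while remaining:
        best = max(remaining, key=lambda f: knn_accuracy(X, y, chosen + [f]))
        best_score = knn_accuracy(X, y, chosen + [best])
        if best_score <= score:
            break
        chosen.append(best)
        remaining.discard(best)
        score = best_score
    return chosen

# Toy data: feature 0 separates the classes; feature 1 is noise.
X = [(0.0, 5.0), (0.1, 1.0), (0.2, 9.0), (5.0, 5.0), (5.1, 2.0), (5.2, 8.0)]
y = [0, 0, 0, 1, 1, 1]
```

The same loop applies unchanged whether the wrapped scores come from a 1-NN, an SVM, or deep features from a pre-trained network.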
Collapse
Affiliation(s)
- Alagu S
- Department of Electronics Engineering, Madras Institute of Technology, Chennai, India
| | - Kavitha Ganesan
- Department of Electronics Engineering, Madras Institute of Technology, Chennai, India
| | - Bhoopathy Bagan K
- Department of Electronics Engineering, Madras Institute of Technology, Chennai, India
| |
Collapse
|
20
|
Mukhlif AA, Al-Khateeb B, Mohammed MA. An extensive review of state-of-the-art transfer learning techniques used in medical imaging: Open issues and challenges. JOURNAL OF INTELLIGENT SYSTEMS 2022. [DOI: 10.1515/jisys-2022-0198] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
Deep learning techniques, built on convolutional neural networks, have shown excellent results in a variety of areas, including image processing and interpretation. However, as the depth of these networks grows, so does the demand for the large amounts of labeled data required to train them. The medical field in particular suffers from a lack of images, because obtaining labeled medical images in healthcare is difficult, expensive, and requires specialized expertise, and the labeling process may be error-prone and time-consuming. Current research has revealed transfer learning as a viable solution to this problem: transfer learning allows knowledge gained from a previous task to be transferred to improve and tackle a new problem. This study conducts a comprehensive survey of recent studies that dealt with this problem and of the most important metrics used to evaluate these methods. In addition, it identifies problems in transfer learning techniques, and highlights problems with medical datasets and potential issues that can be addressed in future research. According to our review, many researchers use models pre-trained on the ImageNet dataset (VGG16, ResNet, Inception v3) in applications such as skin cancer, breast cancer, and diabetic retinopathy classification. These techniques require further investigation, since the models were trained on natural, non-medical images. Many researchers also use data augmentation techniques to expand their datasets and avoid overfitting; however, not enough studies have compared performance with and without data augmentation. Accuracy, recall, precision, F1 score, receiver operating characteristic curve, and area under the curve (AUC) were the most widely used measures in these studies. Furthermore, we identified problems in the datasets for melanoma and breast cancer and suggested corresponding solutions.
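The headline measures this review found most common reduce to a few counting formulas over the confusion matrix. A minimal, stdlib-only sketch for the binary case follows (the ROC curve and AUC are omitted for brevity; the function is illustrative, not from any surveyed work):

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for binary labels (0/1)."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))  # true positives
    tn = sum(t == p == 0 for t, p in zip(y_true, y_pred))  # true negatives
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return {"accuracy": acc, "precision": prec, "recall": rec, "f1": f1}
```

Comparing these numbers with and without data augmentation, as the review suggests, requires nothing more than running the same function on both sets of predictions.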
Collapse
Affiliation(s)
- Abdulrahman Abbas Mukhlif
- Computer Science Department, College of Computer Science and Information Technology, University of Anbar , 31001 , Ramadi , Anbar , Iraq
| | - Belal Al-Khateeb
- Computer Science Department, College of Computer Science and Information Technology, University of Anbar , 31001 , Ramadi , Anbar , Iraq
| | - Mazin Abed Mohammed
- Computer Science Department, College of Computer Science and Information Technology, University of Anbar , 31001 , Ramadi , Anbar , Iraq
| |
Collapse
|
21
|
K.T. N, Prasad K, Singh BMK. Analysis of red blood cells from peripheral blood smear images for anemia detection: a methodological review. Med Biol Eng Comput 2022; 60:2445-2462. [PMID: 35838854 PMCID: PMC9365735 DOI: 10.1007/s11517-022-02614-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2020] [Accepted: 04/22/2022] [Indexed: 11/10/2022]
Abstract
Anemia is a blood disorder caused by inadequate red blood cells and hemoglobin concentration. It occurs in all phases of the life cycle but is more dominant in pregnant women and infants. According to a survey by the World Health Organization (WHO) (McLean et al., Public Health Nutr 12(4):444–454, 2009), anemia affects 1.62 billion people, 24.8% of the population, and is considered the world's second leading cause of illness. The Peripheral Blood Smear (PBS) examination plays an important role in evaluating hematological disorders, and anemia is diagnosed using PBS. Although the most powerful analytical tool, the manual analysis approach is still in use even though it is tedious, error-prone, time-consuming, and requires qualified laboratorians. There is evidently a need for an inexpensive, automatic, and robust technique to detect RBC disorders from PBS. Automation of PBS analysis is a very active field of research that has motivated many groups to develop methods using image processing. In this paper, we present a review of the methods used to analyze the characteristics of RBCs from PBS images using image processing techniques. We have categorized these methods into three groups: RBC segmentation, RBC classification and detection of anemia, and classification of anemia. The outcome of this review is presented as a list of observations.
Collapse
|
22
|
Gil T, Moon CI, Lee S, Lee O. Automatic analysis system for abnormal red blood cells in peripheral blood smears. Microsc Res Tech 2022; 85:3623-3632. [PMID: 35916360 DOI: 10.1002/jemt.24215] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2022] [Revised: 07/06/2022] [Accepted: 07/16/2022] [Indexed: 11/08/2022]
Abstract
The type and ratio of abnormal red blood cells (RBCs) in blood can be identified through a peripheral blood smear test. Accurate classification is important because the accompanying diseases indicated by abnormal RBCs vary. In clinical practice this task is time-consuming because the RBCs are classified manually, and because the classification depends on the subjective criteria of pathologists, objective classification is difficult to achieve. In this paper, an automatic classification method based solely on images of RBCs captured under a microscope and processed using machine learning (ML) is proposed. The size and hemoglobin abnormalities of RBCs were classified by optimizing the criteria used in clinical practice. Morphologically abnormal RBCs were classified using seven geometric features (major axis, minor axis, ratio of major to minor axis, perimeter, circularity, number of convex hulls, and the difference between the cell area and its convex area) and five classifiers (Support Vector Machine, Decision Tree, K-Nearest Neighbor, Random Forest, and AdaBoost). Among these, the SVM gave highly accurate results (99.9%). The classification is performed simultaneously, and results are provided to the user through a graphical user interface (GUI).
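Several of the geometric descriptors this abstract lists reduce to short formulas. The sketch below computes circularity (4πA/P², 1.0 for a perfect circle), the axis ratio, and an area deficit relative to the ellipse spanned by the two axes; the assumption that the axis lengths are full diameters is ours, not the paper's:

```python
import math

def shape_features(area, perimeter, major_axis, minor_axis):
    """Three geometric descriptors of a segmented cell outline.
    Axis lengths are taken as full diameters (this sketch's assumption)."""
    circularity = 4 * math.pi * area / perimeter ** 2   # 1.0 for a circle
    axis_ratio = major_axis / minor_axis                # 1.0 for a circle
    ellipse_area = math.pi * major_axis * minor_axis / 4
    return circularity, axis_ratio, area - ellipse_area

# A normal (circular) RBC profile of radius 5 should score as a circle.
c, r, d = shape_features(math.pi * 25, 2 * math.pi * 5, 10, 10)
```

Elongated or fragmented cells depress circularity and push the axis ratio above 1, which is what lets simple classifiers like those in the paper separate abnormal shapes.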
Collapse
Affiliation(s)
- Taeyeon Gil
- Department of Software Convergence, Graduate School, Soonchunhyang University, Asan City, Chungnam-do, Republic of Korea
| | - Cho-I Moon
- Department of Software Convergence, Graduate School, Soonchunhyang University, Asan City, Chungnam-do, Republic of Korea
| | - Sukjun Lee
- Department of Biomedical Laboratory Science, College of Health and Medical Sciences, Cheongju University, Cheongju City, Chungbuk, Republic of Korea
| | - Onseok Lee
- Department of Software Convergence, Graduate School, Soonchunhyang University, Asan City, Chungnam-do, Republic of Korea.,Department of Medical IT Engineering, College of Medical Sciences, Soonchunhyang University, Asan City, Chungnam-do, Republic of Korea
| |
Collapse
|
23
|
Biomedical Microscopic Imaging in Computational Intelligence Using Deep Learning Ensemble Convolution Learning-Based Feature Extraction and Classification. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:3531308. [PMID: 35795729 PMCID: PMC9252635 DOI: 10.1155/2022/3531308] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/05/2022] [Revised: 04/30/2022] [Accepted: 05/05/2022] [Indexed: 11/22/2022]
Abstract
Microscopy image analysis gives quantitative support for characterizing various diseases, including breast cancer, lung cancer, and brain tumors, and is therefore crucial in computer-assisted diagnosis and prognosis. Understanding the biological principles underlying these dynamic image sequences often necessitates precise analysis and statistical quantification, a major challenge in the discipline. Deep learning methods are increasingly used in the rapidly growing field of bioimage processing. This research proposes novel biomedical microscopic image analysis techniques using deep learning architectures based on feature extraction and classification. The input microscopic image is processed and analyzed for noise removal, edge smoothing, and normalization. Features are extracted from the processed image using a ConVol_NN architecture with an AlexNet model and then classified using an ensemble of Inception-ResNet and VGG-16 (EN_InResNet_VGG-16) architectures. The experimental results across the analyzed datasets report an accuracy of 98%, a precision of 90%, a computational time of 79%, an SNR of 89%, and an MSE of 62%.
Collapse
|
24
|
|
25
|
Abstract
Blood cell analysis is essential for the diagnosis and identification of hematological malignancies, and the use of digital microscopy systems has been extended in clinical laboratories. Super-resolution microscopy (SRM) has attracted wide attention in the medical field due to its nanoscale spatial resolution and high sensitivity. It is considered a potential method of blood cell analysis that may offer advantages over traditional approaches such as conventional optical microscopy and hematology analyzers in certain examinations. In this review, we first summarize several common blood cell analysis technologies in the clinic and analyze their advantages and disadvantages. We then focus on the basic principles and characteristics of three representative SRM techniques, as well as the latest advances in these techniques for blood cell analysis. Finally, we discuss the developmental trends and possible research directions of SRM and the further development of technologies for blood cell analysis.
Collapse
|
26
|
|
27
|
Lamoureux ES, Islamzada E, Wiens MVJ, Matthews K, Duffy SP, Ma H. Assessing red blood cell deformability from microscopy images using deep learning. LAB ON A CHIP 2021; 22:26-39. [PMID: 34874395 DOI: 10.1039/d1lc01006a] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Red blood cells (RBCs) must be highly deformable to transit through the microvasculature to deliver oxygen to tissues. The loss of RBC deformability resulting from pathology, natural aging, or storage in blood bags can impede the proper function of these cells. A variety of methods have been developed to measure RBC deformability, but these methods require specialized equipment, long measurement time, and highly skilled personnel. To address this challenge, we investigated whether a machine learning approach could be used to predict donor RBC deformability based on morphological features from single cell microscope images. We used the microfluidic ratchet device to sort RBCs based on deformability. Sorted cells are then imaged and used to train a deep learning model to classify RBC based image features related to cell deformability. This model correctly predicted deformability of individual RBCs with 81 ± 11% accuracy averaged across ten donors. Using this model to score the deformability of RBC samples was accurate to within 10.4 ± 6.8% of the value obtained using the microfluidic ratchet device. While machine learning methods are frequently developed to automate human image analysis, our study is remarkable in showing that deep learning of single cell microscopy images could be used to assess RBC deformability, a property not normally measurable by imaging. Measuring RBC deformability by imaging is also desirable because it can be performed rapidly using a standard microscopy system, potentially enabling RBC deformability studies to be performed as part of routine clinical assessments.
Collapse
Affiliation(s)
- Erik S Lamoureux
- Department of Mechanical Engineering, University of British Columbia, 2054-6250 Applied Science Lane, Vancouver, BC, V6T 1Z4, Canada
- Centre for Blood Research, University of British Columbia, Vancouver, BC, Canada
- Emel Islamzada
- Centre for Blood Research, University of British Columbia, Vancouver, BC, Canada
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- Matthew V J Wiens
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Kerryn Matthews
- Department of Mechanical Engineering, University of British Columbia, 2054-6250 Applied Science Lane, Vancouver, BC, V6T 1Z4, Canada
- Centre for Blood Research, University of British Columbia, Vancouver, BC, Canada
- Simon P Duffy
- Department of Mechanical Engineering, University of British Columbia, 2054-6250 Applied Science Lane, Vancouver, BC, V6T 1Z4, Canada
- Centre for Blood Research, University of British Columbia, Vancouver, BC, Canada
- British Columbia Institute of Technology, Burnaby, BC, Canada
- Hongshen Ma
- Department of Mechanical Engineering, University of British Columbia, 2054-6250 Applied Science Lane, Vancouver, BC, V6T 1Z4, Canada
- Centre for Blood Research, University of British Columbia, Vancouver, BC, Canada
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Vancouver Prostate Centre, Vancouver General Hospital, Vancouver, BC, Canada
28
Benyahia S, Meftah B, Lézoray O. Multi-features extraction based on deep learning for skin lesion classification. Tissue Cell 2021; 74:101701. [PMID: 34861582 DOI: 10.1016/j.tice.2021.101701] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2021] [Revised: 11/22/2021] [Accepted: 11/23/2021] [Indexed: 10/19/2022]
Abstract
For various forms of skin lesion, many different feature extraction methods have been investigated so far. Indeed, feature extraction is a crucial step in machine learning processes, and we can distinguish between handcrafted and deep learning features. In this paper, we investigate the efficiency of 17 commonly used pre-trained convolutional neural network (CNN) architectures as feature extractors, paired with 24 machine learning classifiers, for the classification of skin lesions from two different datasets: ISIC 2019 and PH2. We find that DenseNet201 combined with Fine KNN or Cubic SVM achieves the best accuracy (92.34% and 91.71%) on the ISIC 2019 dataset. The results also show that the suggested method outperforms other approaches with an accuracy of 99% on the PH2 dataset.
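The pipeline summarized above, a pre-trained CNN used purely as a feature extractor feeding a classical classifier, can be sketched as follows. This is a hedged illustration only: the CNN step (e.g., DenseNet201 activations) is replaced by hand-written placeholder vectors, and the feature values and labels are hypothetical.

```python
# Sketch: classical k-NN classifier over fixed feature vectors, standing in
# for "pre-trained CNN features + Fine KNN". Placeholder data, not real features.
import math
from collections import Counter

def knn_predict(train_feats, train_labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training vectors."""
    dists = sorted(
        (math.dist(feat, query), label)
        for feat, label in zip(train_feats, train_labels)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Placeholder "CNN features" for two lesion classes (hypothetical values).
feats = [(0.1, 0.2), (0.0, 0.3), (0.9, 0.8), (1.0, 0.7)]
labels = ["benign", "benign", "malignant", "malignant"]
print(knn_predict(feats, labels, (0.95, 0.75)))  # → malignant
```

In the paper, the feature vectors would come from a network's penultimate layer and the classifier would be fit on thousands of such vectors; the design choice is that only the cheap classifier is trained, not the CNN.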
Affiliation(s)
- Samia Benyahia
- Department of Computer Science, Faculty of Exact Sciences, University of Mascara, Mascara, Algeria
- Olivier Lézoray
- Normandie Univ, UNICAEN, ENSICAEN, CNRS, GREYC, Caen, France
29
Automatic Detection and Counting of Blood Cells in Smear Images Using RetinaNet. ENTROPY 2021; 23:e23111522. [PMID: 34828220 PMCID: PMC8618480 DOI: 10.3390/e23111522] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/10/2021] [Revised: 11/10/2021] [Accepted: 11/11/2021] [Indexed: 01/06/2023]
Abstract
A complete blood count is one of the significant clinical tests that evaluates overall human health and provides relevant information for disease diagnosis. Conventional strategies of blood cell counting, including manual counting and counting with a hemocytometer, are tedious and time-consuming tasks. This paper proposes an automatic software-based alternative that counts blood cells accurately using the RetinaNet deep learning network, which recognizes and classifies objects in microscopic images. After training, the network automatically recognizes and counts red blood cells, white blood cells, and platelets. We tested a model trained on smear images and found that it generalizes well. We assessed the quality of detection and cell counting using performance measures such as accuracy, sensitivity, precision, and F1-score. Moreover, we studied how the recognition and counting results depend on the confidence threshold and the number of training epochs. We compared the performance of the proposed approach with results obtained by other authors on the cell-counting task and show that object detection and labeling can be an additional advantage in the task of counting objects.
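The confidence-threshold dependence mentioned above comes from the final counting step: detector output is filtered by a score threshold before per-class counts are taken. A minimal sketch, assuming detections are already available as (class, confidence) pairs; the values below are illustrative, not real RetinaNet output.

```python
# Sketch: turn detector output into per-class cell counts, keeping only
# detections whose confidence clears a threshold. Illustrative data only.
from collections import Counter

def count_cells(detections, threshold=0.5):
    """Count detections per class, keeping only those at or above `threshold`."""
    return Counter(cls for cls, conf in detections if conf >= threshold)

dets = [("RBC", 0.92), ("RBC", 0.41), ("WBC", 0.88),
        ("platelet", 0.73), ("RBC", 0.66)]
print(count_cells(dets, threshold=0.5))
# → Counter({'RBC': 2, 'WBC': 1, 'platelet': 1})
```

Raising the threshold trades missed cells for fewer false counts, which is exactly the dependence the paper studies.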
30
Abstract
At present, diverse, innovative technology is used in electronics and ubiquitous computing environments [...]
31
Alzubaidi L, Duan Y, Al-Dujaili A, Ibraheem IK, Alkenani AH, Santamaría J, Fadhel MA, Al-Shamma O, Zhang J. Deepening into the suitability of using pre-trained models of ImageNet against a lightweight convolutional neural network in medical imaging: an experimental study. PeerJ Comput Sci 2021; 7:e715. [PMID: 34722871 PMCID: PMC8530098 DOI: 10.7717/peerj-cs.715] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2021] [Accepted: 08/24/2021] [Indexed: 05/14/2023]
Abstract
Transfer learning (TL) has been widely utilized to address the lack of training data for deep learning models. Specifically, one of the most popular uses of TL has been with models pre-trained on the ImageNet dataset. Nevertheless, although these pre-trained models have performed effectively in several application domains, they may not offer significant benefits in all medical imaging scenarios. Such models were designed to classify a thousand classes of natural images, and there are fundamental differences in learned features between these models and those needed for medical imaging tasks. Most medical imaging applications involve only two to ten classes, for which we suspect deeper models may not be necessary. This paper investigates that hypothesis through an experimental study. A lightweight convolutional neural network (CNN) model and the pre-trained models were evaluated on three different medical imaging datasets, each trained under two scenarios: once with a small number of images and once with a large number of images. Surprisingly, the lightweight model trained from scratch achieved more competitive performance than the pre-trained models. More importantly, the lightweight CNN model can be successfully trained and tested using basic computational tools while providing high-quality results, specifically on medical imaging datasets.
Affiliation(s)
- Laith Alzubaidi
- School of Computer Science, Queensland University of Technology, Brisbane, Queensland, Australia
- AlNidhal Campus, University of Information Technology & Communications, Baghdad, Baghdad, Iraq
- Ye Duan
- Faculty of Electrical Engineering & Computer Science, University of Missouri - Columbia, Columbia, Missouri, United States
- Ayad Al-Dujaili
- Electrical Engineering Technical College, Middle Technical University, Baghdad, Baghdad, Iraq
- Ibraheem Kasim Ibraheem
- Department of Electrical Engineering, College of Engineering, University of Baghdad, Baghdad, Baghdad, Iraq
- Ahmed H. Alkenani
- School of Computer Science, Queensland University of Technology, Brisbane, Queensland, Australia
- The Australian E-Health Research Centre, CSIRO, Brisbane, Queensland, Australia
- Jose Santamaría
- Department of Computer Science, University of Jaén, Jaén, Jaén, Spain
- Mohammed A. Fadhel
- College of Computer Science and Information Technology, University of Sumer, Rafia, Thi Qar, Iraq
- Omran Al-Shamma
- AlNidhal Campus, University of Information Technology & Communications, Baghdad, Baghdad, Iraq
- Jinglan Zhang
- School of Computer Science, Queensland University of Technology, Brisbane, Queensland, Australia
32
Lebleux M, Denimal E, De Oliveira D, Marin A, Desroche N, Alexandre H, Weidmann S, Rousseaux S. Prediction of Genetic Groups within Brettanomyces bruxellensis through Cell Morphology Using a Deep Learning Tool. J Fungi (Basel) 2021; 7:jof7080581. [PMID: 34436120 PMCID: PMC8396822 DOI: 10.3390/jof7080581] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2021] [Revised: 07/16/2021] [Accepted: 07/18/2021] [Indexed: 11/16/2022] Open
Abstract
Brettanomyces bruxellensis is described as a wine spoilage yeast with many mainly strain-dependent genetic characteristics, bestowing tolerance against environmental stresses and persistence during the winemaking process. Thus, it is essential to discriminate B. bruxellensis isolates at the strain level in order to predict their stress resistance capacities. Few predictive tools are available to reveal intraspecific diversity within B. bruxellensis species; also, they require expertise and can be expensive. In this study, a Random Amplified Polymorphic DNA (RAPD) adapted PCR method was used with three different primers to discriminate 74 different B. bruxellensis isolates. High correlation between the results of this method using the primer OPA-09 and those of a previous microsatellite analysis was obtained, allowing us to cluster the isolates among four genetic groups more quickly and cheaply than microsatellite analysis. To make analysis even faster, we further investigated the correlation suggested in a previous study between genetic groups and cell polymorphism using the analysis of optical microscopy images via deep learning. A Convolutional Neural Network (CNN) was trained to predict the genetic group of B. bruxellensis isolates with 96.6% accuracy. These methods make intraspecific discrimination among B. bruxellensis species faster, simpler and less costly. These results open up very promising new perspectives in oenology for the study of microbial ecosystems.
Affiliation(s)
- Manon Lebleux
- Laboratoire VAlMiS-IUVV, AgroSup Dijon, UMR PAM A 02.102, University Bourgogne Franche-Comté, F-21000 Dijon, France
- Emmanuel Denimal
- AgroSup Dijon, Direction Scientifique, Appui à la Recherche, 26 Boulevard Docteur Petitjean, F-21000 Dijon, France
- Déborah De Oliveira
- Laboratoire VAlMiS-IUVV, AgroSup Dijon, UMR PAM A 02.102, University Bourgogne Franche-Comté, F-21000 Dijon, France
- Ambroise Marin
- Plateau D'imagerie DimaCell, Esplanade Erasme, Agrosup Dijon, UMR PAM A 02.102, University Bourgogne Franche-Comté, F-21000 Dijon, France
- Hervé Alexandre
- Laboratoire VAlMiS-IUVV, AgroSup Dijon, UMR PAM A 02.102, University Bourgogne Franche-Comté, F-21000 Dijon, France
- Stéphanie Weidmann
- Laboratoire VAlMiS-IUVV, AgroSup Dijon, UMR PAM A 02.102, University Bourgogne Franche-Comté, F-21000 Dijon, France
- Sandrine Rousseaux
- Laboratoire VAlMiS-IUVV, AgroSup Dijon, UMR PAM A 02.102, University Bourgogne Franche-Comté, F-21000 Dijon, France
33
Liu Z, Jin L, Chen J, Fang Q, Ablameyko S, Yin Z, Xu Y. A survey on applications of deep learning in microscopy image analysis. Comput Biol Med 2021; 134:104523. [PMID: 34091383 DOI: 10.1016/j.compbiomed.2021.104523] [Citation(s) in RCA: 42] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2021] [Revised: 05/13/2021] [Accepted: 05/17/2021] [Indexed: 01/12/2023]
Abstract
Advanced microscopy enables us to acquire large quantities of time-lapse images to visualize the dynamic characteristics of tissues, cells, or molecules. Microscopy images typically vary in signal-to-noise ratio and contain a wealth of information, the processing of which requires multiple parameters and time-consuming iterative algorithms. Precise analysis and statistical quantification are often needed to understand the biological mechanisms underlying these dynamic image sequences, which has become a big challenge in the field. As deep learning technologies develop quickly, they are being applied in bioimage processing more and more frequently. Novel deep learning models based on convolutional neural networks have been developed and shown to achieve inspiring outcomes. This review article introduces the applications of deep learning algorithms in microscopy image analysis, including image classification, region segmentation, object tracking, and super-resolution reconstruction. We also discuss the drawbacks of existing deep learning-based methods, especially the challenges of training dataset acquisition and evaluation, and propose potential solutions. Furthermore, the latest developments in augmented intelligent microscopy based on deep learning technology may lead to a revolution in biomedical research.
Affiliation(s)
- Zhichao Liu
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, 310027, China; Alibaba-Zhejiang University Joint Research Center of Future Digital Healthcare, Hangzhou, 310058, China
- Luhong Jin
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, 310027, China; Alibaba-Zhejiang University Joint Research Center of Future Digital Healthcare, Hangzhou, 310058, China
- Jincheng Chen
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, 310027, China; Alibaba-Zhejiang University Joint Research Center of Future Digital Healthcare, Hangzhou, 310058, China
- Qiuyu Fang
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, 310027, China
- Sergey Ablameyko
- National Academy of Sciences, United Institute of Informatics Problems, Belarusian State University, Minsk, 220012, Belarus
- Zhaozheng Yin
- AI Institute, Department of Biomedical Informatics and Department of Computer Science, Stony Brook University, Stony Brook, NY, 11794, USA
- Yingke Xu
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, 310027, China; Department of Endocrinology, The Affiliated Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, 310016, China; Alibaba-Zhejiang University Joint Research Center of Future Digital Healthcare, Hangzhou, 310058, China
34
Techniques for the Detection of Sickle Cell Disease: A Review. MICROMACHINES 2021; 12:mi12050519. [PMID: 34063111 PMCID: PMC8148117 DOI: 10.3390/mi12050519] [Citation(s) in RCA: 34] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/18/2021] [Revised: 04/17/2021] [Accepted: 04/25/2021] [Indexed: 12/14/2022]
Abstract
Sickle cell disease (SCD) is a widespread disease caused by a mutation in the beta-globin gene that leads to the production of abnormal hemoglobin called hemoglobin S. The inheritance of the mutation can be homozygous or heterozygous combined with another hemoglobin mutation. SCD is characterized by the presence of dense, sickled cells that cause hemolysis of blood cells, anemia, painful episodes, organ damage, and, in some cases, death. Early detection of SCD can help reduce mortality and manage the disease effectively. Therefore, different techniques have been developed to detect sickle cell disease and carrier states with high sensitivity and specificity. These include screening tests such as the complete blood count, peripheral blood smears, and the sickling test; confirmatory tests such as hemoglobin separation techniques; and genetic tests, which are more expensive and need to be done in centralized labs by highly skilled personnel. However, portable point-of-care techniques have been developed to provide low-cost, simple, and user-friendly devices for detecting SCD, for instance by coupling solubility tests with portable devices, smartphone-based microscopic classification, image processing techniques, rapid immunoassays, and sensor-based platforms. This review provides an overview of current and emerging techniques for sickle cell disease detection and highlights the different potential methods that could aid the early diagnosis of SCD.
35
Alzubaidi L, Zhang J, Humaidi AJ, Al-Dujaili A, Duan Y, Al-Shamma O, Santamaría J, Fadhel MA, Al-Amidie M, Farhan L. Review of deep learning: concepts, CNN architectures, challenges, applications, future directions. JOURNAL OF BIG DATA 2021; 8:53. [PMID: 33816053 PMCID: PMC8010506 DOI: 10.1186/s40537-021-00444-8] [Citation(s) in RCA: 671] [Impact Index Per Article: 223.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/21/2021] [Accepted: 03/22/2021] [Indexed: 05/04/2023]
Abstract
In the last few years, the deep learning (DL) computing paradigm has been deemed the gold standard in the machine learning (ML) community. It has gradually become the most widely used computational approach in the field of ML, achieving outstanding results on several complex cognitive tasks and matching or even beating human performance. One of the benefits of DL is the ability to learn from massive amounts of data. The DL field has grown fast in the last few years and has been used successfully in a wide range of traditional applications. More importantly, DL has outperformed well-known ML techniques in many domains, e.g., cybersecurity, natural language processing, bioinformatics, robotics and control, and medical information processing, among many others. Although several works have reviewed the state of the art of DL, each has tackled only one aspect of it, leading to an overall lack of knowledge about the field. Therefore, in this contribution, we take a more holistic approach in order to provide a more suitable starting point from which to develop a full understanding of DL. Specifically, this review attempts to provide a more comprehensive survey of the most important aspects of DL, including the enhancements recently added to the field. In particular, this paper outlines the importance of DL and presents the types of DL techniques and networks. It then presents convolutional neural networks (CNNs), the most utilized DL network type, and describes the development of CNN architectures together with their main features, starting with the AlexNet network and closing with the High-Resolution network (HR.Net). Finally, we present the challenges and suggested solutions to help researchers understand the existing research gaps, followed by a list of the major DL applications. Computational tools including FPGAs, GPUs, and CPUs are summarized along with a description of their influence on DL. The paper ends with the evolution matrix, benchmark datasets, and a summary and conclusion.
Affiliation(s)
- Laith Alzubaidi
- School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia
- AlNidhal Campus, University of Information Technology & Communications, Baghdad, 10001, Iraq
- Jinglan Zhang
- School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Amjad J. Humaidi
- Control and Systems Engineering Department, University of Technology, Baghdad, 10001, Iraq
- Ayad Al-Dujaili
- Electrical Engineering Technical College, Middle Technical University, Baghdad, 10001, Iraq
- Ye Duan
- Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA
- Omran Al-Shamma
- AlNidhal Campus, University of Information Technology & Communications, Baghdad, 10001, Iraq
- J. Santamaría
- Department of Computer Science, University of Jaén, 23071 Jaén, Spain
- Mohammed A. Fadhel
- College of Computer Science and Information Technology, University of Sumer, Thi Qar, 64005, Iraq
- Muthana Al-Amidie
- Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA
- Laith Farhan
- School of Engineering, Manchester Metropolitan University, Manchester, M1 5GD, UK
36
Alzubaidi L, Al-Amidie M, Al-Asadi A, Humaidi AJ, Al-Shamma O, Fadhel MA, Zhang J, Santamaría J, Duan Y. Novel Transfer Learning Approach for Medical Imaging with Limited Labeled Data. Cancers (Basel) 2021; 13:1590. [PMID: 33808207 PMCID: PMC8036379 DOI: 10.3390/cancers13071590] [Citation(s) in RCA: 65] [Impact Index Per Article: 21.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2021] [Revised: 03/24/2021] [Accepted: 03/27/2021] [Indexed: 12/27/2022] Open
Abstract
Deep learning requires a large amount of data to perform well. However, the field of medical image analysis suffers from a lack of sufficient data for training deep learning models. Moreover, medical images require manual labeling, usually provided by human annotators from various backgrounds, and the annotation process is time-consuming, expensive, and prone to errors. Transfer learning was introduced to reduce the need for annotation by transferring a deep learning model's knowledge from a previous task and then fine-tuning it on a relatively small dataset for the current task. Most medical image classification methods employ transfer learning from models pretrained on natural images, e.g., ImageNet, which has been shown to be ineffective due to the mismatch between the features learned from natural images and those needed for medical images; it also results in the use of unnecessarily elaborate models. In this paper, we propose a novel transfer learning approach that overcomes these drawbacks by first training the deep learning model on large unlabeled medical image datasets and then transferring that knowledge to train on the small amount of labeled medical images. Additionally, we propose a new deep convolutional neural network (DCNN) model that combines recent advancements in the field. We conducted several experiments on two challenging medical imaging scenarios: skin and breast cancer classification. The proposed approach significantly improved performance in both. For skin cancer, the proposed model achieved an F1-score of 89.09% when trained from scratch and 98.53% with the proposed approach. For breast cancer, it achieved an accuracy of 85.29% when trained from scratch and 97.51% with the proposed approach. We conclude that our method can be applied to many medical imaging problems in which a substantial amount of unlabeled image data is available and labeled image data is limited, and that it can be utilized to improve performance on medical imaging tasks in the same domain. To demonstrate this, we used the pretrained skin cancer model to classify foot skin images into two classes, normal or abnormal (diabetic foot ulcer, DFU), achieving an F1-score of 86.0% when trained from scratch, 96.25% using transfer learning, and 99.25% using double transfer learning.
Affiliation(s)
- Laith Alzubaidi
- School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia
- AlNidhal Campus, University of Information Technology & Communications, Baghdad 10001, Iraq
- Muthana Al-Amidie
- Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA
- Ahmed Al-Asadi
- Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA
- Amjad J. Humaidi
- Control and Systems Engineering Department, University of Technology, Baghdad 10001, Iraq
- Omran Al-Shamma
- AlNidhal Campus, University of Information Technology & Communications, Baghdad 10001, Iraq
- Mohammed A. Fadhel
- College of Computer Science and Information Technology, University of Sumer, Thi Qar 64005, Iraq
- Jinglan Zhang
- School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia
- J. Santamaría
- Department of Computer Science, University of Jaén, 23071 Jaén, Spain
- Ye Duan
- Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA
37
Abstract
In this article, a fuzzy logic model is proposed for more precise hourly electrical power demand modeling in New England. The difficulty with hourly electrical power demand modeling is that such plants produce a large amount of data. To obtain a more precise model from a large amount of data, the proposed fuzzy logic model has two main characteristics: (1) it exploits the conditions under which a fuzzy logic model and a radial basis mapping model are equivalent in order to obtain a new scheme, and (2) it combines gradient descent with a mini-batch approach so as to avoid applying gradient descent to all the data at once.
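The mini-batch idea in point (2) above can be sketched generically: rather than computing the gradient over the full dataset at every step, parameters are updated from small random batches. The sketch below uses 1-D least squares (y ≈ w·x) as a stand-in objective; the data, learning rate, and batch size are illustrative and not from the paper.

```python
# Sketch: mini-batch gradient descent for a 1-D least-squares fit.
# Illustrates the batching idea only; not the article's fuzzy logic model.
import random

def minibatch_sgd(xs, ys, lr=0.005, batch_size=4, epochs=300, seed=0):
    rng = random.Random(seed)
    w = 0.0
    data = list(zip(xs, ys))
    for _ in range(epochs):
        rng.shuffle(data)
        for i in range(0, len(data), batch_size):
            batch = data[i:i + batch_size]
            # Gradient of the mean squared error over this batch only.
            grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
            w -= lr * grad
    return w

xs = [float(i) for i in range(1, 9)]
ys = [3.0 * x for x in xs]          # noiseless data with true slope 3
print(round(minibatch_sgd(xs, ys), 3))  # → 3.0
```

Each update touches only `batch_size` samples, which is what makes the approach tractable when the full dataset is large.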
38
Sebastian JA, Kolios MC, Acker JP. Emerging use of machine learning and advanced technologies to assess red cell quality. Transfus Apher Sci 2020; 59:103020. [PMID: 33246838 DOI: 10.1016/j.transci.2020.103020] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
Improving blood product quality and patient outcomes is an accepted goal in transfusion medicine research. Thus, there is an urgent need to understand the potential adverse effects on red blood cells (RBCs) during pre-transfusion storage. Current assessment techniques of these degradation events, termed "storage lesions", are subjective, labor-intensive, and complex. Here we describe emerging technologies that assess the biochemical, biophysical, and morphological characteristics of RBC storage lesions. Of these emerging techniques, machine learning (ML) has shown potential to overcome the limitations of conventional RBC assessment methods. Our previous work has shown that neural networks can extract chronological progressions of morphological changes in RBCs during storage without human input. We hypothesize that, with broader training and testing of multivariate data (e.g., varying donor factors and manufacturing methods), ML can further our understanding of clinical transfusion outcomes in multiple patient groups.
Affiliation(s)
- Joseph A Sebastian
- Institute of Biomedical Engineering, University of Toronto, 164 College St., Toronto, Ontario, M5S 3G9, Canada; Translational Biology and Engineering Program, Ted Rogers Center for Heart Research, 661 University Avenue, Toronto, ON, M5G 1X8, Canada
- Michael C Kolios
- Department of Physics, Ryerson University, 350 Victoria St., Toronto, Ontario, M5B 2K3, Canada; Institute of Biomedical Engineering, Science and Technology (iBEST), A Partnership Between Ryerson University and St. Michael's Hospital, 209 Victoria St, Toronto, Ontario, M5B 1T8, Canada; Keenan Research Centre for Biomedical Science, Li Ka Shing Knowledge Institute, St. Michael's Hospital, 209 Victoria St., Toronto, Ontario, M5B 1T8, Canada
- Jason P Acker
- Centre for Innovation, Canadian Blood Services, 8249-114 St., Edmonton, Alberta, T6G 2R8, Canada; Department of Laboratory Medicine and Pathology, University of Alberta, 8249-114 St., Edmonton, Alberta, T6G 2R8, Canada
39
Lin YH, Liao KYK, Sung KB. Automatic detection and characterization of quantitative phase images of thalassemic red blood cells using a mask region-based convolutional neural network. JOURNAL OF BIOMEDICAL OPTICS 2020; 25:JBO-200187R. [PMID: 33188571 PMCID: PMC7665881 DOI: 10.1117/1.jbo.25.11.116502] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/30/2020] [Accepted: 10/26/2020] [Indexed: 05/07/2023]
Abstract
SIGNIFICANCE Label-free quantitative phase imaging is a promising technique for the automatic detection of abnormal red blood cells (RBCs) in real time. Although deep-learning techniques can accurately detect abnormal RBCs from quantitative phase images efficiently, their applications in diagnostic testing are limited by the lack of transparency. More interpretable results such as morphological and biochemical characteristics of individual RBCs are highly desirable. AIM An end-to-end deep-learning model was developed to efficiently discriminate thalassemic RBCs (tRBCs) from healthy RBCs (hRBCs) in quantitative phase images and segment RBCs for single-cell characterization. APPROACH Two-dimensional quantitative phase images of hRBCs and tRBCs were acquired using digital holographic microscopy. A mask region-based convolutional neural network (Mask R-CNN) model was trained to discriminate tRBCs and segment individual RBCs. Characterization of tRBCs was achieved utilizing SHapley Additive exPlanation analysis and canonical correlation analysis on automatically segmented RBC phase images. RESULTS The implemented model achieved 97.8% accuracy in detecting tRBCs. Phase-shift statistics showed the highest influence on the correct classification of tRBCs. Associations between the phase-shift features and three-dimensional morphological features were revealed. CONCLUSIONS The implemented Mask R-CNN model accurately identified tRBCs and segmented RBCs to provide single-RBC characterization, which has the potential to aid clinical decision-making.
Affiliation(s)
- Yang-Hsien Lin
- National Taiwan University, Graduate Institute of Biomedical Electronics and Bioinformatics, Taipei, Taiwan
- Ken Y.-K. Liao
- Feng Chia University, College of Information and Electrical Engineering, Taichung, Taiwan
- Kung-Bin Sung
- National Taiwan University, Graduate Institute of Biomedical Electronics and Bioinformatics, Taipei, Taiwan
- National Taiwan University, Department of Electrical Engineering, Taipei, Taiwan
- National Taiwan University, Molecular Imaging Center, Taipei, Taiwan
40
Petrović N, Moyà-Alcover G, Jaume-i-Capó A, González-Hidalgo M. Sickle-cell disease diagnosis support selecting the most appropriate machine learning method: Towards a general and interpretable approach for cell morphology analysis from microscopy images. Comput Biol Med 2020; 126:104027. [DOI: 10.1016/j.compbiomed.2020.104027] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2020] [Revised: 09/13/2020] [Accepted: 09/30/2020] [Indexed: 12/01/2022]
41. Hasan RI, Yusuf SM, Alzubaidi L. Review of the State of the Art of Deep Learning for Plant Diseases: A Broad Analysis and Discussion. Plants (Basel) 2020; 9:E1302. [PMID: 33019765 PMCID: PMC7599890 DOI: 10.3390/plants9101302] [Citation(s) in RCA: 34] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/24/2020] [Revised: 09/24/2020] [Accepted: 09/25/2020] [Indexed: 01/17/2023]
Abstract
Deep learning (DL) represents the golden era in the machine learning (ML) domain, and it has gradually become the leading approach in many fields. It is currently playing a vital role in the early detection and classification of plant diseases. ML techniques have brought considerable improvements in cultivation productivity, particularly with the recent emergence of DL, which has increased accuracy levels. Recently, many DL architectures have been implemented alongside visualisation techniques that are essential for determining symptoms and classifying plant diseases. This review investigates and analyses the most recent methods, developed over the three years leading up to 2020, for training, augmentation, feature fusion and extraction, recognising and counting crops, and detecting plant diseases, including how these methods can be harnessed to feed deep classifiers and their effects on classifier accuracy.
Affiliation(s)
- Reem Ibrahim Hasan: School of Computing, Faculty of Engineering, Universiti Teknologi Malaysia, Skudai, Johor 81310, Malaysia; Al-Nidhal Campus, University of Information Technology & Communications, Baghdad 00964, Iraq
- Suhaila Mohd Yusuf: School of Computing, Faculty of Engineering, Universiti Teknologi Malaysia, Skudai, Johor 81310, Malaysia
- Laith Alzubaidi: Al-Nidhal Campus, University of Information Technology & Communications, Baghdad 00964, Iraq; Faculty of Science & Engineering, Queensland University of Technology, Brisbane, QLD 4000, Australia
42. Improved Classification of White Blood Cells with the Generative Adversarial Network and Deep Convolutional Neural Network. Comput Intell Neurosci 2020; 2020:6490479. [PMID: 32695152 PMCID: PMC7368188 DOI: 10.1155/2020/6490479] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/07/2020] [Revised: 04/27/2020] [Accepted: 06/17/2020] [Indexed: 12/30/2022]
Abstract
White blood cells (leukocytes) are a key component of the blood and of the immune system, which is responsible for fighting foreign elements. The five types of white blood cells include neutrophils, eosinophils, lymphocytes, monocytes, and basophils, where each type constitutes a different proportion and performs specific functions. Being able to classify and, therefore, count these different constituents is critical for assessing the health of patients and infection risks. Generally, laboratory experiments are used for determining the type of a white blood cell. The staining process and manual evaluation of acquired images under the microscope are tedious and subject to human errors. Moreover, a major challenge is the unavailability of training data that cover the morphological variations of white blood cells so that trained classifiers can generalize well. As such, this paper investigates image transformation operations and generative adversarial networks (GANs) for data augmentation and state-of-the-art deep neural networks (DNNs; i.e., VGG-16, ResNet, and DenseNet) for the classification of white blood cells into the five types. Furthermore, we explore initializing the DNNs' weights randomly or using weights pretrained on the CIFAR-100 dataset. In contrast to other works that require advanced image preprocessing and manual feature extraction before classification, our method works directly with the acquired images. The results of extensive experiments show that the proposed method can successfully classify white blood cells. The best DNN model, DenseNet-169, yields a validation accuracy of 98.8%. Particularly, we find that the proposed approach outperforms other methods that rely on sophisticated image processing and manual feature engineering.
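Alongside GAN-generated samples, the image transformation operations mentioned above are typically simple label-preserving geometric transforms. A minimal sketch using flips and 90-degree rotations (the helper names and toy image are illustrative, not from the paper):

```python
def hflip(img):
    # mirror each row left-to-right
    return [row[::-1] for row in img]

def rot90(img):
    # rotate 90 degrees clockwise: reversed rows become columns
    return [list(row) for row in zip(*img[::-1])]

def augment(img):
    # yield simple label-preserving variants of one training image:
    # the original, its horizontal mirror, and three rotations
    variants = [img, hflip(img)]
    r = img
    for _ in range(3):
        r = rot90(r)
        variants.append(r)
    return variants

# Toy 2x2 "image" stands in for a cropped white-blood-cell patch
cell = [[1, 2],
        [3, 4]]
aug = augment(cell)
```

Because a cell's class does not depend on its orientation, each variant keeps the original label, multiplying the effective training-set size.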
43. Towards a Better Understanding of Transfer Learning for Medical Imaging: A Case Study. Appl Sci (Basel) 2020. [DOI: 10.3390/app10134523] [Citation(s) in RCA: 60] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
One of the main challenges of employing deep learning models in the field of medicine is a lack of training data, owing to the difficulty of collecting and labeling data, which must be performed by experts. To overcome this drawback, transfer learning (TL) has been utilized to solve several medical imaging tasks using state-of-the-art models pre-trained on the ImageNet dataset. However, there are substantial divergences in data features, sizes, and task characteristics between natural image classification and the targeted medical imaging tasks; consequently, TL may yield only a slight performance improvement when the source domain is completely different from the target domain. In this paper, we explore the benefit of TL from the same and from different domains of the target tasks. To do so, we designed a deep convolutional neural network (DCNN) model that integrates three ideas: traditional and parallel convolutional layers, residual connections, and global average pooling. We trained the proposed model under several scenarios, applying same-domain and different-domain TL to the diabetic foot ulcer (DFU) classification task and to an animal classification task. We show empirically that TL from the same domain can significantly improve performance, even with a reduced number of images in the target-domain dataset. The proposed model achieved an F1-score of 86.6% on the DFU dataset when trained from scratch, 89.4% with TL from a different domain of the targeted dataset, and 97.6% with TL from the same domain of the targeted dataset.
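The same-domain versus different-domain effect described above can be illustrated with a deliberately tiny toy: a one-parameter linear model pretrained on a related source task ends up closer to the target solution after a short fine-tuning budget than the same model trained from scratch. This is our illustration of the TL workflow, not the paper's DCNN:

```python
class TinyModel:
    """One-parameter linear model, used only to illustrate the TL workflow."""
    def __init__(self, w=0.0):
        self.w = w

    def fit(self, xs, ys, lr=0.1, epochs=50):
        # plain per-sample gradient descent on the squared error of w*x vs y
        for _ in range(epochs):
            for x, y in zip(xs, ys):
                err = self.w * x - y
                self.w -= lr * err * x
        return self

# "Source" task from a related domain: targets follow y = 0.9 * x
source_x, source_y = [1.0, 2.0, 3.0], [0.9, 1.8, 2.7]
pretrained = TinyModel().fit(source_x, source_y)

# Small "target" task: true relation is y = 1.0 * x; only 5 epochs allowed,
# mimicking a limited labeling budget
target_x, target_y = [1.0, 2.0], [1.0, 2.0]
finetuned = TinyModel(w=pretrained.w).fit(target_x, target_y, epochs=5)
scratch = TinyModel().fit(target_x, target_y, epochs=5)
# finetuned.w ends closer to 1.0 than scratch.w: the same-domain
# initialization gives the short fine-tuning run a head start
```

The gap between `finetuned` and `scratch` shrinks as the target data or training budget grows, matching the intuition that TL matters most when target data is scarce.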
44. Optimizing the Performance of Breast Cancer Classification by Employing the Same Domain Transfer Learning from Hybrid Deep Convolutional Neural Network Model. Electronics 2020. [DOI: 10.3390/electronics9030445] [Citation(s) in RCA: 58] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Breast cancer is a significant factor in female mortality. Early diagnosis reduces the breast cancer death rate, and computer-aided diagnosis systems increase the efficiency and reduce the cost of diagnosis. Traditional breast cancer classification techniques are based on handcrafted features, so their performance relies on the chosen features; they are also very sensitive to varying sizes and complex shapes, whereas histopathological breast cancer images are highly complex in shape. Deep learning models have become an alternative solution for diagnosis and have overcome the drawbacks of classical classification techniques. Although deep learning has performed well in various tasks of computer vision and pattern recognition, it still faces challenges, one of the main ones being the lack of training data. To address this challenge and optimize performance, we have utilized transfer learning, in which a deep learning model is trained on one task and then fine-tuned for another. We have employed transfer learning in two ways: training our proposed model first on a same-domain dataset and then on the target dataset, and training it on a different-domain dataset and then on the target dataset. We have empirically shown that same-domain transfer learning optimizes performance. Our hybrid model of parallel convolutional layers and residual links is utilized to classify hematoxylin–eosin-stained breast biopsy images into four classes: invasive carcinoma, in-situ carcinoma, benign tumor, and normal tissue. To reduce the effect of overfitting, we augmented the images with different image processing techniques. The proposed model achieved state-of-the-art performance, outperforming the latest methods with a patch-wise classification accuracy of 90.5% and an image-wise classification accuracy of 97.4% on the validation set. Moreover, we achieved an image-wise classification accuracy of 96.1% on the test set of the microscopy ICIAR-2018 dataset.
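The abstract reports both patch-wise and image-wise accuracies; a common way to derive an image-wise label from per-patch predictions is majority voting (an assumption here, not a detail confirmed by the abstract):

```python
from collections import Counter

def image_label(patch_predictions):
    """Aggregate per-patch class predictions into one image-wise label
    by majority vote; ties are broken by first-seen order (Python 3.7+)."""
    return Counter(patch_predictions).most_common(1)[0][0]

# Hypothetical per-patch predictions for one biopsy image
patches = ["invasive", "invasive", "benign", "invasive", "normal"]
label = image_label(patches)
```

Because a single correct majority can outvote scattered patch errors, image-wise accuracy can exceed patch-wise accuracy, consistent with the 97.4% vs 90.5% figures reported above.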