1. Shabbir B, Saleem S, Aleem I, Babar N, Farooq H, Loya A, Naveed H. Low-Cost Histopathological Mitosis Detection for Microscope-acquired Images. AMIA Jt Summits Transl Sci Proc 2024; 2024:409-418. [PMID: 38827107] [PMCID: PMC11141803]
Abstract
Cancer outcomes are poor in resource-limited countries owing to high costs and an insufficient pathologist-to-population ratio. The advent of digital pathology has helped improve cancer outcomes; however, whole-slide image scanners are expensive and unaffordable in low-income countries. Microscope-acquired images, on the other hand, are cheap to collect and are more viable for automating cancer detection. In this study, we propose LCH-Network, a novel method to estimate the mitotic count from microscope-acquired images. We introduce Label Mix and also synthesize images using GANs to handle data imbalance. Moreover, we apply progressive resolution to handle different image scales for mitotic localization. We achieved an F1-score of 0.71 and outperformed existing techniques. Our findings enable mitotic count estimation from microscopic images with a low-cost setup. Clinically, our method could help avoid presumptive treatment without a confirmed cancer diagnosis.
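The abstract names Label Mix but does not define it. A mixup-style blend of image patches and their one-hot labels is one common way to realize this kind of label mixing for imbalanced classes; the sketch below is purely an illustration, and the Beta blending weight and random pairing are assumptions rather than the authors' recipe.

```python
import numpy as np

def mixup_batch(images, labels, alpha=0.4):
    """Blend random pairs of image patches and their one-hot labels (mixup-style).

    images: (N, H, W, C) float array; labels: (N, num_classes) one-hot array.
    Each output sample is a convex combination of two originals -- one generic
    way to "mix" labels when minority-class (mitotic) patches are scarce.
    """
    rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)            # blending weight shared by the batch
    perm = rng.permutation(len(images))     # random pairing of samples
    mixed_x = lam * images + (1.0 - lam) * images[perm]
    mixed_y = lam * labels + (1.0 - lam) * labels[perm]
    return mixed_x, mixed_y
```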
Affiliation(s)
- Bilal Shabbir
- Computational Biology Research Lab, National University of Computer & Emerging Sciences, Islamabad, Pakistan
- Saira Saleem
- Shaukat Khanum Memorial Cancer Hospital & Research Centre, Lahore, Pakistan
- Iffat Aleem
- Shaukat Khanum Memorial Cancer Hospital & Research Centre, Lahore, Pakistan
- Nida Babar
- Shaukat Khanum Memorial Cancer Hospital & Research Centre, Lahore, Pakistan
- Asif Loya
- Shaukat Khanum Memorial Cancer Hospital & Research Centre, Lahore, Pakistan
- Hammad Naveed
- Computational Biology Research Lab, National University of Computer & Emerging Sciences, Islamabad, Pakistan
2. Fernandez-Martín C, Silva-Rodriguez J, Kiraz U, Morales S, Janssen EAM, Naranjo V. Uninformed Teacher-Student for hard-samples distillation in weakly supervised mitosis localization. Comput Med Imaging Graph 2024; 112:102328. [PMID: 38244279] [DOI: 10.1016/j.compmedimag.2024.102328]
Abstract
BACKGROUND AND OBJECTIVE: Mitotic activity is a crucial biomarker for diagnosing and predicting outcomes for different types of cancers, particularly breast cancer. However, manual mitosis counting is challenging and time-consuming for pathologists, with moderate reproducibility due to biopsy slide size, low mitotic cell density, and pattern heterogeneity. In recent years, deep learning methods based on convolutional neural networks (CNNs) have been proposed to address these limitations. Nonetheless, these methods have been hampered by the available data labels, which usually consist only of the centroids of mitoses, and by the incoming noise from annotated hard negatives. As a result, complex algorithms with multiple stages are often required to refine the labels at the pixel level and reduce the number of false positives. METHODS: This article presents a novel weakly supervised approach for mitosis detection that utilizes only image-level labels on histological hematoxylin and eosin (H&E) images, avoiding the need for complex labeling scenarios. Also, an Uninformed Teacher-Student (UTS) pipeline is introduced to detect and distill hard samples by comparing weakly supervised localizations and the annotated centroids, using strong augmentations to enhance uncertainty. Additionally, an automatic proliferation score is proposed that mimics the pathologist-annotated mitotic activity index (MAI). The proposed approach is evaluated on three publicly available datasets for mitosis detection on breast histology samples, and two datasets for mitotic activity counting in whole-slide images. RESULTS: The proposed framework achieves competitive performance with relevant prior literature in all the datasets used for evaluation without explicitly using the mitosis location information during training. This approach challenges previous methods that rely on strong mitosis location information and multiple stages to refine false positives. Furthermore, the proposed pipeline for hard-sample distillation demonstrates promising dataset-specific improvements. Concretely, when the annotation has not been thoroughly refined by multiple pathologists, the UTS model offers improvements of up to ∼4% in mitosis localization, thanks to the detection and distillation of uncertain cases. Concerning the mitosis counting task, the proposed automatic proliferation score shows a moderate positive correlation with the MAI annotated by pathologists at the biopsy level on two external datasets. CONCLUSIONS: The proposed Uninformed Teacher-Student pipeline leverages strong augmentations to distill uncertain samples and measure dissimilarities between predicted and annotated mitoses. Results demonstrate the feasibility of the weakly supervised approach and highlight its potential as an objective evaluation tool for tumor proliferation.
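As a rough sketch of the hard-sample idea described above (compare a weakly supervised localization against the annotated centroid and flag disagreements for distillation), the snippet below marks a patch as hard when the peak of its localization map falls far from the annotated mitosis centroid. The tolerance, tensor shapes, and re-weighting step are illustrative assumptions, not the published UTS losses.

```python
import torch

def flag_hard_samples(pred_heatmaps, centroids, tol_px=30.0):
    """Mark patches whose weakly supervised localization disagrees with the
    annotated mitosis centroid -- a simplified stand-in for hard-sample mining.

    pred_heatmaps: (N, H, W) localization maps (e.g., class activation maps).
    centroids:     (N, 2) annotated (row, col) mitosis centroids.
    Returns a boolean mask that is True where the predicted peak lies more
    than `tol_px` pixels from the annotation.
    """
    n, h, w = pred_heatmaps.shape
    flat_idx = pred_heatmaps.reshape(n, -1).argmax(dim=1)
    rows = torch.div(flat_idx, w, rounding_mode="floor")
    cols = flat_idx % w
    peaks = torch.stack((rows, cols), dim=1).float()
    dist = torch.linalg.vector_norm(peaks - centroids.float(), dim=1)
    return dist > tol_px  # hard samples can then be up-weighted or re-labelled
```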
Affiliation(s)
- Claudio Fernandez-Martín
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano, HUMAN-tech, Universitat Politècnica de València, Valencia, Spain
- Umay Kiraz
- Department of Chemistry, Bioscience and Environmental Engineering, University of Stavanger, Stavanger, Norway; Department of Pathology, Stavanger University Hospital, Stavanger, Norway
- Sandra Morales
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano, HUMAN-tech, Universitat Politècnica de València, Valencia, Spain
- Emiel A M Janssen
- Department of Chemistry, Bioscience and Environmental Engineering, University of Stavanger, Stavanger, Norway; Department of Pathology, Stavanger University Hospital, Stavanger, Norway
- Valery Naranjo
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano, HUMAN-tech, Universitat Politècnica de València, Valencia, Spain
3. Shihabuddin AR, Sabeena Beevi K. Efficient mitosis detection: leveraging pre-trained faster R-CNN and cell-level classification. Biomed Phys Eng Express 2024; 10:025031. [PMID: 38357907] [DOI: 10.1088/2057-1976/ad262f]
Abstract
The assessment of mitotic activity is an integral part of the comprehensive evaluation of breast cancer pathology. Understanding the level of tumor dissemination is essential for assessing the severity of the malignancy and guiding appropriate treatment strategies. A pathologist must manually perform the intricate and time-consuming task of counting mitoses by examining biopsy slices stained with Hematoxylin and Eosin (H&E) under a microscope. Mitotic cells can be challenging to distinguish in H&E-stained sections due to limited available datasets and similarities between mitotic and non-mitotic cells. Computer-assisted mitosis detection approaches simplify the whole procedure by selecting, detecting, and labeling mitotic cells. Traditional detection strategies rely on image processing techniques that apply custom criteria to distinguish between different aspects of an image. Additionally, the possibility of automatically extracting features from histopathological images using deep neural networks was investigated. This study examines mitosis detection as an object detection problem using multiple neural networks. From a medical standpoint, mitosis at the tissue level was also investigated using a pre-trained Faster R-CNN and raw image data. Experiments were conducted on the MITOS-ATYPIA-14 and TUPAC16 datasets, and the results were compared with those of other methods described in the literature.
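A minimal sketch of the kind of setup the abstract describes, fine-tuning a pre-trained Faster R-CNN for a single mitosis foreground class, is shown below using torchvision (version 0.13 or later is assumed for the weights argument); the backbone choice and number of classes are assumptions, not the authors' exact configuration.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_mitosis_detector(num_classes=2):
    """Faster R-CNN fine-tuning stub: background + mitosis (num_classes=2)."""
    # Start from a COCO-pretrained Faster R-CNN and swap in a new box head.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model  # train on patches with mitosis bounding boxes as usual
```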
Affiliation(s)
- Abdul R Shihabuddin
- Centre For Artificial Intelligence, TKM College of Engineering, Karicode, Kollam, 691005, Kerala, India
- Sabeena Beevi K
- Department of Electrical and Electronics Engineering, TKM College of Engineering, Karicode, Kollam, 691005, Kerala, India
4. Gu H, Yang C, Al-Kharouf I, Magaki S, Lakis N, Williams CK, Alrosan SM, Onstott EK, Yan W, Khanlou N, Cobos I, Zhang XR, Zarrin-Khameh N, Vinters HV, Chen XA, Haeri M. Enhancing mitosis quantification and detection in meningiomas with computational digital pathology. Acta Neuropathol Commun 2024; 12:7. [PMID: 38212848] [PMCID: PMC10782692] [DOI: 10.1186/s40478-023-01707-6]
Abstract
Mitosis is a critical criterion for meningioma grading. However, pathologists' assessment of mitoses is subject to significant inter-observer variation due to challenges in locating mitosis hotspots and accurately detecting mitotic figures. To address this issue, we leverage digital pathology and propose a computational strategy to enhance pathologists' mitosis assessment. The strategy has two components: (1) a depth-first search algorithm that quantifies the mathematically maximum mitotic count in 10 consecutive high-power fields, which can enhance precision, especially in cases with a borderline mitotic count; and (2) a collaborative sphere that groups a set of pathologists to detect mitoses in each high-power field, which can mitigate subjective random errors in mitosis detection originating from individual detection errors. Using the depth-first search algorithm (1), we analyzed 19 meningioma slides and discovered that the proposed algorithm upgraded two borderline cases verified at consensus conferences. This improvement is attributed to the algorithm's ability to quantify the mitotic count more comprehensively than conventional methods of counting mitoses. In implementing the collaborative sphere (2), we evaluated the correctness of mitosis detection from grouped pathologists and/or pathology residents, where each member of the group independently annotated a set of 48 high-power field images for mitotic figures. We report that groups of three can achieve an average precision of 0.897 and sensitivity of 0.699 in mitosis detection, which is higher than an average pathologist in this study (precision: 0.750, sensitivity: 0.667). The proposed computational strategy can be integrated with an artificial intelligence workflow, which envisions the future of achieving rapid and robust mitosis assessment through interactive assisting algorithms that can ultimately benefit patient management.
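To make the counting objective concrete, the sketch below computes the maximum total mitotic count over 10 consecutive fields with a simple sliding window along a 1-D scan path. The published method instead searches consecutive high-power fields on the 2-D slide layout with depth-first search, so this is only a simplified analogue.

```python
def max_mitoses_in_consecutive_fields(counts, window=10):
    """Maximum total mitoses over `window` consecutive high-power fields.

    `counts` is a 1-D list of per-field mitotic counts along a scan path --
    a simplified linear analogue of the 2-D consecutive-field search.
    """
    if len(counts) < window:
        return sum(counts)
    best = running = sum(counts[:window])
    for i in range(window, len(counts)):
        running += counts[i] - counts[i - window]   # slide the window by one field
        best = max(best, running)
    return best

# Example: per-field counts -> the hotspot total used for grading.
print(max_mitoses_in_consecutive_fields([0, 2, 1, 0, 3, 1, 0, 0, 2, 1, 4, 0]))
```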
Affiliation(s)
- Hongyan Gu
- Electrical and Computer Engineering, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- Chunxu Yang
- Electrical and Computer Engineering, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- Issa Al-Kharouf
- Pathology and Laboratory Medicine, The University of Kansas Medical Center, Kansas City, KS, 66160, USA
- Shino Magaki
- Pathology and Laboratory Medicine, UCLA David Geffen School of Medicine, Los Angeles, CA, 90095, USA
- Nelli Lakis
- Pathology and Laboratory Medicine, The University of Kansas Medical Center, Kansas City, KS, 66160, USA
- Christopher Kazu Williams
- Pathology and Laboratory Medicine, UCLA David Geffen School of Medicine, Los Angeles, CA, 90095, USA
- Sallam Mohammad Alrosan
- Pathology and Laboratory Medicine, The University of Kansas Medical Center, Kansas City, KS, 66160, USA
- Ellie Kate Onstott
- Pathology and Laboratory Medicine, The University of Kansas Medical Center, Kansas City, KS, 66160, USA
- Wenzhong Yan
- Electrical and Computer Engineering, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- Negar Khanlou
- Pathology and Laboratory Medicine, UCLA David Geffen School of Medicine, Los Angeles, CA, 90095, USA
- Inma Cobos
- Department of Pathology, Stanford Medical School, Stanford, CA, 94305, USA
- Harry V Vinters
- Pathology and Laboratory Medicine, UCLA David Geffen School of Medicine, Los Angeles, CA, 90095, USA
- Xiang Anthony Chen
- Electrical and Computer Engineering, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- Mohammad Haeri
- Pathology and Laboratory Medicine, The University of Kansas Medical Center, Kansas City, KS, 66160, USA
5. Wagner SJ, Matek C, Shetab Boushehri S, Boxberg M, Lamm L, Sadafi A, Winter DJE, Marr C, Peng T. Built to Last? Reproducibility and Reusability of Deep Learning Algorithms in Computational Pathology. Mod Pathol 2024; 37:100350. [PMID: 37827448] [DOI: 10.1016/j.modpat.2023.100350]
Abstract
Recent progress in computational pathology has been driven by deep learning. While code and data availability are essential to reproduce findings from preceding publications, ensuring a deep learning model's reusability is more challenging. For that, the codebase should be well-documented and easy to integrate into existing workflows, and models should be robust to noise and generalizable to data from different sources. Strikingly, only a few computational pathology algorithms have been reused by other researchers so far, let alone employed in a clinical setting. To assess the current state of reproducibility and reusability of computational pathology algorithms, we evaluated peer-reviewed articles available in PubMed, published between January 2019 and March 2021, in 5 use cases: stain normalization; tissue type segmentation; evaluation of cell-level features; genetic alteration prediction; and inference of grading, staging, and prognostic information. We compiled criteria for data and code availability and statistical result analysis and assessed them in 160 publications. We found that only one-quarter (41 of 160 publications) made code publicly available. Among these 41 studies, three-quarters (30 of 41) analyzed their results statistically, half of them (20 of 41) released their trained model weights, and approximately a third (16 of 41) used an independent cohort for evaluation. Our review is intended for both pathologists interested in deep learning and researchers applying algorithms to computational pathology challenges. We provide a detailed overview of publications with published code in the field, list reusable data handling tools, and provide criteria for reproducibility and reusability.
Affiliation(s)
- Sophia J Wagner
- Helmholtz AI, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany; School of Computation, Information and Technology, Technical University of Munich, Garching, Germany
- Christian Matek
- Institute of AI for Health, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany; Institute of Pathology, University Hospital Erlangen, Erlangen, Germany
- Sayedali Shetab Boushehri
- School of Computation, Information and Technology, Technical University of Munich, Garching, Germany; Institute of AI for Health, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany; Data & Analytics (D&A), Roche Pharma Research and Early Development (pRED), Roche Innovation Center Munich, Germany
- Melanie Boxberg
- Institute of Pathology, Technical University Munich, Munich, Germany; Institute of Pathology Munich-North, Munich, Germany
- Lorenz Lamm
- Helmholtz AI, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany; Helmholtz Pioneer Campus, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany
- Ario Sadafi
- School of Computation, Information and Technology, Technical University of Munich, Garching, Germany; Institute of AI for Health, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany
- Dominik J E Winter
- Institute of AI for Health, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany; School of Life Sciences, Technical University of Munich, Weihenstephan, Germany
- Carsten Marr
- Institute of AI for Health, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany
- Tingying Peng
- Helmholtz AI, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany
6. Harrison P, Hasan R, Park K. State-of-the-Art of Breast Cancer Diagnosis in Medical Images via Convolutional Neural Networks (CNNs). J Healthc Inform Res 2023; 7:387-432. [PMID: 37927373] [PMCID: PMC10620373] [DOI: 10.1007/s41666-023-00144-3]
Abstract
Early detection of breast cancer is crucial for a better prognosis. Various studies have been conducted where tumor lesions are detected and localized on images. This is a narrative review where the studies reviewed are related to five different image modalities: histopathological, mammogram, magnetic resonance imaging (MRI), ultrasound, and computed tomography (CT) images, making it different from other review studies where fewer image modalities are reviewed. The goal is to have the necessary information, such as pre-processing techniques and CNN-based diagnosis techniques for the five modalities, readily available in one place for future studies. Each modality has pros and cons, such as mammograms might give a high false positive rate for radiographically dense breasts, while ultrasounds with low soft tissue contrast result in early-stage false detection, and MRI provides a three-dimensional volumetric image, but it is expensive and cannot be used as a routine test. Various studies were manually reviewed using particular inclusion and exclusion criteria; as a result, 91 recent studies that classify and detect tumor lesions on breast cancer images from 2017 to 2022 related to the five image modalities were included. For histopathological images, the maximum accuracy achieved was around 99%, and the maximum sensitivity achieved was 97.29% by using DenseNet, ResNet34, and ResNet50 architecture. For mammogram images, the maximum accuracy achieved was 96.52% using a customized CNN architecture. For MRI, the maximum accuracy achieved was 98.33% using customized CNN architecture. For ultrasound, the maximum accuracy achieved was around 99% by using DarkNet-53, ResNet-50, G-CNN, and VGG. For CT, the maximum sensitivity achieved was 96% by using Xception architecture. Histopathological and ultrasound images achieved higher accuracy of around 99% by using ResNet34, ResNet50, DarkNet-53, G-CNN, and VGG compared to other modalities for either of the following reasons: use of pre-trained architectures with pre-processing techniques, use of modified architectures with pre-processing techniques, use of two-stage CNN, and higher number of studies available for Artificial Intelligence (AI)/machine learning (ML) researchers to reference. One of the gaps we found is that only a single image modality is used for CNN-based diagnosis; in the future, a multiple image modality approach can be used to design a CNN architecture with higher accuracy.
Affiliation(s)
- Pratibha Harrison
- Department of Computer and Information Science, University of Massachusetts Dartmouth, 285 Old Westport Rd, North Dartmouth, MA 02747, USA
- Rakib Hasan
- Department of Mechanical Engineering, Khulna University of Engineering & Technology, PhulBari Gate, Khulna 9203, Bangladesh
- Kihan Park
- Department of Mechanical Engineering, University of Massachusetts Dartmouth, 285 Old Westport Rd, North Dartmouth, MA 02747, USA
7. Khan SH, Iqbal J, Hassnain SA, Owais M, Mostafa SM, Hadjouni M, Mahmoud A. COVID-19 detection and analysis from lung CT images using novel channel boosted CNNs. Expert Syst Appl 2023; 229:120477. [PMID: 37220492] [PMCID: PMC10186852] [DOI: 10.1016/j.eswa.2023.120477]
Abstract
In December 2019, the COVID-19 outbreak that began in Wuhan, China, grew into a global pandemic affecting human life and the worldwide economy. Therefore, an efficient diagnostic system is required to control its spread. However, automatic diagnosis poses challenges with a limited amount of labeled data, minor contrast variation, and high structural similarity between infection and background. In this regard, a new two-phase deep convolutional neural network (CNN) based diagnostic system is proposed to detect minute irregularities and analyze COVID-19 infection. In the first phase, a novel SB-STM-BRNet CNN is developed, incorporating a new channel Squeezed and Boosted (SB) and dilated convolutional-based Split-Transform-Merge (STM) block to detect COVID-19 infected lung CT images. The new STM blocks perform multi-path region-smoothing and boundary operations, which help to learn minor contrast variation and global COVID-19-specific patterns. Furthermore, diverse boosted channels are achieved using the SB and Transfer Learning concepts in STM blocks to learn texture variation between COVID-19-specific and healthy images. In the second phase, COVID-19 infected images are provided to the novel COVID-CB-RESeg segmentation CNN to identify and analyze COVID-19 infectious regions. The proposed COVID-CB-RESeg methodically employs region-homogeneity and heterogeneity operations in each encoder-decoder block and a boosted decoder using auxiliary channels to simultaneously learn the low illumination and boundaries of the COVID-19 infected region. The proposed diagnostic system yields good performance in terms of accuracy: 98.21%, F-score: 98.24%, Dice similarity: 96.40%, and IoU: 98.85% for the COVID-19 infected region. The proposed diagnostic system would reduce the radiologist's burden and strengthen their decision-making for a fast and accurate COVID-19 diagnosis.
Affiliation(s)
- Saddam Hussain Khan
- Department of Computer Systems Engineering, University of Engineering and Applied Science, Swat 19060, Pakistan
- Javed Iqbal
- Department of Computer Systems Engineering, University of Engineering and Applied Science, Swat 19060, Pakistan
- Syed Agha Hassnain
- Ocean College, Zhejiang University, Zheda Road 1, Zhoushan, Zhejiang 316021, China
- Muhammad Owais
- KUCARS and C2PS, Department of Electrical Engineering and Computer Science, Khalifa University, UAE
- Samih M Mostafa
- Computer Science Department, Faculty of Computers and Information, South Valley University, Qena 83523, Egypt; Faculty of Industry and Energy Technology, New Assiut Technological University (N.A.T.U.), New Assiut City, Egypt
- Myriam Hadjouni
- Department of Computer Sciences, College of Computer and Information Science, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Amena Mahmoud
- Faculty of Computers and Information, Department of Computer Science, KafrElSkeikh University, Egypt
8. AlGhamdi R. Mitotic Nuclei Segmentation and Classification Using Chaotic Butterfly Optimization Algorithm with Deep Learning on Histopathology Images. Biomimetics (Basel) 2023; 8:474. [PMID: 37887605] [PMCID: PMC10604189] [DOI: 10.3390/biomimetics8060474]
Abstract
Histopathological grading of tumors provides insights into the patient's disease condition, and it also helps in customizing treatment plans. Mitotic nuclei classification involves the categorization and identification of nuclei in histopathological images based on whether they are undergoing cell division (mitosis) or not. This is an essential procedure in several research and medical contexts, especially in the diagnosis and prognosis of cancer. Mitotic nuclei classification is challenging because the nuclei are very small and mitotic figures vary in appearance. Automated counting of mitotic nuclei is demanding due to their great similarity to non-mitotic nuclei and their heteromorphic appearance. Both Computer Vision (CV) and Machine Learning (ML) approaches are used for the automated identification and categorization of nuclei undergoing cell division (mitosis) in histopathological images. With this background, the current research article introduces a mitotic nuclei segmentation and classification technique using the chaotic butterfly optimization algorithm with deep learning (MNSC-CBOADL). The main objective of the MNSC-CBOADL technique is to perform automated segmentation and classification of mitotic nuclei. In the presented MNSC-CBOADL technique, the U-Net model is first applied for segmentation. Additionally, the MNSC-CBOADL technique applies the Xception model for feature vector generation. For the classification process, the MNSC-CBOADL technique employs the deep belief network (DBN) algorithm. To enhance the detection performance of the DBN approach, the CBOA is designed for hyperparameter tuning. The proposed MNSC-CBOADL system was validated through simulation using a benchmark database. The extensive results confirmed the superior performance of the proposed MNSC-CBOADL system in the classification of mitotic nuclei.
Affiliation(s)
- Rayed AlGhamdi
- Department of Information Technology, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
9. Drioua WR, Benamrane N, Sais L. Breast Cancer Histopathological Images Segmentation Using Deep Learning. Sensors (Basel) 2023; 23:7318. [PMID: 37687772] [PMCID: PMC10490494] [DOI: 10.3390/s23177318]
Abstract
Hospitals generate a significant amount of medical data every day, which constitutes a very rich resource for research. Today, this resource remains largely unexploited because the images require annotation, which is a costly and difficult task. Thus, the use of an unsupervised segmentation method could facilitate the process. In this article, we propose two approaches for the semantic segmentation of breast cancer histopathology images. On the one hand, an autoencoder architecture for unsupervised segmentation is proposed, and on the other hand, an improved U-Net architecture for supervised segmentation is proposed. We evaluate these models on a public dataset of histological images of breast cancer. In addition, the performance of our segmentation methods is measured using several evaluation metrics such as accuracy, recall, precision, and F1 score. The results are competitive with those of other modern methods.
Affiliation(s)
- Wafaa Rajaa Drioua
- Laboratoire SIMPA, Département d’Informatique, Université des Sciences et de la Technologie d’Oran Mohamed Boudiaf (USTO-MB), Oran 31000, Algeria
- Nacéra Benamrane
- Laboratoire SIMPA, Département d’Informatique, Université des Sciences et de la Technologie d’Oran Mohamed Boudiaf (USTO-MB), Oran 31000, Algeria
- Lakhdar Sais
- Centre de Recherche en Informatique de Lens, CRIL, CNRS, Université d’Artois, 62307 Lens, France
10. Shihabuddin AR, Sabeena Beevi K. Multi CNN based automatic detection of mitotic nuclei in breast histopathological images. Comput Biol Med 2023; 158:106815. [PMID: 37003066] [DOI: 10.1016/j.compbiomed.2023.106815]
Abstract
In breast cancer diagnosis, the number of mitotic cells in a specific area is an important measure. It indicates how far the tumour has spread, which has consequences for forecasting the aggressiveness of the cancer. Mitosis counting is a time-consuming and challenging task that a pathologist performs manually by examining Hematoxylin and Eosin (H&E) stained biopsy slices under a microscope. Due to limited datasets and the resemblance between mitotic and non-mitotic cells, detecting mitosis in H&E-stained slices is difficult. By assisting in the screening, identification, and labelling of mitotic cells, computer-aided mitosis detection technologies make the entire procedure much easier. For computer-aided detection on smaller datasets, pre-trained convolutional neural networks are extensively employed. This research investigates the usefulness of a multi-CNN framework with three pre-trained CNNs for mitosis detection. Features were extracted from histopathology data using the pre-trained VGG16, ResNet50, and DenseNet201 networks. The proposed framework utilises all training folders of the MITOS dataset provided for the MITOS-ATYPIA 2014 contest and all 73 folders of the TUPAC16 dataset. Individually, the pre-trained VGG16, ResNet50, and DenseNet201 models provide accuracies of 83.22%, 73.67%, and 81.75%, respectively. Different combinations of these pre-trained CNNs constitute the multi-CNN framework. The multi-CNN consisting of all three pre-trained CNNs with a linear SVM gives 93.81% precision and a 92.41% F1-score, compared with multi-CNN combinations using other classifiers such as AdaBoost and Random Forest.
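A hedged sketch of this feature-extraction-plus-SVM pattern is given below: ImageNet-pretrained backbones with their classifier heads removed produce pooled features that are concatenated and fed to a linear SVM (recent torchvision is assumed for the weights argument). The fusion by simple concatenation, the 224x224 input size, and the placeholder data are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np
import torch
import torchvision.models as models
from sklearn.svm import LinearSVC

def deep_features(images, backbone):
    """Extract pooled deep features from a batch of (N, 3, 224, 224) tensors."""
    backbone.eval()
    with torch.no_grad():
        feats = backbone(images)
    return feats.flatten(1).numpy()

# Three ImageNet-pretrained backbones with their classifier heads removed.
vgg = torch.nn.Sequential(models.vgg16(weights="DEFAULT").features,
                          torch.nn.AdaptiveAvgPool2d(1))
resnet = torch.nn.Sequential(*list(models.resnet50(weights="DEFAULT").children())[:-1])
densenet = torch.nn.Sequential(models.densenet201(weights="DEFAULT").features,
                               torch.nn.AdaptiveAvgPool2d(1))

# x_train: candidate-patch tensors, y_train: mitosis / non-mitosis labels (placeholders).
x_train = torch.rand(8, 3, 224, 224)
y_train = np.array([0, 1, 0, 1, 0, 1, 0, 1])
fused = np.hstack([deep_features(x_train, b) for b in (vgg, resnet, densenet)])
clf = LinearSVC().fit(fused, y_train)   # linear SVM on the concatenated features
```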
11. Springenberg M, Frommholz A, Wenzel M, Weicken E, Ma J, Strodthoff N. From modern CNNs to vision transformers: Assessing the performance, robustness, and classification strategies of deep learning models in histopathology. Med Image Anal 2023; 87:102809. [PMID: 37201221] [DOI: 10.1016/j.media.2023.102809]
Abstract
While machine learning is currently transforming the field of histopathology, the domain lacks a comprehensive evaluation of state-of-the-art models based on essential but complementary quality requirements beyond mere classification accuracy. To fill this gap, we developed a new methodology to extensively evaluate a wide range of classification models, including recent vision transformers and convolutional neural networks such as ConvNeXt, ResNet (BiT), Inception, ViT, and Swin Transformer, with and without supervised or self-supervised pretraining. We thoroughly tested the models on five widely used histopathology datasets containing whole-slide images of breast, gastric, and colorectal cancer and developed a novel approach using an image-to-image translation model to assess the robustness of a cancer classification model against stain variations. Further, we extended existing interpretability methods to previously unstudied models and systematically revealed insights into the models' classification strategies that allow for plausibility checks and systematic comparisons. The study resulted in specific model recommendations for practitioners as well as a general methodology to quantify a model's quality according to complementary requirements that can be transferred to future model architectures.
Affiliation(s)
- Annika Frommholz
- Fraunhofer Heinrich Hertz Institute, Einsteinufer 37, 10587 Berlin, Germany
- Markus Wenzel
- Fraunhofer Heinrich Hertz Institute, Einsteinufer 37, 10587 Berlin, Germany
- Eva Weicken
- Fraunhofer Heinrich Hertz Institute, Einsteinufer 37, 10587 Berlin, Germany
- Jackie Ma
- Fraunhofer Heinrich Hertz Institute, Einsteinufer 37, 10587 Berlin, Germany
- Nils Strodthoff
- University of Oldenburg, Ammerländer Heerstr. 114-118, 26129 Oldenburg, Germany
12. Zhu Z, Wang SH, Zhang YD. A Survey of Convolutional Neural Network in Breast Cancer. Comput Model Eng Sci 2023; 136:2127-2172. [PMID: 37152661] [PMCID: PMC7614504] [DOI: 10.32604/cmes.2023.025484]
Abstract
Problems: For people all over the world, cancer is one of the most feared diseases. Cancer is one of the major obstacles to improving life expectancy in countries around the world and one of the biggest causes of death before the age of 70 in 112 countries. Among all kinds of cancers, breast cancer is the most common cancer in women, and the data show that female breast cancer has become one of the most commonly diagnosed cancers. Aims: A large number of clinical trials have proved that if breast cancer is diagnosed at an early stage, patients have more treatment options and better treatment outcomes and survival. Given this situation, there are many diagnostic methods for breast cancer, such as computer-aided diagnosis (CAD). Methods: We present a comprehensive review of the diagnosis of breast cancer based on convolutional neural networks (CNNs) after reviewing a large body of recent papers. First, we introduce several different imaging modalities. The structure of the CNN is given in the second part. After that, we introduce some public breast cancer datasets. Then, we divide the diagnosis of breast cancer into three different tasks: (1) classification, (2) detection, and (3) segmentation. Conclusion: Although CNN-based diagnosis has achieved great success, there are still some limitations. (i) There are too few good datasets; a good public breast cancer dataset needs to address many aspects, such as professional medical knowledge, privacy issues, financial issues, and dataset size. (ii) When the dataset is very large, a CNN-based model needs a great deal of computation and time to complete the diagnosis. (iii) Small datasets easily lead to overfitting.
Affiliation(s)
- Yu-Dong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, LE1 7RH, UK
13. Basu A, Senapati P, Deb M, Rai R, Dhal KG. A survey on recent trends in deep learning for nucleus segmentation from histopathology images. Evolving Systems 2023; 15:1-46. [PMID: 38625364] [PMCID: PMC9987406] [DOI: 10.1007/s12530-023-09491-3]
Abstract
Nucleus segmentation is an imperative step in the qualitative study of imaging datasets, considered as an intricate task in histopathology image analysis. Segmenting a nucleus is an important part of diagnosing, staging, and grading cancer, but overlapping regions make it hard to separate and tell apart independent nuclei. Deep Learning is swiftly paving its way in the arena of nucleus segmentation, attracting quite a few researchers with its numerous published research articles indicating its efficacy in the field. This paper presents a systematic survey on nucleus segmentation using deep learning in the last five years (2017-2021), highlighting various segmentation models (U-Net, SCPP-Net, Sharp U-Net, and LiverNet) and exploring their similarities, strengths, datasets utilized, and unfolding research areas.
Affiliation(s)
- Anusua Basu
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal, India
- Pradip Senapati
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal, India
- Mainak Deb
- Wipro Technologies, Pune, Maharashtra, India
- Rebika Rai
- Department of Computer Applications, Sikkim University, Sikkim, India
- Krishna Gopal Dhal
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal, India
14. Wang R, Gu Y, Zhang T, Yang J. Fast cancer metastasis location based on dual magnification hard example mining network in whole-slide images. Comput Biol Med 2023; 158:106880. [PMID: 37044050] [DOI: 10.1016/j.compbiomed.2023.106880]
Abstract
Breast cancer has become the most common form of cancer among women. In recent years, deep learning has shown great potential in aiding the diagnosis of pathological images, particularly through the use of convolutional neural networks for locating lymph node metastases in gigapixel whole-slide images (WSIs). However, the massive size of these images at the highest magnification introduces redundant computation during inference. Additionally, the diversity of biological textures and structures within WSIs can confuse classifiers, particularly on hard examples. As a result, the trade-off between accuracy and efficiency remains a critical issue for whole-slide image metastasis localization. In this paper, we propose a novel two-stream network that takes a pair of low- and high-magnification image patches as input to identify hard examples during the training phase. Specifically, our framework focuses on samples where the outputs of the two magnification networks are dissimilar. We adopt a dual-magnification hard-mining loss to re-weight these ambiguous samples. To locate tumor metastasis cells in whole-slide images more efficiently, the two-stream network is decomposed into a cascaded network during the inference phase. The low-magnification WSI scanned by the low-mag network generates a coarse probability map, and the suspicious areas in the map are refined by the high-mag network. Finally, we evaluate our fast-localization dual-magnification hard-example-mining network on the Camelyon16 breast cancer whole-slide image dataset. Experiments demonstrate that our proposed method achieves a 0.871 FROC score with a faster inference time, and our high-magnification network also achieves a 0.88 FROC score.
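The sketch below illustrates one way to turn disagreement between the two magnification streams into a per-sample weight on the loss, as the abstract describes at a high level; the specific disagreement measure (total-variation distance between softmax outputs) and the weighting form are assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def dual_mag_hard_mining_loss(logits_low, logits_high, targets, gamma=2.0):
    """Cross-entropy re-weighted by disagreement between the two magnification streams.

    logits_low / logits_high: (N, C) outputs of the low- and high-magnification networks.
    Samples on which the two streams disagree receive a larger weight (hard examples).
    """
    p_low = F.softmax(logits_low, dim=1)
    p_high = F.softmax(logits_high, dim=1)
    disagreement = 0.5 * (p_low - p_high).abs().sum(dim=1)      # in [0, 1]
    weights = 1.0 + gamma * disagreement                        # up-weight ambiguous samples
    ce = (F.cross_entropy(logits_low, targets, reduction="none")
          + F.cross_entropy(logits_high, targets, reduction="none"))
    return (weights * ce).mean()
```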
Affiliation(s)
- Rui Wang
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai, 20040, China
- Yun Gu
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai, 20040, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai, 20040, China
- Tianyi Zhang
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai, 20040, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai, 20040, China
- Jie Yang
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai, 20040, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai, 20040, China
15. Sepahvand M, Abdali-Mohammadi F. Joint learning method with teacher-student knowledge distillation for on-device breast cancer image classification. Comput Biol Med 2023; 155:106476. [PMID: 36841060] [DOI: 10.1016/j.compbiomed.2022.106476]
Abstract
Deep learning models such as AlexNet, VGG, and ResNet achieve good performance in classifying the breast cancer histopathological images of the BreakHis dataset. However, these models are not practical for deployment due to their computational complexity and large number of parameters; as a result, they are rarely utilized on devices with limited computational resources. This paper develops a lightweight learning model based on knowledge distillation to classify the histopathological images of breast cancer in BreakHis. The method employs two teacher models based on VGG and ResNeXt to train two student models, which are similar to the teacher models in design but have fewer deep layers. In the proposed method, an adaptive joint learning approach is adopted to transfer the knowledge in the final-layer output of a teacher model, along with the feature maps of its middle layers, as dark knowledge to a student model. According to the experimental results, the student model designed with the ResNeXt architecture obtained a recognition rate of 97.09% over all histopathological images. In addition, this model has ∼69.40 million fewer parameters, uses ∼0.93 GB less GPU memory, and achieves a 268.17 times greater compression rate than its teacher model, while its recognition rate dropped by only 1.75%. The comparisons indicate that the student model produces acceptable results compared with state-of-the-art methods for classifying the breast cancer images in BreakHis.
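A compact sketch of this kind of joint distillation objective (ground-truth cross-entropy, softened teacher logits, and middle-layer feature matching) is shown below; the temperature, loss weights, and the assumption that the feature maps are already shape-aligned are illustrative choices, not the paper's adaptive scheme.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      student_feat, teacher_feat, T=4.0, alpha=0.5, beta=0.1):
    """Joint objective: hard-label CE + softened teacher logits (KL) + feature matching."""
    # Supervised loss on ground-truth labels.
    ce = F.cross_entropy(student_logits, targets)
    # "Dark knowledge" from the teacher's softened output distribution.
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * T * T
    # Match intermediate feature maps (shapes assumed already aligned).
    feat = F.mse_loss(student_feat, teacher_feat)
    return ce + alpha * kd + beta * feat
```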
Affiliation(s)
- Majid Sepahvand
- Department of Computer Engineering and Information Technology, Razi University, Kermanshah, Iran
- Fardin Abdali-Mohammadi
- Department of Computer Engineering and Information Technology, Razi University, Kermanshah, Iran
16. Buvaneswari B, Vijayaraj J, Satheesh Kumar B. Histopathological image-based breast cancer detection employing 3D-convolutional neural network feature extraction and Stochastic Diffusion Kernel Recursive Neural Networks classification. The Imaging Science Journal 2023. [DOI: 10.1080/13682199.2022.2161148]
Affiliation(s)
- B. Buvaneswari
- Department of Information Technology, Panimalar Engineering College, Chennai, India
- J. Vijayaraj
- Department of Artificial Intelligence and Data Science, Easwari Engineering College, Chennai, India
- B. Satheesh Kumar
- Department of Computer Science and Engineering, School of Computing Science and Engineering, Galgotias University, Greater Noida, India
17. Rauf Z, Sohail A, Khan SH, Khan A, Gwak J, Maqbool M. Attention-guided multi-scale deep object detection framework for lymphocyte analysis in IHC histological images. Microscopy (Oxf) 2023; 72:27-42. [PMID: 36239597] [DOI: 10.1093/jmicro/dfac051]
Abstract
Tumor-infiltrating lymphocytes are specialized lymphocytes that can detect and kill cancerous cells. Their detection poses many challenges due to significant morphological variations, overlapping occurrence, artifact regions, and high class resemblance between clustered areas and artifacts. In this regard, a Lymphocyte Analysis Framework based on a Deep Convolutional neural network (DC-Lym-AF) is proposed to analyze lymphocytes in immunohistochemistry images. The proposed framework comprises (i) pre-processing, (ii) a screening phase, (iii) a localization phase, and (iv) post-processing. In the screening phase, a custom convolutional neural network architecture (lymphocyte dilated network) is developed to screen lymphocytic regions by performing patch-level classification. This architecture uses dilated convolutions and shortcut connections to capture multi-level variations and ensure reference-based learning. In contrast, the localization phase utilizes an attention-guided multi-scale lymphocyte detector to detect lymphocytes. The proposed detector extracts refined and multi-scale features by exploiting dilated convolutions, an attention mechanism, and a feature pyramid network (FPN) using its custom attention-aware backbone. The proposed DC-Lym-AF shows exemplary performance on the NuClick dataset compared with existing detection models, with an F-score and precision of 0.84 and 0.83, respectively. We verified the generalizability of our proposed framework by participating in the publicly open LYON'19 challenge. Results in terms of detection rate (0.76) and F-score (0.73) suggest that the proposed DC-Lym-AF can effectively detect lymphocytes in immunohistochemistry-stained images collected from different laboratories. In addition, its promising generalization on several datasets implies that it can be turned into a medical diagnostic tool to investigate various histopathological problems.
Affiliation(s)
- Zunaira Rauf
- Pattern Recognition Lab, Department of Computer and Information Sciences, Pakistan Institute of Engineering and Applied Sciences, Nilore, Islamabad 45650, Pakistan; PIEAS Artificial Intelligence Center, Pakistan Institute of Engineering and Applied Sciences, Nilore, Islamabad 45650, Pakistan
- Anabia Sohail
- Pattern Recognition Lab, Department of Computer and Information Sciences, Pakistan Institute of Engineering and Applied Sciences, Nilore, Islamabad 45650, Pakistan; Department of Computer Science, Faculty of Computing and Artificial Intelligence, Air University, E-9, Islamabad 44230, Pakistan
- Saddam Hussain Khan
- Pattern Recognition Lab, Department of Computer and Information Sciences, Pakistan Institute of Engineering and Applied Sciences, Nilore, Islamabad 45650, Pakistan; Department of Computer Systems Engineering, University of Engineering and Applied Sciences, Swat, Khyber Pakhtunkhwa 19130, Pakistan
- Asifullah Khan
- Pattern Recognition Lab, Department of Computer and Information Sciences, Pakistan Institute of Engineering and Applied Sciences, Nilore, Islamabad 45650, Pakistan; PIEAS Artificial Intelligence Center, Pakistan Institute of Engineering and Applied Sciences, Nilore, Islamabad 45650, Pakistan; Center for Mathematical Sciences, Pakistan Institute of Engineering and Applied Sciences, Nilore, Islamabad 45650, Pakistan
- Jeonghwan Gwak
- Department of Software, Korea National University of Transportation, Chungju 27469, Republic of Korea
- Muhammad Maqbool
- The University of Alabama at Birmingham, 1720 2nd Ave South, Birmingham, AL 35294, USA
18. A generalizable and robust deep learning algorithm for mitosis detection in multicenter breast histopathological images. Med Image Anal 2023; 84:102703. [PMID: 36481608] [DOI: 10.1016/j.media.2022.102703]
Abstract
Mitosis counting of biopsies is an important biomarker for breast cancer patients, which supports disease prognostication and treatment planning. Developing a robust mitotic cell detection model is highly challenging due to the complex growth pattern of mitotic cells and their high similarity to non-mitotic cells. Most mitosis detection algorithms have poor generalizability across image domains and lack reproducibility and validation in multicenter settings. To overcome these issues, we propose a generalizable and robust mitosis detection algorithm (called FMDet), which is independently tested on multicenter breast histopathological images. To capture more refined morphological features of cells, we convert the object detection task into a semantic segmentation problem. The pixel-level annotations for mitotic nuclei are obtained by taking the intersection of the masks generated from a well-trained nuclear segmentation model and the bounding boxes provided by the MIDOG 2021 challenge. In our segmentation framework, a robust feature extractor is developed to capture the appearance variations of mitotic cells, constructed by integrating a channel-wise multi-scale attention mechanism into a fully convolutional network structure. Benefiting from the fact that changes in the low-frequency spectrum do not affect high-level semantic perception, we employ a Fourier-based data augmentation method to reduce domain discrepancies by exchanging the low-frequency spectrum between two domains. Our FMDet algorithm was tested in the MIDOG 2021 challenge and ranked first. Further, our algorithm is also externally validated on four independent datasets for mitosis detection, where it exhibits state-of-the-art performance in comparison with previously published results. These results demonstrate that our algorithm has the potential to be deployed as an assistant decision support tool in clinical practice. Our code has been released at https://github.com/Xiyue-Wang/1st-in-MICCAI-MIDOG-2021-challenge.
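The Fourier-based augmentation mentioned above can be sketched as follows: swap the low-frequency amplitude block of a source image with that of a reference image from another domain while keeping the source phase. The band size beta and the per-channel processing are assumptions; the authors' exact implementation may differ.

```python
import numpy as np

def swap_low_frequency(src, ref, beta=0.05):
    """Replace the low-frequency amplitude of `src` with that of `ref`.

    src, ref: (H, W, C) float arrays from two staining/scanner domains.
    beta controls the size of the swapped central band (illustrative value).
    """
    out = np.empty_like(src, dtype=float)
    h, w = src.shape[:2]
    bh, bw = int(h * beta), int(w * beta)
    ch, cw = h // 2, w // 2
    for c in range(src.shape[2]):
        fs = np.fft.fftshift(np.fft.fft2(src[..., c]))
        fr = np.fft.fftshift(np.fft.fft2(ref[..., c]))
        amp_s, pha_s = np.abs(fs), np.angle(fs)
        amp_r = np.abs(fr)
        # Swap only the central (low-frequency) amplitude block; keep source phase.
        amp_s[ch - bh:ch + bh, cw - bw:cw + bw] = amp_r[ch - bh:ch + bh, cw - bw:cw + bw]
        out[..., c] = np.real(np.fft.ifft2(np.fft.ifftshift(amp_s * np.exp(1j * pha_s))))
    return np.clip(out, 0, 255)
```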
19. Chan RC, To CKC, Cheng KCT, Yoshikazu T, Yan LLA, Tse GM. Artificial intelligence in breast cancer histopathology. Histopathology 2023; 82:198-210. [PMID: 36482271] [DOI: 10.1111/his.14820]
Abstract
This is a review of the use of artificial intelligence in digital breast pathology. A systematic search on PubMed was conducted, identifying 17,324 research papers related to breast cancer pathology. Following a semi-manual screening, 664 papers were retrieved and reviewed. The papers are grouped into six major tasks performed by pathologists, namely molecular and hormonal analysis, grading, mitotic figure counting, Ki-67 indexing, tumour-infiltrating lymphocyte assessment, and lymph node metastasis identification. Under each task, open-source datasets for research to build artificial intelligence (AI) tools are also listed. Many AI tools showed promise and demonstrated feasibility in the automation of routine pathology investigations. We expect continued growth of AI in this field as new algorithms mature.
Affiliation(s)
- Ronald CK Chan
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong
- Chun Kit Curtis To
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong
- Ka Chuen Tom Cheng
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong
- Tada Yoshikazu
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong
- Lai Ling Amy Yan
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong
- Gary M Tse
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong
20. Liu K, Li B, Wu W, May C, Chang O, Knezevich S, Reisch L, Elmore J, Shapiro L. VSGD-Net: Virtual Staining Guided Melanocyte Detection on Histopathological Images. IEEE Winter Conf Appl Comput Vis 2023; 2023:1918-1927. [PMID: 36865487] [PMCID: PMC9977454] [DOI: 10.1109/wacv56688.2023.00196]
Abstract
Detection of melanocytes serves as a critical prerequisite in assessing melanocytic growth patterns when diagnosing melanoma and its precursor lesions on skin biopsy specimens. However, this detection is challenging due to the visual similarity of melanocytes to other cells in routine Hematoxylin and Eosin (H&E) stained images, leading to the failure of current nuclei detection methods. Stains such as Sox10 can mark melanocytes, but they require an additional step and expense and thus are not regularly used in clinical practice. To address these limitations, we introduce VSGD-Net, a novel detection network that learns melanocyte identification through virtual staining from H&E to Sox10. The method takes only routine H&E images during inference, resulting in a promising approach to support pathologists in the diagnosis of melanoma. To the best of our knowledge, this is the first study that investigates the detection problem using image synthesis features between two distinct pathology stainings. Extensive experimental results show that our proposed model outperforms state-of-the-art nuclei detection methods for melanocyte detection. The source code and pre-trained model are available at: https://github.com/kechunl/VSGD-Net.
Affiliation(s)
- Beibin Li
- University of Washington
- Microsoft Research
21. Computer-aided detection and prognosis of colorectal cancer on whole slide images using dual resolution deep learning. J Cancer Res Clin Oncol 2023; 149:91-101. [PMID: 36331654] [DOI: 10.1007/s00432-022-04435-x]
Abstract
PURPOSE: Rapid diagnosis and risk stratification can provide timely treatment for colorectal cancer (CRC) patients. Deep learning (DL) is not only used to identify tumor regions in histopathological images, but is also applied to predict survival and achieve risk stratification. However, most methods depend on regions of interest annotated by pathologists and ignore the global information in the image. METHODS: A dual-resolution DL network based on weakly supervised learning (WDRNet) was proposed for CRC identification and prognosis. The proposed method was trained and validated on the dataset from The Cancer Genome Atlas (TCGA) and tested on the external dataset from the Affiliated Cancer Hospital and Institute of Guangzhou Medical University (ACHIGMU). RESULTS: In the identification task, WDRNet accurately identified tumor images with an accuracy of 0.977 at the slide level and 0.953 at the patch level. In the prognosis task, WDRNet showed excellent prediction performance on both datasets, with concordance indices (C-index) of 0.716 ± 0.037 and 0.598 ± 0.024, respectively. Moreover, the results of risk stratification were statistically significant in univariate analysis (p < 0.001, HR = 7.892 in TCGA-CRC, and p = 0.009, HR = 1.718 in ACHIGMU) and multivariate analysis (p < 0.001, HR = 5.914 in TCGA-CRC, and p = 0.025, HR = 1.674 in ACHIGMU). CONCLUSIONS: We developed a weakly supervised dual-resolution DL network to achieve precise identification and prognosis of CRC patients, which will assist doctors in diagnosis on histopathological images and in stratifying patients to select appropriate therapeutic schedules.
22
Tian G, Wang Z, Wang C, Chen J, Liu G, Xu H, Lu Y, Han Z, Zhao Y, Li Z, Luo X, Peng L. A deep ensemble learning-based automated detection of COVID-19 using lung CT images and Vision Transformer and ConvNeXt. Front Microbiol 2022; 13:1024104. [PMID: 36406463; PMCID: PMC9672374; DOI: 10.3389/fmicb.2022.1024104]
Abstract
Since the outbreak of COVID-19, hundreds of millions of people have been infected, causing millions of deaths and heavily disrupting the daily life of countless people. Accurately identifying patients and taking timely isolation measures are necessary to stop the spread of COVID-19. In addition to the nucleic acid test, lung CT image analysis is another way to quickly identify COVID-19 patients, and deep learning technology can help radiologists identify COVID-19 patients from CT images rapidly. In this paper, we propose a deep learning ensemble framework called VitCNX, which combines Vision Transformer and ConvNeXt for COVID-19 CT image identification. We compared our proposed model VitCNX with EfficientNetV2, DenseNet, ResNet-50, and Swin-Transformer, which are state-of-the-art deep learning models for image classification, and with the two individual models used in the ensemble (Vision Transformer and ConvNeXt), in binary and three-class classification experiments. In the binary classification experiment, VitCNX achieves the best recall of 0.9907, accuracy of 0.9821, F1-score of 0.9855, AUC of 0.9985, and AUPR of 0.9991, outperforming the other six models. Likewise, in the three-class classification experiment, VitCNX achieves the best precision of 0.9668, accuracy of 0.9696, and F1-score of 0.9631, further demonstrating its excellent image classification capability. We hope our proposed VitCNX model can contribute to the recognition of COVID-19 patients.
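The soft-voting ensemble of a Vision Transformer and a ConvNeXt can be sketched as below, assuming torchvision backbones as stand-ins for the models used in the paper; this illustrates the ensembling step, not the published VitCNX configuration.

```python
# Hedged sketch: average the softmax outputs of a ViT and a ConvNeXt (soft voting).
import torch
import torch.nn.functional as F
from torchvision.models import vit_b_16, convnext_tiny

vit = vit_b_16(weights=None, num_classes=2).eval()   # binary COVID / non-COVID head
cnx = convnext_tiny(weights=None, num_classes=2).eval()

def ensemble_predict(ct_batch):
    """Soft-voting ensemble: mean of the two backbones' class probabilities."""
    with torch.no_grad():
        p_vit = F.softmax(vit(ct_batch), dim=1)
        p_cnx = F.softmax(cnx(ct_batch), dim=1)
    return (p_vit + p_cnx) / 2

probs = ensemble_predict(torch.rand(2, 3, 224, 224))
print(probs.shape, probs.sum(dim=1))   # (2, 2), each row sums to 1
```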
Affiliation(s)
- Geng Tian
- School of Computer Science, Hunan University of Technology, Zhuzhou, China
- Geneis (Beijing) Co., Ltd., Beijing, China
- Ziwei Wang
- School of Computer Science, Hunan University of Technology, Zhuzhou, China
- Chang Wang
- School of Computer Science, Hunan University of Technology, Zhuzhou, China
- Jianhua Chen
- Hunan Storm Information Technology Co., Ltd., Changsha, China
- Guangyi Liu
- School of Computer Science, Hunan University of Technology, Zhuzhou, China
- He Xu
- School of Computer Science, Hunan University of Technology, Zhuzhou, China
- Yuankang Lu
- School of Computer Science, Hunan University of Technology, Zhuzhou, China
- Zhuoran Han
- High School Attached to Northeast Normal University, Changchun, China
- Yubo Zhao
- No. 2 Middle School of Shijiazhuang, Shijiazhuang, China
- Zejun Li
- School of Computer Science, Hunan Institute of Technology, Hengyang, China
- Xueming Luo
- School of Computer Science, Hunan University of Technology, Zhuzhou, China
- Lihong Peng
- School of Computer Science, Hunan University of Technology, Zhuzhou, China
- College of Life Sciences and Chemistry, Hunan University of Technology, Zhuzhou, China
23
Ransomware detection using deep learning based unsupervised feature extraction and a cost sensitive Pareto Ensemble classifier. Sci Rep 2022; 12:15647. [PMID: 36123364; PMCID: PMC9485118; DOI: 10.1038/s41598-022-19443-7]
Abstract
Ransomware attacks pose a serious threat to Internet resources due to their far-reaching effects. Their zero-day variants are even more hazardous, as less is known about them. When used for ransomware attack detection, conventional machine learning approaches may become data-dependent and insensitive to error cost, and thus may fail against zero-day ransomware attacks, which typically have an unseen underlying data distribution. This paper presents a Cost-Sensitive Pareto Ensemble strategy, CSPE-R, to detect novel ransomware attacks. Initially, the proposed framework exploits an unsupervised deep Contractive Auto Encoder (CAE) to transform the underlying varying feature space into a more uniform and core semantic feature space. To learn robust features, the proposed CSPE-R ensemble technique explores different semantic spaces at various levels of detail. Heterogeneous base estimators are then trained over these extracted subspaces to find the core relevance between the various families of ransomware attacks. A novel Pareto Ensemble-based estimator selection strategy is then implemented to achieve a cost-sensitive compromise between false positives and false negatives. Finally, the decisions of the selected estimators are aggregated to improve detection of unknown ransomware attacks. The experimental results show that the proposed CSPE-R framework performs well against zero-day ransomware attacks.
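The contractive auto encoder used for feature extraction can be sketched as follows; the layer sizes, the sigmoid hidden layer, and the penalty weight are assumptions, and the Pareto-based estimator selection step is omitted.

```python
# Minimal Contractive Auto Encoder (CAE) sketch: reconstruction loss plus the
# contractive penalty ||J_h(x)||_F^2 (closed form for a sigmoid hidden layer).
import torch
import torch.nn as nn

class ContractiveAE(nn.Module):
    def __init__(self, n_features, n_hidden=64):
        super().__init__()
        self.enc = nn.Linear(n_features, n_hidden)
        self.dec = nn.Linear(n_hidden, n_features)
    def forward(self, x):
        h = torch.sigmoid(self.enc(x))
        return self.dec(h), h

def cae_loss(model, x, x_hat, h, lam=1e-4):
    mse = ((x_hat - x) ** 2).mean()
    dh = (h * (1 - h)) ** 2                      # (batch, hidden): sigmoid derivative squared
    w_sq = (model.enc.weight ** 2).sum(dim=1)    # (hidden,): squared encoder weights per unit
    contractive = (dh * w_sq).sum(dim=1).mean()
    return mse + lam * contractive

model = ContractiveAE(n_features=30)
x = torch.rand(16, 30)                           # stand-in behavioural ransomware features
x_hat, h = model(x)
loss = cae_loss(model, x, x_hat, h)
loss.backward()
print(float(loss))
```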
24
Li Y, Xue Y, Li L, Zhang X, Qian X. Domain Adaptive Box-Supervised Instance Segmentation Network for Mitosis Detection. IEEE Trans Med Imaging 2022; 41:2469-2485. [PMID: 35389862; DOI: 10.1109/tmi.2022.3165518]
Abstract
The number of mitotic cells present in histopathological slides is an important predictor of tumor proliferation in the diagnosis of breast cancer. However, current approaches can hardly perform precise pixel-level prediction for mitosis datasets with only weak labels (i.e., only the centroid location of mitotic cells is provided), and they take no account of the large domain gap across histopathological slides from different pathology laboratories. In this work, we propose a Domain adaptive Box-supervised Instance segmentation Network (DBIN) to address these issues. In DBIN, we propose a high-performance Box-supervised Instance-Aware (BIA) head whose core idea is the redesign of three box-supervised mask loss terms. Furthermore, we add a Pseudo-Mask-supervised Semantic (PMS) head to enrich the characteristics extracted from the underlying feature maps. Besides, we align the pixel-level feature distributions between source and target domains with a Cross-Domain Adaptive Module (CDAM), so that a detector learned in one laboratory can work well on unlabeled data from another. The proposed method achieves state-of-the-art performance across four mainstream datasets. A series of analyses and experiments shows that the proposed BIA and PMS heads can accomplish pixel-wise mitosis localization under weak supervision, and that CDAM boosts the generalization ability of the model.
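Box-supervised mask training is often implemented with a projection-consistency loss (as in BoxInst-style methods); the sketch below illustrates that generic idea under stated assumptions and is not the exact redesigned loss terms of DBIN.

```python
# Generic box-supervised mask loss sketch: the x/y max-projections of the predicted
# mask should match the projections of the ground-truth box.
import torch
import torch.nn.functional as F

def projection_loss(pred_mask, box_mask):
    """pred_mask, box_mask: (B, H, W); box_mask is 1 inside the GT box, 0 outside."""
    loss_x = F.binary_cross_entropy(pred_mask.max(dim=1).values,
                                    box_mask.max(dim=1).values)
    loss_y = F.binary_cross_entropy(pred_mask.max(dim=2).values,
                                    box_mask.max(dim=2).values)
    return loss_x + loss_y

pred = torch.rand(2, 64, 64, requires_grad=True)   # predicted soft masks
box = torch.zeros(2, 64, 64)
box[:, 20:40, 10:30] = 1.0                         # pseudo-boxes around candidate mitoses
loss = projection_loss(pred, box)
loss.backward()
print(float(loss))
```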
25
Shankar K, Dutta AK, Kumar S, Joshi GP, Doo IC. Chaotic Sparrow Search Algorithm with Deep Transfer Learning Enabled Breast Cancer Classification on Histopathological Images. Cancers (Basel) 2022; 14:2770. [PMID: 35681749; PMCID: PMC9179470; DOI: 10.3390/cancers14112770]
Abstract
Simple Summary: Cancer is one of the most significant public health issues and severely threatens people's health, and the occurrence and mortality of breast cancer have been growing consistently. Early, precise diagnosis is a primary factor in improving patient survival. In this work, we focus on the design of a metaheuristics-with-deep-learning-based breast cancer classification process; the proposed model is intended as an effective tool to assist physicians in decision making. Abstract: Breast cancer is a major cause of death among women worldwide and is responsible for many deaths each year. Although there are several means of identifying breast cancer, histopathological diagnosis is now considered the gold standard. However, the complexity of histopathological images and the rapid rise in workload make this process time-consuming, and the outcomes may be subject to pathologists' subjectivity; the development of a precise and automatic histopathological image analysis method is therefore essential. Deep learning for breast cancer pathological image classification has recently made significant progress and has become mainstream in this field. This study introduces a novel chaotic sparrow search algorithm with a deep transfer learning-enabled breast cancer classification (CSSADTL-BCC) model for histopathological images, focused on the recognition and classification of breast cancer. The CSSADTL-BCC model first applies a Gaussian filtering (GF) approach to remove noise, then employs a MixNet-based feature extraction model to generate a useful set of feature vectors, and finally exploits a stacked gated recurrent unit (SGRU) classification approach to assign class labels. Furthermore, CSSA is applied to optimally tune the hyperparameters of the SGRU model. No earlier work has utilized a hyperparameter-tuned SGRU model for breast cancer classification on histopathological images, and the design of the CSSA for optimal hyperparameter tuning of the SGRU model constitutes the novelty of this work. Performance validation on a benchmark dataset showed that the CSSADTL-BCC model outperforms recent state-of-the-art approaches.
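The "CNN feature extractor feeding a stacked GRU classifier" stage can be sketched as below, using timm's MixNet implementation as an assumed stand-in; the Gaussian filtering and the chaotic sparrow search hyperparameter tuning are omitted, and all names and sizes here are illustrative rather than the paper's configuration.

```python
# Rough, hypothetical sketch of MixNet features feeding a 2-layer ("stacked") GRU classifier.
import torch
import torch.nn as nn
import timm

backbone = timm.create_model('mixnet_s', pretrained=False, num_classes=0)  # pooled feature vectors

class SGRUClassifier(nn.Module):        # hypothetical name
    def __init__(self, feat_dim, hidden=128, n_classes=2):
        super().__init__()
        self.gru = nn.GRU(input_size=feat_dim, hidden_size=hidden,
                          num_layers=2, batch_first=True)   # two stacked GRU layers
        self.fc = nn.Linear(hidden, n_classes)
    def forward(self, feats):
        out, _ = self.gru(feats.unsqueeze(1))   # treat the feature vector as a length-1 sequence
        return self.fc(out[:, -1])

x = torch.rand(4, 3, 224, 224)          # stand-in (already denoised) histopathology patches
feats = backbone(x)                     # (4, feat_dim)
clf = SGRUClassifier(feat_dim=feats.shape[1])
print(clf(feats).shape)                 # torch.Size([4, 2])
```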
Affiliation(s)
- K. Shankar
- Big Data and Machine Learning Laboratory, South Ural State University, 454080 Chelyabinsk, Russia
- Ashit Kumar Dutta
- Department of Computer Science and Information System, College of Applied Sciences, AlMaarefa University, Riyadh 11597, Saudi Arabia
- Sachin Kumar
- Big Data and Machine Learning Laboratory, South Ural State University, 454080 Chelyabinsk, Russia
- Gyanendra Prasad Joshi
- Department of Computer Science and Engineering, Sejong University, Seoul 05006, Korea
- Correspondence: (G.P.J.); (I.C.D.)
- Ill Chul Doo
- Artificial Intelligence Education, Hankuk University of Foreign Studies, Dongdaemun-gu, Seoul 02450, Korea
- Correspondence: (G.P.J.); (I.C.D.)
26
Texture Analysis of Enhanced MRI and Pathological Slides Predicts EGFR Mutation Status in Breast Cancer. Biomed Res Int 2022; 2022:1376659. [PMID: 35663041; PMCID: PMC9162871; DOI: 10.1155/2022/1376659]
Abstract
Objective Image texture information was extracted from enhanced magnetic resonance imaging (MRI) and from pathological hematoxylin and eosin- (HE-) stained images of female breast cancer patients. We established models for each data source individually and then combined the two kinds of data to establish a joint model. In this way, we verified whether sufficient information could be obtained from enhanced MRI and pathological slides to assist in determining epidermal growth factor receptor (EGFR) mutation status. Methods We obtained enhanced MRI data from patients with breast cancer before treatment and selected diffusion-weighted imaging (DWI), T1 fast-spin echo (T1 FSE), and T2 fast-spin echo (T2 FSE) as the data sources for extracting texture information. Imaging physicians manually outlined the 3D regions of interest (ROIs), and texture features were extracted from the gray level cooccurrence matrix (GLCM) of the images. For the HE-stained images, we adopted a specific normalization algorithm to simulate images dyed with only hematoxylin or only eosin and extracted textures from these. The extracted texture features were used to predict EGFR expression. After evaluating the predictive power of each model, the models from the two data sources were combined for remodeling. Results For enhanced MRI data, a model built on the texture information of T1 FSE predicted EGFR mutation status well. For pathological images, eosin-stained images achieved a better prediction effect. We selected these two classifiers as the weak classifiers of the final model and obtained good results (training group: AUC, 0.983; 95% CI, 0.95-1.00; accuracy, 0.962; specificity, 0.936; and sensitivity, 0.979; test group: AUC, 0.983; 95% CI, 0.94-1.00; accuracy, 0.943; specificity, 1.00; and sensitivity, 0.905). Conclusion The EGFR mutation status of patients with breast cancer can be predicted well from enhanced MRI data and pathological data. This could help hospitals that do not test the EGFR mutation status of patients with breast cancer, and it gives clinicians more information about breast cancer, helping them make accurate diagnoses and select suitable treatments.
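The core GLCM step described above can be reproduced with scikit-image on a 2-D ROI, as in the small sketch below; the study itself used 3-D MRI ROIs and stain-separated HE images, and the offsets and properties chosen here are assumptions.

```python
# GLCM texture-feature extraction sketch for a single 2-D ROI.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi_uint8, distances=(1, 2),
                  angles=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Return mean contrast, homogeneity, energy and correlation over all offsets."""
    glcm = graycomatrix(roi_uint8, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ('contrast', 'homogeneity', 'energy', 'correlation')}

roi = (np.random.rand(64, 64) * 255).astype(np.uint8)   # stand-in for a T1 FSE ROI slice
print(glcm_features(roi))
```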
27
COVID-19 Detection in Chest X-ray Images Using a New Channel Boosted CNN. Diagnostics (Basel) 2022; 12:267. [PMID: 35204358; PMCID: PMC8871483; DOI: 10.3390/diagnostics12020267]
Abstract
COVID-19 is a respiratory illness that has affected a large population worldwide and continues to have devastating consequences. It is imperative to detect COVID-19 at the earliest opportunity to limit the spread of infection. In this work, we developed a new CNN architecture, STM-RENet, to interpret radiographic patterns from X-ray images. The proposed STM-RENet is a block-based CNN that employs the idea of split-transform-merge in a new way. In this regard, we propose a new convolutional block, STM, that implements region- and edge-based operations both separately and jointly. The systematic use of region and edge implementations in combination with convolutional operations helps in exploring region homogeneity, intensity inhomogeneity, and boundary-defining features. The learning capacity of STM-RENet is further enhanced by developing a new CB-STM-RENet that exploits channel boosting and learns textural variations to effectively screen X-ray images of COVID-19 infection. The idea of channel boosting is exploited by generating auxiliary channels from two additional CNNs using transfer learning, which are then concatenated to the original channels of the proposed STM-RENet. The proposed CB-STM-RENet shows a significant performance improvement over standard CNNs on three datasets, especially on the stringent CoV-NonCoV-15k dataset. The good detection rate (97%), accuracy (96.53%), and reasonable F-score (95%) of the proposed technique suggest that it can be adapted to detect COVID-19 infected patients.
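One possible reading of a split-transform-merge block that applies region-type (max-pool) and edge-type (average-pool) operations in parallel branches and then merges them is sketched below; the branch design and channel counts are assumptions, not the published STM specification.

```python
# Hypothetical split-transform-merge block with parallel region/edge branches.
import torch
import torch.nn as nn

class STMBlock(nn.Module):              # hypothetical name
    def __init__(self, in_ch, out_ch):
        super().__init__()
        mid = out_ch // 2
        self.region = nn.Sequential(nn.Conv2d(in_ch, mid, 3, padding=1), nn.ReLU(),
                                    nn.MaxPool2d(2))        # region-homogeneity branch
        self.edge = nn.Sequential(nn.Conv2d(in_ch, mid, 3, padding=1), nn.ReLU(),
                                  nn.AvgPool2d(2))          # smoother, boundary-sensitive branch
        self.merge = nn.Conv2d(mid * 2, out_ch, 1)           # merge the two branches
    def forward(self, x):
        return torch.relu(self.merge(torch.cat([self.region(x), self.edge(x)], dim=1)))

block = STMBlock(3, 64)
print(block(torch.rand(1, 3, 224, 224)).shape)   # torch.Size([1, 64, 112, 112])
```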
28
Chen SB, Novoa RA. Artificial intelligence for dermatopathology: Current trends and the road ahead. Semin Diagn Pathol 2022; 39:298-304. [DOI: 10.1053/j.semdp.2022.01.003]
29
Zafar MM, Rauf Z, Sohail A, Khan AR, Obaidullah M, Khan SH, Lee YS, Khan A. Detection of tumour infiltrating lymphocytes in CD3 and CD8 stained histopathological images using a two-phase deep CNN. Photodiagnosis Photodyn Ther 2021; 37:102676. [PMID: 34890783; DOI: 10.1016/j.pdpdt.2021.102676]
Abstract
BACKGROUND The immuno-score, a prognostic measure for cancer employed in determining tumor grade and type, is generated by counting the number of Tumour-Infiltrating Lymphocytes (TILs) in CD3- and CD8-stained histopathological tissue samples. Significant stain variations and heterogeneity in the spatial distribution and density of lymphocytes make automated counting of TILs a challenging task. METHODS This work addresses these challenges by developing a pipeline, "Two-Phase Deep Convolutional Neural Network based Lymphocyte Counter (TDC-LC)", to detect lymphocytes in CD3- and CD8-stained histology images. The proposed pipeline works sequentially: in the first phase, hard negative examples (artifacts) are removed using a custom CNN, "LSATM-Net", that exploits the idea of a split, asymmetric transform, and merge; in the second phase, instance segmentation is performed to detect and count lymphocytes in the remaining samples. The effectiveness of the proposed pipeline is measured by comparing it with state-of-the-art single- and two-stage detectors. The inference code is available at the GitHub repository https://github.com/m-mohsin-zafar/tdc-lc. RESULTS The empirical evaluation on samples from the LYSTO dataset shows that the proposed LSATM-Net can learn variations in the images and precisely remove the hard negative stain artifacts with an F-score of 0.74. The detection analysis shows that the proposed TDC-LC outperforms existing models in identifying and counting lymphocytes, with high recall (0.87) and F-score (0.89). Moreover, the commendable performance of the proposed TDC-LC across different organs suggests good generalization. CONCLUSION The promising performance of the proposed pipeline suggests that it can serve as an automated system for detecting and counting lymphocytes from patches of tissue samples, thereby reducing the burden on pathologists.
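The two-phase structure (artifact rejection first, then detection and counting) can be illustrated with the hedged sketch below; both networks are simple stand-ins rather than LSATM-Net or the paper's instance-segmentation model, whose inference code is at the GitHub link above.

```python
# Illustrative two-phase pipeline: filter artifact patches, then count on the rest.
import torch
import torch.nn as nn

def tiny_cnn(out_dim):
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, out_dim))

artifact_filter = tiny_cnn(out_dim=2)   # phase 1: artifact vs valid tissue
counter = tiny_cnn(out_dim=1)           # phase 2: regression stand-in for the lymphocyte count

def count_lymphocytes(patches):
    """Drop patches flagged as artifacts, then sum the predicted counts on the rest."""
    with torch.no_grad():
        keep = artifact_filter(patches).argmax(dim=1) == 1
        if not keep.any():
            return 0.0
        return float(counter(patches[keep]).clamp(min=0).sum())

print(count_lymphocytes(torch.rand(8, 3, 128, 128)))
```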
Affiliation(s)
- Muhammad Mohsin Zafar
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan; Faculty of Computer Science and Engineering, Ghulam Ishaq Khan Institute of Engineering Sciences and Technology, Topi 23640, District Swabi, Khyber Pakhtunkhwa, Pakistan
- Zunaira Rauf
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan; PIEAS Artificial Intelligence Center (PAIC), Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan
- Anabia Sohail
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan; PIEAS Artificial Intelligence Center (PAIC), Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan
- Abdul Rehman Khan
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan; Center for Mathematical Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan
- Muhammad Obaidullah
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan; Center for Mathematical Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan
- Saddam Hussain Khan
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan; PIEAS Artificial Intelligence Center (PAIC), Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan
- Yeon Soo Lee
- Department of Biomedical Engineering, College of Medical Sciences, Catholic University of Daegu, South Korea
- Asifullah Khan
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan; PIEAS Artificial Intelligence Center (PAIC), Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan; Department of Biomedical Engineering, College of Medical Sciences, Catholic University of Daegu, South Korea; Center for Mathematical Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan
30
Khan SH, Sohail A, Khan A, Hassan M, Lee YS, Alam J, Basit A, Zubair S. COVID-19 detection in chest X-ray images using deep boosted hybrid learning. Comput Biol Med 2021; 137:104816. [PMID: 34482199; PMCID: PMC8403339; DOI: 10.1016/j.compbiomed.2021.104816]
Abstract
The newly emerged COVID-19, declared a pandemic disease, has affected millions of human lives and caused a massive burden on healthcare centers. Therefore, a quick, accurate, and low-cost computer-based tool is required to timely detect and treat COVID-19 patients. In this work, two new deep learning frameworks, Deep Hybrid Learning (DHL) and Deep Boosted Hybrid Learning (DBHL), are proposed for effective COVID-19 detection in chest X-ray datasets. In the proposed DHL framework, the representation learning ability of the two developed COVID-RENet-1 & 2 models is exploited individually through a machine learning (ML) classifier. In the COVID-RENet models, region- and edge-based operations are carefully applied to learn region homogeneity and extract boundary features. In the proposed DBHL framework, by contrast, COVID-RENet-1 & 2 are fine-tuned using transfer learning on chest X-rays. Furthermore, deep feature spaces are generated from the penultimate layers of the two models and then concatenated to obtain a single enriched boosted feature space. A conventional ML classifier exploits the enriched feature space to achieve better COVID-19 detection performance. The proposed COVID-19 detection frameworks are evaluated on radiologist-authenticated chest X-ray data, and their performance is compared with well-established CNNs. The experiments show that the proposed DBHL framework, which merges the two deep CNN feature spaces, yields good performance (accuracy: 98.53%, sensitivity: 0.99, F-score: 0.98, and precision: 0.98). Furthermore, a web-based interface is developed, which takes only 5-10 s to detect COVID-19 in each unseen chest X-ray image. This web predictor is expected to help early diagnosis, save precious lives, and thus positively impact society.
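The "boosted feature space" step, concatenating penultimate-layer features from two CNNs and feeding a conventional ML classifier, can be sketched as below; two torchvision ResNet-18s stand in for COVID-RENet-1/2, which is an assumption made only for illustration.

```python
# Hedged sketch: concatenate penultimate features of two CNNs, then train an SVM.
import torch
import torch.nn as nn
from torchvision.models import resnet18
from sklearn.svm import SVC

cnn_a = resnet18(weights=None).eval()
cnn_b = resnet18(weights=None).eval()
cnn_a.fc = nn.Identity()          # expose the 512-d penultimate features
cnn_b.fc = nn.Identity()

def boosted_features(x):
    with torch.no_grad():
        return torch.cat([cnn_a(x), cnn_b(x)], dim=1).numpy()   # (N, 1024)

x = torch.rand(20, 3, 224, 224)                  # stand-in chest X-ray batch
y = torch.randint(0, 2, (20,)).numpy()           # stand-in labels
clf = SVC(kernel='rbf').fit(boosted_features(x), y)
print(clf.predict(boosted_features(x[:4])))
```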
Affiliation(s)
- Saddam Hussain Khan
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, 45650, Pakistan; PIEAS Artificial Intelligence Center (PAIC), Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, 45650, Pakistan
- Anabia Sohail
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, 45650, Pakistan; PIEAS Artificial Intelligence Center (PAIC), Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, 45650, Pakistan
- Asifullah Khan
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, 45650, Pakistan; PIEAS Artificial Intelligence Center (PAIC), Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, 45650, Pakistan; Center for Mathematical Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, 45650, Pakistan
- Mehdi Hassan
- Department of Computer Science, Air University, Islamabad, Pakistan
- Yeon Soo Lee
- Department of Biomedical Engineering, College of Medical Science, Catholic University of Daegu, South Korea
- Abdul Basit
- TPD, Pakistan Institute of Nuclear Science and Technology, Nilore, Islamabad, 45650, Pakistan
- Saima Zubair
- Islamabad Institute of Reproduce Medicine, Pakistan
31
Khan SH, Sohail A, Zafar MM, Khan A. Coronavirus disease analysis using chest X-ray images and a novel deep convolutional neural network. Photodiagnosis Photodyn Ther 2021; 35:102473. [PMID: 34348186; PMCID: PMC8325950; DOI: 10.1016/j.pdpdt.2021.102473]
Abstract
Background The recent emergence of a highly infectious and contagious respiratory viral disease known as COVID-19 has vastly impacted human lives and overloaded the health care system. Therefore, it is crucial to develop a fast and accurate diagnostic system for the timely identification of COVID-19 infected patients and thus to help control its spread. Methods This work proposes a new deep CNN based technique for COVID-19 classification in X-ray images. In this regard, two novel custom CNN architectures, namely COVID-RENet-1 and COVID-RENet-2, are developed for COVID-19 specific pneumonia analysis. The proposed technique systematically employs region- and edge-based operations along with convolution operations. The advantage of the proposed idea is validated by performing a series of experiments and comparing the results with two baseline CNNs that exploit either a single type of pooling operation or strided convolution down the architecture. Additionally, the discrimination capacity of the proposed technique is assessed by benchmarking it against state-of-the-art CNNs on a radiologist-authenticated chest X-ray dataset. The implementation is available at https://github.com/PRLAB21/Coronavirus-Disease-Analysis-using-Chest-X-Ray-Images. Results The proposed classification technique shows good generalization compared to existing CNNs, achieving promising MCC (0.96), F-score (0.98), and accuracy (98%). This suggests that synergistically using region- and edge-based operations helps to better exploit region homogeneity, textural variations, and region boundary-related information in an image, which in turn helps to capture pneumonia-specific patterns. Conclusions The encouraging results of the proposed classification technique on the test set, with high sensitivity (0.98) and precision (0.98), suggest its effectiveness and its potential use in the analysis of other infectious diseases from X-ray imagery.
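For reference, the figures of merit reported here (MCC, F-score, accuracy, sensitivity, precision) can be computed with scikit-learn as below; the printed values come from dummy predictions, not from the paper.

```python
# Standard classification metrics with scikit-learn (toy labels, not the paper's results).
from sklearn.metrics import (matthews_corrcoef, f1_score, accuracy_score,
                             recall_score, precision_score)

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1]
print('MCC        ', round(matthews_corrcoef(y_true, y_pred), 3))
print('F-score    ', round(f1_score(y_true, y_pred), 3))
print('Accuracy   ', round(accuracy_score(y_true, y_pred), 3))
print('Sensitivity', round(recall_score(y_true, y_pred), 3))
print('Precision  ', round(precision_score(y_true, y_pred), 3))
```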
Affiliation(s)
- Saddam Hussain Khan
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan; PIEAS Artificial Intelligence Center (PAIC), Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan
- Anabia Sohail
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan; PIEAS Artificial Intelligence Center (PAIC), Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan
- Muhammad Mohsin Zafar
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan
- Asifullah Khan
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan; PIEAS Artificial Intelligence Center (PAIC), Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan; Center for Mathematical Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad 45650, Pakistan
32
A Cascade Deep Forest Model for Breast Cancer Subtype Classification Using Multi-Omics Data. Mathematics 2021. [DOI: 10.3390/math9131574]
Abstract
Automated diagnosis systems aim to reduce the cost of diagnosis while maintaining the same efficiency. Many methods have been used for breast cancer subtype classification. Some use a single data source, while others integrate many data sources, which tends to improve accuracy at the cost of computational performance. Breast cancer data, especially biological data, is known for its imbalance and for the lack of extensive amounts of histopathological images compared with biological data. Recent studies have shown that the cascade Deep Forest ensemble model achieves competitive classification accuracy compared with alternatives such as general ensemble learning methods and conventional deep neural networks (DNNs), especially for imbalanced training sets, by learning hyper-representations through cascaded ensembles of decision trees. In this work, a cascade Deep Forest is employed to classify breast cancer subtypes, IntClust and Pam50, using multi-omics datasets and different configurations. The results recorded an accuracy of 83.45% for 5 subtypes and 77.55% for 10 subtypes. The significance of this work is that using gene expression data alone with the cascade Deep Forest classifier achieves accuracy comparable to other techniques with higher computational performance, where the recorded time is about 5 s for 10 subtypes and 7 s for 5 subtypes.
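A compact, illustrative cascade Deep Forest (gcForest-style) classifier can be written with scikit-learn ensembles, where each level appends its class-probability outputs to the raw input features before passing them to the next level; this sketches the general technique on toy data, not the exact configuration or multi-omics preprocessing of the paper.

```python
# gcForest-style cascade of random forests and extra-trees (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

class CascadeForest:
    def __init__(self, n_levels=2, random_state=0):
        self.n_levels, self.random_state, self.levels = n_levels, random_state, []
    def fit(self, X, y):
        aug = X
        for _ in range(self.n_levels):
            rf = RandomForestClassifier(n_estimators=100, random_state=self.random_state).fit(aug, y)
            et = ExtraTreesClassifier(n_estimators=100, random_state=self.random_state).fit(aug, y)
            self.levels.append((rf, et))
            # gcForest uses out-of-fold probabilities here; in-sample ones keep the sketch short
            aug = np.hstack([X, rf.predict_proba(aug), et.predict_proba(aug)])
        return self
    def predict(self, X):
        aug, p = X, None
        for rf, et in self.levels:
            p_rf, p_et = rf.predict_proba(aug), et.predict_proba(aug)
            p = (p_rf + p_et) / 2
            aug = np.hstack([X, p_rf, p_et])
        return p.argmax(axis=1)

X, y = make_classification(n_samples=300, n_features=60, n_informative=12,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
model = CascadeForest().fit(Xtr, ytr)
print('toy 5-subtype accuracy:', (model.predict(Xte) == yte).mean())
```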
33
Mitotic nuclei analysis in breast cancer histopathology images using deep ensemble classifier. Med Image Anal 2021; 72:102121. [PMID: 34139665; DOI: 10.1016/j.media.2021.102121]
Abstract
Mitotic nuclei estimation in breast tumour samples has prognostic significance for analysing tumour aggressiveness and for grading. The automated assessment of mitotic nuclei is challenging because of their high similarity to non-mitotic nuclei and their heteromorphic appearance. In this work, we propose a new deep Convolutional Neural Network (CNN) based heterogeneous ensemble technique, "DHE-Mit-Classifier", for the analysis of mitotic nuclei in breast histopathology images. The proposed technique first detects candidate mitotic patches from the histopathological biopsy regions and then, in a second step, classifies these patches into mitotic and non-mitotic nuclei using the proposed DHE-Mit-Classifier. For the development of the heterogeneous ensemble, five different deep CNNs are designed and used as base classifiers. These deep CNNs have varying architectural designs to capture the structural, textural, and morphological properties of mitotic nuclei. The developed base classifiers exploit different ideas, including (i) region homogeneity and feature invariance, (ii) asymmetric split-transform-merge, (iii) dilated convolution based multi-scale transformation, (iv) spatial and channel attention, and (v) residual learning. A multi-layer perceptron is used as a meta-classifier to develop a robust and accurate classifier that provides the final decision. The performance of the proposed ensemble "DHE-Mit-Classifier" is evaluated against state-of-the-art CNNs. Evaluation on the test set suggests the superiority of the proposed ensemble, with an F-score of 0.77, recall of 0.71, precision of 0.83, and area under the precision-recall curve of 0.80. The good generalisation of the proposed ensemble, with a considerably high F-score and precision, suggests its potential use in the development of an assistance tool for pathologists.
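The stacking design (heterogeneous base learners plus an MLP meta-classifier) can be illustrated with scikit-learn's stacking API; the paper's base learners are five custom deep CNNs, for which simple classifiers are substituted here purely to show the meta-classification step on toy, imbalanced data.

```python
# Stacking sketch: heterogeneous base learners feed an MLP meta-classifier.
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Imbalanced toy data standing in for mitotic vs non-mitotic candidate patches.
X, y = make_classification(n_samples=400, n_features=20, weights=[0.9, 0.1], random_state=1)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=1)

stack = StackingClassifier(
    estimators=[('rf', RandomForestClassifier(random_state=1)),
                ('svc', SVC(probability=True, random_state=1)),
                ('lr', LogisticRegression(max_iter=1000))],
    final_estimator=MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=1))
stack.fit(Xtr, ytr)
print('toy mitotic vs non-mitotic accuracy:', stack.score(Xte, yte))
```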
34
Sena IFG, Rocha BGS, Picoli CC, Santos GSP, Costa AC, Gonçalves BOP, Garcia APV, Soltani-Asl M, Coimbra-Campos LMC, Silva WN, Costa PAC, Pinto MCX, Amorim JH, Azevedo VAC, Resende RR, Heller D, Cassali GD, Mintz A, Birbrair A. C(3)1-TAg in C57BL/6J background as a model to study mammary tumor development. Histochem Cell Biol 2021; 156:165-182. [PMID: 34003355; DOI: 10.1007/s00418-021-01995-w]
Abstract
Diagnosis and prognosis of breast cancer are based on disease staging identified through histopathological and molecular biology techniques. Animal models are used to gain mechanistic insights into the development of breast cancer. C(3)1-TAg is a genetically engineered mouse model that develops mammary cancer. However, carcinogenesis caused by this transgene was characterized in the Friend Virus B (FVB) background. As most genetic studies are done in mice with the C57BL/6J background, we aimed to define the histological alterations in C3(1)-TAg C57BL/6J animals. Our results showed that C3(1)-TAg animals with the C57BL/6J background develop solid-basaloid adenoid cystic carcinomas with increased fibrosis, decreased area of adipocytes, and a high proliferative index, which are triple-negative for progesterone, estrogen, and human epidermal growth factor receptor 2 (HER2) receptors. Our results also revealed that tumor development is slower in the C57BL/6J background than in the FVB strain, providing a better model to study the different stages of breast cancer progression.
Affiliation(s)
- Isadora F G Sena
- Department of Pathology, Federal University of Minas Gerais, Belo Horizonte, MG, Brazil
- Beatriz G S Rocha
- Department of Pathology, Federal University of Minas Gerais, Belo Horizonte, MG, Brazil
- Caroline C Picoli
- Department of Pathology, Federal University of Minas Gerais, Belo Horizonte, MG, Brazil
- Gabryella S P Santos
- Department of Pathology, Federal University of Minas Gerais, Belo Horizonte, MG, Brazil
- Alinne C Costa
- Department of Pathology, Federal University of Minas Gerais, Belo Horizonte, MG, Brazil
- Bryan O P Gonçalves
- Department of Pathology, Federal University of Minas Gerais, Belo Horizonte, MG, Brazil
- Ana Paula V Garcia
- Department of Pathology, Federal University of Minas Gerais, Belo Horizonte, MG, Brazil
- Maryam Soltani-Asl
- Department of Pathology, Federal University of Minas Gerais, Belo Horizonte, MG, Brazil
- Walison N Silva
- Department of Pathology, Federal University of Minas Gerais, Belo Horizonte, MG, Brazil
- Pedro A C Costa
- Department of Pathology, Federal University of Minas Gerais, Belo Horizonte, MG, Brazil
- Mauro C X Pinto
- Laboratory of Neuropharmacology and Neurochemistry, Institute of Biological Sciences, Federal University of Goiás, Goiânia, GO, Brazil
- Jaime H Amorim
- Center of Biological Sciences and Health, Federal University of West Bahia, Barreiras, BA, Brazil
- Vasco A C Azevedo
- Cellular and Molecular Genetics Laboratory, Department of Genetics, Ecology and Evolution, Federal University of Minas Gerais, Belo Horizonte, MG, Brazil
- Rodrigo R Resende
- Department of Biochemistry and Immunology, Federal University of Minas Gerais, Belo Horizonte, MG, Brazil
- Debora Heller
- Hospital Israelita Albert Einstein, São Paulo, Brazil; Cruzeiro Do Sul University, São Paulo, Brazil
- Geovanni D Cassali
- Department of Pathology, Federal University of Minas Gerais, Belo Horizonte, MG, Brazil
- Akiva Mintz
- Department of Radiology, Columbia University Medical Center, New York, NY, USA
- Alexander Birbrair
- Department of Pathology, Federal University of Minas Gerais, Belo Horizonte, MG, Brazil; Department of Radiology, Columbia University Medical Center, New York, NY, USA