1. Hosseini MS, Bejnordi BE, Trinh VQH, Chan L, Hasan D, Li X, Yang S, Kim T, Zhang H, Wu T, Chinniah K, Maghsoudlou S, Zhang R, Zhu J, Khaki S, Buin A, Chaji F, Salehi A, Nguyen BN, Samaras D, Plataniotis KN. Computational pathology: A survey review and the way forward. J Pathol Inform 2024; 15:100357. PMID: 38420608; PMCID: PMC10900832; DOI: 10.1016/j.jpi.2023.100357. Open access.
Abstract
Computational Pathology (CPath) is an interdisciplinary science that brings computational approaches to the analysis and modeling of medical histopathology images. Its main objective is to develop infrastructure and workflows for digital diagnostics as an assistive computer-aided diagnosis (CAD) system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific work introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms into clinical practice. This raises a significant question about the direction and trends being pursued in CPath. In this article we provide a comprehensive review of more than 800 papers to address the challenges faced, from problem design all the way to application and implementation. We have catalogued each paper into a model card, examining the key works and challenges faced, to lay out the current landscape in CPath. We hope this helps the community locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath developments as a cycle of stages that must be cohesively linked together to address the challenges of such a multidisciplinary science. We review this cycle from the perspectives of data-centric, model-centric, and application-centric problems. We finally sketch the remaining challenges and provide directions for future technical development and clinical integration of CPath. For updated information on this survey review paper and access to the original model-card repository, please refer to GitHub. An updated version of this draft can also be found on arXiv.
Affiliation(s)
- Mahdi S Hosseini
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Vincent Quoc-Huy Trinh
- Institute for Research in Immunology and Cancer of the University of Montreal, Montreal, QC H3T 1J4, Canada
- Lyndon Chan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Danial Hasan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Xingwen Li
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Stephen Yang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Taehyo Kim
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Haochen Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Theodore Wu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Kajanan Chinniah
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Sina Maghsoudlou
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ryan Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Jiadai Zhu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Samir Khaki
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Andrei Buin
- Huron Digital Pathology, St. Jacobs, ON N0B 2N0, Canada
- Fatemeh Chaji
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ala Salehi
- Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Bich Ngoc Nguyen
- University of Montreal Hospital Center, Montreal, QC H2X 0C2, Canada
- Dimitris Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, United States
- Konstantinos N Plataniotis
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
2. Huang L, Ruan S, Xing Y, Feng M. A review of uncertainty quantification in medical image analysis: Probabilistic and non-probabilistic methods. Med Image Anal 2024; 97:103223. PMID: 38861770; DOI: 10.1016/j.media.2024.103223.
Abstract
The comprehensive integration of machine learning healthcare models within clinical practice remains suboptimal, notwithstanding the proliferation of high-performing solutions reported in the literature. A predominant factor hindering widespread adoption is the insufficiency of evidence affirming the reliability of these models. Recently, uncertainty quantification methods have been proposed as a potential solution to quantify the reliability of machine learning models and thus increase the interpretability and acceptability of their results. In this review, we offer a comprehensive overview of the prevailing methods proposed to quantify the uncertainty inherent in machine learning models developed for various medical image tasks. Contrary to earlier reviews that focused exclusively on probabilistic methods, this review also explores non-probabilistic approaches, thereby furnishing a more holistic survey of research on uncertainty quantification for machine learning models. We summarize and discuss medical applications and the corresponding uncertainty evaluation protocols, focusing on the specific challenges of uncertainty in medical image analysis, and we highlight potential directions for future research. Overall, this review aims to allow researchers from both clinical and technical backgrounds to gain a quick yet in-depth understanding of research on uncertainty quantification for medical image analysis machine learning models.
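As a concrete reference point for the simplest of the methods this review surveys, the entropy of a softmax output is a common baseline uncertainty score. The following NumPy sketch is our illustration (function and variable names are not from the paper):

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy (in nats) of categorical predictions.

    probs: (n_samples, n_classes) softmax outputs; a higher entropy
    means the model is less certain about that sample.
    """
    p = np.clip(probs, 1e-12, 1.0)  # guard against log(0)
    return -np.sum(p * np.log(p), axis=1)

# One confident and one ambiguous 3-class prediction.
probs = np.array([
    [0.98, 0.01, 0.01],  # near one-hot -> low entropy
    [0.34, 0.33, 0.33],  # near uniform -> high entropy
])
entropy = predictive_entropy(probs)
```

Ranking samples by such a score is the common core that the probabilistic and non-probabilistic families refine in different ways.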
Affiliation(s)
- Ling Huang
- Saw Swee Hock School of Public Health, National University of Singapore, Singapore
- Su Ruan
- Quantif, LITIS, University of Rouen Normandy, France
- Yucheng Xing
- Saw Swee Hock School of Public Health, National University of Singapore, Singapore
- Mengling Feng
- Saw Swee Hock School of Public Health, National University of Singapore, Singapore; Institute of Data Science, National University of Singapore, Singapore
3. Umamaheswari T, Babu YMM. ViT-MAENB7: An innovative breast cancer diagnosis model from 3D mammograms using advanced segmentation and classification process. Comput Methods Programs Biomed 2024; 257:108373. PMID: 39276667; DOI: 10.1016/j.cmpb.2024.108373.
Abstract
Breast cancer is one of the most prevalent causes of death for women and is rapidly becoming the leading cause of cancer mortality among women globally. Early detection allows patients to obtain appropriate therapy, increasing their probability of survival, and the adoption of 3-Dimensional (3D) mammography for identifying breast abnormalities has dramatically reduced the number of deaths. Accurate detection and classification of breast lumps in 3D mammography remain difficult, however, due to factors such as inadequate contrast and normal fluctuations in tissue density, and several Computer-Aided Diagnosis (CAD) solutions are under development to help radiologists classify breast abnormalities accurately. In this paper, a breast cancer diagnosis model is implemented to detect breast cancer and thereby reduce death rates. The 3D mammogram images are gathered from the internet and preprocessed using a median filter and an image scaling method: the median filter smooths out irregularities and removes noise or artifacts that could interfere with the detection of abnormalities, while image scaling adjusts the size and resolution of the images for better analysis. The preprocessed image is then segmented, dividing it into meaningful regions based on intensity, color, texture, or other features so that structures such as organs and tumors can be identified and separated. Segmentation is performed with an Adaptive Thresholding with Region Growing Fusion Model (AT-RGFM), which combines the advantages of thresholding and region-growing techniques to accurately identify and delineate specific structures within the image; the Modified Garter Snake Optimization Algorithm (MGSOA) is used to optimize its parameters. The segmented image is then fed into the detection phase, where tumors are detected by a Vision Transformer-based Multiscale Adaptive EfficientNetB7 (ViT-MAENB7) model. Its multiscale adaptive approach analyzes the image at various levels of detail, improving the overall accuracy of tumor detection, and the MGSOA algorithm is again used to optimize the model's parameters.
The developed MGSOA-ViT-MAENB7 achieves an accuracy of 96.6 %, while the comparison models RNN, LSTM, EffNet, and ViT-MAENet achieve 90.31 %, 92.79 %, 94.46 %, and 94.75 %, respectively. The model's ability to analyze images at multiple scales, combined with the optimization provided by MGSOA, results in a highly accurate and efficient system for detecting tumors in medical images, outperforming conventional cancer diagnosis models and helping healthcare professionals tailor treatment plans to individual patients.
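The preprocess-then-segment pipeline described in this abstract can be illustrated in miniature. The sketch below implements a 3x3 median filter and, as a deliberately simplified stand-in for the paper's AT-RGFM (which we do not reproduce), a global-mean threshold; the toy image and all names are our own:

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter with edge replication: the denoising step."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # Stack the 9 shifted copies that form each pixel's 3x3 neighbourhood.
    windows = np.stack([padded[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)])
    return np.median(windows, axis=0)

# Toy 7x7 "mammogram": one impulse-noise pixel and a 3x3 bright region.
img = np.zeros((7, 7))
img[0, 0] = 100.0       # isolated noise
img[2:5, 2:5] = 50.0    # simulated lesion
den = median_filter3(img)

# Simplified stand-in for adaptive thresholding with region growing:
# keep pixels brighter than the global mean of the denoised image.
mask = den > den.mean()
```

The median filter suppresses the isolated impulse while leaving the interior of the larger bright region intact, which is exactly why it precedes segmentation.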
Affiliation(s)
- Y Murali Mohan Babu
- N.B.K.R. Institute of Science and Technology, Vidhyanagar, Andhra Pradesh, India
4. Solorzano L, Robertson S, Acs B, Hartman J, Rantalainen M. Ensemble-based deep learning improves detection of invasive breast cancer in routine histopathology images. Heliyon 2024; 10:e32892. PMID: 39022088; PMCID: PMC11252882; DOI: 10.1016/j.heliyon.2024.e32892. Open access.
Abstract
Accurate detection of invasive breast cancer (IC) can provide decision support to pathologists as well as improve downstream computational analyses, where detection of IC is a first step. Tissue containing IC is characterized by the presence of specific morphological features, which can be learned by convolutional neural networks (CNN). Here, we compare the use of a single CNN model versus an ensemble of several base models with the same CNN architecture, and we evaluate prediction performance as well as variability across ensemble-based model predictions. Two in-house datasets comprising 587 whole slide images (WSI) are used to train an ensemble of ten InceptionV3 models whose consensus is used to determine the presence of IC. A novel visualisation strategy was developed to communicate ensemble agreement spatially. Performance was evaluated in an internal test set with 118 WSIs, and in an additional external dataset (TCGA breast cancer) with 157 WSIs. We observed that the ensemble-based strategy outperformed the single-CNN alternative with respect to tile-level accuracy in 89 % of all WSIs in the test set. The overall accuracy was 0.92 (DICE coefficient, 0.90) for the ensemble model and 0.85 (DICE coefficient, 0.83) for the single-CNN alternative in the internal test set. For TCGA, the ensemble outperformed the single CNN in 96.8 % of the WSIs, with an accuracy of 0.87 (DICE coefficient, 0.89), while the single model achieved an accuracy of 0.75 (DICE coefficient, 0.78). The results suggest that an ensemble-based modeling strategy for invasive breast cancer detection consistently outperforms the conventional single-model alternative. Furthermore, visualisation of the ensemble agreement and confusion areas provides direct visual interpretation of the results. High-performing cancer detection can provide decision support in the routine pathology setting as well as facilitate downstream computational analyses.
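The consensus-and-agreement idea can be sketched with plain soft voting. This toy example (the shapes, numbers, and names are our illustration, not the paper's code) averages tile-level probabilities from several models and scores how unanimously they vote:

```python
import numpy as np

def ensemble_consensus(member_probs, threshold=0.5):
    """Soft-voting consensus over an ensemble of tile classifiers.

    member_probs: (n_models, n_tiles) per-model probability of invasive
    cancer for each tile. Returns the averaged probability, the binary
    consensus mask, and a per-tile agreement score in [0, 1]
    (1 = unanimous vote, 0 = evenly split).
    """
    mean_p = member_probs.mean(axis=0)
    votes = member_probs > threshold
    agreement = np.abs(votes.mean(axis=0) - 0.5) * 2.0
    return mean_p, mean_p > threshold, agreement

# Ten hypothetical models scoring three tiles.
probs = np.array([[0.9, 0.1, 0.55],
                  [0.8, 0.2, 0.45]] * 5)   # shape (10, 3)
mean_p, mask, agreement = ensemble_consensus(probs)
```

Rendering `agreement` over the slide grid is the spirit of the paper's spatial agreement visualisation: low-agreement tiles mark the confusion areas.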
Affiliation(s)
- Leslie Solorzano
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Balazs Acs
- Department of Oncology-Pathology, Karolinska Institutet, Stockholm, Sweden
- Johan Hartman
- Department of Oncology-Pathology, Karolinska Institutet, Stockholm, Sweden
- Mattias Rantalainen
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
5. Lambert B, Forbes F, Doyle S, Dehaene H, Dojat M. Trustworthy clinical AI solutions: A unified review of uncertainty quantification in Deep Learning models for medical image analysis. Artif Intell Med 2024; 150:102830. PMID: 38553168; DOI: 10.1016/j.artmed.2024.102830.
Abstract
The full acceptance of Deep Learning (DL) models in the clinical field remains low relative to the quantity of high-performing solutions reported in the literature. End users are particularly reluctant to rely on the opaque predictions of DL models. Uncertainty quantification methods have been proposed in the literature as a potential solution, to reduce the black-box effect of DL models and increase the interpretability and acceptability of the results for the end user. In this review, we propose an overview of the existing methods to quantify uncertainty associated with DL predictions. We focus on applications to medical image analysis, which present specific challenges due to the high dimensionality of images and their variable quality, as well as constraints associated with real-world clinical routine. Moreover, we discuss the concept of structural uncertainty, a corpus of methods to facilitate the alignment of segmentation uncertainty estimates with clinical attention. We then discuss the evaluation protocols used to validate the relevance of uncertainty estimates. Finally, we highlight the open challenges for uncertainty quantification in the medical field.
Affiliation(s)
- Benjamin Lambert
- Univ. Grenoble Alpes, Inserm, U1216, Grenoble Institut des Neurosciences, Grenoble, 38000, France; Pixyl Research and Development Laboratory, Grenoble, 38000, France
- Florence Forbes
- Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, Grenoble, 38000, France
- Senan Doyle
- Pixyl Research and Development Laboratory, Grenoble, 38000, France
- Harmonie Dehaene
- Pixyl Research and Development Laboratory, Grenoble, 38000, France
- Michel Dojat
- Univ. Grenoble Alpes, Inserm, U1216, Grenoble Institut des Neurosciences, Grenoble, 38000, France
6. Yue G, Zhuo G, Yan W, Zhou T, Tang C, Yang P, Wang T. Boundary uncertainty aware network for automated polyp segmentation. Neural Netw 2024; 170:390-404. PMID: 38029720; DOI: 10.1016/j.neunet.2023.11.050.
Abstract
Recently, leveraging deep neural networks for automated colorectal polyp segmentation has emerged as a hot topic due to its advantages in evading the limitations of visual inspection, e.g., overwork and subjectivity. However, most existing methods do not pay enough attention to the uncertain areas of colonoscopy images and often provide unsatisfactory segmentation performance. In this paper, we propose a novel boundary uncertainty aware network (BUNet) for precise and robust colorectal polyp segmentation. Specifically, considering that polyps vary greatly in size and shape, we first adopt a pyramid vision transformer encoder to learn multi-scale feature representations. Then, a simple yet effective boundary exploration module (BEM) is proposed to explore boundary cues from the low-level features. To make the network focus on the ambiguous areas where the prediction score is biased to neither the foreground nor the background, we further introduce a boundary uncertainty aware module (BUM) that explores error-prone regions from the high-level features with the assistance of boundary cues provided by the BEM. Through top-down hybrid deep supervision, BUNet implements coarse-to-fine polyp segmentation and finally localizes polyp regions precisely. Extensive experiments on five public datasets show that BUNet is superior to thirteen competing methods in terms of both effectiveness and generalization ability.
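The "ambiguous area" that BUM targets can be made concrete: it is the set of pixels whose foreground score is biased to neither class. BUNet learns this region from features; the fixed band around the 0.5 decision boundary below is our simplification for illustration:

```python
import numpy as np

def ambiguity_map(prob_map, band=0.2):
    """Flag pixels biased to neither foreground nor background.

    prob_map: (H, W) sigmoid outputs; a pixel counts as ambiguous when
    its score lies within `band` of the 0.5 decision boundary.
    """
    return np.abs(prob_map - 0.5) < band

# Two confident pixels (0.95, 0.05) and two ambiguous ones (0.55, 0.45).
prob = np.array([[0.95, 0.55],
                 [0.05, 0.45]])
amb = ambiguity_map(prob)
```

Supervising a network extra-hard on such pixels is the general intuition behind uncertainty-aware segmentation losses.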
Affiliation(s)
- Guanghui Yue
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Marshall Laboratory of Biomedical Engineering, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen 518060, China
- Guibin Zhuo
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Marshall Laboratory of Biomedical Engineering, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen 518060, China
- Weiqing Yan
- School of Computer and Control Engineering, Yantai University, Yantai 264005, China
- Tianwei Zhou
- College of Management, Shenzhen University, Shenzhen 518060, China
- Chang Tang
- School of Computer Science, China University of Geosciences, Wuhan 430074, China
- Peng Yang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Marshall Laboratory of Biomedical Engineering, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen 518060, China
- Tianfu Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Marshall Laboratory of Biomedical Engineering, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen 518060, China
7. Atabansi CC, Nie J, Liu H, Song Q, Yan L, Zhou X. A survey of Transformer applications for histopathological image analysis: New developments and future directions. Biomed Eng Online 2023; 22:96. PMID: 37749595; PMCID: PMC10518923; DOI: 10.1186/s12938-023-01157-0. Open access.
Abstract
Transformers have been widely used in many computer vision challenges and have shown the capability of producing better results than convolutional neural networks (CNNs). Taking advantage of their capacity to capture long-range contextual information and learn more complex relations in the image data, Transformers have been applied to histopathological image processing tasks. In this survey, we present a thorough analysis of the uses of Transformers in histopathological image analysis, covering several topics, from newly built Transformer models to unresolved challenges. More precisely, we first outline the fundamental principles of the attention mechanism included in Transformer models and other key frameworks. Second, we analyze Transformer-based applications in the histopathological imaging domain and provide a thorough evaluation of more than 100 research publications across different downstream tasks to cover the most recent innovations, including survival analysis and prediction, segmentation, classification, detection, and representation. Within this survey, we also compare the performance of CNN-based techniques to Transformers based on recently published papers, highlight major challenges, and provide interesting directions for future research. Despite the outstanding performance of the Transformer-based architectures in a number of papers reviewed in this survey, we anticipate that further improvements and exploration of Transformers in the histopathological imaging domain are still required. We hope that this survey will give readers in this field of study a thorough understanding of Transformer-based techniques in histopathological image analysis; an up-to-date paper list summary is provided at https://github.com/S-domain/Survey-Paper.
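The attention mechanism those fundamentals rest on is compact enough to state in full: Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. A self-contained NumPy sketch of this single-head, scaled dot-product form (our minimal implementation, not code from any surveyed paper):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V, weights

# One query attending over two key/value pairs.
Q = np.array([[1.0, 0.0]])
K = np.array([[1.0, 0.0], [0.0, 1.0]])
V = np.array([[10.0], [20.0]])
out, w = scaled_dot_product_attention(Q, K, V)
# The query matches the first key more strongly, so w[0, 0] > w[0, 1]
# and the output is pulled toward the first value.
```

Multi-head attention, the building block of the Transformer models surveyed here, runs several such maps in parallel on learned projections of Q, K, and V.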
Affiliation(s)
- Jing Nie
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
- Haijun Liu
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
- Qianqian Song
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
- Lingfeng Yan
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
- Xichuan Zhou
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
8. Li M, Chen C, Cao Y, Zhou P, Deng X, Liu P, Wang Y, Lv X, Chen C. CIABNet: Category imbalance attention block network for the classification of multi-differentiated types of esophageal cancer. Med Phys 2023; 50:1507-1527. PMID: 36272103; DOI: 10.1002/mp.16067. Open access.
Abstract
BACKGROUND: Esophageal cancer has become one of the cancers that most seriously threaten human life and health, and its incidence and mortality rates remain among the highest of malignant tumors. Histopathological image analysis is the gold standard for diagnosing different differentiation types of esophageal cancer. PURPOSE: The grading accuracy and interpretability of auxiliary diagnostic models for esophageal cancer are seriously affected by small interclass differences, imbalanced data distribution, and poor model interpretability. We therefore developed the category imbalance attention block network (CIABNet) model to address these problems. METHODS: First, quantitative metrics and model visualization results are integrated to transfer knowledge from source-domain images to better identify the regions of interest (ROI) in the esophageal cancer target domain. Second, to attend to subtle interclass differences, we propose a concatenate fusion attention block, which simultaneously focuses on contextual local feature relationships and changes in channel attention weights among different regions. Third, we propose a category imbalance attention module, which treats each esophageal cancer differentiation class fairly by aggregating information of different intensities at multiple scales and explores more representative regional features for each class, effectively mitigating the negative impact of category imbalance. Finally, we use feature map visualization to interpret whether the ROIs are the same or similar between the model and pathologists, thereby improving the interpretability of the model.
RESULTS: The CIABNet model outperforms other state-of-the-art models in classifying the differentiation types of esophageal cancer, achieving an average classification accuracy of 92.24%, average precision of 93.52%, average recall of 90.31%, average F1 score of 91.73%, and average AUC of 97.43%. In addition, the ROIs CIABNet identifies in histopathological images of esophageal cancer are essentially similar or identical to those of pathologists. CONCLUSIONS: Our experimental results show that the proposed computer-aided diagnostic algorithm has great potential for histopathological images of multi-differentiated types of esophageal cancer.
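CIABNet tackles imbalance with a learned attention module; the standard non-attention baseline such designs are measured against is simple inverse-frequency reweighting of the loss. A minimal sketch of that baseline (the 8:2 label split is a made-up example, and this is not the paper's method):

```python
import numpy as np

def inverse_frequency_weights(labels, n_classes):
    """Per-class loss weights inversely proportional to class frequency.

    With counts c_k, weight_k = N / (K * c_k), so rare classes get
    proportionally larger weights in a weighted cross-entropy loss.
    """
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    return counts.sum() / (n_classes * counts)

# A hypothetical 8:2 imbalanced two-class training set.
labels = np.array([0] * 8 + [1] * 2)
w = inverse_frequency_weights(labels, n_classes=2)
```

The minority class receives the larger weight, which is the effect the category imbalance attention module pursues adaptively at the feature level instead of statically at the loss level.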
Affiliation(s)
- Min Li
- College of Information Science and Engineering, Xinjiang University, Urumqi, China
- Key Laboratory of Signal Detection and Processing, Xinjiang University, Urumqi, China
- Chen Chen
- College of Information Science and Engineering, Xinjiang University, Urumqi, China
- Xinjiang Cloud Computing Application Laboratory, Karamay, China
- Yanzhen Cao
- Department of Pathology, The Affiliated Tumor Hospital of Xinjiang Medical University, Urumqi, China
- Panyun Zhou
- College of Software, Xinjiang University, Urumqi, China
- Xin Deng
- College of Software, Xinjiang University, Urumqi, China
- Pei Liu
- College of Information Science and Engineering, Xinjiang University, Urumqi, China
- Yunling Wang
- The First Affiliated Hospital of Xinjiang Medical University, Urumqi, China
- Xiaoyi Lv
- College of Information Science and Engineering, Xinjiang University, Urumqi, China
- Key Laboratory of Signal Detection and Processing, Xinjiang University, Urumqi, China
- Xinjiang Cloud Computing Application Laboratory, Karamay, China
- College of Software, Xinjiang University, Urumqi, China
- Key Laboratory of Software Engineering Technology, Xinjiang University, Urumqi, China
- Cheng Chen
- College of Software, Xinjiang University, Urumqi, China
9. Yu D, Zhang X, Lin J, Cao T, Chen Y. SECS: An Effective CNN Joint Construction Strategy for Breast Cancer Histopathological Image Classification. Journal of King Saud University - Computer and Information Sciences 2023. DOI: 10.1016/j.jksuci.2023.01.017.
10. Iqbal MS, Ahmad W, Alizadehsani R, Hussain S, Rehman R. Breast Cancer Dataset, Classification and Detection Using Deep Learning. Healthcare (Basel) 2022; 10:2395. PMID: 36553919; PMCID: PMC9778593; DOI: 10.3390/healthcare10122395. Open access.
Abstract
Incorporating scientific research into clinical practice via clinical informatics, which includes genomics, proteomics, bioinformatics, and biostatistics, improves patient treatment. Computational pathology is a growing subspecialty with the potential to integrate whole slide images, multi-omics data, and health informatics. Pathology and laboratory medicine are critical to diagnosing cancer. This work reviews existing computational and digital pathology methods for breast cancer diagnosis with a special focus on deep learning. The paper starts by reviewing public datasets related to breast cancer diagnosis. Existing deep learning methods for breast cancer diagnosis are then reviewed, and publicly available code repositories are introduced. The paper closes by highlighting challenges and future work for deep learning-based diagnosis.
Affiliation(s)
- Muhammad Shahid Iqbal
- Department of Computer Science and Information Technology, Women University AJK, Bagh 12500, Pakistan
- Waqas Ahmad
- Higher Education Department Govt, AJK, Mirpur 10250, Pakistan
- Roohallah Alizadehsani
- Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Geelong, VIC 3216, Australia
- Sadiq Hussain
- Examination Branch, Dibrugarh University, Dibrugarh 786004, India
- Rizwan Rehman
- Centre for Computer Science and Applications, Dibrugarh University, Dibrugarh 786004, India
|
11
|
Dolezal JM, Srisuwananukorn A, Karpeyev D, Ramesh S, Kochanny S, Cody B, Mansfield AS, Rakshit S, Bansal R, Bois MC, Bungum AO, Schulte JJ, Vokes EE, Garassino MC, Husain AN, Pearson AT. Uncertainty-informed deep learning models enable high-confidence predictions for digital histopathology. Nat Commun 2022; 13:6572. [PMID: 36323656 PMCID: PMC9630455 DOI: 10.1038/s41467-022-34025-x] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2022] [Accepted: 10/07/2022] [Indexed: 11/06/2022] Open
Abstract
A model's ability to express its own predictive uncertainty is an essential attribute for maintaining clinical user confidence as computational biomarkers are deployed into real-world medical settings. In the domain of cancer digital histopathology, we describe a clinically-oriented approach to uncertainty quantification for whole-slide images, estimating uncertainty using dropout and calculating thresholds on training data to establish cutoffs for low- and high-confidence predictions. We train models to identify lung adenocarcinoma vs. squamous cell carcinoma and show that high-confidence predictions outperform predictions without uncertainty, in both cross-validation and testing on two large external datasets spanning multiple institutions. Our testing strategy closely approximates real-world application, with predictions generated on unsupervised, unannotated slides using predetermined thresholds. Furthermore, we show that uncertainty thresholding remains reliable in the setting of domain shift, with accurate high-confidence predictions of adenocarcinoma vs. squamous cell carcinoma for out-of-distribution, non-lung cancer cohorts.
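The dropout-based uncertainty scheme described above can be sketched as follows. This is a minimal, self-contained illustration of Monte Carlo dropout with a confidence cutoff, not the authors' implementation; `toy_model`, the class labels, and the cutoff value are invented for the example:

```python
import math
import random
import statistics

def mc_dropout_predict(model_fn, x, n_samples=30, p_drop=0.5, seed=0):
    """Monte Carlo dropout: run the stochastic forward pass many times and
    return the predictive mean and standard deviation (the uncertainty)."""
    rng = random.Random(seed)
    preds = [model_fn(x, rng, p_drop) for _ in range(n_samples)]
    return statistics.mean(preds), statistics.pstdev(preds)

def classify_with_threshold(mean, std, cutoff):
    """Label a prediction low- or high-confidence using a precomputed
    uncertainty cutoff (in the paper, derived from training data)."""
    label = "adenocarcinoma" if mean >= 0.5 else "squamous"
    return label, ("high" if std <= cutoff else "low")

def toy_model(x, rng, p_drop):
    """Stand-in for a CNN: weighted feature sum with inference-time dropout."""
    kept = [xi * wi for xi, wi in zip(x, [0.9, 0.8, 0.7]) if rng.random() >= p_drop]
    s = sum(kept) / (1 - p_drop)  # inverted-dropout rescaling
    return 1.0 / (1.0 + math.exp(-s))  # sigmoid -> tumor-class probability
```

In the paper's workflow the cutoff is fixed on training data before any external slide is seen, so deployment only compares each slide's uncertainty against it.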
Affiliation(s)
- James M Dolezal
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA
- Siddhi Ramesh
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA
- Sara Kochanny
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA
- Brittany Cody
- Department of Pathology, University of Chicago, Chicago, IL, USA
- Sagar Rakshit
- Division of Medical Oncology, Mayo Clinic, Rochester, MN, USA
- Radhika Bansal
- Division of Medical Oncology, Mayo Clinic, Rochester, MN, USA
- Melanie C Bois
- Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, MN, USA
- Aaron O Bungum
- Divisions of Pulmonary Medicine and Critical Care, Mayo Clinic, Rochester, MN, USA
- Jefree J Schulte
- Department of Pathology and Laboratory Medicine, University of Wisconsin at Madison, Madison, WI, USA
- Everett E Vokes
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA
- Marina Chiara Garassino
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA
- Aliya N Husain
- Department of Pathology, University of Chicago, Chicago, IL, USA
- Alexander T Pearson
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA
|
12
|
Nawaz M, Nazir T, Masood M, Ali F, Khan MA, Tariq U, Sahar N, Damaševičius R. Melanoma segmentation: A framework of improved DenseNet77 and UNET convolutional neural network. INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY 2022; 32:2137-2153. [DOI: 10.1002/ima.22750] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/09/2022] [Accepted: 05/03/2022] [Indexed: 08/25/2024]
Abstract
Melanoma is the most fatal type of skin cancer and can cause death at an advanced stage. Extensive work has been presented by researchers on computer vision for skin lesion localization. However, correct and effective melanoma segmentation remains a difficult task because of the extensive variations in the shape, color, and size of skin moles. Moreover, the presence of light and brightness variations further complicates the segmentation task. We have presented an improved deep learning (DL)-based approach, namely, the DenseNet77-based UNET model. More specifically, we have introduced the DenseNet77 network at the encoder unit of the UNET approach to compute a more representative set of image features. The calculated keypoints are later segmented by the decoder of the UNET model. We have used two standard datasets, namely, ISIC-2017 and ISIC-2018, to evaluate the performance of the proposed approach and acquired segmentation accuracies of 99.21% and 99.51%, respectively. We have confirmed through both quantitative and qualitative results that the proposed improved UNET approach is robust for skin lesion segmentation and can accurately recognize moles of varying colors and sizes.
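The segmentation accuracy reported above is pixel-level agreement between a predicted mask and its ground truth. A minimal sketch of that metric, with the Dice coefficient (a common companion metric in lesion segmentation) included for context; the flat binary-mask representation is an assumption for the example:

```python
def segmentation_metrics(pred, truth):
    """Pixel accuracy and Dice coefficient for two flat binary masks."""
    assert len(pred) == len(truth)
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    accuracy = (tp + tn) / len(pred)  # fraction of pixels labeled correctly
    dice = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
    return accuracy, dice
```

Pixel accuracy can look very high on images dominated by background, which is why Dice (overlap-focused) is often reported alongside it.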
Affiliation(s)
- Marriam Nawaz
- Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
- Department of Software Engineering, University of Engineering and Technology, Taxila, Pakistan
- Tahira Nazir
- Department of Computing, Riphah International University, Islamabad, Pakistan
- Momina Masood
- Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
- Farooq Ali
- Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
- Usman Tariq
- College of Computer Engineering and Science, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Naveera Sahar
- Department of Computer Science, University of Wah, Wah Cantt, Pakistan
|
13
|
MSeg-Net: A Melanoma Mole Segmentation Network Using CornerNet and Fuzzy K-Means Clustering. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:7502504. [PMID: 36276999 PMCID: PMC9586776 DOI: 10.1155/2022/7502504] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/18/2022] [Accepted: 09/17/2022] [Indexed: 11/18/2022]
Abstract
Melanoma is a dangerous form of skin cancer that can result in death at an advanced stage. Researchers have attempted to develop automated systems for the timely recognition of this deadly disease. However, reliable and precise identification of melanoma moles is a tedious and complex task, as there exist huge differences in the mass, structure, and color of skin lesions. Additionally, the presence of noise, blurring, and chrominance changes in the suspected images further increases the complexity of the detection procedure. In the proposed work, we try to overcome the limitations of existing work by presenting a deep learning (DL) model. Specifically, after accomplishing the preprocessing task, we have utilized an object detection approach, the CornerNet model, to detect melanoma lesions. The localized moles are then passed as input to the fuzzy K-means (FLM) clustering approach to perform the segmentation task. To assess the segmentation power of the proposed approach, two standard databases, ISIC-2017 and ISIC-2018, are employed. Extensive experimentation demonstrates the robustness of the proposed approach through both numeric and pictorial results. The proposed approach is capable of detecting and segmenting moles of arbitrary shapes and orientations, and can tackle the presence of noise, blurring, and brightness variations as well. We have attained segmentation accuracy values of 99.32% and 99.63% over the ISIC-2017 and ISIC-2018 databases, respectively, which clearly depicts the effectiveness of our model for melanoma mole segmentation.
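The fuzzy clustering step applied to the CornerNet-localized moles follows the standard fuzzy c/k-means alternation between soft memberships and fuzzily weighted centers. A 1-D sketch under stated assumptions (deterministic initialization, toy data, fuzzifier m = 2; not the authors' code):

```python
def fuzzy_cmeans(points, c=2, m=2.0, iters=50):
    """Fuzzy c-means on 1-D data: soft memberships u[i][j] in [0, 1]
    (cluster i, point j) and cluster centers, updated alternately."""
    lo, hi = min(points), max(points)
    # Deterministic init: spread the c centers across the data range.
    centers = [lo + (i + 0.5) * (hi - lo) / c for i in range(c)]
    u = [[0.0] * len(points) for _ in range(c)]
    for _ in range(iters):
        # Membership update: inverse-distance weights, exponent 2/(m-1).
        for j, x in enumerate(points):
            d = [abs(x - ci) or 1e-12 for ci in centers]  # guard exact hits
            for i in range(c):
                u[i][j] = 1.0 / sum((d[i] / dk) ** (2 / (m - 1)) for dk in d)
        # Center update: fuzzily weighted mean of all points.
        for i in range(c):
            w = [u[i][j] ** m for j in range(len(points))]
            centers[i] = sum(wj * xj for wj, xj in zip(w, points)) / sum(w)
    return centers, u
```

On pixel intensities of a cropped mole, the higher-membership cluster would serve as the lesion mask; here the data are toy scalars standing in for those intensities.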
|
14
|
Breast Cancer Pathological Image Classification Based on the Multiscale CNN Squeeze Model. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:7075408. [PMID: 36072731 PMCID: PMC9444358 DOI: 10.1155/2022/7075408] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Accepted: 08/01/2022] [Indexed: 11/29/2022]
Abstract
The use of an automatic histopathological image identification system is essential for expediting diagnoses and lowering error rates. Although it is of enormous clinical importance, computerized breast cancer multiclassification using histological images has rarely been investigated. A deep learning-based classification strategy is suggested to solve the challenge of automated categorization of breast cancer pathology images. The channel recalibration model is an attention model that acts on the feature channels: the learned channel weights can suppress superfluous features, and recalibration is necessary to increase classification accuracy. To improve the channel recalibration results, a multiscale channel recalibration model is provided and the msSE-ResNet convolutional neural network is built. The multiscale features flow through the network's final pooling layer, and the channel weights obtained at different scales are fused and used as input to the next channel recalibration model, which improves the recalibration results. The experimental findings reveal that the spatial recalibration model, though suited to semantic segmentation of brain MRI images, fares poorly on the task of classifying breast cancer pathology images. The experiment is conducted on the public BreakHis dataset. According to the experimental data, the network performs benign/malignant breast pathology image classification on images collected at various magnifications with a classification accuracy of 88.87%, and is more robust to variation in the pathological images. Experiments at various magnifications show that msSE-ResNet34 performs well when used to classify pathological images across magnifications.
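A single-scale squeeze-and-excitation gate — the building block that the multiscale model recalibrates at several pooling scales — can be sketched as follows. The toy weights `w1`/`w2` and the nested-list tensor layout are assumptions for the example, not the paper's trained parameters:

```python
import math

def squeeze_excite(features, w1, w2):
    """SE-style channel recalibration on a C x H x W feature stack:
    squeeze each channel to its global average, pass through a tiny
    two-layer gate, and rescale the channel by its weight in (0, 1)."""
    # Squeeze: one scalar per channel (global average pooling).
    z = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in features]
    # Excite: FC -> ReLU -> FC -> sigmoid.
    h = [max(0.0, sum(wi * zi for wi, zi in zip(row, z))) for row in w1]
    s = [1 / (1 + math.exp(-sum(wi * hi for wi, hi in zip(row, h)))) for row in w2]
    # Recalibrate: scale every pixel of channel c by its gate s[c].
    return [[[v * s[c] for v in row] for row in ch] for c, ch in enumerate(features)]
```

The multiscale variant in the paper computes such channel weights at several pooling scales and fuses them before rescaling, rather than using one gate as here.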
|
15
|
Review on Machine Learning Techniques for Medical Data Classification and Disease Diagnosis. REGENERATIVE ENGINEERING AND TRANSLATIONAL MEDICINE 2022. [DOI: 10.1007/s40883-022-00273-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
16
|
Skin lesion classification system using a K-nearest neighbor algorithm. Vis Comput Ind Biomed Art 2022; 5:7. [PMID: 35229199 PMCID: PMC8885942 DOI: 10.1186/s42492-022-00103-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2021] [Accepted: 01/23/2022] [Indexed: 11/10/2022] Open
Abstract
One of the most critical steps in medical care is the proper diagnosis of the disease. Dermatology is one of the most volatile and challenging fields in terms of diagnosis. Dermatologists often require further testing, review of the patient's history, and other data to ensure a proper diagnosis. Therefore, finding a method that can quickly deliver a trustworthy diagnosis is essential. Several machine learning approaches have been developed over the years to facilitate diagnosis. However, the developed systems lack certain properties, such as high accuracy. This study proposes a system developed in MATLAB that can identify skin lesions and classify them as normal or malignant. The classification is performed by implementing the K-nearest neighbor (KNN) approach to differentiate between normal skin and malignant skin lesions that imply pathology. KNN is used because it is time efficient and promises highly accurate results. The accuracy of the system reached 98% in classifying skin lesions.
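The KNN decision rule the system relies on reduces to a majority vote among the k nearest labeled feature vectors. A minimal sketch with toy feature vectors and labels (not the study's MATLAB pipeline):

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    labeled neighbors under Euclidean distance."""
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # Sort the labeled examples by distance to the query, keep the k closest.
    neighbors = sorted(train, key=lambda item: dist(item[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```

KNN has no training phase beyond storing the examples, which is part of why it is time efficient for small datasets, as the abstract notes.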
|