1
Chowa SS, Azam S, Montaha S, Bhuiyan MRI, Jonkman M. Improving the Automated Diagnosis of Breast Cancer with Mesh Reconstruction of Ultrasound Images Incorporating 3D Mesh Features and a Graph Attention Network. J Imaging Inform Med 2024; 37:1067-1085. [PMID: 38361007] [DOI: 10.1007/s10278-024-00983-5]
Abstract
This study proposes a novel approach for classifying breast tumors in ultrasound images as benign or malignant by converting the region of interest (ROI) of a 2D ultrasound image into a 3D representation using the Point-E system, allowing for in-depth analysis of underlying characteristics. Instead of relying solely on 2D imaging features, this method extracts 3D mesh features that describe tumor patterns more precisely. Ten informative and medically relevant mesh features are extracted and assessed with two feature selection techniques. Additionally, a feature pattern analysis is conducted to determine each feature's significance. A feature table with dimensions of 445 × 12 is generated and a graph is constructed, considering the rows as nodes and the relationships among the nodes as edges. The Spearman correlation coefficient method is employed to identify edges between strongly connected nodes (with a correlation score greater than or equal to 0.7), resulting in a graph containing 56,054 edges and 445 nodes. A graph attention network (GAT) is proposed for the classification task, and the model is optimized with an ablation study, resulting in a highest accuracy of 99.34%. The performance of the proposed model is compared with ten machine learning (ML) models and a one-dimensional convolutional neural network, whose test accuracies range from 73% to 91%. Our novel 3D mesh-based approach, coupled with the GAT, yields promising performance for breast tumor classification, outperforming traditional models, and has the potential to reduce the time and effort of radiologists by providing a reliable diagnostic system.
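The edge-construction step described above (rows of the 445 × 12 feature table as nodes, an edge whenever the Spearman correlation between two rows is at least 0.7) can be sketched as follows. The function names and the toy feature table are illustrative, not taken from the paper:

```python
from math import sqrt

def rankdata(values):
    """Assign 1-based ranks, averaging over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sqrt(sum((a - mx) ** 2 for a in rx))
    sy = sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

def build_feature_graph(table, threshold=0.7):
    """Rows are nodes; connect i and j when their Spearman score >= threshold."""
    n = len(table)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if spearman(table[i], table[j]) >= threshold]
```

On the paper's 445-row table this yields the undirected edge list fed to the GAT; here only the thresholding logic is shown.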
Affiliation(s)
- Sadia Sultana Chowa
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Sami Azam
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Sidratul Montaha
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Md Rahad Islam Bhuiyan
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Mirjam Jonkman
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
2
Saikia S, Si T, Deb D, Bora K, Mallik S, Maulik U, Zhao Z. Lesion detection in women breast's dynamic contrast-enhanced magnetic resonance imaging using deep learning. Sci Rep 2023; 13:22555. [PMID: 38110462] [PMCID: PMC10728155] [DOI: 10.1038/s41598-023-48553-z]
Abstract
Breast cancer is one of the most common cancers in women and the second leading cause of cancer death in women after lung cancer. Recent technological advances in breast cancer treatment offer hope to millions of women in the world. Segmentation of breast Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) is one of the necessary tasks in the diagnosis and detection of breast cancer. Currently, a popular deep learning model, U-Net, is extensively used in biomedical image segmentation. This article aims to advance the state of the art and conduct a more in-depth analysis with a focus on the use of various U-Net models for lesion detection in women's breast DCE-MRI. We perform an empirical study of the effectiveness and efficiency of U-Net and its derived deep learning models, including ResUNet, Dense UNet, DUNet, Attention U-Net, UNet++, MultiResUNet, RAUNet, Inception U-Net, and U-Net GAN, for lesion detection in breast DCE-MRI. All the models are applied to a benchmark of 100 sagittal T2-weighted fat-suppressed DCE-MRI slices from 20 patients, and their performance is compared. A comparative study has also been conducted with V-Net, W-Net, and DeepLabV3+. The non-parametric Wilcoxon signed-rank test is used to analyze the significance of the quantitative results. Furthermore, Multi-Criteria Decision Analysis (MCDA) is used to evaluate overall performance focused on accuracy, precision, sensitivity, F1-score, specificity, geometric mean, DSC, and false-positive rate. The RAUNet segmentation model achieved a high accuracy of 99.76%, sensitivity of 85.04%, precision of 90.21%, and Dice Similarity Coefficient (DSC) of 85.04%, whereas ResNet achieved 99.62% accuracy, 62.26% sensitivity, 99.56% precision, and 72.86% DSC. ResUNet is found to be the most effective model based on MCDA. On the other hand, U-Net GAN takes the least computational time to perform the segmentation task. Both quantitative and qualitative results demonstrate that the ResNet model performs better than other models in segmenting the images and detecting lesions, though the computational time needed to achieve the objectives varies.
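The overlap metrics reported above (DSC, sensitivity, precision) are computed by comparing predicted and ground-truth lesion masks. A minimal sketch on binary masks represented as sets of pixel coordinates, with illustrative data:

```python
def overlap_metrics(pred, truth):
    """Dice similarity coefficient, sensitivity (recall), and precision
    from two sets of foreground pixel coordinates."""
    tp = len(pred & truth)   # correctly detected lesion pixels
    fp = len(pred - truth)   # false alarms
    fn = len(truth - pred)   # missed lesion pixels
    dice = 2 * tp / (2 * tp + fp + fn)   # equals 2|A∩B| / (|A| + |B|)
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    return dice, sensitivity, precision
```

The same counts also yield specificity and the false-positive rate once the background pixel total is known.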
Affiliation(s)
- Sudarshan Saikia
- Information Technology Department, Oil India Limited, Duliajan, Assam, 786602, India
- Tapas Si
- AI Innovation Lab, Department of Computer Science & Engineering, University of Engineering & Management, Jaipur, GURUKUL, Jaipur, Rajasthan, 303807, India
- Darpan Deb
- Department of Computer Application, Christ University, Bengaluru, 560029, India
- Kangkana Bora
- Department of Computer Science and Information Technology, Cotton University, Guwahati, Assam, 781001, India
- Saurav Mallik
- Department of Environmental Health, Harvard T. H. Chan School of Public Health, Boston, MA, 02115, USA
- Ujjwal Maulik
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, India
- Zhongming Zhao
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, 77030, USA
3
Boudouh SS, Bouakkaz M. Enhanced breast mass mammography classification approach based on pre-processing and hybridization of transfer learning models. J Cancer Res Clin Oncol 2023; 149:14549-14564. [PMID: 37567987] [DOI: 10.1007/s00432-023-05249-1]
Abstract
BACKGROUND AND OBJECTIVE Breast cancer is now the second most prevalent cause of death among women, surpassing heart disease. Mammography images must accurately identify breast masses to diagnose breast cancer early, which can significantly increase a patient's survival chances. However, due to the diversity of breast masses and the complexity of their microenvironment, this remains a significant challenge. Hence, establishing a reliable and effective breast mass detection approach to increase patient survival is still an open research question. Although several machine and deep learning-based approaches have been proposed to address these issues, their pre-processing strategies and network architectures were insufficient for breast mass detection in mammogram scans, which directly influences the accuracy of the proposed models. METHODS Aiming to resolve these issues, we propose a two-stage classification method for breast mass mammography scans. First, we introduce a pre-processing stage divided into three sub-strategies, which include several filters for Region Of Interest (ROI) extraction, noise removal, and image enhancement. Second, we propose a classification stage based on transfer learning techniques for feature extraction, with global pooling for classification instead of standard machine learning algorithms or fully connected layers. Moreover, instead of the traditional fine-tuned feature extraction phase with a single network, we propose a hybrid model that concatenates two recent pre-trained CNNs to assist the feature extraction phase. RESULTS Using the CBIS-DDSM dataset, we managed to increase the accuracy, sensitivity, and specificity, reaching a highest accuracy of 98.1% using the median filter for noise removal. The Gaussian filter trial followed with 96% accuracy, while the Wiener filter attained the lowest accuracy of 94.13%. Moreover, global average pooling proved better suited as a classifier in our case than global max pooling. CONCLUSION The experimental findings demonstrate that the suggested breast mass detection strategy for mammography can outperform the top-ranked methods currently in use in terms of classification performance.
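The noise-removal comparison above (median vs. Gaussian vs. Wiener filters) rests on simple neighborhood operations. A minimal 3 × 3 median filter sketch in plain Python; the real pipeline would of course operate on full mammograms rather than a toy grid:

```python
from statistics import median

def median_filter_3x3(img):
    """Replace each interior pixel by the median of its 3x3 neighborhood;
    border pixels are left unchanged for simplicity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = median(window)
    return out
```

The median's robustness to impulse (salt-and-pepper) noise, while preserving edges better than linear smoothing, is presumably why it wins over the Gaussian and Wiener trials in the paper's results.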
Affiliation(s)
- Mustapha Bouakkaz
- LIM Laboratory, University of Laghouat Amar Telidji, Laghouat, Algeria
4
Pan QH, Zhang ZP, Yan LY, Jia NR, Ren XY, Wu BK, Hao YB, Li ZF. Association between ultrasound BI-RADS signs and molecular typing of invasive breast cancer. Front Oncol 2023; 13:1110796. [PMID: 37265799] [PMCID: PMC10230953] [DOI: 10.3389/fonc.2023.1110796]
Abstract
Objective To explore the correlation between ultrasound images and the molecular typing of invasive breast cancer, and thereby analyze the predictive value of preoperative ultrasound for invasive breast cancer. Methods A total of 302 invasive breast cancer patients were enrolled at Heping Hospital Affiliated to Changzhi Medical College, Shanxi, China, from 2020 to 2022. All patients underwent ultrasonic and pathological examination, and all pathological tissues received molecular typing with immunohistochemical (IHC) staining. The relevance between the different molecular typings and the ultrasonic images and pathology was evaluated. Results Univariate analysis: among the four molecular typings, there were significant differences in tumor size, shape, margin, lymph nodes, and histological grade (P<0.05). 1. Size: Luminal A tumors were mostly smaller (69.4%), while Basal-like tumors were mostly larger (60.9%). 2. Shape: the Basal-like type is more likely to show a regular shape (45.7%). 3. Margin: Luminal A and Luminal B are mostly not circumscribed (79.6%, 74.8%), while the Basal-like type shows circumscribed margins (52.2%). 4. Lymph nodes: the Luminal A type tends to be normal (87.8%), while the Luminal B, Her-2+, and Basal-like types tend to be abnormal (35.6%, 36.4%, and 39.1%). There was no significant difference in mass orientation, echo pattern, posterior echo, or calcification (P>0.05). Multivariate analysis: Basal-like breast cancer mostly showed a regular shape, circumscribed margin, and abnormal lymph nodes (P<0.05). Conclusion There are differences in the ultrasound manifestations of the different molecular typings of breast cancer, and ultrasound features can be used as a potential imaging index to provide important information for the precise diagnosis and treatment of breast cancer.
Affiliation(s)
- Qiao-Hong Pan
- Department of Ultrasound, Heping Hospital Affiliated to Changzhi Medical College, Changzhi, China
- Zheng-Pin Zhang
- Department of Ultrasound, Heping Hospital Affiliated to Changzhi Medical College, Changzhi, China
- Liu-Yi Yan
- Department of Ultrasound, Heping Hospital Affiliated to Changzhi Medical College, Changzhi, China
- Ning-Rui Jia
- Department of Ultrasound, Heping Hospital Affiliated to Changzhi Medical College, Changzhi, China
- Xin-Yu Ren
- School of Public Health, Shanxi Medical University, Taiyuan, China
- Bei-Ke Wu
- School of Public Health, Shanxi Medical University, Taiyuan, China
- Yu-Bing Hao
- School of Public Health, Shanxi Medical University, Taiyuan, China
- Zhi-Fang Li
- Department of Preventive Medicine, Changzhi Medical College, Changzhi, China
5
Neural Network in the Analysis of the MR Signal as an Image Segmentation Tool for the Determination of T1 and T2 Relaxation Times with Application to Cancer Cell Culture. Int J Mol Sci 2023; 24:1554. [PMID: 36675075] [PMCID: PMC9861169] [DOI: 10.3390/ijms24021554]
Abstract
Artificial intelligence has been entering medical research. Today, manufacturers of diagnostic instruments are including algorithms based on neural networks, which are quickly entering all branches of medical research and beyond. Analyzing the PubMed database over the last 5 years (2017 to 2021), we see that the number of responses to the query "neural network in medicine" exceeds 10,500 papers. Deep learning algorithms are of particular importance in oncology. This paper presents the use of neural networks to analyze the magnetic resonance imaging (MRI) images used for MRI relaxometry of the samples. Relaxometry is becoming an increasingly common tool in diagnostics. The aim of this work was to optimize the processing time of DICOM images by using a neural network implemented in the MATLAB package by The MathWorks with the patternnet function. The application of a neural network helps to eliminate spaces in which there are no objects with characteristics matching the phenomenon of longitudinal or transverse MRI relaxation. The result of this work is the elimination of aerated spaces in MRI images. The whole algorithm was implemented as an application in the MATLAB package.
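Relaxometry of the kind described above typically fits a mono-exponential decay S(t) = S0·exp(-t/T2) to the signal in each retained pixel. A log-linear least-squares sketch of that fitting step (illustrative only; the paper's MATLAB patternnet pipeline is not reproduced here, and the echo times are made up):

```python
from math import log, exp

def fit_t2(times_ms, signals):
    """Fit S(t) = S0 * exp(-t / T2) by linear regression on ln(S):
    ln S = ln S0 - t / T2, so the fitted slope is -1/T2."""
    n = len(times_ms)
    ys = [log(s) for s in signals]
    mt = sum(times_ms) / n
    my = sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(times_ms, ys))
             / sum((t - mt) ** 2 for t in times_ms))
    t2 = -1.0 / slope
    s0 = exp(my - slope * mt)
    return s0, t2
```

Masking out aerated (signal-free) regions first, as the neural network does in the paper, avoids taking logarithms of noise-only pixels.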
6
Wang J, Zheng Y, Ma J, Li X, Wang C, Gee J, Wang H, Huang W. Information bottleneck-based interpretable multitask network for breast cancer classification and segmentation. Med Image Anal 2023; 83:102687. [PMID: 36436356] [DOI: 10.1016/j.media.2022.102687]
Abstract
Breast cancer is one of the most common causes of death among women worldwide. Early signs of breast cancer can be an abnormality depicted on breast images (e.g., mammography or breast ultrasonography). However, reliable interpretation of breast images requires intensive labor and physicians with extensive experience. Deep learning is evolving breast imaging diagnosis by introducing a second opinion for physicians. However, most deep learning-based breast cancer analysis algorithms lack interpretability because of their black box nature, which means that domain experts cannot understand why the algorithms predict a label. In addition, most deep learning algorithms are formulated as single-task models that ignore correlations between different tasks (e.g., tumor classification and segmentation). In this paper, we propose an interpretable multitask information bottleneck network (MIB-Net) to accomplish simultaneous breast tumor classification and segmentation. MIB-Net maximizes the mutual information between the latent representations and class labels while minimizing the information shared by the latent representations and the inputs. In contrast to existing models, our MIB-Net generates a contribution score map that offers an interpretable aid for physicians to understand the model's decision-making process. In addition, MIB-Net implements multitask learning and further proposes a dual prior knowledge guidance strategy to enhance deep task correlation. Our evaluations are carried out on three breast image datasets in different modalities. Our results show that the proposed framework is not only able to help physicians better understand the model's decisions but also improves breast tumor classification and segmentation accuracy over representative state-of-the-art models. Our code is available at https://github.com/jxw0810/MIB-Net.
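A common variational instantiation of the bottleneck objective sketched above (not necessarily the exact loss used in MIB-Net) replaces the intractable I(Z; X) term with the KL divergence between the encoder's latent posterior N(μ, σ²) and a standard normal prior, weighted by a coefficient β against the classification fit term:

```python
from math import log

def vib_regularizer(mus, sigmas):
    """KL( N(mu, sigma^2) || N(0, 1) ) summed over latent dimensions:
    0.5 * (sigma^2 + mu^2 - 1 - ln sigma^2) per dimension."""
    return sum(0.5 * (s * s + m * m - 1.0 - log(s * s))
               for m, s in zip(mus, sigmas))

def ib_loss(class_nll, mus, sigmas, beta=1e-3):
    """Information bottleneck surrogate: classification negative
    log-likelihood (fit) plus beta-weighted compression term."""
    return class_nll + beta * vib_regularizer(mus, sigmas)
```

Minimizing the first term maximizes a lower bound on I(Z; Y); the KL term upper-bounds I(Z; X), so β trades prediction against compression.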
Affiliation(s)
- Junxia Wang
- School of Information Science and Engineering, Shandong Normal University, No. 1 Daxue Road, Changqing District, Jinan 250358, China
- Yuanjie Zheng
- School of Information Science and Engineering, Shandong Normal University, No. 1 Daxue Road, Changqing District, Jinan 250358, China; Shanghai AI Laboratory, No. 701 Yunjin Road, Xuhui District, Shanghai, 200433, China
- Jun Ma
- School of Cyber Science and Engineering, Southeast University, No. 2 Southeast University Road, Jiangning District, Nanjing, 211189, China
- Xinmeng Li
- School of Information Science and Engineering, Shandong Normal University, No. 1 Daxue Road, Changqing District, Jinan 250358, China
- Chongjing Wang
- China Academy of Information and Communications Technology, No. 52 Huayuan North Road, Haidian District, Beijing 100191, China
- James Gee
- Penn Image Computing and Science Laboratory, University of Pennsylvania, PA 19104, USA
- Haipeng Wang
- Institute of Information Fusion, Naval Aviation University, Erma Road Yantai Shandong, Yantai 264001, China
- Wenhui Huang
- School of Information Science and Engineering, Shandong Normal University, No. 1 Daxue Road, Changqing District, Jinan 250358, China
7
Hsu SY, Wang CY, Kao YK, Liu KY, Lin MC, Yeh LR, Wang YM, Chen CI, Kao FC. Using Deep Neural Network Approach for Multiple-Class Assessment of Digital Mammography. Healthcare (Basel) 2022; 10:2382. [PMID: 36553906] [PMCID: PMC9778490] [DOI: 10.3390/healthcare10122382]
Abstract
According to statistics from the Health Promotion Administration in the Ministry of Health and Welfare in Taiwan, over ten thousand women are diagnosed with breast cancer every year. Mammography is widely used to detect breast cancer. However, it is limited by the operator's technique, the cooperation of the subjects, and the subjective interpretation of the physician, which results in inconsistent identification. Therefore, this study explores the use of a deep neural network algorithm for the classification of mammography images. In the experimental design, a retrospective study was used to collect imaging data from actual clinical cases. The mammography images were collected and classified according to the Breast Imaging Reporting and Data System (BI-RADS). In terms of model building, a fully convolutional dense connection network (FC-DCN) is used as the network backbone. All the images went through image preprocessing, a data augmentation method, and transfer learning technology to build a mammography image classification model. The research results show that the model's accuracy, sensitivity, and specificity were 86.37%, 100%, and 72.73%, respectively. Based on the FC-DCN model framework, the number of training parameters can be effectively reduced while successfully obtaining a reasonable image classification model for mammography.
Affiliation(s)
- Shih-Yen Hsu
- Department of Information Engineering, I-Shou University, Kaohsiung City 84001, Taiwan
- Chi-Yuan Wang
- Department of Medical Imaging and Radiological Science, I-Shou University, Kaohsiung City 82445, Taiwan
- Yi-Kai Kao
- Division of Colorectal Surgery, Department of Surgery, E-DA Hospital, Kaohsiung City 82445, Taiwan
- Kuo-Ying Liu
- Department of Radiology, E-DA Cancer Hospital, I-Shou University, Kaohsiung City 82445, Taiwan
- Ming-Chia Lin
- Department of Nuclear Medicine, E-DA Hospital, I-Shou University, Kaohsiung City 82445, Taiwan
- Li-Ren Yeh
- Department of Anesthesiology, E-DA Cancer Hospital, I-Shou University, Kaohsiung City 82445, Taiwan
- Department of Medical Imaging and Radiology, Shu-Zen College of Medicine and Management, Kaohsiung City 82144, Taiwan
- Yi-Ming Wang
- Department of Information Engineering, I-Shou University, Kaohsiung City 84001, Taiwan
- Department of Critical Care Medicine, E-DA Hospital, I-Shou University, Kaohsiung City 82445, Taiwan
- Chih-I Chen
- Division of Colon and Rectal Surgery, Department of Surgery, E-DA Hospital, Kaohsiung City 82445, Taiwan
- Division of General Medicine Surgery, Department of Surgery, E-DA Hospital, Kaohsiung City 82445, Taiwan
- School of Medicine, College of Medicine, I-Shou University, Kaohsiung City 82445, Taiwan
- The School of Chinese Medicine for Post Baccalaureate, I-Shou University, Kaohsiung City 82445, Taiwan
- Correspondence: (C.-I.C.); (F.-C.K.)
- Feng-Chen Kao
- School of Medicine, College of Medicine, I-Shou University, Kaohsiung City 82445, Taiwan
- Department of Orthopedics, E-DA Hospital, Kaohsiung City 82445, Taiwan
- Department of Orthopedics, Dachang Hospital, Kaohsiung City 82445, Taiwan
- Correspondence: (C.-I.C.); (F.-C.K.)
8
Guo F, Li Q, Gao F, Huang C, Zhang F, Xu J, Xu Y, Li Y, Sun J, Jiang L. Evaluation of the peritumoral features using radiomics and deep learning technology in non-spiculated and noncalcified masses of the breast on mammography. Front Oncol 2022; 12:1026552. [PMID: 36479079] [PMCID: PMC9721450] [DOI: 10.3389/fonc.2022.1026552]
Abstract
OBJECTIVE To assess the significance of peritumoral features, based on deep learning, in classifying non-spiculated and noncalcified masses (NSNCM) on mammography. METHODS We retrospectively screened the digital mammography data of 2254 patients who underwent surgery for breast lesions at Harbin Medical University Cancer Hospital from January to December 2018. Deep learning and radiomics models were constructed. The classification efficacy at the ROI and patient levels was compared in terms of AUC, accuracy, sensitivity, and specificity. Stratified analysis was conducted to analyze the influence of primary factors on the AUC of the deep learning model. Image filters and class activation maps (CAMs) were used to visualize the radiomics and deep features. RESULTS Of the 1298 included patients, 771 (59.4%) were benign and 527 (40.6%) were malignant. The best model was the combined deep learning model (2 mm), whose AUC was 0.884 (P < 0.05); in particular, the AUC for breast composition B reached 0.941. All the deep learning models were superior to the radiomics models (P < 0.05), and the CAM showed a high expression of signals around the tumor in the deep learning model. The deep learning model achieved a higher AUC for large tumor size, age >60 years, and breast composition type B (P < 0.05). CONCLUSION Combining the tumoral and peritumoral features resulted in better identification of malignant NSNCM on mammography, and the performance of the deep learning model exceeded that of the radiomics model. Age, tumor size, and breast composition type are essential for diagnosis.
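A peritumoral band such as the 2 mm margin above is typically obtained by dilating the tumor mask and subtracting the original mask. A sketch using Chebyshev-distance dilation on pixel coordinates; note that the pixel radius corresponding to 2 mm depends on the detector resolution and is an assumption here, not a value from the paper:

```python
def peritumoral_ring(mask, radius):
    """Dilate the foreground mask by `radius` pixels (Chebyshev metric,
    i.e. a square structuring element) and subtract the mask itself,
    leaving the surrounding band of pixels."""
    dilated = {(y + dy, x + dx)
               for (y, x) in mask
               for dy in range(-radius, radius + 1)
               for dx in range(-radius, radius + 1)}
    return dilated - mask
```

Features computed over this ring, alongside those from the mask itself, give the "tumoral plus peritumoral" inputs the study compares against tumor-only models.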
Affiliation(s)
- Fei Guo
- Department of Radiology, Harbin Medical University Cancer Hospital, Harbin, Heilongjiang, China
- Qiyang Li
- Department of Radiology, Harbin Medical University Cancer Hospital, Harbin, Heilongjiang, China
- Fei Gao
- Deepwise Artificial Intelligence Lab, Beijing Deepwise and League of PHD Technology Co., Ltd, Beijing, China
- Chencui Huang
- Deepwise Artificial Intelligence Lab, Beijing Deepwise and League of PHD Technology Co., Ltd, Beijing, China
- Fandong Zhang
- Deepwise Artificial Intelligence Lab, Beijing Deepwise and League of PHD Technology Co., Ltd, Beijing, China
- Jingxu Xu
- Deepwise Artificial Intelligence Lab, Beijing Deepwise and League of PHD Technology Co., Ltd, Beijing, China
- Ye Xu
- Department of Radiology, Harbin Medical University Cancer Hospital, Harbin, Heilongjiang, China
- Yuanzhou Li
- Department of Radiology, Harbin Medical University Cancer Hospital, Harbin, Heilongjiang, China
- Jianghong Sun
- Department of Radiology, Harbin Medical University Cancer Hospital, Harbin, Heilongjiang, China
- Li Jiang
- Department of Oncology, Harbin Medical University Cancer Hospital, Harbin, Heilongjiang, China
9
Madani M, Behzadi MM, Nabavi S. The Role of Deep Learning in Advancing Breast Cancer Detection Using Different Imaging Modalities: A Systematic Review. Cancers (Basel) 2022; 14:5334. [PMID: 36358753] [PMCID: PMC9655692] [DOI: 10.3390/cancers14215334]
Abstract
Breast cancer is among the most common and fatal diseases for women, and no permanent treatment has been discovered. Thus, early detection is a crucial step to control and cure breast cancer and can save the lives of millions of women. For example, in 2020, more than 65% of breast cancer patients were diagnosed at an early stage of cancer, of whom all survived. Although early detection is the most effective approach for cancer treatment, breast cancer screening conducted by radiologists is very expensive and time-consuming. More importantly, conventional methods of analyzing breast cancer images suffer from high false-detection rates. Different breast cancer imaging modalities are used to extract and analyze the key features affecting the diagnosis and treatment of breast cancer. These imaging modalities can be divided into subgroups such as mammograms, ultrasound, magnetic resonance imaging, histopathological images, or any combination of them. Radiologists or pathologists analyze images produced by these methods manually, which leads to an increased risk of wrong decisions in cancer detection. Thus, new automatic methods to analyze all kinds of breast screening images and assist radiologists in interpreting them are required. Recently, artificial intelligence (AI) has been widely utilized to automatically improve the early detection and treatment of different types of cancer, specifically breast cancer, thereby enhancing patients' chances of survival. Advances in AI algorithms, such as deep learning, and the availability of datasets obtained from various imaging modalities have opened an opportunity to surpass the limitations of current breast cancer analysis methods. In this article, we first review breast cancer imaging modalities and their strengths and limitations. Then, we explore and summarize the most recent studies that employed AI in breast cancer detection using various breast imaging modalities. In addition, we report available datasets for the breast cancer imaging modalities, which are important in developing AI-based algorithms and training deep learning models. In conclusion, this review tries to provide a comprehensive resource to help researchers working in breast cancer image analysis.
Affiliation(s)
- Mohammad Madani
- Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- Mohammad Mahdi Behzadi
- Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- Sheida Nabavi
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
10
Breast Lesions Screening of Mammographic Images with 2D Spatial and 1D Convolutional Neural Network-Based Classifier. Appl Sci (Basel) 2022. [DOI: 10.3390/app12157516]
Abstract
Mammography is a first-line imaging examination that employs low-dose X-rays to rapidly screen for breast tumors, cysts, and calcifications. This study proposes a two-dimensional (2D) spatial and one-dimensional (1D) convolutional neural network (CNN) for the early detection of possible breast lesions (tumors), to reduce patient mortality rates, and develops a classifier for use on regions of interest in mammographic images where breast lesions (tumors) are likely to occur. The 2D spatial fractional-order convolutional processes are used to strengthen and sharpen the lesions' features, denoise, and improve the feature extraction processes. Then, an automatic extraction task is performed using a specific bounding box to sequentially pick out feature patterns from each mammographic image. The multi-round 1D kernel convolutional processes can also strengthen and denoise 1D feature signals and assist in identifying the differentiation levels of normal and abnormal signals. In the classification layer, a gray relational analysis-based classifier is used to screen the possible lesions into normal (Nor), benign (B), and malignant (M) classes. This classifier design reduces the training time, computational complexity, and computational time for clinical applications, and achieves a higher accuracy rate for clinical/medical purposes. Mammographic images were selected from the Mammographic Image Analysis Society image database for experimental tests on breast lesion screening, and K-fold cross-validations were performed. The experimental results showed promising performance in quantifying the classifier's outcome for medical evaluation in terms of recall (%), precision (%), accuracy (%), and F1 score.
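The gray relational analysis-based classification step can be sketched as follows: each class keeps a reference feature vector, and a query is assigned to the class with the highest gray relational grade. The distinguishing coefficient ζ = 0.5 is the conventional default, and the reference vectors below are illustrative, not taken from the paper:

```python
def gray_relational_grade(query, reference, zeta=0.5, d_min=None, d_max=None):
    """Mean gray relational coefficient between a query sequence and a
    reference sequence: (d_min + zeta*d_max) / (d_k + zeta*d_max)."""
    deltas = [abs(q - r) for q, r in zip(query, reference)]
    lo = min(deltas) if d_min is None else d_min
    hi = max(deltas) if d_max is None else d_max
    return sum((lo + zeta * hi) / (d + zeta * hi) for d in deltas) / len(deltas)

def classify(query, references, zeta=0.5):
    """Pick the class whose reference vector yields the highest grade;
    d_min/d_max are shared across all classes, as in standard GRA."""
    all_d = [abs(q - r) for ref in references.values()
             for q, r in zip(query, ref)]
    lo, hi = min(all_d), max(all_d)
    grades = {label: gray_relational_grade(query, ref, zeta, lo, hi)
              for label, ref in references.items()}
    return max(grades, key=grades.get), grades
```

Because the grade is a closed-form comparison against stored references, training amounts to choosing the reference vectors, which is consistent with the low training time and computational complexity claimed above.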
11
Shape-Based Breast Lesion Classification Using Digital Tomosynthesis Images: The Role of Explainable Artificial Intelligence. Appl Sci (Basel) 2022. [DOI: 10.3390/app12126230]
Abstract
Computer-aided diagnosis (CAD) systems can help radiologists in numerous medical tasks, including classification and staging of various diseases. The 3D tomosynthesis imaging technique adds value to CAD systems in the diagnosis and classification of breast lesions. Several convolutional neural network (CNN) architectures have been proposed to classify lesion shapes into their respective classes using a similar imaging method. However, in the healthcare domain, not only the black box nature of these CNN models but also morphology-based cancer classification is a concern for clinicians. As a result, this study proposes a mathematically and visually explainable deep-learning-driven multiclass shape-based classification framework for tomosynthesis breast lesion images. The authors exploit eight pretrained CNN architectures for the classification task on previously extracted region-of-interest images containing the lesions. Additionally, the study opens up the black box nature of the deep learning models using two well-known perceptive explainable artificial intelligence (XAI) algorithms, Grad-CAM and LIME. Moreover, two mathematical-structure-based interpretability techniques, i.e., t-SNE and UMAP, are employed to investigate the pretrained models' behavior towards multiclass feature clustering. The experimental results of the classification task validate the applicability of the proposed framework by yielding a mean area under the curve of 98.2%. The explainability study validates the applicability of all employed methods, mainly emphasizing the pros and cons of both the Grad-CAM and LIME methods, which can provide useful insights towards explainable CAD systems.
|
12
|
Tsai KJ, Chou MC, Li HM, Liu ST, Hsu JH, Yeh WC, Hung CM, Yeh CY, Hwang SH. A High-Performance Deep Neural Network Model for BI-RADS Classification of Screening Mammography. SENSORS 2022; 22:s22031160. [PMID: 35161903 PMCID: PMC8838754 DOI: 10.3390/s22031160] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/05/2022] [Revised: 01/27/2022] [Accepted: 01/31/2022] [Indexed: 11/16/2022]
Abstract
Globally, breast cancer ranks first in cancer incidence. Treatment of early-stage breast cancer is highly cost-effective: the five-year survival rate for stage 0-2 breast cancer exceeds 90%. Screening mammography is acknowledged as the most reliable way to diagnose breast cancer at an early stage, and the Taiwanese government urges asymptomatic women aged 45 to 69 to have a screening mammogram every two years. This creates a large workload for radiologists. In light of this, this paper presents a deep neural network (DNN)-based model as an efficient and reliable tool to assist radiologists with mammographic interpretation. For the first time in the literature, mammograms are completely classified into BI-RADS categories 0, 1, 2, 3, 4A, 4B, 4C, and 5. The proposed model was trained using block-based images segmented from a mammogram dataset of our own: a block-based image is applied to the model as input, and a BI-RADS category is predicted as output. The performance of this work is demonstrated by an overall accuracy of 94.22%, an average sensitivity of 95.31%, an average specificity of 99.15%, and an area under the curve (AUC) of 0.9723. When applied to breast cancer screening for Asian women, who are more likely to have dense breasts, this model is expected to give higher accuracy than others in the literature, since it was trained using mammograms taken from Taiwanese women.
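The block-based input scheme described above amounts to tiling a mammogram into fixed-size patches, each classified independently. A minimal sketch with numpy (the block size and image dimensions here are illustrative, not the paper's actual values):

```python
import numpy as np

def split_into_blocks(image: np.ndarray, block: int) -> np.ndarray:
    """Crop a 2D image to a multiple of `block` and return (n, block, block) tiles."""
    h, w = image.shape
    h, w = h - h % block, w - w % block  # drop the ragged border
    return (
        image[:h, :w]
        .reshape(h // block, block, w // block, block)
        .swapaxes(1, 2)          # group the two block axes together
        .reshape(-1, block, block)
    )

mammogram = np.zeros((1024, 800), dtype=np.uint8)  # synthetic placeholder image
blocks = split_into_blocks(mammogram, 224)
print(blocks.shape)  # 4 x 3 grid of 224x224 blocks -> (12, 224, 224)
```

Each resulting block would then be fed to the DNN, which predicts one BI-RADS category per block.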
Affiliation(s)
- Kuen-Jang Tsai
- Department of General Surgery, E-Da Cancer Hospital, Yanchao Dist., Kaohsiung 82445, Taiwan; (K.-J.T.); (C.-M.H.)
- College of Medicine, I-Shou University, Yanchao Dist., Kaohsiung 82445, Taiwan
- Mei-Chun Chou
- Department of Radiology, E-Da Hospital, Yanchao Dist., Kaohsiung 82445, Taiwan; (M.-C.C.); (H.-M.L.); (S.-T.L.); (J.-H.H.)
- Hao-Ming Li
- Department of Radiology, E-Da Hospital, Yanchao Dist., Kaohsiung 82445, Taiwan; (M.-C.C.); (H.-M.L.); (S.-T.L.); (J.-H.H.)
- Shin-Tso Liu
- Department of Radiology, E-Da Hospital, Yanchao Dist., Kaohsiung 82445, Taiwan; (M.-C.C.); (H.-M.L.); (S.-T.L.); (J.-H.H.)
- Jung-Hsiu Hsu
- Department of Radiology, E-Da Hospital, Yanchao Dist., Kaohsiung 82445, Taiwan; (M.-C.C.); (H.-M.L.); (S.-T.L.); (J.-H.H.)
- Wei-Cheng Yeh
- Department of Radiology, E-Da Cancer Hospital, Yanchao Dist., Kaohsiung 82445, Taiwan;
- Chao-Ming Hung
- Department of General Surgery, E-Da Cancer Hospital, Yanchao Dist., Kaohsiung 82445, Taiwan; (K.-J.T.); (C.-M.H.)
- Cheng-Yu Yeh
- Department of Electrical Engineering, National Chin-Yi University of Technology, Taichung 41170, Taiwan
- Correspondence:
- Shaw-Hwa Hwang
- Department of Electrical and Computer Engineering, National Yang Ming Chiao Tung University, Hsinchu 30010, Taiwan;
|
13
|
Wang H, Li Y, Liu S, Yue X. Design Computer-Aided Diagnosis System Based on Chest CT Evaluation of Pulmonary Nodules. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:7729524. [PMID: 35047057 PMCID: PMC8763488 DOI: 10.1155/2022/7729524] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/30/2021] [Accepted: 12/08/2021] [Indexed: 11/17/2022]
Abstract
The diagnosis and treatment of lung cancer remain among the research hotspots in the medical field: early diagnosis and treatment are necessary to improve the survival rate of lung cancer patients and reduce their mortality. Computer-aided diagnosis technology can easily, quickly, and accurately identify lung nodule regions, an imaging feature of early lung cancer, for the clinical diagnosis of lung cancer; it supports quantitative analysis of nodule characteristics, helps distinguish benign from malignant nodules, and provides an objective diagnostic reference standard. This paper studies the ITK and VTK toolkits and builds the system platform with MFC. Based on the workflow doctors follow when diagnosing lung nodules, the system is divided into seven modules, including suspected lung shadow detection, image display, image annotation, and interaction. The system runs through the entire computer-aided lung nodule diagnosis process and records the number of nodules, the number of malignant nodules, and the number of false positives in each set of lung CT images in order to analyze the performance of the auxiliary diagnosis system. A lung region segmentation method is proposed that exploits the obvious differences between the lung parenchyma and the tissues connected to it, as well as the positional relationships and shape characteristics of each tissue in the image. Experiments address inaccurate segmentation of the lung boundary and lung wall, and the depressions caused by noise and pleural nodule adhesion. In the experiments, 8 sets of images from different patients comprised 2316 CT images containing 56 nodules. The system detected 49 nodules and missed 7, a detection rate of 87.5%, and reported 64 false-positive nodules, an average of 8 per image set. This shows that the system is effective for CT images from different devices, pixel pitches, and slice pitches and has high sensitivity, which can provide doctors with good advice.
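The summary metrics quoted in the abstract reduce to simple ratios over the per-set counts. A sketch using the numbers reported above (49 of 56 nodules detected, 64 false positives across 8 image sets):

```python
def detection_rate(detected: int, total: int) -> float:
    """Sensitivity as a percentage: detected nodules / annotated nodules."""
    return 100.0 * detected / total

def false_positives_per_set(false_positives: int, n_sets: int) -> float:
    """Average false-positive nodules reported per CT image set."""
    return false_positives / n_sets

rate = detection_rate(detected=49, total=56)
fp_per_set = false_positives_per_set(false_positives=64, n_sets=8)
print(rate, fp_per_set)  # 87.5 8.0
```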
Affiliation(s)
- Hui Wang
- Department of Radiology, The Second Affiliated Hospital of Harbin Medical University, 150086 Harbin, Heilongjiang, China
- Yanying Li
- Department of Radiology, The Second Affiliated Hospital of Harbin Medical University, 150086 Harbin, Heilongjiang, China
- Shanshan Liu
- Department of Radiology, Weifang Respiratory Disease Hospital, Weifang, 261041 Shandong, China
- Xianwen Yue
- Department of Radiology, Weifang Respiratory Disease Hospital, Weifang, 261041 Shandong, China
|
14
|
Ul Haq A, Li JP, Wali S, Ahmad S, Ali Z, Khan J, Khan A, Ali A. StackBC: Deep learning and transfer learning techniques based stacking approach for accurate Invasive Ductal Carcinoma classification using histology images. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-212240] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Artificial intelligence (AI)-based computer-aided diagnostic (CAD) systems can effectively diagnose critical diseases. AI-based detection of breast cancer (BC) from image data can be more efficient and accurate than professional radiologists. However, existing AI-based BC diagnosis methods suffer from low prediction accuracy and high computation time, and for these reasons medical professionals are not employing them in E-Healthcare to diagnose BC effectively. Effective diagnosis of breast cancer requires incorporating advanced AI-based methods into the diagnostic process. In this work, we propose a deep learning based diagnosis method (StackBC) to detect breast cancer at an early stage for effective treatment and recovery. In particular, we incorporate deep learning models including a convolutional neural network (CNN), long short-term memory (LSTM), and gated recurrent unit (GRU) for the classification of invasive ductal carcinoma (IDC). Additionally, data augmentation and transfer learning techniques are incorporated for dataset balancing and effective model training. To further improve the predictive performance of the model, we use a stacking technique. Among the three base classifiers (CNN, LSTM, GRU), the GRU shows the best individual predictive performance and is selected as the meta classifier to distinguish between non-IDC and IDC breast images. A hold-out method is used, splitting the dataset into 90% for training and 10% for testing, and model evaluation metrics are computed to assess performance. To analyze the efficacy of the model, we use a breast histology image dataset. Our experimental results demonstrate that the proposed StackBC method achieves improved performance, with 99.02% accuracy and a 100% area under the receiver operating characteristic curve (AUC-ROC), compared to state-of-the-art methods. Due to its high performance, we recommend the proposed method for early recognition of breast cancer in E-Healthcare.
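The stacking structure behind StackBC, several base classifiers whose outputs feed a meta classifier for the final benign/malignant call, can be sketched with scikit-learn. The deep base learners (CNN, LSTM, GRU) are replaced here with simple scikit-learn stand-ins on synthetic data; only the ensemble wiring and the 90%/10% hold-out protocol are illustrated, not the paper's actual models.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic binary task standing in for non-IDC vs. IDC image features.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
# Hold-out split mirroring the 90%/10% protocol described above.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)

stack = StackingClassifier(
    estimators=[  # stand-ins for the three deep base learners
        ("svc", SVC(probability=True, random_state=0)),
        ("rf", RandomForestClassifier(random_state=0)),
        ("knn", KNeighborsClassifier()),
    ],
    final_estimator=LogisticRegression(),  # meta classifier over base outputs
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
print(f"hold-out accuracy: {acc:.3f}")
```

In the paper, the meta classifier is itself a GRU trained on the base learners' predictions; any model that consumes the stacked base outputs fits in the `final_estimator` slot of this sketch.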
Affiliation(s)
- Amin Ul Haq
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Jian Ping Li
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Samad Wali
- Department of Mathematics, Namal Institute, Mianwali, Pakistan
- Sultan Ahmad
- Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Alkharj, Saudi Arabia
- Zafar Ali
- School of Computer Science and Engineering, Southeast University, Nanjing, China
- Jalaluddin Khan
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Ajab Khan
- Director ORIC, Abbottabad University of Science and Technology, Abbottabad, KPK, Pakistan
- Amjad Ali
- Department of Computer Science and Software Technology, University of Swat, KPK, Pakistan
|
15
|
Lin F, Sun H, Han L, Li J, Bao N, Li H, Chen J, Zhou S, Yu T. An effective fine grading method of BI-RADS classification in mammography. Int J Comput Assist Radiol Surg 2021; 17:239-247. [PMID: 34940931 DOI: 10.1007/s11548-021-02541-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2021] [Accepted: 11/30/2021] [Indexed: 12/24/2022]
Abstract
PURPOSE Mammography is an important imaging technique for the detection of early breast cancer. Doctors classify mammograms according to the Breast Imaging Reporting and Data System (BI-RADS). This study aims to provide an intelligent BI-RADS grading prediction method that can help radiologists and clinicians distinguish the most challenging 4A, 4B, and 4C cases in mammography. METHODS First, the breast region, the lesion region, and the corresponding region in the contralateral breast were extracted. Four categories of features were extracted from the original images and from the images after a wavelet transform. Second, an optimized sequential forward floating selection (SFFS) was used for feature selection. Finally, a two-layer classifier integration was employed for fine grading prediction. Forty-five cases from the hospital and 500 cases from the Digital Database for Screening Mammography (DDSM) were used for evaluation. RESULTS The classification performance of the support vector machine (SVM), Bayes, and random forest classifiers is very close on the 45-case test set, with areas under the receiver operating characteristic curve (AUC) of 0.978, 0.967, and 0.968. On the DDSM set, the AUCs are 0.931, 0.938, and 0.874. Using the mean probability prediction, the AUC on the two datasets reaches 0.998 and 0.916. All of these are significantly higher than the doctors' diagnoses, which achieved AUCs of 0.807 and 0.725. CONCLUSIONS A BI-RADS fine grading (2, 3, 4A, 4B, 4C, 5) prediction model was proposed. Evaluation on different datasets showed performance higher than that of the doctors, which may provide great help for clinical BI-RADS classification diagnosis. Our method can therefore produce more effective and reliable results.
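The "mean probability prediction" reported above averages the per-class probabilities of the three classifiers before computing the AUC. A minimal sketch on synthetic data with the same classifier families (SVM, naive Bayes, random forest); the features, dataset, and two-layer integration of the paper are not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Synthetic binary task standing in for the extracted mammography features.
X, y = make_classification(n_samples=300, n_features=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

models = [
    SVC(probability=True, random_state=1),
    GaussianNB(),
    RandomForestClassifier(random_state=1),
]
probas = []
for m in models:
    m.fit(X_tr, y_tr)
    probas.append(m.predict_proba(X_te)[:, 1])  # probability of the positive class

mean_proba = np.mean(probas, axis=0)  # average the three probability estimates
auc = roc_auc_score(y_te, mean_proba)
print(f"ensemble AUC: {auc:.3f}")
```

Averaging calibrated probabilities ("soft voting") typically smooths out the individual models' errors, which is consistent with the ensemble AUC exceeding each single classifier's AUC in the study.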
Affiliation(s)
- Fei Lin
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Hang Sun
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Lu Han
- Department of Radiology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Shenyang, China
- Jing Li
- Department of Radiology, Affiliated Hospital of Guizhou Medical University, Guiyang, China
- Nan Bao
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Hong Li
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China.
- Jing Chen
- Department of Radiology, Affiliated Hospital of Guizhou Medical University, Guiyang, China
- Shi Zhou
- Department of Radiology, Affiliated Hospital of Guizhou Medical University, Guiyang, China
- Tao Yu
- Department of Radiology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Shenyang, China.
|
16
|
He Z, Li Y, Zeng W, Xu W, Liu J, Ma X, Wei J, Zeng H, Xu Z, Wang S, Wen C, Wu J, Feng C, Ma M, Qin G, Lu Y, Chen W. Can a Computer-Aided Mass Diagnosis Model Based on Perceptive Features Learned From Quantitative Mammography Radiology Reports Improve Junior Radiologists' Diagnosis Performance? An Observer Study. Front Oncol 2021; 11:773389. [PMID: 34976817 PMCID: PMC8719464 DOI: 10.3389/fonc.2021.773389] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2021] [Accepted: 11/22/2021] [Indexed: 11/16/2022] Open
Abstract
Radiologists' diagnostic capabilities for breast mass lesions depend on their experience. Junior radiologists may underestimate or overestimate Breast Imaging Reporting and Data System (BI-RADS) categories of mass lesions owing to a lack of diagnostic experience. Computer-aided diagnosis (CAD) methods assist in improving diagnostic performance by providing a breast mass classification reference to radiologists. This study aims to evaluate the impact of a CAD method based on perceptive features learned from quantitative BI-RADS descriptions on breast mass diagnosis performance. We conducted a retrospective multi-reader multi-case (MRMC) study to assess the perceptive feature-based CAD method. A total of 416 digital mammograms of patients with breast masses were obtained from 2014 through 2017, including 231 benign and 185 malignant masses, from which we randomly selected 214 cases (109 benign, 105 malignant) to train the CAD model for perceptive feature extraction and classification. The remaining 202 cases were enrolled as the test set for evaluation, of which 51 cases (29 benign and 22 malignant) were included in the MRMC study. In the MRMC study, we categorized six radiologists into three groups: junior, middle-senior, and senior. They read the 51 cases with and without support from the CAD model. The BI-RADS category, benign or malignant diagnosis, malignancy probability, and diagnosis time during the two evaluation sessions were recorded. In the MRMC evaluation, the average area under the curve (AUC) of the six radiologists with CAD support was slightly higher than that without support (0.896 vs. 0.850, p = 0.0209). Both average sensitivity and specificity increased (p = 0.0253). Under CAD assistance, junior and middle-senior radiologists adjusted the assessment categories of more BI-RADS 4 cases. The diagnosis time with and without CAD support was comparable for five radiologists. The CAD model improved the radiologists' diagnostic performance for breast masses without prolonging diagnosis time and assisted in better BI-RADS assessment, especially for junior radiologists.
Affiliation(s)
- Zilong He
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Yue Li
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
- Weixiong Zeng
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Weimin Xu
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Jialing Liu
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Xiangyuan Ma
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
- Department of Biomedical Engineering, College of Engineering, Shantou University, Shantou, China
- Jun Wei
- Perception Vision Medical Technologies Ltd. Co., Guangzhou, China
- Hui Zeng
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Zeyuan Xu
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Sina Wang
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Chanjuan Wen
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Jiefang Wu
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Chenya Feng
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Mengwei Ma
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Genggeng Qin
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Yao Lu
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
- Guangdong Province Key Laboratory of Computational Science, Sun Yat-sen University, Guangzhou, China
- Weiguo Chen
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, China
|
17
|
Conventional Machine Learning versus Deep Learning for Magnification Dependent Histopathological Breast Cancer Image Classification: A Comparative Study with Visual Explanation. Diagnostics (Basel) 2021; 11:diagnostics11030528. [PMID: 33809611 PMCID: PMC8001768 DOI: 10.3390/diagnostics11030528] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Revised: 03/03/2021] [Accepted: 03/08/2021] [Indexed: 12/14/2022] Open
Abstract
Breast cancer is a serious threat to women. Many machine learning-based computer-aided diagnosis (CAD) methods have been proposed for the early diagnosis of breast cancer from histopathological images. Even though many such classification methods achieve high accuracy, most of them lack an explanation of the classification process. In this paper, we compare the performance of conventional machine learning (CML) against deep learning (DL)-based methods and provide a visual interpretation for the task of classifying breast cancer in histopathological images. For the CML-based methods, we extract a set of handcrafted features using three feature extractors and fuse them to obtain an image representation that acts as the input for training five classical classifiers. For the DL-based methods, we adopt a transfer learning approach with the well-known VGG-19 architecture, whose version pretrained on the large-scale ImageNet dataset is fine-tuned block-wise on the histopathological images. The evaluation of the proposed methods is carried out on the publicly available BreaKHis dataset for the magnification-dependent classification of benign and malignant breast cancer and their eight sub-classes; a further validation on KIMIA Path960, a magnification-free histopathological dataset with 20 image classes, is also performed. After presenting the classification results of the CML and DL methods, and to better explain the difference in classification performance, we visualize the learned features. For the DL-based method, we visualize the areas of interest of the best fine-tuned deep neural networks using attention maps to explain the decision-making process and improve the clinical interpretability of the proposed models. The visual explanation can inherently improve the pathologist’s trust in automated DL methods as a credible and trustworthy support tool for breast cancer diagnosis. The achieved results show that the DL methods outperform the CML approaches: the DL accuracies range from 94.05% to 98.13% for the binary classification and from 76.77% to 88.95% for the eight-class classification, while the CML accuracies range from 85.65% to 89.32% for the binary classification and from 63.55% to 69.69% for the eight-class classification.
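The CML pipeline above fuses handcrafted features from several extractors into a single image representation before classification. A minimal sketch of the fusion step with synthetic descriptors; the extractor names and dimensions below are hypothetical, not the paper's actual choices:

```python
import numpy as np

rng = np.random.default_rng(2)
n_images = 5
# Stand-in handcrafted descriptors from three hypothetical extractors:
texture = rng.random((n_images, 59))    # e.g. an LBP-style texture histogram
shape = rng.random((n_images, 7))       # e.g. Hu-moment-style shape descriptors
intensity = rng.random((n_images, 16))  # e.g. a gray-level intensity histogram

# Fuse by concatenation: one combined feature row per image, ready to feed
# a classical classifier (SVM, random forest, ...).
fused = np.hstack([texture, shape, intensity])
print(fused.shape)  # (5, 82)
```

Concatenation is the simplest fusion strategy; in practice the fused vector is usually standardized per feature before training the classical classifiers.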
|