1
Sun L, Han B, Jiang W, Liu W, Liu B, Tao D, Yu Z, Li C. Multi-scale region selection network in deep features for full-field mammogram classification. Med Image Anal 2025;100:103399. PMID: 39615148. DOI: 10.1016/j.media.2024.103399.
Abstract
Early diagnosis and treatment of breast cancer can effectively reduce mortality. Since mammography is one of the most commonly used methods for early diagnosis of breast cancer, classification of mammogram images is an important task for computer-aided diagnosis (CAD) systems. With the development of deep learning in CAD, deep convolutional neural networks have been shown to classify breast cancer tumor patches with high quality, which has led most previous CNN-based full-field mammography classification methods to rely on region of interest (ROI) or segmentation annotations so that the model can locate and focus on small tumor regions. However, this dependence on ROIs greatly limits the development of CAD, because obtaining a large number of reliable ROI annotations is expensive and difficult. Some full-field mammography classification algorithms avoid the dependence on ROIs through multi-stage training or multiple feature extractors, which increases the computational cost of the model and introduces feature redundancy. To reduce the cost of model training and make full use of the feature extraction capability of CNNs, we propose a deep multi-scale region selection network (MRSN) in deep features, trained end to end, to classify full-field mammograms without ROI or segmentation annotations. Inspired by multiple-instance learning and the patch classifier, MRSN filters the feature information and keeps only the features of the tumor region, bringing the performance of the full-field image classifier closer to that of the patch classifier. MRSN first scores different regions at different scales to obtain the location of tumor regions. A few high-scoring regions are then selected, based on this location information, as the feature representation of the entire image, allowing the model to focus on the tumor region. Experiments on two public datasets and one private dataset show that the proposed MRSN achieves state-of-the-art performance.
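As an illustration of the multi-scale region scoring and top-k selection described above, the following minimal PyTorch sketch scores grid regions of a backbone feature map at several scales and keeps only the highest-scoring ones as the image representation. The module names, grid scales, and value of k are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class RegionSelectionHead(nn.Module):
    """Scores feature-map regions at several scales and keeps the top-k.

    A minimal sketch of multi-scale region scoring + selection as described
    in the abstract; layer sizes, scales, and k are illustrative only.
    """
    def __init__(self, in_channels=2048, scales=(1, 2, 4), k=4, num_classes=2):
        super().__init__()
        self.scales = scales
        self.k = k
        self.score = nn.Conv2d(in_channels, 1, kernel_size=1)   # per-region score
        self.classifier = nn.Linear(in_channels, num_classes)

    def forward(self, fmap):                       # fmap: (B, C, H, W)
        pooled_regions, region_scores = [], []
        for s in self.scales:
            # split the feature map into an s x s grid of pooled regions
            regions = F.adaptive_avg_pool2d(fmap, output_size=(s, s))   # (B, C, s, s)
            scores = self.score(regions)                                # (B, 1, s, s)
            pooled_regions.append(regions.flatten(2).transpose(1, 2))   # (B, s*s, C)
            region_scores.append(scores.flatten(2).transpose(1, 2))     # (B, s*s, 1)
        feats = torch.cat(pooled_regions, dim=1)                # (B, R, C)
        scores = torch.cat(region_scores, dim=1).squeeze(-1)    # (B, R)
        topk = scores.topk(self.k, dim=1).indices               # indices of best regions
        selected = torch.gather(
            feats, 1, topk.unsqueeze(-1).expand(-1, -1, feats.size(-1)))  # (B, k, C)
        image_feat = selected.mean(dim=1)                       # aggregate selected regions
        return self.classifier(image_feat), scores

# usage with a standard backbone (ResNet-50 without its pooling/classification head)
backbone = nn.Sequential(*list(torchvision.models.resnet50(weights=None).children())[:-2])
head = RegionSelectionHead()
logits, region_scores = head(backbone(torch.randn(1, 3, 1024, 512)))
```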
Affiliation(s)
- Luhao Sun
- Breast Cancer Center, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250117, China
- Bowen Han
- School of Computer Science and Technology, Tongji University, Shanghai 201804, China
- Wenzong Jiang
- The College of Oceanography and Space Informatics, China University of Petroleum (East China), Qingdao 266580, China
- Weifeng Liu
- The College of Control Science and Engineering, China University of Petroleum (East China), Qingdao 266580, China
- Baodi Liu
- The College of Control Science and Engineering, China University of Petroleum (East China), Qingdao 266580, China
- Dapeng Tao
- The School of Information Science and Engineering, Yunnan University, Yunnan 650504, China; Yunnan United Vision Technology Co., Ltd., Yunnan 650504, China
- Zhiyong Yu
- Breast Cancer Center, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250117, China
- Chao Li
- Breast Cancer Center, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250117, China
2
Han B, Sun L, Li C, Yu Z, Jiang W, Liu W, Tao D, Liu B. Deep Location Soft-Embedding-Based Network With Regional Scoring for Mammogram Classification. IEEE Trans Med Imaging 2024;43:3137-3148. PMID: 38625766. DOI: 10.1109/tmi.2024.3389661.
Abstract
Early detection and treatment of breast cancer can significantly reduce patient mortality, and mammography is an effective method for early screening. Computer-aided diagnosis (CAD) of mammography based on deep learning can assist radiologists in making more objective and accurate judgments. However, existing methods often depend on datasets with manual segmentation annotations. In addition, because of the large image sizes and small lesion proportions, many methods that do not use a region of interest (ROI) rely on multi-scale and multi-feature fusion models. These shortcomings increase the labor, monetary, and computational overhead of applying the model. Therefore, a deep location soft-embedding-based network with regional scoring (DLSEN-RS) is proposed. DLSEN-RS is an end-to-end mammography image classification method that contains only one feature extractor and relies on positional embedding (PE) and aggregation pooling (AP) modules to locate lesion areas without bounding boxes, transfer learning, or multi-stage training. In particular, the introduced PE and AP modules are applicable across various CNN models and improve tumor localization and diagnostic accuracy for mammography images. Experiments on the public INbreast and CBIS-DDSM datasets show that DLSEN-RS performs favorably against previous state-of-the-art mammographic image classification methods.
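The abstract does not specify how the PE and AP modules are built, so the following PyTorch sketch only illustrates one common way such components are realized: a learnable 2-D positional embedding added to the backbone feature map, followed by score-weighted spatial pooling that yields both class logits and a coarse lesion heatmap. All shapes and names here are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class PositionalEmbedding2d(nn.Module):
    """Adds a learnable 2-D positional embedding to a CNN feature map."""
    def __init__(self, channels, height, width):
        super().__init__()
        self.pos = nn.Parameter(torch.zeros(1, channels, height, width))

    def forward(self, fmap):                   # fmap: (B, C, H, W)
        return fmap + self.pos

class ScoreWeightedPooling(nn.Module):
    """Aggregates spatial features with learned per-location scores."""
    def __init__(self, channels, num_classes=2):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)
        self.fc = nn.Linear(channels, num_classes)

    def forward(self, fmap):                   # fmap: (B, C, H, W)
        weights = torch.softmax(self.score(fmap).flatten(2), dim=-1)   # (B, 1, H*W)
        feats = fmap.flatten(2)                                        # (B, C, H*W)
        pooled = (feats * weights).sum(dim=-1)                         # (B, C)
        heatmap = weights.view(fmap.shape[0], 1, *fmap.shape[2:])      # coarse lesion map
        return self.fc(pooled), heatmap

# usage on a 2048-channel, 32x16 backbone feature map
pe = PositionalEmbedding2d(2048, 32, 16)
ap = ScoreWeightedPooling(2048)
logits, heatmap = ap(pe(torch.randn(1, 2048, 32, 16)))
```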
3
Bhalla D, Rangarajan K, Chandra T, Banerjee S, Arora C. Reproducibility and Explainability of Deep Learning in Mammography: A Systematic Review of Literature. Indian J Radiol Imaging 2024;34:469-487. PMID: 38912238. PMCID: PMC11188703. DOI: 10.1055/s-0043-1775737.
Abstract
Background: Although abundant literature is currently available on the use of deep learning for breast cancer detection in mammography, the quality of that literature varies widely. Purpose: To evaluate published literature on breast cancer detection in mammography for reproducibility and to ascertain best practices for model design. Methods: The PubMed and Scopus databases were searched to identify records that described the use of deep learning to detect lesions or classify images into cancer or noncancer. A modified Quality Assessment of Diagnostic Accuracy Studies (mQUADAS-2) tool was developed for this review and applied to the included studies. Reported results (area under the receiver operating characteristic curve [AUC], sensitivity, specificity) were recorded. Results: A total of 12,123 records were screened, of which 107 met the inclusion criteria. Training and test datasets, the key idea behind each model architecture, and results were recorded for these studies. Based on the mQUADAS-2 assessment, 103 studies had a high risk of bias due to nonrepresentative patient selection. Four studies were of adequate quality; three trained their own model and one used a commercial network, and ensemble models were used in two of these. Common strategies for model training included patch classifiers, image classification networks (ResNet in 67%), and object detection networks (RetinaNet in 67%). The highest reported AUC was 0.927 ± 0.008 on a screening dataset and reached 0.945 (0.919-0.968) on an enriched subset. Higher AUC (0.955) and specificity (98.5%) were achieved when combined radiologist and artificial intelligence readings were used rather than either alone. None of the studies provided explainability beyond localization accuracy, and none studied the interaction between AI and radiologists in a real-world setting. Conclusion: While deep learning holds much promise in mammography interpretation, evaluation in reproducible clinical settings and explainable networks are urgently needed.
Affiliation(s)
- Deeksha Bhalla
- Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Krithika Rangarajan
- Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Tany Chandra
- Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Subhashis Banerjee
- Department of Computer Science and Engineering, Indian Institute of Technology, New Delhi, India
- Chetan Arora
- Department of Computer Science and Engineering, Indian Institute of Technology, New Delhi, India
4
Baghdadi NA, Malki A, Magdy Balaha H, AbdulAzeem Y, Badawy M, Elhosseini M. Classification of breast cancer using a manta-ray foraging optimized transfer learning framework. PeerJ Comput Sci 2022;8:e1054. PMID: 36092017. PMCID: PMC9454783. DOI: 10.7717/peerj-cs.1054.
Abstract
Due to its high prevalence and wide dissemination, breast cancer is a particularly dangerous disease, and survival chances can be improved by early detection and diagnosis. For medical image analysts, diagnosis is difficult, time-consuming, routine, and repetitive, and medical image analysis can be a useful method for detecting the disease. Recently, artificial intelligence technology has been utilized to help radiologists identify breast cancer more rapidly and reliably. Convolutional neural networks (CNNs), among other technologies, are promising medical image recognition and classification tools. This study proposes a framework for automatic and reliable breast cancer classification based on histological and ultrasound data. The system is built on CNNs and employs transfer learning and metaheuristic optimization. The Manta Ray Foraging Optimization (MRFO) approach is deployed to improve the framework's adaptability. Using the Breast Cancer Dataset (two classes) and the Breast Ultrasound Dataset (three classes), eight modern pre-trained CNN architectures are examined with the transfer learning technique. The framework uses MRFO to improve the performance of the CNN architectures by optimizing their hyperparameters. Extensive experiments recorded performance metrics including accuracy, AUC, precision, F1-score, sensitivity, Dice, recall, IoU, and cosine similarity. The proposed framework achieved an accuracy of 97.73% on histopathological data and 99.01% on ultrasound data. The experimental results show that the proposed framework is superior to other state-of-the-art approaches in the literature.
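The hyperparameter optimization loop can be pictured as a population-based search in which each candidate's fitness is the validation accuracy of a fine-tuned pre-trained CNN. The sketch below is a heavily simplified stand-in: the actual MRFO update rules (chain, cyclone, and somersault foraging) are replaced by a guided random perturbation, and the fitness function is a dummy so the snippet runs standalone.

```python
import random

# Hypothetical hyperparameter search in the spirit of population-based
# metaheuristics such as MRFO; not the paper's actual update equations.
SEARCH_SPACE = {
    "learning_rate": (1e-5, 1e-2),
    "dropout": (0.1, 0.6),
    "batch_size": (8, 64),
}

def random_candidate():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in SEARCH_SPACE.items()}

def fitness(hp):
    """Placeholder: in practice, fine-tune a pre-trained CNN with these
    hyperparameters and return validation accuracy. Replaced by a dummy
    score here so the sketch runs standalone."""
    return -((hp["learning_rate"] - 1e-3) ** 2) - (hp["dropout"] - 0.3) ** 2

def search(population_size=8, iterations=20):
    population = [random_candidate() for _ in range(population_size)]
    best = max(population, key=fitness)
    for _ in range(iterations):
        for i, hp in enumerate(population):
            new = {}
            for k, (lo, hi) in SEARCH_SPACE.items():
                # move toward the current best, with exploration noise
                step = random.gauss(0, 0.5) * (best[k] - hp[k]) + random.gauss(0, 0.05) * (hi - lo)
                new[k] = min(max(hp[k] + step, lo), hi)
            if fitness(new) > fitness(hp):
                population[i] = new
        best = max(population, key=fitness)
    return best

print(search())
```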
Affiliation(s)
- Nadiah A. Baghdadi
- College of Nursing, Nursing Management and Education Department, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Amer Malki
- College of Computer Science and Engineering, Taibah University, Yanbu, Saudi Arabia
- Hossam Magdy Balaha
- Computers and Control Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
- Yousry AbdulAzeem
- Computer Engineering Department, Misr Higher Institute for Engineering and Technology, Mansoura, Egypt
- Mahmoud Badawy
- Computers and Control Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
- Mostafa Elhosseini
- College of Computer Science and Engineering, Taibah University, Yanbu, Saudi Arabia
- Computers and Control Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
5
Zhao F, Liu W, Wen C. A New Method of Image Classification Based on Domain Adaptation. Sensors 2022;22(4):1315. PMID: 35214217. PMCID: PMC8877464. DOI: 10.3390/s22041315.
Abstract
Deep neural networks can learn powerful representations from massive amounts of labeled data; however, their performance is unsatisfactory when samples are abundant but labels are scarce. Transfer learning can bridge a source domain with rich labeled data and a target domain with only a few or zero labeled samples, completing the transfer of knowledge by aligning the distributions between domains through methods such as domain adaptation. Previous domain adaptation methods mostly align features globally in the feature space across all categories. Recently, methods that locally align sub-categories by introducing label information have achieved better results. Building on this, we present deep fuzzy domain adaptation (DFDA), which assigns different weights to samples of the same category in the source and target domains, enhancing the domain adaptation capability. Our experiments demonstrate that DFDA achieves remarkable results on standard domain adaptation datasets.
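A common building block for this kind of weighted distribution alignment is a sample-weighted maximum mean discrepancy (MMD) between source and target features, where the weights could encode fuzzy class memberships. The PyTorch sketch below shows that generic idea only; it is not the paper's exact DFDA formulation.

```python
import torch

def gaussian_kernel(x, y, sigma=1.0):
    """RBF kernel matrix between two batches of feature vectors."""
    dist = torch.cdist(x, y) ** 2
    return torch.exp(-dist / (2 * sigma ** 2))

def weighted_mmd(source, target, w_s, w_t, sigma=1.0):
    """Weighted MMD between source and target features.

    w_s / w_t are per-sample weights (e.g. fuzzy class memberships);
    a generic illustration of weighted alignment, assumptions only.
    """
    w_s = w_s / w_s.sum()
    w_t = w_t / w_t.sum()
    k_ss = gaussian_kernel(source, source, sigma)
    k_tt = gaussian_kernel(target, target, sigma)
    k_st = gaussian_kernel(source, target, sigma)
    return (w_s @ k_ss @ w_s) + (w_t @ k_tt @ w_t) - 2 * (w_s @ k_st @ w_t)

# usage with random features and uniform weights
src, tgt = torch.randn(32, 256), torch.randn(40, 256)
loss = weighted_mmd(src, tgt, torch.ones(32), torch.ones(40))
```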
Affiliation(s)
- Fangwen Zhao
- School of Electrical and Control Engineering, Shaanxi University of Science and Technology, Xi’an 710021, China; (F.Z.); (W.L.)
- Weifeng Liu
- School of Electrical and Control Engineering, Shaanxi University of Science and Technology, Xi’an 710021, China; (F.Z.); (W.L.)
- Chenglin Wen
- School of Automation, Guangdong University of Petrochemical Technology, Maoming 525000, China
6
Montaha S, Azam S, Rafid AKMRH, Ghosh P, Hasan MZ, Jonkman M, De Boer F. BreastNet18: A High Accuracy Fine-Tuned VGG16 Model Evaluated Using Ablation Study for Diagnosing Breast Cancer from Enhanced Mammography Images. Biology 2021;10(12):1347. PMID: 34943262. PMCID: PMC8698892. DOI: 10.3390/biology10121347.
Abstract
Simple Summary: Breast cancer diagnosis at an early stage using mammography is important, as it assists clinical specialists in treatment planning and increases survival rates. The aim of this study is to construct an effective method to classify breast images into four classes with a low error rate. Initially, unwanted regions of the mammograms are removed, the quality is enhanced, and the cancerous lesions are highlighted with different artifact removal, noise reduction, and enhancement techniques. The number of mammograms is increased using seven augmentation techniques to deal with over-fitting and under-fitting problems. Afterwards, six fine-tuned convolutional neural networks (CNNs), originally developed for other purposes, are evaluated, and VGG16 yields the highest performance. We propose a BreastNet18 model based on the fine-tuned VGG16, changing different hyperparameters and layer structures after experimentation with our dataset. Performing an ablation study on the proposed model and selecting suitable parameter values for the preprocessing algorithms increases the accuracy of our model to 98.02%, outperforming some existing state-of-the-art approaches. To analyze the performance, several performance metrics are generated and evaluated for every model and for BreastNet18. The results suggest that accuracy improvements can be obtained through image pre-processing, augmentation, and an ablation study. To investigate possible overfitting issues, k-fold cross validation is carried out, and to assert the robustness of the network, the model is tested on a dataset containing noisy mammograms. This may help medical specialists in efficient and accurate diagnosis and early treatment planning.
Abstract: Background: Identification and treatment of breast cancer at an early stage can reduce mortality. Currently, mammography is the most widely used effective imaging technique in breast cancer detection. However, an erroneous mammogram-based interpretation may result in false diagnoses, as distinguishing cancerous masses from adjacent tissue is often complex and error-prone. Methods: Six pre-trained and fine-tuned deep CNN architectures, VGG16, VGG19, MobileNetV2, ResNet50, DenseNet201, and InceptionV3, are evaluated to determine which model yields the best performance. We propose a BreastNet18 model using VGG16 as the foundational base, since VGG16 performs with the highest accuracy. An ablation study is performed on BreastNet18 to evaluate its robustness and achieve the highest possible accuracy. Various image processing techniques with suitable parameter values are employed to remove artefacts and increase the image quality. A total dataset of 1442 preprocessed mammograms was augmented using seven augmentation techniques, resulting in a dataset of 11,536 images. To investigate possible overfitting issues, k-fold cross validation is carried out. The model was then tested on noisy mammograms to evaluate its robustness, and the results were compared with previous studies. Results: The proposed BreastNet18 model performed best, with a training accuracy of 96.72%, a validation accuracy of 97.91%, and a test accuracy of 98.02%. In contrast, VGG19 yielded a test accuracy of 96.24%, MobileNetV2 77.84%, ResNet50 79.98%, DenseNet201 86.92%, and InceptionV3 76.87%. Conclusions: Our proposed approach based on image processing, transfer learning, fine-tuning, and an ablation study demonstrates highly accurate breast cancer classification while dealing with a limited number of complex medical images.
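For readers unfamiliar with this transfer-learning setup, the sketch below shows a generic fine-tuned VGG16 for four-class classification in PyTorch: the pre-trained convolutional features are frozen and the ImageNet head is replaced. BreastNet18's exact layer and hyperparameter changes are not given in the abstract, so the classifier head here is an assumption.

```python
import torch
import torch.nn as nn
from torchvision import models

# Generic fine-tuned VGG16 sketch for 4-class mammogram classification;
# BreastNet18's actual modifications are not specified in the abstract.
def build_finetuned_vgg16(num_classes=4, freeze_features=True):
    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    if freeze_features:
        for p in model.features.parameters():    # keep pre-trained conv filters
            p.requires_grad = False
    model.classifier = nn.Sequential(            # replace the ImageNet head (assumed design)
        nn.Linear(512 * 7 * 7, 512),
        nn.ReLU(inplace=True),
        nn.Dropout(0.5),
        nn.Linear(512, num_classes),
    )
    return model

model = build_finetuned_vgg16()
logits = model(torch.randn(2, 3, 224, 224))      # output shape: (2, 4)
```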
Affiliation(s)
- Sidratul Montaha
- Department of Computer Science and Engineering, Daffodil International University, Dhaka 1207, Bangladesh; (S.M.); (A.K.M.R.H.R.); (M.Z.H.)
- Sami Azam
- College of Engineering, IT and Environment, Charles Darwin University, Darwin, NT 0909, Australia; (M.J.); (F.D.B.)
- Pronab Ghosh
- Department of Computer Science (CS), Lakehead University, 955 Oliver Rd, Thunder Bay, ON P7B 5E1, Canada
- Md. Zahid Hasan
- Department of Computer Science and Engineering, Daffodil International University, Dhaka 1207, Bangladesh; (S.M.); (A.K.M.R.H.R.); (M.Z.H.)
- Mirjam Jonkman
- College of Engineering, IT and Environment, Charles Darwin University, Darwin, NT 0909, Australia; (M.J.); (F.D.B.)
- Friso De Boer
- College of Engineering, IT and Environment, Charles Darwin University, Darwin, NT 0909, Australia; (M.J.); (F.D.B.)
7
Oza P, Sharma P, Patel S, Bruno A. A Bottom-Up Review of Image Analysis Methods for Suspicious Region Detection in Mammograms. J Imaging 2021;7(9):190. PMID: 34564116. PMCID: PMC8466003. DOI: 10.3390/jimaging7090190.
Abstract
Breast cancer is one of the most common causes of death among women worldwide. Early detection of breast cancer plays a critical role in increasing the survival rate. Various imaging modalities, such as mammography, breast MRI, ultrasound, and thermography, are used to detect breast cancer. Although mammography has achieved considerable success in biomedical imaging, detecting suspicious areas remains a challenge: examination is manual, masses vary in shape, size, and other morphological features, and mammographic accuracy changes with breast density. Furthermore, analyzing many mammograms per day can be a tedious task for radiologists and practitioners. One of the main objectives of biomedical imaging is to provide radiologists and practitioners with tools that help them identify all suspicious regions in a given image. Computer-aided mass detection in mammograms can serve as a second-opinion tool that helps radiologists avoid oversight errors. The scientific community has made much progress on this topic, and several approaches have been proposed along the way. Following a bottom-up narrative, this paper surveys scientific methodologies and techniques for detecting suspicious regions in mammograms, spanning from methods based on low-level image features to the most recent AI-based approaches. Both theoretical and practical grounds are provided across the paper's sections to highlight the pros and cons of different methodologies. The paper's main scope is to let readers embark on a journey through a fully comprehensive description of techniques, strategies, and datasets on the topic.
Affiliation(s)
- Parita Oza
- Computer Science and Engineering Department, School of Technology, Pandit Deendayal Energy University, Gandhinagar 382007, India; (P.S.); (S.P.)
- Paawan Sharma
- Computer Science and Engineering Department, School of Technology, Pandit Deendayal Energy University, Gandhinagar 382007, India; (P.S.); (S.P.)
- Samir Patel
- Computer Science and Engineering Department, School of Technology, Pandit Deendayal Energy University, Gandhinagar 382007, India; (P.S.); (S.P.)
- Alessandro Bruno
- Department of Computing and Informatics, Bournemouth University, Poole, Dorset BH12 5BB, UK