1
Gupta M, Verma N, Sharma N, Singh SN, Brojen Singh RK, Sharma SK. Deep transfer learning hybrid techniques for precision in breast cancer tumor histopathology classification. Health Inf Sci Syst 2025; 13:20. [PMID: 39949707 PMCID: PMC11813847 DOI: 10.1007/s13755-025-00337-7] [Received: 06/30/2024] [Accepted: 01/07/2025] [Indexed: 02/16/2025]
Abstract
Breast cancer is one of the most prevalent causes of cancer-related death globally, and early diagnosis increases a patient's chances of survival. Breast cancer classification is a challenging problem due to dense tissue structures, subtle variations, cellular heterogeneity, artifacts, and variability. In this paper, we propose three hybrid deep-transfer-learning models for breast cancer classification using histopathology images. These models use the Xception model as a base, to which we add seven layers for fine-tuning. We also performed an extensive comparative analysis of five prominent machine-learning classifiers, namely Random Forest Classifier (RFC), Logistic Regression (LR), Support Vector Classifier (SVC), K-Nearest Neighbors (KNN), and AdaBoost. We incorporate the two best-performing classifiers, RFC and SVC, into the fine-tuned Xception model; the resulting models are named Xception Random Forest (XRF) and Xception Support Vector (XSV), respectively. The fine-tuned Xception model with a softmax classifier is termed the Multi-layer Xception Classifier (MXC). These three models are evaluated on two publicly available datasets: BreakHis and the Breast Histopathology Images Database (BHID). All three models perform better than state-of-the-art methods. XRF provides the best performance at the 40× magnification level on the BreakHis dataset, with an accuracy (ACC) of 94.44%, F1 score (F1) of 94.44%, area under the receiver operating characteristic curve (AUC) of 95.12%, Matthews correlation coefficient (MCC) of 88.98%, kappa (K) of 88.88%, and classification success index (CSI) of 89.23%. MXC provides the best performance on the BHID dataset, with an ACC of 88.50%, F1 of 88.50%, AUC of 95.12%, MCC of 77.03%, K of 77.00%, and CSI of 79.13%. To further validate our models, we performed fivefold cross-validation on both datasets and obtained similar results.
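The agreement-style metrics reported above (ACC, F1, MCC, kappa) all derive from the binary confusion matrix. A minimal stdlib sketch of those formulas, using illustrative counts rather than the paper's data:

```python
import math

def binary_metrics(tp, fp, fn, tn):
    """Accuracy, F1, Matthews correlation coefficient (MCC), and
    Cohen's kappa from binary confusion-matrix counts."""
    n = tp + fp + fn + tn
    acc = (tp + tn) / n
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (acc - p_chance) / (1 - p_chance)
    return acc, f1, mcc, kappa

# Illustrative counts only (not taken from the paper)
acc, f1, mcc, kappa = binary_metrics(tp=45, fp=5, fn=5, tn=45)
```

Unlike accuracy, MCC and kappa stay near zero for a classifier that merely guesses the majority class, which is why they are often reported alongside ACC on imbalanced histopathology datasets.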
Affiliation(s)
- Muniraj Gupta
- School of Computer & Systems Sciences, Jawaharlal Nehru University, New Delhi, 110067 India
- Nidhi Verma
- Ramlal Anand College, University of Delhi, South Campus, Anand Niketan, New Delhi, 110021 India
- Naveen Sharma
- Indian Council of Medical Research, New Delhi, 110029 India
- R. K. Brojen Singh
- School of Computational & Integrative Sciences, Jawaharlal Nehru University, New Delhi, 110067 India
- Saurabh Kumar Sharma
- School of Computer & Systems Sciences, Jawaharlal Nehru University, New Delhi, 110067 India
2
Wang Y, Luo L, Wu M, Wang Q, Chen H. Learning robust medical image segmentation from multi-source annotations. Med Image Anal 2025; 101:103489. [PMID: 39933334 DOI: 10.1016/j.media.2025.103489] [Received: 02/02/2024] [Revised: 11/02/2024] [Accepted: 01/28/2025] [Indexed: 02/13/2025]
Abstract
Collecting annotations from multiple independent sources can mitigate the noise and bias of any single source and is common practice in medical image segmentation. However, learning segmentation networks from multi-source annotations remains challenging because of the uncertainty introduced by annotation variance. In this paper, we propose an Uncertainty-guided Multi-source Annotation Network (UMA-Net), which guides the training process with uncertainty estimates at both the pixel and image levels. First, an annotation uncertainty estimation module (AUEM) estimates the pixel-wise uncertainty of each annotation, which then guides the network to learn from reliable pixels through a weighted segmentation loss. Second, a quality assessment module (QAM) assesses the image-level quality of input samples based on the estimated annotation uncertainties. Furthermore, instead of discarding the low-quality samples, we introduce an auxiliary predictor that learns from them, preserving their representation knowledge in the backbone without directly accumulating errors in the primary predictor. Extensive experiments demonstrate the effectiveness and feasibility of UMA-Net on various datasets, including a 2D chest X-ray segmentation dataset, a 2D fundus image segmentation dataset, a 3D breast DCE-MRI segmentation dataset, and the QUBIQ multi-task segmentation dataset. Code will be released at https://github.com/wangjin2945/UMA-Net.
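The pixel-weighting idea behind the AUEM can be sketched as a cross-entropy loss whose per-pixel weights down-weight uncertain annotations. This is an illustrative reconstruction under assumed definitions (weight = 1 − uncertainty), not the paper's exact loss:

```python
import math

def uncertainty_weighted_bce(preds, labels, uncertainties, eps=1e-7):
    """Binary cross-entropy averaged over pixels, with each pixel
    down-weighted by its estimated annotation uncertainty u in [0, 1]
    (weight = 1 - u, so unreliable annotations contribute less)."""
    total, weight_sum = 0.0, 0.0
    for p, y, u in zip(preds, labels, uncertainties):
        w = 1.0 - u
        p = min(max(p, eps), 1.0 - eps)  # clamp for numerical safety
        total += w * -(y * math.log(p) + (1 - y) * math.log(1 - p))
        weight_sum += w
    return total / weight_sum

# The third pixel disagrees with its label but is flagged as uncertain
# (u = 0.9), so it barely contributes to the loss.
loss = uncertainty_weighted_bce(preds=[0.9, 0.8, 0.2],
                                labels=[1, 1, 1],
                                uncertainties=[0.0, 0.0, 0.9])
```

Setting all uncertainties to zero recovers the ordinary mean binary cross-entropy, so the weighting is a strict generalization of the standard loss.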
Affiliation(s)
- Yifeng Wang
- Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
- Luyang Luo
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Qiong Wang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Hao Chen
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China; Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Hong Kong, China; Division of Life Science, The Hong Kong University of Science and Technology, Hong Kong, China; State Key Laboratory of Molecular Neuroscience, The Hong Kong University of Science and Technology, Hong Kong, China
3
Huang Y, Chang A, Dou H, Tao X, Zhou X, Cao Y, Huang R, Frangi AF, Bao L, Yang X, Ni D. Flip Learning: Weakly supervised erase to segment nodules in breast ultrasound. Med Image Anal 2025; 102:103552. [PMID: 40179628 DOI: 10.1016/j.media.2025.103552] [Received: 04/02/2024] [Revised: 12/01/2024] [Accepted: 03/11/2025] [Indexed: 04/05/2025]
Abstract
Accurate segmentation of nodules in both 2D breast ultrasound (BUS) and 3D automated breast ultrasound (ABUS) is crucial for clinical diagnosis and treatment planning, so an automated nodule segmentation system can reduce operator dependence and expedite clinical analysis. Unlike fully supervised learning, weakly supervised segmentation (WSS) can streamline the laborious and intricate annotation process. However, current WSS methods struggle to achieve precise nodule segmentation, as many depend on inaccurate activation maps or inefficient pseudo-mask generation algorithms. In this study, we introduce Flip Learning, a novel multi-agent reinforcement-learning-based WSS framework that relies solely on 2D/3D boxes for accurate segmentation. Specifically, multiple agents erase the target from the box until the classification tag flips, with the erased region serving as the predicted segmentation mask. The key contributions of this research are as follows: (1) a superpixel/supervoxel-based encoding of the standardized environment, which captures boundary priors and expedites learning; (2) three meticulously designed rewards, comprising a classification-score reward and two intensity-distribution rewards, which steer the agents' erasing precisely and avoid both under- and over-segmentation; and (3) a progressive curriculum learning strategy that lets agents interact with the environment at gradually increasing difficulty, enhancing learning efficiency. Extensively validated on large in-house BUS and ABUS datasets, Flip Learning outperforms state-of-the-art WSS methods and foundation models, and achieves performance comparable to fully supervised learning algorithms.
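The reward design can be read as follows: an erasing step is rewarded when it pushes the box classifier's nodule score toward "background" and when the erased pixels look like nodule tissue rather than background. A toy sketch under assumed functional forms (the weights and exact reward terms here are hypothetical, not the paper's definitions):

```python
def erase_step_reward(score_before, score_after,
                      erased_mean, fg_mean, bg_mean, alpha=0.5):
    """Toy per-step reward for an erasing agent (assumed forms):
    - classification-score term: pays for dropping the nodule probability,
      i.e. pressure toward flipping the classification tag;
    - intensity-distribution terms: pay when the erased pixels' mean
      intensity matches the foreground (nodule) and differs from the
      background, discouraging erasure of background-like pixels."""
    r_cls = score_before - score_after       # tag-flipping pressure
    r_fg = -abs(erased_mean - fg_mean)       # erased pixels should look like nodule
    r_bg = abs(erased_mean - bg_mean)        # ...and unlike background
    return r_cls + alpha * (r_fg + r_bg)

# Erasing nodule-like pixels (large score drop, intensity near fg_mean)
# earns more than erasing background-like pixels.
good = erase_step_reward(0.95, 0.30, erased_mean=0.8, fg_mean=0.8, bg_mean=0.2)
bad = erase_step_reward(0.95, 0.90, erased_mean=0.25, fg_mean=0.8, bg_mean=0.2)
```

The combination matters: the classification term alone would reward erasing everything in the box, while the intensity terms alone would ignore the classifier; together they penalize both over- and under-segmentation.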
Affiliation(s)
- Yuhao Huang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Ao Chang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Haoran Dou
- Centre for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB), University of Leeds, Leeds, UK; Department of Computer Science, School of Engineering, University of Manchester, Manchester, UK
- Xing Tao
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Xinrui Zhou
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Yan Cao
- Shenzhen RayShape Medical Technology Co., Ltd, Shenzhen, China
- Ruobing Huang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Alejandro F Frangi
- Division of Informatics, Imaging and Data Science, School of Health Sciences, University of Manchester, Manchester, UK; Department of Computer Science, School of Engineering, University of Manchester, Manchester, UK; Medical Imaging Research Center (MIRC), Department of Electrical Engineering, Department of Cardiovascular Sciences, KU Leuven, Belgium; Alan Turing Institute, London, UK; NIHR Manchester Biomedical Research Centre, Manchester Academic Health Science Centre, Manchester, UK
- Lingyun Bao
- Department of Ultrasound, Affiliated Hangzhou First People's Hospital, School of Medicine, Westlake University, China
- Xin Yang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Dong Ni
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China; School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing, China
4
Wen X, Tu H, Zhao B, Zhou W, Yang Z, Li L. Identification of benign and malignant breast nodules on ultrasound: comparison of multiple deep learning models and model interpretation. Front Oncol 2025; 15:1517278. [PMID: 40040727 PMCID: PMC11876547 DOI: 10.3389/fonc.2025.1517278] [Received: 10/25/2024] [Accepted: 01/30/2025] [Indexed: 03/06/2025]
Abstract
Background and Purpose Deep learning (DL) algorithms generally require full supervision in the form of annotated regions of interest (ROIs), a process that is both labor-intensive and susceptible to bias. We aimed to develop a weakly supervised algorithm that differentiates benign from malignant breast tumors in ultrasound images without image annotation. Methods We developed and validated the models using two publicly available datasets: the breast ultrasound image (BUSI) and GDPH&SYSUCC breast ultrasound datasets. After removing poor-quality images, a total of 3049 images were included, divided into two classes: benign (N = 1320) and malignant (N = 1729). Weakly supervised DL algorithms were implemented with four networks (DenseNet121, ResNet50, EfficientNetb0, and Vision Transformer) and trained on 2136 unannotated breast ultrasound images; 609 and 304 images were used for the validation and test sets, respectively. Diagnostic performance was calculated as the area under the receiver operating characteristic curve (AUC), and class activation maps were used to interpret the predictions of the weakly supervised DL algorithms. Results The DenseNet121 model, using complete image inputs without ROI annotations, demonstrated superior diagnostic performance in distinguishing benign from malignant breast nodules compared with the ResNet50, EfficientNetb0, and Vision Transformer models. DenseNet121 achieved the highest AUC, 0.94 on the validation set and 0.93 on the test set, significantly surpassing the other models across both datasets (all P < 0.05). Conclusion The weakly supervised DenseNet121 model developed in this study is feasible for ultrasound diagnosis of breast tumors and showed good differential-diagnosis capability. It may help radiologists, especially novice readers, improve the accuracy of ultrasound breast tumor diagnosis.
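The class-activation-map interpretation used here follows the standard CAM construction: the heatmap is a weighted sum of the final convolutional feature maps, using the classifier weights of the predicted class. A minimal stdlib sketch on tiny nested-list "feature maps" (toy values, not the study's data):

```python
def class_activation_map(feature_maps, class_weights):
    """Standard CAM: weighted sum over K feature maps, then min-max
    normalised to [0, 1] for display as a heatmap.
    feature_maps: list of K HxW grids (nested lists);
    class_weights: K classifier weights for the target class."""
    k = len(feature_maps)
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    raw = [[sum(class_weights[c] * feature_maps[c][i][j] for c in range(k))
            for j in range(w)] for i in range(h)]
    flat = [v for row in raw for v in row]
    lo, hi = min(flat), max(flat)
    return [[(v - lo) / (hi - lo + 1e-12) for v in row] for row in raw]

# Two 2x2 feature maps; both activate in the top-left corner, so the
# CAM highlights that corner as the evidence for the predicted class.
cam = class_activation_map(
    feature_maps=[[[1.0, 0.0], [0.0, 0.0]], [[0.5, 0.0], [0.0, 0.1]]],
    class_weights=[1.0, 2.0])
```

In practice the resulting low-resolution map is upsampled to the input image size and overlaid on the ultrasound frame, which is how a weakly supervised classifier can still localize the nodule it relied on.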
Affiliation(s)
- Xi Wen
- Department of Ultrasound, The Central Hospital of Enshi Tujia and Miao Autonomous Prefecture (Enshi Clinical College of Wuhan University), Enshi, China
- Hao Tu
- Department of Ultrasound, The Central Hospital of Enshi Tujia and Miao Autonomous Prefecture (Enshi Clinical College of Wuhan University), Enshi, China
- Bingyang Zhao
- Department of Neurology, China-Japan Union Hospital of Jilin University, Changchun, China
- Wenbo Zhou
- Department of Stomatology, China-Japan Union Hospital of Jilin University, Changchun, China
- Zhuo Yang
- Department of Ultrasound, The Central Hospital of Enshi Tujia and Miao Autonomous Prefecture (Enshi Clinical College of Wuhan University), Enshi, China
- Lijuan Li
- Department of Ultrasound, The Central Hospital of Enshi Tujia and Miao Autonomous Prefecture (Enshi Clinical College of Wuhan University), Enshi, China
5
Saldanha OL, Zhu J, Müller-Franzes G, Carrero ZI, Payne NR, Escudero Sánchez L, Varoutas PC, Kyathanahally S, Laleh NG, Pfeiffer K, Ligero M, Behner J, Abdullah KA, Apostolakos G, Kolofousi C, Kleanthous A, Kalogeropoulos M, Rossi C, Nowakowska S, Athanasiou A, Perez-Lopez R, Mann R, Veldhuis W, Camps J, Schulz V, Wenzel M, Morozov S, Ciritsis A, Kuhl C, Gilbert FJ, Truhn D, Kather JN. Swarm learning with weak supervision enables automatic breast cancer detection in magnetic resonance imaging. Communications Medicine 2025; 5:38. [PMID: 39915630 PMCID: PMC11802753 DOI: 10.1038/s43856-024-00722-5] [Received: 10/09/2024] [Accepted: 12/18/2024] [Indexed: 02/09/2025]
Abstract
BACKGROUND Over the next 5 years, new breast cancer screening guidelines recommending magnetic resonance imaging (MRI) for certain patients will significantly increase the volume of imaging data to be analyzed. While this increase poses challenges for radiologists, artificial intelligence (AI) offers potential solutions to manage the workload. However, the development of AI models is often hindered by manual annotation requirements and strict data-sharing regulations between institutions. METHODS In this study, we present an integrated pipeline combining weakly supervised learning, which reduces the need for detailed annotations, with local AI model training via swarm learning (SL), which circumvents centralized data sharing. We trained models on three datasets comprising 1372 female bilateral breast MRI exams from institutions in three countries: the United States (US), Switzerland, and the United Kingdom (UK). These models were then validated on two external datasets consisting of 649 bilateral breast MRI exams from Germany and Greece. RESULTS Systematically benchmarking various weakly supervised two-dimensional (2D) and three-dimensional (3D) deep learning (DL) methods, we find that the 3D-ResNet-101 demonstrates superior performance. By implementing a real-world SL setup across three international centers, we observe that these collaboratively trained models outperform those trained locally. Even with a smaller dataset, we demonstrate the practical feasibility of deploying SL internationally with on-site data processing, addressing challenges such as data privacy and annotation variability. CONCLUSIONS Combining weakly supervised learning with SL enhances inter-institutional collaboration and improves the utility of distributed datasets for medical AI training without requiring detailed annotations or centralized data sharing.
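At the heart of swarm learning is a periodic parameter-merging step: each site trains locally, then only model parameters (never patient data) are exchanged and averaged. A minimal sketch of that merging step (the real protocol adds blockchain-based peer coordination and dynamic leader election, which this toy omits):

```python
def merge_swarm_weights(node_weights):
    """Sketch of the swarm-learning sync step: element-wise average of
    every node's parameter vector, redistributed to all nodes, so raw
    imaging data never leaves a site. Each entry is one node's flat
    parameter vector (all vectors must have the same length)."""
    n = len(node_weights)
    dim = len(node_weights[0])
    return [sum(w[i] for w in node_weights) / n for i in range(dim)]

# Three hospitals train locally, then share only their parameters.
merged = merge_swarm_weights([[0.2, 1.0],
                              [0.4, 1.2],
                              [0.6, 0.8]])
```

This is why the abstract can claim privacy benefits: the only artifacts crossing institutional boundaries are averaged parameter vectors, which keeps the training data on-site.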
Collapse
Grants
- The study is organized and funded by the ODELIA consortium, which receives funding from the European Union's Horizon Europe research and innovation program under grant agreement No 101057091. In addition, JNK is supported by the German Federal Ministry of Health (DEEP LIVER, ZMVI1-2520DAT111), the German Cancer Aid (DECADE, 70115166), the German Federal Ministry of Education and Research (PEARL, 01KD2104C; CAMINO, 01EO2101; SWAG, 01KD2215A; TRANSFORM LIVER, 031L0312A; TANGERINE, 01KT2302 through ERA-NET Transcan), the German Academic Exchange Service (SECAI, 57616814), the German Federal Joint Committee (TransplantKI, 01VSF21048), the European Union's Horizon Europe research and innovation program (GENIAL, 101096312), and the National Institute for Health and Care Research (NIHR, NIHR213331) Leeds Biomedical Research Centre. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR, or the Department of Health and Social Care. LM is funded by "NUM 2.0" (FKZ: 01KX2121).
Affiliation(s)
- Oliver Lester Saldanha
- Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH Aachen, Aachen, Germany
- Jiefu Zhu
- Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
- Gustav Müller-Franzes
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH Aachen, Aachen, Germany
- Zunamys I Carrero
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH Aachen, Aachen, Germany
- Nicholas R Payne
- Department of Radiology, Clinical School, Cambridge Biomedical Research Centre, University of Cambridge, Cambridge, UK
- Lorena Escudero Sánchez
- Department of Radiology, Clinical School, Cambridge Biomedical Research Centre, University of Cambridge, Cambridge, UK
- Cancer Research UK Cambridge Centre, Cambridge, UK
- Sreenath Kyathanahally
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Zurich, Switzerland
- b-rayZ AG, Schlieren, Switzerland
- Narmin Ghaffari Laleh
- Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
- Kevin Pfeiffer
- Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
- Marta Ligero
- Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
- Jakob Behner
- Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
- Kamarul A Abdullah
- Department of Radiology, Clinical School, Cambridge Biomedical Research Centre, University of Cambridge, Cambridge, UK
- Universiti Sultan Zainal Abidin, Kuala Nerus, Terengganu, Malaysia
- Antri Kleanthous
- Breast Imaging Department, Mitera Hospital Athens, Athens, Greece
- Cristina Rossi
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Zurich, Switzerland
- b-rayZ AG, Schlieren, Switzerland
- Sylwia Nowakowska
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Zurich, Switzerland
- Raquel Perez-Lopez
- Radiomics Group, Vall d'Hebron Institute of Oncology (VHIO), Barcelona, Spain
- Ritse Mann
- Department of Diagnostic Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
- Department of Radiology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Wouter Veldhuis
- Imaging Division, University Medical Center Utrecht, Utrecht, The Netherlands
- Julia Camps
- Breast Cancer Unit, Ribera Salud Hospitals, Valencia, Spain
- Volkmar Schulz
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
- Markus Wenzel
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Constructor University Bremen GmbH, Bremen, Germany
- Sergey Morozov
- The European Society of Medical Imaging Informatics (EuSoMII), Vienna, Austria
- Alexander Ciritsis
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Zurich, Switzerland
- Christiane Kuhl
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH Aachen, Aachen, Germany
- Fiona J Gilbert
- Department of Radiology, Clinical School, Cambridge Biomedical Research Centre, University of Cambridge, Cambridge, UK
- Daniel Truhn
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH Aachen, Aachen, Germany
- Jakob Nikolas Kather
- Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
- Medical Oncology, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany
- Department of Medicine 1, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
6
Abdullah KA, Marziali S, Nanaa M, Escudero Sánchez L, Payne NR, Gilbert FJ. Deep learning-based breast cancer diagnosis in breast MRI: systematic review and meta-analysis. Eur Radiol 2025:10.1007/s00330-025-11406-6. [PMID: 39907762 DOI: 10.1007/s00330-025-11406-6] [Received: 10/17/2024] [Revised: 12/10/2024] [Accepted: 01/10/2025] [Indexed: 02/06/2025]
Abstract
OBJECTIVES The aim of this work is to evaluate the performance of deep learning (DL) models for breast cancer diagnosis with MRI. MATERIALS AND METHODS A literature search was conducted on Web of Science, PubMed, and IEEE Xplore for relevant studies published from January 2015 to February 2024. The study was registered with the PROSPERO International Prospective Register of Systematic Reviews (protocol no. CRD42024485371). The Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool and the Must AI Criteria-10 (MAIC-10) checklist were used to assess quality and risk of bias. The meta-analysis included studies reporting DL for breast cancer diagnosis and their performance, from which pooled summary estimates of the area under the curve (AUC), sensitivity, and specificity were calculated. RESULTS A total of 40 studies were included, of which 21 were eligible for quantitative analysis. Convolutional neural networks (CNNs) were used in 62.5% (25/40) of the implemented models; the remaining 37.5% (15/40) were hybrid composite models (HCMs). The pooled estimates of AUC, sensitivity, and specificity were 0.90 (95% CI: 0.87, 0.93), 88% (95% CI: 86, 91%), and 90% (95% CI: 87, 93%), respectively. CONCLUSIONS DL models used for breast cancer diagnosis on MRI achieve high performance, but there is considerable variability across studies in this analysis. Continuous evaluation and refinement of DL models is therefore essential to ensure their practicality in the clinical setting. KEY POINTS Question Can DL models improve diagnostic accuracy in breast MRI, addressing challenges like overfitting and heterogeneity in study designs and imaging sequences? Findings DL achieved high diagnostic accuracy (AUC 0.90, sensitivity 88%, specificity 90%) in breast MRI, with training-set size significantly impacting performance metrics (p < 0.001).
Clinical relevance DL models demonstrate high accuracy in breast cancer diagnosis using MRI, showing the potential to enhance diagnostic confidence and reduce radiologist workload, especially with larger datasets minimizing overfitting and improving clinical reliability.
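Pooled sensitivity/specificity estimates of the kind reported above are commonly obtained by inverse-variance weighting on the logit scale. A simplified fixed-effect sketch with hypothetical study counts (the review's actual model may differ, e.g. a bivariate random-effects model):

```python
import math

def pooled_proportion(events, totals):
    """Fixed-effect inverse-variance pooling of proportions (e.g. per-study
    sensitivities) on the logit scale, with a 0.5 continuity correction.
    events[i] true positives and totals[i] diseased cases for study i."""
    logits, weights = [], []
    for e, n in zip(events, totals):
        p = (e + 0.5) / (n + 1.0)                      # corrected proportion
        var = 1.0 / (e + 0.5) + 1.0 / (n - e + 0.5)    # variance of the logit
        logits.append(math.log(p / (1 - p)))
        weights.append(1.0 / var)                      # inverse-variance weight
    pooled_logit = (sum(l * w for l, w in zip(logits, weights))
                    / sum(weights))
    return 1.0 / (1.0 + math.exp(-pooled_logit))       # back-transform

# Three hypothetical studies' sensitivities pooled into one estimate;
# larger studies get more weight through their smaller logit variance.
pooled = pooled_proportion(events=[85, 44, 170], totals=[100, 50, 200])
```

Working on the logit scale keeps the pooled value inside (0, 1) and makes the normal approximation behind the confidence intervals more defensible than pooling raw proportions directly.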
Affiliation(s)
- Kamarul Amin Abdullah
- Department of Radiology, University of Cambridge School of Clinical Medicine, Cambridge Biomedical Campus, Cambridge, UK
- Universiti Sultan Zainal Abidin, Terengganu, Malaysia
- Sara Marziali
- Department of Radiology, University of Cambridge School of Clinical Medicine, Cambridge Biomedical Campus, Cambridge, UK
- Department of Radiology and Radiotherapy, Istituto Nazionale dei Tumori, Milan, Italy
- Muzna Nanaa
- Department of Radiology, University of Cambridge School of Clinical Medicine, Cambridge Biomedical Campus, Cambridge, UK
- Lorena Escudero Sánchez
- Department of Radiology, University of Cambridge School of Clinical Medicine, Cambridge Biomedical Campus, Cambridge, UK
- Cancer Research UK Cambridge Centre, Li Ka Shing Centre, Cambridge, UK
- Nicholas R Payne
- Department of Radiology, University of Cambridge School of Clinical Medicine, Cambridge Biomedical Campus, Cambridge, UK
- Fiona J Gilbert
- Department of Radiology, University of Cambridge School of Clinical Medicine, Cambridge Biomedical Campus, Cambridge, UK
- Department of Radiology, Addenbrookes Hospital, Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK
7
Luo L, Wang X, Lin Y, Ma X, Tan A, Chan R, Vardhanabhuti V, Chu WC, Cheng KT, Chen H. Deep Learning in Breast Cancer Imaging: A Decade of Progress and Future Directions. IEEE Rev Biomed Eng 2025; 18:130-151. [PMID: 38265911 DOI: 10.1109/rbme.2024.3357877] [Indexed: 01/26/2024]
Abstract
Breast cancer has had the highest incidence rate worldwide among all malignancies since 2020. Breast imaging plays a significant role in early diagnosis and intervention to improve outcomes for breast cancer patients. Over the past decade, deep learning has shown remarkable progress in breast cancer imaging analysis, holding great promise for interpreting the rich information and complex context of breast imaging modalities. Given the rapid improvement of deep learning technology and the increasing severity of breast cancer, it is critical to summarize past progress and identify the challenges ahead. This paper provides an extensive review of deep-learning-based breast cancer imaging research, covering studies on mammograms, ultrasound, magnetic resonance imaging, and digital pathology images over the past decade. The major deep learning methods and their applications in imaging-based screening, diagnosis, treatment-response prediction, and prognosis are elaborated and discussed. Drawing on the findings of this survey, we present a comprehensive discussion of the challenges and potential avenues for future research in deep-learning-based breast cancer imaging.
8
Rai HM, Yoo J, Dashkevych S. Transformative Advances in AI for Precise Cancer Detection: A Comprehensive Review of Non-Invasive Techniques. Archives of Computational Methods in Engineering 2025. [DOI: 10.1007/s11831-024-10219-y] [Received: 03/07/2024] [Accepted: 12/07/2024] [Indexed: 03/02/2025]
9
Jannatdoust P, Valizadeh P, Saeedi N, Valizadeh G, Salari HM, Saligheh Rad H, Gity M. Computer-Aided Detection (CADe) and Segmentation Methods for Breast Cancer Using Magnetic Resonance Imaging (MRI). J Magn Reson Imaging 2025. [PMID: 39781684 DOI: 10.1002/jmri.29687] [Received: 03/24/2024] [Revised: 11/30/2024] [Accepted: 12/02/2024] [Indexed: 01/12/2025]
Abstract
Breast cancer continues to be a major health concern, and early detection is vital for improving survival rates. Magnetic resonance imaging (MRI) is a key tool due to its high sensitivity for invasive breast cancers. Computer-aided detection (CADe) systems enhance the effectiveness of MRI by identifying potential lesions, helping radiologists focus on areas of interest, extracting quantitative features, and integrating with computer-aided diagnosis (CADx) pipelines. This review provides a comprehensive overview of the current state of CADe systems in breast MRI, focusing on the technical details of pipelines and segmentation models, including classical intensity-based methods, supervised and unsupervised machine learning (ML) approaches, and the latest deep learning (DL) architectures. It highlights recent advancements from traditional algorithms to sophisticated DL models such as U-Nets, emphasizing CADe implementations for multi-parametric MRI acquisitions. Despite these advancements, CADe systems face challenges such as variable false-positive and false-negative rates, the complexity of interpreting extensive imaging data, variability in system performance, and a lack of large-scale studies and multicentric models, limiting generalizability and suitability for clinical implementation. Technical issues, including image artefacts and the need for reproducible and explainable detection algorithms, remain significant hurdles. Future directions emphasize developing more robust and generalizable algorithms, integrating explainable AI to improve transparency and trust among clinicians, developing multi-purpose AI systems, and incorporating large language models to enhance diagnostic reporting and patient management. Additionally, efforts to standardize and streamline MRI protocols aim to increase accessibility and reduce costs, optimizing the use of CADe systems in clinical practice. LEVEL OF EVIDENCE: NA. TECHNICAL EFFICACY: Stage 2.
Affiliation(s)
- Payam Jannatdoust
- School of Medicine, Tehran University of Medical Science, Tehran, Iran
- Parya Valizadeh
- School of Medicine, Tehran University of Medical Science, Tehran, Iran
- Nikoo Saeedi
- Student Research Committee, Islamic Azad University, Mashhad Branch, Mashhad, Iran
- Gelareh Valizadeh
- Quantitative MR Imaging and Spectroscopy Group (QMISG), Tehran University of Medical Sciences, Tehran, Iran
- Hanieh Mobarak Salari
- Quantitative MR Imaging and Spectroscopy Group (QMISG), Tehran University of Medical Sciences, Tehran, Iran
- Hamidreza Saligheh Rad
- Quantitative MR Imaging and Spectroscopy Group (QMISG), Tehran University of Medical Sciences, Tehran, Iran
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran
- Masoumeh Gity
- Advanced Diagnostic and Interventional Radiology Research Center, Tehran University of Medical Sciences, Tehran, Iran
10
Singh S, Healy NA. The top 100 most-cited articles on artificial intelligence in breast radiology: a bibliometric analysis. Insights Imaging 2024; 15:297. [PMID: 39666106 PMCID: PMC11638451 DOI: 10.1186/s13244-024-01869-4]
Abstract
INTRODUCTION Artificial intelligence (AI) in radiology is a rapidly evolving field. In breast imaging, AI has already been applied in a real-world setting and multiple studies have been conducted in the area. The aim of this analysis is to identify the most influential publications on the topic of artificial intelligence in breast imaging. METHODS A retrospective bibliometric analysis was conducted on artificial intelligence in breast radiology using the Web of Science database. The search strategy involved searching for the keywords 'breast radiology' or 'breast imaging' and the various keywords associated with AI such as 'deep learning', 'machine learning,' and 'neural networks'. RESULTS From the top 100 list, the number of citations per article ranged from 30 to 346 (average 85). The highest cited article, titled 'Artificial Neural Networks In Mammography-Application To Decision-Making In The Diagnosis Of Breast-Cancer', was published in Radiology in 1993. Eighty-three of the articles were published in the last 10 years. The journal with the greatest number of articles was Radiology (n = 22). The most common country of origin was the United States (n = 51). Commonly occurring topics published were the use of deep learning models for breast cancer detection in mammography or ultrasound, radiomics in breast cancer, and the use of AI for breast cancer risk prediction. CONCLUSION This study provides a comprehensive analysis of the top 100 most-cited papers on the subject of artificial intelligence in breast radiology and discusses the current most influential papers in the field. CLINICAL RELEVANCE STATEMENT This article provides a concise summary of the top 100 most-cited articles in the field of artificial intelligence in breast radiology. It discusses the most impactful articles and explores the recent trends and topics of research in the field. KEY POINTS Multiple studies have been conducted on AI in breast radiology. The most-cited article was published in the journal Radiology in 1993. This study highlights influential articles and topics on AI in breast radiology.
Affiliation(s)
- Sneha Singh
- Department of Radiology, Royal College of Surgeons in Ireland, Dublin, Ireland.
- Beaumont Breast Centre, Beaumont Hospital, Dublin, Ireland.
- Nuala A Healy
- Department of Radiology, Royal College of Surgeons in Ireland, Dublin, Ireland
- Beaumont Breast Centre, Beaumont Hospital, Dublin, Ireland
- Department of Radiology, University of Cambridge, Cambridge, United Kingdom
11
Gui H, Jiao H, Li L, Jiang X, Su T, Pang Z. Breast Tumor Detection and Diagnosis Using an Improved Faster R-CNN in DCE-MRI. Bioengineering (Basel) 2024; 11:1217. [PMID: 39768035 PMCID: PMC11673413 DOI: 10.3390/bioengineering11121217]
Abstract
AI-based breast cancer detection can improve the sensitivity and specificity of detection, especially for small lesions, which has clinical value for early detection and treatment and thus for reducing mortality. Two-stage detection networks perform well; however, they adopt an imprecise ROI during classification, which can easily include surrounding tumor tissue, and fuzzy noise is a significant contributor to false positives. We adopted Faster R-CNN as the architecture, introduced ROI Align to minimize quantization errors and a feature pyramid network (FPN) to extract features at different resolutions, and added a bounding-box quadratic-regression feature-map extraction network and three convolutional layers to reduce interference from the tissue surrounding the tumor and to extract more accurate, deeper feature maps. Our approach outperformed Faster R-CNN, Mask R-CNN, and YOLOv9 in breast cancer detection across 485 internal cases, achieving superior mAP, sensitivity, and false-positive rate ((0.752, 0.950, 0.133) vs. (0.711, 0.950, 0.200) vs. (0.718, 0.880, 0.120) vs. (0.658, 0.680, 0.405)), which represents a 38.5% reduction in false positives compared to manual detection. In a public dataset of 220 cases, our model also demonstrated the best performance. It showed improved sensitivity and specificity, effectively assisting doctors in diagnosing cancer.
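As a reading aid for the quantization error this abstract refers to: classic ROI pooling rounds box coordinates to the feature-map grid, while ROI Align samples each output bin at its fractional centre with bilinear interpolation. The following is an illustrative NumPy sketch of that sampling idea (one sample per bin, single-channel feature map), not the authors' implementation:

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Bilinearly interpolate feat (H, W) at a continuous point (y, x)."""
    H, W = feat.shape
    y0 = min(max(int(np.floor(y)), 0), H - 1)
    x0 = min(max(int(np.floor(x)), 0), W - 1)
    y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
    dy, dx = y - y0, x - x0
    return (feat[y0, x0] * (1 - dy) * (1 - dx) + feat[y0, x1] * (1 - dy) * dx
            + feat[y1, x0] * dy * (1 - dx) + feat[y1, x1] * dy * dx)

def roi_align(feat, box, out_size=2):
    """Pool an ROI box = (y1, x1, y2, x2), in continuous feature-map
    coordinates, to an out_size x out_size grid. No coordinate is ever
    rounded, which is the key difference from classic ROI pooling."""
    y1, x1, y2, x2 = box
    bh, bw = (y2 - y1) / out_size, (x2 - x1) / out_size
    out = np.empty((out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            cy = y1 + (i + 0.5) * bh   # bin centre, still fractional
            cx = x1 + (j + 0.5) * bw
            out[i, j] = bilinear_sample(feat, cy, cx)
    return out
```

Production detectors (e.g. torchvision's `roi_align`) sample several points per bin and average them, but the interpolation principle is the same.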
Affiliation(s)
- Haitian Gui
- School of Biomedical Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China
- Han Jiao
- School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou 510006, China
- Li Li
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Department of Medical Imaging, Sun Yat-sen University Cancer Center (SYSUCC), Guangzhou 510060, China
- Xinhua Jiang
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Department of Medical Imaging, Sun Yat-sen University Cancer Center (SYSUCC), Guangzhou 510060, China
- Tao Su
- School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou 510006, China
- Zhiyong Pang
- School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou 510006, China
12
Rai HM, Yoo J, Razaque A. A depth analysis of recent innovations in non-invasive techniques using artificial intelligence approach for cancer prediction. Med Biol Eng Comput 2024; 62:3555-3580. [PMID: 39012415 DOI: 10.1007/s11517-024-03158-0]
Abstract
The fight against cancer, a relentless global health crisis, emphasizes the urgency for efficient and automated early detection methods. To address this critical need, this review assesses recent advances in non-invasive cancer prediction techniques, comparing conventional machine learning (CML) and deep neural networks (DNNs). Focusing on seven major cancers, we analyze 310 publications spanning the years 2018 to 2024, taking detection accuracy as the key metric to identify the most effective predictive models, highlight critical gaps in current methodologies, and suggest directions for future research. We further delve into factors like datasets, features, and modalities to gain a comprehensive understanding of each approach's performance. Separate review tables for each cancer type and approach facilitate comparisons between top performers (accuracy exceeding 99%) and low performers (65.83% to 85.8%). Our exploration of public databases and commonly used classifiers revealed that optimal combinations of features, datasets, and models can achieve up to 100% accuracy for both CML and DNN. However, significant variations in accuracy (up to 35%) were observed, particularly when optimization was lacking. Notably, colorectal cancer exhibited the lowest accuracy (DNN 69%, CML 65.83%). A five-point comparative analysis (best/worst models, performance gap, average accuracy, and research trends) revealed that while DNN research is gaining momentum, CML approaches remain competitive, even outperforming DNNs in some cases. This study presents an in-depth comparative analysis of CML and DNN techniques for cancer detection. This knowledge can inform future research directions and contribute to the development of increasingly accurate and reliable cancer detection tools.
Affiliation(s)
- Hari Mohan Rai
- School of Computing, Gachon University, 1342 Seongnam-daero, Sujeong-Gu, Seongnam-Si, 13120, Gyeonggi-Do, Republic of Korea.
- Joon Yoo
- School of Computing, Gachon University, 1342 Seongnam-daero, Sujeong-Gu, Seongnam-Si, 13120, Gyeonggi-Do, Republic of Korea
- Abdul Razaque
- Department of Cyber Security, Information Processing and Storage, Satbayev University, Almaty, Kazakhstan
13
Gullo RL, Brunekreef J, Marcus E, Han LK, Eskreis-Winkler S, Thakur SB, Mann R, Lipman KG, Teuwen J, Pinker K. AI Applications to Breast MRI: Today and Tomorrow. J Magn Reson Imaging 2024; 60:2290-2308. [PMID: 38581127 PMCID: PMC11452568 DOI: 10.1002/jmri.29358]
Abstract
In breast imaging, there is an unrelenting increase in the demand for breast imaging services, partly explained by continuously expanding imaging indications in breast diagnosis and treatment. As the human workforce providing these services is not growing at the same rate, the implementation of artificial intelligence (AI) in breast imaging has gained significant momentum to maximize workflow efficiency and increase productivity while concurrently improving diagnostic accuracy and patient outcomes. Thus far, the implementation of AI in breast imaging is at the most advanced stage with mammography and digital breast tomosynthesis techniques, followed by ultrasound, whereas the implementation of AI in breast magnetic resonance imaging (MRI) is not moving along as rapidly due to the complexity of MRI examinations and fewer available datasets. Nevertheless, there is persisting interest in AI-enhanced breast MRI applications, even as the use of, and indications for, breast MRI continue to expand. This review presents an overview of the basic concepts of AI imaging analysis and subsequently reviews the use cases for AI-enhanced MRI interpretation, that is, breast MRI triaging and lesion detection, lesion classification, prediction of treatment response, risk assessment, and image quality. Finally, it provides an outlook on the barriers and facilitators for the adoption of AI in breast MRI. LEVEL OF EVIDENCE: 5 TECHNICAL EFFICACY: Stage 6.
Affiliation(s)
- Roberto Lo Gullo
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Joren Brunekreef
- AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Eric Marcus
- AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Lynn K Han
- Weill Cornell Medical College, New York-Presbyterian Hospital, New York, NY, USA
- Sarah Eskreis-Winkler
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Sunitha B Thakur
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Ritse Mann
- AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, the Netherlands
- Kevin Groot Lipman
- AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, the Netherlands
- Jonas Teuwen
- AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, the Netherlands
- Katja Pinker
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
14
Kadoi T, Mizuno K, Ishida S, Onozato S, Washiyama H, Uehara Y, Saito Y, Okamoto K, Sakamoto S, Sugimoto Y, Terayama K. Development of a method for estimating asari clam distribution by combining three-dimensional acoustic coring system and deep neural network. Sci Rep 2024; 14:26467. [PMID: 39488638 PMCID: PMC11531588 DOI: 10.1038/s41598-024-77893-7]
Abstract
Developing non-contact, non-destructive monitoring methods for marine life is crucial for sustainable resource management. Recent advancements in monitoring technologies and machine-learning analysis have enhanced underwater image and acoustic data acquisition. Systems to obtain 3D acoustic data from beneath the seafloor are being developed; however, manual analysis of large 3D datasets is challenging. Therefore, an automatic method for analyzing benthic resource distribution is needed. This study developed a system to estimate benthic resource distribution non-destructively by combining high-precision habitat data acquisition using high-frequency ultrasonic waves with prediction models based on a 3D convolutional neural network (3D-CNN). The system estimated the distribution of asari clams (Ruditapes philippinarum) in Lake Hamana, Japan. Clam presence and count in a voxel were successfully estimated with an ROC-AUC of 0.9 and a macro-average ROC-AUC of 0.8, respectively. This system visualized clam distribution and estimated numbers, demonstrating its effectiveness for quantifying marine resources beneath the seafloor.
Affiliation(s)
- Tokimu Kadoi
- Graduate School of Medical Life Science, Yokohama City University, 1-7-29, Suehiro-cho, Tsurumi-ku, Yokohama, Kanagawa, 230-0045, Japan
- Katsunori Mizuno
- Department of Environment Systems, Graduate School of Frontier Sciences, The University of Tokyo, Kashiwanoha, Kashiwa, Chiba, 277-8561, Japan
- Shoichi Ishida
- Graduate School of Medical Life Science, Yokohama City University, 1-7-29, Suehiro-cho, Tsurumi-ku, Yokohama, Kanagawa, 230-0045, Japan
- Shogo Onozato
- Department of Environment Systems, Graduate School of Frontier Sciences, The University of Tokyo, Kashiwanoha, Kashiwa, Chiba, 277-8561, Japan
- Hirofumi Washiyama
- Shizuoka Prefectural Research Institute of Fishery and Ocean, 5005-3, Bentenjima, Maisaka-cho, Chūō-ku, Hamamatsu-shi, Shizuoka, 431-0214, Japan
- Yohei Uehara
- Shizuoka Prefectural Research Institute of Fishery and Ocean, 5005-3, Bentenjima, Maisaka-cho, Chūō-ku, Hamamatsu-shi, Shizuoka, 431-0214, Japan
- Yoshimoto Saito
- Marine Open Innovation Institute, 2nd Floor, Shimizu Marine Building, 9-25, Hinode-cho, Shimizu-ku, Shizuoka-shi, Shizuoka, 424-0922, Japan
- Kazutoshi Okamoto
- Marine Open Innovation Institute, 2nd Floor, Shimizu Marine Building, 9-25, Hinode-cho, Shimizu-ku, Shizuoka-shi, Shizuoka, 424-0922, Japan
- Shingo Sakamoto
- Windy Network Corporation, 1-19-4, Higashi-Hongo, Shimoda-shi, Shizuoka, 415-0035, Japan
- Yusuke Sugimoto
- Windy Network Corporation, 1-19-4, Higashi-Hongo, Shimoda-shi, Shizuoka, 415-0035, Japan
- Kei Terayama
- Graduate School of Medical Life Science, Yokohama City University, 1-7-29, Suehiro-cho, Tsurumi-ku, Yokohama, Kanagawa, 230-0045, Japan
- RIKEN Center for Advanced Intelligence Project, 1-4-1, Nihonbashi, Chuo-ku, Tokyo, 103-0027, Japan
- MDX Research Center for Element Strategy, Tokyo Institute of Technology, 4259, Nagatsuta-cho, Midori-ku, Yokohama, Kanagawa, 226-8501, Japan
15
Talaat FM, Gamel SA, El-Balka RM, Shehata M, ZainEldin H. Grad-CAM Enabled Breast Cancer Classification with a 3D Inception-ResNet V2: Empowering Radiologists with Explainable Insights. Cancers (Basel) 2024; 16:3668. [PMID: 39518105 PMCID: PMC11544836 DOI: 10.3390/cancers16213668]
Abstract
Breast cancer (BCa) poses a severe threat to women's health worldwide as it is the most frequently diagnosed type of cancer and the primary cause of death for female patients. The biopsy procedure remains the gold standard for accurate and effective diagnosis of BCa. However, its adverse effects, such as invasiveness, bleeding, infection, and reporting time, keep this procedure as a last resort for diagnosis. A mammogram is considered the routine noninvasive imaging-based procedure for diagnosing BCa, mitigating the need for biopsies; however, it might be prone to subjectivity depending on the radiologist's experience. Therefore, we propose a novel, mammogram image-based BCa explainable AI (BCaXAI) model with a deep learning-based framework for precise, noninvasive, objective, and timely diagnosis of BCa. The proposed BCaXAI leverages the Inception-ResNet V2 architecture, where the integration of explainable AI components, such as Grad-CAM, provides radiologists with valuable visual insights into the model's decision-making process, fostering trust and confidence in the AI-based system. Using the DDSM and CBIS-DDSM mammogram datasets, BCaXAI achieved exceptional performance, surpassing traditional models such as ResNet50 and VGG16. The model demonstrated superior accuracy (98.53%), recall (98.53%), precision (98.40%), F1-score (98.43%), and AUROC (0.9933), highlighting its effectiveness in distinguishing between benign and malignant cases. These promising results could alleviate the diagnostic subjectivity that might arise from experience variability between radiologists, as well as minimize the need for repetitive biopsy procedures.
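The Grad-CAM component this abstract highlights reduces to a few lines once a trained network has produced the last convolutional layer's activations and the gradients of the target class score with respect to them. A generic, framework-free NumPy sketch (the inputs would come from the trained Inception-ResNet V2, which is not reproduced here):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from last-conv activations (C, H, W) and the
    gradients of the target class score w.r.t. them (C, H, W)."""
    weights = gradients.mean(axis=(1, 2))             # global-average-pool the gradients
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels -> (H, W)
    cam = np.maximum(cam, 0)                          # ReLU: keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalise to [0, 1] for overlay
    return cam
```

The resulting (H, W) map is upsampled to the input resolution and overlaid on the mammogram, which is what gives radiologists the visual localisation cue.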
Affiliation(s)
- Fatma M. Talaat
- Faculty of Artificial Intelligence, Kafrelsheikh University, Kafrelsheikh 33511, Egypt
- Faculty of Computer Science & Engineering, New Mansoura University, Gamasa 35712, Egypt
- Samah A. Gamel
- Electronics and Communication Engineering Department, Faculty of Engineering, Horus University Egypt, Damietta 34518, Egypt
- Rana Mohamed El-Balka
- Computers and Control Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
- Electronic Engineering Department, Higher Institute of Engineering and Technology, Manzala 35642, Egypt
- Mohamed Shehata
- Computers and Control Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
- Department of Bioengineering, Speed School of Engineering, University of Louisville, Louisville, KY 40292, USA
- Hanaa ZainEldin
- Computers and Control Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
16
Qi YJ, Su GH, You C, Zhang X, Xiao Y, Jiang YZ, Shao ZM. Radiomics in breast cancer: Current advances and future directions. Cell Rep Med 2024; 5:101719. [PMID: 39293402 PMCID: PMC11528234 DOI: 10.1016/j.xcrm.2024.101719]
Abstract
Breast cancer is a common disease and a major health concern for women worldwide. During the diagnosis and treatment of breast cancer, medical imaging plays an essential role, but its interpretation relies on radiologists or clinical doctors. Radiomics can extract high-throughput quantitative imaging features from images of various modalities via traditional machine learning or deep learning methods following a series of standard processes. Radiomic models may therefore aid various processes in clinical practice. In this review, we summarize the current utilization of radiomics for predicting clinicopathological indices and clinical outcomes. We also focus on radio-multi-omics studies that bridge the gap between phenotypic and microscopic scale information. Acknowledging the deficiencies that currently hinder the clinical adoption of radiomic models, we discuss the underlying causes of this situation and propose future directions for advancing radiomics in breast cancer research.
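To make "high-throughput quantitative imaging features" concrete, here is a hedged NumPy sketch of a handful of first-order radiomic features computed over a lesion mask. Real pipelines (e.g. IBSI-compliant toolkits) compute hundreds of standardized first-order, shape, and texture features; this toy only illustrates the first-order idea:

```python
import numpy as np

def first_order_features(image, mask):
    """A few first-order radiomic features of the voxels inside a lesion mask
    (integer-valued intensities assumed for the histogram-based entropy)."""
    vals = image[mask > 0].astype(float)
    p = np.bincount(vals.astype(int)) / vals.size     # discrete intensity histogram
    p = p[p > 0]
    return {
        "mean": vals.mean(),
        "variance": vals.var(),
        "skewness": ((vals - vals.mean()) ** 3).mean() / (vals.std() ** 3 + 1e-12),
        "entropy": -(p * np.log2(p)).sum(),           # Shannon entropy of intensities
    }
```

Feature vectors like this, extracted per lesion, are what downstream machine-learning models consume when predicting clinicopathological indices.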
Affiliation(s)
- Ying-Jia Qi
- Key Laboratory of Breast Cancer in Shanghai, Department of Breast Surgery, Fudan University Shanghai Cancer Center, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Guan-Hua Su
- Key Laboratory of Breast Cancer in Shanghai, Department of Breast Surgery, Fudan University Shanghai Cancer Center, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Chao You
- Department of Radiology, Fudan University Shanghai Cancer Center, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Xu Zhang
- Department of Radiology, Fudan University Shanghai Cancer Center, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Yi Xiao
- Key Laboratory of Breast Cancer in Shanghai, Department of Breast Surgery, Fudan University Shanghai Cancer Center, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Yi-Zhou Jiang
- Key Laboratory of Breast Cancer in Shanghai, Department of Breast Surgery, Fudan University Shanghai Cancer Center, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Zhi-Ming Shao
- Key Laboratory of Breast Cancer in Shanghai, Department of Breast Surgery, Fudan University Shanghai Cancer Center, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
17
Arslan M, Asim M, Sattar H, Khan A, Thoppil Ali F, Zehra M, Talluri K. Role of Radiology in the Diagnosis and Treatment of Breast Cancer in Women: A Comprehensive Review. Cureus 2024; 16:e70097. [PMID: 39449897 PMCID: PMC11500669 DOI: 10.7759/cureus.70097]
Abstract
Breast cancer remains a leading cause of morbidity and mortality among women worldwide. Early detection and precise diagnosis are critical for effective treatment and improved patient outcomes. This review explores the evolving role of radiology in the diagnosis and treatment of breast cancer, highlighting advancements in imaging technologies and the integration of artificial intelligence (AI). Traditional imaging modalities such as mammography, ultrasound, and magnetic resonance imaging have been the cornerstone of breast cancer diagnostics, with each modality offering unique advantages. The advent of radiomics, which involves extracting quantitative data from medical images, has further augmented the diagnostic capabilities of these modalities. AI, particularly deep learning algorithms, has shown potential in improving diagnostic accuracy and reducing observer variability across imaging modalities. AI-driven tools are increasingly being integrated into clinical workflows to assist in image interpretation, lesion classification, and treatment planning. Additionally, radiology plays a crucial role in guiding treatment decisions, particularly in the context of image-guided radiotherapy and monitoring response to neoadjuvant chemotherapy. The review also discusses the emerging field of theranostics, where diagnostic imaging is combined with therapeutic interventions to provide personalized cancer care. Despite these advancements, challenges such as the need for large annotated datasets and the integration of AI into clinical practice remain. The review concludes that while the role of radiology in breast cancer management is rapidly evolving, further research is required to fully realize the potential of these technologies in improving patient outcomes.
Affiliation(s)
- Muhammad Asim
- Emergency Medicine, Royal Free Hospital, London, GBR
- Hina Sattar
- Medicine, Dow University of Health Sciences, Karachi, PAK
- Anita Khan
- Medicine, Khyber Girls Medical College, Peshawar, PAK
- Muneeza Zehra
- Internal Medicine, Karachi Medical and Dental College, Karachi, PAK
- Keerthi Talluri
- General Medicine, GSL (Ganni Subba Lakshmi garu) Medical College, Rajahmundry, IND
18
Sureshkumar V, Prasad RSN, Balasubramaniam S, Jagannathan D, Daniel J, Dhanasekaran S. Breast Cancer Detection and Analytics Using Hybrid CNN and Extreme Learning Machine. J Pers Med 2024; 14:792. [PMID: 39201984 PMCID: PMC11355507 DOI: 10.3390/jpm14080792]
Abstract
Early detection of breast cancer is essential for increasing survival rates, as it is one of the primary causes of death for women globally. Mammograms are extensively used by physicians for diagnosis, but selecting appropriate algorithms for image enhancement, segmentation, feature extraction, and classification remains a significant research challenge. This paper presents a computer-aided diagnosis (CAD)-based hybrid model combining a convolutional neural network (CNN) with a pruned, ensembled extreme learning machine (HCPELM) to enhance breast cancer detection, segmentation, feature extraction, and classification. After artifacts and pectoral muscles are removed, the model employs the rectified linear unit (ReLU) activation function to enhance data analytics, and hybridizing the ELM with the CNN improves feature extraction. The hybrid elements are convolutional and fully connected layers. Convolutional layers extract spatial features like edges, textures, and more complex features in deeper layers. The fully connected layers take these features and combine them in a non-linear manner to perform the final classification. The ELM performs the classification and recognition tasks, aiming for state-of-the-art performance. This hybrid classifier is used for transfer learning by freezing certain layers and modifying the architecture to reduce parameters, easing cancer detection. The HCPELM classifier was trained using the MIAS database and evaluated against benchmark methods. It achieved a breast image recognition accuracy of 86%, outperforming benchmark deep learning models. HCPELM demonstrates superior performance in early detection and diagnosis, thus aiding healthcare practitioners in breast cancer diagnosis.
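The core ELM idea named in this abstract, random untrained hidden weights plus a closed-form least-squares solve for the output weights, fits in a few lines. A minimal single-ELM NumPy sketch (no pruning, ensembling, or CNN front-end, so it only illustrates the classifier stage of the hybrid):

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random hidden layer,
    output weights solved in closed form (no backpropagation)."""

    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid activations

    def fit(self, X, y):
        n_features = X.shape[1]
        # Input weights and biases are random and never trained.
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Output weights: one-shot least-squares via the Moore-Penrose pseudoinverse.
        self.beta = np.linalg.pinv(H) @ y
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta
```

With enough random hidden units the least-squares solution fits small training sets exactly, which is why ELMs train orders of magnitude faster than backpropagated networks at the cost of wider hidden layers.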
Affiliation(s)
- Vidhushavarshini Sureshkumar
- Department of Computer Science and Engineering, SRM Institute of Science and Technology, Vadapalani, Chennai 600026, India
- Dhayanithi Jagannathan
- Department of Computer Science and Engineering, Sona College of Technology, Salem 636005, India
- Jayanthi Daniel
- Department of Electronics and Communication Engineering, Rajalakshmi Engineering College, Chennai 602105, India
19
Duan W, Wu Z, Zhu H, Zhu Z, Liu X, Shu Y, Zhu X, Wu J, Peng D. Deep learning modeling using mammography images for predicting estrogen receptor status in breast cancer. Am J Transl Res 2024; 16:2411-2422. [PMID: 39006260 PMCID: PMC11236640 DOI: 10.62347/puhr6185]
Abstract
BACKGROUND The estrogen receptor (ER) serves as a pivotal indicator for assessing endocrine therapy efficacy and breast cancer prognosis. Invasive biopsy is a conventional approach for appraising ER expression levels, but it bears disadvantages due to tumor heterogeneity. To address the issue, a deep learning model leveraging mammography images was developed in this study for accurate evaluation of ER status in patients with breast cancer. OBJECTIVES To predict the ER status in breast cancer patients with a newly developed deep learning model leveraging mammography images. MATERIALS AND METHODS Datasets comprising preoperative mammography images, ER expression levels, and clinical data spanning from October 2016 to October 2021 were retrospectively collected from 358 patients diagnosed with invasive ductal carcinoma. Following collection, these datasets were divided into a training dataset (n = 257) and a testing dataset (n = 101). Subsequently, a deep learning prediction model, referred to as IP-SE-DResNet model, was developed utilizing two deep residual networks along with the Squeeze-and-Excitation attention mechanism. This model was tailored to forecast the ER status in breast cancer patients utilizing mammography images from both craniocaudal view and mediolateral oblique view. Performance measurements including prediction accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curves (AUCs) were employed to assess the effectiveness of the model. RESULTS In the training dataset, the AUCs for the IP-SE-DResNet model utilizing mammography images from the craniocaudal view, mediolateral oblique view, and the combined images from both views, were 0.849 (95% CIs: 0.809-0.868), 0.858 (95% CIs: 0.813-0.872), and 0.895 (95% CIs: 0.866-0.913), respectively. Correspondingly, the AUCs for these three image categories in the testing dataset were 0.835 (95% CIs: 0.790-0.887), 0.746 (95% CIs: 0.793-0.889), and 0.886 (95% CIs: 0.809-0.934), respectively. A comprehensive comparison between performance measurements underscored a substantial enhancement achieved by the proposed IP-SE-DResNet model in contrast to a traditional radiomics model employing the naive Bayesian classifier. For the latter, the AUCs stood at only 0.614 (95% CIs: 0.594-0.638) in the training dataset and 0.613 (95% CIs: 0.587-0.654) in the testing dataset, both utilizing a combination of mammography images from the craniocaudal and mediolateral oblique views. CONCLUSIONS The proposed IP-SE-DResNet model presents a potent and non-invasive approach for predicting ER status in breast cancer patients, potentially enhancing the efficiency and diagnostic precision of radiologists.
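The Squeeze-and-Excitation mechanism named in the IP-SE-DResNet model can be sketched independently of the residual backbone. A NumPy toy version for a single feature map, with the usual squeeze/excite/recalibrate steps (the weight shapes follow the standard channel-reduction design; the actual model's weights are learned, and this is not the authors' code):

```python
import numpy as np

def squeeze_excite(x, w1, w2):
    """Squeeze-and-Excitation over a feature map x of shape (C, H, W).
    w1: (C, C//r) reduction weights; w2: (C//r, C) expansion weights."""
    z = x.mean(axis=(1, 2))              # squeeze: global average pool -> (C,)
    s = np.maximum(z @ w1, 0)            # excitation: FC + ReLU bottleneck
    s = 1.0 / (1.0 + np.exp(-(s @ w2)))  # FC + sigmoid -> per-channel gate in (0, 1)
    return x * s[:, None, None]          # recalibrate: rescale each channel
```

With zero weights the gate is sigmoid(0) = 0.5 for every channel; trained weights instead learn to amplify informative channels and suppress uninformative ones, which is the attention effect the abstract relies on.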
Affiliation(s)
- Wenfeng Duan
- Department of Radiology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Zhiheng Wu
- School of Information Engineering, Nanchang University, Nanchang, Jiangxi, China
- Huijun Zhu
- School of Information Engineering, Nanchang University, Nanchang, Jiangxi, China
- Zhiyun Zhu
- Department of Cardiology, Jiangxi Provincial People's Hospital, Nanchang, Jiangxi, China
- Xiang Liu
- Department of Radiology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Yongqiang Shu
- Department of Radiology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Xishun Zhu
- School of Advanced Manufacturing, Nanchang University, Nanchang, Jiangxi, China
- Jianhua Wu
- School of Information Engineering, Nanchang University, Nanchang, Jiangxi, China
- Dechang Peng
- Department of Radiology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China

20
Cong C, Li X, Zhang C, Zhang J, Sun K, Liu L, Ambale-Venkatesh B, Chen X, Wang Y. MRI-Based Breast Cancer Classification and Localization by Multiparametric Feature Extraction and Combination Using Deep Learning. J Magn Reson Imaging 2024; 59:148-161. [PMID: 37013422 DOI: 10.1002/jmri.28713] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2023] [Revised: 03/16/2023] [Accepted: 03/16/2023] [Indexed: 04/05/2023] Open
Abstract
BACKGROUND Deep learning (DL) has been reported to be feasible in breast MRI. However, the effectiveness of DL methods across mpMRI combinations for breast cancer detection has not been well investigated. PURPOSE To implement a DL method for breast cancer classification and detection using feature extraction and combination from multiple sequences. STUDY TYPE Retrospective. POPULATION A total of 569 local cases as the internal cohort (50.2 ± 11.2 years; 100% female), divided among training (218), validation (73), and testing (278); 125 cases from a public dataset as the external cohort (53.6 ± 11.5 years; 100% female). FIELD STRENGTH/SEQUENCE T1-weighted imaging and dynamic contrast-enhanced MRI (DCE-MRI) with gradient echo sequences, T2-weighted imaging (T2WI) with spin-echo sequences, and diffusion-weighted imaging with a single-shot echo-planar sequence, at 1.5 T. ASSESSMENT A convolutional neural network and long short-term memory cascaded network was implemented for lesion classification, with histopathology as the ground truth for the malignant and benign categories and contralateral breasts as the healthy category in the internal/external cohorts. BI-RADS categories were assessed by three independent radiologists for comparison, and class activation maps were employed for lesion localization in the internal cohort. The classification and localization performances were assessed with DCE-MRI and non-DCE sequences, respectively. STATISTICAL TESTS Sensitivity, specificity, area under the curve (AUC), DeLong test, and Cohen's kappa for lesion classification. Sensitivity and mean squared error for localization. A P-value <0.05 was considered statistically significant. RESULTS With the optimized mpMRI combinations, lesion classification achieved an AUC of 0.98/0.91 and a sensitivity of 0.96/0.83 in the internal/external cohorts, respectively. Without DCE-MRI, the DL-based method was superior to the radiologists' readings (AUC 0.96 vs. 0.90).
The lesion localization achieved sensitivities of 0.97/0.93 with DCE-MRI/T2WI alone, respectively. DATA CONCLUSION The DL method achieved high accuracy for lesion detection in the internal/external cohorts. The classification performance with a contrast agent-free combination is comparable to DCE-MRI alone and the radiologists' reading in AUC and sensitivity. EVIDENCE LEVEL 3. TECHNICAL EFFICACY Stage 2.
Affiliation(s)
- Chao Cong
- Department of Radiology, Daping Hospital, Army Medical University, Chongqing, China
- School of Electrical and Electronic Engineering, Chongqing University of Technology, Chongqing, China
- Department of Nuclear Medicine, Daping Hospital, Army Medical University, Chongqing, China
- Xiaoguang Li
- Department of Radiology, Daping Hospital, Army Medical University, Chongqing, China
- Chunlai Zhang
- Department of Radiology, Daping Hospital, Army Medical University, Chongqing, China
- Jing Zhang
- Department of Radiology, Daping Hospital, Army Medical University, Chongqing, China
- Kaixiang Sun
- School of Electrical and Electronic Engineering, Chongqing University of Technology, Chongqing, China
- Lianluyi Liu
- School of Electrical and Electronic Engineering, Chongqing University of Technology, Chongqing, China
- Xiao Chen
- Department of Nuclear Medicine, Daping Hospital, Army Medical University, Chongqing, China
- Yi Wang
- Department of Nuclear Medicine, Daping Hospital, Army Medical University, Chongqing, China

21
Yang H, Yuwen C, Cheng X, Fan H, Wang X, Ge Z. Deep Learning: A Primer for Neurosurgeons. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2024; 1462:39-70. [PMID: 39523259 DOI: 10.1007/978-3-031-64892-2_4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2024]
Abstract
This chapter explores the transformative impact of deep learning (DL) on neurosurgery, elucidating its pivotal role in enhancing diagnostic performance, surgical planning, execution, and postoperative assessment. It delves into various deep learning architectures, including convolutional and recurrent neural networks, and their applications in analyzing neuroimaging data for brain tumors, spinal cord injuries, and other neurological conditions. The integration of DL in neurosurgical robotics and the potential for fully autonomous surgical procedures are discussed, highlighting advancements in surgical precision and patient outcomes. The chapter also examines the challenges of data privacy, quality, and interpretability that accompany the implementation of DL in neurosurgery. The potential for DL to revolutionize neurosurgical practices through improved diagnostics, patient-specific surgical planning, and the advent of intelligent surgical robots is underscored, promising a future where technology and healthcare converge to offer unprecedented solutions in neurosurgery.
Affiliation(s)
- Hongxi Yang
- Department of Data Science and Artificial Intelligence (DSAI), Faculty of Information Technology, Monash University, Clayton, VIC, Australia
- Chang Yuwen
- Monash Suzhou Research Institute, Monash University, Suzhou, China
- Xuelian Cheng
- Department of Data Science and Artificial Intelligence (DSAI), Faculty of Information Technology, Monash University, Clayton, VIC, Australia
- Monash Suzhou Research Institute, Monash University, Suzhou, China
- Hengwei Fan
- Shukun (Beijing) Technology Co, Beijing, China
- Xin Wang
- Shukun (Beijing) Technology Co, Beijing, China
- Zongyuan Ge
- Department of Data Science and Artificial Intelligence (DSAI), Faculty of Information Technology, Monash University, Clayton, VIC, Australia

22
Harrison P, Hasan R, Park K. State-of-the-Art of Breast Cancer Diagnosis in Medical Images via Convolutional Neural Networks (CNNs). JOURNAL OF HEALTHCARE INFORMATICS RESEARCH 2023; 7:387-432. [PMID: 37927373 PMCID: PMC10620373 DOI: 10.1007/s41666-023-00144-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2022] [Revised: 08/14/2023] [Accepted: 08/22/2023] [Indexed: 11/07/2023]
Abstract
Early detection of breast cancer is crucial for a better prognosis. Various studies have been conducted in which tumor lesions are detected and localized on images. This is a narrative review in which the studies reviewed relate to five different image modalities: histopathological, mammogram, magnetic resonance imaging (MRI), ultrasound, and computed tomography (CT) images, making it different from other reviews that cover fewer image modalities. The goal is to have the necessary information, such as pre-processing techniques and CNN-based diagnosis techniques for the five modalities, readily available in one place for future studies. Each modality has pros and cons; for example, mammograms may give a high false positive rate for radiographically dense breasts, the low soft-tissue contrast of ultrasound can result in false detections at early stages, and MRI provides a three-dimensional volumetric image but is expensive and cannot be used as a routine test. Studies were manually reviewed using particular inclusion and exclusion criteria; as a result, 91 recent studies from 2017 to 2022 that classify and detect tumor lesions on breast cancer images across the five image modalities were included. For histopathological images, the maximum accuracy achieved was around 99% and the maximum sensitivity 97.29%, using DenseNet, ResNet34, and ResNet50 architectures. For mammogram images, the maximum accuracy achieved was 96.52% using a customized CNN architecture. For MRI, the maximum accuracy achieved was 98.33% using a customized CNN architecture. For ultrasound, the maximum accuracy achieved was around 99% using DarkNet-53, ResNet-50, G-CNN, and VGG. For CT, the maximum sensitivity achieved was 96% using the Xception architecture.
Histopathological and ultrasound images achieved a higher accuracy of around 99% using ResNet34, ResNet50, DarkNet-53, G-CNN, and VGG compared to other modalities, for one or more of the following reasons: use of pre-trained architectures with pre-processing techniques, use of modified architectures with pre-processing techniques, use of two-stage CNNs, and a higher number of studies available for Artificial Intelligence (AI)/machine learning (ML) researchers to reference. One gap we found is that only a single image modality is used for CNN-based diagnosis; in the future, a multiple-image-modality approach could be used to design a CNN architecture with higher accuracy.
Affiliation(s)
- Pratibha Harrison
- Department of Computer and Information Science, University of Massachusetts Dartmouth, 285 Old Westport Rd, North Dartmouth, MA 02747, USA
- Rakib Hasan
- Department of Mechanical Engineering, Khulna University of Engineering & Technology, PhulBari Gate, Khulna 9203, Bangladesh
- Kihan Park
- Department of Mechanical Engineering, University of Massachusetts Dartmouth, 285 Old Westport Rd, North Dartmouth, MA 02747, USA

23
Ghorbian M, Ghorbian S. Usefulness of machine learning and deep learning approaches in screening and early detection of breast cancer. Heliyon 2023; 9:e22427. [PMID: 38076050 PMCID: PMC10709063 DOI: 10.1016/j.heliyon.2023.e22427] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2023] [Revised: 11/07/2023] [Accepted: 11/13/2023] [Indexed: 10/16/2024] Open
Abstract
Breast cancer (BC) is one of the most common types of cancer in women, and its prevalence is on the rise. Diagnosis of this disease in its earliest stages can be highly challenging, yet early and rapid diagnosis increases the likelihood of a patient's recovery and survival. This study presents a systematic and detailed analysis of the various ML approaches and mechanisms employed during the BC diagnosis process. Further, it provides a comprehensive and accurate overview of techniques, approaches, challenges, solutions, and important concepts related to this process, in order to give healthcare professionals and technologists a deeper understanding of new screening and diagnostic tools and approaches, and to identify new challenges and popular approaches in this field. The study therefore attempts to provide a comprehensive taxonomy of applying ML techniques to BC diagnosis, focusing on data obtained from clinical diagnostic methods. The taxonomy presented has two major components. Clinical diagnostic methods such as MRI, mammography, and hybrid methods are presented in the first part; the second part involves applying machine learning approaches such as neural networks (NN), deep learning (DL), and hybrids to the datasets from the first part. The taxonomy is then analyzed based on the implementation of ML approaches in clinical diagnosis methods. The findings demonstrate that approaches based on NN and DL are the most accurate and widely used models for BC diagnosis compared to other diagnostic techniques, and that accuracy (ACC), sensitivity (SEN), and specificity (SPE) are the most commonly used performance evaluation criteria.
Additionally, the advantages and disadvantages of using machine learning techniques, the objectives of each study (considered separately for ML technology and BC detection), and the evaluation criteria are discussed. Lastly, the study provides an overview of open and unresolved issues related to using ML for BC diagnosis, along with proposals to resolve each issue, to assist researchers and healthcare professionals.
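Since the survey above identifies accuracy (ACC), sensitivity (SEN), and specificity (SPE) as the most commonly used evaluation criteria, a short, self-contained Python sketch of how these are derived from a binary confusion matrix may help; the label vectors below are made-up toy data, not results from any cited study.

```python
def confusion_metrics(y_true, y_pred):
    """Accuracy, sensitivity (true positive rate), and specificity
    (true negative rate) from binary ground-truth and predicted labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / len(y_true)
    sen = tp / (tp + fn) if (tp + fn) else 0.0
    spe = tn / (tn + fp) if (tn + fp) else 0.0
    return acc, sen, spe

y_true = [1, 1, 1, 0, 0, 0, 1, 0]   # 1 = malignant, 0 = benign (toy data)
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
print(confusion_metrics(y_true, y_pred))  # (0.75, 0.75, 0.75)
```

Sensitivity and specificity deliberately separate the two error types, which matters in screening: a missed cancer (false negative) and an unnecessary biopsy (false positive) carry very different costs.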
Affiliation(s)
- Mohsen Ghorbian
- Department of Computer Engineering, Qom Branch, Islamic Azad University, Qom, Iran
- Saeid Ghorbian
- Department of Molecular Genetics, Ahar Branch, Islamic Azad University, Ahar, Iran

24
Yan S, Li J, Wu W. Artificial intelligence in breast cancer: application and future perspectives. J Cancer Res Clin Oncol 2023; 149:16179-16190. [PMID: 37656245 DOI: 10.1007/s00432-023-05337-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2023] [Accepted: 08/24/2023] [Indexed: 09/02/2023]
Abstract
Breast cancer is one of the most common cancers and one of the leading causes of cancer-related deaths in women worldwide. Early diagnosis and treatment are key to a favorable prognosis. The application of artificial intelligence technology in the medical field is increasingly extensive, including image analysis, automated diagnosis, intelligent pharmaceutical systems, personalized treatment, and so on. AI-based breast cancer imaging, pathology, and adjuvant therapy technology can not only reduce the workload of clinicians but also continuously improve the accuracy and sensitivity of breast cancer diagnosis and treatment. This paper reviews the application of AI in breast cancer, looks ahead, and poses challenges for the future development of AI in breast cancer detection and therapy, so as to provide ideas for future research.
Affiliation(s)
- Shuixin Yan
- The Affiliated Lihuili Hospital of Ningbo University, Ningbo, 315000, Zhejiang, China
- Jiadi Li
- The Affiliated Lihuili Hospital of Ningbo University, Ningbo, 315000, Zhejiang, China
- Weizhu Wu
- The Affiliated Lihuili Hospital of Ningbo University, Ningbo, 315000, Zhejiang, China

25
Bae K, Jeon YS, Hwangbo Y, Yoo CW, Han N, Feng M. Data-Efficient Computational Pathology Platform for Faster and Cheaper Breast Cancer Subtype Identifications: Development of a Deep Learning Model. JMIR Cancer 2023; 9:e45547. [PMID: 37669090 PMCID: PMC10509735 DOI: 10.2196/45547] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2023] [Revised: 07/07/2023] [Accepted: 07/21/2023] [Indexed: 09/06/2023] Open
Abstract
BACKGROUND Breast cancer subtyping is a crucial step in determining therapeutic options, but the molecular examination based on immunohistochemical staining is expensive and time-consuming. Deep learning opens up the possibility to predict the subtypes based on the morphological information from hematoxylin and eosin staining, a much cheaper and faster alternative. However, training the predictive model conventionally requires a large number of histology images, which is challenging to collect by a single institute. OBJECTIVE We aimed to develop a data-efficient computational pathology platform, 3DHistoNet, which is capable of learning from z-stacked histology images to accurately predict breast cancer subtypes with a small sample size. METHODS We retrospectively examined 401 cases of patients with primary breast carcinoma diagnosed between 2018 and 2020 at the Department of Pathology, National Cancer Center, South Korea. Pathology slides of the patients with breast carcinoma were prepared according to the standard protocols. Age, gender, histologic grade, hormone receptor (estrogen receptor [ER], progesterone receptor [PR], and androgen receptor [AR]) status, erb-B2 receptor tyrosine kinase 2 (HER2) status, and Ki-67 index were evaluated by reviewing medical charts and pathological records. RESULTS The area under the receiver operating characteristic curve and decision curve were analyzed to evaluate the performance of our 3DHistoNet platform for predicting the ER, PR, AR, HER2, and Ki67 subtype biomarkers with 5-fold cross-validation. We demonstrated that 3DHistoNet can predict all clinically important biomarkers (ER, PR, AR, HER2, and Ki67) with performance exceeding the conventional multiple instance learning models by a considerable margin (area under the receiver operating characteristic curve: 0.75-0.91 vs 0.67-0.8). 
We further showed that our z-stack histology scanning method can compensate for insufficient training datasets without any additional cost. Finally, 3DHistoNet offers an additional capability to generate attention maps that reveal correlations between Ki67 and histomorphological features, rendering the hematoxylin and eosin image with higher fidelity for the pathologist. CONCLUSIONS Our stand-alone, data-efficient pathology platform, which can both generate z-stacked images and predict key biomarkers, is an appealing tool for breast cancer diagnosis. Its development would encourage morphology-based diagnosis, which is faster, cheaper, and less error-prone than protein quantification based on immunohistochemical staining.
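The 3DHistoNet evaluation above relies on 5-fold cross-validation, in which every case is held out exactly once. A plain-Python sketch of the index bookkeeping is below; the cohort size of 401 is taken from the abstract, while the seed and fold layout are arbitrary choices for illustration.

```python
import random

def kfold_indices(n, k=5, seed=0):
    """Return k (train, validation) index splits over n samples.
    Each sample appears in exactly one validation fold, so every
    case is held out once across the k rounds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)          # shuffle before striping
    folds = [idx[i::k] for i in range(k)]     # k roughly equal folds
    return [(sorted(set(idx) - set(f)), sorted(f)) for f in folds]

splits = kfold_indices(401, k=5)  # 401 cases, as in the study above
train, val = splits[0]
print(len(splits), len(train), len(val))  # 5 320 81
```

In practice a stratified split (preserving the biomarker-positive/negative ratio per fold) would be preferable for imbalanced labels; the sketch shows only the unstratified mechanics.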
Affiliation(s)
- Kideog Bae
- Healthcare AI Team, Healthcare Platform Center, National Cancer Center, Goyang-si, Gyeonggi-do, Republic of Korea
- Young Seok Jeon
- Institute of Data Science, National University of Singapore, Singapore, Singapore
- Yul Hwangbo
- Healthcare AI Team, Healthcare Platform Center, National Cancer Center, Goyang-si, Gyeonggi-do, Republic of Korea
- Department of Cancer AI & Digital Health, Graduate School of Cancer Science and Policy, National Cancer Center, Goyang-si, Gyeonggi-do, Republic of Korea
- Chong Woo Yoo
- Department of Pathology, National Cancer Center Hospital, National Cancer Center, Goyang-si, Gyeonggi-do, Republic of Korea
- Nayoung Han
- Healthcare AI Team, Healthcare Platform Center, National Cancer Center, Goyang-si, Gyeonggi-do, Republic of Korea
- Department of Cancer AI & Digital Health, Graduate School of Cancer Science and Policy, National Cancer Center, Goyang-si, Gyeonggi-do, Republic of Korea
- Department of Pathology, National Cancer Center Hospital, National Cancer Center, Goyang-si, Gyeonggi-do, Republic of Korea
- Mengling Feng
- Saw Swee Hock School of Public Health, National University of Singapore, Singapore, Singapore

26
Zhang Y, Liu YL, Nie K, Zhou J, Chen Z, Chen JH, Wang X, Kim B, Parajuli R, Mehta RS, Wang M, Su MY. Deep Learning-based Automatic Diagnosis of Breast Cancer on MRI Using Mask R-CNN for Detection Followed by ResNet50 for Classification. Acad Radiol 2023; 30 Suppl 2:S161-S171. [PMID: 36631349 PMCID: PMC10515321 DOI: 10.1016/j.acra.2022.12.038] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2022] [Revised: 12/10/2022] [Accepted: 12/23/2022] [Indexed: 01/11/2023]
Abstract
RATIONALE AND OBJECTIVES Diagnosis of breast cancer on MRI requires, first, the identification of suspicious lesions and, second, their characterization to give a diagnostic impression. We implemented Mask Region-based Convolutional Neural Network (Mask R-CNN) to detect abnormal lesions, followed by ResNet50 to estimate the malignancy probability. MATERIALS AND METHODS Two datasets were used. The first set had 176 cases: 103 cancer and 73 benign. The second set had 84 cases: 53 cancer and 31 benign. For detection, the pre-contrast image and the subtraction images of the left and right breasts were used as inputs, so that symmetry could be considered. The detected suspicious area was characterized by ResNet50, using three DCE parametric maps as inputs. The results obtained using slice-based analyses were combined to give a lesion-based diagnosis. RESULTS In the first dataset, 101 of 103 cancers were detected by Mask R-CNN as suspicious, and 99 of 101 were correctly classified by ResNet50 as cancer, for a sensitivity of 99/103 = 96%. 48 of 73 benign lesions and 131 normal areas were identified as suspicious; following classification by ResNet50, only 16 benign lesions and 16 normal areas remained classified as malignant. The second dataset was used for independent testing; the sensitivity was 43/53 = 81%. Of the 121 identified non-cancerous lesions, only 6 of 31 benign lesions and 22 normal tissues were classified as malignant. CONCLUSION ResNet50 could eliminate approximately 80% of the false positives detected by Mask R-CNN. Combining Mask R-CNN and ResNet50 has the potential to yield a fully automatic computer-aided diagnostic system for breast cancer on MRI.
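At inference time, the two-stage design described above, a detector proposing suspicious regions and a classifier pruning its false positives, reduces to a simple filtering step. The sketch below is schematic only: the boxes, scores, and the 0.5 threshold are invented for illustration, whereas the actual study derives malignancy probabilities from ResNet50 over DCE parametric maps.

```python
def filter_detections(detections, malignancy_probs, threshold=0.5):
    """Keep only detector proposals whose second-stage malignancy
    probability reaches the threshold; this is how the classifier
    stage prunes the detector's false positives."""
    return [d for d, p in zip(detections, malignancy_probs) if p >= threshold]

# Hypothetical (x, y, w, h) boxes proposed by the detection stage
boxes = [(10, 12, 30, 30), (50, 60, 20, 25), (5, 80, 15, 15)]
probs = [0.97, 0.12, 0.64]  # hypothetical second-stage classifier scores
kept = filter_detections(boxes, probs, threshold=0.5)
print(kept)  # [(10, 12, 30, 30), (5, 80, 15, 15)]
```

Raising the threshold trades sensitivity for specificity, which is exactly the balance the abstract reports: a small drop in detected cancers against an roughly 80% reduction in false positives.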
Affiliation(s)
- Yang Zhang
- Department of Radiological Sciences, University of California, Irvine, California; Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, New Jersey
- Yan-Lin Liu
- Department of Radiological Sciences, University of California, Irvine, California
- Ke Nie
- Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, New Jersey
- Jiejie Zhou
- Department of Radiology, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Zhongwei Chen
- Department of Radiology, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Jeon-Hor Chen
- Department of Radiological Sciences, University of California, Irvine, California; Department of Radiology, E-Da Hospital and I-Shou University, Kaohsiung, Taiwan
- Xiao Wang
- Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, New Jersey
- Bomi Kim
- Department of Radiological Sciences, University of California, Irvine, California; Department of Breast Radiology, Ilsan Hospital, Goyang, South Korea
- Ritesh Parajuli
- Department of Medicine, University of California, Irvine, United States
- Rita S Mehta
- Department of Medicine, University of California, Irvine, United States
- Meihao Wang
- Department of Radiology, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Min-Ying Su
- Department of Radiological Sciences, University of California, Irvine, California; Department of Medical Imaging and Radiological Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan

27
Sun R, Wei C, Jiang Z, Huang G, Xie Y, Nie S. Weakly Supervised Breast Lesion Detection in Dynamic Contrast-Enhanced MRI. J Digit Imaging 2023; 36:1553-1564. [PMID: 37253896 PMCID: PMC10406986 DOI: 10.1007/s10278-023-00846-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2022] [Revised: 05/05/2023] [Accepted: 05/08/2023] [Indexed: 06/01/2023] Open
Abstract
Currently, obtaining accurate medical annotations requires high labor and time effort, which largely limits the development of supervised learning-based tumor detection tasks. In this work, we investigated a weakly supervised learning model for detecting breast lesions in dynamic contrast-enhanced MRI (DCE-MRI) with only image-level labels. A total of 254 normal and 398 abnormal cases with pathologically confirmed lesions were retrospectively enrolled into the breast dataset, which was divided into a training set (80%), validation set (10%), and testing set (10%) at the patient level. First, the second image series S2 after the injection of a contrast agent was acquired from the 3.0-T, T1-weighted dynamic contrast-enhanced MR imaging sequences. Second, a feature pyramid network (FPN) with a convolutional block attention module (CBAM) was proposed to extract multi-scale feature maps from the modified classification network VGG16. Then, initial location information was obtained from the heatmaps generated using the layer class activation mapping algorithm (Layer-CAM). Finally, the breast lesion detection results were refined by a conditional random field (CRF). Accuracy, sensitivity, specificity, and area under the receiver operating characteristic (ROC) curve (AUC) were utilized to evaluate image-level classification. Average precision (AP) was estimated for breast lesion localization. DeLong's test was used to compare the AUCs of different models for significance. The proposed model was effective, with an accuracy of 95.2%, sensitivity of 91.6%, specificity of 99.2%, and AUC of 0.986. The AP for breast lesion detection was 84.1% using weakly supervised learning. Weakly supervised learning based on FPN combined with Layer-CAM facilitated automatic detection of breast lesions.
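The pipeline above localizes lesions from heatmaps produced by Layer-CAM. The core mechanism is easiest to see in plain class activation mapping (CAM): a weighted sum of the final convolutional feature maps, with the classifier's weights for the target class. The NumPy sketch below shows that step only; Layer-CAM additionally weights each spatial position by its gradient, which is omitted here, and the feature maps and weights are random placeholders.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Plain CAM: sum the (C, H, W) feature maps weighted by the
    classifier weights of one class, keep positive evidence (ReLU),
    and normalize to [0, 1] for use as a localization heatmap."""
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))  # (H, W)
    cam = np.maximum(cam, 0.0)        # discard negative evidence
    if cam.max() > 0:
        cam /= cam.max()              # rescale into [0, 1]
    return cam

rng = np.random.default_rng(1)
fmaps = rng.random((16, 7, 7))   # placeholder backbone output (C, H, W)
w = rng.random(16)               # placeholder class weights
heat = class_activation_map(fmaps, w)
print(heat.shape)  # (7, 7)
```

Upsampling this coarse 7x7 map back to image resolution, then thresholding it, yields the rough lesion locations that the CRF stage would subsequently refine.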
Affiliation(s)
- Rong Sun
- School of Health Science and Engineering, University of Shanghai for Science and Technology, No. 516 Jun-Gong Road, Shanghai, 200093, China
- Chuanling Wei
- School of Health Science and Engineering, University of Shanghai for Science and Technology, No. 516 Jun-Gong Road, Shanghai, 200093, China
- Zhuoyun Jiang
- School of Health Science and Engineering, University of Shanghai for Science and Technology, No. 516 Jun-Gong Road, Shanghai, 200093, China
- Gang Huang
- Shanghai University of Medicine & Health Sciences, Shanghai, China
- Yuanzhong Xie
- Medical Imaging Center, Tai'an Central Hospital, No. 29 Long-Tan Road, Shandong, 271099, China
- Shengdong Nie
- School of Health Science and Engineering, University of Shanghai for Science and Technology, No. 516 Jun-Gong Road, Shanghai, 200093, China

28
Sun R, Zhang X, Xie Y, Nie S. Weakly supervised breast lesion detection in DCE-MRI using self-transfer learning. Med Phys 2023; 50:4960-4972. [PMID: 36820793 DOI: 10.1002/mp.16296] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2022] [Revised: 02/03/2023] [Accepted: 02/04/2023] [Indexed: 02/24/2023] Open
Abstract
BACKGROUND Breast cancer is a commonly diagnosed and life-threatening cancer in women. Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is increasingly used for breast lesion detection and diagnosis because of its high soft-tissue resolution. Supervised detection methods have been implemented for breast lesion detection; however, these methods require substantial time and specialized staff to develop the labeled training samples. PURPOSE To investigate the potential of weakly supervised deep learning models for breast lesion detection. METHODS A total of 1003 breast DCE-MRI studies were collected, including 603 abnormal cases with 770 breast lesions and 400 normal subjects. The proposed model was trained on breast DCE-MRI using only image-level labels (normal and abnormal) and optimized for the classification and detection sub-tasks simultaneously. Ablation experiments were performed to evaluate different convolutional neural network (CNN) backbones (VGG19 and ResNet50) as shared convolutional layers, as well as the effect of the preprocessing methods. RESULTS Our weakly supervised model performed better with VGG19 than with ResNet50 (p < 0.05). The average precision (AP) of the classification sub-task was 91.7% for abnormal cases and 88.0% for normal samples. The area under the receiver operating characteristic (ROC) curve (AUC) was 0.939 (95% confidence interval [CI]: 0.920-0.941). The weakly supervised detection task AP was 85.7%, and the correct location (CorLoc) rate was 90.2%. A sensitivity of 84.0% at two false positives per image was assessed based on the free-response ROC (FROC) curve. CONCLUSIONS The results confirm that a weakly supervised CNN based on self-transfer learning is an effective and promising auxiliary tool for detecting breast lesions.
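AUC, reported throughout these studies, has a convenient rank-based interpretation: it is the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (ties count as one half). A small pure-Python sketch with made-up labels and scores:

```python
def auc_score(labels, scores):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive is ranked higher,
    counting ties as 0.5."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]            # toy ground truth
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]  # toy model scores
print(round(auc_score(labels, scores), 3))  # 0.889
```

Note that AUC depends only on the ranking of scores, not their calibration, which is why it is a common headline metric when different models output probabilities on different scales.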
Affiliation(s)
- Rong Sun
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Xiaobing Zhang
- Department of Radiology, Ruijin Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
- Yuanzhong Xie
- Medical Imaging Center, Taian Center Hospital, Shandong, China
- Shengdong Nie
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China

29
Adam R, Dell'Aquila K, Hodges L, Maldjian T, Duong TQ. Deep learning applications to breast cancer detection by magnetic resonance imaging: a literature review. Breast Cancer Res 2023; 25:87. [PMID: 37488621 PMCID: PMC10367400 DOI: 10.1186/s13058-023-01687-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2023] [Accepted: 07/11/2023] [Indexed: 07/26/2023] Open
Abstract
Deep learning analysis of radiological images has the potential to improve the diagnostic accuracy of breast cancer, ultimately leading to better patient outcomes. This paper systematically reviewed the current literature on deep learning detection of breast cancer based on magnetic resonance imaging (MRI). The literature search was performed from 2015 to Dec 31, 2022, using PubMed. Other databases included Semantic Scholar, ACM Digital Library, Google Search, Google Scholar, and preprint repositories (such as Research Square). Articles that were not deep learning (such as texture analysis) were excluded. PRISMA guidelines for reporting were used. We analyzed the different deep learning algorithms, methods of analysis, experimental designs, MRI image types, types of ground truth, sample sizes, numbers of benign and malignant lesions, and performance reported in the literature. We discussed lessons learned, challenges to broad deployment in clinical practice, and suggested future research directions.
Affiliation(s)
- Richard Adam
- Department of Radiology, Albert Einstein College of Medicine and the Montefiore Medical Center, 1300 Morris Park Avenue, Bronx, NY, 10461, USA
- Kevin Dell'Aquila
- Department of Radiology, Albert Einstein College of Medicine and the Montefiore Medical Center, 1300 Morris Park Avenue, Bronx, NY, 10461, USA
- Laura Hodges
- Department of Radiology, Albert Einstein College of Medicine and the Montefiore Medical Center, 1300 Morris Park Avenue, Bronx, NY, 10461, USA
- Takouhie Maldjian
- Department of Radiology, Albert Einstein College of Medicine and the Montefiore Medical Center, 1300 Morris Park Avenue, Bronx, NY, 10461, USA
- Tim Q Duong
- Department of Radiology, Albert Einstein College of Medicine and the Montefiore Medical Center, 1300 Morris Park Avenue, Bronx, NY, 10461, USA

30
Jiang X, Hu Z, Wang S, Zhang Y. Deep Learning for Medical Image-Based Cancer Diagnosis. Cancers (Basel) 2023; 15:3608. [PMID: 37509272] [PMCID: PMC10377683] [DOI: 10.3390/cancers15143608]
Abstract
(1) Background: Applying deep learning to cancer diagnosis from medical images is one of the research hotspots in artificial intelligence and computer vision. Cancer diagnosis requires very high accuracy and timeliness, medical imaging has inherent particularity and complexity, and deep learning methods are developing rapidly; a comprehensive review of relevant studies is therefore necessary to help readers understand the current research status and ideas. (2) Methods: Five types of radiological images, namely X-ray, ultrasound (US), computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), as well as histopathological images, are reviewed in this paper. The basic architecture of deep learning and classical pretrained models are comprehensively reviewed. In particular, advanced approaches that have emerged in recent years, including transfer learning, ensemble learning (EL), graph neural networks, and vision transformers (ViT), are introduced. Four overfitting prevention methods are summarized: batch normalization, dropout, weight initialization, and data augmentation. The application of deep learning in medical image-based cancer analysis is sorted out. (3) Results: Deep learning has achieved great success in medical image-based cancer diagnosis, showing good results in image classification, image reconstruction, image detection, image segmentation, image registration, and image synthesis. However, the lack of high-quality labeled datasets limits the role of deep learning, and challenges remain in rare cancer diagnosis, multi-modal image fusion, model explainability, and generalization. (4) Conclusions: More public standard databases for cancer are needed. Pretrained deep neural network models have the potential to be improved, and special attention should be paid to research on multimodal data fusion and the supervised paradigm.
Technologies such as ViT, ensemble learning, and few-shot learning will bring surprises to cancer diagnosis based on medical images.
Grants
- RM32G0178B8 BBSRC
- MC_PC_17171 MRC, UK
- RP202G0230 Royal Society, UK
- AA/18/3/34220 BHF, UK
- RM60G0680 Hope Foundation for Cancer Research, UK
- P202PF11 GCRF, UK
- RP202G0289 Sino-UK Industrial Fund, UK
- P202ED10, P202RE969 LIAS, UK
- P202RE237 Data Science Enhancement Fund, UK
- 24NN201 Fight for Sight, UK
- OP202006 Sino-UK Education Fund, UK
- 2023SJZD125 Major project of philosophy and social science research in colleges and universities in Jiangsu Province, China
Affiliation(s)
- Xiaoyan Jiang
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Zuojin Hu
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Shuihua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Yudong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK

31
Wang X, Su R, Xie W, Wang W, Xu Y, Mann R, Han J, Tan T. 2.75D: Boosting learning by representing 3D medical imaging to 2D features for small data. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104858]
32
Iqbal S, Qureshi AN, Li J, Mahmood T. On the Analyses of Medical Images Using Traditional Machine Learning Techniques and Convolutional Neural Networks. Arch Comput Methods Eng 2023; 30:3173-3233. [PMID: 37260910] [PMCID: PMC10071480] [DOI: 10.1007/s11831-023-09899-9]
Abstract
Convolutional neural networks (CNNs) have shown impressive accomplishments in many areas, especially object detection, segmentation, 2D and 3D reconstruction, information retrieval, medical image registration, multilingual translation, natural language processing, anomaly detection in video, and speech recognition. A CNN is a special type of neural network with a compelling and effective ability to learn features at several stages, aided by augmentation of the data. Recently, interesting and inspiring ideas from deep learning (DL), such as new activation functions, hyperparameter optimization, regularization, momentum, and loss functions, have improved the performance and execution of CNNs. Innovations in the internal architecture of CNNs and in their representational styles have also significantly improved performance. This survey focuses on the internal taxonomy of deep learning and on different convolutional neural network models, especially the depth and width of models, in addition to CNN components, applications, and current challenges of deep learning.
Affiliation(s)
- Saeed Iqbal
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000, Pakistan
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Adnan N. Qureshi
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000, Pakistan
- Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Beijing Engineering Research Center for IoT Software and Systems, Beijing University of Technology, Beijing 100124, China
- Tariq Mahmood
- Artificial Intelligence and Data Analytics (AIDA) Lab, College of Computer & Information Sciences (CCIS), Prince Sultan University, Riyadh 11586, Kingdom of Saudi Arabia

33
Zhao X, Bai JW, Guo Q, Ren K, Zhang GJ. Clinical applications of deep learning in breast MRI. Biochim Biophys Acta Rev Cancer 2023; 1878:188864. [PMID: 36822377] [DOI: 10.1016/j.bbcan.2023.188864]
Abstract
Deep learning (DL) is one of the most powerful data-driven machine-learning techniques in artificial intelligence (AI). It can automatically learn from raw data without manual feature selection. DL models have led to remarkable advances in data extraction and analysis for medical imaging. Magnetic resonance imaging (MRI) has proven useful in delineating the characteristics and extent of breast lesions and tumors. This review summarizes the current state-of-the-art applications of DL models in breast MRI. Many recent DL models were examined in this field, along with several advanced learning approaches and methods for data normalization and breast and lesion segmentation. For clinical applications, DL-based breast MRI models were proven useful in five aspects: diagnosis of breast cancer, classification of molecular types, classification of histopathological types, prediction of neoadjuvant chemotherapy response, and prediction of lymph node metastasis. For subsequent studies, further improvement in data acquisition and preprocessing is necessary, additional DL techniques in breast MRI should be investigated, and wider clinical applications need to be explored.
Affiliation(s)
- Xue Zhao
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, China; Department of Breast-Thyroid-Surgery and Cancer Center, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Jing-Wen Bai
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Department of Oncology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Cancer Research Center, School of Medicine, Xiamen University, Xiamen, China
- Qiu Guo
- Department of Radiology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Ke Ren
- Department of Radiology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China.
- Guo-Jun Zhang
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Department of Breast-Thyroid-Surgery and Cancer Center, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Cancer Research Center, School of Medicine, Xiamen University, Xiamen, China.
34
Using Deep Learning with Bayesian–Gaussian Inspired Convolutional Neural Architectural Search for Cancer Recognition and Classification from Histopathological Image Frames. J Healthc Eng 2023. [DOI: 10.1155/2023/4597445]
Abstract
We propose a neural architecture search model that examines histopathological images to detect the presence of cancer in both lung and colon tissues. In recent times, deep artificial neural networks have made tremendous impacts in healthcare. However, obtaining an optimal artificial neural network model that yields excellent performance during training, evaluation, and inference has been a bottleneck for researchers. Our method uses a Bayesian convolutional neural architecture search algorithm in collaboration with Gaussian processes to provide an efficient neural network architecture for colon and lung cancer classification and recognition. The proposed model learns by using the Gaussian process to estimate the required optimal architectural values, choosing a set of model parameters through the exploitation of expected improvement (EI) values, thereby minimizing the number of sampled trials and suggesting the best model architecture. Several experiments were conducted, and landmark performance was obtained on both validation and test data through evaluation of the proposed model on a dataset consisting of 25,000 images of five different classes, measured with convergence and F1-score metrics.
35
Zeng Y, Zhang X, Kawasumi Y, Usui A, Ichiji K, Funayama M, Homma N. A 2.5D Deep Learning-Based Method for Drowning Diagnosis Using Post-Mortem Computed Tomography. IEEE J Biomed Health Inform 2023; 27:1026-1035. [PMID: 36446008] [DOI: 10.1109/jbhi.2022.3225416]
Abstract
It is challenging to diagnose drowning in autopsy even with the help of post-mortem multi-slice computed tomography (MSCT), due to the complex pathophysiology and the shortage of forensic specialists equipped with radiology knowledge. Therefore, a computer-aided diagnosis (CAD) system was developed to help with diagnosis. Most deep learning-based CAD systems utilize only 2D information, which is appropriate for 2D data such as chest X-ray images. However, 3D information should also be considered for 3D data like CT. Conventional 3D methods require a huge amount of data and computational cost. In this article, we propose a 2.5D method that converts 3D data into 2D images to train 2D deep learning models for drowning diagnosis. The key point of this 2.5D method is that it uses a subset of slices to represent the whole case, covering the case as much as possible while avoiding repetitive information. To evaluate the effectiveness of the proposed method, conventional 2D, previous 2.5D, and 3D deep learning-based methods were tested using an MSCT dataset obtained from Tohoku University. Then, to provide explainable diagnosis results, a visualization method called Gradient-weighted Class Activation Mapping was employed to visualize features relevant to drowning in CT images. Results on drowning diagnosis showed that our proposed method achieved the best performance compared with the other 2D, 2.5D, and 3D methods. The visual assessment also demonstrated that our method could find the saliency regions corresponding to drowning.
36
Liu X, Pan Y, Zhang X, Sha Y, Wang S, Li H, Liu J. A Deep Learning Model for Classification of Parotid Neoplasms Based on Multimodal Magnetic Resonance Image Sequences. Laryngoscope 2023; 133:327-335. [PMID: 35575610] [PMCID: PMC10083903] [DOI: 10.1002/lary.30154]
Abstract
OBJECTIVE To design a deep learning model based on multimodal magnetic resonance imaging (MRI) sequences for automatic parotid neoplasm classification, and to improve diagnostic decision-making in clinical settings. METHODS First, multimodal MRI sequences were collected from 266 patients with parotid neoplasms, and an artificial intelligence (AI)-based deep learning model was designed from scratch, combining the ResNet image classification network with the Transformer architecture from natural language processing. Second, the effectiveness of the deep learning model was improved through multi-modality fusion of MRI sequences, and the fusion strategy of the various MRI sequences was optimized. In addition, we compared the effectiveness of the model in parotid neoplasm classification with that of experienced radiologists. RESULTS The deep learning model delivered reliable outcomes in differentiating benign and malignant parotid neoplasms. The model trained on the fusion of T2-weighted, postcontrast T1-weighted, and diffusion-weighted imaging (b = 1000 s/mm²) produced the best result, with an accuracy of 0.85, an area under the receiver operating characteristic (ROC) curve of 0.96, a sensitivity of 0.90, and a specificity of 0.84. In addition, the multi-modal paradigm exhibited reliable outcomes in diagnosing pleomorphic adenoma and Warthin tumor, but not in identifying basal cell adenoma. CONCLUSION An accurate and efficient AI-based classification model for parotid neoplasms was produced from the fusion of multimodal MRI sequences. Its effectiveness outperformed models with single MRI images or single MRI sequences as input and, potentially, experienced radiologists. LEVEL OF EVIDENCE 3 Laryngoscope, 133:327-335, 2023.
Affiliation(s)
- Xu Liu
- ENT Institute and Department of Otorhinolaryngology, Eye & ENT Hospital, Fudan University, Shanghai, China
- ENT Institute and Department of Otorhinolaryngology, Eye & ENT Hospital, NHC Key Laboratory of Hearing Medicine (Fudan University), Shanghai, China
- Yucheng Pan
- Department of Radiology, Eye & ENT Hospital, Fudan University, Shanghai, China
- Xin Zhang
- ENT Institute and Department of Otorhinolaryngology, Eye & ENT Hospital, Fudan University, Shanghai, China
- ENT Institute and Department of Otorhinolaryngology, Eye & ENT Hospital, NHC Key Laboratory of Hearing Medicine (Fudan University), Shanghai, China
- Yongfang Sha
- ENT Institute and Department of Otorhinolaryngology, Eye & ENT Hospital, Fudan University, Shanghai, China
- ENT Institute and Department of Otorhinolaryngology, Eye & ENT Hospital, NHC Key Laboratory of Hearing Medicine (Fudan University), Shanghai, China
- Shihui Wang
- Lab of Sensing and Computing, Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai, China
- Hongzhe Li
- Research Service, VA Loma Linda Healthcare System, Loma Linda, California, U.S.A.
- Department of Otolaryngology-Head and Neck Surgery, Loma Linda University School of Medicine, Loma Linda, California, U.S.A.
- Jianping Liu
- ENT Institute and Department of Otorhinolaryngology, Eye & ENT Hospital, Fudan University, Shanghai, China
- ENT Institute and Department of Otorhinolaryngology, Eye & ENT Hospital, NHC Key Laboratory of Hearing Medicine (Fudan University), Shanghai, China

37
Xue H, Qian G, Wu X, Gao Y, Yang H, Liu M, Wang L, Chen R, Wang P. A coarse-to-fine and automatic algorithm for breast diagnosis on multi-series MRI images. Front Comput Sci 2022. [DOI: 10.3389/fcomp.2022.1054158]
Abstract
Introduction: Early breast carcinomas can be effectively diagnosed and controlled. However, this demands extra work, and radiologists in China often suffer from overtime due to too many patients; even experienced radiologists can make mistakes after overloaded work. To improve efficiency and reduce the rate of misdiagnosis, automatic breast diagnosis on magnetic resonance imaging (MRI) images is vital yet challenging for breast disease screening and successful treatment planning. Some obstacles hinder the development of automatic approaches, such as class imbalance of samples and hard mimics of lesions. In this paper, we propose a coarse-to-fine algorithm that addresses these problems of automatic breast diagnosis on multi-series MRI images. The algorithm utilizes deep learning techniques to provide breast segmentation, tumor segmentation, and tumor classification functions, thus supporting doctors' decisions in clinical practice. Methods: In the proposed algorithm, a DenseUNet is first employed to extract breast-related regions by removing irrelevant parts in the thoracic cavity. Then, by taking advantage of the attention mechanism and the focal loss, a novel network named Attention Dense UNet (ADUNet) is designed for tumor segmentation. In particular, the focal loss in ADUNet addresses the class-imbalance and model-overwhelming problems. Finally, a customized network is developed for tumor classification. Besides, while most approaches consider only one or two series, the proposed algorithm takes into account multiple series of MRI images. Results: Extensive experiments were carried out to evaluate its performance on 435 multi-series MRI volumes from 87 patients collected from Tongji Hospital. In the dataset, all cases have benign, malignant, or both types of tumors, covering carcinoma, fibroadenoma, cyst, and abscess. The ground truths of tumors were labeled by two radiologists with 3 years of experience in breast MRI reporting by drawing contours of each tumor slice by slice. ADUNet was compared with other prevalent deep-learning methods on tumor segmentation and achieved the best performance on both Case Dice Score and Global Dice Score, at 0.748 and 0.801, respectively. Moreover, the customized classification network outperformed two CNN-M based models and achieved tumor-level and case-level AUCs of 0.831 and 0.918, respectively. Discussion: All data in this paper were collected from the same MRI device, so it is reasonable to assume that they are from the same domain and independent and identically distributed. Whether the proposed algorithm is robust enough in a multi-source setting remains an open question. Each stage of the proposed algorithm is trained separately, which makes each stage more robust and converge faster. Such a training strategy treats each stage as a separate task and does not take into account the relationships between tasks.
38
Madani M, Behzadi MM, Nabavi S. The Role of Deep Learning in Advancing Breast Cancer Detection Using Different Imaging Modalities: A Systematic Review. Cancers (Basel) 2022; 14:5334. [PMID: 36358753] [PMCID: PMC9655692] [DOI: 10.3390/cancers14215334]
Abstract
Breast cancer is among the most common and fatal diseases for women, and no permanent treatment has been discovered. Thus, early detection is a crucial step in controlling and curing breast cancer and can save the lives of millions of women. For example, in 2020, more than 65% of breast cancer patients were diagnosed at an early stage of cancer, from which all survived. Although early detection is the most effective approach for cancer treatment, breast cancer screening conducted by radiologists is very expensive and time-consuming. More importantly, conventional methods of analyzing breast cancer images suffer from high false-detection rates. Different breast cancer imaging modalities are used to extract and analyze the key features affecting the diagnosis and treatment of breast cancer. These imaging modalities can be divided into subgroups such as mammograms, ultrasound, magnetic resonance imaging, histopathological images, or any combination of them. Radiologists or pathologists analyze images produced by these methods manually, which increases the risk of wrong decisions in cancer detection. Thus, new automatic methods to analyze all kinds of breast screening images and assist radiologists in interpreting them are required. Recently, artificial intelligence (AI) has been widely utilized to automatically improve the early detection and treatment of different types of cancer, specifically breast cancer, thereby enhancing patients' chances of survival. Advances in AI algorithms, such as deep learning, and the availability of datasets obtained from various imaging modalities have opened an opportunity to surpass the limitations of current breast cancer analysis methods. In this article, we first review breast cancer imaging modalities and their strengths and limitations. Then, we explore and summarize the most recent studies that employed AI in breast cancer detection using various breast imaging modalities. In addition, we report available datasets for the breast cancer imaging modalities, which are important in developing AI-based algorithms and training deep learning models. In conclusion, this review paper aims to provide a comprehensive resource to help researchers working in breast cancer imaging analysis.
Affiliation(s)
- Mohammad Madani
- Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- Mohammad Mahdi Behzadi
- Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- Sheida Nabavi
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
39
Luo L, Chen H, Xiao Y, Zhou Y, Wang X, Vardhanabhuti V, Wu M, Han C, Liu Z, Fang XHB, Tsougenis E, Lin H, Heng PA. Rethinking Annotation Granularity for Overcoming Shortcuts in Deep Learning-based Radiograph Diagnosis: A Multicenter Study. Radiol Artif Intell 2022; 4:e210299. [PMID: 36204545] [PMCID: PMC9530769] [DOI: 10.1148/ryai.210299]
Abstract
PURPOSE To evaluate the ability of fine-grained annotations to overcome shortcut learning in deep learning (DL)-based diagnosis using chest radiographs. MATERIALS AND METHODS Two DL models were developed using radiograph-level annotations (disease present: yes or no) and fine-grained lesion-level annotations (lesion bounding boxes), respectively named CheXNet and CheXDet. A total of 34 501 chest radiographs obtained from January 2005 to September 2019 were retrospectively collected and annotated regarding cardiomegaly, pleural effusion, mass, nodule, pneumonia, pneumothorax, tuberculosis, fracture, and aortic calcification. The internal classification performance and lesion localization performance of the models were compared on a testing set (n = 2922); external classification performance was compared on National Institutes of Health (NIH) Google (n = 4376) and PadChest (n = 24 536) datasets; and external lesion localization performance was compared on the NIH ChestX-ray14 dataset (n = 880). The models were also compared with radiologist performance on a subset of the internal testing set (n = 496). Performance was evaluated using receiver operating characteristic (ROC) curve analysis. RESULTS Given sufficient training data, both models performed similarly to radiologists. CheXDet achieved significant improvement for external classification, such as classifying fracture on NIH Google (CheXDet area under the ROC curve [AUC], 0.67; CheXNet AUC, 0.51; P < .001) and PadChest (CheXDet AUC, 0.78; CheXNet AUC, 0.55; P < .001). CheXDet achieved higher lesion detection performance than CheXNet for most abnormalities on all datasets, such as detecting pneumothorax on the internal set (CheXDet jackknife alternative free-response ROC [JAFROC] figure of merit [FOM], 0.87; CheXNet JAFROC FOM, 0.13; P < .001) and NIH ChestX-ray14 (CheXDet JAFROC FOM, 0.55; CheXNet JAFROC FOM, 0.04; P < .001). 
CONCLUSION Fine-grained annotations overcame shortcut learning and enabled DL models to identify correct lesion patterns, improving the generalizability of the models. Keywords: Computer-aided Diagnosis, Conventional Radiography, Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms, Localization. Supplemental material is available for this article. © RSNA, 2022.
40
Zhu J, Geng J, Shan W, Zhang B, Shen H, Dong X, Liu M, Li X, Cheng L. Development and validation of a deep learning model for breast lesion segmentation and characterization in multiparametric MRI. Front Oncol 2022; 12:946580. [PMID: 36033449] [PMCID: PMC9402900] [DOI: 10.3389/fonc.2022.946580]
Abstract
Importance The utilization of artificial intelligence for the differentiation of benign and malignant breast lesions in multiparametric MRI (mpMRI) assists radiologists to improve diagnostic performance. Objectives To develop an automated deep learning model for breast lesion segmentation and characterization and to evaluate the characterization performance of AI models and radiologists. Materials and methods For lesion segmentation, 2,823 patients were used for the training, validation, and testing of the VNet-based segmentation models, and the average Dice similarity coefficient (DSC) between the manual segmentation by radiologists and the mask generated by VNet was calculated. For lesion characterization, 3,303 female patients with 3,607 pathologically confirmed lesions (2,213 malignant and 1,394 benign lesions) were used for the three ResNet-based characterization models (two single-input and one multi-input models). Histopathology was used as the diagnostic criterion standard to assess the characterization performance of the AI models and the BI-RADS categorized by the radiologists, in terms of sensitivity, specificity, accuracy, and the area under the receiver operating characteristic curve (AUC). An additional 123 patients with 136 lesions (81 malignant and 55 benign lesions) from another institution were available for external testing. Results Of the 5,811 patients included in the study, the mean age was 46.14 (range 11–89) years. In the segmentation task, a DSC of 0.860 was obtained between the VNet-generated mask and manual segmentation by radiologists. In the characterization task, the AUCs of the multi-input and the other two single-input models were 0.927, 0.821, and 0.795, respectively. Compared to the single-input DWI or DCE model, the multi-input DCE and DWI model obtained a significant increase in sensitivity, specificity, and accuracy (0.831 vs. 0.772/0.776, 0.874 vs. 0.630/0.709, 0.846 vs. 0.721/0.752). 
Furthermore, the specificity of the multi-input model was higher than that of the radiologists, whether using BI-RADS category 3 or 4 as a cutoff point (0.874 vs. 0.404/0.841), and the accuracy was intermediate between the two assessment methods (0.846 vs. 0.773/0.882). For the external testing, the performance of the three models remained robust with AUCs of 0.812, 0.831, and 0.885, respectively. Conclusions Combining DCE with DWI was superior to applying a single sequence for breast lesion characterization. The deep learning computer-aided diagnosis (CADx) model we developed significantly improved specificity and achieved comparable accuracy to the radiologists with promise for clinical application to provide preliminary diagnoses.
Affiliation(s)
- Jingjin Zhu
- School of Medicine, Nankai University, Tianjin, China
- Department of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing, China
- Jiahui Geng
- Department of Neurology, Beijing Tiantan Hospital, Beijing, China
- Wei Shan
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Boya Zhang
- School of Medicine, Nankai University, Tianjin, China
- Department of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing, China
- Huaqing Shen
- Department of Neurology, Beijing Tiantan Hospital, Beijing, China
- Xiaohan Dong
- Department of Radiology, Chinese People’s Liberation Army General Hospital, Beijing, China
- Mei Liu
- Department of Pathology, Chinese People’s Liberation Army General Hospital, Beijing, China
- Xiru Li
- Department of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing, China
- *Correspondence: Liuquan Cheng; Xiru Li
- Liuquan Cheng
- Department of Radiology, Chinese People’s Liberation Army General Hospital, Beijing, China
- *Correspondence: Liuquan Cheng; Xiru Li
|
41
|
Vicini S, Bortolotto C, Rengo M, Ballerini D, Bellini D, Carbone I, Preda L, Laghi A, Coppola F, Faggioni L. A narrative review on current imaging applications of artificial intelligence and radiomics in oncology: focus on the three most common cancers. Radiol Med 2022; 127:819-836. [DOI: 10.1007/s11547-022-01512-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2022] [Accepted: 06/01/2022] [Indexed: 12/24/2022]
|
42
|
Deep Learning Approaches for Automatic Localization in Medical Images. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:6347307. [PMID: 35814554 PMCID: PMC9259335 DOI: 10.1155/2022/6347307] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/18/2022] [Accepted: 05/23/2022] [Indexed: 12/21/2022]
Abstract
Recent revolutionary advances in deep learning (DL) have fueled several breakthrough achievements in various complicated computer vision tasks. The remarkable successes started in 2012, when deep neural networks (DNNs) outperformed shallow machine learning models on a number of significant benchmarks. Significant advances were made in computer vision by conducting very complex image interpretation tasks with outstanding accuracy. These achievements have shown great promise in a wide variety of fields, especially in medical image analysis, by creating opportunities to diagnose and treat diseases earlier. In recent years, the application of DNNs for object localization has gained the attention of researchers due to its success over conventional methods. As this has become a very broad and rapidly growing field, this study presents a short review of DNN implementations for medical images and validates their efficacy on benchmarks. This is the first review that focuses on object localization using DNNs in medical images. The key aim of this study was to summarize recent DNN-based studies on medical image localization and to highlight research gaps that can provide worthwhile ideas to shape future research related to object localization tasks. It starts with an overview of the importance of medical image analysis and the existing technology in this space. The discussion then proceeds to the dominant DNNs utilized in the current literature. Finally, we conclude by discussing the challenges associated with the application of DNNs for medical image localization, which can drive further studies in identifying potential future developments in this field of study.
|
43
|
Bhowmik A, Eskreis-Winkler S. Deep learning in breast imaging. BJR Open 2022; 4:20210060. [PMID: 36105427 PMCID: PMC9459862 DOI: 10.1259/bjro.20210060] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2021] [Revised: 04/04/2022] [Accepted: 04/21/2022] [Indexed: 11/22/2022] Open
Abstract
Millions of breast imaging exams are performed each year in an effort to reduce the morbidity and mortality of breast cancer. Breast imaging exams are performed for cancer screening, diagnostic work-up of suspicious findings, evaluating extent of disease in recently diagnosed breast cancer patients, and determining treatment response. Yet, the interpretation of breast imaging can be subjective, tedious, time-consuming, and prone to human error. Retrospective and small reader studies suggest that deep learning (DL) has great potential to perform medical imaging tasks at or above human-level performance, and may be used to automate aspects of the breast cancer screening process, improve cancer detection rates, decrease unnecessary callbacks and biopsies, optimize patient risk assessment, and open up new possibilities for disease prognostication. Prospective trials are urgently needed to validate these proposed tools, paving the way for real-world clinical use. New regulatory frameworks must also be developed to address the unique ethical, medicolegal, and quality control issues that DL algorithms present. In this article, we review the basics of DL, describe recent DL breast imaging applications including cancer detection and risk prediction, and discuss the challenges and future directions of artificial intelligence-based systems in the field of breast cancer.
Affiliation(s)
- Arka Bhowmik
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Sarah Eskreis-Winkler
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
|
44
|
Balkenende L, Teuwen J, Mann RM. Application of Deep Learning in Breast Cancer Imaging. Semin Nucl Med 2022; 52:584-596. [PMID: 35339259 DOI: 10.1053/j.semnuclmed.2022.02.003] [Citation(s) in RCA: 44] [Impact Index Per Article: 14.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2022] [Revised: 02/15/2022] [Accepted: 02/16/2022] [Indexed: 11/11/2022]
Abstract
This review gives an overview of the current state of deep learning research in breast cancer imaging. Breast imaging plays a major role in detecting breast cancer at an earlier stage, as well as monitoring and evaluating breast cancer during treatment. The most commonly used modalities for breast imaging are digital mammography, digital breast tomosynthesis, ultrasound and magnetic resonance imaging. Nuclear medicine imaging techniques are used for detection and classification of axillary lymph nodes and distant staging in breast cancer imaging. All of these techniques are currently digitized, enabling the possibility to implement deep learning (DL), a subset of artificial intelligence, in breast imaging. DL is nowadays embedded in a plethora of different tasks, such as lesion classification and segmentation, image reconstruction and generation, cancer risk prediction, and prediction and assessment of therapy response. Studies show similar and even better performances of DL algorithms compared to radiologists, although it is clear that large trials are needed, especially for ultrasound and magnetic resonance imaging, to exactly determine the added value of DL in breast cancer imaging. Studies on DL in nuclear medicine techniques are only sparsely available and further research is mandatory. Legal and ethical issues need to be considered before the role of DL can expand to its full potential in clinical breast care practice.
Affiliation(s)
- Luuk Balkenende
- Department of Radiology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands; Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
- Jonas Teuwen
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands; Department of Radiation Oncology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands
- Ritse M Mann
- Department of Radiology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands; Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
|
45
|
Tomographic Ultrasound Imaging in the Diagnosis of Breast Tumors under the Guidance of Deep Learning Algorithms. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:9227440. [PMID: 35265119 PMCID: PMC8901319 DOI: 10.1155/2022/9227440] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/19/2021] [Revised: 01/23/2022] [Accepted: 02/01/2022] [Indexed: 11/18/2022]
Abstract
This study aimed to assess the feasibility of distinguishing benign and malignant breast tumors under tomographic ultrasound imaging (TUI) with a deep learning algorithm. The deep learning algorithm was used to segment the images, and 120 patients with breast tumors were included in this study, all of whom underwent routine ultrasound examinations. Subsequently, TUI was used to assist in guiding the positioning, and the light scattering tomography system was used to further measure the lesions. A deep learning model was established to process the imaging results, and the pathological test results were taken as the gold standard for evaluating the diagnostic efficiency of the different imaging methods for breast tumors. The results showed that, among the 120 patients with breast tumors, 56 had benign lesions and 64 had malignant lesions. The average total amount of hemoglobin (HBT) of malignant lesions was significantly higher than that of benign lesions (P < 0.05). The sensitivity, specificity, accuracy, positive predictive value, and negative predictive value of TUI in the diagnosis of breast cancer were 90.4%, 75.6%, 81.4%, 84.7%, and 80.6%, respectively. The sensitivity, specificity, accuracy, positive predictive value, and negative predictive value of ultrasound in the diagnosis of breast cancer were 81.7%, 64.9%, 70.5%, 75.9%, and 80.6%, respectively. In addition, for suspected breast malignant lesions, the combined application of ultrasound and tomography increased the diagnostic specificity to 82.1% and the accuracy to 83.8%. Based on the above results, it was concluded that TUI combined with ultrasound had a significant effect on the benign-versus-malignant diagnosis of breast cancer and can significantly improve the specificity and accuracy of diagnosis. It also showed that deep learning technology can play a useful auxiliary role in disease examination and is worth promoting in clinical application.
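All of the diagnostic metrics quoted in this entry derive from a 2×2 confusion matrix. The sketch below (illustrative Python with hypothetical counts, not data from the study) shows how sensitivity, specificity, accuracy, PPV, and NPV are computed from true/false positive and negative counts.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-performance metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # recall on truly malignant cases
        "specificity": tn / (tn + fp),  # recall on truly benign cases
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts for a 120-lesion cohort (64 malignant, 56 benign)
m = diagnostic_metrics(tp=58, fp=14, tn=42, fn=6)
print({k: round(v, 3) for k, v in m.items()})
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the benign/malignant mix of the cohort, which is one reason such figures transfer imperfectly between study populations.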
|
46
|
Deep learning in image-based breast and cervical cancer detection: a systematic review and meta-analysis. NPJ Digit Med 2022; 5:19. [PMID: 35169217 PMCID: PMC8847584 DOI: 10.1038/s41746-022-00559-z] [Citation(s) in RCA: 39] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2021] [Accepted: 12/22/2021] [Indexed: 12/15/2022] Open
Abstract
Accurate early detection of breast and cervical cancer is vital for treatment success. Here, we conduct a meta-analysis to assess the diagnostic performance of deep learning (DL) algorithms for early breast and cervical cancer identification. Four subgroups are also investigated: cancer type (breast or cervical), validation type (internal or external), imaging modalities (mammography, ultrasound, cytology, or colposcopy), and DL algorithms versus clinicians. Thirty-five studies are deemed eligible for systematic review, 20 of which are meta-analyzed, with a pooled sensitivity of 88% (95% CI 85–90%), specificity of 84% (79–87%), and AUC of 0.92 (0.90–0.94). Acceptable diagnostic performance with analogous DL algorithms was highlighted across all subgroups. Therefore, DL algorithms could be useful for detecting breast and cervical cancer using medical imaging, having equivalent performance to human clinicians. However, this tentative assertion is based on studies with relatively poor designs and reporting, which likely caused bias and overestimated algorithm performance. Evidence-based, standardized guidelines around study methods and reporting are required to improve the quality of DL research.
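The pooled estimates in this entry come from a formal meta-analysis (typically a bivariate random-effects model). As a simpler illustration of how a confidence interval around a single study's sensitivity can be obtained, here is a Wilson score interval sketch; the counts are hypothetical and this is not the pooling method used in the review.

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion.

    successes: e.g. number of cancers detected; n: number of cancers present.
    z = 1.96 gives an approximate 95% interval.
    """
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# Hypothetical single study: 176 of 200 cancers detected (sensitivity 0.88)
lo, hi = wilson_ci(176, 200)
print(round(lo, 3), round(hi, 3))  # → 0.828 0.918
```

Unlike the naive Wald interval, the Wilson interval stays inside [0, 1] and behaves sensibly for proportions near the extremes, which is common for sensitivity and specificity.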
|
47
|
Wu Y, Wu J, Dou Y, Rubert N, Wang Y, Deng J. A deep learning fusion model with evidence-based confidence level analysis for differentiation of malignant and benign breast tumors using dynamic contrast enhanced MRI. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103319] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
48
|
Subasree S, Sakthivel N, Tripathi K, Agarwal D, Tyagi AK. Combining the advantages of radiomic features based feature extraction and hyper parameters tuned RERNN using LOA for breast cancer classification. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103354] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
49
|
Yousef R, Gupta G, Yousef N, Khari M. A holistic overview of deep learning approach in medical imaging. MULTIMEDIA SYSTEMS 2022; 28:881-914. [PMID: 35079207 PMCID: PMC8776556 DOI: 10.1007/s00530-021-00884-5] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/09/2021] [Accepted: 12/23/2021] [Indexed: 05/07/2023]
Abstract
Medical images are a rich source of invaluable information for clinicians. Recent technologies have introduced many advances for exploiting this information to the fullest and using it to generate better analyses. Deep learning (DL) techniques have empowered medical image analysis in computer-assisted imaging contexts, offering many solutions and improvements for the analysis of these images by radiologists and other specialists. In this paper, we present a survey of DL techniques used for a variety of tasks across different medical imaging modalities, providing a critical review of recent developments in this direction. The paper is organized to first explain the main traits and concepts of deep learning, which is in turn helpful for non-experts in the medical community. We then present several applications of deep learning (e.g., segmentation, classification, and detection) that are commonly used for clinical purposes at different anatomical sites, and we also present the key terms for DL attributes such as basic architectures, data augmentation, transfer learning, and feature selection methods. Medical images as inputs to deep learning architectures will be the mainstream in the coming years, and novel DL techniques are predicted to become the core of medical image analysis. We conclude by addressing some research challenges and the solutions suggested for them in the literature, as well as future promises and directions for further development.
Affiliation(s)
- Rammah Yousef
- Yogananda School of AI Computer and Data Sciences, Shoolini University, Solan, 173229, Himachal Pradesh, India
- Gaurav Gupta
- Yogananda School of AI Computer and Data Sciences, Shoolini University, Solan, 173229, Himachal Pradesh, India
- Nabhan Yousef
- Electronics and Communication Engineering, Marwadi University, Rajkot, Gujarat, India
- Manju Khari
- Jawaharlal Nehru University, New Delhi, India
|
50
|
Cao C, Liu Z, Liu G, Jin S, Xia S. Ability of weakly supervised learning to detect acute ischemic stroke and hemorrhagic infarction lesions with diffusion-weighted imaging. Quant Imaging Med Surg 2022; 12:321-332. [PMID: 34993081 DOI: 10.21037/qims-21-324] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2021] [Accepted: 06/27/2021] [Indexed: 12/31/2022]
Abstract
BACKGROUND The gradient-recalled echo (GRE) sequence is time-consuming and not routinely performed. Herein, we aimed to investigate the ability of weakly supervised learning to identify acute ischemic stroke (AIS) and concurrent hemorrhagic infarction based on diffusion-weighted imaging (DWI). METHODS First, we proposed spatially locating small stroke lesions in different positions and hemorrhagic infarction lesions with residual neural and visual geometry group networks using weakly supervised learning. Next, to evaluate the performance of the weakly supervised methods, we compared the sensitivity and specificity of automatic identification of concurrent hemorrhagic infarction in stroke patients with those of human readings of diffusion and b0 images. The labeling time of the weakly supervised approach was also compared with that of the fully supervised approach. RESULTS Data from a total of 1,027 patients were analyzed. The residual neural network displayed higher sensitivity than the visual geometry group network in spatially locating the small stroke and hemorrhagic infarction lesions. The residual neural network had significantly greater patient-level sensitivity than the human readers (98.4% versus 86.2%, P=0.008) in identifying concurrent hemorrhagic infarction with GRE as the reference standard; however, their specificities were comparable (95.4% versus 96.9%, P>0.99). Weak labeling of lesions required significantly less time than full labeling of lesions (2.667 versus 10.115 minutes, P<0.001). CONCLUSIONS Weakly supervised learning was able to spatially locate small stroke lesions in different positions and showed higher sensitivity than human reading in identifying concurrent hemorrhagic infarction based on DWI. The proposed approach can reduce the labeling workload.
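The paired comparison of patient-level sensitivities in this entry (98.4% vs. 86.2%, P=0.008) is the kind of result typically obtained with McNemar's test on discordant pairs. The sketch below is illustrative Python with hypothetical discordant counts, not the study's data, showing an exact (binomial) version of that test.

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact (binomial) McNemar test for paired proportions.

    b: cases detected by method 1 only; c: cases detected by method 2 only.
    Under H0 the discordant pairs split as Binomial(b + c, 0.5).
    Returns the two-sided p-value.
    """
    n = b + c
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2.0 * tail)

# Hypothetical discordant counts: the network caught 9 hemorrhages the
# readers missed, while the readers caught 1 the network missed
print(round(mcnemar_exact(9, 1), 4))  # → 0.0215
```

Because the test uses only discordant pairs, it is well suited to comparing two detectors on the same patients, as in the network-versus-readers comparison above.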
Affiliation(s)
- Chen Cao
- Department of Radiology, First Central Clinical College, Tianjin Medical University, Tianjin, China; Department of Radiology, Tianjin Huanhu Hospital, Tianjin, China
- Zhiyang Liu
- Tianjin Key Laboratory of Optoelectronic Sensor and Sensing Network Technology, College of Electronic Information and Optical Engineering, Nankai University, Tianjin, China
- Guohua Liu
- Tianjin Key Laboratory of Optoelectronic Sensor and Sensing Network Technology, College of Electronic Information and Optical Engineering, Nankai University, Tianjin, China
- Song Jin
- Department of Radiology, Tianjin Huanhu Hospital, Tianjin, China
- Shuang Xia
- Department of Radiology, Tianjin First Central Hospital, School of Medicine, Nankai University, Tianjin, China
|