1.
Nowakowska S, Borkowski K, Ruppert C, Hejduk P, Ciritsis A, Landsmann A, Marcon M, Berger N, Boss A, Rossi C. Explainable Precision Medicine in Breast MRI: A Combined Radiomics and Deep Learning Approach for the Classification of Contrast Agent Uptake. Bioengineering (Basel) 2024; 11:556. [PMID: 38927793 PMCID: PMC11200390 DOI: 10.3390/bioengineering11060556]
Abstract
In DCE-MRI, the degree of contrast uptake in normal fibroglandular tissue, i.e., background parenchymal enhancement (BPE), is a crucial biomarker linked to breast cancer risk and treatment outcome. In accordance with the Breast Imaging Reporting & Data System (BI-RADS), it should be visually classified into four classes. The susceptibility of such an assessment to inter-reader variability highlights the urgent need for a standardized classification algorithm. In this retrospective study, the first post-contrast subtraction images for 27 healthy female subjects were included. The BPE was classified slice-wise by two expert radiologists. The extraction of radiomic features from segmented BPE was followed by dataset splitting and dimensionality reduction. The latent representations were then utilized as inputs to a deep neural network classifying BPE into BI-RADS classes. The network's predictions were elucidated at the radiomic feature level with Shapley values. The deep neural network achieved a BPE classification accuracy of 84 ± 2% (p-value < 0.00001). Most of the misclassifications involved adjacent classes. Different radiomic features were decisive for the prediction of each BPE class, underlining the complexity of the decision boundaries. A highly precise and explainable pipeline for BPE classification was achieved without user- or algorithm-dependent radiomic feature selection.
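The abstract above attributes the network's predictions to individual radiomic features with Shapley values. As an illustration only — not the authors' pipeline, which applies a deep network to latent radiomic representations — exact Shapley attributions for a small model can be computed by enumerating feature coalitions; `model`, `x`, and `baseline` here are hypothetical stand-ins:

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline, n_features):
    """Exact Shapley values for a model over n_features inputs.

    Features absent from a coalition are set to their baseline value.
    Cost is exponential in n_features, so this is only viable for
    small feature sets (real tools use sampling approximations).
    """
    phi = [0.0] * n_features
    names = list(range(n_features))
    for i in names:
        others = [j for j in names if j != i]
        for r in range(len(others) + 1):
            for subset in combinations(others, r):
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                w = (factorial(len(subset))
                     * factorial(n_features - len(subset) - 1)
                     / factorial(n_features))
                with_i = [x[j] if (j in subset or j == i) else baseline[j]
                          for j in names]
                without_i = [x[j] if j in subset else baseline[j]
                             for j in names]
                phi[i] += w * (model(with_i) - model(without_i))
    return phi
```

For an additive model the attributions reduce to each feature's marginal contribution, and the values always satisfy the efficiency property: they sum to the difference between the prediction at `x` and at the baseline.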
Affiliation(s)
- Sylwia Nowakowska
- Diagnostic and Interventional Radiology, University Hospital Zürich, University Zürich, Rämistrasse 100, 8091 Zürich, Switzerland (C.R.)
- Carlotta Ruppert
- Diagnostic and Interventional Radiology, University Hospital Zürich, University Zürich, Rämistrasse 100, 8091 Zürich, Switzerland (C.R.)
- b-rayZ AG, Wagistrasse 21, 8952 Schlieren, Switzerland
- Patryk Hejduk
- Diagnostic and Interventional Radiology, University Hospital Zürich, University Zürich, Rämistrasse 100, 8091 Zürich, Switzerland (C.R.)
- Alexander Ciritsis
- Diagnostic and Interventional Radiology, University Hospital Zürich, University Zürich, Rämistrasse 100, 8091 Zürich, Switzerland (C.R.)
- b-rayZ AG, Wagistrasse 21, 8952 Schlieren, Switzerland
- Anna Landsmann
- Diagnostic and Interventional Radiology, University Hospital Zürich, University Zürich, Rämistrasse 100, 8091 Zürich, Switzerland (C.R.)
- Magda Marcon
- Diagnostic and Interventional Radiology, University Hospital Zürich, University Zürich, Rämistrasse 100, 8091 Zürich, Switzerland (C.R.)
- Nicole Berger
- Diagnostic and Interventional Radiology, University Hospital Zürich, University Zürich, Rämistrasse 100, 8091 Zürich, Switzerland (C.R.)
- Andreas Boss
- Diagnostic and Interventional Radiology, University Hospital Zürich, University Zürich, Rämistrasse 100, 8091 Zürich, Switzerland (C.R.)
- Cristina Rossi
- Diagnostic and Interventional Radiology, University Hospital Zürich, University Zürich, Rämistrasse 100, 8091 Zürich, Switzerland (C.R.)
- b-rayZ AG, Wagistrasse 21, 8952 Schlieren, Switzerland
2.
Ripaud E, Jailin C, Quintana GI, Milioni de Carvalho P, Sanchez de la Rosa R, Vancamberg L. Deep-learning model for background parenchymal enhancement classification in contrast-enhanced mammography. Phys Med Biol 2024; 69:115013. [PMID: 38657641 DOI: 10.1088/1361-6560/ad42ff]
Abstract
Background. Breast background parenchymal enhancement (BPE) is correlated with the risk of breast cancer. BPE level is currently assessed by radiologists in contrast-enhanced mammography (CEM) using 4 classes: minimal, mild, moderate and marked, as described in the Breast Imaging Reporting and Data System (BI-RADS). However, BPE classification remains subject to intra- and inter-reader variability. Fully automated methods to assess BPE level have already been developed in breast contrast-enhanced MRI (CE-MRI) and have been shown to provide accurate and repeatable BPE level classification. However, to our knowledge, no BPE level classification tool is available in the literature for CEM. Materials and methods. A BPE level classification tool based on deep learning was trained and optimized on 7012 CEM image pairs (low-energy and recombined images) and evaluated on a dataset of 1013 image pairs. The impact of image resolution, backbone architecture and loss function was analyzed, as was the influence of lesion presence and type on BPE assessment. Model performance was evaluated using several metrics, including 4-class balanced accuracy and mean absolute error. The results of the optimized model for a binary classification, minimal/mild versus moderate/marked, were also investigated. Results. The optimized model achieved a 4-class balanced accuracy of 71.5% (95% CI: 71.2-71.9), with 98.8% of classification errors occurring between adjacent classes. For binary classification, the accuracy reached 93.0%. A slight, statistically non-significant decrease in model accuracy was observed in the presence of lesions, suggesting that the model is robust to the presence of lesions in the image for this classification task. Visual assessment also showed that the model is more affected by non-mass enhancements than by mass-like enhancements. Conclusion. The proposed BPE classification tool for CEM achieves results comparable to those published in the literature for CE-MRI.
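The evaluation metrics named in this abstract — 4-class balanced accuracy, mean absolute error, and the share of errors landing in a class adjacent to the truth — are standard for ordinal grading tasks. An illustrative reimplementation (not the authors' code):

```python
import numpy as np

def ordinal_metrics(y_true, y_pred, n_classes=4):
    """Balanced accuracy, mean absolute error, and the fraction of
    misclassifications that fall in a class adjacent to the truth,
    for ordinal labels 0..n_classes-1 (minimal, mild, moderate, marked).
    """
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    # Balanced accuracy: mean of per-class recalls over classes present
    recalls = [np.mean(y_pred[y_true == c] == c)
               for c in range(n_classes) if np.any(y_true == c)]
    balanced_acc = float(np.mean(recalls))
    mae = float(np.mean(np.abs(y_true - y_pred)))
    errors = y_true != y_pred
    # Among the errors, how many are off by exactly one class?
    adjacent = (float(np.mean(np.abs(y_true - y_pred)[errors] == 1))
                if errors.any() else 0.0)
    return balanced_acc, mae, adjacent
```

Balanced accuracy averages per-class recall, so a model cannot inflate its score by favoring the majority grade; MAE exploits the ordering of the grades, penalizing a minimal-vs-marked confusion more than a minimal-vs-mild one.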
3.
Shamir SB, Sasson AL, Margolies LR, Mendelson DS. New Frontiers in Breast Cancer Imaging: The Rise of AI. Bioengineering (Basel) 2024; 11:451. [PMID: 38790318 PMCID: PMC11117903 DOI: 10.3390/bioengineering11050451]
Abstract
Artificial intelligence (AI) has been implemented in multiple fields of medicine to assist in the diagnosis and treatment of patients. AI implementation in radiology, and in breast imaging in particular, has advanced considerably. Breast cancer is one of the leading causes of cancer mortality among women, and increased attention has been directed toward creating more efficacious AI-based methods of breast cancer detection that improve radiologist accuracy and efficiency and meet growing patient demand. AI can be applied to imaging studies to improve image quality, increase interpretation accuracy, and improve time and cost efficiency. AI applied to mammography, ultrasound, and MRI allows for improved cancer detection and diagnosis while decreasing intra- and interobserver variability. The synergy between a radiologist and AI has the potential to improve patient care in underserved populations, with the intention of providing quality and equitable care for all. Additionally, AI has allowed for improved risk stratification. AI can also have treatment implications, by identifying the risk of upstaging ductal carcinoma in situ (DCIS) to invasive carcinoma and by better predicting individualized patient response to neoadjuvant chemotherapy. AI further holds promise for pre-operative 3-dimensional models of the breast and for improving the viability of reconstructive grafts.
Affiliation(s)
- Stephanie B. Shamir
- Department of Diagnostic, Molecular and Interventional Radiology, The Icahn School of Medicine at Mount Sinai, 1 Gustave L. Levy Pl, New York, NY 10029, USA
4.
Wang H, van der Velden BHM, Verburg E, Bakker MF, Pijnappel RM, Veldhuis WB, van Gils CH, Gilhuijs KGA. Automated rating of background parenchymal enhancement in MRI of extremely dense breasts without compromising the association with breast cancer in the DENSE trial. Eur J Radiol 2024; 175:111442. [PMID: 38583349 DOI: 10.1016/j.ejrad.2024.111442]
Abstract
OBJECTIVES Background parenchymal enhancement (BPE) on dynamic contrast-enhanced MRI (DCE-MRI) as rated by radiologists is subject to inter- and intrareader variability. We aim to automate BPE categorization from DCE-MRI. METHODS This study is a secondary analysis of the Dense Tissue and Early Breast Neoplasm Screening trial. 4553 women with extremely dense breasts who received supplemental breast MRI screening in eight hospitals were included. Minimal, mild, moderate and marked BPE as rated by radiologists were used as reference. Fifteen quantitative MRI features of the fibroglandular tissue were extracted to predict BPE using Random Forest, Naïve Bayes, and KNN classifiers. Majority voting was used to combine the predictions. Internal-external validation was used for training and validation. The inverse-variance weighted mean accuracy was used to express mean performance across the eight hospitals. Cox regression was used to verify non-inferiority of the association between automated rating and breast cancer occurrence compared to the association for manual rating. RESULTS The accuracy of majority voting ranged between 0.56 and 0.84 across the eight hospitals. The weighted mean prediction accuracy for the four BPE categories was 0.76. The hazard ratio (HR) of BPE for breast cancer occurrence was comparable between automated and manual rating (HR = 2.12 versus HR = 1.97, P = 0.65 for mild/moderate/marked BPE relative to minimal BPE). CONCLUSION It is feasible to rate BPE automatically in DCE-MRI of women with extremely dense breasts without compromising the underlying association between BPE and breast cancer occurrence. The accuracy for minimal BPE is superior to that for the other BPE categories.
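Two ingredients of the pipeline described above — majority voting over several classifiers, and the inverse-variance weighted mean used to pool accuracy across hospitals — can be sketched as follows. This is an illustrative reimplementation; the tie-breaking rule and the source of the per-hospital variances are assumptions, not taken from the paper:

```python
import numpy as np

def majority_vote(predictions):
    """Combine per-classifier label predictions (one row per classifier,
    one column per case) by majority vote. Ties resolve toward the
    lowest label, since np.unique returns values in sorted order."""
    predictions = np.asarray(predictions)
    combined = []
    for col in predictions.T:
        vals, counts = np.unique(col, return_counts=True)
        combined.append(vals[np.argmax(counts)])
    return np.array(combined)

def inverse_variance_mean(accuracies, variances):
    """Inverse-variance weighted mean across centers (e.g., hospitals):
    centers with less uncertain estimates get proportionally more weight."""
    w = 1.0 / np.asarray(variances)
    return float(np.sum(w * np.asarray(accuracies)) / np.sum(w))
```

Inverse-variance weighting is the standard fixed-effect way to pool a per-center statistic: it minimizes the variance of the combined estimate when the per-center variances are known.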
Affiliation(s)
- Hui Wang
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Erik Verburg
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Marije F Bakker
- Julius Center for Health Sciences and Primary Care, Utrecht, The Netherlands
- Ruud M Pijnappel
- Department of Radiology, University Medical Center Utrecht, The Netherlands
- Wouter B Veldhuis
- Department of Radiology, University Medical Center Utrecht, The Netherlands
- Carla H van Gils
- Julius Center for Health Sciences and Primary Care, Utrecht, The Netherlands
- Kenneth G A Gilhuijs
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands.
5.
Li J, Jiang P, An Q, Wang GG, Kong HF. Medical image identification methods: A review. Comput Biol Med 2024; 169:107777. [PMID: 38104516 DOI: 10.1016/j.compbiomed.2023.107777]
Abstract
The identification of medical images is an essential task in computer-aided diagnosis, medical image retrieval and mining. Medical image data mainly include electronic health record data and gene information data, among others. Although intelligent imaging offers advantages over traditional methods that rely on handcrafted features, medical image analysis remains challenging due to the diversity of imaging modalities and clinical pathologies. The concepts behind the relevant methods, such as machine learning, deep learning, convolutional neural networks, transfer learning, and other image processing technologies for medical images, are analyzed and summarized in this paper. We reviewed recent studies to provide a comprehensive overview of how these methods are applied to medical image analysis tasks such as object detection, image classification, image registration, and segmentation. In particular, we emphasized the latest progress and contributions of different methods, summarized by application scenario: classification, segmentation, detection, and image registration. In addition, the applications of different methods are summarized by anatomical area, including the lung, brain, skin, kidney, breast, vertebrae, and musculoskeletal system, as well as digital pathology and neuromyelitis. A critical discussion of open challenges and directions for future research concludes the review; in particular, advanced algorithms from computer vision, natural language processing, and autonomous driving are expected to be applied to medical image recognition in the future.
Affiliation(s)
- Juan Li
- School of Information Engineering, Wuhan Business University, Wuhan, 430056, China; School of Artificial Intelligence, Wuchang University of Technology, Wuhan, 430223, China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, 130012, China
- Pan Jiang
- School of Information Engineering, Wuhan Business University, Wuhan, 430056, China
- Qing An
- School of Artificial Intelligence, Wuchang University of Technology, Wuhan, 430223, China
- Gai-Ge Wang
- School of Computer Science and Technology, Ocean University of China, Qingdao, 266100, China.
- Hua-Feng Kong
- School of Information Engineering, Wuhan Business University, Wuhan, 430056, China.
6.
Zhang B, Zhu J, Zhang P, Wei Y, Li Y, Xu A, Zhang Y, Zheng H, Dong X, Yang K, Dong C, Chen Z, Li X, Cheng L. A background parenchymal enhancement quantification framework of breast magnetic resonance imaging. Quant Imaging Med Surg 2023; 13:8350-8357. [PMID: 38106260 PMCID: PMC10721989 DOI: 10.21037/qims-23-514]
Abstract
Background Background parenchymal enhancement (BPE) is defined as the enhancing proportion of normal fibroglandular tissue on contrast-enhanced magnetic resonance imaging. BPE shows promise as a quantitative imaging biomarker (QIB). However, the lack of consensus among radiologists in their semi-quantitative grading of BPE limits its clinical utility. Methods The main objective of this study was to develop a BPE quantification model grounded in clinical expertise, with the BPE integral used as a QIB that incorporates both the volume and the intensity of enhancement. The model was applied to 2,786 cases, and the quantitative results were compared with radiologists' semi-quantitative BPE grading to evaluate the effectiveness of the BPE integral as a QIB. Comparisons between multiple groups of non-normally distributed BPE integrals were performed using the Kruskal-Wallis test. Results Our study found a considerable degree of concordance between the quantitative BPE integral and radiologists' semi-quantitative assessments. Specifically, the BPE integral differed significantly among all semi-quantitative BPE grading groups labeled by experienced radiologists: mild versus moderate (P<0.001), mild versus marked (P<0.001), and moderate versus marked (P<0.001). Furthermore, there was an apparent correlation between the BPE integral and BPE grade, with marked BPE displaying the highest BPE integral, followed by moderate BPE, and mild BPE exhibiting the lowest value. Conclusions The study developed and implemented a BPE quantification framework, incorporating both the volume and the intensity of enhancement, which could serve as a QIB for BPE.
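The abstract does not spell out the BPE integral formula, so the following is only one plausible, hypothetical reading of a metric that couples enhanced volume with enhancement intensity: sum the above-threshold enhancement over fibroglandular voxels. The function name, the `fgt_mask` input, and the `threshold` parameter are all assumptions for illustration:

```python
import numpy as np

def bpe_integral(subtraction_volume, fgt_mask, threshold):
    """Hypothetical BPE integral: total above-threshold enhancement
    over fibroglandular voxels. A larger enhancing volume or a more
    intense enhancement both increase the integral, so the measure
    couples the two as the abstract describes."""
    voxels = subtraction_volume[fgt_mask]
    enhancement = np.clip(voxels - threshold, 0.0, None)
    return float(enhancement.sum())
```

A metric of this shape grows monotonically with both the number of enhancing voxels and their intensity, which is the property a semi-quantitative grade (mild/moderate/marked) is meant to capture.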
Affiliation(s)
- Boya Zhang
- School of Medicine, Nankai University, Tianjin, China
- Department of General Surgery, The First Medical Center of Chinese People’s Liberation Army General Hospital, Beijing, China
- Jingjin Zhu
- School of Medicine, Nankai University, Tianjin, China
- Department of General Surgery, The First Medical Center of Chinese People’s Liberation Army General Hospital, Beijing, China
- Peifang Zhang
- Department of Big Data Center, The First Medical Center of Chinese PLA General Hospital, Beijing, China
- Yufan Wei
- School of Medicine, Nankai University, Tianjin, China
- Department of General Surgery, The First Medical Center of Chinese People’s Liberation Army General Hospital, Beijing, China
- Yan Li
- Department of General Surgery, The First Medical Center of Chinese People’s Liberation Army General Hospital, Beijing, China
- Aoxi Xu
- School of Medicine, Nankai University, Tianjin, China
- Department of General Surgery, The First Medical Center of Chinese People’s Liberation Army General Hospital, Beijing, China
- Yiheng Zhang
- Department of General Surgery, The First Medical Center of Chinese People’s Liberation Army General Hospital, Beijing, China
- Hongye Zheng
- School of Medicine, Nankai University, Tianjin, China
- Department of General Surgery, The First Medical Center of Chinese People’s Liberation Army General Hospital, Beijing, China
- Xiaohan Dong
- Department of Radiology, The Sixth Medical Center of Chinese People’s Liberation Army General Hospital, Beijing, China
- Kaizhou Yang
- Department of Radiology, The Sixth Medical Center of Chinese People’s Liberation Army General Hospital, Beijing, China
- Chuang Dong
- Department of Radiology, The Sixth Medical Center of Chinese People’s Liberation Army General Hospital, Beijing, China
- Zhengming Chen
- Department of Radiology, The Sixth Medical Center of Chinese People’s Liberation Army General Hospital, Beijing, China
- Xiru Li
- School of Medicine, Nankai University, Tianjin, China
- Department of General Surgery, The First Medical Center of Chinese People’s Liberation Army General Hospital, Beijing, China
- Liuquan Cheng
- Department of Radiology, The Sixth Medical Center of Chinese People’s Liberation Army General Hospital, Beijing, China
7.
Nowakowska S, Borkowski K, Ruppert CM, Landsmann A, Marcon M, Berger N, Boss A, Ciritsis A, Rossi C. Generalizable attention U-Net for segmentation of fibroglandular tissue and background parenchymal enhancement in breast DCE-MRI. Insights Imaging 2023; 14:185. [PMID: 37932462 PMCID: PMC10628070 DOI: 10.1186/s13244-023-01531-5]
Abstract
OBJECTIVES Development of automated segmentation models enabling standardized volumetric quantification of fibroglandular tissue (FGT) from native volumes and background parenchymal enhancement (BPE) from subtraction volumes of dynamic contrast-enhanced breast MRI, with subsequent assessment of the developed models in the context of FGT and BPE Breast Imaging Reporting and Data System (BI-RADS)-compliant classification. METHODS For the training and validation of attention U-Net models, data from a single 3.0-T scanner were used. For testing, additional data from a 1.5-T scanner and data acquired in a different institution with a 3.0-T scanner were utilized. The developed models were used to quantify the amount of FGT and BPE in 80 DCE-MRI examinations, and the correlation between these volumetric measures and the classes assigned by radiologists was assessed. RESULTS To assess model performance using application-relevant metrics, the correlation between the volumes of breast, FGT, and BPE calculated from ground-truth masks and from predicted masks was checked. Pearson correlation coefficients ranging from 0.963 ± 0.004 to 0.999 ± 0.001 were achieved. The Spearman correlation coefficient between the quantitative and the qualitative assessment, i.e., classification by radiologists, amounted to 0.70 (p < 0.0001) for FGT and 0.37 (p = 0.0006) for BPE. CONCLUSIONS Generalizable algorithms for FGT and BPE segmentation were developed and tested. Our results suggest that, when assessing FGT, it is sufficient to use volumetric measures alone; however, for the evaluation of BPE, additional models considering voxel intensity distribution and morphology are required. CRITICAL RELEVANCE STATEMENT A standardized assessment of FGT density can rely on volumetric measures, whereas in the case of BPE, the volumetric measures constitute, along with voxel intensity distribution and morphology, only one important factor.
KEY POINTS • Our work contributes to the standardization of FGT and BPE assessment. • Attention U-Net can reliably segment intricately shaped FGT and BPE structures. • The developed models were robust to domain shift.
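For reference, the two correlation measures reported above are closely related: Spearman's coefficient is Pearson's coefficient computed on rank-transformed data, which is why it is the natural choice for comparing continuous volumes against ordinal radiologist classes. A minimal sketch (no tie handling, unlike library implementations):

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation: normalized dot product of centered data."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    a, b = a - a.mean(), b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2)))

def spearman(a, b):
    """Spearman correlation: Pearson correlation of the ranks.
    Double argsort yields ranks; ties are NOT averaged here."""
    rank = lambda x: np.argsort(np.argsort(x)).astype(float)
    return pearson(rank(a), rank(b))
```

Spearman rewards any monotonic relationship with a coefficient of 1, while Pearson only rewards a linear one, so a nonlinear but monotonic volume-grade relationship shows up more clearly in the Spearman value.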
Affiliation(s)
- Sylwia Nowakowska
- Diagnostic and interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091, Zurich, Switzerland.
- Carlotta M Ruppert
- Diagnostic and interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- Anna Landsmann
- Diagnostic and interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- Magda Marcon
- Diagnostic and interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- Nicole Berger
- Diagnostic and interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- Present Address: Institut Radiologie, Spital Lachen, Oberdorfstrasse 41, 8853, Lachen, Switzerland
- Andreas Boss
- Diagnostic and interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- Present address: GZO AG Spital Wetzikon, Spitalstrasse 66, 8620, Wetzikon, Switzerland
- Alexander Ciritsis
- Diagnostic and interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- b-rayZ AG, Wagistrasse 21, 8952, Schlieren, Switzerland
- Cristina Rossi
- Diagnostic and interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- b-rayZ AG, Wagistrasse 21, 8952, Schlieren, Switzerland
8.
Raimundo JNC, Fontes JPP, Gonzaga Mendes Magalhães L, Guevara Lopez MA. An Innovative Faster R-CNN-Based Framework for Breast Cancer Detection in MRI. J Imaging 2023; 9:169. [PMID: 37754933 PMCID: PMC10532017 DOI: 10.3390/jimaging9090169]
Abstract
Having replaced lung cancer as the most commonly diagnosed cancer globally, breast cancer (BC) today accounts for 1 in 8 cancer diagnoses and a total of 2.3 million new cases in both sexes combined. An estimated 685,000 women died from BC in 2020, corresponding to 16%, or 1 in every 6, cancer deaths in women. BC represented a quarter of all cancer cases in females and was by far the most commonly diagnosed cancer in women in 2020. However, when detected in the early stages of the disease, treatment has proven very effective in increasing life expectancy and, in many cases, patients fully recover. Several medical imaging modalities, such as X-ray Mammography (MG), Ultrasound (US), Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Digital Tomosynthesis (DT), have been explored to support radiologists/physicians in clinical decision-making workflows for the detection and diagnosis of BC. In this work, we propose a novel Faster R-CNN-based framework to automate the detection of BC pathological lesions in MRI. As a main contribution, we have developed and statistically validated an innovative method improving the breast MRI preprocessing phase to select the patient's slices (images) and the associated bounding boxes representing pathological lesions. In this way, it is possible to create a more robust training (benchmarking) dataset to feed Deep Learning (DL) models, reducing the computation time and the dimension of the dataset and, more importantly, identifying with high accuracy the specific regions (bounding boxes) of each patient image in which a possible pathological lesion (tumor) has been identified. In an experimental setting using a fully annotated dataset (released to the public domain) comprising a total of 922 MRI-based BC patient cases, the most accurate trained model achieved an accuracy rate of 97.83%; applying ten-fold cross-validation, the trained models reached a mean accuracy of 94.46% with a standard deviation of 2.43%.
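A detection framework like the one above is conventionally scored by the intersection over union (IoU) between predicted and ground-truth bounding boxes. The abstract does not state the authors' matching criterion, so this is a generic sketch of the metric, assuming axis-aligned `(x1, y1, x2, y2)` boxes with positive area:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Overlap rectangle: max of the mins, min of the maxes
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A predicted box is typically counted as a true positive when its IoU with a ground-truth lesion box exceeds a threshold such as 0.5.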
Affiliation(s)
- João Pedro Pereira Fontes
- Centro ALGORITMI, Universidade do Minho, Campus de Azurém, 4800-058 Guimarães, Portugal; (J.P.P.F.); (L.G.M.M.)
- Miguel Angel Guevara Lopez
- Instituto Politécnico de Setúbal, Escola Superior de Tecnologia de Setúbal, 2914-508 Setúbal, Portugal;
- Centro ALGORITMI, Universidade do Minho, Campus de Azurém, 4800-058 Guimarães, Portugal; (J.P.P.F.); (L.G.M.M.)
9.
Zhao X, Bai JW, Guo Q, Ren K, Zhang GJ. Clinical applications of deep learning in breast MRI. Biochim Biophys Acta Rev Cancer 2023; 1878:188864. [PMID: 36822377 DOI: 10.1016/j.bbcan.2023.188864]
Abstract
Deep learning (DL) is one of the most powerful data-driven machine-learning techniques in artificial intelligence (AI). It can automatically learn from raw data without manual feature selection. DL models have led to remarkable advances in data extraction and analysis for medical imaging. Magnetic resonance imaging (MRI) has proven useful in delineating the characteristics and extent of breast lesions and tumors. This review summarizes the current state-of-the-art applications of DL models in breast MRI. Many recent DL models were examined in this field, along with several advanced learning approaches and methods for data normalization and breast and lesion segmentation. For clinical applications, DL-based breast MRI models were proven useful in five aspects: diagnosis of breast cancer, classification of molecular types, classification of histopathological types, prediction of neoadjuvant chemotherapy response, and prediction of lymph node metastasis. For subsequent studies, further improvement in data acquisition and preprocessing is necessary, additional DL techniques in breast MRI should be investigated, and wider clinical applications need to be explored.
Affiliation(s)
- Xue Zhao
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, China; Department of Breast-Thyroid-Surgery and Cancer Center, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Jing-Wen Bai
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Department of Oncology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Cancer Research Center, School of Medicine, Xiamen University, Xiamen, China
- Qiu Guo
- Department of Radiology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Ke Ren
- Department of Radiology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China.
- Guo-Jun Zhang
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Department of Breast-Thyroid-Surgery and Cancer Center, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Cancer Research Center, School of Medicine, Xiamen University, Xiamen, China.
10.
Eskreis-Winkler S, Sutton EJ, D'Alessio D, Gallagher K, Saphier N, Stember J, Martinez DF, Morris EA, Pinker K. Breast MRI Background Parenchymal Enhancement Categorization Using Deep Learning: Outperforming the Radiologist. J Magn Reson Imaging 2022; 56:1068-1076. [PMID: 35167152 PMCID: PMC9376189 DOI: 10.1002/jmri.28111]
Abstract
BACKGROUND Background parenchymal enhancement (BPE) is assessed on breast MRI reports as mandated by the Breast Imaging Reporting and Data System (BI-RADS) but is prone to inter- and intrareader variation. Semiautomated and fully automated BPE assessment tools have been developed, but none has surpassed radiologist BPE designations. PURPOSE To develop a deep learning model for automated BPE classification and to compare its performance with current standard-of-care radiology report BPE designations. STUDY TYPE Retrospective. POPULATION Consecutive high-risk patients (i.e., >20% lifetime risk of breast cancer) who underwent contrast-enhanced screening breast MRI from October 2013 to January 2019. The study included 5224 breast MRIs, divided into 3998 training, 444 validation, and 782 testing exams. On radiology reports, 1286 exams were categorized as high BPE (i.e., marked or moderate) and 3938 as low BPE (i.e., mild or minimal). FIELD STRENGTH/SEQUENCE A 1.5 T or 3 T system; one precontrast and three postcontrast phases of fat-saturated T1-weighted dynamic contrast-enhanced imaging. ASSESSMENT Breast MRIs were used to develop two deep learning models (Slab artificial intelligence [AI] and maximum intensity projection [MIP] AI) for BPE categorization, using radiology report BPE labels for training. Models were tested on a held-out test set using radiology report BPE and three-reader averaged consensus as the reference standards. STATISTICAL TESTS Model performance was assessed using receiver operating characteristic curve analysis. Associations between high BPE and BI-RADS assessments were evaluated using McNemar's chi-square test (α* = 0.025). RESULTS The Slab AI model significantly outperformed the MIP AI model across the full test set (area under the curve of 0.84 vs. 0.79) using the radiology report reference standard. Using the three-reader consensus BPE labels as the reference standard, the AI model significantly outperformed radiology report BPE labels. Finally, the AI model was significantly more likely than the radiologist to assign "high BPE" to suspicious breast MRIs and significantly less likely than the radiologist to assign "high BPE" to negative breast MRIs. DATA CONCLUSION Fully automated BPE assessments for breast MRIs could be more accurate than BPE assessments from radiology reports. LEVEL OF EVIDENCE 4. TECHNICAL EFFICACY Stage 3.
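The two model inputs compared in this study differ in how the 3D post-contrast volume is presented to the network: a maximum intensity projection (MIP) collapses the volume to a single 2D image by taking the voxel-wise maximum along one axis, whereas the slab input preserves more depth information. A minimal sketch of the MIP operation (array shapes are illustrative only, not taken from the paper):

```python
import numpy as np

def max_intensity_projection(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Collapse a 3D volume to a 2D image via the voxel-wise maximum."""
    return volume.max(axis=axis)

# Toy volume: 4 slices of 2x2 voxels, with one bright voxel in slice 2.
vol = np.zeros((4, 2, 2))
vol[2, 0, 0] = 5.0

mip = max_intensity_projection(vol)  # shape (2, 2); the bright voxel survives
```

Because the maximum is taken across slices, any strongly enhancing voxel dominates the projection, which is why a MIP can discard the spatial context a slab-based model still sees.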
Affiliation(s)
- Sarah Eskreis-Winkler
- Department of Radiology, Breast Imaging Service, Memorial Sloan Kettering Cancer Center, 300 E 66th Street, New York, NY 10065, USA
- Elizabeth J. Sutton
- Department of Radiology, Breast Imaging Service, Memorial Sloan Kettering Cancer Center, 300 E 66th Street, New York, NY 10065, USA
- Donna D’Alessio
- Department of Radiology, Breast Imaging Service, Memorial Sloan Kettering Cancer Center, 300 E 66th Street, New York, NY 10065, USA
- Katherine Gallagher
- Department of Radiology, Breast Imaging Service, Memorial Sloan Kettering Cancer Center, 300 E 66th Street, New York, NY 10065, USA
- Nicole Saphier
- Department of Radiology, Breast Imaging Service, Memorial Sloan Kettering Cancer Center, 300 E 66th Street, New York, NY 10065, USA
- Joseph Stember
- Department of Radiology, Neuroradiology Service, Memorial Sloan Kettering Cancer Center, 1275 York Avenue, New York, NY 10065, USA
- Danny F Martinez
- Department of Radiology, Breast Imaging Service, Memorial Sloan Kettering Cancer Center, 300 E 66th Street, New York, NY 10065, USA
- Katja Pinker
- Department of Radiology, Breast Imaging Service, Memorial Sloan Kettering Cancer Center, 300 E 66th Street, New York, NY 10065, USA
11
Kim HE, Cosa-Linan A, Santhanam N, Jannesari M, Maros ME, Ganslandt T. Transfer learning for medical image classification: a literature review. BMC Med Imaging 2022; 22:69. [PMID: 35418051 PMCID: PMC9007400 DOI: 10.1186/s12880-022-00793-7] [Citation(s) in RCA: 89] [Impact Index Per Article: 44.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Accepted: 03/30/2022] [Indexed: 02/07/2023] Open
Abstract
BACKGROUND Transfer learning (TL) with convolutional neural networks aims to improve performance on a new task by leveraging knowledge of similar tasks learned in advance. It has made a major contribution to medical image analysis, as it overcomes the data scarcity problem and saves time and hardware resources. However, transfer learning has been arbitrarily configured in the majority of studies. This review attempts to provide guidance for selecting a model and TL approaches for medical image classification tasks. METHODS A total of 425 peer-reviewed articles published in English up until December 31, 2020, were retrieved from two databases, PubMed and Web of Science. Articles were assessed by two independent reviewers, with the aid of a third reviewer in the case of discrepancies. We followed the PRISMA guidelines for paper selection, and 121 studies were regarded as eligible for the scope of this review. We investigated articles focused on selecting backbone models and TL approaches, including feature extractor, feature extractor hybrid, fine-tuning, and fine-tuning from scratch. RESULTS The majority of studies (n = 57) empirically evaluated multiple models, followed by deep models (n = 33) and shallow models (n = 24). Inception, one of the deep models, was the most employed in the literature (n = 26). With respect to TL, the majority of studies (n = 46) empirically benchmarked multiple approaches to identify the optimal configuration. The rest of the studies applied only a single approach, for which feature extractor (n = 38) and fine-tuning from scratch (n = 27) were the two most favored approaches. Only a few studies applied feature extractor hybrid (n = 7) and fine-tuning (n = 3) with pretrained models. CONCLUSION The investigated studies demonstrated the efficacy of transfer learning despite data scarcity. We encourage data scientists and practitioners to use deep models (e.g., ResNet or Inception) as feature extractors, which can save computational costs and time without degrading the predictive power.
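The feature-extractor strategy the review recommends keeps the pretrained trunk frozen and trains only a small classification head on top of its outputs. The sketch below illustrates the pattern with a stand-in backbone (a fixed random projection with ReLU) rather than a real pretrained ResNet or Inception; the synthetic task and all shapes are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "pretrained backbone": a fixed random projection with ReLU.
# In a real pipeline this would be a frozen ResNet/Inception trunk.
W_backbone = rng.normal(size=(64, 16)) / np.sqrt(64)

def extract_features(x: np.ndarray) -> np.ndarray:
    """Frozen backbone: map raw inputs to features; its weights are never updated."""
    return np.maximum(x @ W_backbone, 0.0)

# Synthetic task that is linearly separable in feature space.
X = rng.normal(size=(200, 64))
F = extract_features(X)
w_true = rng.normal(size=16)
y = (F @ w_true > 0).astype(float)

# Train only the lightweight head (logistic regression via gradient descent).
w, b = np.zeros(16), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    w -= 0.5 * F.T @ (p - y) / len(y)
    b -= 0.5 * float(np.mean(p - y))

train_acc = float(np.mean(((F @ w + b) > 0) == y.astype(bool)))
```

Because only the 16-weight head is optimized, training is cheap and needs far less data than fine-tuning the whole backbone, which is the computational saving the review's conclusion refers to.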
Affiliation(s)
- Hee E Kim
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Alejandro Cosa-Linan
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Nandhini Santhanam
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Mahboubeh Jannesari
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Mate E Maros
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Thomas Ganslandt
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Chair of Medical Informatics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Wetterkreuz 15, 91058, Erlangen, Germany
12
Grøvik E, Hoff SR. Editorial for "Breast MRI Background Parenchymal Enhancement Categorization Using Deep Learning: Outperforming the Radiologist". J Magn Reson Imaging 2022; 56:1077-1078. [PMID: 35343010 DOI: 10.1002/jmri.28183] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2022] [Accepted: 03/15/2022] [Indexed: 11/10/2022] Open
Affiliation(s)
- Endre Grøvik
- Department of Radiology, Ålesund Hospital, Møre og Romsdal Hospital Trust, Ålesund, Norway; Department of Physics, Norwegian University of Science and Technology, Trondheim, Norway
- Solveig Roth Hoff
- Department of Radiology, Ålesund Hospital, Møre og Romsdal Hospital Trust, Ålesund, Norway; Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
13
Chalfant JS, Mortazavi S, Lee-Felker SA. Background Parenchymal Enhancement on Breast MRI: Assessment and Clinical Implications. CURRENT RADIOLOGY REPORTS 2021. [DOI: 10.1007/s40134-021-00386-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
Purpose of Review
To present recent literature regarding the assessment and clinical implications of background parenchymal enhancement (BPE) on breast MRI.
Recent Findings
The qualitative assessment of BPE remains variable within the literature, as well as in clinical practice. Several different quantitative approaches have been investigated in recent years, most commonly region of interest-based and segmentation-based assessments. However, quantitative assessment has not become standard in clinical practice to date. Numerous studies have demonstrated a clear association between higher BPE and future breast cancer risk. While higher BPE does not appear to significantly impact cancer detection, it may result in a higher abnormal interpretation rate. BPE is also likely a marker of pathologic complete response (pCR) after neoadjuvant chemotherapy, with decreases in BPE during and after neoadjuvant chemotherapy correlating with pCR. In contrast, pre-treatment BPE does not appear to be predictive of pCR. The association between BPE and prognosis is less clear, with heterogeneous results in the literature.
Summary
Assessment of BPE continues to evolve, with heterogeneity in approaches to both qualitative and quantitative assessment. The level of BPE has important clinical implications, with associations with future breast cancer risk and treatment response. BPE may also be an imaging marker of prognosis, but future research is needed on this topic.