1
Shamir SB, Sasson AL, Margolies LR, Mendelson DS. New Frontiers in Breast Cancer Imaging: The Rise of AI. Bioengineering (Basel) 2024; 11:451. [PMID: 38790318] [PMCID: PMC11117903] [DOI: 10.3390/bioengineering11050451]
Abstract
Artificial intelligence (AI) has been implemented in multiple fields of medicine to assist in the diagnosis and treatment of patients. AI implementation in radiology, and breast imaging in particular, has advanced considerably. Breast cancer is one of the leading causes of cancer mortality among women, and increasing attention has been directed toward more efficacious AI-based methods of breast cancer detection that improve radiologist accuracy and efficiency to meet growing patient demand. AI can be applied to imaging studies to improve image quality, interpretation accuracy, and time and cost efficiency. Applied to mammography, ultrasound, and MRI, AI allows for improved cancer detection and diagnosis while decreasing intra- and interobserver variability. The synergy between a radiologist and AI has the potential to improve patient care in underserved populations, with the intention of providing quality and equitable care for all. Additionally, AI has allowed for improved risk stratification. Further, AI can have treatment implications as well, by identifying the risk of upstaging ductal carcinoma in situ (DCIS) to invasive carcinoma and by better predicting individualized patient response to neoadjuvant chemotherapy. AI also has potential for advancing pre-operative 3-dimensional models of the breast and for improving the viability of reconstructive grafts.
Affiliation(s)
- Stephanie B. Shamir
- Department of Diagnostic, Molecular and Interventional Radiology, The Icahn School of Medicine at Mount Sinai, 1 Gustave L. Levy Pl, New York, NY 10029, USA
2
Cè M, Caloro E, Pellegrino ME, Basile M, Sorce A, Fazzini D, Oliva G, Cellina M. Artificial intelligence in breast cancer imaging: risk stratification, lesion detection and classification, treatment planning and prognosis-a narrative review. Explor Target Antitumor Ther 2022; 3:795-816. [PMID: 36654817] [PMCID: PMC9834285] [DOI: 10.37349/etat.2022.00113]
Abstract
The advent of artificial intelligence (AI) represents a real game changer in today's landscape of breast cancer imaging. Several innovative AI-based tools have been developed and validated in recent years that promise to accelerate the goal of real patient-tailored management. Numerous studies confirm that proper integration of AI into existing clinical workflows could bring significant benefits to women, radiologists, and healthcare systems. The AI-based approach has proved particularly useful for developing new risk prediction models that integrate multi-data streams for planning individualized screening protocols. Furthermore, AI models could help radiologists in the pre-screening and lesion detection phase, increasing diagnostic accuracy, while reducing workload and complications related to overdiagnosis. Radiomics and radiogenomics approaches could extrapolate the so-called imaging signature of the tumor to plan a targeted treatment. The main challenges to the development of AI tools are the huge amounts of high-quality data required to train and validate these models and the need for a multidisciplinary team with solid machine-learning skills. The purpose of this article is to present a summary of the most important AI applications in breast cancer imaging, analyzing possible challenges and new perspectives related to the widespread adoption of these new tools.
Affiliation(s)
- Maurizio Cè
- Postgraduate School in Diagnostic and Interventional Radiology, University of Milan, 20122 Milan, Italy. Correspondence: Maurizio Cè, Postgraduate School in Diagnostic and Interventional Radiology, University of Milan, Via Festa del Perdono, 7, 20122 Milan, Italy.
- Elena Caloro
- Postgraduate School in Diagnostic and Interventional Radiology, University of Milan, 20122 Milan, Italy
- Maria E. Pellegrino
- Postgraduate School in Diagnostic and Interventional Radiology, University of Milan, 20122 Milan, Italy
- Mariachiara Basile
- Postgraduate School in Diagnostic and Interventional Radiology, University of Milan, 20122 Milan, Italy
- Adriana Sorce
- Postgraduate School in Diagnostic and Interventional Radiology, University of Milan, 20122 Milan, Italy
- Giancarlo Oliva
- Department of Radiology, ASST Fatebenefratelli Sacco, 20121 Milan, Italy
- Michaela Cellina
- Department of Radiology, ASST Fatebenefratelli Sacco, 20121 Milan, Italy
3
Hu X, Jiang L, You C, Gu Y. Fibroglandular Tissue and Background Parenchymal Enhancement on Breast MR Imaging Correlates With Breast Cancer. Front Oncol 2021; 11:616716. [PMID: 34660251] [PMCID: PMC8515131] [DOI: 10.3389/fonc.2021.616716]
Abstract
Objectives: To evaluate the association of breast cancer with the background parenchymal enhancement intensity and volume (BPEI and BPEV, respectively) and the amount of fibroglandular tissue (FGT), using an automatic quantitative assessment method in breast magnetic resonance imaging (MRI). Materials and Methods: Among 17,274 women who underwent breast MRI, 132 normal women (control group), 132 women with benign breast lesions (benign group), and 132 women with breast cancer (cancer group) were randomly selected and matched by age and menopausal status. The area under the receiver operating characteristic curve (AUC) was compared for cancer vs. control and cancer vs. benign to assess the discriminative ability of BPEI, BPEV, and FGT. Results: Compared with the control group, the cancer group showed a significant difference in BPEV, with maximum AUCs of 0.715 and 0.684 in the premenopausal and postmenopausal subgroups, respectively. Compared with the benign group, the cancer group likewise showed a significant difference in BPEV, with maximum AUCs of 0.622 and 0.633 in the premenopausal and postmenopausal subgroups, respectively. FGT showed no significant difference when the cancer group was compared with either the control group or the benign group. BPEI showed only a slight difference between the cancer and control groups and no significant difference between the cancer and benign groups. Conclusion: Increased BPEV is correlated with a high risk of breast cancer, while FGT is not.
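The AUC values reported in this abstract can be read as a rank statistic: the probability that a randomly chosen cancer case has a higher BPEV than a randomly chosen control. A minimal numpy sketch of that definition (illustrative only, with hypothetical BPEV values, not the study's code):

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """AUC as the probability that a randomly chosen positive case
    scores higher than a randomly chosen negative case
    (Mann-Whitney U formulation); ties count one half."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    # Pairwise comparison of every positive against every negative.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Toy example: hypothetical BPEV values for cancer vs. control cases.
auc = roc_auc([5.1, 4.8, 6.0, 3.9], [3.0, 4.0, 2.5, 4.9])
```

An AUC of 0.5 corresponds to no discrimination and 1.0 to perfect separation of the two groups.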
Affiliation(s)
- Xiaoxin Hu
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Fudan University Shanghai Medical College, Shanghai, China
- Luan Jiang
- Center for Advanced Medical Imaging Technology, Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai, China
- Chao You
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Fudan University Shanghai Medical College, Shanghai, China
- Yajia Gu
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Fudan University Shanghai Medical College, Shanghai, China
4
Chalfant JS, Mortazavi S, Lee-Felker SA. Background Parenchymal Enhancement on Breast MRI: Assessment and Clinical Implications. Curr Radiol Rep 2021. [DOI: 10.1007/s40134-021-00386-2]
Abstract
Purpose of Review
To present recent literature regarding the assessment and clinical implications of background parenchymal enhancement on breast MRI.
Recent Findings
The qualitative assessment of BPE remains variable within the literature, as well as in clinical practice. Several different quantitative approaches have been investigated in recent years, most commonly region of interest-based and segmentation-based assessments. However, quantitative assessment has not become standard in clinical practice to date. Numerous studies have demonstrated a clear association between higher BPE and future breast cancer risk. While higher BPE does not appear to significantly impact cancer detection, it may result in a higher abnormal interpretation rate. BPE is also likely a marker of pathologic complete response (pCR) after neoadjuvant chemotherapy, with decreases in BPE during and after neoadjuvant chemotherapy correlated with pCR. In contrast, pre-treatment BPE does not appear to be predictive of pCR. The association between BPE and prognosis is less clear, with heterogeneous results in the literature.
Summary
Assessment of BPE continues to evolve, with heterogeneity in approaches to both qualitative and quantitative assessment. The level of BPE has important clinical implications, with associations with future breast cancer risk and treatment response. BPE may also be an imaging marker of prognosis, but future research is needed on this topic.
5
Huo L, Hu X, Xiao Q, Gu Y, Chu X, Jiang L. Segmentation of whole breast and fibroglandular tissue using nnU-Net in dynamic contrast enhanced MR images. Magn Reson Imaging 2021; 82:31-41. [PMID: 34147598] [DOI: 10.1016/j.mri.2021.06.017]
Abstract
PURPOSE Segmentation of the whole breast and fibroglandular tissue (FGT) is important for quantitatively analyzing breast cancer risk in dynamic contrast-enhanced magnetic resonance (DCE-MR) images. The purpose of this study is to improve the accuracy and efficiency of segmentation of the whole breast and FGT in 3-D fat-suppressed DCE-MR images with a versatile deep learning (DL) framework. METHODS We randomly collected 100 breast DCE-MR scans from Shanghai Cancer Hospital of Fudan University. The MR scans in the dataset differed in both spatial resolution and the MR scanners employed. Furthermore, four breast density categories were assessed by radiologists based on the Breast Imaging Reporting and Data System (BI-RADS) of the American College of Radiology. The dataset was separated into training and testing sets while keeping a balanced distribution of scans with different imaging parameters and density categories. The nnU-Net has recently been proposed to automatically adapt preprocessing strategies and network architectures to a given medical image dataset, showing great potential for the systematic adaptation of DL methods to different datasets. In this study, we applied the nnU-Net to segment the whole breast and FGT in 3-D fat-suppressed DCE-MR images. Five-fold cross validation was employed to train and validate the segmentation method. RESULTS The segmentation performance was evaluated with volume and surface agreement between the DL-based automatic and the manually delineated masks, quantified for the whole breast and the FGT segmentation, respectively, as: average Dice volume overlap (0.968 ± 0.017 and 0.877 ± 0.081), average surface distance (0.201 ± 0.080 mm and 0.310 ± 0.043 mm), and Pearson correlation coefficient of the masks (0.995 and 0.972).
The correlation coefficient between the breast densities obtained with the DL-based segmentation and the manual delineation was 0.981. There was a positive bias of 0.8% (DL-based relative to manual) in breast density measurement with the Bland-Altman plot. The execution time of the DL-based segmentation was approximately 20 s for the whole breast segmentation and 15 s for the FGT segmentation. CONCLUSIONS Our DL-based segmentation framework using nnU-Net could robustly achieve high accuracy and efficiency across variable MR imaging settings without extra pre- or post-processing procedures. It would be useful for developing DCE-MR-based CAD systems to quantify breast cancer risk and to be integrated into the clinical workflow.
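The Dice volume overlap and breast density measures evaluated in this abstract can be sketched in a few lines of numpy. This is an illustration of the metrics only, not the authors' implementation, and the toy masks are hypothetical:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A∩B| / (|A| + |B|)."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def percent_density(fgt_mask, breast_mask):
    """Breast density as FGT volume over whole-breast volume, in percent."""
    return 100.0 * fgt_mask.sum() / breast_mask.sum()

# Toy 3-D masks (real inputs would be voxel arrays from DCE-MR volumes).
auto = np.zeros((4, 4, 4), dtype=bool)
auto[1:3, 1:3, 1:3] = True      # 8 voxels, "automatic" mask
manual = np.zeros_like(auto)
manual[1:3, 1:3, 0:2] = True    # 8 voxels, "manual" mask; overlaps in 4
```

Comparing densities computed from automatic versus manual masks over many scans is what the reported correlation coefficient (0.981) and Bland-Altman bias (+0.8%) summarize.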
Affiliation(s)
- Lu Huo
- Center for Advanced Medical Imaging Technology, Shanghai Advanced Research Institute, Chinese Academy of Sciences, No. 99 Haike Road, Shanghai 201200, China; University of Chinese Academy of Sciences, No. 19 Yuquan Road, Beijing 100049, China; Shanghai United Imaging Healthcare Co., Ltd., No. 2258 Chengbei Road, Shanghai 201807, China
- Xiaoxin Hu
- Department of Radiology, Shanghai Cancer Hospital of Fudan University, No. 270 DongAn Road, Shanghai 200032, China
- Qin Xiao
- Department of Radiology, Shanghai Cancer Hospital of Fudan University, No. 270 DongAn Road, Shanghai 200032, China
- Yajia Gu
- Department of Radiology, Shanghai Cancer Hospital of Fudan University, No. 270 DongAn Road, Shanghai 200032, China
- Xu Chu
- Center for Advanced Medical Imaging Technology, Shanghai Advanced Research Institute, Chinese Academy of Sciences, No. 99 Haike Road, Shanghai 201200, China; Shanghai United Imaging Healthcare Co., Ltd., No. 2258 Chengbei Road, Shanghai 201807, China
- Luan Jiang
- Center for Advanced Medical Imaging Technology, Shanghai Advanced Research Institute, Chinese Academy of Sciences, No. 99 Haike Road, Shanghai 201200, China; Shanghai United Imaging Healthcare Co., Ltd., No. 2258 Chengbei Road, Shanghai 201807, China
6
Borkowski K, Rossi C, Ciritsis A, Marcon M, Hejduk P, Stieb S, Boss A, Berger N. Fully automatic classification of breast MRI background parenchymal enhancement using a transfer learning approach. Medicine (Baltimore) 2020; 99:e21243. [PMID: 32702902] [PMCID: PMC7373599] [DOI: 10.1097/md.0000000000021243]
Abstract
Marked enhancement of the fibroglandular tissue on contrast-enhanced breast magnetic resonance imaging (MRI) may affect lesion detection and classification and is suggested to be associated with a higher risk of developing breast cancer. Background parenchymal enhancement (BPE) is qualitatively classified according to the BI-RADS atlas into the categories "minimal," "mild," "moderate," and "marked." The purpose of this study was to train a deep convolutional neural network (dCNN) for standardized and automatic classification of BPE categories. This IRB-approved retrospective study included 11,769 single MR images from 149 patients. The MR images were derived from the subtraction between the first post-contrast volume and the native T1-weighted images. A hierarchic approach was implemented, relying on two dCNN models for detection of MR slices imaging breast tissue and for BPE classification, respectively. Data annotation was performed by two board-certified radiologists, whose consensus was chosen as the reference for BPE classification. The clinical performances of the single readers and of the dCNN were statistically compared using the quadratic Cohen's kappa. Slices depicting the breast were classified with training, validation, and real-world (test) accuracies of 98%, 96%, and 97%, respectively. Over the four classes, BPE classification reached mean accuracies of 74% for training, 75% for validation, and 75% for the real-world dataset. As compared to the reference, the inter-reader reliabilities for the radiologists were 0.780 (reader 1) and 0.679 (reader 2), while the reliability for the dCNN model was 0.815. Automatic classification of BPE can be performed with high accuracy and support the standardization of tissue classification in MRI.
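The quadratically weighted Cohen's kappa used here to compare readers and the dCNN penalizes disagreements by the square of their ordinal distance, which suits ordered categories like BPE. A small numpy sketch of the statistic (not the study's code; the labels are hypothetical ordinal BPE categories 0=minimal through 3=marked):

```python
import numpy as np

def quadratic_kappa(r1, r2, n_classes=4):
    """Quadratically weighted Cohen's kappa for two ordinal raters."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    # Observed confusion matrix between the two raters.
    O = np.zeros((n_classes, n_classes))
    for a, b in zip(r1, r2):
        O[a, b] += 1
    # Expected matrix under independence, from the marginal distributions.
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    # Quadratic disagreement weights: (i - j)^2 scaled to [0, 1].
    i, j = np.indices((n_classes, n_classes))
    W = (i - j) ** 2 / (n_classes - 1) ** 2
    return 1.0 - (W * O).sum() / (W * E).sum()
```

A kappa of 1 indicates perfect agreement, 0 indicates chance-level agreement.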
7
Jiao H, Jiang X, Pang Z, Lin X, Huang Y, Li L. Deep Convolutional Neural Networks-Based Automatic Breast Segmentation and Mass Detection in DCE-MRI. Comput Math Methods Med 2020; 2020:2413706. [PMID: 32454879] [PMCID: PMC7232735] [DOI: 10.1155/2020/2413706]
Abstract
Breast segmentation and mass detection in medical images are important for diagnosis and treatment follow-up. Automation of these challenging tasks can assist radiologists by reducing the high manual workload of breast cancer analysis. In this paper, deep convolutional neural networks (DCNN) were employed for breast segmentation and mass detection in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). First, the region of the breasts was segmented from the remaining body parts by building a fully convolutional neural network based on U-Net++. Using deep learning to extract the target area helps to reduce interference from outside the breast. Second, a Faster Region-based Convolutional Neural Network (Faster R-CNN) was used for mass detection on the segmented breast images. The DCE-MRI dataset used in this study was obtained from 75 patients, and a 5-fold cross validation method was adopted. The statistical analysis of breast region segmentation was carried out by computing the Dice similarity coefficient (DSC), Jaccard coefficient, and segmentation sensitivity. For validation of breast mass detection, the sensitivity with the number of false positives per case was computed and analyzed. The Dice and Jaccard coefficients and the segmentation sensitivity for breast region segmentation were 0.951, 0.908, and 0.948, respectively, which were better than those of the original U-Net algorithm, and the average sensitivity for mass detection reached 0.874 with 3.4 false positives per case.
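The Dice and Jaccard coefficients reported together here are deterministically related (J = DSC / (2 - DSC) and DSC = 2J / (1 + J)), so reported pairs can be sanity-checked against each other; a small sketch:

```python
def jaccard_from_dice(d):
    # J = DSC / (2 - DSC): both metrics compare overlap to mask sizes,
    # Jaccard against the union, Dice against the sum.
    return d / (2 - d)

def dice_from_jaccard(j):
    # Inverse relation: DSC = 2J / (1 + J).
    return 2 * j / (1 + j)
```

For example, the paper's DSC of 0.951 implies a Jaccard of about 0.907, consistent (to rounding) with the reported 0.908.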
Affiliation(s)
- Han Jiao
- School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou 510006, China
- Xinhua Jiang
- Department of Medical Imaging, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, China
- Zhiyong Pang
- School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou 510006, China
- Xiaofeng Lin
- Department of Medical Imaging, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, China
- Yihua Huang
- School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou 510006, China
- Li Li
- Department of Medical Imaging, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, China
8
Automatic Breast and Fibroglandular Tissue Segmentation in Breast MRI Using Deep Learning by a Fully-Convolutional Residual Neural Network U-Net. Acad Radiol 2019; 26:1526-1535. [PMID: 30713130] [DOI: 10.1016/j.acra.2019.01.012]
Abstract
RATIONALE AND OBJECTIVES Breast segmentation using the U-Net architecture was implemented and tested in independent validation datasets to quantify fibroglandular tissue volume in breast MRI. MATERIALS AND METHODS Two datasets were used. The training set comprised MRI of 286 patients with unilateral breast cancer; segmentation was done on the contralateral normal breasts. The ground truth for the breast and fibroglandular tissue (FGT) was obtained by using a template-based segmentation method. The U-Net deep learning algorithm was implemented to analyze the training set, and the final model was obtained using 10-fold cross-validation. The independent validation set was MRI of 28 normal volunteers acquired using four different MR scanners. Dice Similarity Coefficient (DSC), voxel-based accuracy, and Pearson's correlation were used to evaluate the performance. RESULTS For the 10-fold cross-validation in the initial training set of 286 patients, the DSC range was 0.83-0.98 (mean 0.95 ± 0.02) for breast and 0.73-0.97 (mean 0.91 ± 0.03) for FGT; the accuracy range was 0.92-0.99 (mean 0.98 ± 0.01) for breast and 0.87-0.99 (mean 0.97 ± 0.01) for FGT. For the entire 224 testing breasts of the 28 normal volunteers in the validation datasets, the mean DSC was 0.86 ± 0.05 for breast and 0.83 ± 0.06 for FGT, and the mean accuracy was 0.94 ± 0.03 for breast and 0.93 ± 0.04 for FGT. The testing results for MRI acquired using the four different scanners were comparable. CONCLUSION Deep learning based on the U-Net algorithm can achieve accurate segmentation results for the breast and FGT on MRI. It may provide a reliable and efficient method to process large numbers of MR images for quantitative analysis of breast density.
9
Xu X, Fu L, Chen Y, Larsson R, Zhang D, Suo S, Hua J, Zhao J. Breast Region Segmentation Using Convolutional Neural Network in Dynamic Contrast Enhanced MRI. Annu Int Conf IEEE Eng Med Biol Soc 2018:750-753. [PMID: 30440504] [DOI: 10.1109/embc.2018.8512422]
Abstract
Breast density and background parenchymal enhancement (BPE) are suggested to be related to the risk of breast cancer. The first step toward quantitative analysis of breast density and BPE is segmenting the breast from the body. Nowadays, convolutional neural networks (CNNs) are widely used in image segmentation and work well in semantic segmentation; however, CNNs have rarely been used in breast region segmentation. In this paper, a CNN was employed to segment the breast region in transverse fat-suppressed breast dynamic contrast enhanced magnetic resonance imaging (DCE-MRI). Image normalization was initially performed. Subsequently, the dataset was divided randomly into three sets: a training set, a validation set, and a test set. The 2-D U-Net was trained on the training set and the optimum model was chosen with the validation set. Finally, segmentation results of the test set obtained by the U-Net were adjusted in postprocessing. In this step, the two largest volumes were computed to determine whether the smaller volume was a scar after mastectomy. Given the limitation of the small dataset, 5-fold cross-validation and data augmentation were used in this study. Final results on the test set were evaluated against manual segmentation results using volume-based and boundary-based metrics. With this method, the mean dice similarity coefficient (DSC), dice difference coefficient (DDC), and root-mean-square distance reached 97.44%, 5.11%, and 1.25 pixels, respectively.
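The postprocessing step described above, comparing the largest connected volumes of the predicted mask, can be sketched as follows. This is an illustrative 2-D pure-numpy version, not the authors' code; a real 3-D pipeline would more likely use something like scipy.ndimage.label:

```python
import numpy as np
from collections import deque

def connected_components(mask):
    """Label 4-connected components of a 2-D binary mask via BFS."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        current += 1
        labels[start] = current
        queue = deque([start])
        while queue:
            y, x = queue.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current

def keep_largest(mask, n=2):
    """Retain only the n largest components, mirroring the step of
    keeping and comparing the two largest predicted volumes."""
    labels, count = connected_components(mask)
    if count <= n:
        return mask.copy()
    sizes = np.bincount(labels.ravel())[1:]        # skip background label 0
    keep = np.argsort(sizes)[::-1][:n] + 1         # component labels are 1-based
    return np.isin(labels, keep)
```

Small spurious components (noise, artifacts) are discarded, while the retained volumes can then be inspected, e.g. to decide whether the smaller one is a post-mastectomy scar.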
10
Verburg E, Wolterink JM, de Waard SN, Išgum I, van Gils CH, Veldhuis WB, Gilhuijs KGA. Knowledge-based and deep learning-based automated chest wall segmentation in magnetic resonance images of extremely dense breasts. Med Phys 2019; 46:4405-4416. [DOI: 10.1002/mp.13699]
Affiliation(s)
- Erik Verburg
- Image Sciences Institute, University Medical Center Utrecht, Utrecht University, Utrecht 3584 CX, the Netherlands
- Jelmer M. Wolterink
- Image Sciences Institute, University Medical Center Utrecht, Utrecht University, Utrecht 3584 CX, the Netherlands
- Stephanie N. de Waard
- Department of Radiology, University Medical Center Utrecht, Utrecht University, Utrecht 3584 CX, the Netherlands
- Ivana Išgum
- Image Sciences Institute, University Medical Center Utrecht, Utrecht University, Utrecht 3584 CX, the Netherlands
- Carla H. van Gils
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht 3584 CX, the Netherlands
- Wouter B. Veldhuis
- Department of Radiology, University Medical Center Utrecht, Utrecht University, Utrecht 3584 CX, the Netherlands
- Kenneth G. A. Gilhuijs
- Image Sciences Institute, University Medical Center Utrecht, Utrecht University, Utrecht 3584 CX, the Netherlands
11
Zhang L, Mohamed AA, Chai R, Guo Y, Zheng B, Wu S. Automated deep learning method for whole-breast segmentation in diffusion-weighted breast MRI. J Magn Reson Imaging 2019; 51:635-643. [PMID: 31301201] [DOI: 10.1002/jmri.26860]
Abstract
BACKGROUND Diffusion-weighted imaging (DWI) in MRI plays an increasingly important role in diagnostic applications and in developing imaging biomarkers. Automated whole-breast segmentation is an important yet challenging step for quantitative breast imaging analysis. While methods have been developed on dynamic contrast-enhanced (DCE) MRI, automatic whole-breast segmentation in breast DWI MRI is still underdeveloped. PURPOSE To develop a deep/transfer learning-based segmentation approach for DWI MRI scans and conduct an extensive assessment on four imaging datasets from both internal and external sources. STUDY TYPE Retrospective. SUBJECTS In all, 98 patients (144 MRI scans; 11,035 slices) of four different breast MRI datasets from two different institutions. FIELD STRENGTH/SEQUENCES 1.5T scanners with DCE sequence (Dataset 1 and Dataset 2) and DWI sequence; a 3.0T scanner with one external DWI sequence. ASSESSMENT Deep learning models (UNet and SegNet) and transfer learning were used as segmentation approaches. The main DCE dataset (4,251 2D slices from 39 patients) was used for pre-training and internal validation, and an unseen DCE dataset (431 2D slices from 20 patients) was used as an independent test dataset for evaluating the pre-trained DCE models. The main DWI dataset (6,343 2D slices from 75 MRI scans of 29 patients) was used for transfer learning and internal validation, and an unseen DWI dataset (10 2D slices from 10 patients) was used for independent evaluation of the fine-tuned models for DWI segmentation. Manual segmentations by three radiologists (>10 years' experience) were used to establish the ground truth for assessment. The segmentation performance was measured using the Dice Coefficient (DC) for the agreement between the manual expert radiologist's segmentation and the algorithm-generated segmentation.
STATISTICAL TESTS The mean value and standard deviation of the DCs were calculated to compare segmentation results from different deep learning models. RESULTS For the segmentation on the DCE MRI, the average DC of the UNet was 0.92 (cross-validation on the main DCE dataset) and 0.87 (external evaluation on the unseen DCE dataset), both higher than the performance of the SegNet. When segmenting the DWI images by the fine-tuned models, the average DC of the UNet was 0.85 (cross-validation on the main DWI dataset) and 0.72 (external evaluation on the unseen DWI dataset), both outperforming the SegNet on the same datasets. DATA CONCLUSION The internal and independent tests show that the deep/transfer learning models can achieve promising segmentation effects validated on DWI data from different institutions and scanner types. Our proposed approach may provide an automated toolkit to help computer-aided quantitative analyses of breast DWI images. LEVEL OF EVIDENCE 3 Technical Efficacy: Stage 2 J. Magn. Reson. Imaging 2020;51:635-643.
Affiliation(s)
- Lei Zhang
- Department of Radiology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
- Aly A Mohamed
- Department of Radiology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
- Ruimei Chai
- Department of Radiology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA; Department of Radiology, First Hospital of China Medical University, Heping District, Shenyang, Liaoning, China
- Yuan Guo
- Department of Radiology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA; Department of Radiology, Second Affiliated Hospital of South China University of Technology, Guangzhou First People's Hospital, Guangzhou, China
- Bingjie Zheng
- Department of Radiology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA; Department of Radiology, Henan Cancer Hospital, Affiliated Cancer Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Shandong Wu
- Departments of Radiology, Biomedical Informatics, Bioengineering, Intelligent Systems, and Clinical and Translational Science, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
12
Rampun A, Scotney BW, Morrow PJ, Wang H, Winder J. Segmentation of breast MR images using a generalised 2D mathematical model with inflation and deflation forces of active contours. Artif Intell Med 2019; 97:44-60. [DOI: 10.1016/j.artmed.2018.10.007]
13
Liao GJ, Henze Bancroft LC, Strigel RM, Chitalia RD, Kontos D, Moy L, Partridge SC, Rahbar H. Background parenchymal enhancement on breast MRI: A comprehensive review. J Magn Reson Imaging 2019; 51:43-61. [PMID: 31004391] [DOI: 10.1002/jmri.26762]
Abstract
The degree of normal fibroglandular tissue that enhances on breast MRI, known as background parenchymal enhancement (BPE), was initially described as an incidental finding that could affect interpretation performance. While BPE is now established to be a physiologic phenomenon that is affected by both endogenous and exogenous hormone levels, evidence supporting the notion that BPE frequently masks breast cancers is limited. However, compelling data have emerged to suggest BPE is an independent marker of breast cancer risk and breast cancer treatment outcomes. Specifically, multiple studies have shown that elevated BPE levels, measured qualitatively or quantitatively, are associated with a greater risk of developing breast cancer. Evidence also suggests that BPE could be a predictor of neoadjuvant breast cancer treatment response and overall breast cancer treatment outcomes. These discoveries come at a time when breast cancer screening and treatment have moved toward an increased emphasis on targeted and individualized approaches, of which the identification of imaging features that can predict cancer diagnosis and treatment response is an increasingly recognized component. Historically, researchers have primarily studied quantitative tumor imaging features in pursuit of clinically useful biomarkers. However, the need to segment less well-defined areas of normal tissue for quantitative BPE measurements presents its own unique challenges. Furthermore, there is no consensus on the optimal timing of dynamic contrast-enhanced MRI for BPE quantitation. This article comprehensively reviews BPE with a particular focus on its potential to increase precision approaches to breast cancer risk assessment, diagnosis, and treatment. It also describes areas of needed future research, such as the applicability of BPE to women at average risk, the biological underpinnings of BPE, and the standardization of BPE characterization. Level of Evidence: 3. Technical Efficacy Stage: 5. J Magn Reson Imaging 2020;51:43-61.
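As the abstract notes, quantitative BPE measurement requires segmenting normal fibroglandular tissue and has no standardized acquisition timing. A rough, hypothetical illustration of one quantitative approach (not a validated clinical method; the 20% threshold and mask are assumptions) is to report the fraction of fibroglandular voxels whose signal rises beyond a relative threshold from pre- to post-contrast:

```python
import numpy as np

def quantify_bpe(pre, post, fgt_mask, threshold=0.2):
    """Illustrative BPE measure: fraction of fibroglandular-tissue (FGT)
    voxels whose relative enhancement from pre- to post-contrast exceeds
    `threshold`. Not a validated clinical metric."""
    pre = pre.astype(float)
    post = post.astype(float)
    # Relative enhancement per voxel, guarding against division by zero.
    enhancement = (post - pre) / np.maximum(pre, 1e-6)
    enhancing = enhancement[fgt_mask.astype(bool)] > threshold
    return enhancing.mean() if enhancing.size else 0.0

# Toy volumes: uniform 10% background enhancement, one region at +50%.
pre = np.full((4, 4, 4), 100.0)
post = pre * 1.1
post[:2] = pre[:2] * 1.5
mask = np.ones_like(pre, dtype=bool)
print(round(quantify_bpe(pre, post, mask), 2))  # → 0.5
```

In practice the FGT mask would itself come from a segmentation step, which is exactly the challenge the abstract highlights.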
Affiliation(s)
- Geraldine J Liao
- Department of Radiology, University of Washington School of Medicine, Seattle, Washington, USA; Department of Radiology, Virginia Mason Medical Center, Seattle, Washington, USA
- Roberta M Strigel
- Department of Radiology, University of Wisconsin, Madison, Wisconsin, USA; Department of Medical Physics, University of Wisconsin, Madison, Wisconsin, USA; Carbone Cancer Center, University of Wisconsin, Madison, Wisconsin, USA
- Rhea D Chitalia
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Despina Kontos
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Linda Moy
- Department of Radiology, New York University School of Medicine, New York, New York, USA
- Savannah C Partridge
- Department of Radiology, University of Washington School of Medicine, Seattle, Washington, USA
- Habib Rahbar
- Department of Radiology, University of Washington School of Medicine, Seattle, Washington, USA
14
Fashandi H, Kuling G, Lu Y, Wu H, Martel AL. An investigation of the effect of fat suppression and dimensionality on the accuracy of breast MRI segmentation using U-nets. Med Phys 2019; 46:1230-1244. [PMID: 30609062 DOI: 10.1002/mp.13375] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2018] [Revised: 10/17/2018] [Accepted: 12/11/2018] [Indexed: 01/17/2023] Open
Abstract
PURPOSE Accurate segmentation of the breast is required for breast density estimation and the assessment of background parenchymal enhancement, both of which have been shown to be related to breast cancer risk. The MRI breast segmentation task is challenging, and recent work has demonstrated that convolutional neural networks perform well for this task. In this study, we have investigated the performance of several two-dimensional (2D) U-Net and three-dimensional (3D) U-Net configurations using both fat-suppressed and nonfat-suppressed images. We have also assessed the effect of changing the number and quality of the ground truth segmentations. MATERIALS AND METHODS We designed eight studies to investigate the effect of input types and the dimensionality of the U-Net operations for breast MRI segmentation. Our training data contained 70 whole breast volumes of T1-weighted sequences without fat suppression (WOFS) and with fat suppression (FS). For each subject, we registered the WOFS and FS volumes together before manually segmenting the breast to generate ground truth. We compared four different input types to the U-Nets: WOFS, FS, MIXED (WOFS and FS images treated as separate samples), and MULTI (WOFS and FS images combined into a single multichannel image). We trained 2D U-Nets and 3D U-Nets with these data, which resulted in our eight studies (2D-WOFS, 3D-WOFS, 2D-FS, 3D-FS, 2D-MIXED, 3D-MIXED, 2D-MULTI, and 3D-MULTI). For each of these studies, we performed a systematic grid search to tune the hyperparameters of the U-Nets. A separate validation set with 15 whole breast volumes was used for hyperparameter tuning. We performed a Kruskal-Wallis test on the results of our hyperparameter tuning and did not find a statistically significant difference among the ten top models of each study. For this reason, we chose the best model as the model with the highest mean Dice similarity coefficient (DSC) value on the validation set.
The reported test results are those of the top model of each study on our test set, which contained 19 whole breast volumes annotated by three readers and fused with the STAPLE algorithm. We also investigated the effect of the quality of the training annotations and the number of training samples for this task. RESULTS The study with the highest average DSC was 3D-MULTI with 0.96 ± 0.02. The second highest average was 2D-WOFS (0.96 ± 0.03), and the third was 2D-MULTI (0.96 ± 0.03). We performed the Kruskal-Wallis one-way ANOVA test with Dunn's multiple comparison tests using Bonferroni P-value correction on the results of the selected model of each study and found that 3D-MULTI, 2D-MULTI, 3D-WOFS, 2D-WOFS, 2D-FS, and 3D-FS were not statistically different in their distributions, which indicates that comparable results can be obtained in fat-suppressed and nonfat-suppressed volumes and that there is no significant difference between the 3D and 2D approaches. Our results also suggested that networks trained on single-sequence images, or on multiple sequences combined into multichannel images, perform better than models trained on a mixture of volumes from different sequences. Our investigation of the size of the training set revealed that training a U-Net in this domain requires only a modest amount of training data; results obtained with 49 and 70 training datasets were not significantly different. CONCLUSIONS To summarize, we investigated the use of 2D U-Nets and 3D U-Nets for breast volume segmentation in T1-weighted volumes with and without fat suppression. Although our highest score was obtained in the 3D-MULTI study, when we took advantage of information in both fat-suppressed and nonfat-suppressed volumes and their 3D structure, all of the methods we explored gave accurate segmentations with an average DSC above 0.94, demonstrating that the U-Net is a robust segmentation method for breast MRI volumes.
Affiliation(s)
- Homa Fashandi
- Physical Sciences, Sunnybrook Research Institute, Toronto, Ontario, M4N 3M5, Canada
- Gregory Kuling
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, M5G 1L7, Canada
- Yingli Lu
- Physical Sciences, Sunnybrook Research Institute, Toronto, Ontario, M4N 3M5, Canada
- Hongbo Wu
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, M5G 1L7, Canada
- Anne L Martel
- Physical Sciences, Sunnybrook Research Institute, Toronto, Ontario, M4N 3M5, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Ontario, M5G 1L7, Canada
15
Automated Detection and Segmentation of Nonmass-Enhancing Breast Tumors with Dynamic Contrast-Enhanced Magnetic Resonance Imaging. Contrast Media Mol Imaging 2018; 2018:5308517. [PMID: 30647551 PMCID: PMC6311739 DOI: 10.1155/2018/5308517] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/31/2018] [Accepted: 09/16/2018] [Indexed: 01/27/2023]
Abstract
Nonmass-enhancing (NME) lesions constitute a diagnostic challenge in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) of the breast. Computer-aided diagnosis (CAD) systems provide physicians with advanced tools for analysis, assessment, and evaluation that have a significant impact on the diagnostic performance. Here, we propose a new approach to address the challenge of NME lesion detection and segmentation, taking advantage of independent component analysis (ICA) to extract data-driven dynamic lesion characterizations. A set of independent sources was obtained from the DCE-MRI dataset of breast cancer patients, and the dynamic behavior of the different tissues was described by multiple dynamic curves, together with a set of eigenimages describing the scores for each voxel. A new test image is projected onto the independent source space using the unmixing matrix, and each voxel is classified by a support vector machine (SVM) that has already been trained with manually delineated data. A solution to the high false-positive rate problem is proposed by controlling the SVM hyperplane location, outperforming previously published approaches.
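The pipeline described — learn independent sources from per-voxel enhancement curves, project new voxels onto the source space via the unmixing matrix, then classify the scores with an SVM — can be sketched on synthetic data. This is an illustrative toy, not the authors' implementation; the curve shapes, noise level, and component count are assumptions:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic per-voxel DCE enhancement curves (6 time points):
# "lesion" voxels wash in rapidly; "normal" voxels enhance slowly.
t = np.linspace(0, 1, 6)
lesion = 1 - np.exp(-8 * t) + 0.05 * rng.standard_normal((200, 6))
normal = 0.3 * t + 0.05 * rng.standard_normal((200, 6))
X = np.vstack([lesion, normal])
y = np.array([1] * 200 + [0] * 200)

# Data-driven dynamic characterization: decompose the curves into
# independent sources, then classify each voxel's ICA scores.
ica = FastICA(n_components=3, random_state=0)
scores = ica.fit_transform(X)
clf = SVC(kernel="rbf").fit(scores, y)

# A new voxel's curve is projected with the learned unmixing
# transform and classified in source space.
new_curve = (1 - np.exp(-8 * t)).reshape(1, -1)
print(clf.predict(ica.transform(new_curve)))  # → [1]
```

The false-positive control the abstract mentions could be approximated by shifting the decision boundary, e.g. thresholding `clf.decision_function` above zero or setting `class_weight` to penalize false positives; the exact mechanism in the paper is not detailed here.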