1
Hwang J, Chun J, Cho S, Kim JH, Cho MS, Choi SH, Kim JS. Personalized Deep Learning Model for Clinical Target Volume on Daily Cone Beam Computed Tomography in Breast Cancer Patients. Adv Radiat Oncol 2024;9:101580. [PMID: 39258144] [PMCID: PMC11381721] [DOI: 10.1016/j.adro.2024.101580]
Abstract
Purpose Herein, we developed a deep learning algorithm to improve segmentation of the clinical target volume (CTV) on daily cone beam computed tomography (CBCT) scans in breast cancer radiation therapy. By leveraging the Intentional Deep Overfit Learning (IDOL) framework, we aimed to enhance personalized image-guided radiation therapy through patient-specific learning. Methods and Materials We used 240 CBCT scans from 100 breast cancer patients in a 2-stage training approach. The first stage trained general deep learning models (Swin UNETR, UNet, and SegResNet) on 90 patients; the second stage intentionally overfit these models on the remaining 10 patients to produce patient-specific CBCT models. Quantitative evaluation used the Dice similarity coefficient (DSC), Hausdorff distance (HD), mean surface distance (MSD), and independent-samples t tests against expert contours on CBCT scans from the first to 15th fractions. Results IDOL integration significantly improved CTV segmentation, particularly with the Swin UNETR model (P < .05). Using patient-specific data, IDOL improved the DSC, HD, and MSD metrics: for the 15th fraction, the average DSC increased from 0.9611 to 0.9819, the average HD decreased from 4.0118 mm to 1.3935 mm, and the average MSD decreased from 0.8723 mm to 0.4603 mm. Additionally incorporating CBCT scans from the first to third treatment fractions further improved results, with an average DSC of 0.9850, an average HD of 1.2707 mm, and an average MSD of 0.4076 mm for the 15th fraction, closely aligning with physician-drawn contours. Conclusion Compared with a general model, our patient-specific deep learning-based training algorithm significantly improved CTV segmentation accuracy on CBCT scans in patients with breast cancer. This approach, coupled with continuous training on daily CBCT scans, improved both the accuracy and the efficiency of CTV delineation. Future studies should explore the adaptability of the IDOL framework to diverse deep learning models, data sets, and cancer sites.
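The two-stage recipe described above (train a general model, then deliberately overfit a copy of it on a single patient's own early-fraction scans) can be sketched in a few lines of PyTorch. This is an illustrative reading of the IDOL idea, not the authors' code; `general_model`, `patient_loader`, and the soft Dice loss are stand-ins.

```python
# Illustrative IDOL-style stage-2 sketch: clone the population model and
# intentionally overfit it on one patient's CBCT/contour pairs.
import copy
import torch

def soft_dice_loss(logits, target, eps=1e-6):
    """Soft Dice loss on sigmoid probabilities for a binary CTV mask."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    return 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)

def finetune_for_patient(general_model, patient_loader, epochs=50, lr=1e-4):
    """Return a patient-specific copy of the general segmentation network,
    trained to (over)fit that patient's own fractions (e.g., fractions 1-3)."""
    model = copy.deepcopy(general_model)      # leave the general weights intact
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for cbct, ctv in patient_loader:      # (B,1,D,H,W) volumes and masks
            opt.zero_grad()
            soft_dice_loss(model(cbct), ctv).backward()
            opt.step()
    return model
```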
Affiliation(s)
- Joonil Hwang
- Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Medical Image and Radiotherapy Lab (MIRLAB), Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Jaehee Chun
- OncoSoft, Seoul, Republic of Korea
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, Republic of Korea
- Seungryong Cho
- Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Medical Image and Radiotherapy Lab (MIRLAB), Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Joo-Ho Kim
- Department of Radiation Oncology, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Gyeonggi-do, Republic of Korea
- Min-Seok Cho
- Department of Radiation Oncology, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Gyeonggi-do, Republic of Korea
- Seo Hee Choi
- Department of Radiation Oncology, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Gyeonggi-do, Republic of Korea
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, Republic of Korea
- Jin Sung Kim
- OncoSoft, Seoul, Republic of Korea
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, Republic of Korea
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, Republic of Korea
2
Huang Y, Leotta NJ, Hirsch L, Gullo RL, Hughes M, Reiner J, Saphier NB, Myers KS, Panigrahi B, Ambinder E, Di Carlo P, Grimm LJ, Lowell D, Yoon S, Ghate SV, Parra LC, Sutton EJ. Cross-site Validation of AI Segmentation and Harmonization in Breast MRI. J Imaging Inform Med 2024. [PMID: 39320547] [DOI: 10.1007/s10278-024-01266-9]
Abstract
This work aims to perform a cross-site validation of automated segmentation for breast cancers in MRI and to compare its performance to that of radiologists. A three-dimensional (3D) U-Net was trained to segment cancers in dynamic contrast-enhanced axial MRIs using a large dataset from Site 1 (n = 15,266; 449 malignant and 14,817 benign). Performance was validated on site-specific test data from this and two additional sites, and on common publicly available testing data. Four radiologists from each of the three clinical sites provided two-dimensional (2D) segmentations as ground truth. Segmentation performance did not differ between the network and the radiologists on the test data from Sites 1 and 2 or on the common public data (median Dice score: Site 1, network 0.86 vs. radiologist 0.85, n = 114; Site 2, 0.91 vs. 0.91, n = 50; common data, 0.93 vs. 0.90). For Site 3, an affine input layer was fine-tuned using segmentation labels, resulting in comparable performance between the network and radiologists (0.88 vs. 0.89, n = 42). Radiologist performance differed on the common test data, and the network numerically outperformed 11 of the 12 radiologists (median Dice: 0.85-0.94, n = 20). In conclusion, a deep network with a novel supervised harmonization technique matches radiologists' performance in MRI tumor segmentation across clinical sites. We make code and weights publicly available to promote reproducible AI in radiology.
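The Site 3 adaptation (fine-tuning only an affine input layer while the trained network stays fixed) can be read as a small learnable intensity transform prepended to a frozen model. Below is a sketch under that assumption; the layer shape, optimizer, and training loop are illustrative, not the published architecture.

```python
# Sketch of supervised harmonization: fit a per-channel affine intensity
# remapping (x -> a*x + b) in front of a frozen segmentation network.
import torch
import torch.nn as nn

class AffineInputLayer(nn.Module):
    def __init__(self, channels=1):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(1, channels, 1, 1, 1))
        self.shift = nn.Parameter(torch.zeros(1, channels, 1, 1, 1))

    def forward(self, x):
        return self.scale * x + self.shift

def adapt_to_site(frozen_net, site_loader, loss_fn, epochs=20, lr=1e-3):
    """Freeze the pretrained network; train only the affine front end on the
    new site's labeled cases."""
    for p in frozen_net.parameters():
        p.requires_grad = False
    layer = AffineInputLayer()
    opt = torch.optim.Adam(layer.parameters(), lr=lr)
    for _ in range(epochs):
        for mri, mask in site_loader:
            opt.zero_grad()
            loss_fn(frozen_net(layer(mri)), mask).backward()
            opt.step()
    return layer
```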
Affiliation(s)
- Yu Huang
- Department of Biomedical Engineering, The City College of the City University of New York, 160 Convent Ave, New York, NY, 10031, USA
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
- Nicholas J Leotta
- Department of Biomedical Engineering, The City College of the City University of New York, 160 Convent Ave, New York, NY, 10031, USA
- Lukas Hirsch
- Department of Biomedical Engineering, The City College of the City University of New York, 160 Convent Ave, New York, NY, 10031, USA
- Roberto Lo Gullo
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
- Mary Hughes
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
- Jeffrey Reiner
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
- Nicole B Saphier
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
- Kelly S Myers
- Department of Radiology and Radiological Science, Johns Hopkins Medicine, Baltimore, MD, 21224, USA
- Babita Panigrahi
- Department of Radiology and Radiological Science, Johns Hopkins Medicine, Baltimore, MD, 21224, USA
- Emily Ambinder
- Department of Radiology and Radiological Science, Johns Hopkins Medicine, Baltimore, MD, 21224, USA
- Philip Di Carlo
- Department of Radiology and Radiological Science, Johns Hopkins Medicine, Baltimore, MD, 21224, USA
- Lars J Grimm
- Department of Radiology, Duke University School of Medicine, Durham, NC, 27710, USA
- Dorothy Lowell
- Department of Radiology, Duke University School of Medicine, Durham, NC, 27710, USA
- Sora Yoon
- Department of Radiology, Duke University School of Medicine, Durham, NC, 27710, USA
- Sujata V Ghate
- Department of Radiology, Duke University School of Medicine, Durham, NC, 27710, USA
- Lucas C Parra
- Department of Biomedical Engineering, The City College of the City University of New York, 160 Convent Ave, New York, NY, 10031, USA
- Elizabeth J Sutton
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
3
Wang H, Wang T, Hao Y, Ding S, Feng J. Breast tumor segmentation via deep correlation analysis of multi-sequence MRI. Med Biol Eng Comput 2024. [PMID: 39031329] [DOI: 10.1007/s11517-024-03166-0]
Abstract
Precise segmentation of breast tumors from MRI is crucial for breast cancer diagnosis, as it allows detailed calculation of tumor characteristics such as shape, size, and edges. Current segmentation methodologies face significant challenges in accurately modeling the complex interrelationships inherent in multi-sequence MRI data. This paper presents a hybrid deep network framework with three interconnected modules, aimed at efficiently integrating and exploiting the spatial-temporal features among multiple MRI sequences for breast tumor segmentation. The first module is an advanced multi-sequence encoder with a densely connected architecture that separates the encoding pathway into multiple streams, one per MRI sequence. To harness the intricate correlations between different sequence features, we propose a sequence-awareness and temporal-awareness method that fuses the spatial-temporal features of MRI in the second, multi-scale feature embedding module. Finally, the decoder module upsamples the feature maps, refining the resolution to achieve highly precise segmentation of breast tumors. In contrast to other popular methods, the proposed method learns the interrelationships inherent in multi-sequence MRI, and we validate it through extensive experiments. It achieves notable improvements in segmentation performance, with Dice similarity coefficient (DSC), intersection over union (IoU), and positive predictive value (PPV) scores of 80.57%, 74.08%, and 84.74%, respectively.
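The separation of the encoding pathway into per-sequence streams can be illustrated with a toy module: one small encoder per MRI sequence, features concatenated and fused before a segmentation head. Channel sizes and block design below are invented for brevity; the paper's sequence-awareness and temporal-awareness fusion is considerably richer than the plain concatenation shown here.

```python
# Toy multi-stream encoder: one encoding branch per MRI sequence, fused by
# concatenation before a 1x1x1 segmentation head (skeleton only).
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv3d(cin, cout, 3, padding=1),
                         nn.InstanceNorm3d(cout),
                         nn.ReLU(inplace=True))

class MultiSequenceSegNet(nn.Module):
    def __init__(self, n_sequences=3, feat=16):
        super().__init__()
        self.encoders = nn.ModuleList(conv_block(1, feat)
                                      for _ in range(n_sequences))
        self.fuse = conv_block(feat * n_sequences, feat)
        self.head = nn.Conv3d(feat, 1, 1)     # binary tumor mask logits

    def forward(self, sequences):             # list of (B,1,D,H,W) tensors
        feats = [enc(x) for enc, x in zip(self.encoders, sequences)]
        return self.head(self.fuse(torch.cat(feats, dim=1)))

net = MultiSequenceSegNet(n_sequences=3)
vols = [torch.randn(1, 1, 16, 64, 64) for _ in range(3)]   # e.g. 3 sequences
print(net(vols).shape)                       # torch.Size([1, 1, 16, 64, 64])
```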
Affiliation(s)
- Hongyu Wang
- School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Xi'an Key Laboratory of Big Data and Intelligent Computing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Tonghui Wang
- Department of Information Science and Technology, Northwest University, Xi'an, Shaanxi, 710127, China
- Yanfang Hao
- School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Xi'an Key Laboratory of Big Data and Intelligent Computing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Songtao Ding
- School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Xi'an Key Laboratory of Big Data and Intelligent Computing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Jun Feng
- Department of Information Science and Technology, Northwest University, Xi'an, Shaanxi, 710127, China
4
Park GE, Kim SH, Nam Y, Kang J, Park M, Kang BJ. 3D Breast Cancer Segmentation in DCE-MRI Using Deep Learning With Weak Annotation. J Magn Reson Imaging 2024;59:2252-2262. [PMID: 37596823] [DOI: 10.1002/jmri.28960]
Abstract
BACKGROUND Deep learning models require large-scale training to perform confidently, but obtaining annotated datasets in medical imaging is challenging. Weak annotation has emerged as a way to save time and effort. PURPOSE To develop a deep learning model with reliable performance for 3D breast cancer segmentation in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) using weak annotation. STUDY TYPE Retrospective. POPULATION Seven hundred and thirty-six women with breast cancer from a single institution, divided into a development dataset (N = 544) and a test dataset (N = 192). FIELD STRENGTH/SEQUENCE 3.0 T; fat-saturated gradient-echo axial T1-weighted FLASH 3D volumetric interpolated breath-hold examination (VIBE) sequences. ASSESSMENT Two radiologists performed weak annotation of the ground truth using bounding boxes, from which the ground truth annotation was completed through automatic and manual correction. A deep learning model based on the 3D U-Net transformer (UNETR) was trained with this annotated dataset. The segmentation results on the test set were analyzed quantitatively and qualitatively, with regions divided into the whole breast and the region of interest (ROI) within the bounding box. STATISTICAL TESTS The Dice similarity coefficient was used to evaluate the segmentation results quantitatively, and volume correlation with the ground truth was evaluated with the Spearman correlation coefficient. Qualitatively, three readers independently rated a visual score on a four-point scale. A P-value <0.05 was considered statistically significant. RESULTS The deep learning model we developed achieved median Dice similarity coefficients of 0.75 and 0.89 for the whole breast and the ROI, respectively. The volume correlation coefficients with respect to the ground truth were 0.82 and 0.86 for the whole breast and the ROI, respectively. The mean visual score across the three readers was 3.4. DATA CONCLUSION The proposed deep learning model with weak annotation may show good performance for 3D segmentation of breast cancer on DCE-MRI. LEVEL OF EVIDENCE 3. TECHNICAL EFFICACY Stage 2.
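The quantitative assessment reported here, per-case Dice similarity plus Spearman correlation of predicted versus ground-truth volumes, is straightforward to reproduce. A sketch with synthetic binary masks standing in for the model and reader segmentations:

```python
# Dice overlap per case and Spearman correlation of tumor volumes.
import numpy as np
from scipy.stats import spearmanr

def dice(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 1.0 if denom == 0 else 2.0 * (pred & gt).sum() / denom

rng = np.random.default_rng(0)
cases = [(rng.random((32, 64, 64)) > 0.7,    # toy predicted mask
          rng.random((32, 64, 64)) > 0.7)    # toy ground-truth mask
         for _ in range(10)]

dices = [dice(p, g) for p, g in cases]
vol_pred = [p.sum() for p, _ in cases]       # voxel counts; multiply by voxel
vol_gt = [g.sum() for _, g in cases]         # volume for physical units (mm^3)
rho, pval = spearmanr(vol_pred, vol_gt)
print(f"median Dice {np.median(dices):.2f}, Spearman rho {rho:.2f} (P={pval:.3f})")
```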
Affiliation(s)
- Ga Eun Park
- Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Sung Hun Kim
- Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Yoonho Nam
- Division of Biomedical Engineering, Hankuk University of Foreign Studies, Yongin, Republic of Korea
- Junghwa Kang
- Division of Biomedical Engineering, Hankuk University of Foreign Studies, Yongin, Republic of Korea
- Minjeong Park
- Division of Biomedical Engineering, Hankuk University of Foreign Studies, Yongin, Republic of Korea
- Bong Joo Kang
- Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
5
Nogueira L, Adubeiro N, Nunes RG. Editorial for "3D Breast Cancer Segmentation in DCE-MRI Using Deep Learning With Weak Annotation". J Magn Reson Imaging 2024;59:2263-2264. [PMID: 37578324] [DOI: 10.1002/jmri.28957]
Affiliation(s)
- Luísa Nogueira
- Department of Radiology, School of Health of Porto/Polytechnic Institute of Porto (ESS/IPP), Porto, Portugal
- EPIUnit, Institute of Public Health, University of Porto, Porto, Portugal
- Department of Public Health, Laboratory for Integrative and Translational Research in Population Health (ITR), Porto, Portugal
- Nuno Adubeiro
- Department of Radiology, School of Health of Porto/Polytechnic Institute of Porto (ESS/IPP), Porto, Portugal
- EPIUnit, Institute of Public Health, University of Porto, Porto, Portugal
- Department of Public Health, Laboratory for Integrative and Translational Research in Population Health (ITR), Porto, Portugal
- Rita G Nunes
- Institute for Systems and Robotics - Lisboa and Department of Bioengineering, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
6
Gong Z, Li X, Shi M, Cai G, Chen S, Ye Z, Gan X, Yang R, Wang R, Chen Z. Measuring the binary thickness of buccal bone of anterior maxilla in low-resolution cone-beam computed tomography via a bilinear convolutional neural network. Quant Imaging Med Surg 2023;13:8053-8066. [PMID: 38106266] [PMCID: PMC10722026] [DOI: 10.21037/qims-23-744]
Abstract
Background The thickness of the buccal bone of the anterior maxilla is an important aesthetic-determining factor for dental implants and is classified as thick (≥1 mm) or thin (<1 mm). However, because this micro-scale structure is evaluated on low-resolution cone-beam computed tomography (CBCT), its thickness measurement is error-prone, especially given the large number of patients and the relative inexperience of primary dentists. Further challenges for deep learning-based analysis of binary buccal bone thickness include the substantial real-world variance caused by pixel error, the extraction of fine-grained features, and burdensome annotations. Methods This study built a bilinear convolutional neural network (BCNN) with 2 convolutional neural network (CNN) backbones and a bilinear pooling module to predict the binary thickness of the buccal bone (thick or thin) of the anterior maxilla in an end-to-end manner. Five-fold cross-validation and model ensembling were adopted at the training and testing stages. The visualization methods of Gradient-Weighted Class Activation Mapping (Grad-CAM), Guided Grad-CAM, and layer-wise relevance propagation (LRP) were used to reveal the important features on which the model focused. Performance metrics and efficacy were compared between the BCNN, dentists of different clinical experience (dental student, junior dentist, and senior dentist), and the fusion of BCNN and dentists to investigate the clinical feasibility of the BCNN. Results On a dataset of 4,000 CBCT images from 1,000 patients (aged 36.15±13.09 years), the BCNN with a visual geometry group (VGG)16 backbone achieved an accuracy of 0.870 [95% confidence interval (CI): 0.838-0.902] and an area under the receiver operating characteristic (ROC) curve (AUC) of 0.924 (95% CI: 0.896-0.948). Compared with conventional CNNs, the BCNN precisely located the buccal bone wall rather than irrelevant regions. The BCNN generally outperformed the expert-level dentists, and the dentists' diagnostic performance improved with the assistance of the BCNN. Conclusions Applying the BCNN to the quantitative analysis of binary buccal bone thickness validated the model's ability to extract subtle features and achieved expert-level performance. This work signals the potential of fine-grained image recognition networks for the precise quantitative analysis of micro-scale structures.
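The heart of a BCNN is bilinear pooling: the channel-wise outer product of two backbones' feature maps, sum-pooled over spatial locations and fed to a classifier. A sketch with VGG16 backbones to match the paper's best model; the head size, the signed-sqrt/L2 normalization details, and the 2-class thick/thin output are assumptions rather than the published configuration:

```python
# Minimal bilinear CNN: two VGG16 feature extractors, bilinear pooling,
# signed sqrt + L2 normalization, then a linear thick/thin classifier.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class BilinearCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.stream_a = vgg16(weights=None).features   # (B,512,H',W') each
        self.stream_b = vgg16(weights=None).features
        self.fc = nn.Linear(512 * 512, n_classes)

    def forward(self, x):
        fa = self.stream_a(x).flatten(2)               # (B,512,N) locations
        fb = self.stream_b(x).flatten(2)
        # Bilinear pooling: outer product of channels, averaged over space.
        bilinear = torch.bmm(fa, fb.transpose(1, 2)) / fa.shape[-1]  # (B,512,512)
        feat = bilinear.flatten(1)
        feat = torch.sign(feat) * torch.sqrt(feat.abs() + 1e-10)
        feat = nn.functional.normalize(feat)
        return self.fc(feat)

logits = BilinearCNN()(torch.randn(2, 3, 224, 224))
print(logits.shape)                                    # torch.Size([2, 2])
```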
Affiliation(s)
- Zhuohong Gong
- Hospital of Stomatology, Guanghua School of Stomatology, Guangdong Provincial Key Laboratory of Stomatology, Sun Yat-sen University, Guangzhou, China
- Xiaohui Li
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
- Mengru Shi
- Hospital of Stomatology, Guanghua School of Stomatology, Guangdong Provincial Key Laboratory of Stomatology, Sun Yat-sen University, Guangzhou, China
- Gengbin Cai
- Hospital of Stomatology, Guanghua School of Stomatology, Guangdong Provincial Key Laboratory of Stomatology, Sun Yat-sen University, Guangzhou, China
- Shijie Chen
- Hospital of Stomatology, Guanghua School of Stomatology, Guangdong Provincial Key Laboratory of Stomatology, Sun Yat-sen University, Guangzhou, China
- Zejun Ye
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
- Xuejing Gan
- Hospital of Stomatology, Guanghua School of Stomatology, Guangdong Provincial Key Laboratory of Stomatology, Sun Yat-sen University, Guangzhou, China
- Ruihan Yang
- Hospital of Stomatology, Guanghua School of Stomatology, Guangdong Provincial Key Laboratory of Stomatology, Sun Yat-sen University, Guangzhou, China
- Ruixuan Wang
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
- Zetao Chen
- Hospital of Stomatology, Guanghua School of Stomatology, Guangdong Provincial Key Laboratory of Stomatology, Sun Yat-sen University, Guangzhou, China
7
Zhang J, Wu J, Zhou XS, Shi F, Shen D. Recent advancements in artificial intelligence for breast cancer: Image augmentation, segmentation, diagnosis, and prognosis approaches. Semin Cancer Biol 2023;96:11-25. [PMID: 37704183] [DOI: 10.1016/j.semcancer.2023.09.001]
Abstract
Breast cancer is a significant global health burden, with increasing morbidity and mortality worldwide. Early screening and accurate diagnosis are crucial for improving prognosis. Radiographic imaging modalities such as digital mammography (DM), digital breast tomosynthesis (DBT), magnetic resonance imaging (MRI), ultrasound (US), and nuclear medicine techniques are commonly used for breast cancer assessment, and histopathology (HP) serves as the gold standard for confirming malignancy. Artificial intelligence (AI) technologies show great potential for quantitative representation of medical images to effectively assist in the segmentation, diagnosis, and prognosis of breast cancer. In this review, we survey recent advancements in AI technologies for breast cancer, including 1) improving image quality by data augmentation, 2) fast detection and segmentation of breast lesions and diagnosis of malignancy, 3) biological characterization of the cancer, such as staging and subtyping, by AI-based classification technologies, and 4) prediction of clinical outcomes, such as metastasis, treatment response, and survival, by integrating multi-omics data. We then summarize large-scale databases available for training robust, generalizable, and reproducible deep learning models. Furthermore, we discuss the challenges AI faces in real-world applications, including data curation, model interpretability, and practice regulations. Finally, we expect that clinical implementation of AI will provide important guidance for patient-tailored management.
Affiliation(s)
- Jiadong Zhang
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Jiaojiao Wu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Xiang Sean Zhou
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Shanghai Clinical Research and Trial Center, Shanghai, China
8
Xu Z, Rauch DE, Mohamed RM, Pashapoor S, Zhou Z, Panthi B, Son JB, Hwang KP, Musall BC, Adrada BE, Candelaria RP, Leung JWT, Le-Petross HTC, Lane DL, Perez F, White J, Clayborn A, Reed B, Chen H, Sun J, Wei P, Thompson A, Korkut A, Huo L, Hunt KK, Litton JK, Valero V, Tripathy D, Yang W, Yam C, Ma J. Deep Learning for Fully Automatic Tumor Segmentation on Serially Acquired Dynamic Contrast-Enhanced MRI Images of Triple-Negative Breast Cancer. Cancers (Basel) 2023;15:4829. [PMID: 37835523] [PMCID: PMC10571741] [DOI: 10.3390/cancers15194829]
Abstract
Accurate tumor segmentation is required for the quantitative image analyses increasingly used to evaluate tumors. We developed a fully automated, high-performance segmentation model of triple-negative breast cancer using a self-configurable deep learning framework and a large set of dynamic contrast-enhanced MRI images acquired serially over the patients' treatment courses. Among all models, the top performer, trained with images across different time points of a treatment course, yielded a Dice similarity coefficient of 93% and a sensitivity of 96% on baseline images. The top-performing model also produced accurate tumor size measurements, which is valuable for practical clinical applications.
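Because the study emphasizes tumor size measurements derived from the segmentation, here is a small sketch of how size falls out of a binary mask plus voxel spacing. The specific measurements (volume and bounding-box extents) are an assumption for illustration, not the paper's protocol:

```python
# Derive simple size measurements from a 3D binary tumor mask.
import numpy as np

def tumor_size(mask, spacing_mm=(1.0, 1.0, 1.0)):
    """Return tumor volume (cm^3) and axis-aligned extents (mm)."""
    volume_cm3 = mask.sum() * np.prod(spacing_mm) / 1000.0
    idx = np.argwhere(mask)
    extents_mm = (idx.max(0) - idx.min(0) + 1) * np.asarray(spacing_mm)
    return volume_cm3, extents_mm

mask = np.zeros((64, 128, 128), dtype=bool)
mask[20:35, 40:70, 50:85] = True             # toy "tumor"
vol, ext = tumor_size(mask, spacing_mm=(2.0, 0.8, 0.8))
print(f"volume {vol:.1f} cm^3, extents {ext} mm")
```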
Affiliation(s)
- Zhan Xu
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- David E. Rauch
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Rania M. Mohamed
- Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Sanaz Pashapoor
- Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Zijian Zhou
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Bikash Panthi
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Jong Bum Son
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Ken-Pin Hwang
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Benjamin C. Musall
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Beatriz E. Adrada
- Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Rosalind P. Candelaria
- Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Jessica W. T. Leung
- Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Huong T. C. Le-Petross
- Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Deanna L. Lane
- Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Frances Perez
- Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Jason White
- Department of Breast Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Alyson Clayborn
- Department of Breast Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Brandy Reed
- Department of Clinical Research Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Huiqin Chen
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Jia Sun
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Peng Wei
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Alastair Thompson
- Section of Breast Surgery, Baylor College of Medicine, Houston, TX 77030, USA
- Anil Korkut
- Department of Bioinformatics & Computational Biology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Lei Huo
- Department of Pathology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Kelly K. Hunt
- Department of Breast Surgical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Jennifer K. Litton
- Department of Breast Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Vicente Valero
- Department of Breast Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Debu Tripathy
- Department of Breast Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Wei Yang
- Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Clinton Yam
- Department of Breast Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Jingfei Ma
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
9
Mohammadi S, Ghaderi S, Ghaderi K, Mohammadi M, Pourasl MH. Automated segmentation of meningioma from contrast-enhanced T1-weighted MRI images in a case series using a marker-controlled watershed segmentation and fuzzy C-means clustering machine learning algorithm. Int J Surg Case Rep 2023;111:108818. [PMID: 37716060] [PMCID: PMC10514425] [DOI: 10.1016/j.ijscr.2023.108818]
Abstract
INTRODUCTION AND IMPORTANCE Accurate segmentation of meningiomas from contrast-enhanced T1-weighted (CE T1-w) magnetic resonance imaging (MRI) is crucial for diagnosis and treatment planning, but manual segmentation is time-consuming and prone to variability. This study aimed to evaluate an automated segmentation approach for meningiomas using marker-controlled watershed segmentation (MCWS) and fuzzy c-means (FCM) algorithms. CASE PRESENTATION AND METHODS CE T1-w MRI scans of 3 female patients (aged 59, 44, and 67 years) with right frontal meningiomas were analyzed. Images were converted to grayscale and preprocessed with Otsu's thresholding and FCM clustering, and MCWS segmentation was then performed. Segmentation accuracy was assessed by comparing the automated segmentations with manual delineations. CLINICAL DISCUSSION The approach successfully segmented the meningiomas in all cases. Mean sensitivity was 0.8822, indicating accurate identification of tumors, and the mean Dice similarity coefficient between Otsu's thresholding and FCM1 was 0.6599, suggesting good overlap between the segmentation methods. CONCLUSION The MCWS and FCM approach enables accurate automated segmentation of meningiomas from CE T1-w MRI. With further validation on larger datasets, it could provide an efficient tool to assist in delineating meningioma boundaries for clinical management.
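A compact scikit-image rendering of the MCWS skeleton: a threshold gives the foreground, distance-transform peaks give the markers, and watershed floods from them. Otsu thresholding stands in for the paper's FCM clustering step and a built-in test image stands in for a CE T1-w slice, so this is illustrative only:

```python
# Marker-controlled watershed on a 2D image (FCM step elided).
import numpy as np
from scipy import ndimage as ndi
from skimage import data
from skimage.feature import peak_local_max
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed

img = data.coins().astype(float)             # stand-in for a grayscale slice
fg = img > threshold_otsu(img)               # rough foreground mask

dist = ndi.distance_transform_edt(fg)        # distance to background
coords = peak_local_max(dist, min_distance=10, labels=fg)
markers = np.zeros(img.shape, dtype=int)     # one labeled seed per peak
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

labels = watershed(-dist, markers, mask=fg)  # flood basins from the markers
print(f"{labels.max()} regions segmented")
```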
Affiliation(s)
- Sana Mohammadi
- Department of Medical Sciences, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Sadegh Ghaderi
- Department of Neuroscience and Addiction Studies, School of Advanced Technologies in Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Kayvan Ghaderi
- Department of Information Technology and Computer Engineering, Faculty of Engineering, University of Kurdistan, Sanandaj 66177-15175, Iran
- Mahdi Mohammadi
- Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
10
Yue WY, Zhang HT, Gao S, Li G, Sun ZY, Tang Z, Cai JM, Tian N, Zhou J, Dong JH, Liu Y, Bai X, Sheng FG. Predicting Breast Cancer Subtypes Using Magnetic Resonance Imaging Based Radiomics With Automatic Segmentation. J Comput Assist Tomogr 2023;47:729-737. [PMID: 37707402] [PMCID: PMC10510832] [DOI: 10.1097/rct.0000000000001474]
Abstract
OBJECTIVE The aim of this study was to determine whether radiomics based on an automatic segmentation method is feasible for predicting molecular subtypes. METHODS This retrospective study included 516 patients with confirmed breast cancer. An automatic segmentation method, a 3-dimensional UNet-based convolutional neural network trained on our in-house data set, was applied to segment the regions of interest, and a set of 1316 radiomics features per region of interest was extracted. Eighteen cross-combination radiomics methods, pairing 6 feature selection methods with 3 classifiers, were used for model selection. Model classification performance was assessed using the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity. RESULTS The average Dice similarity coefficient of the automatic segmentation was 0.89. The radiomics models were predictive of the 4 molecular subtypes with best averages of AUC = 0.8623, accuracy = 0.6596, sensitivity = 0.6383, and specificity = 0.8775. For luminal versus nonluminal subtypes, AUC = 0.8788 (95% confidence interval [CI], 0.8505-0.9071), accuracy = 0.7756, sensitivity = 0.7973, and specificity = 0.7466. For human epidermal growth factor receptor 2 (HER2)-enriched versus non-HER2-enriched subtypes, AUC = 0.8676 (95% CI, 0.8370-0.8982), accuracy = 0.7737, sensitivity = 0.8859, and specificity = 0.7283. For triple-negative versus non-triple-negative breast cancer, AUC = 0.9335 (95% CI, 0.9027-0.9643), accuracy = 0.9110, sensitivity = 0.4444, and specificity = 0.9865. CONCLUSIONS Radiomics based on automatic segmentation of magnetic resonance imaging can noninvasively predict 4 molecular subtypes of breast cancer and is potentially applicable to large samples.
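The cross-combination search (each feature selection method paired with each classifier, scored by cross-validated AUC) maps naturally onto scikit-learn pipelines. A sketch with a synthetic matrix in place of the 1316 radiomics features; the particular selectors and classifiers shown are assumptions, not the study's exact eighteen combinations:

```python
# Pair feature selectors with classifiers and rank the combinations by AUC.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=300, n_informative=20,
                           random_state=0)   # stand-in for radiomics features

selectors = {"anova": SelectKBest(f_classif, k=30),
             "mutual_info": SelectKBest(mutual_info_classif, k=30)}
classifiers = {"logreg": LogisticRegression(max_iter=1000),
               "svm": SVC(probability=True),
               "rf": RandomForestClassifier(n_estimators=200)}

for sname, sel in selectors.items():
    for cname, clf in classifiers.items():
        pipe = make_pipeline(StandardScaler(), sel, clf)
        auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
        print(f"{sname:>11} + {cname:<6}: AUC = {auc:.3f}")
```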
Affiliation(s)
- Wen-Yi Yue
- Fifth Medical Center of Chinese PLA General Hospital
- Chinese PLA General Medical School
- Hong-Tao Zhang
- Fifth Medical Center of Chinese PLA General Hospital
- Shen Gao
- Fifth Medical Center of Chinese PLA General Hospital
- Guang Li
- Keya Medical Technology Co, Ltd, Beijing, China
- Ze-Yu Sun
- Keya Medical Technology Co, Ltd, Beijing, China
- Zhe Tang
- Keya Medical Technology Co, Ltd, Beijing, China
- Jian-Ming Cai
- Fifth Medical Center of Chinese PLA General Hospital
- Ning Tian
- Fifth Medical Center of Chinese PLA General Hospital
- Juan Zhou
- Fifth Medical Center of Chinese PLA General Hospital
- Jing-Hui Dong
- Fifth Medical Center of Chinese PLA General Hospital
- Yuan Liu
- Fifth Medical Center of Chinese PLA General Hospital
- Xu Bai
- Fifth Medical Center of Chinese PLA General Hospital
- Fu-Geng Sheng
- Fifth Medical Center of Chinese PLA General Hospital
11
Madani M, Behzadi MM, Nabavi S. The Role of Deep Learning in Advancing Breast Cancer Detection Using Different Imaging Modalities: A Systematic Review. Cancers (Basel) 2022;14:5334. [PMID: 36358753] [PMCID: PMC9655692] [DOI: 10.3390/cancers14215334]
Abstract
Breast cancer is among the most common and fatal diseases for women, and no permanent cure has been discovered. Early detection is therefore a crucial step in controlling and curing breast cancer and can save the lives of millions of women: in 2020, for example, more than 65% of breast cancer patients were diagnosed at an early stage, all of whom survived. Although early detection is the most effective approach for cancer treatment, breast cancer screening conducted by radiologists is expensive and time-consuming, and, more importantly, conventional methods of analyzing breast cancer images suffer from high false-detection rates. Different breast cancer imaging modalities are used to extract and analyze the key features affecting diagnosis and treatment; these can be grouped into mammography, ultrasound, magnetic resonance imaging, histopathological imaging, or any combination of them. Radiologists or pathologists analyze the images produced by these methods manually, which increases the risk of wrong decisions in cancer detection, so new automatic methods that can analyze all kinds of breast screening images and assist radiologists in interpreting them are required. Recently, artificial intelligence (AI) has been widely utilized to automatically improve the early detection and treatment of different types of cancer, specifically breast cancer, thereby enhancing patients' chances of survival. Advances in AI algorithms, such as deep learning, and the availability of datasets obtained from various imaging modalities have opened an opportunity to surpass the limitations of current breast cancer analysis methods. In this article, we first review breast cancer imaging modalities and their strengths and limitations. Then, we explore and summarize the most recent studies that employed AI in breast cancer detection using various breast imaging modalities. In addition, we report available datasets for the breast cancer imaging modalities, which are important for developing AI-based algorithms and training deep learning models. In conclusion, this review aims to provide a comprehensive resource for researchers working in breast cancer image analysis.
Affiliation(s)
- Mohammad Madani
- Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- Mohammad Mahdi Behzadi
- Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- Sheida Nabavi
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA