1
Xue X, Liang D, Wang K, Gao J, Ding J, Zhou F, Xu J, Liu H, Sun Q, Jiang P, Tao L, Shi W, Cheng J. A deep learning-based 3D Prompt-nnUnet model for automatic segmentation in brachytherapy of postoperative endometrial carcinoma. J Appl Clin Med Phys 2024:e14371. [PMID: 38682540 DOI: 10.1002/acm2.14371]
Abstract
PURPOSE To create and evaluate a three-dimensional (3D) Prompt-nnUnet module that combines a prompt-based model with 3D nnUnet to produce rapid and consistent autosegmentation of the high-risk clinical target volume (HR CTV) and organs at risk (OARs) in high-dose-rate brachytherapy (HDR BT) for patients with postoperative endometrial carcinoma (EC). METHODS AND MATERIALS Across two experimental batches, a total of 321 computed tomography (CT) scans were obtained for HR CTV segmentation from 321 patients with EC, and 125 CT scans for OAR segmentation from 125 patients. The training/validation/test splits were 257/32/32 and 87/13/25 for HR CTV and OARs, respectively. The deep learning networks 3D Prompt-nnUnet and 3D nnUnet were compared for HR CTV and OAR segmentation. Three-fold cross-validation and several quantitative metrics were employed, including Dice similarity coefficient (DSC), Hausdorff distance (HD), 95th percentile of Hausdorff distance (HD95%), and intersection over union (IoU). RESULTS The Prompt-nnUnet included two prompt forms, Predict-Prompt (PP) and Label-Prompt (LP), with the LP performing most similarly to the experienced radiation oncologist and outperforming the less experienced ones. During the testing phase, the mean DSC values for the LP were 0.96 ± 0.02, 0.91 ± 0.02, and 0.83 ± 0.07 for HR CTV, rectum, and urethra, respectively. The mean HD values (mm) were 2.73 ± 0.95, 8.18 ± 4.84, and 2.11 ± 0.50, respectively. The mean HD95% values (mm) were 1.66 ± 1.11, 3.07 ± 0.94, and 1.35 ± 0.55, respectively. The mean IoUs were 0.92 ± 0.04, 0.84 ± 0.03, and 0.71 ± 0.09, respectively. The new model delineated each structure in < 2.35 s, saving clinician time. CONCLUSION The Prompt-nnUnet architecture, particularly the LP, was highly consistent with ground truth (GT) in HR CTV and OAR autosegmentation, reducing interobserver variability and shortening treatment time.
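The overlap metrics reported in this abstract (DSC and IoU) reduce to simple set arithmetic on binary masks. A minimal illustrative sketch in plain Python, using toy voxel sets rather than the study's data or code:

```python
# Illustrative only (not the authors' implementation): overlap metrics
# computed on toy binary masks represented as sets of voxel coordinates.

def dice(a, b):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(a & b) / (len(a) + len(b))

def iou(a, b):
    """Intersection over union: |A∩B| / |A∪B|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Toy 2D "masks": the prediction overlaps ground truth on 3 of 4 voxels.
gt   = {(0, 0), (0, 1), (1, 0), (1, 1)}
pred = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(round(dice(gt, pred), 3))  # 0.75
print(round(iou(gt, pred), 3))   # 0.6
```

In practice these are computed per structure (HR CTV, rectum, urethra) over full 3D volumes, then averaged across patients as in the numbers above.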
Affiliation(s)
- Xian Xue
- Secondary Standard Dosimetry Laboratory, National Institute for Radiological Protection, Chinese Center for Disease Control and Prevention (CDC), Beijing, China
- Dazhu Liang
- Digital Health China Technologies Co., LTD, Beijing, China
- Kaiyue Wang
- Department of Radiotherapy, Peking University Third Hospital, Beijing, China
- Jianwei Gao
- Digital Health China Technologies Co., LTD, Beijing, China
- Jingjing Ding
- Department of Radiotherapy, Chinese People's Liberation Army (PLA) General Hospital, Beijing, China
- Fugen Zhou
- Department of Aero-space Information Engineering, Beihang University, Beijing, China
- Juan Xu
- Digital Health China Technologies Co., LTD, Beijing, China
- Hefeng Liu
- Digital Health China Technologies Co., LTD, Beijing, China
- Quanfu Sun
- Secondary Standard Dosimetry Laboratory, National Institute for Radiological Protection, Chinese Center for Disease Control and Prevention (CDC), Beijing, China
- Ping Jiang
- Department of Radiotherapy, Peking University Third Hospital, Beijing, China
- Laiyuan Tao
- Digital Health China Technologies Co., LTD, Beijing, China
- Wenzhao Shi
- Digital Health China Technologies Co., LTD, Beijing, China
- Jinsheng Cheng
- Secondary Standard Dosimetry Laboratory, National Institute for Radiological Protection, Chinese Center for Disease Control and Prevention (CDC), Beijing, China
2
Zhang W, Zhao N, Gao Y, Huang B, Wang L, Zhou X, Li Z. Automatic liver segmentation and assessment of liver fibrosis using deep learning with MR T1-weighted images in rats. Magn Reson Imaging 2024; 107:1-7. [PMID: 38147969 DOI: 10.1016/j.mri.2023.12.006]
Abstract
OBJECTIVES To validate the performance of nnU-Net for segmentation and a CNN for classification of liver fibrosis using T1-weighted images. MATERIALS AND METHODS In this prospective study, animal models of liver fibrosis were induced by subcutaneous injection of a mixture of carbon tetrachloride and olive oil. A total of 99 male Wistar rats were successfully induced and underwent MR scanning without contrast agent to obtain T1-weighted images. Regions of interest (ROIs) of the whole liver were delineated layer by layer along the liver edge using 3D Slicer. For the segmentation task, all T1-weighted images were randomly divided into training and test cohorts in a ratio of 7:3. For classification, images containing the maximum hepatic diameter of every rat were selected; 80% of the images from the no liver fibrosis (NLF), early liver fibrosis (ELF), and progressive liver fibrosis (PLF) stages were randomly selected for training, and the rest were used for testing. Liver segmentation was performed by the nnU-Net model, and a convolutional neural network (CNN) was used for the classification of liver fibrosis stages. The Dice similarity coefficient was used to evaluate the segmentation performance of nnU-Net. Confusion matrix, ROC curves, and accuracy were used to show the classification performance of the CNN. RESULTS A total of 2628 images were obtained from 99 Wistar rats by MR scanning. For liver segmentation by nnU-Net, the Dice similarity coefficient in the test set was 0.8477. The accuracies of the CNN in staging NLF, ELF, and PLF were 0.73, 0.89, and 0.84, respectively; the AUCs were 0.76, 0.88, and 0.79, respectively. CONCLUSION The nnU-Net architecture achieves high accuracy for liver segmentation, as does the CNN for assessment of liver fibrosis, with T1-weighted images.
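The per-stage accuracies above come from a confusion matrix over the three fibrosis stages. A hedged sketch of that bookkeeping, under one plausible reading (per-stage accuracy as one-vs-rest); the labels below are invented, not the study's data:

```python
# Illustrative sketch (not the study's code): confusion matrix and
# per-stage one-vs-rest accuracy for a 3-class fibrosis staging task.

STAGES = ["NLF", "ELF", "PLF"]

def confusion_matrix(y_true, y_pred, labels=STAGES):
    """Rows = true stage, columns = predicted stage."""
    idx = {label: i for i, label in enumerate(labels)}
    m = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        m[idx[t]][idx[p]] += 1
    return m

def per_class_accuracy(y_true, y_pred, label):
    """One-vs-rest accuracy for one stage: (TP + TN) / total."""
    correct = sum((t == label) == (p == label) for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Hypothetical labels for six rats.
y_true = ["NLF", "NLF", "ELF", "ELF", "PLF", "PLF"]
y_pred = ["NLF", "ELF", "ELF", "ELF", "PLF", "NLF"]
print(confusion_matrix(y_true, y_pred))          # [[1, 1, 0], [0, 2, 0], [1, 0, 1]]
print(round(per_class_accuracy(y_true, y_pred, "ELF"), 3))
```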
Affiliation(s)
- Wenjing Zhang
- Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Nan Zhao
- College of Computer Science and Technology of Qingdao University, Qingdao, China
- Yuanxiang Gao
- Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Baoxiang Huang
- College of Computer Science and Technology of Qingdao University, Qingdao, China
- Lili Wang
- Department of Pathology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Xiaoming Zhou
- Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Zhiming Li
- Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
3
Lew CO, Harouni M, Kirksey ER, Kang EJ, Dong H, Gu H, Grimm LJ, Walsh R, Lowell DA, Mazurowski MA. A publicly available deep learning model and dataset for segmentation of breast, fibroglandular tissue, and vessels in breast MRI. Sci Rep 2024; 14:5383. [PMID: 38443410 PMCID: PMC10915139 DOI: 10.1038/s41598-024-54048-2]
Abstract
Breast density, or the amount of fibroglandular tissue (FGT) relative to the overall breast volume, increases the risk of developing breast cancer. Although previous studies have utilized deep learning to assess breast density, the limited public availability of data and quantitative tools hinders the development of better assessment tools. Our objective was to (1) create and share a large dataset of pixel-wise annotations according to well-defined criteria, and (2) develop, evaluate, and share an automated segmentation method for breast, FGT, and blood vessels using convolutional neural networks. We used the Duke Breast Cancer MRI dataset to randomly select 100 MRI studies and manually annotated the breast, FGT, and blood vessels for each study. Model performance was evaluated using the Dice similarity coefficient (DSC). The model achieved DSC values of 0.92 for breast, 0.86 for FGT, and 0.65 for blood vessels on the test set. The correlation between our model's predicted breast density and the manually generated masks was 0.95. The correlation between the predicted breast density and qualitative radiologist assessment was 0.75. Our automated models can accurately segment breast, FGT, and blood vessels using pre-contrast breast MRI data. The data and the models were made publicly available.
Affiliation(s)
- Christopher O Lew
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA.
- Majid Harouni
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA
- Ella R Kirksey
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA
- Elianne J Kang
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA
- Haoyu Dong
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA
- Hanxue Gu
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA
- Lars J Grimm
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA
- Ruth Walsh
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA
- Dorothy A Lowell
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA
- Maciej A Mazurowski
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA
4
Choi Y, Bang J, Kim SY, Seo M, Jang J. Deep learning-based multimodal segmentation of oropharyngeal squamous cell carcinoma on CT and MRI using self-configuring nnU-Net. Eur Radiol 2024:10.1007/s00330-024-10585-y. [PMID: 38243135 DOI: 10.1007/s00330-024-10585-y]
Abstract
PURPOSE To evaluate deep learning-based segmentation models for oropharyngeal squamous cell carcinoma (OPSCC) on CT and MRI using nnU-Net. METHODS This single-center retrospective study included 91 patients with OPSCC. The patients were grouped into the development (n = 56), test 1 (n = 13), and test 2 (n = 22) cohorts. In the development cohort, OPSCC was manually segmented on CT, MR, and co-registered CT-MR images, which served as the ground truth. The multimodal and multichannel input images were then trained using a self-configuring nnU-Net. For evaluation metrics, the Dice similarity coefficient (DSC) and mean Hausdorff distance (HD) were calculated for the test cohorts. Pearson's correlation and Bland-Altman analyses were performed between ground truth and prediction volumes. Intraclass correlation coefficients (ICCs) of radiomic features were calculated to assess reproducibility. RESULTS All models achieved robust segmentation performance, with DSCs of 0.64 ± 0.33 (CT), 0.67 ± 0.27 (MR), and 0.65 ± 0.29 (CT-MR) in test cohort 1 and 0.57 ± 0.31 (CT), 0.77 ± 0.08 (MR), and 0.73 ± 0.18 (CT-MR) in test cohort 2. No significant differences were found in DSC among the models. HDs of the CT-MR (1.57 ± 1.06 mm) and MR models (1.36 ± 0.61 mm) were significantly lower than that of the CT model (3.48 ± 5.0 mm) (p = 0.037 and p = 0.014, respectively). The correlation coefficients between the ground truth and prediction volumes for the CT, MR, and CT-MR models were 0.88, 0.93, and 0.9, respectively. MR models demonstrated excellent mean ICCs of radiomic features (0.91-0.93). CONCLUSION The self-configuring nnU-Net demonstrated reliable and accurate segmentation of OPSCC on CT and MRI. The multimodal CT-MR model showed promising results for simultaneous segmentation on CT and MRI.
CLINICAL RELEVANCE STATEMENT Deep learning-based automatic detection and segmentation of oropharyngeal squamous cell carcinoma on pre-treatment CT and MRI would facilitate radiologic response assessment and radiotherapy planning. KEY POINTS • The nnU-Net framework produced reliable and accurate segmentation of OPSCC on CT and MRI. • MR and CT-MR models showed higher DSC and lower Hausdorff distance than the CT model. • Correlation coefficients between the ground truth and predicted segmentation volumes were high in all three models.
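The volume agreement reported above is a Pearson correlation between ground-truth and predicted tumor volumes. A small self-contained sketch of that computation; the volumes below are hypothetical, not the study's data:

```python
# Illustrative sketch (not the study's code): Pearson's r between
# ground-truth and predicted tumor volumes for a handful of cases.
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical volumes in cm^3 for five patients.
gt_vol   = [12.1, 5.4, 8.9, 20.3, 3.2]
pred_vol = [11.5, 6.0, 9.4, 18.8, 3.9]
r = pearson_r(gt_vol, pred_vol)
print(round(r, 3))
```

The study pairs this with Bland-Altman analysis, which looks at the per-case differences rather than the linear association, so the two views complement each other.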
Affiliation(s)
- Yangsean Choi
- Department of Radiology, Seoul St. Mary's Hospital, The Catholic University of Korea, College of Medicine, Seoul, Republic of Korea.
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Centre, 43 Olympic-Ro 88, Songpa-Gu, Seoul, 05505, Republic of Korea.
- Jooin Bang
- Department of Otolaryngology-Head and Neck Surgery, Seoul St. Mary's Hospital, The Catholic University of Korea, College of Medicine, Seoul, Republic of Korea
- Sang-Yeon Kim
- Department of Otolaryngology-Head and Neck Surgery, Seoul St. Mary's Hospital, The Catholic University of Korea, College of Medicine, Seoul, Republic of Korea
- Minkook Seo
- Department of Radiology, Seoul St. Mary's Hospital, The Catholic University of Korea, College of Medicine, Seoul, Republic of Korea
- Jinhee Jang
- Department of Radiology, Seoul St. Mary's Hospital, The Catholic University of Korea, College of Medicine, Seoul, Republic of Korea
5
Saikia S, Si T, Deb D, Bora K, Mallik S, Maulik U, Zhao Z. Lesion detection in women breast's dynamic contrast-enhanced magnetic resonance imaging using deep learning. Sci Rep 2023; 13:22555. [PMID: 38110462 PMCID: PMC10728155 DOI: 10.1038/s41598-023-48553-z]
Abstract
Breast cancer is one of the most common cancers in women and the second leading cause of cancer death in women after lung cancer. Recent technological advances in breast cancer treatment offer hope to millions of women in the world. Segmentation of breast Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) is one of the necessary tasks in the diagnosis and detection of breast cancer. Currently, U-Net, a popular deep learning model, is extensively used in biomedical image segmentation. This article aims to advance the state of the art and conduct a more in-depth analysis of the use of various U-Net models for lesion detection in women's breast DCE-MRI. We perform an empirical study of the effectiveness and efficiency of U-Net and its derived deep learning models, including ResUNet, Dense UNet, DUNet, Attention U-Net, UNet++, MultiResUNet, RAUNet, Inception U-Net, and U-Net GAN, for lesion detection in breast DCE-MRI. All the models are applied to the benchmark dataset of 100 sagittal T2-weighted fat-suppressed DCE-MRI slices of 20 patients and their performance is compared. A comparative study is also conducted with V-Net, W-Net, and DeepLabV3+. The non-parametric Wilcoxon signed-rank test is used to analyze the significance of the quantitative results. Furthermore, Multi-Criteria Decision Analysis (MCDA) is used to evaluate overall performance with respect to accuracy, precision, sensitivity, F1-score, specificity, geometric mean, DSC, and false-positive rate. The RAUNet segmentation model achieved a high accuracy of 99.76%, sensitivity of 85.04%, precision of 90.21%, and Dice Similarity Coefficient (DSC) of 85.04%, whereas ResUNet achieved 99.62% accuracy, 62.26% sensitivity, 99.56% precision, and 72.86% DSC. ResUNet is found to be the most effective model based on MCDA, while U-Net GAN takes the least computational time to perform the segmentation task. Both quantitative and qualitative results demonstrate that the ResUNet model outperforms the other models in segmenting the images and detecting lesions, though the computational time required varies.
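The metrics fed into the MCDA above (accuracy, sensitivity, precision, specificity, F1, geometric mean) all derive from the same per-pixel confusion counts. An illustrative sketch with made-up counts, not the paper's results:

```python
# Illustrative sketch (hypothetical counts, not the paper's data):
# standard binary classification metrics from raw confusion counts.
import math

def metrics(tp, fp, tn, fn):
    sens = tp / (tp + fn)                  # sensitivity / recall
    spec = tn / (tn + fp)                  # specificity
    prec = tp / (tp + fp)                  # precision
    acc = (tp + tn) / (tp + fp + tn + fn)  # overall accuracy
    f1 = 2 * prec * sens / (prec + sens)   # harmonic mean of prec & sens
    g_mean = math.sqrt(sens * spec)        # geometric mean of sens & spec
    return {"accuracy": acc, "sensitivity": sens, "precision": prec,
            "specificity": spec, "f1": f1, "g_mean": g_mean}

# Toy counts for a lesion-segmentation task: lesions are tiny relative
# to the image, so accuracy is near 1 even when sensitivity is modest.
m = metrics(tp=85, fp=10, tn=9900, fn=15)
print({k: round(v, 3) for k, v in m.items()})
```

The class imbalance in the toy numbers illustrates why the paper reports near-99% accuracies alongside much lower sensitivities, and why MCDA over several metrics is more informative than accuracy alone.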
Affiliation(s)
- Sudarshan Saikia
- Information Technology Department, Oil India Limited, Duliajan, Assam, 786602, India
- Tapas Si
- AI Innovation Lab, Department of Computer Science & Engineering, University of Engineering & Management, Jaipur, GURUKUL, Jaipur, Rajasthan, 303807, India
- Darpan Deb
- Department of Computer Application, Christ University, Bengaluru, 560029, India
- Kangkana Bora
- Department of Computer Science and Information Technology, Cotton University, Guwahati, Assam, 781001, India
- Saurav Mallik
- Department of Environmental Health, Harvard T. H. Chan School of Public Health, Boston, MA, 02115, USA
- Ujjwal Maulik
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, India
- Zhongming Zhao
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, 77030, USA
6
Zhang J, Cui Z, Zhou L, Sun Y, Li Z, Liu Z, Shen D. Breast Fibroglandular Tissue Segmentation for Automated BPE Quantification With Iterative Cycle-Consistent Semi-Supervised Learning. IEEE Trans Med Imaging 2023; 42:3944-3955. [PMID: 37756174 DOI: 10.1109/tmi.2023.3319646]
Abstract
Background Parenchymal Enhancement (BPE) quantification in Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) plays a pivotal role in clinical breast cancer diagnosis and prognosis. However, emerging deep learning-based breast fibroglandular tissue segmentation, a crucial step in automated BPE quantification, often suffers from limited training samples with accurate annotations. To address this challenge, we propose a novel iterative cycle-consistent semi-supervised framework that boosts segmentation performance by using a large number of paired pre-/post-contrast images without annotations. Specifically, we design a reconstruction network, cascaded with the segmentation network, to learn a mapping from the pre-contrast images and segmentation predictions to the post-contrast images. The reconstruction task thus implicitly explores the inter-relationship between these two-phase images, which in turn guides the segmentation task. Moreover, the post-contrast images reconstructed across multiple auto-context modeling-based iterations can be viewed as new augmentations, facilitating cycle-consistent constraints on each segmentation output. Extensive experiments on two datasets with different data distributions show strong segmentation and BPE quantification accuracy compared with other state-of-the-art semi-supervised methods. Importantly, our method achieves an 11.80-fold improvement in quantification accuracy while being 10 times faster than clinical physicians, demonstrating its potential for automated BPE quantification. The code is available at https://github.com/ZhangJD-ong/Iterative-Cycle-consistent-Semi-supervised-Learning-for-fibroglandular-tissue-segmentation.
7
Nowakowska S, Borkowski K, Ruppert CM, Landsmann A, Marcon M, Berger N, Boss A, Ciritsis A, Rossi C. Generalizable attention U-Net for segmentation of fibroglandular tissue and background parenchymal enhancement in breast DCE-MRI. Insights Imaging 2023; 14:185. [PMID: 37932462 PMCID: PMC10628070 DOI: 10.1186/s13244-023-01531-5]
Abstract
OBJECTIVES To develop automated segmentation models enabling standardized volumetric quantification of fibroglandular tissue (FGT) from native volumes and background parenchymal enhancement (BPE) from subtraction volumes of dynamic contrast-enhanced breast MRI, and to subsequently assess the developed models in the context of FGT and BPE Breast Imaging Reporting and Data System (BI-RADS)-compliant classification. METHODS For training and validation of the attention U-Net models, data from a single 3.0-T scanner were used. For testing, additional data from a 1.5-T scanner and data acquired at a different institution with a 3.0-T scanner were utilized. The developed models were used to quantify the amount of FGT and BPE in 80 DCE-MRI examinations, and a correlation between these volumetric measures and the classes assigned by radiologists was performed. RESULTS To assess model performance using application-relevant metrics, the correlation between the volumes of breast, FGT, and BPE calculated from ground truth masks and from predicted masks was checked. Pearson correlation coefficients ranging from 0.963 ± 0.004 to 0.999 ± 0.001 were achieved. The Spearman correlation coefficient between the quantitative and qualitative (i.e., radiologist classification) assessment of FGT amounted to 0.70 (p < 0.0001), whereas for BPE it amounted to 0.37 (p = 0.0006). CONCLUSIONS Generalizable algorithms for FGT and BPE segmentation were developed and tested. Our results suggest that when assessing FGT, it is sufficient to use volumetric measures alone; however, for the evaluation of BPE, additional models considering voxel intensity distribution and morphology are required. CRITICAL RELEVANCE STATEMENT A standardized assessment of FGT density can rely on volumetric measures, whereas in the case of BPE the volumetric measures constitute, along with voxel intensity distribution and morphology, an important factor.
KEY POINTS • Our work contributes to the standardization of FGT and BPE assessment. • Attention U-Net can reliably segment intricately shaped FGT and BPE structures. • The developed models were robust to domain shift.
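The quantitative-versus-qualitative comparison above uses Spearman's rank correlation, which is appropriate because the radiologist classes are ordinal and tied. A toy sketch with tie-aware average ranks; the volume fractions and density classes below are hypothetical, not the study's data:

```python
# Illustrative sketch (hypothetical data): Spearman's rho between a
# continuous volumetric measure and ordinal radiologist classes.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def ranks(xs):
    """Average ranks (1-based); tied values share their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return r

def spearman_rho(xs, ys):
    # Spearman's rho = Pearson's r applied to the rank vectors.
    return pearson_r(ranks(xs), ranks(ys))

# Hypothetical FGT volume fractions vs. density classes (a-d mapped to 1-4).
fgt_fraction = [0.05, 0.12, 0.20, 0.33, 0.41, 0.08]
density_class = [1, 2, 2, 3, 4, 1]
rho = spearman_rho(fgt_fraction, density_class)
print(round(rho, 3))
```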
Affiliation(s)
- Sylwia Nowakowska
- Diagnostic and interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091, Zurich, Switzerland.
- Carlotta M Ruppert
- Diagnostic and interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- Anna Landsmann
- Diagnostic and interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- Magda Marcon
- Diagnostic and interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- Nicole Berger
- Diagnostic and interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- Present Address: Institut Radiologie, Spital Lachen, Oberdorfstrasse 41, 8853, Lachen, Switzerland
- Andreas Boss
- Diagnostic and interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- Present address: GZO AG Spital Wetzikon, Spitalstrasse 66, 8620, Wetzikon, Switzerland
- Alexander Ciritsis
- Diagnostic and interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- b-rayZ AG, Wagistrasse 21, 8952, Schlieren, Switzerland
- Cristina Rossi
- Diagnostic and interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- b-rayZ AG, Wagistrasse 21, 8952, Schlieren, Switzerland
8
Müller-Franzes G, Müller-Franzes F, Huck L, Raaff V, Kemmer E, Khader F, Arasteh ST, Lemainque T, Kather JN, Nebelung S, Kuhl C, Truhn D. Fibroglandular tissue segmentation in breast MRI using vision transformers: a multi-institutional evaluation. Sci Rep 2023; 13:14207. [PMID: 37648728 PMCID: PMC10468506 DOI: 10.1038/s41598-023-41331-x]
Abstract
Accurate and automatic segmentation of fibroglandular tissue in breast MRI screening is essential for the quantification of breast density and background parenchymal enhancement. In this retrospective study, we developed and evaluated a transformer-based neural network for breast segmentation (TraBS) in multi-institutional MRI data, and compared its performance to the well-established convolutional neural network nnUNet. TraBS and nnUNet were trained and tested on 200 internal and 40 external breast MRI examinations using manual segmentations generated by experienced human readers. Segmentation performance was assessed in terms of the Dice score and the average symmetric surface distance. The Dice score for nnUNet was lower than for TraBS on the internal test set (0.909 ± 0.069 versus 0.916 ± 0.067, P < 0.001) and on the external test set (0.824 ± 0.144 versus 0.864 ± 0.081, P = 0.004). Moreover, the average symmetric surface distance was higher (i.e., worse) for nnUNet than for TraBS on the internal (0.657 ± 2.856 versus 0.548 ± 2.195, P = 0.001) and on the external test set (0.727 ± 0.620 versus 0.584 ± 0.413, P = 0.03). Our study demonstrates that transformer-based networks improve the quality of fibroglandular tissue segmentation in breast MRI compared to convolutional models like nnUNet. These findings might help to enhance the accuracy of breast density and parenchymal enhancement quantification in breast MRI screening.
Affiliation(s)
- Gustav Müller-Franzes
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Fritz Müller-Franzes
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Luisa Huck
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Vanessa Raaff
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Eva Kemmer
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Firas Khader
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Soroosh Tayebi Arasteh
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Teresa Lemainque
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Jakob Nikolas Kather
- Else Kroener Fresenius Center for Digital Health, Technical University, Dresden, Germany
- Department of Medicine III, University Hospital RWTH, Aachen, Germany
- Sven Nebelung
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Christiane Kuhl
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Daniel Truhn
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
9
Yang KB, Lee J, Yang J. Multi-class semantic segmentation of breast tissues from MRI images using U-Net based on Haar wavelet pooling. Sci Rep 2023; 13:11704. [PMID: 37474633 PMCID: PMC10359288 DOI: 10.1038/s41598-023-38557-0]
Abstract
MRI images used in breast cancer diagnosis are taken in a lying position and are therefore inappropriate for reconstructing the natural breast shape in a standing position. Some studies have proposed methods to present the breast shape in a standing position using an ordinary differential equation of the finite element method. However, it is difficult to obtain meaningful results because breast tissues have different elastic moduli. This study proposed a multi-class semantic segmentation method for breast tissues to reconstruct breast shapes using a U-Net based on Haar wavelet pooling. First, a dataset was constructed by labeling the skin, fat, and fibro-glandular tissues and the background from MRI images taken in a lying position. Next, multi-class semantic segmentation was performed using the U-Net based on Haar wavelet pooling to improve the segmentation accuracy for breast tissues. The U-Net effectively extracted breast tissue features while reducing image information loss in the subsampling stage by using multiple sub-bands. In addition, the proposed network is robust to overfitting. The proposed network achieved an mIoU of 87.48 for segmenting breast tissues and demonstrated high-accuracy segmentation for breast tissues with different elastic moduli, enabling reconstruction of the natural breast shape.
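Haar wavelet pooling replaces plain downsampling with a one-level 2D Haar transform: each 2×2 block is reduced to an approximation (average) plus three detail sub-bands, so resolution is halved while edge information survives in the details. A toy pure-Python sketch of that decomposition (the paper applies it as a pooling layer inside the network; normalization conventions vary, and the averaging form below is one common choice):

```python
# Toy sketch of a one-level 2D Haar decomposition, the operation behind
# "Haar wavelet pooling". Not the paper's implementation.

def haar2d(img):
    """Split an even-sized 2D array into LL/LH/HL/HH sub-bands."""
    h, w = len(img), len(img[0])
    LL, LH, HL, HH = ([[0.0] * (w // 2) for _ in range(h // 2)] for _ in range(4))
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]          # top row of the 2x2 block
            c, d = img[i + 1][j], img[i + 1][j + 1]  # bottom row
            LL[i // 2][j // 2] = (a + b + c + d) / 4  # average (the "pooled" value)
            LH[i // 2][j // 2] = (a - b + c - d) / 4  # left-right detail
            HL[i // 2][j // 2] = (a + b - c - d) / 4  # top-bottom detail
            HH[i // 2][j // 2] = (a - b - c + d) / 4  # diagonal detail
    return LL, LH, HL, HH

img = [[1, 1, 5, 3],
       [1, 1, 5, 3],
       [2, 2, 2, 2],
       [0, 0, 2, 2]]
LL, LH, HL, HH = haar2d(img)
print(LL)  # [[1.0, 4.0], [1.0, 2.0]]
```

Unlike max pooling, the discarded "half" of the signal is retained in LH/HL/HH, which is what lets the network reduce information loss during subsampling.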
Affiliation(s)
- Kwang Bin Yang
- Division of Memory - Memory FAB Team 1, Samsung Electronics, 1 Samsungjeonja-ro, Hwaseong, Gyeonggi, 18448, Republic of Korea
- Jinwon Lee
- Department of Industrial and Management Engineering, Gangneung-Wonju National University, 150 Namwon-ro, Wonju, Gangwon, 26403, Republic of Korea
- Jeongsam Yang
- Department of Industrial Engineering, Ajou University, 206 Worldcup-ro, Suwon, Gyeonggi, 16499, Republic of Korea
10
Barbaroux H, Kunze KP, Neji R, Nazir MS, Pennell DJ, Nielles-Vallespin S, Scott AD, Young AA. Automated segmentation of long and short axis DENSE cardiovascular magnetic resonance for myocardial strain analysis using spatio-temporal convolutional neural networks. J Cardiovasc Magn Reson 2023; 25:16. [PMID: 36991474 PMCID: PMC10061808 DOI: 10.1186/s12968-023-00927-y]
Abstract
BACKGROUND Cine Displacement Encoding with Stimulated Echoes (DENSE) facilitates the quantification of myocardial deformation by encoding tissue displacements in the cardiovascular magnetic resonance (CMR) image phase, from which myocardial strain can be estimated with high accuracy and reproducibility. Current methods for analyzing DENSE images still rely heavily on user input, making this process time-consuming and subject to inter-observer variability. The present study sought to develop a spatio-temporal deep learning model for segmentation of the left-ventricular (LV) myocardium, as spatial networks often fail due to contrast-related properties of DENSE images. METHODS 2D + time nnU-Net-based models were trained to segment the LV myocardium from DENSE magnitude data in short- and long-axis images. A dataset of 360 short-axis and 124 long-axis slices was used to train the networks, drawn from a combination of healthy subjects and patients with various conditions (hypertrophic and dilated cardiomyopathy, myocardial infarction, myocarditis). Segmentation performance was evaluated using ground-truth manual labels, and a strain analysis using conventional methods was performed to assess strain agreement with manual segmentation. Additional validation was performed using an externally acquired dataset to compare the inter- and intra-scanner reproducibility with respect to conventional methods. RESULTS Spatio-temporal models gave consistent segmentation performance throughout the cine sequence, while 2D architectures often failed to segment end-diastolic frames due to the limited blood-to-myocardium contrast. Our models achieved a Dice score of 0.83 ± 0.05 and a Hausdorff distance of 4.0 ± 1.1 mm for short-axis segmentation, and 0.82 ± 0.03 and 7.9 ± 3.9 mm, respectively, for long-axis segmentations.
Strain measurements obtained from automatically estimated myocardial contours showed good to excellent agreement with manual pipelines, and remained within the limits of inter-user variability estimated in previous studies. CONCLUSION Spatio-temporal deep learning shows increased robustness for the segmentation of cine DENSE images. It provides excellent agreement with manual segmentation for strain extraction. Deep learning will facilitate the analysis of DENSE data, bringing it one step closer to clinical routine.
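Several of the entries collected here report Dice scores and (percentile) Hausdorff distances. For reference, the following is a minimal sketch of how these two metrics can be computed for binary masks with NumPy/SciPy, using a voxel-set formulation; it is an illustration only, not the evaluation code of any cited study:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def hausdorff_distance(pred: np.ndarray, gt: np.ndarray, q: float = 100.0) -> float:
    """Symmetric (percentile) Hausdorff distance in voxel units; q=95 gives HD95%."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    # Distance from every foreground voxel to the nearest foreground voxel of the other mask:
    # distance_transform_edt(~mask) gives, at each voxel, the distance to the nearest True voxel.
    d_to_gt = distance_transform_edt(~gt)[pred]
    d_to_pred = distance_transform_edt(~pred)[gt]
    return float(max(np.percentile(d_to_gt, q), np.percentile(d_to_pred, q)))

# Two overlapping 4x4 squares shifted by one voxel
gt = np.zeros((10, 10), bool); gt[2:6, 2:6] = True
pred = np.zeros((10, 10), bool); pred[3:7, 3:7] = True
print(dice_coefficient(pred, gt))    # 0.5625 (overlap 9, sizes 16 + 16)
print(hausdorff_distance(pred, gt))  # sqrt(2), the corner-to-corner offset
```

Voxel-set and surface-based Hausdorff variants can differ slightly; published toolkits typically use boundary voxels.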
Affiliation(s)
- Hugo Barbaroux
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK.
- Cardiovascular Magnetic Resonance Unit, The Royal Brompton Hospital (Guy's and St Thomas' NHS Foundation Trust), London, UK.
- Karl P Kunze
- MR Research Collaborations, Siemens Healthcare Limited, Camberley, UK
- Radhouene Neji
- MR Research Collaborations, Siemens Healthcare Limited, Camberley, UK
- Muhummad Sohaib Nazir
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Dudley J Pennell
- Cardiovascular Magnetic Resonance Unit, The Royal Brompton Hospital (Guy's and St Thomas' NHS Foundation Trust), London, UK
- National Heart and Lung Institute, Imperial College London, London, UK
- Sonia Nielles-Vallespin
- Cardiovascular Magnetic Resonance Unit, The Royal Brompton Hospital (Guy's and St Thomas' NHS Foundation Trust), London, UK
- National Heart and Lung Institute, Imperial College London, London, UK
- Andrew D Scott
- Cardiovascular Magnetic Resonance Unit, The Royal Brompton Hospital (Guy's and St Thomas' NHS Foundation Trust), London, UK
- National Heart and Lung Institute, Imperial College London, London, UK
- Alistair A Young
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
11
Zhao X, Bai JW, Guo Q, Ren K, Zhang GJ. Clinical applications of deep learning in breast MRI. Biochim Biophys Acta Rev Cancer 2023; 1878:188864. [PMID: 36822377 DOI: 10.1016/j.bbcan.2023.188864]
Abstract
Deep learning (DL) is one of the most powerful data-driven machine-learning techniques in artificial intelligence (AI). It can automatically learn from raw data without manual feature selection. DL models have led to remarkable advances in data extraction and analysis for medical imaging. Magnetic resonance imaging (MRI) has proven useful in delineating the characteristics and extent of breast lesions and tumors. This review summarizes the current state-of-the-art applications of DL models in breast MRI. Many recent DL models were examined in this field, along with several advanced learning approaches and methods for data normalization and breast and lesion segmentation. For clinical applications, DL-based breast MRI models were proven useful in five aspects: diagnosis of breast cancer, classification of molecular types, classification of histopathological types, prediction of neoadjuvant chemotherapy response, and prediction of lymph node metastasis. For subsequent studies, further improvement in data acquisition and preprocessing is necessary, additional DL techniques in breast MRI should be investigated, and wider clinical applications need to be explored.
Affiliation(s)
- Xue Zhao
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, China; Department of Breast-Thyroid-Surgery and Cancer Center, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Jing-Wen Bai
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Department of Oncology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Cancer Research Center, School of Medicine, Xiamen University, Xiamen, China
- Qiu Guo
- Department of Radiology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Ke Ren
- Department of Radiology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Guo-Jun Zhang
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Department of Breast-Thyroid-Surgery and Cancer Center, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Cancer Research Center, School of Medicine, Xiamen University, Xiamen, China
12
Qureshi I, Yan J, Abbas Q, Shaheed K, Riaz AB, Wahid A, Khan MWJ, Szczuko P. Medical image segmentation using deep semantic-based methods: A review of techniques, applications and emerging trends. Information Fusion 2023. [DOI: 10.1016/j.inffus.2022.09.031]
13
Zhu Y, Chen L, Lu W, Gong Y, Wang X. The application of the nnU-Net-based automatic segmentation model in assisting carotid artery stenosis and carotid atherosclerotic plaque evaluation. Front Physiol 2022; 13:1057800. [PMID: 36561211 PMCID: PMC9763590 DOI: 10.3389/fphys.2022.1057800]
Abstract
Objective: No new U-Net (nnU-Net) is a recently developed deep learning network whose advantages in medical image segmentation have attracted attention. This study aimed to investigate the value of an nnU-Net-based model for computed tomography angiography (CTA) imaging in assisting the evaluation of carotid artery stenosis (CAS) and atherosclerotic plaque. Methods: This study retrospectively enrolled 93 CAS-suspected patients who underwent head and neck CTA examination, then randomly divided them into a training set (N = 70) and a validation set (N = 23) in a 3:1 ratio. The radiologist-marked images in the training set were used for development of the nnU-Net model, which was subsequently tested in the validation set. Results: In the training set, the nnU-Net already displayed good performance for CAS diagnosis and atherosclerotic plaque segmentation. Its utility was further confirmed in the validation set: the Dice similarity coefficient values of the nnU-Net model in segmenting background, blood vessels, calcified plaques, and dark spots reached 0.975, 0.974, 0.795, and 0.498, respectively. Moreover, the nnU-Net model displayed good consistency with physicians in assessing CAS (Kappa = 0.893), stenosis degree (Kappa = 0.930), the number of calcified (Kappa = 0.922), non-calcified (Kappa = 0.768), and mixed plaques (Kappa = 0.793), as well as the maximum thickness of calcified plaque (intraclass correlation coefficient = 0.972). Additionally, the evaluation time of the nnU-Net model was shorter than that of the physicians (27.3 ± 4.4 s vs. 296.8 ± 81.1 s, p < 0.001). Conclusion: The automatic segmentation model based on nnU-Net shows good accuracy, reliability, and efficiency in assisting CTA-based evaluation of CAS and carotid atherosclerotic plaques.
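The Kappa values reported above measure chance-corrected agreement between the model and physicians on categorical judgments. As a point of reference only (not the study's code), Cohen's kappa for two raters can be sketched in a few lines of NumPy:

```python
import numpy as np

def cohen_kappa(r1, r2) -> float:
    """Cohen's kappa: chance-corrected agreement between two raters' labels."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = np.mean(r1 == r2)  # observed agreement
    # Expected chance agreement from the two raters' marginal label frequencies
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in np.union1d(r1, r2))
    return float((po - pe) / (1.0 - pe))

# Raters agree on 3 of 4 cases -> kappa = 0.5 after chance correction
print(cohen_kappa([0, 1, 0, 1], [0, 1, 1, 1]))  # 0.5
```

A kappa near 0.9, as reported for CAS assessment, indicates near-perfect agreement on most published interpretation scales.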
Affiliation(s)
- Ying Zhu
- First Clinical Medical College, Soochow University, Suzhou, China
| | - Liwei Chen
- Department of Radiology, School of Medicine, Tongren Hospital, Shanghai Jiao Tong University, Shanghai, China
| | - Wenjie Lu
- Department of Radiology, School of Medicine, Tongren Hospital, Shanghai Jiao Tong University, Shanghai, China
| | - Yongjun Gong
- Department of Radiology, School of Medicine, Tongren Hospital, Shanghai Jiao Tong University, Shanghai, China,*Correspondence: Yongjun Gong, ; Ximing Wang,
| | - Ximing Wang
- Department of Radiology, The First Affiliated Hospital of Soochow University, Suzhou, China,*Correspondence: Yongjun Gong, ; Ximing Wang,
| |
14
Xu Z, Yu F, Zhang B, Zhang Q. Intelligent diagnosis of left ventricular hypertrophy using transthoracic echocardiography videos. Comput Methods Programs Biomed 2022; 226:107182. [PMID: 36257197 DOI: 10.1016/j.cmpb.2022.107182]
Abstract
PURPOSE Left ventricular hypertrophy (LVH) is an independent risk factor for cardiovascular events and mortality. Pathological LVH can be caused by various diseases. In this study, we explored the possibility of using time- and frequency-domain analysis of myocardial radiomics features to differentiate hypertrophic cardiomyopathy (HCM), hypertensive heart disease (HHD), and uremic cardiomyopathy (UCM) in patients with LVH based on transthoracic echocardiography (TTE). This was the first study to explore TTE myocardial time- and frequency-domain analyses for differentiation of multiple LVH etiologies. MATERIALS AND METHODS We proposed an artificial intelligence diagnosis system based on radiomics techniques for differentiating HCM, HHD, and UCM on TTE videos of the apical four-chamber view, comprising interventricular septum (IVS) segmentation, feature extraction, and classification. We used two independent cohorts: one with 150 patients (50 HHD, 50 HCM, and 50 UCM) for segmentation training and testing, and another with 149 patients (the main cohort; 50 HHD, 46 HCM, and 53 UCM) for classification training and testing after segmentation and feature extraction. First, the U-Net, Residual U-Net (ResUNet), and nnU-Net were trained and tested to segment the IVS on TTE still images in the first cohort. The trained model with the best segmentation performance was then used for IVS prediction on ordered TTE images in video sequences in the main cohort. Post-processing eliminated noisy debris by selecting the maximum connected region and smoothing the edges of the predicted IVS region. Second, static radiomics features were extracted from the IVS of ordered TTE images in each video sequence, and time- and frequency-domain features were further extracted from each time series of a static radiomics feature in the video sequence. Finally, the point-wise gated Boltzmann machine (PGBM) was used to learn and fuse the time- and frequency-domain features, and a support vector machine was used to classify the learned features for LVH diagnosis. Classification was performed with five-fold cross-validation. RESULTS The ResUNet showed the best segmentation performance, with Dice coefficient, sensitivity, specificity, and accuracy of 0.817, 76.3%, 99.6%, and 98.6%, respectively. With post-processing, these improved to 0.839, 77.0%, 99.8%, and 98.8%, respectively. The classification areas under the receiver operating characteristic curves (AUCs) were 0.838 ± 0.049 for HHD vs. HCM, 0.868 ± 0.042 for HCM vs. UCM, and 0.701 ± 0.140 for HHD vs. UCM. CONCLUSION In this work, we proposed an intelligent identification system for LVH etiology classification based on routine TTE video images with good diagnostic performance. This deep learning method is feasible for automatic TTE image interpretation and is expected to assist clinicians in detecting the primary cause of LVH.
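A sketch of what extracting time- and frequency-domain descriptors from one radiomics feature's frame-by-frame time series might look like. This is illustrative NumPy only; the function name and the particular statistics are assumptions, not the paper's implementation:

```python
import numpy as np

def time_frequency_features(series, fps: float) -> dict:
    """Summarize one per-frame radiomics feature with time- and frequency-domain stats."""
    series = np.asarray(series, dtype=float)
    detrended = series - series.mean()          # drop the DC component before the FFT
    spectrum = np.abs(np.fft.rfft(detrended))   # one-sided magnitude spectrum
    freqs = np.fft.rfftfreq(len(series), d=1.0 / fps)
    return {
        "mean": float(series.mean()),                           # time domain
        "std": float(series.std()),
        "dominant_freq_hz": float(freqs[np.argmax(spectrum)]),  # frequency domain
        "spectral_energy": float(np.sum(spectrum ** 2)),
    }

# A 2 Hz oscillation sampled at 30 frames/s for 3 s
t = np.arange(90) / 30.0
feats = time_frequency_features(np.sin(2 * np.pi * 2 * t), fps=30.0)
print(feats["dominant_freq_hz"])  # 2.0
```

In an echocardiography setting the sampling rate is the video frame rate, so the highest recoverable frequency is half the frames per second.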
Affiliation(s)
- Zhou Xu
- The SMART (Smart Medicine and AI-Based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China; School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Fei Yu
- Department of Ultrasound in Medicine, Shanghai East Hospital, Tongji University School of Medicine, Shanghai, China; Department of Ultrasound in Medicine, Ningbo First Hospital, Ningbo, China
- Bo Zhang
- Department of Ultrasound in Medicine, Shanghai East Hospital, Tongji University School of Medicine, Shanghai, China
- Qi Zhang
- The SMART (Smart Medicine and AI-Based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China; School of Communication and Information Engineering, Shanghai University, Shanghai, China
15
Li F, Sun L, Lam KY, Zhang S, Sun Z, Peng B, Xu H, Zhang L. Segmentation of human aorta using 3D nnU-net-oriented deep learning. Rev Sci Instrum 2022; 93:114103. [PMID: 36461517 DOI: 10.1063/5.0084433]
Abstract
Computed tomography angiography (CTA) has become the main imaging technique for cardiovascular diseases. Before performing a transcatheter aortic valve intervention, segmenting the aortic sinus and nearby cardiovascular tissue from contrast-enhanced images of the human heart is essential for auxiliary diagnosis and for guiding doctors in making treatment plans. This paper proposes a nnU-Net (no-new-Net) framework based on deep learning (DL) methods to segment the aorta and the heart tissue near the aortic valve in cardiac CTA images, and verifies its accuracy and effectiveness. A total of 130 sets of cardiac CTA image data (88 training sets, 22 validation sets, and 20 test sets) from different subjects were used for the study. The advantage of the nnU-Net model is that it automatically performs preprocessing and data augmentation according to the input image data, dynamically adjusts the network structure and parameter configuration, and has high generalization ability. Experimental results show that the DL method based on nnU-Net can accurately and effectively segment the cardiac aorta and the cardiac tissue near the aortic root on the cardiac CTA dataset, achieving an average Dice similarity coefficient of 0.9698 ± 0.0081. The inference segmentation quality essentially meets preoperative clinical needs. The nnU-Net-based DL method addresses the low accuracy of threshold segmentation, the poor segmentation of organs with fuzzy edges, and the poor adaptability to different patients' cardiac CTA images, making nnU-Net a strong DL technique for cardiac CTA image segmentation tasks.
Affiliation(s)
- Feng Li
- Zhejiang Gongshang University, Hangzhou 310018, China
- Lianzhong Sun
- Zhejiang Gongshang University, Hangzhou 310018, China
- Kwok-Yan Lam
- Nanyang Technological University, 639798, Singapore
- Songbo Zhang
- Zhejiang Gongshang University, Hangzhou 310018, China
- Zhongming Sun
- Zhejiang Gongshang University, Hangzhou 310018, China
- Bao Peng
- Shenzhen Institute of Information Technology, Shenzhen 518172, China
- Hongzeng Xu
- The People's Hospital of China Medical University, The People's Hospital of Liaoning Province, No. 33, Wenyi Road, Shenhe District, Shenyang City, Liaoning Province 110011, China
- Libo Zhang
- Key Laboratory of Cardiovascular Imaging and Research of Liaoning Province, Shenyang 110016, China
16
Li Z, Zhu Q, Zhang L, Yang X, Li Z, Fu J. A deep learning-based self-adapting ensemble method for segmentation in gynecological brachytherapy. Radiat Oncol 2022; 17:152. [PMID: 36064571 PMCID: PMC9446699 DOI: 10.1186/s13014-022-02121-3]
Abstract
Purpose Fast and accurate outlining of the organs at risk (OARs) and high-risk clinical tumor volume (HRCTV) is especially important in high-dose-rate brachytherapy due to the highly time-intensive online treatment planning process and the high dose gradient around the HRCTV. This study aims to apply a self-configured ensemble method for fast and reproducible auto-segmentation of OARs and HRCTVs in gynecological cancer. Materials and methods We applied nnU-Net (no new U-Net), an automatically adapted deep convolutional neural network based on U-Net, to segment the bladder, rectum and HRCTV on CT images in gynecological cancer. In nnU-Net, three architectures, including 2D U-Net, 3D U-Net and 3D-Cascade U-Net, were trained and finally ensembled. 207 cases were randomly chosen for training, and 30 for testing. Quantitative evaluation used well-established image segmentation metrics, including Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95%), and average surface distance (ASD). Qualitative analysis of automated segmentation results was performed visually by two radiation oncologists. The dosimetric evaluation was performed by comparing the dose-volume parameters of predicted segmentations and human contouring. Results nnU-Net obtained high qualitative and quantitative segmentation accuracy on the test dataset and performed better than previously reported methods in bladder and rectum segmentation. In quantitative evaluation, 3D-Cascade achieved the best performance in the bladder (DSC: 0.936 ± 0.051, HD95%: 3.503 ± 1.956, ASD: 0.944 ± 0.503), rectum (DSC: 0.831 ± 0.074, HD95%: 7.579 ± 5.857, ASD: 3.6 ± 3.485), and HRCTV (DSC: 0.836 ± 0.07, HD95%: 7.42 ± 5.023, ASD: 2.094 ± 1.311). According to the qualitative evaluation, over 76% of the test data set had no or only minor visually detectable errors in segmentation. Conclusion This work showed nnU-Net's superiority in segmenting OARs and HRCTV in gynecological brachytherapy cases in our center, among which 3D-Cascade shows the highest segmentation accuracy across different applicators and patient anatomy. Supplementary Information The online version contains supplementary material available at 10.1186/s13014-022-02121-3.
Affiliation(s)
- Zhen Li
- Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Xuhui District, Shanghai, China
- Qingyuan Zhu
- Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Xuhui District, Shanghai, China
- Lihua Zhang
- Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Xuhui District, Shanghai, China
- Xiaojing Yang
- Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Xuhui District, Shanghai, China
- Zhaobin Li
- Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Xuhui District, Shanghai, China
- Jie Fu
- Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Xuhui District, Shanghai, China
17
din NMU, Dar RA, Rasool M, Assad A. Breast cancer detection using deep learning: Datasets, methods, and challenges ahead. Comput Biol Med 2022; 149:106073. [DOI: 10.1016/j.compbiomed.2022.106073]
18
Samperna R, Moriakov N, Karssemeijer N, Teuwen J, Mann RM. Exploiting the Dixon Method for a Robust Breast and Fibro-Glandular Tissue Segmentation in Breast MRI. Diagnostics (Basel) 2022; 12:1690. [PMID: 35885594 PMCID: PMC9324146 DOI: 10.3390/diagnostics12071690]
Abstract
Automatic breast and fibro-glandular tissue (FGT) segmentation in breast MRI allows for the efficient and accurate calculation of breast density. The U-Net architecture, either 2D or 3D, has already been shown to be effective at addressing the segmentation problem in breast MRI. However, the lack of publicly available datasets for this task has forced several authors to rely on internal datasets composed of either acquisitions without fat suppression (WOFS) or with fat suppression (FS), limiting the generalization of the approach. To solve this problem, we propose a data-centric approach, efficiently using the data available. By collecting a dataset of T1-weighted breast MRI acquisitions acquired with the use of the Dixon method, we train a network on both T1 WOFS and FS acquisitions while utilizing the same ground truth segmentation. Using the “plug-and-play” framework nnUNet, we achieve, on our internal test set, a Dice Similarity Coefficient (DSC) of 0.96 and 0.91 for WOFS breast and FGT segmentation and 0.95 and 0.86 for FS breast and FGT segmentation, respectively. On an external, publicly available dataset, a panel of breast radiologists rated the quality of our automatic segmentation with an average of 3.73 on a four-point scale, with an average percentage agreement of 67.5%.
Affiliation(s)
- Riccardo Samperna
- Department of Medical Imaging, Radboudumc, 6525 GA Nijmegen, The Netherlands; (N.M.); (N.K.); (J.T.); (R.M.M.)
- Department of Radiology, The Netherlands Cancer Institute (NKI), 1066 CX Amsterdam, The Netherlands
- Correspondence:
| | - Nikita Moriakov
- Department of Medical Imaging, Radboudumc, 6525 GA Nijmegen, The Netherlands; (N.M.); (N.K.); (J.T.); (R.M.M.)
- Department of Radiation Oncology, The Netherlands Cancer Institute (NKI), 1066 CX Amsterdam, The Netherlands
| | - Nico Karssemeijer
- Department of Medical Imaging, Radboudumc, 6525 GA Nijmegen, The Netherlands; (N.M.); (N.K.); (J.T.); (R.M.M.)
- ScreenPoint Medical BV, 6525 EC Nijmegen, The Netherlands
| | - Jonas Teuwen
- Department of Medical Imaging, Radboudumc, 6525 GA Nijmegen, The Netherlands; (N.M.); (N.K.); (J.T.); (R.M.M.)
- Department of Radiation Oncology, The Netherlands Cancer Institute (NKI), 1066 CX Amsterdam, The Netherlands
| | - Ritse M. Mann
- Department of Medical Imaging, Radboudumc, 6525 GA Nijmegen, The Netherlands; (N.M.); (N.K.); (J.T.); (R.M.M.)
- Department of Radiology, The Netherlands Cancer Institute (NKI), 1066 CX Amsterdam, The Netherlands
| |
19
Ma Q, Yi Y, Liu T, Wen X, Shan F, Feng F, Yan Q, Shen J, Yang G, Shi Y. MRI-based radiomics signature for identification of invisible basal cisterns changes in tuberculous meningitis: a preliminary multicenter study. Eur Radiol 2022; 32:8659-8669. [PMID: 35748898 PMCID: PMC9226270 DOI: 10.1007/s00330-022-08911-3]
Abstract
Objective To develop and evaluate a radiomics signature based on magnetic resonance imaging (MRI) from multicenter datasets for identification of invisible basal cisterns changes in tuberculous meningitis (TBM) patients. Methods Our retrospective study enrolled 184 TBM patients and 187 non-TBM controls from 3 Chinese hospitals (training dataset, 158 TBM patients and 159 non-TBM controls; testing dataset, 26 TBM patients and 28 non-TBM controls). nnU-Net was used to segment basal cisterns in fluid-attenuated inversion recovery (FLAIR) images. Subsequently, radiomics features were extracted from the segmented basal cisterns in FLAIR and T2-weighted (T2W) images. Feature selection was carried out in three steps. Support vector machine (SVM) and logistic regression (LR) classifiers were applied to construct the radiomics signature to directly identify basal cisterns changes in TBM patients. Finally, the diagnostic performance was evaluated by receiver operating characteristic (ROC) curve analysis, calibration curve, and decision curve analysis (DCA). Results The segmentation model achieved mean Dice coefficients of 0.920 and 0.727 in the training and testing datasets, respectively. The SVM model with 7 T2WI-based radiomics features achieved the best discrimination capability for basal cisterns changes, with an AUC of 0.796 (95% CI, 0.744–0.847) in the training dataset and an AUC of 0.751 (95% CI, 0.617–0.886) with good calibration in the testing dataset. DCA confirmed its clinical usefulness. Conclusion The T2WI-based radiomics signature combined with deep learning segmentation could provide a fully automatic, non-invasive tool to identify invisible changes of the basal cisterns, which has the potential to assist in the diagnosis of TBM. Key Points • The T2WI-based radiomics signature was useful for identifying invisible basal cistern changes in TBM. • The nnU-Net model achieved acceptable results for the auto-segmentation of basal cisterns. • Combining radiomics and deep learning segmentation provided an automatic, non-invasive approach to assist in the diagnosis of TBM.
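The feature-selection-plus-classifier pattern used in several radiomics entries in this list (sparse selection followed by an SVM) can be sketched with scikit-learn on synthetic data. The pipeline below is a generic illustration under assumed defaults (LASSO as the selector, an RBF-kernel SVM, synthetic features), not any cited study's configuration:

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic "radiomics" matrix: 200 cases x 30 features, only the first two informative
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

pipe = make_pipeline(
    StandardScaler(),                # radiomics features live on very different scales
    SelectFromModel(LassoCV(cv=3)),  # LASSO keeps features with nonzero coefficients
    SVC(kernel="rbf"),               # SVM classifies on the reduced feature set
)
auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
print(round(auc, 3))  # high AUC: the selector recovers the two informative features
```

Fitting the selector inside the cross-validation pipeline, as here, avoids the feature-selection leakage that can inflate reported AUCs.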
Affiliation(s)
- Qiong Ma
- Department of Radiology, Shanghai Public Health Clinical Center, Fudan University, Shanghai, China
- Shanghai Institute of Medical Imaging, Fudan University, Shanghai, China
- Yinqiao Yi
- Shanghai Key Laboratory of Magnetic Resonance, East China Normal University, Shanghai, China
- Tiejun Liu
- Department of Radiology, Liuzhou People's Hospital, Liuzhou, Guangxi Zhuang Autonomous Region, China
- Xinnian Wen
- Department of Radiology, Guangxi Zhuang Autonomous Region Chest Hospital, Liuzhou, Guangxi Zhuang Autonomous Region, China
- Fei Shan
- Department of Radiology, Shanghai Public Health Clinical Center, Fudan University, Shanghai, China
- Feng Feng
- Department of Radiology, Nantong Tumor Hospital, Nantong, Jiangsu, China
- Qinqin Yan
- Department of Radiology, Shanghai Public Health Clinical Center, Fudan University, Shanghai, China
- Jie Shen
- Department of Radiology, Shanghai Public Health Clinical Center, Fudan University, Shanghai, China
- Guang Yang
- Shanghai Key Laboratory of Magnetic Resonance, East China Normal University, Shanghai, China
- Yuxin Shi
- Department of Radiology, Shanghai Public Health Clinical Center, Fudan University, Shanghai, China
20
Wu X, Guo Y, Sa Y, Song Y, Li X, Lv Y, Xing D, Sun Y, Cong Y, Yu H, Jiang W. Contrast-Enhanced Spectral Mammography-Based Prediction of Non-Sentinel Lymph Node Metastasis and Axillary Tumor Burden in Patients With Breast Cancer. Front Oncol 2022; 12:823897. [PMID: 35615151 PMCID: PMC9125761 DOI: 10.3389/fonc.2022.823897]
Abstract
Purpose: To establish and evaluate non-invasive models for estimating the risk of non-sentinel lymph node (NSLN) metastasis and axillary tumor burden among breast cancer patients with 1–2 positive sentinel lymph nodes (SLNs). Materials and Methods: Breast cancer patients with 1–2 positive SLNs who underwent axillary lymph node dissection (ALND) and contrast-enhanced spectral mammography (CESM) examination were enrolled between 2018 and 2021. CESM-based radiomics and deep learning features of tumors were extracted. Correlation analysis, least absolute shrinkage and selection operator (LASSO), and analysis of variance (ANOVA) were used for further feature selection. Models based on the selected features and clinical risk factors were constructed with multivariate logistic regression. Finally, two radiomics nomograms were proposed for predicting NSLN metastasis and the probability of high axillary tumor burden. Results: A total of 182 patients [53.13 years ± 10.03 (standard deviation)] were included. For predicting NSLN metastasis status, the radiomics nomogram built from 5 selected radiomics features and 3 clinical risk factors (the number of positive SLNs, the ratio of positive SLNs, and lymphovascular invasion (LVI)) achieved an area under the receiver operating characteristic curve (AUC) of 0.85 [95% confidence interval (CI): 0.71–0.99] in the testing set and 0.82 (95% CI: 0.67–0.97) in the temporal validation cohort. For predicting high axillary tumor burden, the AUC values of the developed radiomics nomogram were 0.82 (95% CI: 0.66–0.97) in the testing set and 0.77 (95% CI: 0.62–0.93) in the temporal validation cohort. Discussion: CESM images contain useful information for predicting NSLN metastasis and axillary tumor burden of breast cancer patients. Radiomics can unlock the potential of CESM images to identify lymph node metastasis and improve predictive performance.
Affiliation(s)
- Xiaoqian Wu
- Department of Biomedical Engineering, School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin, China
- Yu Guo
- Department of Biomedical Engineering, School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin, China
- Yu Sa
- Department of Biomedical Engineering, School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin, China
- Yipeng Song
- Department of Radiotherapy, Yantai Yuhuangding Hospital, Yantai, China
- Xinghua Li
- Department of Radiotherapy, Yantai Yuhuangding Hospital, Yantai, China
- Yongbin Lv
- Department of Radiology, Yantai Yuhuangding Hospital, Yantai, China
- Dong Xing
- Department of Radiology, Yantai Yuhuangding Hospital, Yantai, China
- Yan Sun
- Department of Otorhinolaryngology–Head and Neck Surgery, Yuhuangding Hospital of Qingdao University, Yantai, China
- Shandong Provincial Clinical Research Center for Otorhinolaryngologic Diseases, Yantai, China
- Yizi Cong
- Department of Breast Surgery, Yantai Yuhuangding Hospital, Yantai, China
- Hui Yu
- Department of Biomedical Engineering, School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin, China
- Wei Jiang
- Department of Biomedical Engineering, School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin, China
- Department of Radiotherapy, Yantai Yuhuangding Hospital, Yantai, China
21
Balkenende L, Teuwen J, Mann RM. Application of Deep Learning in Breast Cancer Imaging. Semin Nucl Med 2022; 52:584-596. [PMID: 35339259 DOI: 10.1053/j.semnuclmed.2022.02.003] [Citation(s) in RCA: 32] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2022] [Revised: 02/15/2022] [Accepted: 02/16/2022] [Indexed: 11/11/2022]
Abstract
This review gives an overview of the current state of deep learning research in breast cancer imaging. Breast imaging plays a major role in detecting breast cancer at an earlier stage, as well as in monitoring and evaluating breast cancer during treatment. The most commonly used modalities for breast imaging are digital mammography, digital breast tomosynthesis, ultrasound, and magnetic resonance imaging. Nuclear medicine imaging techniques are used for the detection and classification of axillary lymph nodes and for distant staging in breast cancer imaging. All of these techniques are now digitized, enabling the implementation of deep learning (DL), a subset of artificial intelligence, in breast imaging. DL is now embedded in a plethora of tasks, such as lesion classification and segmentation, image reconstruction and generation, cancer risk prediction, and prediction and assessment of therapy response. Studies show that DL algorithms perform on par with, and sometimes better than, radiologists, although large trials are clearly needed, especially for ultrasound and magnetic resonance imaging, to precisely determine the added value of DL in breast cancer imaging. Studies on DL in nuclear medicine techniques are only sparsely available, and further research is needed. Legal and ethical issues must also be addressed before the role of DL can expand to its full potential in clinical breast care practice.
Affiliation(s)
- Luuk Balkenende
- Department of Radiology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands; Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
- Jonas Teuwen
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands; Department of Radiation Oncology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands
- Ritse M Mann
- Department of Radiology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands; Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
22
Cè M, Caloro E, Pellegrino ME, Basile M, Sorce A, Fazzini D, Oliva G, Cellina M. Artificial intelligence in breast cancer imaging: risk stratification, lesion detection and classification, treatment planning and prognosis-a narrative review. Exploration of Targeted Anti-tumor Therapy 2022; 3:795-816. [PMID: 36654817 PMCID: PMC9834285 DOI: 10.37349/etat.2022.00113] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Accepted: 09/28/2022] [Indexed: 12/28/2022] Open
Abstract
The advent of artificial intelligence (AI) represents a real game changer in today's landscape of breast cancer imaging. Several innovative AI-based tools have been developed and validated in recent years that promise to accelerate the goal of real patient-tailored management. Numerous studies confirm that proper integration of AI into existing clinical workflows could bring significant benefits to women, radiologists, and healthcare systems. The AI-based approach has proved particularly useful for developing new risk prediction models that integrate multi-data streams for planning individualized screening protocols. Furthermore, AI models could help radiologists in the pre-screening and lesion detection phase, increasing diagnostic accuracy, while reducing workload and complications related to overdiagnosis. Radiomics and radiogenomics approaches could extrapolate the so-called imaging signature of the tumor to plan a targeted treatment. The main challenges to the development of AI tools are the huge amounts of high-quality data required to train and validate these models and the need for a multidisciplinary team with solid machine-learning skills. The purpose of this article is to present a summary of the most important AI applications in breast cancer imaging, analyzing possible challenges and new perspectives related to the widespread adoption of these new tools.
Affiliation(s)
- Maurizio Cè
- Postgraduate School in Diagnostic and Interventional Radiology, University of Milan, 20122 Milan, Italy
- Correspondence: Maurizio Cè, Postgraduate School in Diagnostic and Interventional Radiology, University of Milan, Via Festa del Perdono, 7, 20122 Milan, Italy
- Elena Caloro
- Postgraduate School in Diagnostic and Interventional Radiology, University of Milan, 20122 Milan, Italy
- Maria E. Pellegrino
- Postgraduate School in Diagnostic and Interventional Radiology, University of Milan, 20122 Milan, Italy
- Mariachiara Basile
- Postgraduate School in Diagnostic and Interventional Radiology, University of Milan, 20122 Milan, Italy
- Adriana Sorce
- Postgraduate School in Diagnostic and Interventional Radiology, University of Milan, 20122 Milan, Italy
- Giancarlo Oliva
- Department of Radiology, ASST Fatebenefratelli Sacco, 20121 Milan, Italy
- Michaela Cellina
- Department of Radiology, ASST Fatebenefratelli Sacco, 20121 Milan, Italy