1. Jeong S, Cheon W, Kim S, Park W, Han Y. Deep-learning-based segmentation using individual patient data on prostate cancer radiation therapy. PLoS One 2024; 19:e0308181. PMID: 39083552; PMCID: PMC11290636; DOI: 10.1371/journal.pone.0308181.
Abstract
PURPOSE Organ-at-risk segmentation is essential in adaptive radiotherapy (ART). Learning-based automatic segmentation can reduce the labor involved and accelerate the ART process. In this study, an auto-segmentation model was developed using individual patient datasets and a deep-learning-based augmentation method for tailoring radiation therapy to changes in the target and organs of interest in patients with prostate cancer. METHODS Two computed tomography (CT) datasets with well-defined labels, including contoured prostate, bladder, and rectum, were obtained from 18 patients. The labels of CT images captured during radiation therapy (CT2nd) were predicted using CT images scanned before radiation therapy (CT1st). From the deformation vector fields (DVFs) created with the VoxelMorph method, 10 DVFs were extracted when each of the modified CT and CT2nd images was deformably registered to the fixed CT1st image. Augmented images were acquired by applying the 110 extracted DVFs to spatially transform the CT1st images and labels. An nnU-Net auto-segmentation network was trained on the augmented images, and the CT2nd label was predicted. A patient-specific model was created for each of the 18 patients, and the performance of the individual models was evaluated using the Dice similarity coefficient (DSC), average Hausdorff distance, and mean surface distance. The accuracy of the proposed model was compared with that of models trained on large datasets. RESULTS Patient-specific models were developed successfully. For the proposed method, the DSC values between the actual and predicted labels for the bladder, prostate, and rectum were 0.94 ± 0.03, 0.84 ± 0.07, and 0.83 ± 0.04, respectively. CONCLUSION We demonstrated the feasibility of automatic segmentation using individual patient datasets and image augmentation techniques. The proposed method has potential for clinical application in automatic prostate segmentation for ART.
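As a concrete illustration of the DVF-based augmentation step, the sketch below warps a 2D image with a dense deformation vector field. The study applies 3D DVFs produced by VoxelMorph to CT volumes; the 2D shapes and the toy uniform displacement field here are illustrative only.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_with_dvf(image, dvf):
    """Spatially transform `image` with a dense deformation vector field.

    `dvf` has shape (2, H, W): per-pixel (row, col) displacements, as a
    registration tool such as VoxelMorph would produce (2D for brevity).
    """
    h, w = image.shape
    grid = np.mgrid[0:h, 0:w].astype(float)   # identity sampling grid
    coords = grid + dvf                       # displaced sampling coordinates
    # Linear interpolation; clamp out-of-bounds samples to the edge.
    return map_coordinates(image, coords, order=1, mode="nearest")

# Toy example: a uniform 1-pixel shift down and to the right.
img = np.arange(16, dtype=float).reshape(4, 4)
dvf = np.ones((2, 4, 4))
warped = warp_with_dvf(img, dvf)
```

The same resampling, applied with the label volume instead of the image, yields the augmented contour.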
Affiliation(s)
- Sangwoon Jeong: Department of Health Sciences and Technology, SAIHST, Sungkyunkwan University, Seoul, Korea
- Wonjoong Cheon: Department of Radiation Oncology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Sungjin Kim: Department of Radiation Oncology, Samsung Medical Center, Seoul, Korea
- Won Park: Department of Radiation Oncology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Youngyih Han: Department of Health Sciences and Technology, SAIHST, Sungkyunkwan University, Seoul, Korea; Department of Radiation Oncology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
2. Lin H, Zhao M, Zhu L, Pei X, Wu H, Zhang L, Li Y. Gaussian filter facilitated deep learning-based architecture for accurate and efficient liver tumor segmentation for radiation therapy. Front Oncol 2024; 14:1423774. PMID: 38966060; PMCID: PMC11222586; DOI: 10.3389/fonc.2024.1423774.
Abstract
Purpose Addressing the challenges of unclear tumor boundaries and confusion between cysts and tumors in liver tumor segmentation, this study aims to develop an auto-segmentation method that combines a Gaussian filter with the nnU-Net architecture to distinguish tumors from cysts and improve the accuracy of liver tumor auto-segmentation. Methods First, 130 cases from the Liver Tumor Segmentation Challenge 2017 (LiTS2017) were used to train and validate the nnU-Net-based auto-segmentation model. Then, 14 cases from the 3D-IRCADb dataset and 25 liver cancer cases retrospectively collected at our hospital were used for testing. The Dice similarity coefficient (DSC) was used to evaluate the accuracy of the auto-segmentation model against manual contours. Results The nnU-Net achieved an average DSC of 0.86 on the validation set (20 LiTS cases) and 0.82 on the public testing set (14 3D-IRCADb cases). On the clinical testing set, the standalone nnU-Net model achieved an average DSC of 0.75, which increased to 0.81 after post-processing with the Gaussian filter (P < 0.05), demonstrating its effectiveness in mitigating the influence of liver cysts on liver tumor segmentation. Conclusion These experiments show that a Gaussian filter can improve the accuracy of liver tumor segmentation in the clinic.
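The Gaussian-filter post-processing idea can be sketched in a few lines: smoothing a predicted tumor map and re-thresholding suppresses small, isolated false positives (such as cyst voxels misclassified as tumor) while leaving solid lesions intact. The sigma and threshold values below are illustrative, not those of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_mask(prob_map, sigma=1.0, threshold=0.5):
    """Gaussian-smooth a predicted tumor map, then re-threshold it.

    Isolated high-frequency responses are attenuated below the threshold,
    while large connected regions keep values close to 1.
    """
    return gaussian_filter(prob_map.astype(float), sigma=sigma) > threshold

pred = np.zeros((16, 16))
pred[2, 2] = 1.0          # isolated spurious voxel (e.g. a cyst response)
pred[8:13, 8:13] = 1.0    # solid 5x5 lesion
clean = smooth_mask(pred, sigma=1.0, threshold=0.5)
```

After filtering, the lone voxel falls below the threshold while the lesion interior survives.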
Affiliation(s)
- Hongyu Lin: Department of Oncology, First Hospital of Hebei Medical University, Shijiazhuang, China
- Min Zhao: Department of Oncology, First Hospital of Hebei Medical University, Shijiazhuang, China
- Lingling Zhu: Department of Oncology, First Hospital of Hebei Medical University, Shijiazhuang, China
- Xi Pei: Technology Development Department, Anhui Wisdom Technology Co., Ltd., Hefei, China
- Haotian Wu: Technology Development Department, Anhui Wisdom Technology Co., Ltd., Hefei, China
- Lian Zhang: Department of Oncology, First Hospital of Hebei Medical University, Shijiazhuang, China
- Ying Li: Department of Oncology, First Hospital of Hebei Medical University, Shijiazhuang, China
3. Liu X, Qu L, Xie Z, Zhao J, Shi Y, Song Z. Towards more precise automatic analysis: a systematic review of deep learning-based multi-organ segmentation. Biomed Eng Online 2024; 23:52. PMID: 38851691; PMCID: PMC11162022; DOI: 10.1186/s12938-024-01238-8.
Abstract
Accurate segmentation of multiple organs in the head, neck, chest, and abdomen from medical images is an essential step in computer-aided diagnosis, surgical navigation, and radiation therapy. In the past few years, with a data-driven feature extraction approach and end-to-end training, automatic deep learning-based multi-organ segmentation methods have far outperformed traditional methods and become a new research topic. This review systematically summarizes the latest research in this field. We searched Google Scholar for papers published from January 1, 2016 to December 31, 2023, using keywords "multi-organ segmentation" and "deep learning", resulting in 327 papers. We followed the PRISMA guidelines for paper selection, and 195 studies were deemed to be within the scope of this review. We summarized the two main aspects involved in multi-organ segmentation: datasets and methods. Regarding datasets, we provided an overview of existing public datasets and conducted an in-depth analysis. Concerning methods, we categorized existing approaches into three major classes: fully supervised, weakly supervised and semi-supervised, based on whether they require complete label information. We summarized the achievements of these methods in terms of segmentation accuracy. In the discussion and conclusion section, we outlined and summarized the current trends in multi-organ segmentation.
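Nearly every study surveyed in this list reports segmentation accuracy as the Dice similarity coefficient; for reference, a minimal implementation for binary masks:

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|); eps guards against empty masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

# Two 4x4 squares offset by one pixel: 16 voxels each, 9 overlapping.
gt = np.zeros((8, 8), dtype=bool); gt[2:6, 2:6] = True
pr = np.zeros((8, 8), dtype=bool); pr[3:7, 3:7] = True
score = dice(gt, pr)   # 2 * 9 / 32 = 0.5625
```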
Affiliation(s)
- Xiaoyu Liu
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
| | - Linhao Qu
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
| | - Ziyue Xie
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
| | - Jiayue Zhao
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
| | - Yonghong Shi
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China.
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China.
| | - Zhijian Song
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China.
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China.
| |
Collapse
|
4. Zoghby MM, Erickson BJ, Conte GM. Generative Adversarial Networks for Brain MRI Synthesis: Impact of Training Set Size on Clinical Application. J Imaging Inform Med 2024; 37:1228-1238. PMID: 38366293; PMCID: PMC11169340; DOI: 10.1007/s10278-024-00976-4.
Abstract
We evaluated the impact of training set size on generative adversarial networks (GANs) for synthesizing brain MRI sequences. We compared three sets of GANs trained to generate pre-contrast T1 (gT1) from post-contrast T1 and FLAIR (gFLAIR) from T2. The baseline models were trained on 135 cases; for this study, we used the same model architecture but a larger cohort of 1251 cases and two stopping rules: an early checkpoint (early models) and one after 50 epochs (late models). We tested all models on an independent dataset of 485 newly diagnosed gliomas. We compared the generated MRIs with the original ones using the structural similarity index (SSI) and mean squared error (MSE). We simulated scenarios where the original T1, FLAIR, or both were missing and used their synthesized versions, together with the original post-contrast T1 and T2, as inputs for a segmentation model. We compared the segmentations using the Dice similarity coefficient (DSC) for the contrast-enhancing area, the non-enhancing area, and the whole lesion. For the baseline, early, and late models on the test set, the median SSI for gT1 was .957, .918, and .947, and the median MSE was .006, .014, and .008; for gFLAIR, the median SSI was .924, .908, and .915, and the median MSE was .016, .016, and .019. The DSC ranges were .625-.955, .420-.952, and .610-.954, respectively. Overall, GANs trained on a relatively small cohort performed similarly to those trained on a cohort ten times larger, making them a viable option for rare diseases or institutions with limited resources.
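As a sketch of the comparison metrics, the snippet below computes MSE and a single-window SSIM from global image statistics. This is a simplified stand-in for the windowed structural similarity index the paper reports, with the usual stabilizing constants; the random test images are illustrative.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return float(np.mean((a - b) ** 2))

def global_ssim(a, b, data_range=1.0):
    """SSIM computed over a single global window (no sliding window).

    Uses the standard stabilizers c1 = (0.01 L)^2, c2 = (0.03 L)^2 where
    L is the dynamic range of the images.
    """
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    da, db = a - mu_a, b - mu_b
    var_a, var_b = (da * da).mean(), (db * db).mean()
    cov = (da * db).mean()
    return float((2 * mu_a * mu_b + c1) * (2 * cov + c2)
                 / ((mu_a * mu_a + mu_b * mu_b + c1) * (var_a + var_b + c2)))

rng = np.random.default_rng(0)
real = rng.random((64, 64))
noisy = np.clip(real + 0.1 * rng.standard_normal((64, 64)), 0, 1)
```

An identical pair scores SSIM 1 and MSE 0; any degradation pulls SSIM below 1.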
Affiliation(s)
- M M Zoghby: Department of Radiology, Mayo Clinic, Rochester, MN, USA
- B J Erickson: Department of Radiology, Mayo Clinic, Rochester, MN, USA
- G M Conte: Department of Radiology, Mayo Clinic, Rochester, MN, USA
5. Liu M, Shao X, Jiang L, Wu K. 3D EAGAN: 3D edge-aware attention generative adversarial network for prostate segmentation in transrectal ultrasound images. Quant Imaging Med Surg 2024; 14:4067-4085. PMID: 38846298; PMCID: PMC11151225; DOI: 10.21037/qims-23-1698.
Abstract
Background Segmentation of the prostate from transrectal ultrasound (TRUS) images is a critical step in the diagnosis and treatment of prostate cancer. However, manual segmentation by physicians is time-consuming and laborious, so there is a pressing need for computerized algorithms that can segment the prostate from TRUS images autonomously. Automatic prostate segmentation in TRUS images has remained challenging because prostates in TRUS images have ambiguous boundaries and inhomogeneous intensity distributions. Although many prostate segmentation methods have been proposed, they are limited by insufficient sensitivity to edge information. The objective of this study is therefore to devise a highly effective prostate segmentation method that overcomes these limitations and achieves accurate segmentation of the prostate in TRUS images. Methods A three-dimensional (3D) edge-aware attention generative adversarial network (3D EAGAN)-based prostate segmentation method is proposed, consisting of an edge-aware segmentation network (EASNet) that performs the segmentation and a discriminator network that distinguishes predicted prostates from real ones. The EASNet is composed of an encoder-decoder U-Net backbone, a detail compensation module (DCM), four 3D spatial and channel attention modules (3D SCAM), an edge enhancement module (EEM), and a global feature extractor (GFE). The DCM compensates for the detailed information lost during the encoder's down-sampling, and its features are selectively enhanced by the 3D spatial and channel attention modules. The EEM guides shallow layers of the EASNet to focus on contour and edge information of the prostate. Finally, features from shallow layers and hierarchical features from the decoder are fused through the GFE to predict the segmentation. Results The proposed method was evaluated on our TRUS image dataset and the open-source µRegPro dataset. Experimental results on the two datasets show that the proposed method improved the average segmentation Dice score from 85.33% to 90.06%, Jaccard score from 76.09% to 84.11%, Hausdorff distance (HD) from 8.59 to 4.58 mm, precision from 86.48% to 90.58%, and recall from 84.79% to 89.24%. Conclusions A novel 3D EAGAN-based prostate segmentation method consisting of an EASNet and a discriminator network is proposed. Experimental results demonstrate that it achieves satisfactory performance on 3D TRUS prostate segmentation.
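The Hausdorff distance reported above can be computed for small masks with plain NumPy broadcasting; real evaluations use 3D volumes and physical voxel spacing, and the toy 2D masks here are illustrative.

```python
import numpy as np

def hausdorff(mask_a, mask_b, spacing=1.0):
    """Symmetric Hausdorff distance between two binary masks.

    Brute-force over all foreground points (fine for small masks):
    max over each set of the distance to the nearest point of the other.
    `spacing` converts pixel indices to physical units (e.g. mm).
    """
    pa = np.argwhere(mask_a) * spacing
    pb = np.argwhere(mask_b) * spacing
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# b extends a by one extra column, so the farthest mismatch is 1 pixel.
a = np.zeros((10, 10), dtype=bool); a[2:5, 2:5] = True
b = np.zeros((10, 10), dtype=bool); b[2:5, 2:6] = True
hd = hausdorff(a, b)
```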
Affiliation(s)
- Mengqing Liu: School of Computer and Information Engineering, Nantong Institute of Technology, Nantong, China; School of Information Engineering, Nanchang Hangkong University, Nanchang, China
- Xiao Shao: School of Computer Science, Nanjing University of Information Science and Technology, Nanjing, China
- Liping Jiang: Department of Ultrasound Medicine, The First Affiliated Hospital of Nanchang University, Nanchang, China
- Kaizhi Wu: School of Information Engineering, Nanchang Hangkong University, Nanchang, China
6. Li Z, Gan G, Guo J, Zhan W, Chen L. Accurate object localization facilitates automatic esophagus segmentation in deep learning. Radiat Oncol 2024; 19:55. PMID: 38735947; PMCID: PMC11088757; DOI: 10.1186/s13014-024-02448-z.
Abstract
BACKGROUND Automatic esophagus segmentation remains a challenging task because of the organ's small size, low contrast, and large shape variation. We aimed to improve the performance of esophagus segmentation in deep learning by applying a strategy that first locates the object and then performs the segmentation task. METHODS A total of 100 cases with thoracic computed tomography scans from two publicly available datasets were used in this study. A modified CenterNet, an object localization network, was employed to locate the center of the esophagus in each slice. Subsequently, 3D U-Net and 2D U-Net_coarse models were trained to segment the esophagus based on the predicted object center. A 2D U-Net_fine model was trained based on the object center updated according to the 3D U-Net model. The Dice similarity coefficient and the 95% Hausdorff distance were used as quantitative evaluation indices of delineation performance. The characteristics of the esophageal contours delineated automatically by the 2D U-Net and 3D U-Net models were summarized, the impact of object localization accuracy on delineation performance was analyzed, and delineation performance in different segments of the esophagus was also summarized. RESULTS The mean Dice coefficients of the 3D U-Net, 2D U-Net_coarse, and 2D U-Net_fine models were 0.77, 0.81, and 0.82, and the 95% Hausdorff distances were 6.55, 3.57, and 3.76 mm, respectively. Compared with the 2D U-Net, the 3D U-Net had a lower incidence of delineating wrong objects and a higher incidence of missing objects. After using the fine object center, the average Dice coefficient improved by 5.5% in cases with a Dice coefficient below 0.75, versus only 0.3% in cases above 0.75. The Dice coefficients were lower for the esophagus between the orifice of the inferior and the pulmonary bifurcation than for the other regions. CONCLUSION The 3D U-Net model tended to delineate fewer incorrect objects but miss more objects. A two-stage strategy with accurate object localization can enhance the robustness of the segmentation model and significantly improve esophageal delineation performance, especially for cases with poor delineation results.
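The locate-then-segment strategy hinges on cropping a fixed-size region of interest around the predicted object center before segmentation; a minimal sketch (the ROI size and edge-padding mode are illustrative choices, not the paper's):

```python
import numpy as np

def crop_roi(image, center, size=64):
    """Crop a fixed-size ROI around a predicted object center.

    Edge-padding first means the crop is always `size` x `size`, even when
    the center lies near an image border.
    """
    half = size // 2
    padded = np.pad(image, half, mode="edge")
    cy, cx = center[0] + half, center[1] + half   # shift into padded frame
    return padded[cy - half: cy + half, cx - half: cx + half]

img = np.arange(100 * 100).reshape(100, 100)
roi = crop_roi(img, center=(10, 10), size=32)     # near the top-left corner
```

The segmentation network then runs on the ROI, and the predicted mask is pasted back at the crop offset.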
Affiliation(s)
- Zhibin Li: Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Guanghui Gan: Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Jian Guo: Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Wei Zhan: Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Long Chen: Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou, China
7. Yasaka K, Abe O. Impact of rapid iodine contrast agent infusion on tracheal diameter and lung volume in CT pulmonary angiography measured with deep learning-based algorithm. Jpn J Radiol 2024 (epub ahead of print). PMID: 38733470; DOI: 10.1007/s11604-024-01591-7.
Abstract
PURPOSE To compare computed tomography (CT) pulmonary angiography and unenhanced CT to determine the effect of rapid iodine contrast agent infusion on tracheal diameter and lung volume. MATERIALS AND METHODS This retrospective study included 101 patients who underwent both CT pulmonary angiography and unenhanced CT within 365 days of each other. CT pulmonary angiography was scanned at the end-inspiratory level 20 s after the start of the contrast agent injection. Commercial software developed with a deep learning technique was used to segment the lungs, and lung volume was evaluated automatically. The tracheal diameter at the thoracic inlet level was also measured. The ratios of CT pulmonary angiography to unenhanced CT values were then calculated for tracheal diameter (TDPAU) and both-lung volume (BLVPAU). RESULTS Tracheal diameter and both-lung volume were significantly smaller on CT pulmonary angiography (17.2 ± 2.6 mm and 3668 ± 1068 ml, respectively) than on unenhanced CT (17.7 ± 2.5 mm and 3887 ± 1086 ml, respectively) (p < 0.001 for both). A statistically significant correlation was found between TDPAU and BLVPAU, with a correlation coefficient of 0.451 (95% confidence interval, 0.280-0.594) (p < 0.001). No factor showed a significant association with TDPAU, whereas the type of contrast agent had a significant association with BLVPAU (p = 0.042). CONCLUSIONS Rapid infusion of iodine contrast agent reduced tracheal diameter and both-lung volume on CT pulmonary angiography, scanned at the end-inspiratory level, compared with unenhanced CT.
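The study's ratio metrics and their correlation can be reproduced in outline with NumPy; the paired values below are hypothetical illustrations, not the study's data.

```python
import numpy as np

# Hypothetical paired measurements for five patients: tracheal diameter (mm)
# and both-lung volume (ml) on CT pulmonary angiography (CTPA) vs
# unenhanced CT.
d_ctpa  = np.array([16.0, 17.5, 15.8, 18.2, 16.9])
d_unenh = np.array([16.8, 18.0, 16.5, 18.6, 17.8])
v_ctpa  = np.array([3300., 3900., 3100., 4200., 3600.])
v_unenh = np.array([3500., 4100., 3400., 4400., 3900.])

tdpau = d_ctpa / d_unenh     # tracheal diameter ratio, CTPA / unenhanced
blvpau = v_ctpa / v_unenh    # both-lung volume ratio, CTPA / unenhanced
r = np.corrcoef(tdpau, blvpau)[0, 1]   # Pearson correlation of the ratios
```

Ratios below 1 correspond to the paper's finding that both quantities shrink on CTPA.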
Affiliation(s)
- Koichiro Yasaka: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Osamu Abe: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
8. Du W, Guo H, Chen B, Cui M, Zhang T, Sun D, Ma H. Cascaded-TOARNet: A cascaded framework based on mixed attention and multiscale information for thoracic OARs segmentation. Med Phys 2024; 51:3405-3420. PMID: 38063140; DOI: 10.1002/mp.16881.
Abstract
BACKGROUND Accurate and automated segmentation of thoracic organs-at-risk (OARs) is critical for radiotherapy treatment planning of thoracic cancers. However, this has remained a challenging task for four major reasons: (1) thoracic OARs have diverse morphologies; (2) they have low contrast with the background; (3) their boundaries are blurry; and (4) small organs cause a class-imbalance issue. PURPOSE To overcome the above challenges and achieve accurate, automated segmentation of thoracic OARs on thoracic CT. METHODS We propose Cascaded-TOARNet, a cascaded framework based on mixed attention and multiscale information for thoracic OARs segmentation. The framework comprises two stages: localization and segmentation. During the localization stage, TOARNet locates each organ to crop regions of interest (ROIs); during the segmentation stage, it accurately segments the ROIs, and the results are merged into a complete segmentation. RESULTS We evaluated the proposed method against other common segmentation methods on two public datasets: the AAPM Thoracic Auto-Segmentation Challenge dataset and the Segmentation of Thoracic Organs at Risk (SegTHOR) dataset. Our method demonstrated superior performance, achieving a mean Dice score of 92.6% on the SegTHOR dataset and 90.8% on the AAPM dataset. CONCLUSIONS This segmentation method holds great promise as a tool for enhancing the efficiency of thoracic radiotherapy planning.
Affiliation(s)
- Wu Du: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Huimin Guo: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Boyang Chen: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Ming Cui: Gastrointestinal and Urinary and Musculoskeletal Cancer, Cancer Hospital of Dalian University of Technology, Shenyang, Liaoning, China
- Teng Zhang: Gastrointestinal and Urinary and Musculoskeletal Cancer, Cancer Hospital of Dalian University of Technology, Shenyang, Liaoning, China
- Deyu Sun: Gastrointestinal and Urinary and Musculoskeletal Cancer, Cancer Hospital of Dalian University of Technology, Shenyang, Liaoning, China
- He Ma: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, Liaoning, China
9. Wang X, Chang Y, Pei X, Xu XG. A prior-information-based automatic segmentation method for the clinical target volume in adaptive radiotherapy of cervical cancer. J Appl Clin Med Phys 2024; 25:e14350. PMID: 38546277; PMCID: PMC11087177; DOI: 10.1002/acm2.14350.
Abstract
OBJECTIVE Adaptive planning to accommodate anatomic changes during treatment often requires repeated segmentation. In this study, prior patient-specific data were integrated into a registration-guided multi-channel multi-path (Rg-MCMP) segmentation framework to improve the accuracy of repeated clinical target volume (CTV) segmentation. METHODS This study was based on CT image datasets from 90 cervical cancer patients who received two courses of radiotherapy; 15 patients were selected randomly as the test set. In the Rg-MCMP framework, the first-course CT images (CT1) were registered to the second-course CT images (CT2) to yield aligned CT images (aCT1), and the first-course CTV (CTV1) was propagated to yield aligned CTV contours (aCTV1). Then, aCT1, aCTV1, and CT2 were combined as inputs to a 3D U-Net consisting of a channel-based multi-path feature extraction network. The performance of the Rg-MCMP framework was evaluated and compared with a single-channel single-path (SCSP) model, the standalone registration methods, and a registration-guided multi-channel single-path (Rg-MCSP) model, using the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and average surface distance (ASD) as metrics. RESULTS The average CTV DSC for the deformable image registration (DIR)-based Rg-MCMP model was 0.892, exceeding that of standalone DIR (0.856), SCSP (0.837), and DIR-based Rg-MCSP (0.877), improvements of 4.2%, 6.6%, and 1.7%, respectively. Similarly, the rigid-registration-based Rg-MCMP model yielded an average DSC of 0.875, which exceeded standalone rigid registration (0.787), SCSP (0.837), and rigid-registration-based Rg-MCSP (0.848), improvements of 11.2%, 4.5%, and 3.2%, respectively. These improvements in DSC were statistically significant (p < 0.05). CONCLUSION The proposed Rg-MCMP framework achieved excellent accuracy in CTV segmentation as part of the adaptive radiotherapy workflow.
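The multi-channel input construction, stacking the registered prior CT, the propagated prior contour, and the current CT as network channels, can be sketched as follows (array shapes, dtypes, and the toy contour are illustrative):

```python
import numpy as np

def build_input(aCT1, aCTV1, CT2):
    """Stack the aligned prior CT, propagated prior contour, and current CT
    as a channel-first input tensor for a multi-channel segmentation net."""
    return np.stack([aCT1, aCTV1.astype(aCT1.dtype), CT2], axis=0)

rng = np.random.default_rng(1)
aCT1 = rng.random((64, 64)).astype(np.float32)    # registered first-course CT
aCTV1 = np.zeros((64, 64), dtype=np.float32)      # propagated CTV contour
aCTV1[20:40, 20:40] = 1.0
CT2 = rng.random((64, 64)).astype(np.float32)     # second-course CT
x = build_input(aCT1, aCTV1, CT2)                 # shape (3, 64, 64)
```

The contour channel gives the network an explicit prior on where the CTV lay in the first course.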
Affiliation(s)
- Xuanhe Wang: School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Yankui Chang: School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Xi Pei: School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China; Anhui Wisdom Technology Company Limited, Hefei, China
- Xie George Xu: School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China; Department of Radiation Oncology, The First Affiliated Hospital of University of Science and Technology of China, Hefei, China
10. Safari M, Eidex Z, Chang CW, Qiu RL, Yang X. Fast MRI Reconstruction Using Deep Learning-based Compressed Sensing: A Systematic Review. arXiv 2024; arXiv:2405.00241v1. PMID: 38745700; PMCID: PMC11092677.
Abstract
Magnetic resonance imaging (MRI) has revolutionized medical imaging, providing a non-invasive and highly detailed look into the human body. However, the long acquisition times of MRI present challenges, causing patient discomfort, motion artifacts, and limiting real-time applications. To address these challenges, researchers are exploring various techniques to reduce acquisition time and improve the overall efficiency of MRI. One such technique is compressed sensing (CS), which reduces data acquisition by leveraging image sparsity in transformed spaces. In recent years, deep learning (DL) has been integrated with CS-MRI, leading to a new framework that has seen remarkable growth. DL-based CS-MRI approaches are proving to be highly effective in accelerating MR imaging without compromising image quality. This review comprehensively examines DL-based CS-MRI techniques, focusing on their role in increasing MR imaging speed. We provide a detailed analysis of each category of DL-based CS-MRI, including end-to-end, unrolled optimization, self-supervised, and federated learning. Our systematic review highlights significant contributions and underscores the exciting potential of DL in CS-MRI. Additionally, it summarizes key results and trends in DL-based CS-MRI, including quantitative metrics, the datasets used, acceleration factors, and the progress of and research interest in DL techniques over time. Finally, we discuss potential future directions and the importance of DL-based CS-MRI in the advancement of medical imaging. To facilitate further research in this area, we provide a GitHub repository that includes up-to-date DL-based CS-MRI publications and publicly available datasets - https://github.com/mosaf/Awesome-DL-based-CS-MRI.
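The compressed-sensing premise, acquiring only a fraction of k-space and reconstructing from the undersampled data, can be illustrated with a zero-filled baseline. The 1D Cartesian line mask and keep fraction below are illustrative; DL-based CS-MRI methods replace this naive zero-filling with a learned reconstruction.

```python
import numpy as np

def undersample_kspace(image, keep_fraction=0.25, seed=0):
    """Simulate accelerated Cartesian acquisition: keep a random subset of
    phase-encode lines in k-space, zero-fill the rest, and reconstruct by
    inverse FFT.
    """
    k = np.fft.fft2(image)
    rng = np.random.default_rng(seed)
    rows = rng.random(image.shape[0]) < keep_fraction
    rows[:4] = True    # always sample the lowest frequencies, which sit
    rows[-4:] = True   # at both ends of the array in unshifted FFT order
    mask = np.zeros(k.shape, dtype=bool)
    mask[rows, :] = True
    zero_filled = np.fft.ifft2(k * mask).real
    return zero_filled, mask

phantom = np.outer(np.hanning(64), np.hanning(64))   # smooth toy image
recon, mask = undersample_kspace(phantom, keep_fraction=0.25)
```

The zero-filled reconstruction exhibits the aliasing artifacts that CS priors (or a trained network) are meant to remove.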
Affiliation(s)
- Mojtaba Safari: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Zach Eidex: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Chih-Wei Chang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Richard L.J. Qiu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
11. Wang G, Zhou M, Ning X, Tiwari P, Zhu H, Yang G, Yap CH. US2Mask: Image-to-mask generation learning via a conditional GAN for cardiac ultrasound image segmentation. Comput Biol Med 2024; 172:108282. PMID: 38503085; DOI: 10.1016/j.compbiomed.2024.108282.
Abstract
Cardiac ultrasound (US) image segmentation is vital for evaluating clinical indices, but it often demands a large dataset and expert annotations, resulting in high costs for deep learning algorithms. To address this, our study presents a framework that uses generative artificial intelligence to produce multi-class RGB masks for cardiac US image segmentation. The proposed approach directly performs semantic segmentation of the heart's main structures in US images from various scanning modes. Additionally, we introduce a novel learning approach for cardiac US image segmentation based on conditional generative adversarial networks (CGAN), incorporating a conditional input and paired RGB masks. Experimental results from three cardiac US image datasets with diverse scan modes demonstrate that our approach outperforms several state-of-the-art models, showing improvements in five commonly used segmentation metrics with lower noise sensitivity. Source code is available at https://github.com/energy588/US2mask.
Affiliation(s)
- Gang Wang
- School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China; Department of Bioengineering, Imperial College London, London, UK
- Mingliang Zhou
- School of Computer Science, Chongqing University, Chongqing, China
- Xin Ning
- Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China
- Prayag Tiwari
- School of Information Technology, Halmstad University, Halmstad, Sweden
- Guang Yang
- Department of Bioengineering, Imperial College London, London, UK; Cardiovascular Research Centre, Royal Brompton Hospital, London, UK; National Heart and Lung Institute, Imperial College London, London, UK
- Choon Hwai Yap
- Department of Bioengineering, Imperial College London, London, UK
12
Wen X, Zhao C, Zhao B, Yuan M, Chang J, Liu W, Meng J, Shi L, Yang S, Zeng J, Yang Y. Application of deep learning in radiation therapy for cancer. Cancer Radiother 2024; 28:208-217. [PMID: 38519291 DOI: 10.1016/j.canrad.2023.07.015] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2023] [Revised: 07/17/2023] [Accepted: 07/18/2023] [Indexed: 03/24/2024]
Abstract
In recent years, with the development of artificial intelligence, deep learning has gradually been applied to clinical treatment and research, including radiotherapy, a crucial method of cancer treatment. This study summarizes commonly used and recent deep learning algorithms (including transformers and diffusion models), introduces the workflows of different radiotherapy modalities, illustrates how different algorithms are applied in different radiotherapy modules, and discusses the limitations and challenges of deep learning in radiotherapy, so as to support the development of automated radiotherapy for cancer.
Affiliation(s)
- X Wen
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; Department of Radiotherapy, Yunnan Cancer Hospital, the Third Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
- C Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, No. 800, Dongchuan Road, Minhang District, Shanghai, China
- B Zhao
- Department of Radiotherapy, Yunnan Cancer Hospital, the Third Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
- M Yuan
- Department of Radiotherapy, Yunnan Cancer Hospital, the Third Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
- J Chang
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- W Liu
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- J Meng
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- L Shi
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- S Yang
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- J Zeng
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- Y Yang
- Department of Radiotherapy, Yunnan Cancer Hospital, the Third Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China.
13
Xia S, Li Q, Zhu HT, Zhang XY, Shi YJ, Yang D, Wu J, Guan Z, Lu Q, Li XT, Sun YS. Fully semantic segmentation for rectal cancer based on post-nCRT MRI modality and deep learning framework. BMC Cancer 2024; 24:315. [PMID: 38454349 PMCID: PMC10919051 DOI: 10.1186/s12885-024-11997-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2023] [Accepted: 02/13/2024] [Indexed: 03/09/2024] Open
Abstract
PURPOSE Rectal tumor segmentation on post-neoadjuvant chemoradiotherapy (nCRT) magnetic resonance imaging (MRI) has great significance for tumor measurement, radiomics analysis, treatment planning, and operative strategy. In this study, we developed and evaluated a segmentation approach based exclusively on post-chemoradiation T2-weighted MRI using convolutional neural networks, with the aim of reducing the detection workload for radiologists and clinicians. METHODS A total of 372 consecutive patients with locally advanced rectal cancer (LARC) were retrospectively enrolled from October 2015 to December 2017. The standard-of-care neoadjuvant process included 22-fraction intensity-modulated radiation therapy and oral capecitabine. Of these, 243 patients (3061 slices) were split into training and validation datasets with a random 80:20 split, and 41 patients (408 slices) were used as the test dataset. A symmetric eight-layer deep network was developed using the nnU-Net framework, which outputs a segmentation of the same size as its input. The trained deep learning (DL) network was examined using fivefold cross-validation and tumor lesions with different tumor regression grades (TRGs). RESULTS At the testing stage, the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and mean surface distance (MSD) were used to quantitatively evaluate generalization performance. On the test dataset (41 patients, 408 slices), the average DSC, HD95, and MSD were 0.700 (95% CI: 0.680-0.720), 17.73 mm (95% CI: 16.08-19.39), and 3.11 mm (95% CI: 2.67-3.56), respectively. Eighty-two percent of the MSD values were less than 5 mm, and fifty-five percent were less than 2 mm (median 1.62 mm, minimum 0.07 mm). CONCLUSIONS The experimental results indicate that the constructed pipeline achieves relatively high accuracy. Future work will focus on assessing performance with multicentre external validation.
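The overlap and surface-distance metrics reported in this entry (DSC, MSD) are standard segmentation measures. As a rough, hypothetical pure-Python illustration (brute force on small binary masks; not the authors' implementation, and all function names here are our own):

```python
import math

def dice(a, b):
    """Dice similarity coefficient between two binary masks (nested lists)."""
    inter = sum(av and bv for ra, rb in zip(a, b) for av, bv in zip(ra, rb))
    total = sum(v for r in a for v in r) + sum(v for r in b for v in r)
    return 2.0 * inter / total if total else 1.0

def boundary(mask):
    """Foreground pixels with at least one background 4-neighbour."""
    h, w = len(mask), len(mask[0])
    pts = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and any(
                ny < 0 or ny >= h or nx < 0 or nx >= w or not mask[ny][nx]
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
            ):
                pts.append((y, x))
    return pts

def mean_surface_distance(a, b):
    """Symmetric mean surface distance in pixel units (brute force)."""
    pa, pb = boundary(a), boundary(b)
    d_ab = [min(math.dist(p, q) for q in pb) for p in pa]
    d_ba = [min(math.dist(p, q) for q in pa) for p in pb]
    return (sum(d_ab) + sum(d_ba)) / (len(d_ab) + len(d_ba))
```

Practical evaluations use distance transforms (e.g., SciPy's `distance_transform_edt`) for speed, and HD95 takes the 95th percentile of the surface distances rather than their mean.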
Affiliation(s)
- Shaojun Xia
- Institute of Medical Technology, Peking University Health Science Center, Haidian District, No. 38 Xueyuan Road, Beijing, 100191, China
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/ Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Qingyang Li
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/ Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Hai-Tao Zhu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/ Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Xiao-Yan Zhang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/ Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Yan-Jie Shi
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/ Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Ding Yang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/ Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Jiaqi Wu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/ Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Zhen Guan
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/ Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Qiaoyuan Lu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/ Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Xiao-Ting Li
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/ Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Ying-Shi Sun
- Institute of Medical Technology, Peking University Health Science Center, Haidian District, No. 38 Xueyuan Road, Beijing, 100191, China.
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/ Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China.
14
Yasaka K, Saigusa H, Abe O. Effects of Intravenous Infusion of Iodine Contrast Media on the Tracheal Diameter and Lung Volume Measured with Deep Learning-Based Algorithm. JOURNAL OF IMAGING INFORMATICS IN MEDICINE 2024:10.1007/s10278-024-01071-4. [PMID: 38448759 DOI: 10.1007/s10278-024-01071-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/18/2023] [Revised: 02/06/2024] [Accepted: 02/22/2024] [Indexed: 03/08/2024]
Abstract
This study aimed to investigate the effects of intravenous injection of iodine contrast agent on tracheal diameter and lung volume. In this retrospective study, 221 patients (71.1 ± 12.4 years, 174 males) who underwent vascular dynamic CT examinations that included the chest were enrolled. Unenhanced, arterial-phase, and delayed-phase images were acquired. The tracheal luminal diameter at the level of the thoracic inlet and both lung volumes were evaluated by a radiologist using commercial software that performs automatic airway and lung segmentation. The tracheal diameter and lung volumes were compared between the unenhanced and the arterial and delayed phases using paired t-tests, with Bonferroni correction for multiple comparisons. The tracheal diameter in the arterial phase (18.6 ± 2.4 mm) was significantly smaller than that on unenhanced CT (19.1 ± 2.5 mm) (p < 0.001); no significant difference was found between the delayed phase (19.0 ± 2.4 mm) and unenhanced CT (p = 0.077). The combined lung volume in the arterial phase (4131 ± 1051 mL) was significantly smaller than that on unenhanced CT (4332 ± 1076 mL) (p < 0.001); no significant difference was found between the delayed phase (4284 ± 1054 mL) and unenhanced CT (p = 0.068). In conclusion, intravenous infusion of iodine contrast agent transiently decreased the tracheal diameter and both lung volumes.
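The statistical test described here is a paired t-test with Bonferroni correction across the two contrast-phase comparisons. A minimal stdlib-only sketch (illustrative and not the study's code; the p-value itself comes from the t-distribution with n − 1 degrees of freedom, e.g., via `scipy.stats.ttest_rel`):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t_statistic(x, y):
    """Paired t-test statistic and degrees of freedom for matched samples."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / sqrt(n))
    return t, n - 1

def bonferroni_alpha(alpha, n_comparisons):
    """Bonferroni-corrected per-comparison significance threshold."""
    return alpha / n_comparisons
```

With two comparisons (arterial vs. unenhanced, delayed vs. unenhanced), each test would be judged against alpha/2 = 0.025 rather than 0.05.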
Affiliation(s)
- Koichiro Yasaka
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan.
- Hiroyuki Saigusa
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Osamu Abe
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
15
Cantrell DR, Cho L, Zhou C, Faruqui SHA, Potts MB, Jahromi BS, Abdalla R, Shaibani A, Ansari SA. Background Subtraction Angiography with Deep Learning Using Multi-frame Spatiotemporal Angiographic Input. JOURNAL OF IMAGING INFORMATICS IN MEDICINE 2024; 37:134-144. [PMID: 38343209 PMCID: PMC10980661 DOI: 10.1007/s10278-023-00921-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/01/2023] [Revised: 09/29/2023] [Accepted: 10/23/2023] [Indexed: 03/02/2024]
Abstract
Catheter digital subtraction angiography (DSA) is markedly degraded by any voluntary, respiratory, or cardiac motion artifact that occurs during exam acquisition. Prior efforts to improve DSA images with machine learning have focused on extracting vessels from individual, isolated 2D angiographic frames. In this work, we introduce improved 2D + t deep learning models that leverage the rich temporal information in angiographic time series. A total of 516 cerebral angiography exams comprising 8784 individual series were collected. We utilized feature-based computer vision algorithms to separate the database into "motionless" and "motion-degraded" subsets. Motion measured from the "motion-degraded" category was then used to create a realistic, but synthetic, motion-augmented dataset suitable for training 2D U-Net, 3D U-Net, SegResNet, and UNETR models. Quantitative results on a hold-out test set demonstrate that the 3D U-Net outperforms competing 2D U-Net architectures, with substantially reduced motion artifacts compared to DSA. Compared with the single-frame 2D U-Net, the 3D U-Net utilizing 16 input frames achieves a reduced RMSE (35.77 ± 15.02 vs 23.14 ± 9.56, p < 0.0001; mean ± std dev) and an improved multi-scale SSIM (0.86 ± 0.08 vs 0.93 ± 0.05, p < 0.0001). The 3D U-Net also performs favorably against alternative convolutional and transformer-based architectures (U-Net RMSE 23.20 ± 7.55 vs SegResNet 23.99 ± 7.81, p < 0.0001, and UNETR 25.42 ± 7.79, p < 0.0001; mean ± std dev). These results demonstrate that multi-frame temporal information can boost the performance of motion-resistant background-subtraction deep learning algorithms, and we present a neuroangiography domain-specific synthetic affine motion augmentation pipeline that can be used to generate suitable datasets for supervised training of 3D (2D + t) architectures.
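RMSE, one of the two image-quality metrics compared above, can be sketched for flattened images as follows (a hypothetical illustration only; multi-scale SSIM additionally requires windowed filtering across several image scales and is best taken from an established library):

```python
from math import sqrt

def rmse(pred, target):
    """Root-mean-square error between two images given as flat pixel lists."""
    assert len(pred) == len(target), "images must have the same size"
    return sqrt(sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred))
```

A lower RMSE against the motion-free reference indicates the network has suppressed more of the motion artifact.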
Affiliation(s)
- Donald R Cantrell
- Department of Radiology, Northwestern University Feinberg School of Medicine, 737 N Michigan Ave, Suite 1600, Chicago, IL, 60611, USA.
- Department of Neurology, Northwestern University Feinberg School of Medicine, Chicago, IL, USA.
- Department of Radiology, Ann and Robert H. Lurie Children's Hospital, Chicago, IL, USA.
- Leon Cho
- Department of Radiology, Northwestern University Feinberg School of Medicine, 737 N Michigan Ave, Suite 1600, Chicago, IL, 60611, USA
- Chaochao Zhou
- Department of Radiology, Northwestern University Feinberg School of Medicine, 737 N Michigan Ave, Suite 1600, Chicago, IL, 60611, USA
- Syed H A Faruqui
- Department of Radiology, Northwestern University Feinberg School of Medicine, 737 N Michigan Ave, Suite 1600, Chicago, IL, 60611, USA
- Matthew B Potts
- Department of Radiology, Northwestern University Feinberg School of Medicine, 737 N Michigan Ave, Suite 1600, Chicago, IL, 60611, USA
- Department of Neurological Surgery, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Babak S Jahromi
- Department of Radiology, Northwestern University Feinberg School of Medicine, 737 N Michigan Ave, Suite 1600, Chicago, IL, 60611, USA
- Department of Neurological Surgery, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Ramez Abdalla
- Department of Radiology, Northwestern University Feinberg School of Medicine, 737 N Michigan Ave, Suite 1600, Chicago, IL, 60611, USA
- Ali Shaibani
- Department of Radiology, Northwestern University Feinberg School of Medicine, 737 N Michigan Ave, Suite 1600, Chicago, IL, 60611, USA
- Department of Neurology, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Department of Neurological Surgery, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Department of Radiology, Ann and Robert H. Lurie Children's Hospital, Chicago, IL, USA
- Sameer A Ansari
- Department of Radiology, Northwestern University Feinberg School of Medicine, 737 N Michigan Ave, Suite 1600, Chicago, IL, 60611, USA
- Department of Neurology, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Department of Neurological Surgery, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
16
Karimipourfard M, Sina S, Mahani H, Alavi M, Yazdi M. Impact of deep learning-based multiorgan segmentation methods on patient-specific internal dosimetry in PET/CT imaging: A comparative study. J Appl Clin Med Phys 2024; 25:e14254. [PMID: 38214349 PMCID: PMC10860559 DOI: 10.1002/acm2.14254] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2023] [Revised: 10/29/2023] [Accepted: 11/30/2023] [Indexed: 01/13/2024] Open
Abstract
PURPOSE Accurate and fast multiorgan segmentation is essential for image-based internal dosimetry in nuclear medicine. Conventional manual PET image segmentation is widely used but is both time-consuming and subject to human error. This study exploited 2D and 3D deep learning (DL) models: key organs in the trunk were segmented and then used as references for the networks. METHODS The pre-trained p2p-U-Net-GAN and HighRes3D architectures were fine-tuned with PET-only images as inputs. Additionally, the HighRes3D model was alternatively trained with PET/CT images. Evaluation metrics such as sensitivity (SEN), specificity (SPC), intersection over union (IoU), and Dice scores were used to assess the performance of the networks. The impact of the DL-assisted PET image segmentation methods was further assessed using Monte Carlo (MC)-derived S-values for internal dosimetry, with manual low-dose CT-aided segmentation of the PET images serving as the comparison. RESULTS Although both the 2D and 3D models performed well, HighRes3D offered superior performance, with Dice scores above 0.90. The SEN, SPC, and IoU metrics fell in the 0.89-0.93, 0.98-0.99, and 0.87-0.89 intervals, respectively, indicating encouraging model performance. The percentage differences between the manual and DL segmentation methods in the calculated S-values varied between 0.1% and 6%, with the maximum occurring for the stomach. CONCLUSION The findings show that while incorporating the anatomical information provided by CT offers superior Dice scores, the performance of HighRes3D remains comparable without the extra CT channel. Both proposed DL-based methods provide automated, fast segmentation of whole-body PET/CT images with promising evaluation metrics; between the two, HighRes3D performs better and can therefore be the method of choice for 18F-FDG-PET image segmentation.
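The segmentation metrics reported in this entry (SEN, SPC, IoU) all derive from the confusion counts of a predicted mask against a reference mask, and the S-value comparison is a simple percentage difference. An illustrative pure-Python sketch (our own hypothetical helper names, not the authors' implementation):

```python
def seg_metrics(pred, truth):
    """Sensitivity, specificity, and IoU from paired binary label lists."""
    tp = sum(p and t for p, t in zip(pred, truth))          # true positives
    tn = sum((not p) and (not t) for p, t in zip(pred, truth))
    fp = sum(p and (not t) for p, t in zip(pred, truth))
    fn = sum((not p) and t for p, t in zip(pred, truth))
    sen = tp / (tp + fn)        # sensitivity (recall)
    spc = tn / (tn + fp)        # specificity
    iou = tp / (tp + fp + fn)   # intersection over union
    return sen, spc, iou

def percent_difference(manual, dl):
    """Relative difference between manual- and DL-derived S-values, in %."""
    return abs(manual - dl) / manual * 100.0
```

For example, a manual S-value of 10.0 and a DL-derived value of 9.4 would give the 6% maximum difference cited for the stomach.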
Affiliation(s)
- Sedigheh Sina
- Department of Ray-Medical Engineering, Shiraz University, Shiraz, Iran
- Radiation Research Center, Shiraz University, Shiraz, Iran
- Hojjat Mahani
- Radiation Applications Research School, Nuclear Science and Technology Research Institute, Tehran, Iran
- Mehrosadat Alavi
- Department of Nuclear Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
- Mehran Yazdi
- School of Electrical and Computer Engineering, Shiraz University, Shiraz, Iran
17
Sheikhi M, Sina S, Karimipourfard M. Deep-learned generation of renal dual-energy CT from a single-energy scan. Clin Radiol 2024; 79:e17-e25. [PMID: 37923626 DOI: 10.1016/j.crad.2023.09.021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2023] [Revised: 09/14/2023] [Accepted: 09/24/2023] [Indexed: 11/07/2023]
Abstract
AIM To investigate the role of deep learning (DL) in the generation of dual-energy computed tomography (DECT) images from single-energy images for precise diagnosis of kidney stone type. MATERIALS AND METHODS DECT examinations of 23 patients were acquired, and the stone types were determined based on the DECT software suggestions. The data were divided into two paired groups: 120 kVp input with 80 kVp target, and 120 kVp input with 135 kVp target. A p2p-UNet-GAN was used to generate the different energy images based on common CT protocols. RESULTS The images generated by the generative adversarial network (GAN) were evaluated with the SSIM, PSNR, and MSE metrics, with values estimated at 0.85-0.95, 28-32, and 0.85-0.89, respectively. The attenuation ratios of the test patients' images were estimated and compared with the real patient reports. The network achieved high accuracy in stone-region localisation and produced accurate stone-type predictions. CONCLUSION This study presents a useful DL-based method to reduce patient radiation dose and facilitate the prediction of urinary stone types using single-energy CT imaging.
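The three image-quality metrics used here (SSIM, PSNR, MSE) can be sketched for flattened images. This is a simplified, hypothetical illustration: it computes a single global SSIM, whereas practical implementations average SSIM over local Gaussian windows.

```python
from math import log10

def mse(a, b):
    """Mean squared error between two flat pixel lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * log10(max_val ** 2 / m)

def ssim_global(a, b, max_val=255.0):
    """Single-window (global) SSIM; real use averages over local windows."""
    n = len(a)
    mu_a, mu_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mu_a) ** 2 for x in a) / n
    var_b = sum((y - mu_b) ** 2 for y in b) / n
    cov = sum((x - mu_a) * (y - mu_b) for x, y in zip(a, b)) / n
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2  # stabilizers
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
```

SSIM near 1 and high PSNR (here 28-32 dB) indicate that the synthesized energy images closely match the acquired DECT images.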
Affiliation(s)
- M Sheikhi
- Nuclear Engineering Department, School of Mechanical Engineering, Shiraz University, Shiraz, Iran; Abu Ali Sina Hospital, Shiraz, Iran
- S Sina
- Nuclear Engineering Department, School of Mechanical Engineering, Shiraz University, Shiraz, Iran; Radiation Research Center, Shiraz University, Shiraz, Iran.
- M Karimipourfard
- Nuclear Engineering Department, School of Mechanical Engineering, Shiraz University, Shiraz, Iran
18
Huang W, Zhang H, Cheng Y, Quan X. DRCM: a disentangled representation network based on coordinate and multimodal attention for medical image fusion. Front Physiol 2023; 14:1241370. [PMID: 38028809 PMCID: PMC10656763 DOI: 10.3389/fphys.2023.1241370] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2023] [Accepted: 10/02/2023] [Indexed: 12/01/2023] Open
Abstract
Recent studies on deep-learning-based medical image fusion have made remarkable progress, but the common and exclusive features of different modalities, and especially their subsequent enhancement, are often ignored. Since medical images of different modalities carry unique information, dedicated learning of exclusive features is needed to express that unique information and obtain a fused medical image with more information and detail. We therefore propose an attention-mechanism-based disentangled representation network for medical image fusion (DRCM), which uses coordinate attention and multimodal attention to extract and strengthen common and exclusive features. First, the common and exclusive features of each modality are obtained by cross-mutual-information and adversarial-objective methods, respectively. Then, coordinate attention enhances the common and exclusive features of the different modalities, and the exclusive features are weighted by multimodal attention. Finally, these two kinds of features are fused. The effectiveness of the three innovation modules is verified by ablation experiments. Furthermore, eight comparison methods were selected for qualitative analysis, and four metrics were used for quantitative comparison. DRCM achieved better results on the SCD, Nabf, and MS-SSIM metrics, indicating that it further improves the visual quality of the fused image, retaining more information from the source images with less noise. Through comprehensive comparison and analysis of the experimental results, DRCM was found to outperform the comparison methods.
Affiliation(s)
- Han Zhang
- College of Artificial Intelligence, Nankai University, Tianjin, China
19
Hong GS, Jang M, Kyung S, Cho K, Jeong J, Lee GY, Shin K, Kim KD, Ryu SM, Seo JB, Lee SM, Kim N. Overcoming the Challenges in the Development and Implementation of Artificial Intelligence in Radiology: A Comprehensive Review of Solutions Beyond Supervised Learning. Korean J Radiol 2023; 24:1061-1080. [PMID: 37724586 PMCID: PMC10613849 DOI: 10.3348/kjr.2023.0393] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2023] [Revised: 07/01/2023] [Accepted: 07/30/2023] [Indexed: 09/21/2023] Open
Abstract
Artificial intelligence (AI) in radiology is a rapidly developing field with several prospective clinical studies demonstrating its benefits in clinical practice. In 2022, the Korean Society of Radiology held a forum to discuss the challenges and drawbacks in AI development and implementation. Various barriers hinder the successful application and widespread adoption of AI in radiology, such as limited annotated data, data privacy and security, data heterogeneity, imbalanced data, model interpretability, overfitting, and integration with clinical workflows. In this review, some of the various possible solutions to these challenges are presented and discussed; these include training with longitudinal and multimodal datasets, dense training with multitask learning and multimodal learning, self-supervised contrastive learning, various image modifications and syntheses using generative models, explainable AI, causal learning, federated learning with large data models, and digital twins.
Affiliation(s)
- Gil-Sun Hong
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Miso Jang
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sunggu Kyung
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Kyungjin Cho
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Jiheon Jeong
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Grace Yoojin Lee
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Keewon Shin
- Laboratory for Biosignal Analysis and Perioperative Outcome Research, Biomedical Engineering Center, Asan Institute of Lifesciences, Asan Medical Center, Seoul, Republic of Korea
- Ki Duk Kim
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Seung Min Ryu
- Department of Orthopedic Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Joon Beom Seo
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sang Min Lee
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea.
- Namkug Kim
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea.
20
Tian M, Wang H, Liu X, Ye Y, Ouyang G, Shen Y, Li Z, Wang X, Wu S. Delineation of clinical target volume and organs at risk in cervical cancer radiotherapy by deep learning networks. Med Phys 2023; 50:6354-6365. [PMID: 37246619 DOI: 10.1002/mp.16468] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2022] [Revised: 04/17/2023] [Accepted: 04/28/2023] [Indexed: 05/30/2023] Open
Abstract
PURPOSE Delineation of the clinical target volume (CTV) and organs at risk (OARs) is important in cervical cancer radiotherapy, but it is generally labor-intensive, time-consuming, and subjective. This paper proposes a parallel-path attention fusion network (PPAF-net) to overcome these disadvantages in the delineation task. METHODS The PPAF-net utilizes both the texture and structure information of the CTV and OARs by employing a U-Net to capture high-level texture information and an up-sampling and down-sampling (USDS) network to capture low-level structure information that accentuates the boundaries of the CTV and OARs. Multi-level features extracted from both networks are then fused through an attention module to generate the delineation result. RESULTS The dataset contains 276 computed tomography (CT) scans of patients with stage IB-IIA cervical cancer, provided by the West China Hospital of Sichuan University. Simulation results demonstrate that PPAF-net performs favorably on the delineation of the CTV and OARs (e.g., rectum, bladder) and achieves state-of-the-art delineation accuracy. In terms of the Dice similarity coefficient (DSC) and the Hausdorff distance (HD), the results were 88.61% and 2.25 cm for the CTV, 92.27% and 0.73 cm for the rectum, 96.74% and 0.68 cm for the bladder, 96.38% and 0.65 cm for the left kidney, 96.79% and 0.63 cm for the right kidney, 93.42% and 0.52 cm for the left femoral head, 93.69% and 0.51 cm for the right femoral head, 87.53% and 1.07 cm for the small intestine, and 91.50% and 0.84 cm for the spinal cord. CONCLUSIONS The proposed automatic delineation network PPAF-net performs well on CTV and OAR segmentation tasks and has great potential for reducing the burden on radiation oncologists while increasing delineation accuracy. In the future, radiation oncologists from the West China Hospital of Sichuan University will further evaluate the network's delineations to make this method useful in clinical practice.
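The Hausdorff distance (HD) reported alongside DSC measures worst-case contour disagreement between two boundaries. A brute-force sketch over boundary point sets (hypothetical illustration with our own function names; clinical tools typically use distance transforms, and the 95th-percentile variant is common because it is less sensitive to outliers):

```python
import math

def directed_distances(pa, pb):
    """Distance from each point in pa to its nearest point in pb."""
    return [min(math.dist(p, q) for q in pb) for p in pa]

def hausdorff(pa, pb, percentile=100.0):
    """Symmetric (percentile) Hausdorff distance between two point sets."""
    d = sorted(directed_distances(pa, pb) + directed_distances(pb, pa))
    k = max(0, math.ceil(percentile / 100.0 * len(d)) - 1)
    return d[k]
```

Calling `hausdorff(pa, pb, percentile=95.0)` gives the HD95 variant, while the default returns the classic maximum-distance HD quoted in this abstract.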
Affiliation(s)
- Miao Tian
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Hongqiu Wang
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Xingang Liu
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yuyun Ye
- Department of Electrical and Computer Engineering, University of Tulsa, Tulsa, USA
- Ganlu Ouyang
- Department of Radiation Oncology, Cancer Center, the West China Hospital of Sichuan University, Chengdu, China
- Yali Shen
- Department of Radiation Oncology, Cancer Center, the West China Hospital of Sichuan University, Chengdu, China
- Zhiping Li
- Department of Radiation Oncology, Cancer Center, the West China Hospital of Sichuan University, Chengdu, China
- Xin Wang
- Department of Radiation Oncology, Cancer Center, the West China Hospital of Sichuan University, Chengdu, China
- Shaozhi Wu
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou, China
21
Paladugu PS, Ong J, Nelson N, Kamran SA, Waisberg E, Zaman N, Kumar R, Dias RD, Lee AG, Tavakkoli A. Generative Adversarial Networks in Medicine: Important Considerations for this Emerging Innovation in Artificial Intelligence. Ann Biomed Eng 2023; 51:2130-2142. [PMID: 37488468 DOI: 10.1007/s10439-023-03304-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2023] [Accepted: 07/03/2023] [Indexed: 07/26/2023]
Abstract
The advent of artificial intelligence (AI) and machine learning (ML) has revolutionized the field of medicine. Although highly effective, the rapid expansion of this technology has created some anticipated and unanticipated bioethical considerations. With these powerful applications, regulatory frameworks are needed to ensure the equitable and safe deployment of the technology. Generative Adversarial Networks (GANs) are emerging ML techniques with immense applications in medical imaging due to their ability to produce synthetic medical images and aid in medical AI training. Producing accurate synthetic images with GANs can address current limitations in AI development for medical imaging and overcome current dataset type and size constraints. Offsetting these constraints can dramatically improve the development and implementation of AI medical imaging and restructure the practice of medicine. As with its AI predecessors, safeguards must be put in place to help regulate its development for clinical use. In this paper, we discuss the legal, ethical, and technical challenges for the future safe integration of this technology in the healthcare sector.
Affiliation(s)
- Phani Srivatsav Paladugu
- Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA, USA
- Joshua Ong
- Michigan Medicine, University of Michigan, Ann Arbor, MI, USA
- Nicolas Nelson
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA, USA
- Sharif Amit Kamran
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA
- Ethan Waisberg
- University College Dublin School of Medicine, Belfield, Dublin, Ireland
- Nasif Zaman
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA
- Roger Daglius Dias
- Department of Emergency Medicine, Harvard Medical School, Boston, MA, USA
- STRATUS Center for Medical Simulation, Brigham and Women's Hospital, Boston, MA, USA
- Andrew Go Lee
- Center for Space Medicine, Baylor College of Medicine, Houston, TX, USA
- Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, TX, USA
- The Houston Methodist Research Institute, Houston Methodist Hospital, Houston, TX, USA
- Departments of Ophthalmology, Neurology, and Neurosurgery, Weill Cornell Medicine, New York, NY, USA
- Department of Ophthalmology, University of Texas Medical Branch, Galveston, TX, USA
- University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Texas A&M College of Medicine, Bryan, TX, USA
- Department of Ophthalmology, The University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Alireza Tavakkoli
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA.
22
Mori S, Hirai R, Sakata Y, Tachibana Y, Koto M, Ishikawa H. Deep neural network-based synthetic image digital fluoroscopy using digitally reconstructed tomography. Phys Eng Sci Med 2023; 46:1227-1237. [PMID: 37349631 DOI: 10.1007/s13246-023-01290-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2023] [Accepted: 06/16/2023] [Indexed: 06/24/2023]
Abstract
We developed a deep neural network (DNN) to generate X-ray flat panel detector (FPD) images from digitally reconstructed radiograph (DRR) images. FPD and treatment planning CT images were acquired from patients with prostate and head and neck (H&N) malignancies. The DNN parameters were optimized for FPD image synthesis. The synthetic FPD images were compared with the corresponding ground-truth FPD images using mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM). The image quality of the synthetic FPD images was also compared with that of the DRR images to assess the performance of our DNN. For the prostate cases, the MAE of the synthetic FPD image (0.12 ± 0.02) improved on that of the input DRR image (0.35 ± 0.08). The synthetic FPD image showed higher PSNRs (16.81 ± 1.54 dB) than the DRR image (8.74 ± 1.56 dB), while the SSIMs of both images (0.69) were almost the same. All metrics for the synthetic FPD images of the H&N cases improved (MAE 0.08 ± 0.03, PSNR 19.40 ± 2.83 dB, and SSIM 0.80 ± 0.04) compared with those of the DRR images (MAE 0.48 ± 0.11, PSNR 5.74 ± 1.63 dB, and SSIM 0.52 ± 0.09). Our DNN successfully generated FPD images from DRR images. This technique would be useful to increase throughput when images from two different modalities are compared by visual inspection.
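The MAE and PSNR figures above follow their standard definitions. A minimal stdlib-only sketch of both metrics for intensity-normalized images (an illustration with toy values, not the authors' evaluation code):

```python
import math

def mae(img_a, img_b):
    """Mean absolute error between two images given as flat intensity lists."""
    return sum(abs(a - b) for a, b in zip(img_a, img_b)) / len(img_a)

def psnr(img_a, img_b, max_val=1.0):
    """Peak signal-to-noise ratio in dB; assumes intensities in [0, max_val]."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Toy flattened images with intensities in [0, 1]
ref = [0.2, 0.5, 0.8, 1.0]
syn = [0.25, 0.45, 0.8, 0.9]
print(round(mae(ref, syn), 4))   # → 0.05
print(round(psnr(ref, syn), 2))  # → 24.26
```

Real FPD/DRR images would be 2D arrays, but flattening them leaves both metrics unchanged.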
Affiliation(s)
- Shinichiro Mori
- National Institutes for Quantum Science and Technology, Quantum Life and Medical Science Directorate, Institute for Quantum Medical Science, Inage-ku, Chiba, 263-8555, Japan.
- Ryusuke Hirai
- Corporate Research and Development Center, Toshiba Corporation, Kanagawa, 212-8582, Japan
- Yukinobu Sakata
- Corporate Research and Development Center, Toshiba Corporation, Kanagawa, 212-8582, Japan
- Yasuhiko Tachibana
- National Institutes for Quantum Science and Technology, Quantum Life and Medical Science Directorate, Institute for Quantum Medical Science, Inage-ku, Chiba, 263-8555, Japan
- Masashi Koto
- QST Hospital, National Institutes for Quantum Science and Technology, Inage-ku, Chiba, 263-8555, Japan
- Hitoshi Ishikawa
- QST Hospital, National Institutes for Quantum Science and Technology, Inage-ku, Chiba, 263-8555, Japan
23
Shao Y, Guo J, Wang J, Huang Y, Gan W, Zhang X, Wu G, Sun D, Gu Y, Gu Q, Yue NJ, Yang G, Xie G, Xu Z. Novel in-house knowledge-based automated planning system for lung cancer treated with intensity-modulated radiotherapy. Strahlenther Onkol 2023:10.1007/s00066-023-02126-1. [PMID: 37603050 DOI: 10.1007/s00066-023-02126-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2022] [Accepted: 07/10/2023] [Indexed: 08/22/2023]
Abstract
PURPOSE The goal of this study was to propose a knowledge-based planning system that could automatically design plans for lung cancer patients treated with intensity-modulated radiotherapy (IMRT). METHODS AND MATERIALS From May 2018 to June 2020, 612 IMRT treatment plans of lung cancer patients were retrospectively selected to construct a planning database. A knowledge-based planning (KBP) architecture named αDiar was proposed in this study. It consisted of two parts separated by a firewall: the in-hospital workstation and the search engine in the cloud. Based on our previous study, A-Net in the in-hospital workstation was used to generate predicted virtual dose images. A search engine including a three-dimensional convolutional neural network (3D CNN) was constructed to derive the feature vectors of dose images. By comparing the similarity of the features of the virtual dose image with those of the clinical dose images in the database, the most similar feature was found. The optimization parameters (OPs) of the treatment plan corresponding to the most similar feature were assigned to the new plan, completing the design of a new treatment plan automatically. After αDiar was developed, we performed two studies. The first, a retrospective study involving 96 patients, validated whether this architecture was qualified for clinical practice. The second, a comparative study, investigated whether αDiar could assist dosimetrists in improving planning quality: two dosimetrists designed plans for the same patients both with and without αDiar; 26 patients were involved in this study.
RESULTS The first study showed that about 54% (52/96) of the automatically generated plans would achieve the dosimetric constraints of the Radiation Therapy Oncology Group (RTOG) and about 93% (89/96) of the automatically generated plans would achieve the dosimetric constraints of the National Comprehensive Cancer Network (NCCN). The second study showed that the quality of treatment planning designed by junior dosimetrists was improved with the help of αDiar. CONCLUSIONS Our results showed that αDiar was an effective tool to improve planning quality. Over half of the patients' plans could be designed automatically. For the remaining patients, although the automatically designed plans did not fully meet the clinical requirements, their quality was also better than that of manual plans.
Affiliation(s)
- Yan Shao
- Shanghai Chest Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Jindong Guo
- Shanghai Chest Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Jiyong Wang
- Shanghai Pulse Medical Technology Inc., Shanghai, China
- Ying Huang
- Shanghai Chest Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Wutian Gan
- School of Physics and Technology, University of Wuhan, Wuhan, China
- Xiaoying Zhang
- School of Information Science and Engineering, Xiamen University, Xiamen, China
- Ge Wu
- Ping An Healthcare Technology Co. Ltd., Shanghai, China
- Dong Sun
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Yu Gu
- School of Engineering, Hong Kong University of Science and Technology, Hong Kong SAR, China
- Qingtao Gu
- School of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Ning Jeff Yue
- Department of Radiation Oncology, Rutgers Cancer Institute of New Jersey, Rutgers University, New Brunswick, NJ, USA
- Guanli Yang
- Radiotherapy Department, Shandong Second Provincial General Hospital, Shandong University, Jinan, China.
- Guotong Xie
- Ping An Healthcare Technology Co. Ltd., Shanghai, China.
- Ping An Health Cloud Company Limited, Shanghai, China.
- Ping An International Smart City Technology Co., Ltd., Shanghai, China.
- Zhiyong Xu
- Shanghai Chest Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China.
24
Ribeiro MF, Marschner S, Kawula M, Rabe M, Corradini S, Belka C, Riboldi M, Landry G, Kurz C. Deep learning based automatic segmentation of organs-at-risk for 0.35 T MRgRT of lung tumors. Radiat Oncol 2023; 18:135. [PMID: 37574549 PMCID: PMC10424424 DOI: 10.1186/s13014-023-02330-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2023] [Accepted: 08/03/2023] [Indexed: 08/15/2023] Open
Abstract
BACKGROUND AND PURPOSE Magnetic resonance imaging guided radiotherapy (MRgRT) offers treatment plan adaptation to the anatomy of the day. In the current MRgRT workflow, this requires the time-consuming and repetitive task of manual delineation of organs-at-risk (OARs), which is also prone to inter- and intra-observer variability. Therefore, deep learning autosegmentation (DLAS) is becoming increasingly attractive. No investigation of its application to OARs in thoracic magnetic resonance images (MRIs) from MRgRT has been done so far. This study aimed to fill this gap. MATERIALS AND METHODS 122 planning MRIs from patients treated at a 0.35 T MR-Linac were retrospectively collected. Using an 80/19/23 (training/validation/test) split, individual 3D U-Nets for segmentation of the left lung, right lung, heart, aorta, spinal canal and esophagus were trained. The resulting segmentations were compared to the clinically used contours based on the Dice similarity coefficient (DSC) and Hausdorff distance (HD), and were also graded on their clinical usability by a radiation oncologist. RESULTS Median DSC was 0.96, 0.96, 0.94, 0.90, 0.88 and 0.78 for the left lung, right lung, heart, aorta, spinal canal and esophagus, respectively. Median 95th percentile HD values were 3.9, 5.3, 5.8, 3.0, 2.6 and 3.5 mm, respectively. The physician preferred the network-generated contours over the clinical contours, deeming 85 out of 129 to not require any correction, 25 immediately usable for treatment planning, 15 requiring minor and 4 requiring major corrections. CONCLUSIONS We trained 3D U-Nets on clinical MRI planning data which produced accurate delineations in the thoracic region. DLAS contours were preferred over the clinical contours.
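The 95th percentile Hausdorff distance reported here is a robust variant of the maximum surface distance: the largest 5% of nearest-neighbour distances are discarded before taking the maximum. A schematic stdlib-only sketch on 2D point sets (a simplification of the 3D contour case, not the study's implementation):

```python
import math

def hd95(points_a, points_b):
    """Symmetric 95th-percentile Hausdorff distance between two 2D point sets."""
    def nn_dists(src, dst):
        # Euclidean distance from each point in src to its nearest point in dst
        return [min(math.dist(s, d) for d in dst) for s in src]

    dists = sorted(nn_dists(points_a, points_b) + nn_dists(points_b, points_a))
    k = math.ceil(0.95 * len(dists)) - 1  # nearest-rank 95th percentile
    return dists[k]

# Toy contours as sparse point sets
a = [(0, 0), (1, 0), (2, 0)]
b = [(0, 1), (1, 1), (2, 3)]
print(hd95(a, b))  # → 3.0
```

On dense surface meshes the same computation is done over all boundary voxels or vertices.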
Affiliation(s)
- Marvin F Ribeiro
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Sebastian Marschner
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Maria Kawula
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Moritz Rabe
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Stefanie Corradini
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Claus Belka
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- German Cancer Consortium (DKTK), Partner Site Munich, Munich, Germany
- Bavarian Cancer Research Center (BZKF), Munich, Germany
- Marco Riboldi
- Department of Medical Physics, Ludwig-Maximilians-Universität München, Garching, Germany
- Guillaume Landry
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Christopher Kurz
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany.
25
Cai B, Xu Q, Yang C, Lu Y, Ge C, Wang Z, Liu K, Qiu X, Chang S. Spine MRI image segmentation method based on ASPP and U-Net network. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2023; 20:15999-16014. [PMID: 37919999 DOI: 10.3934/mbe.2023713] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/04/2023]
Abstract
The spine is one of the most important structures in the human body, serving to support the body and organs, protect nerves, etc. Medical image segmentation of the spine can help doctors in clinical practice with rapid decision making, surgery planning, skeletal health diagnosis, etc. The current difficulty is mainly the poor segmentation accuracy of skeletal Magnetic Resonance Imaging (MRI) images. To address this problem, we propose a spine MRI image segmentation method, Atrous Spatial Pyramid Pooling (ASPP)-U-shaped network (UNet), which combines an ASPP structure with a U-Net network. This approach improves the network's feature extraction by introducing an ASPP structure into the down-sampling path of the U-Net. The model was trained and tested on publicly available datasets, achieving a Dice coefficient of 0.866 and a Mean Intersection over Union of 0.755. The experimental results show that ASPP-UNet has higher accuracy for spine MRI image segmentation compared with other mainstream networks.
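Atrous Spatial Pyramid Pooling applies convolutions with several dilation rates in parallel to capture multi-scale context. As a schematic illustration of the underlying atrous (dilated) operation in one dimension (a generic sketch, not the paper's implementation), the kernel taps are spaced `rate` samples apart, enlarging the receptive field without adding parameters:

```python
def dilated_conv1d(signal, kernel, rate=1):
    """'Valid' 1D convolution with dilation: kernel taps spaced `rate` apart.

    rate=1 is an ordinary convolution; larger rates widen the receptive
    field (span = (len(kernel) - 1) * rate + 1) at no extra parameter cost.
    """
    span = (len(kernel) - 1) * rate + 1
    return [
        sum(kernel[j] * signal[i + j * rate] for j in range(len(kernel)))
        for i in range(len(signal) - span + 1)
    ]

x = [1, 2, 3, 4, 5, 6]
k = [1, 0, -1]  # simple difference kernel
print(dilated_conv1d(x, k, rate=1))  # → [-2, -2, -2, -2]
print(dilated_conv1d(x, k, rate=2))  # → [-4, -4]
```

An ASPP head would run several such rates in parallel on 2D feature maps and concatenate the outputs.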
Affiliation(s)
- Biao Cai
- Institute of Bioinformatics and Pharmaceutical Engineering, Jiangsu University of Technology, Changzhou 213001, China
- Qing Xu
- Institute of Bioinformatics and Pharmaceutical Engineering, Jiangsu University of Technology, Changzhou 213001, China
- Cheng Yang
- The Third Affiliated Hospital of Soochow University, Changzhou 213000, China
- Yi Lu
- Institute of Bioinformatics and Pharmaceutical Engineering, Jiangsu University of Technology, Changzhou 213001, China
- Cheng Ge
- Key Laboratory of Marine Drugs, Chinese Ministry of Education, School of Medicine and Pharmacy, Ocean University of China, Qingdao 266003, China
- Zhichao Wang
- Institute of Bioinformatics and Pharmaceutical Engineering, Jiangsu University of Technology, Changzhou 213001, China
- Kai Liu
- Institute of Bioinformatics and Pharmaceutical Engineering, Jiangsu University of Technology, Changzhou 213001, China
- Xubin Qiu
- The Third Affiliated Hospital of Soochow University, Changzhou 213000, China
- Shan Chang
- Institute of Bioinformatics and Pharmaceutical Engineering, Jiangsu University of Technology, Changzhou 213001, China
26
Zhang F, Wang Q, Lu N, Chen D, Jiang H, Yang A, Yu Y, Wang Y. Applying a novel two-step deep learning network to improve the automatic delineation of esophagus in non-small cell lung cancer radiotherapy. Front Oncol 2023; 13:1174530. [PMID: 37534258 PMCID: PMC10391539 DOI: 10.3389/fonc.2023.1174530] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2023] [Accepted: 05/22/2023] [Indexed: 08/04/2023] Open
Abstract
Purpose To introduce a model for automatic segmentation of thoracic organs at risk (OARs), especially the esophagus, in non-small cell lung cancer radiotherapy, using a novel two-step deep learning network. Materials and methods CT images of 59 lung cancer patients were enrolled; 39 patients were randomly selected as the training set, 8 patients as the validation set, and 12 patients as the testing set. Automatic segmentation of the six OARs, including the esophagus, was carried out. In addition, two sets of treatment plans were made on the basis of the manually delineated tumor and OARs (Plan1) as well as the manually delineated tumor and the automatically delineated OARs (Plan2). The Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and average surface distance (ASD) of the proposed model were compared with those of U-Net as a benchmark. The two groups of plans were also compared according to dose-volume histogram parameters. Results The DSC, HD95, and ASD of the proposed model were better than those of U-Net, while the two groups of plans were almost the same. The highest mean DSC of the proposed method was 0.94 for the left lung, and the lowest HD95 and ASD were 3.78 mm and 1.16 mm for the trachea, respectively. Moreover, the DSC reached 0.73 for the esophagus. Conclusions The two-step segmentation method can accurately segment the OARs of lung cancer patients. The mean DSC of the esophagus reached preliminary clinical significance (>0.70). Choosing different deep learning networks based on the characteristics of different organs offers a new option for automatic segmentation in radiotherapy.
Affiliation(s)
- Fuli Zhang
- Radiation Oncology Department, The Seventh Medical Center of Chinese People's Liberation Army (PLA) General Hospital, Beijing, China
- Qiusheng Wang
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
- Na Lu
- Radiation Oncology Department, The Seventh Medical Center of Chinese People's Liberation Army (PLA) General Hospital, Beijing, China
- Diandian Chen
- Radiation Oncology Department, The Seventh Medical Center of Chinese People's Liberation Army (PLA) General Hospital, Beijing, China
- Huayong Jiang
- Radiation Oncology Department, The Seventh Medical Center of Chinese People's Liberation Army (PLA) General Hospital, Beijing, China
- Anning Yang
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
- Yanjun Yu
- Radiation Oncology Department, The Seventh Medical Center of Chinese People's Liberation Army (PLA) General Hospital, Beijing, China
- Yadi Wang
- Radiation Oncology Department, The Seventh Medical Center of Chinese People's Liberation Army (PLA) General Hospital, Beijing, China
27
Li YZ, Wang Y, Huang YH, Xiang P, Liu WX, Lai QQ, Gao YY, Xu MS, Guo YF. RSU-Net: U-net based on residual and self-attention mechanism in the segmentation of cardiac magnetic resonance images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 231:107437. [PMID: 36863157 DOI: 10.1016/j.cmpb.2023.107437] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/30/2022] [Revised: 11/20/2022] [Accepted: 02/18/2023] [Indexed: 06/18/2023]
Abstract
BACKGROUND Automated segmentation techniques for cardiac magnetic resonance imaging (MRI) are beneficial for evaluating cardiac functional parameters in clinical diagnosis. However, because cardiac MRI produces images with unclear boundaries and anisotropic resolution, most existing methods still suffer from intra-class and inter-class uncertainty. Moreover, owing to the irregularity of the anatomical shape of the heart and the inhomogeneity of tissue density, the boundaries of its anatomical structures become uncertain and discontinuous. Therefore, fast and accurate segmentation of cardiac tissue remains a challenging problem in medical image processing. METHODOLOGY We collected cardiac MRI data from 195 patients as the training set and from 35 patients at different medical centers as the external validation set. Our research proposes a U-net architecture with residual connections and a self-attention mechanism (Residual Self-Attention U-net, RSU-Net). The network builds on the classic U-net, adopting its U-shaped symmetric encoder-decoder architecture, improves the convolution module, introduces skip connections, and strengthens the network's capacity for feature extraction. To overcome the locality of ordinary convolutions and achieve a global receptive field, a self-attention mechanism is introduced at the bottom of the model. The loss function combines Cross Entropy Loss and Dice Loss to jointly guide network training, resulting in more stable training. RESULTS We employ the Hausdorff distance (HD) and the Dice similarity coefficient (DSC) as metrics for assessing segmentation outcomes. Comparison with the segmentation frameworks of other papers shows that our RSU-Net performs better and can segment the heart accurately. CONCLUSION Our proposed RSU-Net combines the advantages of residual connections and self-attention: residual links facilitate network training, while a bottom self-attention block (BSA Block) aggregates global information. The network achieved good segmentation results on the cardiac segmentation dataset and may facilitate the diagnosis of cardiovascular patients in the future.
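The combined Cross Entropy plus Dice loss mentioned above is commonly implemented as a weighted sum of a pixel-wise cross-entropy term and a soft (differentiable) Dice term. A minimal binary-segmentation sketch (an illustration, not the paper's code; `p` are predicted foreground probabilities, `t` the 0/1 targets, and the equal weighting is an arbitrary choice):

```python
import math

def bce_dice_loss(p, t, w=0.5, eps=1e-7):
    """Weighted sum of binary cross-entropy and soft Dice loss."""
    # Pixel-wise binary cross-entropy
    bce = -sum(
        ti * math.log(pi + eps) + (1 - ti) * math.log(1 - pi + eps)
        for pi, ti in zip(p, t)
    ) / len(p)
    # Soft Dice loss: 1 minus the soft Dice overlap
    inter = sum(pi * ti for pi, ti in zip(p, t))
    dice = 1 - (2 * inter + eps) / (sum(p) + sum(t) + eps)
    return w * bce + (1 - w) * dice

# A confident, correct prediction yields a smaller loss than a poor one
good = bce_dice_loss([0.9, 0.9, 0.1, 0.1], [1, 1, 0, 0])
bad = bce_dice_loss([0.4, 0.5, 0.6, 0.5], [1, 1, 0, 0])
print(good < bad)  # → True
```

The cross-entropy term gives well-behaved per-pixel gradients while the Dice term directly targets the overlap metric, which is why the combination trains more stably than either alone.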
Affiliation(s)
- Yuan-Zhe Li
- Department of CT/MRI, The Second Affiliated Hospital of Fujian Medical University, Quanzhou 362000, China
- Yi Wang
- Department of CT/MRI, The Second Affiliated Hospital of Fujian Medical University, Quanzhou 362000, China
- Yin-Hui Huang
- Department of Neurology, Jinjiang Municipal Hospital, Quanzhou 362000, China
- Ping Xiang
- Department of Radiology, The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Traditional Chinese Medicine), Hangzhou 310000, China
- Wen-Xi Liu
- Department of CT/MRI, The Second Affiliated Hospital of Fujian Medical University, Quanzhou 362000, China
- Qing-Quan Lai
- Department of CT/MRI, The Second Affiliated Hospital of Fujian Medical University, Quanzhou 362000, China
- Yi-Yuan Gao
- Department of Radiology, The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Traditional Chinese Medicine), Hangzhou 310000, China
- Mao-Sheng Xu
- Department of Radiology, The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Traditional Chinese Medicine), Hangzhou 310000, China.
- Yi-Fan Guo
- Department of Radiology, The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Traditional Chinese Medicine), Hangzhou 310000, China.
28
Loÿen E, Dasnoy-Sumell D, Macq B. Patient-specific three-dimensional image reconstruction from a single X-ray projection using a convolutional neural network for on-line radiotherapy applications. Phys Imaging Radiat Oncol 2023; 26:100444. [PMID: 37197152 PMCID: PMC10183662 DOI: 10.1016/j.phro.2023.100444] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2022] [Revised: 04/06/2023] [Accepted: 04/25/2023] [Indexed: 05/19/2023] Open
Abstract
Background and purpose: Radiotherapy is commonly chosen to treat thoracic and abdominal cancers. However, irradiating mobile tumors accurately is extremely complex due to the organs' breathing-related movements. Different methods have been studied and developed to treat mobile tumors properly. The combination of X-ray projection acquisition and implanted markers is used to locate the tumor in two dimensions (2D) but does not provide three-dimensional (3D) information. The aim of this work is to reconstruct a high-quality 3D computed tomography (3D-CT) image based on a single X-ray projection to locate the tumor in 3D without the need for implanted markers. Materials and methods: Nine patients treated for lung or liver cancer with radiotherapy were studied. For each patient, a data augmentation tool was used to create 500 new 3D-CT images from the planning four-dimensional computed tomography (4D-CT). For each 3D-CT, the corresponding digitally reconstructed radiograph was generated, and the 500 2D images were input into a convolutional neural network that then learned to reconstruct the 3D-CT. The Dice score coefficient, normalized root mean squared error and the difference between the ground-truth and predicted 3D-CT images were computed and used as metrics. Results: Averaged across all patients, the metric values were 85.5% and 96.2% for the gross target volume, and 0.04 and 0.45 Hounsfield units (HU), respectively. Conclusions: The proposed method allows reconstruction of a 3D-CT image from a single digitally reconstructed radiograph that could be used in real-time for better tumor localization and improved treatment of mobile tumors without the need for implanted markers.
29
Mövik L, Bäck A, Pettersson N. Impact of delineation errors on the estimated organ at risk dose and of dose errors on the normal tissue complication probability model. Med Phys 2023; 50:1879-1892. [PMID: 36693127 DOI: 10.1002/mp.16235] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2022] [Revised: 12/04/2022] [Accepted: 01/01/2023] [Indexed: 01/25/2023] Open
Abstract
BACKGROUND Normal tissue complication probability (NTCP) models are often based on doses retrieved from delineated volumes. For retrospective dose-response studies focusing on organs that have not been delineated historically, automatic segmentation might be considered. However, automatic segmentation risks generating considerable delineation errors and knowledge regarding how these errors impact the estimated organ dose is important. Furthermore, organ-at-risk (OAR) dose uncertainties cannot be eliminated and might affect the resulting NTCP model. Therefore, it is also of interest to study how OAR dose errors impact the NTCP modeling results. PURPOSE To investigate how random delineation errors of the proximal bronchial tree, heart, and esophagus impact the estimated OAR dose, and to investigate how random errors in the doses used for dose-response modeling affect the estimated NTCPs. METHODS We investigated the impact of random delineation errors on the estimated OAR dose using the treatment plans of 39 patients treated with conventionally fractionated radiation therapy of non-small-cell lung cancer. Study-specific reference structures were defined by manually contouring the proximal bronchial tree, heart and esophagus. For each patient and organ, 120 reshaped structures were created by introducing random shifts and margins to the entire reference structure. The mean and near-maximum dose to the reference and reshaped structures were compared. In a separate investigation, the impact of random dose errors on the NTCP model was studied performing dose-response modeling with study sets containing treatment outcomes and OAR doses with and without introduced errors. Universal patient populations with defined population risks, dose-response relationships and distributions of OAR doses were used as ground truth. From such a universal population, we randomly sampled data sets consisting of OAR dose and treatment outcome into reference populations. 
Study sets of different sizes were created by repeatedly introducing errors to the OAR doses of each reference population. The NTCP models generated with dose errors were compared to the reference NTCP model of the corresponding reference population. RESULTS A total of 14 040 reshaped structures with random delineation errors were created. The delineation errors resulted in systematic mean dose errors of less than 1% of the prescribed dose (PD). Mean dose differences above 15% of PD and near-maximum doses differences above 25% of PD were observed for 211 and 457 reshaped structures, respectively. Introducing random errors to OAR doses used for dose-response modeling resulted in systematic underestimations of the median NTCP. For all investigated scenarios, the median differences in NTCP were within 0.1 percentage points (p.p.) when comparing different study sizes. CONCLUSIONS Introducing random delineation errors to the proximal bronchial tree, heart and esophagus resulted in mean dose and near-maximum dose differences above 15% and 25% of PD, respectively. We did not observe an association between the dose level and the magnitude of the dose errors. For the scenarios investigated in this study, introducing random errors to OAR doses used for dose-response modeling resulted in systematic underestimations of the median NTCP for reference risks higher than the universal population risk. The median NTCP underestimation was similar for different study sizes, all within 0.1 p.p.
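For context on the dose-response modeling in this entry: NTCP models typically fit a sigmoid relating an organ-at-risk dose metric to complication probability. A sketch of the standard logistic NTCP model (the model choice and all parameter values here are illustrative assumptions, not taken from this study):

```python
import math

def ntcp_logistic(dose, d50, gamma50):
    """Logistic NTCP model: complication probability vs. organ dose.

    d50 is the dose giving a 50% complication risk; gamma50 is the
    normalized slope of the curve at d50. Values below are hypothetical.
    """
    return 1.0 / (1.0 + math.exp(4.0 * gamma50 * (1.0 - dose / d50)))

# Illustrative curve: risk rises through 50% at d50
for d in (20, 40, 60):
    print(d, round(ntcp_logistic(d, d50=40, gamma50=1.5), 3))
# → 20 0.047 / 40 0.5 / 60 0.953
```

Random errors in the dose axis effectively flatten this sigmoid, which is consistent with the systematic NTCP underestimation the study reports.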
Affiliation(s)
- Louise Mövik
- Department of Medical Radiation Sciences, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Anna Bäck
- Department of Medical Radiation Sciences, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Therapeutic Radiation Physics, Department of Medical Physics and Biomedical Engineering, Sahlgrenska University Hospital, Gothenburg, Sweden
- Niclas Pettersson
- Department of Medical Radiation Sciences, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Therapeutic Radiation Physics, Department of Medical Physics and Biomedical Engineering, Sahlgrenska University Hospital, Gothenburg, Sweden

30
Ke J, Lv Y, Ma F, Du Y, Xiong S, Wang J, Wang J. Deep learning-based approach for the automatic segmentation of adult and pediatric temporal bone computed tomography images. Quant Imaging Med Surg 2023; 13:1577-1591. [PMID: 36915310 PMCID: PMC10006112 DOI: 10.21037/qims-22-658] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2022] [Accepted: 12/15/2022] [Indexed: 02/25/2023]
Abstract
Background Automatic segmentation of temporal bone computed tomography (CT) images is fundamental to image-guided otologic surgery and the intelligent analysis of CT images in the field of otology. This study was conducted to test a convolutional neural network (CNN) model that can automatically segment almost all temporal bone anatomical structures in adult and pediatric CT images. Methods A dataset comprising 80 annotated CT volumes was collected, of which 40 samples were obtained from adults and 40 from children. Of these, 60 annotated CT volumes (30 from adults and 30 from children) were used to train the model. The remaining 20 annotated CT volumes were employed to determine the model's generalizability for automatic segmentation. Finally, the Dice coefficient (DC) and average symmetric surface distance (ASSD) were utilized as metrics to evaluate the performance of the CNN model. Two independent-sample t-tests were used to compare the test-set results of adults and children. Results In the adult test set, the mean DC values of all structures ranged from 0.714 to 0.912, and the ASSD values were less than 0.24 mm for 11 structures. In the pediatric test set, the mean DC values of all structures ranged from 0.658 to 0.915, and the ASSD values were less than 0.18 mm for 11 structures. There was no statistically significant difference between the adult and pediatric test sets for most temporal bone structures. Conclusions Our CNN model shows excellent automatic segmentation performance and good generalizability for both adult and pediatric temporal bone CT images, which can help to advance otologist education, intelligent imaging diagnosis, surgery simulation, application of augmented reality, and preoperative planning for image-guided otologic surgery.
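The two evaluation metrics used above, the Dice coefficient and the average symmetric surface distance, can be computed directly from binary masks. A minimal sketch follows; the function names and the use of SciPy's morphology and distance-transform routines are my own choices for illustration, not the paper's implementation.

```python
import numpy as np
from scipy import ndimage

def dice_coefficient(a, b):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

def assd(a, b, spacing=1.0):
    """Average symmetric surface distance, in the units of `spacing`."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a & ~ndimage.binary_erosion(a)   # boundary voxels of A
    surf_b = b & ~ndimage.binary_erosion(b)   # boundary voxels of B
    # Distance from every voxel to the nearest surface voxel of each mask.
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    d_ab, d_ba = dist_to_b[surf_a], dist_to_a[surf_b]
    return float((d_ab.sum() + d_ba.sum()) / (d_ab.size + d_ba.size))
```

For anisotropic CT volumes, `spacing` would be the per-axis voxel size so that the surface distances come out in millimetres, matching the ASSD values reported in the abstract.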
Affiliation(s)
- Jia Ke
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, Beijing, China
- Yi Lv
- School of Mechanical Engineering and Automation, Beihang University, Beijing, China; North China Research Institute of Electro-optics, Beijing, China
- Furong Ma
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, Beijing, China
- Yali Du
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, Beijing, China
- Shan Xiong
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, Beijing, China
- Junchen Wang
- School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Jiang Wang
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, Beijing, China; Department of Otorhinolaryngology, First Affiliated Hospital, Nanjing Medical University, Nanjing, China

31
Baroudi H, Brock KK, Cao W, Chen X, Chung C, Court LE, El Basha MD, Farhat M, Gay S, Gronberg MP, Gupta AC, Hernandez S, Huang K, Jaffray DA, Lim R, Marquez B, Nealon K, Netherton TJ, Nguyen CM, Reber B, Rhee DJ, Salazar RM, Shanker MD, Sjogreen C, Woodland M, Yang J, Yu C, Zhao Y. Automated Contouring and Planning in Radiation Therapy: What Is 'Clinically Acceptable'? Diagnostics (Basel) 2023; 13:diagnostics13040667. [PMID: 36832155 PMCID: PMC9955359 DOI: 10.3390/diagnostics13040667] [Citation(s) in RCA: 19] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2022] [Revised: 01/21/2023] [Accepted: 01/30/2023] [Indexed: 02/12/2023] Open
Abstract
Developers and users of artificial-intelligence-based tools for automatic contouring and treatment planning in radiotherapy are expected to assess the clinical acceptability of these tools. However, what is 'clinical acceptability'? Quantitative and qualitative approaches have been used to assess this ill-defined concept, each with its own advantages and limitations. The approach chosen may depend on the goal of the study as well as on available resources. In this paper, we discuss various aspects of 'clinical acceptability' and how they can move us toward a standard for defining the clinical acceptability of new autocontouring and planning tools.
Affiliation(s)
- Hana Baroudi
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Kristy K. Brock
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Department of Imaging Physics, Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Wenhua Cao
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Xinru Chen
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Caroline Chung
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Laurence E. Court
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Mohammad D. El Basha
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Maguy Farhat
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Skylar Gay
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Mary P. Gronberg
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Aashish Chandra Gupta
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Department of Imaging Physics, Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Soleil Hernandez
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Kai Huang
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- David A. Jaffray
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Department of Imaging Physics, Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Rebecca Lim
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Barbara Marquez
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Kelly Nealon
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Tucker J. Netherton
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Callistus M. Nguyen
- Department of Imaging Physics, Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Brandon Reber
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Department of Imaging Physics, Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Dong Joo Rhee
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Ramon M. Salazar
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Mihir D. Shanker
- The University of Queensland, Saint Lucia 4072, Australia
- The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Carlos Sjogreen
- Department of Physics, University of Houston, Houston, TX 77004, USA
- McKell Woodland
- Department of Imaging Physics, Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Department of Computer Science, Rice University, Houston, TX 77005, USA
- Jinzhong Yang
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Cenji Yu
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Yao Zhao
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA

32
Shen J, Gu P, Wang Y, Wang Z. Advances in automatic delineation of target volume and cardiac substructure in breast cancer radiotherapy (Review). Oncol Lett 2023; 25:110. [PMID: 36817059 PMCID: PMC9932716 DOI: 10.3892/ol.2023.13697] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2022] [Accepted: 01/06/2023] [Indexed: 02/05/2023] Open
Abstract
Postoperative adjuvant radiotherapy plays an important role in the treatment of patients with breast cancer. With the continuous development of radiotherapeutic technologies, the requirements for radiotherapeutic accuracy are increasingly high. The accuracy of target volume and organ at risk delineation significantly affects the effect of radiotherapy. Automatic delineation software has been continuously developed for the automatic delineation of target areas and organs at risk. Automatic segmentation based on an atlas and deep learning is a hot topic in current clinical research. Automatic delineation can not only reduce the workload and delineation times, but also establish a uniform delineation standard and reduce inter-observer and intra-observer differences. In patients with breast cancer, especially in patients who undergo left breast radiotherapy, the protection of the heart is particularly important. Treating the whole heart as an organ at risk cannot meet the clinical needs, and it is necessary to limit the dose to specific cardiac substructures. The present review discusses the importance of automatic delineation of target volume and cardiac substructure in radiotherapy for patients with breast cancer.
Affiliation(s)
- Jingjing Shen
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200438, P.R. China
- Peihua Gu
- Department of Oncology and Radiotherapy, Shidong Hospital Affiliated to University of Shanghai for Science and Technology, Shanghai 200438, P.R. China
- Yun Wang
- Department of Oncology and Radiotherapy, Shidong Hospital Affiliated to University of Shanghai for Science and Technology, Shanghai 200438, P.R. China
- Zhongming Wang
- Department of Oncology and Radiotherapy, Shidong Hospital Affiliated to University of Shanghai for Science and Technology, Shanghai 200438, P.R. China; Correspondence to: Dr Zhongming Wang, Department of Oncology and Radiotherapy, Shidong Hospital Affiliated to University of Shanghai for Science and Technology, 999 Shiguang Road, Shanghai 200438, P.R. China

33
Kong S, Huang Z, Deng W, Zhan Y, Lv J, Cui Y. Nystagmus patterns classification framework based on deep learning and optical flow. Comput Biol Med 2023; 153:106473. [PMID: 36621190 DOI: 10.1016/j.compbiomed.2022.106473] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Revised: 11/11/2022] [Accepted: 12/19/2022] [Indexed: 12/31/2022]
Abstract
Benign paroxysmal positional vertigo (BPPV) is the most common peripheral vestibular vertigo disorder, characterized by brief recurrent vertigo with positional nystagmus. Clinically, it is common to recognize the patterns of nystagmus by analyzing infrared nystagmus videos of patients. However, existing approaches cannot effectively recognize different patterns of nystagmus, especially torsional nystagmus. To improve the recognition of different nystagmus patterns, this paper contributes an automatic recognition method for BPPV nystagmus patterns based on deep learning and optical flow to assist doctors in analyzing the types of BPPV. First, we present an adaptive method for eliminating invalid frames caused by eyelid occlusion or blinking in nystagmus videos, and an adaptive method for quickly and efficiently segmenting the iris and pupil area from video frames. Then, we use a deep-learning-based optical flow method to extract nystagmus information. Finally, we propose a nystagmus video classification network (NVCN) to categorize the patterns of nystagmus. We use ConvNeXt to extract eye-movement features and an LSTM to extract temporal features. Experiments conducted on clinically collected datasets of infrared nystagmus videos show that the NVCN model achieves an accuracy of 94.91% and an F1 score of 93.70% on the nystagmus pattern classification task, as well as an accuracy of 97.75% and an F1 score of 97.48% on the torsional nystagmus recognition task. The experimental results show that the proposed framework can effectively recognize different patterns of nystagmus.
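As a toy illustration of how a dense optical-flow field can summarize eye motion, the helper below reduces a flow field to a single dominant-direction label. This is a hypothetical simplification for intuition, not the paper's NVCN pipeline, which feeds per-frame flow features into ConvNeXt and an LSTM rather than collapsing them to one label.

```python
import numpy as np

def dominant_flow_direction(flow):
    """Reduce a dense optical-flow field of shape (H, W, 2), holding per-pixel
    (dx, dy) displacements, to its dominant direction label."""
    dx = float(flow[..., 0].mean())
    dy = float(flow[..., 1].mean())
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

# Example: a field in which every pixel moves one pixel to the right.
flow = np.zeros((4, 4, 2))
flow[..., 0] = 1.0
```

Torsional nystagmus is precisely the case such a mean-displacement summary misses (rotation about the gaze axis averages to near-zero translation), which motivates learning features from the full flow field instead.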
Affiliation(s)
- Sheng Kong
- School of Computer Science and Technology, Guangdong University of Technology, China
- Zheming Huang
- Department of Otolaryngology-Head and Neck Surgery, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, China
- Weike Deng
- School of Computer Science and Technology, Guangdong University of Technology, China
- Yinwei Zhan
- School of Computer Science and Technology, Guangdong University of Technology, China
- Jujian Lv
- School of Computer Science and Technology, Guangdong Polytechnic Normal University, China
- Yong Cui
- Department of Otolaryngology-Head and Neck Surgery, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, China; The Second School of Clinical Medicine, South Medical University, China; School of Medicine, South China University of Technology, China

34
Osuala R, Kushibar K, Garrucho L, Linardos A, Szafranowska Z, Klein S, Glocker B, Diaz O, Lekadir K. Data synthesis and adversarial networks: A review and meta-analysis in cancer imaging. Med Image Anal 2023; 84:102704. [PMID: 36473414 DOI: 10.1016/j.media.2022.102704] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2021] [Revised: 11/02/2022] [Accepted: 11/21/2022] [Indexed: 11/26/2022]
Abstract
Despite technological and medical advances, the detection, interpretation, and treatment of cancer based on imaging data continue to pose significant challenges. These include inter-observer variability, class imbalance, dataset shifts, inter- and intra-tumour heterogeneity, malignancy determination, and treatment effect uncertainty. Given the recent advancements in image synthesis, Generative Adversarial Networks (GANs), and adversarial training, we assess the potential of these technologies to address a number of key challenges of cancer imaging. We categorise these challenges into (a) data scarcity and imbalance, (b) data access and privacy, (c) data annotation and segmentation, (d) cancer detection and diagnosis, and (e) tumour profiling, treatment planning and monitoring. Based on our analysis of 164 publications that apply adversarial training techniques in the context of cancer imaging, we highlight multiple underexplored solutions with research potential. We further contribute the Synthesis Study Trustworthiness Test (SynTRUST), a meta-analysis framework for assessing the validation rigour of medical image synthesis studies. SynTRUST is based on 26 concrete measures of thoroughness, reproducibility, usefulness, scalability, and tenability. Based on SynTRUST, we analyse 16 of the most promising cancer imaging challenge solutions and observe a high validation rigour in general, but also several desirable improvements. With this work, we strive to bridge the gap between the needs of the clinical cancer imaging community and the current and prospective research on data synthesis and adversarial networks in the artificial intelligence community.
Affiliation(s)
- Richard Osuala
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain.
- Kaisar Kushibar
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Lidia Garrucho
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Akis Linardos
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Zuzanna Szafranowska
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Stefan Klein
- Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Ben Glocker
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, UK
- Oliver Diaz
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Karim Lekadir
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain

35
Lai SL, Chen CS, Lin BR, Chang RF. Intraoperative Detection of Surgical Gauze Using Deep Convolutional Neural Network. Ann Biomed Eng 2023; 51:352-362. [PMID: 35972601 DOI: 10.1007/s10439-022-03033-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2022] [Accepted: 07/19/2022] [Indexed: 01/25/2023]
Abstract
During laparoscopic surgery, surgical gauze is usually inserted into the body cavity to help achieve hemostasis. Retention of surgical gauze in the body cavity may necessitate reoperation and increase surgical risk. Using deep learning technology, this study aimed to propose a neural network model that detects gauze in surgical video and records its presence. The model was trained on the training set using YOLOv5x6 (You Only Look Once), then applied to the test set. Positive predictive value (PPV), sensitivity, and mean average precision (mAP) were calculated. Furthermore, a timeline of gauze presence in the video was drawn by both the model and human annotation to evaluate accuracy. After the model was well trained, the PPV, sensitivity, and mAP in the test set were 0.920, 0.828, and 0.881, respectively. The inference time was 11.3 ms per image. The average accuracy of the model with an added marking and filtering process was 0.899. In conclusion, surgical gauze can be successfully detected in surgical video using deep learning. Our model provides fast detection of surgical gauze, enabling real-time gauze tracing in laparoscopic surgery that may help surgeons recall the location of missing gauze.
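The reported detection metrics are straightforward to compute once predicted boxes have been matched to ground-truth boxes. A small sketch follows; the helper names and the intersection-over-union matching criterion are illustrative assumptions about how such matching is commonly done, not details taken from the paper.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def detection_metrics(tp, fp, fn):
    """PPV (precision) and sensitivity (recall) from detection counts.
    A prediction typically counts as a true positive when its IoU with a
    ground-truth box exceeds a chosen threshold (e.g. 0.5)."""
    ppv = tp / (tp + fp) if (tp + fp) else 0.0
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    return ppv, sensitivity
```

mAP additionally averages precision over recall levels (and classes) as the confidence threshold sweeps, which is why it is reported separately from single-operating-point PPV and sensitivity.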
Affiliation(s)
- Shuo-Lun Lai
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, No.1, Sec.4, Roosevelt Road, Taipei, 10617, Taiwan; Division of Colorectal Surgery, Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
- Chi-Sheng Chen
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, No.1, Sec.4, Roosevelt Road, Taipei, 10617, Taiwan
- Been-Ren Lin
- Division of Colorectal Surgery, Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
- Ruey-Feng Chang
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, No.1, Sec.4, Roosevelt Road, Taipei, 10617, Taiwan; Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan

36
Liu TJ, Wang H, Christian M, Chang CW, Lai F, Tai HC. Automatic segmentation and measurement of pressure injuries using deep learning models and a LiDAR camera. Sci Rep 2023; 13:680. [PMID: 36639395 PMCID: PMC9839689 DOI: 10.1038/s41598-022-26812-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2022] [Accepted: 12/20/2022] [Indexed: 01/15/2023] Open
Abstract
Pressure injuries are a common problem resulting in poor prognosis, long-term hospitalization, and increased medical costs in an aging society. This study developed a method for automatic segmentation and area measurement of pressure injuries using deep learning models and a light detection and ranging (LiDAR) camera. We selected 528 high-quality photographs of patients with pressure injuries taken at National Taiwan University Hospital from 2016 to 2020. The margins of the pressure injuries were labeled by three board-certified plastic surgeons. Mask R-CNN and U-Net segmentation models were trained on the labeled photographs. After the segmentation model was constructed, we performed automatic wound area measurement using a LiDAR camera. We conducted a prospective clinical study to test the accuracy of this system. For automatic wound segmentation, the performance of U-Net (Dice coefficient (DC): 0.8448) was better than that of Mask R-CNN (DC: 0.5006) in the external validation. In the prospective clinical study, we incorporated U-Net into our automatic wound area measurement system and achieved a mean relative error of 26.2% compared with the traditional manual method. Our segmentation model, U-Net, and the area measurement system achieved acceptable accuracy, making them applicable in clinical circumstances.
Affiliation(s)
- Tom J. Liu
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan; Division of Plastic Surgery, Department of Surgery, Fu Jen Catholic University Hospital, Fu Jen Catholic University, New Taipei City, Taiwan
- Hanwei Wang
- Department of Electrical Engineering, National Taiwan University, Taipei, Taiwan
- Mesakh Christian
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan
- Che-Wei Chang
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan; Division of Plastic Reconstructive and Aesthetic Surgery, Department of Surgery, Far Eastern Memorial Hospital, New Taipei City, Taiwan
- Feipei Lai
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan
- Hao-Chih Tai
- National Taiwan University Hospital and College of Medicine, National Taiwan University, Taipei, Taiwan

37
Ahmadi M, Sharifi A, Jafarian Fard M, Soleimani N. Detection of brain lesion location in MRI images using convolutional neural network and robust PCA. Int J Neurosci 2023; 133:55-66. [PMID: 33517817 DOI: 10.1080/00207454.2021.1883602] [Citation(s) in RCA: 39] [Impact Index Per Article: 39.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/31/2023]
Abstract
Purpose and aim: Detection of brain tumors plays a critical role in the treatment of patients. Before any treatment, tumor segmentation is crucial to protect healthy tissues during treatment and to destroy tumor cells. Tumor segmentation involves the detection, precise identification, and separation of tumor tissues. In this paper, we provide a deep learning method for the segmentation of brain tumors. Material and methods: In this article, we used a convolutional neural network (CNN) to segment tumors in seven types of brain disease: glioma, meningioma, Alzheimer's, Alzheimer's plus, Pick's disease, sarcoma, and Huntington's disease. First, we used robust principal component analysis, a feature-reduction-based method, to find tumor locations in a dataset from Harvard Medical School. Then we present a CNN architecture to detect brain tumors. Results: Results are depicted based on the probability of tumor location in magnetic resonance images. They show that the presented method provides high accuracy (96%), sensitivity (99.9%), and Dice index (91%) compared with other investigations. Conclusion: The provided unsupervised method for tumor clustering and the proposed supervised architecture are potential methods for medical use.
Affiliation(s)
- Mohsen Ahmadi
- Department of Industrial Engineering, Urmia University of Technology, Urmia, Iran
- Abbas Sharifi
- Department of Mechanical Engineering, Urmia University of Technology, Urmia, Iran
- Mahta Jafarian Fard
- Department of Electrical Engineering, Islamic Azad University Science and Research, Razavi Khorasan, Iran
- Nastaran Soleimani
- Department of Electronics and Telecommunications (DET), Politecnico di Torino, Turin, Italy

38
Sundell VM, Mäkelä T, Vitikainen AM, Kaasalainen T. Convolutional neural network-based phantom image scoring for mammography quality control. BMC Med Imaging 2022; 22:216. [PMID: 36476319 PMCID: PMC9727908 DOI: 10.1186/s12880-022-00944-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2022] [Accepted: 11/28/2022] [Indexed: 12/13/2022] Open
Abstract
BACKGROUND Visual evaluation of phantom images is an important but time-consuming part of mammography quality control (QC). Consistent scoring of phantom images over a device's lifetime is highly desirable. Recently, convolutional neural networks (CNNs) have been applied to a wide range of image classification problems with high accuracy. The purpose of this study was to automate the mammography QC phantom-scoring task by training CNN models to mimic a human reviewer. METHODS Eight CNN variations consisting of three to ten convolutional layers were trained to detect targets (fibres, microcalcifications, and masses) in American College of Radiology (ACR) accreditation phantom images, and the results were compared with human scoring. Regular and artificially degraded/improved QC phantom images from eight mammography devices were visually evaluated by one reviewer. These images were used to train the CNN models. A separate test set consisted of daily QC images from the eight devices and separately acquired images with varying dose levels. These were scored by four reviewers and considered the ground truth for CNN performance testing. RESULTS Although the hyper-parameter search space was limited, an optimal network depth was identified, after which additional layers resulted in decreased accuracy. The highest scoring accuracy (95%) was achieved with the CNN consisting of six convolutional layers. The highest deviation between the CNN and the reviewers was found at the lowest dose levels. No significant difference emerged between the visual reviews and CNN results except in the case of the smallest masses. CONCLUSION A CNN-based automatic mammography QC phantom-scoring system can score phantom images in good agreement with human reviewers, and can therefore be of benefit in mammography QC.
Affiliation(s)
- Veli-Matti Sundell
- Department of Physics, University of Helsinki, P.O. Box 64, 00014 Helsinki, Finland
- HUS Diagnostic Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340, Haartmaninkatu 4, 00290 Helsinki, Finland
- Teemu Mäkelä
- Department of Physics, University of Helsinki, P.O. Box 64, 00014 Helsinki, Finland
- HUS Diagnostic Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340, Haartmaninkatu 4, 00290 Helsinki, Finland
- Anne-Mari Vitikainen
- HUS Diagnostic Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340, Haartmaninkatu 4, 00290 Helsinki, Finland
- Touko Kaasalainen
- HUS Diagnostic Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340, Haartmaninkatu 4, 00290 Helsinki, Finland
39
Huang L, Zhu E, Chen L, Wang Z, Chai S, Zhang B. A transformer-based generative adversarial network for brain tumor segmentation. Front Neurosci 2022; 16:1054948. [PMID: 36532274 PMCID: PMC9750177 DOI: 10.3389/fnins.2022.1054948] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2022] [Accepted: 11/07/2022] [Indexed: 09/19/2023] Open
Abstract
Brain tumor segmentation remains a challenging medical image segmentation task. With the application of transformers to various computer vision tasks, transformer blocks have shown the capability of learning long-distance dependencies in global space, which is complementary to CNNs. In this paper, we propose a novel transformer-based generative adversarial network to automatically segment brain tumors from multi-modality MRI. Our architecture consists of a generator and a discriminator, trained in a min-max game. The generator is based on a typical "U-shaped" encoder-decoder architecture, whose bottom layer is composed of transformer blocks with ResNet, and is trained with deep supervision. The discriminator is a CNN-based network with a multi-scale L1 loss, which has proved effective for medical semantic image segmentation. To validate the effectiveness of our method, we conducted extensive experiments on the BRATS2015 dataset, achieving comparable or better performance than previous state-of-the-art methods. Experimental results on additional datasets, including BRATS2018 and BRATS2020, show that our technique generalizes successfully.
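The min-max game described in this abstract can be made concrete with the standard binary cross-entropy adversarial losses. The sketch below is a minimal NumPy illustration, not the paper's implementation; the discriminator outputs `d_real` and `d_fake` are made-up numbers standing in for a trained network's scores.

```python
import numpy as np

def bce(pred: np.ndarray, target: np.ndarray) -> float:
    """Binary cross-entropy, the standard GAN adversarial loss."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

# Hypothetical discriminator outputs on real label maps and generated masks.
d_real = np.array([0.9, 0.8])   # D should push these toward 1
d_fake = np.array([0.2, 0.1])   # D should push these toward 0

# Discriminator step: classify real as 1, fake as 0.
d_loss = bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))
# Generator step: try to fool D into scoring fakes as real.
g_loss = bce(d_fake, np.ones_like(d_fake))
```

At this (imagined) point in training the generator's loss is large because the discriminator confidently rejects the fakes; training alternates the two updates until neither side can improve.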
Affiliation(s)
- Liqun Huang
- The School of Automation, Beijing Institute of Technology, Beijing, China
- Enjun Zhu
- Department of Cardiac Surgery, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Long Chen
- The School of Automation, Beijing Institute of Technology, Beijing, China
- Zhaoyang Wang
- The School of Automation, Beijing Institute of Technology, Beijing, China
- Senchun Chai
- The School of Automation, Beijing Institute of Technology, Beijing, China
- Baihai Zhang
- The School of Automation, Beijing Institute of Technology, Beijing, China
40
Shi F, Hu W, Wu J, Han M, Wang J, Zhang W, Zhou Q, Zhou J, Wei Y, Shao Y, Chen Y, Yu Y, Cao X, Zhan Y, Zhou XS, Gao Y, Shen D. Deep learning empowered volume delineation of whole-body organs-at-risk for accelerated radiotherapy. Nat Commun 2022; 13:6566. [PMID: 36323677 PMCID: PMC9630370 DOI: 10.1038/s41467-022-34257-x] [Citation(s) in RCA: 21] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2022] [Accepted: 10/19/2022] [Indexed: 11/05/2022] Open
Abstract
In radiotherapy for cancer patients, an indispensable step is to delineate organs-at-risk (OARs) and tumors. It is also the most time-consuming step, as manual delineation by radiation oncologists is always required. Herein, we propose a lightweight deep learning framework for radiotherapy treatment planning (RTP), named RTP-Net, to provide automatic, rapid, and precise initialization of whole-body OARs and tumors. Briefly, the framework implements a cascaded coarse-to-fine segmentation, with adaptive modules for both small and large organs, and attention mechanisms for organs and boundaries. Our experiments show three merits: 1) extensive evaluation on 67 delineation tasks over a large-scale dataset of 28,581 cases; 2) comparable or superior accuracy, with an average Dice of 0.95; 3) near real-time delineation (<2 s) in most tasks. This framework could be utilized to accelerate the contouring process in the All-in-One radiotherapy scheme, and thus greatly shorten the turnaround time of patients.
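The Dice coefficient quoted here (and used as the figure of merit throughout this list) is simple to compute from two binary masks; a minimal NumPy sketch on a toy 4x4 pair:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return float(2.0 * intersection / denom) if denom else 1.0

# Toy masks: 4 predicted voxels, 4 true voxels, 2 overlapping.
pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:3] = 1
true = np.zeros((4, 4), dtype=int); true[1:3, 0:2] = 1
score = dice_coefficient(pred, true)  # 2*2 / (4+4) = 0.5
```

A Dice of 1.0 means perfect overlap; values above roughly 0.8-0.9 are typically considered clinically acceptable for large organs.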
Affiliation(s)
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Weigang Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Jiaojiao Wu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Miaofei Han
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Jiazhou Wang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Wei Zhang
- Radiotherapy Business Unit, Shanghai United Imaging Healthcare Co., Ltd., Shanghai, China
- Qing Zhou
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Jingjie Zhou
- Radiotherapy Business Unit, Shanghai United Imaging Healthcare Co., Ltd., Shanghai, China
- Ying Wei
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Ying Shao
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Yanbo Chen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Yue Yu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Xiaohuan Cao
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Yiqiang Zhan
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Xiang Sean Zhou
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Yaozong Gao
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Dinggang Shen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Shanghai Clinical Research and Trial Center, Shanghai, China
41
Draelos RL, Carin L. Explainable multiple abnormality classification of chest CT volumes. Artif Intell Med 2022; 132:102372. [DOI: 10.1016/j.artmed.2022.102372] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2021] [Revised: 06/09/2022] [Accepted: 07/28/2022] [Indexed: 12/20/2022]
42
Multi-Organ Segmentation Using a Low-Resource Architecture. INFORMATION 2022. [DOI: 10.3390/info13100472] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Since their inception, deep-learning architectures have shown promising results for automatic segmentation. However, despite the technical advances introduced by fully convolutional networks, generative adversarial networks or recurrent neural networks, and their usage in hybrid architectures, automatic segmentation in the medical field is still not used at scale. One main reason is data scarcity and quality, which leads to a lack of annotated data that hinders the generalization of models. The second main issue is the difficulty of training deep models: the process uses large amounts of GPU memory (which might exceed current hardware limitations) and requires long training times. In this article, we show that despite these issues, good results can be obtained even with a lower-resource architecture, thus opening the way for more researchers to employ deep neural networks. To achieve multi-organ segmentation, we employ modern pre-processing techniques, a smart model design, and fusion between several models trained on the same dataset. Our architecture is compared against state-of-the-art methods on a publicly available challenge, and the notable results prove the effectiveness of our method.
43
Detection Method of Athlete Joint Injury Based on Deep Learning Model. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:8165580. [PMID: 36092783 PMCID: PMC9462975 DOI: 10.1155/2022/8165580] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/01/2022] [Revised: 08/09/2022] [Accepted: 08/16/2022] [Indexed: 11/17/2022]
Abstract
Research on accurate and intelligent segmentation of knee joint MRI images is of great significance for reducing the workload of clinical doctors and nurses. To address the problem that knee joint MRI segmentation models need large numbers of high-quality labeled images, imposing an excessive labeling workload, a semisupervised segmentation network model based on 3D scSE-UNet is proposed. The model adopts a self-training semisupervised learning framework and adds a cSE-block+ module to the 3D UNet model. This module enhances the effective features of the feature map in both the spatial and channel dimensions, while suppressing irrelevant features and preserving image edge information more completely. To address the rough pseudolabel edges produced by model segmentation, a fully connected conditional random field is added to refine pseudolabel edges during training. The effectiveness of the model is verified on the open-source MRNet and OAI datasets. The results show that the proposed model can match the segmentation performance of fully supervised learning using only a small number of labeled images, effectively reducing the dependence of knee joint MRI segmentation on expert-labeled data.
44
Cheng T, Zhang Z, Yang X, Lu S, Qian D, Wang X, Zhu H. Automatic delineation of organ at risk in cervical cancer radiotherapy based on ensemble learning. ZHONG NAN DA XUE XUE BAO. YI XUE BAN = JOURNAL OF CENTRAL SOUTH UNIVERSITY. MEDICAL SCIENCES 2022; 47:1058-1064. [PMID: 36097773 PMCID: PMC10950118 DOI: 10.11817/j.issn.1672-7347.2022.220101] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Subscribe] [Scholar Register] [Received: 02/24/2022] [Indexed: 06/15/2023]
Abstract
OBJECTIVES The automatic delineation of organs at risk (OARs) can help doctors make radiotherapy plans efficiently and accurately, and effectively improve the accuracy and therapeutic effect of radiotherapy. This study therefore proposes an automatic delineation method for OARs in cervical cancer covering both after-loading and external irradiation scenarios. At the same time, the structural similarity of OARs between different scenes is exploited to improve the segmentation accuracy of OARs that are difficult to segment. METHODS Our model adopts an ensemble learning strategy. Models pre-trained on the after-loading and external irradiation data were introduced into the ensemble as feature extraction modules. Data from the different scenes were trained alternately, introducing both the personalized features of the OARs within each model and the common features of the OARs across scenes. Computed tomography (CT) images for 84 cases of after-loading and 46 cases of external irradiation were collected as the training data. Five-fold cross-validation was adopted to split training and test sets, with the five-fold average Dice similarity coefficient (DSC) serving as the figure of merit for evaluating the segmentation model. RESULTS The DSCs of the OARs (the rectum and bladder in the after-loading images and the bladder in the external irradiation images) were higher than 0.7. Compared with delineating OARs using an independent residual U-net (Res-Unet) model, the proposed model effectively improves the segmentation of difficult OARs (the sigmoid in the after-loading CT images and the rectum in the external irradiation images), with DSCs increased by more than 3%.
CONCLUSIONS Compared to dedicated models, our ensemble model achieves comparable results in segmenting OARs for different treatment options in cervical cancer radiotherapy, which may shorten the time doctors spend contouring OARs and improve their work efficiency.
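The five-fold cross-validation used here (and in several other studies in this list) can be sketched in a few lines of NumPy; the sample count of 84 matches the after-loading cases mentioned above, but the shuffling seed and fold logic are illustrative, not the study's code.

```python
import numpy as np

def five_fold_splits(n_samples: int, seed: int = 0):
    """Return five (train_idx, test_idx) pairs; every sample appears in
    exactly one test fold and the other four folds form its training set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, 5)  # sizes differ by at most one
    return [
        (np.concatenate([folds[j] for j in range(5) if j != k]), folds[k])
        for k in range(5)
    ]

# 84 after-loading cases, as in the study.
splits = five_fold_splits(84)
```

The reported metric is then the mean test-fold DSC across the five splits, which uses every case for evaluation exactly once.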
Affiliation(s)
- Tingting Cheng
- Department of Oncology, Xiangya Hospital, Central South University, Changsha 410008
- National Clinical Research Center for Geriatric Diseases, Xiangya Hospital, Changsha 410008
- Zijian Zhang
- Department of Oncology, Xiangya Hospital, Central South University, Changsha 410008
- National Clinical Research Center for Geriatric Diseases, Xiangya Hospital, Changsha 410008
- Xin Yang
- Guangzhou Perception Vision Medical Technologies Limited Company, Guangzhou 510530
- Shanfu Lu
- Guangzhou Perception Vision Medical Technologies Limited Company, Guangzhou 510530
- Dongdong Qian
- Guangzhou Perception Vision Medical Technologies Limited Company, Guangzhou 510530
- Xianliang Wang
- Department of Radiotherapy Center, Sichuan Cancer Hospital, Chengdu 610041, China
- Hong Zhu
- Department of Oncology, Xiangya Hospital, Central South University, Changsha 410008
- National Clinical Research Center for Geriatric Diseases, Xiangya Hospital, Changsha 410008
45
Mancosu P, Lambri N, Castiglioni I, Dei D, Iori M, Loiacono D, Russo S, Talamonti C, Villaggi E, Scorsetti M, Avanzo M. Applications of artificial intelligence in stereotactic body radiation therapy. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac7e18] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2022] [Accepted: 07/04/2022] [Indexed: 11/12/2022]
Abstract
This topical review focuses on the applications of artificial intelligence (AI) tools to stereotactic body radiation therapy (SBRT). The high dose per fraction and the limited number of fractions in SBRT require stricter accuracy than standard radiation therapy. The intent of this review is to describe the development, and evaluate the possible benefit, of integrating AI tools into the radiation oncology workflow for SBRT automation. The selected papers were subdivided into four sections, representative of the whole radiotherapy process: ‘AI in SBRT target and organs at risk contouring’, ‘AI in SBRT planning’, ‘AI during the SBRT delivery’, and ‘AI for outcome prediction after SBRT’. Each section summarises the challenges, as well as the limits and needs for improvement, to achieve better integration of AI tools in the clinical workflow.
46
Elnabawy RH, Abdennadher S, Hellwich O, Eldawlatly S. PVGAN: A generative adversarial network for object simplification in prosthetic vision. J Neural Eng 2022; 19. [PMID: 35981530 DOI: 10.1088/1741-2552/ac8acf] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Accepted: 08/18/2022] [Indexed: 11/11/2022]
Abstract
OBJECTIVE By means of electrical stimulation of the visual system, visual prostheses provide a promising solution for blind patients through partial restoration of their vision. Despite the great success achieved so far in this field, the limited resolution of the vision perceived using these devices hinders the ability of visual prosthesis users to correctly recognize viewed objects. Accordingly, we propose a deep learning approach based on Generative Adversarial Networks (GANs), termed PVGAN, to enhance object recognition for implanted patients by representing objects in the field of view with a corresponding simplified clip art version. APPROACH To assess performance, an axon map model was used to simulate prosthetic vision in experiments involving normally-sighted participants. Four types of image representation were examined. The first and second types comprised presenting phosphene simulation of real images containing the actual high-resolution object, and presenting phosphene simulation of the real image followed by the clip art image, respectively. The other two types evaluated performance in the case of electrode dropout: the third comprised presenting phosphene simulation of clip art images without electrode dropout, while the fourth involved clip art images with electrode dropout. MAIN RESULTS Performance was measured through three evaluation metrics: the accuracy of the participants in recognizing the objects, the time taken to correctly recognize an object, and the participants' confidence level during recognition. Results demonstrate that representing the objects using clip art images generated by the PVGAN model significantly enhances the speed and confidence of the subjects in recognizing the objects.
SIGNIFICANCE These results demonstrate the utility of using GANs in enhancing the quality of images perceived using prosthetic vision.
Affiliation(s)
- Reham H Elnabawy
- Digital Media Engineering and Technology, German University in Cairo, 5th Settlement, New Cairo, Cairo 11835, Egypt
- Slim Abdennadher
- Computer Science and Engineering, German University in Cairo, 5th Settlement, New Cairo, Cairo 11835, Egypt
- Olaf Hellwich
- Department of Computer Vision & Remote Sensing, Technische Universität Berlin, MAR 6-5, Marchstr. 23, 10587 Berlin, Germany
- Seif Eldawlatly
- Computer and Systems Engineering, Faculty of Engineering, Ain Shams University, 1 El-Sarayat St., Cairo 11517, Egypt
47
Dourthe B, Shaikh N, Pai S A, Fels S, Brown SHM, Wilson DR, Street J, Oxland TR. Automated Segmentation of Spinal Muscles From Upright Open MRI Using a Multiscale Pyramid 2D Convolutional Neural Network. Spine (Phila Pa 1976) 2022; 47:1179-1186. [PMID: 34919072 DOI: 10.1097/brs.0000000000004308] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Accepted: 11/29/2021] [Indexed: 02/01/2023]
Abstract
STUDY DESIGN Randomized trial. OBJECTIVE To implement an algorithm enabling the automated segmentation of spinal muscles from open magnetic resonance images in healthy volunteers and patients with adult spinal deformity (ASD). SUMMARY OF BACKGROUND DATA Understanding spinal muscle anatomy is critical to diagnosing and treating spinal deformity. Muscle boundaries can be extrapolated from medical images using segmentation, which is usually done manually by clinical experts and remains complicated and time-consuming. METHODS Three groups were examined: two healthy volunteer groups (N = 6 for each group) and one ASD group (N = 8 patients) were imaged at the lumbar and thoracic regions of the spine in an upright open magnetic resonance imaging scanner while maintaining different postures (various seated, standing, and supine). For each group and region, a selection of regions of interest (ROIs) was manually segmented. A multiscale pyramid two-dimensional convolutional neural network was implemented to automatically segment all defined ROIs. A five-fold cross-validation method was applied; distinct models were trained for each resulting set and group, and evaluated using Dice coefficients calculated between the model output and the manually segmented target. RESULTS Good to excellent results were found across all ROIs for the ASD (Dice coefficient > 0.76) and healthy (Dice coefficient > 0.86) groups. CONCLUSION This study represents a fundamental step toward the development of an automated spinal muscle properties extraction pipeline, which will ultimately allow clinicians easier access to patient-specific simulations, diagnosis, and treatment.
Affiliation(s)
- Benjamin Dourthe
- ICORD, Blusson Spinal Cord Centre, University of British Columbia, Vancouver, BC, Canada
- Department of Orthopaedics, University of British Columbia, Vancouver, BC, Canada
- Noor Shaikh
- Department of Orthopaedics, University of British Columbia, Vancouver, BC, Canada
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Department of Mechanical Engineering, University of British Columbia, Vancouver, BC, Canada
- Anoosha Pai S
- Department of Orthopaedics, University of British Columbia, Vancouver, BC, Canada
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Sidney Fels
- Electrical and Computer Engineering Department, University of British Columbia, Vancouver, BC, Canada
- Stephen H M Brown
- Department of Human Health and Nutritional Sciences, University of Guelph, Guelph, ON, Canada
- David R Wilson
- ICORD, Blusson Spinal Cord Centre, University of British Columbia, Vancouver, BC, Canada
- Department of Orthopaedics, University of British Columbia, Vancouver, BC, Canada
- Centre for Hip Health and Mobility, University of British Columbia, Vancouver, BC, Canada
- John Street
- ICORD, Blusson Spinal Cord Centre, University of British Columbia, Vancouver, BC, Canada
- Department of Orthopaedics, University of British Columbia, Vancouver, BC, Canada
- Thomas R Oxland
- ICORD, Blusson Spinal Cord Centre, University of British Columbia, Vancouver, BC, Canada
- Department of Orthopaedics, University of British Columbia, Vancouver, BC, Canada
- Department of Mechanical Engineering, University of British Columbia, Vancouver, BC, Canada
48
Im JH, Lee IJ, Choi Y, Sung J, Ha JS, Lee H. Impact of Denoising on Deep-Learning-Based Automatic Segmentation Framework for Breast Cancer Radiotherapy Planning. Cancers (Basel) 2022; 14:cancers14153581. [PMID: 35892839 PMCID: PMC9332287 DOI: 10.3390/cancers14153581] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2022] [Revised: 07/08/2022] [Accepted: 07/20/2022] [Indexed: 02/04/2023] Open
Abstract
Objective: This study aimed to investigate the segmentation accuracy of organs at risk (OARs) when denoised computed tomography (CT) images are used as input data for a deep-learning-based auto-segmentation framework. Methods: We used non-contrast-enhanced planning CT scans from 40 patients with breast cancer. The heart, lungs, esophagus, spinal cord, and liver were manually delineated by two experienced radiation oncologists in a double-blind manner. The denoised CT images were used as input data for the AccuContour™ segmentation software to increase the signal difference between structures of interest and unwanted noise in non-contrast CT. The accuracy of the segmentation was assessed using the Dice similarity coefficient (DSC), and the results were compared with those of conventional deep-learning-based auto-segmentation without denoising. Results: The average DSC outcomes were higher than 0.80 for all OARs except the esophagus. AccuContour™-based and denoising-based auto-segmentation demonstrated comparable performance for the lungs and spinal cord but showed limited performance for the esophagus. For the liver, the improvement from denoising-based auto-segmentation was minimal but statistically significant, with a better DSC than AccuContour™-based auto-segmentation (p < 0.05). Conclusions: Denoising-based auto-segmentation demonstrated satisfactory performance in automatic liver segmentation from non-contrast-enhanced CT scans. Further external validation studies with larger cohorts are needed to verify the usefulness of denoising-based auto-segmentation.
Affiliation(s)
- Jung Ho Im
- CHA Bundang Medical Center, Department of Radiation Oncology, CHA University School of Medicine, Seongnam 13496, Korea
- Ik Jae Lee
- Department of Radiation Oncology, Yonsei University College of Medicine, Seoul 03722, Korea
- Yeonho Choi
- Department of Radiation Oncology, Gangnam Severance Hospital, Seoul 06273, Korea
- Jiwon Sung
- Department of Radiation Oncology, Yonsei University College of Medicine, Seoul 03722, Korea
- Jin Sook Ha
- Department of Radiation Oncology, Gangnam Severance Hospital, Seoul 06273, Korea
- Ho Lee
- Department of Radiation Oncology, Yonsei University College of Medicine, Seoul 03722, Korea
- Correspondence: ; Tel.: +82-2-2228-8109; Fax: +82-2-2227-7823
49
Camara JA, Pujol A, Jimenez JJ, Donate J, Ferrer M, Vande Velde G. Lung Volume Calculation in Preclinical MicroCT: A Fast Geometrical Approach. J Imaging 2022; 8:jimaging8080204. [PMID: 35893082 PMCID: PMC9330811 DOI: 10.3390/jimaging8080204] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2022] [Revised: 07/08/2022] [Accepted: 07/18/2022] [Indexed: 12/04/2022] Open
Abstract
In this study, we present a time-efficient protocol for thoracic volume calculation as a proxy for total lung volume, hypothesizing that lung volume can be calculated indirectly from the thoracic volume. We compared the measured thoracic volume with manually segmented and automatically thresholded lung volumes, with manual segmentation as the gold standard. A linear regression formula was obtained and used to calculate the theoretical lung volume, which was then compared with the gold-standard volumes. In healthy animals, the average thoracic volume was 887.45 mm3, the manually delineated lung volume 554.33 mm3, and the thresholded aerated lung volume 495.38 mm3; the theoretical lung volume was 554.30 mm3. Finally, the protocol was applied to three animal models of lung pathology (lung metastasis, transgenic primary lung tumor, and fungal infection). In confirmed pathologic animals, the thoracic volumes were 893.20, 860.12 and 1027.28 mm3; the manually delineated volumes were 640.58, 503.91 and 882.42 mm3; and the thresholded lung volumes were 315.92, 408.72 and 236 mm3, respectively. The theoretical lung volumes were 635.28, 524.30 and 863.10.42 mm3. No significant differences were observed between volumes, confirming the potential use of this protocol for lung volume calculation in pathologic models.
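The "theoretical lung volume" step amounts to a least-squares line fit of manual lung volume against thoracic volume, then evaluating that line on new thoracic measurements. The sketch below uses hypothetical paired measurements (the study's per-animal raw data are not reproduced in this abstract); only the shape of the computation is from the source.

```python
import numpy as np

# Hypothetical paired measurements in mm^3, chosen to resemble the
# reported group averages (thoracic ~887, lung ~554).
thoracic = np.array([850.0, 870.0, 880.0, 900.0, 920.0])
lung_manual = np.array([528.0, 542.0, 551.0, 563.0, 578.0])

# Least-squares line: lung ≈ slope * thoracic + intercept
slope, intercept = np.polyfit(thoracic, lung_manual, 1)

def theoretical_lung_volume(thoracic_mm3: float) -> float:
    """Indirect lung volume estimated from a measured thoracic volume."""
    return slope * thoracic_mm3 + intercept

# Evaluate at the healthy-group thoracic volume reported in the abstract.
estimate = theoretical_lung_volume(887.45)
```

With a good fit, the estimate tracks the manually delineated volume closely, which is exactly the agreement the study reports (554.30 vs. 554.33 mm3 in healthy animals).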
Affiliation(s)
- Juan Antonio Camara
- Preclinical Therapeutics Core, University of California San Francisco, San Francisco, CA 94158, USA
- Correspondence: ; Tel.: +1-628-6293-555
- Anna Pujol
- Onna Therapeutics, 08028 Barcelona, Spain
- Juan Jose Jimenez
- Preclinical Imaging Platform, Vall d’Hebron Institute of Research, 08035 Barcelona, Spain
- Jaime Donate
- Preclinical Imaging Platform, Vall d’Hebron Institute of Research, 08035 Barcelona, Spain
- Marina Ferrer
- Gnotobiotics Core Facility, University of California San Francisco, San Francisco, CA 94158, USA
- Greetje Vande Velde
- Biomedical MRI/MoSAIC, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, 3001 Leuven, Belgium
50
Yue C, Ye M, Wang P, Huang D, Lu X. SRV-GAN: A generative adversarial network for segmenting retinal vessels. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2022; 19:9948-9965. [PMID: 36031977 DOI: 10.3934/mbe.2022464] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
In the field of ophthalmology, retinal diseases are often accompanied by complications, and effective segmentation of retinal blood vessels is an important condition for judging retinal disease. This paper therefore proposes a segmentation model for retinal blood vessels. Generative adversarial networks (GANs) have been used for image semantic segmentation with good performance, so this paper proposes an improved GAN. Based on R2U-Net, the generator adds an attention mechanism, channel and spatial attention, which can reduce the loss of information and extract more effective features. We use dense connection modules in the discriminator; dense connections alleviate gradient vanishing and enable feature reuse. After a certain amount of iterative training, the generated prediction map and the label map can be distinguished. On top of the loss function of the traditional GAN, we introduce a mean squared error term; this loss ensures that the synthesized images contain more realistic blood vessel structures. On the three public datasets DRIVE, CHASE-DB1 and STARE, the proposed method achieves area under the curve (AUC) values for retinal blood vessel pixel segmentation of 0.9869, 0.9894 and 0.9885, respectively, improving on previous methods.
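The AUC figures quoted here are pixel-level ROC AUCs. AUC can be computed directly as the Mann-Whitney rank statistic: the probability that a randomly chosen vessel pixel is scored higher than a randomly chosen background pixel. A minimal NumPy sketch on a toy score vector (this assumes no tied scores; ties would need midranks):

```python
import numpy as np

def roc_auc(scores: np.ndarray, labels: np.ndarray) -> float:
    """ROC AUC via the Mann-Whitney U statistic (no tied scores assumed)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)  # ascending ranks, 1-based
    n_pos = int(labels.sum())
    n_neg = len(labels) - n_pos
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
    return float(u / (n_pos * n_neg))

# Toy vessel-probability scores for five pixels (label 1 = vessel).
scores = np.array([0.9, 0.6, 0.8, 0.7, 0.2])
labels = np.array([1, 0, 0, 1, 0])
auc = roc_auc(scores, labels)  # 5/6: one negative pixel outscores one positive
```

An AUC of 0.5 corresponds to random scoring and 1.0 to perfect ranking, so values near 0.99, as reported here, indicate near-perfect pixel ranking.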
Affiliation(s)
- Chen Yue
- School of Medical Information, Wannan Medical College, Wuhu 241002, China
- Research Center of Health Big Data Mining and Applications, Wannan Medical College, Wuhu 241002, China
- Mingquan Ye
- School of Medical Information, Wannan Medical College, Wuhu 241002, China
- Research Center of Health Big Data Mining and Applications, Wannan Medical College, Wuhu 241002, China
- Peipei Wang
- School of Medical Information, Wannan Medical College, Wuhu 241002, China
- Research Center of Health Big Data Mining and Applications, Wannan Medical College, Wuhu 241002, China
- Daobin Huang
- School of Medical Information, Wannan Medical College, Wuhu 241002, China
- Research Center of Health Big Data Mining and Applications, Wannan Medical College, Wuhu 241002, China
- Xiaojie Lu
- School of Medical Information, Wannan Medical College, Wuhu 241002, China
- Research Center of Health Big Data Mining and Applications, Wannan Medical College, Wuhu 241002, China