101
Osadebey M, Andersen HK, Waaler D, Fossaa K, Martinsen ACT, Pedersen M. Three-stage segmentation of lung region from CT images using deep neural networks. BMC Med Imaging 2021; 21:112. [PMID: 34266391] [PMCID: PMC8280386] [DOI: 10.1186/s12880-021-00640-1]
Abstract
BACKGROUND Lung region segmentation is an important stage of automated image-based approaches for the diagnosis of respiratory diseases. Manual methods executed by experts are considered the gold standard, but they are time consuming and their accuracy depends on the radiologist's experience. Automated methods are relatively fast and reproducible, with the potential to facilitate physician interpretation of images. However, these benefits are possible only after overcoming several challenges. Traditional methods, formulated as a three-stage segmentation, demonstrate promising results on normal CT data but perform poorly in the presence of pathological features and variations in image quality attributes. The implementation of deep learning methods that can demonstrate superior performance over traditional methods depends on the quantity, quality, cost, and time it takes to generate training data. Thus, an efficient and clinically relevant automated segmentation method is desired for the diagnosis of respiratory diseases. METHODS We implement each of the three stages of traditional methods using deep learning methods trained on five different configurations of training data, with ground truths obtained from the 3D Image Reconstruction for Comparison of Algorithm Database (3DIRCAD) and the Interstitial Lung Diseases (ILD) database. The data were augmented with the Lung Image Database Consortium (LIDC-IDRI) image collection and a realistic phantom. A convolutional neural network (CNN) at the preprocessing stage classifies the input into lung and non-lung regions. The processing stage was implemented using a CNN-based U-Net, while the postprocessing stage utilizes another U-Net and a CNN for contour refinement and filtering out false positives, respectively. RESULTS The performance of the proposed method was evaluated on 1230 and 1100 CT slices from the 3DIRCAD and ILD databases. We investigated the performance of the proposed method on five configurations of training data and three configurations of the segmentation system: the full three-stage segmentation, and the three-stage segmentation without the CNN classifier and without contrast enhancement, respectively. The Dice scores recorded by the proposed method ranged from 0.76 to 0.95. CONCLUSION The clinical relevance and segmentation accuracy of deep learning models can improve through deep learning-based three-stage segmentation, image quality evaluation and enhancement, and augmenting the training data with a large volume of inexpensive, high-quality training data. We propose a novel deep learning-based method of contour refinement.
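The Dice score reported above is the standard overlap metric for comparing segmentation masks. As a minimal sketch (not the authors' code), it can be computed from two binary masks as follows:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    overlap = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Convention: two empty masks count as a perfect match.
    return 2.0 * overlap / denom if denom > 0 else 1.0
```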
Affiliation(s)
- Michael Osadebey
- Department of Computer Science, Norwegian University of Science and Technology, Gjøvik, Norway
- Hilde K. Andersen
- Department of Diagnostic Physics, Oslo University Hospital, Oslo, Norway
- Dag Waaler
- Department of Health Sciences, Norwegian University of Science and Technology, Gjøvik, Norway
- Kristian Fossaa
- Department of Diagnostic Physics, Oslo University Hospital, Oslo, Norway
- Anne C. T. Martinsen
- The Faculty of Health Sciences, Oslo Metropolitan University, Oslo, Norway
- Sunnaas Rehabilitation Hospital, Nesoddtangen, Norway
- Marius Pedersen
- Department of Computer Science, Norwegian University of Science and Technology, Gjøvik, Norway
102
Nemoto T, Futakami N, Kunieda E, Yagi M, Takeda A, Akiba T, Mutu E, Shigematsu N. Effects of sample size and data augmentation on U-Net-based automatic segmentation of various organs. Radiol Phys Technol 2021; 14:318-327. [PMID: 34254251] [DOI: 10.1007/s12194-021-00630-6]
Abstract
Deep learning has demonstrated high efficacy for automatic segmentation in contour delineation, which is crucial in radiation therapy planning. However, the collection, labeling, and management of medical imaging data can be challenging. This study aims to elucidate the effects of sample size and data augmentation on the automatic segmentation of computed tomography images using U-Net, a deep learning method. For the chest and pelvic regions, 232 and 556 cases are evaluated, respectively. We investigate multiple conditions by varying the combined number of training and validation cases across a broad range: 10-200 and 10-500 cases for the chest and pelvic regions, respectively. A U-Net is constructed, and horizontal-flip data augmentation, which produces left-right mirrored images and thereby doubles the number of images, is compared with no augmentation for each training session. All lung cases and more than 100 prostate, bladder, and rectum cases indicate that adding horizontal-flip data augmentation is almost as effective as doubling the number of cases. The slope of the Dice similarity coefficient (DSC) curve in all organs decreases rapidly until approximately 100 cases, stabilizes after 200 cases, and shows minimal change as the number of cases is increased further. The DSCs stabilize at a smaller sample size with the incorporation of data augmentation in all organs except the heart. This finding is applicable to the automation of radiation therapy for rare cancers, where large datasets may be difficult to obtain.
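Horizontal-flip augmentation as described here is essentially a one-line array operation. The sketch below is a generic illustration (not the study's pipeline) that mirrors each axial slice together with its mask, doubling the dataset:

```python
import numpy as np

def augment_with_horizontal_flip(images: np.ndarray, masks: np.ndarray):
    """Double an (N, H, W) training set by appending left-right mirrored
    copies; the segmentation masks must be flipped with their images."""
    flipped_images = images[..., ::-1]
    flipped_masks = masks[..., ::-1]
    return (np.concatenate([images, flipped_images], axis=0),
            np.concatenate([masks, flipped_masks], axis=0))
```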
Affiliation(s)
- Takafumi Nemoto
- Department of Radiology, Keio University School of Medicine, Shinanomachi 35, Shinjuku-ku, Tokyo, 160-8582, Japan
- Natsumi Futakami
- Department of Radiation Oncology, Tokai University School of Medicine, Shimokasuya 143, Isehara-shi, Kanagawa, 259-1143, Japan
- Etsuo Kunieda
- Department of Radiology, Keio University School of Medicine, Shinanomachi 35, Shinjuku-ku, Tokyo, 160-8582, Japan
- Department of Radiation Oncology, Tokai University School of Medicine, Shimokasuya 143, Isehara-shi, Kanagawa, 259-1143, Japan
- Masamichi Yagi
- Platform Technical Engineer Division, HPC and AI Business Department, System Platform Solution Unit, Fujitsu Limited, World Trade Center Building, 4-1, Hamamatsucho 2-chome, Minato-ku, Tokyo, 105-6125, Japan
- Atsuya Takeda
- Radiation Oncology Center, Ofuna Chuo Hospital, Kamakura-shi, Kanagawa, 247-0056, Japan
- Takeshi Akiba
- Department of Radiation Oncology, Tokai University School of Medicine, Shimokasuya 143, Isehara-shi, Kanagawa, 259-1143, Japan
- Eride Mutu
- Department of Radiation Oncology, Tokai University School of Medicine, Shimokasuya 143, Isehara-shi, Kanagawa, 259-1143, Japan
- Naoyuki Shigematsu
- Department of Radiology, Keio University School of Medicine, Shinanomachi 35, Shinjuku-ku, Tokyo, 160-8582, Japan
103
Liu X, Li KW, Yang R, Geng LS. Review of Deep Learning Based Automatic Segmentation for Lung Cancer Radiotherapy. Front Oncol 2021; 11:717039. [PMID: 34336704] [PMCID: PMC8323481] [DOI: 10.3389/fonc.2021.717039]
Abstract
Lung cancer is the leading cause of cancer-related mortality in both males and females. Radiation therapy (RT) is one of the primary treatment modalities for lung cancer. While delivering the prescribed dose to tumor targets, it is essential to spare the tissues near the targets, the so-called organs-at-risk (OARs). Optimal RT planning benefits from accurate segmentation of the gross tumor volume and surrounding OARs. Manual segmentation is a time-consuming and tedious task for radiation oncologists, so it is crucial to develop automatic image segmentation to relieve them of this contouring work. Currently, the atlas-based automatic segmentation technique is commonly used in clinical routines. However, this technique depends heavily on the similarity between the atlas and the image being segmented. With significant advances made in computer vision, deep learning, as a branch of artificial intelligence, is attracting increasing attention in medical image automatic segmentation. In this article, we reviewed deep learning based automatic segmentation techniques related to lung cancer and compared them with the atlas-based technique. At present, the auto-segmentation of OARs with relatively large volumes, such as the lung and heart, outperforms that of organs with small volumes, such as the esophagus. The average Dice similarity coefficients (DSCs) of the lung, heart, and liver are over 0.9, and the best DSC of the spinal cord reaches 0.9. However, the DSC of the esophagus ranges between 0.71 and 0.87, with inconsistent performance. For the gross tumor volume, the average DSC is below 0.8. Although deep learning based automatic segmentation techniques show significant superiority over manual segmentation in many respects, various issues still need to be solved. We discussed potential issues in deep learning based automatic segmentation, including low contrast, dataset size, consensus guidelines, and network design. Clinical limitations and future research directions of deep learning based automatic segmentation were discussed as well.
Affiliation(s)
- Xi Liu
- School of Physics, Beihang University, Beijing, China
- Kai-Wen Li
- School of Physics, Beihang University, Beijing, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Key Laboratory of Big Data-Based Precision Medicine, Ministry of Industry and Information Technology, Beihang University, Beijing, China
- Ruijie Yang
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
- Li-Sheng Geng
- School of Physics, Beihang University, Beijing, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Key Laboratory of Big Data-Based Precision Medicine, Ministry of Industry and Information Technology, Beihang University, Beijing, China
- Beijing Key Laboratory of Advanced Nuclear Materials and Physics, Beihang University, Beijing, China
- School of Physics and Microelectronics, Zhengzhou University, Zhengzhou, China
104
Lin M, Wynne JF, Zhou B, Wang T, Lei Y, Curran WJ, Liu T, Yang X. Artificial intelligence in tumor subregion analysis based on medical imaging: A review. J Appl Clin Med Phys 2021; 22:10-26. [PMID: 34164913] [PMCID: PMC8292694] [DOI: 10.1002/acm2.13321]
Abstract
Medical imaging is widely used in the diagnosis and treatment of cancer, and artificial intelligence (AI) has achieved tremendous success in medical image analysis. This paper reviews AI-based tumor subregion analysis in medical imaging. We summarize the latest AI-based methods for tumor subregion analysis and their applications. Specifically, we categorize the AI-based methods by training strategy: supervised and unsupervised. A detailed review of each category is presented, highlighting important contributions and achievements. Specific challenges and potential applications of AI in tumor subregion analysis are discussed.
Affiliation(s)
- Mingquan Lin
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Jacob F. Wynne
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Boran Zhou
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Walter J. Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
105
Cao XZ, Luo SZ, Li JC, Pan JH. An optimized automatic prediction of stage and grade in bladder cancer based on U-ResNet. J Intell Fuzzy Syst 2021. [DOI: 10.3233/jifs-210263]
Abstract
The grade and stage of bladder tumors are essential keys for diagnosing and treating bladder cancer. This study proposed an automated prediction system to assess bladder tumor grade and stage on magnetic resonance imaging (MRI) images. The system includes three modules: tumor segmentation, feature extraction, and prediction. We proposed a U-ResNet network that automatically extracts morphological and texture features for detecting tumor regions. These features were then used in support vector machine (SVM) classifiers to predict the grade and stage. In our experiments on MRI images, the proposed method segmented the tumor area and predicted the grade and stage more accurately than competing methods. The accuracy of bladder tumor grade prediction was about 70%, and the accuracy of stage prediction was about 77.5%. These extensive experiments demonstrate the usefulness and effectiveness of our method.
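The prediction module pairs extracted features with SVM classifiers. A hedged scikit-learn sketch of that final stage is given below; the feature matrix and labels are hypothetical placeholders standing in for the morphological/texture features the paper derives from segmented MRI tumors:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical stand-ins for per-tumor feature vectors and grade labels.
rng = np.random.default_rng(0)
features = rng.normal(size=(80, 32))   # (n_tumors, n_features)
grades = rng.integers(0, 2, size=80)   # low/high grade

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("mean CV accuracy:", cross_val_score(clf, features, grades, cv=5).mean())
```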
Affiliation(s)
- Xin-Zi Cao
- School of Software, South China Normal University, Guangzhou, China
- Sheng-Zhou Luo
- School of Software, South China Normal University, Guangzhou, China
- Jing-Cong Li
- School of Software, South China Normal University, Guangzhou, China
- Jia-Hui Pan
- School of Software, South China Normal University, Guangzhou, China
106
Lin M, Momin S, Lei Y, Wang H, Curran WJ, Liu T, Yang X. Fully automated segmentation of brain tumor from multiparametric MRI using 3D context deep supervised U-Net. Med Phys 2021; 48:4365-4374. [PMID: 34101845] [DOI: 10.1002/mp.15032]
Abstract
PURPOSE Owing to the histologic complexity of brain tumors, diagnosis requires multiple imaging modalities to obtain valuable structural information so that brain tumor subregions can be properly delineated. In the current clinical workflow, physicians typically perform slice-by-slice delineation of brain tumor subregions, which is time-consuming and susceptible to intra- and inter-rater variability that may lead to misclassification. To deal with this issue, this study aims to develop an automatic segmentation of brain tumors in MR images using deep learning. METHOD We developed a context deep-supervised U-Net to segment brain tumor subregions. A context block that aggregates multiscale contextual information for dense segmentation was proposed. This approach enlarges the effective receptive field of convolutional neural networks, which, in turn, improves the segmentation accuracy of brain tumor subregions. We performed fivefold cross-validation on the Brain Tumor Segmentation Challenge (BraTS) 2020 training dataset. The BraTS 2020 testing dataset was obtained via the BraTS online website as a hold-out test. For BraTS, the evaluation system divides the tumor into three regions: whole tumor (WT), tumor core (TC), and enhancing tumor (ET). The performance of our proposed method was compared against two state-of-the-art CNNs in terms of segmentation accuracy via the Dice similarity coefficient (DSC) and Hausdorff distance (HD). The tumor volumes generated by our proposed method were compared with manually contoured volumes via Bland-Altman plots and Pearson analysis. RESULTS The proposed method achieved DSCs of 0.923 ± 0.047, 0.893 ± 0.176, and 0.846 ± 0.165 and 95th-percentile HDs (HD95) of 3.946 ± 7.041, 3.981 ± 6.670, and 10.128 ± 51.136 mm on WT, TC, and ET, respectively. Experimental results demonstrate that our method achieved comparable or significantly (p < 0.05) better segmentation accuracy than the two state-of-the-art CNNs. Pearson correlation analysis showed a high positive correlation between the tumor volumes generated by the proposed method and manual contours. CONCLUSION The overall qualitative and quantitative results of this work demonstrate the potential of translating the proposed technique into clinical practice for segmenting brain tumor subregions, further facilitating the brain tumor radiotherapy workflow.
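A context block of the kind described here, aggregating multiscale information to enlarge the effective receptive field, is often built from parallel dilated convolutions. The PyTorch sketch below is one plausible realization under that assumption, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class ContextBlock(nn.Module):
    """Parallel dilated 3D convolutions fused by a 1x1x1 convolution.
    Dilation rates and channel counts are illustrative assumptions."""
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv3d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations)
        self.fuse = nn.Conv3d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x):
        # Each branch sees a different effective receptive field.
        feats = [torch.relu(branch(x)) for branch in self.branches]
        return torch.relu(self.fuse(torch.cat(feats, dim=1)))
```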
Affiliation(s)
- Mingquan Lin
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Shadab Momin
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Hesheng Wang
- Department of Radiation Oncology, NYU Grossman School of Medicine, New York, NY, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
107
Wang T, Lei Y, Roper J, Ghavidel B, Beitler JJ, McDonald M, Curran WJ, Liu T, Yang X. Head and neck multi-organ segmentation on dual-energy CT using dual pyramid convolutional neural networks. Phys Med Biol 2021; 66. [PMID: 33915524] [DOI: 10.1088/1361-6560/abfce2]
Abstract
Organ delineation is crucial to diagnosis and therapy, yet it is labor-intensive and observer-dependent. Dual-energy CT (DECT) provides additional image contrast compared with conventional single-energy CT (SECT), which may facilitate automatic organ segmentation. This work aims to develop an automatic multi-organ segmentation approach using deep learning for the head-and-neck region on DECT. We proposed a mask scoring regional convolutional neural network (R-CNN) in which comprehensive features are first learned from two independent pyramid networks and then combined via a deep attention strategy to highlight the informative features extracted from both the low- and high-energy CT channels. To perform multi-organ segmentation and avoid misclassification, a mask scoring subnetwork was integrated into the Mask R-CNN framework to model the correlation between the class of a detected organ's region-of-interest (ROI) and the shape of that organ's segmentation within that ROI. We evaluated our model on DECT images from 127 head-and-neck cancer patients (66 training, 61 testing) with manual contours of 19 organs as the training target and ground truth. For large and mid-sized organs such as the brain and parotid, the proposed method achieved an average Dice similarity coefficient (DSC) larger than 0.8. For small organs with very low contrast, such as the chiasm, cochlea, lens, and optic nerves, the DSCs ranged between about 0.5 and 0.8. With the proposed method, using DECT images outperformed using SECT in almost all 19 organs, with statistical significance in DSC (p < 0.05). Meanwhile, with DECT, the proposed method was also significantly superior to a recently developed FCN-based method in most organs in terms of DSC and the 95th-percentile Hausdorff distance. Quantitative results demonstrated the feasibility of the proposed method, the superiority of DECT over SECT, and the advantage of the proposed R-CNN over the FCN in this head-and-neck patient study. The proposed method has the potential to facilitate the treatment-planning step of the current head-and-neck cancer radiation therapy workflow.
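The deep attention fusion of the low- and high-energy feature streams can be pictured as a learned per-location gate over the two channels. The sketch below is a simplified stand-in for that idea, not the published dual pyramid network:

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Fuse two same-shaped feature maps (e.g., low-/high-energy CT
    channels) with a learned sigmoid gate produced by a 1x1 convolution."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1), nn.Sigmoid())

    def forward(self, feat_low, feat_high):
        attn = self.gate(torch.cat([feat_low, feat_high], dim=1))
        # Gate values near 1 keep the low-energy feature, near 0 the high.
        return attn * feat_low + (1.0 - attn) * feat_high
```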
Affiliation(s)
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Beth Ghavidel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Jonathan J Beitler
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Mark McDonald
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
108
Lei Y, Wang T, Roper J, Jani AB, Patel SA, Curran WJ, Patel P, Liu T, Yang X. Male pelvic multi-organ segmentation on transrectal ultrasound using anchor-free mask CNN. Med Phys 2021; 48:3055-3064. [PMID: 33894057] [DOI: 10.1002/mp.14895]
Abstract
PURPOSE Current prostate brachytherapy uses transrectal ultrasound images for implant guidance, where contours of the prostate and organs-at-risk are necessary for treatment planning and dose evaluation. This work aims to develop a deep learning-based method for male pelvic multi-organ segmentation on transrectal ultrasound images. METHODS We developed an anchor-free mask convolutional neural network (CNN) that consists of three subnetworks: a backbone, a fully convolutional one-stage object detector (FCOS), and a mask head. The backbone extracts multi-level and multi-scale features from an ultrasound (US) image. The FCOS utilizes these features to detect and label (classify) the volumes-of-interest (VOIs) of organs. In contrast to the design of a previously investigated mask regional CNN (Mask R-CNN), the FCOS is anchor-free, which allows it to capture the spatial correlation of multiple organs. The mask head performs segmentation on each detected VOI, and a spatial attention strategy is integrated into the mask head to focus on informative feature elements and suppress noise. For evaluation, we retrospectively investigated 83 prostate cancer patients using fivefold cross-validation and a hold-out test. The prostate, bladder, rectum, and urethra were segmented and compared with manual contours using the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), mean surface distance (MSD), center of mass distance (CMD), and volume difference (VD). RESULTS The proposed method visually outperformed two competing methods, showing better agreement with manual contours and fewer misidentified speckles. In the cross-validation study, the respective DSC and HD95 results for each organ were: bladder 0.75 ± 0.12, 2.58 ± 0.7 mm; prostate 0.93 ± 0.03, 2.28 ± 0.64 mm; rectum 0.90 ± 0.07, 1.65 ± 0.52 mm; and urethra 0.86 ± 0.07, 1.85 ± 1.71 mm. For the hold-out test, the DSC and HD95 results were: bladder 0.76 ± 0.13, 2.93 ± 1.29 mm; prostate 0.94 ± 0.03, 2.27 ± 0.79 mm; rectum 0.92 ± 0.03, 1.90 ± 0.28 mm; and urethra 0.85 ± 0.06, 1.81 ± 0.72 mm. Segmentation was performed in under 5 seconds. CONCLUSION The proposed method demonstrated fast and accurate multi-organ segmentation performance. It can expedite the contouring step of prostate brachytherapy and potentially enable auto-planning and auto-evaluation.
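HD95, used throughout these evaluations, replaces the maximum in the Hausdorff distance with the 95th percentile to damp outliers. A minimal sketch over two surface point sets (assumed already scaled to millimetres), not the authors' implementation:

```python
import numpy as np
from scipy.spatial.distance import cdist

def hd95(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between two point
    sets of shape (N, dims)."""
    d = cdist(points_a, points_b)   # all pairwise distances
    a_to_b = d.min(axis=1)          # nearest neighbour, A -> B
    b_to_a = d.min(axis=0)          # nearest neighbour, B -> A
    return max(np.percentile(a_to_b, 95), np.percentile(b_to_a, 95))
```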
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Sagar A Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
109
Yıldız E, Arslan AT, Yıldız Taş A, Acer AF, Demir S, Şahin A, Erol Barkana D. Generative Adversarial Network Based Automatic Segmentation of Corneal Subbasal Nerves on In Vivo Confocal Microscopy Images. Transl Vis Sci Technol 2021; 10:33. [PMID: 34038501] [PMCID: PMC8161698] [DOI: 10.1167/tvst.10.6.33]
Abstract
Purpose In vivo confocal microscopy (IVCM) is a noninvasive, reproducible, and inexpensive diagnostic tool for corneal diseases. However, widespread and effortless image acquisition in IVCM creates a serious image-analysis workload for ophthalmologists, and neural networks could solve this problem quickly. We have produced a novel deep learning algorithm based on generative adversarial networks (GANs), and we compare its accuracy for automatic segmentation of subbasal nerves in IVCM images with that of a fully convolutional neural network (U-Net) based method. Methods We collected IVCM images from 85 subjects. U-Net and GAN-based image segmentation methods were trained and tested under the supervision of three clinicians for the segmentation of corneal subbasal nerves. Nerve segmentation results for the GAN- and U-Net-based methods were compared against the clinicians by using Pearson's R correlation, Bland-Altman analysis, and receiver operating characteristic (ROC) statistics. Additionally, different types of noise were applied to the IVCM images to evaluate the algorithms' performance under noise typical of biomedical imaging. Results The GAN-based algorithm demonstrated correlation and Bland-Altman analysis results similar to those of U-Net. The GAN-based method showed significantly higher accuracy than U-Net in ROC curves. Additionally, the performance of U-Net deteriorated significantly with different types of noise, especially speckle noise, compared to the GAN. Conclusions This study is the first application of GAN-based algorithms to IVCM images. The GAN-based algorithms demonstrated higher accuracy than U-Net for automatic corneal nerve segmentation in IVCM images, in both patient-acquired and noise-applied images. This GAN-based segmentation method can be used as a facilitating diagnostic tool in ophthalmology clinics. Translational Relevance Generative adversarial networks are emerging deep learning models for medical image processing, which could become important clinical tools for rapid segmentation and analysis of corneal subbasal nerves in IVCM images.
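The noise-robustness experiment hinges on corrupting IVCM images with synthetic noise. A generic speckle (multiplicative) noise probe follows; the variance is an assumed parameter, not the study's setting:

```python
import numpy as np

def add_speckle(image: np.ndarray, variance: float = 0.05,
                rng=None) -> np.ndarray:
    """Multiplicative speckle noise: I' = I * (1 + n), n ~ N(0, variance).
    Assumes the image is normalized to [0, 1]."""
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, np.sqrt(variance), size=image.shape)
    return np.clip(image * (1.0 + noise), 0.0, 1.0)
```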
Affiliation(s)
- Erdost Yıldız
- Koç University Research Center for Translational Medicine, Koç University, Istanbul, Turkey
- Ayşe Yıldız Taş
- Department of Ophthalmology, Koç University School of Medicine, Istanbul, Turkey
- Sertaç Demir
- Techy Bilişim Ltd., Eskişehir, Turkey
- Department of Computer Engineering, Eskişehir Osmangazi University, Eskişehir, Turkey
- Afsun Şahin
- Koç University Research Center for Translational Medicine, Koç University, Istanbul, Turkey
- Department of Ophthalmology, Koç University School of Medicine, Istanbul, Turkey
- Duygun Erol Barkana
- Department of Electrical and Electronics Engineering, Yeditepe University, Istanbul, Turkey
110
Fu Y, Lei Y, Wang T, Curran WJ, Liu T, Yang X. A review of deep learning based methods for medical image multi-organ segmentation. Phys Med 2021; 85:107-122. [PMID: 33992856] [PMCID: PMC8217246] [DOI: 10.1016/j.ejmp.2021.05.003]
Abstract
Deep learning has revolutionized image processing and achieved state-of-the-art performance in many medical image segmentation tasks. Many deep learning-based methods have been published to segment different parts of the body for different medical applications. It is therefore necessary to summarize the current state of deep learning development in the field of medical image segmentation. In this paper, we aim to provide a comprehensive review with a focus on multi-organ image segmentation, which is crucial for radiotherapy, where the tumor and organs-at-risk need to be contoured for treatment planning. We grouped the surveyed methods into two broad categories: 'pixel-wise classification' and 'end-to-end segmentation'. Each category was divided into subgroups according to network design. For each type, we listed the surveyed works, highlighted important contributions, and identified specific challenges. Following the detailed review, we discussed the achievements, shortcomings, and future potential of each category. To enable direct comparison, we listed the performance of the surveyed works that used thoracic and head-and-neck benchmark datasets.
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
111
Lei Y, Wang T, Tian S, Fu Y, Patel P, Jani AB, Curran WJ, Liu T, Yang X. Male pelvic CT multi-organ segmentation using synthetic MRI-aided dual pyramid networks. Phys Med Biol 2021; 66. [PMID: 33780918] [DOI: 10.1088/1361-6560/abf2f9]
Abstract
The delineation of the prostate and organs-at-risk (OARs) is fundamental to prostate radiation treatment planning but is currently labor-intensive and observer-dependent. We aimed to develop an automated computed tomography (CT)-based multi-organ (bladder, prostate, rectum, and left and right femoral heads) segmentation method for prostate radiation therapy treatment planning. The proposed method uses synthetic MRIs (sMRIs) to offer superior soft-tissue information for male pelvic CT images. Cycle-consistent adversarial networks (CycleGAN) were used to generate the CT-based sMRIs. Dual pyramid networks (DPNs) extracted features from both CTs and sMRIs. A deep attention strategy was integrated into the DPNs to select the most relevant features from both CTs and sMRIs for identifying organ boundaries. The CT-based sMRI generated from our previously trained CycleGAN and its corresponding CT images were input to the proposed DPNs to provide complementary information for pelvic multi-organ segmentation. The proposed method was trained and evaluated using datasets from 140 patients with prostate cancer and was then compared against state-of-the-art methods. The Dice similarity coefficients and mean surface distances between our results and ground truth were 0.95 ± 0.05, 1.16 ± 0.70 mm; 0.88 ± 0.08, 1.64 ± 1.26 mm; 0.90 ± 0.04, 1.27 ± 0.48 mm; 0.95 ± 0.04, 1.08 ± 1.29 mm; and 0.95 ± 0.04, 1.11 ± 1.49 mm for the bladder, prostate, rectum, and left and right femoral heads, respectively. Mean center of mass distances were within 3 mm for all organs. Our results performed significantly better than those of competing methods on most evaluation metrics. We demonstrated the feasibility of sMRI-aided DPNs for multi-organ segmentation on pelvic CT images and its superiority over other networks. The proposed method could be used in routine prostate cancer radiotherapy treatment planning to rapidly segment the prostate and standard OARs.
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
112
Ohno Y, Seo JB, Parraga G, Lee KS, Gefter WB, Fain SB, Schiebler ML, Hatabu H. Pulmonary Functional Imaging: Part 1-State-of-the-Art Technical and Physiologic Underpinnings. Radiology 2021; 299:508-523. [PMID: 33825513] [DOI: 10.1148/radiol.2021203711]
Abstract
Over the past few decades, pulmonary imaging technologies have advanced from chest radiography and nuclear medicine methods to high-spatial-resolution or low-dose chest CT and MRI. It is currently possible to identify and measure pulmonary pathologic changes before these are obvious even to patients or depicted on conventional morphologic images. Here, key technological advances are described, including multiparametric CT image processing methods, inhaled hyperpolarized and fluorinated gas MRI, and four-dimensional free-breathing CT and MRI methods to measure regional ventilation, perfusion, gas exchange, and biomechanics. The basic anatomic and physiologic underpinnings of these pulmonary functional imaging techniques are explained. In addition, advances in image analysis and computational and artificial intelligence (machine learning) methods pertinent to functional lung imaging are discussed. The clinical applications of pulmonary functional imaging, including both the opportunities and challenges for clinical translation and deployment, will be discussed in part 2 of this review. Given the technical advances in these sophisticated imaging methods and the wealth of information they can provide, it is anticipated that pulmonary functional imaging will be increasingly used in the care of patients with lung disease.
Affiliation(s)
- Yoshiharu Ohno
- Department of Radiology, Fujita Health University School of Medicine, Toyoake, Aichi, Japan
- Joint Research Laboratory of Advanced Medical Imaging, Fujita Health University School of Medicine, Toyoake, Aichi, Japan
- Division of Functional and Diagnostic Imaging Research, Department of Radiology, Kobe University Graduate School of Medicine, Kobe, Hyogo, Japan
- Joon Beom Seo
- Department of Radiology, Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Grace Parraga
- Department of Medicine, Robarts Research Institute, and Department of Medical Biophysics, Western University, London, Canada
- Kyung Soo Lee
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine (SKKU-SOM), Seoul, Korea
- Warren B Gefter
- Department of Radiology, Penn Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Sean B Fain
- Departments of Medical Physics and Radiology, UW-Madison School of Medicine and Public Health, Madison, WI, USA
- Mark L Schiebler
- Departments of Medical Physics and Radiology, UW-Madison School of Medicine and Public Health, Madison, WI, USA
- Hiroto Hatabu
- Center for Pulmonary Functional Imaging, Brigham and Women's Hospital and Harvard Medical School, 75 Francis St, Boston, MA 02215, USA
113
Wang M, Zhu W, Yu K, Chen Z, Shi F, Zhou Y, Ma Y, Peng Y, Bao D, Feng S, Ye L, Xiang D, Chen X. Semi-Supervised Capsule cGAN for Speckle Noise Reduction in Retinal OCT Images. IEEE Trans Med Imaging 2021; 40:1168-1183. [PMID: 33395391] [DOI: 10.1109/tmi.2020.3048975]
Abstract
Speckle noise is the main cause of poor optical coherence tomography (OCT) image quality. Convolutional neural networks (CNNs) have shown remarkable performance in speckle noise reduction. However, speckle denoising still faces great challenges, because deep learning-based methods need a large amount of labeled data whose acquisition is time-consuming or expensive. Besides, many CNN-based methods rely on complex network structures with many parameters to improve denoising performance, which consumes hardware resources heavily and is prone to overfitting. To solve these problems, we propose a novel semi-supervised learning based method for speckle noise reduction in retinal OCT images. First, to improve the model's ability to capture complex and sparse features in OCT images while avoiding a large increase in parameters, a novel capsule conditional generative adversarial network (Caps-cGAN) with a small number of parameters is proposed to construct the semi-supervised learning system. Then, to tackle the loss of retinal structure information in OCT images caused by the lack of detailed guidance during unsupervised learning, a novel joint semi-supervised loss function composed of an unsupervised loss and a supervised loss is proposed to train the model. Compared with other state-of-the-art methods, the proposed semi-supervised method is suitable for retinal OCT images collected from different OCT devices and can achieve better performance even when using only half of the training data.
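A joint semi-supervised loss of the kind described combines a supervised term on labeled noisy/clean pairs with an unsupervised term on unlabeled images. The sketch below shows one plausible combination under those assumptions; the weights and specific terms are not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def joint_semi_supervised_loss(denoised_labeled, clean_target,
                               disc_logits_unlabeled,
                               w_sup=1.0, w_unsup=0.1):
    """Supervised reconstruction loss plus an adversarial (unsupervised)
    term that asks the discriminator to accept denoised unlabeled images
    as clean."""
    supervised = F.l1_loss(denoised_labeled, clean_target)
    unsupervised = F.binary_cross_entropy_with_logits(
        disc_logits_unlabeled, torch.ones_like(disc_logits_unlabeled))
    return w_sup * supervised + w_unsup * unsupervised
```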
114
Zhang X, Li Y, Liu Y, Tang SX, Liu X, Punithakumar K, Shi D. Automatic spinal cord segmentation from axial-view MRI slices using CNN with grayscale regularized active contour propagation. Comput Biol Med 2021; 132:104345. [PMID: 33780869] [DOI: 10.1016/j.compbiomed.2021.104345]
Abstract
Accurate positioning of the responsible segment in patients with cervical spondylotic myelopathy (CSM) is clinically important, not only for surgery but also for reducing the incidence of surgical trauma and complications. Spinal cord segmentation is a crucial step in the positioning procedure. This study proposed a fully automated approach for spinal cord segmentation from 2D axial-view MRI slices of patients with CSM. The proposed method was trained and tested using clinical data from 20 CSM patients (359 images) acquired by Peking University Third Hospital, with ground truth labeled by professional radiologists. The accuracy of the proposed method was evaluated using quantitative measures and a reliability metric, as well as visual assessment. The proposed method yielded a Dice coefficient of 87.0%, a Hausdorff distance of 9.7 mm, and a root-mean-square error of 5.9 mm. Higher conformance with ground truth was observed for the proposed method in comparison to state-of-the-art algorithms, and the differences between the proposed method and the state-of-the-art methods were statistically significant.
Affiliation(s)
- Xiaoran Zhang
- School of Automation, Beijing Institute of Technology, Beijing, 100081, China
- Department of Electrical and Computer Engineering, University of California, Los Angeles, Los Angeles, CA, 90095-1594, USA
- Yan Li
- Department of Orthopaedics, Peking University Third Hospital and the Engineering Research Center of Bone and Joint Precision Medicine, Ministry of Education, Beijing, China
- Yicun Liu
- School of Automation, Beijing Institute of Technology, Beijing, 100081, China
- Shu-Xia Tang
- Department of Mechanical Engineering, Texas Tech University, Lubbock, TX, 79409, USA
- Xiaoguang Liu
- Department of Orthopaedics, Peking University Third Hospital and the Engineering Research Center of Bone and Joint Precision Medicine, Ministry of Education, Beijing, China
- Kumaradevan Punithakumar
- Department of Radiology and Diagnostic Imaging, University of Alberta, Edmonton, Alberta, Canada
- Dawei Shi
- School of Automation, Beijing Institute of Technology, Beijing, 100081, China
115
Park HY, Bae HJ, Hong GS, Kim M, Yun J, Park S, Chung WJ, Kim N. Realistic High-Resolution Body Computed Tomography Image Synthesis by Using Progressive Growing Generative Adversarial Network: Visual Turing Test. JMIR Med Inform 2021; 9:e23328. [PMID: 33609339] [PMCID: PMC8077702] [DOI: 10.2196/23328]
Abstract
BACKGROUND Generative adversarial network (GAN)-based synthetic images can be viable solutions to current supervised deep learning challenges. However, generating highly realistic images is a prerequisite for these approaches. OBJECTIVE The aim of this study was to investigate and validate the unsupervised synthesis of highly realistic body computed tomography (CT) images by using a progressive growing GAN (PGGAN) trained to learn the probability distribution of normal data. METHODS We trained the PGGAN by using 11,755 body CT scans. Ten radiologists (4 radiologists with <5 years of experience [Group I], 4 radiologists with 5-10 years of experience [Group II], and 2 radiologists with >10 years of experience [Group III]) evaluated the results in a binary approach by using an independent validation set of 300 images (150 real and 150 synthetic) to judge the authenticity of each image. RESULTS The mean accuracy of the 10 readers in the entire image set was higher than random guessing (1781/3000, 59.4% vs 1500/3000, 50.0%, respectively; P<.001). However, in terms of identifying synthetic images as fake, there was no significant difference in the specificity between the visual Turing test and random guessing (779/1500, 51.9% vs 750/1500, 50.0%, respectively; P=.29). The accuracy between the 3 reader groups with different experience levels was not significantly different (Group I, 696/1200, 58.0%; Group II, 726/1200, 60.5%; and Group III, 359/600, 59.8%; P=.36). Interreader agreements were poor (κ=0.11) for the entire image set. In subgroup analysis, the discrepancies between real and synthetic CT images occurred mainly in the thoracoabdominal junction and in the anatomical details. CONCLUSIONS The GAN can synthesize highly realistic high-resolution body CT images that are indistinguishable from real images; however, it has limitations in generating body images of the thoracoabdominal junction and lacks accuracy in the anatomical details.
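The comparison of pooled reader accuracy against the 50% chance level can be reproduced with a simple binomial test on the counts reported above (the authors' exact statistical procedure may differ):

```python
from scipy.stats import binomtest

# 1781 correct judgments out of 3000, tested against 50% random guessing.
result = binomtest(k=1781, n=3000, p=0.5, alternative="two-sided")
print(f"accuracy = {1781 / 3000:.3f}, p = {result.pvalue:.2e}")
```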
Affiliation(s)
- Ho Young Park
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine & Asan Medical Center, Seoul, Republic of Korea
- Hyun-Jin Bae
- Department of Medicine, University of Ulsan College of Medicine & Asan Medical Center, Seoul, Republic of Korea
- Gil-Sun Hong
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine & Asan Medical Center, Seoul, Republic of Korea
- Minjee Kim
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- JiHye Yun
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine & Asan Medical Center, Seoul, Republic of Korea
- Sungwon Park
- Department of Health Screening and Promotion Center, University of Ulsan College of Medicine & Asan Medical Center, Seoul, Republic of Korea
- Won Jung Chung
- Department of Health Screening and Promotion Center, University of Ulsan College of Medicine & Asan Medical Center, Seoul, Republic of Korea
- NamKug Kim
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine & Asan Medical Center, Seoul, Republic of Korea
- Department of Convergence Medicine, University of Ulsan College of Medicine & Asan Medical Center, Seoul, Republic of Korea
116
Alam SR, Li T, Zhang P, Zhang SY, Nadeem S. Generalizable cone beam CT esophagus segmentation using physics-based data augmentation. Phys Med Biol 2021; 66:065008. [PMID: 33535199] [DOI: 10.1088/1361-6560/abe2eb]
Abstract
Automated segmentation of the esophagus is critical in image-guided/adaptive radiotherapy of lung cancer to minimize radiation-induced toxicities such as acute esophagitis. We have developed a semantic physics-based data augmentation method for segmenting the esophagus in both planning CT (pCT) and cone beam CT (CBCT) using 3D convolutional neural networks. One hundred and ninety-one cases with their pCTs and CBCTs from four independent datasets were used to train a modified 3D U-Net architecture with a multi-objective loss function specifically designed for soft-tissue organs such as the esophagus. Scatter artifacts and noise were extracted from week-1 CBCTs using a power-law adaptive histogram equalization method and induced into the corresponding pCTs, which were then reconstructed using CBCT reconstruction parameters. In this way, we leveraged physics-based artifact induction in pCTs to drive esophagus segmentation in real weekly CBCTs. Segmentations were evaluated geometrically using the Dice coefficient and Hausdorff distance, as well as dosimetrically using the mean esophagus dose and D5cc. Owing to the physics-based data augmentation, our model trained solely on the synthetic CBCTs was robust and generalizable enough to also produce state-of-the-art results on the pCTs and CBCTs, achieving Dice overlaps of 0.81 and 0.74, respectively. It is concluded that our physics-based data augmentation spans the realistic noise/artifact spectrum across patient CBCT/pCT data and can generalize well across modalities, eventually improving the accuracy of treatment setup and response analysis.
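As a conceptual illustration of this style of artifact induction, the sketch below enhances a week-1 CBCT slice, treats its residual against the aligned pCT as a scatter/noise estimate, and injects that residual into the pCT. The gamma-plus-CLAHE combination stands in for the paper's power-law adaptive histogram equalization, and all parameters are assumptions rather than the authors' implementation:

```python
import numpy as np
from skimage import exposure

def induce_artifacts(pct_slice: np.ndarray, cbct_slice: np.ndarray,
                     gamma: float = 0.8, clip_limit: float = 0.02) -> np.ndarray:
    """Inputs assumed co-registered 2D slices scaled to [0, 1]."""
    # Power-law (gamma) adjustment followed by CLAHE, approximating
    # power-law adaptive histogram equalization.
    enhanced = exposure.equalize_adapthist(
        exposure.adjust_gamma(cbct_slice, gamma), clip_limit=clip_limit)
    residual = enhanced - pct_slice                 # scatter/noise estimate
    return np.clip(pct_slice + residual, 0.0, 1.0)  # synthetic CBCT-like slice
```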
Collapse
Affiliation(s)
- Sadegh R Alam
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States of America
| | | | | | | | | |
Collapse
|
117
|
Selvaraj D, Venkatesan A, Mahesh VGV, Joseph Raj AN. An integrated feature frame work for automated segmentation of COVID-19 infection from lung CT images. INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY 2021; 31:28-46. [PMID: 33362346 PMCID: PMC7753711 DOI: 10.1002/ima.22525] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/21/2020] [Revised: 10/28/2020] [Accepted: 10/31/2020] [Indexed: 05/03/2023]
Abstract
The novel coronavirus disease (SARS-CoV-2 or COVID-19) is spreading across the world and is affecting public health and the world economy. Artificial Intelligence (AI) can play a key role in enhancing COVID-19 detection. However, lung infection by COVID-19 is difficult to quantify due to a lack of studies and the difficulty involved in collecting large datasets. Segmentation is a preferred technique to quantify and contour the COVID-19 region of the lungs using computed tomography (CT) scan images. To address the dataset problem, we propose a deep neural network (DNN) model trained on a limited dataset where features are selected using a region-specific approach. Specifically, we apply the Zernike moment (ZM) and gray level co-occurrence matrix (GLCM) to extract the unique shape and texture features. The feature vectors computed from these techniques enable segmentation that illustrates the severity of the COVID-19 infection. The proposed algorithm was compared with other existing state-of-the-art deep neural networks on the Radiopedia and COVID-19 CT Segmentation datasets and achieved specificity, sensitivity, mean absolute error (MAE), enhanced-alignment measure (EMφ), and structure measure (Sm) values of 0.942, 0.701, 0.082, 0.867, and 0.783, respectively. The metrics demonstrate the performance of the model in quantifying the COVID-19 infection with limited datasets.
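For reference, the two hand-crafted descriptors named above can be computed with standard libraries; a minimal sketch (assumed patch size, quantization and parameter choices, not the authors' pipeline):

```python
import numpy as np
import mahotas
from skimage.feature import graycomatrix, graycoprops

def patch_features(patch: np.ndarray) -> np.ndarray:
    """patch: 2D uint8 CT patch (e.g. 32x32), already windowed and quantized."""
    # GLCM texture features averaged over two offsets.
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    texture = [graycoprops(glcm, p).mean()
               for p in ("contrast", "homogeneity", "energy", "correlation")]
    # Zernike moments describe shape within a disc of the given radius.
    shape = mahotas.features.zernike_moments(patch, radius=patch.shape[0] // 2)
    return np.concatenate([texture, shape])
```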
Collapse
Affiliation(s)
- Deepika Selvaraj
- Department of Micro and Nano Electronics, School of Electronics Engineering, Vellore Institute of Technology, Vellore, India
| | - Arunachalam Venkatesan
- Department of Micro and Nano Electronics, School of Electronics Engineering, Vellore Institute of Technology, Vellore, India
| | | | - Alex Noel Joseph Raj
- Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, China
| |
Collapse
|
118
|
Barragán-Montero A, Javaid U, Valdés G, Nguyen D, Desbordes P, Macq B, Willems S, Vandewinckele L, Holmström M, Löfman F, Michiels S, Souris K, Sterpin E, Lee JA. Artificial intelligence and machine learning for medical imaging: A technology review. Phys Med 2021; 83:242-256. [PMID: 33979715 PMCID: PMC8184621 DOI: 10.1016/j.ejmp.2021.04.016] [Citation(s) in RCA: 90] [Impact Index Per Article: 30.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/06/2020] [Revised: 04/15/2021] [Accepted: 04/18/2021] [Indexed: 02/08/2023] Open
Abstract
Artificial intelligence (AI) has recently become a very popular buzzword, as a consequence of disruptive technical advances and impressive experimental results, notably in the field of image analysis and processing. In medicine, specialties where images are central, like radiology, pathology or oncology, have seized the opportunity, and considerable efforts in research and development have been deployed to transfer the potential of AI to clinical applications. With AI becoming a more mainstream tool for typical medical imaging analysis tasks, such as diagnosis, segmentation, or classification, the key to safe and efficient use of clinical AI applications lies, in part, with informed practitioners. The aim of this review is to present the basic technological pillars of AI, together with the state-of-the-art machine learning methods and their application to medical imaging. In addition, we discuss the new trends and future research directions. This will help the reader to understand how AI methods are now becoming a ubiquitous tool in any medical image analysis workflow, and pave the way for the clinical implementation of AI-based solutions.
Collapse
Affiliation(s)
- Ana Barragán-Montero
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium.
| | - Umair Javaid
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
| | - Gilmer Valdés
- Department of Radiation Oncology, Department of Epidemiology and Biostatistics, University of California, San Francisco, USA
| | - Dan Nguyen
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, USA
| | - Paul Desbordes
- Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), UCLouvain, Belgium
| | - Benoit Macq
- Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), UCLouvain, Belgium
| | - Siri Willems
- ESAT/PSI, KU Leuven, Belgium & MIRC, UZ Leuven, Belgium
| | | | | | | | - Steven Michiels
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
| | - Kevin Souris
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
| | - Edmond Sterpin
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium; KU Leuven, Department of Oncology, Laboratory of Experimental Radiotherapy, Belgium
| | - John A Lee
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
| |
Collapse
|
119
|
He X, Guo BJ, Lei Y, Tian S, Wang T, Curran WJ, Zhang LJ, Liu T, Yang X. Thyroid gland delineation in noncontrast-enhanced CTs using deep convolutional neural networks. Phys Med Biol 2021; 66:055007. [PMID: 33590826 DOI: 10.1088/1361-6560/abc5a6] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/25/2023]
Abstract
The purpose of this study is to develop a deep learning method for thyroid delineation with high accuracy, efficiency, and robustness in noncontrast-enhanced head and neck CTs. The cross-sectional analysis consisted of six tests, including randomized cross-validation and hold-out experiments, tests of prediction accuracy between cancer and benign cases, and cross-gender analyses, performed to evaluate the proposed deep learning-based method. CT images of 1977 patients with suspected thyroid carcinoma were retrospectively investigated. The automatically segmented thyroid gland volume was compared against physician-approved clinical contours using quantitative metrics, Pearson correlation, and Bland-Altman analysis. Quantitative metrics included the Dice similarity coefficient (DSC), sensitivity, specificity, Jaccard index (JAC), Hausdorff distance (HD), mean surface distance (MSD), residual mean square distance (RMSD) and the center of mass distance (CMD). The robustness of the proposed method was further tested using the nonparametric Kruskal-Wallis test to assess the equality of distribution of DSC values. The proposed method's accuracy remained high through all the tests, with the median DSC, JAC, sensitivity and specificity higher than 0.913, 0.839, 0.856 and 0.979, respectively. The proposed method also resulted in median MSD, RMSD, HD and CMD of less than 0.31 mm, 0.48 mm, 2.06 mm and 0.50 mm, respectively. The MSD and RMSD were 0.40 ± 0.29 mm and 0.70 ± 0.46 mm, respectively. Concurrent testing of the proposed method against 3D U-Net and V-Net showed that the proposed method had significantly better performance. The proposed deep learning method achieved accurate and robust performance through six cross-sectional analysis tests.
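For reference, the overlap metrics quoted above can be computed from binary 3D masks as follows (a minimal assumed helper, not the study's code):

```python
import numpy as np

def overlap_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = (pred & gt).sum()      # true positive voxels
    fp = (pred & ~gt).sum()     # false positives
    fn = (~pred & gt).sum()     # false negatives
    tn = (~pred & ~gt).sum()    # true negatives
    return {
        "DSC": 2 * tp / (2 * tp + fp + fn),   # Dice similarity coefficient
        "JAC": tp / (tp + fp + fn),           # Jaccard index
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```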
Collapse
Affiliation(s)
- Xiuxiu He
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
| | | | | | | | | | | | | | | | | |
Collapse
|
120
|
Sundell VM, Kortesniemi M, Siiskonen T, Kosunen A, Rosendahl S, Büermann L. PATIENT-SPECIFIC DOSE ESTIMATES IN DYNAMIC COMPUTED TOMOGRAPHY MYOCARDIAL PERFUSION EXAMINATION. RADIATION PROTECTION DOSIMETRY 2021; 193:24-36. [PMID: 33693932 PMCID: PMC8227483 DOI: 10.1093/rpd/ncab016] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/16/2020] [Revised: 12/09/2020] [Accepted: 01/26/2021] [Indexed: 05/07/2023]
Abstract
The study aimed to apply realistic source models of a computed tomography (CT) scanner and Monte Carlo simulations to actual patient data, and to calculate patient-specific organ and effective dose estimates for patients undergoing dynamic CT myocardial perfusion examinations. Source models including the bowtie filter, tube output and x-ray spectra were determined for a dual-source Siemens Somatom Definition Flash scanner. Twenty CT angiography patient datasets were merged with a scaled International Commission on Radiological Protection (ICRP) 110 voxel phantom. Dose simulations were conducted with ImpactMC software. Effective dose estimates varied from 5.0 to 14.6 mSv for the 80 kV spectrum and from 8.9 to 24.7 mSv for the 100 kV spectrum. Significant differences in organ doses and effective doses between patients emphasise the need to use actual patient data merged with matched anthropomorphic anatomy in dose simulations to achieve a reasonable level of accuracy in the dose estimation procedure.
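The effective dose reported here is, per ICRP 103, the weighted sum E = Σ_T w_T · H_T over organ equivalent doses. A worked sketch with hypothetical organ doses standing in for the Monte Carlo output (the weighting factors are the published ICRP 103 values; the doses are illustrative only):

```python
# ICRP 103 tissue weighting factors for a subset of organs.
W_T = {"lung": 0.12, "stomach": 0.12, "breast": 0.12, "red_bone_marrow": 0.12,
       "liver": 0.04, "oesophagus": 0.04, "thyroid": 0.04, "skin": 0.01}

# Hypothetical organ equivalent doses (mSv) from a simulated perfusion scan.
H_T = {"lung": 18.0, "stomach": 9.5, "breast": 14.0, "red_bone_marrow": 6.0,
       "liver": 8.0, "oesophagus": 7.0, "thyroid": 1.0, "skin": 3.0}

E = sum(W_T[t] * H_T[t] for t in W_T)
print(f"E = {E:.2f} mSv (partial sum over the organs listed)")  # 6.37 mSv
```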
Collapse
Affiliation(s)
- V-M Sundell
- HUS Medical Imaging Center, Helsinki University Central Hospital, Helsinki, Uusimaa, Finland
- Department of Physics, University of Helsinki, P.O. Box 64, 00014 University of Helsinki, Finland
| | - M Kortesniemi
- HUS Medical Imaging Center, Helsinki University Central Hospital, Helsinki, Uusimaa, Finland
| | - T Siiskonen
- STUK-Radiation and Nuclear Safety Authority, Laippatie 4, Helsinki 00880, Finland
| | - A Kosunen
- STUK-Radiation and Nuclear Safety Authority, Laippatie 4, Helsinki 00880, Finland
| | - S Rosendahl
- Department 6.2 Dosimetry for radiation therapy and diagnostic radiology, Physikalisch-Technische Bundesanstalt, Bundesallee 100, Braunschweig 38116, Germany
| | - L Büermann
- Department 6.2 Dosimetry for radiation therapy and diagnostic radiology, Physikalisch-Technische Bundesanstalt, Bundesallee 100, Braunschweig 38116, Germany
| |
Collapse
|
121
|
Tan W, Zhou L, Li X, Yang X, Chen Y, Yang J. Automated vessel segmentation in lung CT and CTA images via deep neural networks. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2021; 29:1123-1137. [PMID: 34421004 DOI: 10.3233/xst-210955] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
BACKGROUND The distribution of pulmonary vessels in computed tomography (CT) and computed tomography angiography (CTA) images of the lung is important for diagnosing disease, formulating surgical plans and pulmonary research. PURPOSE Based on the pulmonary vascular segmentation task of the International Symposium on Image Computing and Digital Medicine 2020 challenge, this paper reviews 12 different pulmonary vascular segmentation algorithms for lung CT and CTA images and then objectively evaluates and compares their performance. METHODS First, we present the annotated reference dataset of lung CT and CTA images. A subset of the dataset consisting of 7,307 slices for training and 3,888 slices for testing was made available to participants. Second, by analyzing the performance of the different convolutional neural networks from 12 different institutions for pulmonary vascular segmentation, the reasons for some defects and improvements are summarized. The models are mainly based on U-Net, attention mechanisms, GANs, and multi-scale fusion networks. Performance is measured in terms of the Dice coefficient, over-segmentation rate and under-segmentation rate. Finally, we discuss several proposed methods to improve pulmonary vessel segmentation results using deep neural networks. RESULTS Compared with the annotated ground truth from both lung CT and CTA images, most of the 12 deep neural network algorithms do an admirable job of pulmonary vascular extraction and segmentation, with Dice coefficients ranging from 0.70 to 0.85. The Dice coefficients for the top three algorithms are about 0.80. CONCLUSIONS Study results show that integrating methods that consider spatial information, fuse multi-scale feature maps, or apply effective post-processing into the deep neural network training and optimization process is significant for further improving the accuracy of pulmonary vascular segmentation.
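For reference, a minimal sketch of the challenge's evaluation quantities under one common set of definitions (the challenge's exact formulas may differ; this is an assumed helper, not the organizers' code):

```python
import numpy as np

def vessel_metrics(pred: np.ndarray, gt: np.ndarray):
    """pred, gt: binary vessel masks of equal shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = (pred & gt).sum()
    fp = (pred & ~gt).sum()
    fn = (~pred & gt).sum()
    dice = 2 * tp / (2 * tp + fp + fn)
    over_rate = fp / (tp + fp)    # predicted voxels outside the ground truth
    under_rate = fn / (tp + fn)   # ground-truth voxels that were missed
    return dice, over_rate, under_rate
```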
Collapse
Affiliation(s)
- Wenjun Tan
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
- College of Computer Science and Engineering, Northeastern University, Shenyang, China
| | - Luyu Zhou
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
- College of Computer Science and Engineering, Northeastern University, Shenyang, China
| | - Xiaoshuo Li
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
- College of Computer Science and Engineering, Northeastern University, Shenyang, China
| | - Xiaoyu Yang
- College of Electronics and Information Engineering, Tongji University, Shanghai, China
| | - Yufei Chen
- College of Electronics and Information Engineering, Tongji University, Shanghai, China
| | - Jinzhu Yang
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
- College of Computer Science and Engineering, Northeastern University, Shenyang, China
| |
Collapse
|
122
|
Lei Y, Tian Z, Wang T, Higgins K, Bradley JD, Curran WJ, Liu T, Yang X. Deep learning-based real-time volumetric imaging for lung stereotactic body radiation therapy: a proof of concept study. Phys Med Biol 2020; 65:235003. [PMID: 33080578 DOI: 10.1088/1361-6560/abc303] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
Due to inter- and intra-fraction variation of respiratory motion, it is highly desirable to provide real-time volumetric images during the treatment delivery of lung stereotactic body radiation therapy (SBRT) for accurate and active motion management. In this proof-of-concept study, we propose a novel generative adversarial network integrated with perceptual supervision to derive instantaneous volumetric images from a single 2D projection. Our proposed network, named TransNet, consists of three modules, i.e. encoding, transformation and decoding modules. Rather than only using an image distance loss between the generated 3D images and the ground truth 3D CT images to supervise the network, a perceptual loss in feature space is integrated into the loss function to force TransNet to yield accurate lung boundaries. Adversarial supervision is also used to improve the realism of the generated 3D images. We conducted a simulation study on 20 patient cases who had received lung SBRT treatments at our institution and undergone 4D-CT simulation, and evaluated the efficacy and robustness of our method for four different projection angles, i.e. 0°, 30°, 60° and 90°. For each 3D CT image set of a breathing phase, we simulated its 2D projections at these angles. For each projection angle, a patient's 3D CT images of 9 phases and the corresponding 2D projection data were used to train our network for that specific patient, with the remaining phase used for testing. The mean absolute error of the 3D images obtained by our method is 99.3 ± 14.1 HU. The peak signal-to-noise ratio and structural similarity index metric within the tumor region of interest are 15.4 ± 2.5 dB and 0.839 ± 0.090, respectively. The center of mass distance between the manual tumor contours on the 3D images obtained by our method and the manual tumor contours on the corresponding 3D phase CT images is within 2.6 mm, with a mean value of 1.26 mm averaged over all the cases. Our method has also been validated in a simulated challenging scenario with increased respiratory motion amplitude and tumor shrinkage, and achieved acceptable results. Our experimental results demonstrate the feasibility and efficacy of our 2D-to-3D method for lung cancer patients, which provides a potential solution for in-treatment real-time on-board volumetric imaging for tumor tracking and dose delivery verification to ensure the effectiveness of lung SBRT treatment.
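For reference, the image-quality metrics quoted above can be evaluated with scikit-image; a minimal sketch (the data range is an assumed HU window, not the authors' setting):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def volume_metrics(pred_hu: np.ndarray, gt_hu: np.ndarray,
                   data_range: float = 2000.0) -> dict:
    """pred_hu, gt_hu: HU-valued volumes (or tumor-ROI crops) of equal shape."""
    return {
        "MAE_HU": float(np.mean(np.abs(pred_hu - gt_hu))),
        "PSNR_dB": peak_signal_noise_ratio(gt_hu, pred_hu, data_range=data_range),
        "SSIM": structural_similarity(gt_hu, pred_hu, data_range=data_range),
    }
```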
Collapse
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, United States of America. Co-first author
| | | | | | | | | | | | | | | |
Collapse
|
123
|
Diniz JOB, Ferreira JL, Diniz PHB, Silva AC, de Paiva AC. Esophagus segmentation from planning CT images using an atlas-based deep learning approach. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 197:105685. [PMID: 32798976 DOI: 10.1016/j.cmpb.2020.105685] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/24/2020] [Accepted: 07/28/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVE One of the main steps in the planning of radiotherapy (RT) is the segmentation of organs at risk (OARs) in Computed Tomography (CT). The esophagus is one of the most difficult OARs to segment. The boundaries between the esophagus and other surrounding tissues are not well-defined, and it extends across many slices of the CT. Thus, manually segmenting the esophagus requires extensive experience and takes time. This difficulty in manual segmentation, combined with fatigue due to the number of slices to segment, can cause human errors. To address these challenges, computational solutions for analyzing medical images and proposing automated segmentation have been developed and explored in recent years. In this work, we propose a fully automatic method for esophagus segmentation for better planning of radiotherapy in CT. METHODS The proposed method is a fully automated segmentation of the esophagus, consisting of 5 main steps: (a) image acquisition; (b) VOI segmentation; (c) preprocessing; (d) esophagus segmentation; and (e) segmentation refinement. RESULTS The method was applied to a database of 36 CTs acquired from 3 different institutes. It achieved the best results in the literature so far: a Dice coefficient of 82.15%, Jaccard index of 70.21%, accuracy of 99.69%, sensitivity of 90.61%, specificity of 99.76%, and Hausdorff distance of 6.1030 mm. CONCLUSIONS With the achieved results, we were able to show how promising the method is, and that applying it in large medical centers, where esophagus segmentation is still an arduous and challenging task, can be of great help to specialists.
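A structural sketch of the five-step pipeline is shown below; the VOI and CNN stages are passed in as placeholders since the paper's models are not reproduced here, and the soft-tissue window and largest-component refinement rule are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def preprocess(voi: np.ndarray) -> np.ndarray:
    # (c) window soft tissue and normalize to [0, 1] (assumed window)
    return (np.clip(voi, -200.0, 250.0) + 200.0) / 450.0

def refine(mask: np.ndarray) -> np.ndarray:
    # (e) keep the largest connected component as the esophagus
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask.astype(bool)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)

def segment_esophagus(ct: np.ndarray, segment_voi, cnn_segment) -> np.ndarray:
    # (a) `ct` is the acquired volume
    voi = segment_voi(ct)                    # (b) VOI extraction (placeholder)
    mask = cnn_segment(preprocess(voi))      # (c) preprocessing + (d) CNN
    return refine(mask)                      # (e) refinement
```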
Collapse
Affiliation(s)
| | - Jonnison Lima Ferreira
- Federal University of Maranhão, Brazil; Federal Institute of Amazonas - IFAM, Manaus, AM, Brazil
| | | | | | | |
Collapse
|
124
|
Eckl M, Hoppen L, Sarria GR, Boda-Heggemann J, Simeonova-Chergou A, Steil V, Giordano FA, Fleckenstein J. Evaluation of a cycle-generative adversarial network-based cone-beam CT to synthetic CT conversion algorithm for adaptive radiation therapy. Phys Med 2020; 80:308-316. [PMID: 33246190 DOI: 10.1016/j.ejmp.2020.11.007] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/10/2020] [Revised: 10/29/2020] [Accepted: 11/05/2020] [Indexed: 12/17/2022] Open
Abstract
PURPOSE Image-guided radiation therapy could benefit from implementing adaptive radiation therapy (ART) techniques. A cycle-generative adversarial network (cycle-GAN)-based cone-beam computed tomography (CBCT)-to-synthetic CT (sCT) conversion algorithm was evaluated regarding image quality, image segmentation and dosimetric accuracy for head and neck (H&N), thoracic and pelvic body regions. METHODS Using a cycle-GAN, three body site-specific models were first trained with independent paired CT and CBCT datasets from a kV imaging system (XVI, Elekta). sCTs were generated from the first-fraction CBCT for 15 patients of each body region. Mean errors (ME) and mean absolute errors (MAE) were analyzed for the sCTs. On the sCTs, manually delineated structures were compared to structures deformed from the planning CT (pCT) and evaluated with standard segmentation metrics. Treatment plans were recalculated on the sCTs. A comparison of clinically relevant dose-volume parameters (D98, D50 and D2 of the target volume) and 3D-gamma (3%/3mm) analysis were performed. RESULTS The mean ME and MAE were 1.4, 29.6, 5.4 Hounsfield units (HU) and 77.2, 94.2, 41.8 HU for the H&N, thoracic and pelvic regions, respectively. Dice similarity coefficients varied between 66.7 ± 8.3% (seminal vesicles) and 94.9 ± 2.0% (lungs). Maximum mean surface distances were 6.3 mm (heart), followed by 3.5 mm (brainstem). The mean dosimetric differences of the target volumes did not exceed 1.7%. Mean 3D gamma pass rates greater than 97.8% were achieved in all cases. CONCLUSIONS The presented method generates sCT images with a quality close to pCT and yielded clinically acceptable dosimetric deviations. Thus, an important prerequisite towards clinical implementation of CBCT-based ART is fulfilled.
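For context, the core of cycle-GAN training is a cycle-consistency penalty that forces each CBCT→sCT→CBCT (and CT→CBCT→CT) round trip to reproduce its input; a minimal PyTorch sketch with assumed generator callables, not the study's implementation:

```python
import torch
import torch.nn.functional as F_nn

def cycle_loss(G, F, cbct: torch.Tensor, ct: torch.Tensor,
               lam: float = 10.0) -> torch.Tensor:
    """G: CBCT -> sCT generator; F: CT -> synthetic-CBCT generator."""
    cbct_rec = F(G(cbct))     # CBCT -> sCT -> reconstructed CBCT
    ct_rec = G(F(ct))         # CT -> synthetic CBCT -> reconstructed CT
    return lam * (F_nn.l1_loss(cbct_rec, cbct) + F_nn.l1_loss(ct_rec, ct))
```

In full training this term is added to the adversarial losses of the two discriminators; the weight lam is an assumed, commonly used value.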
Collapse
Affiliation(s)
- Miriam Eckl
- Department of Radiation Oncology, University Medical Center Mannheim, University of Heidelberg, Germany
| | - Lea Hoppen
- Department of Radiation Oncology, University Medical Center Mannheim, University of Heidelberg, Germany.
| | - Gustavo R Sarria
- Department of Radiology and Radiation Oncology, University Hospital Bonn, Germany
| | - Judit Boda-Heggemann
- Department of Radiation Oncology, University Medical Center Mannheim, University of Heidelberg, Germany
| | - Anna Simeonova-Chergou
- Department of Radiation Oncology, University Medical Center Mannheim, University of Heidelberg, Germany
| | - Volker Steil
- Department of Radiation Oncology, University Medical Center Mannheim, University of Heidelberg, Germany
| | - Frank A Giordano
- Department of Radiology and Radiation Oncology, University Hospital Bonn, Germany
| | - Jens Fleckenstein
- Department of Radiation Oncology, University Medical Center Mannheim, University of Heidelberg, Germany
| |
Collapse
|
125
|
Wang Z, Chang Y, Peng Z, Lv Y, Shi W, Wang F, Pei X, Xu XG. Evaluation of deep learning-based auto-segmentation algorithms for delineating clinical target volume and organs at risk involving data for 125 cervical cancer patients. J Appl Clin Med Phys 2020; 21:272-279. [PMID: 33238060 PMCID: PMC7769393 DOI: 10.1002/acm2.13097] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2020] [Revised: 10/03/2020] [Accepted: 10/21/2020] [Indexed: 12/15/2022] Open
Abstract
Objective To compare the accuracy of a deep learning-based auto-segmentation model with that of manual contouring by one medical resident, where both tried to mimic the delineation "habits" of the same senior clinical physician. Methods This study included 125 cervical cancer patients whose clinical target volumes (CTVs) and organs at risk (OARs) were delineated by the same senior physician. Of these 125 cases, 100 were used for model training and the remaining 25 for model testing. In addition, the medical resident, who had been instructed by the senior physician for approximately 8 months, delineated the CTVs and OARs for the testing cases. The Dice similarity coefficient (DSC) and the Hausdorff distance (HD) were used to evaluate the delineation accuracy for the CTV, bladder, rectum, small intestine, femoral-head-left, and femoral-head-right. Results The DSC values of the auto-segmentation model and manual contouring by the resident were, respectively, 0.86 and 0.83 for the CTV (P < 0.05), 0.91 and 0.91 for the bladder (P > 0.05), 0.88 and 0.84 for the femoral-head-right (P < 0.05), 0.88 and 0.84 for the femoral-head-left (P < 0.05), 0.86 and 0.81 for the small intestine (P < 0.05), and 0.81 and 0.84 for the rectum (P > 0.05). The HD (mm) values were, respectively, 14.84 and 18.37 for the CTV (P < 0.05), 7.82 and 7.63 for the bladder (P > 0.05), 6.18 and 6.75 for the femoral-head-right (P > 0.05), 6.17 and 6.31 for the femoral-head-left (P > 0.05), 22.21 and 26.70 for the small intestine (P > 0.05), and 7.04 and 6.13 for the rectum (P > 0.05). The auto-segmentation model took approximately 2 min to delineate the CTV and OARs while the resident took approximately 90 min to complete the same task. Conclusion The auto-segmentation model was as accurate as the medical resident but with much better efficiency in this study. Furthermore, the auto-segmentation approach offers the additional perceivable advantages of being consistent and ever improving when compared with manual approaches.
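For the HD values above, a minimal sketch of the symmetric Hausdorff distance between two contour point sets (an assumed helper, not the study's code):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_mm(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """points_*: (N, 3) arrays of contour/surface points in mm."""
    return max(directed_hausdorff(points_a, points_b)[0],
               directed_hausdorff(points_b, points_a)[0])
```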
Collapse
Affiliation(s)
- Zhi Wang
- Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China; Department of Radiation Oncology, First Affiliated Hospital of Anhui Medical University, Hefei, China
| | - Yankui Chang
- Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China
| | - Zhao Peng
- Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China
| | - Yin Lv
- Department of Radiation Oncology, First Affiliated Hospital of Anhui Medical University, Hefei, China
| | - Weijiong Shi
- Department of Radiation Oncology, First Affiliated Hospital of Anhui Medical University, Hefei, China
| | - Fan Wang
- Department of Radiation Oncology, First Affiliated Hospital of Anhui Medical University, Hefei, China
| | - Xi Pei
- Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China; Anhui Wisdom Technology Co., Ltd., Hefei, Anhui, China
| | - X George Xu
- Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China
| |
Collapse
|
126
|
Lei Y, He X, Yao J, Wang T, Wang L, Li W, Curran WJ, Liu T, Xu D, Yang X. Breast tumor segmentation in 3D automatic breast ultrasound using Mask scoring R-CNN. Med Phys 2020; 48:204-214. [PMID: 33128230 DOI: 10.1002/mp.14569] [Citation(s) in RCA: 42] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2020] [Revised: 10/20/2020] [Accepted: 10/20/2020] [Indexed: 12/24/2022] Open
Abstract
PURPOSE Automatic breast ultrasound (ABUS) imaging has become an essential tool in breast cancer diagnosis since it provides complementary information to other imaging modalities. Lesion segmentation on ABUS is a prerequisite step of breast cancer computer-aided diagnosis (CAD). This work aims to develop a deep learning-based method for automatic breast tumor segmentation in three-dimensional (3D) ABUS. METHODS For breast tumor segmentation in ABUS, we developed a Mask scoring region-based convolutional neural network (R-CNN) that consists of five subnetworks, that is, a backbone, a region proposal network, a region convolutional neural network head, a mask head, and a mask score head. A network block building a direct correlation between mask quality and region class was integrated into the Mask scoring R-CNN-based framework for the segmentation of new ABUS images with ambiguous regions of interest (ROIs). For segmentation accuracy evaluation, we retrospectively investigated 70 patients with breast tumors confirmed by needle biopsy and manually delineated on ABUS, of which 40 were used for fivefold cross-validation and 30 were used for a hold-out test. The comparison between the automatic breast tumor segmentations and the manual contours was quantified by (i) six metrics, including the Dice similarity coefficient (DSC), Jaccard index, 95% Hausdorff distance (HD95), mean surface distance (MSD), residual mean square distance (RMSD), and center of mass distance (CMD); and (ii) Pearson correlation analysis and Bland-Altman analysis. RESULTS The mean (median) DSC was 85% ± 10.4% (89.4%) and 82.1% ± 14.5% (85.6%) for cross-validation and the hold-out test, respectively. The corresponding HD95, MSD, RMSD, and CMD of the two tests were 1.646 ± 1.191 and 1.665 ± 1.129 mm, 0.489 ± 0.406 and 0.475 ± 0.371 mm, 0.755 ± 0.755 and 0.751 ± 0.508 mm, and 0.672 ± 0.612 and 0.665 ± 0.729 mm. The mean volumetric difference (mean and ± 1.96 standard deviation) was 0.47 cc ([-0.77, 1.71]) for the cross-validation and 0.23 cc ([-0.23, 0.69]) for the hold-out test, respectively. CONCLUSION We developed a novel Mask scoring R-CNN approach for automated segmentation of breast tumors in ABUS images and demonstrated its accuracy. Our learning-based method can potentially assist the clinical CAD of breast cancer using 3D ABUS imaging.
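For context, the mask score head regresses the IoU between a predicted mask and its ground truth, and the final score multiplies that predicted IoU by the classification confidence; a minimal sketch of the idea (a paraphrase of Mask scoring R-CNN, not the authors' code):

```python
import numpy as np

def mask_iou(pred_soft: np.ndarray, gt: np.ndarray, thr: float = 0.5) -> float:
    """Training target for the mask score head; pred_soft in [0, 1]."""
    p, g = pred_soft >= thr, gt.astype(bool)
    union = (p | g).sum()
    return float((p & g).sum() / union) if union else 0.0

def mask_score(cls_confidence: float, predicted_iou: float) -> float:
    # Final mask quality score used to rank/filter candidate masks.
    return cls_confidence * predicted_iou
```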
Collapse
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Xiuxiu He
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Jincao Yao
- Cancer Hospital of the University of Chinese Academy of Sciences, Zhejiang Cancer Hospital; Institute of Cancer and Basic Medicine (IBMC), Chinese Academy of Sciences, Hangzhou, 310022, China
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Lijing Wang
- Cancer Hospital of the University of Chinese Academy of Sciences, Zhejiang Cancer Hospital; Institute of Cancer and Basic Medicine (IBMC), Chinese Academy of Sciences, Hangzhou, 310022, China
| | - Wei Li
- Cancer Hospital of the University of Chinese Academy of Sciences, Zhejiang Cancer Hospital; Institute of Cancer and Basic Medicine (IBMC), Chinese Academy of Sciences, Hangzhou, 310022, China
| | - Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Dong Xu
- Cancer Hospital of the University of Chinese Academy of Sciences, Zhejiang Cancer Hospital; Institute of Cancer and Basic Medicine (IBMC), Chinese Academy of Sciences, Hangzhou, 310022, China
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| |
Collapse
|
127
|
Abstract
This paper presents a review of deep learning (DL)-based medical image registration methods. We summarized the latest developments and applications of DL-based registration methods in the medical field. These methods were classified into seven categories according to their methods, functions and popularity. A detailed review of each category was presented, highlighting important contributions and identifying specific challenges. A short assessment was presented following the detailed review of each category to summarize its achievements and future potential. We provided a comprehensive comparison among DL-based methods for lung and brain registration using benchmark datasets. Lastly, we analyzed the statistics of all the cited works from various aspects, revealing the popularity and future trend of DL-based medical image registration.
Collapse
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
| | | | | | | | | | | |
Collapse
|
128
|
Total marrow and total lymphoid irradiation in bone marrow transplantation for acute leukaemia. Lancet Oncol 2020; 21:e477-e487. [PMID: 33002443 DOI: 10.1016/s1470-2045(20)30342-9] [Citation(s) in RCA: 53] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2020] [Revised: 04/20/2020] [Accepted: 05/28/2020] [Indexed: 02/06/2023]
Abstract
The use of total body irradiation as part of conditioning regimens for acute leukaemia is progressively declining because of concerns of late toxic effects and the introduction of radiation-free regimens. Total marrow irradiation and total marrow and lymphoid irradiation represent more targeted forms of radiotherapy compared with total body irradiation that have the potential to decrease toxicity and escalate the dose to the bone marrow for high-risk patients. We review the technological basis and the clinical development of total marrow irradiation and total marrow and lymphoid irradiation, highlighting both the possible advantages as well as the current roadblocks for widespread implementation among transplantation units. The exact role of total marrow irradiation or total marrow and lymphoid irradiation in new conditioning regimens seems dependent on its technological implementation, aiming to make the whole procedure less time consuming, more streamlined, and easier to integrate into the clinical workflow. We also foresee a role for computer-assisted planning, as a way to improve planning and delivery and to incorporate total marrow irradiation and total marrow and lymphoid irradiation in multi-centric phase 2-3 trials.
Collapse
|
129
|
Fu Y, Lei Y, Wang T, Patel P, Jani AB, Mao H, Curran WJ, Liu T, Yang X. Biomechanically constrained non-rigid MR-TRUS prostate registration using deep learning based 3D point cloud matching. Med Image Anal 2020; 67:101845. [PMID: 33129147 DOI: 10.1016/j.media.2020.101845] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2019] [Revised: 08/17/2020] [Accepted: 08/31/2020] [Indexed: 01/04/2023]
Abstract
A non-rigid MR-TRUS image registration framework is proposed for prostate interventions. The registration framework consists of a convolutional neural network (CNN) for MR prostate segmentation, a CNN for TRUS prostate segmentation and a point-cloud-based network for rapid 3D point cloud matching. Volumetric prostate point clouds were generated from the segmented prostate masks using tetrahedron meshing. The point cloud matching network was trained using deformation fields generated by finite element analysis; therefore, the network implicitly models the underlying biomechanical constraint when performing point cloud matching. A total of 50 patients' datasets were used for network training and testing. Alignment of prostate shapes after registration was evaluated using three metrics, including the Dice similarity coefficient (DSC), mean surface distance (MSD) and Hausdorff distance (HD). Internal point-to-point registration accuracy was assessed using the target registration error (TRE). Jacobian determinants and strain tensors of the predicted deformation field were calculated to analyze the physical fidelity of the deformation field. On average, the mean and standard deviation were 0.94 ± 0.02, 0.90 ± 0.23 mm, 2.96 ± 1.00 mm and 1.57 ± 0.77 mm for DSC, MSD, HD and TRE, respectively. Robustness of our method to point cloud noise was evaluated by adding different levels of noise to the query point clouds. Our results demonstrate that the proposed method can rapidly perform MR-TRUS image registration with good accuracy and robustness.
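The Jacobian-determinant check mentioned above can be sketched as follows (assumed convention: a voxel-spaced displacement field of shape (3, D, H, W); not the authors' code). A determinant that stays positive everywhere indicates a fold-free, physically plausible deformation:

```python
import numpy as np

def jacobian_determinant(disp: np.ndarray) -> np.ndarray:
    """disp: displacement field u with shape (3, D, H, W), voxel units."""
    # Deformation phi(x) = x + u(x), so J = I + grad(u).
    grads = np.stack([np.stack(np.gradient(disp[i]), axis=0) for i in range(3)])
    J = grads + np.eye(3).reshape(3, 3, 1, 1, 1)       # J[i, j] = dphi_i/dx_j
    return np.linalg.det(np.moveaxis(J, (0, 1), (-2, -1)))  # (D, H, W)

# Example check: fraction of folded voxels, ideally zero.
# folded_fraction = (jacobian_determinant(u) <= 0).mean()
```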
Collapse
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States
| | - Yang Lei
- Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States
| | - Tonghe Wang
- Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States; Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States
| | - Pretesh Patel
- Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States; Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States
| | - Ashesh B Jani
- Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States; Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States
| | - Hui Mao
- Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States; Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA 30322, United States
| | - Walter J Curran
- Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States; Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States
| | - Tian Liu
- Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States; Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States
| | - Xiaofeng Yang
- Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States; Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States.
| |
Collapse
|
130
|
Zhu J, Chen X, Yang B, Bi N, Zhang T, Men K, Dai J. Evaluation of Automatic Segmentation Model With Dosimetric Metrics for Radiotherapy of Esophageal Cancer. Front Oncol 2020; 10:564737. [PMID: 33117694 PMCID: PMC7550908 DOI: 10.3389/fonc.2020.564737] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2020] [Accepted: 08/17/2020] [Indexed: 12/11/2022] Open
Abstract
Background and Purpose: Automatic segmentation models have proven efficient for delineation of organs at risk (OARs) in radiotherapy; their performance is usually evaluated with geometric differences between automatic and manual delineations. However, dosimetric differences attract more interest than geometric differences in the clinic. Therefore, this study aimed to evaluate the performance of automatic segmentation with dosimetric metrics for volumetric modulated arc therapy of esophageal cancer patients. Methods: Nineteen esophageal cancer cases were included in this study. Clinicians manually delineated the target volumes and the OARs for each case. Another set of OARs was automatically generated using convolutional neural network models. The radiotherapy plans were optimized with the manually delineated targets and the automatically delineated OARs separately. Segmentation accuracy was evaluated by the Dice similarity coefficient (DSC) and mean distance to agreement (MDA). Dosimetric metrics of manually and automatically delineated OARs were obtained and compared. The clinically acceptable dose and volume differences of OARs between manual and automatic delineations were set to within 1 Gy and 1%, respectively. Results: Average DSC values were greater than 0.92 except for the spinal cord (0.82), and average MDA values were <0.90 mm except for the heart (1.74 mm). Eleven of the 20 dosimetric metrics of the OARs showed no significant differences (P > 0.05). Although there were significant differences (P < 0.05) for the spinal cord (D2%), left lung (V10, V20, V30, and mean dose), and bilateral lung (V10, V20, V30, and mean dose), their absolute differences were small and clinically acceptable. The maximum dosimetric metric differences of OARs between manual and automatic delineations were ΔD2% = 0.35 Gy for the spinal cord and ΔV30 = 0.4% for the bilateral lung, which were within the clinical criteria in this study. Conclusion: Dosimetric metrics were proposed to evaluate automatic delineation in radiotherapy planning of esophageal cancer. Consequently, based on the dosimetric evaluation in this study, automatic delineation could substitute for manual delineation in esophageal cancer radiotherapy planning.
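For reference, the dose-volume quantities compared above can be computed directly from the dose values inside a structure mask; a minimal sketch (assumed helpers, not the study's planning-system export):

```python
import numpy as np

def d_percent(organ_dose: np.ndarray, pct: float = 2.0) -> float:
    """D2%: minimum dose (Gy) received by the hottest pct% of the volume."""
    return float(np.percentile(organ_dose, 100.0 - pct))

def v_dose(organ_dose: np.ndarray, threshold_gy: float) -> float:
    """Vx: percentage of the structure volume receiving >= threshold_gy."""
    return float(100.0 * np.mean(organ_dose >= threshold_gy))

# Usage, with hypothetical arrays: dose = dose_grid[lung_mask]
# print(d_percent(dose, 2.0), v_dose(dose, 20.0), dose.mean())
```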
Collapse
Affiliation(s)
- Ji Zhu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Xinyuan Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Bining Yang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Nan Bi
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Tao Zhang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| |
Collapse
|
131
|
Fan G, Liu H, Wang D, Feng C, Li Y, Yin B, Zhou Z, Gu X, Zhang H, Lu Y, He S. Deep learning-based lumbosacral reconstruction for difficulty prediction of percutaneous endoscopic transforaminal discectomy at L5/S1 level: A retrospective cohort study. Int J Surg 2020; 82:162-169. [PMID: 32882401 DOI: 10.1016/j.ijsu.2020.08.036] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2020] [Revised: 07/30/2020] [Accepted: 08/19/2020] [Indexed: 01/11/2023]
Abstract
BACKGROUND Deep learning has been validated as a promising technique for automatic segmentation and rapid three-dimensional (3D) reconstruction of lumbosacral structures on CT. Simulated foraminoplasty of percutaneous endoscopic transforaminal discectomy (PETD) through the Kambin triangle may benefit viability assessment of PETD at the L5/S1 level. MATERIAL AND METHODS Medical records and radiographic data of patients with L5/S1 lumbar disc herniation (LDH) who received a single-level PETD from March 2013 to February 2018 were retrospectively collected and analyzed. Deep learning was adopted to achieve semantic segmentation of lumbosacral structures (nerve, bone, disc) on CT, and the segmented masks were used to reconstruct 3D models. Two observers measured the area of the Kambin triangle on 6 selected deep learning-derived 3D (DL-3D) models and ground truth-derived 3D (GT-3D) models, and the intraclass correlation coefficient (ICC) was calculated to assess test-retest and interobserver reliability. Foraminoplasty of PETD was simulated on the L5/S1 lumbosacral 3D models. Patients whose simulations showed extended foraminoplasty or a stuck cannula were predicted to be PETD-difficult cases (Group A). The remaining patients were regarded as PETD-normal cases (Group B). Clinical information and outcomes were compared between the two groups. RESULTS Deep learning-derived 3D models of lumbosacral structures (nerves, bones, and disc) from thin-layer CT were reliable. The area of the Kambin triangle was 161.27 ± 40.10 mm2 on DL-3D models and 153.57 ± 32.37 mm2 on GT-3D models (p = 0.206). Reliability testing revealed strong test-retest reliability (ICC between 0.947 and 0.971) and interobserver reliability of multiple measurements (ICC between 0.866 and 0.961). The average operation time was 99.62 ± 17.39 min in Group A and 88.93 ± 21.87 min in Group B (P = 0.025). No significant differences in patient-reported outcomes or complications were observed between the two groups (P > 0.05). CONCLUSION Deep learning achieved accurate and rapid segmentation of lumbosacral structures on CT, and deep learning-based 3D reconstructions were efficacious and reliable. Foraminoplasty simulation with deep learning-based lumbosacral reconstructions may benefit surgical difficulty prediction of PETD at the L5/S1 level.
Collapse
Affiliation(s)
- Guoxin Fan
- Department of Spine Surgery, Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China; Department of Orthopaedics, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, China.
| | - Huaqing Liu
- Artificial Intelligence Innovation Center, Research Institute of Tsinghua, Pearl River Delta, Guangzhou, 510735, China
| | - Dongdong Wang
- Department of Orthopaedic Trauma, East Hospital, Tongji University School of Medicine, Shanghai, China
| | - Chaobo Feng
- Department of Orthopaedics, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, China; Spinal Pain Research Institute, Tongji University School of Medicine, Shanghai, China
| | - Yufeng Li
- Department of Sports Medicine, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China
| | - Bangde Yin
- Department of Orthopaedics, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, China
| | - Zhi Zhou
- Department of Orthopaedics, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, China; Spinal Pain Research Institute, Tongji University School of Medicine, Shanghai, China
| | - Xin Gu
- Department of Orthopaedics, Changzheng Hospital Affiliated to the Second Military Medical University, Shanghai, China
| | - Hailong Zhang
- Department of Orthopaedics, Shanghai Putuo People's Hospital, Tongji University School of Medicine, Shanghai, China
| | - Yi Lu
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, USA
| | - Shisheng He
- Department of Orthopaedics, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, China; Spinal Pain Research Institute, Tongji University School of Medicine, Shanghai, China.
| |
Collapse
|
132
|
Sultana S, Robinson A, Song DY, Lee J. Automatic multi-organ segmentation in computed tomography images using hierarchical convolutional neural network. JOURNAL OF MEDICAL IMAGING (BELLINGHAM, WASH.) 2020; 7:055001. [PMID: 33102622 DOI: 10.1117/1.jmi.7.5.055001] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Received: 04/13/2020] [Accepted: 09/28/2020] [Indexed: 01/17/2023]
Abstract
Purpose: Accurate segmentation of treatment planning computed tomography (CT) images is important for radiation therapy (RT) planning. However, low soft tissue contrast in CT makes the segmentation task challenging. We propose a two-step hierarchical convolutional neural network (CNN) segmentation strategy to automatically segment multiple organs from CT. Approach: The first step generates a coarse segmentation from which organ-specific regions of interest (ROIs) are produced. The second step produces detailed segmentation of each organ. The ROIs are generated using UNet, which automatically identifies the area of each organ and improves computational efficiency by eliminating irrelevant background information. For the fine segmentation step, we combined UNet with a generative adversarial network. The generator is designed as a UNet that is trained to segment organ structures, and the discriminator is a fully convolutional network that distinguishes whether the segmentation is real or generator-predicted, thus improving the segmentation accuracy. We validated the proposed method on male pelvic and head and neck (H&N) CTs used for RT planning of prostate and H&N cancer, respectively. For the pelvic structure segmentation, the network was trained to segment the prostate, bladder, and rectum. For H&N, the network was trained to segment the parotid glands (PG) and submandibular glands (SMG). Results: The trained segmentation networks were tested on 15 pelvic and 20 H&N independent datasets. The H&N segmentation network was also tested on a public domain dataset (N = 38) and showed similar performance. The average Dice similarity coefficients (mean ± SD) of the pelvic structures are 0.91 ± 0.05 (prostate), 0.95 ± 0.06 (bladder), 0.90 ± 0.09 (rectum), and of the H&N structures are 0.87 ± 0.04 (PG) and 0.86 ± 0.05 (SMG). The segmentation for each CT takes <10 s on average. Conclusions: Experimental results demonstrate that the proposed method can produce fast, accurate, and reproducible segmentation of multiple organs of different sizes and shapes, and show its potential to be applicable to different disease sites.
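A minimal sketch of the first-step ROI generation described above (the margin and conventions are illustrative assumptions, not the authors' settings): take the coarse mask, pad its bounding box, and crop the CT so the fine network sees little irrelevant background.

```python
import numpy as np

def crop_roi(ct: np.ndarray, coarse_mask: np.ndarray, margin: int = 8):
    """Returns the cropped volume and the slices needed to paste results back."""
    idx = np.argwhere(coarse_mask > 0)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + 1 + margin, ct.shape)
    sl = tuple(slice(a, b) for a, b in zip(lo, hi))
    return ct[sl], sl
```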
Collapse
Affiliation(s)
- Sharmin Sultana
- Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
| | - Adam Robinson
- Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
| | - Daniel Y Song
- Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
| | - Junghoon Lee
- Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
| |
Collapse
|
133
|
Liu Y, Lei Y, Fu Y, Wang T, Tang X, Jiang X, Curran WJ, Liu T, Patel P, Yang X. CT-based multi-organ segmentation using a 3D self-attention U-net network for pancreatic radiotherapy. Med Phys 2020; 47:4316-4324. [PMID: 32654153 DOI: 10.1002/mp.14386] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2019] [Revised: 07/05/2020] [Accepted: 07/06/2020] [Indexed: 01/24/2023] Open
Abstract
PURPOSE Segmentation of organs-at-risk (OARs) is a weak link in the radiotherapy treatment planning process because manual contouring is labor-intensive and time-consuming. This work aimed to develop a deep learning-based method for rapid and accurate pancreatic multi-organ segmentation that can expedite the treatment planning process. METHODS We retrospectively investigated one hundred patients who underwent computed tomography (CT) simulation with contours delineated. Eight OARs, including the large bowel, small bowel, duodenum, left kidney, right kidney, liver, spinal cord and stomach, were the target organs to be segmented. The proposed three-dimensional (3D) deep attention U-Net features a deep attention strategy to effectively differentiate multiple organs. Performance of the proposed method was evaluated using six metrics, including the Dice similarity coefficient (DSC), sensitivity, specificity, 95% Hausdorff distance (HD95), mean surface distance (MSD) and residual mean square distance (RMSD). RESULTS The contours generated by the proposed method closely resemble the ground-truth manual contours, as evidenced by encouraging quantitative results in terms of DSC, sensitivity, specificity, HD95, MSD and RMSD. For DSC, mean values of 0.91 ± 0.03, 0.89 ± 0.06, 0.86 ± 0.06, 0.95 ± 0.02, 0.95 ± 0.02, 0.96 ± 0.01, 0.87 ± 0.05 and 0.93 ± 0.03 were achieved for the large bowel, small bowel, duodenum, left kidney, right kidney, liver, spinal cord and stomach, respectively. CONCLUSIONS The proposed method could significantly expedite the treatment planning process by rapidly segmenting multiple OARs. The method could potentially be used in pancreatic adaptive radiotherapy to increase dose delivery accuracy and minimize gastrointestinal toxicity.
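For context, a minimal PyTorch sketch of an additive attention gate of the kind used in attention U-Nets (a generic stand-in; the paper's exact deep attention design may differ):

```python
import torch
import torch.nn as nn

class AttentionGate3D(nn.Module):
    """Re-weights skip-connection features x using a gating signal g."""
    def __init__(self, x_ch: int, g_ch: int, inter_ch: int):
        super().__init__()
        self.theta = nn.Conv3d(x_ch, inter_ch, kernel_size=1)  # skip features
        self.phi = nn.Conv3d(g_ch, inter_ch, kernel_size=1)    # gating signal
        self.psi = nn.Conv3d(inter_ch, 1, kernel_size=1)       # attention map

    def forward(self, x: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        # x and g are assumed already at the same spatial resolution.
        alpha = torch.sigmoid(self.psi(torch.relu(self.theta(x) + self.phi(g))))
        return x * alpha
```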
Collapse
Affiliation(s)
- Yingzi Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Xiangyang Tang
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Xiaojun Jiang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| |
Collapse
|
134
|
Creating Artificial Images for Radiology Applications Using Generative Adversarial Networks (GANs) - A Systematic Review. Acad Radiol 2020; 27:1175-1185. [PMID: 32035758 DOI: 10.1016/j.acra.2019.12.024] [Citation(s) in RCA: 51] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2019] [Revised: 12/24/2019] [Accepted: 12/27/2019] [Indexed: 12/22/2022]
Abstract
RATIONALE AND OBJECTIVES Generative adversarial networks (GANs) are deep learning models aimed at generating realistic-looking fake images. These novel models have made a great impact on the computer vision field. Our study aims to review the literature on GAN applications in radiology. MATERIALS AND METHODS This systematic review followed the PRISMA guidelines. Electronic datasets were searched for studies describing applications of GANs in radiology. We included studies published up to September 2019. RESULTS Data were extracted from 33 studies published between 2017 and 2019. Eighteen studies focused on CT image generation, ten on MRI, three on PET/MRI and PET/CT, one on ultrasound and one on X-ray. Applications in radiology included image reconstruction and denoising for dose and scan time reduction (fourteen studies), data augmentation (six studies), transfer between modalities (eight studies) and image segmentation (five studies). All studies reported that generated images improved the performance of the developed algorithms. CONCLUSION GANs are increasingly studied for various radiology applications. They enable the creation of new data, which can be used to improve clinical care, education and research.
|
135
|
Wang T, Lei Y, Fu Y, Curran WJ, Liu T, Nye JA, Yang X. Machine learning in quantitative PET: A review of attenuation correction and low-count image reconstruction methods. Phys Med 2020; 76:294-306. [PMID: 32738777 PMCID: PMC7484241 DOI: 10.1016/j.ejmp.2020.07.028] [Citation(s) in RCA: 55] [Impact Index Per Article: 13.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/23/2020] [Revised: 07/13/2020] [Accepted: 07/21/2020] [Indexed: 02/08/2023] Open
Abstract
The rapid expansion of machine learning is offering a new wave of opportunities for nuclear medicine. This paper reviews applications of machine learning for the study of attenuation correction (AC) and low-count image reconstruction in quantitative positron emission tomography (PET). Specifically, we present the developments of machine learning methodology, ranging from random forest and dictionary learning to the latest convolutional neural network-based architectures. For application in PET attenuation correction, two general strategies are reviewed: 1) generating synthetic CT from MR or non-AC PET for the purposes of PET AC, and 2) direct conversion from non-AC PET to AC PET. For low-count PET reconstruction, recent deep learning-based studies and the potential advantages over conventional machine learning-based methods are presented and discussed. In each application, the proposed methods, study designs and performance of published studies are listed and compared with a brief discussion. Finally, the overall contributions and remaining challenges are summarized.
Affiliation(s)
- Tonghe Wang: Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei: Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Yabo Fu: Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Walter J Curran: Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu: Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Jonathon A Nye: Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA
- Xiaofeng Yang: Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
|
136
|
Peña-Solórzano CA, Albrecht DW, Bassed RB, Gillam J, Harris PC, Dimmock MR. Semi-supervised labelling of the femur in a whole-body post-mortem CT database using deep learning. Comput Biol Med 2020; 122:103797. [PMID: 32658723 DOI: 10.1016/j.compbiomed.2020.103797] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/29/2020] [Revised: 04/29/2020] [Accepted: 04/29/2020] [Indexed: 01/16/2023]
Abstract
A deep learning pipeline was developed and used to localize and classify a variety of implants in the femur contained in whole-body post-mortem computed tomography (PMCT) scans. The results provide a proof of principle for labelling content not described in medical/autopsy reports. The pipeline, which incorporated residual networks and an autoencoder, was trained and tested using n = 450 full-body PMCT scans. For the localization component, Dice scores of 0.99, 0.96, and 0.98 and mean absolute errors of 3.2, 7.1, and 4.2 mm were obtained in the axial, coronal, and sagittal views, respectively. A regression analysis found the orientation of the implant relative to the scanner axis and the relative positioning of the extremities to be statistically significant factors. For the classification component, test cases were properly labelled as nail (N+), hip replacement (H+), knee replacement (K+) or without implant (I-) with an accuracy >97%. The recall for I- and H+ cases was 1.00, but fell to 0.82 and 0.65 for K+ and N+ cases, respectively. This semi-automatic approach provides a generalized structure for image-based labelling of features without requiring time-consuming segmentation.
Affiliation(s)
- C A Peña-Solórzano: Department of Medical Imaging and Radiation Sciences, Monash University, Wellington Rd, Clayton, Melbourne, VIC 3800, Australia
- D W Albrecht: Clayton School of Information Technology, Monash University, Wellington Rd, Clayton, Melbourne, VIC 3800, Australia
- R B Bassed: Victorian Institute of Forensic Medicine, 57-83 Kavanagh St., Southbank, Melbourne, VIC 3006, Australia; Department of Forensic Medicine, Monash University, Wellington Rd, Clayton, Melbourne, VIC 3800, Australia
- J Gillam: Land Division, Defence Science and Technology Group, Fishermans Bend, Melbourne, VIC 3207, Australia
- P C Harris: The Royal Children's Hospital Melbourne, 50 Flemington Road, Parkville, Melbourne, VIC 3052, Australia; Department of Orthopaedic Surgery, Western Health, Footscray Hospital, Gordon St, Footscray, Melbourne, VIC 3011, Australia
- M R Dimmock: Department of Medical Imaging and Radiation Sciences, Monash University, Wellington Rd, Clayton, Melbourne, VIC 3800, Australia
|
137
|
Dai X, Lei Y, Zhang Y, Qiu RLJ, Wang T, Dresser SA, Curran WJ, Patel P, Liu T, Yang X. Automatic multi-catheter detection using deeply supervised convolutional neural network in MRI-guided HDR prostate brachytherapy. Med Phys 2020; 47:4115-4124. [PMID: 32484573 DOI: 10.1002/mp.14307] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2020] [Revised: 05/19/2020] [Accepted: 05/24/2020] [Indexed: 12/19/2022] Open
Abstract
PURPOSE High-dose-rate (HDR) brachytherapy is an established technique used as a monotherapy option or as a focal boost in conjunction with external beam radiation therapy (EBRT) for treating prostate cancer. Radiation source path reconstruction is a critical procedure in HDR treatment planning, and manually identifying the source path is labor-intensive and time-consuming. In recent years, magnetic resonance imaging (MRI) has become a valuable imaging modality for image-guided HDR prostate brachytherapy due to its superb soft-tissue contrast for target delineation and normal tissue contouring. The purpose of this study is to investigate a deep learning-based method to automatically reconstruct multiple catheters in MRI for prostate cancer HDR brachytherapy treatment planning. METHODS An attention-gated U-Net incorporating a total variation (TV) regularization model was developed for multi-catheter segmentation in MRI. The attention gates were used to improve the accuracy of identifying small catheter points, while TV regularization was adopted to encode the natural spatial continuity of catheters into the model. The model was trained using binary catheter annotation images provided by experienced physicists as ground truth, paired with the original MR images. After the network was trained, MR images of a new prostate cancer patient receiving HDR brachytherapy were fed into the model to predict the locations and shapes of all the catheters. Quantitative assessments of our proposed method were based on catheter shaft and tip errors compared to the ground truth. RESULTS Our method detected 299 catheters from 20 patients receiving HDR prostate brachytherapy with a catheter tip error of 0.37 ± 1.68 mm and a catheter shaft error of 0.93 ± 0.50 mm. For catheter tip detection, 87% of tips were localized within an error of ±2.0 mm, and more than 71% within an absolute error of no more than 1.0 mm. For catheter shaft localization, 97% of catheters were detected with an error of <2.0 mm, while 63% were within 1.0 mm. CONCLUSIONS In this study, we proposed a novel multi-catheter detection method to precisely localize the tips and shafts of catheters in three-dimensional MR images for HDR prostate brachytherapy. It paves the way for elevating the quality and outcome of MRI-guided HDR prostate brachytherapy.
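As a sketch of how a TV penalty can encode that spatial continuity, the snippet below adds the total variation of the predicted probability map to a binary cross-entropy segmentation loss; the weight `lam` and this exact combination are illustrative assumptions, not the published model:

```python
import torch
import torch.nn.functional as F

def tv_loss(prob: torch.Tensor) -> torch.Tensor:
    """Anisotropic total variation of a (B, 1, H, W) probability map.

    Penalizing intensity jumps between neighbouring pixels rewards the thin,
    spatially continuous structures expected of catheters.
    """
    dh = (prob[:, :, 1:, :] - prob[:, :, :-1, :]).abs().mean()
    dw = (prob[:, :, :, 1:] - prob[:, :, :, :-1]).abs().mean()
    return dh + dw

# Hypothetical combined objective: cross-entropy plus a weighted TV penalty
logits = torch.randn(2, 1, 64, 64, requires_grad=True)  # stand-in network output
target = (torch.rand(2, 1, 64, 64) > 0.95).float()      # sparse catheter-like mask
lam = 0.1                                               # illustrative regularization weight
loss = F.binary_cross_entropy_with_logits(logits, target) + lam * tv_loss(torch.sigmoid(logits))
loss.backward()
```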
Affiliation(s)
- Xianjin Dai: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30332, USA
- Yang Lei: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30332, USA
- Yupei Zhang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30332, USA
- Richard L J Qiu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30332, USA
- Tonghe Wang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30332, USA
- Sean A Dresser: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30332, USA
- Walter J Curran: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30332, USA
- Pretesh Patel: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30332, USA
- Tian Liu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30332, USA
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30332, USA
|
138
|
Schreier J, Attanasi F, Laaksonen H. Generalization vs. Specificity: In Which Cases Should a Clinic Train its Own Segmentation Models? Front Oncol 2020; 10:675. [PMID: 32477941 PMCID: PMC7241256 DOI: 10.3389/fonc.2020.00675] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2020] [Accepted: 04/09/2020] [Indexed: 11/25/2022] Open
Abstract
As artificial intelligence for image segmentation becomes increasingly available, the question arises whether these solutions generalize between different hospitals and geographies. The present study addresses this question by comparing multi-institutional models to site-specific models. Using CT data sets from four clinics for organs-at-risk of the female breast, female pelvis and male pelvis, we differentiate between the effect of population differences and differences in clinical practice. Our study thus provides guidelines to hospitals on when training a custom, hospital-specific deep neural network is advisable and when a network provided by a third party can be used. The results show that for the organs of the female pelvis and the heart, segmentation quality is influenced solely by the training set size, while patient population variability affects female breast segmentation quality beyond the effect of the training set size. In the comparison of site-specific contours on the male pelvis, we see that for a sufficiently large data set, a custom, hospital-specific model outperforms a multi-institutional one on some of the organs. However, for small hospital-specific data sets, a multi-institutional model provides the better segmentation quality.
Affiliation(s)
- Jan Schreier: Varian Medical Systems, Palo Alto, CA, United States
|
139
|
He X, Guo BJ, Lei Y, Wang T, Fu Y, Curran WJ, Zhang LJ, Liu T, Yang X. Automatic segmentation and quantification of epicardial adipose tissue from coronary computed tomography angiography. Phys Med Biol 2020; 65:095012. [PMID: 32182595 DOI: 10.1088/1361-6560/ab8077] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023]
Abstract
Epicardial adipose tissue (EAT) is a visceral fat deposit known for its association with factors such as obesity, diabetes mellitus, age, and hypertension. Fast and reproducible segmentation of the EAT is important for interpreting its role as an independent risk marker. However, EAT has a variable distribution, and various diseases may affect its volume, which adds complexity to the already time-consuming manual segmentation work. We propose a 3D deep attention U-Net method to automatically segment the EAT from coronary computed tomography angiography (CCTA). Five-fold cross-validation and hold-out experiments were used to evaluate the proposed method through a retrospective investigation of 200 patients. The automatically segmented EAT volume was compared with physician-approved clinical contours. The quantitative metrics used were the Dice similarity coefficient (DSC), sensitivity, specificity, Jaccard index (JAC), Hausdorff distance (HD), mean surface distance (MSD), residual mean square distance (RMSD), and center-of-mass distance (CMD). For cross-validation, the median DSC, sensitivity, and specificity were 92.7%, 91.1%, and 95.1%, respectively, while JAC, HD, CMD, MSD, and RMSD were 82.9% ± 8.8%, 3.77 ± 1.86 mm, 1.98 ± 1.50 mm, 0.37 ± 0.24 mm, and 0.65 ± 0.37 mm, respectively. For the hold-out test, the accuracy of the proposed method remained high. We developed a novel deep learning-based approach for the automated segmentation of the EAT on CCTA images and demonstrated its high accuracy through comparison with ground-truth contours of 200 clinical cases using eight quantitative metrics, Pearson correlation, and Bland-Altman analysis. Our automatic EAT segmentation results show the potential of the proposed method for computer-aided diagnosis of coronary artery disease (CAD) in clinical settings.
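The deep attention strategy in this family of papers is typically an additive attention gate on the U-Net skip connections, which learns to suppress background features before they reach the decoder. A minimal 3D PyTorch sketch in that spirit (channel sizes are arbitrary, and the gating signal is assumed to be already resampled to the skip connection's spatial size):

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate in the style of attention U-Nets (Oktay et al., 2018).

    Gates an encoder skip connection `x` with a coarser decoder signal `g`,
    so the decoder attends to organ-relevant spatial locations.
    """
    def __init__(self, x_ch: int, g_ch: int, inter_ch: int):
        super().__init__()
        self.theta = nn.Conv3d(x_ch, inter_ch, kernel_size=1)  # project skip features
        self.phi = nn.Conv3d(g_ch, inter_ch, kernel_size=1)    # project gating signal
        self.psi = nn.Conv3d(inter_ch, 1, kernel_size=1)       # per-voxel attention logit

    def forward(self, x: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        # x and g share spatial size here for brevity; in a full U-Net,
        # g would be upsampled to match x first.
        alpha = torch.sigmoid(self.psi(torch.relu(self.theta(x) + self.phi(g))))
        return x * alpha  # attention-weighted skip features

gate = AttentionGate(x_ch=32, g_ch=64, inter_ch=16)
x = torch.randn(1, 32, 16, 32, 32)  # encoder skip features (N, C, D, H, W)
g = torch.randn(1, 64, 16, 32, 32)  # decoder gating signal
print(gate(x, g).shape)             # torch.Size([1, 32, 16, 32, 32])
```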
Affiliation(s)
- Xiuxiu He (co-first author): Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
|
140
|
Hwang EJ, Park CM. Clinical Implementation of Deep Learning in Thoracic Radiology: Potential Applications and Challenges. Korean J Radiol 2020; 21:511-525. [PMID: 32323497 PMCID: PMC7183830 DOI: 10.3348/kjr.2019.0821] [Citation(s) in RCA: 39] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2019] [Accepted: 01/31/2020] [Indexed: 12/25/2022] Open
Abstract
Chest X-ray radiography and computed tomography, the two mainstay modalities in thoracic radiology, are under active investigation with deep learning technology. Deep learning has shown promising performance in various tasks, including detection, classification, segmentation, and image synthesis, outperforming conventional methods and suggesting its potential for clinical implementation. However, the implementation of deep learning in daily clinical practice is in its infancy and faces several challenges, such as a limited ability to explain output results, uncertain benefits regarding patient outcomes, and incomplete integration into the daily workflow. In this review article, we introduce the potential clinical applications of deep learning technology in thoracic radiology and discuss several challenges for its implementation in daily clinical practice.
Affiliation(s)
- Eui Jin Hwang: Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Chang Min Park: Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
|
141
|
Feng X, Bernard ME, Hunter T, Chen Q. Improving accuracy and robustness of deep convolutional neural network based thoracic OAR segmentation. Phys Med Biol 2020; 65:07NT01. [PMID: 32079002 DOI: 10.1088/1361-6560/ab7877] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
Abstract
Deep convolutional neural networks (DCNNs) have shown great success in various medical image segmentation tasks, including organ-at-risk (OAR) segmentation from computed tomography (CT) images. However, most studies train and test on data from the same source(s), so the ability of a trained DCNN to generalize to a different dataset, and strategies for addressing the resulting performance drop, are not well studied. In this study we investigated the performance of a well-trained DCNN model from a public dataset for thoracic OAR segmentation on a local dataset and explored the systematic differences between the datasets. We observed that a subtle shift of organs inside the patient body, caused by an abdominal compression technique during image acquisition, led to significantly worse performance on the local dataset. Furthermore, we developed an optimal strategy by incorporating different numbers of new cases from the local institution and using transfer learning to improve the accuracy and robustness of the trained DCNN model. We found that by adding as few as 10 cases from the local institution, the performance could reach the same level as on the original dataset. With transfer learning, the training time could be significantly shortened, with slightly worse performance for heart segmentation.
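A common way to realize such a strategy is to start from the publicly trained weights, freeze the early feature extractor, and fine-tune the remaining layers on the handful of local cases. A toy PyTorch sketch under those assumptions (the two-block "network", checkpoint name and hyperparameters are placeholders, not the authors' setup):

```python
import torch
import torch.nn as nn

# Toy encoder/head standing in for a pretrained segmentation DCNN
model = nn.Sequential(
    nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU()),  # "encoder" block
    nn.Sequential(nn.Conv2d(8, 2, 3, padding=1)),             # "head" block
)
# model.load_state_dict(torch.load("public_dataset_weights.pt"))  # hypothetical checkpoint

# Freeze the early feature extractor; fine-tune only the head on ~10 local cases
for p in model[0].parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-4,  # small learning rate, typical for fine-tuning
)
criterion = nn.CrossEntropyLoss()

local_imgs = torch.randn(10, 1, 64, 64)          # stand-in local-institution CT slices
local_masks = torch.randint(0, 2, (10, 64, 64))  # stand-in OAR labels
loss = criterion(model(local_imgs), local_masks)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```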
Affiliation(s)
- Xue Feng: Department of Biomedical Engineering, University of Virginia, Charlottesville, VA 22903, United States of America; Carina Medical LLC, 145 Graham Ave, A168, Lexington, KY 40536, United States of America
|
142
|
Liu Y, Lei Y, Wang T, Fu Y, Tang X, Curran WJ, Liu T, Patel P, Yang X. CBCT-based synthetic CT generation using deep-attention cycleGAN for pancreatic adaptive radiotherapy. Med Phys 2020; 47:2472-2483. [PMID: 32141618 DOI: 10.1002/mp.14121] [Citation(s) in RCA: 103] [Impact Index Per Article: 25.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2019] [Revised: 02/27/2020] [Accepted: 02/27/2020] [Indexed: 12/18/2022] Open
Abstract
PURPOSE Current clinical application of cone-beam CT (CBCT) is limited to patient setup; imaging artifacts and Hounsfield unit (HU) inaccuracy make CBCT-based adaptive planning presently impractical. In this study, we developed a deep learning-based approach to improve CBCT image quality and HU accuracy for potential extended clinical use in CBCT-guided pancreatic adaptive radiotherapy. METHODS Thirty patients previously treated with pancreatic SBRT were included. The CBCT acquired prior to the first fraction of treatment was registered to the planning CT for training and generation of synthetic CT (sCT). A self-attention cycle generative adversarial network (cycleGAN) was used to generate CBCT-based sCT. For the cohort of 30 patients, the CT-based contours and treatment plans were transferred to the first-fraction CBCTs and sCTs for dosimetric comparison. RESULTS In the abdomen, the mean absolute error (MAE) between CT and sCT was 56.89 ± 13.84 HU, compared with 81.06 ± 15.86 HU between CT and the raw CBCT. No significant differences (P > 0.05) were observed in the PTV and OAR dose-volume histogram (DVH) metrics between the CT- and sCT-based plans, while significant differences (P < 0.05) were found between the CT- and CBCT-based plans. CONCLUSIONS The image similarity and dosimetric agreement between the CT- and sCT-based plans validated the dose calculation accuracy of the sCT. The CBCT-based sCT approach can potentially increase treatment precision and thus minimize gastrointestinal toxicity.
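The constraint that lets a cycleGAN learn this translation without perfectly paired CBCT/CT data is cycle consistency: an image translated to the other domain and back should return to itself. A minimal PyTorch sketch of that term alone, with 1 × 1 convolutions standing in for the real generator networks (the full objective would also carry adversarial and, here, self-attention components):

```python
import torch
import torch.nn as nn

G_cbct2ct = nn.Conv2d(1, 1, kernel_size=1)  # CBCT -> synthetic CT (toy generator)
G_ct2cbct = nn.Conv2d(1, 1, kernel_size=1)  # CT   -> synthetic CBCT (toy generator)
l1 = nn.L1Loss()

cbct = torch.randn(4, 1, 128, 128)  # stand-in CBCT patches
ct = torch.randn(4, 1, 128, 128)    # stand-in planning-CT patches

# Each image should survive a round trip through both generators
loss_cycle = l1(G_ct2cbct(G_cbct2ct(cbct)), cbct) + l1(G_cbct2ct(G_ct2cbct(ct)), ct)
# Illustrative total: loss_total = loss_adversarial + lambda_cyc * loss_cycle
loss_cycle.backward()
```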
Affiliation(s)
- Yingzi Liu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Yang Lei: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tonghe Wang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Yabo Fu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiangyang Tang: Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Walter J Curran: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tian Liu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Pretesh Patel: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
|
143
|
Nemoto T, Futakami N, Yagi M, Kumabe A, Takeda A, Kunieda E, Shigematsu N. Efficacy evaluation of 2D, 3D U-Net semantic segmentation and atlas-based segmentation of normal lungs excluding the trachea and main bronchi. JOURNAL OF RADIATION RESEARCH 2020; 61:257-264. [PMID: 32043528 PMCID: PMC7246058 DOI: 10.1093/jrr/rrz086] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/12/2019] [Revised: 09/23/2019] [Accepted: 12/28/2019] [Indexed: 05/29/2023]
Abstract
This study aimed to examine the efficacy of semantic segmentation implemented by deep learning, and to confirm whether this method is more effective than a commercially dominant auto-segmentation tool, for delineating normal lungs excluding the trachea and main bronchi. A total of 232 non-small-cell lung cancer cases were examined. The computed tomography (CT) images of these cases were converted from Digital Imaging and Communications in Medicine (DICOM) Radiation Therapy (RT) format to arrays of 32 × 128 × 128 voxels and input into both 2D and 3D U-Net, which are deep learning networks for semantic segmentation. The training, validation and test sets comprised 160, 40 and 32 cases, respectively. Dice similarity coefficients (DSCs) on the test set were evaluated for Smart Segmentation® Knowledge Based Contouring (an atlas-based segmentation tool) as well as for the 2D and 3D U-Nets. The mean DSCs on the test set were 0.964 [95% confidence interval (CI), 0.960-0.968], 0.990 (95% CI, 0.989-0.992) and 0.990 (95% CI, 0.989-0.991) with Smart Segmentation, 2D and 3D U-Net, respectively. Compared with Smart Segmentation, both U-Nets achieved significantly higher DSCs by the Wilcoxon signed-rank test (P < 0.01). There was no difference in mean DSC between the 2D and 3D U-Net systems. The newly devised 2D and 3D U-Net approaches were found to be more effective than a commercial auto-segmentation tool. Even the relatively shallow 2D U-Net, which does not require high-performance computational resources, was effective enough for lung segmentation. Semantic segmentation using deep learning was useful in radiation treatment planning for lung cancers.
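The significance test used here is a paired, non-parametric comparison of per-case DSCs. A short SciPy sketch of the same style of analysis on synthetic scores (the numbers below are made up, not the study's data):

```python
import numpy as np
from scipy.stats import wilcoxon

# Synthetic paired DSCs for the same 32 test cases under two methods
rng = np.random.default_rng(0)
dsc_atlas = np.clip(rng.normal(0.964, 0.010, 32), 0, 1)  # atlas-based tool
dsc_unet = np.clip(rng.normal(0.990, 0.005, 32), 0, 1)   # U-Net on the same cases

# Paired, non-parametric test on the per-case differences
stat, p = wilcoxon(dsc_unet, dsc_atlas)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.2g}")
```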
Affiliation(s)
- Takafumi Nemoto: Division of Radiation Oncology, Saiseikai Yokohamashi Tobu-Hospital, Shimosueyoshi 3-6-1, Tsurumi-ku, Yokohama-shi, Kanagawa 230-8765, Japan; Department of Radiology, Keio University School of Medicine, Shinanomachi 35, Shinjyuku-ku, Tokyo 160-8582, Japan
- Natsumi Futakami: Department of Radiation Oncology, Tokai University School of Medicine, Shimokasuya 143, Isehara-shi, Kanagawa 259-1143, Japan
- Masamichi Yagi: HPC&AI Business Dept., Platform Technical Engineer Div., System Platform Solution Unit, Fujitsu Limited, World Trade Center Building, 4-1, Hamamatsucho 2-chome, Minato-ku, Tokyo 105-6125, Japan
- Atsuhiro Kumabe: Department of Radiology, Keio University School of Medicine, Shinanomachi 35, Shinjyuku-ku, Tokyo 160-8582, Japan
- Atsuya Takeda: Radiation Oncology Center, Ofuna Chuo Hospital, Kamakura 247-0056, Japan
- Etsuo Kunieda: Department of Radiation Oncology, Tokai University School of Medicine, Shimokasuya 143, Isehara-shi, Kanagawa 259-1143, Japan
- Naoyuki Shigematsu: Department of Radiology, Keio University School of Medicine, Shinanomachi 35, Shinjyuku-ku, Tokyo 160-8582, Japan
|
144
|
Jun Guo B, He X, Lei Y, Harms J, Wang T, Curran WJ, Liu T, Jiang Zhang L, Yang X. Automated left ventricular myocardium segmentation using 3D deeply supervised attention U‐net for coronary computed tomography angiography; CT myocardium segmentation. Med Phys 2020; 47:1775-1785. [DOI: 10.1002/mp.14066] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2019] [Revised: 01/22/2020] [Accepted: 01/28/2020] [Indexed: 01/30/2023] Open
Affiliation(s)
- Bang Jun Guo: Department of Medical Imaging, Jinling Hospital, The First School of Clinical Medicine, Southern Medical University, Nanjing 210002, China; Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiuxiu He: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Yang Lei: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Joseph Harms: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tonghe Wang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Walter J. Curran: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tian Liu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Long Jiang Zhang: Department of Medical Imaging, Jinling Hospital, The First School of Clinical Medicine, Southern Medical University, Nanjing 210002, China; Department of Medical Imaging, Jinling Hospital, Medical School of Nanjing University, Nanjing 210002, China
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
|
145
|
Dong P, Xing L. Deep DoseNet: a deep neural network for accurate dosimetric transformation between different spatial resolutions and/or different dose calculation algorithms for precision radiation therapy. Phys Med Biol 2020; 65:035010. [PMID: 31869825 DOI: 10.1088/1361-6560/ab652d] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
The purpose of this work is to introduce a novel deep learning strategy to obtain a highly accurate dose distribution by transforming one calculated using a low-cost algorithm (or algorithmic settings). 25,168 dose distribution slices were calculated using the Eclipse treatment planning system V15.6 (Varian Medical Systems, Palo Alto, CA) on ten patient CTs, with treatment sites spanning lung, brain, abdomen and pelvis, on a 1.25 × 1.25 × 1.25 mm grid, using both the anisotropic analytical algorithm (AAA) at 5 mm resolution and the Acuros XB algorithm (AXB) at 1.25 mm resolution. The AAA dose slices and the corresponding downsampled CT slices are combined to form a tensor of size 2 × 64 × 64, serving as the input to the deep learning-based dose calculation network (deep DoseNet), which outputs the calculated Acuros dose at a size of 256 × 256. The deep DoseNet (DDN) consists of a feature extraction component and an upscaling part. The DDN converges after ~100 epochs with a learning rate of [Formula: see text], using ADAM. We compared the upsampled AAA dose and the DDN output with that of AXB. For the evaluation set, the average mean-square error decreased from 4.7 × [Formula: see text] between AAA and AXB to 7.0 × 10-5 between DDN and AXB, an average improvement of ~12 times. The average gamma index passing rate at 3 mm/3% improved from 76% between AAA and AXB to 91% between DDN and AXB. The average calculation time is less than 1 ms for a single slice on an NVIDIA DGX workstation. The DDN, trained with a large amount of dosimetric data, can be employed as a general-purpose dose calculation acceleration engine across various dose calculation algorithms.
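The stated shapes (a 2 × 64 × 64 dose-plus-CT input, a 256 × 256 dose output) imply a 4× super-resolution-style mapping with a feature extractor followed by upscaling. A toy PyTorch sketch consistent with those shapes only; the layer counts and channel widths are assumptions, not the published architecture:

```python
import torch
import torch.nn as nn

class ToyDoseNet(nn.Module):
    """Coarse (dose, CT) pair in, fine dose slice out: 2x64x64 -> 1x256x256."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(              # feature extraction component
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.upscale = nn.Sequential(               # 4x upscaling part: 64 -> 256
            nn.Conv2d(32, 32 * 16, 3, padding=1),
            nn.PixelShuffle(4),                     # rearranges channels into 4x spatial
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.upscale(self.features(x))

x = torch.randn(1, 2, 64, 64)  # channel 0: coarse AAA dose, channel 1: downsampled CT
print(ToyDoseNet()(x).shape)   # torch.Size([1, 1, 256, 256])
```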
Affiliation(s)
- Peng Dong: Department of Radiation Oncology, Stanford University, Stanford, CA 94305-5847, United States of America
|
146
|
Lei Y, Wang T, Tian S, Dong X, Jani AB, Schuster D, Curran WJ, Patel P, Liu T, Yang X. Male pelvic multi-organ segmentation aided by CBCT-based synthetic MRI. Phys Med Biol 2020; 65:035013. [PMID: 31851956 DOI: 10.1088/1361-6560/ab63bb] [Citation(s) in RCA: 45] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
This study aimed to develop an automated cone-beam computed tomography (CBCT) multi-organ segmentation method for a potential CBCT-guided adaptive radiation therapy workflow. The proposed method combines a deep learning-based image synthesis method, which generates magnetic resonance images (MRIs) with superior soft-tissue contrast from on-board setup CBCT images to aid CBCT segmentation, with a deep attention strategy, which focuses on learning discriminative features for differentiating organ margins. The whole segmentation method consists of three major steps. First, a cycle-consistent adversarial network (CycleGAN) was used to estimate a synthetic MRI (sMRI) from CBCT images. Second, a deep attention network was trained on the sMRI and its corresponding manual contours. Third, the segmented contours for a query patient were obtained by feeding the patient's CBCT images into the trained sMRI estimation and segmentation models. In our retrospective study, we included 100 prostate cancer patients, each of whom had CBCT acquired and the prostate, bladder and rectum contoured by physicians with MRI guidance as ground truth. We trained and tested our model with separate datasets among these patients. The resulting segmentations were compared with the physicians' manual contours. The Dice similarity coefficient and mean surface distance between our segmented contours and the physicians' manual contours (bladder, prostate, and rectum) were 0.95 ± 0.02, 0.44 ± 0.22 mm; 0.86 ± 0.06, 0.73 ± 0.37 mm; and 0.91 ± 0.04, 0.72 ± 0.65 mm, respectively. We have proposed a novel CBCT-only pelvic multi-organ segmentation strategy using CBCT-based sMRI and validated its accuracy against manual contours. This technique could provide accurate organ volumes for treatment planning without requiring MR image acquisition, greatly facilitating the routine clinical workflow.
Affiliation(s)
- Yang Lei (co-first author): Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
|
147
|
El Naqa I, Haider MA, Giger ML, Ten Haken RK. Artificial Intelligence: reshaping the practice of radiological sciences in the 21st century. Br J Radiol 2020; 93:20190855. [PMID: 31965813 PMCID: PMC7055429 DOI: 10.1259/bjr.20190855] [Citation(s) in RCA: 46] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2019] [Revised: 01/12/2020] [Accepted: 01/13/2020] [Indexed: 12/15/2022] Open
Abstract
Advances in computing hardware and software platforms have led to the recent resurgence of artificial intelligence (AI), which now touches almost every aspect of our daily lives through its capability to automate complex tasks and provide superior predictive analytics. AI applications currently span many diverse fields, from economics to entertainment to manufacturing, as well as medicine. Since modern AI's inception decades ago, practitioners in the radiological sciences have been pioneering its development and implementation in medicine, particularly in areas related to diagnostic imaging and therapy. In this anniversary article, we embark on a journey to reflect on the lessons learned from AI's chequered history. We further summarize the current status of AI in the radiological sciences, highlighting, with examples, its impressive achievements and effect on reshaping the practice of medical imaging and radiotherapy in the areas of computer-aided detection, diagnosis, prognosis, and decision support. Moving beyond the commercial hype of AI into reality, we discuss the current challenges to overcome for AI to achieve its promised hope of providing better precision healthcare for each patient while reducing the cost burden on families and society at large.
Affiliation(s)
- Issam El Naqa: Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, USA
- Masoom A Haider: Department of Medical Imaging and Lunenfeld-Tanenbaum Research Institute, University of Toronto, Toronto, ON, Canada
- Randall K Ten Haken: Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, USA
|
148
|
Liu Z, Liu X, Xiao B, Wang S, Miao Z, Sun Y, Zhang F. Segmentation of organs-at-risk in cervical cancer CT images with a convolutional neural network. Phys Med 2020; 69:184-191. [PMID: 31918371 DOI: 10.1016/j.ejmp.2019.12.008] [Citation(s) in RCA: 45] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/16/2019] [Revised: 11/12/2019] [Accepted: 12/08/2019] [Indexed: 02/06/2023] Open
Abstract
PURPOSE We introduced and evaluated an end-to-end organs-at-risk (OAR) segmentation model that can provide accurate and consistent OAR segmentation results in much less time. METHODS We collected CT scans of 105 patients diagnosed with locally advanced cervical cancer and treated with radiotherapy at one hospital. Seven organs, including the bladder, bone marrow, left femoral head, right femoral head, rectum, small intestine and spinal cord, were defined as OARs. The annotated OAR contours, previously delineated manually by each patient's radiation oncologist before radiotherapy and confirmed by a professional committee consisting of eight experienced oncologists, were used as the ground-truth masks. A multi-class segmentation model based on U-Net was designed to fulfil the OAR segmentation task. The Dice similarity coefficient (DSC) and 95th-percentile Hausdorff distance (HD) were used as quantitative evaluation metrics. RESULTS The mean DSC values of the proposed method are 0.924, 0.854, 0.906, 0.900, 0.791, 0.833 and 0.827 for the bladder, bone marrow, left femoral head, right femoral head, rectum, small intestine, and spinal cord, respectively. The mean HD values are 5.098, 1.993, 1.390, 1.435, 5.949, 5.281 and 3.269 for the same OARs, respectively. CONCLUSIONS Our proposed method can help reduce inter-observer and intra-observer variability in manual OAR delineation and lessen oncologists' efforts. The experimental results demonstrate that our model outperforms the benchmark U-Net model, and the oncologists' evaluations show that the segmentation results are highly acceptable for use in radiation therapy planning.
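The 95th-percentile Hausdorff distance used here softens the classic Hausdorff metric by taking the 95th percentile of surface-to-surface distances instead of the maximum, making it robust to a few stray voxels. A small NumPy/SciPy sketch for 2D binary masks (an illustration of the metric, not the authors' code):

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import cdist

def hd95(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """95th-percentile Hausdorff distance between two 2D binary masks (in pixels)."""
    # Surface pixels: the mask minus its erosion
    surf_a = np.argwhere(mask_a & ~binary_erosion(mask_a)).astype(float)
    surf_b = np.argwhere(mask_b & ~binary_erosion(mask_b)).astype(float)
    d = cdist(surf_a, surf_b)                     # all pairwise surface distances
    return max(np.percentile(d.min(axis=1), 95),  # directed A -> B
               np.percentile(d.min(axis=0), 95))  # directed B -> A

# Toy example: two slightly offset square "organs"
a = np.zeros((64, 64), dtype=bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), dtype=bool); b[12:42, 12:42] = True
print(f"HD95 = {hd95(a, b):.2f} px")
```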
Affiliation(s)
- Zhikai Liu: Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- Xia Liu: Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- Bin Xiao: MedMind Technology Co., Ltd., Beijing 100080, China
- Shaobin Wang: MedMind Technology Co., Ltd., Beijing 100080, China
- Zheng Miao: Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- Yuliang Sun: Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- Fuquan Zhang: Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
|
149
|
Tang X. The role of artificial intelligence in medical imaging research. BJR Open 2019; 2:20190031. [PMID: 33178962 PMCID: PMC7594889 DOI: 10.1259/bjro.20190031] [Citation(s) in RCA: 33] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2019] [Revised: 10/01/2019] [Accepted: 11/13/2019] [Indexed: 12/22/2022] Open
Abstract
Without doubt, artificial intelligence (AI) is the most discussed topic today in medical imaging research, both diagnostic and therapeutic. For diagnostic imaging alone, the number of publications on AI has increased from about 100-150 per year in 2007-2008 to 1000-1100 per year in 2017-2018. Researchers have applied AI to automatically recognize complex patterns in imaging data and to provide quantitative assessments of radiographic characteristics. In radiation oncology, AI has been applied to the different image modalities used at different stages of treatment, e.g. tumor delineation and treatment assessment. Radiomics, the extraction of a large number of image features from radiation images with a high-throughput approach, is one of the most popular research topics today in medical imaging research. AI provides the essential computational power for processing massive numbers of medical images and can therefore uncover disease characteristics that escape the naked eye. The objectives of this paper are to review the history of AI in medical imaging research, its current role, the challenges that need to be resolved before AI can be adopted widely in the clinic, and its potential future.
|
150
|
Dong X, Lei Y, Tian S, Wang T, Patel P, Curran WJ, Jani AB, Liu T, Yang X. Synthetic MRI-aided multi-organ segmentation on male pelvic CT using cycle consistent deep attention network. Radiother Oncol 2019; 141:192-199. [PMID: 31630868 DOI: 10.1016/j.radonc.2019.09.028] [Citation(s) in RCA: 72] [Impact Index Per Article: 14.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2019] [Revised: 09/24/2019] [Accepted: 09/29/2019] [Indexed: 11/17/2022]
Abstract
BACKGROUND AND PURPOSE Manual contouring is labor-intensive and subject to variations in operator knowledge, experience and technique. This work aims to develop an automated computed tomography (CT) multi-organ segmentation method for prostate cancer treatment planning. METHODS AND MATERIALS The proposed method exploits the superior soft-tissue information provided by synthetic MRI (sMRI) to aid multi-organ segmentation on pelvic CT images. A cycle generative adversarial network (CycleGAN) was used to estimate sMRIs from CT images. A deep attention U-Net (DAUnet) was trained on sMRI and corresponding multi-organ contours for auto-segmentation. The deep attention strategy was introduced to identify the most relevant features for differentiating the organs, and deep supervision was incorporated into the DAUnet to enhance the features' discriminative ability. Segmented contours of a patient were obtained by feeding the CT image into the trained CycleGAN to generate sMRI, which was then fed into the trained DAUnet to generate the organ contours. We trained and evaluated our model with 140 datasets from prostate patients. RESULTS The Dice similarity coefficient and mean surface distance between our segmented contours and the manual contours were 0.95 ± 0.03 and 0.52 ± 0.22 mm for the bladder; 0.87 ± 0.04 and 0.93 ± 0.51 mm for the prostate; and 0.89 ± 0.04 and 0.92 ± 1.03 mm for the rectum. CONCLUSION We proposed an sMRI-aided multi-organ automatic segmentation method for pelvic CT images. By integrating deep attention and deep supervision strategies, the proposed network provides accurate and consistent prostate, bladder and rectum segmentation, and has the potential to facilitate routine prostate-cancer radiotherapy treatment planning.
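Deep supervision, as used here, attaches auxiliary segmentation losses to intermediate decoder scales so that earlier features are also pushed to be discriminative. A minimal PyTorch sketch of such a multi-scale loss (the number of scales and the weights are illustrative assumptions, not the published configuration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_classes = 4  # e.g. background, bladder, prostate, rectum
target = torch.randint(0, n_classes, (2, 64, 64))  # stand-in label map

# Stand-ins for decoder outputs at full, 1/2 and 1/4 resolution
outputs = [torch.randn(2, n_classes, s, s, requires_grad=True) for s in (64, 32, 16)]
weights = [1.0, 0.5, 0.25]  # deeper (coarser) auxiliary heads weigh less

loss = torch.tensor(0.0)
for out, w in zip(outputs, weights):
    # Downsample the label map to match each auxiliary head's resolution
    t = F.interpolate(target[:, None].float(), size=out.shape[2:], mode="nearest")
    loss = loss + w * F.cross_entropy(out, t.squeeze(1).long())
loss.backward()
```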
Affiliation(s)
- Xue Dong: Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Yang Lei: Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Sibo Tian: Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Tonghe Wang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Pretesh Patel: Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Walter J Curran: Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Ashesh B Jani: Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Tian Liu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
|