1. Wang Y, Peng Y, Wang T, Li H, Zhao Z, Gong L, Peng B. The evolution and current situation in the application of dual-energy computed tomography: a bibliometric study. Quant Imaging Med Surg 2023;13:6801-6813. [PMID: 37869341; PMCID: PMC10585566; DOI: 10.21037/qims-23-467]
Abstract
Background: Dual-energy computed tomography (DECT) has received extensive attention in clinical practice; however, a quantitative assessment of the published literature in this domain has been lacking. This study therefore aimed to characterize the application conditions, developmental trends, and research hot spots of DECT using bibliometric analysis.
Methods: All literature on DECT was retrieved from the Web of Science Core Collection (WoSCC) on January 22, 2023. The co-occurrence, cooperation networks, and co-citations of countries, institutions, references, authors, journals, and keywords were analyzed using CiteSpace, VOSviewer, and R-bibliometrix.
Results: In total, 4,720 original articles and reviews were included. The number of publications related to DECT has increased rapidly since 2006. The USA (n=1,662) and the Mayo Clinic (n=178) were the most productive country and institution, respectively. The most cited article was published by Johnson TRC et al., while the article published by McCollough CH et al. in 2015 had the most co-citations. Schoepf UJ ranked first in number of articles among 16,838 authors. The journal with the most published articles was European Radiology, with 411 publications. The timeline analysis indicated that material decomposition was the most recent topic, followed by gout, radiomics, proton therapy, and bone marrow edema.
Conclusions: An increasing number of researchers are committed to researching DECT, with the USA making the most significant contributions in this area. Prior studies primarily concentrated on cardiovascular diseases, whereas contemporary hot spots have expanded into other fields, such as iodine quantification, deep learning, and bone marrow edema.
Affiliation(s)
- Ya Wang
- Department of Radiology, The Second Affiliated Hospital of Nanchang University, Nanchang, China
- Yun Peng
- Department of Radiology, The Second Affiliated Hospital of Nanchang University, Nanchang, China
- Tongtong Wang
- Department of Radiology, The Second Affiliated Hospital of Nanchang University, Nanchang, China
- Hui Li
- Department of Radiology, The Second Affiliated Hospital of Nanchang University, Nanchang, China
- Zhen Zhao
- Department of Radiology, The Second Affiliated Hospital of Nanchang University, Nanchang, China
- Lianggeng Gong
- Department of Radiology, The Second Affiliated Hospital of Nanchang University, Nanchang, China
- Bibo Peng
- Department of Radiology, The Second Affiliated Hospital of Nanchang University, Nanchang, China
2. Xie K, Gao L, Zhang H, Zhang S, Xi Q, Zhang F, Sun J, Lin T, Sui J, Ni X. Inpainting truncated areas of CT images based on generative adversarial networks with gated convolution for radiotherapy. Med Biol Eng Comput 2023. [PMID: 36897469; DOI: 10.1007/s11517-023-02809-y]
Abstract
This study aimed to inpaint the truncated areas of CT images using generative adversarial networks with gated convolution (GatedConv) and to apply the inpainted images to dose calculations in radiotherapy. CT images were collected from 100 patients with esophageal cancer under thermoplastic membrane placement, and 85 cases were used for training based on randomly generated circle masks. In the prediction stage, 15 cases were used to evaluate the anatomical and dosimetric accuracy of the inpainted CT, based on a mask with a truncated volume covering 40% of the arm volume, and the results were compared with inpainted CT synthesized by U-Net, pix2pix, and PConv with partial convolution. The results showed that GatedConv could directly and effectively inpaint incomplete CT images in the image domain. For U-Net, pix2pix, PConv, and GatedConv, the mean absolute errors for the truncated tissue were 195.54, 196.20, 190.40, and 158.45 HU, respectively. The mean doses of the planning target volume, heart, and lung in the truncated CT were statistically significantly different (p < 0.05) from those of the ground truth CT ([Formula: see text]). The differences in dose distribution between the inpainted CT obtained by the four models and [Formula: see text] were minimal. The inpainting of clinical truncated CT images based on GatedConv was more stable than that of the other models. GatedConv can effectively inpaint the truncated areas with high image quality, and it is closer to [Formula: see text] in terms of image visualization and dosimetry than the other inpainting models.
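The HU-domain comparison reported above, a mean absolute error restricted to the truncated tissue, can be sketched as follows; the function and array names are illustrative, not taken from the paper.

```python
import numpy as np

def masked_mae_hu(inpainted: np.ndarray, ground_truth: np.ndarray,
                  truncation_mask: np.ndarray) -> float:
    """Mean absolute error in HU, evaluated only inside the truncated region."""
    diff = np.abs(inpainted.astype(float) - ground_truth.astype(float))
    return float(diff[truncation_mask].mean())
```

Evaluating all candidate models with the same mask then ranks them directly, as in the 195.54/196.20/190.40/158.45 HU comparison above (lower is better).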
Affiliation(s)
- Kai Xie
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, 213000, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213000, China
- Liugang Gao
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, 213000, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213000, China
- Heng Zhang
- Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics, Changzhou, 213000, China
- Sai Zhang
- Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics, Changzhou, 213000, China
- Qianyi Xi
- Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics, Changzhou, 213000, China
- Fan Zhang
- Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics, Changzhou, 213000, China
- Jiawei Sun
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, 213000, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213000, China
- Tao Lin
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, 213000, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213000, China
- Jianfeng Sui
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, 213000, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213000, China
- Xinye Ni
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, 213000, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213000, China
- Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics, Changzhou, 213000, China
3. Douglass M, Gorayski P, Patel S, Santos A. Synthetic cranial MRI from 3D optical surface scans using deep learning for radiation therapy treatment planning. Phys Eng Sci Med 2023;46:367-375. [PMID: 36752996; PMCID: PMC10030422; DOI: 10.1007/s13246-023-01229-4]
Abstract
BACKGROUND: Optical scanning technologies are increasingly being utilised to supplement treatment workflows in radiation oncology, such as surface-guided radiotherapy or 3D-printed custom bolus. One limitation of optical scanning devices is the absence of internal anatomical information for the patient being scanned. As a result, conventional radiation therapy treatment planning using this imaging modality alone is not feasible. Deep learning is useful for automating various manual tasks in radiation oncology, most notably organ segmentation and treatment planning. Deep learning models have also been used to transform MRI datasets into synthetic CT datasets, facilitating the development of MRI-only radiation therapy planning.
AIMS: To train a pix2pix generative adversarial network to transform 3D optical scan data into estimated MRI datasets for a given patient, providing additional anatomical data for selected radiation therapy treatment sites. The proposed network may provide useful anatomical information for treatment planning of, for example, surface mould brachytherapy, total body irradiation, and total skin electron therapy, without delivering any imaging dose.
METHODS: A 2D pix2pix GAN was trained on 15,000 axial MRI slices of healthy adult brains paired with corresponding external mask slices. The model was validated on a further 5,000 previously unseen external mask slices. The predictions were compared with the "ground-truth" MRI slices using the multi-scale structural similarity index (MSSI) metric. A certified neuro-radiologist was subsequently consulted to provide an independent review of the model's performance in terms of anatomical accuracy and consistency. The network was then applied to a 3D photogrammetry scan of a test subject to demonstrate the feasibility of this novel technique.
RESULTS: The trained pix2pix network predicted MRI slices with a mean MSSI of 0.831 ± 0.057 over the 5,000 validation images, indicating that a significant proportion of a patient's gross cranial anatomy can be estimated from the patient's exterior contour. When independently reviewed by a certified neuro-radiologist, the model's performance was described as "quite amazing, but there are limitations in the regions where there is wide variation within the normal population." When the trained network was applied to a 3D model of a human subject acquired using optical photogrammetry, it estimated the corresponding MRI volume for that subject with good qualitative accuracy; however, a ground-truth MRI baseline was not available for quantitative comparison.
CONCLUSIONS: A deep learning model was developed to transform 3D optical scan data of a patient into an estimated MRI volume, potentially increasing the usefulness of optical scanning in radiation therapy planning. This work demonstrates that much of the human cranial anatomy can be predicted from the external shape of the head and may provide an additional source of valuable imaging data. Further research is required to investigate the feasibility of this approach in a clinical setting and to further improve the model's accuracy.
Affiliation(s)
- Michael Douglass
- Department of Radiation Oncology, Royal Adelaide Hospital, Adelaide, SA, 5000, Australia
- Australian Bragg Centre for Proton Therapy and Research, SAHMRI, Adelaide, SA, 5000, Australia
- School of Physical Sciences, University of Adelaide, Adelaide, SA, 5005, Australia
- Peter Gorayski
- Department of Radiation Oncology, Royal Adelaide Hospital, Adelaide, SA, 5000, Australia
- Australian Bragg Centre for Proton Therapy and Research, SAHMRI, Adelaide, SA, 5000, Australia
- University of South Australia, Allied Health & Human Performance, Adelaide, SA, 5000, Australia
- Sandy Patel
- Department of Radiology, Royal Adelaide Hospital, Adelaide, SA, 5000, Australia
- Alexandre Santos
- Department of Radiation Oncology, Royal Adelaide Hospital, Adelaide, SA, 5000, Australia
- Australian Bragg Centre for Proton Therapy and Research, SAHMRI, Adelaide, SA, 5000, Australia
- School of Physical Sciences, University of Adelaide, Adelaide, SA, 5005, Australia
4. Yang M, Wohlfahrt P, Shen C, Bouchard H. Dual- and multi-energy CT for particle stopping-power estimation: current state, challenges and potential. Phys Med Biol 2023;68. [PMID: 36595276; DOI: 10.1088/1361-6560/acabfa]
Abstract
Range uncertainty has been a key factor preventing particle radiotherapy from reaching its full physical potential. One of the main contributing sources is the uncertainty in estimating particle stopping power (ρs) within patients. Currently, the ρs distribution in a patient is derived from a single-energy CT (SECT) scan acquired for treatment planning by converting the CT number of each voxel, expressed in Hounsfield units (HU), to ρs using a Hounsfield look-up table (HLUT), also known as the CT calibration curve. HU and ρs share a linear relationship with electron density but differ in their additional dependence on elemental composition through different physical properties, namely the effective atomic number and the mean excitation energy, respectively. Because of this, the HLUT approach is particularly sensitive to differences in elemental composition between real human tissues and tissue surrogates, as well as to tissue variations within and among individual patients. The use of dual-energy CT (DECT) for ρs prediction has been shown to be effective in reducing the uncertainty in ρs estimation compared to SECT. The acquisition of CT data over different x-ray spectra yields additional information on the material elemental composition. Recently, multi-energy CT (MECT) has been explored to derive material-specific information with higher dimensionality, which has the potential to further improve the accuracy of ρs estimation. Even though various DECT and MECT methods have been proposed and evaluated over the years, these approaches are still only scarcely implemented in routine clinical practice.
In this topical review, we aim to accelerate this translation process by providing: (1) a comprehensive review of the existing DECT/MECT methods for ρs estimation with their respective strengths and weaknesses; (2) a general review of uncertainties associated with DECT/MECT methods; (3) a general review of different aspects related to the clinical implementation of DECT/MECT methods; and (4) other potential advanced DECT/MECT applications beyond ρs estimation.
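The SECT workflow described above converts each voxel's HU value to ρs through a Hounsfield look-up table, which in practice is a piecewise-linear calibration curve. A minimal sketch, with purely illustrative calibration points (clinical HLUTs are fitted to scanner-specific tissue-surrogate and phantom measurements):

```python
import numpy as np

# Illustrative (HU, relative stopping power) calibration points only;
# a clinical HLUT is derived from scanner-specific phantom measurements.
HLUT_HU  = np.array([-1000.0, -200.0, 0.0, 200.0, 1500.0])
HLUT_RSP = np.array([0.001, 0.80, 1.00, 1.10, 1.85])

def hu_to_rsp(hu):
    """Convert CT numbers (HU) to relative stopping power by piecewise-linear
    interpolation; values outside the table are clamped to the end points."""
    return np.interp(hu, HLUT_HU, HLUT_RSP)
```

Applied voxel-wise to a planning CT volume, this yields the ρs map used for range calculation; the DECT/MECT methods reviewed in the article replace this single-curve mapping with spectrum-resolved estimates.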
Affiliation(s)
- Ming Yang
- The University of Texas MD Anderson Cancer Center, Department of Radiation Physics, 1515 Holcombe Blvd, Houston, TX 77030, United States of America
- Patrick Wohlfahrt
- Massachusetts General Hospital and Harvard Medical School, Department of Radiation Oncology, Boston, MA 02115, United States of America
- Chenyang Shen
- University of Texas Southwestern Medical Center, Department of Radiation Oncology, 2280 Inwood Rd, Dallas, TX 75235, United States of America
- Hugo Bouchard
- Département de physique, Université de Montréal, Complexe des sciences, 1375 Avenue Thérèse-Lavoie-Roux, Montréal, Québec H2V 0B3, Canada
- Centre de recherche du Centre hospitalier de l'Université de Montréal, 900 Rue Saint-Denis, Montréal, Québec H2X 0A9, Canada
- Département de radio-oncologie, Centre hospitalier de l'Université de Montréal, 1051 Rue Sanguinet, Montréal, Québec H2X 3E4, Canada
5. Marschner S, Datarb M, Gaasch A, Xu Z, Grbic S, Chabin G, Geiger B, Rosenman J, Corradini S, Niyazi M, Heimann T, Möhler C, Vega F, Belka C, Thieke C. A deep image-to-image network organ segmentation algorithm for radiation treatment planning: principles and evaluation. Radiat Oncol 2022;17:129. [PMID: 35869525; PMCID: PMC9308364; DOI: 10.1186/s13014-022-02102-6]
Abstract
Background: We describe and evaluate a deep network algorithm that automatically contours organs at risk in the thorax and pelvis on computed tomography (CT) images for radiation treatment planning.
Methods: The algorithm identifies the region of interest (ROI) automatically by detecting anatomical landmarks around the specific organs using a deep reinforcement learning technique. The segmentation is restricted to this ROI and performed by a deep image-to-image network (DI2IN) based on a convolutional encoder-decoder architecture combined with multi-level feature concatenation. The algorithm is commercially available in the medical products “syngo.via RT Image Suite VB50” and “AI-Rad Companion Organs RT VA20” (Siemens Healthineers). For evaluation, thoracic CT images of 237 patients and pelvic CT images of 102 patients were manually contoured following the Radiation Therapy Oncology Group (RTOG) guidelines and compared to the DI2IN results using metrics for volume, overlap, and distance, e.g., the Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (HD95). The contours were also compared visually, slice by slice.
Results: We observed high correlations between automatic and manual contours. The best results were obtained for the lungs (DSC 0.97, HD95 2.7 mm/2.9 mm for left/right lung), followed by the heart (DSC 0.92, HD95 4.4 mm), bladder (DSC 0.88, HD95 6.7 mm), and rectum (DSC 0.79, HD95 10.8 mm). Visual inspection showed excellent agreement, with some exceptions for the heart and rectum.
Conclusions: The DI2IN algorithm automatically generated contours for organs at risk close to those of a human expert, making the contouring step in radiation treatment planning simpler and faster. A few cases still required manual corrections, mainly for the heart and rectum.
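The two headline metrics in this evaluation, DSC and HD95, can be computed from binary masks as sketched below; this brute-force HD95 is only practical for small masks and is an illustration, not the implementation used in the study.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray, spacing: float = 1.0) -> float:
    """95th-percentile symmetric Hausdorff distance (brute force; small masks only).
    `spacing` converts voxel indices to millimetres on an isotropic grid."""
    pa = np.argwhere(a) * spacing
    pb = np.argwhere(b) * spacing
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    d_ab = d.min(axis=1)  # each point of a to its nearest point of b
    d_ba = d.min(axis=0)  # and vice versa
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))
```

A DSC near 1 and a small HD95 together indicate both good overlap and the absence of large local contour deviations, which is why the evaluation above reports both.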
6. van Dijk RHW, Staut N, Wolfs CJA, Verhaegen F. A novel multichannel deep learning model for fast denoising of Monte Carlo dose calculations: preclinical applications. Phys Med Biol 2022;67. [DOI: 10.1088/1361-6560/ac8390]
Abstract
Objective: In preclinical radiotherapy with kilovolt (kV) x-ray beams, accurate treatment planning is needed to improve the translation potential to clinical trials. Monte Carlo based radiation transport simulations are the gold standard for calculating the absorbed dose distribution in external beam radiotherapy. However, these simulations are notorious for their long computation time, causing a bottleneck in the workflow. Previous studies have used deep learning models to speed up these simulations for clinical megavolt (MV) beams. For kV beams, dose distributions are more affected by tissue type than for MV beams, leading to steep dose gradients. This study aims to speed up preclinical kV dose simulations by proposing a novel deep learning pipeline.
Approach: A deep learning model is proposed that denoises low-precision (~10^6 simulated particles) dose distributions to produce high-precision (10^9 simulated particles) dose distributions. To effectively denoise the steep dose gradients in preclinical kV dose distributions, the model takes not only the low-precision Monte Carlo dose calculation but also the Monte Carlo uncertainty (MCU) map and the mass density map as additional input channels. The model was trained on a large synthetic dataset and tested on a real dataset with a different data distribution. To keep model inference time to a minimum, a novel method for inference optimization was developed as well.
Main results: The proposed model provides dose distributions that achieve a median gamma pass rate (3%/0.3 mm) of 98%, with a lower bound of 95%, when compared to the high-precision Monte Carlo dose distributions from the test set, which represents a different data distribution than the training set. Using the proposed model together with the novel inference optimization method, the total computation time was reduced from approximately 45 minutes to less than six seconds on average.
Significance: This study presents the first model that can denoise preclinical kV instead of clinical MV Monte Carlo dose distributions. This was achieved by using the MCU and mass density maps as additional model inputs. Additionally, this study shows that training such a model on a synthetic dataset is not only viable but even increases the generalization of the model compared to training on real data, owing to the sheer size and variety of the synthetic dataset. This model will enable faster treatment plan optimization in the preclinical workflow.
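The gamma pass rate used as the evaluation metric above can be sketched with a naive brute-force implementation; the dose-difference and distance-to-agreement criteria (3%/0.3 mm) follow the abstract, while the function names, 2D grid, and low-dose cutoff are illustrative (clinical gamma tools use far more efficient local searches).

```python
import numpy as np

def gamma_pass_rate(ref, evalu, spacing=1.0, dd=0.03, dta=0.3, cutoff=0.1):
    """Naive global gamma analysis on small 2D dose grids (brute force).

    ref, evalu : 2D dose arrays on the same grid
    spacing    : grid spacing in mm
    dd         : dose-difference criterion, fraction of the reference maximum
    dta        : distance-to-agreement criterion in mm
    cutoff     : ignore reference voxels below this fraction of the maximum
    """
    dmax = ref.max()
    coords = np.indices(ref.shape).reshape(2, -1).T * spacing
    eval_flat = evalu.ravel()
    passed, total = 0, 0
    for idx in np.argwhere(ref >= cutoff * dmax):
        total += 1
        p = idx * spacing
        dist2 = ((coords - p) ** 2).sum(axis=1) / dta**2
        dose2 = ((eval_flat - ref[tuple(idx)]) / (dd * dmax)) ** 2
        # A reference point passes if any evaluated point satisfies gamma <= 1.
        if np.min(dist2 + dose2) <= 1.0:
            passed += 1
    return passed / total
```

A pass rate of 98%, as reported above, means 98% of above-cutoff reference voxels find an evaluated dose within the combined 3%/0.3 mm tolerance ellipsoid.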
7. Im JH, Lee IJ, Choi Y, Sung J, Ha JS, Lee H. Impact of Denoising on Deep-Learning-Based Automatic Segmentation Framework for Breast Cancer Radiotherapy Planning. Cancers (Basel) 2022;14:3581. [PMID: 35892839; PMCID: PMC9332287; DOI: 10.3390/cancers14153581]
Abstract
Objective: This study aimed to investigate the segmentation accuracy of organs at risk (OARs) when denoised computed tomography (CT) images are used as input data for a deep-learning-based auto-segmentation framework.
Methods: We used non-contrast-enhanced planning CT scans from 40 patients with breast cancer. The heart, lungs, esophagus, spinal cord, and liver were manually delineated by two experienced radiation oncologists in a double-blind manner. The denoised CT images were used as input data for the AccuContour™ segmentation software to increase the signal difference between structures of interest and unwanted noise in non-contrast CT. The accuracy of the segmentation was assessed using the Dice similarity coefficient (DSC), and the results were compared with those of conventional deep-learning-based auto-segmentation without denoising.
Results: The average DSC outcomes were higher than 0.80 for all OARs except the esophagus. AccuContour-based and denoising-based auto-segmentation demonstrated comparable performance for the lungs and spinal cord but limited performance for the esophagus. For the liver, the improvement from denoising-based auto-segmentation was small but statistically significant in DSC compared with AccuContour-based auto-segmentation (p < 0.05).
Conclusions: Denoising-based auto-segmentation demonstrated satisfactory performance in automatic liver segmentation from non-contrast-enhanced CT scans. Further external validation studies with larger cohorts are needed to verify the usefulness of denoising-based auto-segmentation.
Affiliation(s)
- Jung Ho Im
- CHA Bundang Medical Center, Department of Radiation Oncology, CHA University School of Medicine, Seongnam 13496, Korea
- Ik Jae Lee
- Department of Radiation Oncology, Yonsei University College of Medicine, Seoul 03722, Korea
- Yeonho Choi
- Department of Radiation Oncology, Gangnam Severance Hospital, Seoul 06273, Korea
- Jiwon Sung
- Department of Radiation Oncology, Yonsei University College of Medicine, Seoul 03722, Korea
- Jin Sook Ha
- Department of Radiation Oncology, Gangnam Severance Hospital, Seoul 06273, Korea
- Ho Lee
- Department of Radiation Oncology, Yonsei University College of Medicine, Seoul 03722, Korea
- Correspondence: Tel.: +82-2-2228-8109; Fax: +82-2-2227-7823
8. Fu Y, Zhang H, Morris ED, Glide-Hurst CK, Pai S, Traverso A, Wee L, Hadzic I, Lønne PI, Shen C, Liu T, Yang X. Artificial Intelligence in Radiation Therapy. IEEE Trans Radiat Plasma Med Sci 2022;6:158-181. [PMID: 35992632; PMCID: PMC9385128; DOI: 10.1109/trpms.2021.3107454]
Abstract
Artificial intelligence (AI) has great potential to transform the clinical workflow of radiotherapy. Since the introduction of deep neural networks, many AI-based methods have been proposed to address challenges in different aspects of radiotherapy. Commercial vendors have started to release AI-based tools that can be readily integrated into the established clinical workflow. To show the recent progress in AI-aided radiotherapy, we review AI-based studies in five major aspects of radiotherapy: image reconstruction, image registration, image segmentation, image synthesis, and automatic treatment planning. In each section, we summarize and categorize the recently published methods, followed by a discussion of the challenges, concerns, and future development. Given the rapid development of AI-aided radiotherapy, the efficiency and effectiveness of radiotherapy could be substantially improved in the future through intelligent automation of its various aspects.
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Hao Zhang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Eric D. Morris
- Department of Radiation Oncology, University of California-Los Angeles, Los Angeles, CA 90095, USA
- Carri K. Glide-Hurst
- Department of Human Oncology, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53792, USA
- Suraj Pai
- Maastricht University Medical Centre, Netherlands
- Leonard Wee
- Maastricht University Medical Centre, Netherlands
- Per-Ivar Lønne
- Department of Medical Physics, Oslo University Hospital, PO Box 4953 Nydalen, 0424 Oslo, Norway
- Chenyang Shen
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75002, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
9. Chen S, Zhong X, Dorn S, Ravikumar N, Tao Q, Huang X, Lell M, Kachelriess M, Maier A. Improving Generalization Capability of Multiorgan Segmentation Models Using Dual-Energy CT. IEEE Trans Radiat Plasma Med Sci 2022. [DOI: 10.1109/trpms.2021.3055199]
10. Kruis MF. Improving radiation physics, tumor visualisation, and treatment quantification in radiotherapy with spectral or dual-energy CT. J Appl Clin Med Phys 2021;23:e13468. [PMID: 34743405; PMCID: PMC8803285; DOI: 10.1002/acm2.13468]
Abstract
Over the past decade, spectral or dual-energy CT has gained relevance, especially in oncological radiology. Nonetheless, its use in the radiotherapy (RT) clinic remains limited. This review article aims to give an overview of the current state of spectral CT and to explore opportunities for applications in RT. Three groups of benefits of spectral CT over conventional CT in RT are recognized. Firstly, spectral CT provides more information about the physical properties of the body, which can improve dose calculation. Furthermore, it improves the visibility of tumors, for a wide variety of malignancies as well as organs at risk (OARs), which could reduce treatment uncertainty. Finally, spectral CT provides quantitative physiological information, which can be used to personalize and quantify treatment.
11. A Segmentation Method of Foramen Ovale Based on Multiatlas. Comput Math Methods Med 2021;2021:5221111. [PMID: 34589137; PMCID: PMC8476260; DOI: 10.1155/2021/5221111]
Abstract
Trigeminal neuralgia is a neurological disease. It is often treated by puncturing the trigeminal nerve through the skin and the foramen ovale of the skull to selectively destroy the pain nerve. The puncture procedure is difficult because the morphology of the foramen ovale in the skull base varies and the surrounding anatomical structure is complex. Computer-aided puncture guidance is extremely valuable for the treatment of trigeminal neuralgia: it can help doctors determine the puncture target by accurately locating the foramen ovale in the skull base. Foramen ovale segmentation is a prerequisite for localization but is a tedious and error-prone task if done manually. In this paper, we present an image segmentation solution based on the multiatlas method that automatically segments the foramen ovale. We developed a dataset of 30 CT scans, comprising 20 foramen ovale atlases and 10 CT scans for testing. Our approach can perform foramen ovale segmentation in puncture-operation scenarios based solely on limited data. We propose to utilize this method as an enabler in clinical work.
12. Robert C, Munoz A, Moreau D, Mazurier J, Sidorski G, Gasnier A, Beldjoudi G, Grégoire V, Deutsch E, Meyer P, Simon L. Clinical implementation of deep-learning based auto-contouring tools-Experience of three French radiotherapy centers. Cancer Radiother 2021;25:607-616. [PMID: 34389243; DOI: 10.1016/j.canrad.2021.06.023]
Abstract
Deep-learning (DL)-based auto-contouring solutions have recently been proposed as a convincing alternative to decrease the workload of target volume and organ-at-risk (OAR) delineation in radiotherapy planning and to improve inter-observer consistency. However, there is minimal literature on clinical implementations of such algorithms in routine practice. In this paper we first present an update on the state of the art of DL-based solutions. We then summarize recent recommendations proposed by the European Society for Radiotherapy and Oncology (ESTRO) to be followed before any clinical implementation of artificial-intelligence-based solutions in the clinic. The last section describes the methodology carried out by three French radiation oncology departments to deploy CE-marked commercial solutions. Based on the information collected, a majority of the OARs proposed by the manufacturers are retained by the centers, validating the usefulness of DL-based models in decreasing clinicians' workload. Target volumes, with the exception of lymph node areas in breast, head and neck and pelvic regions, whole breast, breast wall, prostate, and seminal vesicles, are not available in the three commercial solutions at this time. No workflows are currently implemented to continuously improve the models, but in some solutions the models can be adapted or retrained during the commissioning phase to best fit local practices. In the reported experiences, automatic workflows were implemented to limit human interactions and make the workflow more fluid. The recommendations published by the ESTRO group will be important for guiding physicists in the clinical implementation of patient-specific and regular quality assurance.
Affiliation(s)
- C Robert
- Department of Radiotherapy, Gustave-Roussy, Villejuif, France
- A Munoz
- Department of Radiotherapy, Centre Léon-Bérard, Lyon, France
- D Moreau
- Department of Radiotherapy, Hôpital Européen Georges-Pompidou, Paris, France
- J Mazurier
- Department of Radiotherapy, Clinique Pasteur-Oncorad, Toulouse, France
- G Sidorski
- Department of Radiotherapy, Clinique Pasteur-Oncorad, Toulouse, France
- A Gasnier
- Department of Radiotherapy, Gustave-Roussy, Villejuif, France
- G Beldjoudi
- Department of Radiotherapy, Centre Léon-Bérard, Lyon, France
- V Grégoire
- Department of Radiotherapy, Centre Léon-Bérard, Lyon, France
- E Deutsch
- Department of Radiotherapy, Gustave-Roussy, Villejuif, France
- P Meyer
- Service d'Oncologie Radiothérapie, Institut de Cancérologie Strasbourg Europe (Icans), Strasbourg, France
- L Simon
- Institut Claudius Regaud (ICR), Institut Universitaire du Cancer de Toulouse - Oncopole (IUCT-O), Toulouse, France
13
Wang T, Lei Y, Roper J, Ghavidel B, Beitler JJ, McDonald M, Curran WJ, Liu T, Yang X. Head and neck multi-organ segmentation on dual-energy CT using dual pyramid convolutional neural networks. Phys Med Biol 2021; 66:115008. [PMID: 33915524 DOI: 10.1088/1361-6560/abfce2] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Accepted: 04/29/2021] [Indexed: 11/11/2022]
Abstract
Organ delineation is crucial to diagnosis and therapy, yet it is labor-intensive and observer-dependent. Dual-energy CT (DECT) provides additional image contrast compared with conventional single-energy CT (SECT), which may facilitate automatic organ segmentation. This work aims to develop an automatic multi-organ segmentation approach for the head-and-neck region on DECT using deep learning. We proposed a mask scoring regional convolutional neural network (R-CNN) in which comprehensive features are first learned by two independent pyramid networks and then combined via a deep attention strategy to highlight the informative features extracted from both the low- and high-energy CT channels. To perform multi-organ segmentation and avoid misclassification, a mask scoring subnetwork was integrated into the Mask R-CNN framework to model the correlation between the class of a detected organ's region of interest (ROI) and the shape of that organ's segmentation within the ROI. We evaluated our model on DECT images from 127 head-and-neck cancer patients (66 training, 61 testing) with manual contours of 19 organs as the training target and ground truth. For large and mid-sized organs such as the brain and parotid glands, the proposed method achieved an average Dice similarity coefficient (DSC) larger than 0.8. For small organs with very low contrast, such as the chiasm, cochleae, lenses, and optic nerves, the DSCs ranged between approximately 0.5 and 0.8. With the proposed method, DECT images outperformed SECT in almost all 19 organs, with statistically significant differences in DSC (p<0.05). On DECT, the proposed method was also significantly superior to a recently developed FCN-based method in most organs in terms of DSC and the 95th-percentile Hausdorff distance. These quantitative results demonstrate the feasibility of the proposed method, the superiority of DECT over SECT, and the advantage of the proposed R-CNN over the FCN in this head-and-neck patient study. The proposed method has the potential to facilitate treatment planning in the current head-and-neck cancer radiation therapy workflow.
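Several of the works listed here score segmentations with the Dice similarity coefficient (DSC); a minimal sketch of that metric on toy binary masks (hypothetical arrays, not the study's data):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2D example (a real evaluation would use 3D organ masks)
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True   # 4 voxels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True   # 6 voxels, 4 shared
print(round(dice(a, b), 2))  # 2*4/(4+6) = 0.8
```

A DSC of 1.0 means perfect overlap; values above 0.8 are commonly treated as good agreement for large organs.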
Affiliation(s)
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Beth Ghavidel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Jonathan J Beitler
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Mark McDonald
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
14
van der Heyden B, Cohilis M, Souris K, de Freitas Nascimento L, Sterpin E. Artificial intelligence supported single detector multi-energy proton radiography system. Phys Med Biol 2021; 66. [PMID: 33621962 DOI: 10.1088/1361-6560/abe918] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2020] [Accepted: 02/23/2021] [Indexed: 12/12/2022]
Abstract
Proton radiography has been proposed as a promising technique to evaluate internal anatomical changes, to enable pre-treatment patient alignment, and, most importantly, to optimize the patient-specific conversion of CT numbers to stopping-power ratios. The clinical implementation rate of proton radiography systems is still limited due to their complex, bulky design, together with the persistent problem of (in)elastic nuclear interactions and multiple Coulomb scattering (i.e., range mixing). In this work, a compact multi-energy proton radiography system was proposed in combination with an artificial intelligence network architecture (ProtonDSE) to remove the persistent problem of proton scatter in proton radiography. A realistic Monte Carlo model of the Proteus®One accelerator was built at 200 and 220 MeV to isolate the scattered proton signal in 236 proton radiographs of 80 digital anthropomorphic phantoms. ProtonDSE was trained to predict the proton scatter distribution at the two beam energies with a 60%/25%/15% split for training, testing, and validation. A calibration procedure was proposed to derive the water-equivalent thickness (WET) image based on the detector dose response relationship at both beam energies. ProtonDSE network performance was evaluated with quantitative metrics that showed an overall mean absolute percentage error below 1.4% ± 0.4% in our test dataset. For one example patient, detector dose to WET conversions were performed based on the total dose (ITotal), the primary proton dose (IPrimary), and the ProtonDSE-corrected detector dose (ICorrected). The resulting WET accuracy was compared against the reference WET obtained by idealistic raytracing in a manually delineated region of interest inside the brain. The error was 4.3% ± 4.1% for WET(ITotal), 2.2% ± 1.4% for WET(IPrimary), and 2.5% ± 2.0% for WET(ICorrected).
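The mean absolute percentage error used to score ProtonDSE is a standard metric; a minimal sketch on hypothetical arrays (not the authors' data):

```python
import numpy as np

def mape(pred: np.ndarray, truth: np.ndarray) -> float:
    """Mean absolute percentage error, in percent:
    mean(|pred - truth| / |truth|) * 100."""
    return float(np.mean(np.abs(pred - truth) / np.abs(truth)) * 100.0)

# Toy example: predictions within a few percent of the reference values
truth = np.array([100.0, 200.0, 400.0])
pred = np.array([101.0, 196.0, 404.0])
print(round(mape(pred, truth), 2))  # (1% + 2% + 1%) / 3 = 1.33
```

Note that MAPE is undefined where the reference value is zero, so in practice it is evaluated only over non-zero reference voxels.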
Affiliation(s)
- Brent van der Heyden
- KU Leuven, Department of Oncology, Laboratory of Experimental Radiotherapy, Leuven, Belgium
- Marie Cohilis
- UCLouvain, Institut de recherche expérimentale et clinique, Molecular Imaging Radiotherapy and Oncology Lab, Brussels, Belgium
- Kevin Souris
- UCLouvain, Institut de recherche expérimentale et clinique, Molecular Imaging Radiotherapy and Oncology Lab, Brussels, Belgium
- Edmond Sterpin
- KU Leuven, Department of Oncology, Laboratory of Experimental Radiotherapy, Leuven, Belgium; UCLouvain, Institut de recherche expérimentale et clinique, Molecular Imaging Radiotherapy and Oncology Lab, Brussels, Belgium
15
Fu Y, Lei Y, Wang T, Curran WJ, Liu T, Yang X. A review of deep learning based methods for medical image multi-organ segmentation. Phys Med 2021; 85:107-122. [PMID: 33992856 PMCID: PMC8217246 DOI: 10.1016/j.ejmp.2021.05.003] [Citation(s) in RCA: 77] [Impact Index Per Article: 25.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/27/2020] [Revised: 03/12/2021] [Accepted: 05/03/2021] [Indexed: 12/12/2022] Open
Abstract
Deep learning has revolutionized image processing and achieved state-of-the-art performance in many medical image segmentation tasks. Many deep learning-based methods have been published for segmenting different parts of the body for different medical applications. It is therefore necessary to summarize the current state of development of deep learning in the field of medical image segmentation. In this paper, we provide a comprehensive review with a focus on multi-organ image segmentation, which is crucial for radiotherapy, where the tumor and organs at risk need to be contoured for treatment planning. We grouped the surveyed methods into two broad categories, 'pixel-wise classification' and 'end-to-end segmentation', and divided each category into subgroups according to network design. For each type, we list the surveyed works, highlight important contributions, and identify specific challenges. Following the detailed review, we discuss the achievements, shortcomings, and future potential of each category. To enable direct comparison, we list the performance of the surveyed works that used thoracic and head-and-neck benchmark datasets.
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
16
Park J, Lee JS, Oh D, Ryoo HG, Han JH, Lee WW. Quantitative salivary gland SPECT/CT using deep convolutional neural networks. Sci Rep 2021; 11:7842. [PMID: 33837284 PMCID: PMC8035179 DOI: 10.1038/s41598-021-87497-0] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2020] [Accepted: 03/30/2021] [Indexed: 11/08/2022] Open
Abstract
Quantitative single-photon emission computed tomography/computed tomography (SPECT/CT) using Tc-99m pertechnetate aids in evaluating salivary gland function. However, gland segmentation and quantitation of gland uptake are challenging. We developed a salivary gland SPECT/CT protocol with automated segmentation using a deep convolutional neural network (CNN). The protocol comprises SPECT/CT at 20 min, sialagogue stimulation, and SPECT at 40 min post-injection of Tc-99m pertechnetate (555 MBq). The 40-min SPECT was reconstructed using the 20-min CT after misregistration correction. Manual salivary gland segmentation for the percentage of injected dose (%ID) by human experts proved highly reproducible but took 15 min per scan. An automatic salivary segmentation method was developed using a modified 3D U-Net for end-to-end learning from the human experts (n = 333). The automatic segmentation performed comparably with the human experts in voxel-wise comparison (mean Dice similarity coefficients of 0.81 for the parotid and 0.79 for the submandibular glands) and in gland %ID correlation (R2 = 0.93 parotid, R2 = 0.95 submandibular), with an operating time of less than 1 min. The algorithm generated results comparable to the reference data. In conclusion, with the aid of a CNN, we developed a quantitative salivary gland SPECT/CT protocol feasible for clinical application. The method saves analysis time and manual effort while reducing patients' radiation exposure.
Affiliation(s)
- Junyoung Park
- Department of Biomedical Sciences, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Jae Sung Lee
- Department of Biomedical Sciences, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University, Seoul, Korea
- Dongkyu Oh
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Department of Nuclear Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, 13620, Gyeonggi-do, Korea
- Hyun Gee Ryoo
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Department of Nuclear Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, 13620, Gyeonggi-do, Korea
- Jeong Hee Han
- Department of Nuclear Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, 13620, Gyeonggi-do, Korea
- Won Woo Lee
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Department of Nuclear Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, 13620, Gyeonggi-do, Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University, Seoul, Korea
17
Vrtovec T, Močnik D, Strojan P, Pernuš F, Ibragimov B. Auto-segmentation of organs at risk for head and neck radiotherapy planning: From atlas-based to deep learning methods. Med Phys 2020; 47:e929-e950. [PMID: 32510603 DOI: 10.1002/mp.14320] [Citation(s) in RCA: 80] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2019] [Revised: 05/27/2020] [Accepted: 05/29/2020] [Indexed: 02/06/2023] Open
Abstract
Radiotherapy (RT) is one of the basic treatment modalities for cancer of the head and neck (H&N), and it requires a precise spatial description of the target volumes and organs at risk (OARs) to deliver a highly conformal radiation dose to the tumor cells while sparing the healthy tissues. For this purpose, target volumes and OARs have to be delineated and segmented from medical images. As manual delineation is a tedious and time-consuming task subject to intra- and interobserver variability, computerized auto-segmentation has been developed as an alternative. The field of medical imaging and RT planning has seen increased interest in the past decade, with new emerging trends that shifted H&N OAR auto-segmentation from atlas-based to deep learning-based approaches. In this review, we systematically analyzed 78 relevant publications on auto-segmentation of OARs in the H&N region from 2008 to date and provide critical discussion and recommendations from various perspectives: image modality - both computed tomography and magnetic resonance imaging are being exploited, but the potential of the latter should be explored more in the future; OARs - the spinal cord, brainstem, and major salivary glands are the most studied, but additional experiments should be conducted for several less studied soft-tissue structures; image databases - several image databases with corresponding ground truth are currently available for methodology evaluation but should be augmented with data from multiple observers and multiple institutions; methodology - current methods have shifted from atlas-based to deep learning auto-segmentation, which is expected to become even more sophisticated; ground truth - delineation guidelines should be followed, and participation of multiple experts from multiple institutions is recommended; performance metrics - the Dice coefficient, as the standard volumetric overlap metric, should be accompanied by at least one distance metric and combined with clinical acceptability scores and risk assessments; segmentation performance - the best-performing methods achieve clinically acceptable auto-segmentation for several OARs; however, the dosimetric impact should also be studied to provide clinically relevant endpoints for RT planning.
Affiliation(s)
- Tomaž Vrtovec
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
- Domen Močnik
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
- Primož Strojan
- Institute of Oncology Ljubljana, Zaloška cesta 2, Ljubljana, SI-1000, Slovenia
- Franjo Pernuš
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
- Bulat Ibragimov
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia; Department of Computer Science, University of Copenhagen, Universitetsparken 1, Copenhagen, DK-2100, Denmark
18
Morone D, Marazza A, Bergmann TJ, Molinari M. Deep learning approach for quantification of organelles and misfolded polypeptide delivery within degradative compartments. Mol Biol Cell 2020; 31:1512-1524. [PMID: 32401604 PMCID: PMC7359569 DOI: 10.1091/mbc.e20-04-0269] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022] Open
Abstract
Endolysosomal compartments maintain cellular fitness by clearing dysfunctional organelles and proteins from cells. Modulation of their activity offers therapeutic opportunities. Quantification of cargo delivery to and/or accumulation within endolysosomes is instrumental for characterizing lysosome-driven pathways at the molecular level and for monitoring the consequences of genetic or environmental modifications. Here we introduce LysoQuant, a deep learning approach for segmentation and classification of fluorescence images capturing cargo delivery to endolysosomes for clearance. LysoQuant is trained for unbiased and rapid recognition with human-level accuracy, and the pipeline reports a series of quantitative parameters, such as endolysosome number, size, shape, position within cells, and occupancy, that describe the activity of lysosome-driven pathways. In our selected examples, LysoQuant successfully determines the magnitude of mechanistically distinct catabolic pathways that ensure lysosomal clearance of a model organelle, the endoplasmic reticulum, and of a model protein, polymerogenic ATZ. It does so with accuracy and speed compatible with high-throughput analyses.
Affiliation(s)
- Diego Morone
- Università della Svizzera italiana, CH-6900 Lugano, Switzerland; Institute for Research in Biomedicine, CH-6500 Bellinzona, Switzerland
- Alessandro Marazza
- Università della Svizzera italiana, CH-6900 Lugano, Switzerland; Institute for Research in Biomedicine, CH-6500 Bellinzona, Switzerland; Graduate School for Cellular and Biomedical Sciences, University of Bern, CH-3000 Bern, Switzerland
- Timothy J Bergmann
- Università della Svizzera italiana, CH-6900 Lugano, Switzerland; Institute for Research in Biomedicine, CH-6500 Bellinzona, Switzerland
- Maurizio Molinari
- Università della Svizzera italiana, CH-6900 Lugano, Switzerland; Institute for Research in Biomedicine, CH-6500 Bellinzona, Switzerland; École Polytechnique Fédérale de Lausanne, CH-1015 Lausanne, Switzerland
19
Wohlfahrt P, Richter C. Status and innovations in pre-treatment CT imaging for proton therapy. Br J Radiol 2020; 93:20190590. [PMID: 31642709 PMCID: PMC7066941 DOI: 10.1259/bjr.20190590] [Citation(s) in RCA: 38] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2019] [Revised: 10/04/2019] [Accepted: 10/21/2019] [Indexed: 12/19/2022] Open
Abstract
Pre-treatment CT imaging is a topic of growing importance in particle therapy. Improvements in the accuracy of stopping-power prediction are needed to allow a dose conformality that is not inferior to state-of-the-art image-guided photon therapy. Although range uncertainty has remained practically constant over the last decades, recent technological and methodological developments, such as the clinical application of dual-energy CT, have been introduced or are at least on the horizon to improve the accuracy and precision of range prediction. This review gives an overview of the current status, summarizes the innovations in dual-energy CT and their potential impact on the field, and discusses potential alternative technologies for stopping-power prediction.
Affiliation(s)
- Patrick Wohlfahrt
- Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
20
Nadeem MW, Ghamdi MAA, Hussain M, Khan MA, Khan KM, Almotiri SH, Butt SA. Brain Tumor Analysis Empowered with Deep Learning: A Review, Taxonomy, and Future Challenges. Brain Sci 2020; 10:brainsci10020118. [PMID: 32098333 PMCID: PMC7071415 DOI: 10.3390/brainsci10020118] [Citation(s) in RCA: 53] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2019] [Revised: 02/07/2020] [Accepted: 02/13/2020] [Indexed: 12/17/2022] Open
Abstract
Deep learning (DL) algorithms enable computational models consisting of multiple processing layers that represent data with multiple levels of abstraction. In recent years, the use of deep learning has rapidly proliferated in almost every domain, especially in medical image processing, medical image analysis, and bioinformatics. Consequently, deep learning has dramatically changed and improved recognition, prediction, and diagnosis in numerous areas of healthcare, such as pathology, brain tumors, lung cancer, the abdomen, cardiac imaging, and the retina. Considering the wide range of applications of deep learning, the objective of this article is to review the major deep learning concepts pertinent to brain tumor analysis (e.g., segmentation, classification, prediction, and evaluation). This study presents a review that summarizes a large number of scientific contributions to the field of deep learning in brain tumor analysis. A coherent taxonomy of the research landscape in the literature has also been mapped, and the major aspects of this emerging field have been discussed and analyzed. A critical discussion section showing the limitations of deep learning techniques is included at the end to elaborate open research challenges and directions for future work in this emergent area.
Affiliation(s)
- Muhammad Waqas Nadeem
- Department of Computer Science, Lahore Garrison University, Lahore 54000, Pakistan
- Department of Computer Science, School of Systems and Technology, University of Management and Technology, Lahore 54000, Pakistan
- Mohammed A. Al Ghamdi
- Department of Computer Science, Umm Al-Qura University, Makkah 23500, Saudi Arabia
- Muzammil Hussain
- Department of Computer Science, School of Systems and Technology, University of Management and Technology, Lahore 54000, Pakistan
- Muhammad Adnan Khan
- Department of Computer Science, Lahore Garrison University, Lahore 54000, Pakistan
- Khalid Masood Khan
- Department of Computer Science, Lahore Garrison University, Lahore 54000, Pakistan
- Sultan H. Almotiri
- Department of Computer Science, Umm Al-Qura University, Makkah 23500, Saudi Arabia
- Suhail Ashfaq Butt
- Department of Information Sciences, Division of Science and Technology, University of Education Township, Lahore 54700, Pakistan
21
El Naqa I, Haider MA, Giger ML, Ten Haken RK. Artificial Intelligence: reshaping the practice of radiological sciences in the 21st century. Br J Radiol 2020; 93:20190855. [PMID: 31965813 PMCID: PMC7055429 DOI: 10.1259/bjr.20190855] [Citation(s) in RCA: 47] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2019] [Revised: 01/12/2020] [Accepted: 01/13/2020] [Indexed: 12/15/2022] Open
Abstract
Advances in computing hardware and software platforms have led to the recent resurgence of artificial intelligence (AI), which now touches almost every aspect of our daily lives through its capacity for automating complex tasks and providing superior predictive analytics. AI applications currently span many diverse fields, from economics to entertainment to manufacturing, as well as medicine. Since modern AI's inception decades ago, practitioners in the radiological sciences have been pioneering its development and implementation in medicine, particularly in areas related to diagnostic imaging and therapy. In this anniversary article, we reflect on the lessons learned from AI's chequered past. We further summarize the current status of AI in the radiological sciences, highlighting, with examples, its impressive achievements and its effect on reshaping the practice of medical imaging and radiotherapy in the areas of computer-aided detection, diagnosis, prognosis, and decision support. Moving beyond the commercial hype of AI into reality, we discuss the current challenges to overcome for AI to achieve its promise of better precision healthcare for each patient while reducing the cost burden on families and society at large.
Affiliation(s)
- Issam El Naqa
- Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, USA
- Masoom A Haider
- Department of Medical Imaging and Lunenfeld-Tanenbaum Research Institute, University of Toronto, Toronto, ON, Canada
- Randall K Ten Haken
- Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, USA
22
Liu Z, Liu X, Xiao B, Wang S, Miao Z, Sun Y, Zhang F. Segmentation of organs-at-risk in cervical cancer CT images with a convolutional neural network. Phys Med 2020; 69:184-191. [PMID: 31918371 DOI: 10.1016/j.ejmp.2019.12.008] [Citation(s) in RCA: 50] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/16/2019] [Revised: 11/12/2019] [Accepted: 12/08/2019] [Indexed: 02/06/2023] Open
Abstract
PURPOSE We introduced and evaluated an end-to-end organs-at-risk (OARs) segmentation model that can provide accurate and consistent OAR segmentation results in much less time. METHODS We collected CT scans of 105 patients diagnosed with locally advanced cervical cancer and treated with radiotherapy in one hospital. Seven organs, including the bladder, bone marrow, left femoral head, right femoral head, rectum, small intestine, and spinal cord, were defined as OARs. Contours previously delineated manually by each patient's radiation oncologist before radiotherapy and confirmed by a professional committee consisting of eight experienced oncologists were used as the ground-truth masks. A multi-class segmentation model based on U-Net was designed to fulfil the OAR segmentation task. The Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (HD) were used as quantitative evaluation metrics. RESULTS The mean DSC values of the proposed method were 0.924, 0.854, 0.906, 0.900, 0.791, 0.833, and 0.827 for the bladder, bone marrow, left femoral head, right femoral head, rectum, small intestine, and spinal cord, respectively. The mean HD values were 5.098, 1.993, 1.390, 1.435, 5.949, 5.281, and 3.269 for the above OARs, respectively. CONCLUSIONS The proposed method can help reduce the inter-observer and intra-observer variability of manual OAR delineation and lessen oncologists' workload. The experimental results demonstrate that our model outperforms the benchmark U-Net model, and the oncologists' evaluations show that the segmentation results are highly acceptable for use in radiation therapy planning.
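The 95th-percentile Hausdorff distance reported above can be sketched as follows (a hypothetical illustration on toy point sets, not the authors' implementation): for each surface point of one contour, take the distance to the nearest point of the other, pool the directed distances both ways, and report the 95th percentile.

```python
import numpy as np

def hd95(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between two
    point sets of shape (n, dim) and (m, dim)."""
    # Pairwise Euclidean distances via broadcasting: shape (n, m)
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    directed_ab = d.min(axis=1)   # nearest B point for each A point
    directed_ba = d.min(axis=0)   # nearest A point for each B point
    return float(np.percentile(np.concatenate([directed_ab, directed_ba]), 95))

# Toy contours: two unit-spaced lines, one shifted by 1 voxel in y
a = np.array([[x, 0.0] for x in range(10)])
b = np.array([[x, 1.0] for x in range(10)])
print(hd95(a, b))  # every nearest-neighbour distance is 1.0
```

Using the 95th percentile instead of the maximum makes the metric robust to a few outlier surface points.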
Affiliation(s)
- Zhikai Liu
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- Xia Liu
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- Bin Xiao
- MedMind Technology Co., Ltd., Beijing 100080, China
- Shaobin Wang
- MedMind Technology Co., Ltd., Beijing 100080, China
- Zheng Miao
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- Yuliang Sun
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- Fuquan Zhang
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
23
Chen S, Zhong X, Hu S, Dorn S, Kachelrieß M, Lell M, Maier A. Automatic multi-organ segmentation in dual-energy CT (DECT) with dedicated 3D fully convolutional DECT networks. Med Phys 2020; 47:552-562. [PMID: 31816095 DOI: 10.1002/mp.13950] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2019] [Revised: 11/14/2019] [Accepted: 11/21/2019] [Indexed: 12/11/2022] Open
Abstract
PURPOSE Dual-energy computed tomography (DECT) has shown great potential in many clinical applications. By incorporating information from two different energy spectra, DECT provides higher contrast and reveals more material differences among tissues compared with conventional single-energy CT (SECT). Recent research shows that automatic multi-organ segmentation of DECT data can improve DECT clinical applications. However, most segmentation methods are designed for SECT, and DECT has received considerably less attention in segmentation research. A novel approach is therefore required that can take full advantage of the extra information provided by DECT. METHODS In this work, we proposed four three-dimensional (3D) fully convolutional neural network algorithms for the automatic segmentation of DECT data. We incorporated the extra energy information in different ways and embedded the fusion of information in each of the network architectures. RESULTS Quantitative evaluation on 45 thorax/abdomen DECT datasets acquired with a clinical dual-source CT system was performed. Segmentation of six thoracic and abdominal organs (left and right lungs, liver, spleen, and left and right kidneys) was evaluated using a fivefold cross-validation strategy. Across all tests, we achieved best average Dice coefficients of 98% for the right lung, 98% for the left lung, 96% for the liver, 92% for the spleen, 95% for the right kidney, and 93% for the left kidney. The network architectures exploit the dual-energy spectra and outperform deep learning on SECT. CONCLUSIONS The cross-validation results show that our methods are feasible and promising. Successful tests on special clinical cases reveal that our methods have high adaptability for practical application.
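The fivefold cross-validation described above amounts to simple index bookkeeping; a minimal sketch (a hypothetical illustration, not the authors' code):

```python
import numpy as np

def kfold_indices(n_samples: int, k: int, seed: int = 0):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)          # shuffle once up front
    folds = np.array_split(idx, k)            # k near-equal folds
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

# 45 datasets, 5 folds: each sample is validated exactly once
counts = np.zeros(45, dtype=int)
for train, val in kfold_indices(45, 5):
    assert len(train) + len(val) == 45
    counts[val] += 1
print(counts.sum())  # 45
```

With 45 datasets and k=5, each fold holds 9 validation cases and 36 training cases, and every case appears in exactly one validation fold.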
Affiliation(s)
- Shuqing Chen
- Pattern Recognition Lab, Universität Erlangen-Nürnberg, Erlangen, 91058, Germany
- Xia Zhong
- Pattern Recognition Lab, Universität Erlangen-Nürnberg, Erlangen, 91058, Germany
- Shiyang Hu
- Pattern Recognition Lab, Universität Erlangen-Nürnberg, Erlangen, 91058, Germany; Erlangen Graduate School in Advanced Optical Technologies, Erlangen, 91058, Germany
- Sabrina Dorn
- German Cancer Research Center, Heidelberg, 69120, Germany; Ruprecht-Karls-University Heidelberg, Heidelberg, 69120, Germany
- Marc Kachelrieß
- German Cancer Research Center, Heidelberg, 69120, Germany; Ruprecht-Karls-University Heidelberg, Heidelberg, 69120, Germany
- Michael Lell
- University Hospital Nürnberg, Nürnberg, 90419, Germany; Paracelsus Medical University, Nürnberg, 90419, Germany
- Andreas Maier
- Pattern Recognition Lab, Universität Erlangen-Nürnberg, Erlangen, 91058, Germany; Erlangen Graduate School in Advanced Optical Technologies, Erlangen, 91058, Germany