1
Xu K, Kang H. A Review of Machine Learning Approaches for Brain Positron Emission Tomography Data Analysis. Nucl Med Mol Imaging 2024; 58:203-212. [PMID: 38932757] [PMCID: PMC11196571] [DOI: 10.1007/s13139-024-00845-6]
Abstract
Positron emission tomography (PET) imaging has advanced medical diagnostics and research across various domains, including cardiology, neurology, infection detection, and oncology. The integration of machine learning (ML) algorithms into PET data analysis has further enhanced its capabilities in disease diagnosis and classification, image segmentation, and quantitative analysis. ML algorithms empower researchers and clinicians to extract valuable insights from large, complex PET datasets, enabling automated pattern recognition, predictive modeling of health outcomes, and more efficient data analysis. This review explains the basics of PET imaging, statistical methods for PET image analysis, and the challenges of PET data analysis. We also discuss how combining PET data with machine learning algorithms improves analysis capabilities, and the application of this combination across various aspects of PET image research. The review also highlights current trends and future directions in PET imaging, emphasizing the critical, driving role of machine learning and big PET image data analytics in improving diagnostic accuracy and personalizing medical approaches. The integration of PET imaging and machine learning will shape the future of medical diagnosis and research.
Affiliation(s)
- Ke Xu
- Department of Biostatistics, Vanderbilt University Medical Center, 2525 West End Avenue, Suite 1100, Nashville, TN 37203 USA
- Hakmook Kang
- Department of Biostatistics, Vanderbilt University Medical Center, 2525 West End Avenue, Suite 1100, Nashville, TN 37203 USA
2
Fechter T, Sachpazidis I, Baltas D. The use of deep learning in interventional radiotherapy (brachytherapy): A review with a focus on open source and open data. Z Med Phys 2024; 34:180-196. [PMID: 36376203] [PMCID: PMC11156786] [DOI: 10.1016/j.zemedi.2022.10.005]
Abstract
Deep learning has advanced to become one of the most important technologies in almost all medical fields, and it plays an especially large role in areas related to medical imaging. In interventional radiotherapy (brachytherapy), however, deep learning is still at an early stage. In this review, we first investigate and scrutinise the role of deep learning in all processes of interventional radiotherapy and directly related fields, and summarise the most recent developments. For better understanding, we provide explanations of key terms and of approaches to solving common deep learning problems. Reproducing the results of deep learning algorithms requires both source code and training data to be available; therefore, a second focus of this work is an analysis of the availability of open source code, open data and open models. Our analysis shows that deep learning already plays a major role in some areas of interventional radiotherapy but is still hardly present in others. Nevertheless, its impact is increasing year by year, partly self-propelled but also influenced by closely related fields. Open source code, data and models are growing in number but are still scarce and unevenly distributed among research groups. Reluctance to publish code, data and models limits reproducibility and restricts evaluation to mono-institutional datasets. We conclude that deep learning can positively change the workflow of interventional radiotherapy, but there is still room for improvement when it comes to reproducible results and standardised evaluation methods.
Affiliation(s)
- Tobias Fechter
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany.
- Ilias Sachpazidis
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
- Dimos Baltas
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
3
Karimipourfard M, Sina S, Mahani H, Alavi M, Yazdi M. Impact of deep learning-based multiorgan segmentation methods on patient-specific internal dosimetry in PET/CT imaging: A comparative study. J Appl Clin Med Phys 2024; 25:e14254. [PMID: 38214349] [PMCID: PMC10860559] [DOI: 10.1002/acm2.14254]
Abstract
PURPOSE Accurate and fast multiorgan segmentation is essential for image-based internal dosimetry in nuclear medicine. Conventional manual PET image segmentation is widely used, but it is both time-consuming and subject to human error. This study exploited 2D and 3D deep learning (DL) models: key organs in the trunk of the body were segmented and then used as references for the networks. METHODS The pre-trained p2p-U-Net-GAN and HighRes3D architectures were fine-tuned with PET-only images as inputs. Additionally, the HighRes3D model was alternatively trained with PET/CT images. Sensitivity (SEN), specificity (SPC), intersection over union (IoU), and Dice scores were used to assess the performance of the networks. The impact of the DL-assisted PET image segmentation methods on internal dosimetry was further assessed using Monte Carlo (MC)-derived S-values. RESULTS A comparison with manual low-dose CT-aided segmentation of the PET images was also conducted. Although both 2D and 3D models performed well, HighRes3D offered superior performance, with Dice scores above 0.90. SEN, SPC, and IoU varied within the 0.89-0.93, 0.98-0.99, and 0.87-0.89 intervals, respectively, indicating encouraging model performance. The percentage differences in calculated S-values between the manual and DL segmentation methods varied between 0.1% and 6%, with the maximum observed for the stomach. CONCLUSION The findings show that although incorporating the anatomical information provided by the CT data yields superior Dice scores, the performance of HighRes3D remains comparable without the extra CT channel. Both proposed DL-based methods provide automated, fast segmentation of whole-body PET/CT images with promising evaluation metrics. Of the two, HighRes3D performs better and can therefore be the method of choice for 18F-FDG-PET image segmentation.
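The evaluation metrics named in this abstract (Dice, IoU, sensitivity, specificity) can all be computed from the overlap counts of two binary masks. The following minimal Python sketch is a generic illustration of those definitions, not the authors' code:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Compute Dice, IoU (Jaccard), sensitivity and specificity
    for a pair of binary segmentation masks of equal shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # true positives
    fp = np.logical_and(pred, ~truth).sum()   # false positives
    fn = np.logical_and(~pred, truth).sum()   # false negatives
    tn = np.logical_and(~pred, ~truth).sum()  # true negatives
    dice = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    sen = tp / (tp + fn)
    spc = tn / (tn + fp)
    return dice, iou, sen, spc

# Toy 1D example: prediction overlaps truth on 3 of 4 foreground voxels.
pred = np.array([1, 1, 1, 0, 0, 1])
truth = np.array([1, 1, 1, 1, 0, 0])
dice, iou, sen, spc = segmentation_metrics(pred, truth)
```

The same counts drive all four metrics, which is why they tend to move together but penalize errors differently (Dice and IoU ignore true negatives, specificity depends on them).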
Affiliation(s)
- Sedigheh Sina
- Department of Ray-Medical Engineering, Shiraz University, Shiraz, Iran
- Radiation Research Center, Shiraz University, Shiraz, Iran
- Hojjat Mahani
- Radiation Applications Research School, Nuclear Science and Technology Research Institute, Tehran, Iran
- Mehrosadat Alavi
- Department of Nuclear Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
- Mehran Yazdi
- School of Electrical and Computer Engineering, Shiraz University, Shiraz, Iran
4
Jimenez-Mesa C, Arco JE, Martinez-Murcia FJ, Suckling J, Ramirez J, Gorriz JM. Applications of machine learning and deep learning in SPECT and PET imaging: General overview, challenges and future prospects. Pharmacol Res 2023; 197:106984. [PMID: 37940064] [DOI: 10.1016/j.phrs.2023.106984]
Abstract
The integration of positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging with machine learning (ML) algorithms, including deep learning (DL) models, is a promising approach. It enhances the precision and efficiency of current diagnostic and treatment strategies while offering invaluable insights into disease mechanisms. In this comprehensive review, we examine the transformative impact of ML and DL in this domain. First, we briefly analyse how these algorithms have evolved and which are most widely applied in the field. We then discuss their potential applications in nuclear imaging, such as optimization of image acquisition or reconstruction, biomarker identification, multimodal fusion, and the development of diagnostic, prognostic, and disease-progression evaluation systems, all enabled by their ability to analyse complex patterns and relationships within imaging data and to extract quantitative, objective measures. Furthermore, we discuss implementation challenges, such as data standardization and limited sample sizes, and explore clinical opportunities and future horizons, including data augmentation and explainable AI. Together, these factors are propelling the continuous advancement of more robust, transparent, and reliable systems.
Affiliation(s)
- Carmen Jimenez-Mesa
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain
- Juan E Arco
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain; Department of Communications Engineering, University of Malaga, 29010, Spain
- John Suckling
- Department of Psychiatry, University of Cambridge, Cambridge CB21TN, UK
- Javier Ramirez
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain
- Juan Manuel Gorriz
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain; Department of Psychiatry, University of Cambridge, Cambridge CB21TN, UK
5
Ferrández MC, Golla SSV, Eertink JJ, de Vries BM, Lugtenburg PJ, Wiegers SE, Zwezerijnen GJC, Pieplenbosch S, Kurch L, Hüttmann A, Hanoun C, Dührsen U, de Vet HCW, Zijlstra JM, Boellaard R. An artificial intelligence method using FDG PET to predict treatment outcome in diffuse large B cell lymphoma patients. Sci Rep 2023; 13:13111. [PMID: 37573446] [PMCID: PMC10423266] [DOI: 10.1038/s41598-023-40218-1]
Abstract
Convolutional neural networks (CNNs) may improve response prediction in diffuse large B-cell lymphoma (DLBCL). The aim of this study was to investigate the feasibility of a CNN using maximum intensity projection (MIP) images from 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography (PET) baseline scans to predict the probability of time-to-progression (TTP) within 2 years, and to compare it with the International Prognostic Index (IPI), a clinically used score. A total of 296 DLBCL 18F-FDG PET/CT baseline scans collected from a prospective clinical trial (HOVON-84) were analysed. Cross-validation was performed using coronal and sagittal MIPs. An external dataset (340 DLBCL patients) was used to validate the model. The association between the predicted probabilities, metabolic tumour volume and Dmaxbulk was assessed. Probabilities for PET scans with synthetically removed tumours were also assessed. The CNN provided a 2-year TTP prediction with an area under the curve (AUC) of 0.74, outperforming the IPI-based model (AUC = 0.68). Furthermore, high probabilities (> 0.6) of the original MIPs decreased considerably after removing the tumours (generally to < 0.4). These findings suggest that MIP-based CNNs are able to predict treatment outcome in DLBCL.
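The coronal and sagittal MIPs used as CNN inputs in this study are obtained by collapsing the 3D PET volume along one axis with a voxel-wise maximum. The sketch below is a generic illustration of that operation; the axis-to-orientation mapping is an assumption, since it depends on how a given volume is stored:

```python
import numpy as np

def maximum_intensity_projection(volume, axis):
    """Collapse a 3D PET volume into a 2D maximum intensity
    projection (MIP) by taking the maximum voxel value along
    the chosen axis."""
    return volume.max(axis=axis)

# Toy (z, y, x) volume: a single "hot" voxel appears in both projections.
vol = np.zeros((4, 5, 6))
vol[2, 3, 1] = 9.0
coronal = maximum_intensity_projection(vol, axis=1)   # shape (4, 6)
sagittal = maximum_intensity_projection(vol, axis=2)  # shape (4, 5)
```

Because the maximum is taken per ray, a lesion remains visible in the MIP regardless of its depth, which is what makes MIPs a compact 2D summary for a CNN.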
Affiliation(s)
- Maria C Ferrández
- Cancer Center Amsterdam, Department of Radiology and Nuclear Medicine, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam, The Netherlands.
- Cancer Center Amsterdam, Imaging and Biomarkers, Amsterdam, The Netherlands.
- Sandeep S V Golla
- Cancer Center Amsterdam, Department of Radiology and Nuclear Medicine, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam, The Netherlands
- Cancer Center Amsterdam, Imaging and Biomarkers, Amsterdam, The Netherlands
- Jakoba J Eertink
- Cancer Center Amsterdam, Imaging and Biomarkers, Amsterdam, The Netherlands
- Cancer Center Amsterdam, Department of Hematology, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Bart M de Vries
- Cancer Center Amsterdam, Department of Radiology and Nuclear Medicine, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam, The Netherlands
- Cancer Center Amsterdam, Imaging and Biomarkers, Amsterdam, The Netherlands
- Pieternella J Lugtenburg
- Department of Hematology, Erasmus MC Cancer Institute, University Medical Center Rotterdam, Rotterdam, The Netherlands
- Sanne E Wiegers
- Cancer Center Amsterdam, Department of Radiology and Nuclear Medicine, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam, The Netherlands
- Cancer Center Amsterdam, Imaging and Biomarkers, Amsterdam, The Netherlands
- Gerben J C Zwezerijnen
- Cancer Center Amsterdam, Department of Radiology and Nuclear Medicine, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam, The Netherlands
- Cancer Center Amsterdam, Imaging and Biomarkers, Amsterdam, The Netherlands
- Simone Pieplenbosch
- Cancer Center Amsterdam, Imaging and Biomarkers, Amsterdam, The Netherlands
- Cancer Center Amsterdam, Department of Hematology, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Lars Kurch
- Department of Nuclear Medicine, Clinic and Polyclinic for Nuclear Medicine, University of Leipzig, Leipzig, Germany
- Andreas Hüttmann
- Department of Hematology, West German Cancer Center, University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- Christine Hanoun
- Department of Hematology, West German Cancer Center, University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- Ulrich Dührsen
- Department of Hematology, West German Cancer Center, University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- Henrica C W de Vet
- Department of Epidemiology and Data Science, Amsterdam Public Health Research Institute, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Department of Methodology, Amsterdam Public Health Research Institute, Methodology, Amsterdam, The Netherlands
- Josée M Zijlstra
- Cancer Center Amsterdam, Imaging and Biomarkers, Amsterdam, The Netherlands
- Cancer Center Amsterdam, Department of Hematology, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Ronald Boellaard
- Cancer Center Amsterdam, Department of Radiology and Nuclear Medicine, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam, The Netherlands
- Cancer Center Amsterdam, Imaging and Biomarkers, Amsterdam, The Netherlands
6
Gou S, Xu Y, Yang H, Tong N, Zhang X, Wei L, Zhao L, Zheng M, Liu W. Automated cervical tumor segmentation on MR images using multi-view feature attention network. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103832]
7
Breto AL, Spieler B, Zavala-Romero O, Alhusseini M, Patel NV, Asher DA, Xu IR, Baikovitz JB, Mellon EA, Ford JC, Stoyanova R, Portelance L. Deep Learning for Per-Fraction Automatic Segmentation of Gross Tumor Volume (GTV) and Organs at Risk (OARs) in Adaptive Radiotherapy of Cervical Cancer. Front Oncol 2022; 12:854349. [PMID: 35664789] [PMCID: PMC9159296] [DOI: 10.3389/fonc.2022.854349]
Abstract
Background/Hypothesis MRI-guided online adaptive radiotherapy (MRI-g-OART) improves target coverage and organs-at-risk (OARs) sparing in radiation therapy (RT). For patients with locally advanced cervical cancer (LACC) undergoing RT, changes in bladder and rectal filling contribute to large inter-fraction target volume motion. We hypothesized that deep learning (DL) convolutional neural networks (CNNs) can be trained to accurately segment the gross tumor volume (GTV) and OARs in both planning and daily-fraction MRI scans. Materials/Methods We utilized planning and daily treatment fraction setup (RT-Fr) MRIs from LACC patients treated with stereotactic body RT to a dose of 45-54 Gy in 25 fractions. Nine structures were manually contoured. A Mask R-CNN network was trained and tested under three scenarios: (i) leave-one-out (LOO), using the planning images of N-1 patients for training; (ii) the same network, tested on the RT-Fr MRIs of the "left-out" patient; and (iii) including the planning MRI of the "left-out" patient as an additional training sample and testing on RT-Fr MRIs. Network performance was evaluated using the Dice Similarity Coefficient (DSC) and Hausdorff distances. The association between structure volume and the corresponding DSC was investigated using Pearson's correlation coefficient, r. Results MRIs from fifteen LACC patients were analyzed. In the LOO scenario, the DSC for Rectum, Femur, and Bladder was >0.8, followed by the GTV, Uterus, Mesorectum and Parametrium (0.6-0.7). The results for Vagina and Sigmoid were suboptimal. The performance of the network was similar for most organs when tested on RT-Fr MRIs. Including the planning MRI in the training did not improve segmentation of the RT-Fr MRIs. There was a significant correlation between average organ volume and the corresponding DSC (r = 0.759, p = 0.018). Conclusion We have established a robust workflow for training Mask R-CNN to automatically segment the GTV and OARs in MRI-g-OART of LACC. Despite the small number of patients in this pilot project, the network was successfully trained to identify several structures, though challenges remain, especially for relatively small organs. As the number of LACC cases grows, the performance of the network is expected to improve. A robust auto-contouring tool would improve workflow efficiency and patient tolerance of the OART process.
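The reported volume-DSC association is a Pearson correlation between per-structure average volume and average Dice score. The sketch below illustrates that computation on invented values (the numbers are hypothetical, not the study's data):

```python
import numpy as np

# Hypothetical per-structure values: mean volume (cm^3) and mean Dice score,
# chosen so that larger structures score higher, as the study reports.
volumes = np.array([620.0, 310.0, 250.0, 140.0, 95.0, 60.0, 35.0, 20.0, 12.0])
dices = np.array([0.88, 0.85, 0.83, 0.71, 0.68, 0.65, 0.55, 0.48, 0.40])

# Pearson's r is the off-diagonal entry of the 2x2 correlation matrix.
r = np.corrcoef(volumes, dices)[0, 1]
```

A strongly positive r on such data mirrors the paper's finding that segmentation accuracy tracks organ size.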
Affiliation(s)
- Adrian L Breto
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL, United States
- Benjamin Spieler
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL, United States
- Olmo Zavala-Romero
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL, United States
- Mohammad Alhusseini
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL, United States
- Nirav V Patel
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL, United States
- David A Asher
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL, United States
- Isaac R Xu
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL, United States
- Jacqueline B Baikovitz
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL, United States
- Eric A Mellon
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL, United States
- John C Ford
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL, United States
- Radka Stoyanova
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL, United States
- Lorraine Portelance
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL, United States
8
Carlsen EA, Lindholm K, Hindsholm A, Gæde M, Ladefoged CN, Loft M, Johnbeck CB, Langer SW, Oturai P, Knigge U, Kjaer A, Andersen FL. A convolutional neural network for total tumor segmentation in [64Cu]Cu-DOTATATE PET/CT of patients with neuroendocrine neoplasms. EJNMMI Res 2022; 12:30. [PMID: 35633448] [PMCID: PMC9148347] [DOI: 10.1186/s13550-022-00901-2]
Abstract
Background Segmentation of neuroendocrine neoplasms (NENs) in [64Cu]Cu-DOTATATE positron emission tomography makes it possible to extract quantitative measures usable for patient prognostication. However, manual tumor segmentation is cumbersome and time-consuming. We therefore aimed to implement and test an artificial intelligence (AI) network for tumor segmentation. Patients with gastroenteropancreatic or lung NEN who had undergone [64Cu]Cu-DOTATATE PET/CT were included in our training (n = 117) and test (n = 41) cohorts. A further 10 patients with no signs of NEN were included as negative controls. Ground-truth segmentations were obtained by a physician using a standardized semiautomatic method for tumor segmentation. The nnU-Net framework was used to set up a deep learning U-Net architecture. Dice score, sensitivity and precision were used to select the final model. AI segmentations were implemented in a clinical imaging viewer, where a physician evaluated performance and made manual adjustments. Results Cross-validation training was used to generate individual models and an ensemble model. The ensemble model performed best overall, with a lesion-wise Dice of 0.850 and pixel-wise Dice, precision and sensitivity of 0.801, 0.786 and 0.872, respectively. Performance of the ensemble model was acceptable with some degree of manual adjustment in 35/41 (85%) patients. Final tumor segmentation could be obtained from the AI model with manual adjustments in 5 min, versus 17 min for the ground-truth method (p < 0.01). Conclusion We implemented and validated an AI model that achieved high similarity with ground-truth segmentation and made tumor segmentation faster. With AI, total tumor segmentation may become feasible in clinical routine.
Affiliation(s)
- Esben Andreas Carlsen
- Department of Clinical Physiology and Nuclear Medicine & Cluster for Molecular Imaging, Copenhagen University Hospital - Rigshospitalet & Department of Biomedical Sciences, University of Copenhagen, Copenhagen, Denmark.,ENETS Neuroendocrine Tumor Center of Excellence, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark
| | - Kristian Lindholm
- Department of Clinical Physiology and Nuclear Medicine & Cluster for Molecular Imaging, Copenhagen University Hospital - Rigshospitalet & Department of Biomedical Sciences, University of Copenhagen, Copenhagen, Denmark.,ENETS Neuroendocrine Tumor Center of Excellence, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark
| | - Amalie Hindsholm
- Department of Clinical Physiology and Nuclear Medicine & Cluster for Molecular Imaging, Copenhagen University Hospital - Rigshospitalet & Department of Biomedical Sciences, University of Copenhagen, Copenhagen, Denmark.,ENETS Neuroendocrine Tumor Center of Excellence, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark
| | - Mathias Gæde
- Department of Clinical Physiology and Nuclear Medicine & Cluster for Molecular Imaging, Copenhagen University Hospital - Rigshospitalet & Department of Biomedical Sciences, University of Copenhagen, Copenhagen, Denmark.,ENETS Neuroendocrine Tumor Center of Excellence, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark
| | - Claes Nøhr Ladefoged
- Department of Clinical Physiology and Nuclear Medicine & Cluster for Molecular Imaging, Copenhagen University Hospital - Rigshospitalet & Department of Biomedical Sciences, University of Copenhagen, Copenhagen, Denmark.,ENETS Neuroendocrine Tumor Center of Excellence, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark
| | - Mathias Loft
- Department of Clinical Physiology and Nuclear Medicine & Cluster for Molecular Imaging, Copenhagen University Hospital - Rigshospitalet & Department of Biomedical Sciences, University of Copenhagen, Copenhagen, Denmark.,ENETS Neuroendocrine Tumor Center of Excellence, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark
| | - Camilla Bardram Johnbeck
- Department of Clinical Physiology and Nuclear Medicine & Cluster for Molecular Imaging, Copenhagen University Hospital - Rigshospitalet & Department of Biomedical Sciences, University of Copenhagen, Copenhagen, Denmark.,ENETS Neuroendocrine Tumor Center of Excellence, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark
| | - Seppo Wang Langer
- ENETS Neuroendocrine Tumor Center of Excellence, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark.,Department of Oncology, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark.,Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
| | - Peter Oturai
- Department of Clinical Physiology and Nuclear Medicine & Cluster for Molecular Imaging, Copenhagen University Hospital - Rigshospitalet & Department of Biomedical Sciences, University of Copenhagen, Copenhagen, Denmark.,ENETS Neuroendocrine Tumor Center of Excellence, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark
| | - Ulrich Knigge
- ENETS Neuroendocrine Tumor Center of Excellence, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark.,Department of Clinical Endocrinology and Surgical Gastroenterology, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark
| | - Andreas Kjaer
- Department of Clinical Physiology and Nuclear Medicine & Cluster for Molecular Imaging, Copenhagen University Hospital - Rigshospitalet & Department of Biomedical Sciences, University of Copenhagen, Copenhagen, Denmark. .,ENETS Neuroendocrine Tumor Center of Excellence, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark.
| | - Flemming Littrup Andersen
- Department of Clinical Physiology and Nuclear Medicine & Cluster for Molecular Imaging, Copenhagen University Hospital - Rigshospitalet & Department of Biomedical Sciences, University of Copenhagen, Copenhagen, Denmark.,ENETS Neuroendocrine Tumor Center of Excellence, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark
9
Decentralized Distributed Multi-institutional PET Image Segmentation Using a Federated Deep Learning Framework. Clin Nucl Med 2022; 47:606-617. [PMID: 35442222] [DOI: 10.1097/rlu.0000000000004194]
Abstract
PURPOSE The generalizability and trustworthiness of deep learning (DL)-based algorithms depend on the size and heterogeneity of training datasets. However, because of patient privacy concerns and ethical and legal issues, sharing medical images between different centers is restricted. Our objective is to build a federated DL-based framework for PET image segmentation utilizing a multicentric dataset and to compare its performance with the centralized DL approach. METHODS PET images from 405 head and neck cancer patients from 9 different centers formed the basis of this study. All tumors were segmented manually. PET images converted to SUV maps were resampled to isotropic voxels (3 × 3 × 3 mm³) and then normalized. PET image subvolumes (12 × 12 × 12 cm³) consisting of whole tumors and background were analyzed. Data from each center were divided into train/validation (80% of patients) and test sets (20% of patients). The modified R2U-Net was used as the core DL model. A parallel federated DL model was developed and compared with the centralized approach, where the datasets are pooled on one server. Segmentation metrics, including Dice similarity and Jaccard coefficients, and percent relative errors (RE%) of SUVpeak, SUVmean, SUVmedian, SUVmax, metabolic tumor volume, and total lesion glycolysis were computed and compared with manual delineations. RESULTS The performance of the centralized versus federated DL methods was nearly identical for segmentation metrics: Dice (0.84 ± 0.06 vs 0.84 ± 0.05) and Jaccard (0.73 ± 0.08 vs 0.73 ± 0.07). For quantitative PET parameters, we obtained comparable RE% for SUVmean (6.43% ± 4.72% vs 6.61% ± 5.42%), metabolic tumor volume (12.2% ± 16.2% vs 12.1% ± 15.89%), and total lesion glycolysis (6.93% ± 9.6% vs 7.07% ± 9.85%), and negligible RE% for SUVmax and SUVpeak. No significant differences in performance (P > 0.05) between the 2 frameworks (centralized vs federated) were observed. CONCLUSION The developed federated DL model achieved quantitative performance comparable to the centralized DL model. Federated DL models could provide robust and generalizable segmentation while addressing patient privacy and the legal and ethical issues of clinical data sharing.
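The core idea of the federated framework, training locally at each center and sharing only model parameters rather than images, can be sketched as a FedAvg-style weighted average of per-center weights. This is a generic illustration of that aggregation step, not the paper's implementation:

```python
import numpy as np

def federated_average(center_weights, center_sizes):
    """One federated-averaging aggregation round: average each model
    parameter across centers, weighted by local dataset size, so raw
    patient images never leave the local center."""
    total = sum(center_sizes)
    n_params = len(center_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(center_weights, center_sizes))
        for i in range(n_params)
    ]

# Two hypothetical centers with one-parameter "models" for illustration.
w_a = [np.array([1.0])]  # center A, 30 patients
w_b = [np.array([4.0])]  # center B, 10 patients
global_w = federated_average([w_a, w_b], [30, 10])
```

In a real setup each element of `center_weights` would be the full list of network tensors after a round of local training, and the averaged result would be broadcast back to all centers for the next round.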
10
Shen G, Jin X, Sun C, Li Q. Artificial Intelligence Radiotherapy Planning: Automatic Segmentation of Human Organs in CT Images Based on a Modified Convolutional Neural Network. Front Public Health 2022; 10:813135. [PMID: 35493368] [PMCID: PMC9051073] [DOI: 10.3389/fpubh.2022.813135]
Abstract
Objective: Precise segmentation of human organs and anatomic structures (especially organs at risk, OARs) is the basis and prerequisite for radiation therapy treatment planning. To ensure rapid and accurate design of radiotherapy treatment plans, an automatic organ segmentation technique based on a deep learning convolutional neural network was investigated. Method: We modified and further developed a deep learning convolutional neural network (CNN) algorithm called BCDU-Net. Twenty-two thousand CT images from 329 patients, with the corresponding contours of 17 organ types delineated manually by experienced physicians, were used to train and validate the algorithm. Randomly selected CT images were used to test the modified BCDU-Net algorithm; the model's weight parameters were obtained by training the network. Result: The average Dice similarity coefficient (DSC) between automatic and manual segmentation across the 17 organ types reached 0.8376, with the best coefficient reaching 0.9676. With our method, automatically segmenting the contours of one organ in a single CT image took 1.5-2 s, and segmenting all 17 organs in a patient's CT dataset took about 1 h. Conclusion: The modified deep neural network algorithm can automatically segment 17 types of human organs quickly and accurately. The accuracy and speed of the method meet the requirements for its application in radiotherapy.
Affiliation(s)
- Guosheng Shen
- Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou, China
- Key Laboratory of Basic Research on Heavy Ion Radiation Application in Medicine, Lanzhou, China
- Key Laboratory of Heavy Ion Radiation Biology and Medicine of Chinese Academy of Sciences, Lanzhou, China
- University of Chinese Academy of Sciences, Beijing, China
- Xiaodong Jin
- Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou, China
- Key Laboratory of Basic Research on Heavy Ion Radiation Application in Medicine, Lanzhou, China
- Key Laboratory of Heavy Ion Radiation Biology and Medicine of Chinese Academy of Sciences, Lanzhou, China
- University of Chinese Academy of Sciences, Beijing, China
- Chao Sun
- Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou, China
- Key Laboratory of Basic Research on Heavy Ion Radiation Application in Medicine, Lanzhou, China
- Key Laboratory of Heavy Ion Radiation Biology and Medicine of Chinese Academy of Sciences, Lanzhou, China
- University of Chinese Academy of Sciences, Beijing, China
- Qiang Li
- Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou, China
- Key Laboratory of Basic Research on Heavy Ion Radiation Application in Medicine, Lanzhou, China
- Key Laboratory of Heavy Ion Radiation Biology and Medicine of Chinese Academy of Sciences, Lanzhou, China
- University of Chinese Academy of Sciences, Beijing, China
- *Correspondence: Qiang Li
11
Groendahl AR, Moe YM, Kaushal CK, Huynh BN, Rusten E, Tomic O, Hernes E, Hanekamp B, Undseth C, Guren MG, Malinen E, Futsaether CM. Deep learning-based automatic delineation of anal cancer gross tumour volume: a multimodality comparison of CT, PET and MRI. Acta Oncol 2022; 61:89-96. [PMID: 34783610] [DOI: 10.1080/0284186x.2021.1994645]
Abstract
BACKGROUND Accurate target volume delineation is a prerequisite for high-precision radiotherapy. However, manual delineation is resource-demanding and prone to interobserver variation. An automatic delineation approach could potentially save time and increase delineation consistency. In this study, the applicability of deep learning for fully automatic delineation of the gross tumour volume (GTV) in patients with anal squamous cell carcinoma (ASCC) was evaluated for the first time. An extensive comparison was conducted of the effects that single-modality and multimodality combinations of computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI) have on automatic delineation quality. MATERIAL AND METHODS 18F-fluorodeoxyglucose PET/CT and contrast-enhanced CT (ceCT) images were collected for 86 patients with ASCC. A subset of 36 patients also underwent a study-specific 3T MRI examination including T2- and diffusion-weighted imaging. The resulting two datasets were analysed separately. A two-dimensional U-Net convolutional neural network (CNN) was trained to delineate the GTV in axial image slices based on single- or multimodality image input. Manual GTV delineations constituted the ground truth for CNN model training and evaluation. Models were evaluated using the Dice similarity coefficient (Dice) and surface distance metrics computed from five-fold cross-validation. RESULTS CNN-generated automatic delineations demonstrated good agreement with the ground truth, resulting in mean Dice scores of 0.65-0.76 and 0.74-0.83 for the 86- and 36-patient datasets, respectively. For both datasets, the highest mean Dice scores were obtained using a multimodal combination of PET and ceCT (0.76-0.83). However, models based on single-modality ceCT performed comparably well (0.74-0.81). T2W-only models performed acceptably but were somewhat inferior to the PET/ceCT- and ceCT-based models.
CONCLUSION CNNs provided high-quality automatic GTV delineations for both single and multimodality image input, indicating that deep learning may prove a versatile tool for target volume delineation in future patients with ASCC.
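Multimodality input to a 2D segmentation network of the kind described above is typically formed by stacking co-registered slices as channels. A minimal sketch (illustrative only; the per-modality min-max normalisation here is an assumption, not taken from the paper):

```python
import numpy as np

def stack_modalities(ct_slice, pet_slice):
    """Stack co-registered axial slices into one multi-channel network input.
    Each modality is min-max normalised independently before stacking
    (assumed preprocessing, for illustration)."""
    def norm(x):
        x = np.asarray(x, dtype=np.float32)
        rng = float(x.max() - x.min())
        return (x - x.min()) / rng if rng else np.zeros_like(x)
    return np.stack([norm(ct_slice), norm(pet_slice)], axis=-1)  # H x W x 2

gen = np.random.default_rng(0)
ct = gen.uniform(-1000.0, 2000.0, size=(256, 256))   # HU-like values
pet = gen.uniform(0.0, 12.0, size=(256, 256))        # SUV-like values
x = stack_modalities(ct, pet)
print(x.shape)  # (256, 256, 2)
```

The network then sees both modalities at every spatial location, which is what lets the PET/ceCT combination outperform either modality alone.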
Affiliation(s)
- Yngve Mardal Moe
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Bao Ngoc Huynh
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Espen Rusten
- Department of Medical Physics, Oslo University Hospital, Oslo, Norway
- Oliver Tomic
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Eivor Hernes
- Department of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway
- Bettina Hanekamp
- Department of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway
- Marianne Grønlie Guren
- Department of Oncology, Oslo University Hospital, Oslo, Norway
- Division of Cancer Medicine, Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Eirik Malinen
- Department of Medical Physics, Oslo University Hospital, Oslo, Norway
- Department of Physics, University of Oslo, Oslo, Norway
12
Shiri I, Arabi H, Sanaat A, Jenabi E, Becker M, Zaidi H. Fully Automated Gross Tumor Volume Delineation From PET in Head and Neck Cancer Using Deep Learning Algorithms. Clin Nucl Med 2021; 46:872-883. [PMID: 34238799] [DOI: 10.1097/rlu.0000000000003789]
Abstract
PURPOSE The availability of automated, accurate, and robust gross tumor volume (GTV) segmentation algorithms is critical for the management of head and neck cancer (HNC) patients. In this work, we evaluated 3 state-of-the-art deep learning algorithms combined with 8 different loss functions for PET image segmentation using a comprehensive training set and evaluated their performance on an external validation set of HNC patients. PATIENTS AND METHODS 18F-FDG PET/CT images of 470 patients presenting with HNC, on which manually defined GTVs served as the standard of reference, were used for training (340 patients), evaluation (30 patients), and testing (100 patients from different centers) of these algorithms. PET image intensity was converted to SUVs and normalized to the range (0-1) using the SUVmax of the whole data set. PET images were cropped to 12 × 12 × 12 cm3 subvolumes using isotropic voxel spacing of 3 × 3 × 3 mm3, containing the whole tumor and neighboring background including lymph nodes. We used different approaches for data augmentation, including rotation (-15 degrees, +15 degrees), scaling (-20%, 20%), random flipping (3 axes), and elastic deformation (sigma = 1 and proportion to deform = 0.7), to increase the number of training sets. Three state-of-the-art networks, including Dense-VNet, NN-UNet, and Res-Net, with 8 different loss functions, including Dice, generalized Wasserstein Dice loss, Dice plus XEnt loss, generalized Dice loss, cross-entropy, sensitivity-specificity, and Tversky, were used. Overall, 28 different networks were built. Standard image segmentation metrics, including Dice similarity, image-derived PET metrics, and first-order and shape radiomic features, were used for performance assessment of these algorithms.
RESULTS The best results in terms of Dice coefficient (mean ± SD) were achieved by cross-entropy for Res-Net (0.86 ± 0.05; 95% confidence interval [CI], 0.85-0.87) and Dense-VNet (0.85 ± 0.058; 95% CI, 0.84-0.86), and by Dice plus XEnt for NN-UNet (0.87 ± 0.05; 95% CI, 0.86-0.88). The difference between the 3 networks was not statistically significant (P > 0.05). The percent relative error (RE%) of SUVmax quantification was less than 5% in networks with a Dice coefficient greater than 0.84, whereas the lowest RE% (0.41%) was achieved by Res-Net with cross-entropy loss. For the maximum 3-dimensional diameter and sphericity shape features, all networks achieved an RE ≤ 5% and ≤ 10%, respectively, reflecting small variability. CONCLUSIONS Deep learning algorithms exhibited promising performance for automated GTV delineation on HNC PET images. Different loss functions performed competitively across networks; cross-entropy for Res-Net and Dense-VNet, and Dice plus XEnt for NN-UNet, emerged as reliable configurations for GTV delineation. Caution should be exercised for clinical deployment owing to the occurrence of outliers in deep learning-based algorithms.
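The Dice and "Dice plus XEnt" (cross-entropy) losses compared above can be sketched in a few lines. This is an illustrative NumPy version, not the implementations used in the study:

```python
import numpy as np

def soft_dice_loss(prob, target, eps=1e-6):
    """Soft Dice loss between foreground probabilities and a binary target."""
    inter = float((prob * target).sum())
    return 1.0 - (2.0 * inter + eps) / (float(prob.sum() + target.sum()) + eps)

def cross_entropy_loss(prob, target, eps=1e-6):
    """Voxel-averaged binary cross-entropy."""
    p = np.clip(prob, eps, 1.0 - eps)
    return float(-(target * np.log(p) + (1.0 - target) * np.log(1.0 - p)).mean())

def dice_plus_xent(prob, target):
    """'Dice plus XEnt': the sum of the two losses above (assumed combination)."""
    return soft_dice_loss(prob, target) + cross_entropy_loss(prob, target)

target = np.zeros((4, 4))
target[1:3, 1:3] = 1.0
print(soft_dice_loss(target.copy(), target))  # perfect overlap → 0.0
```

Dice-type losses directly optimise region overlap, while cross-entropy penalises every voxel independently; summing them is one common way to get the benefits of both.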
Affiliation(s)
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Elnaz Jenabi
- Research Centre for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
13
Satake H, Ishigaki S, Ito R, Naganawa S. Radiomics in breast MRI: current progress toward clinical application in the era of artificial intelligence. Radiol Med 2021; 127:39-56. [PMID: 34704213] [DOI: 10.1007/s11547-021-01423-y]
Abstract
Breast magnetic resonance imaging (MRI) is the most sensitive imaging modality for breast cancer diagnosis and is widely used clinically. Dynamic contrast-enhanced MRI is the basis of breast MRI, but ultrafast, T2-weighted, and diffusion-weighted images are also acquired to improve lesion characterization. Such multiparametric MRI, with its numerous morphological and functional data, poses new challenges to radiologists, and thus new tools for reliable, reproducible, and high-volume quantitative assessment are warranted. In this context, radiomics, an emerging field of research involving the conversion of digital medical images into mineable data for clinical decision-making and outcome prediction, has been gaining ground in oncology. Recent developments in artificial intelligence have promoted radiomics studies in various fields, including breast cancer treatment, and numerous studies have been conducted. However, radiomics has shown a translational gap in clinical practice, and many issues remain to be solved. In this review, we outline the steps of the radiomics workflow and survey the clinical applications of radiomics in breast MRI based on the published literature, as well as the current discussion of limitations and challenges in radiomics.
Affiliation(s)
- Hiroko Satake
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Satoko Ishigaki
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Rintaro Ito
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Shinji Naganawa
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
14
Litvin AA, Burkin DA, Kropinov AA, Paramzin FN. Radiomics and Digital Image Texture Analysis in Oncology (Review). Sovrem Tekhnologii Med 2021; 13:97-104. [PMID: 34513082] [PMCID: PMC8353717] [DOI: 10.17691/stm2021.13.2.11]
Abstract
One of the most promising areas of disease diagnosis and prognosis is radiomics, a science combining radiology, mathematical modeling, and deep machine learning. The central concept of radiomics is image biomarkers (IBMs), parameters that characterize various pathological changes and are calculated from the analysis of digital image texture. IBMs are used for quantitative assessment of digital imaging results (CT, MRI, ultrasound, PET). The use of IBMs as a form of “virtual biopsy” is of particular relevance in oncology. The article provides the basic concepts of radiomics, identifying the main stages of obtaining IBMs: data collection and preprocessing, tumor segmentation, data detection and extraction, modeling, statistical processing, and data validation. The authors analyze the possibilities of using IBMs in oncology, describing the currently known features and advantages of radiomics and image texture analysis in the diagnosis and prognosis of cancer. The limitations and problems associated with the use of radiomics data are also considered. Although this novel tool for performing virtual biopsy of human tissue is still at the development stage, quite a few projects have already been implemented, and medical software packages for radiomics analysis of digital images have been created.
Affiliation(s)
- A A Litvin
- Professor, Department of Surgical Disciplines, Immanuel Kant Baltic Federal University, 14 A. Nevskogo St., Kaliningrad, 236016, Russia; Deputy Head Physician for Medical Aspects, Regional Clinical Hospital of the Kaliningrad Region, 74 Klinicheskaya St., Kaliningrad, 236016, Russia
- D A Burkin
- PhD Student in Information Science and Computer Engineering, Immanuel Kant Baltic Federal University, 14 A. Nevskogo St., Kaliningrad, 236016, Russia
- A A Kropinov
- Therapeutist, Central City Clinical Hospital, 3 Letnyaya St., Kaliningrad, 236005, Russia
- F N Paramzin
- Oncologist, Central City Clinical Hospital, 3 Letnyaya St., Kaliningrad, 236005, Russia
15
Liu Z, Chen W, Guan H, Zhen H, Shen J, Liu X, Liu A, Li R, Geng J, You J, Wang W, Li Z, Zhang Y, Chen Y, Du J, Chen Q, Chen Y, Wang S, Zhang F, Qiu J. An Adversarial Deep-Learning-Based Model for Cervical Cancer CTV Segmentation With Multicenter Blinded Randomized Controlled Validation. Front Oncol 2021; 11:702270. [PMID: 34490103] [PMCID: PMC8417437] [DOI: 10.3389/fonc.2021.702270]
Abstract
Purpose To propose a novel deep-learning-based auto-segmentation model for CTV delineation in cervical cancer and to evaluate whether it can perform comparably to manual delineation using a three-stage multicenter evaluation framework. Methods An adversarial deep-learning-based auto-segmentation model was trained and configured for cervical cancer CTV contouring using CT data from 237 patients. CT scans of an additional 20 consecutive patients with locally advanced cervical cancer were then collected for a three-stage multicenter randomized controlled evaluation involving nine oncologists from six medical centers. This evaluation system combines objective performance metrics, radiation oncologist assessment, and finally a head-to-head Turing imitation test. Accuracy and effectiveness were evaluated step by step. The intra-observer consistency of each oncologist was also tested. Results In the stage-1 evaluation, the mean DSC and 95HD values of the proposed model were 0.88 and 3.46 mm, respectively. In stage 2, the oncologist grading evaluation showed that the majority of AI contours were comparable to the ground-truth (GT) contours. The average CTV scores for AI and GT were 2.68 vs. 2.71 in week 0 (P = .206) and 2.62 vs. 2.63 in week 2 (P = .552), with no statistically significant differences. In stage 3, the Turing imitation test showed that the percentage of AI contours judged to be better than GT contours by ≥5 oncologists was 60.0% in week 0 and 42.5% in week 2. Most oncologists demonstrated good consistency between the 2 weeks (P > 0.05). Conclusions The tested AI model was demonstrated to be accurate and comparable to manual CTV segmentation in cervical cancer patients when assessed by our three-stage evaluation framework.
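The 95HD metric used in the stage-1 evaluation is the 95th-percentile symmetric Hausdorff distance between contour surfaces, which is less sensitive to single outlier points than the plain Hausdorff distance. A brute-force sketch over small point sets (illustrative only, not the study's implementation):

```python
import numpy as np

def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two point sets
    (e.g. surface voxel coordinates in mm)."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    d_ab = d.min(axis=1)   # each point in A to its nearest point in B
    d_ba = d.min(axis=0)   # and vice versa
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))

a = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
b = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
print(hd95(a, b))  # every nearest-neighbour distance is 1 mm → 1.0
```

For real contours the point sets are the surface voxels of the two segmentations, so the reported 3.46 mm means 95% of surface points lie within 3.46 mm of the other contour.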
Affiliation(s)
- Zhikai Liu
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Wanqi Chen
- Department of Nuclear Medicine, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Hui Guan
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Hongnan Zhen
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jing Shen
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xia Liu
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- An Liu
- Department of Radiation Oncology, City of Hope National Medical Center, Duarte, CA, United States
- Richard Li
- Department of Radiation Oncology, City of Hope National Medical Center, Duarte, CA, United States
- Jianhao Geng
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital and Institute, Beijing, China
- Jing You
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital and Institute, Beijing, China
- Weihu Wang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital and Institute, Beijing, China
- Zhouyu Li
- Department of Radiation Oncology, Affiliated Cancer Hospital & Institute of Guangzhou Medical University, Guangzhou, China
- Yongfeng Zhang
- Department of Radiation Oncology, The Fourth Hospital of Jilin University (FAW General Hospital), Jilin, China
- Yuanyuan Chen
- Oncology Department, Cangzhou Hospital of Integrated Traditional Chinese and Western Medicine, Hebei, China
- Junjie Du
- Department of Radiation Oncology, Yangquan First People's Hospital, Shanxi, China
- Qi Chen
- Research and Development Department, MedMind Technology Co., Ltd., Beijing, China
- Yu Chen
- Research and Development Department, MedMind Technology Co., Ltd., Beijing, China
- Shaobin Wang
- Research and Development Department, MedMind Technology Co., Ltd., Beijing, China
- Fuquan Zhang
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jie Qiu
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
16
Kazemimoghadam M, Chi W, Rahimi A, Kim N, Alluri P, Nwachukwu C, Lu W, Gu X. Saliency-guided deep learning network for automatic tumor bed volume delineation in post-operative breast irradiation. Phys Med Biol 2021; 66:10.1088/1361-6560/ac176d. [PMID: 34298539] [PMCID: PMC8639319] [DOI: 10.1088/1361-6560/ac176d]
Abstract
Efficient, reliable and reproducible target volume delineation is a key step in the effective planning of breast radiotherapy. However, post-operative breast target delineation is challenging, as the contrast between the tumor bed volume (TBV) and normal breast tissue is relatively low in CT images. In this study, we propose to mimic the marker-guidance procedure used in manual target delineation. We developed a saliency-based deep learning segmentation (SDL-Seg) algorithm for accurate TBV segmentation in post-operative breast irradiation. The SDL-Seg algorithm incorporates saliency information, in the form of markers' location cues, into a U-Net model. The design forces the model to encode location-related features, which underscores regions with high saliency levels and suppresses low-saliency regions. The saliency maps were generated by identifying markers on CT images. Markers' locations were then converted to probability maps using a distance transformation coupled with a Gaussian filter. Subsequently, the CT images and the corresponding saliency maps formed a multi-channel input to the SDL-Seg network. Our in-house dataset comprised 145 prone CT images from 29 post-operative breast cancer patients, who received a 5-fraction partial breast irradiation (PBI) regimen on GammaPod. The 29 patients were randomly split into training (19), validation (5) and test (5) sets. The performance of the proposed method was compared against basic U-Net. Our model achieved means (standard deviations) of 76.4 (±2.7)%, 6.76 (±1.83) mm, and 1.9 (±0.66) mm for the Dice similarity coefficient, 95th-percentile Hausdorff distance, and average symmetric surface distance, respectively, on the test set, with a computation time below 11 s per CT volume. SDL-Seg showed superior performance relative to basic U-Net for all evaluation metrics while preserving a low computation cost.
The findings demonstrate that SDL-Seg is a promising approach for improving the efficiency and accuracy of the on-line treatment planning procedure of PBI, such as GammaPod-based PBI.
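The described conversion of marker locations into probability-like saliency maps (a distance transformation coupled with a Gaussian) can be sketched as follows. The exact transform and the sigma value here are assumptions for illustration, not the paper's parameters:

```python
import numpy as np

def marker_saliency_map(shape, marker_coords, sigma=3.0):
    """Marker locations → probability-like saliency map: Euclidean distance to
    the nearest marker, passed through a Gaussian, so saliency is 1 at a marker
    and decays smoothly with distance (assumed formulation)."""
    rr, cc = np.indices(shape)
    grid = np.stack([rr, cc], axis=-1).astype(float)         # H x W x 2
    markers = np.asarray(marker_coords, dtype=float)         # M x 2
    d = np.linalg.norm(grid[:, :, None, :] - markers[None, None, :, :], axis=-1)
    dist = d.min(axis=-1)                                    # distance transform
    return np.exp(-dist**2 / (2.0 * sigma**2))               # Gaussian weighting

sal = marker_saliency_map((64, 64), [(20, 20), (40, 44)])
print(sal.shape, sal[20, 20])  # peak saliency of 1.0 at a marker
```

Stacked with the CT slice as an extra channel, such a map gives the network an explicit location prior around the surgical markers.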
Affiliation(s)
- Mahdieh Kazemimoghadam
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Weicheng Chi
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- School of Software Engineering, South China University of Technology, Guangzhou, Guangdong 510006, People's Republic of China
- Asal Rahimi
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Nathan Kim
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Prasanna Alluri
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Chika Nwachukwu
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Xuejun Gu
- Stanford University, Palo Alto, CA, United States of America
17
Zaidi H, El Naqa I. Quantitative Molecular Positron Emission Tomography Imaging Using Advanced Deep Learning Techniques. Annu Rev Biomed Eng 2021; 23:249-276. [PMID: 33797938] [DOI: 10.1146/annurev-bioeng-082420-020343]
Abstract
The widespread availability of high-performance computing and the popularity of artificial intelligence (AI), with machine learning and deep learning (ML/DL) algorithms at the helm, have stimulated the development of many applications involving the use of AI-based techniques in molecular imaging research. Applications reported in the literature encompass various areas, including innovative design concepts in positron emission tomography (PET) instrumentation, quantitative image reconstruction and analysis techniques, computer-aided detection and diagnosis, as well as modeling and prediction of outcomes. This review reflects the tremendous interest in quantitative molecular imaging using ML/DL techniques during the past decade, ranging from the basic principles of ML/DL techniques to the various steps required for obtaining quantitatively accurate PET data, including algorithms used to denoise or correct for physical degrading factors as well as to quantify tracer uptake and metabolic tumor volume for treatment monitoring or radiation therapy treatment planning and response prediction. This review also addresses future opportunities and current challenges facing the adoption of ML/DL approaches and their role in multimodality imaging.
Affiliation(s)
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211 Geneva, Switzerland
- Geneva Neuroscience Centre, University of Geneva, 1205 Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, 9700 RB Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, DK-5000 Odense, Denmark
- Issam El Naqa
- Department of Machine Learning, Moffitt Cancer Center, Tampa, Florida 33612, USA
- Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan 48109, USA
- Department of Oncology, McGill University, Montreal, Quebec H3A 1G5, Canada
18
[18F]FDG PET radiomics to predict disease-free survival in cervical cancer: a multi-scanner/center study with external validation. Eur J Nucl Med Mol Imaging 2021; 48:3432-3443. [PMID: 33772334] [PMCID: PMC8440288] [DOI: 10.1007/s00259-021-05303-5]
Abstract
Purpose To test the performance of native and tumour-to-liver ratio (TLR) radiomic features extracted from pre-treatment 2-[18F] fluoro-2-deoxy-D-glucose ([18F]FDG) PET/CT and combined with machine learning (ML) for predicting cancer recurrence in patients with locally advanced cervical cancer (LACC). Methods One hundred fifty-eight patients with LACC from multiple centers were retrospectively included in the study. Tumours were segmented using the Fuzzy Locally Adaptive Bayesian (FLAB) algorithm. Radiomic features were extracted from the tumours and from regions drawn over the normal liver. A Cox proportional hazards model was used to test the statistical significance of clinical and radiomic features. Fivefold cross-validation was used to tune the number of features. Seven different feature selection methods and four classifiers were tested. The models with the selected features were trained using bootstrapping and tested on data from each scanner independently. Reproducibility of radiomic features, the added value of clinical data, and the effect of ComBat-based harmonisation were evaluated across scanners. Results After a median follow-up of 23 months, 29% of the patients recurred. No individual radiomic or clinical feature was significantly associated with cancer recurrence. The best model was obtained using 10 TLR features combined with clinical information. The area under the curve (AUC), F1-score, precision and recall were respectively 0.78 (0.67–0.88), 0.49 (0.25–0.67), 0.42 (0.25–0.60) and 0.63 (0.20–0.80). ComBat did not improve the predictive performance of the best models. The performance of both the TLR and the native models varied across the scanners used in the test set. Conclusion [18F]FDG PET radiomic features combined with ML add relevant information to the standard clinical parameters in terms of LACC patients' outcomes but remain subject to variability across PET/CT devices.
Supplementary Information The online version contains supplementary material available at 10.1007/s00259-021-05303-5.
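The tumour-to-liver ratio (TLR) features evaluated above normalise each tumour feature by the matching feature computed over a normal-liver region, which reduces between-scanner intensity differences. A toy sketch with simple first-order statistics (the specific statistics chosen here are illustrative, not the study's feature set):

```python
import numpy as np

def tlr_features(tumour_vals, liver_vals):
    """Tumour-to-liver-ratio (TLR) versions of simple first-order statistics:
    each tumour statistic divided by the same statistic over normal liver."""
    stats = {
        "mean": np.mean,
        "max": np.max,
        "p90": lambda v: np.percentile(v, 90),
    }
    return {f"tlr_{name}": float(f(tumour_vals)) / float(f(liver_vals))
            for name, f in stats.items()}

tumour = np.array([8.0, 10.0, 12.0])  # SUVs sampled inside the tumour ROI
liver = np.array([2.0, 2.5, 3.0])     # SUVs sampled in a normal-liver ROI
feats = tlr_features(tumour, liver)
print(feats["tlr_mean"])  # 10.0 / 2.5 = 4.0
```

Such ratio features would then feed the feature-selection and classifier stages described in the Methods.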
19
Oh KT, Kim D, Ye BS, Lee S, Yun M, Yoo SK. Segmentation of white matter hyperintensities on 18F-FDG PET/CT images with a generative adversarial network. Eur J Nucl Med Mol Imaging 2021; 48:3422-3431. [PMID: 33693968] [DOI: 10.1007/s00259-021-05285-4]
Abstract
PURPOSE White matter hyperintensities (WMH) are typically segmented using MRI because WMH are hardly visible on 18F-FDG PET/CT. This retrospective study was conducted to segment WMH and estimate their volumes from 18F-FDG PET with a generative adversarial network (WhyperGAN). METHODS We selected patients whose interval between MRI and FDG PET/CT scans was within 3 months, from January 2017 to December 2018, and classified them into mild, moderate, and severe groups following the semiquantitative rating method of Fazekas. For each group, 50 patients were selected, of whom we randomly selected 35 for training and 15 for testing. WMH were automatically segmented from FLAIR MRI with manual adjustment. Patches of WMH were extracted from 18F-FDG PET and the segmented MRI. WhyperGAN was compared with H-DenseUnet, a deep learning method widely used for segmentation tasks, in terms of the Dice similarity coefficient (DSC), recall, and average volume difference (AVD). For volume estimation, the predicted WMH volumes from PET were compared with ground-truth volumes. RESULTS The DSC values were associated with WMH volumes on MRI. For volumes >60 mL, the DSC values were 0.751 for WhyperGAN and 0.564 for H-DenseUnet. For volumes ≤60 mL, the DSC values decreased rapidly as the volume decreased (0.362 for WhyperGAN vs. 0.237 for H-DenseUnet). For recall, WhyperGAN achieved the highest value in the severe group (0.579 vs. 0.509 for H-DenseUnet). For AVD, WhyperGAN achieved the lowest score in the severe group (0.494 vs. 0.941 for H-DenseUnet). For WMH volume estimation, WhyperGAN performed better than H-DenseUnet and yielded excellent correlation coefficients (r = 0.998, 0.983, and 0.908 in the severe, moderate, and mild groups). CONCLUSIONS Although limited by visual analysis, the WhyperGAN-based method can be used to automatically segment WMH and estimate their volumes from 18F-FDG PET/CT.
This would increase the usefulness of 18F-FDG PET/CT for the evaluation of WMH in patients with cognitive impairment.
Affiliation(s)
- Kyeong Taek Oh
- Department of Medical Engineering, Yonsei University College of Medicine, Seoul, Republic of Korea
- Dongwoo Kim
- Department of Nuclear Medicine, Yonsei University College of Medicine, Seoul, Republic of Korea
- Byoung Seok Ye
- Department of Neurology, Yonsei University College of Medicine, Seoul, Republic of Korea
- Sangwon Lee
- Department of Nuclear Medicine, Yonsei University College of Medicine, Seoul, Republic of Korea
- Mijin Yun
- Department of Nuclear Medicine, Yonsei University College of Medicine, Seoul, Republic of Korea
- Sun Kook Yoo
- Department of Medical Engineering, Yonsei University College of Medicine, Seoul, Republic of Korea
20
Deep learning in Nuclear Medicine—focus on CNN-based approaches for PET/CT and PET/MR: where do we stand? Clin Transl Imaging 2021. [DOI: 10.1007/s40336-021-00411-6]
|
21
|
Tamal M. Intensity threshold based solid tumour segmentation method for Positron Emission Tomography (PET) images: A review. Heliyon 2020; 6:e05267. [PMID: 33163642 PMCID: PMC7610228 DOI: 10.1016/j.heliyon.2020.e05267] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2019] [Revised: 05/14/2020] [Accepted: 10/12/2020] [Indexed: 12/02/2022] Open
Abstract
Accurate, robust and reproducible delineation of tumour in Positron Emission Tomography (PET) is essential for diagnosis, treatment planning and response assessment. Since the standardized uptake value (SUV), a normalized semiquantitative parameter used in PET, is represented by the intensity of the PET image and related to radiotracer uptake, an SUV-based threshold method is a natural choice for delineating the tumour. However, determining an optimum threshold value is challenging due to low spatial resolution and signal-to-noise ratio (SNR), along with the finite image sampling constraint. The aim of this review is to summarize different fixed and adaptive threshold-based PET image segmentation approaches under a common mathematical framework. Advantages and disadvantages of different threshold-based methods are also highlighted from the perspectives of diagnosis, treatment planning and response assessment. Several fixed threshold values (30%–70% of the maximum SUV of the tumour (SUVmaxT)) have been investigated. It has been reported that the fixed threshold-based method depends strongly on the SNR, tumour-to-background ratio (TBR) and the size of the tumour. An adaptive threshold-based method, an alternative to a fixed threshold, can minimize these dependencies by accounting for TBR and tumour size. However, the parameters of adaptive methods need to be calibrated for each PET camera system (e.g., scanner geometry, image acquisition protocol, reconstruction algorithm, etc.), and it is not straightforward to apply the same procedure to other PET systems and obtain similar results. It has also been reported that the performance of adaptive methods is suboptimal for smaller volumes with lower TBR and SNR. Statistical analysis carried out on NEMA thorax phantom images also indicates that regions segmented by the fixed threshold method are significantly different for all cases.
On the other hand, the adaptive method produces significantly different segmented regions only for low TBR with different SNR. From this viewpoint, a robust threshold-based segmentation method that is less sensitive to SUVmaxT, SNR, TBR and volume needs to be developed. Comparing the performance of different threshold-based methods is challenging because each method was tested on a dissimilar data set with different data acquisition and reconstruction protocols along with different TBR, SNR and volumes. To avoid such difficulties, it would be desirable to have a common database of clinical PET images, acquired with different image acquisition protocols and different PET cameras, for comparing the performance of automatic segmentation methods. It is also suggested to report changes in SNR and TBR when reporting response using threshold-based methods.
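The fixed-threshold approach reviewed above keeps every voxel whose SUV exceeds a chosen fraction of the tumour's maximum SUV. A minimal sketch (the image values and the 40% default are illustrative; the review reports thresholds spanning 30%–70% of SUVmaxT):

```python
import numpy as np

def fixed_threshold_mask(suv, fraction=0.4):
    """Segment voxels whose SUV is at least `fraction` of the tumour SUVmax.

    `fraction` is the fixed threshold; 0.40 (40% of SUVmax) is used here
    purely as an illustrative default within the 0.30-0.70 range discussed.
    """
    return suv >= fraction * suv.max()

# Toy 3x3 SUV image: background around 1, tumour peak of 10.
suv = np.array([[1.0, 1.2, 1.1],
                [1.3, 10.0, 6.0],
                [1.0, 5.0, 1.2]])
mask = fixed_threshold_mask(suv, 0.4)  # keeps voxels with SUV >= 4.0
print(int(mask.sum()))  # 3
```

The review's central caveat is visible even here: lowering the background (raising TBR) or raising the noise level changes which voxels clear the fixed cutoff, which is why adaptive methods re-derive the threshold from TBR and volume.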
Collapse
Affiliation(s)
- Mahbubunnabi Tamal
- Department of Biomedical Engineering, Imam Abdulrahman Bin Faisal University, PO Box 1982, Dammam, 31441, Saudi Arabia
| |
Collapse
|
22
|
Bao H, Sun X, Zhang Y, Pang B, Li H, Zhou L, Wu F, Cao D, Wang J, Turic B, Wang L. The artificial intelligence-assisted cytology diagnostic system in large-scale cervical cancer screening: A population-based cohort study of 0.7 million women. Cancer Med 2020; 9:6896-6906. [PMID: 32697872 PMCID: PMC7520355 DOI: 10.1002/cam4.3296] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2020] [Revised: 05/20/2020] [Accepted: 06/22/2020] [Indexed: 02/06/2023] Open
Abstract
Background Adequate cytology is limited by insufficient cytologists in large-scale cervical cancer screening. We aimed to develop an artificial intelligence (AI)-assisted cytology system for a cervical cancer screening program. Methods We conducted a prospective cohort study within a population-based cervical cancer screening program of 0.7 million women, using a validated AI-assisted cytology system. For comparison, cytologists examined all slides classified by AI as abnormal and a randomly selected 10% of normal slides. Each woman with slides classified as abnormal by either AI-assisted or manual reading was diagnosed by colposcopy and biopsy. The outcome was histologically confirmed cervical intraepithelial neoplasia grade 2 or worse (CIN2+). Results In total, we recruited 703 103 women, of whom 98 549 were independently screened by AI and manual reading. The overall agreement rate between AI and manual reading was 94.7% (95% confidence interval [CI], 94.5%-94.8%), and kappa was 0.92 (0.91-0.92). The detection rates of CIN2+ increased with the severity of cytology abnormality reported by both AI and manual reading (Ptrend < 0.001). Generalized estimating equations showed that detection of CIN2+ among women with ASC-H or HSIL by AI was significantly higher than in the corresponding groups classified by cytologists (for ASC-H: odds ratio [OR] = 1.22, 95% CI 1.11-1.34, P < .001; for HSIL: OR = 1.41, 1.28-1.55, P < .001). AI-assisted cytology was 5.8% (3.0%-8.6%) more sensitive for detection of CIN2+ than manual reading, with a slight reduction in specificity. Conclusions The AI-assisted cytology system could exclude most normal cytology slides and improve sensitivity, with clinically equivalent specificity, for detection of CIN2+ compared with manual cytology reading. Overall, the results support an AI-based cytology system for primary cervical cancer screening in large-scale populations.
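The agreement statistics reported above (observed agreement and Cohen's kappa) are computed from a square table cross-tabulating the two readers. A minimal sketch with a hypothetical 2x2 normal/abnormal table (the counts are invented for illustration, not the study's data):

```python
def cohens_kappa(table):
    """Cohen's kappa from a square agreement table.

    table[i][j] = number of slides rated category i by reader A
    and category j by reader B.
    """
    n = sum(sum(row) for row in table)
    k = len(table)
    po = sum(table[i][i] for i in range(k)) / n                    # observed agreement
    row = [sum(table[i]) for i in range(k)]                        # reader A marginals
    col = [sum(table[i][j] for i in range(k)) for j in range(k)]   # reader B marginals
    pe = sum(row[i] * col[i] for i in range(k)) / n ** 2           # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical counts: rows = AI (normal/abnormal), columns = cytologist.
table = [[90, 5],
         [5, 100]]
print(round(cohens_kappa(table), 3))  # 0.9
```

Kappa corrects the raw agreement rate for agreement expected by chance, which is why the study reports both figures (94.7% agreement, kappa 0.92).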
Collapse
Affiliation(s)
- Heling Bao
- Department of Maternal and Child Health, School of Public Health, Peking University, Beijing, China.,National Center for Chronic and Non-communicable Disease Control and Prevention, Chinese Center for Disease Control and Prevention, Beijing, China
| | - Xiaorong Sun
- Landing Cloud Medical Laboratory Co., Wuhan, China
| | - Yi Zhang
- Electronic and Information Engineering Department, Wenhua College, Wuhan, China
| | - Baochuan Pang
- Landing Artificial Intelligence Center for Pathological Diagnosis, Wuhan University, Wuhan, China
| | - Hua Li
- Landing Cloud Medical Laboratory Co., Wuhan, China
| | - Liang Zhou
- Landing Cloud Medical Laboratory Co., Wuhan, China
| | - Fengpin Wu
- Landing Cloud Medical Laboratory Co., Wuhan, China
| | - Dehua Cao
- Landing Cloud Medical Laboratory Co., Wuhan, China
| | - Jian Wang
- Landing Cloud Medical Laboratory Co., Wuhan, China
| | - Bojana Turic
- Landing Cloud Medical Laboratory Co., Wuhan, China
| | - Linhong Wang
- Landing Cloud Medical Laboratory Co., Wuhan, China
| |
Collapse
|
23
|
Shen C, Nguyen D, Zhou Z, Jiang SB, Dong B, Jia X. An introduction to deep learning in medical physics: advantages, potential, and challenges. Phys Med Biol 2020; 65:05TR01. [PMID: 31972556 PMCID: PMC7101509 DOI: 10.1088/1361-6560/ab6f51] [Citation(s) in RCA: 86] [Impact Index Per Article: 21.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/02/2023]
Abstract
As one of the most popular approaches in artificial intelligence, deep learning (DL) has attracted a lot of attention in the medical physics field over the past few years. The goals of this topical review article are twofold. First, we will provide an overview of the method to medical physics researchers interested in DL to help them start the endeavor. Second, we will give in-depth discussions on the DL technology to make researchers aware of its potential challenges and possible solutions. As such, we divide the article into two major parts. The first part introduces general concepts and principles of DL and summarizes major research resources, such as computational tools and databases. The second part discusses challenges faced by DL, present available methods to mitigate some of these challenges, as well as our recommendations.
Collapse
Affiliation(s)
- Chenyang Shen
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America. Innovative Technology Of Radiotherapy Computation and Hardware (iTORCH) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
| |
Collapse
|
24
|
Polidori A, Salvatore C, Castiglioni I, Cerasa A. The eye of nuclear medicine. Clin Transl Imaging 2019. [DOI: 10.1007/s40336-019-00340-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
25
|
Shen C, Gonzalez Y, Klages P, Qin N, Jung H, Chen L, Nguyen D, Jiang SB, Jia X. Intelligent inverse treatment planning via deep reinforcement learning, a proof-of-principle study in high dose-rate brachytherapy for cervical cancer. Phys Med Biol 2019; 64:115013. [PMID: 30978709 DOI: 10.1088/1361-6560/ab18bf] [Citation(s) in RCA: 65] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Inverse treatment planning in radiation therapy is formulated as solving optimization problems. The objective function and constraints consist of multiple terms designed for different clinical and practical considerations. Weighting factors for these terms are needed to define the optimization problem. While a treatment planning optimization engine can solve the optimization problem with given weights, adjusting the weights to yield a high-quality plan is typically performed by a human planner. Yet the weight-tuning task is labor-intensive and time-consuming, and it critically affects the final plan quality. An automatic weight-tuning approach is strongly desired. The procedure of adjusting weights to improve plan quality is essentially a decision-making problem. Motivated by the tremendous success of deep learning in decision-making with human-level intelligence, we propose a novel framework to adjust the weights in a human-like manner. This study used inverse treatment planning in high-dose-rate brachytherapy (HDRBT) for cervical cancer as an example. We developed a weight-tuning policy network (WTPN) that observes the dose-volume histograms of a plan and outputs an action to adjust organ weighting factors, similar to the behaviors of a human planner. We trained the WTPN via end-to-end deep reinforcement learning. Experience replay was performed with the epsilon-greedy algorithm. After training was completed, we applied the trained WTPN to guide treatment planning for five testing patient cases. The trained WTPN successfully learnt the treatment planning goals and was able to guide the weight-tuning process. On average, the quality score of plans generated under the WTPN's guidance was improved by ~8.5% compared to the initial plan with arbitrarily set weights, and by 10.7% compared to the plans generated by human planners.
To our knowledge, this is the first tool developed to adjust organ weights for the treatment planning optimization problem in a human-like fashion based on intelligence learned from a training process, in contrast to existing strategies based on pre-defined rules. The study demonstrated the feasibility of developing intelligent treatment planning approaches via deep reinforcement learning.
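The epsilon-greedy rule mentioned in the abstract balances exploration against exploitation during training: with probability epsilon the agent picks a random weight-adjustment action, otherwise the action with the highest estimated value. A minimal sketch (the three actions and their Q-values are hypothetical, not the paper's actual action space):

```python
import random

def epsilon_greedy(q_values, epsilon):
    """Select an action index: explore with probability `epsilon`,
    otherwise exploit the action with the highest estimated value."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))                       # explore
    return max(range(len(q_values)), key=q_values.__getitem__)       # exploit

# Hypothetical Q-values for (increase / decrease / keep) one organ weight.
q = [0.2, 0.7, 0.1]
print(epsilon_greedy(q, 0.0))  # epsilon = 0 is always greedy -> 1
```

During training, epsilon typically starts high and is annealed toward zero, so early episodes populate the experience-replay buffer with diverse weight adjustments while later episodes refine the learned policy.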
Collapse
Affiliation(s)
- Chenyang Shen
- Innovative Technology Of Radiotherapy Computation and Hardware (iTORCH) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75287, United States of America. Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75287, United States of America
| |
Collapse
|