1. Oliver J, Alapati R, Lee J, Bur A. Artificial Intelligence in Head and Neck Surgery. Otolaryngol Clin North Am 2024:S0030-6665(24)00070-7. PMID: 38910064. DOI: 10.1016/j.otc.2024.05.001.
Abstract
This article explores artificial intelligence's (AI's) role in otolaryngology for head and neck cancer diagnosis and management. It highlights AI's potential in pattern recognition for early cancer detection, prognostication, and treatment planning, primarily through image analysis using clinical, endoscopic, and histopathologic images. Radiomics is also discussed at length, as well as the many ways that radiologic image analysis can be utilized, including for diagnosis, lymph node metastasis prediction, and evaluation of treatment response. The study highlights AI's promise and limitations, underlining the need for clinician-data scientist collaboration to enhance head and neck cancer care.
Affiliation(s)
- Jamie Oliver, Department of Otolaryngology-Head and Neck Surgery, University of Kansas School of Medicine, 3901 Rainbow Boulevard M.S. 3010, Kansas City, KS, USA
- Rahul Alapati, Department of Otolaryngology-Head and Neck Surgery, University of Kansas School of Medicine, 3901 Rainbow Boulevard M.S. 3010, Kansas City, KS, USA
- Jason Lee, Department of Otolaryngology-Head and Neck Surgery, University of Kansas School of Medicine, 3901 Rainbow Boulevard M.S. 3010, Kansas City, KS, USA
- Andrés Bur, Department of Otolaryngology-Head and Neck Surgery, University of Kansas School of Medicine, 3901 Rainbow Boulevard M.S. 3010, Kansas City, KS, USA
2. Xiong X, Smith BJ, Graves SA, Graham MM, Buatti JM, Beichel RR. Head and Neck Cancer Segmentation in FDG PET Images: Performance Comparison of Convolutional Neural Networks and Vision Transformers. Tomography 2023;9:1933-1948. PMID: 37888743. PMCID: PMC10611182. DOI: 10.3390/tomography9050151.
Abstract
Convolutional neural networks (CNNs) have a proven track record in medical image segmentation. Recently, Vision Transformers were introduced and are gaining popularity for many computer vision applications, including object detection, classification, and segmentation. Machine learning algorithms such as CNNs or Transformers are subject to an inductive bias, which can have a significant impact on the performance of machine learning models. This is especially relevant for medical image segmentation, where limited training data are available and a model's inductive bias should help it generalize well. In this work, we quantitatively assess the performance of two CNN-based networks (U-Net and U-Net-CBAM) and three popular Transformer-based segmentation architectures (UNETR, TransBTS, and VT-UNet) for head and neck cancer (HNC) lesion segmentation in volumetric [F-18] fluorodeoxyglucose (FDG) PET scans. For performance assessment, 272 FDG PET-CT scans from a clinical trial (ACRIN 6685) were utilized, comprising a total of 650 lesions (272 primary and 378 secondary). The image data are highly diverse and representative of clinical use. Several error metrics were utilized for performance analysis. The achieved Dice coefficients ranged from 0.809 to 0.833, with the best performance achieved by the CNN-based approaches. U-Net-CBAM, which utilizes spatial and channel attention, showed several advantages for smaller lesions compared to the standard U-Net. Furthermore, our results provide some insight into the image features relevant to this specific segmentation application. In addition, the results highlight the need to utilize primary as well as secondary lesions to derive clinically relevant segmentation performance estimates and avoid biases.
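The Dice coefficient used to rank these networks can be computed directly from binary masks; a minimal NumPy sketch (the array shapes and toy masks below are illustrative, not taken from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * intersection / denom

# Toy 3D example: two overlapping cubes in a small volume
a = np.zeros((10, 10, 10)); a[2:6, 2:6, 2:6] = 1
b = np.zeros((10, 10, 10)); b[3:7, 3:7, 3:7] = 1
print(round(dice_coefficient(a, b), 3))  # -> 0.422
```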
Affiliation(s)
- Xiaofan Xiong, Department of Biomedical Engineering, The University of Iowa, Iowa City, IA 52242, USA
- Brian J. Smith, Department of Biostatistics, The University of Iowa, Iowa City, IA 52242, USA
- Stephen A. Graves, Department of Radiology, The University of Iowa, Iowa City, IA 52242, USA
- Michael M. Graham, Department of Radiology, The University of Iowa, Iowa City, IA 52242, USA
- John M. Buatti, Department of Radiation Oncology, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- Reinhard R. Beichel, Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA 52242, USA
3. He J, Zhang Y, Chung M, Wang M, Wang K, Ma Y, Ding X, Li Q, Pu Y. Whole-body tumor segmentation from PET/CT images using a two-stage cascaded neural network with camouflaged object detection mechanisms. Med Phys 2023;50:6151-6162. PMID: 37134002. DOI: 10.1002/mp.16438.
Abstract
BACKGROUND Whole-body metabolic tumor volume (MTVwb) is an independent prognostic factor for overall survival in lung cancer patients. Automatic segmentation methods have been proposed for MTV calculation. Nevertheless, most existing methods for patients with lung cancer only segment tumors in the thoracic region. PURPOSE In this paper, we present a Two-Stage cascaded neural network integrated with Camouflaged Object Detection mEchanisms (TS-Code-Net) for automatically segmenting tumors from whole-body PET/CT images. METHODS First, tumors are detected from the maximum intensity projection (MIP) images of PET/CT scans, and the tumors' approximate localizations along the z-axis are identified. Second, segmentations are performed on the PET/CT slices that contain tumors identified in the first step. Camouflaged object detection mechanisms are utilized to distinguish tumors from surrounding regions that have similar standardized uptake values (SUVs) and texture appearance. Finally, TS-Code-Net is trained by minimizing a total loss that incorporates a segmentation accuracy loss and a class imbalance loss. RESULTS The performance of TS-Code-Net is tested on a whole-body PET/CT image dataset of 480 non-small cell lung cancer (NSCLC) patients with five-fold cross-validation using image segmentation metrics. Our method achieves 0.70, 0.76, and 0.70 for Dice, sensitivity, and precision, respectively, demonstrating the superiority of TS-Code-Net over several existing methods for metastatic lung cancer segmentation from whole-body PET/CT images. CONCLUSIONS The proposed TS-Code-Net is effective for whole-body tumor segmentation of PET/CT images. Code for TS-Code-Net is available at: https://github.com/zyj19/TS-Code-Net.
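The first detection stage works on MIP images, which reduce a PET volume to a 2D image by taking the maximum along one axis; a sketch with a synthetic volume (the axis convention and thresholds here are illustrative assumptions, not the paper's):

```python
import numpy as np

# Synthetic PET volume indexed (z, y, x): uniform background plus one hot lesion
vol = np.full((50, 64, 64), 0.5)
vol[20:24, 30:34, 30:34] = 8.0  # lesion spanning slices z = 20..23

# MIP along y (axis=1) yields a (z, x) projection image
mip = vol.max(axis=1)

# Rows of the MIP with high intensity reveal the lesion's approximate z-extent,
# which the second stage would then use to pick slices for segmentation
lesion_z = np.where(mip.max(axis=1) > 2.5)[0]
print(lesion_z.min(), lesion_z.max())  # -> 20 23
```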
Affiliation(s)
- Jiangping He, Department of Electronic Engineering, Lanzhou University of Finance and Economics, Lanzhou, Gansu, China
- Yangjie Zhang, Department of Electronic Engineering, Lanzhou University of Finance and Economics, Lanzhou, Gansu, China
- Maggie Chung, Department of Radiology, University of California, San Francisco, California, USA
- Michael Wang, Department of Pathology, University of California, San Francisco, California, USA
- Kun Wang, Department of Electronic Engineering, Lanzhou University of Finance and Economics, Lanzhou, Gansu, China
- Yan Ma, Department of Electronic Engineering, Lanzhou University of Finance and Economics, Lanzhou, Gansu, China
- Xiaoyang Ding, Department of Electronic Engineering, Lanzhou University of Finance and Economics, Lanzhou, Gansu, China
- Qiang Li, Department of Electronic Engineering, Lanzhou University of Finance and Economics, Lanzhou, Gansu, China
- Yonglin Pu, Department of Radiology, University of Chicago, Chicago, Illinois, USA
4. Gifford R, Jhawar SR, Krening S. Deep Learning Architecture to Improve Edge Accuracy of Auto-Contouring for Head and Neck Radiotherapy. Diagnostics (Basel) 2023;13:2159. PMID: 37443553. DOI: 10.3390/diagnostics13132159.
Abstract
Deep learning (DL) methods have shown great promise in auto-segmentation problems. However, for head and neck cancer, we show that DL methods fail at the axial edges of the gross tumor volume (GTV) where the segmentation is dependent on information closer to the center of the tumor. These failures may decrease trust and usage of proposed auto-contouring systems. To increase performance at the axial edges, we propose the spatially adjusted recurrent convolution U-Net (SARC U-Net). Our method uses convolutional recurrent neural networks and spatial transformer networks to push information from salient regions out to the axial edges. On average, our model increased the Sørensen-Dice coefficient (DSC) at the axial edges of the GTV by 11% inferiorly and 19.3% superiorly over a baseline 2D U-Net, which has no inherent way to capture information between adjacent slices. Over all slices, our proposed architecture achieved a DSC of 0.613, whereas a 3D and 2D U-Net achieved a DSC of 0.586 and 0.540, respectively. SARC U-Net can increase accuracy at the axial edges of GTV contours while also increasing accuracy over baseline models, creating a more robust contour.
Affiliation(s)
- Ryan Gifford, Department of Integrated Systems Engineering, The Ohio State University, 1971 Neil Ave, Columbus, OH 43210, USA
- Sachin R Jhawar, Comprehensive Cancer Center, Department of Radiation Oncology, The Ohio State University, 410 W 10th Ave, Columbus, OH 43210, USA
- Samantha Krening, Department of Integrated Systems Engineering, The Ohio State University, 1971 Neil Ave, Columbus, OH 43210, USA
5. Avery EW, Joshi K, Mehra S, Mahajan A. Role of PET/CT in Oropharyngeal Cancers. Cancers (Basel) 2023;15:2651. PMID: 37174116. PMCID: PMC10177278. DOI: 10.3390/cancers15092651.
Abstract
Oropharyngeal squamous cell carcinoma (OPSCC) comprises cancers of the tonsils, tongue base, soft palate, and uvula. The staging of oropharyngeal cancers varies depending upon the presence or absence of human papillomavirus (HPV)-directed pathogenesis. The incidence of HPV-associated oropharyngeal cancer (HPV + OPSCC) is expected to continue to rise over the coming decades. PET/CT is a useful modality for the diagnosis, staging, and follow-up of patients with oropharyngeal cancers undergoing treatment and surveillance.
Affiliation(s)
- Emily W. Avery, Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT 06520, USA
- Kavita Joshi, Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT 06520, USA
- Saral Mehra, Department of Otolaryngology, Yale University School of Medicine, New Haven, CT 06520, USA
- Amit Mahajan, Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT 06520, USA
6. Ghezzo S, Mongardi S, Bezzi C, Samanes Gajate AM, Preza E, Gotuzzo I, Baldassi F, Jonghi-Lavarini L, Neri I, Russo T, Brembilla G, De Cobelli F, Scifo P, Mapelli P, Picchio M. External validation of a convolutional neural network for the automatic segmentation of intraprostatic tumor lesions on 68Ga-PSMA PET images. Front Med (Lausanne) 2023;10:1133269. PMID: 36910493. PMCID: PMC9995820. DOI: 10.3389/fmed.2023.1133269.
Abstract
Introduction State-of-the-art artificial intelligence (AI) models have the potential to become a "one-stop shop" to improve diagnosis and prognosis in several oncological settings. The external validation of AI models on independent cohorts is essential to evaluate their generalization ability, and hence their potential utility in clinical practice. In this study we tested, on a large, separate cohort, a recently proposed state-of-the-art convolutional neural network for the automatic segmentation of intraprostatic cancer lesions on PSMA PET images. Methods Eighty-five biopsy-proven prostate cancer patients who underwent 68Ga-PSMA PET for staging purposes were enrolled in this study. Images were acquired with either fully hybrid PET/MRI (N = 46) or PET/CT (N = 39); all participants showed at least one intraprostatic pathological finding on PET images, which was independently segmented by two nuclear medicine physicians. The trained model is available at https://gitlab.com/dejankostyszyn/prostate-gtv-segmentation, and data processing was done in agreement with the reference work. Results Compared with manual contouring, the AI model yielded a median Dice score of 0.74, showing moderately good performance. Results were robust to the modality used to acquire images (PET/CT or PET/MRI) and to the ground-truth labels (no significant difference between the model's performance compared with reader 1 or reader 2 manual contouring). Discussion In conclusion, this AI model could be used to automatically segment intraprostatic cancer lesions for research purposes, for instance to define the volume of interest for radiomics or deep learning analysis. However, more robust performance is needed before AI-based decision support technologies can be proposed for clinical practice.
Affiliation(s)
- Samuele Ghezzo, Department of Medicine and Surgery, Vita-Salute San Raffaele University, Milan, Italy; Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Sofia Mongardi, Department of Medicine and Surgery, Vita-Salute San Raffaele University, Milan, Italy
- Carolina Bezzi, Department of Medicine and Surgery, Vita-Salute San Raffaele University, Milan, Italy; Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Erik Preza, Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Irene Gotuzzo, School of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy
- Francesco Baldassi, School of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy
- Ilaria Neri, Department of Medicine and Surgery, Vita-Salute San Raffaele University, Milan, Italy; Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Tommaso Russo, Department of Medicine and Surgery, Vita-Salute San Raffaele University, Milan, Italy; Department of Radiology, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Giorgio Brembilla, Department of Medicine and Surgery, Vita-Salute San Raffaele University, Milan, Italy; Department of Radiology, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Francesco De Cobelli, Department of Medicine and Surgery, Vita-Salute San Raffaele University, Milan, Italy; Department of Radiology, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Paola Scifo, Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Paola Mapelli, Department of Medicine and Surgery, Vita-Salute San Raffaele University, Milan, Italy; Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Maria Picchio, Department of Medicine and Surgery, Vita-Salute San Raffaele University, Milan, Italy; Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Milan, Italy
7. Kuker RA, Lehmkuhl D, Kwon D, Zhao W, Lossos IS, Moskowitz CH, Alderuccio JP, Yang F. A Deep Learning-Aided Automated Method for Calculating Metabolic Tumor Volume in Diffuse Large B-Cell Lymphoma. Cancers (Basel) 2022;14:5221. PMID: 36358642. PMCID: PMC9653575. DOI: 10.3390/cancers14215221.
Abstract
Metabolic tumor volume (MTV) is a robust prognostic biomarker in diffuse large B-cell lymphoma (DLBCL). The available semiautomatic software for calculating MTV requires manual input, limiting its routine application in clinical research. Our objective was to develop a fully automated method (AM) for calculating MTV and to validate it by comparing its results with those from two nuclear medicine (NM) readers. The automated method designed for this study employed a deep convolutional neural network to segment, from the computed tomography (CT) scans, the normal physiologic structures that demonstrate intense avidity on positron emission tomography (PET) scans. The study cohort consisted of 100 patients with newly diagnosed DLBCL who were randomly selected from the Alliance/CALGB 50303 (NCT00118209) trial. We observed high concordance in MTV calculations between the AM and the readers: Pearson's correlation coefficients and intraclass correlations comparing reader 1 to the AM were 0.9814 (p < 0.0001) and 0.98 (p < 0.001; 95% CI = 0.96 to 0.99), respectively, and comparing reader 2 to the AM were 0.9818 (p < 0.0001) and 0.98 (p < 0.0001; 95% CI = 0.96 to 0.99), respectively. Bland-Altman plots showed only relatively small systematic errors between the proposed method and the readers for both MTV and maximum standardized uptake value (SUVmax). This approach may have the potential to facilitate the integration of PET-based biomarkers in clinical trials.
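The agreement statistics reported here (Pearson's correlation plus Bland-Altman bias and limits of agreement) can be sketched as follows; the MTV values below are synthetic stand-ins, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
reader = rng.uniform(50, 500, size=100)        # reader MTV (mL), synthetic
auto = reader + rng.normal(0, 5, size=100)     # automated method with small error

diff = auto - reader
bias = diff.mean()                             # systematic error
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd  # 95% limits of agreement

r = np.corrcoef(auto, reader)[0, 1]            # Pearson's correlation
print(r > 0.98, loa_low < bias < loa_high)     # -> True True
```

A Bland-Altman plot is then simply `diff` against the pairwise means `(auto + reader) / 2`, with horizontal lines at `bias`, `loa_low`, and `loa_high`.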
Affiliation(s)
- Russ A. Kuker, Department of Radiology, Division of Nuclear Medicine, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- David Lehmkuhl, Department of Radiology, Division of Nuclear Medicine, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Deukwoo Kwon, Department of Public Health Sciences, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Weizhao Zhao, Department of Biomedical Engineering, University of Miami, Coral Gables, FL 33146, USA
- Izidore S. Lossos, Sylvester Comprehensive Cancer Center, Department of Medicine, Division of Hematology, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Craig H. Moskowitz, Sylvester Comprehensive Cancer Center, Department of Medicine, Division of Hematology, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Juan Pablo Alderuccio, Sylvester Comprehensive Cancer Center, Department of Medicine, Division of Hematology, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Fei Yang, Sylvester Comprehensive Cancer Center, Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, FL 33136, USA
8. Xu J, Zeng B, Egger J, Wang C, Smedby Ö, Jiang X, Chen X. A review on AI-based medical image computing in head and neck surgery. Phys Med Biol 2022;67. DOI: 10.1088/1361-6560/ac840f.
Abstract
Head and neck surgery is a delicate surgical procedure involving a complex anatomical space, difficult operations, and high risk. Medical image computing (MIC) that enables accurate and reliable preoperative planning is often needed to reduce the operational difficulty of surgery and to improve patient survival. At present, artificial intelligence, especially deep learning, has become an intense focus of research in MIC. In this study, the application of deep learning-based MIC to head and neck surgery is reviewed. Relevant literature was retrieved from the Web of Science database from January 2015 to May 2022, and papers were selected for review from mainstream journals and conferences, such as IEEE Transactions on Medical Imaging, Medical Image Analysis, Physics in Medicine and Biology, Medical Physics, and MICCAI. Among them, 65 references concern automatic segmentation, 15 automatic landmark detection, and eight automatic registration. In the review, first, an overview of deep learning in MIC is presented. Then, the applications of deep learning methods are systematically summarized according to clinical needs and grouped into segmentation, landmark detection, and registration of head and neck medical images. For segmentation, the focus is on the automatic segmentation of high-risk organs, head and neck tumors, skull structures, and teeth, including an analysis of their advantages, differences, and shortcomings. For landmark detection, the focus is on landmark detection in cephalometric and craniomaxillofacial images, with an analysis of advantages and disadvantages. For registration, deep learning networks for multimodal image registration of the head and neck are presented. Finally, shortcomings and future development directions are systematically discussed. The study aims to serve as a reference and guide for researchers, engineers, and doctors engaged in medical image analysis of head and neck surgery.
9. Jin L, Chen Q, Shi A, Wang X, Ren R, Zheng A, Song P, Zhang Y, Wang N, Wang C, Wang N, Cheng X, Wang S, Ge H. Deep Learning for Automated Contouring of Gross Tumor Volumes in Esophageal Cancer. Front Oncol 2022;12:892171. PMID: 35924169. PMCID: PMC9339638. DOI: 10.3389/fonc.2022.892171.
Abstract
Purpose The aim of this study was to propose and evaluate a novel mixed three-dimensional (3D) V-Net and two-dimensional (2D) U-Net architecture (VUMix-Net) for fully automatic and accurate delineation of gross tumor volume (GTV) contours in esophageal cancer (EC). Methods We collected the computed tomography (CT) scans of 215 EC patients. 3D V-Net, 2D U-Net, and VUMix-Net were developed and applied simultaneously to delineate GTVs. The Dice similarity coefficient (DSC) and 95th-percentile Hausdorff distance (95HD) were used as quantitative metrics to evaluate the performance of the three models on ECs from different segments. The CT data of 20 patients were randomly selected as the ground truth (GT) masks, and the corresponding delineation results were generated by artificial intelligence (AI). Score differences between the two groups (GT versus AI) and the evaluation consistency were compared. Results Across all patients, there was a significant difference in the 2D DSCs from U-Net, V-Net, and VUMix-Net (p=0.01). In addition, VUMix-Net achieved better 3D-DSC and 95HD values. There was a significant difference among the 3D-DSC (mean ± SD) and 95HD values for upper-, middle-, and lower-segment EC (p<0.001), with the middle-segment values being the best. In middle-segment EC, VUMix-Net achieved the highest 2D-DSC values (p<0.001) and the lowest 95HD values (p=0.044). Conclusion The new model (VUMix-Net) showed certain advantages in delineating the GTVs of EC. Additionally, it can generate GTVs of EC that meet clinical requirements and match the quality of human-generated contours. The system performed best for middle-segment ECs.
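The 95HD metric used alongside DSC penalizes boundary disagreement while discounting the worst 5% of outlier distances; a brute-force sketch with SciPy (the toy 2D masks are illustrative, and a production implementation would use distance transforms rather than all-pairs distances):

```python
import numpy as np
from scipy.spatial.distance import cdist

def hd95(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between two binary masks."""
    pts_a = np.argwhere(mask_a)          # voxel coordinates of mask A
    pts_b = np.argwhere(mask_b)
    d = cdist(pts_a, pts_b)              # all pairwise distances
    a_to_b = d.min(axis=1)               # each A point to its nearest B point
    b_to_a = d.min(axis=0)               # each B point to its nearest A point
    return float(np.percentile(np.hstack([a_to_b, b_to_a]), 95))

a = np.zeros((20, 20)); a[5:10, 5:10] = 1
b = np.zeros((20, 20)); b[6:11, 6:11] = 1   # same square shifted by one voxel
print(hd95(a, b) <= np.sqrt(2))             # -> True (worst mismatch is a corner)
```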
Affiliation(s)
- Linzhi Jin, Department of Radiation Oncology, The Affiliated Cancer Hospital of Zhengzhou University & Henan Cancer Hospital, Zhengzhou, China; Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Qi Chen, Department of Research and Development, MedMind Technology Co., Ltd., Beijing, China
- Aiwei Shi, Department of Research and Development, MedMind Technology Co., Ltd., Beijing, China
- Xiaomin Wang, Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Runchuan Ren, Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Anping Zheng, Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Ping Song, Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Yaowen Zhang, Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Nan Wang, Department of Radiation Oncology, The Affiliated Cancer Hospital of Zhengzhou University & Henan Cancer Hospital, Zhengzhou, China
- Chenyu Wang, Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Nengchao Wang, Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Xinyu Cheng, Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Shaobin Wang, Department of Research and Development, MedMind Technology Co., Ltd., Beijing, China
- Hong Ge (corresponding author), Department of Radiation Oncology, The Affiliated Cancer Hospital of Zhengzhou University & Henan Cancer Hospital, Zhengzhou, China
10. High SUVs Have More Robust Repeatability in Patients with Metastatic Prostate Cancer: Results from a Prospective Test-Retest Cohort Imaged with 18F-DCFPyL. Mol Imaging 2022;2022:7056983. PMID: 35283693. PMCID: PMC8896803. DOI: 10.1155/2022/7056983.
Abstract
Objectives In patients with prostate cancer (PC) receiving prostate-specific membrane antigen- (PSMA-) targeted radioligand therapy (RLT), higher baseline standardized uptake values (SUVs) are linked to improved outcomes. Thus, readers deciding on RLT must have certainty about the repeatability of PSMA uptake metrics. We therefore aimed to evaluate the test-retest repeatability of lesion uptake in a large cohort of patients imaged with 18F-DCFPyL. Methods In this prospective, IRB-approved trial (NCT03793543), 21 patients with a history of histologically proven PC underwent two 18F-DCFPyL PET/CTs within 7 days (mean 3.7, range 1 to 7 days). Lesions in bone, lymph nodes (LN), and other organs were manually segmented on both scans, and uptake parameters were assessed: maximum SUV (SUVmax), mean SUV (SUVmean), PSMA tumor volume (PSMA-TV), and total lesion PSMA (TL-PSMA, defined as PSMA-TV × SUVmean). Repeatability was determined using Pearson's correlations, the within-subject coefficient of variation (wCOV), and Bland-Altman analysis. Results In total, 230 pairs of lesions (177 bone, 38 LN, and 15 other) were delineated, demonstrating a wide range of SUVmax (1.5-80.5) and SUVmean (1.4-24.8). Including all sites of suspected disease, SUVs had a strong interscan correlation (R2 ≥ 0.99), with high repeatability for SUVmean and SUVmax (wCOV, 7.3% and 12.1%, respectively). High SUVs showed significantly improved wCOV relative to lower SUVs (P < 0.0001), indicating that high SUVs are more repeatable relative to the magnitude of the underlying SUV. Repeatability for PSMA-TV and TL-PSMA, however, was low (wCOV ≥ 23.5%). Across all metrics for LN and bone lesions, interscan correlation was again strong (R2 ≥ 0.98). Moreover, LN-based SUVmean achieved the best wCOV (3.8%), significantly reduced compared with osseous lesions (7.8%, P < 0.0001). This was also noted for SUVmax (wCOV, LN 8.8% vs. bone 12.0%, P < 0.03). On a compartment-based level, wCOVs for volumetric features were ≥ 22.8%, with no significant differences between LN and bone lesions (PSMA-TV, P = 0.63; TL-PSMA, P = 0.9). Findings at the whole-tumor-burden level were corroborated by a hottest-lesion analysis investigating the SUVmax of the most intense lesion per patient (R2, 0.99; wCOV, 11.2%). Conclusion In this prospective test-retest setting, SUV parameters demonstrated high repeatability, particularly in LNs, whereas volumetric parameters demonstrated low repeatability. Further, the large number of lesions and the wide distribution of SUVs included in this analysis allowed for the demonstration of a dependence of repeatability on SUV, with higher SUVs having more robust repeatability.
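The wCOV for test-retest pairs is commonly computed from the within-pair variance normalized by the grand mean; a sketch under that common definition (the abstract does not print its exact formula, and the SUV values below are synthetic):

```python
import numpy as np

def wcov(scan1: np.ndarray, scan2: np.ndarray) -> float:
    """Within-subject coefficient of variation (%) for paired test-retest data."""
    within_sd = np.sqrt(np.mean((scan1 - scan2) ** 2 / 2.0))  # within-pair SD
    grand_mean = np.mean((scan1 + scan2) / 2.0)
    return 100.0 * within_sd / grand_mean

rng = np.random.default_rng(1)
suv1 = rng.uniform(2, 40, size=230)             # synthetic SUVmax, scan 1
suv2 = suv1 * rng.normal(1.0, 0.05, size=230)   # ~5% multiplicative retest noise
print(wcov(suv1, suv2) < 10)                    # -> True
```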
11. Marin T, Zhuo Y, Lahoud RM, Tian F, Ma X, Xing F, Moteabbed M, Liu X, Grogg K, Shusharina N, Woo J, Ma C, Chen YLE, El Fakhri G. Deep learning-based GTV contouring modeling inter- and intra-observer variability in sarcomas. Radiother Oncol 2022;167:269-276. PMID: 34808228. PMCID: PMC8934266. DOI: 10.1016/j.radonc.2021.09.034.
Abstract
BACKGROUND AND PURPOSE The delineation of the gross tumor volume (GTV) is a critical step in radiation therapy treatment planning. The delineation procedure is typically performed manually, which raises two major issues: cost and reproducibility. Delineation is a time-consuming process that is subject to inter- and intra-observer variability. While methods have been proposed to predict GTV contours, typical approaches ignore this variability and therefore fail to utilize the valuable confidence information offered by multiple contours. MATERIALS AND METHODS In this work we propose an automatic GTV contouring method for soft-tissue sarcomas from X-ray computed tomography (CT) images, using deep learning and integrating inter- and intra-observer variability into the learned model. Sixty-eight patients with soft tissue and bone sarcomas were considered in this evaluation; all underwent pre-operative CT imaging used to perform GTV delineation. Four radiation oncologists and radiologists each performed three contouring trials for all patients. We quantify variability by defining confidence levels based on the frequency of inclusion of a given voxel in the GTV and use a deep convolutional neural network to learn GTV confidence maps. RESULTS Results were compared to confidence maps from the four readers as well as ground-truth consensus contours established jointly by all readers. The resulting continuous Dice score between predicted and true confidence maps was 87%, and the Hausdorff distance was 14 mm. CONCLUSION The results demonstrate the ability of the proposed method to predict accurate contours while utilizing variability, and as such it can be used to improve clinical workflow.
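The voxel-wise confidence levels described here can be built by averaging the binary masks from repeated contouring trials; a minimal sketch of that aggregation plus one common continuous (soft) Dice generalization (both the aggregation rule and the soft-Dice form are assumptions reconstructed from the description, not the paper's exact definitions):

```python
import numpy as np

def confidence_map(contours: list) -> np.ndarray:
    """Fraction of contouring trials that included each voxel in the GTV."""
    return np.mean([c.astype(float) for c in contours], axis=0)

def soft_dice(p: np.ndarray, q: np.ndarray) -> float:
    """Continuous Dice between two confidence maps with values in [0, 1]."""
    return 2.0 * np.minimum(p, q).sum() / (p.sum() + q.sum())

# Three toy trials from one reader: slightly different square contours
trials = []
for shift in (0, 0, 1):
    m = np.zeros((16, 16)); m[4 + shift:10 + shift, 4:10] = 1
    trials.append(m)

conf = confidence_map(trials)
# Voxels inside all three contours get confidence 1.0; disputed rows get 2/3 or 1/3
print(conf.max(), round(soft_dice(conf, conf), 2))  # -> 1.0 1.0
```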
Collapse
Affiliation(s)
- Thibault Marin
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston MA, 02114, United States of America; Harvard Medical School, Boston MA, 02115, United States of America
| | - Yue Zhuo
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston MA, 02114, United States of America; Harvard Medical School, Boston MA, 02115, United States of America
| | - Rita Maria Lahoud
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston MA, 02114, United States of America; Harvard Medical School, Boston MA, 02115, United States of America
| | - Fei Tian
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston MA, 02114, United States of America; Harvard Medical School, Boston MA, 02115, United States of America
| | - Xiaoyue Ma
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston MA, 02114, United States of America; Harvard Medical School, Boston MA, 02115, United States of America
| | - Fangxu Xing
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston MA, 02114, United States of America; Harvard Medical School, Boston MA, 02115, United States of America
| | - Maryam Moteabbed
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston MA, 02114, United States of America; Department of Radiation Oncology, Massachusetts General Hospital, Boston MA, 02114, United States of America; Harvard Medical School, Boston MA, 02115, United States of America
| | - Xiaofeng Liu
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston MA, 02114, United States of America; Harvard Medical School, Boston MA, 02115, United States of America
| | - Kira Grogg
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston MA, 02114, United States of America; Harvard Medical School, Boston MA, 02115, United States of America
| | - Nadya Shusharina
- Department of Radiation Oncology, Massachusetts General Hospital, Boston MA, 02114, United States of America; Harvard Medical School, Boston MA, 02115, United States of America
| | - Jonghye Woo
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston MA, 02114, United States of America; Harvard Medical School, Boston MA, 02115, United States of America
| | - Chao Ma
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston MA, 02114, United States of America; Harvard Medical School, Boston MA, 02115, United States of America
| | - Yen-Lin E. Chen
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston MA, 02114, United States of America; Department of Radiation Oncology, Massachusetts General Hospital, Boston MA, 02114, United States of America; Harvard Medical School, Boston MA, 02115, United States of America
| | - Georges El Fakhri
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston MA, 02114, United States of America; Harvard Medical School, Boston MA, 02115, United States of America; Corresponding author.
| |
Collapse
|
12
|
Tao G, Li H, Huang J, Han C, Chen J, Ruan G, Huang W, Hu Y, Dan T, Zhang B, He S, Liu L, Cai H. SeqSeg: A Sequential Method to Achieve Nasopharyngeal Carcinoma Segmentation Free from Background Dominance. Med Image Anal 2022; 78:102381. [DOI: 10.1016/j.media.2022.102381] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2021] [Revised: 01/18/2022] [Accepted: 01/31/2022] [Indexed: 11/30/2022]
|
13
|
Agarwal P, Yadav A, Mathur P, Pal V, Chakrabarty A. BID-Net: An Automated System for Bone Invasion Detection Occurring at Stage T4 in Oral Squamous Carcinoma Using Deep Learning. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:4357088. [PMID: 35140773 PMCID: PMC8818426 DOI: 10.1155/2022/4357088] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/14/2021] [Revised: 11/26/2021] [Accepted: 01/03/2022] [Indexed: 11/22/2022]
Abstract
Detection of the presence or absence of bone invasion by the tumor in oral squamous cell carcinoma (OSCC) patients is very significant for treatment planning and surgical resection. For bone invasion detection, CT scan imaging is the preferred choice of radiologists because of its high sensitivity and specificity. In the present work, a deep-learning-based model, BID-Net, is proposed to automate bone invasion detection. BID-Net performs binary classification of CT scan images into those with and those without bone invasion. The proposed BID-Net model achieved an accuracy of 93.62%. The model was also compared with six transfer learning models (VGG16, VGG19, ResNet-50, MobileNetV2, DenseNet-121, and ResNet-101) and outperformed them all. As no previous studies exist on bone invasion detection using deep learning models, the results of the proposed model were validated by expert practicing radiologists at S.M.S. Hospital, Jaipur, India.
Collapse
Affiliation(s)
| | | | | | - Vipin Pal
- Department of Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, India
| | - Amitabha Chakrabarty
- Department of Computer Science and Engineering, Brac University, Dhaka, Bangladesh
| |
Collapse
|
14
|
Schouten JPE, Noteboom S, Martens RM, Mes SW, Leemans CR, de Graaf P, Steenwijk MD. Automatic segmentation of head and neck primary tumors on MRI using a multi-view CNN. Cancer Imaging 2022; 22:8. [PMID: 35033188 PMCID: PMC8761340 DOI: 10.1186/s40644-022-00445-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2021] [Accepted: 12/31/2021] [Indexed: 12/24/2022] Open
Abstract
Background Accurate segmentation of head and neck squamous cell cancer (HNSCC) is important for radiotherapy treatment planning. Manual segmentation of these tumors is time-consuming and vulnerable to inconsistencies between experts, especially in the complex head and neck region. The aim of this study is to introduce and evaluate an automatic segmentation pipeline for HNSCC using a multi-view CNN (MV-CNN). Methods The dataset included 220 patients with primary HNSCC and availability of T1-weighted, STIR and, optionally, contrast-enhanced T1-weighted MR images, together with a manual reference segmentation of the primary tumor by an expert. A T1-weighted standard space of the head and neck region was created, to which all MRI sequences were registered. An MV-CNN was trained with these three MRI sequences and evaluated in a cross-validation in terms of volumetric and spatial performance, measured by intra-class correlation (ICC) and Dice similarity score (DSC), respectively. Results The average manually segmented primary tumor volume was 11.8±6.70 cm3 with a median [IQR] of 13.9 [3.22-15.9] cm3. The tumor volume measured by the MV-CNN was 22.8±21.1 cm3 with a median [IQR] of 16.0 [8.24-31.1] cm3. Compared to the manual segmentations, the MV-CNN scored an average ICC of 0.64±0.06 and a DSC of 0.49±0.19. Segmentation performance improved with increasing primary tumor volume: the smallest tumor volume group (<3 cm3) scored a DSC of 0.26±0.16 and the largest group (>15 cm3) a DSC of 0.63±0.11 (p<0.001). The automated segmentation tended to overestimate compared to the manual reference, both around the actual primary tumor and in false-positively classified healthy structures and pathologically enlarged lymph nodes. Conclusion An automatic segmentation pipeline was evaluated for primary HNSCC on MRI. The MV-CNN produced reasonable segmentation results, especially on large tumors, but overestimation decreased overall performance. Further research should focus on decreasing false positives to make the method valuable in treatment planning.
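The overestimation effect this abstract reports (false positives lowering DSC even when the reference tumor is fully covered) is easy to see in the Dice metric itself. The snippet below is a minimal illustrative sketch with made-up masks, not the study's evaluation code.

```python
import numpy as np

def dice(pred, ref):
    """Dice similarity coefficient between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# An overestimating prediction that fully covers the reference tumor
# still scores poorly because the extra voxels inflate the denominator.
ref = np.zeros((8, 8), dtype=bool); ref[2:4, 2:4] = True    # 4 voxels
over = np.zeros((8, 8), dtype=bool); over[1:5, 1:5] = True  # 16 voxels
print(dice(ref, ref))   # 1.0
print(dice(over, ref))  # 2*4/(16+4) = 0.4
```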
Collapse
Affiliation(s)
- Jens P E Schouten
- Department of Radiology and Nuclear Medicine, Cancer Center Amsterdam, Amsterdam UMC, Vrije Universiteit Amsterdam, De Boelelaan 1117, Amsterdam, The Netherlands
| | - Samantha Noteboom
- Department of Anatomy and Neurosciences, Amsterdam UMC, Vrije Universiteit Amsterdam, De Boelelaan 1117, Amsterdam, The Netherlands
| | - Roland M Martens
- Department of Radiology and Nuclear Medicine, Cancer Center Amsterdam, Amsterdam UMC, Vrije Universiteit Amsterdam, De Boelelaan 1117, Amsterdam, The Netherlands
| | - Steven W Mes
- Department of Otolaryngology - Head and Neck Surgery, Amsterdam UMC, Vrije Universiteit Amsterdam, De Boelelaan 1117, Amsterdam, The Netherlands
| | - C René Leemans
- Department of Otolaryngology - Head and Neck Surgery, Amsterdam UMC, Vrije Universiteit Amsterdam, De Boelelaan 1117, Amsterdam, The Netherlands
| | - Pim de Graaf
- Department of Radiology and Nuclear Medicine, Cancer Center Amsterdam, Amsterdam UMC, Vrije Universiteit Amsterdam, De Boelelaan 1117, Amsterdam, The Netherlands
| | - Martijn D Steenwijk
- Department of Anatomy and Neurosciences, Amsterdam UMC, Vrije Universiteit Amsterdam, De Boelelaan 1117, Amsterdam, The Netherlands. .,, De Boelelaan 1108, 1081 HZ, Amsterdam, The Netherlands.
| |
Collapse
|
15
|
Wahid KA, Ahmed S, He R, van Dijk LV, Teuwen J, McDonald BA, Salama V, Mohamed AS, Salzillo T, Dede C, Taku N, Lai SY, Fuller CD, Naser MA. Evaluation of deep learning-based multiparametric MRI oropharyngeal primary tumor auto-segmentation and investigation of input channel effects: Results from a prospective imaging registry. Clin Transl Radiat Oncol 2022; 32:6-14. [PMID: 34765748 PMCID: PMC8570930 DOI: 10.1016/j.ctro.2021.10.003] [Citation(s) in RCA: 25] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2021] [Revised: 09/24/2021] [Accepted: 10/10/2021] [Indexed: 12/09/2022] Open
Abstract
BACKGROUND/PURPOSE Oropharyngeal cancer (OPC) primary gross tumor volume (GTVp) segmentation is crucial for radiotherapy. Multiparametric MRI (mpMRI) is increasingly used for OPC adaptive radiotherapy but relies on manual segmentation. Therefore, we constructed mpMRI deep learning (DL) OPC GTVp auto-segmentation models and determined the impact of input channels on segmentation performance. MATERIALS/METHODS GTVp ground truth segmentations were manually generated for 30 OPC patients from a clinical trial. We evaluated five mpMRI input channels (T2, T1, ADC, Ktrans, Ve). 3D Residual U-net models were developed and assessed using leave-one-out cross-validation. A baseline T2 model was compared to mpMRI models (T2 + T1, T2 + ADC, T2 + Ktrans, T2 + Ve, all five channels [ALL]) primarily using the Dice similarity coefficient (DSC). False-negative DSC (FND), false-positive DSC, sensitivity, positive predictive value, surface DSC, Hausdorff distance (HD), 95% HD, and mean surface distance were also assessed. For the best model, ground truth and DL-generated segmentations were compared through a blinded Turing test using three physician observers. RESULTS Models yielded mean DSCs from 0.71 ± 0.12 (ALL) to 0.73 ± 0.12 (T2 + T1). Compared to the T2 model, performance was significantly improved for FND, sensitivity, surface DSC, HD, and 95% HD for the T2 + T1 model (p < 0.05) and for FND for the T2 + Ve and ALL models (p < 0.05). No model demonstrated significant correlations between tumor size and DSC (p > 0.05). Most models demonstrated significant correlations between tumor size and HD or Surface DSC (p < 0.05), except those that included ADC or Ve as input channels (p > 0.05). On average, there were no significant differences between ground truth and DL-generated segmentations for all observers (p > 0.05). CONCLUSION DL using mpMRI provides reasonably accurate segmentations of OPC GTVp that may be comparable to ground truth segmentations generated by clinical experts. 
Incorporating additional mpMRI channels may improve FND, sensitivity, surface DSC, HD, and 95% HD performance, and may increase model robustness to tumor size.
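The boundary-distance metrics this abstract reports (HD and 95% HD) can be sketched with a simplified 95th-percentile symmetric Hausdorff distance over mask surfaces. This is an illustrative implementation assuming SciPy is available, not the evaluation code used in the study; libraries such as MedPy provide more robust versions of these metrics.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    """Simplified 95th-percentile symmetric Hausdorff distance
    between two non-empty binary masks, in physical units."""
    def surface(m):
        # Surface voxels: in the mask but not in its erosion.
        return m & ~binary_erosion(m)
    sa, sb = surface(a), surface(b)
    # Distance from each surface voxel of one mask to the other surface.
    da = distance_transform_edt(~sb, sampling=spacing)[sa]
    db = distance_transform_edt(~sa, sampling=spacing)[sb]
    return np.percentile(np.concatenate([da, db]), 95)
```

Identical masks give 0.0; shifting one mask by a voxel yields a positive distance scaled by the voxel spacing.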
Collapse
Affiliation(s)
- Kareem A. Wahid
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX USA
| | - Sara Ahmed
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX USA
| | - Renjie He
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX USA
| | - Lisanne V. van Dijk
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX USA
| | - Jonas Teuwen
- Department of Medical Imaging, Radboud University Medical Centre, Nijmegen, The Netherlands
| | - Brigid A. McDonald
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX USA
| | - Vivian Salama
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX USA
| | - Abdallah S.R. Mohamed
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX USA
| | - Travis Salzillo
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX USA
| | - Cem Dede
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX USA
| | - Nicolette Taku
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX USA
| | - Stephen Y. Lai
- Department of Head and Neck Surgery, University of Texas MD Anderson Cancer Center, Houston, TX USA
| | - Clifton D. Fuller
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX USA
| | - Mohamed A. Naser
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX USA
| |
Collapse
|
16
|
Oreiller V, Andrearczyk V, Jreige M, Boughdad S, Elhalawani H, Castelli J, Vallières M, Zhu S, Xie J, Peng Y, Iantsen A, Hatt M, Yuan Y, Ma J, Yang X, Rao C, Pai S, Ghimire K, Feng X, Naser MA, Fuller CD, Yousefirizi F, Rahmim A, Chen H, Wang L, Prior JO, Depeursinge A. Head and neck tumor segmentation in PET/CT: The HECKTOR challenge. Med Image Anal 2021; 77:102336. [PMID: 35016077 DOI: 10.1016/j.media.2021.102336] [Citation(s) in RCA: 48] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2021] [Revised: 10/13/2021] [Accepted: 12/14/2021] [Indexed: 12/23/2022]
Abstract
This paper presents the post-analysis of the first edition of the HEad and neCK TumOR (HECKTOR) challenge. The challenge was held as a satellite event of the 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2020 and was the first of its kind to focus on lesion segmentation in combined FDG-PET and CT image modalities. The challenge task was the automatic segmentation of the Gross Tumor Volume (GTV) of Head and Neck (H&N) oropharyngeal primary tumors in FDG-PET/CT images. To this end, participants were given a training set of 201 cases from four different centers, and their methods were tested on a held-out set of 53 cases from a fifth center. Methods were ranked according to the Dice score coefficient (DSC) averaged across all test cases. An additional inter-observer agreement study was organized to assess the difficulty of the task from a human perspective. Sixty-four teams registered for the challenge, 10 of which provided a paper detailing their approach. The best method obtained an average DSC of 0.7591, a large improvement over both the proposed baseline method and the inter-observer agreement, which were associated with DSCs of 0.6610 and 0.61, respectively. The automatic methods successfully leveraged the wealth of metabolic and structural information in combined PET and CT modalities, significantly outperforming the human inter-observer agreement level, semi-automatic thresholding based on PET images, and other single-modality methods. This promising performance is a step toward large-scale radiomics studies in H&N cancer, obviating the need for error-prone and time-consuming manual delineation of GTVs.
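The challenge's ranking scheme, averaging per-case DSC over the held-out test set and sorting teams, can be sketched in a few lines. Team names and scores below are invented for the example; they are not the actual HECKTOR results.

```python
from statistics import mean

# Per-case Dice scores on the held-out test set (hypothetical values).
results = {
    "team_a": [0.82, 0.74, 0.71],
    "team_b": [0.69, 0.80, 0.77],
    "baseline": [0.66, 0.65, 0.67],
}

# Rank teams by mean DSC across all test cases, best first.
ranking = sorted(results, key=lambda t: mean(results[t]), reverse=True)
print(ranking)  # ['team_a', 'team_b', 'baseline']
```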
Collapse
Affiliation(s)
- Valentin Oreiller
- Institute of Information Systems, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland; Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland.
| | - Vincent Andrearczyk
- Institute of Information Systems, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland
| | - Mario Jreige
- Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
| | - Sarah Boughdad
- Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
| | - Hesham Elhalawani
- Department of Radiation Oncology, Brigham and Women's Hospital and Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
| | - Joel Castelli
- Radiotherapy Department, Cancer Institute Eugène Marquis, Rennes, France
| | - Martin Vallières
- Department of Computer Science, Université de Sherbrooke, Sherbrooke, Québec, Canada
| | - Simeng Zhu
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, MI, USA
| | - Juanying Xie
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, PR China
| | - Ying Peng
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, PR China
| | - Andrei Iantsen
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
| | - Mathieu Hatt
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
| | - Yading Yuan
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Jun Ma
- Department of Mathematics, Nanjing University of Science and Technology, Jiangsu, China
| | - Xiaoping Yang
- Department of Mathematics, Nanjing University, Jiangsu, China
| | - Chinmay Rao
- Department of Radiation Oncology (Maastro), GROW School for Oncology, Maastricht University Medical Centre+, Maastricht, The Netherlands
| | - Suraj Pai
- Department of Radiation Oncology (Maastro), GROW School for Oncology, Maastricht University Medical Centre+, Maastricht, The Netherlands
| | | | - Xue Feng
- Carina Medical, Lexington, KY, 40513, USA; Department of Biomedical Engineering, University of Virginia, Charlottesville VA 22903, USA
| | - Mohamed A Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030, USA
| | - Clifton D Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030, USA
| | - Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver BC, Canada
| | - Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver BC, Canada
| | - Huai Chen
- Department of Automation, Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200240, People's Republic of China
| | - Lisheng Wang
- Department of Automation, Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200240, People's Republic of China
| | - John O Prior
- Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
| | - Adrien Depeursinge
- Institute of Information Systems, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland; Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
| |
Collapse
|
17
|
Wendler T, van Leeuwen FWB, Navab N, van Oosterom MN. How molecular imaging will enable robotic precision surgery : The role of artificial intelligence, augmented reality, and navigation. Eur J Nucl Med Mol Imaging 2021; 48:4201-4224. [PMID: 34185136 PMCID: PMC8566413 DOI: 10.1007/s00259-021-05445-6] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2021] [Accepted: 06/01/2021] [Indexed: 02/08/2023]
Abstract
Molecular imaging is one of the pillars of precision surgery. Its applications range from early diagnostics to therapy planning, execution, and the accurate assessment of outcomes. In particular, molecular imaging solutions are in high demand in minimally invasive surgical strategies, such as the rapidly growing field of robotic surgery. This review aims to connect the molecular imaging and nuclear medicine community with the rapidly expanding armory of surgical medical devices. Such devices entail technologies ranging from artificial intelligence and computer-aided visualization (software) to innovative molecular imaging modalities and surgical navigation (hardware). We discuss technologies based on their role at different steps of the surgical workflow, from surgical decision-making and planning, through target localization and excision guidance, to (back table) surgical verification. This provides a glimpse of how innovations from these technology fields can realize an exciting future for the molecular imaging and surgery communities.
Collapse
Affiliation(s)
- Thomas Wendler
- Chair for Computer Aided Medical Procedures and Augmented Reality, Technische Universität München, Boltzmannstr. 3, 85748 Garching bei München, Germany
| | - Fijs W. B. van Leeuwen
- Department of Radiology, Interventional Molecular Imaging Laboratory, Leiden University Medical Center, Leiden, The Netherlands
- Department of Urology, The Netherlands Cancer Institute - Antonie van Leeuwenhoek Hospital, Amsterdam, The Netherlands
- Orsi Academy, Melle, Belgium
| | - Nassir Navab
- Chair for Computer Aided Medical Procedures and Augmented Reality, Technische Universität München, Boltzmannstr. 3, 85748 Garching bei München, Germany
- Chair for Computer Aided Medical Procedures Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD USA
| | - Matthias N. van Oosterom
- Department of Radiology, Interventional Molecular Imaging Laboratory, Leiden University Medical Center, Leiden, The Netherlands
- Department of Urology, The Netherlands Cancer Institute - Antonie van Leeuwenhoek Hospital, Amsterdam, The Netherlands
| |
Collapse
|
18
|
Gharavi SMH, Faghihimehr A. Clinical Application of Artificial Intelligence in PET Imaging of Head and Neck Cancer. PET Clin 2021; 17:65-76. [PMID: 34809871 DOI: 10.1016/j.cpet.2021.09.004] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
Abstract
Applications of "artificial intelligence" (AI) have been exponentially expanding in health care. Readily accessible archives of enormous digital data in medical imaging have made radiology a leader in exploring and taking advantage of this technology. AI-assisted radiology has paved the way toward another level of precision in medicine. In this article, the authors aim to review current AI applications in PET imaging of head and neck cancers, beginning with radiomics and followed by deep learning in each section.
Collapse
Affiliation(s)
- Seyed Mohammad H Gharavi
- Virginia Commonwealth University, VCU School of Medicine, Department of Radiology, West Hospital, 1200 East Broad Street, North Wing, Room 2-013, Box 980470, Richmond, VA 23298-0470, USA.
| | - Armaghan Faghihimehr
- Virginia Commonwealth University, VCU School of Medicine, Department of Radiology, West Hospital, 1200 East Broad Street, North Wing, Room 2-013, Box 980470, Richmond, VA 23298-0470, USA
| |
Collapse
|
19
|
Yousefirizi F, Jha AK, Brosch-Lenz J, Saboury B, Rahmim A. Toward High-Throughput Artificial Intelligence-Based Segmentation in Oncological PET Imaging. PET Clin 2021; 16:577-596. [PMID: 34537131 DOI: 10.1016/j.cpet.2021.06.001] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
Artificial intelligence (AI) techniques for image-based segmentation have garnered much attention in recent years. Convolutional neural networks have shown impressive results and potential toward fully automated segmentation in medical imaging, and particularly PET imaging. To cope with the limited access to the annotated data needed by supervised AI methods, given that manual delineations are tedious and error-prone, semi-supervised and unsupervised AI techniques have also been explored for segmentation of tumors and normal organs in single- and bi-modality scans. This work reviews existing AI techniques for segmentation tasks and the evaluation criteria for translating AI-based segmentation efforts toward routine adoption in clinical workflows.
Collapse
Affiliation(s)
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada.
| | - Abhinav K Jha
- Department of Biomedical Engineering, Washington University in St. Louis, St Louis, MO 63130, USA; Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO 63110, USA
| | - Julia Brosch-Lenz
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada
| | - Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Bethesda, MD 20892, USA; Department of Computer Science and Electrical Engineering, University of Maryland Baltimore County, Baltimore, MD, USA; Department of Radiology, Hospital of the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA
| | - Arman Rahmim
- Department of Radiology, University of British Columbia, BC Cancer, BC Cancer Research Institute, 675 West 10th Avenue, Office 6-112, Vancouver, British Columbia V5Z 1L3, Canada; Department of Physics, University of British Columbia, Senior Scientist & Provincial Medical Imaging Physicist, BC Cancer, BC Cancer Research Institute, 675 West 10th Avenue, Office 6-112, Vancouver, British Columbia V5Z 1L3, Canada
| |
Collapse
|
20
|
López F, Mäkitie A, de Bree R, Franchi A, de Graaf P, Hernández-Prera JC, Strojan P, Zidar N, Strojan Fležar M, Rodrigo JP, Rinaldo A, Centeno BA, Ferlito A. Qualitative and Quantitative Diagnosis in Head and Neck Cancer. Diagnostics (Basel) 2021; 11:diagnostics11091526. [PMID: 34573868 PMCID: PMC8466857 DOI: 10.3390/diagnostics11091526] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2021] [Revised: 08/14/2021] [Accepted: 08/20/2021] [Indexed: 12/11/2022] Open
Abstract
Diagnosis is the art of determining the nature of a disease, and an accurate diagnosis is the cornerstone on which rational treatment should be built. Within the management workflow for head and neck tumours, there are different types of diagnosis. The purpose of this work is to point out the differences between, and the aims of, the different types of diagnoses, and to highlight their importance in the management of patients with head and neck tumours. Qualitative diagnosis is performed by a pathologist; it is essential in determining management and can provide guidance on prognosis. The evolution of immunohistochemistry and molecular biology techniques has made it possible to obtain more precise diagnoses and to identify prognostic markers and precision factors. Quantitative diagnosis is made by the radiologist and consists of identifying a mass lesion and estimating tumour volume and extent using imaging techniques such as CT, MRI, and PET. The distinction between the two types of diagnosis is clear, as the methodology differs. The accurate establishment of both diagnoses plays an essential role in treatment planning. Getting the right diagnosis is a key aspect of health care: it provides an explanation of a patient's health problem and informs subsequent decisions. Deep learning and radiomics approaches hold promise for improving diagnosis.
Collapse
Affiliation(s)
- Fernando López
- Department of Otorhinolaryngology, Head and Neck Surgery, Hospital Universitario Central de Asturias, 33011 Oviedo, Spain;
- Instituto de Investigación Sanitaria del Principado de Asturias (ISPA), Instituto Universitario de Oncología del Principado de Asturias (IUOPA), University of Oviedo CIBERONC-ISCIII, 33011 Oviedo, Spain
- Correspondence:
| | - Antti Mäkitie
- Department of Otorhinolaryngology–Head and Neck Surgery, University of Helsinki and Helsinki University Hospital, 00029 Helsinki, Finland;
| | - Remco de Bree
- Department of Head and Neck Surgical Oncology, University Medical Center Utrecht, 3584CX Utrecht, The Netherlands;
| | - Alessandro Franchi
- Department of Translational Research, School of Medicine, University of Pisa, 56124 Pisa, Italy;
| | - Pim de Graaf
- Cancer Center Amsterdam, Department of Radiology and Nuclear Medicine, Amsterdam UMC, Vrije Universiteit Amsterdam, 1081 Amsterdam, The Netherlands;
| | | | - Primoz Strojan
- Department of Radiation Oncology, Institute of Oncology, 1000 Ljubljana, Slovenia;
| | - Nina Zidar
- Department of Head and Neck Pathology, Faculty of Medicine, Institute of Pathology, University of Ljubljana, 1000 Ljubljana, Slovenia;
| | - Margareta Strojan Fležar
- Department of Cytopathology, Faculty of Medicine, Institute of Pathology, University of Ljubljana, 1000 Ljubljana, Slovenia;
| | - Juan P. Rodrigo
- Department of Otorhinolaryngology, Head and Neck Surgery, Hospital Universitario Central de Asturias, 33011 Oviedo, Spain;
- Instituto de Investigación Sanitaria del Principado de Asturias (ISPA), Instituto Universitario de Oncología del Principado de Asturias (IUOPA), University of Oviedo CIBERONC-ISCIII, 33011 Oviedo, Spain
| | | | - Barbara A. Centeno
- Department of Pathology, Moffitt Cancer Center, Tampa, FL 33612, USA; (J.C.H.-P.); (B.A.C.)
| | - Alfio Ferlito
- Coordinator of the International Head and Neck Scientific Group, 35100 Padua, Italy;
| |
Collapse
|
21
|
Huang D, Bai H, Wang L, Hou Y, Li L, Xia Y, Yan Z, Chen W, Chang L, Li W. The Application and Development of Deep Learning in Radiotherapy: A Systematic Review. Technol Cancer Res Treat 2021; 20:15330338211016386. [PMID: 34142614 PMCID: PMC8216350 DOI: 10.1177/15330338211016386] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023] Open
Abstract
With the massive use of computers, the growth and explosion of data have greatly promoted the development of artificial intelligence (AI). The rise of deep learning (DL) algorithms, such as convolutional neural networks (CNNs), has provided radiation oncologists with many promising tools that can simplify the complex radiotherapy process in the clinical work of radiation oncology, improve the accuracy and objectivity of diagnosis, and reduce workload, thus enabling clinicians to spend more time on advanced decision-making tasks. As DL moves closer to clinical practice, radiation oncologists will need to become more familiar with its principles to properly evaluate and use this powerful tool. In this paper, we explain the development and basic concepts of AI and discuss its application in radiation oncology based on the different task categories of DL algorithms. This work clarifies the possibilities for further development of DL in radiation oncology.
Collapse
Affiliation(s)
- Danju Huang
- Department of Radiation Oncology, 531840The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
| | - Han Bai
- Department of Radiation Oncology, 531840The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
| | - Li Wang
- Department of Radiation Oncology, 531840The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
| | - Yu Hou
- Department of Radiation Oncology, 531840The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
| | - Lan Li
- Department of Radiation Oncology, 531840The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
| | - Yaoxiong Xia
- Department of Radiation Oncology, 531840The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
| | - Zhirui Yan
- Department of Radiation Oncology, 531840The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
| | - Wenrui Chen
- Department of Radiation Oncology, 531840The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
| | - Li Chang
- Department of Radiation Oncology, 531840The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
| | - Wenhui Li
- Department of Radiation Oncology, 531840The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
| |
Collapse
|
22
|
Sadaghiani MS, Rowe SP, Sheikhbahaei S. Applications of artificial intelligence in oncologic 18F-FDG PET/CT imaging: a systematic review. ANNALS OF TRANSLATIONAL MEDICINE 2021; 9:823. [PMID: 34268436 PMCID: PMC8246218 DOI: 10.21037/atm-20-6162] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/28/2020] [Accepted: 03/25/2021] [Indexed: 12/16/2022]
Abstract
Artificial intelligence (AI) is a growing field of research that is emerging as a promising adjunct to assist physicians in the detection and management of patients with cancer. In this study we discuss possible applications of AI in 18F-FDG PET imaging based on the published studies. A systematic literature review was performed in PubMed in early August 2020 to find the relevant studies. A total of 65 studies were screened against the inclusion criteria, which covered studies that developed an AI model based on 18F-FDG PET data in cancer to diagnose, differentiate, delineate, stage, assess response to therapy, determine prognosis, or improve image quality. Thirty-two studies met the inclusion criteria and are discussed in this review. The majority of studies are related to lung cancer. Other studied cancers included breast cancer, cervical cancer, head and neck cancer, lymphoma, pancreatic cancer, and sarcoma. All studies were based on human patients except for one, which was performed on rats. According to the included studies, machine learning (ML) models can help in detection, differentiation from benign lesions, segmentation, staging, response assessment, and prognosis determination. Despite the potential benefits of AI in cancer imaging and management, the routine implementation of AI-based models and 18F-FDG PET-derived radiomics in clinical practice is limited, at least partially due to the lack of standardized, reproducible, generalizable, and precise techniques.
Collapse
Affiliation(s)
- Mohammad S Sadaghiani
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
| | - Steven P Rowe
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
| | - Sara Sheikhbahaei
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
| |
Collapse
|
23
|
Boyle AJ, Gaudet VC, Black SE, Vasdev N, Rosa-Neto P, Zukotynski KA. Artificial intelligence for molecular neuroimaging. ANNALS OF TRANSLATIONAL MEDICINE 2021; 9:822. [PMID: 34268435 PMCID: PMC8246223 DOI: 10.21037/atm-20-6220] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/01/2020] [Accepted: 01/08/2021] [Indexed: 11/25/2022]
Abstract
In recent years, artificial intelligence (AI), the study of how computers and machines can gain intelligence, has been increasingly applied to problems in medical imaging, and in particular to molecular imaging of the central nervous system. AI innovations in medical imaging include improved image quality, automated segmentation, and automated classification of disease. These advances have led to an increased availability of supportive AI tools to assist physicians in interpreting images and making decisions affecting patient care. This review focuses on the role of AI in molecular neuroimaging, primarily as applied to positron emission tomography (PET) and single photon emission computed tomography (SPECT). We emphasize technical innovations such as AI in computed tomography (CT) generation for the purposes of attenuation correction and disease localization, as well as applications in neuro-oncology and neurodegenerative diseases. Limitations and future prospects for AI in molecular brain imaging are also discussed. Just as new equipment such as SPECT and PET revolutionized the field of medical imaging a few decades ago, AI and its related technologies are now poised to bring further disruptive changes. An understanding of these new technologies and how they work will help physicians adapt their practices and succeed with these new tools.
Collapse
Affiliation(s)
- Amanda J Boyle
- Azrieli Centre for Neuro-Radiochemistry, Brain Health Imaging Centre, Centre for Addiction and Mental Health, Toronto, Ontario, Canada
| | - Vincent C Gaudet
- Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario, Canada
| | - Sandra E Black
- Department of Medicine (Neurology), Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario, Canada
| | - Neil Vasdev
- Azrieli Centre for Neuro-Radiochemistry, Brain Health Imaging Centre, Centre for Addiction and Mental Health, Toronto, Ontario, Canada.,Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
| | - Pedro Rosa-Neto
- Translational Neuroimaging Laboratory, McGill University Research Centre for Studies in Aging, Douglas Research Institute, McGill University, Montréal, Québec, Canada
| | | |
Collapse
|
24
|
Cherian Kurian N, Sethi A, Reddy Konduru A, Mahajan A, Rane SU. A 2021 update on cancer image analytics with deep learning. WIRES DATA MINING AND KNOWLEDGE DISCOVERY 2021. [DOI: 10.1002/widm.1410] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Affiliation(s)
- Nikhil Cherian Kurian
- Department of Electrical Engineering Indian Institute of Technology, Bombay Mumbai India
| | - Amit Sethi
- Department of Electrical Engineering Indian Institute of Technology, Bombay Mumbai India
| | - Anil Reddy Konduru
- Department of Pathology Tata Memorial Center‐ACTREC, HBNI Navi Mumbai India
| | - Abhishek Mahajan
- Department of Radiology Tata Memorial Hospital, HBNI Mumbai India
| | - Swapnil Ulhas Rane
- Department of Pathology Tata Memorial Center‐ACTREC, HBNI Navi Mumbai India
| |
Collapse
|
25
|
Iantsen A, Ferreira M, Lucia F, Jaouen V, Reinhold C, Bonaffini P, Alfieri J, Rovira R, Masson I, Robin P, Mervoyer A, Rousseau C, Kridelka F, Decuypere M, Lovinfosse P, Pradier O, Hustinx R, Schick U, Visvikis D, Hatt M. Convolutional neural networks for PET functional volume fully automatic segmentation: development and validation in a multi-center setting. Eur J Nucl Med Mol Imaging 2021; 48:3444-3456. [PMID: 33772335 PMCID: PMC8440243 DOI: 10.1007/s00259-021-05244-z] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2020] [Accepted: 02/07/2021] [Indexed: 11/12/2022]
Abstract
Purpose In this work, we addressed fully automatic determination of tumor functional uptake from positron emission tomography (PET) images without relying on other image modalities or additional prior constraints, in the context of multicenter images with heterogeneous characteristics. Methods In cervical cancer, an additional challenge is that the tumor uptake is often located near, or even abutting, the bladder. PET datasets of 232 patients from five institutions were exploited. To avoid unreliable manual delineations, the ground truth was generated with a semi-automated approach: a volume containing the tumor and excluding the bladder was first manually determined, then a well-validated, semi-automated approach relying on the Fuzzy locally Adaptive Bayesian (FLAB) algorithm was applied to generate the ground truth. Our model, built on the U-Net architecture, incorporates residual blocks with concurrent spatial squeeze and excitation modules, as well as learnable non-linear downsampling and upsampling blocks. Experiments relied on cross-validation (four institutions for training and validation, and the fifth for testing). Results The model achieved a good Dice similarity coefficient (DSC) with little variability across institutions (0.80 ± 0.03), with higher recall (0.90 ± 0.05) than precision (0.75 ± 0.05) and improved results over the standard U-Net (DSC 0.77 ± 0.05, recall 0.87 ± 0.02, precision 0.74 ± 0.08). Both vastly outperformed a fixed threshold at 40% of SUVmax (DSC 0.33 ± 0.15, recall 0.52 ± 0.17, precision 0.30 ± 0.16). In all cases, the model could determine the tumor uptake without including the bladder. Neither shape priors nor anatomical information was required to achieve efficient training. Conclusion The proposed method could facilitate the deployment of a fully automated radiomics pipeline in such a challenging multicenter context. Supplementary Information The online version contains supplementary material available at 10.1007/s00259-021-05244-z.
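The overlap metrics reported in this abstract (Dice similarity coefficient, recall, precision) can all be computed from voxel-wise true-positive, false-positive, and false-negative counts between a predicted mask and the ground truth. A minimal sketch in plain Python; the function name `overlap_metrics` and the flat 0/1-list mask representation are illustrative assumptions, not details from the paper:

```python
def overlap_metrics(pred, truth):
    """Dice, recall (sensitivity), and precision (PPV) for two binary masks.

    `pred` and `truth` are flat sequences of 0/1 voxel labels of equal length.
    This is an illustrative sketch, not the paper's implementation.
    """
    tp = sum(1 for p, t in zip(pred, truth) if p and t)        # true positives
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)    # false positives
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)    # false negatives
    # Degenerate empty-mask cases are conventionally scored as perfect overlap.
    dice = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    return dice, recall, precision
```

The higher-recall-than-precision pattern reported above corresponds to a model that over-segments slightly: it captures most true tumor voxels (few false negatives) at the cost of some false positives.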
Collapse
Affiliation(s)
- Andrei Iantsen
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France.
| | - Marta Ferreira
- GIGA-CRC in vivo Imaging, University of Liège, Liège, Belgium
| | - Francois Lucia
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
| | - Vincent Jaouen
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
| | - Caroline Reinhold
- Department of Radiology, McGill University Health Centre (MUHC), Montreal, Canada
| | - Pietro Bonaffini
- Department of Radiology, McGill University Health Centre (MUHC), Montreal, Canada
| | - Joanne Alfieri
- Department of Radiation Oncology, McGill University Health Centre (MUHC), Montreal, Canada
| | - Ramon Rovira
- Gynecology Oncology and Laparoscopy Department, Hospital de la Santa Creu i Sant Pau, Barcelona, Spain
| | - Ingrid Masson
- Department of Radiation Oncology, Institut de Cancérologie de l'Ouest (ICO), Nantes, France
| | - Philippe Robin
- Nuclear Medicine Department, University Hospital, Brest, France
| | - Augustin Mervoyer
- Department of Radiation Oncology, Institut de Cancérologie de l'Ouest (ICO), Nantes, France
| | - Caroline Rousseau
- Nuclear Medicine Department, Institut de Cancérologie de l'Ouest (ICO), Nantes, France
| | - Frédéric Kridelka
- Division of Oncological Gynecology, University Hospital of Liège, Liège, Belgium
| | - Marjolein Decuypere
- Division of Oncological Gynecology, University Hospital of Liège, Liège, Belgium
| | - Pierre Lovinfosse
- Division of Nuclear Medicine and Oncological Imaging, University Hospital of Liège, Liège, Belgium
| | | | - Roland Hustinx
- GIGA-CRC in vivo Imaging, University of Liège, Liège, Belgium
| | - Ulrike Schick
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
| | | | - Mathieu Hatt
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
| |
Collapse
|
26
|
Eyuboglu S, Angus G, Patel BN, Pareek A, Davidzon G, Long J, Dunnmon J, Lungren MP. Multi-task weak supervision enables anatomically-resolved abnormality detection in whole-body FDG-PET/CT. Nat Commun 2021; 12:1880. [PMID: 33767174 PMCID: PMC7994797 DOI: 10.1038/s41467-021-22018-1] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2020] [Accepted: 02/16/2021] [Indexed: 11/09/2022] Open
Abstract
Computational decision support systems could provide clinical value in whole-body FDG-PET/CT workflows. However, limited availability of labeled data combined with the large size of PET/CT imaging exams make it challenging to apply existing supervised machine learning systems. Leveraging recent advancements in natural language processing, we describe a weak supervision framework that extracts imperfect, yet highly granular, regional abnormality labels from free-text radiology reports. Our framework automatically labels each region in a custom ontology of anatomical regions, providing a structured profile of the pathologies in each imaging exam. Using these generated labels, we then train an attention-based, multi-task CNN architecture to detect and estimate the location of abnormalities in whole-body scans. We demonstrate empirically that our multi-task representation is critical for strong performance on rare abnormalities with limited training data. The representation also contributes to more accurate mortality prediction from imaging data, suggesting the potential utility of our framework beyond abnormality detection and location estimation.
Collapse
Affiliation(s)
- Sabri Eyuboglu
- Department of Computer Science, Stanford University, Stanford, CA, USA.
| | - Geoffrey Angus
- Department of Computer Science, Stanford University, Stanford, CA, USA
| | - Bhavik N Patel
- Department of Radiology, Stanford University, Stanford, CA, USA
| | - Anuj Pareek
- Department of Radiology, Stanford University, Stanford, CA, USA
| | - Guido Davidzon
- Department of Radiology, Stanford University, Stanford, CA, USA
| | - Jin Long
- Center for Artificial Intelligence in Medicine and Imaging, Stanford University, Stanford, CA, USA
| | - Jared Dunnmon
- Department of Computer Science, Stanford University, Stanford, CA, USA
| | | |
Collapse
|
27
|
Abstract
Head and neck cancer (HNC) has a relevant impact on the oncology patient population, and for this reason the present review is dedicated to this type of neoplastic disease. In particular, a collection of methods aimed at tumor delineation is presented, because this is a fundamental task for efficient radiotherapy. Such a segmentation task is often performed on uni-modal data (usually Positron Emission Tomography (PET)) even though multi-modal images are preferred (PET-Computed Tomography (CT)/PET-Magnetic Resonance (MR)). Datasets can be private or freely provided by online repositories on the web. The adopted techniques can belong to well-known image processing/computer-vision algorithms or to newer deep learning/artificial intelligence approaches. All these aspects are analyzed in the present review, and a comparison among the various approaches is performed. From this review, the authors conclude that despite the encouraging results of computerized approaches, their performance still falls short of manual tumor delineation.
Collapse
|
28
|
Groendahl AR, Skjei Knudtsen I, Huynh BN, Mulstad M, Moe YM, Knuth F, Tomic O, Indahl UG, Torheim T, Dale E, Malinen E, Futsaether CM. A comparison of methods for fully automatic segmentation of tumors and involved nodes in PET/CT of head and neck cancers. Phys Med Biol 2021; 66:065012. [PMID: 33666176 DOI: 10.1088/1361-6560/abe553] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Target volume delineation is a vital but time-consuming and challenging part of radiotherapy, where the goal is to deliver sufficient dose to the target while reducing risks of side effects. For head and neck cancer (HNC) this is complicated by the complex anatomy of the head and neck region and the proximity of target volumes to organs at risk. The purpose of this study was to compare and evaluate conventional PET thresholding methods, six classical machine learning algorithms and a 2D U-Net convolutional neural network (CNN) for automatic gross tumor volume (GTV) segmentation of HNC in PET/CT images. For the latter two approaches the impact of single versus multimodality input on segmentation quality was also assessed. 197 patients were included in the study. The cohort was split into training and test sets (157 and 40 patients, respectively). Five-fold cross-validation was used on the training set for model comparison and selection. Manual GTV delineations represented the ground truth. Thresholding, classical machine learning and CNN segmentation models were ranked separately according to the cross-validation Sørensen-Dice similarity coefficient (Dice). PET thresholding gave a maximum mean Dice of 0.62, whereas classical machine learning resulted in maximum mean Dice scores of 0.24 (CT) and 0.66 (PET; PET/CT). CNN models obtained maximum mean Dice scores of 0.66 (CT), 0.68 (PET) and 0.74 (PET/CT). The difference in cross-validation Dice between multimodality PET/CT and single modality CNN models was significant (p ≤ 0.0001). The top-ranked PET/CT-based CNN model outperformed the best-performing thresholding and classical machine learning models, giving significantly better segmentations in terms of cross-validation and test set Dice, true positive rate, positive predictive value and surface distance-based metrics (p ≤ 0.0001).
Thus, deep learning based on multimodality PET/CT input resulted in superior target coverage and less inclusion of surrounding normal tissue.
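The conventional PET thresholding baselines that the CNN models outperform typically keep every voxel whose uptake is at or above a fixed fraction of the maximum standardized uptake value (SUVmax), e.g. 40% as in the Iantsen et al. entry above. A minimal sketch, assuming the PET volume has been flattened to a list of SUV values; the function name `suvmax_threshold_mask` is a hypothetical label, not from either paper:

```python
def suvmax_threshold_mask(suv_values, fraction=0.40):
    """Classical fixed-threshold PET delineation sketch (illustrative).

    Keeps every voxel whose SUV is at or above `fraction` * SUVmax.
    `suv_values` is a flat sequence of standardized uptake values.
    """
    threshold = fraction * max(suv_values)
    return [v >= threshold for v in suv_values]
```

Its weakness, reflected in the low Dice scores reported for thresholding, is that a single global cutoff cannot adapt to heterogeneous uptake, noise, or nearby physiological activity.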
Collapse
|
29
|
Torres-Velázquez M, Chen WJ, Li X, McMillan AB. Application and Construction of Deep Learning Networks in Medical Imaging. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021; 5:137-159. [PMID: 34017931 PMCID: PMC8132932 DOI: 10.1109/trpms.2020.3030611] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Deep learning (DL) approaches are part of the machine learning (ML) subfield concerned with the development of computational models to train artificial intelligence systems. DL models are characterized by automatically extracting high-level features from the input data to learn the relationship between matching datasets. Thus, its implementation offers an advantage over common ML methods that often require the practitioner to have some domain knowledge of the input data to select the best latent representation. As a result of this advantage, DL has been successfully applied within the medical imaging field to address problems, such as disease classification and tumor segmentation for which it is difficult or impossible to determine which image features are relevant. Therefore, taking into consideration the positive impact of DL on the medical imaging field, this article reviews the key concepts associated with its evolution and implementation. The sections of this review summarize the milestones related to the development of the DL field, followed by a description of the elements of deep neural network and an overview of its application within the medical imaging field. Subsequently, the key steps necessary to implement a supervised DL application are defined, and associated limitations are discussed.
Collapse
Affiliation(s)
- Maribel Torres-Velázquez
- Department of Biomedical Engineering, College of Engineering, University of Wisconsin-Madison, Madison, WI 53705 USA
| | - Wei-Jie Chen
- Department of Electrical and Computer Engineering, College of Engineering, University of Wisconsin-Madison, Madison, WI 53705 USA
| | - Xue Li
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53705 USA
| | - Alan B McMillan
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53705 USA, and also with the Department of Medical Physics, University of Wisconsin-Madison, Madison, WI 53705 USA
| |
Collapse
|
30
|
Moe YM, Groendahl AR, Tomic O, Dale E, Malinen E, Futsaether CM. Deep learning-based auto-delineation of gross tumour volumes and involved nodes in PET/CT images of head and neck cancer patients. Eur J Nucl Med Mol Imaging 2021; 48:2782-2792. [PMID: 33559711 PMCID: PMC8263429 DOI: 10.1007/s00259-020-05125-x] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2020] [Accepted: 11/15/2020] [Indexed: 11/29/2022]
Abstract
PURPOSE Identification and delineation of the gross tumour and malignant nodal volume (GTV) in medical images are vital in radiotherapy. We assessed the applicability of convolutional neural networks (CNNs) for fully automatic delineation of the GTV from FDG-PET/CT images of patients with head and neck cancer (HNC). CNN models were compared to manual GTV delineations made by experienced specialists. New structure-based performance metrics were introduced to enable in-depth assessment of auto-delineation of multiple malignant structures in individual patients. METHODS U-Net CNN models were trained and evaluated on images and manual GTV delineations from 197 HNC patients. The dataset was split into training, validation and test cohorts (n= 142, n = 15 and n = 40, respectively). The Dice score, surface distance metrics and the new structure-based metrics were used for model evaluation. Additionally, auto-delineations were manually assessed by an oncologist for 15 randomly selected patients in the test cohort. RESULTS The mean Dice scores of the auto-delineations were 55%, 69% and 71% for the CT-based, PET-based and PET/CT-based CNN models, respectively. The PET signal was essential for delineating all structures. Models based on PET/CT images identified 86% of the true GTV structures, whereas models built solely on CT images identified only 55% of the true structures. The oncologist reported very high-quality auto-delineations for 14 out of the 15 randomly selected patients. CONCLUSIONS CNNs provided high-quality auto-delineations for HNC using multimodality PET/CT. The introduced structure-wise evaluation metrics provided valuable information on CNN model strengths and weaknesses for multi-structure auto-delineation.
Collapse
Affiliation(s)
- Yngve Mardal Moe
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
| | | | - Oliver Tomic
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
| | - Einar Dale
- Department of Oncology, Oslo University Hospital, Oslo, Norway
| | - Eirik Malinen
- Department of Medical Physics, Oslo University Hospital, Oslo, Norway.,Department of Physics, University of Oslo, Oslo, Norway
| | | |
Collapse
|
31
|
Naser MA, van Dijk LV, He R, Wahid KA, Fuller CD. Tumor Segmentation in Patients with Head and Neck Cancers Using Deep Learning Based-on Multi-modality PET/CT Images. HEAD AND NECK TUMOR SEGMENTATION : FIRST CHALLENGE, HECKTOR 2020, HELD IN CONJUNCTION WITH MICCAI 2020, LIMA, PERU, OCTOBER 4, 2020, PROCEEDINGS 2021; 12603:85-98. [PMID: 33724743 PMCID: PMC7929493 DOI: 10.1007/978-3-030-67194-5_10] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
Segmentation of head and neck cancer (HNC) primary tumors on medical images is an essential, yet labor-intensive, aspect of radiotherapy. PET/CT imaging offers a unique ability to capture metabolic and anatomic information, which is invaluable for tumor detection and border definition. An automatic segmentation tool that could leverage the dual streams of information from PET and CT imaging simultaneously could substantially propel HNC radiotherapy workflows forward. Herein, we leverage a multi-institutional PET/CT dataset of 201 HNC patients, as part of the MICCAI segmentation challenge, to develop novel deep learning architectures for primary tumor auto-segmentation for HNC patients. We preprocess PET/CT images by normalizing intensities and applying data augmentation to mitigate overfitting. Both 2D and 3D convolutional neural networks based on the U-net architecture, which were optimized with a model loss function based on a combination of dice similarity coefficient (DSC) and binary cross entropy, were implemented. The median and mean DSC values comparing the predicted tumor segmentation with the ground truth achieved by the models through 5-fold cross validation are 0.79 and 0.69 for the 3D model, respectively, and 0.79 and 0.67 for the 2D model, respectively. These promising results show potential to provide an automatic, accurate, and efficient approach for primary tumor auto-segmentation to improve the clinical practice of HNC treatment.
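The combined loss this abstract describes pairs a soft Dice term (which directly optimizes overlap) with binary cross entropy (which gives well-behaved per-voxel gradients). A minimal sketch in plain Python; the equal 50/50 weighting, the smoothing constant, and the name `dice_bce_loss` are assumptions for illustration only, as the paper does not specify them here:

```python
import math

def dice_bce_loss(probs, targets, w_dice=0.5, w_bce=0.5, eps=1e-7):
    """Soft Dice + binary cross entropy loss sketch (illustrative weights).

    `probs` are predicted foreground probabilities in [0, 1];
    `targets` are 0/1 ground-truth voxel labels; both are flat sequences.
    """
    # Clip probabilities away from 0 and 1 so the logarithms stay finite.
    probs = [min(max(p, eps), 1 - eps) for p in probs]
    # Soft Dice: overlap computed on probabilities rather than a hard mask.
    intersection = sum(p * t for p, t in zip(probs, targets))
    dice = (2 * intersection + eps) / (sum(probs) + sum(targets) + eps)
    # Mean per-voxel binary cross entropy.
    bce = -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
               for p, t in zip(probs, targets)) / len(probs)
    return w_dice * (1 - dice) + w_bce * bce
```

A confident, correct prediction drives both terms toward zero, while a confident, wrong prediction is penalized heavily by the cross-entropy term.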
Collapse
Affiliation(s)
- Mohamed A Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Lisanne V van Dijk
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Renjie He
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Kareem A Wahid
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Clifton D Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| |
Collapse
|
32
|
Comelli A. Fully 3D Active Surface with Machine Learning for PET Image Segmentation. J Imaging 2020; 6:jimaging6110113. [PMID: 34460557 PMCID: PMC8321170 DOI: 10.3390/jimaging6110113] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2020] [Revised: 10/16/2020] [Accepted: 10/20/2020] [Indexed: 12/12/2022] Open
Abstract
In order to tackle three-dimensional tumor volume reconstruction from Positron Emission Tomography (PET) images, most of the existing algorithms rely on the segmentation of independent PET slices. To exploit cross-slice information, typically overlooked in these 2D implementations, I present an algorithm capable of achieving the volume reconstruction directly in 3D, by leveraging an active surface algorithm. The evolution of such surface performs the segmentation of the whole stack of slices simultaneously and can handle changes in topology. Furthermore, no artificial stop condition is required, as the active surface will naturally converge to a stable topology. In addition, I include a machine learning component to enhance the accuracy of the segmentation process. The latter consists of a forcing term based on classification results from a discriminant analysis algorithm, which is included directly in the mathematical formulation of the energy function driving surface evolution. It is worth noting that the training of such a component requires minimal data compared to more involved deep learning methods. Only eight patients (i.e., two lung, four head and neck, and two brain cancers) were used for training and testing the machine learning component, while fifty patients (i.e., 10 lung, 25 head and neck, and 15 brain cancers) were used to test the full 3D reconstruction algorithm. Performance evaluation is based on the same dataset of patients discussed in my previous work, where the segmentation was performed using the 2D active contour. The results confirm that the active surface algorithm is superior to the active contour algorithm, outperforming the earlier approach on all the investigated anatomical districts with a dice similarity coefficient of 90.47 ± 2.36% for lung cancer, 88.30 ± 2.89% for head and neck cancer, and 90.29 ± 2.52% for brain cancer. 
Based on the reported results, it can be claimed that the migration into a 3D system yielded a practical benefit justifying the effort to rewrite an existing 2D system for PET imaging segmentation.
Collapse
|
33
|
Unkelbach J, Bortfeld T, Cardenas CE, Gregoire V, Hager W, Heijmen B, Jeraj R, Korreman SS, Ludwig R, Pouymayou B, Shusharina N, Söderberg J, Toma-Dasu I, Troost EGC, Vasquez Osorio E. The role of computational methods for automating and improving clinical target volume definition. Radiother Oncol 2020; 153:15-25. [PMID: 33039428 DOI: 10.1016/j.radonc.2020.10.002] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2020] [Revised: 10/01/2020] [Accepted: 10/01/2020] [Indexed: 12/25/2022]
Abstract
Treatment planning in radiotherapy distinguishes three target volume concepts: the gross tumor volume (GTV), the clinical target volume (CTV), and the planning target volume (PTV). Over time, GTV definition and PTV margins have improved through the development of novel imaging techniques and better image guidance, respectively. CTV definition is sometimes considered the weakest element in the planning process. CTV definition is particularly complex since the extension of microscopic disease cannot be seen using currently available in-vivo imaging techniques. Instead, CTV definition has to incorporate knowledge of the patterns of tumor progression. While CTV delineation has largely been considered the domain of radiation oncologists, this paper, arising from a 2019 ESTRO Physics research workshop, discusses the contributions that medical physics and computer science can make by developing computational methods to support CTV definition. First, we overview the role of image segmentation algorithms, which may in part automate CTV delineation through segmentation of lymph node stations or normal tissues representing anatomical boundaries of microscopic tumor progression. The recent success of deep convolutional neural networks has also enabled learning entire CTV delineations from examples. Second, we discuss the use of mathematical models of tumor progression for CTV definition, using as example the application of glioma growth models to facilitate GTV-to-CTV expansion for glioblastoma that is consistent with neuroanatomy. We further consider statistical machine learning models to quantify lymphatic metastatic progression of tumors, which may eventually improve elective CTV definition. Lastly, we discuss approaches to incorporate uncertainty in CTV definition into treatment plan optimization as well as general limitations of the CTV concept in the case of infiltrating tumors without natural boundaries.
Collapse
Affiliation(s)
- Jan Unkelbach
- Department of Radiation Oncology, University Hospital Zurich, Switzerland.
| | - Thomas Bortfeld
- Division of Radiation Biophysics, Massachusetts General Hospital and Harvard Medical School, Boston, USA
| | - Carlos E Cardenas
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, USA
| | | | - Wille Hager
- Department of Physics, Medical Radiation Physics, Stockholm University and Department of Oncology and Pathology, Medical Radiation Physics, Karolinska Institutet, Stockholm, Sweden
| | - Ben Heijmen
- Department of Radiation Oncology, Erasmus University Medical Center (Erasmus MC), Rotterdam, The Netherlands
| | - Robert Jeraj
- Department of Medical Physics, University of Wisconsin, Madison, USA
| | - Stine S Korreman
- Department of Oncology and Danish Center for Particle Therapy, Aarhus University Hospital, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
| | - Roman Ludwig
- Department of Radiation Oncology, University Hospital Zurich, Switzerland
| | - Bertrand Pouymayou
- Department of Radiation Oncology, University Hospital Zurich, Switzerland
| | - Nadya Shusharina
- Division of Radiation Biophysics, Massachusetts General Hospital and Harvard Medical School, Boston, USA
| | | | - Iuliana Toma-Dasu
- Department of Physics, Medical Radiation Physics, Stockholm University and Department of Oncology and Pathology, Medical Radiation Physics, Karolinska Institutet, Stockholm, Sweden
| | - Esther G C Troost
- Dept. of Radiotherapy and Radiation Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany; OncoRay - National Center for Radiation Research in Oncology, Dresden, Germany; Helmholtz-Zentrum Dresden - Rossendorf, Institute of Radiooncology - OncoRay, Dresden, Germany
| | - Eliana Vasquez Osorio
- Division of Cancer Sciences, Faculty of Biology, Medicine and Health, The University of Manchester, UK
| |
Collapse
|
34
|
Zukotynski K, Gaudet V, Uribe CF, Mathotaarachchi S, Smith KC, Rosa-Neto P, Bénard F, Black SE. Machine Learning in Nuclear Medicine: Part 2-Neural Networks and Clinical Aspects. J Nucl Med 2020; 62:22-29. [PMID: 32978286 DOI: 10.2967/jnumed.119.231837] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2020] [Accepted: 08/13/2020] [Indexed: 12/12/2022] Open
Abstract
This article is the second part in our machine learning series. Part 1 provided a general overview of machine learning in nuclear medicine. Part 2 focuses on neural networks. We start with an example illustrating how neural networks work and a discussion of potential applications. Recognizing that there is a spectrum of applications, we focus on recent publications in the areas of image reconstruction, low-dose PET, disease detection, and models used for diagnosis and outcome prediction. Finally, since the way machine learning algorithms are reported in the literature is extremely variable, we conclude with a call to arms regarding the need for standardized reporting of design and outcome metrics and we propose a basic checklist our community might follow going forward.
Collapse
Affiliation(s)
- Katherine Zukotynski
- Departments of Medicine and Radiology, McMaster University, Hamilton, Ontario, Canada
| | - Vincent Gaudet
- Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario, Canada
| | - Carlos F Uribe
- PET Functional Imaging, BC Cancer, Vancouver, British Columbia, Canada
| | | | - Kenneth C Smith
- Department of Electrical and Computer Engineering, University of Toronto, Toronto, Ontario, Canada
| | - Pedro Rosa-Neto
- Translational Neuroimaging Lab, McGill University, Montreal, Quebec, Canada
| | - François Bénard
- PET Functional Imaging, BC Cancer, Vancouver, British Columbia, Canada.,Department of Radiology, University of British Columbia, Vancouver, British Columbia, Canada; and
| | - Sandra E Black
- Department of Medicine (Neurology), Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario, Canada
| |
Collapse
|
35
|
Arabi H, Zaidi H. Applications of artificial intelligence and deep learning in molecular imaging and radiotherapy. Eur J Hybrid Imaging 2020; 4:17. [PMID: 34191161 PMCID: PMC8218135 DOI: 10.1186/s41824-020-00086-8] [Citation(s) in RCA: 40] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Accepted: 08/10/2020] [Indexed: 12/22/2022] Open
Abstract
This brief review summarizes the major applications of artificial intelligence (AI), in particular deep learning approaches, in molecular imaging and radiation therapy research. To this end, the applications of artificial intelligence in five generic fields of molecular imaging and radiation therapy are discussed: PET instrumentation design; PET image reconstruction, quantification, and segmentation; image denoising (low-dose imaging); radiation dosimetry; and computer-aided diagnosis and outcome prediction. This review sets out to cover briefly the fundamental concepts of AI and deep learning, followed by a presentation of seminal achievements and the challenges facing their adoption in the clinical setting.
Collapse
Affiliation(s)
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
| | - Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland.
- Geneva University Neurocenter, Geneva University, CH-1205, Geneva, Switzerland.
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700, Groningen, RB, Netherlands.
- Department of Nuclear Medicine, University of Southern Denmark, 500, Odense, Denmark.
| |
Collapse
|
36
|
Abstract
CLINICAL ISSUE Hybrid imaging enables the precise visualization of cellular metabolism by combining anatomical and metabolic information. Advances in artificial intelligence (AI) offer new methods for processing and evaluating this data. METHODOLOGICAL INNOVATIONS This review summarizes current developments and applications of AI methods in hybrid imaging. Applications in image processing as well as methods for disease-related evaluation are presented and discussed. MATERIALS AND METHODS This article is based on a selective literature search with the search engines PubMed and arXiv. ASSESSMENT Currently, there are only a few AI applications using hybrid imaging data and no applications are established in clinical routine yet. Although the first promising approaches are emerging, they still need to be evaluated prospectively. In the future, AI applications will support radiologists and nuclear medicine radiologists in diagnosis and therapy.
Collapse
Affiliation(s)
- Christian Strack
- AG Computational Radiology, Department of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Deutschland
- Heidelberg University, Heidelberg, Deutschland
| | - Robert Seifert
- Department of Nuclear Medicine, Medical Faculty, University Hospital Essen, Essen, Deutschland
| | - Jens Kleesiek
- AG Computational Radiology, Department of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Deutschland.
- German Cancer Consortium (DKTK), Heidelberg, Deutschland.
| |
Collapse
|
37
|
Jemaa S, Fredrickson J, Carano RAD, Nielsen T, de Crespigny A, Bengtsson T. Tumor Segmentation and Feature Extraction from Whole-Body FDG-PET/CT Using Cascaded 2D and 3D Convolutional Neural Networks. J Digit Imaging 2020; 33:888-894. [PMID: 32378059 PMCID: PMC7522127 DOI: 10.1007/s10278-020-00341-1] [Citation(s) in RCA: 40] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022] Open
Abstract
18F-Fluorodeoxyglucose-positron emission tomography (FDG-PET) is commonly used in clinical practice and clinical drug development to identify and quantify metabolically active tumors. Manual or computer-assisted tumor segmentation in FDG-PET images is a common way to assess tumor burden; however, such approaches are labor intensive and may suffer from high inter-reader variability. We propose an end-to-end method leveraging 2D and 3D convolutional neural networks to rapidly identify and segment tumors and to extract metabolic information in eyes-to-thighs (whole-body) FDG-PET/CT scans. The developed architecture is computationally efficient and devised to accommodate the size of whole-body scans, the extreme imbalance between tumor burden and the volume of healthy tissue, and the heterogeneous nature of the input images. Our dataset consists of a total of 3664 eyes-to-thighs FDG-PET/CT scans from multi-site clinical trials in patients with non-Hodgkin's lymphoma (NHL) and advanced non-small cell lung cancer (NSCLC). Tumors were segmented and reviewed by board-certified radiologists. We report a mean 3D Dice score of 88.6% on an NHL hold-out set of 1124 scans and a 93% sensitivity on 274 NSCLC hold-out scans. The method is a potential tool for radiologists to rapidly assess eyes-to-thighs FDG-avid tumor burden.
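The 3D Dice score reported above measures the voxel-wise overlap between a predicted and a reference segmentation mask. A minimal sketch in plain Python (the toy masks are illustrative, not from the study):

```python
def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks (flat 0/1 sequences)."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * intersection / total

# Toy 1D masks standing in for flattened 3D volumes
pred = [0, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 0, 0, 0]
print(dice_score(pred, truth))  # 2*2 / (3+2) = 0.8
```

For 3D volumes the masks are simply flattened before comparison; the definition itself is dimension-agnostic.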
Collapse
|
38
|
Duffy IR, Boyle AJ, Vasdev N. Improving PET Imaging Acquisition and Analysis With Machine Learning: A Narrative Review With Focus on Alzheimer's Disease and Oncology. Mol Imaging 2020; 18:1536012119869070. [PMID: 31429375 PMCID: PMC6702769 DOI: 10.1177/1536012119869070] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022] Open
Abstract
Machine learning (ML) algorithms have found increasing utility in the medical imaging field and numerous applications in the analysis of digital biomarkers within positron emission tomography (PET) imaging have emerged. Interest in the use of artificial intelligence in PET imaging for the study of neurodegenerative diseases and oncology stems from the potential for such techniques to streamline decision support for physicians providing early and accurate diagnosis and allowing personalized treatment regimens. In this review, the use of ML to improve PET image acquisition and reconstruction is presented, along with an overview of its applications in the analysis of PET images for the study of Alzheimer's disease and oncology.
Collapse
Affiliation(s)
- Ian R Duffy
- 1 Azrieli Centre for Neuro-Radiochemistry, Research Imaging Centre, Centre for Addiction and Mental Health, Toronto, Ontario, Canada
| | - Amanda J Boyle
- 1 Azrieli Centre for Neuro-Radiochemistry, Research Imaging Centre, Centre for Addiction and Mental Health, Toronto, Ontario, Canada
| | - Neil Vasdev
- 1 Azrieli Centre for Neuro-Radiochemistry, Research Imaging Centre, Centre for Addiction and Mental Health, Toronto, Ontario, Canada.,2 Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
| |
Collapse
|
39
|
Shusharina N, Söderberg J, Edmunds D, Löfman F, Shih H, Bortfeld T. Automated delineation of the clinical target volume using anatomically constrained 3D expansion of the gross tumor volume. Radiother Oncol 2020; 146:37-43. [PMID: 32114264 PMCID: PMC10660950 DOI: 10.1016/j.radonc.2020.01.028] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2019] [Revised: 01/29/2020] [Accepted: 01/30/2020] [Indexed: 01/05/2023]
Abstract
PURPOSE Delineation of the clinical target volume (CTV) is arguably the weakest link in the treatment planning chain. This work aims to support clinicians in this crucial task. METHODS AND MATERIALS While the CTV itself is ambiguous, it is much easier to identify structures that do not belong to the CTV and serve as barriers to the spread of the disease. We segment the known barrier structures using a convolutional neural network (CNN). The CTV is then obtained by starting from the manually delineated gross tumor volume (GTV) and expanding it while taking into account the barrier structures. Mathematically, we define the CTV as an iso-surface in the 3D map of shortest paths of all voxels from the GTV. The shortest paths are found with the Dijkstra algorithm. While the method is generally applicable, we test it on 206 glioma and glioblastoma cases. RESULTS The auto-segmented barrier structures for the brain cases include the ventricles, falx cerebri, tentorium cerebelli, brain sinuses, and the outer surface of the brain. Manual and auto-segmented barrier structures agree with surface Dice Similarity Coefficients (DSC) ranging from 0.91 to 0.97 at 2 mm tolerance. Comparison of manual and automatically delineated CTVs shows a median surface DSC of 0.79. CONCLUSIONS Barrier structures for CTV definition can be auto-delineated with outstanding precision using a CNN. An algorithm for automated calculation of the CTV by 3D expansion of the GTV while respecting anatomical barriers has been developed. It shows good agreement with manual CTV definition for brain tumors.
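The GTV-to-CTV expansion described above can be viewed as a shortest-path computation on a voxel grid in which barrier structures are impassable; the CTV is then an iso-distance set in the resulting Dijkstra distance map. A minimal 2D sketch in plain Python (grid size, seed, barrier, and margin are illustrative; the paper operates on 3D images with physical voxel spacing):

```python
import heapq

def ctv_from_gtv(gtv, barrier, shape, margin):
    """Expand the GTV by a geodesic margin that respects barrier structures.

    gtv: set of (row, col) seed cells; barrier: set of impassable cells;
    shape: (rows, cols); margin: maximum path length in cell units.
    Returns the cells whose shortest path from the GTV is <= margin,
    i.e. an iso-distance region in the Dijkstra distance map.
    """
    rows, cols = shape
    dist = {cell: 0.0 for cell in gtv}
    heap = [(0.0, cell) for cell in gtv]
    heapq.heapify(heap)
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist.get((r, c), float("inf")) or d > margin:
            continue  # stale entry, or no need to expand past the margin
        for nxt in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and nxt not in barrier:
                nd = d + 1.0
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt] = nd
                    heapq.heappush(heap, (nd, nxt))
    return {cell for cell, d in dist.items() if d <= margin}

# A vertical barrier (playing the role of, e.g., the falx cerebri)
# blocks expansion across column 3 of a 5x7 grid.
barrier = {(r, 3) for r in range(5)}
ctv = ctv_from_gtv(gtv={(2, 1)}, barrier=barrier, shape=(5, 7), margin=3)
```

Because the barrier column is impassable, cells on its far side stay outside the CTV even when their straight-line distance to the GTV is within the margin.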
Collapse
Affiliation(s)
- Nadya Shusharina
- Division of Radiation Biophysics, Massachusetts General Hospital and Harvard Medical School, Boston, USA
| | | | - David Edmunds
- Division of Radiation Biophysics, Massachusetts General Hospital and Harvard Medical School, Boston, USA
| | | | - Helen Shih
- Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, USA
| | - Thomas Bortfeld
- Division of Radiation Biophysics, Massachusetts General Hospital and Harvard Medical School, Boston, USA.
| |
Collapse
|
40
|
Ng SP, Cardenas CE, Elhalawani H, Pollard C, Elgohari B, Fang P, Meheissen M, Guha-Thakurta N, Bahig H, Johnson JM, Kamal M, Garden AS, Reddy JP, Su SY, Ferrarotto R, Frank SJ, Brandon Gunn G, Moreno AC, Rosenthal DI, Fuller CD, Phan J. Comparison of tumor delineation using dual energy computed tomography versus magnetic resonance imaging in head and neck cancer re-irradiation cases. PHYSICS & IMAGING IN RADIATION ONCOLOGY 2020; 14:1-5. [PMID: 33458306 PMCID: PMC7807720 DOI: 10.1016/j.phro.2020.04.001] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/12/2020] [Revised: 04/16/2020] [Accepted: 04/21/2020] [Indexed: 02/06/2023]
Abstract
GTVs on the 60 kV and 140 kV images from DECT and the T1c and T2 images from MRI were compared. Delineation was most consistent using T1c (no interobserver difference in DSC). T1c MRI provided higher interobserver agreement for skull base tumors. 60 kV DECT provided higher interobserver agreement for non-skull base tumors.
In treatment planning, multiple imaging modalities can be employed to improve the accuracy of tumor delineation, but this can be costly. This study aimed to compare the interobserver consistency of dual energy computed tomography (DECT) versus magnetic resonance imaging (MRI) for delineating tumors in the head and neck cancer (HNC) re-irradiation scenario. Twenty-three patients with recurrent HNC who had planning DECT and MRI were identified. Tumor volumes contoured by seven radiation oncologists were compared. Overall, T1c MRI performed best, with a median DSC of 0.58 (range 0–0.91). T1c MRI provided higher interobserver agreement for skull base sites, and 60 kV DECT provided higher interobserver agreement for non-skull base sites.
Collapse
Affiliation(s)
- Sweet Ping Ng
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA.,Department of Radiation Oncology, Peter MacCallum Cancer Centre, Melbourne, Australia
| | - Carlos E Cardenas
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Hesham Elhalawani
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Courtney Pollard
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Baher Elgohari
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Penny Fang
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Mohamed Meheissen
- Department of Clinical Oncology and Nuclear Medicine, University of Alexandria, Alexandria, Egypt
| | - Nandita Guha-Thakurta
- Department of Diagnostic Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Houda Bahig
- Department of Radiation Oncology, Centre Hospitalier de l'Université de Montréal, Montreal, Quebec, Canada
| | - Jason M Johnson
- Department of Diagnostic Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Mona Kamal
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Adam S Garden
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Jay P Reddy
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Shirley Y Su
- Department of Head and Neck Surgery, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Renata Ferrarotto
- Department of Thoracic Head and Neck Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Steven J Frank
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - G Brandon Gunn
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Amy C Moreno
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - David I Rosenthal
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Clifton D Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Jack Phan
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| |
Collapse
|
41
|
Alasal SA, AL Bashabsheh E, Najadat H. Overview of Positron Emission Tomography (PET) for Brain Functions Degeneration Classification. 2020 11TH INTERNATIONAL CONFERENCE ON INFORMATION AND COMMUNICATION SYSTEMS (ICICS) 2020. [DOI: 10.1109/icics49469.2020.239500] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/01/2023]
|
42
|
Ye Y, Cai Z, Huang B, He Y, Zeng P, Zou G, Deng W, Chen H, Huang B. Fully-Automated Segmentation of Nasopharyngeal Carcinoma on Dual-Sequence MRI Using Convolutional Neural Networks. Front Oncol 2020; 10:166. [PMID: 32154168 PMCID: PMC7045897 DOI: 10.3389/fonc.2020.00166] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2019] [Accepted: 01/30/2020] [Indexed: 11/13/2022] Open
Abstract
In this study, we proposed an automated method based on a convolutional neural network (CNN) for nasopharyngeal carcinoma (NPC) segmentation on dual-sequence magnetic resonance imaging (MRI). T1-weighted (T1W) and T2-weighted (T2W) MRI images were collected from 44 NPC patients. We developed a dense connectivity embedding U-net (DEU), trained the network on the two-dimensional dual-sequence MRI images in the training dataset, and applied post-processing to remove false positive results. To assess the contribution of the dual-sequence input, we performed an experiment with different inputs in eight randomly selected patients. We evaluated DEU's performance using a 10-fold cross-validation strategy and compared the results with previous studies. In 10-fold cross-validation, the Dice similarity coefficients (DSCs) of the method using only T1W, only T2W, and dual-sequence images as input were 0.620 ± 0.0642, 0.642 ± 0.118, and 0.721 ± 0.036, respectively. The median DSC in the 10-fold cross-validation experiment with DEU was 0.735, and the average DSC across seven external subjects was 0.87. In summary, we proposed and verified a fully automatic NPC segmentation method based on DEU and dual-sequence MRI images with accurate and stable performance. If further verified, our proposed method would be of use in the clinical practice of NPC.
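The 10-fold cross-validation above partitions data at the patient level, so that images from one patient never appear in both the training and validation sets of the same fold. A minimal sketch of such a split in plain Python (patient IDs 0–43 simply mirror the 44-patient cohort; the fold count and seed are illustrative):

```python
import random

def patient_folds(patient_ids, k=10, seed=0):
    """Shuffle patient IDs reproducibly and split them into k near-equal folds."""
    ids = list(patient_ids)
    random.Random(seed).shuffle(ids)
    return [ids[i::k] for i in range(k)]

folds = patient_folds(range(44), k=10)
for i, val_fold in enumerate(folds):
    # Train on the other 9 folds, validate on the held-out fold i.
    train = [p for f in folds if f is not val_fold for p in f]
```

Each of the 10 passes trains on nine folds and validates on the held-out one; averaging the per-fold DSCs gives the cross-validated score.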
Collapse
Affiliation(s)
- Yufeng Ye
- Department of Radiology, Panyu Central Hospital, Guangzhou, China
- Medical Imaging Institute of Panyu, Guangzhou, China
| | - Zongyou Cai
- Medical AI Lab, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Shenzhen University General Hospital Clinical Research Center for Neurological Diseases, Shenzhen, China
| | - Bin Huang
- Medical AI Lab, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Shenzhen University General Hospital Clinical Research Center for Neurological Diseases, Shenzhen, China
| | - Yan He
- Department of Oncology, Panyu Central Hospital, Guangzhou, China
- Cancer Institute of Panyu, Guangzhou, China
| | - Ping Zeng
- Department of Radiology, Shenzhen University General Hospital, Shenzhen, China
| | - Guorong Zou
- Department of Oncology, Panyu Central Hospital, Guangzhou, China
- Cancer Institute of Panyu, Guangzhou, China
| | - Wei Deng
- Department of Radiology, Panyu Central Hospital, Guangzhou, China
- Medical Imaging Institute of Panyu, Guangzhou, China
| | - Hanwei Chen
- Department of Radiology, Panyu Central Hospital, Guangzhou, China
- Medical Imaging Institute of Panyu, Guangzhou, China
| | - Bingsheng Huang
- Medical Imaging Institute of Panyu, Guangzhou, China
- Medical AI Lab, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
| |
Collapse
|
43
|
Ariji Y, Fukuda M, Kise Y, Nozawa M, Nagao T, Nakayama A, Sugita Y, Katumata A, Ariji E. A preliminary application of intraoral Doppler ultrasound images to deep learning techniques for predicting late cervical lymph node metastasis in early tongue cancers. ACTA ACUST UNITED AC 2019. [DOI: 10.1002/osi2.1039] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Affiliation(s)
- Yoshiko Ariji
- Department of Oral and Maxillofacial Radiology Aichi‐Gakuin University School of Dentistry Nagoya Japan
| | - Motoki Fukuda
- Department of Oral and Maxillofacial Radiology Aichi‐Gakuin University School of Dentistry Nagoya Japan
| | - Yoshitaka Kise
- Department of Oral and Maxillofacial Radiology Aichi‐Gakuin University School of Dentistry Nagoya Japan
| | - Michihito Nozawa
- Department of Oral and Maxillofacial Radiology Aichi‐Gakuin University School of Dentistry Nagoya Japan
| | - Toru Nagao
- Department of Maxillofacial Surgery Aichi‐Gakuin University School of Dentistry Nagoya Japan
| | - Atsushi Nakayama
- Department of Oral and Maxillofacial Surgery Aichi‐Gakuin University School of Dentistry Nagoya Japan
| | - Yoshihiko Sugita
- Department of Oral Pathology Aichi‐Gakuin University School of Dentistry Nagoya Japan
| | - Akitoshi Katumata
- Department of Oral Radiology Asahi University School of Dentistry Mizuho Japan
| | - Eiichiro Ariji
- Department of Oral and Maxillofacial Radiology Aichi‐Gakuin University School of Dentistry Nagoya Japan
| |
Collapse
|
44
|
Using deep learning techniques in medical imaging: a systematic review of applications on CT and PET. Artif Intell Rev 2019. [DOI: 10.1007/s10462-019-09788-3] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
|
45
|
Guo Z, Guo N, Gong K, Zhong S, Li Q. Gross tumor volume segmentation for head and neck cancer radiotherapy using deep dense multi-modality network. Phys Med Biol 2019; 64:205015. [PMID: 31514173 PMCID: PMC7186044 DOI: 10.1088/1361-6560/ab440d] [Citation(s) in RCA: 53] [Impact Index Per Article: 10.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
In radiation therapy, the accurate delineation of gross tumor volume (GTV) is crucial for treatment planning. However, it is challenging for head and neck cancer (HNC) due to the morphological complexity of the various organs in the head, low target-to-background contrast, and potential artifacts on conventional planning CT images. Thus, manual delineation of the GTV on anatomical images is extremely time consuming and suffers from inter-observer variability that leads to planning uncertainty. With the wide use of PET/CT imaging in oncology, complementary functional and anatomical information can be utilized for tumor contouring, bringing a significant advantage for radiation therapy planning. In this study, by taking advantage of multi-modality PET and CT images, we propose an automatic GTV segmentation framework based on deep learning for HNC. The backbone of this segmentation framework is based on 3D convolution with dense connections, which enables better information propagation and takes full advantage of the features extracted from the multi-modality input images. We evaluate our proposed framework on a dataset of 250 HNC patients, each of whom received both planning CT and PET/CT imaging before radiation therapy (RT). GTV contours manually delineated by radiation oncologists are used as ground truth. To further investigate the advantage of our proposed Dense-Net framework, we also compare it with a framework using 3D U-Net, the state of the art in segmentation tasks. For each framework, we additionally compare single-modality input (PET or CT alone) against multi-modality input (both PET and CT). Dice coefficient, mean surface distance (MSD), 95th-percentile Hausdorff distance (HD95), and displacement of mass centroid (DMC) are calculated for quantitative evaluation. The dataset is split into training (140 patients), validation (35 patients), and test (75 patients) groups to optimize the network.
On the independent test group, our proposed multi-modality Dense-Net (Dice 0.73) outperforms the compared network (Dice 0.71). Furthermore, the proposed Dense-Net structure has fewer trainable parameters than the 3D U-Net, which reduces prediction variability. In conclusion, our proposed multi-modality Dense-Net enables satisfactory GTV segmentation for HNC using multi-modality images and yields superior performance to conventional methods. It provides an automatic, fast, and consistent solution for GTV segmentation and shows potential for general application to radiation therapy planning for a variety of cancers (e.g., lung, sarcoma, and liver).
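Of the metrics reported above, HD95 is the 95th percentile of nearest-neighbour surface distances pooled over both directions, which damps the effect of single outlier points relative to the maximum Hausdorff distance. A minimal sketch in plain Python (the toy contours are illustrative; real use operates on 3D surface voxels with physical spacing):

```python
import math

def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two point sets.

    For every point in one set, take the distance to its nearest neighbour
    in the other set; pool both directions and return the 95th percentile.
    """
    def nearest_dists(src, dst):
        return [min(math.dist(p, q) for q in dst) for p in src]

    pooled = sorted(nearest_dists(points_a, points_b) +
                    nearest_dists(points_b, points_a))
    idx = min(len(pooled) - 1, int(round(0.95 * (len(pooled) - 1))))
    return pooled[idx]

# Toy contours: the corners of two unit squares offset by 1 mm in x
square = [(0, 0), (1, 0), (0, 1), (1, 1)]
shifted = [(x + 1.0, y) for x, y in square]
print(hd95(square, shifted))  # 1.0
```

With only a handful of points the percentile coincides with the maximum; on dense surfaces the two diverge, which is exactly why HD95 is preferred for segmentation evaluation.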
Collapse
Affiliation(s)
- Zhe Guo
- School of Information and Electronics, Beijing Institute of Technology, Beijing, China 100081
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA 02114
| | - Ning Guo
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA 02114
| | - Kuang Gong
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA 02114
| | - Shun’an Zhong
- School of Information and Electronics, Beijing Institute of Technology, Beijing, China 100081
| | - Quanzheng Li
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA 02114
| |
Collapse
|
46
|
Nensa F, Demircioglu A, Rischpler C. Artificial Intelligence in Nuclear Medicine. J Nucl Med 2019; 60:29S-37S. [DOI: 10.2967/jnumed.118.220590] [Citation(s) in RCA: 55] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2019] [Accepted: 05/16/2019] [Indexed: 02/06/2023] Open
|