1. Liu Y, Xia K, Cen Y, Ying S, Zhao Z. Artificial intelligence for caries detection: a novel diagnostic tool using deep learning algorithms. Oral Radiol 2024;40:375-384. PMID: 38498223. DOI: 10.1007/s11282-024-00741-x.
Abstract
OBJECTIVES The aim of this study was to develop an assessment tool for the automatic detection of dental caries in periapical radiographs using a convolutional neural network (CNN) architecture. METHODS A novel diagnostic model named ResNet + SAM was trained on a large set of periapical radiographs (4278 images) annotated by medical experts to automatically detect dental caries. Its performance was compared with that of traditional CNNs (VGG19, ResNet-50) and of dentists. The Gradient-weighted Class Activation Mapping (Grad-CAM) technique was used to visualize the image regions on which each CNN based its predictions. RESULTS ResNet + SAM demonstrated significantly improved performance compared with the modified ResNet-50 model, with an average F1 score of 0.886 (95% CI 0.855-0.918), accuracy of 0.885 (95% CI 0.862-0.901), and AUC of 0.954 (95% CI 0.924-0.980). The model also achieved higher accuracy than the junior dentists. With the assistance of the tool, the dentists reached a higher mean F1 score of 0.827, and interobserver agreement for dental caries improved from 0.592/0.610 to 0.706/0.723. CONCLUSIONS Based on the experimental results, the automatic assessment tool using the ResNet + SAM model shows strong performance and considerable promise for identifying dental caries. Used in clinical practice, the tool could support clinical decision-making in dentistry and reduce the workload of dentists.
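As a reminder of how the F1 and accuracy figures reported above are derived from a confusion matrix, here is a minimal sketch; the counts are illustrative only, not the study's data:

```python
def classification_metrics(tp, fp, fn, tn):
    """Compute F1 score and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return f1, accuracy

# Hypothetical counts: 85 caries found, 10 false alarms,
# 12 missed lesions, 93 correctly ruled-out teeth.
f1, acc = classification_metrics(tp=85, fp=10, fn=12, tn=93)
```

Note that F1 reduces to 2TP / (2TP + FP + FN), so it penalizes false positives and false negatives symmetrically.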
Affiliation(s)
- Yiliang Liu
- College of Computer Science, Sichuan University, No. 24 South Section 1, Yihuan Road, Chengdu, 610065, China
- State Key Laboratory of Fundamental Science on Synthetic Vision, College of Computer Science, Sichuan University, Chengdu, 610064, Sichuan, China
- Kai Xia
- State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, No. 14, 3rd Section, South Renmin Road, Chengdu, 610041, Sichuan, China
- Yueyan Cen
- State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, No. 14, 3rd Section, South Renmin Road, Chengdu, 610041, Sichuan, China
- Sancong Ying
- College of Computer Science, Sichuan University, No. 24 South Section 1, Yihuan Road, Chengdu, 610065, China
- State Key Laboratory of Fundamental Science on Synthetic Vision, College of Computer Science, Sichuan University, Chengdu, 610064, Sichuan, China
- Zhihe Zhao
- State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, No. 14, 3rd Section, South Renmin Road, Chengdu, 610041, Sichuan, China
2. Wan K, Li L, Jia D, Gao S, Qian W, Wu Y, Lin H, Mu X, Gao X, Wang S, Wu F, Zhuang X. Multi-target landmark detection with incomplete images via reinforcement learning and shape prior embedding. Med Image Anal 2023;89:102875. PMID: 37441881. DOI: 10.1016/j.media.2023.102875.
Abstract
Medical images are generally acquired with a limited field of view (FOV), which can truncate regions of interest (ROI) and thus poses a great challenge for medical image analysis. This is particularly evident in learning-based multi-target landmark detection, where algorithms can be misled into learning primarily the background variation caused by the varying FOV and consequently fail to detect the targets. Because they learn a navigation policy rather than predicting target locations directly, reinforcement learning (RL)-based methods have the potential to tackle this challenge efficiently. Inspired by this, we propose a multi-agent RL framework for simultaneous multi-target landmark detection. The framework learns from incomplete and/or complete images to form implicit knowledge of the global structure, which is consolidated during training and then used to detect targets in either complete or incomplete test images. To further exploit the global structural information in incomplete images explicitly, we embed a shape model into the RL process. With this prior knowledge, the proposed RL model can not only localize dozens of targets simultaneously but also work effectively and robustly in the presence of incomplete images. We validated the applicability and efficacy of the proposed method on various multi-target detection tasks with incomplete images from practical clinics, using body dual-energy X-ray absorptiometry (DXA), cardiac MRI, and head CT datasets. Results showed that our method could predict the whole set of landmarks with incomplete training images of up to 80% missing proportion (average distance error 2.29 cm on body DXA) and could detect unseen landmarks in regions with missing image information outside the FOV of target images (average distance error 6.84 mm on 3D half-head CT). Our code will be released via https://zmiclab.github.io/projects.html.
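The navigation-policy idea can be illustrated with a toy stand-in: instead of the paper's learned multi-agent deep policy, a hard-coded greedy agent that steps one pixel at a time toward a landmark. This is only a sketch of the search behaviour under that simplifying assumption, not the authors' method:

```python
def navigate(start, target, max_steps=50):
    """Greedy navigation policy: step one pixel along each axis toward
    the target, mimicking how an RL agent walks an ROI to a landmark."""
    x, y = start
    tx, ty = target
    path = [(x, y)]
    for _ in range(max_steps):
        if (x, y) == (tx, ty):
            break
        x += (tx > x) - (tx < x)  # -1, 0, or +1 along each axis
        y += (ty > y) - (ty < y)
        path.append((x, y))
    return path

path = navigate((0, 0), (5, 3))
```

A learned policy replaces the hard-coded step rule with a value network, which is what lets the real method cope with missing image content.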
Affiliation(s)
- Kaiwen Wan
- School of Data Science, Fudan University, Shanghai, 200433, China
- Lei Li
- Institute of Biomedical Engineering, University of Oxford, Oxford, UK
- Dengqiang Jia
- School of Naval Architecture, Ocean and Civil Engineering, Shanghai Jiao Tong University, Shanghai, China; Hong Kong Centre for Cerebro-Cardiovascular Health Engineering (COCHE), Hong Kong, China
- Shangqi Gao
- School of Data Science, Fudan University, Shanghai, 200433, China
- Wei Qian
- Shanghai Institute of Nutrition and Health, University of Chinese Academy of Sciences, Chinese Academy of Sciences, Shanghai, 200031, China
- Yingzhi Wu
- Department of Plastic Surgery, Huashan Hospital, Fudan University, Shanghai, 200040, China
- Huandong Lin
- Department of Endocrinology and Metabolism, Zhongshan Hospital, Fudan University, Shanghai, 200032, China
- Xiongzheng Mu
- Department of Plastic Surgery, Huashan Hospital, Fudan University, Shanghai, 200040, China
- Xin Gao
- Department of Endocrinology and Metabolism, Zhongshan Hospital, Fudan University, Shanghai, 200032, China
- Sijia Wang
- Shanghai Institute of Nutrition and Health, University of Chinese Academy of Sciences, Chinese Academy of Sciences, Shanghai, 200031, China
- Fuping Wu
- Nuffield Department of Population Health, University of Oxford, Oxford, UK
- Xiahai Zhuang
- School of Data Science, Fudan University, Shanghai, 200433, China
3. Wang Y, Chen H, Lin J, Dong S, Zhang W. Automatic detection and recognition of nasopharynx gross tumour volume (GTVnx) by deep learning for nasopharyngeal cancer radiotherapy through magnetic resonance imaging. Radiat Oncol 2023;18:76. PMID: 37158943. PMCID: PMC10165804. DOI: 10.1186/s13014-023-02260-1.
Abstract
BACKGROUND In this study, we propose a deep learning-based framework to automatically delineate the nasopharynx gross tumor volume (GTVnx) in MRI images. METHODS MRI images from 200 patients were collected for the training-validation and testing sets. Three popular deep learning models (FCN, U-Net, Deeplabv3) were evaluated for automatic GTVnx delineation. FCN was the first and simplest fully convolutional model. U-Net was proposed specifically for medical image segmentation. In Deeplabv3, the Atrous Spatial Pyramid Pooling (ASPP) block and the fully connected Conditional Random Field (CRF) may improve the detection of small, scattered tumor parts thanks to the different scales of the spatial pyramid layers. The three models were compared under the same criteria, except for the learning rate used for U-Net. Two widely applied evaluation metrics, mean intersection over union (mIoU) and mean pixel accuracy (mPA), were employed to evaluate the detection results. RESULTS The extensive experiments show that the results of FCN and Deeplabv3 are promising as benchmarks for automatic nasopharyngeal cancer detection. Deeplabv3 performed best, with an mIoU of 0.8529 ± 0.0017 and an mPA of 0.9103 ± 0.0039. FCN performed slightly worse in terms of detection accuracy, but both consumed similar GPU memory and training time. U-Net performed worst in both detection accuracy and memory consumption and is therefore not recommended for automatic GTVnx delineation. CONCLUSIONS The proposed framework for automatic target delineation of the GTVnx yields desirable and promising results: it is not only labor-saving but also makes contour evaluation more objective. These preliminary results provide clear directions for further study.
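The two metrics used here, mIoU and mPA, can be computed from integer label masks as follows; this is a generic sketch, and the tiny arrays below are illustrative rather than real segmentation output:

```python
import numpy as np

def miou_mpa(pred, gt, num_classes):
    """Mean IoU and mean pixel accuracy over classes for integer
    label arrays (pred and gt have the same shape)."""
    ious, pas = [], []
    for c in range(num_classes):
        p, g = pred == c, gt == c
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        if union:                       # skip classes absent from both
            ious.append(inter / union)
        if g.sum():                     # pixel accuracy needs GT pixels
            pas.append(inter / g.sum())
    return float(np.mean(ious)), float(np.mean(pas))

gt   = np.array([[0, 0, 1], [0, 1, 1]])   # toy ground-truth mask
pred = np.array([[0, 1, 1], [0, 1, 1]])   # toy predicted mask
miou, mpa = miou_mpa(pred, gt, num_classes=2)
```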
Affiliation(s)
- Yandan Wang
- Faculty of Computer Science and Technology, Wenzhou University, Wenzhou, China
- Hehe Chen
- College of Intelligent Manufacturing, Wenzhou Polytechnic, Wenzhou, China
- Jie Lin
- Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Shi Dong
- Department of Radiotherapy, Wenzhou Central Hospital, Dingli Clinical Medical School of Wenzhou Medical University, Wenzhou, China
- Wenyi Zhang
- Department of Radiotherapy, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
4. Yang X, Wu J, Chen X. Application of Artificial Intelligence to the Diagnosis and Therapy of Nasopharyngeal Carcinoma. J Clin Med 2023;12(9):3077. PMID: 37176518. PMCID: PMC10178972. DOI: 10.3390/jcm12093077.
Abstract
Artificial intelligence (AI) is an interdisciplinary field that encompasses a wide range of computer science disciplines, including image recognition, machine learning, human-computer interaction, and robotics. Recently, AI, and especially deep learning algorithms, has shown excellent performance in image recognition, automatically performing quantitative evaluation of complex medical image features to improve diagnostic accuracy and efficiency. AI is finding increasingly broad and deep application across medical diagnosis, treatment, and prognosis. Nasopharyngeal carcinoma (NPC) occurs frequently in southern China and Southeast Asian countries and is the most common head and neck cancer in the region. Detecting and treating NPC early is crucial for a good prognosis. This paper describes the basic concepts of AI, including traditional machine learning and deep learning algorithms, and their clinical applications in detecting and assessing NPC lesions, facilitating treatment, and predicting prognosis. The main limitations of current AI technologies are briefly described, including interpretability issues, privacy and security concerns, and the need for large amounts of annotated data. Finally, we discuss the remaining challenges and the promising future of using AI to diagnose and treat NPC.
Affiliation(s)
- Xinggang Yang
- Division of Biotherapy, Cancer Center, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Guoxue Road 37, Chengdu 610041, China
- Juan Wu
- Out-Patient Department, West China Hospital, Sichuan University, Guoxue Road 37, Chengdu 610041, China
- Xiyang Chen
- Division of Vascular Surgery, Department of General Surgery, West China Hospital, Sichuan University, Guoxue Road 37, Chengdu 610041, China
5. Hasan Z, Key S, Habib AR, Wong E, Aweidah L, Kumar A, Sacks R, Singh N. Convolutional Neural Networks in ENT Radiology: Systematic Review of the Literature. Ann Otol Rhinol Laryngol 2023;132:417-430. PMID: 35651308. DOI: 10.1177/00034894221095899.
Abstract
INTRODUCTION Convolutional neural networks (CNNs) represent a state-of-the-art methodological technique in AI and deep learning, created specifically for image classification and computer vision tasks. CNNs have been applied to radiology in a number of disciplines, mostly outside otolaryngology, potentially due to a lack of familiarity with this technology within the otolaryngology community. CNNs have the potential to revolutionize clinical practice by reducing the time required to perform manual tasks. This literature search aims to present a comprehensive systematic review of the published literature on CNNs and their utility to date in ENT radiology. METHODS Data were extracted from a variety of databases including PubMed, ProQuest, MEDLINE, Open Knowledge Maps, and Gale OneFile Computer Science. Medical Subject Headings (MeSH) terms and keywords were used to extract related literature from each database's inception to October 2020. Inclusion criteria were studies in which CNNs were the main intervention and that focused on radiology relevant to ENT. Titles and abstracts were reviewed, followed by full texts. Once the final list of articles was obtained, their reference lists were also searched to identify further articles. RESULTS Thirty articles were identified for inclusion in this study. Studies utilizing CNNs in most ENT subspecialties were identified. Studies used CNNs for a number of tasks, including identification of structures, detection of pathology, and segmentation of tumors for radiotherapy planning. All studies reported a high degree of accuracy of CNNs in performing the chosen task. CONCLUSION This study provides a better understanding of the CNN methodology used in ENT radiology, demonstrating a myriad of potential uses for this exciting technology, including nodule and tumor identification, identification of anatomical variation, and segmentation of tumors. It is anticipated that this field will continue to evolve and that these technologies and methodologies will become more entrenched in everyday practice.
Affiliation(s)
- Zubair Hasan
- Faculty of Medicine and Health, University of Sydney, Camperdown, NSW, Australia
- Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, NSW, Australia
- Seraphina Key
- Faculty of Medicine, Nursing and Health Sciences, Monash University, Clayton, VIC, Australia
- Al-Rahim Habib
- Faculty of Medicine and Health, University of Sydney, Camperdown, NSW, Australia
- Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, NSW, Australia
- Department of Otolaryngology - Head and Neck Surgery, Princess Alexandra Hospital, Woolloongabba, QLD, Australia
- Eugene Wong
- Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, NSW, Australia
- Layal Aweidah
- Faculty of Medicine, University of Notre Dame, Darlinghurst, NSW, Australia
- Ashnil Kumar
- School of Biomedical Engineering, Faculty of Engineering, University of Sydney, Darlington, NSW, Australia
- Raymond Sacks
- Faculty of Medicine and Health, University of Sydney, Camperdown, NSW, Australia
- Department of Otolaryngology - Head and Neck Surgery, Concord Hospital, Concord, NSW, Australia
- Narinder Singh
- Faculty of Medicine and Health, University of Sydney, Camperdown, NSW, Australia
- Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, NSW, Australia
6. Xie H, Chen Z, Deng J, Zhang J, Duan H, Li Q. Automatic segmentation of the gross target volume in radiotherapy for lung cancer using TransResSEUnet2.5D network. J Transl Med 2022;20:524. PMID: 36371220. PMCID: PMC9652981. DOI: 10.1186/s12967-022-03732-w.
Abstract
Objective This paper proposes a method using the TransResSEUnet2.5D network for accurate automatic segmentation of the gross target volume (GTV) in radiotherapy for lung cancer. Methods A total of 11,370 computed tomography (CT) images from 137 cases of lung cancer patients undergoing radiotherapy, with target volumes delineated by radiotherapists, were used as the training set; 1642 CT images from 20 cases were used as the validation set, and 1685 CT images from 20 cases were used as the test set. The proposed network was tuned and trained to obtain the best segmentation model, and its performance was measured by the Dice similarity coefficient (DSC) and the 95% Hausdorff distance (HD95). Lastly, to demonstrate the accuracy of the proposed network's automatic segmentation, all possible mirrors of the input images were fed into Unet2D, Unet2.5D, Unet3D, ResSEUnet3D, ResSEUnet2.5D, and TransResUnet2.5D, and their respective segmentation performances were compared and assessed. Results On the test set, TransResSEUnet2.5D performed best among the compared networks in the DSC (84.08 ± 0.04)%, HD95 (8.11 ± 3.43) mm, and time (6.50 ± 1.31) s metrics. Conclusions The TransResSEUnet2.5D proposed in this study can automatically segment the GTV in radiotherapy for lung cancer patients with greater accuracy.
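For reference, the two reported metrics can be sketched in a few lines: Dice overlap on binary masks, and a brute-force 95th-percentile Hausdorff distance on point sets. The inputs below are toy examples, not the study's contours:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a_pts, b_pts):
    """95th-percentile symmetric Hausdorff distance between two point
    sets (rows are coordinates); a simple O(n*m) sketch."""
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))

a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True  # toy mask A
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True  # toy mask B
dsc = dice(a, b)
```

Production pipelines usually compute HD95 on mask surfaces with a distance transform rather than this quadratic pairwise form, which is only practical for small point sets.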
7. Domoguen JKL, Manuel JJA, Cañal JPA, Naval PC. Automatic segmentation of nasopharyngeal carcinoma on CT images using efficient UNet-2.5D ensemble with semi-supervised pretext task pretraining. Front Oncol 2022;12:980312. DOI: 10.3389/fonc.2022.980312.
Abstract
Nasopharyngeal carcinoma (NPC) is primarily treated with radiation therapy, so accurate delineation of target volumes and organs at risk is important. However, manual delineation is time-consuming, variable, and subjective, depending on the experience of the radiation oncologist. This work explores deep learning methods to automate the segmentation of the NPC primary gross tumor volume (GTVp) in planning computed tomography (CT) images. A total of sixty-three (63) patients diagnosed with NPC were included in this study. Although a number of studies have shown the effectiveness of deep learning methods in medical imaging, their high performance has mainly been due to the wide availability of data. In contrast, data for NPC are scarce and inaccessible. To tackle this problem, we propose two sequential approaches. First, we propose a much simpler architecture that follows the UNet design but uses a 2D convolutional network for 3D segmentation. We find that this specific architecture is much more effective for segmentation of the GTV in NPC, and we highlight its efficacy over more popular and modern architectures by achieving significantly higher performance. To further improve performance, we trained the model on a multi-scale dataset to create an ensemble of models. However, the performance of the model ultimately depends on the availability of labelled data. Hence, building on this architecture, we employ semi-supervised learning with combined pretext tasks: specifically, 3D rotation and 3D relative-patch-location pretext tasks are used to pretrain the feature extractor, drawing on an additional 50 CT images of healthy patients that have no annotations or labels. After semi-supervised pretraining, the feature extractor can be frozen, which makes training much more parameter-efficient since only the decoder is trained. Finally, the approach is efficient not only in parameters but also in data: the pretrained model, given only a portion of the labelled training data, achieved performance very close to that of the model trained with the full labelled data.
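The rotation pretext idea, in which labels come for free from the transformation itself, can be sketched in 2D; the paper uses 3D rotations plus relative patch locations, so this is a simplified analogue, not the authors' pipeline:

```python
import numpy as np

def rotation_pretext_batch(image):
    """Self-supervised pretext sample generation: rotate an image by
    0/90/180/270 degrees and use the rotation index k as a free label,
    so no manual annotation is needed to train the feature extractor."""
    return [(np.rot90(image, k), k) for k in range(4)]

img = np.arange(9).reshape(3, 3)      # stand-in for a CT slice
batch = rotation_pretext_batch(img)   # 4 (rotated image, label) pairs
```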
8. Using an Ultrasound Tissue Phantom Model for Hybrid Training of Deep Learning Models for Shrapnel Detection. J Imaging 2022;8(10):270. PMID: 36286364. PMCID: PMC9604600. DOI: 10.3390/jimaging8100270.
Abstract
Tissue phantoms are important in medical research to reduce the use of animal or human tissue when testing or troubleshooting new devices or technology. Development of machine-learning detection tools that rely on large ultrasound imaging data sets can potentially be streamlined with high-quality phantoms that closely mimic important features of biological tissue. Here, we demonstrate how an ultrasound-compliant tissue phantom, composed of multiple layers of gelatin that mimic bone, fat, and muscle tissue types, can be used for machine-learning training. The phantom has a heterogeneous composition to introduce tissue-level complexity and subject variability. Various shrapnel types were inserted into the phantom for ultrasound imaging to supplement swine shrapnel image sets captured for applications such as deep learning algorithms. With a previously developed shrapnel detection algorithm, accuracy on blind swine test images exceeded 95% when the training set comprised 75% tissue phantom images, with the rest being swine images. For comparison, a conventional MobileNetv2 deep learning model was trained with the same training image set and achieved over 90% accuracy on swine predictions. Overall, the tissue phantom performed well for developing deep learning models for ultrasound image classification.
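The hybrid training mix described above, 75% phantom images and 25% animal images, amounts to a simple dataset-composition step. A minimal sketch of that step follows; the file names are hypothetical placeholders, not the study's data:

```python
import random

def hybrid_training_set(phantom_imgs, animal_imgs,
                        phantom_fraction, n_total, seed=0):
    """Compose a mixed training list: phantom_fraction of the samples
    are drawn from phantom images and the remainder from animal images
    (sampling with replacement), then shuffled together."""
    rng = random.Random(seed)
    n_ph = round(phantom_fraction * n_total)
    mix = [rng.choice(phantom_imgs) for _ in range(n_ph)] + \
          [rng.choice(animal_imgs) for _ in range(n_total - n_ph)]
    rng.shuffle(mix)
    return mix

train = hybrid_training_set([f"ph{i}" for i in range(10)],   # phantom IDs
                            [f"sw{i}" for i in range(10)],   # swine IDs
                            phantom_fraction=0.75, n_total=100)
```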
9. Liao W, He J, Luo X, Wu M, Shen Y, Li C, Xiao J, Wang G, Chen N. Automatic Delineation of Gross Tumor Volume Based on Magnetic Resonance Imaging by Performing a Novel Semisupervised Learning Framework in Nasopharyngeal Carcinoma. Int J Radiat Oncol Biol Phys 2022;113:893-902. PMID: 35381322. DOI: 10.1016/j.ijrobp.2022.03.031.
Abstract
PURPOSE We aimed to validate the accuracy and clinical value of a novel semisupervised learning framework for gross tumor volume (GTV) delineation in nasopharyngeal carcinoma. METHODS AND MATERIALS Two hundred fifty-eight patients with magnetic resonance imaging data sets were divided into training (n = 180), validation (n = 20), and testing (n = 58) cohorts. Ground truth contours of the nasopharynx GTV (GTVnx) and node GTV (GTVnd) were manually delineated by 2 experienced radiation oncologists. Twenty percent (n = 36) labeled and 80% (n = 144) unlabeled images were used to train the model, producing model-generated contours for patients in the testing cohort. Nine experienced experts were invited to revise the model-generated GTV in 20 randomly selected patients from the testing cohort. Six junior oncologists were asked to delineate the GTV in 12 randomly selected patients from the testing cohort without and with the assistance of the model, and revision degrees were compared between these 2 modes. The Dice similarity coefficient (DSC) was used to quantify the accuracy of the model. RESULTS The model-generated contours showed high accuracy compared with the ground truth contours, with average DSC scores of 0.83 and 0.80 for GTVnx and GTVnd, respectively. There was no significant difference in DSC score between T1-2 and T3-4 patients (0.81 vs 0.83; P = .223) or between N1-2 and N3 patients (0.80 vs 0.79; P = .807). The mean revision degree was lower than 10% in 19 (95%) patients for GTVnx and in 16 (80%) patients for GTVnd. With the assistance of the model, the mean revision degree by junior oncologists was reduced from 25.63% to 7.75% for GTVnx and from 21.38% to 14.44% for GTVnd, while delineation efficiency improved by over 60%. CONCLUSIONS The proposed semisupervised learning-based model showed high accuracy in delineating the GTV of nasopharyngeal carcinoma. It is clinically applicable and could help junior oncologists improve GTV contouring accuracy and save contouring time.
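A common building block of semisupervised pipelines like the one above is confidence-based pseudo-label selection. The sketch below shows that generic filtering step under the assumption of per-sample class probabilities; it is not a description of this paper's exact framework:

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.9):
    """Keep only unlabeled samples whose maximum predicted class
    probability exceeds a confidence threshold; returns the pseudo
    labels and the indices of the retained samples."""
    conf = probs.max(axis=1)        # per-sample confidence
    labels = probs.argmax(axis=1)   # per-sample predicted class
    keep = conf >= threshold
    return labels[keep], np.flatnonzero(keep)

# Toy softmax outputs for 3 unlabeled samples, 2 classes.
probs = np.array([[0.95, 0.05],
                  [0.55, 0.45],
                  [0.08, 0.92]])
labels, idx = select_pseudo_labels(probs)
```

The retained (sample, pseudo-label) pairs are then added to the labeled pool for the next training round; the middle, low-confidence sample is discarded.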
Affiliation(s)
- Wenjun Liao
- Department of Radiation Oncology, Cancer Center and State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, China; Department of Radiation Oncology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Jinlan He
- Department of Radiation Oncology, Cancer Center and State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, China
- Xiangde Luo
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Mengwan Wu
- Cancer Clinical Research Center, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, Chengdu, China
- Yuanyuan Shen
- Cancer Clinical Research Center, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, Chengdu, China
- Churong Li
- Department of Radiation Oncology, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, Chengdu, China
- Jianghong Xiao
- Department of Radiation Oncology, Cancer Center and State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, China
- Guotai Wang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Nianyong Chen
- Department of Radiation Oncology, Cancer Center and State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, China
10. Li S, Liu J, Zhou Z, Zhou Z, Wu X, Li Y, Wang S, Liao W, Ying S, Zhao Z. Artificial intelligence for caries and periapical periodontitis detection. J Dent 2022;122:104107. PMID: 35341892. DOI: 10.1016/j.jdent.2022.104107.
Abstract
OBJECTIVES Periapical periodontitis and caries are common chronic oral diseases affecting most teenagers and adults worldwide. The purpose of this study was to develop an evaluation tool to automatically detect dental caries and periapical periodontitis on periapical radiographs using deep learning. METHODS A modified deep learning model was developed using a large dataset (4,129 images) with high-quality annotations to support the automatic detection of both dental caries and periapical periodontitis. The performance of the model was compared to the classification performance of dentists. RESULTS The deep learning model automatically distinguished dental caries with an F1-score of 0.829 and periapical periodontitis with an F1-score of 0.828. The comparison of model-only and expert-only detection performance showed that the accuracy of the fully automatic method was significantly higher than that of the junior dentists. With deep learning assistance, the experts not only reached a higher diagnostic accuracy with an average F1-score of 0.7844 for dental caries and 0.8208 for periapical periodontitis compared to expert-only scenarios but also increased interobserver agreement from 0.585/0.590 to 0.726/0.713 for dental caries and from 0.623/0.563 to 0.752/0.740 for periapical periodontitis. CONCLUSIONS Based on the experimental results, deep learning can improve the accuracy and consistency of evaluating dental caries and periapical periodontitis on periapical radiographs. CLINICAL SIGNIFICANCE Deep learning models can improve accuracy and consistency and reduce the workload of dentists, making AI a powerful tool for clinical practice.
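The interobserver agreement values quoted above are consistent with a kappa-style statistic. Assuming Cohen's kappa for two raters with binary labels (the abstract does not name the exact statistic), a minimal sketch is:

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' binary labels: observed agreement
    corrected for the agreement expected by chance."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n        # observed
    p1, p2 = sum(r1) / n, sum(r2) / n
    pe = p1 * p2 + (1 - p1) * (1 - p2)                  # chance
    return (po - pe) / (1 - pe)

# Hypothetical per-tooth caries calls by two raters (1 = caries).
rater1 = [1, 1, 0, 0, 1, 0, 1, 0]
rater2 = [1, 0, 0, 0, 1, 0, 1, 1]
kappa = cohens_kappa(rater1, rater2)
```

Values near 0.6 indicate moderate-to-substantial agreement, which is the range the study reports moving through with AI assistance.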
Affiliation(s)
- Shihao Li
- National Key Laboratory of Fundamental Science on Synthetic Vision, College of Computer Science, Sichuan University, No. 24 South Section 1, Yihuan Road, Chengdu, 610065, Sichuan, China
- Jialing Liu
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, No. 17 People's South Road, Chengdu, 610041, Sichuan, China
- Zirui Zhou
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, No. 17 People's South Road, Chengdu, 610041, Sichuan, China
- Zilin Zhou
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, No. 17 People's South Road, Chengdu, 610041, Sichuan, China
- Xiaoyue Wu
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, No. 17 People's South Road, Chengdu, 610041, Sichuan, China
- Yazhen Li
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, No. 17 People's South Road, Chengdu, 610041, Sichuan, China
- Shida Wang
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, No. 17 People's South Road, Chengdu, 610041, Sichuan, China
- Wen Liao
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, No. 17 People's South Road, Chengdu, 610041, Sichuan, China
- Sancong Ying
- College of Computer Science, Sichuan University, Chengdu, Sichuan 610041, China
- Zhihe Zhao
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, No. 17 People's South Road, Chengdu, 610041, Sichuan, China
| |
|
11
|
Yang G, Dai Z, Zhang Y, Zhu L, Tan J, Chen Z, Zhang B, Cai C, He Q, Li F, Wang X, Yang W. Multiscale Local Enhancement Deep Convolutional Networks for the Automated 3D Segmentation of Gross Tumor Volumes in Nasopharyngeal Carcinoma: A Multi-Institutional Dataset Study. Front Oncol 2022; 12:827991. [PMID: 35387126 PMCID: PMC8979212 DOI: 10.3389/fonc.2022.827991] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2021] [Accepted: 02/24/2022] [Indexed: 01/10/2023] Open
Abstract
Purpose Accurate segmentation of gross target volume (GTV) from computed tomography (CT) images is a prerequisite in radiotherapy for nasopharyngeal carcinoma (NPC). However, this task is very challenging due to the low contrast at the tumor boundary and the great variety of tumor sizes and morphologies across different stages. Meanwhile, the data source also seriously affects the segmentation results. In this paper, we propose a novel three-dimensional (3D) automatic segmentation algorithm that adopts cascaded multiscale local enhancement of convolutional neural networks (CNNs), and we conduct experiments on multi-institutional datasets to address the above problems. Materials and Methods In this study, we retrospectively collected CT images of 257 NPC patients to test the performance of the proposed automatic segmentation model and conducted experiments on two additional multi-institutional datasets. Our novel segmentation framework consists of three parts. First, the segmentation framework is based on a 3D Res-UNet backbone model that has excellent segmentation performance. Then, we adopt a multiscale dilated convolution block to enhance the receptive field and focus on the target area and boundary for segmentation improvement. Finally, a central localization cascade model for local enhancement is designed to concentrate on the GTV region for fine segmentation to improve robustness. The Dice similarity coefficient (DSC), positive predictive value (PPV), sensitivity (SEN), average symmetric surface distance (ASSD) and 95% Hausdorff distance (HD95) are used as quantitative evaluation criteria to estimate the performance of our automated segmentation algorithm.
Results The experimental results show that, compared with other state-of-the-art methods, our modified 3D Res-UNet backbone has excellent performance and achieves the best results on the quantitative metrics DSC, PPV, ASSD and HD95, which reached 74.49 ± 7.81%, 79.97 ± 13.90%, 1.49 ± 0.65 mm and 5.06 ± 3.30 mm, respectively. It should be noted that the receptive field enhancement mechanism and cascade architecture have a great impact on the stable output of automatic segmentation results with high accuracy, which is critical for an algorithm. The final DSC, SEN, ASSD and HD95 values can be increased to 76.23 ± 6.45%, 79.14 ± 12.48%, 1.39 ± 5.44 mm and 4.72 ± 3.04 mm, respectively. In addition, the outcomes of multi-institution experiments demonstrate that our model is robust and generalizable and can achieve good performance through transfer learning. Conclusions The proposed algorithm could accurately segment NPC in CT images from multi-institutional datasets and thereby may improve and facilitate clinical applications.
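Several entries in this list report the Dice similarity coefficient (DSC) as their primary overlap metric. As a minimal illustration of how that metric is defined (a sketch for orientation only, not code from the cited study), the DSC of two binary masks is twice their overlap divided by their total size:

```python
def dice_similarity(mask_a, mask_b):
    """Dice similarity coefficient of two binary segmentation masks.

    mask_a, mask_b: flat sequences of 0/1 voxel labels of equal length.
    DSC = 2 * |A intersect B| / (|A| + |B|); two empty masks score 1.0.
    """
    if len(mask_a) != len(mask_b):
        raise ValueError("masks must have the same number of voxels")
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 1.0 if total == 0 else 2.0 * intersection / total

# Toy example: two 8-voxel masks that agree on 3 of 4 foreground voxels.
auto_mask   = [1, 1, 1, 0, 0, 0, 1, 0]
manual_mask = [1, 1, 1, 1, 0, 0, 0, 0]
print(dice_similarity(auto_mask, manual_mask))  # prints 0.75
```

A DSC of 1.0 means perfect overlap, so the 0.74-0.76 values reported above indicate roughly three-quarters agreement between automatic and expert contours.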
Affiliation(s)
- Geng Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
- Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Zhenhui Dai
- Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Yiwen Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
| | - Lin Zhu
- Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Junwen Tan
- Department of Oncology, The Fourth Affiliated Hospital of Guangxi Medical University, Liuzhou, China
| | - Zefeiyun Chen
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
| | - Bailin Zhang
- Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Chunya Cai
- Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Qiang He
- Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Fei Li
- Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Xuetao Wang
- Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- *Correspondence: Wei Yang, ; Xuetao Wang,
| | - Wei Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
- *Correspondence: Wei Yang, ; Xuetao Wang,
| |
|
12
|
Tao G, Li H, Huang J, Han C, Chen J, Ruan G, Huang W, Hu Y, Dan T, Zhang B, He S, Liu L, Cai H. SeqSeg: A Sequential Method to Achieve Nasopharyngeal Carcinoma Segmentation Free from Background Dominance. Med Image Anal 2022; 78:102381. [DOI: 10.1016/j.media.2022.102381] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2021] [Revised: 01/18/2022] [Accepted: 01/31/2022] [Indexed: 11/30/2022]
|
13
|
Zhang J, Gu L, Han G, Liu X. AttR2U-Net: A Fully Automated Model for MRI Nasopharyngeal Carcinoma Segmentation Based on Spatial Attention and Residual Recurrent Convolution. Front Oncol 2022; 11:816672. [PMID: 35155206 PMCID: PMC8832031 DOI: 10.3389/fonc.2021.816672] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2021] [Accepted: 12/17/2021] [Indexed: 11/13/2022] Open
Abstract
Radiotherapy is an essential method for treating nasopharyngeal carcinoma (NPC), and the segmentation of NPC is a crucial process affecting the treatment. However, manual segmentation of NPC is inefficient, and the segmentation results of different doctors might vary considerably. To improve the efficiency and consistency of NPC segmentation, we propose a novel AttR2U-Net model which automatically and accurately segments nasopharyngeal carcinoma from MRI images. This model is based on the classic U-Net and incorporates advanced mechanisms such as spatial attention, residual connection, recurrent convolution, and normalization to improve the segmentation performance. Our model features recurrent convolution and residual connections in each layer to improve its ability to extract details. Moreover, spatial attention is fused into the network by skip connections to pinpoint cancer areas more accurately. Our model achieves a DSC value of 0.816 on the NPC segmentation task and obtains the best performance compared with six other state-of-the-art image segmentation models.
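The spatial attention mentioned above can be pictured as a sigmoid gate that rescales each spatial location of a feature map by a learned relevance score, suppressing background and emphasizing likely tumor regions. The sketch below is a deliberately simplified pure-Python toy; the function name and the single linear scoring step are illustrative assumptions, not the AttR2U-Net implementation:

```python
import math

def spatial_attention(features, score_weights):
    """Toy spatial-attention gate over a 2D feature map (illustrative only).

    features: H x W grid, each cell a list of channel values.
    score_weights: per-channel weights used to score each location.
    Each location is multiplied by sigmoid(score), so high-scoring
    positions pass through and low-scoring ones are attenuated.
    """
    gated = []
    for row in features:
        gated_row = []
        for channels in row:
            score = sum(w * c for w, c in zip(score_weights, channels))
            gate = 1.0 / (1.0 + math.exp(-score))  # sigmoid in (0, 1)
            gated_row.append([gate * c for c in channels])
        gated.append(gated_row)
    return gated

# One row, two locations: the positively scored location is kept
# almost intact, the negatively scored one is strongly suppressed.
out = spatial_attention([[[2.0, 0.0], [-2.0, 0.0]]], [1.0, 0.0])
print(out[0][0][0], out[0][1][0])
```

In a real network the scoring weights are learned end-to-end and the gate is applied inside skip connections, as the abstract describes.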
Affiliation(s)
- Jiajing Zhang
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, China
| | - Lin Gu
- RIKEN Center for Advanced Intelligence Project (AIP), Tokyo, Japan
| | - Guanghui Han
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, China
- School of Information Engineering, North China University of Water Resources and Electric Power, Zhengzhou, China
- *Correspondence: Xiujian Liu, ; Guanghui Han,
| | - Xiujian Liu
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, China
- *Correspondence: Xiujian Liu, ; Guanghui Han,
| |
|
14
|
Ng WT, But B, Choi HCW, de Bree R, Lee AWM, Lee VHF, López F, Mäkitie AA, Rodrigo JP, Saba NF, Tsang RKY, Ferlito A. Application of Artificial Intelligence for Nasopharyngeal Carcinoma Management - A Systematic Review. Cancer Manag Res 2022; 14:339-366. [PMID: 35115832 PMCID: PMC8801370 DOI: 10.2147/cmar.s341583] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2021] [Accepted: 12/25/2021] [Indexed: 12/15/2022] Open
Abstract
INTRODUCTION Nasopharyngeal carcinoma (NPC) is endemic to Eastern and South-Eastern Asia, and, in 2020, 77% of global cases were diagnosed in these regions. Apart from its distinct epidemiology, its natural behavior, treatment, and prognosis differ from those of other head and neck cancers. With the growing use of artificial intelligence (AI), especially deep learning (DL), in head and neck cancer care, we sought to explore the unique clinical applications and implementation directions of AI in the management of NPC. METHODS A search protocol was used to collect publications applying AI, machine learning (ML) and DL to NPC management from PubMed, Scopus and Embase. The articles were filtered using inclusion and exclusion criteria, and the quality of the papers was assessed. Data were extracted from the finalized articles. RESULTS A total of 78 articles were reviewed after removing duplicates and papers that did not meet the inclusion and exclusion criteria. After quality assessment, 60 papers were included in the current study. There were four main types of applications: auto-contouring, diagnosis, prognosis, and miscellaneous applications (especially radiotherapy planning). Different forms of convolutional neural networks (CNNs) accounted for the majority of DL algorithms used, while the artificial neural network (ANN) was the most frequent ML model implemented. CONCLUSION An overall positive impact was identified from AI implementation in the management of NPC. With improving AI algorithms, we envisage that AI will soon be available as a routine application in clinical settings.
Affiliation(s)
- Wai Tong Ng
- Clinical Oncology Center, The University of Hong Kong-Shenzhen Hospital, Shenzhen, People’s Republic of China
- Department of Clinical Oncology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong, China
| | - Barton But
- Department of Clinical Oncology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong, China
| | - Horace C W Choi
- Department of Public Health, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong, China
| | - Remco de Bree
- Department of Head and Neck Surgical Oncology, University Medical Center Utrecht, Utrecht, the Netherlands
| | - Anne W M Lee
- Clinical Oncology Center, The University of Hong Kong-Shenzhen Hospital, Shenzhen, People’s Republic of China
- Department of Clinical Oncology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong, China
| | - Victor H F Lee
- Clinical Oncology Center, The University of Hong Kong-Shenzhen Hospital, Shenzhen, People’s Republic of China
- Department of Clinical Oncology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong, China
| | - Fernando López
- Department of Otolaryngology, Hospital Universitario Central de Asturias (HUCA), Instituto de Investigación Sanitaria del Principado de Asturias (ISPA), Instituto Universitario de Oncología del Principado de Asturias (IUOPA), University of Oviedo, Oviedo, 33011, Spain
- Spanish Biomedical Research Network Centre in Oncology, CIBERONC, Madrid, 28029, Spain
| | - Antti A Mäkitie
- Department of Otorhinolaryngology - Head and Neck Surgery, HUS Helsinki University Hospital and University of Helsinki, Helsinki, Finland
- Research Program in Systems Oncology, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Division of Ear, Nose and Throat Diseases, Department of Clinical Sciences, Intervention and Technology, Karolinska Institutet and Karolinska University Hospital, Stockholm, Sweden
| | - Juan P Rodrigo
- Department of Otolaryngology, Hospital Universitario Central de Asturias (HUCA), Instituto de Investigación Sanitaria del Principado de Asturias (ISPA), Instituto Universitario de Oncología del Principado de Asturias (IUOPA), University of Oviedo, Oviedo, 33011, Spain
- Spanish Biomedical Research Network Centre in Oncology, CIBERONC, Madrid, 28029, Spain
| | - Nabil F Saba
- Department of Hematology and Medical Oncology, Emory University School of Medicine, Atlanta, GA, USA
| | - Raymond K Y Tsang
- Division of Otorhinolaryngology, Department of Surgery, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong, People's Republic of China
| | - Alfio Ferlito
- Coordinator of the International Head and Neck Scientific Group, Padua, Italy
| |
|
15
|
Zhou H, Li Y, Gu Y, Shen Z, Zhu X, Ge Y. A deep learning based automatic segmentation approach for anatomical structures in intensity modulation radiotherapy. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2021; 18:7506-7524. [PMID: 34814260 DOI: 10.3934/mbe.2021371] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
OBJECTIVE To evaluate an automatic segmentation approach for organs at risk (OARs) and compare dose volume histogram (DVH) parameters in radiotherapy. METHODOLOGY Thirty-three patients were selected and their OARs were contoured using an automatic segmentation approach based on U-Net, applied to nasopharyngeal carcinoma (NPC), breast cancer, and rectal cancer cases respectively. The automatic contours were transferred to the Pinnacle System to evaluate contour accuracy and compare DVH parameters. RESULTS Manual contouring took 56.5 ± 9, 23.12 ± 4.23 and 45.23 ± 2.39 min for the OARs of NPC, breast and rectal cancer, respectively, versus 1.5 ± 0.23, 1.45 ± 0.78 and 1.8 ± 0.56 min for automatic contouring. For NPC, the eye had the best Dice similarity coefficient (DSC) of 0.907 ± 0.02, while the spinal cord had the poorest DSC of 0.459 ± 0.112; for breast cancer, the lung had the best DSC of 0.944 ± 0.03 and the spinal cord the poorest of 0.709 ± 0.1; for rectal cancer, the bladder had the best DSC of 0.91 ± 0.04 and the femoral heads the poorest of 0.43 ± 0.1. The spinal cord contours in head and neck cases had poor results due to the division of the medulla oblongata, and the femoral head contours, contrary to expectation, also had poor DSC values owing to the manual contours. CONCLUSION The automatic contouring approach based on deep learning has sufficient accuracy for research purposes. However, the DSC value does not fully reflect the accuracy of the dose distribution, since changes in OAR volume can cause dose changes not captured by the DSC. Considering the significant time savings and good performance for some OARs, automatic contouring can also play a supervisory role.
Affiliation(s)
- Han Zhou
- School of Electronic Science and Engineering, Nanjing University, Nanjing, Jiangsu 210046, China
- Department of Radiation Oncology The Fourth Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu, 210002, China
| | - Yikun Li
- Department of Radiation Oncology, Jinling Hospital, Nanjing, Jiangsu, 210002, China
| | - Ying Gu
- Department of Radiation Oncology, Jinling Hospital, Nanjing, Jiangsu, 210002, China
| | - Zetian Shen
- Department of Radiation Oncology The Fourth Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu, 210002, China
| | - Xixu Zhu
- Department of Radiation Oncology, Jinling Hospital, Nanjing, Jiangsu, 210002, China
| | - Yun Ge
- School of Electronic Science and Engineering, Nanjing University, Nanjing, Jiangsu 210046, China
| |
|
16
|
Analytical performance of aPROMISE: automated anatomic contextualization, detection, and quantification of [ 18F]DCFPyL (PSMA) imaging for standardized reporting. Eur J Nucl Med Mol Imaging 2021; 49:1041-1051. [PMID: 34463809 PMCID: PMC8803714 DOI: 10.1007/s00259-021-05497-8] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2021] [Accepted: 07/09/2021] [Indexed: 11/21/2022]
Abstract
Purpose The application of automated image analyses could improve and facilitate standardization and consistency of quantification in [18F]DCFPyL (PSMA) PET/CT scans. In the current study, we analytically validated aPROMISE, a software as a medical device that segments organs in low-dose CT images with deep learning and subsequently detects and quantifies potential pathological lesions in PSMA PET/CT. Methods To evaluate the deep learning algorithm, the automated segmentations of the low-dose CT component of PSMA PET/CT scans from 20 patients were compared to manual segmentations. Dice scores were used to quantify the similarities between the automated and manual segmentations. Next, the automated quantification of tracer uptake in the reference organs and the detection and pre-segmentation of potential lesions were evaluated in 339 patients with prostate cancer, all enrolled in the phase II/III OSPREY study. Three nuclear medicine physicians performed retrospective independent reads of OSPREY images with aPROMISE. Quantitative consistency was assessed by the pairwise Pearson correlations and standard deviation between the readers and aPROMISE. The sensitivity of detection and pre-segmentation of potential lesions was evaluated by determining the percentage of manually selected abnormal lesions that were automatically detected by aPROMISE. Results The Dice scores for bone segmentations ranged from 0.88 to 0.95. The Dice scores of the PSMA PET/CT reference organs, thoracic aorta and liver, were 0.89 and 0.97, respectively. Dice scores of other visceral organs, including the prostate, were above 0.79. The Pearson correlation for the blood pool reference was higher between any manual reader and aPROMISE than between any pair of manual readers. The standard deviations of reference organ uptake across all patients as determined by aPROMISE (SD = 0.21 blood pool and SD = 1.16 liver) were lower than those of the manual readers.
Finally, the sensitivity of aPROMISE detection and pre-segmentation was 91.5% for regional lymph nodes, 90.6% for all lymph nodes, and 86.7% for bone in metastatic patients. Conclusion In this analytical study, we demonstrated the segmentation accuracy of the deep learning algorithm, the consistency in quantitative assessment across multiple readers, and the high sensitivity in detecting potential lesions. The study provides a foundational framework for clinical evaluation of aPROMISE in standardized reporting of PSMA PET/CT. Supplementary Information The online version contains supplementary material available at 10.1007/s00259-021-05497-8.
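The pairwise Pearson correlation used above to assess reader consistency is straightforward to reproduce. A minimal sketch (illustrative only; the reader values below are made-up toy numbers, not OSPREY data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two readers' quantitative measurements."""
    n = len(xs)
    if n != len(ys) or n < 2:
        raise ValueError("need two equally sized samples of length >= 2")
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

# Toy example: two readers whose uptake measurements track each other closely,
# so r is close to 1 (high inter-reader consistency).
reader_1 = [1.8, 2.1, 2.0, 2.4, 1.9]
reader_2 = [1.7, 2.2, 2.0, 2.5, 1.8]
print(round(pearson_r(reader_1, reader_2), 3))
```

A value near 1 indicates that the two sets of measurements rise and fall together, which is why a higher reader-vs-software correlation than reader-vs-reader correlation supports the consistency claim above.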
|
17
|
Li S, Deng YQ, Zhu ZL, Hua HL, Tao ZZ. A Comprehensive Review on Radiomics and Deep Learning for Nasopharyngeal Carcinoma Imaging. Diagnostics (Basel) 2021; 11:1523. [PMID: 34573865 PMCID: PMC8465998 DOI: 10.3390/diagnostics11091523] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2021] [Revised: 08/10/2021] [Accepted: 08/19/2021] [Indexed: 12/23/2022] Open
Abstract
Nasopharyngeal carcinoma (NPC) is one of the most common malignant tumours of the head and neck, and improving the efficiency of its diagnosis and treatment strategies is an important goal. With the development of the combination of artificial intelligence (AI) technology and medical imaging in recent years, an increasing number of studies have been conducted on image analysis of NPC using AI tools, especially radiomics and artificial neural network methods. In this review, we present a comprehensive overview of NPC imaging research based on radiomics and deep learning. These studies depict a promising prospect for the diagnosis and treatment of NPC. The deficiencies of the current studies and the potential of radiomics and deep learning for NPC imaging are discussed. We conclude that future research should establish a large-scale labelled dataset of NPC images and that studies focused on screening for NPC using AI are necessary.
Affiliation(s)
- Song Li
- Department of Otolaryngology-Head and Neck Surgery, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan 430060, China; (S.L.); (Y.-Q.D.); (H.-L.H.)
| | - Yu-Qin Deng
- Department of Otolaryngology-Head and Neck Surgery, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan 430060, China; (S.L.); (Y.-Q.D.); (H.-L.H.)
| | - Zhi-Ling Zhu
- Department of Otolaryngology-Head and Neck Surgery, Tongji Hospital Affiliated to Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China;
| | - Hong-Li Hua
- Department of Otolaryngology-Head and Neck Surgery, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan 430060, China; (S.L.); (Y.-Q.D.); (H.-L.H.)
| | - Ze-Zhang Tao
- Department of Otolaryngology-Head and Neck Surgery, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan 430060, China; (S.L.); (Y.-Q.D.); (H.-L.H.)
| |
|
18
|
Robert C, Munoz A, Moreau D, Mazurier J, Sidorski G, Gasnier A, Beldjoudi G, Grégoire V, Deutsch E, Meyer P, Simon L. Clinical implementation of deep-learning based auto-contouring tools-Experience of three French radiotherapy centers. Cancer Radiother 2021; 25:607-616. [PMID: 34389243 DOI: 10.1016/j.canrad.2021.06.023] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2021] [Revised: 06/17/2021] [Accepted: 06/18/2021] [Indexed: 12/23/2022]
Abstract
Deep-learning (DL)-based auto-contouring solutions have recently been proposed as a convincing alternative to decrease the workload of target volume and organ-at-risk (OAR) delineation in radiotherapy planning and to improve inter-observer consistency. However, there is minimal literature on clinical implementations of such algorithms in clinical routine. In this paper we first present an update on the state of the art of DL-based solutions. We then summarize recent recommendations proposed by the European Society for Radiotherapy and Oncology (ESTRO) to be followed before any clinical implementation of artificial-intelligence-based solutions. The last section describes the methodology carried out by three French radiation oncology departments to deploy CE-marked commercial solutions. Based on the information collected, a majority of the OARs proposed by the manufacturers are retained by the centers, validating the usefulness of DL-based models in decreasing clinicians' workload. Target volumes, with the exception of lymph node areas in breast, head and neck and pelvic regions, whole breast, breast wall, prostate and seminal vesicles, are not available in the three commercial solutions at this time. No implemented workflows are currently available to continuously improve the models, but in some solutions the models can be adapted/retrained during the commissioning phase to best fit local practices. In the reported experiences, automatic workflows were implemented to limit human interactions and make the process more fluid. The recommendations published by the ESTRO group will be important for guiding physicists in the clinical implementation of patient-specific and regular quality assurance.
Affiliation(s)
- C Robert
- Department of Radiotherapy, Gustave-Roussy, Villejuif, France.
| | - A Munoz
- Department of Radiotherapy, Centre Léon-Bérard, Lyon, France
| | - D Moreau
- Department of Radiotherapy, Hôpital Européen Georges-Pompidou, Paris, France
| | - J Mazurier
- Department of Radiotherapy, Clinique Pasteur-Oncorad, Toulouse, France
| | - G Sidorski
- Department of Radiotherapy, Clinique Pasteur-Oncorad, Toulouse, France
| | - A Gasnier
- Department of Radiotherapy, Gustave-Roussy, Villejuif, France
| | - G Beldjoudi
- Department of Radiotherapy, Centre Léon-Bérard, Lyon, France
| | - V Grégoire
- Department of Radiotherapy, Centre Léon-Bérard, Lyon, France
| | - E Deutsch
- Department of Radiotherapy, Gustave-Roussy, Villejuif, France
| | - P Meyer
- Service d'Oncologie Radiothérapie, Institut de Cancérologie Strasbourg Europe (Icans), Strasbourg, France
| | - L Simon
- Institut Claudius Regaud (ICR), Institut Universitaire du Cancer de Toulouse - Oncopole (IUCT-O), Toulouse, France
| |
|
19
|
Yakar M, Etiz D. Artificial intelligence in radiation oncology. Artif Intell Med Imaging 2021; 2:13-31. [DOI: 10.35711/aimi.v2.i2.13] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/04/2021] [Revised: 03/30/2021] [Accepted: 04/20/2021] [Indexed: 02/06/2023] Open
Abstract
Artificial intelligence (AI) is a branch of computer science that tries to mimic human-like intelligence in machines, using computer software and algorithms to perform specific tasks without direct human input. Machine learning (ML) is a subfield of AI that uses data-driven algorithms which learn to imitate human behavior based on previous examples or experience. Deep learning is an ML technique that uses deep neural networks to create a model. The growth and sharing of data, increasing computing power, and developments in AI have initiated a transformation in healthcare. Advances in radiation oncology have produced a significant amount of data that must be integrated with computed tomography imaging, dosimetry, and imaging performed before each fraction. Each of the many algorithms used in radiation oncology has its own advantages and limitations, as well as different computational power requirements. The aim of this review is to summarize the radiotherapy (RT) process in workflow order, identifying specific areas in which quality and efficiency can be improved by ML. The RT workflow is divided into seven stages: patient evaluation, simulation, contouring, planning, quality control, treatment application, and patient follow-up. A systematic evaluation of the applicability, limitations, and advantages of AI algorithms has been done for each stage.
Affiliation(s)
- Melek Yakar
- Department of Radiation Oncology, Eskisehir Osmangazi University Faculty of Medicine, Eskisehir 26040, Turkey
- Center of Research and Application for Computer Aided Diagnosis and Treatment in Health, Eskisehir Osmangazi University, Eskisehir 26040, Turkey
| | - Durmus Etiz
- Department of Radiation Oncology, Eskisehir Osmangazi University Faculty of Medicine, Eskisehir 26040, Turkey
- Center of Research and Application for Computer Aided Diagnosis and Treatment in Health, Eskisehir Osmangazi University, Eskisehir 26040, Turkey
| |
|
20
|
Cao R, Pei X, Ge N, Zheng C. Clinical Target Volume Auto-Segmentation of Esophageal Cancer for Radiotherapy After Radical Surgery Based on Deep Learning. Technol Cancer Res Treat 2021; 20:15330338211034284. [PMID: 34387104 PMCID: PMC8366129 DOI: 10.1177/15330338211034284] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022] Open
Abstract
Radiotherapy plays an important role in controlling the local recurrence of esophageal cancer after radical surgery. Segmentation of the clinical target volume is a key step in radiotherapy treatment planning, but it is time-consuming and operator-dependent. This paper introduces a deep dilated convolutional U-network to achieve fast and accurate clinical target volume auto-segmentation of esophageal cancer after radical surgery. The deep dilated convolutional U-network, which integrates the advantages of dilated convolution and the U-network, is an end-to-end architecture that enables rapid training and testing. A dilated convolution module for extracting multiscale context features containing the original information on fine texture and boundaries is integrated into the U-network architecture to avoid information loss due to down-sampling and improve the segmentation accuracy. In addition, batch normalization is added to the deep dilated convolutional U-network for fast and stable convergence. In the present study, the training and validation loss tended to be stable after 40 training epochs. This deep dilated convolutional U-network model was able to segment the clinical target volume with an overall mean Dice similarity coefficient of 86.7% and a 95% Hausdorff distance of 37.4 mm, indicating reasonable volume overlap of the auto-segmented and manual contours. The mean Cohen kappa coefficient was 0.863, indicating that the deep dilated convolutional U-network was robust. Comparisons with the U-network and attention U-network showed that the overall performance of the deep dilated convolutional U-network was best in terms of the Dice similarity coefficient, 95% Hausdorff distance, and Cohen kappa coefficient. The test time for segmentation of the clinical target volume was approximately 25 seconds per patient. This deep dilated convolutional U-network could be applied in the clinical setting to save time in delineation and improve the consistency of contouring.
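Dilated convolution, the core ingredient of the module described above, enlarges the receptive field without adding parameters by inserting gaps between kernel taps: a kernel of size k with dilation d spans d*(k-1)+1 input samples. A one-dimensional pure-Python sketch (illustrative only, not the paper's implementation):

```python
def dilated_conv1d(signal, kernel, dilation):
    """'Valid' 1D dilated convolution (toy sketch, pure Python).

    A dilation of d reads every d-th input sample under the kernel, so a
    kernel of size k covers a receptive field of d*(k-1)+1 samples while
    keeping the same number of weights.
    """
    k = len(kernel)
    span = dilation * (k - 1) + 1  # receptive field of one output sample
    return [
        sum(kernel[j] * signal[i + j * dilation] for j in range(k))
        for i in range(len(signal) - span + 1)
    ]

signal = [1, 2, 3, 4, 5, 6]
print(dilated_conv1d(signal, [1, 1, 1], 1))  # prints [6, 9, 12, 15]
print(dilated_conv1d(signal, [1, 1, 1], 2))  # prints [9, 12]
```

With dilation 1 each output sums three adjacent samples; with dilation 2 the same three-tap kernel spans five samples, which is why stacking dilated blocks captures multiscale context without extra down-sampling.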
Affiliation(s)
- Ruifen Cao
- College of Computer Science and Technology, 12487Anhui University, Hefei, Anhui, China
- Engineering Research Center of Big Data Application in Private Health Medicine, Fujian Province University, Putian, Fujian, China
| | - Xi Pei
- 12652University of Science and Technology of China, Hefei, Anhui, China
| | - Ning Ge
- The First Affiliated Hospital of USTC West District, 117556Anhui Provincial Cancer Hospital, Hefei, Anhui, China
| | - Chunhou Zheng
- College of Computer Science and Technology, 12487Anhui University, Hefei, Anhui, China
- Engineering Research Center of Big Data Application in Private Health Medicine, Fujian Province University, Putian, Fujian, China
| |
|
21
|
A Collaborative Dictionary Learning Model for Nasopharyngeal Carcinoma Segmentation on Multimodalities MR Sequences. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2020; 2020:7562140. [PMID: 32908581 PMCID: PMC7474760 DOI: 10.1155/2020/7562140] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/16/2020] [Revised: 08/06/2020] [Accepted: 08/12/2020] [Indexed: 11/18/2022]
Abstract
Nasopharyngeal carcinoma (NPC) is the most common malignant tumor of the nasopharynx. The delicate nature of the nasopharyngeal structures means that noninvasive magnetic resonance imaging (MRI) is the preferred diagnostic technique for NPC. However, NPC is a typically infiltrative tumor, usually with a small volume, so it remains challenging to discriminate it from the tightly connected surrounding tissues. To address this issue, this study proposes a voxel-wise discriminative method for locating and segmenting NPC from normal tissues in MRI sequences. The located NPC is refined to obtain an accurate segmentation by an original multiview collaborative dictionary classification (CODL) model. The proposed CODL reconstructs a latent intact space and equips it with discriminative power for the collective multiview analysis task. Experiments on synthetic data demonstrate that CODL is capable of finding a discriminative space for multiview orthogonal data. We then evaluated the method on real NPC data. Experimental results show that CODL can accurately discriminate and localize NPCs of different volumes, and it achieved superior performance in segmenting NPC compared with benchmark methods. The robust segmentation results show that CODL can effectively assist clinicians in locating NPC.
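The CODL model itself is beyond a short snippet, but the core idea behind dictionary-based voxel classification — assign a sample to the class whose dictionary reconstructs it with the smallest residual — can be sketched with toy orthonormal per-class atoms (a hypothetical illustration, not the authors' implementation; `project_residual` and `classify` are invented names):

```python
def project_residual(x, atoms):
    """Squared residual of x after projecting onto the span of orthonormal atoms."""
    recon = [0.0] * len(x)
    for a in atoms:
        coef = sum(xi * ai for xi, ai in zip(x, a))  # inner product <x, a>
        recon = [r + coef * ai for r, ai in zip(recon, a)]
    return sum((xi - ri) ** 2 for xi, ri in zip(x, recon))

def classify(x, class_dicts):
    """Pick the class whose dictionary leaves the smallest reconstruction error."""
    return min(class_dicts, key=lambda c: project_residual(x, class_dicts[c]))

# Toy 3D feature space: one atom for "tumor", two for "normal" tissue
dicts = {
    "tumor":  [[1.0, 0.0, 0.0]],
    "normal": [[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]],
}
print(classify([0.9, 0.1, 0.0], dicts))  # prints "tumor"
```

CODL goes further by learning the dictionaries jointly across MR sequences (the "collaborative multiview" part), but the per-voxel decision rule is of this reconstruction-error form.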
22
He L, Xiao J, Wei Z, He Y, Wang J, Guan H, Mu X, Peng X. Toxicity and dosimetric analysis of nasopharyngeal carcinoma patients undergoing radiotherapy with IMRT or VMAT: A regional center's experience. Oral Oncol 2020; 109:104978. [PMID: 32861986] [DOI: 10.1016/j.oraloncology.2020.104978]
Abstract
OBJECTIVES To observe the differences in dosimetric parameters and late toxicities between nasopharyngeal carcinoma (NPC) patients treated with intensity-modulated radiotherapy (IMRT) and those treated with volumetric modulated arc therapy (VMAT), which may inform the choice of radiation technique in clinical practice. METHODS AND MATERIALS Dosimetric parameters and late toxicities were collected and retrospectively analyzed for 627 NPC patients (stages I-IVA/IVB) treated between January 2010 and December 2015. RESULTS The median D2 of all targets and the D50 of PGTVnd (regional lymph nodes) were lower in VMAT than in IMRT, while the median D95 and D98 of PGTVnx (primary lesions) were higher in VMAT than in IMRT (p < 0.05). Superior sparing of the organs at risk (OARs) was observed in VMAT. The maximum doses to the brainstem, spinal cord, temporal lobes, temporomandibular joint, optic chiasm, and lens were lower in VMAT than in IMRT, with median dose reductions ranging from 0.56 to 3.56 Gy (p < 0.05). Meanwhile, the median parotid gland V30 in VMAT was reduced by approximately 2% compared with IMRT (p = 0.027). Regarding late toxicities, ototoxicity, trismus, and temporal lobe injury were reduced by VMAT (p < 0.05). Furthermore, the late toxicities correlated with the radiation dose to the corresponding OARs (p < 0.05). CONCLUSION For NPC treatment plans, VMAT may provide not only more favorable target dose distributions but also better sparing of normal tissue than IMRT, and possibly fewer treatment-related late toxicities such as ototoxicity, trismus, and temporal lobe injury.
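The dose-volume metrics quoted in this abstract (D2, D50, D95, D98 of the target volumes; parotid V30) are cumulative dose-volume-histogram quantities. A generic sketch on a flat list of per-voxel doses (illustrative only; the structure and dose values are made up):

```python
def v_x(doses, threshold_gy):
    """Vx: percentage of the structure volume receiving at least threshold_gy."""
    return 100.0 * sum(1 for d in doses if d >= threshold_gy) / len(doses)

def d_x(doses, percent_volume):
    """Dx: minimum dose received by the hottest percent_volume% of the volume."""
    ranked = sorted(doses, reverse=True)  # hottest voxels first
    n = max(1, round(percent_volume / 100.0 * len(ranked)))
    return ranked[n - 1]

parotid = [12, 18, 25, 31, 33, 40, 8, 29, 35, 22]  # hypothetical voxel doses, Gy
print(v_x(parotid, 30))  # 40.0 -> four of ten voxels receive >= 30 Gy
```

So "median parotid V30 reduced by approximately 2%" means the fraction of parotid volume receiving at least 30 Gy dropped by about two percentage points, and D2 (near-maximum dose) and D98 (near-minimum dose) characterize the hot and cold ends of the target coverage.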
Affiliation(s)
- Ling He
- Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Jianghong Xiao
- Department of Radiation Oncology, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Zhigong Wei
- Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Yan He
- Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Jingjing Wang
- Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Hui Guan
- Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Xiaoli Mu
- Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Xingchen Peng
- Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
23
Wang X, Yang G, Zhang Y, Zhu L, Xue X, Zhang B, Cai C, Jin H, Zheng J, Wu J, Yang W, Dai Z. Automated delineation of nasopharynx gross tumor volume for nasopharyngeal carcinoma by plain CT combining contrast-enhanced CT using deep learning. J Radiat Res Appl Sci 2020. [DOI: 10.1080/16878507.2020.1795565]
Affiliation(s)
- Xuetao Wang
- Department of Radiotherapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Geng Yang
- Department of Radiotherapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Yiwen Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Lin Zhu
- Department of Radiotherapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Xiaoguang Xue
- Department of Radiotherapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Bailin Zhang
- Department of Radiotherapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Chunya Cai
- Department of Radiotherapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Huaizhi Jin
- Department of Radiotherapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Jianxiao Zheng
- Department of Radiotherapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Jian Wu
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
- Wei Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
24
Ke L, Deng Y, Xia W, Qiang M, Chen X, Liu K, Jing B, He C, Xie C, Guo X, Lv X, Li C. Development of a self-constrained 3D DenseNet model in automatic detection and segmentation of nasopharyngeal carcinoma using magnetic resonance images. Oral Oncol 2020; 110:104862. [PMID: 32615440] [DOI: 10.1016/j.oraloncology.2020.104862]
Abstract
OBJECTIVES We aimed to develop a dual-task deep learning model to automatically detect and segment nasopharyngeal carcinoma (NPC) in magnetic resonance images (MRI), since the differential diagnosis of NPC and atypical benign hyperplasia is difficult and radiotherapy target contouring of NPC is labor-intensive. MATERIALS AND METHODS A self-constrained 3D DenseNet (SC-DenseNet) architecture was developed using separate training and validation sets. A total of 4100 individuals were enrolled and split into training, validation, and test sets at an approximate ratio of 8:1:1 using simple randomization. The diagnostic metrics of the established model were compared against those of experienced radiologists in the test set. The Dice similarity coefficient (DSC) between the manual and model-defined tumor regions was used to evaluate the efficacy of segmentation. RESULTS In total, 3142 NPC and 958 benign hyperplasia cases were included. The SC-DenseNet model showed encouraging performance in detecting NPC, attaining higher overall accuracy, sensitivity, and specificity than the experienced radiologists (97.77% vs 95.87%, 99.68% vs 99.24%, and 91.67% vs 85.21%, respectively). Moreover, the model exhibited promising performance in automatic segmentation of the tumor region, with an average DSC of 0.77 ± 0.07 in the test set. CONCLUSIONS The SC-DenseNet model showed competence in automatic detection and segmentation of NPC in MRI, indicating promising application value as an assistant tool in clinical practice, especially in screening programs.
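The detection metrics reported for the SC-DenseNet (accuracy, sensitivity, specificity) derive from a standard confusion matrix over the NPC-vs-benign-hyperplasia decision. A minimal sketch, with hypothetical confusion counts chosen for illustration (the paper does not publish the raw counts):

```python
def detection_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall on positives), specificity (recall on negatives)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Hypothetical counts for a test split (NPC = positive, benign = negative)
acc, sens, spec = detection_metrics(tp=310, fp=8, tn=88, fn=1)
print(f"acc={acc:.4f} sens={sens:.4f} spec={spec:.4f}")
# acc=0.9779 sens=0.9968 spec=0.9167
```

Note how a high sensitivity (few missed NPC cases) matters most for the screening use case the conclusion highlights, while specificity controls how many benign hyperplasia cases would be flagged unnecessarily.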
Affiliation(s)
- Liangru Ke
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, PR China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou 510060, PR China
- Yishu Deng
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, PR China; Department of Information, Sun Yat-Sen University Cancer Center, Guangzhou 510060, PR China
- Weixiong Xia
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, PR China; Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, PR China
- Mengyun Qiang
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, PR China; Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, PR China
- Xi Chen
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, PR China; Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, PR China
- Kuiyuan Liu
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, PR China; Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, PR China
- Bingzhong Jing
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, PR China; Department of Information, Sun Yat-Sen University Cancer Center, Guangzhou 510060, PR China
- Caisheng He
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, PR China; Department of Information, Sun Yat-Sen University Cancer Center, Guangzhou 510060, PR China
- Chuanmiao Xie
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, PR China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou 510060, PR China
- Xiang Guo
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, PR China; Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, PR China
- Xing Lv
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, PR China; Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, PR China
- Chaofeng Li
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, PR China; Department of Information, Sun Yat-Sen University Cancer Center, Guangzhou 510060, PR China; Precision Medicine Center, Sun Yat-Sen University Cancer Center, Guangzhou 510060, PR China
25
Zheng D, Hong JC, Wang C, Zhu X. Radiotherapy Treatment Planning in the Age of AI: Are We Ready Yet? Technol Cancer Res Treat 2020; 18:1533033819894577. [PMID: 31858890] [PMCID: PMC6927195] [DOI: 10.1177/1533033819894577]
Affiliation(s)
- Dandan Zheng
- Department of Radiation Oncology, University of Nebraska Medical Center, Omaha, NE, USA
- Julian C Hong
- Department of Radiation Oncology, University of California, San Francisco, CA, USA
- Chunhao Wang
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Xiaofeng Zhu
- Department of Radiation Oncology, Georgetown University Hospital, Rockville, MD, USA