1. Shan G, Yu S, Lai Z, Xuan Z, Zhang J, Wang B, Ge Y. A Review of Artificial Intelligence Application for Radiotherapy. Dose Response 2024; 22:15593258241263687. [PMID: 38912333] [PMCID: PMC11193352] [DOI: 10.1177/15593258241263687]
Abstract
Background and Purpose Artificial intelligence (AI) is a set of techniques that aim to reason like humans and mimic human behavior. It has been considered as an alternative for many of the human-dependent steps in radiotherapy (RT), since human participation is a principal source of uncertainty in RT. The aim of this work is to provide a systematic summary of the current literature on AI applications for RT and to clarify their role in RT practice from a clinical point of view. Materials and Methods A systematic literature search of PubMed and Google Scholar was performed to identify original articles on AI applications in RT from inception to 2022. Studies were included if they reported original data and explored clinical applications of AI in RT. Results The selected studies were categorized into three aspects of RT: organ and lesion segmentation, treatment planning, and quality assurance. For each aspect, this review discusses how these AI tools can be integrated into the RT protocol. Conclusions Our study revealed that AI is a potential alternative for the human-dependent steps in the complex process of RT.
Affiliation(s)
- Guoping Shan: School of Electronic Science and Engineering, Nanjing University, Nanjing, China; Zhejiang Cancer Hospital, Hangzhou, China
- Shunfei Yu: Zhejiang Provincial Center for Disease Control and Prevention, Hangzhou, China
- Zhongjun Lai: Zhejiang Provincial Center for Disease Control and Prevention, Hangzhou, China
- Zhiqiang Xuan: Zhejiang Provincial Center for Disease Control and Prevention, Hangzhou, China
- Jie Zhang: Zhejiang Cancer Hospital, Hangzhou, China
- Yun Ge: School of Electronic Science and Engineering, Nanjing University, Nanjing, China
2. Rokhshad R, Salehi SN, Yavari A, Shobeiri P, Esmaeili M, Manila N, Motamedian SR, Mohammad-Rahimi H. Deep learning for diagnosis of head and neck cancers through radiographic data: a systematic review and meta-analysis. Oral Radiol 2024; 40:1-20. [PMID: 37855976] [DOI: 10.1007/s11282-023-00715-5]
Abstract
PURPOSE This study reviews deep learning applications for detecting head and neck cancer (HNC) using magnetic resonance imaging (MRI) and radiographic data. METHODS PubMed, Scopus, Embase, Google Scholar, IEEE, and arXiv were searched through January 2023. Studies were included if they applied deep learning models for segmentation, object detection, or classification to head and neck medical images (computed tomography (CT), positron emission tomography (PET), MRI, planar scans, and panoramic X-rays) of human subjects. The risk of bias was rated with the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. For the meta-analysis, the diagnostic odds ratio (DOR) was calculated. Deeks' funnel plot was used to assess publication bias. The MIDAS and Metandi packages were used to analyze diagnostic test accuracy in Stata. RESULTS Of 1967 studies, 32 were found eligible after the search and screening procedures. According to the QUADAS-2 tool, 7 included studies had a low risk of bias for all domains. Across the included studies, accuracy varied from 82.6 to 100%, specificity ranged from 66.6 to 90.1%, and sensitivity from 74 to 99.68%. Fourteen studies that provided sufficient data were included in the meta-analysis. The pooled sensitivity was 90% (95% CI 0.82-0.94) and the pooled specificity was 92% (95% CI 0.87-0.96). The pooled DOR was 103 (27-251). Publication bias was not detected (p = 0.75). CONCLUSION Deep learning models can enhance head and neck cancer screening with high sensitivity and specificity.
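For readers unfamiliar with the DOR, it can be derived directly from pooled sensitivity and specificity. A minimal sketch follows; the formula is standard, and the pooled values plugged in are the ones reported above, so this is only a worked check, not part of the original analysis:

```python
# Diagnostic odds ratio (DOR) from sensitivity and specificity.
# DOR = LR+ / LR- = (sens * spec) / ((1 - sens) * (1 - spec))
def diagnostic_odds_ratio(sensitivity: float, specificity: float) -> float:
    positive_lr = sensitivity / (1 - specificity)  # likelihood ratio of a positive test
    negative_lr = (1 - sensitivity) / specificity  # likelihood ratio of a negative test
    return positive_lr / negative_lr

# Pooled values reported in the meta-analysis above:
print(diagnostic_odds_ratio(0.90, 0.92))  # ~103.5, matching the reported DOR of 103
```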
Affiliation(s)
- Rata Rokhshad: Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group, AI On Health, Berlin, Germany
- Seyyede Niloufar Salehi: Executive Secretary of Research Committee, Board Director of Scientific Society, Dental Faculty, Azad University, Tehran, Iran
- Amirmohammad Yavari: Student Research Committee, School of Dentistry, Isfahan University of Medical Sciences, Isfahan, Iran
- Parnian Shobeiri: School of Medicine, Tehran University of Medical Science, Tehran, Iran
- Mahdieh Esmaeili: Faculty of Dentistry, Tehran Medical Sciences, Islamic Azad University, Tehran, Iran
- Nisha Manila: Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group, AI On Health, Berlin, Germany; Department of Diagnostic Sciences, Louisiana State University Health Science Center School of Dentistry, Louisiana, USA
- Saeed Reza Motamedian: Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group, AI On Health, Berlin, Germany; Dentofacial Deformities Research Center, Research Institute of Dental Sciences & Department of Orthodontics, School of Dentistry, Shahid Beheshti University of Medical Sciences, Daneshjou Blvd, Tehran, Iran
- Hossein Mohammad-Rahimi: Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group, AI On Health, Berlin, Germany
3. Shiri I, Razeghi B, Vafaei Sadr A, Amini M, Salimi Y, Ferdowsi S, Boor P, Gündüz D, Voloshynovskiy S, Zaidi H. Multi-institutional PET/CT image segmentation using federated deep transformer learning. Comput Methods Programs Biomed 2023; 240:107706. [PMID: 37506602] [DOI: 10.1016/j.cmpb.2023.107706]
Abstract
BACKGROUND AND OBJECTIVE Generalizable and trustworthy deep learning models for PET/CT image segmentation necessitate large, diverse, multi-institutional datasets. However, legal, ethical, and patient privacy issues challenge the sharing of datasets between different centers. To overcome these challenges, we developed a federated learning (FL) framework for multi-institutional PET/CT image segmentation. METHODS A dataset of 328 head and neck (HN) cancer patients who underwent clinical PET/CT examinations, gathered from six different centers, was enrolled. A pure transformer network was implemented as the core segmentation algorithm, using dual-channel PET/CT images. We evaluated different frameworks (single-center baselines, a centralized baseline, and seven different FL algorithms) using 68 PET/CT images (20% of each center's data). The implemented FL algorithms include clipping with the quantile estimator (ClQu), zeroing with the quantile estimator (ZeQu), federated averaging (FedAvg), lossy compression (LoCo), robust aggregation (RoAg), secure aggregation (SeAg), and Gaussian differentially private FedAvg with adaptive quantile clipping (GDP-AQuCl). RESULTS The Dice coefficient was 0.80±0.11 for both the centralized and the SeAg FL algorithms. All FL approaches matched the performance of the centralized model, with no statistically significant differences. Among the FL algorithms, SeAg and GDP-AQuCl performed better than the other techniques, although the differences were not statistically significant. All FL and centralized methods, except the single-center baseline, yielded relative errors of less than 5% for SUVmax and SUVmean. Centralized and FL algorithms significantly outperformed the single-center baseline. CONCLUSIONS The developed FL algorithms, which matched the performance of centralized training, exhibited promising performance for HN tumor segmentation from PET/CT images.
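Of the listed algorithms, federated averaging (FedAvg) is the simplest to illustrate: each center trains locally and only model parameters, not images, are shared and averaged. The sketch below is a generic FedAvg aggregation step, not the authors' implementation; the toy one-layer "models" are purely illustrative:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One FedAvg aggregation round: average client parameter arrays,
    weighted by each client's local dataset size."""
    total = sum(client_sizes)
    averaged = []
    for p in range(len(client_weights[0])):  # iterate over parameter tensors
        weighted = sum(w[p] * (n / total)
                       for w, n in zip(client_weights, client_sizes))
        averaged.append(weighted)
    return averaged

# Example: three centers with different dataset sizes share one-layer models.
clients = [[np.full((2, 2), v)] for v in (1.0, 2.0, 3.0)]
print(fedavg(clients, client_sizes=[50, 30, 20])[0])  # -> all entries 1.7
```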
Affiliation(s)
- Isaac Shiri: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Behrooz Razeghi: Department of Computer Science, University of Geneva, Geneva, Switzerland
- Alireza Vafaei Sadr: Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany; Department of Public Health Sciences, College of Medicine, The Pennsylvania State University, Hershey, PA 17033, USA
- Mehdi Amini: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Yazdan Salimi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Sohrab Ferdowsi: Department of Computer Science, University of Geneva, Geneva, Switzerland
- Peter Boor: Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Deniz Gündüz: Department of Electrical and Electronic Engineering, Imperial College London, UK
- Habib Zaidi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Geneva University Neurocenter, University of Geneva, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, Groningen, The Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
4. Yu X, He L, Wang Y, Dong Y, Song Y, Yuan Z, Yan Z, Wang W. A deep learning approach for automatic tumor delineation in stereotactic radiotherapy for non-small cell lung cancer using diagnostic PET-CT and planning CT. Front Oncol 2023; 13:1235461. [PMID: 37601687] [PMCID: PMC10437048] [DOI: 10.3389/fonc.2023.1235461]
Abstract
Introduction Accurate delineation of tumor targets is crucial for stereotactic body radiation therapy (SBRT) for non-small cell lung cancer (NSCLC). This study aims to develop a deep learning-based segmentation approach to accurately and efficiently delineate NSCLC targets using diagnostic PET-CT and SBRT planning CT (pCT). Methods The diagnostic PET was registered to the pCT using the transform matrix obtained from registering the diagnostic CT to the pCT. We propose a 3D-UNet-based method to segment NSCLC tumor targets on dual-modality PET-pCT images. The network contains squeeze-and-excitation and residual blocks in each convolutional block to perform dynamic channel-wise feature recalibration. Furthermore, up-sampling paths were added to supplement low-resolution features to the model and to compute the overall loss function. The Dice similarity coefficient (DSC), precision, recall, and average symmetric surface distance were used to assess the performance of the proposed approach on 86 pairs of diagnostic PET and pCT images. The proposed model using dual-modality images was compared with both the conventional 3D-UNet architecture and single-modality inputs. Results The average DSC of the proposed model with both PET and pCT images was 0.844, compared with 0.795 and 0.827 for 3D-UNet and nnUNet, respectively. It also outperformed the same network using either pCT or PET alone, which yielded DSCs of 0.823 and 0.732, respectively. Discussion The proposed segmentation approach outperforms the standard 3D-UNet on diagnostic PET and pCT images, and the integration of the two image modalities helps improve segmentation accuracy.
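The channel-wise recalibration mentioned above is the squeeze-and-excitation (SE) mechanism. Below is a minimal generic 3D SE block; the reduction ratio and tensor sizes are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class SEBlock3D(nn.Module):
    """Generic squeeze-and-excitation block for 3D feature maps."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool3d(1)  # global spatial average per channel
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                       # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c = x.shape[:2]
        w = self.excite(self.squeeze(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w                            # recalibrate channels

x = torch.randn(1, 16, 8, 32, 32)  # (batch, channels, D, H, W)
print(SEBlock3D(16)(x).shape)      # torch.Size([1, 16, 8, 32, 32])
```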
Affiliation(s)
- Xuyao Yu: Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, China; Tianjin Medical University, Tianjin, China
- Lian He: Perception Vision Medical Technologies Co Ltd, Guangzhou, China
- Yuwen Wang: Department of Radiotherapy, Tianjin Cancer Hospital Airport Hospital, Tianjin, China
- Yang Dong: Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Yongchun Song: Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Zhiyong Yuan: Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Ziye Yan: Perception Vision Medical Technologies Co Ltd, Guangzhou, China
- Wei Wang: Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
5. Delaby N, Barateau A, Chiavassa S, Biston MC, Chartier P, Graulières E, Guinement L, Huger S, Lacornerie T, Millardet-Martin C, Sottiaux A, Caron J, Gensanne D, Pointreau Y, Coutte A, Biau J, Serre AA, Castelli J, Tomsej M, Garcia R, Khamphan C, Badey A. Practical and technical key challenges in head and neck adaptive radiotherapy: The GORTEC point of view. Phys Med 2023; 109:102568. [PMID: 37015168] [DOI: 10.1016/j.ejmp.2023.102568]
Abstract
Anatomical variations occur during head and neck (H&N) radiotherapy (RT). These variations may result in underdosage of the target volume or overdosage of organs at risk. Replanning during the treatment course can be triggered to overcome this issue. Owing to technological, methodological, and clinical evolutions, tools for adaptive RT (ART) are becoming increasingly sophisticated. The aim of this paper is to give an overview of the key steps and tools of an H&N ART workflow from the point of view of a group of French-speaking medical physicists and physicians (GORTEC). The focus is on image registration, segmentation, estimation of the dose delivered on the day of treatment, workflow, and quality assurance for the implementation of offline and online H&N ART. Practical recommendations are given to assist physicians and medical physicists in a clinical workflow.
6. Yang X, Wu J, Chen X. Application of Artificial Intelligence to the Diagnosis and Therapy of Nasopharyngeal Carcinoma. J Clin Med 2023; 12:3077. [PMID: 37176518] [PMCID: PMC10178972] [DOI: 10.3390/jcm12093077]
Abstract
Artificial intelligence (AI) is an interdisciplinary field that encompasses a wide range of computer science disciplines, including image recognition, machine learning, human-computer interaction, and robotics. Recently, AI, and especially deep learning algorithms, has shown excellent performance in image recognition, automatically performing quantitative evaluation of complex medical image features to improve diagnostic accuracy and efficiency. AI is finding ever wider and deeper application in medical diagnosis, treatment, and prognosis. Nasopharyngeal carcinoma (NPC) occurs frequently in southern China and Southeast Asian countries and is the most common head and neck cancer in the region. Detecting and treating NPC early is crucial for a good prognosis. This paper describes the basic concepts of AI, including traditional machine learning and deep learning algorithms, and their clinical applications in detecting and assessing NPC lesions, facilitating treatment, and predicting prognosis. The main limitations of current AI technologies are briefly described, including interpretability issues, privacy and security, and the need for large amounts of annotated data. Finally, we discuss the remaining challenges and the promising future of using AI to diagnose and treat NPC.
Affiliation(s)
- Xinggang Yang: Division of Biotherapy, Cancer Center, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Guoxue Road 37, Chengdu 610041, China
- Juan Wu: Out-Patient Department, West China Hospital, Sichuan University, Guoxue Road 37, Chengdu 610041, China
- Xiyang Chen: Division of Vascular Surgery, Department of General Surgery, West China Hospital, Sichuan University, Guoxue Road 37, Chengdu 610041, China
7. Hasan Z, Key S, Habib AR, Wong E, Aweidah L, Kumar A, Sacks R, Singh N. Convolutional Neural Networks in ENT Radiology: Systematic Review of the Literature. Ann Otol Rhinol Laryngol 2023; 132:417-430. [PMID: 35651308] [DOI: 10.1177/00034894221095899]
Abstract
INTRODUCTION Convolutional neural networks (CNNs) represent a state-of-the-art technique in AI and deep learning, created specifically for image classification and computer vision tasks. CNNs have been applied to radiology in a number of disciplines, mostly outside otolaryngology, potentially due to a lack of familiarity with this technology within the otolaryngology community. CNNs have the potential to revolutionize clinical practice by reducing the time required to perform manual tasks. This paper presents a comprehensive systematic review of the published literature on CNNs and their utility to date in ENT radiology. METHODS Data were extracted from a variety of databases including PubMed, ProQuest, MEDLINE, Open Knowledge Maps, and Gale OneFile Computer Science. Medical subject headings (MeSH) terms and keywords were used to extract related literature from each database's inception to October 2020. Inclusion criteria were studies in which CNNs were the main intervention and were applied to radiology relevant to ENT. Titles and abstracts were reviewed, followed by the full contents. Once the final list of articles was obtained, their reference lists were also searched to identify further articles. RESULTS Thirty articles were identified for inclusion. Studies utilizing CNNs in most ENT subspecialties were identified, covering tasks including identification of structures, detection of pathology, and segmentation of tumors for radiotherapy planning. All studies reported a high degree of accuracy of CNNs in performing the chosen task. CONCLUSION This study provides a better understanding of CNN methodology used in ENT radiology, demonstrating a myriad of potential uses for this exciting technology, including nodule and tumor identification, identification of anatomical variation, and segmentation of tumors. It is anticipated that this field will continue to evolve and that these technologies and methodologies will become more entrenched in everyday practice.
Affiliation(s)
- Zubair Hasan: Faculty of Medicine and Health, University of Sydney, Camperdown, NSW, Australia; Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, NSW, Australia
- Seraphina Key: Faculty of Medicine, Nursing and Health Sciences, Monash University, Clayton, VIC, Australia
- Al-Rahim Habib: Faculty of Medicine and Health, University of Sydney, Camperdown, NSW, Australia; Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, NSW, Australia; Department of Otolaryngology - Head and Neck Surgery, Princess Alexandra Hospital, Woolloongabba, QLD, Australia
- Eugene Wong: Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, NSW, Australia
- Layal Aweidah: Faculty of Medicine, University of Notre Dame, Darlinghurst, NSW, Australia
- Ashnil Kumar: School of Biomedical Engineering, Faculty of Engineering, University of Sydney, Darlington, NSW, Australia
- Raymond Sacks: Faculty of Medicine and Health, University of Sydney, Camperdown, NSW, Australia; Department of Otolaryngology - Head and Neck Surgery, Concord Hospital, Concord, NSW, Australia
- Narinder Singh: Faculty of Medicine and Health, University of Sydney, Camperdown, NSW, Australia; Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, NSW, Australia
8. Zhou H, Li H, Chen S, Yang S, Ruan G, Liu L, Chen H. BSMM-Net: Multi-modal neural network based on bilateral symmetry for nasopharyngeal carcinoma segmentation. Front Hum Neurosci 2023; 16:1068713. [PMID: 36704094] [PMCID: PMC9872196] [DOI: 10.3389/fnhum.2022.1068713]
Abstract
Introduction Automatically and accurately delineating primary nasopharyngeal carcinoma (NPC) tumors in head magnetic resonance imaging (MRI) is crucial for patient staging and radiotherapy. Inspired by the bilateral symmetry of the head and the complementary information of different modalities, a multi-modal neural network named BSMM-Net is proposed for NPC segmentation. Methods First, a bilaterally symmetrical patch block (BSP) is used to crop the image and its bilaterally flipped copy into patches. BSP improves the precision of locating NPC lesions and simulates how radiologists locate tumors in clinical practice by exploiting left-right differences of the head. Second, modality-specific and multi-modal fusion features (MSMFFs) are extracted by the proposed MSMFF encoder to fully utilize the complementary information of T1- and T2-weighted MRI. The MSMFFs are then fed into the base decoder to aggregate representative features and precisely delineate the NPC. The MSMFF is the output of the MSMFF encoder blocks, which consist of six modality-specific networks and one multi-modal fusion network. Apart from T1 and T2, the other four modalities are generated from T1 and T2 by the BSP and the DT modality-generation block. Third, an MSMFF decoder with a structure similar to the MSMFF encoder supervises the encoder during training and assures the validity of the MSMFFs produced by the encoder. Finally, experiments are conducted on a dataset of 7633 samples collected from 745 patients. Results and discussion The global DICE, precision, recall, and IoU on the testing set are 0.82, 0.82, 0.86, and 0.72, respectively. The results show that the proposed model outperforms other state-of-the-art methods for NPC segmentation. In clinical diagnosis, BSMM-Net can give a precise delineation of NPC that can be used to plan radiotherapy.
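The core idea of exploiting bilateral symmetry can be approximated generically by pairing each image with its left-right mirror so the network can compare the two sides. The sketch below assumes a (batch, channel, H, W) layout with W as the left-right axis; it is an illustration of the idea, not the authors' BSP implementation:

```python
import torch

def with_bilateral_flip(image: torch.Tensor) -> torch.Tensor:
    """Concatenate an image with its left-right mirror as an extra channel,
    so a network can learn from left/right asymmetries of the head."""
    mirrored = torch.flip(image, dims=[-1])  # flip along the width axis
    return torch.cat([image, mirrored], dim=1)

t1 = torch.randn(2, 1, 256, 256)      # a batch of T1-weighted MRI slices
print(with_bilateral_flip(t1).shape)  # torch.Size([2, 2, 256, 256])
```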
Affiliation(s)
- Haoyang Zhou: School of Life & Environmental Science, Guangxi Colleges and Universities Key Laboratory of Biomedical Sensors and Intelligent Instruments, Guilin University of Electronic Technology, Guilin, Guangxi, China; School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin, Guangxi, China
- Haojiang Li: State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center (SYSUCC), Guangzhou, Guangdong, China
- Shuchao Chen: School of Life & Environmental Science, Guangxi Colleges and Universities Key Laboratory of Biomedical Sensors and Intelligent Instruments, Guilin University of Electronic Technology, Guilin, Guangxi, China
- Shixin Yang: School of Life & Environmental Science, Guangxi Colleges and Universities Key Laboratory of Biomedical Sensors and Intelligent Instruments, Guilin University of Electronic Technology, Guilin, Guangxi, China
- Guangying Ruan: State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center (SYSUCC), Guangzhou, Guangdong, China
- Lizhi Liu: State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center (SYSUCC), Guangzhou, Guangdong, China
- Hongbo Chen: School of Life & Environmental Science, Guangxi Colleges and Universities Key Laboratory of Biomedical Sensors and Intelligent Instruments, Guilin University of Electronic Technology, Guilin, Guangxi, China
9. Huang Z, Tang S, Chen Z, Wang G, Shen H, Zhou Y, Wang H, Fan W, Liang D, Hu Y, Hu Z. TG-Net: Combining transformer and GAN for nasopharyngeal carcinoma tumor segmentation based on total-body uEXPLORER PET/CT scanner. Comput Biol Med 2022; 148:105869. [PMID: 35905660] [DOI: 10.1016/j.compbiomed.2022.105869]
Abstract
Nasopharyngeal carcinoma (NPC) is a malignant tumor whose main treatment is radiotherapy, for which accurate delineation of the target tumor is essential. NPC tumors are small and vary widely in shape and structure, making manual outlining a time-consuming and laborious task even for experienced radiologists. Moreover, the segmentation performance of current deep learning models is not satisfactory, mainly manifesting as poor segmentation boundaries. To solve this problem, this paper proposes a segmentation method for NPC based on dynamic PET-CT image data, whose inputs include CT, PET, and parametric (Ki) images. The method uses a generative adversarial network whose generator is a modified U-Net integrated with a Transformer (TG-Net) to achieve automatic segmentation of NPC on combined CT-PET-Ki images. In the encoding stage, TG-Net uses moving windows instead of traditional pooling operations to obtain patches of different sizes, which reduces information loss during encoding. Moreover, the Transformer helps the network learn more representative features and improves the discriminative ability of the model, especially at tumor boundaries. Finally, five-fold cross-validation yielded an average Dice similarity coefficient of 0.9135, showing that our method achieves good segmentation performance. Comparative experiments also show that our network structure is superior to the most advanced methods for NPC segmentation. In addition, this work is the first to use Ki images to assist tumor segmentation, and the results demonstrate their usefulness.
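Feeding CT, PET, and Ki images jointly to one network is commonly done by stacking the co-registered volumes as input channels. The sketch below illustrates that common convention under the assumption of per-modality z-score normalization; it is not the authors' exact preprocessing pipeline:

```python
import numpy as np

def stack_modalities(ct: np.ndarray, pet: np.ndarray, ki: np.ndarray) -> np.ndarray:
    """Stack co-registered CT, PET, and Ki parametric volumes into a
    (channel, D, H, W) array, normalizing each modality independently."""
    channels = []
    for vol in (ct, pet, ki):
        v = vol.astype(np.float32)
        v = (v - v.mean()) / (v.std() + 1e-8)  # simple per-modality z-score
        channels.append(v)
    return np.stack(channels, axis=0)

ct, pet, ki = (np.random.rand(64, 128, 128) for _ in range(3))
print(stack_modalities(ct, pet, ki).shape)  # (3, 64, 128, 128)
```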
Affiliation(s)
- Zhengyong Huang: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; University of Chinese Academy of Sciences, Beijing, 101408, China
- Si Tang: State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China; Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Zixiang Chen: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Guoshuai Wang: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; University of Chinese Academy of Sciences, Beijing, 101408, China
- Hao Shen: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; University of Chinese Academy of Sciences, Beijing, 101408, China
- Yun Zhou: Central Research Institute, United Imaging Healthcare Group, Shanghai, 201807, China
- Haining Wang: Central Research Institute, United Imaging Healthcare Group, Shanghai, 201807, China
- Wei Fan: State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China; Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Dong Liang: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yingying Hu: State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China; Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Zhanli Hu: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
10. Liao W, He J, Luo X, Wu M, Shen Y, Li C, Xiao J, Wang G, Chen N. Automatic Delineation of Gross Tumor Volume Based on Magnetic Resonance Imaging by Performing a Novel Semisupervised Learning Framework in Nasopharyngeal Carcinoma. Int J Radiat Oncol Biol Phys 2022; 113:893-902. [PMID: 35381322] [DOI: 10.1016/j.ijrobp.2022.03.031]
Abstract
PURPOSE We aimed to validate the accuracy and clinical value of a novel semisupervised learning framework for gross tumor volume (GTV) delineation in nasopharyngeal carcinoma. METHODS AND MATERIALS Two hundred fifty-eight patients with magnetic resonance imaging data sets were divided into training (n = 180), validation (n = 20), and testing (n = 58) cohorts. Ground truth contours of the nasopharynx GTV (GTVnx) and node GTV (GTVnd) were manually delineated by 2 experienced radiation oncologists. The model was trained with 20% (n = 36) labeled and 80% (n = 144) unlabeled images and produced contours for patients in the testing cohort. Nine experienced experts were invited to revise the model-generated GTVs in 20 randomly selected testing patients. Six junior oncologists delineated the GTV in 12 randomly selected testing patients without and with the assistance of the model, and revision degrees were compared between the two modes. The Dice similarity coefficient (DSC) was used to quantify the accuracy of the model. RESULTS The model-generated contours showed high accuracy compared with the ground truth, with average DSC scores of 0.83 and 0.80 for GTVnx and GTVnd, respectively. There was no significant difference in DSC between T1-2 and T3-4 patients (0.81 vs 0.83; P = .223) or between N1-2 and N3 patients (0.80 vs 0.79; P = .807). The mean revision degree was lower than 10% in 19 (95%) patients for GTVnx and in 16 (80%) patients for GTVnd. With the model's assistance, the mean revision degree for GTVnx and GTVnd by junior oncologists was reduced from 25.63% to 7.75% and from 21.38% to 14.44%, respectively, while delineation efficiency improved by over 60%. CONCLUSIONS The proposed semisupervised learning-based model delineated the GTV of nasopharyngeal carcinoma with high accuracy. It is clinically applicable and can assist junior oncologists in improving GTV contouring accuracy and saving contouring time.
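The DSC reported here and throughout these studies follows the standard definition 2|A ∩ B| / (|A| + |B|) for binary masks. A minimal sketch:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

a = np.zeros((64, 64), dtype=bool); a[16:48, 16:48] = True
b = np.zeros((64, 64), dtype=bool); b[20:52, 16:48] = True
print(round(dice(a, b), 3))  # 28x32 overlap vs. two 32x32 masks -> 0.875
```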
Affiliation(s)
- Wenjun Liao: Department of Radiation Oncology, Cancer Center and State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, China; Department of Radiation Oncology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Jinlan He: Department of Radiation Oncology, Cancer Center and State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, China
- Xiangde Luo: School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Mengwan Wu: Cancer Clinical Research Center, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, Chengdu, China
- Yuanyuan Shen: Cancer Clinical Research Center, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, Chengdu, China
- Churong Li: Department of Radiation Oncology, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, Chengdu, China
- Jianghong Xiao: Department of Radiation Oncology, Cancer Center and State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, China
- Guotai Wang: School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Nianyong Chen: Department of Radiation Oncology, Cancer Center and State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, China
11. Yang G, Dai Z, Zhang Y, Zhu L, Tan J, Chen Z, Zhang B, Cai C, He Q, Li F, Wang X, Yang W. Multiscale Local Enhancement Deep Convolutional Networks for the Automated 3D Segmentation of Gross Tumor Volumes in Nasopharyngeal Carcinoma: A Multi-Institutional Dataset Study. Front Oncol 2022; 12:827991. [PMID: 35387126] [PMCID: PMC8979212] [DOI: 10.3389/fonc.2022.827991]
Abstract
Purpose Accurate segmentation of the gross target volume (GTV) from computed tomography (CT) images is a prerequisite for radiotherapy of nasopharyngeal carcinoma (NPC). However, this task is very challenging due to the low contrast at the tumor boundary and the great variety of tumor sizes and morphologies across stages; the data source also seriously affects segmentation results. In this paper, we propose a novel three-dimensional (3D) automatic segmentation algorithm that adopts cascaded multiscale local enhancement of convolutional neural networks (CNNs), and we conduct experiments on multi-institutional datasets to address these problems. Materials and Methods We retrospectively collected CT images of 257 NPC patients to test the performance of the proposed model and conducted experiments on two additional multi-institutional datasets. Our segmentation framework consists of three parts. First, the framework is based on a 3D Res-UNet backbone with strong segmentation performance. Second, a multiscale dilated convolution block enlarges the receptive field and focuses on the target area and boundary to improve segmentation. Third, a central localization cascade model for local enhancement concentrates on the GTV region for fine segmentation to improve robustness. The Dice similarity coefficient (DSC), positive predictive value (PPV), sensitivity (SEN), average symmetric surface distance (ASSD), and 95% Hausdorff distance (HD95) were utilized as quantitative evaluation criteria. Results The experiments show that, compared with other state-of-the-art methods, our modified 3D Res-UNet backbone achieves the best results on the quantitative metrics DSC, PPV, ASSD, and HD95, reaching 74.49 ± 7.81%, 79.97 ± 13.90%, 1.49 ± 0.65 mm, and 5.06 ± 3.30 mm, respectively. The receptive-field enhancement mechanism and cascade architecture contribute to stable automatic segmentation with high accuracy, which is critical for such an algorithm; with them, the final DSC, SEN, ASSD, and HD95 values improve to 76.23 ± 6.45%, 79.14 ± 12.48%, 1.39 ± 5.44 mm, and 4.72 ± 3.04 mm. In addition, the multi-institution experiments demonstrate that our model is robust and generalizable and can achieve good performance through transfer learning. Conclusions The proposed algorithm can accurately segment NPC in CT images from multi-institutional datasets and thereby may improve and facilitate clinical applications.
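A multiscale dilated convolution block of the kind described enlarges the receptive field by running parallel convolutions at several dilation rates and fusing the results. Below is a generic sketch; the dilation rates and channel counts are illustrative assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class MultiScaleDilatedBlock(nn.Module):
    """Parallel 3x3x3 convolutions at dilation rates 1, 2, and 4,
    fused by a 1x1x1 convolution."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in (1, 2, 4)  # padding=dilation preserves the spatial size
        ])
        self.fuse = nn.Conv3d(3 * out_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multiscale = torch.cat([b(x) for b in self.branches], dim=1)
        return self.fuse(multiscale)

x = torch.randn(1, 8, 16, 64, 64)
print(MultiScaleDilatedBlock(8, 16)(x).shape)  # torch.Size([1, 16, 16, 64, 64])
```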
Affiliation(s)
- Geng Yang: School of Biomedical Engineering, Southern Medical University, Guangzhou, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China; Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Zhenhui Dai: Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Yiwen Zhang: School of Biomedical Engineering, Southern Medical University, Guangzhou, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
- Lin Zhu: Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Junwen Tan: Department of Oncology, The Fourth Affiliated Hospital of Guangxi Medical University, Liuzhou, China
- Zefeiyun Chen: School of Biomedical Engineering, Southern Medical University, Guangzhou, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
- Bailin Zhang: Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Chunya Cai: Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Qiang He: Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Fei Li: Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Xuetao Wang: Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Wei Yang: School of Biomedical Engineering, Southern Medical University, Guangzhou, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
12.

13. Hu L, Li J, Peng X, Xiao J, Zhan B, Zu C, Wu X, Zhou J, Wang Y. Semi-supervised NPC segmentation with uncertainty and attention guided consistency. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2021.108021]
14. Tao G, Li H, Huang J, Han C, Chen J, Ruan G, Huang W, Hu Y, Dan T, Zhang B, He S, Liu L, Cai H. SeqSeg: A Sequential Method to Achieve Nasopharyngeal Carcinoma Segmentation Free from Background Dominance. Med Image Anal 2022; 78:102381. [DOI: 10.1016/j.media.2022.102381]
15. Multi-scale brain tumor segmentation combined with deep supervision. Int J Comput Assist Radiol Surg 2021; 17:561-568. [PMID: 34894336] [DOI: 10.1007/s11548-021-02515-w]
Abstract
PURPOSE Fully convolutional neural networks (FCNNs) have achieved good performance in medical image segmentation, and FCNNs that use multimodal images and multi-scale feature extraction reach higher accuracy for brain tumor segmentation. We therefore made several improvements to U-Net for fully automated segmentation of gliomas using multimodal images, naming the result the multi-scale dilated network with deep supervision (MSD-Net). METHODS MSD-Net is a symmetrical structure composed of a down-sampling and an up-sampling path. In the down-sampling path, we use a multi-scale feature extraction block (ME) to extract multi-scale features and focus on primary features. Unlike other methods, the ME consists of dilated and standard convolutions: the dilated convolutions extract multi-scale information and the standard convolutions merge features of different scales, so the output of the ME contains both local and global information. In the up-sampling path, we add a deep supervision block (DSB), which shortens the length of back-propagation. In this paper, we pay particular attention to the importance of shallow features for feature restoration. RESULTS Our network was validated on the BraTS17 validation dataset. The DSC scores of MSD-Net for complete tumor, tumor core, and enhancing tumor were 0.88, 0.81, and 0.78, respectively, which outperforms most networks. CONCLUSION This study shows that the ME enhances the feature extraction ability of the network and improves segmentation accuracy, while the DSB speeds up convergence. Attention should also be paid to the contribution of shallow features to feature restoration.
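Deep supervision works by attaching auxiliary losses to intermediate decoder outputs so that gradients reach early layers more directly. The sketch below shows a generic version of that loss combination; the auxiliary weight and binary-cross-entropy choice are illustrative assumptions, not this paper's exact configuration:

```python
import torch
import torch.nn.functional as F

def deep_supervision_loss(main_logits, aux_logits_list, target,
                          aux_weight: float = 0.4) -> torch.Tensor:
    """Sum the main segmentation loss with down-weighted auxiliary losses
    computed on intermediate decoder outputs (upsampled to label size)."""
    loss = F.binary_cross_entropy_with_logits(main_logits, target)
    for aux in aux_logits_list:
        aux_up = F.interpolate(aux, size=target.shape[2:],
                               mode="trilinear", align_corners=False)
        loss = loss + aux_weight * F.binary_cross_entropy_with_logits(aux_up, target)
    return loss

target = torch.randint(0, 2, (1, 1, 32, 64, 64)).float()
main = torch.randn(1, 1, 32, 64, 64)
aux = [torch.randn(1, 1, 16, 32, 32), torch.randn(1, 1, 8, 16, 16)]
print(deep_supervision_loss(main, aux, target).item())
```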
16. Li Y, Han G, Liu X. DCNet: Densely Connected Deep Convolutional Encoder-Decoder Network for Nasopharyngeal Carcinoma Segmentation. Sensors 2021; 21:7877. [PMID: 34883878] [PMCID: PMC8659888] [DOI: 10.3390/s21237877]
Abstract
Nasopharyngeal carcinoma segmentation in magnetic resonance imaging (MRI) is vital to radiotherapy, since exact dose delivery hinges on an accurate delineation of the gross tumor volume (GTV). However, the large variation in tumor volume is difficult to handle, and the performance of current models is mostly unsatisfactory, producing indistinct and blurred boundaries for small tumor volumes. To address this problem, we propose a densely connected deep convolutional network consisting of an encoder network and a corresponding decoder network, which extracts high-level semantic features from different levels and concurrently uses low-level spatial features to obtain fine-grained segmentation masks. A modified skip-connection architecture propagates spatial information to the decoder network. Preliminary experiments were conducted on 30 patients. The results show that our model outperforms all baseline models, with an improvement of 4.17%. An ablation study was performed, and the effectiveness of the novel loss function was validated.
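The skip connections that propagate low-level spatial features to the decoder are typically an upsample-and-concatenate step. Below is a minimal generic U-Net-style decoder stage illustrating the idea, not DCNet's exact architecture:

```python
import torch
import torch.nn as nn

class DecoderStage(nn.Module):
    """Upsample the deeper feature map and concatenate the encoder's
    skip features before convolving, as in U-Net-style decoders."""
    def __init__(self, deep_ch: int, skip_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(deep_ch, deep_ch // 2, kernel_size=2, stride=2)
        self.conv = nn.Conv2d(deep_ch // 2 + skip_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, deep: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        x = torch.cat([self.up(deep), skip], dim=1)  # fuse semantics + spatial detail
        return torch.relu(self.conv(x))

deep, skip = torch.randn(1, 64, 32, 32), torch.randn(1, 32, 64, 64)
print(DecoderStage(64, 32, 32)(deep, skip).shape)  # torch.Size([1, 32, 64, 64])
```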
Affiliation(s)
- Yang Li: School of Mathematics, Sun Yat-sen University, Guangzhou 510275, China
- Guanghui Han: School of Biomedical Engineering, Sun Yat-sen University, Shenzhen 518107, China; School of Information Engineering, North China University of Water Resources and Electric Power, Zhengzhou 450046, China
- Xiujian Liu: School of Biomedical Engineering, Sun Yat-sen University, Shenzhen 518107, China
17. Liu Y, Yuan X, Jiang X, Wang P, Kou J, Wang H, Liu M. Dilated Adversarial U-Net Network for automatic gross tumor volume segmentation of nasopharyngeal carcinoma. Appl Soft Comput 2021. [DOI: 10.1016/j.asoc.2021.107722]
18. Yousefirizi F, Jha AK, Brosch-Lenz J, Saboury B, Rahmim A. Toward High-Throughput Artificial Intelligence-Based Segmentation in Oncological PET Imaging. PET Clin 2021; 16:577-596. [PMID: 34537131] [DOI: 10.1016/j.cpet.2021.06.001]
Abstract
Artificial intelligence (AI) techniques for image-based segmentation have garnered much attention in recent years. Convolutional neural networks have shown impressive results and potential toward fully automated segmentation in medical imaging, and particularly PET imaging. To cope with the limited access to the annotated data needed by supervised AI methods, given that manual delineation is tedious and error-prone, semi-supervised and unsupervised AI techniques have also been explored for segmentation of tumors or normal organs in single- and bi-modality scans. This work reviews existing AI techniques for segmentation tasks and the evaluation criteria for translational AI-based segmentation efforts toward routine adoption in clinical workflows.
Affiliation(s)
- Fereshteh Yousefirizi: Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada
- Abhinav K Jha: Department of Biomedical Engineering, Washington University in St. Louis, St Louis, MO 63130, USA; Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO 63110, USA
- Julia Brosch-Lenz: Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada
- Babak Saboury: Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Bethesda, MD 20892, USA; Department of Computer Science and Electrical Engineering, University of Maryland Baltimore County, Baltimore, MD, USA; Department of Radiology, Hospital of the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA
- Arman Rahmim: Department of Radiology, University of British Columbia, BC Cancer, BC Cancer Research Institute, 675 West 10th Avenue, Office 6-112, Vancouver, British Columbia V5Z 1L3, Canada; Department of Physics, University of British Columbia, Senior Scientist & Provincial Medical Imaging Physicist, BC Cancer, BC Cancer Research Institute, 675 West 10th Avenue, Office 6-112, Vancouver, British Columbia V5Z 1L3, Canada
19. Liu Y, Gao MJ, Zhou J, Du F, Chen L, Huang ZK, Hu JB, Lou C. Changes of [18F]FDG-PET/CT quantitative parameters in tumor lesions by the Bayesian penalized-likelihood PET reconstruction algorithm and its influencing factors. BMC Med Imaging 2021; 21:133. [PMID: 34530768] [PMCID: PMC8444406] [DOI: 10.1186/s12880-021-00664-7]
Abstract
Background To compare the changes in quantitative parameters and in the size and degree of 18F-fluorodeoxyglucose ([18F]FDG) uptake of malignant tumor lesions between the Bayesian penalized-likelihood (BPL) and non-BPL reconstruction algorithms. Methods Positron emission tomography/computed tomography images of 86 malignant tumor lesions were reconstructed using ordered subset expectation maximization (OSEM), OSEM + time of flight (TOF), OSEM + TOF + point spread function (PSF), and BPL. The [18F]FDG parameters maximum standardized uptake value (SUVmax), SUVmean, metabolic tumor volume (MTV), total lesion glycolysis (TLG), and signal-to-background ratio (SBR) were measured for these lesions. Quantitative parameters were compared between reconstruction algorithms, and correlations between parameter variation and lesion size or degree of [18F]FDG uptake were analyzed. Results After BPL reconstruction, SUVmax, SUVmean, and SBR were significantly increased, while MTV was significantly decreased. The difference values %ΔSUVmax, %ΔSUVmean, %ΔSBR, and the absolute value of %ΔMTV between BPL and OSEM + TOF were 40.00%, 38.50%, 33.60%, and 33.20%, respectively, significantly higher than those between BPL and OSEM + TOF + PSF. Similar results were observed when comparing OSEM and OSEM + TOF + PSF with BPL. %ΔSUVmax, %ΔSUVmean, and %ΔSBR were all significantly negatively correlated with the size and degree of [18F]FDG uptake of the lesions, whereas significant positive correlations were observed for %ΔMTV and %ΔTLG. Conclusion The BPL reconstruction algorithm significantly increased SUVmax, SUVmean, and SBR and decreased the MTV of tumor lesions, especially in small or relatively hypometabolic lesions.
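These quantitative parameters follow standard definitions (TLG = SUVmean × MTV). A minimal sketch computing them from a SUV volume and a binary lesion mask follows; the background region used for SBR varies between studies and is assumed here, so this is an illustration rather than the authors' measurement protocol:

```python
import numpy as np

def pet_parameters(suv: np.ndarray, lesion: np.ndarray,
                   background: np.ndarray, voxel_ml: float) -> dict:
    """SUVmax/SUVmean over the lesion, MTV in mL, TLG = SUVmean * MTV,
    and SBR = lesion SUVmax over mean background SUV."""
    lesion_suv = suv[lesion.astype(bool)]
    suv_max = float(lesion_suv.max())
    suv_mean = float(lesion_suv.mean())
    mtv = float(lesion.sum()) * voxel_ml      # metabolic tumor volume (mL)
    return {
        "SUVmax": suv_max,
        "SUVmean": suv_mean,
        "MTV": mtv,
        "TLG": suv_mean * mtv,                # total lesion glycolysis
        "SBR": suv_max / float(suv[background.astype(bool)].mean()),
    }

suv = np.random.rand(32, 64, 64) + 1.0
lesion = np.zeros_like(suv, dtype=bool); lesion[10:14, 20:30, 20:30] = True
print(pet_parameters(suv, lesion, ~lesion, voxel_ml=0.064))
```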
Affiliation(s)
- Yao Liu: Department of Nuclear Medicine, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, 3 East Qingchun Rd, Jianggan District, Hangzhou, 310000, Zhejiang, People's Republic of China
- Mei-Jia Gao: Department of Nuclear Medicine, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, 3 East Qingchun Rd, Jianggan District, Hangzhou, 310000, Zhejiang, People's Republic of China
- Jie Zhou: Department of Nuclear Medicine, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, 3 East Qingchun Rd, Jianggan District, Hangzhou, 310000, Zhejiang, People's Republic of China
- Fan Du: Department of Nuclear Medicine, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, 3 East Qingchun Rd, Jianggan District, Hangzhou, 310000, Zhejiang, People's Republic of China
- Liang Chen: Department of Nuclear Medicine, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, 3 East Qingchun Rd, Jianggan District, Hangzhou, 310000, Zhejiang, People's Republic of China
- Zhong-Ke Huang: Department of Nuclear Medicine, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, 3 East Qingchun Rd, Jianggan District, Hangzhou, 310000, Zhejiang, People's Republic of China
- Ji-Bo Hu: Department of Nuclear Medicine, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, 3 East Qingchun Rd, Jianggan District, Hangzhou, 310000, Zhejiang, People's Republic of China
- Cen Lou: Department of Nuclear Medicine, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, 3 East Qingchun Rd, Jianggan District, Hangzhou, 310000, Zhejiang, People's Republic of China
20. Samarasinghe G, Jameson M, Vinod S, Field M, Dowling J, Sowmya A, Holloway L. Deep learning for segmentation in radiation therapy planning: a review. J Med Imaging Radiat Oncol 2021; 65:578-595. [PMID: 34313006] [DOI: 10.1111/1754-9485.13286]
Abstract
Segmentation of organs and structures, as either targets or organs-at-risk, has a significant influence on the success of radiation therapy. Manual segmentation is a tedious and time-consuming task for clinicians, and inter-observer variability can affect the outcomes of radiation therapy. The recent surge of interest in deep neural networks has produced many powerful auto-segmentation methods, most of them variations of convolutional neural networks (CNNs). This paper presents a descriptive review of the literature on deep learning techniques for segmentation in radiation therapy planning. The most common CNN architecture across the four clinical subsites considered was U-Net, and the majority of deep learning segmentation articles focused on head and neck normal tissue structures. The most common data sets were in-house CT images, along with some public data sets. N-fold cross-validation was commonly employed; however, not all work separated training, test, and validation data sets. This area of research is expanding rapidly. To facilitate comparisons of proposed methods and benchmarking, consistent use of appropriate metrics and independent validation should be carefully considered.
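The point about separating training, validation, and test data can be made concrete: holding out a test set before running N-fold cross-validation ensures no fold ever sees the data used for final reporting. A minimal sketch with scikit-learn utilities (patient counts are illustrative):

```python
import numpy as np
from sklearn.model_selection import KFold, train_test_split

patient_ids = np.arange(100)

# Hold out an independent test set first, so no fold ever sees it.
dev_ids, test_ids = train_test_split(patient_ids, test_size=0.2, random_state=0)

# Then run N-fold cross-validation on the development set only.
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kfold.split(dev_ids)):
    train_ids, val_ids = dev_ids[train_idx], dev_ids[val_idx]
    # Train on train_ids, tune on val_ids; report final metrics on test_ids once.
    print(f"fold {fold}: {len(train_ids)} train / {len(val_ids)} val")
```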
Affiliation(s)
- Gihan Samarasinghe: School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia
- Michael Jameson: Genesiscare, Sydney, New South Wales, Australia; St Vincent's Clinical School, University of New South Wales, Sydney, New South Wales, Australia
- Shalini Vinod: Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia; Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia
- Matthew Field: Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia; Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia
- Jason Dowling: Commonwealth Scientific and Industrial Research Organisation, Australian E-Health Research Centre, Herston, Queensland, Australia
- Arcot Sowmya: School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Lois Holloway: Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia; Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia
21. Lei W, Mei H, Sun Z, Ye S, Gu R, Wang H, Huang R, Zhang S, Zhang S, Wang G. Automatic segmentation of organs-at-risk from head-and-neck CT using separable convolutional neural network with hard-region-weighted loss. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.01.135]
22. Herskovits EH. Artificial intelligence in molecular imaging. Ann Transl Med 2021; 9:824. [PMID: 34268437] [PMCID: PMC8246206] [DOI: 10.21037/atm-20-6191]
Abstract
AI has, to varying degrees, affected all aspects of molecular imaging, from image acquisition to diagnosis, and during the last decade the advent of deep learning in particular has transformed medical image analysis. Although the majority of recent advances have resulted from neural-network models applied to image segmentation, a broad range of techniques has shown promise for image reconstruction, image synthesis, differential-diagnosis generation, and treatment guidance. Applications of AI to drug design indicate the way forward for using AI to facilitate molecular-probe design, which is still in its early stages. Deep-learning models have demonstrated increased efficiency and image quality for PET reconstruction from sinogram data. Generative adversarial networks (GANs), which are paired neural networks that are jointly trained to generate and classify images, have found applications in modality transformation, artifact reduction, and synthetic-PET-image generation. Some AI applications, based either partly or completely on neural-network approaches, have demonstrated superior differential-diagnosis generation relative to radiologists. However, AI models have a history of brittleness, and physicians and patients may not trust AI applications that cannot explain their reasoning. To date, the majority of molecular-imaging applications of AI have been confined to research projects and are only beginning to find their way into routine clinical workflows via commercialization and, in some cases, integration into scanner hardware. Evaluation of actual clinical products will yield more realistic assessments of AI's utility in molecular imaging.
Collapse
Affiliation(s)
- Edward H Herskovits
- Department of Diagnostic Radiology and Nuclear Medicine, The University of Maryland, Baltimore, School of Medicine, Baltimore, MD, USA
| |
Collapse
|
23
|
Yakar M, Etiz D. Artificial intelligence in radiation oncology. Artif Intell Med Imaging 2021; 2:13-31. [DOI: 10.35711/aimi.v2.i2.13] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/04/2021] [Revised: 03/30/2021] [Accepted: 04/20/2021] [Indexed: 02/06/2023] Open
Abstract
Artificial intelligence (AI) is a computer science that tries to mimic human-like intelligence in machines that use computer software and algorithms to perform specific tasks without direct human input. Machine learning (ML) is a subunit of AI that uses data-driven algorithms that learn to imitate human behavior based on a previous example or experience. Deep learning is an ML technique that uses deep neural networks to create a model. The growth and sharing of data, increasing computing power, and developments in AI have initiated a transformation in healthcare. Advances in radiation oncology have produced a significant amount of data that must be integrated with computed tomography imaging, dosimetry, and imaging performed before each fraction. Of the many algorithms used in radiation oncology, has advantages and limitations with different computational power requirements. The aim of this review is to summarize the radiotherapy (RT) process in workflow order by identifying specific areas in which quality and efficiency can be improved by ML. The RT stage is divided into seven stages: patient evaluation, simulation, contouring, planning, quality control, treatment application, and patient follow-up. A systematic evaluation of the applicability, limitations, and advantages of AI algorithms has been done for each stage.
Collapse
Affiliation(s)
- Melek Yakar
- Department of Radiation Oncology, Eskisehir Osmangazi University Faculty of Medicine, Eskisehir 26040, Turkey
- Center of Research and Application for Computer Aided Diagnosis and Treatment in Health, Eskisehir Osmangazi University, Eskisehir 26040, Turkey
| | - Durmus Etiz
- Department of Radiation Oncology, Eskisehir Osmangazi University Faculty of Medicine, Eskisehir 26040, Turkey
- Center of Research and Application for Computer Aided Diagnosis and Treatment in Health, Eskisehir Osmangazi University, Eskisehir 26040, Turkey
| |
Collapse
|
24
|
Iantsen A, Ferreira M, Lucia F, Jaouen V, Reinhold C, Bonaffini P, Alfieri J, Rovira R, Masson I, Robin P, Mervoyer A, Rousseau C, Kridelka F, Decuypere M, Lovinfosse P, Pradier O, Hustinx R, Schick U, Visvikis D, Hatt M. Convolutional neural networks for PET functional volume fully automatic segmentation: development and validation in a multi-center setting. Eur J Nucl Med Mol Imaging 2021; 48:3444-3456. [PMID: 33772335 PMCID: PMC8440243 DOI: 10.1007/s00259-021-05244-z] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2020] [Accepted: 02/07/2021] [Indexed: 11/12/2022]
Abstract
Purpose In this work, we addressed fully automatic determination of tumor functional uptake from positron emission tomography (PET) images without relying on other image modalities or additional prior constraints, in the context of multicenter images with heterogeneous characteristics. Methods In cervical cancer, an additional challenge is the location of the tumor uptake near or even stuck to the bladder. PET datasets of 232 patients from five institutions were exploited. To avoid unreliable manual delineations, the ground truth was generated with a semi-automated approach: a volume containing the tumor and excluding the bladder was first manually determined, then a well-validated, semi-automated approach relying on the Fuzzy locally Adaptive Bayesian (FLAB) algorithm was applied to generate the ground truth. Our model built on the U-Net architecture incorporates residual blocks with concurrent spatial squeeze and excitation modules, as well as learnable non-linear downsampling and upsampling blocks. Experiments relied on cross-validation (four institutions for training and validation, and the fifth for testing). Results The model achieved good Dice similarity coefficient (DSC) with little variability across institutions (0.80 ± 0.03), with higher recall (0.90 ± 0.05) than precision (0.75 ± 0.05) and improved results over the standard U-Net (DSC 0.77 ± 0.05, recall 0.87 ± 0.02, precision 0.74 ± 0.08). Both vastly outperformed a fixed threshold at 40% of SUVmax (DSC 0.33 ± 0.15, recall 0.52 ± 0.17, precision 0.30 ± 0.16). In all cases, the model could determine the tumor uptake without including the bladder. Neither shape priors nor anatomical information was required to achieve efficient training. Conclusion The proposed method could facilitate the deployment of a fully automated radiomics pipeline in such a challenging multicenter context. Supplementary Information The online version contains supplementary material available at 10.1007/s00259-021-05244-z.
Collapse
Affiliation(s)
- Andrei Iantsen
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France.
| | - Marta Ferreira
- GIGA-CRC in vivo Imaging, University of Liège, Liège, Belgium
| | - Francois Lucia
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
| | - Vincent Jaouen
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
| | - Caroline Reinhold
- Department of Radiology, McGill University Health Centre (MUHC), Montreal, Canada
| | - Pietro Bonaffini
- Department of Radiology, McGill University Health Centre (MUHC), Montreal, Canada
| | - Joanne Alfieri
- Department of Radiation Oncology, McGill University Health Centre (MUHC), Montreal, Canada
| | - Ramon Rovira
- Gynecology Oncology and Laparoscopy Department, Hospital de la Santa Creu i Sant Pau, Barcelona, Spain
| | - Ingrid Masson
- Department of Radiation Oncology, Institut de Cancérologie de l'Ouest (ICO), Nantes, France
| | - Philippe Robin
- Nuclear Medicine Department, University Hospital, Brest, France
| | - Augustin Mervoyer
- Department of Radiation Oncology, Institut de Cancérologie de l'Ouest (ICO), Nantes, France
| | - Caroline Rousseau
- Nuclear Medicine Department, Institut de Cancérologie de l'Ouest (ICO), Nantes, France
| | - Frédéric Kridelka
- Division of Oncological Gynecology, University Hospital of Liège, Liège, Belgium
| | - Marjolein Decuypere
- Division of Oncological Gynecology, University Hospital of Liège, Liège, Belgium
| | - Pierre Lovinfosse
- Division of Nuclear Medicine and Oncological Imaging, University Hospital of Liège, Liège, Belgium
| | | | - Roland Hustinx
- GIGA-CRC in vivo Imaging, University of Liège, Liège, Belgium
| | - Ulrike Schick
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
| | | | - Mathieu Hatt
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
| |
Collapse
|
25
|
Abstract
Head–Neck Cancer (HNC) has a relevant impact on the oncology patient population and for this reason, the present review is dedicated to this type of neoplastic disease. In particular, a collection of methods aimed at tumor delineation is presented, because this is a fundamental task to perform efficient radiotherapy. Such a segmentation task is often performed on uni-modal data (usually Positron Emission Tomography (PET)) even though multi-modal images are preferred (PET-Computerized Tomography (CT)/PET-Magnetic Resonance (MR)). Datasets can be private or freely provided by online repositories on the web. The adopted techniques can belong to the well-known image processing/computer-vision algorithms or the newest deep learning/artificial intelligence approaches. All these aspects are analyzed in the present review and comparison among various approaches is performed. From the present review, the authors draw the conclusion that despite the encouraging results of computerized approaches, their performance is far from handmade tumor delineation result.
Collapse
|
26
|
Arabi H, AkhavanAllaf A, Sanaat A, Shiri I, Zaidi H. The promise of artificial intelligence and deep learning in PET and SPECT imaging. Phys Med 2021; 83:122-137. [DOI: 10.1016/j.ejmp.2021.03.008] [Citation(s) in RCA: 84] [Impact Index Per Article: 28.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/27/2020] [Revised: 02/18/2021] [Accepted: 03/03/2021] [Indexed: 02/06/2023] Open
|
27
|
Gong Y, Shan H, Teng Y, Tu N, Li M, Liang G, Wang G, Wang S. Parameter-Transferred Wasserstein Generative Adversarial Network (PT-WGAN) for Low-Dose PET Image Denoising. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021; 5:213-223. [PMID: 35402757 PMCID: PMC8993163 DOI: 10.1109/trpms.2020.3025071] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/27/2023]
Abstract
Due to the widespread use of positron emission tomography (PET) in clinical practice, the potential risk of PET-associated radiation dose to patients needs to be minimized. However, with the reduction in the radiation dose, the resultant images may suffer from noise and artifacts that compromise diagnostic performance. In this paper, we propose a parameter-transferred Wasserstein generative adversarial network (PT-WGAN) for low-dose PET image denoising. The contributions of this paper are twofold: i) a PT-WGAN framework is designed to denoise low-dose PET images without compromising structural details, and ii) a task-specific initialization based on transfer learning is developed to train PT-WGAN using trainable parameters transferred from a pretrained model, which significantly improves the training efficiency of PT-WGAN. The experimental results on clinical data show that the proposed network can suppress image noise more effectively while preserving better image fidelity than recently published state-of-the-art methods. We make our code available at https://github.com/90n9-yu/PT-WGAN.
Collapse
Affiliation(s)
- Yu Gong
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China, and Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Hongming Shan
- Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, Shanghai 200433, China, and the Shanghai Center for Brain Science and Brain-Inspired Technology, Shanghai 201210, China
| | - Yueyang Teng
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China, and the Key Laboratory of Intelligent Computing in Medical Images, Ministry of Education, Shenyang 110169, China
| | - Ning Tu
- PET-CT/MRI Center and Molecular Imaging Center, Wuhan University Renmin Hospital, Wuhan, 430060, China
| | - Ming Li
- Neusoft Medical Systems Co., Ltd, Shenyang 110167, China
| | - Guodong Liang
- Neusoft Medical Systems Co., Ltd, Shenyang 110167, China
| | - Ge Wang
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180 USA
| | - Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| |
Collapse
|
28
|
Torres-Velázquez M, Chen WJ, Li X, McMillan AB. Application and Construction of Deep Learning Networks in Medical Imaging. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021; 5:137-159. [PMID: 34017931 PMCID: PMC8132932 DOI: 10.1109/trpms.2020.3030611] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Deep learning (DL) approaches are part of the machine learning (ML) subfield concerned with the development of computational models to train artificial intelligence systems. DL models are characterized by automatically extracting high-level features from the input data to learn the relationship between matching datasets. Thus, its implementation offers an advantage over common ML methods that often require the practitioner to have some domain knowledge of the input data to select the best latent representation. As a result of this advantage, DL has been successfully applied within the medical imaging field to address problems, such as disease classification and tumor segmentation for which it is difficult or impossible to determine which image features are relevant. Therefore, taking into consideration the positive impact of DL on the medical imaging field, this article reviews the key concepts associated with its evolution and implementation. The sections of this review summarize the milestones related to the development of the DL field, followed by a description of the elements of deep neural network and an overview of its application within the medical imaging field. Subsequently, the key steps necessary to implement a supervised DL application are defined, and associated limitations are discussed.
Collapse
Affiliation(s)
- Maribel Torres-Velázquez
- Department of Biomedical Engineering, College of Engineering, University of Wisconsin-Madison, Madison, WI 53705 USA
| | - Wei-Jie Chen
- Department of Electrical and Computer Engineering, College of Engineering, University of Wisconsin-Madison, Madison, WI 53705 USA
| | - Xue Li
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53705 USA
| | - Alan B McMillan
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53705 USA, and also with the Department of Medical Physics, University of Wisconsin-Madison, Madison, WI 53705 USA
| |
Collapse
|
29
|
Jing B, Deng Y, Zhang T, Hou D, Li B, Qiang M, Liu K, Ke L, Li T, Sun Y, Lv X, Li C. Deep learning for risk prediction in patients with nasopharyngeal carcinoma using multi-parametric MRIs. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 197:105684. [PMID: 32781421 DOI: 10.1016/j.cmpb.2020.105684] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/06/2019] [Accepted: 07/28/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND Magnetic resonance images (MRI) is the main diagnostic tool for risk stratification and treatment decision in nasopharyngeal carcinoma (NPC). However, the holistic feature information of multi-parametric MRIs has not been fully exploited by clinicians to accurately evaluate patients. OBJECTIVE To help clinicians fully utilize the missed information to regroup patients, we built an end-to-end deep learning model to extract feature information from multi-parametric MRIs for predicting and stratifying the risk scores of NPC patients. METHODS In this paper, we proposed an end-to-end multi-modality deep survival network (MDSN) to precisely predict the risk of disease progression of NPC patients. Extending from 3D dense net, this proposed MDSN extracted deep representation from multi-parametric MRIs (T1w, T2w, and T1c). Moreover, deep features and clinical stages were integrated through MDSN to more accurately predict the overall risk score (ORS) of individual NPC patient. RESULT A total of 1,417 individuals treated between January 2012 and December 2014 were included for training and validating the end-to-end MDSN. Results were then tested in a retrospective cohort of 429 patients included in the same institution. The C-index of the proposed method with or without clinical stages was 0.672 and 0.651 on the test set, respectively, which was higher than the that of the stage grouping (0.610). CONCLUSIONS The C-index of the model which integrated clinical stages with deep features is 0.062 higher than that of stage grouping alone (0.672 vs 0.610). We conclude that features extracted from multi-parametric MRIs based on MDSN can well assist the clinical stages in regrouping patients.
Collapse
Affiliation(s)
- Bingzhong Jing
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, China; Department of Information, Sun Yat-Sen University Cancer Centre, Guangzhou 510060, China
| | - Yishu Deng
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, China; Department of Information, Sun Yat-Sen University Cancer Centre, Guangzhou 510060, China
| | - Tao Zhang
- Guangzhou Deepaint intelligence Tenchnology Co.Ltd., Guangzhou 510060, China
| | - Dan Hou
- Guangzhou Deepaint intelligence Tenchnology Co.Ltd., Guangzhou 510060, China
| | - Bin Li
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, China; Department of Information, Sun Yat-Sen University Cancer Centre, Guangzhou 510060, China
| | - Mengyun Qiang
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, China; Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Centre, Guangzhou 510060, China
| | - Kuiyuan Liu
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, China; Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Centre, Guangzhou 510060, China
| | - Liangru Ke
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, China; Department of Radiology, Sun Yat-Sen University Cancer Centre, Guangzhou 510060, China
| | - Taihe Li
- Shenzhen Annet Information System Co.LTD., Guangzhou 510060, China
| | - Ying Sun
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, China; Department of Radiotherapy, Sun Yat-Sen University Cancer Centre, Guangzhou 510060, China
| | - Xing Lv
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, China; Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Centre, Guangzhou 510060, China.
| | - Chaofeng Li
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, China; Department of Information, Sun Yat-Sen University Cancer Centre, Guangzhou 510060, China.
| |
Collapse
|
30
|
Arabi H, Zaidi H. Applications of artificial intelligence and deep learning in molecular imaging and radiotherapy. Eur J Hybrid Imaging 2020; 4:17. [PMID: 34191161 PMCID: PMC8218135 DOI: 10.1186/s41824-020-00086-8] [Citation(s) in RCA: 45] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Accepted: 08/10/2020] [Indexed: 12/22/2022] Open
Abstract
This brief review summarizes the major applications of artificial intelligence (AI), in particular deep learning approaches, in molecular imaging and radiation therapy research. To this end, the applications of artificial intelligence in five generic fields of molecular imaging and radiation therapy, including PET instrumentation design, PET image reconstruction quantification and segmentation, image denoising (low-dose imaging), radiation dosimetry and computer-aided diagnosis, and outcome prediction are discussed. This review sets out to cover briefly the fundamental concepts of AI and deep learning followed by a presentation of seminal achievements and the challenges facing their adoption in clinical setting.
Collapse
Affiliation(s)
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
| | - Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland.
- Geneva University Neurocenter, Geneva University, CH-1205, Geneva, Switzerland.
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700, Groningen, RB, Netherlands.
- Department of Nuclear Medicine, University of Southern Denmark, 500, Odense, Denmark.
| |
Collapse
|
31
|
A Collaborative Dictionary Learning Model for Nasopharyngeal Carcinoma Segmentation on Multimodalities MR Sequences. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2020; 2020:7562140. [PMID: 32908581 PMCID: PMC7474760 DOI: 10.1155/2020/7562140] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/16/2020] [Revised: 08/06/2020] [Accepted: 08/12/2020] [Indexed: 11/18/2022]
Abstract
Nasopharyngeal carcinoma (NPC) is the most common malignant tumor of the nasopharynx. The delicate nature of the nasopharyngeal structures means that noninvasive magnetic resonance imaging (MRI) is the preferred diagnostic technique for NPC. However, NPC is a typically infiltrative tumor, usually with a small volume, and thus, it remains challenging to discriminate it from tightly connected surrounding tissues. To address this issue, this study proposes a voxel-wise discriminate method for locating and segmenting NPC from normal tissues in MRI sequences. The located NPC is refined to obtain its accurate segmentation results by an original multiviewed collaborative dictionary classification (CODL) model. The proposed CODL reconstructs a latent intact space and equips it with discriminative power for the collective multiview analysis task. Experiments on synthetic data demonstrate that CODL is capable of finding a discriminative space for multiview orthogonal data. We then evaluated the method on real NPC. Experimental results show that CODL could accurately discriminate and localize NPCs of different volumes. This method achieved superior performances in segmenting NPC compared with benchmark methods. Robust segmentation results show that CODL can effectively assist clinicians in locating NPC.
Collapse
|
32
|
Wang X, Yang G, Zhang Y, Zhu L, Xue X, Zhang B, Cai C, Jin H, Zheng J, Wu J, Yang W, Dai Z. Automated delineation of nasopharynx gross tumor volume for nasopharyngeal carcinoma by plain CT combining contrast-enhanced CT using deep learning. JOURNAL OF RADIATION RESEARCH AND APPLIED SCIENCES 2020. [DOI: 10.1080/16878507.2020.1795565] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Affiliation(s)
- Xuetao Wang
- Department of Radiotherapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Geng Yang
- Department of Radiotherapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Yiwen Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
| | - Lin Zhu
- Department of Radiotherapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Xiaoguang Xue
- Department of Radiotherapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Bailin Zhang
- Department of Radiotherapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Chunya Cai
- Department of Radiotherapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Huaizhi Jin
- Department of Radiotherapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Jianxiao Zheng
- Department of Radiotherapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Jian Wu
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
| | - Wei Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
| | | |
Collapse
|
33
|
Image segmentation of nasopharyngeal carcinoma using 3D CNN with long-range skip connection and multi-scale feature pyramid. Soft comput 2020. [DOI: 10.1007/s00500-020-04708-y] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/23/2023]
|
34
|
Ke L, Deng Y, Xia W, Qiang M, Chen X, Liu K, Jing B, He C, Xie C, Guo X, Lv X, Li C. Development of a self-constrained 3D DenseNet model in automatic detection and segmentation of nasopharyngeal carcinoma using magnetic resonance images. Oral Oncol 2020; 110:104862. [PMID: 32615440 DOI: 10.1016/j.oraloncology.2020.104862] [Citation(s) in RCA: 39] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2020] [Revised: 05/18/2020] [Accepted: 06/14/2020] [Indexed: 01/31/2023]
Abstract
OBJECTIVES We aimed to develop a dual-task model to detect and segment nasopharyngeal carcinoma (NPC) automatically in magnetic resource images (MRI) based on deep learning method, since the differential diagnosis of NPC and atypical benign hyperplasia was difficult and the radiotherapy target contouring of NPC was labor-intensive. MATERIALS AND METHODS A self-constrained 3D DenseNet (SC-DenseNet) architecture was improved using separated training and validation sets. A total of 4100 individuals were finally enrolled and split into the training, validation and test sets at a proximate ratio of 8:1:1 using simple randomization. The diagnostic metrics of the established model against experienced radiologists was compared in the test set. The dice similarity coefficient (DSC) of manual and model-defined tumor region was used to evaluate the efficacy of segmentation. RESULTS Totally, 3142 nasopharyngeal carcinoma (NPC) and 958 benign hyperplasia were included. The SC-DenseNet model showed encouraging performance in detecting NPC, attained a higher overall accuracy, sensitivity and specificity than those of the experienced radiologists (97.77% vs 95.87%, 99.68% vs 99.24% and 91.67% vs 85.21%, respectively). Moreover, the model also exhibited promising performance in automatic segmentation of tumor region in NPC, with an average DSC at 0.77 ± 0.07 in the test set. CONCLUSIONS The SC-DenseNet model showed competence in automatic detection and segmentation of NPC in MRI, indicating the promising application value as an assistant tool in clinical practice, especially in screening project.
Collapse
Affiliation(s)
- Liangru Ke
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, PR China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou 510060, PR China
| | - Yishu Deng
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, PR China; Department of Information, Sun Yat-Sen University Cancer Center, Guangzhou 510060, PR China
| | - Weixiong Xia
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, PR China; Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, PR China
| | - Mengyun Qiang
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, PR China; Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, PR China
| | - Xi Chen
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, PR China; Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, PR China
| | - Kuiyuan Liu
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, PR China; Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, PR China
| | - Bingzhong Jing
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, PR China; Department of Information, Sun Yat-Sen University Cancer Center, Guangzhou 510060, PR China
| | - Caisheng He
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, PR China; Department of Information, Sun Yat-Sen University Cancer Center, Guangzhou 510060, PR China
| | - Chuanmiao Xie
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, PR China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou 510060, PR China
| | - Xiang Guo
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, PR China; Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, PR China
| | - Xing Lv
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, PR China; Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, PR China.
| | - Chaofeng Li
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, PR China; Department of Information, Sun Yat-Sen University Cancer Center, Guangzhou 510060, PR China; Precision Medicine Center, Sun Yat-Sen University Cancer Center, Guangzhou 510060, PR China.
| |
Collapse
|
35
|
Extracting and Selecting Robust Radiomic Features from PET/MR Images in Nasopharyngeal Carcinoma. Mol Imaging Biol 2020; 22:1581-1591. [DOI: 10.1007/s11307-020-01507-7] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
|
36
|
Comparison of rigid and deformable image registration for nasopharyngeal carcinoma radiotherapy planning with diagnostic position PET/CT. Jpn J Radiol 2019; 38:256-264. [PMID: 31834577 DOI: 10.1007/s11604-019-00911-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2019] [Accepted: 12/06/2019] [Indexed: 10/25/2022]
Abstract
PURPOSE This observer study aimed to compare rigid image registration (RIR) with deformable image registration (DIR) for diagnostic position (DP) positron emission tomography/computed tomography (PET/CT) images in the delineation of gross tumor volumes (GTVs) in nasopharyngeal carcinoma (NPC) radiotherapy planning. MATERIALS AND METHODS Four radiation oncologists individually delineated the GTVs, GTVRIR, and GTVDIR, on planning CT (pCT) images registered with DP-PET/CT images using RIR and B-spline-based DIR, respectively. Reference GTVs were independently delineated by all radiation oncologists using radiotherapy position (RP)-PET/CT images. DP- and RP-PET/CT images for 14 patients with NPC were acquired using early and delayed scans, respectively. Dice's similarity coefficient (DSC), mean distance to agreement, and volume agreement with reference GTVs were compared by considering the interobserver variability in reference contours. RESULTS The average DSCs for GTVRIR and GTVDIR were 0.77 and 0.77, which were acceptable for GTV delineation. There were no statistically significant differences between GTVRIR and GTVDIR in all evaluation indexes (p > 0.05). Furthermore, the correlation between neck flexion angle differences and GTV accuracy was not statistically significant (p > 0.05). CONCLUSION RIR was a feasible choice compared with the B-spline-based DIR in GTV delineation for NPC under variations of neck flexion angle.
Collapse
|
37
|
Jeba JA, Devi SN. Efficient graph cut optimization using hybrid kernel functions for segmentation of FDG uptakes in fused PET/CT images. Appl Soft Comput 2019. [DOI: 10.1016/j.asoc.2019.105815] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
|
38
|
Using deep learning techniques in medical imaging: a systematic review of applications on CT and PET. Artif Intell Rev 2019. [DOI: 10.1007/s10462-019-09788-3] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
|
39
|
Nensa F, Demircioglu A, Rischpler C. Artificial Intelligence in Nuclear Medicine. J Nucl Med 2019; 60:29S-37S. [DOI: 10.2967/jnumed.118.220590] [Citation(s) in RCA: 55] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2019] [Accepted: 05/16/2019] [Indexed: 02/06/2023] Open
|