1
Zhang S, Li K, Sun Y, Wan Y, Ao Y, Zhong Y, Liang M, Wang L, Chen X, Pei X, Hu Y, Chen D, Li M, Shan H. Deep Learning for Automatic Gross Tumor Volumes Contouring in Esophageal Cancer Based on Contrast-Enhanced Computed Tomography Images: A Multi-Institutional Study. Int J Radiat Oncol Biol Phys 2024:S0360-3016(24)00350-X. PMID: 38432286. DOI: 10.1016/j.ijrobp.2024.02.035.
Abstract
PURPOSE: To develop and externally validate an automatic artificial intelligence (AI) tool for delineating gross tumor volume (GTV) in patients with esophageal squamous cell carcinoma (ESCC), which can assist in neoadjuvant or radical radiation therapy treatment planning.
METHODS AND MATERIALS: In this multi-institutional study, contrast-enhanced CT images from 580 eligible ESCC patients were retrospectively collected. The GTV contours delineated by 2 experts via consensus were used as ground truth. A 3-dimensional deep learning model was developed for GTV contouring in the training cohort and internally and externally validated in 3 validation cohorts. The AI tool was compared against 12 board-certified experts in 25 patients randomly selected from the external validation cohort to evaluate its assistance in improving contouring performance and reducing variation. Contouring performance was measured using the Dice similarity coefficient (DSC) and average surface distance. Additionally, our previously established radiomics model for predicting pathologic complete response was used to compare AI-generated and ground truth contours, to assess the potential of the AI contouring tool in radiomics analysis.
RESULTS: The AI tool demonstrated good GTV contouring performance in the multicenter validation cohorts, with median DSC values of 0.865, 0.876, and 0.866 and median average surface distance values of 0.939, 0.789, and 0.875 mm, respectively. Furthermore, the AI tool significantly improved contouring performance for half of the 12 board-certified experts (DSC values, 0.794-0.835 vs 0.856-0.881; P = .003-.048), reduced intra- and interobserver variation by 37.4% and 55.2%, respectively, and reduced contouring time by 77.6%. In the radiomics analysis, 88.7% of radiomic features from ground truth and AI-generated contours demonstrated stable reproducibility, and the two sets of contours yielded similar pathologic complete response prediction performance (P = .430).
CONCLUSIONS Our AI contouring tool can improve GTV contouring performance and facilitate radiomics analysis in ESCC patients, which indicates its potential for GTV contouring during radiation therapy treatment planning and radiomics studies.
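Contouring agreement in the study above is reported as the Dice similarity coefficient (DSC). As a minimal illustration of the metric (a toy sketch on flat binary masks, not the authors' implementation, which operates on 3D CT volumes), DSC = 2|A∩B| / (|A| + |B|):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks.

    Masks are flat sequences of 0/1 voxel labels; returns 1.0 for two
    empty masks by convention.
    """
    if len(mask_a) != len(mask_b):
        raise ValueError("masks must have the same number of voxels")
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 1.0 if total == 0 else 2.0 * intersection / total

# toy example: 2 overlapping voxels, 3 labeled voxels in each mask
print(dice_coefficient([1, 1, 1, 0, 0], [0, 1, 1, 1, 0]))  # → 0.6666666666666666
```

A DSC of 0.865, as reported above, thus corresponds to substantially higher overlap than this toy case.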
Affiliation(s)
- Shuaitong Zhang
- School of Medical Technology, Beijing Institute of Technology, Beijing, China
- Kunwei Li
- Department of Radiology, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, Guangdong, China; Guangdong Provincial Key Laboratory of Biomedical Imaging and Guangdong Provincial Engineering Research Center of Molecular Imaging, Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, Guangdong, China
- Yuchen Sun
- School of Medical Technology, Beijing Institute of Technology, Beijing, China
- Yun Wan
- Department of Radiology, Xinyi City People's Hospital, Xinyi, Guangdong, China
- Yong Ao
- Department of Thoracic Surgery, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong, China; State Key Laboratory of Oncology in South China, Guangdong Esophageal Cancer Institute, Collaborative Innovation Center for Cancer Medicine, Guangzhou, Guangdong, China
- Yinghua Zhong
- Department of Radiology, The Third People's Hospital of Zhuhai, Zhuhai, Guangdong, China
- Mingzhu Liang
- Department of Radiology, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, Guangdong, China
- Lizhu Wang
- Department of Radiology, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, Guangdong, China
- Xiangmeng Chen
- Department of Radiology, Jiangmen Central Hospital, Jiangmen, Guangdong, China
- Xiaofeng Pei
- Department of Radiation Oncology, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, Guangdong, China
- Yi Hu
- Department of Thoracic Surgery, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong, China; State Key Laboratory of Oncology in South China, Guangdong Esophageal Cancer Institute, Collaborative Innovation Center for Cancer Medicine, Guangzhou, Guangdong, China
- Duanduan Chen
- School of Medical Technology, Beijing Institute of Technology, Beijing, China
- Man Li
- Department of Radiology, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, Guangdong, China
- Hong Shan
- Department of Radiology, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, Guangdong, China; Department of Interventional Medicine, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, Guangdong, China
2
Wu Q, Pei Y, Cheng Z, Hu X, Wang C. SDS-Net: A lightweight 3D convolutional neural network with multi-branch attention for multimodal brain tumor accurate segmentation. Math Biosci Eng 2023;20:17384-17406. PMID: 37920059. DOI: 10.3934/mbe.2023773.
Abstract
Accurate and fast segmentation of tumor regions in brain magnetic resonance imaging (MRI) is important for clinical diagnosis, treatment, and monitoring, given the aggressiveness and high mortality rate of brain tumors. However, owing to their computational complexity, convolutional neural networks (CNNs) are difficult to deploy efficiently on resource-limited devices, which restricts their use in practical medical applications. To address this issue, we propose SDS-Net, a lightweight and efficient 3D convolutional neural network for multimodal brain tumor MRI segmentation. SDS-Net combines depthwise separable convolution and traditional convolution to construct 3D lightweight backbone blocks, lightweight feature extraction (LFE) and lightweight feature fusion (LFF) modules, which effectively exploit the rich local features in multimodal images and enhance segmentation performance on sub-tumor regions. In addition, 3D shuffle attention (SA) and 3D self-ensemble (SE) modules are incorporated into the encoder and decoder of the network. The SA module helps capture high-quality spatial and channel features from the modalities, and the SE module acquires more refined edge features by gathering information from each layer. The proposed SDS-Net was validated on the BraTS datasets. On the BraTS 2020 dataset, Dice coefficients of 92.7%, 80.0%, and 88.9% were achieved for the whole tumor (WT), enhancing tumor (ET), and tumor core (TC), respectively. On the BraTS 2021 dataset, the Dice coefficients were 91.8%, 82.5%, and 86.8% for WT, ET, and TC, respectively. Compared with other state-of-the-art methods, SDS-Net achieved superior segmentation performance with fewer parameters and lower computational cost (2.52 M parameters and 68.18 G FLOPs).
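The lightweight design described above rests on depthwise separable convolution, which replaces one dense 3D convolution with a per-channel (depthwise) convolution followed by a 1×1×1 pointwise convolution. A back-of-the-envelope parameter count (a sketch with illustrative channel sizes, not the actual SDS-Net configuration) shows where the savings come from:

```python
def conv3d_params(c_in, c_out, k):
    """Parameters of a dense 3D convolution with kernel size k (bias omitted)."""
    return k ** 3 * c_in * c_out

def depthwise_separable3d_params(c_in, c_out, k):
    """Depthwise conv (k^3 weights per input channel) + 1x1x1 pointwise conv."""
    return k ** 3 * c_in + c_in * c_out

dense = conv3d_params(32, 64, 3)                      # 27 * 32 * 64 = 55296
separable = depthwise_separable3d_params(32, 64, 3)   # 27*32 + 32*64 = 2912
print(dense, separable, round(dense / separable, 1))  # → 55296 2912 19.0
```

For these illustrative sizes the separable form needs roughly 19× fewer weights, which is why mixing it with standard convolution keeps the network small without giving up all dense feature mixing.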
Affiliation(s)
- Qian Wu
- School of Humanistic Medicine, Anhui Medical University, Hefei 230032, China
- School of Biomedical Engineering, Anhui Medical University, Hefei 230032, China
- Yuyao Pei
- School of Biomedical Engineering, Anhui Medical University, Hefei 230032, China
- Zihao Cheng
- School of Biomedical Engineering, Anhui Medical University, Hefei 230032, China
- Xiaopeng Hu
- Department of Medical Imaging, First Affiliated Hospital of Anhui Medical University, Hefei 230032, China
- Changqing Wang
- School of Biomedical Engineering, Anhui Medical University, Hefei 230032, China
3
Menon N, Guidozzi N, Chidambaram S, Markar SR. Performance of radiomics-based artificial intelligence systems in the diagnosis and prediction of treatment response and survival in esophageal cancer: a systematic review and meta-analysis of diagnostic accuracy. Dis Esophagus 2023;36:doad034. PMID: 37236811. PMCID: PMC10789236. DOI: 10.1093/dote/doad034.
Abstract
Radiomics can interpret radiological images in more detail and in less time than the human eye. Some challenges in managing esophageal cancer can be addressed by incorporating radiomics into image interpretation, treatment planning, and the prediction of response and survival. This systematic review and meta-analysis summarizes the evidence for radiomics in esophageal cancer. The systematic review was carried out using the PubMed, MEDLINE, and Ovid EMBASE databases; articles describing radiomics in esophageal cancer were included, and 50 studies met the criteria. For the assessment of treatment response using 18F-FDG PET/computed tomography (CT) scans, seven studies (443 patients) were included in the meta-analysis; the pooled sensitivity and specificity were 86.5% (81.1-90.6) and 87.1% (78.0-92.8). For the assessment of treatment response using CT scans, five studies (625 patients) were included, with a pooled sensitivity and specificity of 86.7% (81.4-90.7) and 76.1% (69.9-81.4). The remaining 37 studies formed the qualitative review, discussing radiomics in diagnosis, radiotherapy planning, and survival prediction. This review explores the wide-ranging possibilities of radiomics in esophageal cancer management. The sensitivities of 18F-FDG PET/CT and CT scans are comparable, but 18F-FDG PET/CT offers improved specificity for AI-based prediction of treatment response. Models integrating clinical and radiomic features facilitate diagnosis and survival prediction. More research is required to compare models and to conduct large-scale studies that build a robust evidence base.
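The pooled sensitivity and specificity figures above come from a diagnostic meta-analysis, which in practice is fitted with bivariate random-effects models. As a deliberately simplified illustration of what is being pooled (naive summation of hypothetical 2×2 counts across studies, not the method used in the review), the calculation looks like this:

```python
def pooled_sens_spec(studies):
    """Naively pool 2x2 diagnostic counts (tp, fp, fn, tn) across studies.

    Real diagnostic meta-analyses fit bivariate random-effects models;
    summing counts is only a rough illustration of the quantities involved.
    """
    tp = sum(s["tp"] for s in studies)
    fp = sum(s["fp"] for s in studies)
    fn = sum(s["fn"] for s in studies)
    tn = sum(s["tn"] for s in studies)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return sensitivity, specificity

# two hypothetical studies with made-up confusion counts
studies = [
    {"tp": 40, "fp": 5, "fn": 10, "tn": 45},
    {"tp": 30, "fp": 10, "fn": 20, "tn": 40},
]
sens, spec = pooled_sens_spec(studies)
print(round(sens, 3), round(spec, 3))  # → 0.7 0.85
```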
Affiliation(s)
- Nainika Menon
- Department of General Surgery, Oxford University Hospitals, Oxford, UK
- Nadia Guidozzi
- Department of General Surgery, University of Witwatersrand, Johannesburg, South Africa
- Swathikan Chidambaram
- Academic Surgical Unit, Department of Surgery and Cancer, Imperial College London, St Mary’s Hospital, London, UK
- Sheraz Rehan Markar
- Department of General Surgery, Oxford University Hospitals, Oxford, UK
- Nuffield Department of Surgery, University of Oxford, Oxford, UK
4
Mackay K, Bernstein D, Glocker B, Kamnitsas K, Taylor A. A Review of the Metrics Used to Assess Auto-Contouring Systems in Radiotherapy. Clin Oncol (R Coll Radiol) 2023;35:354-369. PMID: 36803407. DOI: 10.1016/j.clon.2023.01.016.
Abstract
Auto-contouring could revolutionise future planning of radiotherapy treatment, but the lack of consensus on how to assess and validate auto-contouring systems currently limits clinical use. This review formally quantifies the assessment metrics used in studies published during one calendar year and assesses the need for standardised practice. A PubMed literature search was undertaken for papers evaluating radiotherapy auto-contouring published during 2021. Papers were assessed for the types of metric and the methodology used to generate ground-truth comparators. Our PubMed search identified 212 studies, of which 117 met the criteria for clinical review. Geometric assessment metrics were used in 116 of 117 studies (99.1%), including the Dice similarity coefficient, used in 113 (96.6%). Clinically relevant metrics, such as qualitative, dosimetric, and time-saving metrics, were used less frequently, in 22 (18.8%), 27 (23.1%), and 18 (15.4%) of 117 studies, respectively. There was heterogeneity within each category of metric: over 90 different names for geometric measures were used; methods for qualitative assessment differed in all but two papers; and the methods used to generate radiotherapy plans for dosimetric assessment varied. Editing time was considered in only 11 (9.4%) papers. A single manual contour was used as the ground-truth comparator in 65 (55.6%) studies, and only 31 (26.5%) studies compared auto-contours to usual inter- and/or intra-observer variation. In conclusion, significant variation exists in how research papers currently assess the accuracy of automatically generated contours. Geometric measures are the most popular, but their clinical utility is unknown, and the methods used for clinical assessment are heterogeneous. Considering the different stages of system implementation may provide a framework for choosing the most appropriate metrics. This analysis supports the need for a consensus on the clinical implementation of auto-contouring.
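Beyond overlap measures such as the Dice similarity coefficient, the review above counts surface-based geometric metrics among the most common. As a minimal sketch (brute force, on point-sampled 2D contours rather than full 3D surfaces), the symmetric average surface distance can be computed as:

```python
import math

def average_surface_distance(surf_a, surf_b):
    """Symmetric average surface distance between two point-sampled surfaces.

    surf_a and surf_b are lists of coordinate tuples. For each point on one
    surface, the distance to the nearest point on the other is taken; the two
    directed averages are then averaged. Brute force, O(n*m).
    """
    def directed(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return (directed(surf_a, surf_b) + directed(surf_b, surf_a)) / 2

# two parallel "contours" one unit apart: every nearest-point distance is 1
a = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
b = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
print(average_surface_distance(a, b))  # → 1.0
```

Production implementations work on voxelised surfaces with spatial indexing, but the metric's definition is exactly this nearest-neighbour averaging.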
Affiliation(s)
- K Mackay
- The Institute of Cancer Research, London, UK; The Royal Marsden Hospital, London, UK
- D Bernstein
- The Institute of Cancer Research, London, UK; The Royal Marsden Hospital, London, UK
- B Glocker
- Department of Computing, Imperial College London, South Kensington Campus, London, UK
- K Kamnitsas
- Department of Computing, Imperial College London, South Kensington Campus, London, UK; Department of Engineering Science, University of Oxford, Oxford, UK
- A Taylor
- The Institute of Cancer Research, London, UK; The Royal Marsden Hospital, London, UK
5
Tu JX, Lin XT, Ye HQ, Yang SL, Deng LF, Zhu RL, Wu L, Zhang XQ. Global research trends of artificial intelligence applied in esophageal carcinoma: A bibliometric analysis (2000-2022) via CiteSpace and VOSviewer. Front Oncol 2022;12:972357. PMID: 36091151. PMCID: PMC9453500. DOI: 10.3389/fonc.2022.972357.
Abstract
Objective: Using visual bibliometric analysis, the application and development of artificial intelligence (AI) in clinical esophageal cancer are summarized, and the research progress, hotspots, and emerging trends are elucidated.
Methods: On April 7, 2022, articles and reviews regarding the application of AI in esophageal cancer published between 2000 and 2022 were retrieved from the Web of Science Core Collection. VOSviewer (version 1.6.18), CiteSpace (version 5.8.R3), Microsoft Excel 2019, R 4.2, an online bibliometric platform (http://bibliometric.com/), and an online browser plugin (https://www.altmetric.com/) were used to conduct co-authorship, co-citation, and co-occurrence analyses of countries, institutions, authors, references, and keywords in this field.
Results: A total of 918 papers were included, with 23,490 citations; 5,979 authors, 39,962 co-cited authors, and 42,992 co-cited papers were identified. Most publications were from China (317). In terms of H-index (45) and citations (9,925), the United States topped the list. The New England Journal of Medicine (Medicine, General & Internal; IF = 91.25) published the most studies on this topic, and the University of Amsterdam had the largest number of publications among all institutions. The past 22 years of research can be broadly divided into two periods: from 2000 to 2016, research focused on the classification, identification, and comparison of esophageal cancer; more recently (2017-2022), AI has been applied to endoscopy, diagnosis, and precision therapy, which have become the frontiers of the field. Clinical measures for esophageal cancer based on big-data analysis and precision medicine are expected to become future research hotspots.
Conclusions: An increasing number of scholars are devoted to AI-related esophageal cancer research, and the field has entered a new stage. In the future, cooperation between countries and institutions should be further strengthened. Improving the diagnostic accuracy of esophageal imaging and big data-based treatment and prognosis prediction through deep learning will remain the focus of research. The application of AI in esophageal cancer still faces many challenges before it can be widely utilized.
Affiliation(s)
- Jia-xin Tu
- School of Public Health, Nanchang University, Nanchang, China
- Jiangxi Provincial Key Laboratory of Preventive Medicine, Nanchang University, Nanchang, China
- Xue-ting Lin
- School of Public Health, Nanchang University, Nanchang, China
- Jiangxi Provincial Key Laboratory of Preventive Medicine, Nanchang University, Nanchang, China
- Hui-qing Ye
- School of Public Health, Nanchang University, Nanchang, China
- Shan-lan Yang
- School of Public Health, Nanchang University, Nanchang, China
- Jiangxi Provincial Key Laboratory of Preventive Medicine, Nanchang University, Nanchang, China
- Li-fang Deng
- School of Public Health, Nanchang University, Nanchang, China
- Jiangxi Provincial Key Laboratory of Preventive Medicine, Nanchang University, Nanchang, China
- Ruo-ling Zhu
- School of Public Health, Nanchang University, Nanchang, China
- Jiangxi Provincial Key Laboratory of Preventive Medicine, Nanchang University, Nanchang, China
- Lei Wu
- School of Public Health, Nanchang University, Nanchang, China
- Jiangxi Provincial Key Laboratory of Preventive Medicine, Nanchang University, Nanchang, China
- Correspondence: Lei Wu; Xiao-qiang Zhang
- Xiao-qiang Zhang
- Department of Thoracic Surgery, The Second Affiliated Hospital of Nanchang University, Nanchang, China
- Correspondence: Lei Wu; Xiao-qiang Zhang
6
Yu X, Tang S, Cheang CF, Yu HH, Choi IC. Multi-Task Model for Esophageal Lesion Analysis Using Endoscopic Images: Classification with Image Retrieval and Segmentation with Attention. Sensors 2021;22:s22010283. PMID: 35009825. PMCID: PMC8749873. DOI: 10.3390/s22010283.
Abstract
Automatic analysis of endoscopic images to help endoscopists accurately identify the types and locations of esophageal lesions remains a challenge. In this paper, we propose a novel multi-task deep learning model for automatic diagnosis that is not intended simply to replace endoscopists in decision making: endoscopists are expected to correct false predictions of the diagnosis system when additional supporting information is provided. To help endoscopists improve diagnostic accuracy in identifying lesion types, an image retrieval module is added to the classification task to provide an additional confidence level for the predicted types of esophageal lesions. In addition, a mutual attention module is added to the segmentation task to improve its performance in localizing esophageal lesions. The proposed model is evaluated and compared with other deep learning models on a dataset of 1,003 endoscopic images, comprising 290 esophageal cancer, 473 esophagitis, and 240 normal images. The experimental results show promising performance, with a classification accuracy of 96.76% and a segmentation Dice coefficient of 82.47%. Consequently, the proposed multi-task deep learning model can be an effective tool to help endoscopists judge esophageal lesions.
Affiliation(s)
- Xiaoyuan Yu
- Faculty of Information Technology, Macau University of Science and Technology, Taipa, Macau
- Suigu Tang
- Faculty of Information Technology, Macau University of Science and Technology, Taipa, Macau
- Chak Fong Cheang
- Faculty of Information Technology, Macau University of Science and Technology, Taipa, Macau
- Correspondence: (C.F.C.); (H.H.Y.)
- Hon Ho Yu
- Kiang Wu Hospital, Santo António, Macau
- Correspondence: (C.F.C.); (H.H.Y.)