1
Xu Y, Wang J, Li C, Su Y, Peng H, Guo L, Lin S, Li J, Wu D. Advancing precise diagnosis of nasopharyngeal carcinoma through endoscopy-based radiomics analysis. iScience 2024; 27:110590. [PMID: 39252978] [PMCID: PMC11381885] [DOI: 10.1016/j.isci.2024.110590]
Abstract
Nasopharyngeal carcinoma (NPC) has high metastatic potential and is difficult to detect early. This study aimed to develop a deep learning model for NPC diagnosis using optical imagery. From April 2008 to May 2021, we analyzed 12,087 nasopharyngeal endoscopic images and 309 videos from 1,108 patients. A pretrained model was fine-tuned with stochastic gradient descent on its final layers, with data augmentation applied during training. Videos were converted to image frames for malignancy scoring, and performance metrics such as AUC, accuracy, and sensitivity were calculated from the malignancy score. The deep learning model demonstrated high performance in identifying NPC, with AUC values of 0.981 (95% confidence interval [CI] 0.965-0.996) for the Fujian Cancer Hospital dataset and 0.937 (95% CI 0.905-0.970) for the Jiangxi Cancer Hospital dataset. The proposed model diagnoses NPC with high accuracy, sensitivity, and specificity across multiple datasets and shows promise for early NPC detection, especially of latent lesions.
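The evaluation described above (per-frame malignancy scores aggregated to a video-level score, then AUC computed over labeled cases) can be sketched as follows. All names and the max-pooling aggregation rule are illustrative assumptions, not the authors' published code.

```python
# Hypothetical sketch: aggregate per-frame malignancy probabilities into a
# video-level score, then compute AUC over labeled videos.
# Aggregation by max pooling is an assumption for illustration.

def video_score(frame_scores):
    """Video-level malignancy score from per-frame probabilities (assumed: max)."""
    return max(frame_scores)

def auc(scores, labels):
    """AUC via the Mann-Whitney U statistic (ties counted as 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy data: 4 videos (lists of per-frame scores) with malignant/benign labels.
videos = [[0.1, 0.2, 0.9], [0.2, 0.3, 0.1], [0.8, 0.7, 0.95], [0.05, 0.1, 0.2]]
labels = [1, 0, 1, 0]
scores = [video_score(v) for v in videos]
print(auc(scores, labels))  # 1.0 for this toy example
```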
Affiliation(s)
- Yun Xu
- Department of Radiation Oncology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, China
- Fujian Key Laboratory of Translational Cancer Medicine, Fuzhou, Fujian, China
- Jiesong Wang
- Department of Lymphoma & Head and Neck Oncology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, China
- Chenxin Li
- Department of Electrical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Yong Su
- Department of Radiation Oncology, Jiangxi Cancer Hospital, Jiangxi, China
- National Health Commission (NHC) Key Laboratory of Personalized Diagnosis and Treatment of Nasopharyngeal Carcinoma (Jiangxi Cancer Hospital of Nanchang University), Nanchang, China
- Hewei Peng
- Department of Epidemiology and Health Statistics, Fujian Provincial Key Laboratory of Environment Factors and Cancer, School of Public Health, Fujian Medical University, Fuzhou, China
- Lanyan Guo
- School of Medical Imaging, Fujian Medical University, Fuzhou, China
- Shaojun Lin
- Department of Radiation Oncology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, China
- Fujian Key Laboratory of Translational Cancer Medicine, Fuzhou, Fujian, China
- Jingao Li
- Department of Radiation Oncology, Jiangxi Cancer Hospital, Jiangxi, China
- National Health Commission (NHC) Key Laboratory of Personalized Diagnosis and Treatment of Nasopharyngeal Carcinoma (Jiangxi Cancer Hospital of Nanchang University), Nanchang, China
- Dan Wu
- Tianjin Key Laboratory of Human Development and Reproductive Regulation, Tianjin Central Hospital of Gynecology Obstetrics and Nankai University Affiliated Hospital of Obstetrics and Gynecology, Tianjin, China
- Tianjin Cancer Institute, National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin Medical University Cancer Institute and Hospital, Tianjin Medical University, Tianjin, China

2
Huang L, Zhang N, Yi Y, Zhou W, Zhou B, Dai J, Wang J. SAMCF: Adaptive global style alignment and multi-color spaces fusion for joint optic cup and disc segmentation. Comput Biol Med 2024; 178:108639. [PMID: 38878394] [DOI: 10.1016/j.compbiomed.2024.108639]
Abstract
The optic cup (OC) and optic disc (OD) are two critical structures in retinal fundus images, and their relative positions and sizes are essential for effectively diagnosing eye diseases. With the success of deep learning in computer vision, deep learning-based segmentation models have been widely used for joint optic cup and disc segmentation. However, three prominent issues impact segmentation performance. First, significant differences among datasets collected from various institutions, protocols, and devices lead to performance degradation. Second, images with only RGB information struggle to counteract the interference caused by brightness variations, limiting color representation capability. Finally, existing methods typically ignore edge perception, making it challenging to obtain clear and smooth edge segmentation results. To address these drawbacks, we propose a novel framework based on Style Alignment and Multi-Color Fusion (SAMCF) for joint OC and OD segmentation. First, we introduce a domain generalization method that generates uniformly styled images without damaging image content, mitigating domain shift. Next, based on multiple color spaces, we propose a feature extraction and fusion network that handles brightness variation interference and improves color representation capability. Lastly, an edge-aware loss is designed to generate fine edge segmentation results. Experiments on three public datasets, DGS, RIM, and REFUGE, demonstrate that SAMCF achieves superior performance to existing state-of-the-art methods and exhibits remarkable generalization ability across multiple retinal fundus image datasets.
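The multi-color-space input idea can be illustrated with Python's standard `colorsys` module. Note this shows only the input side: per the abstract, the paper's network fuses learned features from multiple color spaces, whereas this sketch simply stacks raw RGB, HSV, and YIQ channels for one pixel.

```python
import colorsys

# Illustrative sketch (an assumption, not the SAMCF implementation):
# represent each pixel in three color spaces so a downstream network can
# draw on cues that are more robust to brightness variation than RGB alone.

def multi_space_channels(rgb_pixel):
    """Stack RGB, HSV, and YIQ representations of one pixel (9 channels)."""
    r, g, b = rgb_pixel
    hsv = colorsys.rgb_to_hsv(r, g, b)
    yiq = colorsys.rgb_to_yiq(r, g, b)
    return list(rgb_pixel) + list(hsv) + list(yiq)

pixel = (0.8, 0.4, 0.2)          # normalized RGB in [0, 1]
channels = multi_space_channels(pixel)
print(len(channels))  # 9
```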
Affiliation(s)
- Longjun Huang
- School of Software, Nanchang Key Laboratory for Blindness and Visual Impairment Prevention Technology and Equipment, Jiangxi Normal University, Nanchang, 330022, China
- Ningyi Zhang
- School of Software, Nanchang Key Laboratory for Blindness and Visual Impairment Prevention Technology and Equipment, Jiangxi Normal University, Nanchang, 330022, China
- Yugen Yi
- School of Software, Nanchang Key Laboratory for Blindness and Visual Impairment Prevention Technology and Equipment, Jiangxi Normal University, Nanchang, 330022, China
- Wei Zhou
- College of Computer Science, Shenyang Aerospace University, Shenyang, 110136, China
- Bin Zhou
- School of Software, Nanchang Key Laboratory for Blindness and Visual Impairment Prevention Technology and Equipment, Jiangxi Normal University, Nanchang, 330022, China
- Jiangyan Dai
- School of Computer Engineering, Weifang University, 261061, China
- Jianzhong Wang
- College of Information Science and Technology, Northeast Normal University, Changchun, 130117, China

3
Ueda Y, Ogawa D, Ishida T. Patient Re-Identification Based on Deep Metric Learning in Trunk Computed Tomography Images Acquired from Devices from Different Vendors. J Imaging Inform Med 2024; 37:1124-1136. [PMID: 38366292] [PMCID: PMC11169436] [DOI: 10.1007/s10278-024-01017-w]
Abstract
During radiologic interpretation, radiologists read patient identifiers from the metadata of medical images to recognize the patient being examined. However, it is challenging for radiologists to spot "incorrect" metadata and patient identification errors. We propose a method that uses a patient re-identification technique to link correct metadata to a set of trunk computed tomography images with lost or wrongly assigned metadata. The method is based on feature vector matching with a deep feature extractor adapted to the cross-vendor domain of a scout computed tomography image dataset. To identify "incorrect" metadata, we calculated the highest similarity score between a follow-up image and a stored baseline image linked to the correct metadata. Re-identification performance is evaluated by testing whether the image with the highest similarity score belongs to the same patient, i.e., whether the metadata attached to the image are correct. Similarity scores between follow-up and baseline images of the same "correct" patients were generally greater than those of "incorrect" patients. The proposed feature extractor was robust enough to extract individually distinguishable features without additional training, even for unknown scout computed tomography images. Furthermore, the proposed augmentation technique further improved re-identification performance on the cross-vendor subset by incorporating changes in width magnification caused by changes in patient table height between examinations. We believe metadata checking with the proposed method would help detect metadata assigned an "incorrect" patient identifier through unavoidable mistakes such as human error.
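The matching step described above can be sketched as follows: a follow-up scan's feature vector is compared against stored baseline vectors, and the metadata is flagged as suspect when the best match is not the claimed patient. The deep feature extractor is stubbed out here, and the cosine metric and all names are illustrative assumptions.

```python
import math

# Illustrative sketch (not the authors' code): similarity-based patient
# re-identification over precomputed feature vectors.

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def best_match(followup_vec, baselines):
    """Return the patient ID whose stored baseline feature is most similar."""
    return max(baselines, key=lambda pid: cosine(followup_vec, baselines[pid]))

baselines = {"patient_A": [1.0, 0.1, 0.0], "patient_B": [0.0, 1.0, 0.2]}
followup = [0.9, 0.2, 0.05]          # metadata claims this is patient_B
claimed = "patient_B"
matched = best_match(followup, baselines)
print(matched, matched == claimed)   # patient_A False -> metadata is suspect
```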
Affiliation(s)
- Yasuyuki Ueda
- Division of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita, Osaka, 565-0871, Japan.
- Daiki Ogawa
- School of Allied Health Sciences, Faculty of Medicine, Osaka University, 1-7 Yamadaoka, Suita, Osaka, 565-0871, Japan
- Takayuki Ishida
- Division of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita, Osaka, 565-0871, Japan

4
Xu H, Li C, Zhang L, Ding Z, Lu T, Hu H. Immunotherapy efficacy prediction through a feature re-calibrated 2.5D neural network. Comput Methods Programs Biomed 2024; 249:108135. [PMID: 38569256] [DOI: 10.1016/j.cmpb.2024.108135]
Abstract
BACKGROUND AND OBJECTIVE Lung cancer remains a leading cause of cancer-related mortality worldwide, with immunotherapy emerging as a promising therapeutic strategy for advanced non-small cell lung cancer (NSCLC). Despite its potential, not all patients benefit from immunotherapy, and the biomarkers currently used for treatment selection have inherent limitations. Imaging-based biomarkers for predicting the efficacy of lung cancer treatments therefore offer a promising avenue for improving therapeutic outcomes. METHODS This study presents an automatic system for predicting immunotherapy efficacy in patients with lung cancer. Our model employs a 2.5D neural network that combines 2D intra-slice feature extraction with 3D inter-slice feature aggregation. We further introduce a lesion-focused prior to guide the re-calibration of intra-slice features and an attention-based re-calibration of inter-slice features. Finally, we design an accumulated back-propagation strategy to optimize network parameters in a memory-efficient fashion. RESULTS The proposed method achieves strong performance on an in-house clinical dataset, surpassing existing state-of-the-art models, while also offering faster average inference per subject. Comprehensive ablation experiments and discussions further validate the effectiveness of the model and its components. CONCLUSION By accurately predicting immunotherapy efficacy, the proposed model has the potential to enhance physicians' diagnostic performance and offers significant clinical application value. These findings clarify the model's effectiveness and motivate future work on immunotherapy efficacy prediction.
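The 2.5D idea described above (per-slice 2D features combined across slices with attention) can be sketched in miniature. The mean-activation scoring rule and all names are illustrative assumptions; the paper's attention mechanism is learned, not hand-crafted.

```python
import math

# Hypothetical sketch of attention-style inter-slice aggregation:
# each slice contributes a 2D feature vector, and slices are weighted
# by a softmax over per-slice scores to form one volume-level feature.

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def aggregate_slices(slice_features):
    """Weight each slice by its mean activation (an assumed scoring rule)."""
    scores = [sum(f) / len(f) for f in slice_features]
    weights = softmax(scores)
    dim = len(slice_features[0])
    return [sum(w * f[d] for w, f in zip(weights, slice_features))
            for d in range(dim)]

slices = [[0.1, 0.2], [0.9, 0.8], [0.3, 0.1]]  # 3 slices, 2-dim features each
volume_feature = aggregate_slices(slices)
print(len(volume_feature))  # 2
```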
Affiliation(s)
- Haipeng Xu
- Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fujian 350014, China.
- Chenxin Li
- Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong 999077, SAR, China
- Longfeng Zhang
- Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fujian 350014, China
- Zhiyuan Ding
- School of Informatics, Xiamen University, Fujian 350014, China
- Tao Lu
- Department of Radiology, Fujian Medical University Cancer Hospital and Fujian Cancer Hospital, Fujian 350014, China
- Huihua Hu
- Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fujian 350014, China

5
Qu J, Xiao X, Wei X, Qian X. A causality-inspired generalized model for automated pancreatic cancer diagnosis. Med Image Anal 2024; 94:103154. [PMID: 38552527] [DOI: 10.1016/j.media.2024.103154]
Abstract
Pancreatic cancer (PC) is a severely malignant cancer with high mortality. Because PC has no obvious symptoms, most patients are diagnosed only at advanced disease stages. Recently, artificial intelligence (AI) approaches have demonstrated promising prospects for early diagnosis of pancreatic cancer. However, certain non-causal factors (such as intensity and texture appearance variations, also called confounders) tend to induce spurious correlations with PC diagnosis, undermining the generalization performance and clinical applicability of AI-based PC diagnosis approaches. We therefore propose a causal-intervention-based automated method for pancreatic cancer diagnosis with contrast-enhanced computed tomography (CT) images, in which a confounding-effects reduction scheme alleviates spurious correlations to achieve unbiased learning and thereby improve generalization. Specifically, a continuous image generation strategy was developed to simulate the wide variation in intensity differences caused by imaging heterogeneities, with Monte Carlo sampling added to further enhance the continuity of the simulated images. Then, to enhance pancreatic texture variability, a texture diversification method was introduced in conjunction with gradient-based data augmentation. Finally, a causal intervention strategy was proposed to alleviate adverse confounding effects by decoupling the causal and non-causal factors and combining them randomly. Extensive experiments showed remarkable diagnostic performance on a cross-validation dataset, and promising generalization with an average accuracy of 0.87 was attained on three independent test sets totaling 782 subjects. The proposed method therefore shows high clinical feasibility and applicability for pancreatic cancer diagnosis.
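The final step above, decoupling causal and non-causal factors and recombining them randomly, can be sketched on toy data. Everything here is an assumption for illustration: the "non-causal style" is reduced to a per-image mean intensity, standing in for the learned factors decoupled in the paper.

```python
import random

# Toy sketch of random causal/non-causal recombination (not the authors'
# method): split each image into a style factor and a content residual,
# then pair each content with a randomly drawn style from the batch so the
# classifier cannot rely on the spurious style cue.

def decompose(image):
    """Split into a style factor (mean intensity, assumed) and content."""
    style = sum(image) / len(image)
    content = [v - style for v in image]
    return content, style

def recombine(content, style):
    return [v + style for v in content]

random.seed(0)
images = [[0.2, 0.4, 0.6], [0.7, 0.8, 0.9]]
parts = [decompose(img) for img in images]
styles = [s for _, s in parts]
# each content gets a random style -> augmented, style-shuffled batch
augmented = [recombine(c, random.choice(styles)) for c, _ in parts]
print(augmented)
```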
Affiliation(s)
- Jiaqi Qu
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, PR China
- Xiang Xiao
- Department of Medical Imaging, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, PR China
- Xunbin Wei
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, PR China; Peking University Cancer Hospital & Institute, Beijing, 100142, PR China; Biomedical Engineering Department, Peking University, Beijing, 100081, PR China; Institute of Medical Technology, Peking University Health Science Center, Beijing, 100191, PR China; International Cancer Institute, Peking University, Beijing 100191, PR China
- Xiaohua Qian
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, PR China

6
Gu R, Wang G, Lu J, Zhang J, Lei W, Chen Y, Liao W, Zhang S, Li K, Metaxas DN, Zhang S. CDDSA: Contrastive domain disentanglement and style augmentation for generalizable medical image segmentation. Med Image Anal 2023; 89:102904. [PMID: 37506556] [DOI: 10.1016/j.media.2023.102904]
Abstract
Generalization to previously unseen images with potential domain shifts is essential for clinically applicable medical image segmentation. Disentangling domain-specific and domain-invariant features is key for Domain Generalization (DG), but existing DG methods struggle to achieve effective disentanglement. To address this problem, we propose an efficient framework called Contrastive Domain Disentanglement and Style Augmentation (CDDSA) for generalizable medical image segmentation. First, a disentanglement network decomposes the image into a domain-invariant anatomical representation and a domain-specific style code; the former is passed to segmentation unaffected by domain shift, and the disentanglement is regularized by a decoder that combines the anatomical representation and style code to reconstruct the original image. Second, to achieve better disentanglement, a contrastive loss encourages style codes from the same domain to be compact and those from different domains to be divergent. Finally, to further improve generalizability, we propose a style augmentation strategy that synthesizes images with various unseen styles in real time while preserving anatomical information. Comprehensive experiments on a public multi-site fundus image dataset and an in-house multi-site Nasopharyngeal Carcinoma Magnetic Resonance Image (NPC-MRI) dataset show that CDDSA achieves remarkable generalizability across domains and outperforms several state-of-the-art methods in generalizable segmentation. Code is available at https://github.com/HiLab-git/DAG4MIA.
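The contrastive objective on style codes described above (compact within a domain, divergent across domains) can be sketched as a margin-based pairwise loss. The margin form and all names are assumptions for illustration; the paper's exact loss may differ.

```python
import math

# Minimal sketch of a contrastive objective over style codes (an assumed
# margin formulation, not necessarily CDDSA's): same-domain pairs are
# penalized for being far apart, cross-domain pairs for being too close.

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def style_contrastive_loss(codes, domains, margin=1.0):
    loss, pairs = 0.0, 0
    for i in range(len(codes)):
        for j in range(i + 1, len(codes)):
            d = dist(codes[i], codes[j])
            if domains[i] == domains[j]:
                loss += d ** 2                      # compact within a domain
            else:
                loss += max(0.0, margin - d) ** 2   # divergent across domains
            pairs += 1
    return loss / pairs

codes = [[0.1, 0.1], [0.12, 0.09], [0.9, 0.8]]
domains = ["siteA", "siteA", "siteB"]
print(style_contrastive_loss(codes, domains))
```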
Affiliation(s)
- Ran Gu
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Guotai Wang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China; Shanghai AI Lab, Shanghai, China
- Jiangshan Lu
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Jingyang Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China; School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Wenhui Lei
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China; Shanghai AI Lab, Shanghai, China
- Yinan Chen
- SenseTime Research, Shanghai, China; West China Hospital-SenseTime Joint Lab, West China Biomedical Big Data Center, Sichuan University, Chengdu, China
- Wenjun Liao
- Department of Radiation Oncology, Sichuan Cancer Hospital and Institute, University of Electronic Science and Technology of China, Chengdu, China
- Shichuan Zhang
- Department of Radiation Oncology, Sichuan Cancer Hospital and Institute, University of Electronic Science and Technology of China, Chengdu, China
- Kang Li
- West China Hospital-SenseTime Joint Lab, West China Biomedical Big Data Center, Sichuan University, Chengdu, China
- Dimitris N Metaxas
- Department of Computer Science, Rutgers University, Piscataway NJ 08854, USA
- Shaoting Zhang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China; SenseTime Research, Shanghai, China; Shanghai AI Lab, Shanghai, China

7
Fogarollo S, Bale R, Harders M. Towards liver segmentation in the wild via contrastive distillation. Int J Comput Assist Radiol Surg 2023; 18:1143-1149. [PMID: 37145251] [PMCID: PMC10329587] [DOI: 10.1007/s11548-023-02912-3]
Abstract
PURPOSE Automatic liver segmentation is a key component of computer-assisted hepatic procedures. The task is challenging due to high variability in organ appearance, numerous imaging modalities, and limited availability of labels; moreover, strong generalization performance is required in real-world scenarios. Existing supervised methods generalize poorly and cannot be applied to data not seen during training (i.e., in the wild). METHODS We propose to distill knowledge from a powerful model with a novel contrastive distillation scheme, using a pre-trained large neural network to train our smaller model. A key novelty is mapping neighboring slices close together in the latent representation while mapping distant slices far apart. We then use ground-truth labels to learn a U-Net-style upsampling path and recover the segmentation map. RESULTS The pipeline proved robust enough to perform state-of-the-art inference on unseen target domains. We carried out extensive experimental validation on six common abdominal datasets covering multiple modalities, as well as 18 patient datasets from the Innsbruck University Hospital. Sub-second inference and a data-efficient training pipeline make it possible to scale the method to real-world conditions. CONCLUSION We propose a novel contrastive distillation scheme for automatic liver segmentation. A limited set of assumptions and superior performance to state-of-the-art techniques make our method a candidate for application in real-world scenarios.
Affiliation(s)
- Stefano Fogarollo
- Department of Computer Science Interactive Graphics and Simulation Group (IGS), University of Innsbruck, Innsbruck, Austria.
- Reto Bale
- Interventional Oncology-Microinvasive Therapy (SIP), Department of Radiology, Medical University Innsbruck, Innsbruck, Austria
- Matthias Harders
- Department of Computer Science Interactive Graphics and Simulation Group (IGS), University of Innsbruck, Innsbruck, Austria

8
Sendra-Balcells C, Campello VM, Martín-Isla C, Viladés D, Descalzo ML, Guala A, Rodríguez-Palomares JF, Lekadir K. Domain generalization in deep learning for contrast-enhanced imaging. Comput Biol Med 2022; 149:106052. [DOI: 10.1016/j.compbiomed.2022.106052]