1
Zou J, Shen YK, Wu SN, Wei H, Li QJ, Xu SH, Ling Q, Kang M, Liu ZL, Huang H, Chen X, Wang YX, Liao XL, Tan G, Shao Y. Prediction Model of Ocular Metastases in Gastric Adenocarcinoma: Machine Learning-Based Development and Interpretation Study. Technol Cancer Res Treat 2024; 23:15330338231219352. [PMID: 38233736] [PMCID: PMC10865948] [DOI: 10.1177/15330338231219352] [Received: 11/30/2022] [Revised: 10/10/2023] [Accepted: 11/08/2023]
Abstract
Background: Although ocular metastasis (OM) from gastric adenocarcinoma (GA) is rare, its occurrence indicates more severe disease. We aimed to use machine learning (ML) to analyze the risk factors for GA-related OM and to predict its risk. Methods: In this retrospective cohort study, clinical data from 3532 GA patients were collected and randomly split into training and validation sets at a 7:3 ratio. Patients with and without OM were assigned to the OM and non-OM (NOM) groups, respectively. Univariate and multivariate logistic regression analyses and least absolute shrinkage and selection operator (LASSO) regression were conducted. Variables identified through feature-importance ranking were further refined by forward sequential feature selection based on the random forest (RF) algorithm before being incorporated into the ML model. Six ML algorithms were applied to construct the predictive model, and the area under the receiver operating characteristic (ROC) curve indicated each model's predictive ability. We also built a web-based risk calculator on the best-performing model, and used Shapley additive explanations (SHAP) to identify risk factors and confirm the interpretability of the black-box model. All patient details were de-identified. Results: The ML model, consisting of 13 variables, achieved its best predictive performance with the gradient boosting machine (GBM) model, reaching an area under the curve (AUC) of 0.997 in the test set. Using the SHAP method, we identified the crucial factors for OM in GA patients: LDL, CA724, CEA, AFP, CA125, Hb, CA153, and Ca2+. We additionally validated the model's reliability through analysis of two patient cases and developed a functional online prediction calculator based on the GBM model. Conclusion: We used ML to establish a risk prediction model for GA-related OM and showed that GBM performed best among the six ML models.
The model may help identify patients with GA-related OM so that early and timely treatment can be provided.
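The pipeline the abstract describes (feature screening, then a gradient boosting classifier evaluated by ROC AUC on a held-out split) can be sketched as follows. This is a minimal illustration on synthetic data using scikit-learn, not the authors' code; the cohort, features, and hyperparameters are stand-ins.

```python
# Minimal sketch of the described pipeline: LASSO-style feature
# screening followed by a gradient boosting machine (GBM), with
# ROC AUC measured on a 7:3 train/test split. Synthetic data only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the clinical cohort (imbalanced outcome,
# as ocular metastasis is rare).
X, y = make_classification(n_samples=1000, n_features=20, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)  # 7:3 split as in the study

# L1-penalised logistic regression as the LASSO-style screen:
# features with nonzero coefficients are retained.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(X_train, y_train)
keep = np.flatnonzero(lasso.coef_.ravel() != 0)

# Gradient boosting machine trained on the selected features.
gbm = GradientBoostingClassifier(random_state=0)
gbm.fit(X_train[:, keep], y_train)
auc = roc_auc_score(y_test, gbm.predict_proba(X_test[:, keep])[:, 1])
print(f"test AUC = {auc:.3f}")
```

In practice the study also applied SHAP to the fitted GBM to rank feature contributions; the `shap` package's `TreeExplainer` is the usual tool for that step.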
Affiliation(s)
- Jie Zou
- Department of Ophthalmology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Jiangxi Branch of National Clinical Research Center for Ocular Disease, Nanchang, Jiangxi, People's Republic of China
- Yan-Kun Shen
- Department of Ophthalmology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Jiangxi Branch of National Clinical Research Center for Ocular Disease, Nanchang, Jiangxi, People's Republic of China
- Shi-Nan Wu
- Department of Ophthalmology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Jiangxi Branch of National Clinical Research Center for Ocular Disease, Nanchang, Jiangxi, People's Republic of China
- Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Eye Institute of Xiamen University, School of Medicine, Xiamen University, Xiamen, Fujian, People's Republic of China
- Hong Wei
- Department of Ophthalmology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Jiangxi Branch of National Clinical Research Center for Ocular Disease, Nanchang, Jiangxi, People's Republic of China
- Qing-Jian Li
- Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Eye Institute of Xiamen University, School of Medicine, Xiamen University, Xiamen, Fujian, People's Republic of China
- San Hua Xu
- Department of Ophthalmology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Jiangxi Branch of National Clinical Research Center for Ocular Disease, Nanchang, Jiangxi, People's Republic of China
- Qian Ling
- Department of Ophthalmology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Jiangxi Branch of National Clinical Research Center for Ocular Disease, Nanchang, Jiangxi, People's Republic of China
- Min Kang
- Department of Ophthalmology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Jiangxi Branch of National Clinical Research Center for Ocular Disease, Nanchang, Jiangxi, People's Republic of China
- Zhao-Lin Liu
- Department of Ophthalmology, The First Affiliated Hospital of University of South China, Hunan Branch of National Clinical Research Center for Ocular Disease, Hengyang, Hunan Province, People's Republic of China
- Hui Huang
- Department of Ophthalmology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Jiangxi Branch of National Clinical Research Center for Ocular Disease, Nanchang, Jiangxi, People's Republic of China
- Xu Chen
- Department of Ophthalmology and Visual Sciences, Maastricht University, Maastricht, Limburg Province, Netherlands
- Yi-Xin Wang
- School of Optometry and Vision Sciences, Cardiff University, Cardiff, UK
- Xu-Lin Liao
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Shatin, Hong Kong SAR, People's Republic of China
- Gang Tan
- Department of Ophthalmology, The First Affiliated Hospital of University of South China, Hunan Branch of National Clinical Research Center for Ocular Disease, Hengyang, Hunan Province, People's Republic of China
- Yi Shao
- Department of Ophthalmology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Jiangxi Branch of National Clinical Research Center for Ocular Disease, Nanchang, Jiangxi, People's Republic of China
- Current affiliation: Department of Ophthalmology, Eye & ENT Hospital of Fudan University, Shanghai, China
2
Zhao H, Zheng C, Zhang H, Rao M, Li Y, Fang D, Huang J, Zhang W, Yuan G. Diagnosis of thyroid disease using deep convolutional neural network models applied to thyroid scintigraphy images: a multicenter study. Front Endocrinol (Lausanne) 2023; 14:1224191. [PMID: 37635985] [PMCID: PMC10453808] [DOI: 10.3389/fendo.2023.1224191] [Received: 05/27/2023] [Accepted: 07/24/2023]
Abstract
Objectives: The aim of this study was to improve the diagnostic performance of nuclear medicine physicians using a deep convolutional neural network (DCNN) model, validated on two multicenter datasets of clinical single-photon emission computed tomography (SPECT) thyroid images. Methods: In this multicenter retrospective study, 3194 SPECT thyroid images were collected for model training (n=2067), internal validation (n=514) and external validation (n=613). First, four pretrained DCNN models (AlexNet, ShuffleNetV2, MobileNetV3 and ResNet-34) were tested for multiclass classification of thyroid disease types (i.e., Graves' disease, subacute thyroiditis, thyroid tumor and normal thyroid). The best-performing model was then subjected to fivefold cross-validation to further assess its performance, and its diagnostic performance was compared with that of junior and senior nuclear medicine physicians. Finally, class-specific attentional regions were visualized as attention heatmaps using gradient-weighted class activation mapping (Grad-CAM). Results: Each of the four pretrained networks attained an overall accuracy of more than 0.85 for the classification of SPECT thyroid images. The improved ResNet-34 model performed best, with an accuracy of 0.944. On the internal validation set, the ResNet-34 model showed higher accuracy than the senior nuclear medicine physician (p < 0.001), an improvement of nearly 10%. On the external dataset, the model achieved an overall accuracy of 0.931, significantly higher than that of the senior physician (0.931 vs. 0.868, p < 0.001). Conclusion: The DCNN-based model performed well in diagnosing thyroid scintigraphy images.
The DCNN model showed higher sensitivity and specificity than nuclear medicine physicians in identifying Graves' disease, subacute thyroiditis, and thyroid tumors, illustrating the feasibility of deep learning models for improving diagnostic efficiency and assisting clinicians.