1
Jian L, Chen X, Hu P, Li H, Fang C, Wang J, Wu N, Yu X. Predicting progression-free survival in patients with epithelial ovarian cancer using an interpretable random forest model. Heliyon 2024; 10:e35344. PMID: 39166005; PMCID: PMC11334804; DOI: 10.1016/j.heliyon.2024.e35344. Received 02/18/2024; revised 07/26/2024; accepted 07/26/2024.
Abstract
Prognostic models play a crucial role in providing personalised risk assessment, guiding treatment decisions, and facilitating the counselling of patients with cancer. However, previous imaging-based artificial intelligence models of epithelial ovarian cancer have lacked interpretability. In this study, we aimed to develop an interpretable machine-learning model to predict progression-free survival in patients with epithelial ovarian cancer using clinical variables and radiomics features. A total of 102 patients with epithelial ovarian cancer who underwent contrast-enhanced computed tomography were enrolled in this retrospective study. Pre-surgery clinical data, including age, performance status, body mass index, tumour stage, venous blood cancer antigen-125 (CA-125) level, white blood cell count, neutrophil count, red blood cell count, haemoglobin level, and platelet count, were obtained from medical records. The volume of interest for each tumour was manually delineated slice by slice along the tumour boundary. A total of 2074 radiomic features were extracted from the pre- and post-contrast computed tomography images, and the optimal radiomic features were selected using least absolute shrinkage and selection operator (LASSO) logistic regression. Multivariate Cox analysis was performed to identify independent predictors of three-year progression-free survival. Radiomic and combined models were then developed with the random forest algorithm using four-fold cross-validation. Finally, the Shapley additive explanations (SHAP) algorithm was applied to interpret the predictions of the combined model. Multivariate Cox analysis identified CA-125 level (P = 0.015), tumour stage (P = 0.019), and Radscore (P < 0.001) as independent predictors of progression-free survival. The combined model based on these factors achieved an area under the curve of 0.812 (95% confidence interval: 0.802-0.822) in the training cohort and 0.772 (95% confidence interval: 0.727-0.817) in the validation cohort. The features with the greatest impact on the model output were Radscore, followed by tumour stage and CA-125. In conclusion, SHAP-based interpretation of the prognostic model enables clinicians to better understand the reasoning behind its predictions.
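The modelling pipeline described in this abstract (L1-penalised feature selection followed by a random forest evaluated with four-fold cross-validation) can be sketched as below. This is an illustrative sketch only, assuming scikit-learn, with synthetic data standing in for the real 2074-feature radiomic matrix; the SHAP interpretation step (typically done with the `shap` package's tree explainer) is omitted for brevity.

```python
# Sketch: LASSO-style feature selection + random forest with 4-fold CV.
# Synthetic data stands in for the radiomic feature matrix (all names illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SelectFromModel
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# 102 "patients", 200 stand-in radiomic features
X, y = make_classification(n_samples=102, n_features=200,
                           n_informative=10, random_state=0)

# L1-penalised logistic regression selects a sparse radiomic signature
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=1.0, random_state=0)
selector = SelectFromModel(lasso).fit(X, y)
X_sel = selector.transform(X)

# Random forest scored with four-fold cross-validation, as in the paper
rf = RandomForestClassifier(n_estimators=200, random_state=0)
auc = cross_val_score(rf, X_sel, y, cv=4, scoring="roc_auc")
print(X_sel.shape[1], round(auc.mean(), 3))
```

The selection step drops features whose L1 coefficients shrink to zero, so the forest trains only on the retained signature.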
Affiliation(s)
- Lian Jian
- Department of Radiology, The Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University/Hunan Cancer Hospital, Changsha, Hunan, China
- Xiaoyan Chen
- Department of Pathology, The Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University/Hunan Cancer Hospital, Changsha, Hunan, China
- Pingsheng Hu
- Department of Radiology, The Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University/Hunan Cancer Hospital, Changsha, Hunan, China
- Handong Li
- Department of Radiology, The Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University/Hunan Cancer Hospital, Changsha, Hunan, China
- Chao Fang
- Department of Clinical Pharmaceutical Research Institution, The Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University/Hunan Cancer Hospital, Changsha, Hunan, China
- Jing Wang
- Department of Clinical Pharmaceutical Research Institution, The Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University/Hunan Cancer Hospital, Changsha, Hunan, China
- Nayiyuan Wu
- Central Laboratory, The Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University/Hunan Cancer Hospital, Changsha, Hunan, China
- Xiaoping Yu
- Department of Radiology, The Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University/Hunan Cancer Hospital, Changsha, Hunan, China
2
Jiang H, Du Y, Lu Z, Wang B, Zhao Y, Wang R, Zhang H, Mok GSP. Radiomics incorporating deep features for predicting Parkinson's disease in 123I-Ioflupane SPECT. EJNMMI Phys 2024; 11:60. PMID: 38985382; PMCID: PMC11236833; DOI: 10.1186/s40658-024-00651-1. Received 01/16/2024; accepted 05/24/2024.
Abstract
PURPOSE 123I-Ioflupane SPECT is an effective tool for the diagnosis and progression assessment of Parkinson's disease (PD). Radiomics and deep learning (DL) can be used to track and analyze the underlying image texture and features to predict the Hoehn-Yahr stage (HYS) of PD. In this study, we aimed to predict HYS at year 0 and year 4 after first diagnosis using combined imaging, radiomic and DL-based features extracted from 123I-Ioflupane SPECT images at year 0. METHODS 161 subjects from the Parkinson's Progression Markers Initiative database underwent baseline 3T MRI and 123I-Ioflupane SPECT, with HYS assessed at years 0 and 4 after first diagnosis. Conventional imaging features (IF) and radiomic features (RaF) for striatal uptake were extracted from SPECT images using MRI-based and SPECT-based (SPECT-V and SPECT-T) segmentations, respectively. A 2D DenseNet was used to predict HYS of PD and simultaneously generate deep features (DF). The random forest algorithm was applied to develop models based on DF, RaF, IF and combined features to predict HYS (stages 0, 1 and 2) at year 0 and (stages 0, 1 and ≥2) at year 4, respectively. Predictive accuracy and receiver operating characteristic (ROC) analysis were assessed for the various models. RESULTS For diagnostic accuracy at year 0, DL (0.696) outperformed most models except DF + IF in SPECT-V (0.704), which was significantly superior on paired t-test. For year 4, the DF + RaF model with MRI-based segmentation had the highest accuracy (0.835), significantly better than the DF + IF, IF + RaF, RaF and IF models, and DL (0.820) surpassed the models in both SPECT-based methods. The area under the ROC curve (AUC) favoured the DF + RaF model with MRI-based segmentation at year 0 (0.854) and the DF + RaF model with SPECT-T segmentation at year 4 (0.869), each outperforming the corresponding DL model. There were no significant differences between SPECT-based and MRI-based segmentation methods except for the imaging-feature models. CONCLUSION Combining radiomic and deep features enhances the prediction accuracy of PD HYS compared with radiomics or DL alone. This suggests potential for further improvement in predicting PD HYS at year 0 and year 4 after first diagnosis from 123I-Ioflupane SPECT images at year 0, thereby facilitating early diagnosis and treatment for PD patients. No significant difference was observed between MRI- and SPECT-based striatum segmentations for radiomic and deep features.
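The feature-fusion idea in this study (concatenating deep features and radiomic features before a random forest) can be sketched as follows, with random arrays standing in for the real DenseNet embeddings and SPECT radiomics; sizes and names are illustrative assumptions, not the authors' actual dimensions.

```python
# Sketch: fusing deep features (DF) and radiomic features (RaF) by simple
# concatenation before a random-forest classifier of Hoehn-Yahr stage.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 161                                   # cohort size reported above
deep_feats = rng.normal(size=(n, 64))     # stand-in DenseNet embedding
radiomic_feats = rng.normal(size=(n, 30)) # stand-in radiomic features
y = rng.integers(0, 3, size=n)            # Hoehn-Yahr stage 0 / 1 / 2

combined = np.hstack([deep_feats, radiomic_feats])   # DF + RaF fusion
rf = RandomForestClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(rf, combined, y, cv=5, scoring="accuracy").mean()
print(round(acc, 3))
```

On purely random features the cross-validated accuracy hovers near chance; the point of the sketch is only the fusion-then-classify structure.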
Affiliation(s)
- Han Jiang
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Avenida da Universidade, Macau, Macau SAR, China
- PET-CT Center, Fujian Medical University Union Hospital, Fuzhou, China
- Yu Du
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Avenida da Universidade, Macau, Macau SAR, China
- Center for Cognitive and Brain Sciences, Institute of Collaborative Innovation, University of Macau, Taipa, Macau SAR, China
- Zhonglin Lu
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Avenida da Universidade, Macau, Macau SAR, China
- Center for Cognitive and Brain Sciences, Institute of Collaborative Innovation, University of Macau, Taipa, Macau SAR, China
- Bingjie Wang
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Avenida da Universidade, Macau, Macau SAR, China
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yonghua Zhao
- State Key Laboratory of Quality Research in Chinese Medicine, Institute of Chinese Medical Sciences, University of Macau, Taipa, Macau SAR, China
- Ruibing Wang
- State Key Laboratory of Quality Research in Chinese Medicine, Institute of Chinese Medical Sciences, University of Macau, Taipa, Macau SAR, China
- Hong Zhang
- Department of Nuclear Medicine and PET Center, The Second Affiliated Hospital of Zhejiang University School of Medicine, 88 Jiefang Road, Hangzhou, 310009, Zhejiang, China.
- Institute of Nuclear Medicine and Molecular Imaging of Zhejiang University, Hangzhou, China.
- Key Laboratory of Medical Molecular Imaging of Zhejiang Province, Hangzhou, China.
- College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China.
- Key Laboratory for Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, China.
- Greta S P Mok
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Avenida da Universidade, Macau, Macau SAR, China.
- Center for Cognitive and Brain Sciences, Institute of Collaborative Innovation, University of Macau, Taipa, Macau SAR, China.
3
Bacalbasa N, Petrea S, Gaspar B, Pop L, Varlas V, Hasegan A, Gorecki G, Martac C, Stoian M, Zgura A, Balescu I. The Influence of Inflammatory and Nutritional Status on the Long-Term Outcomes in Advanced Stage Ovarian Cancer. Cancers (Basel) 2024; 16:2504. PMID: 39061143; PMCID: PMC11274520; DOI: 10.3390/cancers16142504. Received 05/02/2024; revised 06/23/2024; accepted 07/06/2024.
Abstract
BACKGROUND Despite improved surgical techniques and more frequently achieved complete debulking, certain patients with advanced-stage ovarian cancer still have a very poor prognosis. The aim of the current paper is to investigate whether inflammatory and nutritional status can predict the long-term outcomes of ovarian cancer patients. METHODS A retrospective analysis was carried out of 57 cases of advanced-stage ovarian cancer who underwent surgery as first-intent therapy. In all cases, preoperative status was assessed by calculating the CRP/albumin ratio, the Glasgow score, the modified Glasgow score and the prognostic nutritional index (PNI). RESULTS Patients with higher values of the CRP/albumin ratio, Glasgow score, modified Glasgow score and PNI were more frequently associated with incomplete debulking surgery, a higher peritoneal carcinomatosis index and poorer overall survival (20 versus 9 months for the CRP/albumin ratio, p = 0.011; 42 versus 27 versus 12 months for the Glasgow score, p = 0.042; 50 versus 19 versus 12 months for the modified Glasgow score, p = 0.001; and 54 versus 21 months for the PNI, p = 0.011). CONCLUSIONS Inflammatory and nutritional status appear strongly related to long-term outcomes in advanced-stage ovarian cancer.
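The indices used in this study have standard published definitions, sketched below (the CRP/albumin ratio is a plain division and is omitted). Units follow the usual conventions: CRP in mg/L, albumin in g/L for the Glasgow scores, albumin in g/dL and lymphocytes per mm³ for the Onodera PNI; thresholds are the commonly cited ones, an assumption since the paper's exact cut-offs are not quoted here.

```python
# Hedged sketch of the inflammatory/nutritional indices named above,
# using the standard published definitions.

def glasgow_score(crp, albumin):
    """Glasgow Prognostic Score: 1 point each for CRP > 10 mg/L and albumin < 35 g/L."""
    return int(crp > 10) + int(albumin < 35)

def modified_glasgow_score(crp, albumin):
    """mGPS: hypoalbuminaemia counts only when CRP is elevated."""
    if crp > 10:
        return 2 if albumin < 35 else 1
    return 0

def prognostic_nutritional_index(albumin_g_dl, lymphocytes_per_mm3):
    """Onodera PNI = 10 x albumin (g/dL) + 0.005 x total lymphocyte count (/mm^3)."""
    return 10 * albumin_g_dl + 0.005 * lymphocytes_per_mm3

print(glasgow_score(25, 30))                    # elevated CRP + low albumin -> 2
print(modified_glasgow_score(5, 30))            # normal CRP -> 0 despite low albumin
print(prognostic_nutritional_index(4.0, 1800))  # 40 + 9 = 49.0
```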
Affiliation(s)
- Nicolae Bacalbasa
- Department of Visceral Surgery, Center of Excellence in Translational Medicine “Fundeni” Clinical Institute, 022328 Bucharest, Romania;
- Department of Surgery, “Carol Davila” University of Medicine and Pharmacy, 022328 Bucharest, Romania;
- Sorin Petrea
- Department of Surgery, “Carol Davila” University of Medicine and Pharmacy, 022328 Bucharest, Romania;
- Department of Surgery, “Ion Cantacuzino” Clinical Hospital, 022328 Bucharest, Romania
- Bogdan Gaspar
- Department of Surgery, “Carol Davila” University of Medicine and Pharmacy, 022328 Bucharest, Romania;
- Department of Visceral Surgery, “Floreasca” Clinical Emergency Hospital, 022328 Bucharest, Romania
- Lucian Pop
- Department of Obstetrics and Gynecology, “Carol Davila” University of Medicine and Pharmacy, 022328 Bucharest, Romania; (L.P.); (V.V.)
- Department of Obstetrics and Gynecology, National Institute of Mother and Child Care Alessandrescu-Rusescu, 022328 Bucharest, Romania
- Valentin Varlas
- Department of Obstetrics and Gynecology, “Carol Davila” University of Medicine and Pharmacy, 022328 Bucharest, Romania; (L.P.); (V.V.)
- Department of Obstetrics and Gynecology, “Filantropia” Clinical Hospital, 022328 Bucharest, Romania
- Adrian Hasegan
- Department of Urology, Sibiu Emergency Hospital, Faculty of Medicine, University of Sibiu, 550169 Sibiu, Romania;
- Gabriel Gorecki
- Department of Anesthesia and Intensive Care, CF 2 Clinical Hospital, 022328 Bucharest, Romania;
- Faculty of Medicine, Titu Maiorescu University, 022328 Bucharest, Romania
- Cristina Martac
- Department of Anesthesiology, Fundeni Clinical Hospital, 022328 Bucharest, Romania;
- Marilena Stoian
- Department of Internal Medicine, “Carol Davila” University of Medicine and Pharmacy, 022328 Bucharest, Romania;
- Department of Internal Medicine and Nephrology, Dr. Ion Cantacuzino Hospital, 022328 Bucharest, Romania
- Anca Zgura
- Department of Medical Oncology, Oncological Institute Prof. Dr. Al. Trestioreanu, 022328 Bucharest, Romania;
- Department of Medical Oncology, “Carol Davila” University of Medicine and Pharmacy, 022328 Bucharest, Romania
- Irina Balescu
- Department of Visceral Surgery, “Carol Davila” University of Medicine and Pharmacy, 022328 Bucharest, Romania;
4
Zheng Y, Wang H, Weng T, Li Q, Guo L. Application of convolutional neural network for differentiating ovarian thecoma-fibroma and solid ovarian cancer based on MRI. Acta Radiol 2024; 65:860-868. PMID: 38751048; DOI: 10.1177/02841851241252951.
Abstract
BACKGROUND Ovarian thecoma-fibroma and solid ovarian cancer have similar clinical and imaging features, and they are difficult for radiologists to differentiate. Since their treatment and prognosis differ, accurate characterization is crucial. PURPOSE To non-invasively differentiate ovarian thecoma-fibroma from solid ovarian cancer with a convolutional neural network based on magnetic resonance imaging (MRI), and to provide interpretability for the model. MATERIAL AND METHODS A total of 156 tumors, comprising 86 ovarian thecoma-fibromas and 70 solid ovarian cancers, were split into training, validation, and test sets at a ratio of 8:1:1 by stratified random sampling. We tested model performance with four different networks, two weight modes, two optimizers, and four sizes of regions of interest (ROI). This process was repeated 10 times to calculate average performance on the test set. Gradient-weighted class activation mapping (Grad-CAM) was used to explain how the model makes classification decisions via visual localization maps. RESULTS ResNet18 with pre-trained weights, using the Adam optimizer and a circumscribed-rectangle ROI, achieved the best performance. The average accuracy, precision, recall, and AUC were 0.852, 0.828, 0.848, and 0.919 (P < 0.01), respectively. Grad-CAM showed that the areas associated with classification appeared on the edge or interior of ovarian thecoma-fibroma and in the interior of solid ovarian cancer. CONCLUSION This study shows that convolutional neural networks based on MRI can help radiologists differentiate ovarian thecoma-fibroma from solid ovarian cancer.
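The stratified 8:1:1 split described above can be reproduced with two successive stratified splits, as sketched here with scikit-learn; the class counts come from the abstract, everything else is an illustrative assumption.

```python
# Sketch: stratified 8:1:1 train/validation/test split of 156 tumors
# (86 thecoma-fibroma, 70 solid cancer) via two stratified splits.
import numpy as np
from sklearn.model_selection import train_test_split

labels = np.array([0] * 86 + [1] * 70)   # 0 = thecoma-fibroma, 1 = solid cancer
idx = np.arange(len(labels))

# First carve off ~10% as the test set, stratified by class
train_val, test = train_test_split(idx, test_size=0.1,
                                   stratify=labels, random_state=0)
# Then 1/9 of the remainder (~10% overall) as the validation set
train, val = train_test_split(train_val, test_size=1 / 9,
                              stratify=labels[train_val], random_state=0)
print(len(train), len(val), len(test))
```

Stratifying both splits keeps the 86:70 class ratio roughly constant in each partition, which matters with a cohort this small.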
Affiliation(s)
- Yuemei Zheng
- Department of Medical Imaging, Affiliated Hospital of Jining Medical University, Jining, PR China
- Hong Wang
- Department of Radiology, Tianjin First Central Hospital, Tianjin, PR China
- Tingting Weng
- School of Medical Imaging, Tianjin Medical University, Tianjin, PR China
- Qiong Li
- Department of Radiology, Tianjin Medical University General Hospital, Tianjin, PR China
- Li Guo
- School of Medical Imaging, Tianjin Medical University, Tianjin, PR China
5
Chai H, Huang Y, Xu L, Song X, He M, Wang Q. A decentralized federated learning-based cancer survival prediction method with privacy protection. Heliyon 2024; 10:e31873. PMID: 38845954; PMCID: PMC11153246; DOI: 10.1016/j.heliyon.2024.e31873. Received 07/18/2023; revised 05/18/2024; accepted 05/23/2024.
Abstract
Background Survival prediction is one of the crucial goals in precision medicine, as accurate survival assessment can aid physicians in selecting appropriate treatment for individual patients. To achieve this aim, extensive data must be utilized to train the prediction model and prevent overfitting. However, collecting patient data for disease prediction is challenging because data sources can vary across institutions and because of privacy and ownership concerns in data sharing. To integrate cancer data from different institutions without violating privacy laws, we developed a federated learning-based data integration framework called AdFed, which evaluates patients' survival while addressing privacy protection through decentralized federated learning and regularization. Results AdFed was tested on cancer datasets containing patient information from different institutions. The experimental results show that AdFed achieves better performance in cancer survival prediction using distributed data (AUC = 0.605) than the compared federated-learning-based methods (average AUC = 0.554). Additionally, to assess the biological interpretability of the method, the case study lists 10 genes related to liver cancer identified by AdFed, 5 of which were confirmed by literature review. Conclusions The results indicate that AdFed outperforms other federated-learning-based methods and that the interpretable algorithm can select biologically significant genes and pathways while ensuring the confidentiality and integrity of data.
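The core federated idea the abstract relies on (each institution trains locally and only model weights are exchanged, never patient data) can be sketched with federated averaging on a toy linear model. This is a generic FedAvg sketch, not AdFed itself; the client count, data, and learning rate are illustrative assumptions.

```python
# Sketch: federated averaging on a linear least-squares model.
# Each "institution" holds private data; only weights are shared.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])

# Three institutions, each with private local data
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = np.zeros(3)
for _round in range(50):
    local_ws = []
    for X, y in clients:                       # local gradient step on private data
        grad = 2 * X.T @ (X @ w - y) / len(y)
        local_ws.append(w - 0.1 * grad)
    w = np.mean(local_ws, axis=0)              # exchange and average weights only

print(np.round(w, 2))                          # approaches [1.0, -2.0, 0.5]
```

AdFed's decentralized variant removes the central aggregator (clients average with peers) and adds regularization for gene selection; the weight-averaging mechanic is the same.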
Affiliation(s)
- Hua Chai
- School of Mathematics and Big Data, Foshan University, Foshan, 528000, China
- Yiqian Huang
- School of Mathematics and Big Data, Foshan University, Foshan, 528000, China
- Lekai Xu
- School of Mathematics and Big Data, Foshan University, Foshan, 528000, China
- Xinpeng Song
- School of Mathematics and Big Data, Foshan University, Foshan, 528000, China
- Minfan He
- School of Mathematics and Big Data, Foshan University, Foshan, 528000, China
- Qingyong Wang
- School of Information and Artificial Intelligence, Anhui Agricultural University, Hefei, 230036, China
- Anhui Provincial Engineering Research Center for Agricultural Information Perception and Intelligent Computing, Hefei, 230036, China
6
Summers KL, Kerut EK, To F, Sheahan CM, Sheahan MG. Machine learning-based prediction of abdominal aortic aneurysms for individualized patient care. J Vasc Surg 2024; 79:1057-1067.e2. PMID: 38185212; DOI: 10.1016/j.jvs.2023.12.046. Received 08/11/2023; revised 11/30/2023; accepted 12/01/2023.
Abstract
OBJECTIVE The United States Preventive Services Task Force guidelines for abdominal aortic aneurysm (AAA) screening are broad and exclude many at-risk groups. We analyzed a large AAA screening database to examine the utility of a novel machine learning (ML) model for predicting individual risk of AAA. METHODS We created an ML model to predict the presence of AAAs (>3 cm) from the database of a national nonprofit screening organization (AAAneurysm Outreach). Participants self-reported demographics and comorbidities. The model, a two-layered feed-forward shallow network, generates AAA probability from patient characteristics. We evaluated graphs to determine significant factors and compared them with a traditional logistic regression model. RESULTS We analyzed a cohort of 10,033 patients with an AAA prevalence of 2.74%. Consistent with logistic regression analysis, the ML model identified the following predictors of AAA: Caucasian race, male gender, advancing age, and recent or past smoking, with recent smoking having a more profound effect (P < .05). Interestingly, the ML model showed that body mass index (BMI) was associated with the likelihood of AAA, especially in younger females. The ML model also identified a higher than predicted risk of AAA in several groups, including female nonsmokers with cardiac disease, female diabetics, those with a family history of AAA, and those with hypertension or hyperlipidemia at older ages. An elevated BMI conveyed a higher than expected risk in male smokers and all females. The ML model also identified a complex relationship of both diabetes mellitus and hyperlipidemia with gender, and family history of AAA was a more important risk factor in the ML model for both men and women. CONCLUSIONS We successfully developed an ML model based on an AAA screening database that unveils a complex relationship between AAA prevalence and many risk factors, including BMI. The model also highlights the need to expand AAA screening efforts in women. Using ML models in the clinical setting has the potential to deliver precise, individualized screening recommendations.
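A "two-layered feed-forward shallow network" producing an individual probability from self-reported characteristics can be sketched as below. Synthetic data, the feature encoding, and the layer widths are all illustrative assumptions; this is not the authors' actual model.

```python
# Sketch: shallow feed-forward network outputting an individual AAA probability
# from demographic/comorbidity inputs (synthetic, hypothetical encoding).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1000
# Hypothetical columns: age, male, smoker, BMI
X = np.column_stack([
    rng.normal(65, 8, n), rng.integers(0, 2, n),
    rng.integers(0, 2, n), rng.normal(27, 4, n),
])
# Synthetic ground truth: older male smokers are more likely to have an AAA
logit = 0.05 * (X[:, 0] - 65) + 0.8 * X[:, 1] + 1.2 * X[:, 2] - 4
y = rng.random(n) < 1 / (1 + np.exp(-logit))

Xs = StandardScaler().fit_transform(X)
net = MLPClassifier(hidden_layer_sizes=(16, 8),   # two hidden layers
                    max_iter=500, random_state=0)
net.fit(Xs, y)
proba = net.predict_proba(Xs)[:, 1]               # per-patient AAA probability
print(round(proba.mean(), 2))
```

The per-patient probabilities, rather than a single population threshold, are what enable the individualized screening recommendations the conclusion describes.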
Affiliation(s)
- Kelli L Summers
- Division of Vascular Surgery, Department of Surgery, LSU Health Sciences Center, School of Medicine, New Orleans, LA.
- Edmund K Kerut
- Division of Cardiovascular Diseases, Department of Medicine, LSU Health Sciences Center, New Orleans, LA; Heart Clinic of Louisiana, Marrero, LA
- Filip To
- Department of Agricultural and Biological Engineering, Bagley College of Engineering, Mississippi State University, Mississippi State, MS
- Claudie M Sheahan
- Division of Vascular Surgery, Department of Surgery, LSU Health Sciences Center, School of Medicine, New Orleans, LA
- Malachi G Sheahan
- Division of Vascular Surgery, Department of Surgery, LSU Health Sciences Center, School of Medicine, New Orleans, LA
7
Yin R, Dou Z, Wang Y, Zhang Q, Guo Y, Wang Y, Chen Y, Zhang C, Li H, Jian X, Qi L, Ma W. Preoperative CECT-Based Multitask Model Predicts Peritoneal Recurrence and Disease-Free Survival in Advanced Ovarian Cancer: A Multicenter Study. Acad Radiol 2024:S1076-6332(24)00231-9. PMID: 38693025; DOI: 10.1016/j.acra.2024.04.024. Received 01/22/2024; revised 04/13/2024; accepted 04/14/2024.
Abstract
RATIONALE AND OBJECTIVES Peritoneal recurrence is the predominant pattern of recurrence in advanced ovarian cancer (AOC) and portends a dismal prognosis. Accurate prediction of peritoneal recurrence and disease-free survival (DFS) is crucial to identify patients who might benefit from intensive treatment. We aimed to develop a predictive model for peritoneal recurrence and prognosis in AOC. METHODS In this retrospective multi-institution study of 515 patients, an end-to-end multi-task convolutional neural network (MCNN) comprising a segmentation CNN and a classification CNN was developed and tested using preoperative CT images, and an MCNN score was generated to indicate peritoneal recurrence and DFS status in patients with AOC. We evaluated the accuracy of the model for automatic segmentation and prognosis prediction. RESULTS The MCNN achieved promising segmentation performance, with a mean Dice coefficient of 84.3% (range: 78.8%-87.0%). The MCNN predicted peritoneal recurrence in the training (AUC 0.87; 95% CI 0.82-0.90), internal test (0.88; 0.85-0.92), and external test sets (0.82; 0.78-0.86). Similarly, the MCNN demonstrated consistently high accuracy in predicting recurrence, with AUCs of 0.85 (95% CI 0.82-0.88), 0.83 (95% CI 0.80-0.86), and 0.85 (95% CI 0.83-0.88). A high MCNN recurrence score was associated with poorer DFS (P < 0.0001), with hazard ratios of 0.1964 (95% CI: 0.1439-0.2680), 0.3249 (95% CI: 0.1896-0.5565), and 0.3458 (95% CI: 0.2582-0.4632). CONCLUSION The MCNN approach demonstrated high performance in predicting peritoneal recurrence and DFS in patients with AOC.
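The Dice coefficient used above to score the automatic segmentation has a simple definition for binary masks, sketched here in NumPy with a tiny toy example (the masks are illustrative, not from the study).

```python
# Sketch: Dice coefficient for binary segmentation masks.
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for boolean masks; eps avoids 0/0."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2 * inter / (pred.sum() + target.sum() + eps)

a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1   # 4 foreground pixels
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1   # 6 pixels, 4 overlapping
print(round(dice(a, b), 2))   # 2*4 / (4+6) = 0.8
```

A mean Dice of 84.3% therefore means the predicted and reference tumor masks overlap by roughly 84% of their combined size on average.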
Affiliation(s)
- Rui Yin
- National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin 300060, China; School of Biomedical Engineering & Technology, Tianjin Medical University, Tianjin 300203, China
- Zhaoxiang Dou
- Department of Breast Imaging, National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin 300060, China
- Yanyan Wang
- Department of CT and MRI, Shanxi Tumor Hospital, Taiyuan 030013, China
- Qian Zhang
- Department of Radiology, Baoding No. 1 Central Hospital, Baoding 071030, China
- Yijun Guo
- Department of Breast Imaging, National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin 300060, China
- Yigeng Wang
- Department of Radiology, National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin 300060, China
- Ying Chen
- Department of Gynecologic Oncology, National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin 300060, China
- Chao Zhang
- Department of Bone Cancer, National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin 300060, China
- Huiyang Li
- Department of Gynecology and Obstetrics, Tianjin Medical University General Hospital, Tianjin 300052, China
- Xiqi Jian
- School of Biomedical Engineering & Technology, Tianjin Medical University, Tianjin 300203, China
- Lisha Qi
- Department of Pathology, National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin 300060, China
- Wenjuan Ma
- Department of Breast Imaging, National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin 300060, China.
8
Liu D, Yang K, Zhang C, Xiao D, Zhao Y. Fully-Automatic Detection and Diagnosis System for Thyroid Nodules Based on Ultrasound Video Sequences by Artificial Intelligence. J Multidiscip Healthc 2024; 17:1641-1651. PMID: 38646015; PMCID: PMC11027922; DOI: 10.2147/jmdh.s439629. Received 09/10/2023; accepted 04/08/2024.
Abstract
Background Interpretation of ultrasound findings of thyroid nodules is subjective and labor-intensive for radiologists. Artificial intelligence (AI) is a comparatively objective and efficient technology. We aimed to establish a fully automatic detection and diagnosis system for thyroid nodules based on AI analysis of ultrasound video sequences. Patients and Methods We prospectively acquired dynamic ultrasound videos of 1067 thyroid nodules (804 for training and 263 for validation) from December 2018 to January 2021. All patients underwent hemithyroidectomy or total thyroidectomy. The dynamic ultrasound videos were used to develop an AI system consisting of two deep learning models that automatically detect and diagnose thyroid nodules. Average precision (AP) was used to evaluate the detection model, and the area under the receiver operating characteristic curve (AUC) was used to measure the performance of the diagnostic model. Results Nodule location and shape were accurately detected, with a high AP of 0.914 in the validation cohort. The AUC of the diagnostic model was 0.953 in the validation cohort. The sensitivity and specificity of junior and senior radiologists were 76.9% vs 78.3% and 68.4% vs 81.1%, respectively. The diagnostic performance of the AI model was superior to that of junior radiologists (P = 0.016) and not significantly different from that of senior radiologists (P = 0.281). Conclusion We established a fully automatic, AI-based detection and diagnosis system for thyroid nodules from ultrasound video that can be conveniently applied to optimize the management of patients with thyroid nodules.
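The two metrics this study reports, AP for the detection model and ROC AUC for the diagnostic model, can be computed with scikit-learn as sketched here on a toy set of labels and scores (the numbers are illustrative, not the study's data).

```python
# Sketch: average precision (detection) and ROC AUC (diagnosis) on toy scores.
from sklearn.metrics import average_precision_score, roc_auc_score

y_true = [0, 0, 1, 1, 0, 1, 0, 1]                      # 1 = malignant nodule
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.3, 0.7]    # model confidence

ap = average_precision_score(y_true, y_score)   # area under precision-recall curve
auc = roc_auc_score(y_true, y_score)            # area under ROC curve
print(round(ap, 3), round(auc, 3))
```

AP summarizes the precision-recall trade-off and is the usual choice for detection tasks with rare positives, while AUC summarizes sensitivity/specificity trade-offs for the binary diagnosis.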
Affiliation(s)
- Dan Liu
- Department of Ultrasound, The Second Affiliated Hospital of Nanchang University, Nanchang, 330006, People’s Republic of China
- Ke Yang
- The First in-Patient Department, Jiangxi Provincial People’s Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, 330006, People’s Republic of China
- Chunquan Zhang
- Department of Ultrasound, The Second Affiliated Hospital of Nanchang University, Nanchang, 330006, People’s Republic of China
- Dandan Xiao
- Department of Ultrasound, The Second Affiliated Hospital of Nanchang University, Nanchang, 330006, People’s Republic of China
- Yu Zhao
- Department of Ultrasound, The Second Affiliated Hospital of Nanchang University, Nanchang, 330006, People’s Republic of China
9
Brandão M, Mendes F, Martins M, Cardoso P, Macedo G, Mascarenhas T, Mascarenhas Saraiva M. Revolutionizing Women's Health: A Comprehensive Review of Artificial Intelligence Advancements in Gynecology. J Clin Med 2024; 13:1061. PMID: 38398374; PMCID: PMC10889757; DOI: 10.3390/jcm13041061. Received 12/31/2023; revised 02/04/2024; accepted 02/05/2024.
Abstract
Artificial intelligence has yielded remarkably promising results in several medical fields, namely those with a strong imaging component. Gynecology relies heavily on imaging, since it offers useful visual data on the female reproductive system, leading to a deeper understanding of pathophysiological concepts. The applicability of artificial intelligence technologies has so far not been as noticeable in gynecologic imaging as in other medical fields. However, due to growing interest in this area, some studies have been performed with exciting results. From urogynecology to oncology, artificial intelligence algorithms, particularly machine learning and deep learning, have shown huge potential to revolutionize the overall healthcare experience for women's reproductive health. In this review, we aim to establish the current status of AI in gynecology, outline the upcoming developments in this area, and discuss the challenges facing its clinical implementation, namely the technological and ethical concerns surrounding technology development, implementation, and accountability.
Affiliation(s)
- Marta Brandão
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (M.B.); (P.C.); (G.M.); (T.M.)
- Francisco Mendes
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (F.M.); (M.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Miguel Martins
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (F.M.); (M.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Pedro Cardoso
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (M.B.); (P.C.); (G.M.); (T.M.)
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (F.M.); (M.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Guilherme Macedo
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (M.B.); (P.C.); (G.M.); (T.M.)
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (F.M.); (M.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Teresa Mascarenhas
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (M.B.); (P.C.); (G.M.); (T.M.)
- Department of Obstetrics and Gynecology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Miguel Mascarenhas Saraiva
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (M.B.); (P.C.); (G.M.); (T.M.)
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (F.M.); (M.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
10
Mahootiha M, Qadir HA, Bergsland J, Balasingham I. Multimodal deep learning for personalized renal cell carcinoma prognosis: Integrating CT imaging and clinical data. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2024; 244:107978. [PMID: 38113804 DOI: 10.1016/j.cmpb.2023.107978] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/01/2023] [Revised: 12/05/2023] [Accepted: 12/12/2023] [Indexed: 12/21/2023]
Abstract
BACKGROUND AND OBJECTIVE Renal cell carcinoma represents a significant global health challenge with a low survival rate. The aim of this research was to devise a comprehensive deep-learning model capable of predicting survival probabilities in patients with renal cell carcinoma by integrating CT imaging and clinical data, addressing the limitations observed in prior studies and facilitating the identification of patients requiring urgent treatment. METHODS The proposed framework comprises three modules: a 3D image feature extractor, clinical variable selection, and survival prediction. Based on a 3D CNN architecture, the feature extractor module predicts, from CT images, the ISUP grade of renal cell carcinoma tumors, which is linked to mortality rates. Clinical variables are systematically selected using the Spearman score and random forest importance score as criteria. A deep learning-based network, trained with a discrete LogisticHazard-based loss, performs the survival prediction. Nine distinct experiments are performed, with varying numbers of clinical variables determined by different thresholds of the Spearman and importance scores. RESULTS Our findings demonstrate that the proposed strategy surpasses the current literature on renal cancer prognosis based on CT scans and clinical factors. The best-performing experiment yielded a concordance index of 0.84 and an area under the curve value of 0.8 on the test cohort, which suggests strong predictive power. CONCLUSIONS The multimodal deep-learning approach developed in this study shows promising results in estimating survival probabilities for renal cell carcinoma patients using CT imaging and clinical data. This may help identify patients who require urgent treatment, potentially improving patient outcomes. The code created for this project is publicly available on GitHub.
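The concordance index of 0.84 reported above is conventionally Harrell's C: among all comparable patient pairs, the fraction in which the model assigns the higher risk to the patient who experiences the event earlier, with censored patients only anchoring pairs where comparison is valid. A minimal sketch with invented toy data (not the study's data or code):

```python
def concordance_index(times, events, risks):
    """Harrell's C. A pair (i, j) is comparable when patient i's observed
    event time precedes patient j's follow-up time; the pair is concordant
    when the model gives i the higher risk. Risk ties count as 0.5."""
    num = den = 0.0
    for i in range(len(times)):
        for j in range(len(times)):
            if events[i] == 1 and times[i] < times[j]:
                den += 1
                if risks[i] > risks[j]:
                    num += 1
                elif risks[i] == risks[j]:
                    num += 0.5
    return num / den

times  = [2, 4, 6, 8]          # months to event or censoring (invented)
events = [1, 1, 0, 1]          # 1 = event observed, 0 = censored
risks  = [0.9, 0.2, 0.7, 0.1]  # model-predicted risk scores (invented)
print(concordance_index(times, events, risks))  # 0.8: 4 of 5 comparable pairs concordant
```

Survival libraries such as lifelines ship an equivalent (and faster) routine; the quadratic loop above simply spells out the definition.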
Affiliation(s)
- Maryamalsadat Mahootiha
- The Intervention Centre, Oslo University Hospital, Oslo, 0372, Norway; Faculty of Medicine, University of Oslo, Oslo, 0372, Norway.
- Hemin Ali Qadir
- The Intervention Centre, Oslo University Hospital, Oslo, 0372, Norway
- Jacob Bergsland
- The Intervention Centre, Oslo University Hospital, Oslo, 0372, Norway
- Ilangko Balasingham
- The Intervention Centre, Oslo University Hospital, Oslo, 0372, Norway; Department of Electronic Systems, Norwegian University of Science and Technology, Trondheim, Norway
11
Nguyen HS, Ho DKN, Nguyen NN, Tran HM, Tam KW, Le NQK. Predicting EGFR Mutation Status in Non-Small Cell Lung Cancer Using Artificial Intelligence: A Systematic Review and Meta-Analysis. Acad Radiol 2024; 31:660-683. [PMID: 37120403 DOI: 10.1016/j.acra.2023.03.040] [Citation(s) in RCA: 15] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2023] [Revised: 03/25/2023] [Accepted: 03/28/2023] [Indexed: 05/01/2023]
Abstract
RATIONALE AND OBJECTIVES Recent advancements in artificial intelligence (AI) hold substantial promise for epidermal growth factor receptor (EGFR) mutation status prediction in non-small cell lung cancer (NSCLC). We aimed to evaluate the performance and quality of AI algorithms that use radiomics features to predict EGFR mutation status in patients with NSCLC. MATERIALS AND METHODS We searched PubMed (Medline), EMBASE, Web of Science, and IEEE Xplore for studies published up to February 28, 2022. Studies utilizing an AI algorithm (either conventional machine learning [cML] or deep learning [DL]) for predicting EGFR mutations in patients with NSCLC were included. We extracted binary diagnostic accuracy data and constructed a bivariate random-effects model to obtain pooled sensitivity, specificity, and 95% confidence intervals. This study is registered with PROSPERO, CRD42021278738. RESULTS Our search identified 460 studies, of which 42 were included. Thirty-five studies were included in the meta-analysis. The AI algorithms exhibited an overall area under the curve (AUC) value of 0.789 and pooled sensitivity and specificity of 72.2% and 73.3%, respectively. The DL algorithms outperformed cML in terms of AUC (0.822 vs. 0.775) and sensitivity (80.1% vs. 71.1%), but had lower specificity (70.0% vs. 73.8%, p-value < 0.001). Subgroup analysis revealed that the use of positron-emission tomography/computed tomography, additional clinical information, deep feature extraction, and manual segmentation can improve diagnostic performance. CONCLUSION DL algorithms can serve as a novel method for increasing predictive accuracy and thus have considerable potential for use in predicting EGFR mutation status in patients with NSCLC. We also suggest that guidelines on using AI algorithms in medical image analysis be developed with a focus on oncologic radiomics.
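The pooled sensitivity and specificity above come from a bivariate random-effects model, which jointly models logit-transformed sensitivities and specificities across studies. As a deliberately simplified stand-in that shows only the transform-pool-backtransform idea (not the actual bivariate model; all inputs invented):

```python
import math

def pool_logit(props, weights):
    """Weighted fixed-effect pooling on the logit scale, then back-transform.
    Simplified sketch: weights are raw sample sizes here, whereas a real
    meta-analysis would use inverse-variance weights and between-study
    heterogeneity terms."""
    logits = [math.log(p / (1 - p)) for p in props]
    pooled = sum(l * w for l, w in zip(logits, weights)) / sum(weights)
    return 1 / (1 + math.exp(-pooled))

sens = [0.70, 0.75, 0.72]  # per-study sensitivities (invented)
n    = [100, 150, 120]     # per-study case counts (invented)
print(round(pool_logit(sens, n), 3))  # pooled sensitivity, about 0.727 here
```

Pooling on the logit scale keeps the back-transformed estimate inside (0, 1), which is why diagnostic meta-analysis works with logits rather than raw proportions.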
Affiliation(s)
- Hung Song Nguyen
- International Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei City, Taiwan (H.S.N., N.N.N.); Department of Pediatrics, Pham Ngoc Thach University of Medicine, Ho Chi Minh City, Viet Nam (H.S.N.); Intensive Care Unit Department, Children's Hospital 1, Ho Chi Minh City, Viet Nam (H.S.N.)
- Dang Khanh Ngan Ho
- School of Nutrition and Health Sciences, College of Nutrition, Taipei Medical University, Taipei, Taiwan (D.K.N.H.)
- Nam Nhat Nguyen
- International Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei City, Taiwan (H.S.N., N.N.N.)
- Huy Minh Tran
- Department of Neurosurgery, Faculty of Medicine, University of Medicine and Pharmacy at Ho Chi Minh City, Ho Chi Minh City, Viet Nam (H.M.T.)
- Ka-Wai Tam
- Center for Evidence-based Health Care, Shuang Ho Hospital, Taipei Medical University, New Taipei City, Taiwan (K.-W.T.); Cochrane Taiwan, Taipei Medical University, Taipei City, Taiwan (K.-W.T.); Division of General Surgery, Department of Surgery, Shuang Ho Hospital, Taipei Medical University, New Taipei City, Taiwan (K.-W.T.); Division of General Surgery, Department of Surgery, School of Medicine, College of Medicine, Taipei Medical University, Taipei City, Taiwan (K.-W.T.)
- Nguyen Quoc Khanh Le
- Professional Master Program in Artificial Intelligence in Medicine, College of Medicine, Taipei Medical University, Taipei 110, Taiwan (N.Q.K.L.); Research Center for Artificial Intelligence in Medicine, Taipei Medical University, Taipei 110, Taiwan (N.Q.K.L.); AIBioMed Research Group, Taipei Medical University, Taipei 110, Taiwan (N.Q.K.L.); Translational Imaging Research Center, Taipei Medical University Hospital, Taipei 110, Taiwan (N.Q.K.L.).
12
Mahootiha M, Qadir HA, Aghayan D, Fretland ÅA, von Gohren Edwin B, Balasingham I. Deep learning-assisted survival prognosis in renal cancer: A CT scan-based personalized approach. Heliyon 2024; 10:e24374. [PMID: 38298725 PMCID: PMC10828686 DOI: 10.1016/j.heliyon.2024.e24374] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2023] [Revised: 12/19/2023] [Accepted: 01/08/2024] [Indexed: 02/02/2024] Open
Abstract
This paper presents a deep learning (DL) approach for predicting survival probabilities of renal cancer patients based solely on preoperative CT imaging. The proposed approach consists of two networks: a classifier network and a survival network. The classifier extracts features from 3D CT scans to predict the grade of renal cell carcinoma (RCC) tumors, as defined by the International Society of Urological Pathology (ISUP). Our classifier is a 3D convolutional neural network, chosen to avoid losing crucial information on the interconnection of slices in 3D images. We employ multiple procedures, including image augmentation, preprocessing, and concatenation, to improve the performance of the classifier. Given the strong correlation between ISUP grading and renal cancer prognosis in the clinical context, we use the ISUP grading features extracted by the classifier as the input to the survival network. By leveraging this clinical association and the classifier network, we are able to model our survival analysis using a simple DL-based network. We adopt a discrete LogisticHazard-based loss to extract intrinsic survival characteristics of RCC tumors from CT images. This allows us to build a completely parametric survival model that varies with patients' tumor characteristics and predicts non-proportional survival probability curves for different patients. Our results demonstrate that the proposed method can predict the future course of renal cancer with reasonable accuracy from CT scans. The proposed method obtained an average concordance index of 0.72, an integrated Brier score of 0.15, and an area under the curve value of 0.71 on the test cohorts.
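A LogisticHazard-style head predicts a discrete hazard h_k for each follow-up interval; the survival curve then falls out as the running product of (1 - h_k). A minimal sketch of that conversion (the hazard values below are invented, and this is the standard discrete-time identity rather than the authors' implementation):

```python
def survival_curve(hazards):
    """Turn discrete per-interval hazards h_k into survival probabilities
    S(t_k) = prod_{j <= k} (1 - h_j), as in a discrete LogisticHazard model."""
    curve, s = [], 1.0
    for h in hazards:
        s *= 1.0 - h   # probability of surviving this interval given survival so far
        curve.append(s)
    return curve

# Invented per-interval hazards for three follow-up intervals:
print(survival_curve([0.1, 0.2, 0.3]))  # approximately [0.9, 0.72, 0.504]
```

Because each patient gets their own hazard sequence, the resulting curves need not be proportional across patients, which is exactly the non-proportionality the abstract highlights.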
Affiliation(s)
- Maryamalsadat Mahootiha
- The Intervention Centre, Oslo University Hospital, Oslo, 0372, Norway
- Faculty of Medicine, University of Oslo, Oslo, 0372, Norway
- Hemin Ali Qadir
- The Intervention Centre, Oslo University Hospital, Oslo, 0372, Norway
- Davit Aghayan
- The Intervention Centre, Oslo University Hospital, Oslo, 0372, Norway
- Bjørn von Gohren Edwin
- The Intervention Centre, Oslo University Hospital, Oslo, 0372, Norway
- Faculty of Medicine, University of Oslo, Oslo, 0372, Norway
- Ilangko Balasingham
- The Intervention Centre, Oslo University Hospital, Oslo, 0372, Norway
- Department of Electronic Systems, Norwegian University of Science and Technology, Trondheim, Norway
13
Wu X, Wu H, Miao S, Cao G, Su H, Pan J, Xu Y. Deep learning prediction of esophageal squamous cell carcinoma invasion depth from arterial phase enhanced CT images: a binary classification approach. BMC Med Inform Decis Mak 2024; 24:3. [PMID: 38167058 PMCID: PMC10759510 DOI: 10.1186/s12911-023-02386-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2023] [Accepted: 12/04/2023] [Indexed: 01/05/2024] Open
Abstract
BACKGROUND Precise prediction of esophageal squamous cell carcinoma (ESCC) invasion depth is crucial not only for optimizing treatment plans but also for reducing the need for invasive procedures, consequently lowering complications and costs. Despite this, current techniques, which can be invasive and costly, struggle to achieve the necessary precision, highlighting a pressing need for more effective, non-invasive alternatives. METHOD We developed ResoLSTM-Depth, a deep learning model to distinguish ESCC stages T1-T2 from T3-T4. It integrates ResNet-18 and Long Short-Term Memory (LSTM) networks, leveraging their respective strengths in spatial and sequential data processing. The method uses arterial phase CT scans from ESCC patients. The dataset was meticulously segmented by an experienced radiologist for effective training and validation. RESULTS Upon performing five-fold cross-validation, the ResoLSTM-Depth model exhibited commendable performance with an accuracy of 0.857, an AUC of 0.901, a sensitivity of 0.884, and a specificity of 0.828. These results were superior to those of the ResNet-18 model alone, whose average accuracy was 0.824 and AUC was 0.879. Attention maps further highlighted influential features for depth prediction, enhancing model interpretability. CONCLUSION ResoLSTM-Depth is a promising tool for ESCC invasion depth prediction. It offers potential for improving the staging and therapeutic planning of ESCC.
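Five-fold cross-validation, as used above, deals the patients into five disjoint validation folds; each fold is held out once while the remaining four train the model, and the five metric values are averaged. A minimal index-level sketch (fold count and seed are illustrative defaults, not the study's code):

```python
import random

def k_fold_indices(n, k=5, seed=0):
    """Shuffle patient indices and deal them into k disjoint validation
    folds; fold i validates while the other k-1 folds train."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

folds = k_fold_indices(20, k=5)   # 20 invented patients, 5 folds of 4
for fold in folds:
    train = [j for f in folds if f is not fold for j in f]
    # ...fit on `train`, evaluate on `fold`, then average the k metric values
```

Splitting at the patient level (not the image level) is the important detail in medical imaging, so that slices from one patient never appear in both training and validation.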
Affiliation(s)
- Xiaoli Wu
- Department of Gastroenterology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, China
- Hao Wu
- Department of Gastroenterology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, China
- Shouliang Miao
- Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, China
- Guoquan Cao
- Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, China
- Huang Su
- Department of Gastroenterology, Wenzhou Central Hospital, Wenzhou, Zhejiang, China
- Jie Pan
- Department of Gastroenterology, Wenzhou Central Hospital, Wenzhou, Zhejiang, China
- Yilun Xu
- Department of Gastroenterology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, China.
14
Leng Y, Li S, Zhu J, Wang X, Luo F, Wang Y, Gong L. Application of medical imaging in ovarian cancer: a bibliometric analysis from 2000 to 2022. Front Oncol 2023; 13:1326297. [PMID: 38111527 PMCID: PMC10725957 DOI: 10.3389/fonc.2023.1326297] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2023] [Accepted: 11/14/2023] [Indexed: 12/20/2023] Open
Abstract
Background Ovarian cancer (OC) is the most lethal tumor within the female reproductive system. Medical imaging plays a significant role in diagnosing and monitoring OC. This study aims to use bibliometric analysis to explore the current research hotspots and collaborative networks in the application of medical imaging in OC from 2000 to 2022. Methods A systematic search for medical imaging in OC was conducted on the Web of Science Core Collection on August 9, 2023. All reviews and articles published from January 2000 to December 2022 were downloaded, and an analysis of countries, institutions, journals, keywords, and collaborative networks was performed using CiteSpace and VOSviewer. Results A total of 5,958 publications were obtained, demonstrating a clear upward trend in annual publications over the study period. The USA led in productivity with 1,373 publications, and Harvard University emerged as the most prominent institution with 202 publications. Timmerman D was the most prolific contributor with 100 publications, and Gynecological Oncology led in the number of publications with 296. The top three keywords were "ovarian cancer" (1,256), "ultrasound" (725), and "diagnosis" (712). In addition, "pelvic masses" had the highest burst strength (25.5), followed by "magnetic resonance imaging (MRI)" (21.47). Recent emergent keywords such as "apoptosis", "nanoparticles", "features", "accuracy", and "human epididymal protein 4 (HE 4)" reflect research trends in this field and may become research hotspots in the future. Conclusion This study provides a comprehensive summary of the key contributions of OC imaging to the field's development over the past 23 years. Presently, the primary areas of OC imaging research include MRI, targeted therapy of OC, the novel biomarker HE 4, and artificial intelligence. These areas are expected to influence future research endeavors in this field.
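Keyword tallies like the ones above ("ovarian cancer" 1,256; "ultrasound" 725) are, at their core, simple frequency counts over the author-keyword lists in the bibliographic export; tools like VOSviewer layer co-occurrence networks on top of the same counts. A toy sketch with invented records:

```python
from collections import Counter

# Toy author-keyword lists (invented) standing in for a Web of Science export.
records = [
    ["ovarian cancer", "ultrasound", "diagnosis"],
    ["ovarian cancer", "MRI"],
    ["ovarian cancer", "ultrasound"],
]
counts = Counter(k for rec in records for k in rec)
print(counts.most_common(2))  # [('ovarian cancer', 3), ('ultrasound', 2)]
```

Real bibliometric pipelines also normalize keyword variants ("MRI" vs. "magnetic resonance imaging") before counting, which this sketch omits.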
Affiliation(s)
- Yinping Leng
- Department of Radiology, the Second Affiliated Hospital of Nanchang University, Nanchang, China
- Shuhao Li
- Department of Radiology, the Second Affiliated Hospital of Nanchang University, Nanchang, China
- Jianghua Zhu
- Department of Radiology, the Second Affiliated Hospital of Nanchang University, Nanchang, China
- Xiwen Wang
- Department of Radiology, the Second Affiliated Hospital of Nanchang University, Nanchang, China
- Fengyuan Luo
- Department of Radiology, the Second Affiliated Hospital of Nanchang University, Nanchang, China
- Yu Wang
- Clinical and Technical Support, Philips Healthcare, Shanghai, China
- Lianggeng Gong
- Department of Radiology, the Second Affiliated Hospital of Nanchang University, Nanchang, China
15
Sadeghi MH, Sina S, Alavi M, Giammarile F. The OCDA-Net: a 3D convolutional neural network-based system for classification and staging of ovarian cancer patients using [ 18F]FDG PET/CT examinations. Ann Nucl Med 2023; 37:645-654. [PMID: 37768493 DOI: 10.1007/s12149-023-01867-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2023] [Accepted: 09/11/2023] [Indexed: 09/29/2023]
Abstract
OBJECTIVE To create a 3D convolutional neural network (CNN)-based system that can use whole-body [18F]FDG PET for recurrence/post-therapy surveillance in ovarian cancer (OC). METHODS In this study, 1224 image sets from OC patients who underwent whole-body [18F]FDG PET/CT at Kowsar Hospital between April 2019 and May 2022 were investigated. For recurrence/post-therapy surveillance, diagnostic classification as cancerous or non-cancerous and staging as stage III or stage IV were determined by pathological diagnosis and specialists' interpretation. New deep neural network algorithms, the OCDAc-Net and the OCDAs-Net, were developed for diagnostic classification and staging of OC patients using [18F]FDG PET/CT images. Examinations were divided into independent training (75%), validation (10%), and testing (15%) subsets. RESULTS This study included 37 women (mean age 56.3 years; age range 36-83 years). Data augmentation techniques were applied to the images in two phases. There were 1224 image sets for diagnostic classification and staging, of which 170 were reserved as the test set. The OCDAc-Net achieved an area under the receiver operating characteristic curve (AUC) of 0.990 and an overall accuracy of 0.92 for diagnostic classification, and the OCDAs-Net achieved an AUC of 0.995 and an overall accuracy of 0.94 for staging. CONCLUSIONS The proposed 3D CNN-based models provide potential tools for recurrence/post-therapy surveillance in OC. The OCDAc-Net and OCDAs-Net models provide a new prognostic analysis method that can utilize PET images without pathological findings for diagnostic classification and staging.
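The 75/10/15 train/validation/test partition described above can be sketched as a shuffled three-way split. The fractions mirror the stated percentages, but the helper itself (name, seed, rounding behavior) is illustrative, not the authors' code:

```python
import random

def three_way_split(items, fractions=(0.75, 0.10, 0.15), seed=42):
    """Shuffle once, then cut into train/validation/test by the given
    fractions. Integer truncation means the last (test) slice absorbs
    any rounding remainder."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n_train = int(fractions[0] * len(items))
    n_val = int(fractions[1] * len(items))
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train, val, test = three_way_split(range(1224))  # 1224 image sets, as above
print(len(train), len(val), len(test))  # 918 122 184
```

Fixing the shuffle seed makes the partition reproducible across runs, which matters when the same split feeds two models (here, the classification and staging networks).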
Affiliation(s)
- Mohammad Hossein Sadeghi
- Nuclear Engineering Department, School of Mechanical Engineering, Shiraz University, Shiraz, Iran
- Sedigheh Sina
- Nuclear Engineering Department, School of Mechanical Engineering, Shiraz University, Shiraz, Iran.
- Radiation Research Center, School of Mechanical Engineering, Shiraz University, Shiraz, Iran.
- Mehrosadat Alavi
- Department of Nuclear Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
- Francesco Giammarile
- Nuclear Medicine and Diagnostic Imaging Section, Division of Human Health, International Atomic Energy Agency, Vienna, Austria
16
Jiang Y, Wang C, Zhou S. Artificial intelligence-based risk stratification, accurate diagnosis and treatment prediction in gynecologic oncology. Semin Cancer Biol 2023; 96:82-99. [PMID: 37783319 DOI: 10.1016/j.semcancer.2023.09.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2022] [Revised: 08/27/2023] [Accepted: 09/25/2023] [Indexed: 10/04/2023]
Abstract
As a data-driven science, artificial intelligence (AI) has paved a promising path toward an evolving health system teeming with thrilling opportunities for precision oncology. Notwithstanding the tremendous success of oncological AI in fields such as lung carcinoma, breast tumor, and brain malignancy, less attention has been devoted to investigating its influence on gynecologic oncology. This review therefore sheds light on the ever-increasing contribution of state-of-the-art AI techniques to the refined risk stratification and whole-course management of patients with gynecologic tumors, in particular cervical, ovarian, and endometrial cancer, centering on information and features extracted from clinical data (electronic health records), cancer imaging including radiological imaging, colposcopic images, and cytological and histopathological digital images, and molecular profiling (genomics, transcriptomics, metabolomics, and so forth). However, there are still noteworthy challenges beyond performance validation. Thus, this work further describes the limitations and challenges faced in the real-world implementation of AI models, as well as potential solutions to address these issues.
Affiliation(s)
- Yuting Jiang
- Department of Obstetrics and Gynecology, Key Laboratory of Birth Defects and Related Diseases of Women and Children of MOE and State Key Laboratory of Biotherapy, West China Second Hospital, Sichuan University and Collaborative Innovation Center, Chengdu, Sichuan 610041, China; Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Chengdi Wang
- Department of Obstetrics and Gynecology, Key Laboratory of Birth Defects and Related Diseases of Women and Children of MOE and State Key Laboratory of Biotherapy, West China Second Hospital, Sichuan University and Collaborative Innovation Center, Chengdu, Sichuan 610041, China; Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Shengtao Zhou
- Department of Obstetrics and Gynecology, Key Laboratory of Birth Defects and Related Diseases of Women and Children of MOE and State Key Laboratory of Biotherapy, West China Second Hospital, Sichuan University and Collaborative Innovation Center, Chengdu, Sichuan 610041, China; Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China.
17
Garg P, Mohanty A, Ramisetty S, Kulkarni P, Horne D, Pisick E, Salgia R, Singhal SS. Artificial intelligence and allied subsets in early detection and preclusion of gynecological cancers. Biochim Biophys Acta Rev Cancer 2023; 1878:189026. [PMID: 37980945 DOI: 10.1016/j.bbcan.2023.189026] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2023] [Revised: 11/09/2023] [Accepted: 11/14/2023] [Indexed: 11/21/2023]
Abstract
Gynecological cancers, including breast, cervical, ovarian, uterine, and vaginal cancers, pose a serious threat to world health, with early identification being crucial to patient outcomes and survival rates. The application of machine learning (ML) and artificial intelligence (AI) approaches to the study of gynecological cancer has shown potential to revolutionize cancer detection and diagnosis. The current review outlines the significant advancements, obstacles, and prospects brought about by AI and ML technologies in the timely identification and accurate diagnosis of different types of gynecological cancers. AI-powered technologies can use genomic data to discover genetic alterations and biomarkers linked to a particular form of gynecologic cancer, assisting in the creation of targeted treatments. Furthermore, it has been shown that AI and ML technologies can greatly increase the accuracy and efficacy of cancer diagnosis in gynecologic tumors, reduce diagnostic delays, and possibly eliminate needless invasive procedures. In conclusion, the review focuses on the integrative role of AI- and ML-based tools and techniques in the early detection and exclusion of various cancer types, together with the collaborative coordination between clinical researchers, data scientists, and regulatory authorities that is needed to realize the full potential of AI and ML in gynecologic cancer care.
Affiliation(s)
- Pankaj Garg
- Department of Chemistry, GLA University, Mathura, Uttar Pradesh 281406, India
- Atish Mohanty
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
- Sravani Ramisetty
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
- Prakash Kulkarni
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
- David Horne
- Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
- Evan Pisick
- Department of Medical Oncology, City of Hope, Chicago, IL 60099, USA
- Ravi Salgia
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
- Sharad S Singhal
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA.
18
Ma B, Guo J, Chu H, van Dijk LV, van Ooijen PM, Langendijk JA, Both S, Sijtsema NM. Comparison of computed tomography image features extracted by radiomics, self-supervised learning and end-to-end deep learning for outcome prediction of oropharyngeal cancer. Phys Imaging Radiat Oncol 2023; 28:100502. [PMID: 38026084 PMCID: PMC10663809 DOI: 10.1016/j.phro.2023.100502] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2023] [Revised: 10/02/2023] [Accepted: 10/17/2023] [Indexed: 12/01/2023] Open
Abstract
Background and purpose To compare the prediction performance of image features of computed tomography (CT) images extracted by radiomics, self-supervised learning and end-to-end deep learning for local control (LC), regional control (RC), locoregional control (LRC), distant metastasis-free survival (DMFS), tumor-specific survival (TSS), overall survival (OS) and disease-free survival (DFS) of oropharyngeal squamous cell carcinoma (OPSCC) patients after (chemo)radiotherapy. Methods and materials The OPC-Radiomics dataset was used for model development and independent internal testing and the UMCG-OPC set for external testing. Image features were extracted from the Gross Tumor Volume contours of the primary tumor (GTVt) regions in CT scans when using radiomics or a self-supervised learning-based method (autoencoder). Clinical and combined (radiomics, autoencoder or end-to-end) models were built using multivariable Cox proportional-hazard analysis with clinical features only and both clinical and image features for LC, RC, LRC, DMFS, TSS, OS and DFS prediction, respectively. Results In the internal test set, combined autoencoder models performed better than clinical models and combined radiomics models for LC, RC, LRC, DMFS, TSS and DFS prediction (largest improvements in C-index: 0.91 vs. 0.76 in RC and 0.74 vs. 0.60 in DMFS). In the external test set, combined radiomics models performed better than clinical and combined autoencoder models for all endpoints (largest improvements in LC, 0.82 vs. 0.71). Furthermore, combined models performed better in risk stratification than clinical models and showed good calibration for most endpoints. Conclusions Image features extracted using self-supervised learning showed best internal prediction performance while radiomics features have better external generalizability.
Affiliation(s)
- Baoqiang Ma: Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Jiapan Guo: Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Machine Learning Lab, Data Science Center in Health (DASH), Groningen, Netherlands; Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence, University of Groningen, Groningen, Netherlands
- Hung Chu: Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Machine Learning Lab, Data Science Center in Health (DASH), Groningen, Netherlands; Center for Information Technology, University of Groningen, Groningen, Netherlands
- Lisanne V. van Dijk: Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Peter M.A. van Ooijen: Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Machine Learning Lab, Data Science Center in Health (DASH), Groningen, Netherlands
- Johannes A. Langendijk: Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Stefan Both: Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Nanna M. Sijtsema: Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
19
Hatamikia S, Nougaret S, Panico C, Avesani G, Nero C, Boldrini L, Sala E, Woitek R. Ovarian cancer beyond imaging: integration of AI and multiomics biomarkers. Eur Radiol Exp 2023; 7:50. [PMID: 37700218] [PMCID: PMC10497482] [DOI: 10.1186/s41747-023-00364-7]
Abstract
High-grade serous ovarian cancer is the most lethal gynaecological malignancy. Detailed molecular studies have revealed marked intra-patient heterogeneity at the tumour microenvironment level, likely contributing to poor prognosis. Despite large quantities of clinical, molecular and imaging data on ovarian cancer being accumulated worldwide and the rise of high-throughput computing, data frequently remain siloed and are thus inaccessible for integrated analyses. Only a minority of studies on ovarian cancer have set out to harness artificial intelligence (AI) for the integration of multiomics data and for developing powerful algorithms that capture the characteristics of ovarian cancer at multiple scales and levels. Clinical data, serum markers, and imaging data were most frequently used, followed by genomics and transcriptomics. The current literature proves that integrative multiomics approaches outperform models based on single data types and indicates that imaging can be used for the longitudinal tracking of tumour heterogeneity in space and potentially over time. This review presents an overview of studies that integrated two or more data types to develop AI-based classifiers or prediction models.
Relevance statement: Integrative multiomics models for ovarian cancer outperform models using single data types for classification, prognostication, and predictive tasks.
Key points:
- This review presents studies using multiomics and artificial intelligence in ovarian cancer.
- Current literature proves that integrative multiomics outperform models using single data types.
- Around 60% of studies used a combination of imaging with clinical data.
- The combination of genomics and transcriptomics with imaging data was infrequently used.
Affiliation(s)
- Sepideh Hatamikia: Research Center for Medical Image Analysis and AI (MIAAI), Danube Private University, Krems, Austria; Austrian Center for Medical Innovation and Technology (ACMIT), Wiener Neustadt, Austria
- Stephanie Nougaret: Department of Radiology, Montpellier Cancer Institute, University of Montpellier, Montpellier, France
- Camilla Panico: Dipartimento di Diagnostica Per Immagini, Radioterapia Oncologica Ed Ematologia, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Giacomo Avesani: Dipartimento di Diagnostica Per Immagini, Radioterapia Oncologica Ed Ematologia, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Camilla Nero: Scienze Della Salute Della Donna, del Bambino e Di Sanità Pubblica, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Luca Boldrini: Dipartimento di Diagnostica Per Immagini, Radioterapia Oncologica Ed Ematologia, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Evis Sala: Dipartimento di Diagnostica Per Immagini, Radioterapia Oncologica Ed Ematologia, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Ramona Woitek: Research Center for Medical Image Analysis and AI (MIAAI), Danube Private University, Krems, Austria; Department of Radiology, University of Cambridge, Cambridge, UK; Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, UK
20
Zhou HY, Cheng JM, Chen TW, Zhang XM, Ou J, Cao JM, Li HJ. CT radiomics for prediction of microvascular invasion in hepatocellular carcinoma: A systematic review and meta-analysis. Clinics (Sao Paulo) 2023; 78:100264. [PMID: 37562218] [PMCID: PMC10432601] [DOI: 10.1016/j.clinsp.2023.100264]
Abstract
The power of computed tomography (CT) radiomics for preoperative prediction of microvascular invasion (MVI) in hepatocellular carcinoma (HCC) reported in current research is variable. This systematic review and meta-analysis aims to evaluate the value of CT radiomics for MVI prediction in HCC and to investigate the methodologic quality of the radiomics research workflow. The PubMed, Embase, Web of Science, and Cochrane Library databases were systematically searched. The methodologic quality of included studies was assessed. Validation data from studies with Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD) statement type 2a or above were extracted for meta-analysis. Eleven studies were included, of which nine were eligible for meta-analysis. Radiomics quality scores of the eleven enrolled studies varied from 6 to 17 (16.7%-47.2% of the total points), with an average score of 14. Pooled sensitivity, specificity, and area under the summary receiver operating characteristic curve (AUC) for the predictive performance of CT radiomics were 0.82 (95% CI 0.77-0.86), 0.79 (95% CI 0.75-0.83), and 0.87 (95% CI 0.84-0.91), respectively. Meta-regression and subgroup analyses showed that radiomics models based on 3D tumor segmentation and deep learning models achieved superior performance compared with 2D segmentation and non-deep learning models, respectively (AUC: 0.93 vs. 0.83 and 0.97 vs. 0.83, respectively). This study shows that CT radiomics can predict MVI in HCC. The heterogeneity of the included studies precludes defining the role of CT radiomics in predicting MVI, and the methodology of radiomics research in HCC warrants standardization within the radiology community.
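For orientation, pooled sensitivity and specificity summarize per-study 2x2 counts. The sketch below uses naive sample-size weighting on entirely hypothetical counts; diagnostic meta-analyses such as this one typically fit bivariate random-effects models instead, so this is an illustration of what a pooled estimate summarizes, not the paper's method:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from one study's 2x2 counts."""
    return tp / (tp + fn), tn / (tn + fp)

def pooled(studies):
    """Naive sample-size-weighted pooling of sensitivity and specificity.

    Each study is a (tp, fn, tn, fp) tuple. Real diagnostic meta-analyses
    use bivariate random-effects models; this simple pooling is only for
    illustration.
    """
    total_pos = sum(tp + fn for tp, fn, _, _ in studies)
    total_neg = sum(tn + fp for _, _, tn, fp in studies)
    pooled_sens = sum(tp for tp, _, _, _ in studies) / total_pos
    pooled_spec = sum(tn for _, _, tn, _ in studies) / total_neg
    return pooled_sens, pooled_spec

# Hypothetical 2x2 counts for three studies
studies = [(40, 10, 45, 5), (30, 10, 35, 15), (25, 5, 28, 12)]
se, sp = pooled(studies)
print(f"pooled sensitivity={se:.2f}, specificity={sp:.2f}")
```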
Affiliation(s)
- Hai-Ying Zhou: Medical Imaging Key Laboratory of Sichuan Province, and Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Sichuan, China
- Jin-Mei Cheng: Medical Imaging Key Laboratory of Sichuan Province, and Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Sichuan, China
- Tian-Wu Chen: Medical Imaging Key Laboratory of Sichuan Province, and Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Sichuan, China; Department of Radiology, the Second Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Xiao-Ming Zhang: Medical Imaging Key Laboratory of Sichuan Province, and Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Sichuan, China
- Jing Ou: Medical Imaging Key Laboratory of Sichuan Province, and Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Sichuan, China
- Jin-Ming Cao: Department of Radiology, Nanchong Central Hospital/Second School of Clinical Medicine, North Sichuan Medical College, Sichuan, China
- Hong-Jun Li: Department of Radiology, Beijing YouAn Hospital, Capital Medical University, Beijing, China
21
Wang R, Xiong K, Wang Z, Wu D, Hu B, Ruan J, Sun C, Ma D, Li L, Liao S. Immunodiagnosis - the promise of personalized immunotherapy. Front Immunol 2023; 14:1216901. [PMID: 37520576] [PMCID: PMC10372420] [DOI: 10.3389/fimmu.2023.1216901]
Abstract
Immunotherapy has shown remarkable efficacy in several cancer types. However, the majority of patients do not benefit from it. Evaluating tumor heterogeneity and immune status before treatment is key to identifying patients who are more likely to respond to immunotherapy. Demographic characteristics (such as sex, age, and race), immune status, and specific biomarkers all contribute to the response to immunotherapy. A comprehensive immunodiagnostic model integrating all three dimensions through artificial intelligence would provide valuable information for predicting treatment response. Here, we coined the term "immunodiagnosis" to describe the blueprint of such a model. We illustrated the features that should be included in an immunodiagnostic model and the strategy for constructing it. Lastly, we discussed the incorporation of this immunodiagnosis model into clinical practice in the hope of improving the prognosis of tumor immunotherapy.
Affiliation(s)
- Renjie Wang: Department of Obstetrics and Gynecology, Cancer Biology Research Center, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Kairong Xiong: Department of Obstetrics and Gynecology, Cancer Biology Research Center, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Zhimin Wang: Division of Endocrinology and Metabolic Diseases, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Di Wu: Department of Obstetrics and Gynecology, Cancer Biology Research Center, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Bai Hu: Department of Obstetrics and Gynecology, Cancer Biology Research Center, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Jinghan Ruan: Department of Obstetrics and Gynecology, Cancer Biology Research Center, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Chaoyang Sun: Department of Obstetrics and Gynecology, Cancer Biology Research Center, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Ding Ma: Department of Obstetrics and Gynecology, Cancer Biology Research Center, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Li Li: Department of Obstetrics and Gynecology, Cancer Biology Research Center, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Shujie Liao: Department of Obstetrics and Gynecology, Cancer Biology Research Center, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
22
Nikulin P, Zschaeck S, Maus J, Cegla P, Lombardo E, Furth C, Kaźmierska J, Rogasch JMM, Holzgreve A, Albert NL, Ferentinos K, Strouthos I, Hajiyianni M, Marschner SN, Belka C, Landry G, Cholewinski W, Kotzerke J, Hofheinz F, van den Hoff J. A convolutional neural network with self-attention for fully automated metabolic tumor volume delineation of head and neck cancer in [18F]FDG PET/CT. Eur J Nucl Med Mol Imaging 2023; 50:2751-2766. [PMID: 37079128] [PMCID: PMC10317885] [DOI: 10.1007/s00259-023-06197-1]
Abstract
PURPOSE: PET-derived metabolic tumor volume (MTV) and total lesion glycolysis of the primary tumor are known to be prognostic of clinical outcome in head and neck cancer (HNC). Including lymph node metastases in the evaluation can further increase the prognostic value of PET, but accurate manual delineation and classification of all lesions is time-consuming and prone to interobserver variability. Our goal, therefore, was the development and evaluation of an automated tool for MTV delineation/classification of primary tumor and lymph node metastases in PET/CT investigations of HNC patients. METHODS: Automated lesion delineation was performed with a residual 3D U-Net convolutional neural network (CNN) incorporating a multi-head self-attention block. 698 [18F]FDG PET/CT scans from 3 different sites and 5 public databases were used for network training and testing. An external dataset of 181 [18F]FDG PET/CT scans from 2 additional sites was employed to assess the generalizability of the network. In these data, primary tumor and lymph node (LN) metastases were interactively delineated and labeled by two experienced physicians. Performance of the trained network models was assessed by 5-fold cross-validation in the main dataset and by pooling results from the 5 developed models in the external dataset. The Dice similarity coefficient (DSC) for individual delineation tasks and the primary tumor/metastasis classification accuracy were used as evaluation metrics. Additionally, a survival analysis using univariate Cox regression was performed, comparing the group separation achieved with manual and automated delineation, respectively. RESULTS: In the cross-validation experiment, delineation of all malignant lesions with the trained U-Net models achieved DSCs of 0.885, 0.805, and 0.870 for primary tumor, LN metastases, and the union of both, respectively. In external testing, the DSC reached 0.850, 0.724, and 0.823 for primary tumor, LN metastases, and the union of both, respectively. The voxel classification accuracy was 98.0% and 97.9% in cross-validation and external data, respectively. Univariate Cox analysis in cross-validation and in external testing revealed that manually and automatically derived total MTVs are both highly prognostic with respect to overall survival, yielding essentially identical hazard ratios (HR). CONCLUSION: To the best of our knowledge, this work presents the first CNN model for successful MTV delineation and lesion classification in HNC. In the vast majority of patients, the network performs satisfactory delineation and classification of primary tumor and lymph node metastases and only rarely requires more than minimal manual correction. It is thus able to massively facilitate study data evaluation in large patient groups and also has clear potential for supervised clinical application.
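The Dice similarity coefficient (DSC) reported above measures voxel overlap between automated and manual delineations. A minimal sketch with masks represented as sets of voxel coordinates (toy masks, not the study's data):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks.

    Masks are given as sets of voxel coordinates; DSC = 2|A∩B| / (|A| + |B|).
    Returns 1.0 for two empty masks by convention.
    """
    if not mask_a and not mask_b:
        return 1.0
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

# Toy 4x4x4 "automated" mask vs. a manual mask shifted by one voxel in x
auto = {(x, y, z) for x in range(4) for y in range(4) for z in range(4)}
manual = {(x, y, z) for x in range(1, 5) for y in range(4) for z in range(4)}
print(round(dice(auto, manual), 3))  # overlap of 48 of 64 voxels each -> 0.75
```

In practice masks are stored as image arrays and the same formula is applied to their nonzero voxels.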
Affiliation(s)
- Pavel Nikulin: Helmholtz-Zentrum Dresden-Rossendorf, PET Center, Institute of Radiopharmaceutical Cancer Research, Bautzner Landstrasse 400, 01328 Dresden, Germany
- Sebastian Zschaeck: Department of Radiation Oncology, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany; Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Berlin, Germany
- Jens Maus: Helmholtz-Zentrum Dresden-Rossendorf, PET Center, Institute of Radiopharmaceutical Cancer Research, Bautzner Landstrasse 400, 01328 Dresden, Germany
- Paulina Cegla: Department of Nuclear Medicine, Greater Poland Cancer Centre, Poznan, Poland
- Elia Lombardo: Department of Radiation Oncology, University Hospital, Ludwig-Maximilians-University (LMU) Munich, Munich, Germany
- Christian Furth: Department of Nuclear Medicine, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Joanna Kaźmierska: Electroradiology Department, University of Medical Sciences, Poznan, Poland; Radiotherapy Department II, Greater Poland Cancer Centre, Poznan, Poland
- Julian M M Rogasch: Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Berlin, Germany; Department of Nuclear Medicine, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Adrien Holzgreve: Department of Nuclear Medicine, University Hospital, Ludwig-Maximilians-University (LMU) Munich, Munich, Germany
- Nathalie L Albert: Department of Nuclear Medicine, University Hospital, Ludwig-Maximilians-University (LMU) Munich, Munich, Germany
- Konstantinos Ferentinos: Department of Radiation Oncology, German Oncology Center, European University Cyprus, Limassol, Cyprus
- Iosif Strouthos: Department of Radiation Oncology, German Oncology Center, European University Cyprus, Limassol, Cyprus
- Marina Hajiyianni: Department of Radiation Oncology, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany; Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Berlin, Germany
- Sebastian N Marschner: Department of Radiation Oncology, University Hospital, Ludwig-Maximilians-University (LMU) Munich, Munich, Germany
- Claus Belka: Department of Radiation Oncology, University Hospital, Ludwig-Maximilians-University (LMU) Munich, Munich, Germany; German Cancer Consortium (DKTK), Partner Site Munich, Munich, Germany
- Guillaume Landry: Department of Radiation Oncology, University Hospital, Ludwig-Maximilians-University (LMU) Munich, Munich, Germany
- Witold Cholewinski: Department of Nuclear Medicine, Greater Poland Cancer Centre, Poznan, Poland; Electroradiology Department, University of Medical Sciences, Poznan, Poland
- Jörg Kotzerke: Department of Nuclear Medicine, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Frank Hofheinz: Helmholtz-Zentrum Dresden-Rossendorf, PET Center, Institute of Radiopharmaceutical Cancer Research, Bautzner Landstrasse 400, 01328 Dresden, Germany
- Jörg van den Hoff: Helmholtz-Zentrum Dresden-Rossendorf, PET Center, Institute of Radiopharmaceutical Cancer Research, Bautzner Landstrasse 400, 01328 Dresden, Germany; Department of Nuclear Medicine, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
23
Shao Y, Dang Y, Cheng Y, Gui Y, Chen X, Chen T, Zeng Y, Tan L, Zhang J, Xiao M, Yan X, Lv K, Zhou Z. Predicting the Efficacy of Neoadjuvant Chemotherapy for Pancreatic Cancer Using Deep Learning of Contrast-Enhanced Ultrasound Videos. Diagnostics (Basel) 2023; 13:2183. [PMID: 37443577] [DOI: 10.3390/diagnostics13132183]
Abstract
Contrast-enhanced ultrasound (CEUS) is a promising imaging modality for predicting the efficacy of neoadjuvant chemotherapy for pancreatic cancer, a tumor with high mortality. In this study, we proposed a deep-learning-based strategy for analyzing CEUS videos to predict the prognosis of pancreatic cancer neoadjuvant chemotherapy. Pre-trained convolutional neural network (CNN) models were used for binary classification of the chemotherapy as effective or ineffective, with CEUS videos collected before chemotherapy as the model input and the efficacy after chemotherapy as the reference standard. We proposed two deep learning models. The first CNN model used videos of ultrasound (US) and CEUS (US+CEUS), while the second CNN model used only videos of selected regions of interest (ROIs) within CEUS (CEUS-ROI). A total of 38 patients with strict restriction of clinical factors were enrolled, with 76 original CEUS videos collected. After data augmentation, 760 and 720 videos were included for the two CNN models, respectively. 76-fold and 72-fold cross-validations were performed to validate the classification performance of the two CNN models. The areas under the curve were 0.892 and 0.908 for the two models. The accuracy, recall, precision, and F1 score were 0.829, 0.759, 0.786, and 0.772 for the first model, and 0.864, 0.930, 0.866, and 0.897 for the second model. A total of 38.2% and 40.3% of the original videos were correctly classified by the two deep learning models in cases where naked-eye assessment was inaccurate. This study is the first to demonstrate the feasibility and potential of deep learning models based on pre-chemotherapy CEUS videos in predicting the efficacy of neoadjuvant chemotherapy for pancreatic cancer.
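The accuracy, recall, precision, and F1 scores quoted above all derive from the binary confusion matrix. A minimal sketch on hypothetical labels (1 = chemotherapy effective), not the study's data:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, recall, precision, and F1 for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    recall = tp / (tp + fn) if tp + fn else 0.0       # sensitivity
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, recall, precision, f1

# Hypothetical fold of 8 videos: 3 true positives, 1 miss, 1 false alarm
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
acc, rec, prec, f1 = binary_metrics(y_true, y_pred)
```

In a k-fold cross-validation such metrics are computed per fold and then averaged.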
Affiliation(s)
- Yuming Shao: Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
- Yingnan Dang: Department of Biomedical Engineering, Faculty of Environment and Life, Beijing University of Technology, Beijing 100124, China
- Yuejuan Cheng: Department of Medical Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
- Yang Gui: Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
- Xueqi Chen: Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
- Tianjiao Chen: Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
- Yan Zeng: Department of Biomedical Engineering, Faculty of Environment and Life, Beijing University of Technology, Beijing 100124, China
- Li Tan: Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
- Jing Zhang: Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
- Mengsu Xiao: Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
- Xiaoyi Yan: Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
- Ke Lv: Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
- Zhuhuang Zhou: Department of Biomedical Engineering, Faculty of Environment and Life, Beijing University of Technology, Beijing 100124, China
24
Panico A, Gatta G, Salvia A, Grezia GD, Fico N, Cuccurullo V. Radiomics in Breast Imaging: Future Development. J Pers Med 2023; 13:jpm13050862. [PMID: 37241032] [DOI: 10.3390/jpm13050862]
Abstract
Breast cancer is the most commonly diagnosed non-skin cancer in women. Several risk factors relate to habits and heredity, and regular screening is essential to reduce mortality. Thanks to screening and increased awareness among women, most breast cancers are diagnosed at an early stage, increasing the chances of cure and survival. Mammography is currently the gold standard for breast cancer diagnosis. However, mammography has limited sensitivity: in breasts with a high density of glandular tissue, its ability to detect small masses is reduced. A lesion may not be particularly evident or may be hidden, and false negatives can occur when subtle details escape the radiologist's eye. The problem is therefore substantial, and it makes sense to look for techniques that can increase the quality of diagnosis. In recent years, innovative techniques based on artificial intelligence have been applied for this purpose, as they are able to see where the human eye cannot reach. In this paper, we examine the application of radiomics in mammography.
Affiliation(s)
- Alessandra Panico: Radiology Division, Department of Precision Medicine, Università della Campania "Luigi Vanvitelli", 80138 Naples, Italy
- Gianluca Gatta: Radiology Division, Department of Precision Medicine, Università della Campania "Luigi Vanvitelli", 80138 Naples, Italy
- Antonio Salvia: Radiology Division, Department of Precision Medicine, Università della Campania "Luigi Vanvitelli", 80138 Naples, Italy
- Noemi Fico: Department of Physics "Ettore Pancini", Università di Napoli Federico II, 80126 Naples, Italy
- Vincenzo Cuccurullo: Nuclear Medicine Unit, Department of Precision Medicine, Università della Campania "Luigi Vanvitelli", 80138 Naples, Italy
25
Jan YT, Tsai PS, Huang WH, Chou LY, Huang SC, Wang JZ, Lu PH, Lin DC, Yen CS, Teng JP, Mok GSP, Shih CT, Wu TH. Machine learning combined with radiomics and deep learning features extracted from CT images: a novel AI model to distinguish benign from malignant ovarian tumors. Insights Imaging 2023; 14:68. [PMID: 37093321] [PMCID: PMC10126170] [DOI: 10.1186/s13244-023-01412-x]
Abstract
BACKGROUND: To develop an artificial intelligence (AI) model with radiomics and deep learning (DL) features extracted from CT images to distinguish benign from malignant ovarian tumors. METHODS: We enrolled 149 patients with pathologically confirmed ovarian tumors. A total of 185 tumors were included and divided into training and testing sets in a 7:3 ratio. All tumors were manually segmented from preoperative contrast-enhanced CT images. CT image features were extracted using radiomics and DL. Five models with different combinations of feature sets were built. Benign and malignant tumors were classified using machine learning (ML) classifiers. The model performance was compared with that of five radiologists on the testing set. RESULTS: Among the five models, the best-performing model was the ensemble model combining radiomics, DL, and clinical feature sets. The model achieved an accuracy of 82%, specificity of 89%, and sensitivity of 68%. Compared with the junior radiologists' averaged results, the model had higher accuracy (82% vs 66%) and specificity (89% vs 65%) with comparable sensitivity (68% vs 67%). With the assistance of the model, the junior radiologists achieved higher average accuracy (81% vs 66%), specificity (80% vs 65%), and sensitivity (82% vs 67%), approaching the performance of the senior radiologists. CONCLUSIONS: We developed a CT-based AI model that can differentiate benign and malignant ovarian tumors with high accuracy and specificity. This model significantly improved the performance of less-experienced radiologists in ovarian tumor assessment and may potentially guide gynecologists in providing better therapeutic strategies for these patients.
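The AUC used to summarize such classifiers equals the probability that a randomly chosen malignant case receives a higher model score than a randomly chosen benign case. A rank-based sketch on hypothetical scores (ties counted as 0.5), not the study's model outputs:

```python
def auc_rank(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic.

    Counts, over all positive-negative pairs, how often the positive case
    scores higher than the negative case (ties contribute 0.5).
    """
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical scores: 1 = malignant, 0 = benign
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]
print(auc_rank(labels, scores))  # 8 of 9 pairs correctly ordered
```

The pairwise loop is O(n^2); production implementations (e.g. in scikit-learn) use sorting, but the probability being estimated is the same.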
Collapse
Affiliation(s)
- Ya-Ting Jan
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, 112, Taiwan
- Department of Radiology, MacKay Memorial Hospital, Taipei, Taiwan
- Department of Medicine, MacKay Medical College, New Taipei City, Taiwan
- MacKay Junior College of Medicine, Nursing and Management, New Taipei City, Taiwan
| | - Pei-Shan Tsai
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, 112, Taiwan
- Department of Radiology, MacKay Memorial Hospital, Taipei, Taiwan
- Department of Medicine, MacKay Medical College, New Taipei City, Taiwan
- MacKay Junior College of Medicine, Nursing and Management, New Taipei City, Taiwan
- Wen-Hui Huang: Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, 112, Taiwan; Department of Radiology, MacKay Memorial Hospital, Taipei, Taiwan; Department of Medicine, MacKay Medical College, New Taipei City, Taiwan; MacKay Junior College of Medicine, Nursing and Management, New Taipei City, Taiwan
- Ling-Ying Chou: Department of Radiology, MacKay Memorial Hospital, Taipei, Taiwan; Department of Medicine, MacKay Medical College, New Taipei City, Taiwan; MacKay Junior College of Medicine, Nursing and Management, New Taipei City, Taiwan
- Shih-Chieh Huang: Department of Radiology, MacKay Memorial Hospital, Taipei, Taiwan; Department of Medicine, MacKay Medical College, New Taipei City, Taiwan; MacKay Junior College of Medicine, Nursing and Management, New Taipei City, Taiwan
- Jing-Zhe Wang: Department of Radiology, MacKay Memorial Hospital, Taipei, Taiwan; Department of Medicine, MacKay Medical College, New Taipei City, Taiwan; MacKay Junior College of Medicine, Nursing and Management, New Taipei City, Taiwan
- Pei-Hsuan Lu: Department of Radiology, MacKay Memorial Hospital, Taipei, Taiwan; Department of Medicine, MacKay Medical College, New Taipei City, Taiwan; MacKay Junior College of Medicine, Nursing and Management, New Taipei City, Taiwan
- Dao-Chen Lin: Division of Endocrine and Metabolism, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan; School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Chun-Sheng Yen: Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, 112, Taiwan
- Ju-Ping Teng: Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, 112, Taiwan
- Greta S P Mok: Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Macau, China
- Cheng-Ting Shih: Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, 404, Taiwan
- Tung-Hsin Wu: Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, 112, Taiwan
26
Al-Tashi Q, Saad MB, Muneer A, Qureshi R, Mirjalili S, Sheshadri A, Le X, Vokes NI, Zhang J, Wu J. Machine Learning Models for the Identification of Prognostic and Predictive Cancer Biomarkers: A Systematic Review. Int J Mol Sci 2023; 24:7781. [PMID: 37175487 PMCID: PMC10178491 DOI: 10.3390/ijms24097781]
Abstract
The identification of biomarkers plays a crucial role in personalized medicine, both in clinical and research settings. However, distinguishing between predictive and prognostic biomarkers can be challenging because the two often overlap. A prognostic biomarker predicts the future outcome of cancer regardless of treatment, whereas a predictive biomarker predicts the effectiveness of a therapeutic intervention. Misclassifying a prognostic biomarker as predictive (or vice versa) can have serious financial and personal consequences for patients. To address this issue, various statistical and machine learning approaches have been developed. The aim of this study is to present an in-depth analysis of recent advancements, trends, challenges, and future prospects in biomarker identification. A systematic search was conducted using PubMed to identify relevant studies published between 2017 and 2023. The selected studies were analyzed to better understand the concept of biomarker identification, evaluate machine learning methods, assess the level of research activity, and highlight the application of these methods in cancer research and treatment. Furthermore, existing obstacles and concerns are discussed to identify prospective research areas. We believe that this review will serve as a valuable resource for researchers, providing insights into the methods and approaches used in biomarker discovery and identifying future research opportunities.
Affiliation(s)
- Qasem Al-Tashi: Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Maliazurina B. Saad: Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Amgad Muneer: Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Rizwan Qureshi: Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Seyedali Mirjalili: Centre for Artificial Intelligence Research and Optimization, Torrens University Australia, Fortitude Valley, Brisbane, QLD 4006, Australia; Yonsei Frontier Lab, Yonsei University, Seoul 03722, Republic of Korea; University Research and Innovation Center, Obuda University, 1034 Budapest, Hungary
- Ajay Sheshadri: Department of Pulmonary Medicine, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Xiuning Le: Department of Thoracic/Head and Neck Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Natalie I. Vokes: Department of Thoracic/Head and Neck Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Jianjun Zhang: Department of Thoracic/Head and Neck Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Jia Wu: Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; Department of Thoracic/Head and Neck Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
27
Kinoshita M, Ueda D, Matsumoto T, Shinkawa H, Yamamoto A, Shiba M, Okada T, Tani N, Tanaka S, Kimura K, Ohira G, Nishio K, Tauchi J, Kubo S, Ishizawa T. Deep Learning Model Based on Contrast-Enhanced Computed Tomography Imaging to Predict Postoperative Early Recurrence after the Curative Resection of a Solitary Hepatocellular Carcinoma. Cancers (Basel) 2023; 15:2140. [PMID: 37046801 PMCID: PMC10092973 DOI: 10.3390/cancers15072140]
Abstract
We aimed to develop a deep learning (DL) predictive model for postoperative early recurrence (within 2 years) of hepatocellular carcinoma (HCC) based on contrast-enhanced computed tomography (CECT) imaging. This study included 543 patients who underwent initial hepatectomy for HCC and were randomly classified into training, validation, and test datasets at a ratio of 8:1:1. Several clinical variables and arterial-phase CECT images were used to create predictive models for early recurrence. Artificial intelligence models were implemented using convolutional neural networks with a multilayer perceptron as the classifier. Furthermore, the Youden index was used to discriminate between high- and low-risk groups. The importance value of each explanatory variable for early recurrence was calculated using permutation importance. The DL predictive model for postoperative early recurrence achieved area under the curve values of 0.71 (test dataset) and 0.73 (validation dataset). Postoperative early recurrence incidences in the high- and low-risk groups were 73% and 30%, respectively (p = 0.0057). Permutation importance demonstrated that, among the explanatory variables, the one with the highest importance value was the CECT imaging analysis. We developed a DL model to predict postoperative early HCC recurrence. DL-based analysis is effective for determining treatment strategies in patients with HCC.
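Permutation importance, as applied in the study above, can be sketched in a few lines: shuffle one feature at a time and measure how much a performance metric drops. The toy model, feature data, and metric below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def permutation_importance(predict, X, y, metric, n_repeats=10, seed=0):
    """Permute each feature column and record the mean drop in the metric."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature-target link
            drops.append(baseline - metric(y, predict(Xp)))
        importances[j] = np.mean(drops)
    return importances

# Toy example: the label depends only on feature 0, so feature 0 should dominate.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)
accuracy = lambda t, p: np.mean(t == p)
predict = lambda X: (X[:, 0] > 0).astype(int)  # stand-in for a trained model
imp = permutation_importance(predict, X, y, accuracy)
```

Permuting an uninformative feature leaves the predictions unchanged, so its importance is zero; permuting the informative feature drops accuracy toward chance.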
Affiliation(s)
- Masahiko Kinoshita: Department of Hepato-Biliary-Pancreatic Surgery, Osaka Metropolitan University Graduate School of Medicine, 1-4-3 Asahimachi, Abeno-ku, Osaka 545-8585, Japan
- Daiju Ueda: Smart Life Science Lab, Center for Health Science Innovation, Osaka Metropolitan University, 1-4-3 Asahimachi, Abeno-ku, Osaka 545-8585, Japan; Department of Diagnostic and Interventional Radiology, Osaka Metropolitan University Graduate School of Medicine, 1-4-3 Asahimachi, Abeno-ku, Osaka 545-8585, Japan
- Toshimasa Matsumoto: Smart Life Science Lab, Center for Health Science Innovation, Osaka Metropolitan University, 1-4-3 Asahimachi, Abeno-ku, Osaka 545-8585, Japan; Department of Diagnostic and Interventional Radiology, Osaka Metropolitan University Graduate School of Medicine, 1-4-3 Asahimachi, Abeno-ku, Osaka 545-8585, Japan
- Hiroji Shinkawa: Department of Hepato-Biliary-Pancreatic Surgery, Osaka Metropolitan University Graduate School of Medicine, 1-4-3 Asahimachi, Abeno-ku, Osaka 545-8585, Japan
- Akira Yamamoto: Department of Diagnostic and Interventional Radiology, Osaka Metropolitan University Graduate School of Medicine, 1-4-3 Asahimachi, Abeno-ku, Osaka 545-8585, Japan
- Masatsugu Shiba: Smart Life Science Lab, Center for Health Science Innovation, Osaka Metropolitan University, 1-4-3 Asahimachi, Abeno-ku, Osaka 545-8585, Japan; Department of Biofunctional Analysis, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahimachi, Abeno-ku, Osaka 545-8585, Japan
- Takuma Okada: Department of Hepato-Biliary-Pancreatic Surgery, Osaka Metropolitan University Graduate School of Medicine, 1-4-3 Asahimachi, Abeno-ku, Osaka 545-8585, Japan
- Naoki Tani: Department of Hepato-Biliary-Pancreatic Surgery, Osaka Metropolitan University Graduate School of Medicine, 1-4-3 Asahimachi, Abeno-ku, Osaka 545-8585, Japan
- Shogo Tanaka: Department of Hepato-Biliary-Pancreatic Surgery, Osaka Metropolitan University Graduate School of Medicine, 1-4-3 Asahimachi, Abeno-ku, Osaka 545-8585, Japan
- Kenjiro Kimura: Department of Hepato-Biliary-Pancreatic Surgery, Osaka Metropolitan University Graduate School of Medicine, 1-4-3 Asahimachi, Abeno-ku, Osaka 545-8585, Japan
- Go Ohira: Department of Hepato-Biliary-Pancreatic Surgery, Osaka Metropolitan University Graduate School of Medicine, 1-4-3 Asahimachi, Abeno-ku, Osaka 545-8585, Japan
- Kohei Nishio: Department of Hepato-Biliary-Pancreatic Surgery, Osaka Metropolitan University Graduate School of Medicine, 1-4-3 Asahimachi, Abeno-ku, Osaka 545-8585, Japan
- Jun Tauchi: Department of Hepato-Biliary-Pancreatic Surgery, Osaka Metropolitan University Graduate School of Medicine, 1-4-3 Asahimachi, Abeno-ku, Osaka 545-8585, Japan
- Shoji Kubo: Department of Hepato-Biliary-Pancreatic Surgery, Osaka Metropolitan University Graduate School of Medicine, 1-4-3 Asahimachi, Abeno-ku, Osaka 545-8585, Japan
- Takeaki Ishizawa: Department of Hepato-Biliary-Pancreatic Surgery, Osaka Metropolitan University Graduate School of Medicine, 1-4-3 Asahimachi, Abeno-ku, Osaka 545-8585, Japan
28
Sheehy J, Rutledge H, Acharya UR, Loh HW, Gururajan R, Tao X, Zhou X, Li Y, Gurney T, Kondalsamy-Chennakesavan S. Gynecological cancer prognosis using machine learning techniques: A systematic review of last three decades (1990–2022). Artif Intell Med 2023; 139:102536. [PMID: 37100507 DOI: 10.1016/j.artmed.2023.102536]
Abstract
OBJECTIVE Many Computer Aided Prognostic (CAP) systems based on machine learning techniques have been proposed in the field of oncology. The objective of this systematic review was to assess and critically appraise the methodologies and approaches used in predicting the prognosis of gynecological cancers using CAPs. METHODS Electronic databases were used to systematically search for studies utilizing machine learning methods in gynecological cancers. Study risk of bias (ROB) and applicability were assessed using the PROBAST tool. 139 studies met the inclusion criteria, of which 71 predicted outcomes for ovarian cancer patients, 41 predicted outcomes for cervical cancer patients, 28 predicted outcomes for uterine cancer patients, and 2 predicted outcomes for gynecological malignancies broadly. RESULTS Random forest (22.30 %) and support vector machine (21.58 %) classifiers were used most commonly. Use of clinicopathological, genomic and radiomic data as predictors was observed in 48.20 %, 51.08 % and 17.27 % of studies, respectively, with some studies using multiple modalities. 21.58 % of studies were externally validated. Twenty-three individual studies compared ML and non-ML methods. Study quality was highly variable and methodologies, statistical reporting and outcome measures were inconsistent, preventing generalized commentary or meta-analysis of performance outcomes. CONCLUSION There is significant variability in model development when prognosticating gynecological malignancies with respect to variable selection, machine learning (ML) methods and endpoint selection. This heterogeneity prevents meta-analysis and conclusions regarding the superiority of ML methods. Furthermore, PROBAST-mediated ROB and applicability analysis demonstrates concern for the translatability of existing models. This review identifies ways that this can be improved upon in future works to develop robust, clinically translatable models within this promising field.
29
Deep learning for the ovarian lesion localization and discrimination between borderline and malignant ovarian tumors based on routine MR imaging. Sci Rep 2023; 13:2770. [PMID: 36797331 PMCID: PMC9935539 DOI: 10.1038/s41598-023-29814-3]
Abstract
To establish a deep learning (DL) model for differentiating borderline ovarian tumor (BOT) from epithelial ovarian cancer (EOC) on conventional MR imaging, we retrospectively enrolled 201 patients (102 pathologically proven BOTs and 99 EOCs) at the OB/GYN Hospital of Fudan University between January 2015 and December 2017. All imaging data were reviewed on a picture archiving and communication system (PACS) server. Both T1-weighted imaging (T1WI) and T2-weighted imaging (T2WI) MR images were used for lesion area determination. We trained a U-net++ model with deep supervision to segment the lesion area on MR images. The segmented regions were then fed into a DL-based classification model to categorize ovarian masses automatically. For ovarian lesion segmentation, the mean Dice similarity coefficient (DSC) of the trained U-net++ model on the testing dataset reached 0.73 ± 0.25, 0.76 ± 0.18, and 0.60 ± 0.24 on the sagittal T2WI, coronal T2WI, and axial T1WI images, respectively. The DL model based on the combined T2WI network could differentiate BOT from EOC with a significantly higher AUC of 0.87, an accuracy of 83.7%, a sensitivity of 75.0%, and a specificity of 87.5%. In comparison, the AUC achieved by radiologists was only 0.75, with an accuracy of 75.5%, a sensitivity of 96.0%, and a specificity of 54.2% (P < 0.001). The trained DL network model derived from routine MR imaging could help distinguish BOT from EOC with high accuracy, superior to radiologists' assessment.
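The Dice similarity coefficient (DSC) used above to score the U-net++ segmentations has a compact definition, 2|A∩B|/(|A|+|B|). A minimal sketch with hypothetical binary masks (not the study's data):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# A 4x4 toy "lesion" mask vs. a prediction shifted one pixel to the right.
target = np.zeros((4, 4), dtype=int)
target[1:3, 1:3] = 1          # 4 lesion pixels
pred = np.zeros((4, 4), dtype=int)
pred[1:3, 2:4] = 1            # overlaps 2 of them
dsc = dice_coefficient(pred, target)  # 2*2 / (4 + 4) = 0.5
```

The small `eps` term keeps the ratio defined when both masks are empty, a common convention in segmentation code.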
30
Liu L, Wan H, Liu L, Wang J, Tang Y, Cui S, Li Y. Deep Learning Provides a New Magnetic Resonance Imaging-Based Prognostic Biomarker for Recurrence Prediction in High-Grade Serous Ovarian Cancer. Diagnostics (Basel) 2023; 13:748. [PMID: 36832236 PMCID: PMC9954966 DOI: 10.3390/diagnostics13040748]
Abstract
This study aims to use a deep learning method to develop a signature extracted from preoperative magnetic resonance imaging (MRI) and to evaluate its ability as a non-invasive prognostic marker of recurrence risk in patients with advanced high-grade serous ovarian cancer (HGSOC). Our study comprised 185 patients with pathologically confirmed HGSOC, randomly assigned in a 5:3:2 ratio to a training cohort (n = 92), validation cohort 1 (n = 56), and validation cohort 2 (n = 37). We built a new deep learning network from 3839 preoperative MRI images (T2-weighted and diffusion-weighted images) to extract HGSOC prognostic indicators. A fusion model combining clinical and deep learning features was then developed to predict each patient's individual recurrence risk and 3-year recurrence likelihood. In the two validation cohorts, the concordance index of the fusion model was higher than that of both the deep learning model and the clinical feature model (0.752 and 0.813 vs. 0.625 and 0.600 vs. 0.505 and 0.501). Among the three models, the fusion model also had a higher AUC than either the deep learning model or the clinical model in validation cohorts 1 and 2 (AUC = 0.986 and 0.961 vs. 0.706 and 0.676, and 0.506 and 0.506, respectively); by the DeLong method, the differences were statistically significant (p < 0.05). Kaplan-Meier analysis distinguished two patient groups with high and low recurrence risk (p = 0.0008 and 0.0035, respectively). Deep learning may be a low-cost, non-invasive method for predicting the risk of advanced HGSOC recurrence. Deep learning based on multi-sequence MRI serves as a prognostic biomarker for advanced HGSOC, providing a preoperative model for predicting recurrence, and the fusion model offers a means of prognostic analysis that uses MRI data without requiring follow-up of the prognostic biomarker.
Affiliation(s)
- Lili Liu: Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, No. 1 Youyi Road, Yuzhong District, Chongqing 400016, China; Department of Radiology, Chongqing General Hospital, Chongqing 401120, China
- Haoming Wan: College of Computer and Information Science, Chongqing Normal University, Chongqing 400016, China
- Li Liu: Department of Radiology, The People’s Hospital of Yubei District of Chongqing, Chongqing 401120, China
- Jie Wang: Department of Nuclear Medicine, The First Affiliated Hospital of Chongqing Medical University, Chongqing 400016, China
- Yibo Tang: College of Computer and Information Science, Chongqing Normal University, Chongqing 400016, China
- Shaoguo Cui: College of Computer and Information Science, Chongqing Normal University, Chongqing 400016, China; Correspondence: (S.C.); (Y.L.)
- Yongmei Li: Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, No. 1 Youyi Road, Yuzhong District, Chongqing 400016, China; Correspondence: (S.C.); (Y.L.)
31
Lei R, Yu Y, Li Q, Yao Q, Wang J, Gao M, Wu Z, Ren W, Tan Y, Zhang B, Chen L, Lin Z, Yao H. Deep learning magnetic resonance imaging predicts platinum sensitivity in patients with epithelial ovarian cancer. Front Oncol 2022; 12:895177. [PMID: 36505880 PMCID: PMC9727155 DOI: 10.3389/fonc.2022.895177]
Abstract
Objective The aim of the study is to develop and validate a deep learning model to predict the platinum sensitivity of patients with epithelial ovarian cancer (EOC) based on contrast-enhanced magnetic resonance imaging (MRI). Methods In this retrospective study, 93 patients with EOC who received platinum-based chemotherapy (≥4 cycles) and debulking surgery at the Sun Yat-sen Memorial Hospital from January 2011 to January 2020 were enrolled and randomly assigned to the training and validation cohorts (2:1). Two different models were built based on either the primary tumor or whole volume of the abdomen as the volume of interest (VOI) within the same cohorts, and then a pre-trained convolutional neural network Med3D (Resnet 10 version) was transferred to automatically extract 1,024 features from two MRI sequences (CE-T1WI and T2WI) of each patient to predict platinum sensitivity. The performance of the two models was compared. Results A total of 93 women (mean age, 50.5 years ± 10.5 [standard deviation]) were evaluated (62 in the training cohort and 31 in the validation cohort). The AUCs of the whole abdomen model were 0.97 and 0.98 for the training and validation cohorts, respectively, which was better than the primary tumor model (AUCs of 0.88 and 0.81 in the training and validation cohorts, respectively). In k-fold cross-validation and stratified analysis, the whole abdomen model maintained a stable performance, and the decision function value generated by the model was a prognostic indicator that successfully discriminates high- and low-risk recurrence patients. Conclusion The non-manually segmented whole-abdomen deep learning model based on MRI exhibited satisfactory predictive performance for platinum sensitivity and may assist gynecologists in making optimal treatment decisions.
Affiliation(s)
- Ruilin Lei: Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Medical Research Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Department of Gynecological Oncology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Department of Medical Oncology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Yunfang Yu: Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Medical Research Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Department of Medical Oncology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Faculty of Medicine, Macau University of Science and Technology, Macao, Macao SAR, China
- Qingjian Li: Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Medical Research Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Department of Medical Oncology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Phase I Clinical Trial Centre, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Qinyue Yao: Cells Vision Medical Technology Inc., Guangzhou, China
- Jin Wang: Cells Vision Medical Technology Inc., Guangzhou, China
- Ming Gao: Department of Radiology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Zhuo Wu: Department of Radiology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Wei Ren: Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Medical Research Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Department of Medical Oncology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Phase I Clinical Trial Centre, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Yujie Tan: Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Medical Research Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Department of Medical Oncology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Phase I Clinical Trial Centre, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Bingzhong Zhang: Department of Gynecological Oncology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Liliang Chen: Cells Vision Medical Technology Inc., Guangzhou, China
- Zhongqiu Lin: Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Medical Research Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Department of Gynecological Oncology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Correspondence: Zhongqiu Lin; Herui Yao
- Herui Yao: Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Medical Research Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Department of Medical Oncology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Phase I Clinical Trial Centre, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Breast Tumor Centre, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Correspondence: Zhongqiu Lin; Herui Yao
32
Lee J, Liu C, Kim J, Chen Z, Sun Y, Rogers JR, Chung WK, Weng C. Deep learning for rare disease: A scoping review. J Biomed Inform 2022; 135:104227. [DOI: 10.1016/j.jbi.2022.104227]
33
Deep Learning Assessment for Mining Important Medical Image Features of Various Modalities. Diagnostics (Basel) 2022; 12:2333. [DOI: 10.3390/diagnostics12102333]
Abstract
Deep learning (DL) is a well-established pipeline for feature extraction in medical and nonmedical imaging tasks, such as object detection, segmentation, and classification. However, DL faces the issue of explainability, which prohibits reliable utilisation in everyday clinical practice. This study evaluates DL methods for their efficiency in revealing and suggesting potential image biomarkers. Eleven biomedical image datasets of various modalities are utilised, including SPECT, CT, photographs, microscopy, and X-ray. Seven state-of-the-art CNNs are employed and tuned to perform image classification on these datasets. The main conclusion of the research is that DL reveals potential biomarkers in several cases, especially when the models are trained from scratch in domains where low-level features such as shapes and edges are not enough to make decisions. Furthermore, in some cases, device acquisition variations slightly affect the performance of DL models.
34
Wu M, Zhao Y, Dong X, Jin Y, Cheng S, Zhang N, Xu S, Gu S, Wu Y, Yang J, Yao L, Wang Y. Artificial intelligence-based preoperative prediction system for diagnosis and prognosis in epithelial ovarian cancer: A multicenter study. Front Oncol 2022; 12:975703. [PMID: 36212430 PMCID: PMC9532858 DOI: 10.3389/fonc.2022.975703]
Abstract
Background Ovarian cancer (OC) is the most lethal gynecological malignancy, with limited early screening methods and poor prognosis. Artificial intelligence technology has made great breakthroughs in cancer diagnosis. Purpose We aim to develop a specific interpretable machine learning (ML) prediction model for the diagnosis and prognosis of epithelial ovarian cancer (EOC) based on a variety of biomarkers. Methods A total of 521 patients with EOC and 144 patients with benign gynecological diseases were enrolled, comprising derivation datasets and an external validation cohort. Predictions were generated by 9 supervised ML methods using 34 parameters. The reasoning behind the predictions of the best-performing ML model was interpreted using the SHapley Additive exPlanations (SHAP) algorithm. In addition, the prognosis of EOC was analyzed by unsupervised clustering and Kaplan–Meier (KM) survival analysis. Results ML technology was superior to conventional logistic regression in predicting EOC diagnosis, and XGBoost performed best on the external validation datasets. The AUC values for distinguishing EOC from benign disease and for determining pathological type, grade, and clinical stage were 0.958 (0.926-0.989), 0.792 (0.701-0.8834), 0.819 (0.687-0.950), and 0.68 (0.573-0.788), respectively. For CA-125-negative EOC patients, the AUC of the XGBoost model was 0.835 (0.763-0.907). We used unsupervised cluster analysis to identify EOC subgroups with significantly poorer overall survival (p-value <0.0001) and recurrence-free survival (p-value <0.0001). Conclusions Based on preoperative characteristics, we proved that ML algorithms can provide an acceptable diagnosis and prognosis prediction model for EOC patients. Meanwhile, SHAP analysis can improve the interpretability of ML models and contribute to precision medicine.
Affiliation(s)
- Meixuan Wu: Department of Obstetrics and Gynecology, Shanghai First Maternity and Infant Hospital, School of Medicine, Tongji University, Shanghai, China; Department of Obstetrics and Gynecology, Renji Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai, China
- Yaqian Zhao: Department of Obstetrics and Gynecology, Renji Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai, China
- Xuhui Dong: Obstetrics and Gynecology Hospital, Fudan University, Shanghai, China
- Yue Jin: Department of Obstetrics and Gynecology, Renji Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai, China
- Shanshan Cheng: Department of Obstetrics and Gynecology, Renji Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai, China
- Nan Zhang: Department of Obstetrics and Gynecology, Renji Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai, China
- Shilin Xu: Department of Obstetrics and Gynecology, Renji Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai, China
- Sijia Gu: Department of Obstetrics and Gynecology, Renji Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai, China
- Yongsong Wu: Department of Obstetrics and Gynecology, Renji Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai, China
- Jiani Yang: Department of Obstetrics and Gynecology, Shanghai First Maternity and Infant Hospital, School of Medicine, Tongji University, Shanghai, China; Correspondence: Yu Wang; Liangqing Yao; Jiani Yang
- Liangqing Yao: Obstetrics and Gynecology Hospital, Fudan University, Shanghai, China; Correspondence: Yu Wang; Liangqing Yao; Jiani Yang
- Yu Wang: Department of Obstetrics and Gynecology, Shanghai First Maternity and Infant Hospital, School of Medicine, Tongji University, Shanghai, China; Correspondence: Yu Wang; Liangqing Yao; Jiani Yang
35
Chen J, Li Y, Guo L, Zhou X, Zhu Y, He Q, Han H, Feng Q. Machine learning techniques for CT imaging diagnosis of novel coronavirus pneumonia: a review. Neural Comput Appl 2022; 36:1-19. [PMID: 36159188 PMCID: PMC9483435 DOI: 10.1007/s00521-022-07709-0]
Abstract
Since 2020, novel coronavirus pneumonia has been spreading rapidly around the world, bringing tremendous pressure on medical diagnosis and treatment in hospitals. Medical imaging methods, such as computed tomography (CT), play a crucial role in diagnosing and treating COVID-19. A large number of CT images are produced during CT-based medical diagnosis, and diagnostic judgement by human eyes over thousands of CT images is inefficient and time-consuming. Recently, to improve diagnostic efficiency, machine learning technology has been widely used in computer-aided diagnosis and treatment systems (i.e., CT imaging) to help doctors perform accurate analysis and provide effective diagnostic decision support. In this paper, we comprehensively review the machine learning methods frequently applied in CT imaging diagnosis of COVID-19, discussing machine learning-based applications across various aspects, including image acquisition and pre-processing, image segmentation, quantitative analysis and diagnosis, and disease follow-up and prognosis. Moreover, we discuss the limitations of current machine learning technology in the context of CT imaging computer-aided diagnosis.
Affiliation(s)
- Jingjing Chen
- Zhejiang University City College, Hangzhou, China
- Zhijiang College of Zhejiang University of Technology, Shaoxing, China
- Yixiao Li
- Faculty of Science, Zhejiang University of Technology, Hangzhou, China
- Lingling Guo
- College of Chemical Engineering, Zhejiang University of Technology, Hangzhou, China
- Xiaokang Zhou
- Faculty of Data Science, Shiga University, Hikone, Japan
- RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
- Yihan Zhu
- College of Chemical Engineering, Zhejiang University of Technology, Hangzhou, China
- Qingfeng He
- School of Pharmacy, Fudan University, Shanghai, China
- Haijun Han
- School of Medicine, Zhejiang University City College, Hangzhou, China
- Qilong Feng
- College of Chemical Engineering, Zhejiang University of Technology, Hangzhou, China
36
Zheng Y, Wang F, Zhang W, Li Y, Yang B, Yang X, Dong T. Preoperative CT-based deep learning model for predicting overall survival in patients with high-grade serous ovarian cancer. Front Oncol 2022; 12:986089. PMID: 36158664; PMCID: PMC9504666; DOI: 10.3389/fonc.2022.986089.
Abstract
Purpose: High-grade serous ovarian cancer (HGSOC) is aggressive and has a high mortality rate. A ViT (Vision Transformer)-based deep learning model was developed to predict overall survival in HGSOC patients from preoperative CT images. Methods: 734 patients with HGSOC, with preoperative CT images and clinical information, were retrospectively studied at Qilu Hospital of Shandong University. The whole dataset was randomly split into a training cohort (n = 550) and a validation cohort (n = 184). A ViT-based deep learning model was built to output an independent prognostic risk score; a nomogram was then established for predicting overall survival. Results: The ViT-based deep learning model showed promising results in predicting survival in the training cohort (AUC = 0.822) and the validation cohort (AUC = 0.823). Multivariate Cox regression analysis indicated that the image score was an independent prognostic factor in the training (HR = 9.03, 95% CI: 4.38-18.65) and validation cohorts (HR = 9.59, 95% CI: 4.20-21.92). Kaplan-Meier survival analysis indicated that the image score obtained from the model carries prognostic significance that refines the risk stratification of patients with HGSOC, and the integrative nomogram achieved a C-index of 0.74 in the training cohort and 0.72 in the validation cohort. Conclusions: Our model provides a non-invasive, simple, and feasible method for predicting overall survival in patients with HGSOC from preoperative CT images, which could support survival prognostication and facilitate clinical decision-making in the era of individualized and precision medicine.
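The C-index reported for the nomogram above is Harrell's concordance index: the fraction of comparable patient pairs in which the model assigns the higher risk to the patient who progresses earlier. As a rough illustration only (toy data, not the study's code), it can be computed in pure Python:

```python
from itertools import combinations

def concordance_index(times, events, risks):
    """Harrell's C-index over all comparable pairs: the earlier time must be
    an observed event (not censored); ties in risk count as half."""
    concordant, tied, comparable = 0, 0, 0
    for (t_i, e_i, r_i), (t_j, e_j, r_j) in combinations(zip(times, events, risks), 2):
        # order the pair so patient i has the shorter follow-up time
        if t_j < t_i:
            (t_i, e_i, r_i), (t_j, e_j, r_j) = (t_j, e_j, r_j), (t_i, e_i, r_i)
        if t_i < t_j and e_i:  # comparable only if the earlier time is an event
            comparable += 1
            if r_i > r_j:
                concordant += 1
            elif r_i == r_j:
                tied += 1
    return (concordant + 0.5 * tied) / comparable

times  = [5, 10, 12, 20]       # follow-up in months (hypothetical)
events = [1, 1, 0, 1]          # 1 = event observed, 0 = censored
risks  = [0.9, 0.6, 0.4, 0.2]  # model risk scores
print(concordance_index(times, events, risks))  # → 1.0 (perfectly concordant)
```

A C-index of 0.5 corresponds to random ranking, 1.0 to perfect ranking.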
Affiliation(s)
- Yawen Zheng
- Department of Obstetrics and Gynecology, Qilu Hospital of Shandong University, Jinan, China
- Fang Wang
- Department of Radiology, Qilu Hospital of Shandong University, Jinan, China
- Wenxia Zhang
- Department of Obstetrics and Gynecology, Qilu Hospital of Shandong University, Jinan, China
- Yongmei Li
- Operating Room, Qilu Hospital of Shandong University, Jinan, China
- Bo Yang
- Department of Radiology, Qilu Hospital of Shandong University, Jinan, China
- Department of Radiology, Qingzhou People’s Hospital, Qingzhou, China
- Xingsheng Yang
- Department of Obstetrics and Gynecology, Qilu Hospital of Shandong University, Jinan, China
- *Correspondence: Xingsheng Yang; Taotao Dong
- Taotao Dong
- Department of Obstetrics and Gynecology, Qilu Hospital of Shandong University, Jinan, China
- *Correspondence: Xingsheng Yang; Taotao Dong
37
Mao W, Chen C, Gao H, Xiong L, Lin Y. A deep learning-based automatic staging method for early endometrial cancer on MRI images. Front Physiol 2022; 13:974245. PMID: 36111158; PMCID: PMC9468895; DOI: 10.3389/fphys.2022.974245.
Abstract
Early treatment increases the 5-year survival rate of patients with endometrial cancer (EC). Deep learning (DL), a newer computer-aided diagnosis method widely used in medical image processing, can reduce misdiagnosis by radiologists. An automatic DL-based staging method for the early diagnosis of EC would benefit both radiologists and patients. To develop an effective and automatic prediction model for early EC diagnosis on magnetic resonance imaging (MRI) images, we retrospectively enrolled 117 patients (73 stage IA, 44 stage IB) with a pathological diagnosis of early EC confirmed by postoperative biopsy at our institution from 1 January 2018 to 31 December 2020. Axial T2-weighted images (T2WI), axial diffusion-weighted images (DWI), and sagittal T2WI from the 117 patients were classified into stage IA and stage IB according to each patient’s pathological diagnosis. First, a semantic segmentation model based on the U-net network was trained to segment the uterine region and the tumor region on the MRI images. Then, the area ratio of the tumor region to the uterine region (TUR) in the segmentation map was calculated. Finally, receiver operating characteristic curves (ROCs) were plotted from the TUR values and the pathological diagnoses in the test set to find the optimal staging threshold between stage IA and stage IB. In the test sets, the trained semantic segmentation model yielded average Dice similarity coefficients for the uterus and tumor of 0.958 and 0.917 on axial T2WI, 0.956 and 0.941 on axial DWI, and 0.972 and 0.910 on sagittal T2WI, respectively. With the pathological diagnosis as the gold standard, the classification model yielded an area under the curve (AUC) of 0.86, 0.85, and 0.94 on axial T2WI, axial DWI, and sagittal T2WI, respectively. In this study, an automatic DL-based segmentation model combined with ROC analysis of the TUR presents an effective early EC staging method on MRI images.
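The Dice similarity coefficient used above to score the U-net segmentations measures overlap between a predicted and a reference mask. A minimal sketch on flattened binary masks (toy data, not the study's implementation):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) between two
    binary masks given as flat sequences of 0/1 integers."""
    inter = sum(p & t for p, t in zip(pred, truth))  # overlapping foreground pixels
    size = sum(pred) + sum(truth)                    # total foreground in both masks
    return 2 * inter / size if size else 1.0         # both masks empty → perfect agreement

# toy 2x3 masks flattened row by row
pred  = [0, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 0, 1, 0]
print(round(dice_coefficient(pred, truth), 3))  # → 0.667
```

Dice is 1.0 for identical masks and 0.0 for disjoint ones, which is why the reported values near 0.95 indicate very close agreement with the manual delineations.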
Affiliation(s)
- Wei Mao
- School of Optoelectronic and Communication Engineering, Xiamen University of Technology, Xiamen, Fujian, China
- Chunxia Chen
- Department of Radiology, Fujian Maternity and Child Health Hospital, Fuzhou, Fujian, China
- Huachao Gao
- School of Optoelectronic and Communication Engineering, Xiamen University of Technology, Xiamen, Fujian, China
- Liu Xiong
- School of Optoelectronic and Communication Engineering, Xiamen University of Technology, Xiamen, Fujian, China
- Yongping Lin
- School of Optoelectronic and Communication Engineering, Xiamen University of Technology, Xiamen, Fujian, China
- *Correspondence: Yongping Lin
38
Yao F, Ding J, Lin F, Xu X, Jiang Q, Zhang L, Fu Y, Yang Y, Lan L. Nomogram based on ultrasound radiomics score and clinical variables for predicting histologic subtypes of epithelial ovarian cancer. Br J Radiol 2022; 95:20211332. PMID: 35612547; PMCID: PMC10162053; DOI: 10.1259/bjr.20211332.
Abstract
OBJECTIVE Ovarian cancer is one of the most common causes of death among gynecological tumors, and its most common type is epithelial ovarian cancer (EOC). This study aimed to establish a radiomics signature based on ultrasound images to predict the histopathological type of EOC. METHODS Overall, 265 patients with EOC who underwent preoperative ultrasonography and surgery were eligible. They were randomly sorted into two cohorts (training cohort : test cohort = 7:3). We outlined the region of interest of the tumor on the ultrasound images of the lesion and then extracted the radiomics features. Clinical, Rad-score, and combined models were constructed based on least absolute shrinkage and selection operator (LASSO) and logistic regression analysis. The performance of the models was evaluated using receiver operating characteristic curves and decision curve analysis (DCA). A nomogram was formulated based on the combined prediction model. RESULTS The combined model performed well in predicting EOC histopathological types, with an AUC of 0.83 (95% CI: 0.77-0.90) and 0.82 (95% CI: 0.71-0.93) in the training and test cohorts, respectively. The calibration curves showed that the nomogram estimation was consistent with the actual observations. DCA also verified the clinical value of the combined model. CONCLUSIONS The combined model containing clinical and ultrasound radiomics features showed excellent performance in distinguishing type I and type II EOC. ADVANCES IN KNOWLEDGE This study presents the first application of ultrasound radiomics features to distinguish EOC histopathological types. The proposed clinical-radiomics nomogram could help gynecologists non-invasively identify the EOC type before surgery.
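The AUC used to evaluate models like the one above has a simple probabilistic reading (the Mann-Whitney statistic): the chance that a randomly chosen positive case receives a higher score than a randomly chosen negative case. A minimal sketch with hypothetical labels and scores:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    P(score of a random positive > score of a random negative),
    counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0, 0]                 # e.g. 1 = type II EOC, 0 = type I (hypothetical)
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.1]   # model outputs (hypothetical)
print(round(auc(labels, scores), 3))  # → 0.917
```

This pairwise formulation is equivalent to integrating the ROC curve, which makes clear why AUC is insensitive to any monotone rescaling of the scores.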
Affiliation(s)
- Fei Yao
- Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Jie Ding
- Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Feng Lin
- Department of Gynecology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Xiaomin Xu
- Department of Ultrasound Imaging, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Qi Jiang
- School of First Clinical Medicine, Wenzhou Medical University, Wenzhou, China
- Li Zhang
- School of First Clinical Medicine, Wenzhou Medical University, Wenzhou, China
- Yanqi Fu
- School of First Clinical Medicine, Wenzhou Medical University, Wenzhou, China
- Yunjun Yang
- Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Li Lan
- Department of Ultrasound Imaging, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
39
Li H, Song Q, Gui D, Wang M, Min X, Li A. Reconstruction-assisted Feature Encoding Network for Histologic Subtype Classification of Non-small Cell Lung Cancer. IEEE J Biomed Health Inform 2022; 26:4563-4574. PMID: 35849680; DOI: 10.1109/jbhi.2022.3192010.
Abstract
Accurate histological subtype classification between adenocarcinoma (ADC) and squamous cell carcinoma (SCC) using computed tomography (CT) images is of great importance in helping clinicians determine treatment and therapy plans for non-small cell lung cancer (NSCLC) patients. Although current deep learning approaches have achieved promising progress in this field, they often struggle to capture efficient tumor representations due to inadequate training data and consequently show limited performance. In this study, we propose a novel and effective reconstruction-assisted feature encoding network (RAFENet) for histological subtype classification that leverages an auxiliary image-reconstruction task to provide extra guidance and regularization for enhanced tumor feature representations. Unlike existing reconstruction-assisted methods that directly use generalizable features obtained from a shared encoder for the primary task, RAFENet uses a dedicated task-aware encoding module to refine the generalizable features. Specifically, a cascade of cross-level non-local blocks is introduced to progressively refine generalizable features at different levels with the aid of lower-level task-specific information, learning multi-level task-specific features tailored to histological subtype classification. Moreover, in addition to the widely adopted pixel-wise reconstruction loss, we introduce a semantic consistency loss to explicitly supervise the training of RAFENet, combining a feature consistency loss and a prediction consistency loss to ensure semantic invariance during image reconstruction. Extensive experimental results show that RAFENet effectively addresses issues that existing reconstruction-based methods cannot resolve and consistently outperforms other state-of-the-art methods on both public and in-house NSCLC datasets.
40
Zhou J, Cao W, Wang L, Pan Z, Fu Y. Application of artificial intelligence in the diagnosis and prognostic prediction of ovarian cancer. Comput Biol Med 2022; 146:105608. PMID: 35584585; DOI: 10.1016/j.compbiomed.2022.105608.
Abstract
In recent years, the wide application of artificial intelligence (AI) has dramatically improved the work efficiency of clinicians and reduced their workload. This review provides a glance at the latest advances in AI-assisted diagnosis and prognostic prediction of ovarian cancer (OC). We performed an advanced search in PubMed and the IEEE/IET Electronic Library and included 39 articles in this review. A comprehensive and objective set of criteria was built to assess the reliability and quality of all studies from four aspects: the size of the datasets used for model development, research design, the division of training and test sets, and the type of quantitative performance indicators. This review analyzed the construction of AI models, including data pre-processing methods, feature selection techniques, and AI classifiers or algorithms. Additionally, we compared the performance of models built on different datasets, which may support researchers in further iteration and development of AI. Finally, we discussed the challenges and future directions for AI applications in medicine.
Affiliation(s)
- Jingyang Zhou
- Queen Mary School, Medical Department, Nanchang University, Nanchang, 330031, Jiangxi Province, PR China
- Weiwei Cao
- Queen Mary School, Medical Department, Nanchang University, Nanchang, 330031, Jiangxi Province, PR China
- Lan Wang
- Queen Mary School, Medical Department, Nanchang University, Nanchang, 330031, Jiangxi Province, PR China
- Zezheng Pan
- Faculty of Basic Medical Science, Nanchang University, Nanchang, 330006, Jiangxi Province, PR China
- Ying Fu
- The First Affiliated Hospital of Nanchang University, Nanchang, 330006, Jiangxi Province, PR China.
41
Combining Molecular, Imaging, and Clinical Data Analysis for Predicting Cancer Prognosis. Cancers (Basel) 2022; 14:3215. PMID: 35804988; PMCID: PMC9265023; DOI: 10.3390/cancers14133215.
Abstract
Simple Summary: The rise of Big Data, the widespread use of Machine Learning, and the falling cost of omics techniques have allowed the creation of more sophisticated and accurate models in biomedical research. This article presents the state-of-the-art predictive models of cancer prognosis that use multimodal data, considering clinical, molecular (omics and non-omics), and image data. The subject of study, the data modalities used, the data processing and modelling methods applied, the validation strategies involved, the integration strategies encompassed, and the evolution of prognostic predictive models are discussed. Finally, we discuss challenges and opportunities in this field of cancer research, with great potential impact on the clinical management of patients and, by extension, on the implementation of personalised and precision medicine.

Abstract: Cancer is one of the most detrimental diseases globally. Accordingly, the prognosis prediction of cancer patients has become a field of interest. In this review, we have gathered 43 state-of-the-art scientific papers published in the last 6 years that built cancer prognosis predictive models using multimodal data. We have defined the multimodality of data as four main types: clinical, anatomopathological, molecular, and medical imaging; and we have expanded on the information that each modality provides. The 43 studies were divided into three categories based on the modelling approach taken, and their characteristics were further discussed together with current issues and future trends. Research in this area has evolved from survival analysis through statistical modelling using mainly clinical and anatomopathological data to the prediction of cancer prognosis through a multi-faceted data-driven approach by the integration of complex, multimodal, and high-dimensional data containing multi-omics and medical imaging information and by applying Machine Learning and, more recently, Deep Learning techniques.
This review concludes that cancer prognosis predictive multimodal models are capable of better stratifying patients, which can improve clinical management and contribute to the implementation of personalised medicine as well as provide new and valuable knowledge on cancer biology and its progression.
42
Gong J, Zhang W, Huang W, Liao Y, Yin Y, Shi M, Qin W, Zhao L. CT-based radiomics nomogram may predict local recurrence-free survival in esophageal cancer patients receiving definitive chemoradiation or radiotherapy: a multicenter study. Radiother Oncol 2022; 174:8-15. PMID: 35750106; DOI: 10.1016/j.radonc.2022.06.010.
Abstract
BACKGROUND AND PURPOSE To establish and validate a contrast-enhanced computed tomography-based hybrid radiomics nomogram for predicting local recurrence-free survival (LRFS) in esophageal squamous cell cancer (ESCC) patients receiving definitive (chemo)radiotherapy in a multicenter setting. MATERIALS AND METHODS This retrospective study included 302 ESCC patients from Xijing Hospital receiving definitive (chemo)radiotherapy, who were randomly assigned to the training set (n = 201) and internal validation set (n = 101). A further 74 and 21 ESCC patients from two other centers were used as the external validation set (n = 95). A hybrid radiomics nomogram was established by integrating clinical factors, a radiomic signature, and a deep-learning signature in the training set, and was tested in the two validation sets. RESULTS The deep-learning signature showed better prognostic performance than the radiomic signature for predicting LRFS in the training (C-index: 0.73 vs 0.70), internal (C-index: 0.72 vs 0.64), and external validation sets (C-index: 0.72 vs 0.63), and could stratify patients into high- and low-risk groups with different prognoses (cut-off value: -0.06). Low-risk groups had better LRFS than high-risk groups in the training (p < 0.0001; 2-y LRFS 71.1% vs 33.0%), internal (p < 0.01; 2-y LRFS 58.8% vs 34.8%), and external validation sets (p < 0.0001; 2-y LRFS 61.9% vs 22.4%), respectively. The hybrid radiomics nomogram, established by integrating the radiomic signature and deep-learning signature with clinical factors including T stage and concurrent chemotherapy, outperformed any single factor or two-factor combination in the training (C-index: 0.82), internal (C-index: 0.78), and external validation sets (C-index: 0.76). Calibration curves showed good agreement. CONCLUSIONS Hybrid radiomics based on pretreatment contrast-enhanced computed tomography provides a promising way to predict local recurrence in ESCC patients receiving definitive (chemo)radiotherapy.
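The 2-year LRFS figures quoted above come from Kaplan-Meier estimates within each risk group: at each event time, the survival probability is multiplied by the fraction of at-risk patients who did not recur. A bare-bones sketch of the estimator with toy follow-up data (not the study's cohort):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimator: at each distinct event time t with d events
    among n patients still at risk, survival is multiplied by (1 - d/n).
    Returns a step curve as [(event_time, survival_probability), ...]."""
    order = sorted(zip(times, events))
    n = len(order)
    surv, curve, i = 1.0, [], 0
    while i < n:
        t = order[i][0]
        same = [e for tt, e in order if tt == t]  # all subjects with time t
        d = sum(same)                             # events at t (the rest are censored)
        if d:
            surv *= 1 - d / (n - i)               # n - i patients still at risk
            curve.append((t, surv))
        i += len(same)
    return curve

# toy cohort: months to local recurrence, 1 = recurrence observed, 0 = censored
print(kaplan_meier([6, 7, 10, 15, 19, 25], [1, 1, 0, 1, 0, 1]))
```

Censored patients (event = 0) leave the risk set without dropping the curve, which is what distinguishes Kaplan-Meier from a naive fraction-surviving calculation.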
Affiliation(s)
- Jie Gong
- Department of Radiation Oncology, Xijing Hospital, Air Force Medical University, Xi'an, China
- Wencheng Zhang
- Department of Radiation Oncology, Tianjin Medical University Cancer Institute & Hospital, National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, Tianjin, China
- Wei Huang
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Ye Liao
- Department of Radiation Oncology, Xijing Hospital, Air Force Medical University, Xi'an, China
- Yutian Yin
- Department of Radiation Oncology, Xijing Hospital, Air Force Medical University, Xi'an, China
- Mei Shi
- Department of Radiation Oncology, Xijing Hospital, Air Force Medical University, Xi'an, China
- Wei Qin
- Life Sciences Research Center, School of Life Sciences and Technology, Xidian University, Xi'an, China
- Lina Zhao
- Department of Radiation Oncology, Xijing Hospital, Air Force Medical University, Xi'an, China
43
Boehm KM, Aherne EA, Ellenson L, Nikolovski I, Alghamdi M, Vázquez-García I, Zamarin D, Long Roche K, Liu Y, Patel D, Aukerman A, Pasha A, Rose D, Selenica P, Causa Andrieu PI, Fong C, Capanu M, Reis-Filho JS, Vanguri R, Veeraraghavan H, Gangai N, Sosa R, Leung S, McPherson A, Gao J, Lakhman Y, Shah SP. Multimodal data integration using machine learning improves risk stratification of high-grade serous ovarian cancer. Nat Cancer 2022; 3:723-733. PMID: 35764743; PMCID: PMC9239907; DOI: 10.1038/s43018-022-00388-9.
Abstract
Patients with high-grade serous ovarian cancer have a poor prognosis and variable responses to treatment. Known prognostic factors for this disease include homologous recombination deficiency status, age, pathological stage and residual disease status after debulking surgery. Recent work has highlighted important prognostic information captured in computed tomography and histopathological specimens, which can be exploited through machine learning. However, little is known about the capacity of combining features from these disparate sources to improve prediction of treatment response. Here, we assembled a multimodal dataset of 444 patients with primarily late-stage high-grade serous ovarian cancer and discovered quantitative features, such as tumor nuclear size on staining with hematoxylin and eosin and omental texture on contrast-enhanced computed tomography, associated with prognosis. We found that these features contributed complementary prognostic information relative to one another and to clinicogenomic features. By fusing histopathological, radiologic and clinicogenomic machine-learning models, we demonstrate a promising path toward improved risk stratification of patients with cancer through multimodal data integration.
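The late-fusion idea described above can be sketched in its simplest form: bring each modality's risk scores to a common scale, then average them per patient. This is an illustrative toy under stated assumptions (z-score standardization, equal modality weights, hypothetical scores), not the authors' actual fusion procedure:

```python
from statistics import mean, pstdev

def zscore(scores):
    """Standardize a list of risk scores to zero mean, unit (population) SD.
    Assumes the scores are not all identical (sd > 0)."""
    mu, sd = mean(scores), pstdev(scores)
    return [(s - mu) / sd for s in scores]

def late_fusion(*modality_scores):
    """Naive late fusion: per-patient average of z-scored risk scores from
    each modality (e.g. histopathology, radiology, clinicogenomics)."""
    standardized = [zscore(s) for s in modality_scores]
    return [mean(patient) for patient in zip(*standardized)]

# hypothetical per-patient risk scores from three unimodal models
histo = [0.2, 0.5, 0.9]
radio = [0.1, 0.6, 0.8]
clin  = [0.3, 0.4, 0.7]
print(late_fusion(histo, radio, clin))
```

Standardizing before averaging keeps one modality's score range from dominating the fused risk; learned fusion weights are the natural next refinement.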
Affiliation(s)
- Kevin M Boehm
- Computational Oncology, Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Weill Cornell/Rockefeller/Sloan Kettering Tri-Institutional MD-PhD Program, New York, NY, USA
- Emily A Aherne
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Lora Ellenson
- Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Ines Nikolovski
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Mohammed Alghamdi
- Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Ignacio Vázquez-García
- Computational Oncology, Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Irving Institute for Cancer Dynamics, Columbia University, New York, NY, USA
- Dmitriy Zamarin
- Department of Medical Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Department of Medicine, Weill Cornell Medicine, New York, NY, USA
- Kara Long Roche
- Department of Surgical Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Ying Liu
- Department of Medical Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Department of Medicine, Weill Cornell Medicine, New York, NY, USA
- Druv Patel
- Computational Oncology, Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Andrew Aukerman
- Computational Oncology, Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Arfath Pasha
- Computational Oncology, Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Doori Rose
- Computational Oncology, Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Pier Selenica
- Human Oncology and Pathogenesis Program, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Chris Fong
- Computational Oncology, Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Marinela Capanu
- Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Jorge S Reis-Filho
- Human Oncology and Pathogenesis Program, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Rami Vanguri
- Computational Oncology, Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Harini Veeraraghavan
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Natalie Gangai
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Ramon Sosa
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Samantha Leung
- Computational Oncology, Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Andrew McPherson
- Computational Oncology, Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- JianJiong Gao
- Computational Oncology, Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Kravis Center for Molecular Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Yulia Lakhman
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA.
- Sohrab P Shah
- Computational Oncology, Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, NY, USA.
44
Sun C, Li B, Wei G, Qiu W, Li D, Li X, Liu X, Wei W, Wang S, Liu Z, Tian J, Liang L. Deep learning with whole slide images can improve the prognostic risk stratification with stage III colorectal cancer. Comput Methods Programs Biomed 2022; 221:106914. PMID: 35640390; DOI: 10.1016/j.cmpb.2022.106914.
Abstract
BACKGROUND AND OBJECTIVE Adjuvant chemotherapy is recommended as the standard treatment for stage III colorectal cancer (CRC) according to TNM stage. However, outcomes vary even among patients receiving similar treatments. We aimed to develop a prognostic signature to stratify outcomes and predict benefit from different chemotherapy regimens by analyzing whole slide images (WSI) using deep learning. METHODS We trained an unsupervised deep learning network (variational autoencoder and generative adversarial network) on 180,819 image tiles from the training set (147 patients) to develop a WSI signature for predicting disease-free survival (DFS) and overall survival (OS), and tested it in a validation set of 63 patients. An integrated nomogram was constructed to investigate the incremental value of the deep learning signature (DLS) over TNM stage for individualized outcome prediction. RESULTS The DLS was associated with DFS and OS in both the training and validation sets and proved to be an independent prognostic factor. Integrating the DLS and clinicopathologic factors showed better performance (C-index in the validation set: DFS, 0.748; OS, 0.794) than TNM stage. In patients whose DLS and clinical risk levels were inconsistent, the risk of relapse was reclassified. In the subgroup of patients treated with 3 months of chemotherapy, a high DLS was associated with worse DFS (hazard ratio: 3.622-7.728). CONCLUSIONS The proposed WSI-based DLS improved risk stratification and could help identify patients with stage III CRC who may benefit from a prolonged duration of chemotherapy.
Affiliation(s)
- Caixia Sun
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of and Engineering Medicine, Beihang University, Beijing 100191, China; CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
| | - Bingbing Li
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou 510515, Guangdong Province, China; Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou 510515, Guangdong Province, China
| | - Genxia Wei
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou 510515, Guangdong Province, China; Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou 510515, Guangdong Province, China
| | - Weihao Qiu
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou 510515, Guangdong Province, China; Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou 510515, Guangdong Province, China
| | - Danyi Li
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou 510515, Guangdong Province, China; Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou 510515, Guangdong Province, China
| | - Xiangzhao Li
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou 510515, Guangdong Province, China; Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou 510515, Guangdong Province, China
| | - Xiangyu Liu
- CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
| | - Wei Wei
- CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
| | - Shuo Wang
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Engineering Medicine, Beihang University, Beijing 100191, China; CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
| | - Zhenyu Liu
- CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100080, China.
| | - Jie Tian
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Engineering Medicine, Beihang University, Beijing 100191, China; CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China.
| | - Li Liang
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou 510515, Guangdong Province, China; Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou 510515, Guangdong Province, China.
| |
|
45
|
CT-Based Radiomics and Deep Learning for BRCA Mutation and Progression-Free Survival Prediction in Ovarian Cancer Using a Multicentric Dataset. Cancers (Basel) 2022; 14:cancers14112739. [PMID: 35681720 PMCID: PMC9179845 DOI: 10.3390/cancers14112739] [Citation(s) in RCA: 21] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Revised: 05/15/2022] [Accepted: 05/29/2022] [Indexed: 02/04/2023] Open
Abstract
PURPOSE To build predictive radiomic models for early relapse and BRCA mutation based on a multicentric database of high-grade serous ovarian cancer (HGSOC) and validate them in a test set drawn from different institutions. METHODS Preoperative CTs of patients with HGSOC treated at four referral centers were retrospectively acquired and manually segmented. Hand-crafted and deep radiomic features were extracted by dedicated software (MODDICOM) and a dedicated convolutional neural network (CNN), respectively. Features were selected with and without prior harmonization (ComBat), and models were built using different machine learning algorithms, including clinical variables. RESULTS We included 218 patients. Radiomic models showed low performance in predicting both BRCA mutation (AUC in the test set between 0.46 and 0.59) and 1-year relapse (AUC in the test set between 0.46 and 0.56); deep learning models demonstrated similar results (AUC in the test set of 0.48 for BRCA and 0.50 for relapse). The inclusion of clinical variables improved the performance of the radiomic models in predicting BRCA mutation (AUC in the test set of 0.74). CONCLUSIONS In our multicentric dataset, representative of a real-life clinical scenario, we could not find a good radiomic model for predicting PFS or BRCA mutational status with either traditional radiomics or deep learning, but combining clinical and radiomic models improved prediction of BRCA mutation. These findings highlight the need for standardization throughout the whole radiomic pipeline and for robust multicentric external validation of results.
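The abstract above applies ComBat harmonization to pool radiomic features from four centers. As an illustration of the underlying idea, a minimal location-scale sketch is given below; it aligns each site's per-feature mean and variance to the pooled statistics, whereas the full ComBat algorithm additionally shrinks the per-site estimates with empirical Bayes. The feature matrix and site labels are purely illustrative.

```python
import numpy as np

def harmonize_location_scale(features, sites):
    """Align each site's per-feature mean and std to the pooled values.

    A simplified stand-in for ComBat harmonization: ComBat fits the same
    location-scale model but regularizes the site parameters with
    empirical Bayes before applying the correction.
    """
    features = np.asarray(features, dtype=float)
    sites = np.asarray(sites)
    out = np.empty_like(features)
    grand_mean = features.mean(axis=0)
    grand_std = features.std(axis=0)
    for site in np.unique(sites):
        mask = sites == site
        site_mean = features[mask].mean(axis=0)
        site_std = features[mask].std(axis=0)
        site_std[site_std == 0] = 1.0  # guard against constant features
        out[mask] = (features[mask] - site_mean) / site_std * grand_std + grand_mean
    return out
```

After this transform, every site shares the pooled mean and variance for each feature, which is the batch-effect removal the abstract refers to (minus ComBat's shrinkage of noisy per-site estimates).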
|
46
|
Radiogenomics: A Valuable Tool for the Clinical Assessment and Research of Ovarian Cancer. J Comput Assist Tomogr 2022; 46:371-378. [DOI: 10.1097/rct.0000000000001279] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
47
|
Dense Convolutional Network and Its Application in Medical Image Analysis. BIOMED RESEARCH INTERNATIONAL 2022; 2022:2384830. [PMID: 35509707 PMCID: PMC9060995 DOI: 10.1155/2022/2384830] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/28/2022] [Accepted: 03/23/2022] [Indexed: 12/28/2022]
Abstract
The dense convolutional network (DenseNet) has been a prominent topic in deep-learning research in recent years and has found useful applications in medical image analysis. In this paper, DenseNet is reviewed from the following aspects. First, the basic principle of DenseNet is introduced; second, the development of DenseNet is summarized and analyzed from five aspects: widened DenseNet structures, lightweight DenseNet structures, dense units, dense connection modes, and attention mechanisms; finally, applications of DenseNet in medical image analysis are summarized from three aspects: pattern recognition, image segmentation, and object detection. The network structures of DenseNet are systematically summarized in this paper, providing a useful reference for the further research and development of DenseNet.
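The defining DenseNet property the survey above describes is dense connectivity: layer l receives the concatenation of the block input and all l-1 earlier layer outputs, so channel counts grow linearly with depth. A toy numpy sketch of that connectivity pattern (the shapes, growth rate, and random linear layers are illustrative, not a real DenseNet implementation):

```python
import numpy as np

def dense_block(x, num_layers, growth_rate, rng):
    """Toy dense block: each 'layer' maps the concatenation of all
    previously computed features to `growth_rate` new channels."""
    features = [x]  # list of (n, c_i) feature arrays
    for _ in range(num_layers):
        concat = np.concatenate(features, axis=1)
        weight = rng.normal(size=(concat.shape[1], growth_rate))
        features.append(np.maximum(concat @ weight, 0.0))  # linear + ReLU
    # block output: input channels plus num_layers * growth_rate new ones
    return np.concatenate(features, axis=1)

rng = np.random.default_rng(0)
out = dense_block(np.ones((4, 16)), num_layers=4, growth_rate=12, rng=rng)
```

With 16 input channels, 4 layers, and growth rate 12, the block output has 16 + 4 × 12 = 64 channels, which is exactly the linear channel growth that motivates DenseNet's "lightweight" variants mentioned above.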
|
48
|
Wang CW, Lee YC, Chang CC, Lin YJ, Liou YA, Hsu PC, Chang CC, Sai AKO, Wang CH, Chao TK. A Weakly Supervised Deep Learning Method for Guiding Ovarian Cancer Treatment and Identifying an Effective Biomarker. Cancers (Basel) 2022; 14:cancers14071651. [PMID: 35406422 PMCID: PMC8996991 DOI: 10.3390/cancers14071651] [Citation(s) in RCA: 21] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2022] [Revised: 03/14/2022] [Accepted: 03/18/2022] [Indexed: 02/04/2023] Open
Abstract
Ovarian cancer is a common malignant gynecological disease. Molecular targeted therapy, i.e., antiangiogenesis with bevacizumab, has been found effective in some patients with epithelial ovarian cancer (EOC). Although careful patient selection is essential, there are currently no biomarkers available for routine therapeutic use. To the authors' best knowledge, this is the first automated precision oncology framework to effectively identify and select EOC and peritoneal serous papillary carcinoma (PSPC) patients with a positive therapeutic effect. From March 2013 to January 2021, we assembled a database containing four kinds of immunohistochemical tissue samples (AIM2, C3, C5, and NLRP3) from patients diagnosed with EOC or PSPC and treated with bevacizumab in a hospital-based retrospective study. We developed a hybrid deep learning framework and weakly supervised deep learning models for each potential biomarker. The experimental results show that the proposed model in combination with AIM2 achieves high accuracy 0.92, recall 0.97, F-measure 0.93, and AUC 0.97 in the first experiment (66% training and 34% testing), and accuracy 0.86 ± 0.07, precision 0.9 ± 0.07, recall 0.85 ± 0.06, F-measure 0.87 ± 0.06, and AUC 0.91 ± 0.05 in the second experiment using five-fold cross-validation. Both Kaplan-Meier PFS analysis and Cox proportional hazards model analysis further confirmed that the proposed AIM2-DL model is able to distinguish patients gaining positive therapeutic effects with low cancer recurrence from patients with disease progression after treatment (p < 0.005).
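The second experiment above reports each metric as mean ± standard deviation over five cross-validation folds, which is the usual convention for k-fold result tables. A minimal sketch of that aggregation step (the fold scores here are made up, not the study's actual per-fold results):

```python
import statistics

def summarize_folds(scores):
    """Summarize per-fold cross-validation scores as (mean, population std),
    rounded to two decimals as in a typical 'k-fold' results table."""
    mean = statistics.fmean(scores)
    std = statistics.pstdev(scores)
    return round(mean, 2), round(std, 2)

fold_accuracy = [0.79, 0.84, 0.90, 0.93, 0.84]  # hypothetical five folds
```

`summarize_folds(fold_accuracy)` returns `(0.86, 0.05)` for these made-up folds. Note that some papers report the sample standard deviation (`statistics.stdev`) instead; abstracts rarely say which.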
Affiliation(s)
- Ching-Wei Wang
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei 106335, Taiwan; (C.-W.W.); (Y.-A.L.); (C.-C.C.); (A.-K.-O.S.)
- Graduate Institute of Applied Science and Technology, National Taiwan University of Science and Technology, Taipei 106335, Taiwan;
| | - Yu-Ching Lee
- Graduate Institute of Applied Science and Technology, National Taiwan University of Science and Technology, Taipei 106335, Taiwan;
| | - Cheng-Chang Chang
- Department of Gynecology and Obstetrics, Tri-Service General Hospital, Taipei 11490, Taiwan; (C.-C.C.); (P.-C.H.)
- Graduate Institute of Medical Sciences, National Defense Medical Center, Taipei 11490, Taiwan
| | - Yi-Jia Lin
- Department of Pathology, Tri-Service General Hospital, Taipei 11490, Taiwan;
- Institute of Pathology and Parasitology, National Defense Medical Center, Taipei 11490, Taiwan
| | - Yi-An Liou
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei 106335, Taiwan; (C.-W.W.); (Y.-A.L.); (C.-C.C.); (A.-K.-O.S.)
| | - Po-Chao Hsu
- Department of Gynecology and Obstetrics, Tri-Service General Hospital, Taipei 11490, Taiwan; (C.-C.C.); (P.-C.H.)
- Graduate Institute of Medical Sciences, National Defense Medical Center, Taipei 11490, Taiwan
| | - Chun-Chieh Chang
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei 106335, Taiwan; (C.-W.W.); (Y.-A.L.); (C.-C.C.); (A.-K.-O.S.)
| | - Aung-Kyaw-Oo Sai
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei 106335, Taiwan; (C.-W.W.); (Y.-A.L.); (C.-C.C.); (A.-K.-O.S.)
| | - Chih-Hung Wang
- Department of Otolaryngology-Head and Neck Surgery, Tri-Service General Hospital, Taipei 11490, Taiwan;
- Department of Otolaryngology-Head and Neck Surgery, National Defense Medical Center, Taipei 11490, Taiwan
| | - Tai-Kuang Chao
- Department of Pathology, Tri-Service General Hospital, Taipei 11490, Taiwan;
- Institute of Pathology and Parasitology, National Defense Medical Center, Taipei 11490, Taiwan
- Correspondence:
| |
|
49
|
Tong T, Gu J, Xu D, Song L, Zhao Q, Cheng F, Yuan Z, Tian S, Yang X, Tian J, Wang K, Jiang T. Deep learning radiomics based on contrast-enhanced ultrasound images for assisted diagnosis of pancreatic ductal adenocarcinoma and chronic pancreatitis. BMC Med 2022; 20:74. [PMID: 35232446 PMCID: PMC8889703 DOI: 10.1186/s12916-022-02258-8] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/25/2021] [Accepted: 01/13/2022] [Indexed: 02/08/2023] Open
Abstract
BACKGROUND Accurate and non-invasive diagnosis of pancreatic ductal adenocarcinoma (PDAC) and chronic pancreatitis (CP) can avoid unnecessary puncture and surgery. This study aimed to develop a deep learning radiomics (DLR) model based on contrast-enhanced ultrasound (CEUS) images to assist radiologists in identifying PDAC and CP. METHODS Patients with PDAC or CP were retrospectively enrolled from three hospitals. Detailed clinicopathological data were collected for each patient. Diagnoses were confirmed pathologically using biopsy or surgery in all patients. We developed an end-to-end DLR model for diagnosing PDAC and CP using CEUS images. To verify the clinical application value of the DLR model, two rounds of reader studies were performed. RESULTS A total of 558 patients with pancreatic lesions were enrolled and were split into the training cohort (n=351), internal validation cohort (n=109), and external validation cohorts 1 (n=50) and 2 (n=48). The DLR model achieved an area under the curve (AUC) of 0.986 (95% CI 0.975-0.994), 0.978 (95% CI 0.950-0.996), 0.967 (95% CI 0.917-1.000), and 0.953 (95% CI 0.877-1.000) in the training, internal validation, and external validation cohorts 1 and 2, respectively. The sensitivity and specificity of the DLR model were higher than or comparable to those of the five radiologists in the three validation cohorts. With the aid of the DLR model, the diagnostic sensitivity of all radiologists was further improved, with only a small or no decrease in specificity, in the three validation cohorts. CONCLUSIONS The findings of this study suggest that our DLR model can be used as an effective tool to assist radiologists in the diagnosis of PDAC and CP.
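The AUCs quoted above can be computed directly from predicted scores via the rank (Mann-Whitney) formulation: the probability that a randomly chosen positive case scores above a randomly chosen negative one, with ties counted as half. A self-contained sketch, using made-up labels and scores rather than any data from the study:

```python
def auc_from_scores(labels, scores):
    """AUC as the fraction of (positive, negative) pairs where the positive
    case is scored higher; tied scores contribute one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0]            # hypothetical ground truth
scores = [0.9, 0.8, 0.4, 0.5, 0.2]  # hypothetical model outputs
```

Here `auc_from_scores(labels, scores)` gives 5/6 ≈ 0.833, since five of the six positive-negative pairs are ranked correctly. The 95% confidence intervals reported in the abstract are typically obtained by bootstrapping this statistic over resampled patients.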
Affiliation(s)
- Tong Tong
- CAS Key Laboratory of Molecular Imaging, The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China
| | - Jionghui Gu
- CAS Key Laboratory of Molecular Imaging, The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- Department of Ultrasound, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, 310003, China
| | - Dong Xu
- The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), No.1 East Banshan Road, Gongshu District, Hangzhou, 310022, China
| | - Ling Song
- Department of Ultrasound, West China Hospital, Sichuan University, Chengdu, 610041, China
| | - Qiyu Zhao
- Department of Ultrasound, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, 310003, China
| | - Fang Cheng
- The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), No.1 East Banshan Road, Gongshu District, Hangzhou, 310022, China
| | - Zhiqiang Yuan
- Department of Ultrasound, West China Hospital, Sichuan University, Chengdu, 610041, China
| | - Shuyuan Tian
- Department of Ultrasound, Tongde Hospital of Zhejiang Province, Hangzhou, 310012, China
| | - Xin Yang
- CAS Key Laboratory of Molecular Imaging, The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China
| | - Jie Tian
- CAS Key Laboratory of Molecular Imaging, The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China.
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China.
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Engineering Medicine, Beihang University, Beijing, 100191, China.
| | - Kun Wang
- CAS Key Laboratory of Molecular Imaging, The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China.
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China.
| | - Tian'an Jiang
- Department of Ultrasound, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, 310003, China.
- Zhejiang Provincial Key Laboratory of Pulsed Electric Field Technology for Medical Transformation, Hangzhou, 310003, China.
| |
|
50
|
Mikdadi D, O'Connell KA, Meacham PJ, Dugan MA, Ojiere MO, Carlson TB, Klenk JA. Applications of artificial intelligence (AI) in ovarian cancer, pancreatic cancer, and image biomarker discovery. Cancer Biomark 2022; 33:173-184. [PMID: 35213360 DOI: 10.3233/cbm-210301] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
BACKGROUND Artificial intelligence (AI), including machine learning (ML) and deep learning, has the potential to revolutionize biomedical research. Defined as the ability of machines executing trained algorithms to "mimic" human intelligence, AI methods are increasingly deployed for biomarker discovery. OBJECTIVE We detail the advancements and challenges in the use of AI for biomarker discovery in ovarian and pancreatic cancer. We also provide an overview of associated regulatory and ethical considerations. METHODS We conducted a literature review using PubMed and Google Scholar to survey the published findings on the use of AI in ovarian cancer, pancreatic cancer, and cancer biomarkers. RESULTS Most AI models associated with ovarian and pancreatic cancer have yet to be applied in clinical settings, and imaging data in many studies are not publicly available. Low disease prevalence and asymptomatic disease limit the data availability required for AI models. The FDA has yet to qualify imaging biomarkers as effective diagnostic tools for these cancers. CONCLUSIONS Challenges associated with data availability, quality, and bias, as well as AI transparency and explainability, will likely persist. Explainable and trustworthy AI efforts will need to continue so that the research community can better understand and construct effective models for biomarker discovery in rare cancers.
Affiliation(s)
- Dina Mikdadi
- Biomedical Data Science Lab, Deloitte Consulting LLP, Arlington, VA, USA
| | - Kyle A O'Connell
- Biomedical Data Science Lab, Deloitte Consulting LLP, Arlington, VA, USA; Department of Biology, George Washington University, Washington, DC, USA
| | - Philip J Meacham
- Biomedical Data Science Lab, Deloitte Consulting LLP, Arlington, VA, USA
| | - Madeleine A Dugan
- Biomedical Data Science Lab, Deloitte Consulting LLP, Arlington, VA, USA
| | - Michael O Ojiere
- Biomedical Data Science Lab, Deloitte Consulting LLP, Arlington, VA, USA
| | - Thaddeus B Carlson
- Biomedical Data Science Lab, Deloitte Consulting LLP, Arlington, VA, USA
| | - Juergen A Klenk
- Biomedical Data Science Lab, Deloitte Consulting LLP, Arlington, VA, USA
| |
|