1
Chen J, Lin F, Dai Z, Chen Y, Fan Y, Li A, Zhao C. Survival prediction in diffuse large B-cell lymphoma patients: multimodal PET/CT deep features radiomic model utilizing automated machine learning. J Cancer Res Clin Oncol 2024; 150:452. PMID: 39382750; PMCID: PMC11464575; DOI: 10.1007/s00432-024-05905-0.
Abstract
PURPOSE We sought to develop an effective combined model for predicting the survival of patients with diffuse large B-cell lymphoma (DLBCL) based on a multimodal PET-CT deep features radiomics signature (DFR-signature). METHODS 369 DLBCL patients from two medical centers were included in this study. Their PET and CT images were fused into multimodal PET-CT images using a deep learning fusion network. Deep features were then extracted from the fused PET-CT images, and the DFR-signature was constructed with an automated machine learning (AutoML) model. Combined with clinical indexes selected by Cox regression analysis, we built a combined model to predict progression-free survival (PFS) and overall survival (OS). The combined model was evaluated using the concordance index (C-index) and the time-dependent area under the ROC curve (tdAUC). RESULTS A total of 1000 deep features were extracted to build the DFR-signature. Beyond the DFR-signature alone, the combined model integrating metabolic and clinical factors performed best for both PFS and OS. For PFS, the C-indices were 0.784 and 0.739 in the training and internal validation cohorts, respectively. For OS, the C-indices were 0.831 and 0.782 in the training and internal validation cohorts. CONCLUSIONS The DFR-signature constructed from multimodal images improved the accuracy of prognosis classification for DLBCL patients. Moreover, the DFR-signature combined with the NCCN-IPI exhibited excellent potential for risk stratification of DLBCL patients.
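The concordance index (C-index) used to evaluate the combined model measures how often the predicted risk ordering of a patient pair matches the observed survival ordering. A minimal pure-Python sketch with toy data, not the authors' AutoML/Cox pipeline:

```python
# Minimal concordance index (C-index) for a survival risk score.
# Illustrative sketch only; patient numbers and scores are made up.

def concordance_index(times, events, risk_scores):
    """Fraction of comparable patient pairs whose predicted risk ordering
    matches their observed survival ordering (event = 1 means observed)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if patient i had an observed event
            # strictly before patient j's time.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5  # ties get half credit
    return concordant / comparable

# Toy example: a higher score should mean an earlier event.
times = [5, 10, 14, 20]        # months to event or censoring
events = [1, 1, 0, 1]          # 1 = progression/death observed
scores = [0.9, 0.6, 0.4, 0.2]  # model risk scores
print(concordance_index(times, events, scores))  # 1.0 (perfectly ordered)
```

A C-index of 0.5 corresponds to random ordering; the 0.78-0.83 values reported above indicate strong discrimination.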
Affiliation(s)
- Jianxin Chen, Fengyi Lin, Zhaoyan Dai, Yu Chen, Yawen Fan, Ang Li, Chenyu Zhao
- The Key Laboratory of Broadband Wireless Communication and Sensor Network Technology (Ministry of Education), Nanjing University of Posts and Telecommunications, Nanjing, China
2
Maniaci A, Lavalle S, Gagliano C, Lentini M, Masiello E, Parisi F, Iannella G, Cilia ND, Salerno V, Cusumano G, La Via L. The Integration of Radiomics and Artificial Intelligence in Modern Medicine. Life (Basel) 2024; 14:1248. PMID: 39459547; PMCID: PMC11508875; DOI: 10.3390/life14101248.
Abstract
With profound effects on patient care, artificial intelligence (AI) has become a disruptive force in contemporary medicine through radiomics. Radiomics, the quantitative extraction and analysis of features from medical images, offers imaging biomarkers that can reveal important information about the nature of a disease, how well a patient responds to treatment, and patient outcomes. AI techniques in radiomics, such as machine learning and deep learning, have made it possible to create sophisticated computer-aided diagnostic (CAD) systems, predictive models, and decision support tools. This review examines the many uses of AI in radiomics, encompassing quantitative feature extraction from medical images; machine learning, deep learning, and CAD approaches in radiomics; and the effect of radiomics and AI on improving workflow automation and efficiency and on optimizing clinical trials and patient stratification. It also covers improvements to predictive modeling brought by machine learning, multimodal integration and enhanced deep learning architectures, and the regulatory and clinical adoption considerations for radiomics-based CAD. Particular emphasis is given to the enormous potential for enhancing diagnostic precision, treatment personalization, and overall patient outcomes.
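The quantitative feature extraction at the heart of radiomics can be illustrated with a few first-order intensity statistics; the feature names and the 16-bin histogram below are illustrative choices, not any specific toolkit's API:

```python
import numpy as np

# Hedged sketch of first-order radiomic feature extraction from a 2D
# region of interest (ROI). Real pipelines (e.g. segmentation, binning
# rules, higher-order texture features) are far richer than this.

def first_order_features(roi):
    """Compute simple intensity statistics over an ROI array."""
    roi = np.asarray(roi, dtype=float).ravel()
    hist, _ = np.histogram(roi, bins=16)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before log
    return {
        "mean": float(roi.mean()),
        "std": float(roi.std()),
        "skewness_num": float(((roi - roi.mean()) ** 3).mean()),  # unnormalized
        "entropy": float(-(p * np.log2(p)).sum()),  # histogram entropy
    }

rng = np.random.default_rng(0)
roi = rng.normal(100, 15, size=(32, 32))  # synthetic intensity patch
feats = first_order_features(roi)
print(sorted(feats))  # ['entropy', 'mean', 'skewness_num', 'std']
```

In a full radiomics workflow, hundreds of such features per lesion feed the downstream machine-learning models the review discusses.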
Affiliation(s)
- Antonino Maniaci
- Faculty of Medicine and Surgery, University of Enna “Kore”, 94100 Enna, Italy
- Salvatore Lavalle
- Faculty of Medicine and Surgery, University of Enna “Kore”, 94100 Enna, Italy
- Caterina Gagliano
- Faculty of Medicine and Surgery, University of Enna “Kore”, 94100 Enna, Italy
- Mario Lentini
- ASP Ragusa, Hospital Giovanni Paolo II, 97100 Ragusa, Italy
- Edoardo Masiello
- Radiology Unit, Department Clinical and Experimental, Experimental Imaging Center, Vita-Salute San Raffaele University, 20132 Milan, Italy
- Federica Parisi
- Department of Medical and Surgical Sciences and Advanced Technologies “GF Ingrassia”, ENT Section, University of Catania, Via S. Sofia, 78, 95125 Catania, Italy
- Giannicola Iannella
- Department of ‘Organi di Senso’, University “Sapienza”, Viale dell’Università, 33, 00185 Rome, Italy
- Nicole Dalia Cilia
- Department of Computer Engineering, University of Enna “Kore”, 94100 Enna, Italy
- Institute for Computing and Information Sciences, Radboud University Nijmegen, 6544 Nijmegen, The Netherlands
- Valerio Salerno
- Department of Engineering and Architecture, Kore University of Enna, 94100 Enna, Italy
- Giacomo Cusumano
- University Hospital Policlinico “G. Rodolico—San Marco”, 95123 Catania, Italy
- Department of General Surgery and Medical-Surgical Specialties, University of Catania, 95123 Catania, Italy
- Luigi La Via
- University Hospital Policlinico “G. Rodolico—San Marco”, 95123 Catania, Italy
3
Coppes RP, van Dijk LV. Future of Team-based Basic and Translational Science in Radiation Oncology. Semin Radiat Oncol 2024; 34:370-378. PMID: 39271272; DOI: 10.1016/j.semradonc.2024.07.007.
Abstract
To further optimize radiotherapy, treatment needs to be personalized toward individual patients' risk profiles, dissecting both patient-specific tumor and normal tissue responses to multimodality treatments. Novel developments in radiobiology, using in vitro patient-specific complex 3D tissue-resembling models and multiomics approaches at the spatial single-cell level, may provide unprecedented insight into the radiation responses of tumors and normal tissue. Here, we describe the necessary team effort, including all disciplines in radiation oncology, to integrate such data into clinical prediction models and link them with the relatively "big data" of clinical practice, allowing accurate patient stratification for personalized treatment approaches.
Affiliation(s)
- R P Coppes
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Department of Biomedical Sciences, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- L V van Dijk
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
4
Fu X, Chen C, Chen Z, Yu J, Wang L. Radiogenomics based survival prediction of small-sample glioblastoma patients by multi-task DFFSP model. BIOMED ENG-BIOMED TE 2024:bmt-2022-0221. PMID: 39241784; DOI: 10.1515/bmt-2022-0221.
Abstract
In this paper, the multi-task dense-feature-fusion survival prediction (DFFSP) model is proposed to predict three-year survival for glioblastoma (GBM) patients based on radiogenomics data. The contrast-enhanced T1-weighted (T1w) image, the T2-weighted (T2w) image, and copy number variation (CNV) data are the inputs of the model's three branches. As its backbone, the model uses two image feature extraction modules consisting of residual blocks and one dense feature fusion module for multi-scale fusion of T1w and T2w image features. A gene feature extraction module adaptively weights CNV fragments. In addition, a transfer learning module is introduced to address the small-sample problem, and an image reconstruction module makes the model anatomy-aware under a multi-task framework. 256 sample pairs (T1w and corresponding T2w MRI slices) and 187 CNVs from 74 patients were used. The experimental results show that the proposed model predicts three-year survival of GBM patients with an accuracy of 89.1%, which is 3.2% and 4.7% higher than the model without genes and the model using a late fusion strategy, respectively. The model can also classify patients into high-risk and low-risk groups, which will effectively assist doctors in diagnosing GBM patients.
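The three-branch idea above (two image branches fused, plus adaptively weighted CNV fragments) can be sketched schematically; the vector shapes and softmax gating here are illustrative assumptions, not the DFFSP architecture itself:

```python
import numpy as np

# Schematic sketch of a three-branch fusion: T1w and T2w feature vectors
# are concatenated as a stand-in for dense feature fusion, and CNV
# fragments are adaptively weighted by a learned gate (here, a softmax
# over hypothetical gate logits) before joining the image features.

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

def fuse(t1w_feat, t2w_feat, cnv, gate_logits):
    image_feat = np.concatenate([t1w_feat, t2w_feat])   # image fusion stand-in
    cnv_weighted = softmax(gate_logits) * cnv           # adaptive CNV weighting
    return np.concatenate([image_feat, cnv_weighted])   # joint representation

t1w = np.ones(4)
t2w = np.zeros(4)
cnv = np.array([1.0, 2.0, 3.0])
fused = fuse(t1w, t2w, cnv, gate_logits=np.zeros(3))    # uniform gate = 1/3 each
print(fused.shape)  # (11,)
```

In the actual model the fusion and gating are learned end-to-end inside a CNN; this sketch only shows how the three inputs meet in one representation.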
Affiliation(s)
- Xue Fu, Chunxiao Chen, Zhiying Chen, Jie Yu, Liang Wang
- Department of Biomedical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
5
Iqbal MS, Belal Bin Heyat M, Parveen S, Ammar Bin Hayat M, Roshanzamir M, Alizadehsani R, Akhtar F, Sayeed E, Hussain S, Hussein HS, Sawan M. Progress and trends in neurological disorders research based on deep learning. Comput Med Imaging Graph 2024; 116:102400. PMID: 38851079; DOI: 10.1016/j.compmedimag.2024.102400.
Abstract
In recent years, deep learning (DL) has emerged as a powerful tool in clinical imaging, offering unprecedented opportunities for the diagnosis and treatment of neurological disorders (NDs). This comprehensive review explores the multifaceted role of DL techniques in leveraging vast datasets to advance our understanding of NDs and improve clinical outcomes. Beginning with a systematic literature review, we examine the use of DL, focusing in particular on multimodal neuroimaging data analysis, a domain that has witnessed rapid progress and garnered significant scientific interest. Our study categorizes and critically analyses numerous DL models, including convolutional neural networks (CNNs), LSTM-CNN, GAN, and VGG, to understand their performance across different types of neurological diseases. Through this analysis, we identify key benchmarks and datasets used for training and testing DL models, shedding light on the challenges and opportunities in clinical neuroimaging research. Moreover, we discuss the effectiveness of DL in real-world clinical scenarios, emphasizing its potential to revolutionize ND diagnosis and therapy. By synthesizing the existing literature and describing future directions, this review not only provides insights into the current state of DL applications in ND analysis but also paves the way for the development of more efficient and accessible DL techniques. Finally, our findings underscore the transformative impact of DL in reshaping the landscape of clinical neuroimaging, offering hope for enhanced patient care and groundbreaking discoveries in the field of neurology. This review is beneficial for neuropathologists and new researchers in this field.
Affiliation(s)
- Muhammad Shahid Iqbal
- Department of Computer Science and Information Technology, Women University of Azad Jammu & Kashmir, Bagh, Pakistan
- Md Belal Bin Heyat
- CenBRAIN Neurotech Center of Excellence, School of Engineering, Westlake University, Hangzhou, Zhejiang, China
- Saba Parveen
- College of Electronics and Information Engineering, Shenzhen University, Shenzhen 518060, China
- Mohamad Roshanzamir
- Department of Computer Engineering, Faculty of Engineering, Fasa University, Fasa, Iran
- Roohallah Alizadehsani
- Institute for Intelligent Systems Research and Innovation, Deakin University, VIC 3216, Australia
- Faijan Akhtar
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Eram Sayeed
- Kisan Inter College, Dhaurahara, Kushinagar, India
- Sadiq Hussain
- Department of Examination, Dibrugarh University, Assam 786004, India
- Hany S Hussein
- Electrical Engineering Department, Faculty of Engineering, King Khalid University, Abha 61411, Saudi Arabia
- Electrical Engineering Department, Faculty of Engineering, Aswan University, Aswan 81528, Egypt
- Mohamad Sawan
- CenBRAIN Neurotech Center of Excellence, School of Engineering, Westlake University, Hangzhou, Zhejiang, China
6
Duan W, Wang Z, Ma Z, Zheng H, Li Y, Pei D, Wang M, Qiu Y, Duan M, Yan D, Ji Y, Cheng J, Liu X, Zhang Z, Yan J. Radiomic profiling for insular diffuse glioma stratification with distinct biologic pathway activities. Cancer Sci 2024; 115:1261-1272. PMID: 38279197; PMCID: PMC11007007; DOI: 10.1111/cas.16089.
Abstract
Current literature emphasizes surgical complexities and customized resection for managing insular gliomas; however, radiogenomic investigations into prognostic radiomic traits remain limited. We aimed to develop and validate a radiomic model using multiparametric magnetic resonance imaging (MRI) for prognosis prediction and to reveal the underlying biological mechanisms. Radiomic features from preoperative MRI were used to develop and validate a radiomic risk signature (RRS) for insular gliomas, and paired MRI and RNA-seq data (N = 39) were used to identify the core pathways underlying the RRS and individual prognostic radiomic features. An 18-feature-based RRS was established for overall survival (OS) prediction. Gene set enrichment analysis (GSEA) and weighted gene coexpression network analysis (WGCNA) were used to identify intersectional pathways. In total, 364 patients with insular gliomas (training set, N = 295; validation set, N = 69) were enrolled. The RRS was significantly associated with insular glioma OS (log-rank p = 0.00058; HR = 3.595, 95% CI: 1.636-7.898) in the validation set. The radiomic-pathological-clinical model (R-P-CM) displayed enhanced reliability and accuracy in prognosis prediction. The radiogenomic analysis revealed 322 intersectional pathways through the fusion of GSEA and WGCNA results; 13 prognostic radiomic features were significantly correlated with these intersectional pathways. The RRS demonstrated independent predictive value for insular glioma prognosis compared with established clinical and pathological profiles. The biological basis of the prognostic radiomic indicators includes immune, proliferative, migratory, metabolic, and cellular biological function-related pathways.
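At its simplest, the radiogenomic step of linking radiomic features to pathway activity reduces to correlating each feature with a per-patient pathway score; the sketch below uses synthetic data in place of GSEA/WGCNA-derived scores:

```python
import numpy as np

# Sketch of radiomic-feature-to-pathway correlation across patients.
# Both matrices are synthetic stand-ins: a real analysis would use
# computed radiomic features and GSEA/WGCNA-derived pathway activities.

def feature_pathway_correlation(features, pathway_scores):
    """Pearson r between each feature column and the pathway score."""
    f = (features - features.mean(axis=0)) / features.std(axis=0)
    p = (pathway_scores - pathway_scores.mean()) / pathway_scores.std()
    return (f * p[:, None]).mean(axis=0)  # one r per feature

rng = np.random.default_rng(1)
pathway = rng.normal(size=39)  # e.g. a proliferation activity score, N = 39
feats = np.column_stack([
    pathway + rng.normal(scale=0.1, size=39),  # feature tracking the pathway
    rng.normal(size=39),                       # unrelated noise feature
])
r = feature_pathway_correlation(feats, pathway)
print(r.round(2))  # first feature strongly correlated, second near zero
```

A significance filter over such correlations (with multiple-testing correction) is what yields statements like "13 prognostic radiomic features were significantly correlated with these pathways."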
Affiliation(s)
- Wenchao Duan, Zilong Wang, Zeyu Ma, Dongling Pei, Minkai Wang, Yuning Qiu, Dongming Yan, Yuchen Ji, Xianzhi Liu, Zhenyu Zhang
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Hongwei Zheng, Yinhua Li, Mengjiao Duan, Jingliang Cheng, Jing Yan
- Department of MRI, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
7
Khalighi S, Reddy K, Midya A, Pandav KB, Madabhushi A, Abedalthagafi M. Artificial intelligence in neuro-oncology: advances and challenges in brain tumor diagnosis, prognosis, and precision treatment. NPJ Precis Oncol 2024; 8:80. PMID: 38553633; PMCID: PMC10980741; DOI: 10.1038/s41698-024-00575-0.
Abstract
This review delves into the most recent advances in applying artificial intelligence (AI) within neuro-oncology, specifically emphasizing work on gliomas, a class of brain tumors that represent a significant global health issue. AI has brought transformative innovations to brain tumor management, utilizing imaging, histopathological, and genomic tools for efficient detection, categorization, outcome prediction, and treatment planning. Across all facets of malignant brain tumor management (diagnosis, prognosis, and therapy), AI models can outperform human evaluations in terms of accuracy and specificity. Their ability to discern molecular characteristics from imaging may reduce reliance on invasive diagnostics and accelerate the time to molecular diagnosis. The review covers AI techniques from classical machine learning to deep learning, highlighting current applications and challenges. Promising directions for future research include multimodal data integration, generative AI, large medical language models, precise tumor delineation and characterization, and addressing racial and gender disparities. Adaptive, personalized treatment strategies are also emphasized for optimizing clinical outcomes. Ethical, legal, and social implications are discussed, advocating for transparency and fairness in the integration of AI into neuro-oncology and providing a holistic understanding of its transformative impact on patient care.
Affiliation(s)
- Sirvan Khalighi
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Kartik Reddy
- Department of Radiology, Emory University, Atlanta, GA, USA
- Abhishek Midya
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Krunal Balvantbhai Pandav
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Anant Madabhushi
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Atlanta Veterans Administration Medical Center, Atlanta, GA, USA
- Malak Abedalthagafi
- Department of Pathology and Laboratory Medicine, Emory University, Atlanta, GA, USA
- The Cell and Molecular Biology Program, Winship Cancer Institute, Atlanta, GA, USA
8
Wang Y, Yin C, Zhang P. Multimodal risk prediction with physiological signals, medical images and clinical notes. Heliyon 2024; 10:e26772. PMID: 38455585; PMCID: PMC10918115; DOI: 10.1016/j.heliyon.2024.e26772.
Abstract
The broad adoption of electronic health record (EHR) systems brings a tremendous amount of clinical data and thus provides opportunities to conduct data-driven healthcare research to solve various clinical problems in the medical domain. Machine learning and deep learning methods are widely used in medical informatics and healthcare due to their power to mine insights from raw data. When adapting deep learning models to EHR data, it is essential to consider its heterogeneous nature: EHR contains patient records from various sources including medical tests (e.g., blood tests, microbiology tests), medical imaging, diagnoses, medications, procedures, and clinical notes. Together, these modalities provide a holistic view of patient health status and complement each other. Combining data from multiple modalities that are intrinsically different is therefore challenging but intuitively promising for deep learning on EHR. To assess the promise of multimodal data, we introduce a comprehensive fusion framework designed to integrate temporal variables, medical images, and clinical notes in EHR for enhanced clinical risk prediction. Early, joint, and late fusion strategies are employed to combine data from the various modalities effectively. We test the model on three predictive tasks: in-hospital mortality, long length of stay, and 30-day readmission. Experimental results show that multimodal models outperform unimodal models on all three tasks. Additionally, by training models with different input modality combinations, we calculate the Shapley value for each modality to quantify its contribution to multimodal performance. Temporal variables tend to be more helpful than CXR images and clinical notes across the three predictive tasks.
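Quantifying each modality's contribution with Shapley values means averaging its marginal performance gain over all modality subsets. A small stdlib sketch with made-up subset performances (modality names abbreviated; these are not the paper's numbers):

```python
from itertools import combinations
from math import factorial

# Shapley value of each modality, given model performance (e.g. AUROC)
# for every subset of modalities. All performance numbers below are
# invented for illustration.

def shapley(perf, players):
    n = len(players)
    vals = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                # Standard Shapley weight for a coalition of size |S|.
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (perf[frozenset(S + (p,))] - perf[frozenset(S)])
        vals[p] = total
    return vals

perf = {frozenset(): 0.50,
        frozenset({"ts"}): 0.70, frozenset({"cxr"}): 0.60,
        frozenset({"notes"}): 0.58,
        frozenset({"ts", "cxr"}): 0.74, frozenset({"ts", "notes"}): 0.73,
        frozenset({"cxr", "notes"}): 0.63,
        frozenset({"ts", "cxr", "notes"}): 0.76}
contrib = shapley(perf, ["ts", "cxr", "notes"])
print(max(contrib, key=contrib.get))  # ts (temporal signals contribute most)
```

A useful sanity check is the efficiency property: the modality contributions sum to the gain of the full model over the empty baseline.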
Affiliation(s)
- Yuanlong Wang
- Department of Computer Science and Engineering, The Ohio State University, Columbus, OH 43210, USA
- Changchang Yin
- Department of Computer Science and Engineering, The Ohio State University, Columbus, OH 43210, USA
- Department of Biomedical Informatics, The Ohio State University, Columbus, OH 43210, USA
- Ping Zhang
- Department of Computer Science and Engineering, The Ohio State University, Columbus, OH 43210, USA
- Department of Biomedical Informatics, The Ohio State University, Columbus, OH 43210, USA
9
Lee JO, Ahn SS, Choi KS, Lee J, Jang J, Park JH, Hwang I, Park CK, Park SH, Chung JW, Choi SH. Added prognostic value of 3D deep learning-derived features from preoperative MRI for adult-type diffuse gliomas. Neuro Oncol 2024; 26:571-580. PMID: 37855826; PMCID: PMC10912011; DOI: 10.1093/neuonc/noad202.
Abstract
BACKGROUND To investigate the prognostic value of spatial features from whole-brain MRI using a three-dimensional (3D) convolutional neural network for adult-type diffuse gliomas. METHODS In a retrospective, multicenter study, 1925 diffuse glioma patients were enrolled from 5 datasets: SNUH (n = 708), UPenn (n = 425), UCSF (n = 500), TCGA (n = 160), and Severance (n = 132). The SNUH and Severance datasets served as external test sets. Precontrast and postcontrast 3D T1-weighted, T2-weighted, and T2-FLAIR images were processed as multichannel 3D images. A 3D-adapted SE-ResNeXt model was trained to predict overall survival. The prognostic value of the deep learning-based prognostic index (DPI), a spatial feature-derived quantitative score, and established prognostic markers were evaluated using Cox regression. Model evaluation was performed using the concordance index (C-index) and Brier score. RESULTS The MRI-only median DPI survival prediction model achieved C-indices of 0.709 and 0.677 (BS = 0.142 and 0.215) and survival differences (P < 0.001 and P = 0.002; log-rank test) for the SNUH and Severance datasets, respectively. Multivariate Cox analysis revealed DPI as a significant prognostic factor, independent of clinical and molecular genetic variables: hazard ratio = 0.032 and 0.036 (P < 0.001 and P = 0.004) for the SNUH and Severance datasets, respectively. Multimodal prediction models achieved higher C-indices than models using only clinical and molecular genetic variables: 0.783 vs. 0.774, P = 0.001, SNUH; 0.766 vs. 0.748, P = 0.023, Severance. CONCLUSIONS The global morphologic feature derived from 3D CNN models using whole-brain MRI has independent prognostic value for diffuse gliomas. Combining clinical, molecular genetic, and imaging data yields the best performance.
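The Brier score (BS) used here alongside the C-index is the mean squared error between a predicted survival probability at a horizon and the observed status. A simplified sketch that skips the censoring (IPCW) weighting a full evaluation like this study's would apply:

```python
# Brier score at a fixed horizon t for survival predictions.
# Simplified sketch: censored-before-t patients are dropped instead of
# inverse-probability-of-censoring weighted, and all data are toy values.

def brier_at(t, times, events, surv_prob):
    total, n = 0.0, 0
    for time, event, p in zip(times, events, surv_prob):
        if time > t:                      # known to survive past t
            total += (1.0 - p) ** 2
            n += 1
        elif event == 1:                  # event observed before t
            total += (0.0 - p) ** 2
            n += 1
        # censored before t: skipped in this unweighted sketch
    return total / n

times = [3, 8, 12, 15]              # follow-up times
events = [1, 0, 1, 0]               # 1 = death observed
surv_at_10 = [0.2, 0.6, 0.9, 0.8]   # predicted P(survive past t = 10)
print(round(brier_at(10, times, events, surv_at_10), 3))  # 0.03
```

Lower is better: 0 is perfect, and 0.25 matches an uninformative constant prediction of 0.5, which puts the reported BS values of 0.142 and 0.215 in context.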
Affiliation(s)
- Jung Oh Lee
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Artificial Intelligence Collaborative Network, Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Sung Soo Ahn
- Department of Radiology, Yonsei University College of Medicine, Seoul, Republic of Korea
- Kyu Sung Choi
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Artificial Intelligence Collaborative Network, Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Junhyeok Lee
- Interdisciplinary Programs in Cancer Biology Major, Seoul National University Graduate School, Seoul, Republic of Korea
- Joon Jang
- Department of Biomedical Sciences, Seoul National University, Seoul, Republic of Korea
- Jung Hyun Park
- Department of Radiology, Seoul Metropolitan Government-Seoul National University Boramae Medical Center, Seoul, Republic of Korea
- Inpyeong Hwang
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Artificial Intelligence Collaborative Network, Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Chul-Kee Park
- Department of Neurosurgery, Seoul National University Hospital, Seoul, Republic of Korea
- Sung Hye Park
- Department of Pathology, Seoul National University Hospital, Seoul, Republic of Korea
- Jin Wook Chung
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Artificial Intelligence Collaborative Network, Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Institute of Innovate Biomedical Technology, Seoul National University Hospital, Seoul, Republic of Korea
- Seung Hong Choi
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Artificial Intelligence Collaborative Network, Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Center for Nanoparticle Research, Institute for Basic Science, Seoul, Republic of Korea
10
Li YL, Leu HB, Ting CH, Lim SS, Tsai TY, Wu CH, Chung IF, Liang KH. Predicting long-term time to cardiovascular incidents using myocardial perfusion imaging and deep convolutional neural networks. Sci Rep 2024; 14:3802. PMID: 38360974; PMCID: PMC10869727; DOI: 10.1038/s41598-024-54139-0.
Abstract
Myocardial perfusion imaging (MPI) is a clinical tool which can assess the heart's perfusion status, thereby revealing impairments in patients' cardiac function. Within the MPI modality, the acquired three-dimensional signals are typically represented as a sequence of two-dimensional grayscale tomographic images. Here, we proposed an end-to-end survival training approach for processing grayscale MPI tomograms to generate a risk score which reflects subsequent time to cardiovascular incidents, including cardiovascular death, non-fatal myocardial infarction, and non-fatal ischemic stroke (collectively known as Major Adverse Cardiovascular Events; MACE) as well as Congestive Heart Failure (CHF). We recruited a total of 1928 patients who had undergone MPI followed by coronary interventions. Among them, 80% (n = 1540) were randomly reserved for the training and fivefold cross-validation stage, while 20% (n = 388) were set aside for the testing stage. The end-to-end survival training converges well in generating effective AI models via the fivefold cross-validation approach with 1540 patients. When a candidate model is evaluated on independent images, it can stratify patients into below-median-risk (n = 194) and above-median-risk (n = 194) groups, whose survival curves differ significantly (P < 0.0001). We further stratified the above-median-risk group into quartile 3 and quartile 4 groups (n = 97 each); the three patient strata, referred to as the low-, intermediate-, and high-risk groups respectively, manifest statistically significant differences. Notably, the 5-year cardiovascular incident rate is less than 5% in the low-risk group (50% of all patients), while the rate is nearly 40% in the high-risk group (25% of all patients). Evaluation of patient subgroups revealed a stronger effect size in patients with three blocked arteries (hazard ratio [HR]: 18.377, 95% CI 3.719-90.801, p < 0.001), followed by those with two blocked vessels (HR 7.484, 95% CI 1.858-30.150; p = 0.005). Regarding stent placement, patients with a single stent displayed an HR of 4.410 (95% CI 1.399-13.904; p = 0.011) and patients with two stents an HR of 10.699 (95% CI 2.262-50.601; p = 0.003), escalating notably to an HR of 57.446 (95% CI 1.922-1717.207; p = 0.019) for patients with three or more stents, indicating a substantial relationship between disease severity and the predictive capability of the AI for subsequent cardiovascular incidents. The success of the MPI AI model in stratifying patients into subgroups with distinct times to cardiovascular incidents demonstrates the feasibility of the proposed end-to-end survival training approach.
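The median-then-quartile stratification described above can be sketched with simple order statistics; the scores are made up for illustration:

```python
# Split patients at the median of an AI risk score, then split the
# above-median half at quartile 3 into intermediate- and high-risk
# groups. Simple order-statistic cutoffs; not the paper's code.

def stratify(scores):
    s = sorted(scores)
    n = len(s)
    median, q3 = s[n // 2], s[(3 * n) // 4]  # crude quantile cutoffs
    groups = []
    for x in scores:
        if x < median:
            groups.append("low")           # below-median half
        elif x < q3:
            groups.append("intermediate")  # quartile 3
        else:
            groups.append("high")          # quartile 4
    return groups

scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]  # toy AI risk scores
groups = stratify(scores)
print(groups)  # 4 low, 2 intermediate, 2 high
```

The resulting low/intermediate/high labels correspond to the 50%/25%/25% patient strata whose survival curves are compared with the log-rank test.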
Affiliation(s)
- Yi-Lian Li
- Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taipei City, Taiwan
- Hsin-Bang Leu
- Department of Medicine, Taipei Veterans General Hospital, Taipei City, Taiwan
- Chien-Hsin Ting
- Department of Nuclear Medicine, Taipei Veterans General Hospital, Taipei City, Taiwan
- Su-Shen Lim
- Department of Medicine, Taipei Veterans General Hospital, Taipei City, Taiwan
- Tsung-Ying Tsai
- Department of Medicine, Taipei Veterans General Hospital, Taipei City, Taiwan
- Cheng-Hsueh Wu
- Department of Medicine, Taipei Veterans General Hospital, Taipei City, Taiwan
- I-Fang Chung
- Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taipei City, Taiwan
- Kung-Hao Liang
- Department of Medical Research, Taipei Veterans General Hospital, Taipei City, Taiwan
11
Liu X, Shusharina N, Shih HA, Kuo CCJ, El Fakhri G, Woo J. Treatment-wise Glioblastoma Survival Inference with Multi-parametric Preoperative MRI. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2024; 12927:129272H. [PMID: 39444513 PMCID: PMC11497473 DOI: 10.1117/12.3006897] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/25/2024]
Abstract
In this work, we aim to predict the survival time (ST) of glioblastoma (GBM) patients undergoing different treatments based on preoperative magnetic resonance (MR) scans. Personalized and precise treatment planning can be achieved by comparing the ST under different treatments. It is well established that both the current status of the patient (as represented by the MR scans) and the choice of treatment are causes of ST. While previous MR-based glioblastoma ST studies have focused only on the direct mapping of MR scans to ST, they have not modeled the underlying causal relationship between treatments and ST. To address this limitation, we propose a treatment-conditioned regression model for glioblastoma ST that incorporates treatment information in addition to MR scans. Our approach allows us to effectively utilize the data from all of the treatments in a unified manner, rather than having to train separate models for each treatment. Furthermore, treatment can be effectively injected into each convolutional layer through the adaptive instance normalization we employ. We evaluate our framework on the BraTS20 ST prediction task. Three treatment options are considered: Gross Total Resection (GTR), Subtotal Resection (STR), and no resection. The evaluation results demonstrate the effectiveness of injecting treatment information for estimating GBM survival.
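A minimal numpy sketch of how a treatment code can be injected into a convolutional feature map via adaptive instance normalization. The lookup tables stand in for the learned conditioning layers of the actual network, and all sizes and names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N_TREATMENTS, C, H, W = 3, 4, 8, 8  # GTR / STR / no resection; toy feature sizes

# Stand-ins for a learned mapping from treatment code to per-channel (gamma, beta)
gamma_table = rng.normal(1.0, 0.1, size=(N_TREATMENTS, C))
beta_table = rng.normal(0.0, 0.1, size=(N_TREATMENTS, C))

def adain_with_treatment(feat, treatment):
    """Instance-normalize each channel of a (C, H, W) feature map, then
    re-style it with the treatment-specific scale and shift (AdaIN)."""
    mu = feat.mean(axis=(1, 2), keepdims=True)
    sigma = feat.std(axis=(1, 2), keepdims=True) + 1e-5
    normed = (feat - mu) / sigma
    g = gamma_table[treatment][:, None, None]
    b = beta_table[treatment][:, None, None]
    return g * normed + b

feat = rng.normal(size=(C, H, W))
out = adain_with_treatment(feat, treatment=1)  # condition on, e.g., STR
```

After the operation, each channel's mean equals the treatment-specific beta, which is how the treatment signal reaches every conditioned layer.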
Affiliation(s)
- Xiaofeng Liu
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114 USA
- Nadya Shusharina
- Dept. of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
- Helen A Shih
- Dept. of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
- C-C Jay Kuo
- Dept. of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA 90007, USA
- Georges El Fakhri
- Dept. of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06519, USA
- Jonghye Woo
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
12
Schön F, Kieslich A, Nebelung H, Riediger C, Hoffmann RT, Zwanenburg A, Löck S, Kühn JP. Comparative analysis of radiomics and deep-learning algorithms for survival prediction in hepatocellular carcinoma. Sci Rep 2024; 14:590. [PMID: 38182664 PMCID: PMC10770355 DOI: 10.1038/s41598-023-50451-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2023] [Accepted: 12/20/2023] [Indexed: 01/07/2024] Open
Abstract
To examine the comparative robustness of computed tomography (CT)-based conventional radiomics and deep-learning convolutional neural networks (CNN) to predict overall survival (OS) in HCC patients. Retrospectively, 114 HCC patients with pretherapeutic CT of the liver were randomized into a development (n = 85) and a validation (n = 29) cohort, including patients of all tumor stages and several applied therapies. In addition to clinical parameters, image annotations of the liver parenchyma and of tumor findings on CT were available. Cox regression models based on radiomics features and CNN models were established and combined with clinical parameters to predict OS. Model performance was assessed using the concordance index (C-index). Log-rank tests were used to test model-based patient stratification into high- and low-risk groups. The clinical Cox regression model achieved the best validation performance for OS (C-index [95% confidence interval (CI)] 0.74 [0.57-0.86]) with a significant difference between the risk groups (p = 0.03). In image analysis, the CNN models (lowest C-index [CI] 0.63 [0.39-0.83]; highest C-index [CI] 0.71 [0.49-0.88]) were superior to the corresponding radiomics models (lowest C-index [CI] 0.51 [0.30-0.73]; highest C-index [CI] 0.66 [0.48-0.79]). A significant risk stratification was not possible (p > 0.05). Under clinical conditions, CNN algorithms demonstrate superior prognostic potential for predicting OS in HCC patients compared to conventional radiomics approaches and could therefore provide important information in the clinical setting, especially when clinical data are limited.
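The C-index used to score these models is simple to compute by hand. A minimal sketch of Harrell's concordance index, with toy data (not the study's), assuming higher risk scores should pair with earlier events:

```python
def c_index(times, events, risks):
    """Harrell's concordance index: over all comparable patient pairs
    (one patient has an observed event before the other's time), count
    how often the model assigns the higher risk to the earlier event.
    Ties in risk count as 0.5."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is comparable if i has an event before time j
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Perfectly anti-ordered risks give a C-index of 1.0
times, events, risks = [2, 4, 6, 8], [1, 1, 1, 0], [0.9, 0.7, 0.4, 0.1]
print(c_index(times, events, risks))  # → 1.0
```

A value of 0.5 corresponds to random ordering, which is why validation C-indices in the 0.6-0.75 range indicate modest but real discrimination.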
Affiliation(s)
- Felix Schön
- Institute and Polyclinic for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Carl Gustav Carus, TU Dresden, Dresden, Germany.
- Aaron Kieslich
- OncoRay‑National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, TU Dresden, Helmholtz-Zentrum Dresden-Rossendorf, Dresden, Germany
- Heiner Nebelung
- Institute and Polyclinic for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Carl Gustav Carus, TU Dresden, Dresden, Germany
- Carina Riediger
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TU Dresden, Dresden, Germany
- Ralf-Thorsten Hoffmann
- Institute and Polyclinic for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Carl Gustav Carus, TU Dresden, Dresden, Germany
- Alex Zwanenburg
- OncoRay‑National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, TU Dresden, Helmholtz-Zentrum Dresden-Rossendorf, Dresden, Germany
- National Center for Tumor Diseases (NCT/UCC) Dresden, Dresden, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Steffen Löck
- OncoRay‑National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, TU Dresden, Helmholtz-Zentrum Dresden-Rossendorf, Dresden, Germany
- Jens-Peter Kühn
- Institute and Polyclinic for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Carl Gustav Carus, TU Dresden, Dresden, Germany
13
Herr J, Stoyanova R, Mellon EA. Convolutional Neural Networks for Glioma Segmentation and Prognosis: A Systematic Review. Crit Rev Oncog 2024; 29:33-65. [PMID: 38683153 DOI: 10.1615/critrevoncog.2023050852] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/01/2024]
Abstract
Deep learning (DL) is poised to redefine the way medical images are processed and analyzed. Convolutional neural networks (CNNs), a specific type of DL architecture, are exceptional for high-throughput processing, allowing for the effective extraction of relevant diagnostic patterns from large volumes of complex visual data. This technology has garnered substantial interest in the field of neuro-oncology as a promising tool to enhance medical imaging throughput and analysis. A multitude of methods harnessing MRI-based CNNs have been proposed for brain tumor segmentation, classification, and prognosis prediction. They are often applied to gliomas, the most common primary brain cancer, to classify subtypes with the goal of guiding therapy decisions. Additionally, the difficulty of repeating brain biopsies to evaluate treatment response, in the setting of often confusing imaging findings, provides a unique niche for CNNs to help distinguish treatment response in gliomas. For example, glioblastoma, the most aggressive type of brain cancer, can grow due to poor treatment response, can appear to grow acutely due to treatment-related inflammation as the tumor dies (pseudo-progression), or can falsely appear to be regrowing after treatment as a result of brain damage from radiation (radiation necrosis). CNNs are being applied to disentangle this diagnostic dilemma. This review provides a detailed synthesis of recent DL methods and applications for intratumor segmentation, glioma classification, and prognosis prediction. Furthermore, this review discusses the future direction of MRI-based CNNs in the field of neuro-oncology and challenges in model interpretability, data availability, and computational efficiency.
Affiliation(s)
- Radka Stoyanova
- Department of Radiation Oncology, University of Miami Miller School of Medicine, Sylvester Comprehensive Cancer Center, Miami, FL 33136, USA
- Eric Albert Mellon
- Department of Radiation Oncology, University of Miami Miller School of Medicine, Sylvester Comprehensive Cancer Center, Miami, FL 33136, USA
14
Wang W, Liang H, Zhang Z, Xu C, Wei D, Li W, Qian Y, Zhang L, Liu J, Lei D. Comparing three-dimensional and two-dimensional deep-learning, radiomics, and fusion models for predicting occult lymph node metastasis in laryngeal squamous cell carcinoma based on CT imaging: a multicentre, retrospective, diagnostic study. EClinicalMedicine 2024; 67:102385. [PMID: 38261897 PMCID: PMC10796944 DOI: 10.1016/j.eclinm.2023.102385] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/07/2023] [Revised: 11/29/2023] [Accepted: 12/04/2023] [Indexed: 01/25/2024] Open
Abstract
Background The occult lymph node metastasis (LNM) of laryngeal squamous cell carcinoma (LSCC) affects the treatment and prognosis of patients. This study aimed to comprehensively compare the performance of three-dimensional and two-dimensional deep learning models, a radiomics model, and fusion models for predicting occult LNM in LSCC. Methods In this retrospective diagnostic study, a total of 553 patients with clinical N0 stage LSCC, who underwent surgical treatment and had neither distant metastasis nor multiple primary cancers, were consecutively enrolled from four Chinese medical centres between January 01, 2016 and December 30, 2020. The participant data were manually retrieved from medical records, imaging databases, and pathology reports. The study cohort was divided into a training set (n = 300), an internal test set (n = 89), and two external test sets (n = 120 and 44, respectively). The three-dimensional deep learning (3D DL), two-dimensional deep learning (2D DL), and radiomics models were developed using CT images of the primary tumor. The clinical model was constructed based on clinical and radiological features. Two fusion strategies were utilized to develop the fusion models: the feature-based DLRad_FB model and the decision-based DLRad_DB model. The discriminative ability and correlation of 3D DL, 2D DL, and radiomics features were analysed comprehensively. The performances of the predictive models were evaluated against the pathological diagnosis. Findings The 3D DL features had superior discriminative ability and lower internal redundancy compared to 2D DL and radiomics features. The DLRad_DB model achieved the highest AUC (0.89-0.90) among all the study sets, significantly outperforming the clinical model (AUC = 0.73-0.78, P = 0.0001-0.042, DeLong test).
Compared to the DLRad_DB model, the AUC values for the DLRad_FB, 3D DL, 2D DL, and radiomics models were 0.82-0.84 (P = 0.025-0.46), 0.86-0.89 (P = 0.75-0.97), 0.83-0.86 (P = 0.029-0.66), and 0.79-0.82 (P = 0.0072-0.10), respectively, across the study sets. Additionally, the DLRad_DB model exhibited the best sensitivity (82-88%) and specificity (79-85%) in the test sets. Interpretation The decision-based fusion model DLRad_DB, which combines 3D DL, 2D DL, radiomics, and clinical data, can be utilized to predict occult LNM in LSCC. This has the potential to minimize unnecessary lymph node dissection and prophylactic radiotherapy in patients with cN0 disease. Funding National Natural Science Foundation of China, Natural Science Foundation of Shandong Province.
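Decision-based fusion of the kind used by DLRad_DB combines the outputs of several trained models at the prediction level. The abstract does not specify the exact combination rule, so the sketch below uses simple (weighted) probability averaging as one generic late-fusion strategy; all names and numbers are illustrative:

```python
import numpy as np

def decision_fusion(prob_lists, weights=None):
    """Decision-level (late) fusion: combine per-model predicted
    probabilities of occult LNM by weighted averaging.
    prob_lists has shape (n_models, n_patients)."""
    probs = np.asarray(prob_lists, dtype=float)
    if weights is None:
        weights = np.full(len(probs), 1.0 / len(probs))
    return np.average(probs, axis=0, weights=weights)

# Hypothetical per-patient probabilities from four component models
p_3d, p_2d, p_rad, p_clin = [0.8, 0.2], [0.7, 0.4], [0.6, 0.3], [0.9, 0.1]
fused = decision_fusion([p_3d, p_2d, p_rad, p_clin])
# fused probability per patient; threshold at 0.5 for the final call
```

Feature-based fusion (as in DLRad_FB) would instead concatenate the models' feature vectors before a single classifier.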
Affiliation(s)
- Wenlun Wang
- Department of Otorhinolaryngology, Qilu Hospital of Shandong University, Jinan, Shandong, China
- NHC Key Laboratory of Otorhinolaryngology (Shandong University), Jinan, Shandong, China
- Hui Liang
- Department of Otorhinolaryngology, The First Affiliated Hospital of Shandong First Medical University & Shandong Provincial Qianfoshan Hospital, Jinan 250014, Shandong, China
- Zhouyi Zhang
- Department of Otorhinolaryngology, Qilu Hospital of Shandong University, Jinan, Shandong, China
- NHC Key Laboratory of Otorhinolaryngology (Shandong University), Jinan, Shandong, China
- Chenyang Xu
- Department of Otorhinolaryngology, Qilu Hospital of Shandong University, Jinan, Shandong, China
- NHC Key Laboratory of Otorhinolaryngology (Shandong University), Jinan, Shandong, China
- Dongmin Wei
- Department of Otorhinolaryngology, Qilu Hospital of Shandong University, Jinan, Shandong, China
- NHC Key Laboratory of Otorhinolaryngology (Shandong University), Jinan, Shandong, China
- Wenming Li
- Department of Otorhinolaryngology, Qilu Hospital of Shandong University, Jinan, Shandong, China
- NHC Key Laboratory of Otorhinolaryngology (Shandong University), Jinan, Shandong, China
- Ye Qian
- Department of Otorhinolaryngology, Qilu Hospital of Shandong University, Jinan, Shandong, China
- NHC Key Laboratory of Otorhinolaryngology (Shandong University), Jinan, Shandong, China
- Lihong Zhang
- Department of Otorhinolaryngology Head & Neck Surgery, Peking University People’s Hospital, Beijing 100044, China
- Jun Liu
- Department of Otolaryngology-Head & Neck Surgery, West China Hospital, Sichuan University, Chengdu, China
- Dapeng Lei
- Department of Otorhinolaryngology, Qilu Hospital of Shandong University, Jinan, Shandong, China
- NHC Key Laboratory of Otorhinolaryngology (Shandong University), Jinan, Shandong, China
15
Miao S, Jia H, Huang W, Cheng K, Zhou W, Wang R. Subcutaneous fat predicts bone metastasis in breast cancer: A novel multimodality-based deep learning model. Cancer Biomark 2024; 39:171-185. [PMID: 38043007 PMCID: PMC11091603 DOI: 10.3233/cbm-230219] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2023] [Accepted: 10/24/2023] [Indexed: 12/04/2023]
Abstract
OBJECTIVES This study explores a deep learning (DL) approach to predicting bone metastases in breast cancer (BC) patients using clinical information (such as the fat index) and features derived from Computed Tomography (CT) images. METHODS CT imaging data and clinical information were collected from 431 BC patients who underwent radical surgical resection at Harbin Medical University Cancer Hospital. The areas of muscle and adipose tissue were obtained from CT images at the level of the eleventh thoracic vertebra. The corresponding histograms of oriented gradients (HOG) and local binary pattern (LBP) features were extracted from the CT images, and network features were derived from the LBP and HOG features as well as the CT images through deep learning. The combination of network features with clinical information was utilized to predict bone metastases in BC patients using the Gradient Boosting Decision Tree (GBDT) algorithm. Regularized Cox regression models were employed to identify independent prognostic factors for bone metastasis. RESULTS The combination of clinical information and network features extracted from LBP features, HOG features, and CT images using a convolutional neural network (CNN) yielded the best performance, achieving an AUC of 0.922 (95% confidence interval [CI]: 0.843-0.964, P < 0.01). Regularized Cox regression results indicated that the subcutaneous fat index was an independent prognostic factor for bone metastasis in BC. CONCLUSION The subcutaneous fat index could predict bone metastasis in BC patients. The multimodal DL algorithm demonstrates superior performance in assessing bone metastases in BC patients.
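Of the handcrafted texture features mentioned above, the local binary pattern is the simplest to illustrate. Below is a minimal numpy sketch of the basic 8-neighbour LBP (the study's exact LBP variant and radius are not specified; this and the toy image are illustrative):

```python
import numpy as np

def lbp_8neighbour(img):
    """Basic 8-neighbour local binary pattern: each interior pixel gets
    an 8-bit code, one bit per neighbour that is >= the centre pixel."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= centre).astype(np.uint8) << bit
    return codes

img = np.array([[5, 5, 5],
                [5, 1, 5],
                [5, 5, 5]], dtype=np.int32)
print(lbp_8neighbour(img))  # all neighbours >= centre → code 255
```

A histogram of these codes over an image patch is what typically feeds the downstream classifier.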
Affiliation(s)
- Shidi Miao
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Haobo Jia
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Wenjuan Huang
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Department of Internal Medicine, Harbin Medical University Cancer Hospital, Harbin Medical University, Harbin, Heilongjiang, China
- Ke Cheng
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Wenjin Zhou
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Ruitao Wang
- Department of Internal Medicine, Harbin Medical University Cancer Hospital, Harbin Medical University, Harbin, Heilongjiang, China
16
Nakhate V, Gonzalez Castro LN. Artificial intelligence in neuro-oncology. Front Neurosci 2023; 17:1217629. [PMID: 38161802 PMCID: PMC10755952 DOI: 10.3389/fnins.2023.1217629] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2023] [Accepted: 11/14/2023] [Indexed: 01/03/2024] Open
Abstract
Artificial intelligence (AI) describes the application of computer algorithms to the solution of problems that have traditionally required human intelligence. Although formal work in AI has been slowly advancing for almost 70 years, developments in the last decade, and particularly in the last year, have led to an explosion of AI applications in multiple fields. Neuro-oncology has not escaped this trend. Given the expected integration of AI-based methods into neuro-oncology practice over the coming years, we set out to provide an overview of existing technologies as they are applied to the neuropathology and neuroradiology of brain tumors. We highlight current benefits and limitations of these technologies and offer recommendations on how to appraise novel AI tools as they undergo consideration for integration into clinical workflows.
Affiliation(s)
- Vihang Nakhate
- Department of Neurology, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States
- Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States
- L. Nicolas Gonzalez Castro
- Department of Neurology, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States
- Harvard Medical School, Boston, MA, United States
- The Center for Neuro-Oncology, Dana–Farber Cancer Institute, Boston, MA, United States
17
Ke J, Liu K, Sun Y, Xue Y, Huang J, Lu Y, Dai J, Chen Y, Han X, Shen Y, Shen D. Artifact Detection and Restoration in Histology Images With Stain-Style and Structural Preservation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:3487-3500. [PMID: 37352087 DOI: 10.1109/tmi.2023.3288940] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/25/2023]
Abstract
Artifacts in histology images may encumber the accurate interpretation of medical information and cause misdiagnosis, while requiring manual quality control for artifacts considerably decreases the degree of automation. To close this gap, we propose a methodical pre-processing framework to detect and restore artifacts, minimizing their impact on downstream AI diagnostic tasks. First, the artifact recognition network AR-Classifier differentiates common artifacts (e.g., tissue folds, marking dye, tattoo pigment, spots, and out-of-focus regions) from normal tissue, and also catalogs artifact patches by their restorability. Then, the succeeding artifact restoration network AR-CycleGAN performs de-artifact processing in which stain styles and tissue structures are maximally retained. We construct a benchmark for performance evaluation, curated from both clinically collected WSIs and public datasets of colorectal and breast cancer. The framework is compared with state-of-the-art methods and comprehensively evaluated by multiple metrics across multiple tasks, including artifact classification, artifact restoration, and the downstream diagnostic tasks of tumor classification and nuclei segmentation. The proposed system allows full automation of deep-learning-based histology image analysis without human intervention. Moreover, its structure-independent characteristic enables processing of various artifact subtypes. The source code and data in this research are available at https://github.com/yunboer/AR-classifier-and-AR-CycleGAN.
18
Pacella G, Brunese MC, D’Imperio E, Rotondo M, Scacchi A, Carbone M, Guerra G. Pancreatic Ductal Adenocarcinoma: Update of CT-Based Radiomics Applications in the Pre-Surgical Prediction of the Risk of Post-Operative Fistula, Resectability Status and Prognosis. J Clin Med 2023; 12:7380. [PMID: 38068432 PMCID: PMC10707069 DOI: 10.3390/jcm12237380] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2023] [Revised: 11/21/2023] [Accepted: 11/23/2023] [Indexed: 09/10/2024] Open
Abstract
BACKGROUND Pancreatic ductal adenocarcinoma (PDAC) is the seventh leading cause of cancer-related deaths worldwide. Surgical resection is the main driver of improved survival in resectable tumors, while neoadjuvant treatment based on chemotherapy (and radiotherapy) is the best treatment option for primarily unresectable disease. CT-based imaging has a central role in detecting, staging, and managing PDAC. As several authors have proposed radiomics for risk stratification in patients undergoing surgery for PDAC, in this narrative review we explore the current fields of interest for radiomics tools in PDAC built on pre-surgical imaging and clinical variables, with the goal of obtaining more objective and reliable predictors. METHODS The PubMed database was searched for papers published in the English language no earlier than January 2018. RESULTS We found 301 studies, of which 11 satisfied our research criteria. Of those included, four addressed resectability status prediction, three post-operative pancreatic fistula (POPF) prediction, and four survival prediction. Most of the studies were retrospective. CONCLUSIONS Many well-performing models have been developed to obtain predictive information in pre-surgical evaluation. However, all of the studies were retrospective and lack external validation in prospective, multicentric cohorts. Furthermore, the radiomics models and the expression of their results should be standardized and automated to be applicable in clinical practice.
Affiliation(s)
- Giulia Pacella
- Department of Medicine and Health Science “V. Tiberio”, University of Molise, 86100 Campobasso, Italy
- Maria Chiara Brunese
- Department of Medicine and Health Science “V. Tiberio”, University of Molise, 86100 Campobasso, Italy
- Marco Rotondo
- Department of Medicine and Health Science “V. Tiberio”, University of Molise, 86100 Campobasso, Italy
- Andrea Scacchi
- General Surgery Unit, University of Milano-Bicocca, 20126 Milan, Italy
- Mattia Carbone
- San Giovanni di Dio e Ruggi d’Aragona Hospital, 84131 Salerno, Italy
- Germano Guerra
- Department of Medicine and Health Science “V. Tiberio”, University of Molise, 86100 Campobasso, Italy
19
Urrutia R, Espejo D, Evens N, Guerra M, Sühn T, Boese A, Hansen C, Fuentealba P, Illanes A, Poblete V. Clustering Methods for Vibro-Acoustic Sensing Features as a Potential Approach to Tissue Characterisation in Robot-Assisted Interventions. SENSORS (BASEL, SWITZERLAND) 2023; 23:9297. [PMID: 38067671 PMCID: PMC10708300 DOI: 10.3390/s23239297] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/06/2023] [Revised: 11/06/2023] [Accepted: 11/11/2023] [Indexed: 12/18/2023]
Abstract
This article provides a comprehensive analysis of the feature extraction methods applied to vibro-acoustic signals (VA signals) in the context of robot-assisted interventions. The primary objective is to extract valuable information from these signals to understand tissue behaviour better and build upon prior research. This study is divided into three key stages: feature extraction using the Cepstrum Transform (CT), Mel-Frequency Cepstral Coefficients (MFCCs), and Fast Chirplet Transform (FCT); dimensionality reduction employing techniques such as Principal Component Analysis (PCA), t-Distributed Stochastic Neighbour Embedding (t-SNE), and Uniform Manifold Approximation and Projection (UMAP); and, finally, classification using a nearest neighbours classifier. The results demonstrate that using feature extraction techniques, especially the combination of CT and MFCC with dimensionality reduction algorithms, yields highly efficient outcomes. The classification metrics (Accuracy, Recall, and F1-score) approach 99%, and the clustering metric is 0.61. The performance of the CT-UMAP combination stands out in the evaluation metrics.
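Of the three feature extraction stages above, the Cepstrum Transform is the most compact to sketch. Below is a minimal numpy version of the real cepstrum (the inverse FFT of the log magnitude spectrum); the sampling rate and test signal are illustrative, not from the study:

```python
import numpy as np

def real_cepstrum(signal):
    """Real cepstrum: inverse FFT of the log magnitude spectrum.
    Peaks at non-zero quefrency reveal periodic structure in the
    signal's spectral envelope."""
    spectrum = np.fft.fft(signal)
    log_mag = np.log(np.abs(spectrum) + 1e-12)  # avoid log(0)
    return np.fft.ifft(log_mag).real

fs = 1000                      # toy sampling rate in Hz
t = np.arange(0, 1, 1 / fs)
sig = np.sin(2 * np.pi * 50 * t)
ceps = real_cepstrum(sig)      # one cepstral coefficient per sample
```

Vectors like `ceps` (or their MFCC counterparts) are what the dimensionality-reduction stage (PCA, t-SNE, UMAP) would then project before clustering and nearest-neighbour classification.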
Affiliation(s)
- Robin Urrutia
- Instituto de Acústica, Facultad de Ciencias de la Ingeniería, Universidad Austral de Chile, Valdivia 5111187, Chile
- Audio Mining Laboratory (AuMiLab), Instituto de Acústica, Universidad Austral de Chile, Valdivia 5111187, Chile
- Diego Espejo
- Audio Mining Laboratory (AuMiLab), Instituto de Acústica, Universidad Austral de Chile, Valdivia 5111187, Chile
- Natalia Evens
- Instituto de Anatomia, Histologia y Patologia, Facultad de Medicina, Universidad Austral de Chile, Valdivia 5111187, Chile
- Montserrat Guerra
- Instituto de Anatomia, Histologia y Patologia, Facultad de Medicina, Universidad Austral de Chile, Valdivia 5111187, Chile
- Thomas Sühn
- Department of Orthopaedic Surgery, Otto-von-Guericke University Magdeburg, 39120 Magdeburg, Germany
- SURAG Medical GmbH, 39118 Magdeburg, Germany
- Axel Boese
- INKA Innovation Laboratory for Image Guided Therapy, Otto-von-Guericke University Magdeburg, 39120 Magdeburg, Germany
- Christian Hansen
- Research Campus STIMULATE, Otto-von-Guericke University Magdeburg, 39106 Magdeburg, Germany
- Patricio Fuentealba
- Instituto de Electricidad y Electrónica, Facultad de Ciencias de la Ingeniería, Universidad Austral de Chile, Valdivia 5111187, Chile
- Victor Poblete
- Instituto de Acústica, Facultad de Ciencias de la Ingeniería, Universidad Austral de Chile, Valdivia 5111187, Chile
- Audio Mining Laboratory (AuMiLab), Instituto de Acústica, Universidad Austral de Chile, Valdivia 5111187, Chile
20
Papi Z, Fathi S, Dalvand F, Vali M, Yousefi A, Tabatabaei MH, Amouheidari A, Abedi I. Auto-Segmentation and Classification of Glioma Tumors with the Goals of Treatment Response Assessment Using Deep Learning Based on Magnetic Resonance Imaging. Neuroinformatics 2023; 21:641-650. [PMID: 37458971 DOI: 10.1007/s12021-023-09640-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/26/2023] [Indexed: 10/18/2023]
Abstract
Glioma is the most common primary intracranial neoplasm in adults. Radiotherapy is a treatment approach for glioma patients, and Magnetic Resonance Imaging (MRI) is a beneficial diagnostic tool in treatment planning. Treatment response assessment in glioma patients is usually based on the Response Assessment in Neuro-Oncology (RANO) criteria, whose limitation is a reliance on two-dimensional (2D) manual measurements. Deep learning (DL) has great potential in neuro-oncology to improve the accuracy of response assessment. In the current research, the BraTS 2018 Challenge dataset, comprising 210 HGG and 75 LGG cases, was first used to train a designed U-Net network for automatic tumor and intra-tumoral segmentation, followed by training of a designed classifier with transfer learning to grade tumors as HGG or LGG. The trained networks were then employed for segmentation and classification of local MRI images of 49 glioma patients pre- and post-radiotherapy. The segmentation results for the tumor and its intra-tumoral regions were used to determine the volumes of the different regions and to assess treatment response. This assessment demonstrated that radiotherapy was effective on the whole tumor and the enhancing region (p ≤ 0.05 at a 95% confidence level), while it did not affect the necrosis and peri-tumoral edema regions. This work demonstrates the potential of applying deep learning to MRI images to provide a beneficial tool for automated treatment response assessment, so that patients can obtain the best treatment.
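The volume-based response assessment described above reduces to counting voxels in the predicted segmentation masks. A minimal sketch follows; the toy masks, the 1 mm isotropic voxel size, and the Dice check are illustrative assumptions, not the study's data:

```python
import numpy as np

def region_volume_ml(mask, voxel_mm3):
    """Volume of a segmented region: voxel count times voxel volume
    (mm^3), converted to millilitres."""
    return mask.sum() * voxel_mm3 / 1000.0

def dice(a, b):
    """Dice overlap between two binary segmentations."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy pre/post-radiotherapy tumor masks on a 10x10x10 grid
pre = np.zeros((10, 10, 10), dtype=bool); pre[2:8, 2:8, 2:8] = True
post = np.zeros_like(pre);                post[2:6, 2:6, 2:6] = True

voxel_mm3 = 1.0 * 1.0 * 1.0  # assumed 1 mm isotropic voxels
shrinkage = 1 - region_volume_ml(post, voxel_mm3) / region_volume_ml(pre, voxel_mm3)
```

Comparing such per-region volumes (whole tumor, enhancing region, necrosis, edema) before and after therapy is the quantity the paired statistical tests would operate on.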
Affiliation(s)
- Zahra Papi
- Department of Medical Physics, School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Sina Fathi
- Department of Health Information Management, School of Health Management and Information Sciences, Iran University of Medical Sciences, Tehran, Iran
- Fatemeh Dalvand
- Department of Medical Radiation, Shahid Beheshti University, Tehran, Iran
- Mahsa Vali
- Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan, Iran
- Ali Yousefi
- Department of Management-Operations Research, University of Isfahan, Isfahan, Iran
- Iraj Abedi
- Department of Medical Physics, School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
21
Liu Y, Wu M. Deep learning in precision medicine and focus on glioma. Bioeng Transl Med 2023; 8:e10553. [PMID: 37693051 PMCID: PMC10486341 DOI: 10.1002/btm2.10553] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2022] [Revised: 04/13/2023] [Accepted: 05/08/2023] [Indexed: 09/12/2023] Open
Abstract
Deep learning (DL) has been successfully applied to a range of tasks across many fields. In medicine, DL methods have also been used to improve the efficiency of disease diagnosis. In this review, we first summarize the history of artificial intelligence models, describe the subtypes of machine learning and the main DL network architectures, and then explore their applications in different fields of precision medicine, such as cardiology, gastroenterology, ophthalmology, dermatology, and oncology. By mining more information and extracting multilevel features from medical data, DL helps doctors assess diseases automatically and monitor patients' physical health. In gliomas, research on DL applications has mainly involved magnetic resonance imaging, followed by pathological slides. However, multi-omics data, such as whole-exome sequencing, RNA sequencing, proteomics, and epigenomics, have not been covered thus far. In general, the quality and quantity of DL datasets still need improvement, and richer multi-omics characteristics will enable more comprehensive and accurate diagnosis in precision medicine and glioma.
Affiliation(s)
- Yihao Liu
- Hunan Key Laboratory of Cancer Metabolism, Hunan Cancer Hospital and the Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University, Changsha, Hunan, China
- NHC Key Laboratory of Carcinogenesis, Xiangya Hospital, Central South University, Changsha, Hunan, China
- Key Laboratory of Carcinogenesis and Cancer Invasion of the Chinese Ministry of Education, Cancer Research Institute, Central South University, Changsha, Hunan, China
- Minghua Wu
- Hunan Key Laboratory of Cancer Metabolism, Hunan Cancer Hospital and the Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University, Changsha, Hunan, China
- NHC Key Laboratory of Carcinogenesis, Xiangya Hospital, Central South University, Changsha, Hunan, China
- Key Laboratory of Carcinogenesis and Cancer Invasion of the Chinese Ministry of Education, Cancer Research Institute, Central South University, Changsha, Hunan, China

22
Martucci M, Russo R, Giordano C, Schiarelli C, D’Apolito G, Tuzza L, Lisi F, Ferrara G, Schimperna F, Vassalli S, Calandrelli R, Gaudino S. Advanced Magnetic Resonance Imaging in the Evaluation of Treated Glioblastoma: A Pictorial Essay. Cancers (Basel) 2023; 15:3790. PMID: 37568606; PMCID: PMC10417432; DOI: 10.3390/cancers15153790.
Abstract
MRI plays a key role in the evaluation of post-treatment changes, both in the immediate post-operative period and during follow-up. There are many different treatment lines, and the neuroradiological findings vary with the treatment chosen and the clinical timepoint at which MRI is performed. Structural MRI alone is often insufficient to correctly interpret and define treatment-related changes. Therefore, advanced MRI modalities, including perfusion and permeability imaging, diffusion tensor imaging, and magnetic resonance spectroscopy, are increasingly used in clinical practice to characterize treatment effects more comprehensively. This article provides an overview of the role of advanced MRI modalities in the evaluation of treated glioblastomas. For didactic purposes, we divide the treatment history into three main timepoints: post-surgery, during Stupp-protocol (first-line) treatment, and at recurrence (second-line treatment). For each, a brief introduction, a temporal subdivision (when useful), or a drug-specific paragraph is provided. Finally, current trends in and applications of radiomics and artificial intelligence (AI) in the evaluation of treated GB are outlined.
Affiliation(s)
- Matia Martucci
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico “A. Gemelli” IRCCS, 00168 Rome, Italy
- Rosellina Russo
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico “A. Gemelli” IRCCS, 00168 Rome, Italy
- Carolina Giordano
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico “A. Gemelli” IRCCS, 00168 Rome, Italy
- Chiara Schiarelli
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico “A. Gemelli” IRCCS, 00168 Rome, Italy
- Gabriella D’Apolito
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico “A. Gemelli” IRCCS, 00168 Rome, Italy
- Laura Tuzza
- Istituto di Radiologia, Università Cattolica del Sacro Cuore, 00168 Rome, Italy
- Francesca Lisi
- Istituto di Radiologia, Università Cattolica del Sacro Cuore, 00168 Rome, Italy
- Giuseppe Ferrara
- Istituto di Radiologia, Università Cattolica del Sacro Cuore, 00168 Rome, Italy
- Francesco Schimperna
- Istituto di Radiologia, Università Cattolica del Sacro Cuore, 00168 Rome, Italy
- Stefania Vassalli
- Istituto di Radiologia, Università Cattolica del Sacro Cuore, 00168 Rome, Italy
- Rosalinda Calandrelli
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico “A. Gemelli” IRCCS, 00168 Rome, Italy
- Simona Gaudino
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico “A. Gemelli” IRCCS, 00168 Rome, Italy
- Istituto di Radiologia, Università Cattolica del Sacro Cuore, 00168 Rome, Italy

23
Ahmed T. Biomaterial-based in vitro 3D modeling of glioblastoma multiforme. Cancer Pathogenesis and Therapy 2023; 1:177-194. PMID: 38327839; PMCID: PMC10846340; DOI: 10.1016/j.cpt.2023.01.002.
Abstract
Adult-onset brain cancers, such as glioblastomas, are particularly lethal. Without a cure, patients with glioblastoma multiforme (GBM) cannot expect to live longer than about 15 months. The results of conventional treatments over the past 20 years have been underwhelming; tumor aggressiveness, tumor location, and the lack of systemic therapies that can penetrate the blood-brain barrier are all contributing factors. GBM treatments that appear promising in preclinical studies fail at a considerable rate in phase I and II clinical trials, in part because the intricate architecture of tumors makes drug access nearly impossible. Researchers currently use bioengineered in vitro cancer models to study disease development, test novel therapies, and advance specialized medications. Developments in cellular and tissue engineering over the past few decades have produced many different techniques for creating such in vitro systems. Later-stage research may yield better results if the in vitro models resemble brain tissue and the blood-brain barrier. Using 3D preclinical models made possible by biomaterials, researchers have found that these limitations can be overcome; innovative in vitro models for the treatment of GBM become feasible with biomaterials and novel drug carriers. This review discusses the benefits and drawbacks of 3D in vitro glioblastoma modeling systems.
Affiliation(s)
- Tanvir Ahmed
- Department of Pharmaceutical Sciences, North South University, Bashundhara, Dhaka, 1229, Bangladesh

24
Zhao J, Huang CC, Zhang Y, Liu Y, Tsai SJ, Lin CP, Lo CYZ. Structure-function coupling in white matter uncovers the abnormal brain connectivity in schizophrenia. Transl Psychiatry 2023; 13:214. PMID: 37339983; DOI: 10.1038/s41398-023-02520-4.
Abstract
Schizophrenia is characterized as a dysconnectivity syndrome, and evidence of widespread impairment of structural and functional integration has been demonstrated in the disorder. Although white matter (WM) microstructural abnormalities have been commonly reported in schizophrenia, WM dysfunction, as well as the relationship between structure and function in WM, remains uncertain. In this study, we proposed a novel structure-function coupling measurement to reflect neuronal information transfer, which combined spatial-temporal correlations of functional signals with diffusion tensor orientations in the WM circuit from functional and diffusion magnetic resonance images (MRI). By analyzing MRI data from 75 individuals with schizophrenia (SZ) and 89 healthy volunteers (HV), the associations between structure and function in WM regions in schizophrenia were examined. Randomized validation of the measurement was performed in the HV group to confirm that it captures neural signal transfer along the WM tracts, i.e., that it quantifies the association between structure and function. Compared to HV, SZ showed a widespread decrease in structure-function coupling within WM regions, involving the corticospinal tract and the superior longitudinal fasciculus. Additionally, structure-function coupling in the WM tracts was significantly correlated with psychotic symptoms and illness duration in schizophrenia, suggesting that abnormal signal transfer along neuronal fiber pathways could be a potential mechanism of the neuropathology of schizophrenia. This work supports the dysconnectivity hypothesis of schizophrenia from the aspect of circuit function, and highlights the critical role of WM networks in the pathophysiology of schizophrenia.
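The paper defines its own coupling measure; as a toy illustration of the general idea (an assumed simplification, not the authors' method), one can correlate pairwise functional signal similarity with pairwise alignment of DTI principal diffusion orientations within a white-matter region:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy WM region: 20 voxels, each with a BOLD-like time series (100 volumes)
# and a unit principal diffusion orientation from DTI (synthetic data).
n_vox, n_t = 20, 100
signals = rng.standard_normal((n_vox, n_t))
orientations = rng.standard_normal((n_vox, 3))
orientations /= np.linalg.norm(orientations, axis=1, keepdims=True)

fc = np.corrcoef(signals)                          # pairwise temporal correlation
alignment = np.abs(orientations @ orientations.T)  # |cos angle| between fibre directions

iu = np.triu_indices(n_vox, k=1)
# Regional structure-function coupling: agreement between the two pairwise matrices.
coupling = float(np.corrcoef(fc[iu], alignment[iu])[0, 1])
```

A decrease in such a coupling value in patients relative to controls would correspond to the abstract's finding of reduced structure-function coupling in WM regions.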
Affiliation(s)
- Jiajia Zhao
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China
- Chu-Chung Huang
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), Affiliated Mental Health Center (ECNU), Institute of Cognitive Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Shanghai Changning Mental Health Center, Shanghai, China
- Yajuan Zhang
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Yuchen Liu
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China
- Shih-Jen Tsai
- Department of Psychiatry, Taipei Veterans General Hospital, Taipei, Taiwan
- Division of Psychiatry, Faculty of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Ching-Po Lin
- Institute of Neuroscience, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Department of Education and Research, Taipei City Hospital, Taipei, Taiwan
- Chun-Yi Zac Lo
- Department of Biomedical Engineering, Chung Yuan Christian University, Taoyuan, Taiwan

25
Sun T, Wang Y, Liu X, Li Z, Zhang J, Lu J, Qu L, Haller S, Duan Y, Zhuo Z, Cheng D, Xu X, Jia W, Liu Y. Deep learning based on preoperative magnetic resonance (MR) images improves the predictive power of survival models in primary spinal cord astrocytomas. Neuro Oncol 2023; 25:1157-1165. PMID: 36562243; PMCID: PMC10237430; DOI: 10.1093/neuonc/noac280.
Abstract
BACKGROUND Prognostic models for spinal cord astrocytoma patients are lacking because of the low incidence of the disease. Here, we aim to develop a fully automated deep learning (DL) pipeline for stratified overall survival (OS) prediction based on preoperative MR images. METHODS A total of 587 patients diagnosed with intramedullary tumors were retrospectively enrolled at our hospital to develop an automated pipeline for tumor segmentation and OS prediction. The pipeline comprised a T2WI-based tumor segmentation model and 3 cascaded binary OS prediction models (1-year, 3-year, and 5-year models). For the tumor segmentation model, 439 cases of intramedullary tumors were used for model training and testing with a transfer learning strategy. A total of 138 patients diagnosed with astrocytomas were included to train and test the OS prediction models via 10 × 10-fold cross-validation using CNNs. RESULTS The Dice coefficient of the tumor segmentation model on the test set was 0.852. The best input for the OS prediction models was a combination of T2W and T1C images and the tumor mask. The 1-year, 3-year, and 5-year automated OS prediction models achieved accuracies of 86.0%, 84.0%, and 88.0% and AUCs of 0.881 (95% CI 0.839-0.918), 0.862 (95% CI 0.827-0.901), and 0.905 (95% CI 0.867-0.942), respectively. The automated DL pipeline achieved 4-class OS prediction (<1 year, 1-3 years, 3-5 years, and >5 years) with 75.3% accuracy. CONCLUSIONS We proposed an automated DL pipeline for segmenting spinal cord astrocytomas and stratifying OS based on preoperative MR images.
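The combination of three cascaded binary models into a 4-class OS group can be sketched as follows (the 0.5 threshold and the evaluation order are assumptions, not stated in the abstract):

```python
def stratify_os(p_gt_1y: float, p_gt_3y: float, p_gt_5y: float,
                threshold: float = 0.5) -> str:
    """Combine three cascaded binary survival models into a 4-class OS group.

    Each argument is the predicted probability that the patient survives
    beyond 1, 3, or 5 years; the threshold and cascade order are assumptions.
    """
    if p_gt_1y < threshold:
        return "<1 year"
    if p_gt_3y < threshold:
        return "1-3 years"
    if p_gt_5y < threshold:
        return "3-5 years"
    return ">5 years"
```

Only patients predicted to pass an earlier horizon reach the next model in the cascade, which is how three binary outputs yield four strata.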
Affiliation(s)
- Ting Sun
- Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- Yongzhi Wang
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- Xing Liu
- Department of Pathology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- Zhaohui Li
- Department of Machine Learning, BioMind Inc., Beijing 100070, China
- Jie Zhang
- Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- Department of Radiology, Beijing Renhe Hospital, Beijing 102600, China
- Jing Lu
- Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- Department of Radiology, Third Medical Center of Chinese PLA General Hospital, Beijing 100089, China
- Liying Qu
- Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- Sven Haller
- Department of Imaging and Medical Informatics, University Hospitals of Geneva and Faculty of Medicine of the University of Geneva, Geneva, Switzerland
- Yunyun Duan
- Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- Zhizheng Zhuo
- Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- Dan Cheng
- Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- Xiaolu Xu
- Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- Wenqing Jia
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- Yaou Liu
- Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China

26
Wang Y, Yin C, Zhang P. Multimodal risk prediction with physiological signals, medical images and clinical notes. medRxiv 2023:2023.05.18.23290207. PMID: 37293005; PMCID: PMC10246140; DOI: 10.1101/2023.05.18.23290207.
Abstract
The broad adoption of electronic health records (EHRs) provides great opportunities to conduct healthcare research and solve clinical problems in medicine. With recent advances, methods based on machine learning and deep learning have become increasingly popular in medical informatics, and combining data from multiple modalities may improve predictive performance. To assess the potential of multimodal data, we introduce a comprehensive fusion framework designed to integrate temporal variables, medical images, and clinical notes in the EHR for enhanced performance on downstream predictive tasks. Early, joint, and late fusion strategies were employed to combine data from the various modalities. Model performance and contribution scores show that multimodal models outperform unimodal models across tasks. Additionally, the temporal signals contained more information than CXR images and clinical notes in the three predictive tasks explored. Models integrating different data modalities can therefore perform better on predictive tasks.
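The early and late fusion strategies named above can be sketched as follows (the feature dimensions and the probability-averaging rule are illustrative assumptions; joint fusion would instead learn the combination inside a network):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy per-patient features from three EHR modalities (dimensions are illustrative).
temporal = rng.standard_normal((4, 8))   # vital-sign summaries
image = rng.standard_normal((4, 16))     # CXR embedding
notes = rng.standard_normal((4, 12))     # clinical-note embedding

def early_fusion(*mods: np.ndarray) -> np.ndarray:
    """Early fusion: concatenate raw modality features before a single model."""
    return np.concatenate(mods, axis=1)

def late_fusion(*probs: np.ndarray, weights=None) -> np.ndarray:
    """Late fusion: weighted average of per-modality model probabilities."""
    stacked = np.stack(probs)
    if weights is None:
        weights = np.ones(len(stacked)) / len(stacked)
    return np.tensordot(np.asarray(weights), stacked, axes=1)

fused = early_fusion(temporal, image, notes)   # one feature matrix for one model
p = late_fusion(np.array([0.8, 0.2, 0.6, 0.4]),
                np.array([0.6, 0.4, 0.7, 0.5]),
                np.array([0.7, 0.3, 0.8, 0.6]))
```

Early fusion lets one model see all modalities at once; late fusion keeps per-modality models and merges only their outputs.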
Affiliation(s)
- Yuanlong Wang
- Department of Computer Science and Engineering, The Ohio State University, Columbus, Ohio 43210, USA
- Changchang Yin
- Department of Computer Science and Engineering, The Ohio State University, Columbus, Ohio 43210, USA
- Department of Biomedical Informatics, The Ohio State University, Columbus, Ohio 43210, USA
- Ping Zhang
- Department of Computer Science and Engineering, The Ohio State University, Columbus, Ohio 43210, USA
- Department of Biomedical Informatics, The Ohio State University, Columbus, Ohio 43210, USA

27
Cahan N, Klang E, Marom EM, Soffer S, Barash Y, Burshtein E, Konen E, Greenspan H. Multimodal fusion models for pulmonary embolism mortality prediction. Sci Rep 2023; 13:7544. PMID: 37160926; PMCID: PMC10170065; DOI: 10.1038/s41598-023-34303-8.
Abstract
Pulmonary embolism (PE) is a common, life-threatening cardiovascular emergency. Risk stratification is one of the core principles of acute PE management and determines the choice of diagnostic and therapeutic strategies. In routine clinical practice, clinicians rely on the patient's electronic health record (EHR) to provide context for their medical imaging interpretation, yet most deep learning models for radiology applications consider only pixel-value information without this clinical context; only a few integrate both clinical and imaging data. In this work, we develop and compare multimodal fusion models that combine volumetric pixel data and clinical patient data for automatic risk stratification of PE. Our best performing model is an intermediate fusion model that incorporates both bilinear attention and TabNet and can be trained end to end. The results show that multimodality boosts performance by up to 14%, with an area under the curve (AUC) of 0.96 for assessing PE severity, a sensitivity of 90%, and a specificity of 94%, pointing to the value of multimodal data for automatically assessing PE severity.
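The reported operating point (90% sensitivity, 94% specificity) follows from the confusion matrix at a chosen probability threshold; a minimal sketch of that computation, on invented toy data:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (recall on positives) and specificity (recall on negatives)
    from binary labels and thresholded predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Toy severe-PE labels and model scores, thresholded at 0.5 (illustrative only).
scores = [0.9, 0.8, 0.3, 0.7, 0.2, 0.1, 0.6, 0.4]
labels = [1, 1, 1, 1, 0, 0, 0, 0]
preds = [int(s >= 0.5) for s in scores]
sens, spec = sensitivity_specificity(labels, preds)
```

Sweeping the threshold and integrating the resulting (sensitivity, 1 − specificity) pairs is what yields the AUC the abstract reports.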
Affiliation(s)
- Noa Cahan
- Department of Biomedical Engineering, Tel Aviv University, Tel Aviv, Israel
- Eyal Klang
- Department of Diagnostic Imaging, Sheba Medical Center, Ramat Gan, Israel, affiliated with Tel Aviv University, Tel Aviv, Israel
- Edith M Marom
- Department of Diagnostic Imaging, Sheba Medical Center, Ramat Gan, Israel, affiliated with Tel Aviv University, Tel Aviv, Israel
- Shelly Soffer
- Department of Diagnostic Imaging, Sheba Medical Center, Ramat Gan, Israel, affiliated with Tel Aviv University, Tel Aviv, Israel
- Yiftach Barash
- Department of Diagnostic Imaging, Sheba Medical Center, Ramat Gan, Israel, affiliated with Tel Aviv University, Tel Aviv, Israel
- Evyatar Burshtein
- Department of Biomedical Engineering, Tel Aviv University, Tel Aviv, Israel
- Eli Konen
- Department of Diagnostic Imaging, Sheba Medical Center, Ramat Gan, Israel, affiliated with Tel Aviv University, Tel Aviv, Israel
- Hayit Greenspan
- Department of Biomedical Engineering, Tel Aviv University, Tel Aviv, Israel
- Biomedical Engineering and Imaging Institute, Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, United States

28
Malhotra R, Singh Saini B, Gupta S. An interpretable feature-learned model for overall survival classification of high-grade gliomas. Phys Med 2023; 110:102591. PMID: 37126962; DOI: 10.1016/j.ejmp.2023.102591.
Abstract
PURPOSE Accurate and well-defined survival prediction for high-grade gliomas (HGGs) is indispensable because of their high incidence and aggressiveness. This paper therefore presents a unified framework for fully automatic overall survival classification and its interpretation. METHODS AND MATERIALS Initially, a glioma detection model is used to detect tumorous images. A pre-processing module extracts 2D slices and creates a survival data array for the classification network. The classification pipeline then integrates two separate pathways: a modality-specific and a modality-concatenated pathway. The modality-specific pathway runs three separate CNNs to extract rich predictive features from three sub-regions of HGGs (peritumoral edema, enhancing tumor, and necrosis) using three neuroimaging modalities. In these pathways, the image vectors of the different modalities are also concatenated into the final fusion layer to avoid losing lower-level tumor features. Furthermore, to exploit intra-modality correlations, a modality-concatenated pathway is added to the classification pipeline. Experiments on the BraTS 2018 and BraTS 2019 benchmarks demonstrate that the proposed approach performs competitively in classifying HGG patients into three survival groups: short, mid, and long survivors. RESULTS The proposed approach achieves an overall classification accuracy, sensitivity, and specificity of about 0.998, 0.997, and 0.999, respectively, on the BraTS 2018 dataset; for BraTS 2019, these values are 1.000, 0.999, and 0.999. CONCLUSIONS The results indicate that the proposed model achieves the highest values of the evaluation metrics for overall survival classification of HGGs.
Affiliation(s)
- Radhika Malhotra
- Department of Electronics and Communication, Dr B R Ambedkar National Institute of Technology, Jalandhar, Punjab 144011, India
- Barjinder Singh Saini
- Department of Electronics and Communication, Dr B R Ambedkar National Institute of Technology, Jalandhar, Punjab 144011, India
- Savita Gupta
- Department of Computer Science and Engg., UIET, Sector 25, Panjab University, Chandigarh 160023, India

29
Karakis R, Gurkahraman K, Mitsis GD, Boudrias MH. Deep learning prediction of motor performance in stroke individuals using neuroimaging data. J Biomed Inform 2023; 141:104357. PMID: 37031755; DOI: 10.1016/j.jbi.2023.104357.
Abstract
The degree of motor impairment and the profile of recovery after stroke are difficult to predict for each individual. Measures obtained from clinical assessments, as well as neurophysiological and neuroimaging techniques, have been used as potential biomarkers of motor recovery, with limited accuracy to date. To address this, the present study aimed to develop a deep learning model based on structural brain images obtained from stroke participants and healthy volunteers. The following inputs were used in a multi-channel 3D convolutional neural network (CNN) model: fractional anisotropy, mean diffusivity, radial diffusivity, and axial diffusivity maps obtained from Diffusion Tensor Imaging (DTI), white and gray matter intensity values obtained from magnetic resonance imaging, and demographic data (e.g., age, gender). Upper limb motor function was classified into "Poor" and "Good" categories. To assess the performance of the DL model, we compared it to more standard machine learning (ML) classifiers, including k-nearest neighbors, support vector machines (SVM), decision trees, random forests, AdaBoost, and naïve Bayes, whose inputs were the features taken from the fully connected layer of the CNN model. The highest accuracy and area under the curve values were 0.92 and 0.92 for the 3D CNN and 0.91 and 0.91 for the SVM, respectively. The multi-channel 3D CNN with residual blocks, and the SVM supported by DL features, were more accurate than traditional ML methods in classifying upper limb motor impairment in the stroke population. These results suggest that combining volumetric DTI maps with measures of white and gray matter integrity can improve prediction of the degree of motor impairment after stroke. Identifying the potential for recovery early after a stroke could help allocate resources to optimize the functional independence and quality of life of these individuals.
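Feeding fully-connected-layer activations into classical ML classifiers, as done for the comparison above, can be sketched like this (the random-projection "feature extractor" is a stand-in for the trained CNN, and the data and labels are invented):

```python
import numpy as np

def extract_fc_features(volume: np.ndarray, proj: np.ndarray) -> np.ndarray:
    """Stand-in for taking the CNN's fully connected layer activations:
    a fixed projection of the flattened input plus a nonlinearity (assumption)."""
    return np.tanh(volume.reshape(-1) @ proj)

def knn_predict(train_x: np.ndarray, train_y: np.ndarray,
                x: np.ndarray, k: int = 3) -> int:
    """k-nearest-neighbour vote, one of the classical baselines compared in the study."""
    dists = np.linalg.norm(train_x - x, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return int(values[np.argmax(counts)])

# Toy feature space: two "Poor" (0) and two "Good" (1) motor-outcome embeddings.
train_x = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
train_y = np.array([0, 0, 1, 1])
label = knn_predict(train_x, train_y, np.array([0.0, 0.5]))
```

In the study itself the features come from the trained 3D CNN, and the same feature vectors are passed to SVM, decision-tree, and other baselines for comparison.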
Affiliation(s)
- Rukiye Karakis
- Department of Software Engineering, Faculty of Technology, Sivas Cumhuriyet University, Turkey
- Kali Gurkahraman
- Department of Computer Engineering, Faculty of Engineering, Sivas Cumhuriyet University, Turkey
- Georgios D Mitsis
- Department of Bioengineering, Faculty of Engineering, McGill University, Montreal, QC, Canada
- Marie-Hélène Boudrias
- School of Physical and Occupational Therapy, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
- BRAIN Laboratory, Jewish Rehabilitation Hospital, Site of Centre for Interdisciplinary Research of Greater Montreal (CRIR) and CISSS-Laval, QC, Canada

30
Liu X, Xing F, Gaggin HK, Kuo CCJ, El Fakhri G, Woo J. Successive subspace learning for cardiac disease classification with two-phase deformation fields from cine MRI. Proceedings of the IEEE International Symposium on Biomedical Imaging (ISBI) 2023. PMID: 38031559; PMCID: PMC10686280; DOI: 10.1109/isbi53787.2023.10230746.
Abstract
Cardiac cine magnetic resonance imaging (MRI) has been used to characterize cardiovascular diseases (CVD), often providing a noninvasive phenotyping tool. While recently developed deep learning approaches using cine MRI yield accurate characterization results, their performance is often degraded by small training samples. In addition, many deep learning models are deemed "black boxes": how they yield a prediction, and how reliable they are, remain largely elusive. To alleviate this, this work proposes a lightweight successive subspace learning (SSL) framework for CVD classification based on an interpretable feedforward design, in conjunction with a cardiac atlas. Specifically, our hierarchical SSL model is based on (i) neighborhood voxel expansion, (ii) unsupervised subspace approximation, (iii) supervised regression, and (iv) multi-level feature integration. In addition, using as input two-phase 3D deformation fields, at the end-diastolic and end-systolic phases, derived between the atlas and individual subjects offers an objective means of assessing CVD, even with small training samples. We evaluate our framework on the ACDC2017 database, comprising one healthy group and four disease groups. Compared with 3D CNN-based approaches, our framework achieves superior classification performance with 140× fewer parameters, which supports its potential value in clinical use.
Affiliation(s)
- Xiaofeng Liu
- Dept. of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Fangxu Xing
- Dept. of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Hanna K Gaggin
- Dept. of Medicine, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- C-C Jay Kuo
- Dept. of ECE, University of Southern California, Los Angeles, CA, USA
- Georges El Fakhri
- Dept. of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Jonghye Woo
- Dept. of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA

31
Wang M, Lee C, Wei Z, Ji H, Yang Y, Yang C. Clinical assistant decision-making model of tuberculosis based on electronic health records. BioData Min 2023; 16:11. PMID: 36927471; PMCID: PMC10022184; DOI: 10.1186/s13040-023-00328-y.
Abstract
BACKGROUND Tuberculosis is a dangerous infectious disease with the largest number of reported cases in China every year. Preventing missed diagnosis has an important impact on the prevention, treatment, and recovery of tuberculosis. The earliest pulmonary tuberculosis prediction models mainly used traditional image data combined with neural network models. However, a single data source tends to miss important information, such as primary symptoms and laboratory test results, that is available in multi-source data like medical records and tests. In this study, we propose a multi-stream integrated pulmonary tuberculosis diagnosis model based on structured and unstructured multi-source data from electronic health records. With the limited number of lung specialists and the high prevalence of tuberculosis, the application of this auxiliary diagnosis model can make substantial contributions to clinical settings. METHODS The subjects were patients at the respiratory department and infectious cases department of a large comprehensive hospital in China between 2015 to 2020. A total of 95,294 medical records were selected through a quality control process. Each record contains structured and unstructured data. First, numerical expressions of features for structured data were created. Then, feature engineering was performed through decision tree model, random forest, and GBDT. Features were included in the feature exclusion set as per their weights in descending order. When the importance of the set was higher than 0.7, this process was concluded. Finally, the contained features were used for model training. In addition, the unstructured free-text data was segmented at the character level and input into the model after indexing. 
Tuberculosis prediction was conducted with a multi-stream integration tuberculosis diagnosis model (MSI-PTDM), and its accuracy, AUC, sensitivity, and specificity were compared against the predictions of XGBoost, Text-CNN, Random Forest, SVM, and other models. RESULTS Through a variety of feature engineering methods, 20 characteristic factors were selected, such as hemoptysis as the chief complaint, cough, and the erythrocyte sedimentation rate test, and the influencing factors were analyzed using the Chinese diagnostic standard for pulmonary tuberculosis. The area under the curve values for MSI-PTDM, XGBoost, Text-CNN, RF, and SVM were 0.9858, 0.9571, 0.9486, 0.9428, and 0.9429, respectively. The sensitivity, specificity, and accuracy of MSI-PTDM were 93.18%, 96.96%, and 96.96%, respectively. The MSI-PTDM prediction model was installed at a doctor workstation and operated in a real clinical environment for 4 months. A total of 692,949 patients were monitored, including 484 patients with confirmed pulmonary tuberculosis, of whom the model identified 440. The positive sample recognition rate was 90.91%, the false-negative rate was 9.09%, the negative sample recognition rate was 96.17%, and the false-positive rate was 3.83%. CONCLUSIONS MSI-PTDM can process sparse data, dense data, and unstructured text data concurrently. The model adds a feature-domain vector embedding for the sparse medical features, representing single-valued sparse vectors as multi-dimensional dense hidden vectors, which both enriches the feature expression and alleviates the side effects of sparsity on model training. However, information may be lost when features are extracted from text, and processing the original unstructured text compensates for this loss to a certain extent, so that the model can learn from the data more comprehensively and effectively.
In addition, MSI-PTDM allows interaction between features, considers their combined effects for each patient, adds more complex nonlinear calculations, and improves the learning ability of the model. It has been verified on a test set and through deployment in an actual outpatient environment.
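The cumulative-importance feature selection described in this abstract (rank features by tree-ensemble weight, add them in descending order, stop once the selected set covers more than 0.7 of the total importance) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name and toy importance values are invented.

```python
# Sketch of importance-threshold feature selection: features are ranked by
# ensemble importance and added in descending order until the selected set
# accounts for more than `threshold` of the total importance.
def select_by_cumulative_importance(importances, threshold=0.7):
    """Return feature indices, taken in descending importance order,
    up to and including the one that pushes cumulative (normalized)
    importance past `threshold`."""
    total = sum(importances)
    ranked = sorted(range(len(importances)),
                    key=lambda i: importances[i], reverse=True)
    selected, cum = [], 0.0
    for i in ranked:
        selected.append(i)
        cum += importances[i] / total
        if cum > threshold:
            break
    return selected

# Toy importances for 6 hypothetical features (e.g. hemoptysis, cough, ESR...)
imp = [0.30, 0.25, 0.20, 0.15, 0.06, 0.04]
print(select_by_cumulative_importance(imp))  # [0, 1, 2]
```

In practice the importances would come from a trained decision tree, random forest, or GBDT model rather than a hand-written list.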
Affiliation(s)
- Mengying Wang: State Key Laboratory of Media Convergence and Communication, Communication University of China, No. 1 Dingfuzhuang East Street, Chaoyang District, Beijing, China
- Cuixia Lee: Peking University Third Hospital, Beijing, China
- Zhenhao Wei: Goodwill Hessian Health Technology Co., Ltd, Beijing, China
- Hong Ji: Peking University Third Hospital, Beijing, China
- Yingyun Yang: State Key Laboratory of Media Convergence and Communication, Communication University of China, No. 1 Dingfuzhuang East Street, Chaoyang District, Beijing, China
- Cheng Yang: State Key Laboratory of Media Convergence and Communication, Communication University of China, No. 1 Dingfuzhuang East Street, Chaoyang District, Beijing, China

32
Yoon T, Kang D. Bimodal CNN for cardiovascular disease classification by co-training ECG grayscale images and scalograms. Sci Rep 2023; 13:2937. PMID: 36804469; PMCID: PMC9941114; DOI: 10.1038/s41598-023-30208-8.
Abstract
This study aimed to develop a bimodal convolutional neural network (CNN) by co-training grayscale images and scalograms of ECG for cardiovascular disease (CVD) classification. The bimodal CNN model was developed using a 12-lead ECG database collected from Chapman University and Shaoxing People's Hospital. The preprocessed database contains 10,588 ECG recordings covering 11 heart rhythms labeled by a specialist physician. The preprocessed one-dimensional ECG signals were converted into two-dimensional grayscale images and scalograms, which were fed simultaneously to the bimodal CNN model as dual input images. The proposed model aims to improve CVD classification performance by making use of both ECG grayscale images and scalograms. The bimodal CNN model consists of two identical Inception-v3 backbone models pre-trained on the ImageNet database. The proposed model was fine-tuned with 6780 dual-input images, validated with 1694 dual-input images, and tested on 2114 dual-input images. The bimodal CNN model using two identical Inception-v3 backbones achieved the best AUC (0.992), accuracy (95.08%), sensitivity (0.942), precision (0.946), and F1-score (0.944) in lead II. An ensemble model over all leads obtained an AUC of 0.994, accuracy of 95.74%, sensitivity of 0.950, precision of 0.953, and F1-score of 0.952. The bimodal CNN model showed better diagnostic performance than logistic regression, XGBoost, LSTM, and single-input CNN models trained on grayscale images or scalograms alone. The proposed bimodal CNN model would be of great help in diagnosing cardiovascular diseases.
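The dual-input idea above, where two identical backbones process the grayscale image and the scalogram separately and their embeddings are fused before classification, can be sketched in a few lines of numpy. This is a minimal stand-in, not the paper's model: the shapes, weights, and the linear "backbone" are illustrative replacements for the Inception-v3 extractors.

```python
# Minimal numpy sketch of bimodal late fusion: one shared-architecture
# feature extractor per modality, embeddings concatenated, then classified.
import numpy as np

rng = np.random.default_rng(0)

def backbone(x, W):
    # Stand-in for a CNN feature extractor: flatten + linear + ReLU.
    return np.maximum(x.reshape(x.shape[0], -1) @ W, 0.0)

W = rng.normal(size=(16 * 16, 32))     # extractor weights (same for both streams)
Wc = rng.normal(size=(64, 11))         # classifier over 11 heart rhythms

gray = rng.normal(size=(4, 16, 16))    # batch of toy grayscale "images"
scal = rng.normal(size=(4, 16, 16))    # batch of toy "scalograms"

fused = np.concatenate([backbone(gray, W), backbone(scal, W)], axis=1)
logits = fused @ Wc
pred = logits.argmax(axis=1)           # one predicted rhythm per dual-input sample
print(pred.shape)                      # (4,)
```

The design choice being illustrated is late fusion: each modality keeps its own feature space until the concatenation step, so either stream can be trained or swapped independently.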
Affiliation(s)
- Taeyoung Yoon: Department of Healthcare Information Technology, Inje University, Inje-ro, Gimhae-si, 50834, Republic of Korea
- Daesung Kang: Department of Healthcare Information Technology, Inje University, Inje-ro, Gimhae-si, 50834, Republic of Korea

33
Yan J, Sun Q, Tan X, Liang C, Bai H, Duan W, Mu T, Guo Y, Qiu Y, Wang W, Yao Q, Pei D, Zhao Y, Liu D, Duan J, Chen S, Sun C, Wang W, Liu Z, Hong X, Wang X, Guo Y, Xu Y, Liu X, Cheng J, Li ZC, Zhang Z. Image-based deep learning identifies glioblastoma risk groups with genomic and transcriptomic heterogeneity: a multi-center study. Eur Radiol 2023; 33:904-914. PMID: 36001125; DOI: 10.1007/s00330-022-09066-x.
Abstract
OBJECTIVES To develop and validate a deep learning imaging signature (DLIS) for risk stratification in patients with glioblastoma multiforme (GBM), and to investigate the biological pathways and genetic alterations underlying the DLIS. METHODS The DLIS was developed from multi-parametric MRI based on a training set (n = 600) and validated on an internal validation set (n = 164), an external test set 1 (n = 100), an external test set 2 (n = 161), and a public TCIA set (n = 88). A co-profiling framework based on a radiogenomics analysis dataset (n = 127) using multiscale high-dimensional data, including imaging, transcriptome, and genome, was established to uncover the biological pathways and genetic alterations underpinning the DLIS. RESULTS The DLIS was associated with survival (log-rank p < 0.001) and was an independent predictor (p < 0.001). The integrated nomogram incorporating the DLIS achieved higher C-indices than the clinicomolecular nomogram (net reclassification improvement 0.39, p < 0.001). The DLIS significantly correlated with core pathways of GBM (the apoptosis- and cell cycle-related P53 and RB pathways, and the cell proliferation-related RTK pathway), as well as key genetic alterations (CDKN2A deletion). The prognostic value of DLIS-correlated genes was externally confirmed in the TCGA/CGGA sets (p < 0.01). CONCLUSIONS Our study offers a biologically interpretable deep learning predictor of survival outcomes in patients with GBM, which is crucial for better understanding GBM patients' prognosis and guiding individualized treatment. KEY POINTS • MRI-based deep learning imaging signature (DLIS) stratifies GBM into risk groups with distinct molecular characteristics. • The DLIS is associated with the P53, RB, and RTK pathways and with CDKN2A deletion. • The prognostic value of DLIS-correlated pathway genes is externally demonstrated.
Affiliation(s)
- Jing Yan: Department of MRI, The First Affiliated Hospital of Zhengzhou University, Jian she Dong Road 1, Zhengzhou, 450052, Henan Province, China
- Qiuchang Sun: Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; University of Chinese Academy of Sciences, Beijing, China
- Xiangliang Tan: Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Chaofeng Liang: Department of Neurosurgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, 510630, China
- Hongmin Bai: Department of Neurosurgery, Guangzhou General Hospital of Guangzhou Military Command, Guangzhou, 510010, China
- Wenchao Duan: Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Jian she Dong Road 1, Zhengzhou, 450052, Henan Province, China
- Tianhao Mu: Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; HaploX Biotechnology, Shenzhen, Guangdong, China
- Yang Guo: Department of Neurosurgery, Henan Provincial Hospital, Zhengzhou, 450052, Henan Province, China
- Yuning Qiu: Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Jian she Dong Road 1, Zhengzhou, 450052, Henan Province, China
- Weiwei Wang: Department of Pathology, The First Affiliated Hospital of Zhengzhou University, Jian she Dong Road 1, Zhengzhou, 450052, Henan Province, China
- Qiaoli Yao: Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Dongling Pei: Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Jian she Dong Road 1, Zhengzhou, 450052, Henan Province, China
- Yuanshen Zhao: Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Danni Liu: HaploX Biotechnology, Shenzhen, Guangdong, China
- Jingxian Duan: Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Shifu Chen: Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; HaploX Biotechnology, Shenzhen, Guangdong, China
- Chen Sun: Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Jian she Dong Road 1, Zhengzhou, 450052, Henan Province, China
- Wenqing Wang: Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Jian she Dong Road 1, Zhengzhou, 450052, Henan Province, China
- Zhen Liu: Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Jian she Dong Road 1, Zhengzhou, 450052, Henan Province, China
- Xuanke Hong: Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Jian she Dong Road 1, Zhengzhou, 450052, Henan Province, China
- Xiangxiang Wang: Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Jian she Dong Road 1, Zhengzhou, 450052, Henan Province, China
- Yu Guo: Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Jian she Dong Road 1, Zhengzhou, 450052, Henan Province, China
- Yikai Xu: Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Xianzhi Liu: Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Jian she Dong Road 1, Zhengzhou, 450052, Henan Province, China
- Jingliang Cheng: Department of MRI, The First Affiliated Hospital of Zhengzhou University, Jian she Dong Road 1, Zhengzhou, 450052, Henan Province, China
- Zhi-Cheng Li: Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; University of Chinese Academy of Sciences, Beijing, China; Shenzhen United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, 518045, China
- Zhenyu Zhang: Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Jian she Dong Road 1, Zhengzhou, 450052, Henan Province, China

34
Assessing Metabolic Markers in Glioblastoma Using Machine Learning: A Systematic Review. Metabolites 2023; 13:161. PMID: 36837779; PMCID: PMC9958885; DOI: 10.3390/metabo13020161.
Abstract
Glioblastoma (GBM) is a common and deadly brain tumor with late diagnoses and poor prognoses. Machine learning (ML) is an emerging tool that can create highly accurate diagnostic and prognostic prediction models. This paper aimed to systematically search the literature on ML for GBM metabolism and assess recent advancements. A literature search was performed using predetermined search terms. Articles describing the use of an ML algorithm for GBM metabolism were included. Ten studies met the inclusion criteria: diagnostic (n = 3, 30%), prognostic (n = 6, 60%), or both (n = 1, 10%). Most studies analyzed data from multiple databases, while 50% (n = 5) also included original samples. At least 2536 data samples were run through an ML algorithm. Twenty-seven ML algorithms were recorded, with a mean of 2.8 algorithms per study. Algorithms were supervised (n = 24, 89%) or unsupervised (n = 3, 11%), and handled continuous (n = 19, 70%) or categorical (n = 8, 30%) data. The mean reported accuracy and AUC of the ROC were 95.63% and 0.779, respectively. One hundred six metabolic markers were identified, but only EMP3 was reported in multiple studies. Many studies have identified potential biomarkers for GBM diagnosis and prognostication. These algorithms show promise; however, a consensus on even a handful of biomarkers has not yet been reached.
35
Yan T, Yan Z, Liu L, Zhang X, Chen G, Xu F, Li Y, Zhang L, Peng M, Wang L, Li D, Zhao D. Survival prediction for patients with glioblastoma multiforme using a Cox proportional hazards denoising autoencoder network. Front Comput Neurosci 2023; 16:916511. PMID: 36704230; PMCID: PMC9871481; DOI: 10.3389/fncom.2022.916511.
Abstract
Objectives This study aimed to establish and validate a prognostic model based on magnetic resonance imaging and clinical features to predict the survival time of patients with glioblastoma multiforme (GBM). Methods A convolutional denoising autoencoder (DAE) network combined with the loss function of the Cox proportional hazards regression model was used to extract features for survival prediction. In addition, the Kaplan-Meier curve, Schoenfeld residual analysis, the time-dependent receiver operating characteristic curve, a nomogram, and a calibration curve were used to assess survival prediction ability. Results The concordance index (C-index) of the survival prediction model, which combines the DAE and the Cox proportional hazards regression model, reached 0.78 in the training set, 0.75 in the validation set, and 0.74 in the test set. Patients were divided into high- and low-risk groups based on the median prognostic index (PI). The Kaplan-Meier curve was used for survival analysis (p < 2e-16 in the training set, p = 3e-04 in the validation set, and p = 0.007 in the test set), which showed that the survival probabilities of the two groups differed significantly and that the PI produced by the network played an influential role in predicting survival probability. In the Schoenfeld residual check of the PI, the fitted curve of the scatter plot was roughly parallel to the x-axis and the test p-value was 0.11, indicating that the residuals of the PI were independent of survival time, i.e., the proportional hazards assumption held. The areas under the curve of the training set were 0.843, 0.871, 0.903, and 0.941; those of the validation set were 0.687, 0.895, 1.000, and 0.967; and those of the test set were 0.757, 0.852, 0.683, and 0.898.
Conclusion The survival prediction model, which combines the DAE and the Cox proportional hazards regression model, can effectively predict the prognosis of patients with GBM.
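The concordance index reported throughout this abstract measures, over comparable patient pairs, how often a higher predicted risk goes with a shorter observed survival. A minimal sketch of that computation (simplified ties handling, invented toy data; libraries such as lifelines implement the full version):

```python
# Sketch of the concordance index (C-index): the fraction of comparable
# patient pairs in which the patient with the higher predicted risk
# actually experienced the event earlier. Ties in risk count as half.
def c_index(times, risks, events):
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if patient i's event was observed
            # and occurred before patient j's time.
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

times = [2, 4, 6, 8]            # survival times (months)
risks = [0.9, 0.7, 0.4, 0.1]    # higher predicted risk = shorter survival
events = [1, 1, 1, 1]           # all events observed (no censoring)
print(c_index(times, risks, events))  # 1.0: perfectly concordant
```

A C-index of 0.5 corresponds to random ranking, so the reported 0.74 to 0.78 indicates a clearly better-than-chance ordering of patient risk.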
Affiliation(s)
- Ting Yan: Key Laboratory of Cellular Physiology of the Ministry of Education, Department of Pathology, Shanxi Medical University, Taiyuan, Shanxi, China
- Zhenpeng Yan: Key Laboratory of Cellular Physiology of the Ministry of Education, Department of Pathology, Shanxi Medical University, Taiyuan, Shanxi, China
- Lili Liu: Key Laboratory of Cellular Physiology of the Ministry of Education, Department of Pathology, Shanxi Medical University, Taiyuan, Shanxi, China
- Xiaoyu Zhang: College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Guohui Chen: Key Laboratory of Cellular Physiology of the Ministry of Education, Department of Pathology, Shanxi Medical University, Taiyuan, Shanxi, China
- Feng Xu: College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Ying Li: College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Lijuan Zhang: Shanxi Provincial People's Hospital, Taiyuan, China
- Meilan Peng: Key Laboratory of Cellular Physiology of the Ministry of Education, Department of Pathology, Shanxi Medical University, Taiyuan, Shanxi, China
- Lu Wang: Key Laboratory of Cellular Physiology of the Ministry of Education, Department of Pathology, Shanxi Medical University, Taiyuan, Shanxi, China
- Dandan Li (corresponding author): College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Dong Zhao (corresponding author): Department of Stomatology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China

36
Takagi Y, Hashimoto N, Masuda H, Miyoshi H, Ohshima K, Hontani H, Takeuchi I. Transformer-based personalized attention mechanism for medical images with clinical records. J Pathol Inform 2023; 14:100185. PMID: 36691660; PMCID: PMC9860154; DOI: 10.1016/j.jpi.2022.100185.
Abstract
In medical image diagnosis, identifying the attention region, i.e., the region of interest on which the diagnosis is based, is an important task. Various methods have been developed to automatically identify target regions from given medical images. However, in actual medical practice, the diagnosis is made based on both the images and various clinical records. Consequently, pathologists examine medical images with prior knowledge of the patients, and the attention regions may change depending on the clinical records. In this study, we propose the Personalized Attention Mechanism (PersAM) method, by which the attention regions in medical images are determined according to the clinical records. The primary idea underlying the PersAM method is the encoding of the relationships between medical images and clinical records using a variant of the Transformer architecture. To demonstrate the effectiveness of the PersAM method, we applied it to a large-scale digital pathology problem: identifying the subtypes of 842 malignant lymphoma patients based on their gigapixel whole-slide images and clinical records.
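The core mechanism described here, attention over image regions that is conditioned on the clinical record, can be illustrated with a tiny numpy sketch. This is not the PersAM architecture (which is a Transformer variant over gigapixel slides); it only shows, with invented embeddings, how deriving the attention query from the record makes the same image produce different attention maps for different records.

```python
# Toy record-conditioned attention: the query comes from the clinical-record
# embedding, keys/values from image-patch embeddings, so attention weights
# depend on the record. All vectors here are random stand-ins.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(record, patches):
    # Scaled dot-product attention of one record query over patch keys.
    scores = patches @ record / np.sqrt(patches.shape[1])
    w = softmax(scores)            # attention weights over patches
    return w, w @ patches          # weights and attended image feature

rng = np.random.default_rng(1)
patches = rng.normal(size=(5, 8))  # 5 patch embeddings of dimension 8
record_a = rng.normal(size=8)      # clinical-record embedding, patient A
record_b = rng.normal(size=8)      # clinical-record embedding, patient B

wa, _ = attend(record_a, patches)
wb, _ = attend(record_b, patches)
print(np.allclose(wa, wb))         # False: same image, different attention
```

The weights `wa` sum to one over the patches, which is what makes them interpretable as an attention map over the image.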
Affiliation(s)
- Yusuke Takagi: Department of Computer Science, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 4668555, Japan
- Noriaki Hashimoto: RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 1030027, Japan
- Hiroki Masuda: Department of Computer Science, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 4668555, Japan
- Hiroaki Miyoshi: Department of Pathology, Kurume University School of Medicine, 67 Asahi-machi, Kurume 8300011, Japan
- Koichi Ohshima: Department of Pathology, Kurume University School of Medicine, 67 Asahi-machi, Kurume 8300011, Japan
- Hidekata Hontani: Department of Computer Science, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 4668555, Japan
- Ichiro Takeuchi: RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 1030027, Japan; Department of Mechanical Systems Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 4648603, Japan

37
Yang Z, Chen M, Kazemimoghadam M, Ma L, Stojadinovic S, Wardak Z, Timmerman R, Dan T, Lu W, Gu X. Ensemble learning for glioma patients overall survival prediction using pre-operative MRIs. Phys Med Biol 2022; 67. PMID: 36384039; PMCID: PMC9990877; DOI: 10.1088/1361-6560/aca375.
Abstract
Objective: Gliomas are the most common primary brain tumors. Approximately 70% of the glioma patients diagnosed with glioblastoma have an average overall survival (OS) of only ~16 months. Early survival prediction is essential for treatment decision-making in glioma patients. Here we proposed an ensemble learning approach to predict the post-operative OS of glioma patients using only pre-operative MRIs. Approach: Our dataset was from the Medical Image Computing and Computer Assisted Intervention Brain Tumor Segmentation challenge 2020, which consists of multimodal pre-operative MRI scans of 235 glioma patients with survival days recorded. The backbone of our approach was a Siamese network consisting of twinned ResNet-based feature extractors followed by a 3-layer classifier. During training, the feature extractors explored intra- and inter-class traits by minimizing a contrastive loss over randomly paired 2D pre-operative MRIs, and the classifier used the extracted features to generate labels with a cost defined by cross-entropy loss. During testing, the extracted features were also used to define the distance between the test sample and the reference composed of training data, generating an additional predictor via K-NN classification. The final label was the ensemble of the classifications from both the Siamese model and the K-NN model. Main results: Our approach classifies glioma patients into 3 OS classes: long-survivors (>15 months), mid-survivors (between 10 and 15 months), and short-survivors (<10 months). Performance was assessed by the accuracy (ACC) and the area under the curve (AUC) of the 3-class classification. The final result achieved an ACC of 65.22% and an AUC of 0.81. Significance: Our Siamese-network-based ensemble learning approach demonstrated a promising ability to mine discriminative features with minimal manual processing and generalization requirements. This prediction strategy can potentially be applied to assist timely clinical decision-making.
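The pairwise contrastive loss that trains the twinned feature extractors above pulls same-class embeddings together and pushes different-class embeddings apart up to a margin. A minimal sketch (the margin and the toy embeddings are illustrative, not the paper's values):

```python
# Contrastive loss for a Siamese pair of embeddings z1, z2:
#   same class:      0.5 * d^2            (pull together)
#   different class: 0.5 * max(0, m-d)^2  (push apart, up to margin m)
import numpy as np

def contrastive_loss(z1, z2, same_class, margin=1.0):
    d = np.linalg.norm(z1 - z2)
    if same_class:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2

a = np.array([0.0, 0.0])
b = np.array([0.3, 0.4])   # Euclidean distance 0.5 from a
print(contrastive_loss(a, b, same_class=True))    # 0.125
print(contrastive_loss(a, b, same_class=False))   # 0.5 * (1 - 0.5)^2 = 0.125
```

Once the pair distance exceeds the margin, the different-class term is zero, so the loss stops pushing pairs that are already well separated.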
Affiliation(s)
- Zi Yang: Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Mingli Chen: Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Mahdieh Kazemimoghadam: Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Lin Ma: Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Strahinja Stojadinovic: Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Zabi Wardak: Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Robert Timmerman: Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Tu Dan: Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Weiguo Lu: Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Xuejun Gu: Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA; Department of Radiation Oncology, Stanford University, Palo Alto, CA 94305, USA

38
García-García S, García-Galindo M, Arrese I, Sarabia R, Cepeda S. Current Evidence, Limitations and Future Challenges of Survival Prediction for Glioblastoma Based on Advanced Noninvasive Methods: A Narrative Review. Medicina (Kaunas) 2022; 58:1746. PMID: 36556948; PMCID: PMC9786785; DOI: 10.3390/medicina58121746.
Abstract
Background and Objectives: Survival estimation for patients diagnosed with Glioblastoma (GBM) is important information to consider in patient management and communication. Despite some known risk factors, survival estimation remains a major challenge. Novel non-invasive technologies such as radiomics and artificial intelligence (AI) have been implemented to increase the accuracy of these predictions. In this article, we reviewed and discussed the most significant available research on survival estimation for GBM through advanced non-invasive methods. Materials and Methods: The PubMed database was queried for articles reporting on survival prognosis for GBM through advanced image and data management methods. Articles including the following terms in their title or abstract were initially screened: ((glioma) AND (survival)) AND ((artificial intelligence) OR (radiomics)). Only English full-text articles, reporting on humans, published as of 1 September 2022 were considered. Articles not reporting on overall survival, evaluating the effects of new therapies, or including other tumors were excluded. Studies with a radiomics-based methodology were evaluated using the radiomics quality score (RQS). Results: 382 articles were identified. After applying the inclusion criteria, 46 articles remained for further analysis. These articles were thoroughly assessed, summarized, and discussed. The RQS results revealed some of the limitations of current radiomics investigation in this field. Limitations of the analyzed studies included data availability, patient selection, and heterogeneity of methodologies. Future challenges in this field include increasing data availability, improving the general understanding of how AI handles data, and establishing solid correlations between image features and tumor biology. Conclusions: Radiomics and AI methods of data processing offer a new paradigm of possibilities to tackle the question of survival prognosis in GBM.
Affiliation(s)
- Sergio García-García (corresponding author): Department of Neurosurgery, University Hospital Río Hortega, Dulzaina 2, 47012 Valladolid, Spain
- Manuel García-Galindo: Faculty of Medicine, University of Valladolid, Avenida Ramón y Cajal 7, 47003 Valladolid, Spain
- Ignacio Arrese: Department of Neurosurgery, University Hospital Río Hortega, Dulzaina 2, 47012 Valladolid, Spain
- Rosario Sarabia: Department of Neurosurgery, University Hospital Río Hortega, Dulzaina 2, 47012 Valladolid, Spain
- Santiago Cepeda: Department of Neurosurgery, University Hospital Río Hortega, Dulzaina 2, 47012 Valladolid, Spain

39
Use of deep learning-based radiomics to differentiate Parkinson's disease patients from normal controls: a study based on [18F]FDG PET imaging. Eur Radiol 2022; 32:8008-8018. PMID: 35674825; DOI: 10.1007/s00330-022-08799-z.
Abstract
OBJECTIVES We proposed a novel deep learning-based radiomics (DLR) model to diagnose Parkinson's disease (PD) based on [18F]fluorodeoxyglucose (FDG) PET images. METHODS In this two-center study, 255 normal controls (NCs) and 103 PD patients were enrolled from Huashan Hospital, China; 26 NCs and 22 PD patients were enrolled as a separate test group from Wuxi 904 Hospital, China. The proposed DLR model consisted of a convolutional neural network-based feature encoder and a support vector machine (SVM)-based classifier. The DLR model was trained and validated in the Huashan cohort and tested in the Wuxi cohort, and accuracy, sensitivity, specificity, and receiver operating characteristic (ROC) curves were used to describe the model's performance. Comparative experiments were performed against four other models, including the scale, radiomics, and standard uptake value ratio (SUVR) models. RESULTS The DLR model was superior to the other models in differentiating PD patients from NCs, with an accuracy of 95.17% (95% confidence interval, CI: 90.35% to 98.13%) in the Huashan cohort. Moreover, the DLR model also performed better than routine methods in diagnosing early PD, with an accuracy of 85.58% (95% CI: 78.60% to 91.57%) in the Huashan cohort. CONCLUSIONS We developed a DLR model based on [18F]FDG PET images that showed good performance in the noninvasive, individualized prediction of PD and was superior to traditional handcrafted methods. This model has the potential to guide and facilitate clinical diagnosis and contribute to the development of precision treatment. KEY POINTS • The DLR method applied to [18F]FDG PET images helps clinicians to distinguish PD and PD subgroups from normal controls. • A prospective two-center study showed that the DLR method provides greater diagnostic accuracy.
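The two-stage pattern described in this abstract, a deep feature encoder followed by a shallow SVM-style classifier, can be sketched in miniature. Everything here is invented for illustration (a trivial linear "encoder" and hard-coded decision weights stand in for the paper's CNN encoder and trained SVM); it only shows the encode-then-classify structure.

```python
# Minimal sketch of the DLR two-stage pattern: encode image -> features,
# then apply a linear SVM-style decision function to the features.
import numpy as np

def encode(img, W):
    # Stand-in for the CNN feature encoder: flatten + linear + tanh.
    return np.tanh(img.ravel() @ W)

def svm_predict(feat, w, b):
    # Linear decision function: sign of w.x + b (1 = PD, 0 = NC).
    return int(feat @ w + b > 0)

W = np.eye(4)                                   # trivial "encoder" weights
w, b = np.array([1.0, 1.0, -1.0, -1.0]), 0.0    # hypothetical decision weights

pd_like = np.array([[2.0, 2.0], [-2.0, -2.0]])  # toy 2x2 "image", PD-like pattern
nc_like = np.array([[-2.0, -2.0], [2.0, 2.0]])  # toy 2x2 "image", NC-like pattern

print(svm_predict(encode(pd_like, W), w, b))    # 1
print(svm_predict(encode(nc_like, W), w, b))    # 0
```

Keeping the classifier separate from the encoder, as the paper does, lets the small labeled cohort train only the SVM while the encoder supplies fixed deep features.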
40
Lipkova J, Chen RJ, Chen B, Lu MY, Barbieri M, Shao D, Vaidya AJ, Chen C, Zhuang L, Williamson DFK, Shaban M, Chen TY, Mahmood F. Artificial intelligence for multimodal data integration in oncology. Cancer Cell 2022; 40:1095-1110. PMID: 36220072; PMCID: PMC10655164; DOI: 10.1016/j.ccell.2022.09.012.
Abstract
In oncology, the patient state is characterized by a whole spectrum of modalities, ranging from radiology, histology, and genomics to electronic health records. Current artificial intelligence (AI) models operate mainly in the realm of a single modality, neglecting the broader clinical context, which inevitably diminishes their potential. Integration of different data modalities provides opportunities to increase robustness and accuracy of diagnostic and prognostic models, bringing AI closer to clinical practice. AI models are also capable of discovering novel patterns within and across modalities suitable for explaining differences in patient outcomes or treatment resistance. The insights gleaned from such models can guide exploration studies and contribute to the discovery of novel biomarkers and therapeutic targets. To support these advances, here we present a synopsis of AI methods and strategies for multimodal data fusion and association discovery. We outline approaches for AI interpretability and directions for AI-driven exploration through multimodal data interconnections. We examine challenges in clinical adoption and discuss emerging solutions.
Affiliation(s)
- Jana Lipkova
  - Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Richard J Chen
  - Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA; Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Bowen Chen
  - Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Department of Computer Science, Harvard University, Cambridge, MA, USA
- Ming Y Lu
  - Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA; Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology (MIT), Cambridge, MA, USA
- Matteo Barbieri
  - Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Daniel Shao
  - Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Harvard-MIT Health Sciences and Technology (HST), Cambridge, MA, USA
- Anurag J Vaidya
  - Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Harvard-MIT Health Sciences and Technology (HST), Cambridge, MA, USA
- Chengkuan Chen
  - Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Luoting Zhuang
  - Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Drew F K Williamson
  - Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Muhammad Shaban
  - Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Tiffany Y Chen
  - Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Faisal Mahmood
  - Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA; Harvard Data Science Initiative, Harvard University, Cambridge, MA, USA.
41
di Noia C, Grist JT, Riemer F, Lyasheva M, Fabozzi M, Castelli M, Lodi R, Tonon C, Rundo L, Zaccagna F. Predicting Survival in Patients with Brain Tumors: Current State-of-the-Art of AI Methods Applied to MRI. Diagnostics (Basel) 2022; 12:2125. [PMID: 36140526 PMCID: PMC9497964 DOI: 10.3390/diagnostics12092125]
Abstract
Given growing clinical needs, in recent years Artificial Intelligence (AI) techniques have increasingly been used to define the best approaches for survival assessment and prediction in patients with brain tumors. Advances in computational resources, and the collection of (mainly public) databases, have promoted this rapid development. This narrative review aimed to survey current applications of AI in predicting survival in patients with brain tumors, with a focus on Magnetic Resonance Imaging (MRI). An extensive search was performed on PubMed and Google Scholar using a Boolean query based on MeSH terms, restricted to the period between 2012 and 2022. Fifty studies were selected, mainly based on Machine Learning (ML), Deep Learning (DL), radiomics-based methods, and methods that exploit traditional imaging techniques for survival assessment. We focused on two distinct tasks related to survival assessment: the first, classifying subjects into survival classes (short- and long-term, or short-, mid-, and long-term) to stratify patients into distinct groups; the second, quantifying the individual survival interval in days or months. Our survey showed excellent state-of-the-art methods for the first task, with accuracy up to ∼98%. The second task appears to be the more challenging, but state-of-the-art techniques showed promising results, albeit with limitations, with a C-index up to ∼0.91. In conclusion, the available computational methods perform differently according to the specific task, and the choice of the best one to use is not clear-cut and depends on many aspects. Unequivocally, the use of features derived from quantitative imaging has been shown to be advantageous for AI applications, including survival prediction. This evidence from the literature motivates further research in the field of AI-powered methods for survival prediction in patients with brain tumors, in particular using the wealth of information provided by quantitative MRI techniques.
Affiliation(s)
- Christian di Noia
  - Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum—University of Bologna, 40125 Bologna, Italy
- James T. Grist
  - Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, UK
  - Department of Radiology, Oxford University Hospitals NHS Foundation Trust, Oxford OX3 9DU, UK
  - Oxford Centre for Clinical Magnetic Research Imaging, University of Oxford, Oxford OX3 9DU, UK
  - Institute of Cancer and Genomic Sciences, University of Birmingham, Birmingham B15 2SY, UK
- Frank Riemer
  - Mohn Medical Imaging and Visualization Centre (MMIV), Department of Radiology, Haukeland University Hospital, N-5021 Bergen, Norway
- Maria Lyasheva
  - Division of Cardiovascular Medicine, Radcliffe Department of Medicine, University of Oxford, John Radcliffe Hospital, Oxford OX3 9DU, UK
- Miriana Fabozzi
  - Centro Medico Polispecialistico (CMO), 80058 Torre Annunziata, Italy
- Mauro Castelli
  - NOVA Information Management School (NOVA IMS), Universidade NOVA de Lisboa, Campus de Campolide, 1070-312 Lisboa, Portugal
- Raffaele Lodi
  - Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum—University of Bologna, 40125 Bologna, Italy
  - Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, 40139 Bologna, Italy
- Caterina Tonon
  - Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum—University of Bologna, 40125 Bologna, Italy
  - Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, 40139 Bologna, Italy
- Leonardo Rundo
  - Department of Information and Electrical Engineering and Applied Mathematics, University of Salerno, 84084 Fisciano, Italy
- Fulvio Zaccagna
  - Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum—University of Bologna, 40125 Bologna, Italy
  - Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, 40139 Bologna, Italy
  - Correspondence: ; Tel.: +39-0514969951
42
Zhu M, Li S, Kuang Y, Hill VB, Heimberger AB, Zhai L, Zhai S. Artificial intelligence in the radiomic analysis of glioblastomas: A review, taxonomy, and perspective. Front Oncol 2022; 12:924245. [PMID: 35982952 PMCID: PMC9379255 DOI: 10.3389/fonc.2022.924245]
Abstract
Radiological imaging techniques, including magnetic resonance imaging (MRI) and positron emission tomography (PET), are the standard-of-care non-invasive diagnostic approaches widely applied in neuro-oncology. Unfortunately, accurate interpretation of radiological imaging data is constantly challenged by the indistinguishable radiological image features shared by different pathological changes associated with tumor progression and/or various therapeutic interventions. In recent years, machine learning (ML)-based artificial intelligence (AI) technology has been widely applied in medical image processing and bioinformatics due to its advantages in implicit image feature extraction and integrative data analysis. Despite its recent rapid development, ML technology still faces many hurdles for broader application in neuro-oncological radiomic analysis, such as the lack of large, accessible, standardized real-patient brain tumor radiomic datasets and the difficulty of reliably predicting tumor response to various treatments. Therefore, understanding ML-based AI technologies is critically important to help address the skyrocketing demands of neuro-oncology clinical deployments. Here, we provide an overview of the latest advancements in ML techniques for brain tumor radiomic analysis, emphasizing proprietary and public dataset preparation and state-of-the-art ML models for brain tumor diagnosis, classification (e.g., primary and secondary tumors), discrimination between treatment effects (pseudoprogression, radiation necrosis) and true progression, survival prediction, inflammation, and identification of brain tumor biomarkers. We also compare the key features of ML models in the realm of neuroradiology with ML models employed in other medical imaging fields and discuss open research challenges and directions for future work in this nascent precision medicine area.
Affiliation(s)
- Ming Zhu
  - Department of Electrical and Computer Engineering, University of Nevada Las Vegas, Las Vegas, NV, United States
- Sijia Li
  - Kirk Kerkorian School of Medicine, University of Nevada Las Vegas, Las Vegas, NV, United States
- Yu Kuang
  - Medical Physics Program, Department of Health Physics, University of Nevada Las Vegas, Las Vegas, NV, United States
- Virginia B. Hill
  - Department of Radiology, Feinberg School of Medicine, Northwestern University, Chicago, IL, United States
- Amy B. Heimberger
  - Department of Neurological Surgery, Feinberg School of Medicine, Northwestern University, Chicago, IL, United States
  - Malnati Brain Tumor Institute of the Lurie Comprehensive Cancer Center, Feinberg School of Medicine, Northwestern University, Chicago, IL, United States
- Lijie Zhai
  - Department of Neurological Surgery, Feinberg School of Medicine, Northwestern University, Chicago, IL, United States
  - Malnati Brain Tumor Institute of the Lurie Comprehensive Cancer Center, Feinberg School of Medicine, Northwestern University, Chicago, IL, United States
  - *Correspondence: Lijie Zhai, ; Shengjie Zhai,
- Shengjie Zhai
  - Department of Electrical and Computer Engineering, University of Nevada Las Vegas, Las Vegas, NV, United States
  - *Correspondence: Lijie Zhai, ; Shengjie Zhai,
43
Static-Dynamic coordinated Transformer for Tumor Longitudinal Growth Prediction. Comput Biol Med 2022; 148:105922. [DOI: 10.1016/j.compbiomed.2022.105922]
44
Tang Z, Cao H, Xu Y, Yang Q, Wang J, Zhang H. Overall survival time prediction for glioblastoma using multimodal deep KNN. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac6e25]
Abstract
Glioblastoma (GBM) is a severe malignant brain tumor with poor prognosis, and overall survival (OS) time prediction is of great clinical value for customized treatment. Recently, many deep learning (DL)-based methods have been proposed, and most of them build deep networks to directly map pre-operative images of patients to the OS time. However, such end-to-end prediction is sensitive to data inconsistency and noise. In this paper, inspired by the fact that clinicians usually evaluate patient prognosis according to previously encountered similar cases, we propose a novel multimodal deep KNN-based OS time prediction method. Specifically, instead of end-to-end prediction, for each input patient our method first searches for its K nearest patients with known OS time in a learned metric space, and the final OS time of the input patient is jointly determined by those K nearest patients, which is robust to data inconsistency and noise. Moreover, to take advantage of multiple imaging modalities, a new inter-modality loss is introduced to encourage learning complementary features from different modalities. An in-house single-center dataset containing multimodal MR brain images of 78 GBM patients is used to evaluate our method. In addition, to demonstrate that our method is not limited to GBM, a public multi-center dataset (BraTS 2019) containing 211 patients with low- and high-grade gliomas is also used in our experiment. Benefiting from the deep KNN and the inter-modality loss, our method outperforms all methods under evaluation on both datasets. To the best of our knowledge, this is the first work to predict the OS time of GBM patients with a KNN strategy under a DL framework.
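The retrieval-based prediction described in this abstract can be sketched in a few lines. This is an illustrative stand-in only: the paper learns the metric space with a deep network and an inter-modality loss, whereas the sketch below uses plain Euclidean distance on fixed feature vectors, and the cohort data are invented.

```python
import math

def knn_os_predict(query_feats, patients, k=3):
    """Predict OS time as the mean OS of the k nearest reference patients.

    `patients` is a list of (feature_vector, os_time_in_days) pairs.
    The paper learns the metric space with a deep encoder; plain
    Euclidean distance here is a simplified stand-in.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    nearest = sorted(patients, key=lambda p: dist(query_feats, p[0]))[:k]
    return sum(os_time for _, os_time in nearest) / k

# Hypothetical cohort: (deep feature vector, OS time in days).
cohort = [([0.0, 0.0], 300.0), ([0.1, 0.1], 330.0),
          ([1.0, 1.0], 600.0), ([0.9, 1.1], 640.0)]
prediction = knn_os_predict([0.05, 0.05], cohort, k=2)
```

Because the output is an average over retrieved neighbors rather than a single network regression, one mislabeled or noisy training case shifts the prediction only fractionally, which is the robustness argument the abstract makes.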
45
Tang W, Zhang H, Yu P, Kang H, Zhang R. MMMNA-Net for Overall Survival Time Prediction of Brain Tumor Patients. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:3805-3808. [PMID: 36086168 DOI: 10.1109/embc48229.2022.9871639]
Abstract
Overall survival (OS) time is one of the most important evaluation indices for glioma. Multi-modal Magnetic Resonance Imaging (MRI) scans play an important role in the study of glioma prognosis and OS time. Several deep learning-based methods have been proposed for OS time prediction from multi-modal MRI. However, these methods usually fuse multi-modal information at the beginning or at the end of the deep learning networks and lack fusion of features from different scales. In addition, fusion at the end of networks always combines global with global (e.g., fully connected layers after concatenation of global average pooling outputs) or local with local (e.g., bilinear pooling) information, losing local-with-global information. In this paper, we propose a novel method for multi-modal OS time prediction of brain tumor patients, which introduces an improved non-local feature fusion module at different scales. Our method obtains a relative 8.76% improvement over the current state-of-the-art method (0.6989 vs. 0.6426 accuracy). Additional testing demonstrates that our method can adapt to situations with missing modalities. The code is available at https://github.com/TangWen920812/mmmna-net.
46
Jian A, Liu S, Di Ieva A. Artificial Intelligence for Survival Prediction in Brain Tumors on Neuroimaging. Neurosurgery 2022; 91:8-26. [PMID: 35348129 DOI: 10.1227/neu.0000000000001938]
Abstract
Survival prediction of patients affected by brain tumors provides essential information to guide surgical planning, adjuvant treatment selection, and patient counseling. Current reliance on clinical factors, such as the Karnofsky Performance Status Scale, and simplistic radiological characteristics is, however, inadequate for survival prediction in tumors such as glioma that demonstrate molecular and clinical heterogeneity with variable survival outcomes. Advances in the domain of artificial intelligence have afforded powerful tools to capture a large number of hidden high-dimensional imaging features that reflect abundant information about tumor structure and physiology. Here, we provide an overview of current literature that applies computational analysis tools such as radiomics and machine learning methods to the pipeline of image preprocessing, tumor segmentation, feature extraction, and construction of classifiers to establish survival prediction models based on neuroimaging. We also discuss challenges relating to the development and evaluation of such models and explore ethical issues surrounding the future use of machine learning predictions.
Affiliation(s)
- Anne Jian
  - Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
  - Royal Melbourne Hospital, Melbourne, Australia
- Sidong Liu
  - Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
  - Centre for Health Informatics, Australian Institute of Health Innovation, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- Antonio Di Ieva
  - Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
47
Mazaheri Y, Thakur SB, Bitencourt AGV, Lo Gullo R, Hötker AM, Bates DDB, Akin O. Evaluation of cancer outcome assessment using MRI: A review of deep-learning methods. BJR Open 2022; 4:20210072. [PMID: 36105425 PMCID: PMC9459949 DOI: 10.1259/bjro.20210072]
Abstract
Accurate evaluation of tumor response to treatment is critical to allow personalized treatment regimens according to the predicted response and to support clinical trials investigating new therapeutic agents by providing them with an accurate response indicator. Recent advances in medical imaging, computer hardware, and machine-learning algorithms have resulted in the increased use of these tools in the field of medicine as a whole and specifically in cancer imaging for detection and characterization of malignant lesions, prognosis, and assessment of treatment response. Among the currently available imaging techniques, magnetic resonance imaging (MRI) plays an important role in the assessment of treatment response in many cancers, given its superior soft-tissue contrast and its ability to allow multiplanar imaging and functional evaluation. In recent years, deep learning (DL) has become an active area of research, paving the way for computer-assisted clinical and radiological decision support. DL can uncover associations between imaging features that cannot be visually identified by the naked eye and pertinent clinical outcomes. The aim of this review is to highlight the use of DL in the evaluation of tumor response assessed on MRI. In this review, we will first provide an overview of common DL architectures used in medical imaging research in general. Then, we will review the studies to date that have applied DL to MRI for the task of treatment response assessment. Finally, we will discuss the challenges and opportunities of using DL within the clinical workflow.
Affiliation(s)
- Roberto Lo Gullo
  - Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, United States
- Andreas M. Hötker
  - Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Zurich, Switzerland
- David D B Bates
  - Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, United States
- Oguz Akin
  - Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, United States
48
Liu Q, Hu P. A novel integrative computational framework for breast cancer radiogenomic biomarker discovery. Comput Struct Biotechnol J 2022; 20:2484-2494. [PMID: 35664228 PMCID: PMC9136270 DOI: 10.1016/j.csbj.2022.05.031]
Abstract
Highlights: Bayesian tensor factorization is used to integrate multi-omics data for radiogenomic analysis; a regression framework is proposed to handle the unmatched-data issue in radiogenomic analysis; deep learning is used to identify prognostically meaningful radiogenomic biomarkers for cancer.
In precision medicine, it is of great value to develop computational frameworks for identifying prognostic biomarkers that can capture both the multi-genomic and phenotypic heterogeneity of breast cancer (BC). Radiogenomics is a field in which medical images and genomic measurements are integrated and mined to solve challenging clinical problems. Previous radiogenomic studies suffered from data incompleteness, feature subjectivity, and low interpretability. For example, the majority of radiogenomic studies lack one or two of medical imaging data, genomic data, and clinical outcome data, which results in the data-incompleteness issue. The feature-subjectivity issue arises because imaging features are extracted with significant human involvement. Thus, there is an urgent need to address the above-mentioned limitations so that fully automatic and transparent radiogenomic prognostic biomarkers can be identified for BC. We propose a novel framework for BC prognostic radiogenomic biomarker identification. This framework involves an explainable DL model for image feature extraction, Bayesian tensor factorization (BTF) for multi-genomic feature extraction, a leverage strategy to utilize unpaired imaging, genomic, and survival outcome data, and a mediation analysis to provide further interpretation of the identified biomarkers. This work provides a new perspective for conducting a comprehensive radiogenomic study when only limited resources are available. Compared with baseline traditional radiogenomic biomarkers, the 23 biomarkers identified by the proposed framework performed better at indicating patients' survival outcomes, and their interpretability is supported by several levels of built-in and follow-up analyses.
Affiliation(s)
- Qian Liu
  - Department of Biochemistry and Medical Genetics, University of Manitoba, Winnipeg, Manitoba R3E 0W3, Canada
  - Department of Computer Science, University of Manitoba, Winnipeg, Manitoba R3E 0W3, Canada
  - Department of Statistics, University of Manitoba, Winnipeg, Manitoba R3E 0W3, Canada
- Pingzhao Hu
  - Department of Biochemistry and Medical Genetics, University of Manitoba, Winnipeg, Manitoba R3E 0W3, Canada
  - Department of Computer Science, University of Manitoba, Winnipeg, Manitoba R3E 0W3, Canada
  - Corresponding author at: Department of Biochemistry and Medical Genetics, Room 308 - Basic Medical Sciences Building, 745 Bannatyne Avenue, University of Manitoba, Winnipeg, Manitoba R3E 0J9, Canada.
49
George E, Flagg E, Chang K, Bai HX, Aerts HJ, Vallières M, Reardon DA, Huang RY. Radiomics-Based Machine Learning for Outcome Prediction in a Multicenter Phase II Study of Programmed Death-Ligand 1 Inhibition Immunotherapy for Glioblastoma. AJNR Am J Neuroradiol 2022; 43:675-681. [PMID: 35483906 PMCID: PMC9089247 DOI: 10.3174/ajnr.a7488]
Abstract
BACKGROUND AND PURPOSE Imaging assessment of an immunotherapy response in glioblastoma is challenging due to overlap in the appearance of treatment-related changes with tumor progression. Our purpose was to determine whether MR imaging radiomics-based machine learning can predict progression-free survival and overall survival in patients with glioblastoma on programmed death-ligand 1 inhibition immunotherapy. MATERIALS AND METHODS Post hoc analysis was performed of a multicenter trial on the efficacy of durvalumab in glioblastoma (n = 113). Radiomics tumor features on pretreatment and first on-treatment time point MR imaging were extracted. The random survival forest algorithm was applied to clinical and radiomics features from pretreatment and first on-treatment MR imaging from a subset of trial sites (n = 60-74) to train a model to predict long overall survival and progression-free survival and was tested externally on data from the remaining sites (n = 29-43). Model performance was assessed using the concordance index and dynamic area under the curve from different time points. RESULTS The mean age was 55.2 (SD, 11.5) years, and 69% of patients were male. Pretreatment MR imaging features had a poor predictive value for overall survival and progression-free survival (concordance index = 0.472-0.524). First on-treatment MR imaging features had high predictive value for overall survival (concordance index = 0.692-0.750) and progression-free survival (concordance index = 0.680-0.715). CONCLUSIONS A radiomics-based machine learning model from first on-treatment MR imaging predicts survival in patients with glioblastoma on programmed death-ligand 1 inhibition immunotherapy.
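Model performance in this study is reported as the concordance index, which can be computed directly from observed times, event indicators, and predicted risks. The implementation below is a generic sketch of Harrell's C-index with right-censoring, not code from the study, and the example times, event flags, and risk scores are invented.

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index: the fraction of usable patient pairs whose
    predicted risk ordering agrees with the observed survival ordering.

    times: observed follow-up times
    events: 1 if the event (e.g. death) was observed, 0 if censored
    risk_scores: higher score means predicted shorter survival
    """
    concordant, permissible = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is usable only if the patient with the shorter
            # time actually experienced the event (not censored).
            if events[i] == 1 and times[i] < times[j]:
                permissible += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5   # tied risks count half
    return concordant / permissible

# Perfectly anti-correlated risks and times give C = 1.0 (hypothetical data).
c = concordance_index([2, 4, 6], [1, 1, 1], [0.9, 0.5, 0.1])
```

A C-index of 0.5 corresponds to random ranking, which is why the pretreatment values of 0.472-0.524 reported above indicate essentially no predictive value, while the on-treatment values of ~0.68-0.75 indicate useful discrimination.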
Affiliation(s)
- E George
  - Department of Radiology and Biomedical Imaging (E.G.), University of California San Francisco, San Francisco, California
- E Flagg
  - Department of Radiology (E.F., R.Y.H.), Brigham and Women's Hospital, Boston, Massachusetts
- K Chang
  - Massachusetts Institute of Technology (K.C.), Cambridge, Massachusetts
- H X Bai
  - Department of Diagnostic Imaging (H.X.B.), Rhode Island Hospital and Warren Alpert Medical School of Brown University, Providence, Rhode Island
- H J Aerts
  - Artificial Intelligence in Medicine Program (H.J.A.), Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
  - Departments of Radiation Oncology and Radiology (H.J.A.), Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, Massachusetts
- M Vallières
  - Department of Computer Science (M.V.), Université de Sherbrooke, Sherbrooke, Quebec, Canada
- D A Reardon
  - Center for Neuro Oncology (D.A.R.), Dana-Farber Cancer Institute, Boston, Massachusetts
- R Y Huang
  - Department of Radiology (E.F., R.Y.H.), Brigham and Women's Hospital, Boston, Massachusetts
50
Saravi B, Hassel F, Ülkümen S, Zink A, Shavlokhova V, Couillard-Despres S, Boeker M, Obid P, Lang GM. Artificial Intelligence-Driven Prediction Modeling and Decision Making in Spine Surgery Using Hybrid Machine Learning Models. J Pers Med 2022; 12:509. [PMID: 35455625 PMCID: PMC9029065 DOI: 10.3390/jpm12040509]
Abstract
Healthcare systems worldwide generate vast amounts of data from many different sources. Although of high complexity for a human being, it is essential to determine the patterns and minor variations in genomic, radiological, laboratory, or clinical data that reliably differentiate phenotypes or allow high predictive accuracy in health-related tasks. Convolutional neural networks (CNNs) are increasingly applied to image data for various tasks. Their use for non-imaging data becomes feasible through modern machine learning techniques that convert non-imaging data into images before inputting them into the CNN model. Considering also that healthcare providers do not rely solely on one data modality for their decisions, this approach opens the door for multi-input/mixed-data models that use a combination of patient information, such as genomic, radiological, and clinical data, to train a hybrid deep learning model. Thus, this reflects the main characteristic of artificial intelligence: simulating natural human behavior. The present review focuses on key advances in machine and deep learning that allow multi-perspective pattern recognition across the entire information set of patients in spine surgery. To the best of our knowledge, this is the first review of artificial intelligence focusing on hybrid models for deep learning applications in spine surgery. This is especially relevant as future tools are unlikely to use solely one data modality. The techniques discussed could become important in establishing a new approach to decision-making in spine surgery based on three fundamental pillars: (1) patient-specific, (2) artificial intelligence-driven, and (3) integrating multimodal data. The findings reveal promising research that has already taken place to develop multi-input mixed-data hybrid decision-support models. Their implementation in spine surgery may hence be only a matter of time.
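The conversion of non-imaging (tabular) data into image-like arrays so a CNN can consume them, mentioned in the review above, can be illustrated minimally. The reshaping below is a simplified stand-in for published layout methods, which typically arrange features by similarity rather than by index; the feature values are invented.

```python
import math

def tabular_to_grid(features):
    """Reshape a 1-D feature vector into the smallest square grid that
    holds it, zero-padding the remainder, so it can be fed to a 2-D CNN.

    A minimal sketch only: real tabular-to-image methods place related
    features near each other so convolutions capture local structure.
    """
    side = math.ceil(math.sqrt(len(features)))
    padded = features + [0.0] * (side * side - len(features))
    return [padded[r * side:(r + 1) * side] for r in range(side)]

# Five hypothetical normalized clinical measurements -> 3x3 "image".
grid = tabular_to_grid([1.0, 2.0, 3.0, 4.0, 5.0])
```

In a hybrid multi-input model of the kind the review describes, such a grid would enter a convolutional branch while actual radiological images enter another, with the branches merged before the prediction head.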
Affiliation(s)
- Babak Saravi
  - Department of Orthopedics and Trauma Surgery, Medical Center-University of Freiburg, Faculty of Medicine, University of Freiburg, 79108 Freiburg, Germany
  - Department of Spine Surgery, Loretto Hospital, 79100 Freiburg, Germany
  - Institute of Experimental Neuroregeneration, Spinal Cord Injury and Tissue Regeneration Center Salzburg (SCI-TReCS), Paracelsus Medical University, 5020 Salzburg, Austria
  - Correspondence:
- Frank Hassel
  - Department of Spine Surgery, Loretto Hospital, 79100 Freiburg, Germany
- Sara Ülkümen
  - Department of Orthopedics and Trauma Surgery, Medical Center-University of Freiburg, Faculty of Medicine, University of Freiburg, 79108 Freiburg, Germany
  - Department of Spine Surgery, Loretto Hospital, 79100 Freiburg, Germany
- Alisia Zink
  - Department of Spine Surgery, Loretto Hospital, 79100 Freiburg, Germany
- Veronika Shavlokhova
  - Department of Oral and Maxillofacial Surgery, University Hospital Heidelberg, 69120 Heidelberg, Germany
- Sebastien Couillard-Despres
  - Institute of Experimental Neuroregeneration, Spinal Cord Injury and Tissue Regeneration Center Salzburg (SCI-TReCS), Paracelsus Medical University, 5020 Salzburg, Austria
  - Austrian Cluster for Tissue Regeneration, 1200 Vienna, Austria
- Martin Boeker
  - Intelligence and Informatics in Medicine, Medical Center Rechts der Isar, School of Medicine, Technical University of Munich, 81675 Munich, Germany
- Peter Obid
  - Department of Orthopedics and Trauma Surgery, Medical Center-University of Freiburg, Faculty of Medicine, University of Freiburg, 79108 Freiburg, Germany
- Gernot Michael Lang
  - Department of Orthopedics and Trauma Surgery, Medical Center-University of Freiburg, Faculty of Medicine, University of Freiburg, 79108 Freiburg, Germany