1
Al-Rahbi A, Al-Mahrouqi O, Al-Saadi T. Uses of artificial intelligence in glioma: A systematic review. Medicine International 2024; 4:40. [PMID: 38827949 PMCID: PMC11140312 DOI: 10.3892/mi.2024.164]
Abstract
Glioma is the most prevalent type of primary brain tumor in adults. The use of artificial intelligence (AI) in glioma is increasing and has exhibited promising results. The present study performed a systematic review of the applications of AI in glioma with regard to diagnosis, grading, and prediction of genotype, progression and treatment response. The aim was to demonstrate the main directions of recent applications of AI within the field of glioma, and to highlight emerging challenges in integrating AI into clinical practice. A search of four databases (Scopus, PubMed, Wiley and Google Scholar) yielded a total of 42 articles specifically using AI in glioma and glioblastoma. The articles were retrieved and reviewed, and the data were summarized and analyzed. The majority of the articles were from the USA (n=18), followed by China (n=11). The number of articles increased year by year, peaking in 2022. The majority of the articles studied glioma as opposed to glioblastoma. In terms of grading, most articles addressed both low-grade glioma (LGG) and high-grade glioma (HGG) (n=23), followed by HGG/glioblastoma (n=13); three articles addressed LGG only, and two did not specify the grade. The largest study had a sample size of 897. Despite the limitations and challenges facing AI, its use in glioma has increased in recent years with promising results, with applications ranging from diagnosis, grading and prognosis prediction to treatment and post-operative care.
Affiliation(s)
- Adham Al-Rahbi
- College of Medicine and Health Sciences, Sultan Qaboos University, Muscat 123, Sultanate of Oman
- Omar Al-Mahrouqi
- College of Medicine and Health Sciences, Sultan Qaboos University, Muscat 123, Sultanate of Oman
- Tariq Al-Saadi
- Department of Neurosurgery, Khoula Hospital, Muscat 123, Sultanate of Oman
- Department of Neurology and Neurosurgery-Montreal Neurological Institute, Faculty of Medicine, McGill University, Montreal, QC H3A 2B4, Canada
2
Herr J, Stoyanova R, Mellon EA. Convolutional Neural Networks for Glioma Segmentation and Prognosis: A Systematic Review. Crit Rev Oncog 2024; 29:33-65. [PMID: 38683153 DOI: 10.1615/critrevoncog.2023050852]
Abstract
Deep learning (DL) is poised to redefine the way medical images are processed and analyzed. Convolutional neural networks (CNNs), a specific type of DL architecture, are exceptional for high-throughput processing, allowing for the effective extraction of relevant diagnostic patterns from large volumes of complex visual data. This technology has garnered substantial interest in the field of neuro-oncology as a promising tool to enhance medical imaging throughput and analysis. A multitude of methods harnessing MRI-based CNNs have been proposed for brain tumor segmentation, classification, and prognosis prediction. They are often applied to gliomas, the most common primary brain cancer, to classify subtypes with the goal of guiding therapy decisions. Additionally, the difficulty of repeating brain biopsies to evaluate treatment response, in the setting of often confusing imaging findings, provides a unique niche for CNNs to help assess treatment response in gliomas. For example, glioblastoma, the most aggressive type of brain cancer, can grow due to poor treatment response, can appear to grow acutely due to treatment-related inflammation as the tumor dies (pseudo-progression), or can falsely appear to be regrowing after treatment as a result of brain damage from radiation (radiation necrosis). CNNs are being applied to resolve this diagnostic dilemma. This review provides a detailed synthesis of recent DL methods and applications for intratumor segmentation, glioma classification, and prognosis prediction. Furthermore, this review discusses the future direction of MRI-based CNNs in the field of neuro-oncology and challenges in model interpretability, data availability, and computational efficiency.
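As a toy illustration of the core operation behind the CNN architectures surveyed here (not any specific reviewed model), a single-channel 2D convolution can be sketched as:

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (really cross-correlation, as in most DL frameworks)."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

# A vertical-edge kernel applied to an image whose left half is dark, right half bright:
image = [[0, 0, 1, 1]] * 4
kernel = [[-1, 0, 1]] * 3          # Prewitt-style vertical edge detector
print(conv2d(image, kernel))       # → [[3, 3], [3, 3]]
```

Real segmentation networks stack many such filters with learned weights, nonlinearities and pooling; this sketch only shows the sliding-window arithmetic itself.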
Affiliation(s)
- Radka Stoyanova
- Department of Radiation Oncology, University of Miami Miller School of Medicine, Sylvester Comprehensive Cancer Center, Miami, FL 33136, USA
- Eric Albert Mellon
- Department of Radiation Oncology, University of Miami Miller School of Medicine, Sylvester Comprehensive Cancer Center, Miami, FL 33136, USA
3
Oh S, Ryu J, Shin HJ, Song JH, Son SY, Hur H, Han SU. Deep learning using computed tomography to identify high-risk patients for acute small bowel obstruction: development and validation of a prediction model: a retrospective cohort study. Int J Surg 2023; 109:4091-4100. [PMID: 37720936 PMCID: PMC10720875 DOI: 10.1097/js9.0000000000000721]
Abstract
OBJECTIVE To build a novel classifier using an optimized 3D convolutional neural network for predicting high-grade small bowel obstruction (HGSBO). SUMMARY BACKGROUND DATA Acute small bowel obstruction (SBO) is one of the most common acute abdominal diseases requiring urgent surgery. While artificial intelligence and abdominal computed tomography (CT) have been used to determine surgical treatment, differentiating normal cases, HGSBO requiring emergency surgery, and low-grade SBO (LGSBO) or paralytic ileus is difficult. METHODS A deep learning classifier was used to predict high-risk acute SBO patients using CT images at a tertiary hospital. Images from three groups of subjects (normal, nonsurgical, and surgical) were extracted; the dataset included 578 cases from 250 normal subjects, 209 HGSBO patients, and 119 LGSBO patients; over 38 000 CT images were used. Data were analyzed from 1 June 2022 to 5 February 2023. Classification performance was assessed based on accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve. RESULTS After fivefold cross-validation, the WideResNet classifier using a dual-branch architecture with depth retention pooling achieved an accuracy of 72.6%, an area under the receiver operating characteristic curve of 0.90, a sensitivity of 72.6%, a specificity of 86.3%, a positive predictive value of 74.1%, and a negative predictive value of 86.6% on all the test sets. CONCLUSIONS These results show the satisfactory performance of the deep learning classifier in predicting HGSBO compared to a previous machine learning model. The novel 3D classifier with dual-branch architecture and depth retention pooling could be a reliable screening and decision-support tool for high-risk patients with SBO.
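The performance figures above are standard functions of a binary confusion matrix; a minimal sketch with hypothetical counts (not the study's data) shows how they relate:

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, NPV and accuracy from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # recall on the positive (surgical) class
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts for illustration only:
m = binary_metrics(tp=80, fp=20, tn=85, fn=15)
print({k: round(v, 3) for k, v in m.items()})
```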
Affiliation(s)
- Seungmin Oh
- Department of Artificial Intelligence, Ajou University, Suwon, South Korea
- Jongbin Ryu
- Department of Artificial Intelligence, Ajou University, Suwon, South Korea
- Department of Software and Computer Engineering, Ajou University, Suwon, South Korea
- Ho-Jung Shin
- Department of Surgery, Ajou University School of Medicine, Suwon, South Korea
- Jeong Ho Song
- Department of Surgery, Ajou University School of Medicine, Suwon, South Korea
- Sang-Yong Son
- Department of Surgery, Ajou University School of Medicine, Suwon, South Korea
- Hoon Hur
- Department of Surgery, Ajou University School of Medicine, Suwon, South Korea
- Sang-Uk Han
- Department of Surgery, Ajou University School of Medicine, Suwon, South Korea
4
Luckett PH, Olufawo M, Lamichhane B, Park KY, Dierker D, Verastegui GT, Yang P, Kim AH, Chheda MG, Snyder AZ, Shimony JS, Leuthardt EC. Predicting survival in glioblastoma with multimodal neuroimaging and machine learning. J Neurooncol 2023; 164:309-320. [PMID: 37668941 PMCID: PMC10522528 DOI: 10.1007/s11060-023-04439-8]
Abstract
PURPOSE Glioblastoma (GBM) is the most common and aggressive malignant glioma, with an overall median survival of less than two years. The ability to predict survival before treatment in GBM patients would improve disease management, clinical trial enrollment, and patient care. METHODS GBM patients (N = 133, mean age 60.8 years, median survival 14.1 months, 57.9% male) were retrospectively recruited from the neurosurgery brain tumor service at Washington University Medical Center. All patients completed structural neuroimaging and resting state functional MRI (RS-fMRI) before surgery. Demographics, measures of cortical thickness (CT), and resting state functional network connectivity (FC) were used to train a deep neural network to classify patients based on survival (<1 y, 1-2 y, >2 y). Permutation feature importance identified the strongest predictors of survival based on the trained models. RESULTS The models achieved a combined cross-validation and hold-out accuracy of 90.6% in classifying survival (<1 y, 1-2 y, >2 y). The strongest demographic predictors were age at diagnosis and sex. The strongest CT predictors of survival included the superior temporal sulcus, parahippocampal gyrus, pericalcarine, pars triangularis, and middle temporal regions. The strongest FC features primarily involved the dorsal and inferior somatomotor, visual, and cingulo-opercular networks. CONCLUSION We demonstrate that machine learning can accurately classify survival in GBM patients from multimodal neuroimaging before any surgical or medical intervention. These results were achieved without information on presenting symptoms, treatments, postsurgical outcomes, or tumor genomics. Our results suggest GBMs have a global effect on the brain's structural and functional organization, which is predictive of survival.
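Permutation feature importance, as used in this study, scores a feature by how much randomly shuffling its values degrades a trained model's accuracy. A minimal, model-agnostic sketch (the toy predictor and data below are hypothetical, not the authors' network):

```python
import random

def permutation_importance(predict, X, y, feature_idx, seed=0):
    """Drop in accuracy after shuffling one feature column of X."""
    def accuracy(rows):
        return sum(predict(r) == t for r, t in zip(rows, y)) / len(y)

    base = accuracy(X)
    col = [row[feature_idx] for row in X]
    random.Random(seed).shuffle(col)
    shuffled = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                for row, v in zip(X, col)]
    return base - accuracy(shuffled)

# Toy model that only looks at feature 0; shuffling feature 1 should not matter.
predict = lambda row: int(row[0] > 0.5)
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]
print(permutation_importance(predict, X, y, 1))  # → 0.0 (feature 1 is unused)
```

In practice the shuffle is repeated several times and averaged, and the score is computed on held-out data.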
Affiliation(s)
- Patrick H Luckett
- Department of Neurological Surgery, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Michael Olufawo
- Department of Neurological Surgery, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Bidhan Lamichhane
- Department of Neurological Surgery, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Center for Health Sciences, Oklahoma State University, Tulsa, OK, 74136, USA
- Ki Yun Park
- Department of Neurological Surgery, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Department of Neuroscience, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Donna Dierker
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Peter Yang
- Department of Neurological Surgery, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Albert H Kim
- Department of Neurological Surgery, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Brain Tumor Center at Siteman Cancer Center, Washington University School of Medicine, St. Louis, MO, USA
- Milan G Chheda
- Brain Tumor Center at Siteman Cancer Center, Washington University School of Medicine, St. Louis, MO, USA
- Department of Medicine, Washington University School of Medicine, St. Louis, MO, USA
- Abraham Z Snyder
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Department of Neurology, Washington University School of Medicine, St. Louis, MO, USA
- Joshua S Shimony
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Brain Tumor Center at Siteman Cancer Center, Washington University School of Medicine, St. Louis, MO, USA
- Eric C Leuthardt
- Department of Neurological Surgery, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Brain Tumor Center at Siteman Cancer Center, Washington University School of Medicine, St. Louis, MO, USA
- Department of Biomedical Engineering, Washington University in Saint Louis, St. Louis, MO, 63130, USA
- Department of Mechanical Engineering and Materials Science, Washington University in Saint Louis, St. Louis, MO, 63130, USA
- Center for Innovation in Neuroscience and Technology, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Brain Laser Center, Washington University School of Medicine, St. Louis, MO, 63110, USA
- National Center for Adaptive Neurotechnologies, Albany, USA
5
Ding Y, Qin X, Zhang M, Geng J, Chen D, Deng F, Song C. RLSegNet: An Medical Image Segmentation Network Based on Reinforcement Learning. IEEE/ACM Transactions on Computational Biology and Bioinformatics 2023; 20:2565-2576. [PMID: 35914053 DOI: 10.1109/tcbb.2022.3195705]
Abstract
In the area of medical image segmentation, spatial information can be exploited to enhance segmentation performance, and 3D convolution is mainly used to better utilize it. However, how to better utilize spatial information with 2D convolutions remains a challenging task. In this paper, we propose an image segmentation network based on reinforcement learning (RLSegNet), which translates the image segmentation process into a series of decision-making problems. The proposed RLSegNet is a U-shaped network composed of three components: a feature extraction network, a Mask Prediction Network (MPNet), and an up-sampling network with a cascade attention module. Deep semantic features in the image are first extracted by the feature extraction network. The MPNet then generates a prediction mask for the current frame based on prior knowledge (the previous segmentation result). The proposed cascade attention module generates a weighted feature mask so that the up-sampling network pays more attention to the region of interest. Specifically, the state, action and reward used in reinforcement learning are redesigned in RLSegNet to cast the segmentation process as a decision-making process, realizing brain tumor segmentation through reinforcement learning. Extensive experiments were conducted on the BRATS 2015 dataset to evaluate RLSegNet. The experimental results demonstrate that the proposed method achieves better segmentation performance in comparison with other state-of-the-art methods.
6
Bhavani MR, Vasanth DK. Classification of brain tumor using a multistage approach based on RELM and MLBP. EAI Endorsed Transactions on Pervasive Health and Technology 2023. [DOI: 10.4108/eetpht.v8i4.3082]
Abstract
INTRODUCTION: Automatic segmentation and classification of brain tumors help improve treatment and extend patient survival. A tumor may be noncancerous (benign) or cancerous (malignant); precancerous cells may also develop into cancer. OBJECTIVES: A Hough CNN is applied to the selected section, using a Hough-casting technique for segmentation. METHODS: A multistage method of feature extraction with multistage neighbouring is used to develop an accurate brain tumor classification methodology. RESULTS: The dataset contains three types of brain tumors: meningioma, glioma, and pituitary. CONCLUSION: This paper presented an efficient brain tumor classification approach involving multiscale preprocessing, multiscale feature extraction, and classification.
7
Improving brain tumor classification performance with an effective approach based on new deep learning model named 3ACL from 3D MRI data. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104424]
8
Yousefian A, Shayegh F, Maleki Z. Detection of autism spectrum disorder using graph representation learning algorithms and deep neural network, based on fMRI signals. Front Syst Neurosci 2023; 16:904770. [PMID: 36817947 PMCID: PMC9932324 DOI: 10.3389/fnsys.2022.904770]
Abstract
Introduction Can we apply graph representation learning algorithms to identify autism spectrum disorder (ASD) patients within a large brain imaging dataset? ASD is mainly identified by brain functional connectivity patterns. Attempts to unveil the common neural patterns that emerge in ASD are the essence of ASD classification. We claim that graph representation learning methods can appropriately extract the connectivity patterns of the brain, in such a way that the method can be generalized to every recording condition and to the phenotypical information of subjects. These methods can capture the whole structure of the brain, both local and global properties. Methods The investigation is done on the worldwide multi-site brain imaging database known as ABIDE I and II (Autism Brain Imaging Data Exchange). Among different graph representation techniques, we used AWE, Node2vec, Struct2vec, multi node2vec, and Graph2Img. The best approach was Graph2Img, in which, after extracting the feature vectors representative of the brain nodes, the PCA algorithm is applied to the matrix of feature vectors. The classifier adapted to the features embedded in graphs is a LeNet deep neural network. Results and discussion Although we could not outperform the previous 10-fold cross-validation accuracy in the identification of ASD versus control patients in this dataset, for leave-one-site-out cross-validation we obtained better results (our accuracy: 80%). The result is that graph embedding methods can prepare the connectivity matrix in a form more suitable for a deep network.
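The pipeline described above starts from a brain functional connectivity matrix; a toy sketch of deriving one from ROI time series via Pearson correlation (the three-signal example is illustrative, not ABIDE data):

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def connectivity_matrix(series):
    """Pairwise Pearson correlation between ROI time series."""
    return [[pearson(a, b) for b in series] for a in series]

# Three toy ROI signals: roi0 and roi1 move together, roi2 is anti-correlated.
rois = [[1, 2, 3, 4], [2, 4, 6, 8], [4, 3, 2, 1]]
C = connectivity_matrix(rois)
print([[round(v, 2) for v in row] for row in C])
```

Graph-embedding methods such as Node2vec then treat this matrix (usually thresholded) as a weighted adjacency matrix.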
Affiliation(s)
- Farzaneh Shayegh
- Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan, Iran
9
Wang R, Fu G, Li J, Pei Y. Diagnosis after zooming in: A multilabel classification model by imitating doctor reading habits to diagnose brain diseases. Med Phys 2022; 49:7054-7070. [PMID: 35880443 DOI: 10.1002/mp.15871]
Abstract
PURPOSE Computed tomography (CT) is low cost and noninvasive and is a primary diagnostic method for brain diseases. However, it is a challenge for junior radiologists to diagnose CT images accurately and comprehensively, so a system is needed that can help doctors diagnose and explain its predictions. Despite the success of deep learning algorithms in the field of medical image analysis, the task of brain disease classification still faces challenges: researchers have paid little attention to complex manual labeling requirements and to the incompleteness of prediction explanations. More importantly, most studies only measure the performance of the algorithm, not its effectiveness in doctors' actual diagnostic practice. METHODS In this paper, we propose a model called DrCT2 that can detect brain diseases without using image-level labels and provide a more comprehensive explanation at both the slice and sequence levels. This model achieves reliable performance by imitating human expert reading habits: targeted scaling of primary images from the full slice scans and observation of suspicious lesions for diagnosis. We evaluated our model on two open-access data sets: CQ500 and the RSNA Intracranial Hemorrhage Detection Challenge. In addition, we defined three tasks to comprehensively evaluate model interpretability by measuring whether the algorithm can select key images with lesions. To verify the algorithm from the perspective of practical application, three junior radiologists were invited to participate in the experiments, comparing performance before and after human-computer cooperation in different aspects. RESULTS The method achieved F1-scores of 0.9370 on CQ500 and 0.8700 on the RSNA data set. The results show that our model has good interpretability as well as good performance, and human radiologist evaluation experiments showed that it can effectively improve diagnostic accuracy and efficiency. CONCLUSIONS We proposed a model that can simultaneously detect multiple brain diseases. The report generated by the model can help doctors avoid missed diagnoses, and it has good clinical application value.
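The F1-scores reported above combine precision and recall as a harmonic mean; for reference (with illustrative values, not the paper's confusion matrices):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Hypothetical precision/recall pair for illustration only:
print(round(f1_score(0.90, 0.88), 4))  # → 0.8899
```

The harmonic mean punishes imbalance: a model with precision 1.0 but recall 0.1 scores far below 0.55.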
Affiliation(s)
- Ruiqian Wang
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Guanghui Fu
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié Salpêtrière, F-75013, Paris, France
- Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Yan Pei
- Computer Science Division, University of Aizu, Aizuwakamatsu, Japan
10
Ngnamsie Njimbouom S, Lee K, Kim JD. MMDCP: Multi-Modal Dental Caries Prediction for Decision Support System Using Deep Learning. International Journal of Environmental Research and Public Health 2022; 19:10928. [PMID: 36078635 PMCID: PMC9518085 DOI: 10.3390/ijerph191710928]
Abstract
In recent years, healthcare has gained unprecedented attention from researchers in the field of human health science and technology. Oral health, a subdomain of healthcare described as being very complex, is threatened by diseases like dental caries, gum disease, and oral cancer. The critical point is to propose an identification mechanism to prevent the population from being affected by these diseases. The large amount of online data allows scholars to perform tremendous research on health conditions, specifically oral health. Regardless of the high-performing dental consultation tools available in current healthcare, computer-based technology has shown the ability to complete some tasks in less time and at lower cost than similar healthcare tools performing the same type of work. Machine learning has displayed a wide variety of advantages in oral healthcare, such as predicting dental caries in the population. Compared to the standard dental caries prediction previously proposed, this work emphasizes the importance of using multiple data sources, referred to as multi-modality, to extract more features and obtain accurate performance. The proposed prediction model constructed using multi-modal data demonstrated promising performance with an accuracy of 90%, F1-score of 89%, recall of 90%, and precision of 89%.
Affiliation(s)
- Kwonwoo Lee
- Department of Computer and Electronics Convergence Engineering, Sun Moon University, Asan 31460, Korea
- Jeong-Dong Kim
- Department of Computer and Electronics Convergence Engineering, Sun Moon University, Asan 31460, Korea
- Genome-Based BioIT Convergence Institute, Sun Moon University, Asan 31460, Korea
11
Development of a Hybrid-Imaging-Based Prognostic Index for Metastasized-Melanoma Patients in Whole-Body 18F-FDG PET/CT and PET/MRI Data. Diagnostics (Basel) 2022; 12:2102. [PMID: 36140504 PMCID: PMC9498091 DOI: 10.3390/diagnostics12092102]
Abstract
Besides tremendous treatment success in advanced melanoma patients, the rapid development of oncologic treatment options comes with increasingly high costs and can cause severe life-threatening side effects. For this purpose, predictive baseline biomarkers are becoming increasingly important for risk stratification and personalized treatment planning. Thus, the aim of this pilot study was the development of a prognostic tool for the risk stratification of the treatment response and mortality based on PET/MRI and PET/CT, including a convolutional neural network (CNN) for metastasized-melanoma patients before systemic-treatment initiation. The evaluation was based on 37 patients (19 f, 62 ± 13 y/o) with unresectable metastasized melanomas who underwent whole-body 18F-FDG PET/MRI and PET/CT scans on the same day before the initiation of therapy with checkpoint inhibitors and/or BRAF/MEK inhibitors. The overall survival (OS), therapy response, metastatically involved organs, number of lesions, total lesion glycolysis, total metabolic tumor volume (TMTV), peak standardized uptake value (SULpeak), diameter (Dmlesion) and mean apparent diffusion coefficient (ADCmean) were assessed. For each marker, a Kaplan-Meier analysis and the statistical significance (Wilcoxon test, paired t-test and Bonferroni correction) were assessed. Patients were divided into high- and low-risk groups depending on the OS and treatment response. The CNN segmentation and prediction utilized multimodality imaging data for a complementary in-depth risk analysis per patient. The following parameters correlated with longer OS: a TMTV < 50 mL; no metastases in the brain, bone, liver, spleen or pleura; ≤4 affected organ regions; no metastases; a Dmlesion > 37 mm or SULpeak < 1.3; a range of the ADCmean < 600 mm2/s. However, none of the parameters correlated significantly with the stratification of the patients into the high- or low-risk groups. 
For the CNN, the sensitivity, specificity, PPV and accuracy were 92%, 96%, 92% and 95%, respectively. Imaging biomarkers such as the metastatic involvement of specific organs, a high tumor burden, the presence of at least one large lesion or a high range of intermetastatic diffusivity were negative predictors for the OS, but the identification of high-risk patients was not feasible with the handcrafted parameters. In contrast, the proposed CNN supplied risk stratification with high specificity and sensitivity.
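The Kaplan-Meier analysis mentioned above estimates a survival curve from possibly right-censored follow-up times; a compact sketch with made-up data:

```python
def kaplan_meier(times, events):
    """Return (time, S(t)) pairs; events[i] is 1 for death, 0 for censoring."""
    data = sorted(zip(times, events))
    s, at_risk, curve = 1.0, len(data), []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = censored = 0
        while i < len(data) and data[i][0] == t:   # group all subjects tied at time t
            if data[i][1]:
                deaths += 1
            else:
                censored += 1
            i += 1
        if deaths:
            s *= 1 - deaths / at_risk              # product-limit update
            curve.append((t, s))
        at_risk -= deaths + censored
    return curve

# Months of follow-up; 0 marks a patient censored (alive at last contact).
print(kaplan_meier([3, 5, 5, 8, 12, 16], [1, 1, 0, 1, 0, 1]))
```

Each death multiplies the survival estimate by the fraction of at-risk patients surviving that time point; censored patients simply leave the risk set.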
12
Ismail M, Prasanna P, Bera K, Statsevych V, Hill V, Singh G, Partovi S, Beig N, McGarry S, Laviolette P, Ahluwalia M, Madabhushi A, Tiwari P. Radiomic Deformation and Textural Heterogeneity (R-DepTH) Descriptor to Characterize Tumor Field Effect: Application to Survival Prediction in Glioblastoma. IEEE Transactions on Medical Imaging 2022; 41:1764-1777. [PMID: 35108202 PMCID: PMC9575333 DOI: 10.1109/tmi.2022.3148780]
Abstract
The concept of tumor field effect implies that cancer is a systemic disease with its impact way beyond the visible tumor confines. For instance, in Glioblastoma (GBM), an aggressive brain tumor, the increase in intracranial pressure due to tumor burden often leads to brain herniation and poor outcomes. Our work is based on the rationale that highly aggressive tumors tend to grow uncontrollably, leading to pronounced biomechanical tissue deformations in the normal parenchyma, which when combined with local morphological differences in the tumor confines on MRI scans, will comprehensively capture tumor field effect. Specifically, we present an integrated MRI-based descriptor, radiomic-Deformation and Textural Heterogeneity (r-DepTH). This descriptor comprises measurements of the subtle perturbations in tissue deformations throughout the surrounding normal parenchyma due to mass effect. This involves non-rigidly aligning the patients' MRI scans to a healthy atlas via diffeomorphic registration. The resulting inverse mapping is used to obtain the deformation field magnitudes in the normal parenchyma. These measurements are then combined with a 3D texture descriptor, Co-occurrence of Local Anisotropic Gradient Orientations (COLLAGE), which captures the morphological heterogeneity and infiltration within the tumor confines, on MRI scans. In this work, we extensively evaluated r-DepTH for survival risk-stratification on a total of 207 GBM cases from 3 different cohorts (Cohort 1 ( n1 = 53 ), Cohort 2 ( n2 = 75 ), and Cohort 3 ( n3 = 79 )), where each of these three cohorts was used as a training set for our model separately, and the other two cohorts were used for testing, independently, for each training experiment. When employing Cohort 1 for training, r-DepTH yielded Concordance indices (C-indices) of 0.7 and 0.65, hazard ratios (HR) and Confidence Intervals (CI) of 10 (6 - 19) and 5 (3 - 8) on Cohorts 2 and 3, respectively. 
Similarly, training on Cohort 2 yielded C-indices of 0.6 and 0.7, HR and CI of 1 (0.7 - 2) and 3 (2 - 5) on Cohorts 1 and 3, respectively. Finally, training on Cohort 3 yielded C-indices of 0.75 and 0.63, HR and CI of 24 (10 - 57) and 12 (6 - 21) on Cohorts 1 and 2, respectively. Our results show that the r-DepTH descriptor may serve as a comprehensive and robust MRI-based prognostic marker of disease aggressiveness and survival in solid tumors.
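The concordance index (C-index) reported above measures how often the model ranks pairs of patients' risks consistently with their observed survival; a minimal sketch of Harrell's C with hypothetical data:

```python
def concordance_index(times, events, risks):
    """Harrell's C: among usable pairs, fraction where higher risk -> shorter survival.
    A pair is usable only if the patient with the shorter time actually had the event."""
    concordant, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            a, b = (i, j) if times[i] < times[j] else (j, i)
            if times[a] == times[b] or not events[a]:
                continue  # tied times, or shorter time censored: pair not usable
            usable += 1
            if risks[a] > risks[b]:
                concordant += 1
            elif risks[a] == risks[b]:
                concordant += 0.5  # tied risk scores count half
    return concordant / usable

# Made-up survival months, event flags (1 = death) and model risk scores:
print(concordance_index([5, 10, 14, 20], [1, 1, 0, 1], [0.9, 0.6, 0.7, 0.2]))  # → 0.8
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which is why values like 0.7-0.75 above indicate useful but imperfect discrimination.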
13
Nam S, Kim D, Jung W, Zhu Y. Understanding the Research Landscape of Deep Learning in Biomedical Science: Scientometric Analysis. J Med Internet Res 2022; 24:e28114. [PMID: 35451980 PMCID: PMC9077503 DOI: 10.2196/28114]
Abstract
BACKGROUND Advances in biomedical research using deep learning techniques have generated a large volume of related literature. However, there is a lack of scientometric studies that provide a bird's-eye view of them. This absence has led to a partial and fragmented understanding of the field and its progress. OBJECTIVE This study aimed to gain a quantitative and qualitative understanding of the scientific domain by analyzing diverse bibliographic entities that represent the research landscape from multiple perspectives and levels of granularity. METHODS We searched and retrieved 978 deep learning studies in biomedicine from the PubMed database. A scientometric analysis was performed by analyzing the metadata, content of influential works, and cited references. RESULTS In the process, we identified the current leading fields, major research topics and techniques, knowledge diffusion, and research collaboration. There was a predominant focus on applying deep learning, especially convolutional neural networks, to radiology and medical imaging, whereas a few studies focused on protein or genome analysis. Radiology and medical imaging also appeared to be the most significant knowledge sources and an important field in knowledge diffusion, followed by computer science and electrical engineering. A coauthorship analysis revealed various collaborations among engineering-oriented and biomedicine-oriented clusters of disciplines. CONCLUSIONS This study investigated the landscape of deep learning research in biomedicine and confirmed its interdisciplinary nature. Although it has been successful, we believe that there is a need for diverse applications in certain areas to further boost the contributions of deep learning in addressing biomedical research problems. We expect the results of this study to help researchers and communities better align their present and future work.
Affiliation(s)
- Seojin Nam, Department of Library and Information Science, Sungkyunkwan University, Seoul, Republic of Korea
- Donghun Kim, Department of Library and Information Science, Sungkyunkwan University, Seoul, Republic of Korea
- Woojin Jung, Department of Library and Information Science, Sungkyunkwan University, Seoul, Republic of Korea
- Yongjun Zhu, Department of Library and Information Science, Yonsei University, Seoul, Republic of Korea
14
Khan AA, Ibad H, Ahmed KS, Hoodbhoy Z, Shamim SM. Deep learning applications in neuro-oncology. Surg Neurol Int 2021; 12:435. [PMID: 34513198] [PMCID: PMC8422419] [DOI: 10.25259/sni_433_2021]
Abstract
Deep learning (DL) is a relatively new subdomain of machine learning (ML) with incredible potential for certain applications in the medical field. Given recent advances in its use in neuro-oncology, its role in diagnosing, prognosticating, and managing the care of cancer patients has been the subject of many research studies. Across these studies, algorithmic methods have improved steadily with each iteration since their inception. As the availability of high-quality data increases, larger training sets will allow for higher-fidelity models. However, logistical and ethical concerns over a prospective trial comparing the prognostic abilities of DL and physicians severely limit the ability of this technology to be widely adopted. Judgment, a central tenet of medical decision making, is often missing from DL because of its inherent nature as a "black box." A natural distrust of newer technology, combined with a lack of the autonomy normally expected in current medical practice, is just one of several important limitations to implementation. In our review, we first define and outline the different types of artificial intelligence (AI) as well as the role of AI in the current advances of clinical medicine. We briefly highlight several of the salient studies using different methods of DL in the realm of neuroradiology and summarize the key findings and challenges faced when using this nascent technology, particularly the ethical challenges that could be faced by users of DL.
Affiliation(s)
- Adnan A Khan, Medical College, Aga Khan University, Karachi, Sindh, Pakistan
- Hamza Ibad, Medical College, Aga Khan University, Karachi, Sindh, Pakistan
- Zahra Hoodbhoy, Department of Pediatrics, Aga Khan University, Karachi, Sindh, Pakistan
- Shahzad M Shamim, Department of Neurosurgery, Aga Khan University, Karachi, Sindh, Pakistan
15
Maaref A, Romero FP, Montagnon E, Cerny M, Nguyen B, Vandenbroucke F, Soucy G, Turcotte S, Tang A, Kadoury S. Predicting the Response to FOLFOX-Based Chemotherapy Regimen from Untreated Liver Metastases on Baseline CT: a Deep Neural Network Approach. J Digit Imaging 2021; 33:937-945. [PMID: 32193665] [DOI: 10.1007/s10278-020-00332-2]
Abstract
In developed countries, colorectal cancer is the second leading cause of cancer-related mortality. Chemotherapy is considered a standard treatment for colorectal liver metastases (CLM). Among patients who develop CLM, assessment of the response to chemotherapy is often required to determine the need for second-line chemotherapy and eligibility for surgery. However, while FOLFOX-based regimens are typically used for CLM treatment, the identification of responsive patients remains elusive. Computer-aided diagnosis systems may provide insight into the classification of liver metastases identified on diagnostic images. In this paper, we propose a fully automated framework based on deep convolutional neural networks (DCNN) that first differentiates treated and untreated lesions to identify new lesions appearing on CT scans, followed by a fully connected neural network that predicts, from untreated lesions on pre-treatment computed tomography (CT) of patients with CLM undergoing chemotherapy, their response to a FOLFOX with bevacizumab regimen as first-line treatment. The ground truth for assessment of treatment response was the histopathology-determined tumor regression grade. Our DCNN approach, trained on 444 lesions from 202 patients, achieved accuracies of 91% for differentiating treated and untreated lesions and 78% for predicting the response to the FOLFOX-based chemotherapy regimen. Experimental results showed that our method outperformed traditional machine learning algorithms and may allow for the early detection of non-responsive patients.
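The two-stage design described above (a treated-vs-untreated filter followed by a response predictor) can be sketched as a toy cascade. The stub classifiers below are invented placeholders standing in for the paper's trained networks; only the control flow is illustrative.

```python
# Toy sketch of a two-stage cascade: stage 1 keeps only untreated
# lesions, stage 2 predicts chemotherapy response for those lesions.
# Both stage functions are placeholders, not the paper's networks.

def stage1_is_untreated(lesion):
    # Placeholder for the DCNN separating treated from untreated lesions.
    return lesion["prior_treatment"] is False

def stage2_predict_response(lesion):
    # Placeholder for the network scoring response probability.
    return 0.8 if lesion["volume_mm3"] < 500 else 0.3

def predict_cohort(lesions, threshold=0.5):
    """Run the cascade and label each untreated lesion."""
    results = {}
    for lesion in lesions:
        if stage1_is_untreated(lesion):
            score = stage2_predict_response(lesion)
            results[lesion["id"]] = "responder" if score >= threshold else "non-responder"
    return results

lesions = [
    {"id": "a", "prior_treatment": False, "volume_mm3": 200},
    {"id": "b", "prior_treatment": True,  "volume_mm3": 100},
    {"id": "c", "prior_treatment": False, "volume_mm3": 900},
]
print(predict_cohort(lesions))
```

Lesion "b" is filtered out by stage 1, so only "a" and "c" receive a response label.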
Affiliation(s)
- Ahmad Maaref, Polytechnique Montréal, Montreal, QC, Canada; Centre de recherche du Centre hospitalier de l'Université de Montréal (CRCHUM), Montreal, QC, Canada
- Francisco Perdigon Romero, Polytechnique Montréal, Montreal, QC, Canada; Centre de recherche du Centre hospitalier de l'Université de Montréal (CRCHUM), Montreal, QC, Canada
- Emmanuel Montagnon, Centre de recherche du Centre hospitalier de l'Université de Montréal (CRCHUM), Montreal, QC, Canada
- Milena Cerny, Centre de recherche du Centre hospitalier de l'Université de Montréal (CRCHUM), Montreal, QC, Canada
- Bich Nguyen, Department of Pathology, Centre hospitalier de l'Université de Montréal (CHUM), Montreal, QC, Canada; Department of Pathology and Cellular Biology, Université de Montréal, Montreal, QC, Canada
- Franck Vandenbroucke, Department of Surgery, Hepatopancreatobiliary and Liver Transplantation Service, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, QC, Canada
- Geneviève Soucy, Centre de recherche du Centre hospitalier de l'Université de Montréal (CRCHUM), Montreal, QC, Canada; Department of Pathology, Centre hospitalier de l'Université de Montréal (CHUM), Montreal, QC, Canada; Department of Pathology and Cellular Biology, Université de Montréal, Montreal, QC, Canada
- Simon Turcotte, Centre de recherche du Centre hospitalier de l'Université de Montréal (CRCHUM), Montreal, QC, Canada; Department of Surgery, Hepatopancreatobiliary and Liver Transplantation Service, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, QC, Canada
- An Tang, Centre de recherche du Centre hospitalier de l'Université de Montréal (CRCHUM), Montreal, QC, Canada; Department of Radiology, Centre hospitalier de l'Université de Montréal (CHUM), Montreal, QC, Canada
- Samuel Kadoury, Polytechnique Montréal, Montreal, QC, Canada; Centre de recherche du Centre hospitalier de l'Université de Montréal (CRCHUM), Montreal, QC, Canada
16
Healthcare and Fitness Data Management Using the IoT-Based Blockchain Platform. Journal of Healthcare Engineering 2021; 2021:9978863. [PMID: 34336176] [PMCID: PMC8286190] [DOI: 10.1155/2021/9978863]
Abstract
Because e-health applications involve multiple actors and wireless components, stronger security and safety guarantees are expected, and ensuring data confidentiality across different services becomes a key requirement. In this paper, we propose to collect data from health and fitness smart devices deployed in connection with the proposed IoT blockchain platform. The use of these devices helps us extract a large amount of highly valuable health data that are filtered, analyzed, and stored in electronic health records (EHRs). The different actors of the platform (coaches, patients, and doctors) collaborate to provide on-time diagnosis and treatment for various diseases in an easy and cost-effective way. Our main purpose is to provide distributed, secure, and authorized access to these sensitive data using the Ethereum blockchain technology. We have designed an integrated low-powered IoT blockchain platform for a healthcare application to store and review EHRs. This architecture, based on the Ethereum blockchain, includes a web and mobile application allowing the patient as well as the medical and paramedical staff to have secure access to health information. The Ethereum node is implemented on an embedded platform, which should provide an efficient, flexible, and secure system despite the limited resources and low power consumption of the multiprocessor platform.
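The tamper-evidence property that motivates blockchain-backed EHR storage can be illustrated with a minimal hash chain. This toy sketch uses plain SHA-256 chaining rather than Ethereum; all record fields are invented placeholders.

```python
import hashlib
import json

# Minimal hash-chain illustration of tamper-evident record storage.
# This is NOT Ethereum; it only shows why chaining hashes makes
# retroactive edits to earlier records detectable.

GENESIS = "0" * 64

def make_block(record, prev_hash):
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return {"record": record, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def build_chain(records):
    chain, prev = [], GENESIS
    for r in records:
        block = make_block(r, prev)
        chain.append(block)
        prev = block["hash"]
    return chain

def verify(chain):
    prev = GENESIS
    for block in chain:
        # Recompute each hash; any edited record breaks the chain.
        expected = make_block(block["record"], prev)["hash"]
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

chain = build_chain([{"patient": "p1", "hr": 72}, {"patient": "p1", "hr": 95}])
assert verify(chain)
chain[0]["record"]["hr"] = 60   # tamper with an earlier record
assert not verify(chain)
```

Verification recomputes every hash from the stored records, so a single edited value invalidates all later links.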
17
Hong J, Liu X, Guo Y, Gu H, Gu L, Xu J, Lu Y, Sun X, Ye Z, Liu J, Peters BA, Chen J. A Novel Hierarchical Deep Learning Framework for Diagnosing Multiple Visual Impairment Diseases in the Clinical Environment. Front Med (Lausanne) 2021; 8:654696. [PMID: 34164412] [PMCID: PMC8215208] [DOI: 10.3389/fmed.2021.654696]
Abstract
Early detection and treatment of visual impairment diseases are critical and integral to combating avoidable blindness. To enable this, artificial intelligence–based disease identification approaches are vital for visual impairment diseases, especially for people living in areas with few ophthalmologists. In this study, we demonstrated the identification of a large variety of visual impairment diseases using a coarse-to-fine approach. We designed a hierarchical deep learning network composed of a family of multi-task & multi-label learning classifiers representing different levels of eye diseases derived from a predefined hierarchical eye disease taxonomy. A multi-level disease-guided loss function was proposed to learn the fine-grained variability of eye disease features. The proposed framework was trained independently for ocular surface and retinal images. The training dataset comprised 7,100 clinical images from 1,600 patients with 100 diseases. To show the feasibility of the proposed framework, we demonstrated eye disease identification on the first two levels of the eye disease taxonomy, namely 7 ocular diseases (4 ocular surface diseases and 3 retinal fundus diseases) in level 1 and 17 subclasses (9 ocular surface diseases and 8 retinal fundus diseases) in level 2. The proposed framework is flexible and extensible and can be trained on more levels given sufficient training data for each disease subtype (e.g., the 17 classes of level 2 comprise 100 subtype diseases defined as level 3 diseases). The performance of the proposed framework was evaluated against 40 board-certified ophthalmologists on clinical cases with various visual impairment diseases; the framework showed high sensitivity and specificity, with the area under the receiver operating characteristic curve ranging from 0.743 to 0.989 in identifying all identified major causes of blindness. Further assessment of 4,670 cases in a tertiary eye center also demonstrated that the proposed framework achieved a high identification accuracy rate for different visual impairment diseases compared with that of human graders in a clinical setting. The proposed hierarchical deep learning framework could improve clinical practice in ophthalmology and broaden the scope of services available, especially for people living in areas with few ophthalmologists.
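The coarse-to-fine idea of predicting over a two-level disease taxonomy can be sketched as follows. The taxonomy and class names below are invented placeholders, not the paper's actual classes: level-2 scores are aggregated to choose a level-1 parent, and the fine class is then picked among that parent's children, so the two levels can never disagree.

```python
# Toy coarse-to-fine prediction over a two-level disease taxonomy.
# The class names below are illustrative placeholders only.

TAXONOMY = {  # level-2 subclass -> level-1 parent
    "keratitis": "ocular surface disease",
    "pterygium": "ocular surface disease",
    "diabetic retinopathy": "retinal fundus disease",
    "macular degeneration": "retinal fundus disease",
}

def coarse_to_fine(level2_scores):
    """Aggregate level-2 scores per parent, pick the best parent,
    then pick the best child of that parent."""
    level1 = {}
    for cls, s in level2_scores.items():
        parent = TAXONOMY[cls]
        level1[parent] = level1.get(parent, 0.0) + s
    coarse = max(level1, key=level1.get)
    children = {c: s for c, s in level2_scores.items() if TAXONOMY[c] == coarse}
    fine = max(children, key=children.get)
    return coarse, fine

scores = {"keratitis": 0.10, "pterygium": 0.05,
          "diabetic retinopathy": 0.70, "macular degeneration": 0.15}
print(coarse_to_fine(scores))
```

By construction the fine label is always a child of the coarse label, which is the hierarchy-consistency property the taxonomy enforces.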
Affiliation(s)
- Jiaxu Hong, Department of Ophthalmology and Visual Science, Eye and Ear, Nose, and Throat Hospital, Shanghai Medical College, Fudan University, Shanghai, China; Department of Ophthalmology, Affiliated Hospital of Guizhou Medical University, Guiyang, China; Key Laboratory of Myopia, Ministry of Health (Fudan University), Shanghai, China; Shanghai Engineering Research Center of Synthetic Immunology, Fudan University, Shanghai, China
- Xiaoqing Liu, AI Laboratory, Deepwise Healthcare, Beijing, China
- Youwen Guo, Wuhan Servicebio Technology, Wuhan, China
- Hao Gu, Department of Ophthalmology, Affiliated Hospital of Guizhou Medical University, Guiyang, China
- Lei Gu, Epigenetics Laboratory, Max Planck Institute for Heart and Lung Research, Bad Nauheim, Germany; Cardiopulmonary Institute (CPI), Bad Nauheim, Germany
- Jianjiang Xu, Department of Ophthalmology and Visual Science, Eye and Ear, Nose, and Throat Hospital, Shanghai Medical College, Fudan University, Shanghai, China
- Yi Lu, Department of Ophthalmology and Visual Science, Eye and Ear, Nose, and Throat Hospital, Shanghai Medical College, Fudan University, Shanghai, China
- Xinghuai Sun, Department of Ophthalmology and Visual Science, Eye and Ear, Nose, and Throat Hospital, Shanghai Medical College, Fudan University, Shanghai, China
- Zhengqiang Ye, Department of Ophthalmology and Visual Science, Eye and Ear, Nose, and Throat Hospital, Shanghai Medical College, Fudan University, Shanghai, China
- Jian Liu, Department of Ophthalmology, Affiliated Hospital of Guizhou Medical University, Guiyang, China
- Jason Chen, Complete Genomics Inc., San Jose, CA, United States
18
Ouyang J, Zhao Q, Sullivan EV, Pfefferbaum A, Tapert SF, Adeli E, Pohl KM. Longitudinal Pooling & Consistency Regularization to Model Disease Progression From MRIs. IEEE J Biomed Health Inform 2021; 25:2082-2092. [PMID: 33270567] [PMCID: PMC8221531] [DOI: 10.1109/jbhi.2020.3042447]
Abstract
Many neurological diseases are characterized by gradual deterioration of brain structure and function. Large longitudinal MRI datasets have revealed such deterioration, in part, by applying machine and deep learning to predict diagnosis. A popular approach is to apply Convolutional Neural Networks (CNNs) to extract informative features from each visit of the longitudinal MRI and then classify each visit via Recurrent Neural Networks (RNNs). Such modeling neglects the progressive nature of the disease, which may result in clinically implausible classifications across visits. To avoid this issue, we propose to combine features across visits by coupling feature extraction with a novel longitudinal pooling layer and to enforce consistency of the classification across visits in line with disease progression. We evaluate the proposed method on the longitudinal structural MRIs from three neuroimaging datasets: the Alzheimer's Disease Neuroimaging Initiative (ADNI, N=404), a dataset composed of 274 normal controls and 329 patients with Alcohol Use Disorder (AUD), and 255 youths from the National Consortium on Alcohol and NeuroDevelopment in Adolescence (NCANDA). In all three experiments our method is superior to other widely used approaches for longitudinal classification, thus making a unique contribution towards more accurate tracking of the impact of conditions on the brain. The code is available at https://github.com/ouyangjiahong/longitudinal-pooling.
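A minimal sketch of the consistency idea, under the assumption that for a progressive disease the predicted disease probability should not revert between visits. This toy penalty is an illustration of the principle, not the paper's exact regularizer.

```python
def consistency_penalty(visit_probs):
    """Sum of decreases in predicted disease probability between
    consecutive visits; zero when predictions never revert, so a
    monotone (clinically plausible) trajectory is not penalized."""
    return sum(max(0.0, p_prev - p_next)
               for p_prev, p_next in zip(visit_probs, visit_probs[1:]))

assert consistency_penalty([0.2, 0.5, 0.9]) == 0.0               # monotone: plausible
assert abs(consistency_penalty([0.2, 0.8, 0.4]) - 0.4) < 1e-9    # reversal penalized
```

In training, such a term would be added to the classification loss so the network is discouraged from predicting "diseased" at one visit and "healthy" at the next.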
19
Li H, Zhao Q, Zhang Y, Sai K, Xu L, Mou Y, Xie Y, Ren J, Jiang X. Image-driven classification of functioning and nonfunctioning pituitary adenoma by deep convolutional neural networks. Comput Struct Biotechnol J 2021; 19:3077-3086. [PMID: 34136106] [PMCID: PMC8178077] [DOI: 10.1016/j.csbj.2021.05.023]
Abstract
The secreting function of pituitary adenomas (PAs) plays a critical role in determining treatment strategies. However, Magnetic Resonance Imaging (MRI) analysis of pituitary adenomas is labor intensive and highly variable among radiologists. In this work, by applying convolutional neural networks (CNNs), we built segmentation and classification models to help distinguish functioning pituitary adenomas from non-functioning subtypes using 3D MRI images from 185 patients with PAs (two centers). Specifically, the classification model adopts the concept of transfer learning and uses the pre-trained segmentation model to extract deep features from conventional MRI images. As a result, both the segmentation and classification models obtained high performance in two internal validation datasets and an external testing dataset (segmentation model: Dice score = 0.8188, 0.8091, and 0.8093, respectively; classification model: AUROC = 0.8063, 0.7881, and 0.8478, respectively). In addition, the classification model incorporates an attention mechanism for better model interpretation. Taken together, this work provides the first deep learning-based tumor region segmentation and classification models for PAs, enabling early diagnosis and subtyping of PAs from MRI images.
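The Dice score reported for the segmentation model measures overlap between predicted and reference masks. A generic computation for binary masks (a metric sketch, not the authors' code):

```python
def dice_score(mask_a, mask_b):
    """Dice coefficient between two binary masks given as flat 0/1 lists:
    2*|A ∩ B| / (|A| + |B|); defined as 1.0 when both masks are empty."""
    intersection = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * intersection / total if total else 1.0

pred  = [1, 1, 0, 0, 1]
truth = [1, 0, 0, 0, 1]
print(dice_score(pred, truth))  # 2*2 / (3+2) = 0.8
```

A Dice score of roughly 0.81, as reported, means predicted and reference tumor voxels overlap substantially relative to their combined size.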
Affiliation(s)
- Hongyu Li, State Key Laboratory of Oncology in South China, Cancer Center, Collaborative Innovation Center for Cancer Medicine, School of Life Science, Sun Yat-sen University, Guangzhou, Guangdong 510060, China; School of Data and Computer Science, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Qi Zhao, State Key Laboratory of Oncology in South China, Cancer Center, Collaborative Innovation Center for Cancer Medicine, School of Life Science, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Yihua Zhang, Department of Neurosurgery, Daping Hospital, Army Medical University, Chongqing 400042, China
- Ke Sai, Department of Neurosurgery/Neuro-oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China
- Lunshan Xu, Department of Neurosurgery, Daping Hospital, Army Medical University, Chongqing 400042, China
- Yonggao Mou, Department of Neurosurgery/Neuro-oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China
- Yubin Xie, State Key Laboratory of Oncology in South China, Cancer Center, Collaborative Innovation Center for Cancer Medicine, School of Life Science, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Jian Ren, State Key Laboratory of Oncology in South China, Cancer Center, Collaborative Innovation Center for Cancer Medicine, School of Life Science, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Xiaobing Jiang, State Key Laboratory of Oncology in South China, Cancer Center, Collaborative Innovation Center for Cancer Medicine, School of Life Science, Sun Yat-sen University, Guangzhou, Guangdong 510060, China; Department of Neurosurgery/Neuro-oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China; Jiangmen Central Hospital, Affiliated Jiangmen Hospital of Sun Yat-Sen University, Jiangmen, China
20
Xiong Y, He X, Zhao D, Tian T, Hong L, Jiang T, Zeng J. Modeling multi-species RNA modification through multi-task curriculum learning. Nucleic Acids Res 2021; 49:3719-3734. [PMID: 33744973] [PMCID: PMC8053129] [DOI: 10.1093/nar/gkab124]
Abstract
N6-methyladenosine (m6A) is the most pervasive modification in eukaryotic mRNAs. Numerous biological processes are regulated by this critical post-transcriptional mark, such as gene expression, RNA stability, RNA structure, and translation. Recently, various experimental techniques and computational methods have been developed to characterize the transcriptome-wide landscapes of m6A modification to understand its underlying mechanisms and functions in mRNA regulation. However, the experimental techniques are generally costly and time-consuming, while existing computational models are usually designed only for m6A site prediction in a single species and have significant limitations in accuracy, interpretability, and generalizability. Here, we propose a highly interpretable computational framework, called MASS, based on a multi-task curriculum learning strategy to capture m6A features across multiple species simultaneously. Extensive computational experiments demonstrate the superior performance of MASS compared to state-of-the-art prediction methods. Furthermore, the contextual sequence features of m6A captured by MASS can be explained by the known critical binding motifs of the related RNA-binding proteins, which also helps elucidate the similarities and differences among m6A features across species. In addition, based on the predicted m6A profiles, we further delineate the relationships between m6A and various properties of gene regulation, including gene expression, RNA stability, translation, RNA structure, and histone modification. In summary, MASS may serve as a useful tool for characterizing m6A modification and studying its regulatory code. The source code of MASS can be downloaded from https://github.com/mlcb-thu/MASS.
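Sequence-based m6A predictors typically start from an encoded sequence window around a candidate adenosine. A generic one-hot encoding sketch follows; this is an assumption for illustration, not MASS's actual featurization.

```python
BASES = "ACGU"

def one_hot(seq):
    """One-hot encode an RNA sequence window.
    Returns one row per position, with columns ordered A, C, G, U."""
    return [[1 if base == b else 0 for b in BASES] for base in seq.upper()]

window = "GGACU"          # DRACH-like context around a candidate m6A site
encoded = one_hot(window)
assert len(encoded) == 5
assert encoded[2] == [1, 0, 0, 0]   # the candidate 'A' at the window center
```

A matrix like this (positions × 4 bases) is the usual input to a convolutional or multi-task sequence model.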
Affiliation(s)
- Yuanpeng Xiong, Bioinformatics Division, BNRIST/Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
- Xuan He, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, China
- Dan Zhao, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, China
- Tingzhong Tian, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, China
- Lixiang Hong, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, China
- Tao Jiang, Department of Computer Science and Engineering, University of California, Riverside, CA 92521, USA; Bioinformatics Division, BNRIST/Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
- Jianyang Zeng, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, China
21
Kim T, Kim J, Choi HS, Kim ES, Keum B, Jeen YT, Lee HS, Chun HJ, Han SY, Kim DU, Kwon S, Choo J, Lee JM. Artificial intelligence-assisted analysis of endoscopic retrograde cholangiopancreatography image for identifying ampulla and difficulty of selective cannulation. Sci Rep 2021; 11:8381. [PMID: 33863970] [PMCID: PMC8052314] [DOI: 10.1038/s41598-021-87737-3]
Abstract
The advancement of artificial intelligence (AI) has facilitated its application in medical fields. However, there has been little research on AI-assisted endoscopy, despite the clinical significance of the efficiency and safety of cannulation in endoscopic retrograde cholangiopancreatography (ERCP). In this study, we aim to assist endoscopists performing ERCP through automatic detection of the ampulla and identification of cannulation difficulty. We developed a novel AI-assisted system based on convolutional neural networks that predicts the location of the ampulla and the difficulty of cannulating it. ERCP data of 531 and 451 patients were utilized in the evaluation of our model for each task, respectively. Our model detected the ampulla with a mean intersection-over-union of 64.1%, precision of 76.2%, recall of 78.4%, and centroid distance of 0.021. In classifying cannulation difficulty, it achieved a recall of 71.9% for easy cases and 61.1% for difficult cases. Remarkably, our model accurately detected the ampulla of Vater (AOV) with varying morphological shape, size, and texture, on par with the level of a human expert, and showed promising results in recognizing cannulation difficulty. It demonstrated its potential to improve the quality of ERCP by assisting endoscopists.
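The detection metrics quoted above (intersection-over-union and centroid distance) can be computed for axis-aligned boxes as follows; this is a generic metric sketch, not the authors' evaluation code.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # overlap width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # overlap height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def centroid_distance(box_a, box_b):
    """Euclidean distance between box centers."""
    cax, cay = (box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2
    cbx, cby = (box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2
    return ((cax - cbx) ** 2 + (cay - cby) ** 2) ** 0.5

a, b = (0, 0, 2, 2), (1, 1, 3, 3)
print(iou(a, b))                # 1/7
print(centroid_distance(a, b))  # sqrt(2)
```

The mean of these two quantities over all test images gives the kind of summary numbers reported in the abstract (centroid distance is typically normalized to image size, which would explain the small 0.021 value).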
Affiliation(s)
- Taesung Kim, Graduate School of Artificial Intelligence, KAIST, Daehak-ro 291, Yuseong-gu, Daejeon, 34141, Korea
- Jinhee Kim, Graduate School of Artificial Intelligence, KAIST, Daehak-ro 291, Yuseong-gu, Daejeon, 34141, Korea
- Hyuk Soon Choi, Division of Gastroenterology and Hepatology, Department of Internal Medicine, Korea University College of Medicine, Korea University Medical Center, Goryeodae-ro 73, Seongbuk-gu, Seoul, 02841, Korea
- Eun Sun Kim, Division of Gastroenterology and Hepatology, Department of Internal Medicine, Korea University College of Medicine, Korea University Medical Center, Goryeodae-ro 73, Seongbuk-gu, Seoul, 02841, Korea
- Bora Keum, Division of Gastroenterology and Hepatology, Department of Internal Medicine, Korea University College of Medicine, Korea University Medical Center, Goryeodae-ro 73, Seongbuk-gu, Seoul, 02841, Korea
- Yoon Tae Jeen, Division of Gastroenterology and Hepatology, Department of Internal Medicine, Korea University College of Medicine, Korea University Medical Center, Goryeodae-ro 73, Seongbuk-gu, Seoul, 02841, Korea
- Hong Sik Lee, Division of Gastroenterology and Hepatology, Department of Internal Medicine, Korea University College of Medicine, Korea University Medical Center, Goryeodae-ro 73, Seongbuk-gu, Seoul, 02841, Korea
- Hoon Jai Chun, Division of Gastroenterology and Hepatology, Department of Internal Medicine, Korea University College of Medicine, Korea University Medical Center, Goryeodae-ro 73, Seongbuk-gu, Seoul, 02841, Korea
- Sung Yong Han, Department of Internal Medicine, Pusan National University College of Medicine, Pusan, Korea
- Dong Uk Kim, Department of Internal Medicine, Pusan National University College of Medicine, Pusan, Korea
- Soonwook Kwon, Department of Anatomy, Catholic University of Daegu, Daegu, Korea
- Jaegul Choo, Graduate School of Artificial Intelligence, KAIST, Daehak-ro 291, Yuseong-gu, Daejeon, 34141, Korea
- Jae Min Lee, Division of Gastroenterology and Hepatology, Department of Internal Medicine, Korea University College of Medicine, Korea University Medical Center, Goryeodae-ro 73, Seongbuk-gu, Seoul, 02841, Korea
22
Iuga AI, Carolus H, Höink AJ, Brosch T, Klinder T, Maintz D, Persigehl T, Baeßler B, Püsken M. Automated detection and segmentation of thoracic lymph nodes from CT using 3D foveal fully convolutional neural networks. BMC Med Imaging 2021; 21:69. [PMID: 33849483] [PMCID: PMC8045346] [DOI: 10.1186/s12880-021-00599-z]
Abstract
BACKGROUND In oncology, the correct determination of nodal metastatic disease is essential for patient management, as patient treatment and prognosis are closely linked to the stage of the disease. The aim of the study was to develop a tool for automatic 3D detection and segmentation of lymph nodes (LNs) in computed tomography (CT) scans of the thorax using a fully convolutional neural network based on 3D foveal patches. METHODS The training dataset was collected from the Computed Tomography Lymph Nodes Collection of the Cancer Imaging Archive, containing 89 contrast-enhanced CT scans of the thorax. A total of 4275 LNs were segmented semi-automatically by a radiologist, assessing the entire 3D volume of the LNs. Using these data, a fully convolutional neural network based on 3D foveal patches was trained with fourfold cross-validation. Testing was performed on an unseen dataset containing 15 contrast-enhanced CT scans of patients who were referred upon suspicion or for staging of bronchial carcinoma. RESULTS The algorithm achieved good overall performance, with a total detection rate of 76.9% for enlarged LNs during fourfold cross-validation in the training dataset (10.3 false positives per volume) and of 69.9% in the unseen testing dataset. In the training dataset, a better detection rate was observed for enlarged LNs than for smaller LNs, the detection rates for LNs with a short-axis diameter (SAD) ≥ 20 mm and SAD 5-10 mm being 91.6% and 62.2% (p < 0.001), respectively. The best detection rates were obtained for LNs located in Level 4R (83.6%) and Level 7 (80.4%). CONCLUSIONS The proposed 3D deep learning approach achieves good overall performance in the automatic detection and segmentation of thoracic LNs and shows reasonable generalizability, yielding the potential to facilitate detection during routine clinical work and to enable radiomics research without observer bias.
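The detection rate and false positives per volume reported above follow from matching predictions to ground-truth nodes. A toy sketch with greedy nearest-center matching; the distance threshold and all coordinates are invented assumptions, not the authors' matching criterion.

```python
def dist(a, b):
    """Euclidean distance between two 3D points."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def detection_stats(pred_centers, gt_centers, match_dist=5.0):
    """Greedily match predicted centers to ground-truth centers.
    Returns (detection_rate, false_positive_count)."""
    unmatched_gt = list(gt_centers)
    false_pos = 0
    for p in pred_centers:
        best = min(unmatched_gt, key=lambda g: dist(p, g), default=None)
        if best is not None and dist(p, best) <= match_dist:
            unmatched_gt.remove(best)   # hit: consume this ground-truth node
        else:
            false_pos += 1              # miss: count a false positive
    detected = len(gt_centers) - len(unmatched_gt)
    rate = detected / len(gt_centers) if gt_centers else 1.0
    return rate, false_pos

gt = [(0, 0, 0), (20, 0, 0)]
pred = [(1, 0, 0), (50, 0, 0)]          # one hit, one false positive
print(detection_stats(pred, gt))
```

Averaging the false-positive count over all scans gives the "false positives per volume" figure quoted in the results.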
Affiliation(s)
- Andra-Iza Iuga, Institute of Diagnostic and Interventional Radiology, Medical Faculty and University Hospital Cologne, University of Cologne, Kerpener Str. 62, 50937 Cologne, Germany
- Heike Carolus, Philips Research, Röntgenstraße 24, 22335 Hamburg, Germany
- Anna J. Höink, Institute of Diagnostic and Interventional Radiology, Medical Faculty and University Hospital Cologne, University of Cologne, Kerpener Str. 62, 50937 Cologne, Germany
- Tom Brosch, Philips Research, Röntgenstraße 24, 22335 Hamburg, Germany
- Tobias Klinder, Philips Research, Röntgenstraße 24, 22335 Hamburg, Germany
- David Maintz, Institute of Diagnostic and Interventional Radiology, Medical Faculty and University Hospital Cologne, University of Cologne, Kerpener Str. 62, 50937 Cologne, Germany
- Thorsten Persigehl, Institute of Diagnostic and Interventional Radiology, Medical Faculty and University Hospital Cologne, University of Cologne, Kerpener Str. 62, 50937 Cologne, Germany
- Bettina Baeßler, Institute of Diagnostic and Interventional Radiology, Medical Faculty and University Hospital Cologne, University of Cologne, Kerpener Str. 62, 50937 Cologne, Germany; Institute of Diagnostic and Interventional Radiology, University Hospital Zürich, Zürich, Switzerland
- Michael Püsken, Institute of Diagnostic and Interventional Radiology, Medical Faculty and University Hospital Cologne, University of Cologne, Kerpener Str. 62, 50937 Cologne, Germany
23
Rakocz N, Chiang JN, Nittala MG, Corradetti G, Tiosano L, Velaga S, Thompson M, Hill BL, Sankararaman S, Haines JL, Pericak-Vance MA, Stambolian D, Sadda SR, Halperin E. Automated identification of clinical features from sparsely annotated 3-dimensional medical imaging. NPJ Digit Med 2021; 4:44. [PMID: 33686212] [PMCID: PMC7940637] [DOI: 10.1038/s41746-021-00411-w]
Abstract
One of the core challenges in applying machine learning and artificial intelligence to medicine is the limited availability of annotated medical data. Unlike in other applications of machine learning, where labeled data are abundant, labeling and annotating medical data and images require major manual effort from expert clinicians, whose time for annotation is scarce. In this work, we propose a new deep learning technique (SLIVER-net) to predict clinical features from 3-dimensional volumes using a limited number of manually annotated examples. SLIVER-net is based on transfer learning: we borrow information about the structure and parameters of the network from publicly available large datasets. Since public volume data are scarce, we use 2D images and account for the 3-dimensional structure with a novel deep learning method that tiles the volume scans and then adds layers that leverage the 3D structure. To illustrate its utility, we apply SLIVER-net to predict risk factors for progression of age-related macular degeneration (AMD), a leading cause of blindness, from optical coherence tomography (OCT) volumes acquired at multiple sites. SLIVER-net successfully predicts these factors despite being trained on a relatively small number of annotated volumes (hundreds) and only dozens of positive training examples. Our empirical evaluation demonstrates that SLIVER-net significantly outperforms standard state-of-the-art deep learning techniques used for medical volumes, and its performance generalizes, as validated on an external testing set. In a direct comparison with a clinician panel, we find that SLIVER-net also outperforms junior specialists and identifies AMD progression risk factors similarly to expert retina specialists.
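The slice-tiling idea described in this abstract can be sketched in a few lines (a hypothetical illustration, not the authors' SLIVER-net code; the `tile_volume` helper and the toy 2x2 slices are invented for the example):

```python
# Hypothetical sketch of the tiling idea: arrange the slices of a 3D volume
# side by side in a 2D grid, so that a 2D backbone pretrained on large public
# image datasets can process the whole volume at once.
def tile_volume(volume, cols):
    """volume: list of 2D slices (each a list of rows); returns one 2D grid."""
    rows_per_slice = len(volume[0])
    row_len = len(volume[0][0])
    grid = []
    for start in range(0, len(volume), cols):
        band = volume[start:start + cols]
        # pad the last band with blank slices so every band has `cols` entries
        while len(band) < cols:
            band.append([[0] * row_len for _ in range(rows_per_slice)])
        for r in range(rows_per_slice):
            grid.append([px for sl in band for px in sl[r]])
    return grid

# 4 slices of 2x2 pixels tiled into a 2-column grid -> one 4x4 image
vol = [[[s, s], [s, s]] for s in range(4)]
tiled = tile_volume(vol, cols=2)
```

The tiled 2D image can then be fed to a standard 2D network, with extra layers on top to recover cross-slice (3D) context, which is the transfer-learning route the abstract describes.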
Affiliation(s)
- Nadav Rakocz
- Department of Computer Science, University of California, Los Angeles, CA, USA
- Jeffrey N Chiang
- Department of Computational Medicine, University of California, Los Angeles, CA, USA
- Giulia Corradetti
- Doheny Eye Institute, Los Angeles, CA, USA
- Department of Ophthalmology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Liran Tiosano
- Doheny Eye Institute, Los Angeles, CA, USA
- Faculty of Medicine, Hebrew University of Jerusalem, Department of Ophthalmology, Hadassah-Hebrew University Medical Center, Jerusalem, Israel
- Michael Thompson
- Department of Computer Science, University of California, Los Angeles, CA, USA
- Brian L Hill
- Department of Computer Science, University of California, Los Angeles, CA, USA
- Sriram Sankararaman
- Department of Computer Science, University of California, Los Angeles, CA, USA
- Department of Computational Medicine, University of California, Los Angeles, CA, USA
- Department of Human Genetics, University of California, Los Angeles, CA, USA
- Jonathan L Haines
- Department of Population & Quantitative Health Sciences, Case Western Reserve University, Cleveland, OH, USA
- Margaret A Pericak-Vance
- John P. Hussman Institute for Human Genomics, University of Miami Miller School of Medicine, Miami, FL, USA
- Dwight Stambolian
- Department of Ophthalmology, University of Pennsylvania, Perelman School of Medicine, Philadelphia, PA, USA
- Srinivas R Sadda
- Doheny Eye Institute, Los Angeles, CA, USA
- Department of Ophthalmology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Eran Halperin
- Department of Computer Science, University of California, Los Angeles, CA, USA
- Department of Computational Medicine, University of California, Los Angeles, CA, USA
- Faculty of Medicine, Hebrew University of Jerusalem, Department of Ophthalmology, Hadassah-Hebrew University Medical Center, Jerusalem, Israel
- Department of Anesthesiology, University of California, Los Angeles, CA, USA
- Institute of Precision Health, University of California, Los Angeles, CA, USA
24
An F, Li X, Ma X. Medical Image Classification Algorithm Based on Visual Attention Mechanism-MCNN. OXIDATIVE MEDICINE AND CELLULAR LONGEVITY 2021; 2021:6280690. [PMID: 33688390 PMCID: PMC7914083 DOI: 10.1155/2021/6280690] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/04/2020] [Revised: 02/02/2021] [Accepted: 02/06/2021] [Indexed: 11/23/2022]
Abstract
Due to the complexity of medical images, traditional medical image classification methods can no longer meet practical application needs. In recent years, the rapid development of deep learning theory has provided a technical approach to medical image classification. However, deep learning faces the following problems when applied to medical image classification. First, it is difficult to construct a deep learning model with excellent performance tailored to the characteristics of medical images. Second, current deep learning network structures and training strategies are poorly adapted to medical images. Therefore, this paper first introduces a visual attention mechanism into the deep learning model so that information can be extracted more effectively from medical images and reasoning is performed at a finer granularity, which also increases the interpretability of the model. Additionally, to address the mismatch between deep learning network structures and training strategies on the one hand and medical images on the other, this paper constructs a novel multiscale convolutional neural network model that automatically extracts high-level discriminative appearance features from the original image; its loss function uses a Mahalanobis distance optimization model to obtain a better training strategy, which improves the robustness of the network model. The medical image classification task is completed by the above method. Based on these ideas, this paper proposes a medical image classification algorithm based on a visual attention mechanism and a multiscale convolutional neural network. Lung nodule and breast cancer images were classified with this method. The experimental results show that the accuracy of medical image classification achieved in this paper is not only higher than that of traditional machine learning methods but also improved compared with other deep learning methods, and the method has good stability and robustness.
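The Mahalanobis-distance idea used in the loss function above can be illustrated as follows (an illustrative sketch, not the paper's implementation; a diagonal covariance and the `mahalanobis_diag` helper are simplifying assumptions made for the example):

```python
import math

# Unlike Euclidean distance, the Mahalanobis distance rescales each feature
# by its variance, so high-variance features contribute less. With a diagonal
# covariance matrix it reduces to a per-feature weighted Euclidean distance.
def mahalanobis_diag(x, mean, var):
    return math.sqrt(sum((xi - mi) ** 2 / vi
                         for xi, mi, vi in zip(x, mean, var)))

# A point 2 standard deviations away along each of two axes:
# (2-0)^2/1 + (4-0)^2/4 = 4 + 4, so the distance is sqrt(8).
d = mahalanobis_diag([2.0, 4.0], [0.0, 0.0], [1.0, 4.0])
```

A loss built on such a distance pulls samples toward their class statistics in units of per-feature spread rather than raw pixel or feature magnitude.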
Affiliation(s)
- Fengping An
- School of Physics and Electronic Electrical Engineering, Huaiyin Normal University, Huaian 223300, China
- Xiaowei Li
- School of Physics and Electronic Electrical Engineering, Huaiyin Normal University, Huaian 223300, China
- Xingmin Ma
- System Second Department, North China Institute of Computing Technology, Beijing 100083, China
25
Mostafiz R, Uddin MS, Alam NA, Hasan MM, Rahman MM. MRI-based brain tumor detection using the fusion of histogram oriented gradients and neural features. EVOLUTIONARY INTELLIGENCE 2021. [DOI: 10.1007/s12065-020-00550-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
26
Wang Z, Yu Z, Wang Y, Zhang H, Luo Y, Shi L, Wang Y, Guo C. 3D Compressed Convolutional Neural Network Differentiates Neuromyelitis Optical Spectrum Disorders From Multiple Sclerosis Using Automated White Matter Hyperintensities Segmentations. Front Physiol 2021; 11:612928. [PMID: 33424635 PMCID: PMC7786373 DOI: 10.3389/fphys.2020.612928] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2020] [Accepted: 12/07/2020] [Indexed: 12/15/2022] Open
Abstract
Background Magnetic resonance imaging (MRI) has a wide range of applications in medical imaging. Recently, studies based on deep learning algorithms have demonstrated powerful processing capabilities for medical imaging data. Previous studies have mostly focused on common diseases, which usually have large-scale datasets and lesions centralized in the brain. In this paper, we used deep learning models to process MRI images to automatically differentiate the rare neuromyelitis optica spectrum disorder (NMOSD) from multiple sclerosis (MS), both of which are characterized by scattered and overlapping lesions. Methods We proposed a novel model structure to capture the essential information of 3D MRI images and convert it into lower dimensions. To empirically prove the efficiency of our model, we first used a conventional 3-dimensional (3D) model to classify the T2-weighted fluid-attenuated inversion recovery (T2-FLAIR) images and showed that traditional 3D convolutional neural network (CNN) models lack the learning capacity to distinguish between NMOSD and MS. We then compressed the 3D T2-FLAIR images with a two-view compression block so that 2D models of two different depths (18 and 34 layers) could be applied for disease diagnosis, and we also applied transfer learning by pre-training our model on the ImageNet dataset. Results Our models achieved superior performance when pre-trained on the ImageNet dataset: the average accuracies of the 34-layer and 18-layer models were 0.75 and 0.725, sensitivities were 0.707 and 0.708, and specificities were 0.759 and 0.719, respectively. Meanwhile, the traditional 3D CNN models lacked the learning capacity to distinguish between NMOSD and MS. Conclusion The novel CNN model we propose can automatically differentiate the rare NMOSD from MS; in particular, it showed better performance than traditional 3D CNN models. This indicates that our 3D compressed CNN models are applicable to diseases with small-scale datasets and overlapping, scattered lesions.
Affiliation(s)
- Zhuo Wang
- Key Laboratory of Symbol Computation & Knowledge Engineering, Ministry of Education, College of Computer Science & Technology, Jilin University, Changchun, China
- Department of Radiology, the First Hospital of Jilin University, Changchun, China
- Zhezhou Yu
- Key Laboratory of Symbol Computation & Knowledge Engineering, Ministry of Education, College of Computer Science & Technology, Jilin University, Changchun, China
- Yao Wang
- Key Laboratory of Symbol Computation & Knowledge Engineering, Ministry of Education, College of Computer Science & Technology, Jilin University, Changchun, China
- Huimao Zhang
- Department of Radiology, the First Hospital of Jilin University, Changchun, China
- Jilin Provincial Key Laboratory for Medical Imaging, Changchun, China
- Yishan Luo
- BrainNow Research Institute, Hong Kong, China
- Lin Shi
- BrainNow Research Institute, Hong Kong, China
- Department of Imaging and Interventional Radiology, Chinese University of Hong Kong, Hong Kong, China
- Yan Wang
- Key Laboratory of Symbol Computation & Knowledge Engineering, Ministry of Education, College of Computer Science & Technology, Jilin University, Changchun, China
- Chunjie Guo
- Department of Radiology, the First Hospital of Jilin University, Changchun, China
- Jilin Provincial Key Laboratory for Medical Imaging, Changchun, China
27
Hussain MM, Shabbir A, Bakhshi SK, Shamim MS. Are Thinking Machines Breaking New Frontiers in Neuro-Oncology? A Narrative Review on the Emerging Role of Machine Learning in Neuro-Oncological Practice. Asian J Neurosurg 2021; 16:8-13. [PMID: 34211861 PMCID: PMC8202358 DOI: 10.4103/ajns.ajns_265_20] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2020] [Revised: 08/07/2020] [Accepted: 09/17/2020] [Indexed: 11/21/2022] Open
Abstract
Medical science in general, and oncology in particular, is a dynamic, rapidly evolving field. Brain and spine tumors, whether primary or secondary, constitute a significant number of cases in any oncological practice. With the rapid influx of data in all aspects of neuro-oncological care, it is almost impossible for practicing clinicians to remain abreast of current trends, or to synthesize the available data so that it is maximally beneficial for their patients. Machine-learning (ML) tools are fast gaining acceptance as an alternative to conventional reliance on online data. ML uses artificial intelligence to provide computer algorithm-based information to clinicians. Different ML models have been proposed in the literature, with variable degrees of precision and database requirements. ML can potentially solve the aforementioned problems for practicing clinicians not just by extracting and analyzing useful data, but also by minimizing or eliminating certain potential areas of human error, creating patient-specific treatment plans, and predicting outcomes with reasonable accuracy. Current information on ML in neuro-oncology is scattered, and this literature review is an attempt to consolidate it and provide recent updates.
Affiliation(s)
- Ainsia Shabbir
- Department of Computer and Information Systems Engineering, NED University of Engineering and Technology, Karachi, Pakistan
28
van der Voort SR, Smits M, Klein S. DeepDicomSort: An Automatic Sorting Algorithm for Brain Magnetic Resonance Imaging Data. Neuroinformatics 2021; 19:159-184. [PMID: 32627144 PMCID: PMC7782469 DOI: 10.1007/s12021-020-09475-7] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
With the increasing size of datasets used in medical imaging research, the need for automated data curation is arising. One important data curation task is the structured organization of a dataset for preserving integrity and ensuring reusability. Therefore, we investigated whether this data organization step can be automated. To this end, we designed a convolutional neural network (CNN) that automatically recognizes eight different brain magnetic resonance imaging (MRI) scan types based on visual appearance. Thus, our method is unaffected by inconsistent or missing scan metadata. It can recognize pre-contrast T1-weighted (T1w), post-contrast T1-weighted (T1wC), T2-weighted (T2w), proton density-weighted (PDw) and derived maps (e.g. apparent diffusion coefficient and cerebral blood flow). In a first experiment, we used scans of subjects with brain tumors: 11065 scans of 719 subjects for training, and 2369 scans of 192 subjects for testing. The CNN achieved an overall accuracy of 98.7%. In a second experiment, we trained the CNN on all 13434 scans from the first experiment and tested it on 7227 scans of 1318 Alzheimer's subjects. Here, the CNN achieved an overall accuracy of 98.5%. In conclusion, our method can accurately predict scan type, and can quickly and automatically sort a brain MRI dataset virtually without the need for manual verification. In this way, our method can assist with properly organizing a dataset, which maximizes the shareability and integrity of the data.
Affiliation(s)
- Sebastian R van der Voort
- Biomedical Imaging Group Rotterdam, Departments of Radiology and Nuclear Medicine and Medical Informatics, Erasmus MC - University Medical Centre Rotterdam, Rotterdam, The Netherlands.
- Marion Smits
- Department of Radiology and Nuclear Medicine, Erasmus MC - University Medical Centre Rotterdam, Rotterdam, The Netherlands
- Stefan Klein
- Biomedical Imaging Group Rotterdam, Departments of Radiology and Nuclear Medicine and Medical Informatics, Erasmus MC - University Medical Centre Rotterdam, Rotterdam, The Netherlands
29
Shen C, Tsai MY, Chen L, Li S, Nguyen D, Wang J, Jiang SB, Jia X. On the robustness of deep learning-based lung-nodule classification for CT images with respect to image noise. Phys Med Biol 2020; 65:245037. [PMID: 33152716 PMCID: PMC7870572 DOI: 10.1088/1361-6560/abc812] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
Abstract
Robustness is an important aspect when evaluating a method of medical image analysis. In this study, we investigated the robustness of a deep learning (DL)-based lung-nodule classification model for CT images with respect to noise perturbations. A deep neural network (DNN) was established to classify 3D CT images of lung nodules into malignant or benign groups. The established DNN was able to predict the malignancy rate of lung nodules based on CT images, achieving an area under the curve of 0.91 for the testing dataset in a tenfold cross validation, as compared to radiologists' predictions. We then evaluated its robustness against noise perturbations. We added noise signals to the input CT images, generated either randomly or via an optimization scheme using a realistic noise model based on a noise power spectrum for a given mAs level, and monitored the DNN's output. The results showed that CT noise was able to affect the prediction results of the established DNN model. With random noise perturbations at 100 mAs, the DNN's predictions for 11.2% of training data and 17.4% of testing data were altered at least once. The percentages increased to 23.4% and 34.3%, respectively, for optimization-based perturbations. We further evaluated the robustness of models with different architectures, parameters, numbers of output labels, etc., and found robustness concerns in these models to different degrees. To improve model robustness, we empirically proposed an adaptive training scheme. It fine-tuned the DNN model by including in the training dataset the perturbed examples that had successfully altered the DNN's predictions. The adaptive scheme was performed repeatedly to gradually improve the DNN's robustness. After two iterations of the adaptive training scheme, the proportions of perturbations at 100 mAs affecting the DNN's predictions were reduced to 10.8% for training and 21.1% for testing. Our study illustrates that robustness may be a concern for an exemplary DL-based lung-nodule classification model for CT images, indicating the need to evaluate and ensure model robustness when developing similar models. The proposed adaptive training scheme may be able to improve model robustness.
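The noise-perturbation probe described in this abstract can be sketched with a toy stand-in model (hypothetical: `predict` and `altered_fraction` are invented for illustration and are not the authors' DNN or their noise-power-spectrum model):

```python
import random

random.seed(42)

def predict(x):
    # Stand-in classifier: a simple sign threshold plays the role of the DNN.
    return 1 if x >= 0.0 else 0

def altered_fraction(inputs, sigma, trials=100):
    """Fraction of inputs whose prediction is altered at least once
    when Gaussian noise of scale `sigma` is added repeatedly."""
    altered = 0
    for x in inputs:
        clean = predict(x)
        if any(predict(x + random.gauss(0, sigma)) != clean
               for _ in range(trials)):
            altered += 1
    return altered / len(inputs)

# Inputs near the decision boundary flip under weak noise; only stronger
# noise (a higher effective noise level) threatens the far-away inputs.
frac_weak = altered_fraction([-5.0, -0.1, 0.1, 5.0], sigma=0.5)
frac_strong = altered_fraction([-5.0, -0.1, 0.1, 5.0], sigma=3.0)
```

The adaptive scheme in the abstract then amounts to collecting the perturbed inputs that flipped the prediction and fine-tuning the model on them before probing again.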
Affiliation(s)
- Chenyang Shen
- innovative Technology Of Radiotherapy Computations and Hardware (iTORCH) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, 75235
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, University of Texas Southwestern Medical Center, Dallas, TX, 75235
- Min-Yu Tsai
- innovative Technology Of Radiotherapy Computations and Hardware (iTORCH) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, 75235
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, University of Texas Southwestern Medical Center, Dallas, TX, 75235
- Department of Computer Science & Information Engineering, National Taiwan University, Taipei, Taiwan
- Liyuan Chen
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, University of Texas Southwestern Medical Center, Dallas, TX, 75235
- Shulong Li
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, University of Texas Southwestern Medical Center, Dallas, TX, 75235
- Dan Nguyen
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, University of Texas Southwestern Medical Center, Dallas, TX, 75235
- Jing Wang
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, University of Texas Southwestern Medical Center, Dallas, TX, 75235
- Steve B. Jiang
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, University of Texas Southwestern Medical Center, Dallas, TX, 75235
- Xun Jia
- innovative Technology Of Radiotherapy Computations and Hardware (iTORCH) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, 75235
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, University of Texas Southwestern Medical Center, Dallas, TX, 75235
30
Hussain L, Nguyen T, Li H, Abbasi AA, Lone KJ, Zhao Z, Zaib M, Chen A, Duong TQ. Machine-learning classification of texture features of portable chest X-ray accurately classifies COVID-19 lung infection. Biomed Eng Online 2020; 19:88. [PMID: 33239006 PMCID: PMC7686836 DOI: 10.1186/s12938-020-00831-x] [Citation(s) in RCA: 39] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2020] [Accepted: 11/17/2020] [Indexed: 12/21/2022] Open
Abstract
BACKGROUND The large volume and suboptimal image quality of portable chest X-rays (CXRs) as a result of the COVID-19 pandemic could pose significant challenges for radiologists and frontline physicians. Deep-learning artificial intelligence (AI) methods have the potential to help improve diagnostic efficiency and accuracy for reading portable CXRs. PURPOSE This study aimed to develop an AI imaging analysis tool to classify COVID-19 lung infection based on portable CXRs. MATERIALS AND METHODS Public datasets of COVID-19 (N = 130), bacterial pneumonia (N = 145), non-COVID-19 viral pneumonia (N = 145), and normal (N = 138) CXRs were analyzed. Texture and morphological features were extracted. Five supervised machine-learning AI algorithms were used to classify COVID-19 from the other conditions. Two-class and multi-class classification were performed. Statistical analysis was done using unpaired two-tailed t tests with unequal variance between groups. Performance of the classification models was assessed using receiver-operating characteristic (ROC) curve analysis. RESULTS For the two-class classification, the accuracy, sensitivity and specificity were, respectively, 100%, 100%, and 100% for COVID-19 vs normal; 96.34%, 95.35% and 97.44% for COVID-19 vs bacterial pneumonia; and 97.56%, 97.44% and 97.67% for COVID-19 vs non-COVID-19 viral pneumonia. For the multi-class classification, the combined accuracy and AUC were 79.52% and 0.87, respectively. CONCLUSION AI classification of texture and morphological features of portable CXRs accurately distinguishes COVID-19 lung infection in patients in multi-class datasets. Deep-learning methods have the potential to improve diagnostic efficiency and accuracy for portable CXRs.
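The texture-feature pipeline summarized above can be illustrated schematically (a hedged sketch: the first-order features and the nearest-centroid classifier below are stand-ins chosen for brevity, not the study's exact feature set or its five ML algorithms):

```python
import math

# Summarize each image by simple first-order texture statistics
# (mean intensity, variance, and gray-level entropy), then classify
# by distance to per-class feature centroids.
def texture_features(img):
    pixels = [p for row in img for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    counts = {}
    for p in pixels:
        counts[p] = counts.get(p, 0) + 1
    entropy = -sum(c / n * math.log2(c / n) for c in counts.values())
    return [mean, var, entropy]

def nearest_centroid(feat, centroids):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(feat, centroids[label]))

uniform = [[5, 5], [5, 5]]    # flat texture: zero variance and entropy
speckled = [[0, 9], [9, 0]]   # high-contrast texture
centroids = {"normal": texture_features(uniform),
             "covid": texture_features(speckled)}
label = nearest_centroid(texture_features([[1, 8], [8, 1]]), centroids)
```

In the study itself, richer texture and morphological descriptors feed five supervised classifiers, but the shape of the pipeline (feature extraction, then supervised classification) is the same.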
Affiliation(s)
- Lal Hussain
- Department of Computer Science and IT, King Abdullah Campus, University of Azad Jammu and Kashmir, Muzaffarabad, 13100, Azad Kashmir, Pakistan.
- Department of Computer Science and IT, Neelum Campus, University of Azad Jammu and Kashmir, Athmuqam, 13230, Azad Kashmir, Pakistan.
- Tony Nguyen
- Department of Radiology, Renaissance School of Medicine at Stony Brook University, 101 Nicolls Rd, Stony Brook, NY, 11794, USA
- Haifang Li
- Department of Radiology, Renaissance School of Medicine at Stony Brook University, 101 Nicolls Rd, Stony Brook, NY, 11794, USA
- Adeel A Abbasi
- Department of Computer Science and IT, King Abdullah Campus, University of Azad Jammu and Kashmir, Muzaffarabad, 13100, Azad Kashmir, Pakistan
- Kashif J Lone
- Department of Computer Science and IT, King Abdullah Campus, University of Azad Jammu and Kashmir, Muzaffarabad, 13100, Azad Kashmir, Pakistan
- Zirun Zhao
- Department of Radiology, Renaissance School of Medicine at Stony Brook University, 101 Nicolls Rd, Stony Brook, NY, 11794, USA
- Mahnoor Zaib
- Department of Computer Science and IT, Neelum Campus, University of Azad Jammu and Kashmir, Athmuqam, 13230, Azad Kashmir, Pakistan
- Anne Chen
- Department of Radiology, Renaissance School of Medicine at Stony Brook University, 101 Nicolls Rd, Stony Brook, NY, 11794, USA
- Tim Q Duong
- Department of Radiology, Renaissance School of Medicine at Stony Brook University, 101 Nicolls Rd, Stony Brook, NY, 11794, USA
31
Predicting Survival in Glioblastoma Patients Using Diffusion MR Imaging Metrics-A Systematic Review. Cancers (Basel) 2020; 12:cancers12102858. [PMID: 33020420 PMCID: PMC7600641 DOI: 10.3390/cancers12102858] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2020] [Revised: 09/28/2020] [Accepted: 10/01/2020] [Indexed: 12/20/2022] Open
Abstract
Simple Summary Accurate survival analysis is crucial for disease management in glioblastoma (GBM) patients. Because diffusion MRI techniques can provide a quantitative assessment of GBM tumours, an ever-growing number of studies has investigated the role of diffusion MRI metrics in the survival prediction of GBM patients. Since the role of diffusion MRI in the prediction and evaluation of survival outcomes has not been fully addressed, and results are often controversial or unsatisfactory, we performed this systematic review in order to collect, summarize and evaluate all studies assessing the role of diffusion MRI metrics in predicting survival in GBM patients. We found that quantitative diffusion MRI metrics provide useful information for predicting survival outcomes in GBM patients, mainly in combination with other clinical and multimodality imaging parameters. Abstract Despite advances in the surgical and medical treatment of glioblastoma (GBM), the median survival is about 15 months and varies significantly, with occasional longer survivors and individuals whose tumours show a markedly better response to therapy than others. Diffusion MRI can provide a quantitative assessment of the intratumoral heterogeneity of GBM infiltration, which is of clinical significance for targeted surgery and therapy aimed at improving GBM patient survival. The aim of this systematic review is therefore to assess the role of diffusion MRI metrics in predicting the survival of patients with GBM. According to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement, a systematic literature search was performed to identify original articles since 2010 that evaluated the association of diffusion MRI metrics with overall survival (OS) and progression-free survival (PFS). The quality of the included studies was evaluated using the QUIPS tool. A total of 52 articles were selected. The most examined metrics were associated with the standard Diffusion Weighted Imaging (DWI) model (34 studies) and the Diffusion Tensor Imaging (DTI) model (17 studies). Our findings show that quantitative diffusion MRI metrics provide useful information for predicting survival outcomes in GBM patients, mainly in combination with other clinical and multimodality imaging parameters.
32
Zhang L, Dong D, Zhang W, Hao X, Fang M, Wang S, Li W, Liu Z, Wang R, Zhou J, Tian J. A deep learning risk prediction model for overall survival in patients with gastric cancer: A multicenter study. Radiother Oncol 2020; 150:73-80. [DOI: 10.1016/j.radonc.2020.06.010] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2020] [Revised: 06/08/2020] [Accepted: 06/09/2020] [Indexed: 02/07/2023]
33
Zhou T, Fu H, Chen G, Shen J, Shao L. Hi-Net: Hybrid-Fusion Network for Multi-Modal MR Image Synthesis. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2772-2781. [PMID: 32086202 DOI: 10.1109/tmi.2020.2975344] [Citation(s) in RCA: 91] [Impact Index Per Article: 22.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Magnetic resonance imaging (MRI) is a widely used neuroimaging technique that can provide images of different contrasts (i.e., modalities). Fusing this multi-modal data has proven particularly effective for boosting model performance in many tasks. However, due to poor data quality and frequent patient dropout, collecting all modalities for every patient remains a challenge. Medical image synthesis has been proposed as an effective solution, where any missing modalities are synthesized from the existing ones. In this paper, we propose a novel Hybrid-fusion Network (Hi-Net) for multi-modal MR image synthesis, which learns a mapping from multi-modal source images (i.e., existing modalities) to target images (i.e., missing modalities). In our Hi-Net, a modality-specific network is utilized to learn representations for each individual modality, and a fusion network is employed to learn the common latent representation of multi-modal data. Then, a multi-modal synthesis network is designed to densely combine the latent representation with hierarchical features from each modality, acting as a generator to synthesize the target images. Moreover, a layer-wise multi-modal fusion strategy effectively exploits the correlations among multiple modalities, where a Mixed Fusion Block (MFB) is proposed to adaptively weight different fusion strategies. Extensive experiments demonstrate that the proposed model outperforms other state-of-the-art medical image synthesis methods.
34
Rashid B, Calhoun V. Towards a brain-based predictome of mental illness. Hum Brain Mapp 2020; 41:3468-3535. [PMID: 32374075 PMCID: PMC7375108 DOI: 10.1002/hbm.25013] [Citation(s) in RCA: 65] [Impact Index Per Article: 16.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2019] [Revised: 04/06/2020] [Accepted: 04/06/2020] [Indexed: 01/10/2023] Open
Abstract
Neuroimaging-based approaches have been extensively applied to study mental illness in recent years and have deepened our understanding of both cognitively healthy and disordered brain structure and function. Recent advancements in machine learning techniques have shown promising outcomes for individualized prediction and characterization of patients with psychiatric disorders. Studies have utilized features from a variety of neuroimaging modalities, including structural, functional, and diffusion magnetic resonance imaging data, as well as jointly estimated features from multiple modalities, to assess patients with heterogeneous mental disorders, such as schizophrenia and autism. We use the term "predictome" to describe the use of multivariate brain network features from one or more neuroimaging modalities to predict mental illness. In the predictome, multiple brain network-based features (either from the same modality or multiple modalities) are incorporated into a predictive model to jointly estimate features that are unique to a disorder and predict subjects accordingly. To date, more than 650 studies have been published on subject-level prediction focusing on psychiatric disorders. We have surveyed about 250 of these studies, covering schizophrenia, major depression, bipolar disorder, autism spectrum disorder, attention-deficit hyperactivity disorder, obsessive-compulsive disorder, social anxiety disorder, posttraumatic stress disorder, and substance dependence. In this review, we present a comprehensive review of recent neuroimaging-based predictomic approaches, current trends, and common shortcomings, and share our vision for future directions.
Affiliation(s)
- Barnaly Rashid
- Department of Psychiatry, Harvard Medical School, Boston, Massachusetts, USA
- Vince Calhoun
- Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State University, Georgia Institute of Technology, and Emory University, Atlanta, Georgia, USA
35
Yang W, Shi Y, Park SH, Yang M, Gao Y, Shen D. An Effective MR-Guided CT Network Training for Segmenting Prostate in CT Images. IEEE J Biomed Health Inform 2020; 24:2278-2291. [DOI: 10.1109/jbhi.2019.2960153]
36
Forghani R. Precision Digital Oncology: Emerging Role of Radiomics-based Biomarkers and Artificial Intelligence for Advanced Imaging and Characterization of Brain Tumors. Radiol Imaging Cancer 2020; 2:e190047. [PMID: 33778721 DOI: 10.1148/rycan.2020190047]
Abstract
Advances in computerized image analysis and the use of artificial intelligence-based approaches for image-based analysis and construction of prediction algorithms represent a new era for noninvasive biomarker discovery. In recent literature, it has become apparent that radiologic images can serve as mineable databases that contain large amounts of quantitative features with potential clinical significance. Extraction and analysis of these quantitative features is commonly referred to as texture or radiomic analysis. Numerous studies have demonstrated applications for texture and radiomic characterization methods for assessing brain tumors to improve noninvasive predictions of tumor histologic characteristics, molecular profile, distinction of treatment-related changes, and prediction of patient survival. In this review, the current use and future potential of texture or radiomic-based approaches with machine learning for brain tumor image analysis and prediction algorithm construction will be discussed. This technology has the potential to advance the value of diagnostic imaging by extracting currently unused information on medical scans that enables more precise, personalized therapy; however, significant barriers must be overcome if this technology is to be successfully implemented on a wide scale for routine use in the clinical setting. Keywords: Adults and Pediatrics, Brain/Brain Stem, CNS, Computer Aided Diagnosis (CAD), Computer Applications-General (Informatics), Image Postprocessing, Informatics, Neural Networks, Neuro-Oncology, Oncology, Treatment Effects, Tumor Response Supplemental material is available for this article. © RSNA, 2020.
Affiliation(s)
- Reza Forghani
- Department of Radiology, McGill University Health Centre, 1001 Decarie Blvd, Room C02.5821, Montreal, QC, Canada H4A 3J1; Augmented Intelligence & Precision Health Laboratory (AIPHL), Research Institute of the McGill University Health Centre, Montreal, Canada; Segal Cancer Centre and Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Canada; Gerald Bronfman Department of Oncology, McGill University, Montreal, Canada; and Department of Otolaryngology-Head and Neck Surgery, McGill University, Montreal, Canada
37
Park JE, Kickingereder P, Kim HS. Radiomics and Deep Learning from Research to Clinical Workflow: Neuro-Oncologic Imaging. Korean J Radiol 2020; 21:1126-1137. [PMID: 32729271 PMCID: PMC7458866 DOI: 10.3348/kjr.2019.0847]
Abstract
Imaging plays a key role in the management of brain tumors, including the diagnosis, prognosis, and treatment response assessment. Radiomics and deep learning approaches, along with various advanced physiologic imaging parameters, hold great potential for aiding radiological assessments in neuro-oncology. The ongoing development of new technology needs to be validated in clinical trials and incorporated into the clinical workflow. However, none of the potential neuro-oncological applications for radiomics and deep learning has yet been realized in clinical practice. In this review, we summarize the current applications of radiomics and deep learning in neuro-oncology and discuss challenges in relation to evidence-based medicine and reporting guidelines, as well as potential applications in clinical workflows and routine clinical practice.
Affiliation(s)
- Ji Eun Park
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Philipp Kickingereder
- Department of Neuroradiology, University of Heidelberg, Im Neuenheimer Feld, Heidelberg, Germany
- Ho Sung Kim
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
38
Panayides AS, Amini A, Filipovic ND, Sharma A, Tsaftaris SA, Young A, Foran D, Do N, Golemati S, Kurc T, Huang K, Nikita KS, Veasey BP, Zervakis M, Saltz JH, Pattichis CS. AI in Medical Imaging Informatics: Current Challenges and Future Directions. IEEE J Biomed Health Inform 2020; 24:1837-1857. [PMID: 32609615 PMCID: PMC8580417 DOI: 10.1109/jbhi.2020.2991043]
Abstract
This paper reviews state-of-the-art research solutions across the spectrum of medical imaging informatics, discusses clinical translation, and provides future directions for advancing clinical practice. More specifically, it summarizes advances in medical imaging acquisition technologies for different modalities, highlighting the necessity for efficient medical data management strategies in the context of AI in big healthcare data analytics. It then provides a synopsis of contemporary and emerging algorithmic methods for disease classification and organ/tissue segmentation, focusing on AI and deep learning architectures that have already become the de facto approach. The clinical benefits of in-silico modelling advances linked with evolving 3D reconstruction and visualization applications are further documented. In conclusion, integrative analytics approaches driven by the associated research branches highlighted in this study promise to revolutionize imaging informatics as known today across the healthcare continuum, for both radiology and digital pathology applications. The latter is projected to enable informed, more accurate diagnosis, timely prognosis, and effective treatment planning, underpinning precision medicine.
39
Tang Z, Xu Y, Jin L, Aibaidula A, Lu J, Jiao Z, Wu J, Zhang H, Shen D. Deep Learning of Imaging Phenotype and Genotype for Predicting Overall Survival Time of Glioblastoma Patients. IEEE Trans Med Imaging 2020; 39:2100-2109. [PMID: 31905135 PMCID: PMC7289674 DOI: 10.1109/tmi.2020.2964310]
Abstract
Glioblastoma (GBM) is the most common and deadly malignant brain tumor. For personalized treatment, an accurate pre-operative prognosis for GBM patients is highly desired. Recently, many machine learning-based methods have been adopted to predict overall survival (OS) time based on the pre-operative mono- or multi-modal imaging phenotype. The genotypic information of GBM has been proven to be strongly indicative of the prognosis; however, this has not been considered in the existing imaging-based OS prediction methods. The main reason is that the tumor genotype is unavailable pre-operatively unless derived from craniotomy. In this paper, we propose a new deep learning-based OS prediction method for GBM patients, which can derive tumor genotype-related features from pre-operative multimodal magnetic resonance imaging (MRI) brain data and feed them to OS prediction. Specifically, we propose a multi-task convolutional neural network (CNN) to accomplish both tumor genotype and OS prediction tasks jointly. As the network can benefit from learning tumor genotype-related features for genotype prediction, the accuracy of predicting OS time can be prominently improved. In the experiments, a multimodal MRI brain dataset of 120 GBM patients, with as many as four different genotypic/molecular biomarkers, is used to evaluate our method. Our method achieves the highest OS prediction accuracy compared to other state-of-the-art methods.
40
Nadeem MW, Ghamdi MAA, Hussain M, Khan MA, Khan KM, Almotiri SH, Butt SA. Brain Tumor Analysis Empowered with Deep Learning: A Review, Taxonomy, and Future Challenges. Brain Sci 2020; 10:118. [PMID: 32098333 PMCID: PMC7071415 DOI: 10.3390/brainsci10020118]
Abstract
Deep learning (DL) algorithms enable computational models consisting of multiple processing layers that represent data with multiple levels of abstraction. In recent years, the usage of deep learning has been rapidly proliferating in almost every domain, especially in medical image processing, medical image analysis, and bioinformatics. Consequently, deep learning has dramatically changed and effectively improved the means of recognition, prediction, and diagnosis in numerous areas of healthcare such as pathology, brain tumor, lung cancer, abdomen, cardiac, and retina. Considering the wide range of applications of deep learning, the objective of this article is to review major deep learning concepts pertinent to brain tumor analysis (e.g., segmentation, classification, prediction, and evaluation). A review conducted by summarizing a large number of scientific contributions to the field (i.e., deep learning in brain tumor analysis) is presented in this study. A coherent taxonomy of the research landscape from the literature has also been mapped, and the major aspects of this emerging field have been discussed and analyzed. A critical discussion section to show the limitations of deep learning techniques has been included at the end to elaborate open research challenges and directions for future work in this emergent area.
Affiliation(s)
- Muhammad Waqas Nadeem
- Department of Computer Science, Lahore Garrison University, Lahore 54000, Pakistan
- Department of Computer Science, School of Systems and Technology, University of Management and Technology, Lahore 54000, Pakistan
- Mohammed A. Al Ghamdi
- Department of Computer Science, Umm Al-Qura University, Makkah 23500, Saudi Arabia
- Muzammil Hussain
- Department of Computer Science, School of Systems and Technology, University of Management and Technology, Lahore 54000, Pakistan
- Muhammad Adnan Khan
- Department of Computer Science, Lahore Garrison University, Lahore 54000, Pakistan
- Khalid Masood Khan
- Department of Computer Science, Lahore Garrison University, Lahore 54000, Pakistan
- Sultan H. Almotiri
- Department of Computer Science, Umm Al-Qura University, Makkah 23500, Saudi Arabia
- Suhail Ashfaq Butt
- Department of Information Sciences, Division of Science and Technology, University of Education Township, Lahore 54700, Pakistan
41
He Y, Jiao W, Shi Y, Lian J, Zhao B, Zou W, Zhu Y, Zheng Y. Segmenting Diabetic Retinopathy Lesions in Multispectral Images Using Low-Dimensional Spatial-Spectral Matrix Representation. IEEE J Biomed Health Inform 2020; 24:493-502. [DOI: 10.1109/jbhi.2019.2912668]
42
Kam TE, Zhang H, Jiao Z, Shen D. Deep Learning of Static and Dynamic Brain Functional Networks for Early MCI Detection. IEEE Trans Med Imaging 2020; 39:478-487. [PMID: 31329111 PMCID: PMC7122732 DOI: 10.1109/tmi.2019.2928790]
Abstract
While convolutional neural network (CNN) has been demonstrating powerful ability to learn hierarchical spatial features from medical images, it is still difficult to apply it directly to resting-state functional MRI (rs-fMRI) and the derived brain functional networks (BFNs). We propose a novel CNN framework to simultaneously learn embedded features from BFNs for brain disease diagnosis. Since BFNs can be built by considering both static and dynamic functional connectivity (FC), we first decompose rs-fMRI into multiple static BFNs with modified independent component analysis. Then, the voxel-wise variability in dynamic FC is used to quantify BFN dynamics. A set of paired 3D images representing static/dynamic BFNs can be fed into 3D CNNs, from which we can hierarchically and simultaneously learn static/dynamic BFN features. As a result, the dynamic BFN features can complement static BFN features and, at the same time, different BFNs can help each other toward a joint and better classification. We validate our method on a large, publicly accessible rs-fMRI dataset for early-stage mild cognitive impairment (eMCI) diagnosis, which is one of the most challenging problems for clinicians. By comparing with a conventional method, our method shows significant diagnostic performance improvement by almost 10%. This result demonstrates the effectiveness of deep learning in preclinical Alzheimer's disease diagnosis, based on the complex and high-dimensional voxel-wise spatiotemporal patterns of the resting-state brain functional connectomics. The framework provides a new but intuitive way to fully exploit deeply embedded diagnostic features from rs-fMRI for a better-individualized diagnosis of various neurological diseases.
43
Liu L, Zhang H, Wu J, Yu Z, Chen X, Rekik I, Wang Q, Lu J, Shen D. Overall survival time prediction for high-grade glioma patients based on large-scale brain functional networks. Brain Imaging Behav 2020; 13:1333-1351. [PMID: 30155788 DOI: 10.1007/s11682-018-9949-2]
Abstract
High-grade glioma (HGG) is a lethal cancer with poor outcome. Accurate preoperative overall survival (OS) time prediction for HGG patients is crucial for treatment planning. Traditional presurgical and noninvasive OS prediction studies have used radiomics features at the local lesion area based on the magnetic resonance images (MRI). However, the highly complex lesion MRI appearance may have large individual variability, which could impede accurate individualized OS prediction. In this paper, we propose a novel concept, namely brain connectomics-based OS prediction. It is based on presurgical resting-state functional MRI (rs-fMRI) and the non-local, large-scale brain functional networks where the global and systemic prognostic features rather than the local lesion appearance are used to predict OS. We propose that the connectomics features could capture tumor-induced network-level alterations that are associated with prognosis. We construct both low-order (by means of sparse representation with regional rs-fMRI signals) and high-order functional connectivity (FC) networks (characterizing more complex multi-regional relationships by synchronized dynamic FC time courses). Then, we conduct a graph-theoretic analysis on both networks for a joint, machine-learning-based, individualized OS prediction. Based on a preliminary dataset (N = 34 with bad OS, mean OS, ~400 days; N = 34 with good OS, mean OS, ~1030 days), we achieve a promising OS prediction accuracy (86.8%) on separating the individuals with bad OS from those with good OS. However, if using only conventionally derived descriptive features (e.g., age and tumor characteristics), the accuracy is low (63.2%). Our study highlights the importance of the rs-fMRI and brain functional connectomics for treatment planning.
Affiliation(s)
- Luyan Liu
- Med-X Research Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, 1954 Huashan Road, Shanghai, 200030, China
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Han Zhang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Jinsong Wu
- Glioma Surgery Division, Neurosurgery Department of Huashan Hospital, Fudan University, Shanghai, 200040, China
- Shanghai Key Lab of Medical Image Computing and Computer-Assisted Intervention, Shanghai, 200040, China
- Neurosurgery Department of Huashan Hospital, 12 Wulumuqi Zhong Road, Shanghai, 200040, China
- Zhengda Yu
- Glioma Surgery Division, Neurosurgery Department of Huashan Hospital, Fudan University, Shanghai, 200040, China
- Xiaobo Chen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Islem Rekik
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- BASIRA Lab, CVIP Group, School of Science and Engineering, Computing, University of Dundee, Dundee, UK
- Qian Wang
- Med-X Research Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, 1954 Huashan Road, Shanghai, 200030, China
- Junfeng Lu
- Glioma Surgery Division, Neurosurgery Department of Huashan Hospital, Fudan University, Shanghai, 200040, China
- Shanghai Key Lab of Medical Image Computing and Computer-Assisted Intervention, Shanghai, 200040, China
- Neurosurgery Department of Huashan Hospital, 12 Wulumuqi Zhong Road, Shanghai, 200040, China
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Department of Brain and Cognitive Engineering, Korea University, Seoul, 02841, Republic of Korea
44
Jeong JW, Lee MH, John F, Robinette NL, Amit-Yousif AJ, Barger GR, Mittal S, Juhász C. Feasibility of Multimodal MRI-Based Deep Learning Prediction of High Amino Acid Uptake Regions and Survival in Patients With Glioblastoma. Front Neurol 2020; 10:1305. [PMID: 31920928 PMCID: PMC6928045 DOI: 10.3389/fneur.2019.01305]
Abstract
Purpose: Amino acid PET has shown high accuracy for the diagnosis and prognostication of malignant gliomas; however, this imaging modality is not widely available in clinical practice. This study explores a novel end-to-end deep learning framework ("U-Net") for its feasibility to detect high amino acid uptake glioblastoma regions (i.e., metabolic tumor volume) using clinical multimodal MRI sequences. Methods: T2, fluid-attenuated inversion recovery (FLAIR), apparent diffusion coefficient map, contrast-enhanced T1, and alpha-[11C]-methyl-L-tryptophan (AMT)-PET images were analyzed in 21 patients with newly-diagnosed glioblastoma. A U-Net system with data augmentation was implemented to deeply learn non-linear voxel-wise relationships between intensities of multimodal MRI as the input and metabolic tumor volume from AMT-PET as the output. The accuracy of the MRI- and PET-based volume measures to predict progression-free survival was tested. Results: In the augmented dataset using all four MRI modalities to investigate the upper limit of U-Net accuracy in the full study cohort, U-Net achieved high accuracy (sensitivity/specificity/positive predictive value [PPV]/negative predictive value [NPV]: 0.85/1.00/0.81/1.00, respectively) to predict PET-defined tumor volumes. Exclusion of FLAIR from the MRI input set had a strong negative effect on sensitivity (0.60). In repeated hold-out validation in randomly selected subjects, specificity and NPV remained high (1.00), but mean sensitivity (0.62) and PPV (0.68) were moderate. AMT-PET-learned MRI tumor volume from this U-Net model within the contrast-enhancing volume predicted 6-month progression-free survival with 0.86/0.63 sensitivity/specificity. Conclusions: These data indicate the feasibility of PET-based deep learning for enhanced pretreatment glioblastoma delineation and prognostication by clinical multimodal MRI.
Affiliation(s)
- Jeong-Won Jeong
- Department of Pediatrics, Wayne State University School of Medicine and PET Center and Translational Imaging Laboratory, Children's Hospital of Michigan, Detroit, MI, United States
- Department of Neurology, Wayne State University, Detroit, MI, United States
- Translational Neuroscience Program, Wayne State University, Detroit, MI, United States
- Min-Hee Lee
- Department of Pediatrics, Wayne State University School of Medicine and PET Center and Translational Imaging Laboratory, Children's Hospital of Michigan, Detroit, MI, United States
- Flóra John
- Department of Pediatrics, Wayne State University School of Medicine and PET Center and Translational Imaging Laboratory, Children's Hospital of Michigan, Detroit, MI, United States
- Natasha L Robinette
- Department of Oncology, Wayne State University, Detroit, MI, United States
- Karmanos Cancer Institute, Detroit, MI, United States
- Alit J Amit-Yousif
- Department of Oncology, Wayne State University, Detroit, MI, United States
- Karmanos Cancer Institute, Detroit, MI, United States
- Geoffrey R Barger
- Department of Neurology, Wayne State University, Detroit, MI, United States
- Karmanos Cancer Institute, Detroit, MI, United States
- Sandeep Mittal
- Department of Oncology, Wayne State University, Detroit, MI, United States
- Karmanos Cancer Institute, Detroit, MI, United States
- Department of Neurosurgery, Wayne State University, Detroit, MI, United States
- Virginia Tech Carilion School of Medicine and Carilion Clinic, Roanoke, VA, United States
- Csaba Juhász
- Department of Pediatrics, Wayne State University School of Medicine and PET Center and Translational Imaging Laboratory, Children's Hospital of Michigan, Detroit, MI, United States
- Department of Neurology, Wayne State University, Detroit, MI, United States
- Translational Neuroscience Program, Wayne State University, Detroit, MI, United States
- Karmanos Cancer Institute, Detroit, MI, United States
- Department of Neurosurgery, Wayne State University, Detroit, MI, United States
45
Booth TC, Williams M, Luis A, Cardoso J, Ashkan K, Shuaib H. Machine learning and glioma imaging biomarkers. Clin Radiol 2020; 75:20-32. [PMID: 31371027 PMCID: PMC6927796 DOI: 10.1016/j.crad.2019.07.001]
Abstract
AIM To review how machine learning (ML) is applied to imaging biomarkers in neuro-oncology, in particular for diagnosis, prognosis, and treatment response monitoring. MATERIALS AND METHODS The PubMed and MEDLINE databases were searched for articles published before September 2018 using relevant search terms. The search strategy focused on articles applying ML to high-grade glioma biomarkers for treatment response monitoring, prognosis, and prediction. RESULTS Magnetic resonance imaging (MRI) is typically used throughout the patient pathway because routine structural imaging provides detailed anatomical and pathological information and advanced techniques provide additional physiological detail. Using carefully chosen image features, ML is frequently used to allow accurate classification in a variety of scenarios. Rather than being chosen by human selection, ML also enables image features to be identified by an algorithm. Much research is applied to determining molecular profiles, histological tumour grade, and prognosis using MRI images acquired at the time that patients first present with a brain tumour. Differentiating a treatment response from a post-treatment-related effect using imaging is clinically important and also an area of active study (described here in one of two Special Issue publications dedicated to the application of ML in glioma imaging). CONCLUSION Although pioneering, most of the evidence is of a low level, having been obtained retrospectively and in single centres. Studies applying ML to build neuro-oncology monitoring biomarker models have yet to show an overall advantage over those using traditional statistical methods. Development and validation of ML models applied to neuro-oncology require large, well-annotated datasets, and therefore multidisciplinary and multi-centre collaborations are necessary.
Affiliation(s)
- T C Booth
- School of Biomedical Engineering & Imaging Sciences, King's College London, St Thomas' Hospital, London SE1 7EH, UK; Department of Neuroradiology, King's College Hospital NHS Foundation Trust, London SE5 9RS, UK
- M Williams
- Department of Neuro-oncology, Imperial College Healthcare NHS Trust, Fulham Palace Rd, London W6 8RF, UK
- A Luis
- School of Biomedical Engineering & Imaging Sciences, King's College London, St Thomas' Hospital, London SE1 7EH, UK; Department of Radiology, St George's University Hospitals NHS Foundation Trust, Blackshaw Road, London SW17 0QT, UK
- J Cardoso
- School of Biomedical Engineering & Imaging Sciences, King's College London, St Thomas' Hospital, London SE1 7EH, UK
- K Ashkan
- Department of Neurosurgery, King's College Hospital NHS Foundation Trust, London SE5 9RS, UK
- H Shuaib
- Department of Medical Physics, Guy's & St. Thomas' NHS Foundation Trust, London SE1 7EH, UK; Institute of Psychiatry, Psychology & Neuroscience, King's College London, London SE5 8AF, UK
46
Han W, Qin L, Bay C, Chen X, Yu KH, Miskin N, Li A, Xu X, Young G. Deep Transfer Learning and Radiomics Feature Prediction of Survival of Patients with High-Grade Gliomas. AJNR Am J Neuroradiol 2020; 41:40-48. [PMID: 31857325 PMCID: PMC6975328 DOI: 10.3174/ajnr.a6365]
Abstract
BACKGROUND AND PURPOSE Patient survival in high-grade glioma remains poor, despite the recent developments in cancer treatment. As new chemo-, targeted molecular, and immune therapies emerge and show promising results in clinical trials, image-based methods for early prediction of treatment response are needed. Deep learning models that incorporate radiomics features promise to extract information from brain MR imaging that correlates with response and prognosis. We report initial production of a combined deep learning and radiomics model to predict overall survival in a clinically heterogeneous cohort of patients with high-grade gliomas. MATERIALS AND METHODS Fifty patients with high-grade gliomas from our hospital and 128 patients with high-grade glioma from The Cancer Genome Atlas were included. For each patient, we calculated 348 hand-crafted radiomics features and 8192 deep features generated by a pretrained convolutional neural network. We then applied feature selection and Elastic Net-Cox modeling to differentiate patients into long- and short-term survivors. RESULTS In the 50 patients with high-grade gliomas from our institution, the combined feature analysis framework classified the patients into long- and short-term survivor groups with a log-rank test P value < .001. In the 128 patients from The Cancer Genome Atlas, the framework classified patients into long- and short-term survivors with a log-rank test P value of .014. For the mixed cohort of 50 patients from our institution and 58 patients from The Cancer Genome Atlas, it yielded a log-rank test P value of .035. CONCLUSIONS A deep learning model combining deep and radiomics features can dichotomize patients with high-grade gliomas into long- and short-term survivors.
Affiliation(s)
- W Han
- From the Department of Radiology (W.H., C.B., X.C., N.M., A.L., X.X., G.Y.), Brigham and Women's Hospital, Boston, Massachusetts
- Harvard Medical School (W.H., L.Q., C.B., K.-H.Y., N.M., X.X., G.Y.), Boston, Massachusetts
- L Qin
- Department of Imaging (L.Q., G.Y.), Dana-Farber Cancer Institute, Boston, Massachusetts
- Harvard Medical School (W.H., L.Q., C.B., K.-H.Y., N.M., X.X., G.Y.), Boston, Massachusetts
- C Bay
- From the Department of Radiology (W.H., C.B., X.C., N.M., A.L., X.X., G.Y.), Brigham and Women's Hospital, Boston, Massachusetts
- Harvard Medical School (W.H., L.Q., C.B., K.-H.Y., N.M., X.X., G.Y.), Boston, Massachusetts
- X Chen
- From the Department of Radiology (W.H., C.B., X.C., N.M., A.L., X.X., G.Y.), Brigham and Women's Hospital, Boston, Massachusetts
- Department of Radiology (X.C.), Guangzhou First People's Hospital, School of Medicine, South China University of Technology, Guangzhou, Guangdong, China
- K-H Yu
- Harvard Medical School (W.H., L.Q., C.B., K.-H.Y., N.M., X.X., G.Y.), Boston, Massachusetts
- N Miskin
- From the Department of Radiology (W.H., C.B., X.C., N.M., A.L., X.X., G.Y.), Brigham and Women's Hospital, Boston, Massachusetts
- Harvard Medical School (W.H., L.Q., C.B., K.-H.Y., N.M., X.X., G.Y.), Boston, Massachusetts
- A Li
- From the Department of Radiology (W.H., C.B., X.C., N.M., A.L., X.X., G.Y.), Brigham and Women's Hospital, Boston, Massachusetts
- X Xu
- From the Department of Radiology (W.H., C.B., X.C., N.M., A.L., X.X., G.Y.), Brigham and Women's Hospital, Boston, Massachusetts
- Harvard Medical School (W.H., L.Q., C.B., K.-H.Y., N.M., X.X., G.Y.), Boston, Massachusetts
- G Young
- From the Department of Radiology (W.H., C.B., X.C., N.M., A.L., X.X., G.Y.), Brigham and Women's Hospital, Boston, Massachusetts
- Department of Imaging (L.Q., G.Y.), Dana-Farber Cancer Institute, Boston, Massachusetts
- Harvard Medical School (W.H., L.Q., C.B., K.-H.Y., N.M., X.X., G.Y.), Boston, Massachusetts
47
Thomas AW, Heekeren HR, Müller KR, Samek W. Analyzing Neuroimaging Data Through Recurrent Deep Learning Models. Front Neurosci 2019; 13:1321. [PMID: 31920491 PMCID: PMC6914836 DOI: 10.3389/fnins.2019.01321]
Abstract
The application of deep learning (DL) models to neuroimaging data poses several challenges, due to the high dimensionality, low sample size, and complex temporo-spatial dependency structure of these data. Furthermore, DL models often act as black boxes, impeding insight into the association of cognitive state and brain activity. To approach these challenges, we introduce the DeepLight framework, which utilizes long short-term memory (LSTM)-based DL models to analyze whole-brain functional magnetic resonance imaging (fMRI) data. To decode a cognitive state (e.g., seeing the image of a house), DeepLight separates an fMRI volume into a sequence of axial brain slices, which is then sequentially processed by an LSTM. To maintain interpretability, DeepLight adapts the layer-wise relevance propagation (LRP) technique, thereby decomposing its decoding decision into the contributions of the single input voxels to this decision. Importantly, the decomposition is performed on the level of single fMRI volumes, enabling DeepLight to study the associations between cognitive state and brain activity on several levels of data granularity, from the level of the group down to the level of single time points. To demonstrate the versatility of DeepLight, we apply it to a large fMRI dataset of the Human Connectome Project. We show that DeepLight outperforms conventional approaches of uni- and multivariate fMRI analysis in decoding the cognitive states and in identifying the physiologically appropriate brain regions associated with these states. We further demonstrate DeepLight's ability to study the fine-grained temporo-spatial variability of brain activity over sequences of single fMRI samples.
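As a rough illustration of the slice-sequence idea described in this abstract (this is not the authors' DeepLight code; the array shapes and the scalar recurrence below are made-up stand-ins for the LSTM), a volume can be unrolled into axial slices and folded through a recurrent update:

```python
# Toy sketch: process a 3-D "brain volume" as a sequence of 2-D axial
# slices, carrying a hidden state across slices. A real LSTM maintains a
# vector-valued cell state; here a single scalar stands in for it.
import math

def recurrent_decode(volume, weight=0.5):
    """volume: list of axial slices, each a 2-D list of voxel intensities.
    Returns a scalar 'decoding' score after one sequential pass."""
    hidden = 0.0
    for axial_slice in volume:  # one slice per time step
        slice_mean = sum(map(sum, axial_slice)) / (
            len(axial_slice) * len(axial_slice[0]))
        hidden = math.tanh(weight * hidden + slice_mean)  # toy recurrence
    return hidden

# Tiny 3-slice "volume" of 2x2 voxels (invented values)
toy_volume = [[[0.1, 0.2], [0.3, 0.4]],
              [[0.5, 0.5], [0.5, 0.5]],
              [[0.9, 0.8], [0.7, 0.6]]]
score = recurrent_decode(toy_volume)
```

The point of the sketch is only the data flow: each slice updates the running state, so the final score depends on the whole slice sequence, which is what lets an LSTM exploit the spatial ordering of the volume.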
Affiliation(s)
- Armin W. Thomas
- Machine Learning Group, Technische Universität Berlin, Berlin, Germany
- Center for Cognitive Neuroscience Berlin, Freie Universität Berlin, Berlin, Germany
- Max Planck School of Cognition, Leipzig, Germany
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
| | - Hauke R. Heekeren
- Center for Cognitive Neuroscience Berlin, Freie Universität Berlin, Berlin, Germany
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
| | - Klaus-Robert Müller
- Machine Learning Group, Technische Universität Berlin, Berlin, Germany
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Max Planck Institute for Informatics, Saarbrücken, Germany
| | - Wojciech Samek
- Machine Learning Group, Fraunhofer Heinrich Hertz Institute, Berlin, Germany
| |
|
48
|
Machine Learning Approaches to Radiogenomics of Breast Cancer using Low-Dose Perfusion Computed Tomography: Predicting Prognostic Biomarkers and Molecular Subtypes. Sci Rep 2019; 9:17847. [PMID: 31780739 PMCID: PMC6882909 DOI: 10.1038/s41598-019-54371-z] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2019] [Accepted: 11/14/2019] [Indexed: 01/30/2023] Open
Abstract
Radiogenomics investigates the relationship between imaging phenotypes and genetic expression. Breast cancer is a heterogeneous disease that manifests complex genetic changes and varied prognoses and treatment responses. We investigate the value of machine learning approaches to radiogenomics using low-dose perfusion computed tomography (CT) to predict prognostic biomarkers and molecular subtypes of invasive breast cancer. This prospective study enrolled a total of 723 cases involving 241 patients with invasive breast cancer. The 18 CT parameters of cancers were analyzed using 5 machine learning models to predict lymph node status, tumor grade, tumor size, hormone receptors, HER2, Ki67, and the molecular subtypes. The random forest model was the best model in terms of accuracy and the area under the receiver-operating characteristic curve (AUC). On average, the random forest model had 13% higher accuracy and 0.17 higher AUC than the logistic regression. The most important CT parameters in the random forest model for prediction were peak enhancement intensity (Hounsfield units), time to peak (seconds), blood volume permeability (mL/100 g), and perfusion of tumor (mL/min per 100 mL). Machine learning approaches to radiogenomics using low-dose perfusion breast CT are a useful noninvasive tool for predicting prognostic biomarkers and molecular subtypes of invasive breast cancer.
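The AUC comparison reported above can be made concrete with the rank (Mann-Whitney) formulation of AUC: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. The labels and scores below are invented toy data, not the study's results:

```python
# Mann-Whitney formulation of the AUC, computed directly from pairs.
def auc(labels, scores):
    """labels: 0/1 ground truth; scores: classifier outputs.
    AUC = P(score of a random positive > score of a random negative),
    with ties counted as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [1, 1, 1, 0, 0, 0]
rf_scores = [0.9, 0.8, 0.25, 0.3, 0.2, 0.1]  # one mis-ranked pair: AUC 8/9
lr_scores = [0.9, 0.4, 0.3, 0.8, 0.2, 0.1]   # two mis-ranked pairs: AUC 7/9
```

On this toy data the "random forest" scores mis-rank fewer positive/negative pairs than the "logistic regression" scores, so their AUC is higher, which is the kind of gap the abstract quantifies as 0.17.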
|
49
|
Liu X, Guo S, Yang B, Ma S, Zhang H, Li J, Sun C, Jin L, Li X, Yang Q, Fu Y. Automatic Organ Segmentation for CT Scans Based on Super-Pixel and Convolutional Neural Networks. J Digit Imaging 2019; 31:748-760. [PMID: 29679242 DOI: 10.1007/s10278-018-0052-4] [Citation(s) in RCA: 31] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023] Open
Abstract
Accurate segmentation of specific organs from computed tomography (CT) scans is a basic and crucial task for accurate diagnosis and treatment. To avoid time-consuming manual optimization and to help physicians distinguish diseases, an automatic organ segmentation framework is presented. The framework utilizes convolutional neural networks (CNNs) to classify pixels. To reduce the redundant inputs, the simple linear iterative clustering (SLIC) of super-pixels and the support vector machine (SVM) classifier are introduced. To establish a precise organ boundary at the one-pixel level, the pixels need to be classified step-by-step. First, SLIC is used to cut an image into grids and extract the respective digital signatures. Next, each signature is classified by the SVM, and rough edges are acquired. Finally, a precise boundary is obtained by the CNN, which is based on patches around each pixel point. The framework is applied to abdominal CT scans of livers and high-resolution computed tomography (HRCT) scans of lungs. The experimental CT scans are derived from two public datasets (Sliver07 and a Chinese local dataset). Experimental results show that the proposed method can precisely and efficiently detect the organs. The method consumes 38 s/slice for liver segmentation. The Dice coefficient of the liver segmentation results reaches 97.43%; for lung segmentation, the Dice coefficient is 97.93%. This finding demonstrates that the proposed framework is a favorable method for lung segmentation of HRCT scans.
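The Dice coefficient used above to score the liver (97.43%) and lung (97.93%) segmentations is straightforward to compute; the binary masks below are toy data, not from the study:

```python
# Dice coefficient: 2*|A ∩ B| / (|A| + |B|) over binary masks,
# where A is the predicted foreground and B the ground-truth foreground.
def dice(pred, truth):
    inter = sum(p and t for p, t in zip(pred, truth))  # overlapping 1s
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0  # two empty masks agree

pred  = [1, 1, 1, 0, 0, 1, 0, 0]  # predicted organ mask (flattened)
truth = [1, 1, 0, 0, 0, 1, 1, 0]  # ground-truth mask
# intersection = 3, |pred| = 4, |truth| = 4  →  Dice = 6/8 = 0.75
```

A Dice of 1.0 means perfect overlap, so scores near 0.97, as reported, indicate segmentations that agree with the reference almost everywhere.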
Affiliation(s)
- Xiaoming Liu
- College of Electronic Science & Engineering, Jilin University, D451 Room of Tangaoqing Building, No. 2699 of Qianjin Street, Changchun, Jilin, China
| | - Shuxu Guo
- College of Electronic Science & Engineering, Jilin University, D451 Room of Tangaoqing Building, No. 2699 of Qianjin Street, Changchun, Jilin, China
| | - Bingtao Yang
- College of Communication Engineering, Jilin University, Changchun, 130012, China
| | - Shuzhi Ma
- LUSTER LightTech Group, Beijing, China
| | - Huimao Zhang
- Department of Radiology, The First Hospital of Jilin University, Changchun, China
| | - Jing Li
- Department of Radiology, The First Hospital of Jilin University, Changchun, China
| | - Changjian Sun
- College of Electronic Science & Engineering, Jilin University, D451 Room of Tangaoqing Building, No. 2699 of Qianjin Street, Changchun, Jilin, China
| | - Lanyi Jin
- College of Electronic Science & Engineering, Jilin University, D451 Room of Tangaoqing Building, No. 2699 of Qianjin Street, Changchun, Jilin, China
| | - Xueyan Li
- College of Electronic Science & Engineering, Jilin University, D451 Room of Tangaoqing Building, No. 2699 of Qianjin Street, Changchun, Jilin, China.
| | - Qi Yang
- Department of Radiology, The First Hospital of Jilin University, Changchun, China
| | - Yu Fu
- Department of Radiology, The First Hospital of Jilin University, Changchun, China
| |
|
50
|
Xiao B, He N, Wang Q, Cheng Z, Jiao Y, Haacke EM, Yan F, Shi F. Quantitative susceptibility mapping based hybrid feature extraction for diagnosis of Parkinson's disease. NEUROIMAGE-CLINICAL 2019; 24:102070. [PMID: 31734535 PMCID: PMC6861598 DOI: 10.1016/j.nicl.2019.102070] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/01/2019] [Revised: 09/24/2019] [Accepted: 11/04/2019] [Indexed: 12/31/2022]
Abstract
Radiomics features from QSM data have significant clinical value for the diagnosis of PD. CNN features, which outperform radiomics features, are also useful in the diagnosis of PD. The combination of radiomics features and CNN features can enhance the diagnostic accuracy.
Parkinson's disease is the second most common neurodegenerative disease in the elderly after Alzheimer's disease. The aetiology and pathogenesis of Parkinson's disease (PD) are still unclear, but the loss of dopaminergic cells and the excessive iron deposition in the substantia nigra (SN) are associated with the pathophysiology. As an imaging technique that can quantitatively reflect the amount of iron deposition, quantitative susceptibility mapping (QSM) has been shown to be a promising modality for the diagnosis of PD. In the present work, we propose a hybrid feature extraction method for PD diagnosis using QSM images. First, we extract radiomics features from the SN using QSM and employ machine learning algorithms to classify PD and normal controls (NC). This approach allows us to investigate which features are most vulnerable to the effects of the disease. Along with this approach, we propose a convolutional neural network (CNN)-based method which can extract different features from the QSM images to further support the diagnosis of PD. Finally, we combine these two types of features and find that the radiomics features and CNN features are complementary to each other, which helps further improve the classification (diagnostic) performance. We conclude that: (1) radiomics features from QSM data have significant clinical value for the diagnosis of PD; (2) CNN features are also useful in the diagnosis of PD; and (3) the combination of radiomics features and CNN features can enhance the diagnostic accuracy.
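A minimal sketch of the feature-fusion step described in this abstract, assuming simple concatenation of the two feature vectors and a hypothetical nearest-centroid rule in place of the paper's actual classifiers (all feature values below are invented):

```python
# Fuse hand-crafted radiomics features with learned CNN features by
# concatenation, then classify the combined vector. The nearest-centroid
# rule is a placeholder for whatever classifier is trained downstream.
def concat_features(radiomics, cnn):
    """Combine the two complementary feature vectors for one subject."""
    return list(radiomics) + list(cnn)

def nearest_centroid(x, centroid_pd, centroid_nc):
    """Assign the label of the closer class centroid (squared distance)."""
    d_pd = sum((a - b) ** 2 for a, b in zip(x, centroid_pd))
    d_nc = sum((a - b) ** 2 for a, b in zip(x, centroid_nc))
    return "PD" if d_pd < d_nc else "NC"

subject = concat_features([2.1, 0.7],          # 2 radiomics features
                          [0.9, 1.4, 0.2])     # 3 CNN features
label = nearest_centroid(subject,
                         centroid_pd=[2.0, 0.8, 1.0, 1.5, 0.3],
                         centroid_nc=[1.0, 0.2, 0.3, 0.5, 0.1])
```

Concatenation is the simplest fusion strategy: each feature family contributes its own dimensions, so a downstream classifier can exploit whichever dimensions separate the classes, which is consistent with the abstract's finding that the two feature types are complementary.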
Affiliation(s)
- Bin Xiao
- Med-X Research Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Naying He
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, No.197 Ruijin Er Road, Shanghai 200025, China
| | - Qian Wang
- Med-X Research Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China.
| | - Zenghui Cheng
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, No.197 Ruijin Er Road, Shanghai 200025, China
| | - Yining Jiao
- Med-X Research Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
| | - E Mark Haacke
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, No.197 Ruijin Er Road, Shanghai 200025, China; Department of Radiology, Wayne State University, Detroit, MI, USA
| | - Fuhua Yan
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, No.197 Ruijin Er Road, Shanghai 200025, China.
| | - Feng Shi
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| |
|