1
Srikrishna M, Seo W, Zettergren A, Kern S, Cantré D, Gessler F, Sotoudeh H, Seidlitz J, Bernstock JD, Wahlund LO, Westman E, Skoog I, Virhammar J, Fällmar D, Schöll M. Assessing CT-based Volumetric Analysis via Transfer Learning with MRI and Manual Labels for Idiopathic Normal Pressure Hydrocephalus. medRxiv 2024:2024.06.23.24309144. [PMID: 38978640; PMCID: PMC11230337; DOI: 10.1101/2024.06.23.24309144]
Abstract
Background Brain computed tomography (CT) is an accessible and commonly utilized technique for assessing brain structure. In cases of idiopathic normal pressure hydrocephalus (iNPH), the presence of ventriculomegaly is often evaluated neuroradiologically by visual rating and manual measurements on each image. Previously, we developed and tested a deep learning model that utilizes transfer learning from magnetic resonance imaging (MRI) for CT-based intracranial tissue segmentation. Accordingly, herein we aimed to enhance the segmentation of ventricular cerebrospinal fluid (VCSF) in brain CT scans and to assess the performance of automated brain CT volumetrics in iNPH patient diagnostics. Methods The model was developed using a two-stage approach. Initially, a 2D U-Net model was trained to predict VCSF segmentations from CT scans, using paired MR-VCSF labels from healthy controls. This model was subsequently refined by incorporating manually segmented lateral CT-VCSF labels from iNPH patients, building on the features learned by the initial U-Net model. The training dataset included 734 CT datasets from healthy controls paired with T1-weighted MRI scans from the Gothenburg H70 Birth Cohort Studies and 62 CT scans from iNPH patients at Uppsala University Hospital. To validate the model's performance across diverse patient populations, external clinical images, including scans of 11 iNPH patients from the Universitätsmedizin Rostock, Germany, and 30 iNPH patients from the University of Alabama at Birmingham, United States, were used. Further, we obtained three CT-based volumetric measures (CTVMs) related to iNPH. Results Our analyses demonstrated strong volumetric correlations (ϱ=0.91, p<0.001) between automatically and manually derived CT-VCSF measurements in iNPH patients.
The CTVMs exhibited high accuracy in differentiating iNPH patients from controls, with an AUC of 0.97 in the external clinical datasets and an AUC of 0.99 in the Uppsala University Hospital datasets. Discussion CTVMs derived through deep learning show potential for assessing and quantifying morphological features in hydrocephalus. Critically, these measures performed comparably to gold-standard neuroradiological assessments in distinguishing iNPH from healthy controls, even in the presence of intraventricular shunt catheters. Accordingly, such an approach may serve to improve the radiological evaluation of iNPH diagnosis and monitoring (i.e., treatment responses). Since CT is much more widely available than MRI, our results have considerable clinical impact.
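The validation metrics reported above — spatial overlap between automated and manual VCSF masks, and the rank correlation between the resulting volumes — can be sketched in a few lines. This is a minimal illustration, not the authors' pipeline; the function names, array shapes, and voxel size are hypothetical.

```python
import numpy as np
from scipy.stats import spearmanr

def dice_coefficient(seg_a, seg_b):
    """Spatial overlap between two binary segmentation masks."""
    a, b = np.asarray(seg_a, bool), np.asarray(seg_b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def mask_volume_ml(mask, voxel_volume_mm3):
    """Segmented volume in millilitres (1 mL = 1000 mm^3)."""
    return np.count_nonzero(mask) * voxel_volume_mm3 / 1000.0

def volume_agreement(automated_ml, manual_ml):
    """Spearman rank correlation (rho, p) between automated and manual volumes."""
    return spearmanr(automated_ml, manual_ml)
```

With per-patient volumes from both pipelines, `volume_agreement` yields the kind of ϱ statistic quoted in the Results.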
Affiliation(s)
Meera Srikrishna
- Wallenberg Centre for Molecular and Translational Medicine, University of Gothenburg, Gothenburg, Sweden
- Department of Psychiatry and Neurochemistry, Institute of Physiology and Neuroscience, University of Gothenburg, Gothenburg, Sweden

Woosung Seo
- Department of Surgical Sciences, Neuroradiology, Uppsala University, Uppsala, Sweden

Anna Zettergren
- Neuropsychiatric Epidemiology, Institute of Neuroscience and Physiology, Sahlgrenska Academy, Centre for Ageing and Health (AgeCap), University of Gothenburg, Gothenburg, Sweden

Silke Kern
- Neuropsychiatric Epidemiology, Institute of Neuroscience and Physiology, Sahlgrenska Academy, Centre for Ageing and Health (AgeCap), University of Gothenburg, Gothenburg, Sweden
- Department of Psychiatry and Neurochemistry, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Mölndal, Sweden

Daniel Cantré
- Institute of Diagnostic and Interventional Radiology, Pediatric Radiology and Neuroradiology, University Medical Center Rostock, Rostock, Germany

Florian Gessler
- Department of Neurosurgery, University Medicine of Rostock, 18057 Rostock, Germany

Houman Sotoudeh
- Department of Neuroradiology, University of Alabama, Birmingham, AL, United States

Jakob Seidlitz
- Lifespan Brain Institute, The Children’s Hospital of Philadelphia and Penn Medicine, Philadelphia, PA, USA
- Institute for Translational Medicine and Therapeutics, University of Pennsylvania, Philadelphia, PA, USA
- Department of Psychiatry, University of Pennsylvania, Philadelphia, United States
- Department of Child and Adolescent Psychiatry and Behavioral Science, The Children’s Hospital of Philadelphia, Philadelphia, United States

Joshua D. Bernstock
- Department of Neurosurgery, Brigham and Women’s Hospital, Harvard Medical School, Boston, Massachusetts, USA
- David H. Koch Institute for Integrative Cancer Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA

Lars-Olof Wahlund
- Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden

Eric Westman
- Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden

Ingmar Skoog
- Neuropsychiatric Epidemiology, Institute of Neuroscience and Physiology, Sahlgrenska Academy, Centre for Ageing and Health (AgeCap), University of Gothenburg, Gothenburg, Sweden

Johan Virhammar
- Department of Medical Sciences, Neurology, Uppsala University, Uppsala, Sweden

David Fällmar
- Department of Surgical Sciences, Neuroradiology, Uppsala University, Uppsala, Sweden

Michael Schöll
- Wallenberg Centre for Molecular and Translational Medicine, University of Gothenburg, Gothenburg, Sweden
- Department of Psychiatry and Neurochemistry, Institute of Physiology and Neuroscience, University of Gothenburg, Gothenburg, Sweden
- Dementia Research Centre, Queen Square Institute of Neurology, University College London, London, UK
- Department of Psychiatry, Cognition and Aging Psychiatry, Sahlgrenska University Hospital, Mölndal, Sweden
2
Srikrishna M, Ashton NJ, Moscoso A, Pereira JB, Heckemann RA, van Westen D, Volpe G, Simrén J, Zettergren A, Kern S, Wahlund L, Gyanwali B, Hilal S, Ruifen JC, Zetterberg H, Blennow K, Westman E, Chen C, Skoog I, Schöll M. CT-based volumetric measures obtained through deep learning: Association with biomarkers of neurodegeneration. Alzheimers Dement 2024; 20:629-640. [PMID: 37767905; PMCID: PMC10916947; DOI: 10.1002/alz.13445]
Abstract
INTRODUCTION Cranial computed tomography (CT) is an affordable and widely available imaging modality that is used to assess structural abnormalities, but not to quantify neurodegeneration. Previously, we developed a deep-learning-based model that produced accurate and robust cranial CT tissue classification. MATERIALS AND METHODS We analyzed 917 CT and 744 magnetic resonance (MR) scans from the Gothenburg H70 Birth Cohort, and 204 CT and 241 MR scans from participants of the Memory Clinic Cohort, Singapore. We tested associations between six CT-based volumetric measures (CTVMs) and existing clinical diagnoses, fluid and imaging biomarkers, and measures of cognition. RESULTS CTVMs differentiated cognitively healthy individuals from dementia and prodromal dementia patients with high accuracy, comparable to MR-based measures. CTVMs were significantly associated with measures of cognition and biochemical markers of neurodegeneration. DISCUSSION These findings suggest the potential future use of CT-based volumetric measures as an informative first-line examination tool for neurodegenerative disease diagnostics after further validation. HIGHLIGHTS Computed tomography (CT)-based volumetric measures can distinguish between patients with neurodegenerative disease and healthy controls, as well as between patients with prodromal dementia and controls. CT-based volumetric measures associate well with relevant cognitive, biochemical, and neuroimaging markers of neurodegenerative diseases. Model performance, in terms of brain tissue classification, was consistent across two cohorts of diverse nature. Intermodality agreement between our automated CT-based and established magnetic resonance (MR)-based image segmentations was stronger than the agreement between visual CT and MR imaging assessments.
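How a single CTVM might be screened for diagnostic value can be sketched as follows. The head-size normalization and all names are assumptions for illustration, not the paper's exact measures; the discriminative accuracy is the standard ROC AUC.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def normalized_ctvm(tissue_volume_ml, icv_ml):
    """Head-size-normalized volumetric measure (hypothetical form:
    tissue volume expressed as a fraction of intracranial volume)."""
    return np.asarray(tissue_volume_ml, float) / np.asarray(icv_ml, float)

def ctvm_auc(ctvm_values, is_patient):
    """Discriminative accuracy of one CTVM for patients vs. controls."""
    return roc_auc_score(is_patient, ctvm_values)
```

An AUC near 1.0 (or near 0.0, for measures that shrink with disease) indicates a strongly discriminative measure.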
3
Jiang D, Liao J, Zhao C, Zhao X, Lin R, Yang J, Li Z, Zhou Y, Zhu Y, Liang D, Hu Z, Wang H. Recognizing Pediatric Tuberous Sclerosis Complex Based on Multi-Contrast MRI and Deep Weighted Fusion Network. Bioengineering (Basel) 2023; 10:870. [PMID: 37508897; PMCID: PMC10375986; DOI: 10.3390/bioengineering10070870]
Abstract
Multi-contrast magnetic resonance imaging (MRI) is widely applied to identify children with tuberous sclerosis complex (TSC) in clinical practice. In this work, a deep convolutional neural network using multi-contrast MRI is proposed to diagnose pediatric TSC. First, by combining T2W and FLAIR images, a new synthetic modality named FLAIR3 was created to enhance the contrast between TSC lesions and normal brain tissue. A deep weighted fusion network (DWF-net) using a late fusion strategy was then proposed to diagnose TSC. In experiments, a total of 680 children were enrolled, including 331 healthy children and 349 children with TSC. The experimental results indicate that FLAIR3 successfully enhances the visibility of TSC lesions and improves classification performance. Additionally, the proposed DWF-net delivers superior classification performance compared with previous methods, achieving an AUC of 0.998 and an accuracy of 0.985. The proposed method has the potential to be a reliable computer-aided diagnostic tool for assisting radiologists in diagnosing TSC.
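The late-fusion strategy can be sketched schematically: each MRI contrast passes through its own feature branch, and the weighted branch outputs are concatenated before a final classifier. The sketch below replaces the learned CNN branches with global pooled statistics; the branch weights and function names are hypothetical and do not reproduce the published DWF-net.

```python
import numpy as np

def branch_features(img):
    """Stand-in for a per-contrast CNN branch: global pooled statistics."""
    img = np.asarray(img, float)
    return np.array([img.mean(), img.std(), img.max()])

def late_fusion_features(t2w, flair, w_t2w=0.5, w_flair=0.5):
    """Late fusion: weighted concatenation of the per-branch outputs,
    to be fed to a downstream classifier."""
    return np.concatenate([w_t2w * branch_features(t2w),
                           w_flair * branch_features(flair)])
```

A real implementation would swap `branch_features` for trained convolutional branches and learn the fusion weights jointly with the classifier.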
Affiliation(s)
Dian Jiang
- Research Centre for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518000, China
- University of Chinese Academy of Sciences, Beijing 100049, China

Jianxiang Liao
- Department of Neurology, Shenzhen Children’s Hospital, Shenzhen 518000, China

Cailei Zhao
- Department of Radiology, Shenzhen Children’s Hospital, Shenzhen 518000, China

Xia Zhao
- Department of Neurology, Shenzhen Children’s Hospital, Shenzhen 518000, China

Rongbo Lin
- Department of Emergency, Shenzhen Children’s Hospital, Shenzhen 518000, China

Jun Yang
- Research Centre for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518000, China
- University of Chinese Academy of Sciences, Beijing 100049, China

Zhichen Li
- Research Centre for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518000, China
- University of Chinese Academy of Sciences, Beijing 100049, China

Yihang Zhou
- Research Centre for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518000, China
- Research Department, Hong Kong Sanatorium & Hospital, Hong Kong 999077, China

Yanjie Zhu
- University of Chinese Academy of Sciences, Beijing 100049, China
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518000, China

Dong Liang
- Research Centre for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518000, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518000, China

Zhanqi Hu
- Department of Neurology, Shenzhen Children’s Hospital, Shenzhen 518000, China

Haifeng Wang
- University of Chinese Academy of Sciences, Beijing 100049, China
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518000, China
4
Zhang X, Dong X, Saripan MIB, Du D, Wu Y, Wang Z, Cao Z, Wen D, Liu Y, Marhaban MH. Deep learning PET/CT-based radiomics integrates clinical data: A feasibility study to distinguish between tuberculosis nodules and lung cancer. Thorac Cancer 2023. [PMID: 37183577; DOI: 10.1111/1759-7714.14924]
Abstract
BACKGROUND Radiomic diagnosis models generally consider only a single dimension of information, limiting their diagnostic accuracy and reliability. Integrating multiple dimensions of information into a deep learning model has the potential to improve its diagnostic capabilities. The purpose of this study was to evaluate the performance of a deep learning model in distinguishing tuberculosis (TB) nodules from lung cancer (LC) based on deep learning features, radiomic features, and clinical information. METHODS Positron emission tomography (PET) and computed tomography (CT) image data from 97 patients with LC and 77 patients with TB nodules were collected. One hundred radiomic features were extracted from both PET and CT imaging using the pyradiomics platform, and 2048 deep learning features were obtained through a residual neural network approach. Four models were compared: a traditional machine learning model with radiomic features as input (traditional radiomics), a deep learning model with image features as separate inputs (deep convolutional neural networks [DCNN]), a deep learning model with two inputs, radiomic features and deep learning features (radiomics-DCNN), and a deep learning model with radiomic features, deep learning features, and clinical information as inputs (integrated model). The models were evaluated using area under the curve (AUC), sensitivity, accuracy, specificity, and F1-score. RESULTS In classifying TB nodules and LC, the integrated model achieved an AUC of 0.84 (0.82-0.88), sensitivity of 0.85 (0.80-0.88), and specificity of 0.84 (0.83-0.87), performing better than the other models. CONCLUSION The integrated model was the best classification model for the diagnosis of TB nodules and solid LC.
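The integrated model's input construction, concatenating the three information sources before classification, can be sketched with a stand-in classifier. The feature counts loosely follow the abstract (100 radiomic, 2048 deep); the clinical feature count, the synthetic data, and the logistic-regression head are assumptions for illustration, not the paper's architecture.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fuse_features(radiomic, deep, clinical):
    """Concatenate radiomic, deep, and clinical features into one input vector."""
    return np.concatenate([np.asarray(radiomic, float),
                           np.asarray(deep, float),
                           np.asarray(clinical, float)])

# Hypothetical toy cohort: 20 class-0 and 20 class-1 cases, where the
# radiomic and deep features are shifted by the class label.
rng = np.random.default_rng(0)
X = np.stack([fuse_features(rng.normal(size=100) + label,
                            rng.normal(size=2048) + label,
                            rng.normal(size=3))
              for label in (0, 1) for _ in range(20)])
y = np.array([0] * 20 + [1] * 20)
clf = LogisticRegression(max_iter=1000).fit(X, y)
```

In the paper the downstream classifier is a deep network; the fusion step itself is the same concatenation idea.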
Affiliation(s)
Xiaolei Zhang
- Faculty of Engineering, Universiti Putra Malaysia, Serdang, Malaysia
- Department of Biomedical Engineering, Chengde Medical University, Chengde, Hebei, China

Xianling Dong
- Department of Biomedical Engineering, Chengde Medical University, Chengde, Hebei, China
- Hebei International Research Center of Medical Engineering and Hebei Provincial Key Laboratory of Nerve Injury and Repair, Chengde Medical University, Chengde, Hebei, China

Dongyang Du
- School of Biomedical Engineering and Guangdong Province Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China

Yanjun Wu
- Department of Biomedical Engineering, Chengde Medical University, Chengde, Hebei, China

Zhongxiao Wang
- Department of Biomedical Engineering, Chengde Medical University, Chengde, Hebei, China

Zhendong Cao
- Department of Radiology, the Affiliated Hospital of Chengde Medical University, Chengde, China

Dong Wen
- Institute of Artificial Intelligence, University of Science and Technology Beijing, Beijing, China

Yanli Liu
- Department of Biomedical Engineering, Chengde Medical University, Chengde, Hebei, China
5
Reisert M, Sajonz BEA, Brugger TS, Reinacher PC, Russe MF, Kellner E, Skibbe H, Coenen VA. Where Position Matters-Deep-Learning-Driven Normalization and Coregistration of Computed Tomography in the Postoperative Analysis of Deep Brain Stimulation. Neuromodulation 2023; 26:302-309. [PMID: 36424266; DOI: 10.1016/j.neurom.2022.10.042]
Abstract
INTRODUCTION Recent developments in the postoperative evaluation of deep brain stimulation surgery on the group level warrant the detection of achieved electrode positions based on postoperative imaging. Computed tomography (CT) is a frequently used imaging modality, but because of its idiosyncrasies (high spatial accuracy at low soft tissue resolution), it has not been sufficient for the parallel determination of electrode position and details of the surrounding brain anatomy (nuclei). The common solution is rigid fusion of CT images and magnetic resonance (MR) images, which have much better soft tissue contrast and allow accurate normalization into template spaces. Here, we explored a deep-learning approach to directly relate positions (usually the lead position) in postoperative CT images to the native anatomy of the midbrain and group space. MATERIALS AND METHODS Deep learning is used to create derived tissue contrasts (white matter, gray matter, cerebrospinal fluid, brainstem nuclei) based on the CT image; that is, a convolutional neural network (CNN) takes solely the raw CT image as input and outputs several tissue probability maps. The ground truth is based on coregistrations with MR contrasts. The tissue probability maps are then used to either rigidly coregister or normalize the CT image in a deformable way to group space. The CNN was trained in 220 patients and tested in a set of 80 patients. RESULTS Rigorous validation of such an approach is difficult because of the lack of ground truth. We examined the agreements between the classical and proposed approaches and considered the spread of implantation locations across a group of identically implanted subjects, which serves as an indicator of the accuracy of the lead localization procedure. The proposed procedure agrees well with current magnetic resonance imaging-based techniques, and the spread is comparable or even lower.
CONCLUSIONS Postoperative CT imaging alone is sufficient for accurate localization of the midbrain nuclei and normalization to the group space. In the context of group analysis, it seems sufficient to have a single postoperative CT image of good quality for inclusion. The proposed approach will allow researchers and clinicians to include cases that were not previously suitable for analysis.
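The central idea — turning CNN tissue probability maps into an MR-like contrast that can then be aligned — can be sketched minimally. Both helpers below are crude, hypothetical stand-ins: a weighted collapse of the probability maps into one contrast, and a center-of-mass translation estimate in place of the full rigid or deformable registration.

```python
import numpy as np

def pseudo_contrast(prob_maps, weights):
    """Collapse a stack of tissue probability maps (e.g., GM/WM/CSF)
    into a single MR-like contrast image via a weighted sum."""
    return np.tensordot(np.asarray(weights, float), prob_maps, axes=1)

def center_of_mass(img):
    """Intensity-weighted centroid of an image."""
    grid = np.indices(img.shape).reshape(img.ndim, -1)
    w = np.asarray(img, float).reshape(-1)
    return grid @ w / w.sum()

def estimate_translation(moving, fixed):
    """Crude rigid-translation estimate between two contrasts."""
    return center_of_mass(fixed) - center_of_mass(moving)
```

A production pipeline would optimize a full rigid or deformable transform against a template, but the probability maps play the same role: supplying soft-tissue contrast the raw CT lacks.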
Affiliation(s)
Marco Reisert
- Department of Stereotactic and Functional Neurosurgery, Medical Center of Freiburg University, Freiburg, Germany; Medical Faculty of Freiburg University, Freiburg, Germany; Department of Diagnostic and Interventional Radiology, Medical Physics, Medical Center-University of Freiburg, Freiburg, Germany

Bastian E A Sajonz
- Department of Stereotactic and Functional Neurosurgery, Medical Center of Freiburg University, Freiburg, Germany; Medical Faculty of Freiburg University, Freiburg, Germany

Timo S Brugger
- Department of Stereotactic and Functional Neurosurgery, Medical Center of Freiburg University, Freiburg, Germany; Medical Faculty of Freiburg University, Freiburg, Germany

Peter C Reinacher
- Department of Stereotactic and Functional Neurosurgery, Medical Center of Freiburg University, Freiburg, Germany; Medical Faculty of Freiburg University, Freiburg, Germany; Fraunhofer Institute for Laser Technology, Aachen, Germany

Maximilian F Russe
- Medical Faculty of Freiburg University, Freiburg, Germany; Department of Diagnostic and Interventional Radiology, Medical Physics, Medical Center-University of Freiburg, Freiburg, Germany

Elias Kellner
- Medical Faculty of Freiburg University, Freiburg, Germany; Department of Diagnostic and Interventional Radiology, Medical Physics, Medical Center-University of Freiburg, Freiburg, Germany

Henrik Skibbe
- RIKEN, Center for Brain Science, Brain Image Analysis Unit, Saitama, Japan

Volker A Coenen
- Department of Stereotactic and Functional Neurosurgery, Medical Center of Freiburg University, Freiburg, Germany; Medical Faculty of Freiburg University, Freiburg, Germany; Center for Deep Brain Stimulation, Medical Center of Freiburg University, Freiburg, Germany
6
A Deep Learning Approach for Detecting Stroke from Brain CT Images Using OzNet. Bioengineering (Basel) 2022; 9:783. [PMID: 36550989; PMCID: PMC9774129; DOI: 10.3390/bioengineering9120783]
Abstract
A brain stroke is a life-threatening medical disorder caused by inadequate blood supply to the brain. After a stroke, the damaged area of the brain no longer operates normally, so early detection is crucial for more effective therapy. Computed tomography (CT) images provide a rapid diagnosis of brain stroke. However, analyzing each brain CT image manually is time-consuming, which may delay treatment and lead to errors. We therefore targeted the use of an efficient artificial intelligence algorithm for stroke detection. In this paper, we designed hybrid algorithms that include a new convolutional neural network (CNN) architecture called OzNet and various machine learning algorithms for binary classification of real brain stroke CT images. Classifying the dataset with OzNet alone already yielded good performance; to improve it further, we combined OzNet with a minimum Redundancy Maximum Relevance (mRMR) method and Decision Tree (DT), k-Nearest Neighbors (kNN), Linear Discriminant Analysis (LDA), Naïve Bayes (NB), and Support Vector Machine (SVM) classifiers. Specifically, 4096 features were obtained from the fully connected layer of OzNet, and their dimension was reduced from 4096 to 250 using the mRMR method. Finally, these machine learning algorithms were used to classify the selected features. As a result, OzNet-mRMR-NB was the best hybrid algorithm, achieving an accuracy of 98.42% and an AUC of 0.99 for detecting stroke from brain CT images.
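The cascade described above (deep features → feature selection → Naïve Bayes) can be sketched with scikit-learn. Since scikit-learn ships no mRMR implementation, a univariate F-test ranking stands in for it here; the synthetic data and every dimension except the 4096→250 reduction are hypothetical.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

# Hypothetical stand-in data: 40 scans x 4096 "deep" features, where only
# the first five features actually carry class information.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 4096))
y = np.array([0] * 20 + [1] * 20)
X[y == 1, :5] += 3.0

# SelectKBest with an F-test is a simple stand-in for mRMR; k=250 mirrors
# the reduced feature dimension reported in the abstract.
model = make_pipeline(SelectKBest(f_classif, k=250), GaussianNB())
model.fit(X, y)
```

Swapping `GaussianNB` for SVM, kNN, LDA, or a decision tree reproduces the other hybrid variants the study compares.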
7
An Efficient Multi-Scale Convolutional Neural Network Based Multi-Class Brain MRI Classification for SaMD. Tomography 2022; 8:1905-1927. [PMID: 35894026; PMCID: PMC9330870; DOI: 10.3390/tomography8040161]
Abstract
A brain tumor is a growth of abnormal cells in brain tissue with a high mortality rate; it therefore requires high precision in diagnosis, as a minor error in judgment can eventually cause severe consequences. Magnetic resonance imaging (MRI) serves as a non-invasive tool to detect the presence of a tumor. However, Rician noise is inevitably instilled during the image acquisition process, which degrades observation and interferes with treatment. Computer-Aided Diagnosis (CAD) systems can perform early diagnosis of the disease, potentially increasing the chances of survival and lessening the need for an expert to analyze the MRIs. Convolutional Neural Networks (CNN) have proven very effective in tumor detection in brain MRIs. There have been multiple studies dedicated to brain tumor classification; however, these techniques have not evaluated the impact of Rician noise on state-of-the-art deep learning methods, nor the impact of scale on performance, as the size and location of tumors vary from image to image with irregular shapes and boundaries. Moreover, transfer-learning-based pre-trained models such as AlexNet and ResNet have been used for brain tumor detection, but these architectures have many trainable parameters and hence a high computational cost. This study proposes a two-fold solution: (a) a Multi-Scale CNN (MSCNN) architecture to develop a robust classification model for brain tumor diagnosis, and (b) minimizing the impact of Rician noise on the performance of the MSCNN. The proposed model is a multi-class classification solution that classifies MRIs into glioma, meningioma, pituitary, and non-tumor. The core objective is to develop a robust model that enhances the performance of existing tumor detection systems in terms of accuracy and efficiency. Furthermore, MRIs are denoised using a Fuzzy Similarity-based Non-Local Means (FSNLM) filter to improve classification results. Different evaluation metrics are employed, such as accuracy, precision, recall, specificity, and F1-score, to evaluate and compare the performance of the proposed multi-scale CNN with state-of-the-art techniques such as AlexNet and ResNet. In addition, trainable and non-trainable parameters of the proposed model and the existing techniques are compared to evaluate computational efficiency. The experimental results show that the proposed multi-scale CNN model outperforms AlexNet and ResNet in terms of accuracy and efficiency at a lower computational cost. The proposed MCNN2 achieved an accuracy and F1-score of 91.2% and 91%, respectively, significantly higher than the existing AlexNet and ResNet techniques. These findings suggest that the proposed model is more effective and efficient in facilitating clinical research and practice for MRI classification.
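The multi-scale ingredient can be illustrated without a trained network: compute responses of the same image at several receptive-field sizes and stack them as channels, so downstream layers see tumors of different sizes at a matching scale. Fixed averaging filters stand in for the learned convolution kernels here, and the filter sizes are hypothetical.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def multiscale_features(img, sizes=(3, 5, 7)):
    """Stack responses at several receptive-field sizes as channels —
    the multi-scale idea behind MSCNN, minus the learned kernels."""
    img = np.asarray(img, float)
    return np.stack([uniform_filter(img, size=s) for s in sizes])
```

In the actual architecture, each scale is a parallel convolutional branch whose kernels are learned end to end.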
8
Srikrishna M, Heckemann RA, Pereira JB, Volpe G, Zettergren A, Kern S, Westman E, Skoog I, Schöll M. Comparison of Two-Dimensional- and Three-Dimensional-Based U-Net Architectures for Brain Tissue Classification in One-Dimensional Brain CT. Front Comput Neurosci 2022; 15:785244. [PMID: 35082608; PMCID: PMC8784554; DOI: 10.3389/fncom.2021.785244]
Abstract
Brain tissue segmentation plays a crucial role in feature extraction, volumetric quantification, and morphometric analysis of brain scans. For the assessment of brain structure and integrity, CT is a non-invasive, cheaper, faster, and more widely available modality than MRI. However, the clinical application of CT is mostly limited to the visual assessment of brain integrity and the exclusion of copathologies. We have previously developed two-dimensional (2D) deep-learning-based segmentation networks that successfully classified brain tissue in head CT. Recently, deep-learning-based MRI segmentation models have successfully used patch-based three-dimensional (3D) segmentation networks. In this study, we aimed to develop patch-based 3D segmentation networks for CT brain tissue classification. Furthermore, we aimed to compare the performance of 2D- and 3D-based segmentation networks in performing brain tissue classification in anisotropic CT scans. For this purpose, we developed 2D and 3D U-Net-based deep learning models that were trained and validated on MR-derived segmentations from scans of 744 participants of the Gothenburg H70 Cohort with both CT and T1-weighted MRI scans acquired close in time to each other. The segmentation performance of both 2D and 3D models was evaluated on 234 unseen datasets using measures of distance, spatial similarity, and tissue volume. Single-task, slice-wise-processed 2D U-Nets performed better than multitask patch-based 3D U-Nets in CT brain tissue classification. These findings support the use of 2D U-Nets to segment brain tissue in one-dimensional (1D) CT. This could increase the application of CT to detect brain abnormalities in clinical settings.
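The practical difference between the two training routes compared here is how samples are drawn from an anisotropic volume: whole 2D axial slices versus small 3D patches. A minimal sketch follows, with a hypothetical patch size and volume dimensions assumed evenly divisible.

```python
import numpy as np

def axial_slices(volume):
    """2D route: one training sample per axial slice (z, y, x order)."""
    return [volume[z] for z in range(volume.shape[0])]

def patches_3d(volume, size=(2, 4, 4)):
    """3D route: non-overlapping patches; dims assumed divisible by size."""
    dz, dy, dx = size
    Z, Y, X = volume.shape
    return [volume[z:z + dz, y:y + dy, x:x + dx]
            for z in range(0, Z, dz)
            for y in range(0, Y, dy)
            for x in range(0, X, dx)]
```

For thick-slice CT, the 2D route keeps the high in-plane resolution intact, which is consistent with the 2D U-Nets' advantage reported above.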
Affiliation(s)
Meera Srikrishna
- Wallenberg Centre for Molecular and Translational Medicine, University of Gothenburg, Gothenburg, Sweden
- Department of Psychiatry and Neurochemistry, Institute of Physiology and Neuroscience, University of Gothenburg, Gothenburg, Sweden

Rolf A. Heckemann
- Department of Medical Radiation Sciences, Institute of Clinical Sciences, Sahlgrenska Academy, Gothenburg, Sweden

Joana B. Pereira
- Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden
- Memory Research Unit, Department of Clinical Sciences Malmö, Lund University, Malmö, Sweden

Giovanni Volpe
- Department of Physics, University of Gothenburg, Gothenburg, Sweden

Anna Zettergren
- Neuropsychiatric Epidemiology, Institute of Neuroscience and Physiology, Sahlgrenska Academy, Centre for Ageing and Health (AgeCap), University of Gothenburg, Gothenburg, Sweden

Silke Kern
- Neuropsychiatric Epidemiology, Institute of Neuroscience and Physiology, Sahlgrenska Academy, Centre for Ageing and Health (AgeCap), University of Gothenburg, Gothenburg, Sweden
- Region Västra Götaland, Sahlgrenska University Hospital, Psychiatry, Cognition and Old Age Psychiatry Clinic, Gothenburg, Sweden

Eric Westman
- Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden

Ingmar Skoog
- Neuropsychiatric Epidemiology, Institute of Neuroscience and Physiology, Sahlgrenska Academy, Centre for Ageing and Health (AgeCap), University of Gothenburg, Gothenburg, Sweden
- Region Västra Götaland, Sahlgrenska University Hospital, Psychiatry, Cognition and Old Age Psychiatry Clinic, Gothenburg, Sweden

Michael Schöll
- Wallenberg Centre for Molecular and Translational Medicine, University of Gothenburg, Gothenburg, Sweden
- Department of Psychiatry and Neurochemistry, Institute of Physiology and Neuroscience, University of Gothenburg, Gothenburg, Sweden
- Dementia Research Centre, Institute of Neurology, University College London, London, United Kingdom
- Department of Clinical Physiology, Sahlgrenska University Hospital, Gothenburg, Sweden
- *Correspondence: Michael Schöll