1. Iqbal MS, Belal Bin Heyat M, Parveen S, Ammar Bin Hayat M, Roshanzamir M, Alizadehsani R, Akhtar F, Sayeed E, Hussain S, Hussein HS, Sawan M. Progress and trends in neurological disorders research based on deep learning. Comput Med Imaging Graph 2024; 116:102400. [PMID: 38851079] [DOI: 10.1016/j.compmedimag.2024.102400]
Abstract
In recent years, deep learning (DL) has emerged as a powerful tool in clinical imaging, offering unprecedented opportunities for the diagnosis and treatment of neurological disorders (NDs). This comprehensive review explores the multifaceted role of DL techniques in leveraging vast datasets to advance our understanding of NDs and improve clinical outcomes. Beginning with a systematic literature review, we delve into the utilization of DL, particularly focusing on multimodal neuroimaging data analysis, a domain that has witnessed rapid progress and garnered significant scientific interest. Our study categorizes and critically analyses numerous DL models, including Convolutional Neural Networks (CNNs), LSTM-CNN, GAN, and VGG, to understand their performance across different types of neurological diseases. Through this analysis, we identify key benchmarks and datasets utilized in training and testing DL models, shedding light on the challenges and opportunities in clinical neuroimaging research. Moreover, we discuss the effectiveness of DL in real-world clinical scenarios, emphasizing its potential to revolutionize ND diagnosis and therapy. By synthesizing existing literature and outlining future directions, this review not only provides insights into the current state of DL applications in ND analysis but also paves the way for the development of more efficient and accessible DL techniques. Finally, our findings underscore the transformative impact of DL in reshaping the landscape of clinical neuroimaging, offering hope for enhanced patient care and groundbreaking discoveries in the field of neurology. This review is beneficial for neuropathologists and new researchers in this field.
Affiliation(s)
- Muhammad Shahid Iqbal
- Department of Computer Science and Information Technology, Women University of Azad Jammu & Kashmir, Bagh, Pakistan.
- Md Belal Bin Heyat
- CenBRAIN Neurotech Center of Excellence, School of Engineering, Westlake University, Hangzhou, Zhejiang, China.
- Saba Parveen
- College of Electronics and Information Engineering, Shenzhen University, Shenzhen 518060, China.
- Mohamad Roshanzamir
- Department of Computer Engineering, Faculty of Engineering, Fasa University, Fasa, Iran.
- Roohallah Alizadehsani
- Institute for Intelligent Systems Research and Innovation, Deakin University, VIC 3216, Australia.
- Faijan Akhtar
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China.
- Eram Sayeed
- Kisan Inter College, Dhaurahara, Kushinagar, India.
- Sadiq Hussain
- Department of Examination, Dibrugarh University, Assam 786004, India.
- Hany S Hussein
- Electrical Engineering Department, Faculty of Engineering, King Khalid University, Abha 61411, Saudi Arabia; Electrical Engineering Department, Faculty of Engineering, Aswan University, Aswan 81528, Egypt.
- Mohamad Sawan
- CenBRAIN Neurotech Center of Excellence, School of Engineering, Westlake University, Hangzhou, Zhejiang, China.
2. Ma X, Zhao L, Dang S, Zhao Y, Lu Y, Li X, Li P, Chen Y, Mei N, Yin B, Geng D. Multicenter Study of the Utility of Convolutional Neural Network and Transformer Models for the Detection and Segmentation of Meningiomas. J Comput Assist Tomogr 2024; 48:480-490. [PMID: 38013244] [DOI: 10.1097/rct.0000000000001565]
Abstract
PURPOSE This study aimed to investigate the effectiveness and practicality of convolutional neural network and transformer models for detecting and precisely segmenting meningiomas on magnetic resonance images. METHODS This retrospective study used T1-weighted and contrast-enhanced images of 523 meningioma patients from 3 centers between 2010 and 2020. A total of 373 cases were split 8:2 for training and validation. Three independent test sets were built from the remaining 150 cases. Six convolutional neural network detection models trained via transfer learning were evaluated using 4 metrics and receiver operating characteristic analysis. Detected images were used for segmentation. Three segmentation models were trained for meningioma segmentation and evaluated via 4 metrics. In the 3 test sets, intraclass consistency values were used to evaluate the agreement of the detection and segmentation models with manual annotations from radiologists at 3 levels of experience. RESULTS The average accuracies of the detection model in the 3 test sets were 97.3%, 93.5%, and 96.0%, respectively. The segmentation models showed mean Dice similarity coefficient values of 0.884, 0.834, and 0.892, respectively. Intraclass consistency values showed that the results of the detection and segmentation models were highly consistent with those of intermediate and senior radiologists and only weakly consistent with those of junior radiologists. CONCLUSIONS The proposed deep learning system exhibits performance comparable with intermediate and senior radiologists in meningioma detection and segmentation. This system could potentially significantly improve the efficiency of the detection and segmentation of meningiomas.
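The intraclass consistency evaluation described above can be illustrated with a small sketch. This is a generic ICC(2,1) computation under a two-way random-effects model, not the authors' code, and the rating matrix below (tumor volumes scored by an algorithm and two raters) is entirely hypothetical.

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: (n_subjects, n_raters) matrix of scores.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    subj = ratings.mean(axis=1)   # per-subject means
    rater = ratings.mean(axis=0)  # per-rater means

    msr = k * np.sum((subj - grand) ** 2) / (n - 1)   # between-subjects mean square
    msc = n * np.sum((rater - grand) ** 2) / (k - 1)  # between-raters mean square
    resid = ratings - subj[:, None] - rater[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))    # residual mean square

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical volumes (cm^3) from an algorithm and two radiologists.
scores = np.array([[19.0, 18.5, 19.2],
                   [ 4.9,  5.1,  4.8],
                   [31.2, 30.8, 31.5],
                   [12.0, 11.7, 12.3]])
print(round(icc_2_1(scores), 3))
```

Perfect agreement across raters yields an ICC of 1; values near 1, as in this toy matrix, correspond to the "highly consistent" readings reported above.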
Affiliation(s)
- Lingxiao Zhao
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou
- Shijie Dang
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou
- Yajing Zhao
- Department of Radiology, Huashan Hospital Affiliated to Fudan University
- Yiping Lu
- Department of Radiology, Huashan Hospital Affiliated to Fudan University
- Xuanxuan Li
- Department of Radiology, Huashan Hospital Affiliated to Fudan University
- Peng Li
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou
- Yibo Chen
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou
- Nan Mei
- Department of Radiology, Huashan Hospital Affiliated to Fudan University
3. Satheesh Kumar J, Vinoth Kumar V, Mahesh TR, Alqahtani MS, Prabhavathy P, Manikandan K, Guluwadi S. Detection of Marchiafava Bignami disease using distinct deep learning techniques in medical diagnostics. BMC Med Imaging 2024; 24:100. [PMID: 38684964] [PMCID: PMC11059769] [DOI: 10.1186/s12880-024-01283-8]
Abstract
PURPOSE To detect Marchiafava-Bignami disease (MBD) using a distinct deep learning technique. BACKGROUND Advanced deep learning methods are becoming increasingly important in contemporary medical diagnostics, particularly for detecting intricate and uncommon neurological illnesses such as MBD. This rare neurodegenerative disorder, sometimes associated with persistent alcoholism, is characterized by loss of myelin or tissue death in the corpus callosum. It poses significant diagnostic difficulties owing to its infrequency and the subtle signs it exhibits in its early stages, both clinically and on radiological scans. METHODS A novel method combining variational autoencoders (VAEs) with attention mechanisms is used to identify MBD accurately. VAEs are well known for their proficiency in unsupervised learning and anomaly detection. They excel at analyzing extensive brain imaging datasets to uncover subtle patterns and abnormalities that traditional diagnostic approaches may overlook, especially those related to specific diseases. The use of attention mechanisms enhances this technique, enabling the model to concentrate on the most crucial elements of the imaging data, similar to the discerning observation of a skilled radiologist. Thus, we utilized a VAE with attention mechanisms in this study to detect MBD. Such a combination enables the prompt identification of MBD and assists in formulating more customized and efficient treatment strategies. RESULTS A significant breakthrough in this field is the creation of a VAE equipped with attention mechanisms, which has shown outstanding performance by achieving accuracy rates of over 90% in differentiating MBD from other neurodegenerative disorders.
CONCLUSION This model, which underwent training using a diverse range of MRI images, has shown a notable level of sensitivity and specificity, significantly minimizing the frequency of false positive results and strengthening the confidence and dependability of these sophisticated automated diagnostic tools.
Affiliation(s)
- J Satheesh Kumar
- Department of Electronics and Instrumentation Engineering, Dayananda Sagar College of Engineering, Bangalore, India
- V Vinoth Kumar
- School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore, India
- T R Mahesh
- Department of Computer Science and Engineering, JAIN (Deemed-to-Be University), Bengaluru, 562112, India
- Mohammed S Alqahtani
- Radiological Sciences Department, College of Applied Medical Sciences, King Khalid University, 61421, Abha, Saudi Arabia
- P Prabhavathy
- School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore, India
- K Manikandan
- School of Computer Science and Engineering (SCOPE), Vellore Institute of Technology (VIT), Vellore, India
- Suresh Guluwadi
- Adama Science and Technology University, 302120, Adama, Ethiopia.
4. Burrows L, Patel J, Islim AI, Jenkinson MD, Mills SJ, Chen K. A semi-automatic segmentation method for meningioma developed using a variational approach model. Neuroradiol J 2024; 37:199-205. [PMID: 38146866] [DOI: 10.1177/19714009231224442]
Abstract
BACKGROUND Meningioma is the commonest primary brain tumour. Volumetric post-contrast magnetic resonance imaging (MRI) is recognised as the gold standard for delineation of meningioma volume but is hindered by manual processing times. We aimed to investigate the utility of a model-based variational approach in segmenting meningioma. METHODS A database of patients with a meningioma (2007-2015) was queried for patients with a contrast-enhanced volumetric MRI who had consented to a research tissue biobank. Manual segmentation by a neuroradiologist was performed, and results were compared to the mathematical model using a battery of tests including the Sørensen-Dice coefficient (DICE) and JACCARD index. A publicly available meningioma dataset (708 segmented T1 contrast-enhanced slices) was also used to test the reliability of the model. RESULTS 49 meningioma cases were included. The most common meningioma location was convexity (n = 15, 30.6%). The mathematical model segmented all but one incidental meningioma, which failed due to the lack of contrast uptake. The median meningioma volume by manual segmentation was 19.0 cm3 (IQR 4.9-31.2). The median meningioma volume using the mathematical model was 16.9 cm3 (IQR 4.6-28.34). The mean DICE score was 0.90 (SD = 0.04). The mean JACCARD index was 0.82 (SD = 0.07). For the publicly available dataset, the mean DICE and JACCARD scores were 0.90 (SD = 0.06) and 0.82 (SD = 0.10), respectively. CONCLUSIONS Segmentation of meningioma volume using the proposed mathematical model was possible with accurate results. Application of this model on contrast-enhanced volumetric imaging may help reduce the work burden on neuroradiologists given the increasing number of meningioma diagnoses.
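The DICE and JACCARD overlap scores reported in several of these studies can be computed directly from binary segmentation masks. The sketch below is a generic illustration with synthetic masks, not the authors' pipeline:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Sørensen-Dice coefficient between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    """Jaccard index; related to Dice by J = D / (2 - D)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

# Synthetic 2D "segmentations": two overlapping 6x6 squares.
manual = np.zeros((10, 10), dtype=bool); manual[2:8, 2:8] = True
model  = np.zeros((10, 10), dtype=bool); model[3:9, 3:9] = True
print(dice(manual, model), jaccard(manual, model))  # ~0.694, ~0.532
```

Note the monotone relation J = D / (2 - D), which is why papers reporting a mean DICE of 0.90 tend to report a JACCARD near 0.82, as above.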
Affiliation(s)
- Liam Burrows
- Department of Mathematical Sciences and Centre for Mathematical Imaging Techniques, University of Liverpool, UK
- Jay Patel
- Department of Neuroradiology, The Walton Centre NHS Foundation Trust, UK
- Abdurrahman I Islim
- Geoffrey Jefferson Brain Research Centre, The Manchester Academic Health Science Centre, Northern Care Alliance NHS Group, University of Manchester, UK
- Department of Neurosurgery, Manchester Centre for Clinical Neurosciences, Salford Royal Hospital, Northern Care Alliance NHS Foundation Trust, UK
- Michael D Jenkinson
- Department of Neurosurgery, The Walton Centre NHS Foundation Trust, UK
- Department of Pharmacology and Therapeutics, Institute of Systems, Molecular and Integrative Biology, University of Liverpool, UK
- Samantha J Mills
- Department of Neuroradiology, The Walton Centre NHS Foundation Trust, UK
- Department of Pharmacology and Therapeutics, Institute of Systems, Molecular and Integrative Biology, University of Liverpool, UK
- Ke Chen
- Department of Mathematical Sciences and Centre for Mathematical Imaging Techniques, University of Liverpool, UK
- Department of Mathematics and Statistics, University of Strathclyde, UK
5. Iwata T, Hirayama R, Yamada S, Kijima N, Okita Y, Kagawa N, Kishima H. Automated volumetry of meningiomas in contrast-enhanced T1-weighted MRI using deep learning. World Neurosurg X 2024; 22:100353. [PMID: 38455247] [PMCID: PMC10918322] [DOI: 10.1016/j.wnsx.2024.100353]
Abstract
BACKGROUND Meningiomas are among the most common intracranial tumors. In these tumors, volumetric assessment is important not only for planning therapeutic intervention but also for follow-up examination. However, a highly accurate automated volumetric method for meningiomas using single-modality magnetic resonance imaging (MRI) has not yet been reported. Here, we aimed to develop a deep learning-based automated volumetry method for meningiomas in MRI and investigate its accuracy and potential clinical applications. METHODS For deep learning, we used MRI images of patients with meningioma who were referred to Osaka University Hospital between January 2007 and October 2020. Imaging data of eligible patients were divided into three non-overlapping groups: training, validation, and testing. The model was trained and tested using the leave-one-out cross-validation method. Dice index (DI) and root mean squared percentage error (RMSPE) were measured to evaluate the model accuracy. RESULTS A total of 178 patients (64.6 ± 12.3 years [standard deviation]; 147 women) were evaluated. Comparison of the deep learning model and manual segmentation revealed a mean DI of 0.923 ± 0.051 for tumor lesions. For total tumor volume, RMSPE was 9.5 ± 1.2%, and the Mann-Whitney U test did not show a significant difference between manual and algorithm-based measurement of the tumor volume (p = 0.96). CONCLUSION The automatic tumor volumetry algorithm developed in this study provides a potential volume-based imaging biomarker for tumor evaluation in the field of neuroradiological imaging, which will contribute to the optimization and personalization of treatment for central nervous system tumors in the near future.
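The RMSPE used above as a volumetric accuracy measure can be sketched as follows. This is one common formulation (root of the mean squared relative error, in percent); the authors' exact definition may differ, and the volumes below are made up:

```python
import numpy as np

def rmspe(reference: np.ndarray, predicted: np.ndarray) -> float:
    """Root mean squared percentage error, in percent.

    One common definition: sqrt(mean(((pred - ref) / ref)^2)) * 100.
    """
    rel_err = (predicted - reference) / reference
    return float(np.sqrt(np.mean(rel_err ** 2)) * 100.0)

# Hypothetical manual vs. automated tumor volumes (cm^3).
manual_vol = np.array([12.0, 4.5, 30.2, 8.8])
auto_vol   = np.array([11.5, 4.8, 29.0, 9.1])
print(round(rmspe(manual_vol, auto_vol), 2))
```

Because the error is normalized per case, a 9.5% RMSPE reads the same for small and large tumors, which is convenient when volumes span an order of magnitude.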
Affiliation(s)
- Takamitsu Iwata
- Department of Neurosurgery, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
- Ryuichi Hirayama
- Department of Neurosurgery, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
- Shuhei Yamada
- Department of Neurosurgery, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
- Noriyuki Kijima
- Department of Neurosurgery, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
- Yoshiko Okita
- Department of Neurosurgery, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
- Naoki Kagawa
- Department of Neurosurgery, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
- Haruhiko Kishima
- Department of Neurosurgery, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
6. Azamat S, Buz-Yalug B, Dindar SS, Yilmaz Tan K, Ozcan A, Can O, Ersen Danyeli A, Pamir MN, Dincer A, Ozduman K, Ozturk-Isik E. Susceptibility-Weighted MRI for Predicting NF-2 Mutations and S100 Protein Expression in Meningiomas. Diagnostics (Basel) 2024; 14:748. [PMID: 38611661] [PMCID: PMC11012050] [DOI: 10.3390/diagnostics14070748]
Abstract
S100 protein expression levels and neurofibromatosis type 2 (NF-2) mutations result in different disease courses in meningiomas. This study aimed to investigate non-invasive biomarkers of NF-2 copy number loss and S100 protein expression in meningiomas using morphological, radiomics, and deep learning-based features of susceptibility-weighted MRI (SWI). This retrospective study included 99 patients with S100 protein expression data and 92 patients with NF-2 copy number loss information. Preoperative cranial MRI was conducted using a 3T clinical MR scanner. Tumor volumes were segmented on fluid-attenuated inversion recovery (FLAIR) and subsequent registration of FLAIR to high-resolution SWI was performed. First-order textural features of SWI were extracted and assessed using Pyradiomics. Morphological features, including the tumor growth pattern, peritumoral edema, sinus invasion, hyperostosis, bone destruction, and intratumoral calcification, were semi-quantitatively assessed. Mann-Whitney U tests were utilized to assess the differences in the SWI features of meningiomas with and without S100 protein expression or NF-2 copy number loss. A logistic regression analysis was used to examine the relationship between these features and the respective subgroups. Additionally, a convolutional neural network (CNN) was used to extract hierarchical features of SWI, which were subsequently employed in a light gradient boosting machine classifier to predict the NF-2 copy number loss and S100 protein expression. NF-2 copy number loss was associated with a higher risk of developing high-grade tumors. Additionally, elevated signal intensity and a decrease in entropy within the tumoral region on SWI were observed in meningiomas with S100 protein expression. On the other hand, NF-2 copy number loss was associated with lower SWI signal intensity, a growth pattern described as "en plaque", and the presence of calcification within the tumor. 
The logistic regression model achieved an accuracy of 0.59 for predicting NF-2 copy number loss and an accuracy of 0.70 for identifying S100 protein expression. Deep learning features demonstrated a strong predictive capability for S100 protein expression (AUC = 0.85 ± 0.06) and had reasonable success in identifying NF-2 copy number loss (AUC = 0.74 ± 0.05). In conclusion, SWI showed promise in identifying NF-2 copy number loss and S100 protein expression by revealing neovascularization and microcalcification characteristics in meningiomas.
Affiliation(s)
- Sena Azamat
- Institute of Biomedical Engineering, Bogazici University, Istanbul 34342, Turkey
- Basaksehir Cam and Sakura City Hospital, Istanbul 34480, Turkey
- Buse Buz-Yalug
- Institute of Biomedical Engineering, Bogazici University, Istanbul 34342, Turkey
- Sukru Samet Dindar
- Electrical and Electronics Engineering Department, Bogazici University, Istanbul 34342, Turkey
- Kubra Yilmaz Tan
- Department of Medical Biotechnology, Acibadem University, Istanbul 34752, Turkey
- Department of Psychiatry and Neurochemistry, Institute of Neuroscience & Physiology, The Sahlgrenska Academy, University of Gothenburg, 42130 Mölndal, Sweden
- Alpay Ozcan
- Electrical and Electronics Engineering Department, Bogazici University, Istanbul 34342, Turkey
- Ozge Can
- Department of Biomedical Engineering, Acibadem University, Istanbul 34752, Turkey
- Ayca Ersen Danyeli
- Department of Medical Pathology, Acibadem University, Istanbul 34752, Turkey
- Center for Neuroradiological Applications and Research, Acibadem University, Istanbul 34752, Turkey
- Brain Tumor Research Group, Acibadem University, Istanbul 34752, Turkey
- M. Necmettin Pamir
- Center for Neuroradiological Applications and Research, Acibadem University, Istanbul 34752, Turkey
- Department of Neurosurgery, Acibadem University, Istanbul 34752, Turkey
- Alp Dincer
- Center for Neuroradiological Applications and Research, Acibadem University, Istanbul 34752, Turkey
- Brain Tumor Research Group, Acibadem University, Istanbul 34752, Turkey
- Department of Radiology, Acibadem University, Istanbul 34752, Turkey
- Koray Ozduman
- Center for Neuroradiological Applications and Research, Acibadem University, Istanbul 34752, Turkey
- Brain Tumor Research Group, Acibadem University, Istanbul 34752, Turkey
- Department of Neurosurgery, Acibadem University, Istanbul 34752, Turkey
- Esin Ozturk-Isik
- Institute of Biomedical Engineering, Bogazici University, Istanbul 34342, Turkey
- Brain Tumor Research Group, Acibadem University, Istanbul 34752, Turkey
7. Yang L, Wang T, Zhang J, Kang S, Xu S, Wang K. Deep learning-based automatic segmentation of meningioma from T1-weighted contrast-enhanced MRI for preoperative meningioma differentiation using radiomic features. BMC Med Imaging 2024; 24:56. [PMID: 38443817] [PMCID: PMC10916038] [DOI: 10.1186/s12880-024-01218-3]
Abstract
BACKGROUND This study aimed to establish a dedicated deep-learning model (DLM) on routine magnetic resonance imaging (MRI) data to investigate DLM performance in automated detection and segmentation of meningiomas in comparison to manual segmentations. Another purpose of our work was to develop a radiomics model, based on radiomics features extracted from the automatic segmentations, to differentiate low- and high-grade meningiomas before surgery. MATERIALS A total of 326 patients with pathologically confirmed meningiomas were enrolled. Samples were randomly split in a 6:2:2 ratio into training, validation, and test sets. Volumetric regions of interest (VOIs) were manually drawn on each slice using the ITK-SNAP software. An automatic segmentation model based on SegResNet was developed for meningioma segmentation. Segmentation performance was evaluated by Dice coefficient and 95% Hausdorff distance. Intraclass correlation (ICC) analysis was applied to assess the agreement between radiomic features from manual and automatic segmentations. Radiomics features derived from automatic segmentation were extracted by pyradiomics. After feature selection, a model for meningioma grading was built. RESULTS The DLM detected meningiomas in all cases. For automatic segmentation, the mean Dice coefficient and 95% Hausdorff distance were 0.881 (95% CI: 0.851-0.981) and 2.016 (95% CI: 1.439-3.158) in the test set, respectively. Features extracted from manual and automatic segmentations were comparable: the average ICC value was 0.804 (range, 0.636-0.933).
For meningioma classification, the radiomics model based on automatic segmentation performed well in grading meningiomas, yielding a sensitivity, specificity, accuracy, and area under the curve (AUC) of 0.778 (95% CI: 0.701-0.856), 0.860 (95% CI: 0.722-0.908), 0.848 (95% CI: 0.715-0.903) and 0.842 (95% CI: 0.807-0.895) in the test set, respectively. CONCLUSIONS The DLM yielded favorable automated detection and segmentation of meningioma and can help deploy radiomics for preoperative meningioma differentiation in clinical practice.
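The sensitivity, specificity, and accuracy figures quoted above are standard confusion-matrix quantities. A minimal sketch with hypothetical grade labels (1 = high-grade, 0 = low-grade):

```python
def classification_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),  # recall on high-grade cases
        "specificity": tn / (tn + fp),  # recall on low-grade cases
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

# Hypothetical per-patient grades: 1 = high-grade, 0 = low-grade meningioma.
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
preds = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
print(classification_metrics(truth, preds))
```

With the toy labels above, sensitivity is 3/4, specificity 5/6, and accuracy 8/10; the AUC reported in the abstract additionally sweeps the decision threshold rather than fixing it.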
Affiliation(s)
- Liping Yang
- Department of PET-CT, Harbin Medical University Cancer Hospital, Harbin, 150001, China
- Tianzuo Wang
- Medical Imaging Department, Changzheng Hospital of Harbin City, Harbin, China
- Jinling Zhang
- Medical Imaging Department, The Second Affiliated Hospital of Harbin Medical University, Harbin, China
- Shi Kang
- Medical Imaging Department, The Second Hospital of Heilongjiang Province, Harbin, China
- Shichuan Xu
- Department of Medical Instruments, Second Hospital of Harbin, Harbin, 150001, China.
- Kezheng Wang
- Department of PET-CT, Harbin Medical University Cancer Hospital, Harbin, 150001, China.
8. Fu B, Peng Y, He J, Tian C, Sun X, Wang R. HmsU-Net: A hybrid multi-scale U-net based on a CNN and transformer for medical image segmentation. Comput Biol Med 2024; 170:108013. [PMID: 38271837] [DOI: 10.1016/j.compbiomed.2024.108013]
Abstract
Accurate medical image segmentation is of great significance for subsequent diagnosis and analysis. The acquisition of multi-scale information plays an important role in segmenting regions of interest of different sizes. With the emergence of Transformers, numerous networks adopted hybrid structures incorporating Transformers and CNNs to learn multi-scale information. However, the majority of research has focused on the design and composition of CNN and Transformer structures, neglecting the inconsistencies in feature learning between Transformer and CNN. This oversight has resulted in the hybrid network's performance not being fully realized. In this work, we proposed a novel hybrid multi-scale segmentation network named HmsU-Net, which effectively fused multi-scale features. Specifically, HmsU-Net employed a parallel design incorporating both CNN and Transformer architectures. To address the inconsistency in feature learning between CNN and Transformer within the same stage, we proposed the multi-scale feature fusion module. For feature fusion across different stages, we introduced the cross-attention module. Comprehensive experiments conducted on various datasets demonstrate that our approach surpasses current state-of-the-art methods.
Affiliation(s)
- Bangkang Fu
- Medical College, Guizhou University, Guizhou 550000, China; Department of Medical Imaging, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guizhou 550002, China
- Yunsong Peng
- Department of Medical Imaging, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guizhou 550002, China
- Junjie He
- Department of Medical Imaging, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guizhou 550002, China
- Chong Tian
- Department of Medical Imaging, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guizhou 550002, China
- Xinhuan Sun
- Department of Medical Imaging, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guizhou 550002, China
- Rongpin Wang
- Department of Medical Imaging, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guizhou 550002, China.
9. Guo Y, Li R, Li C, Li L, Jiang T, Zhou D. Hotspots and Trends in Meningioma Research Based on Bibliometrics, 2011-2021. World Neurosurg 2024; 183:e328-e338. [PMID: 38145653] [DOI: 10.1016/j.wneu.2023.12.097]
Abstract
BACKGROUND Meningiomas, the most prevalent benign intracranial neoplasms, have been studied extensively for many years, but significant problems remain. To date, there is a scarcity of detailed studies elucidating the hotspots and future directions of meningioma research. METHODS A comprehensive search and screening strategy was used to collect relevant studies published between 2011 and 2021 in the Web of Science Core Collection database. Thorough and systematic coauthorship and co-occurrence keyword maps were generated, and tables of statistics summarizing countries, organizations, authors, and keywords were created. RESULTS A total of 1544 articles meeting the screening criteria were collected. The countries producing the most publications between 2011 and 2021 were the United States, Germany, and China, with 586, 244, and 197 records, respectively. The cooperation networks also revolved mainly around these 3 countries, particularly the United States. The most frequently used keyword was "surgery," followed by "recurrence" and "management," with frequencies of 248, 212, and 163, respectively. The most prominent cluster during the last decade was the #0 methylation cluster, and several keywords, including "survival," "brain invasion," and "magnetic resonance imaging," exhibited significant burst strength. CONCLUSIONS This study provides a comprehensive analysis of the research landscape and identifies potential research directions, as well as the most productive individuals and institutions. Current research focuses on the molecular pathology of meningiomas, improvements in techniques, and advances in diagnosis by magnetic resonance imaging. In particular, the improvements in molecular pathology might direct future research directions.
Affiliation(s)
- Yiding Guo
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, People's Republic of China
- Runting Li
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, People's Republic of China
- Chao Li
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, People's Republic of China
- Lianwang Li
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, People's Republic of China
- Tao Jiang
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, People's Republic of China
- Dabiao Zhou
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, People's Republic of China.
10. Gui Y, Zhang J. Research Progress of Artificial Intelligence in the Grading and Classification of Meningiomas. Acad Radiol 2024:S1076-6332(24)00073-4. [PMID: 38413314] [DOI: 10.1016/j.acra.2024.02.003]
Abstract
A meningioma is a common primary central nervous system tumor. The histological features of meningiomas vary significantly depending on the grade and subtype, leading to differences in treatment and prognosis. Therefore, early diagnosis, grading, and typing of meningiomas are crucial for developing comprehensive and individualized diagnosis and treatment plans. The advancement of artificial intelligence (AI) in medical imaging, particularly radiomics and deep learning (DL), has contributed to the increasing research on meningioma grading and classification. These techniques are fast, accurate, objective, and fully automated; they enable efficient and non-invasive prediction of meningioma grade and classification and provide valuable assistance in clinical treatment and prognosis. This article provides a summary and analysis of the research progress in radiomics and DL for meningioma grading and classification. It also highlights existing research findings, limitations, and suggestions for future improvement, aiming to facilitate the future application of AI in the diagnosis and treatment of meningioma.
Affiliation(s)
- Yuan Gui, Jing Zhang: Department of Radiology, The Fifth Affiliated Hospital of Zunyi Medical University, Zhufengdadao No. 1439, Doumen District, Zhuhai, China

11
Teng Y, Chen C, Shu X, Zhao F, Zhang L, Xu J. Automated, fast, robust brain extraction on contrast-enhanced T1-weighted MRI in presence of brain tumors: an optimized model based on multi-center datasets. Eur Radiol 2024; 34:1190-1199. [PMID: 37615767 PMCID: PMC10853304 DOI: 10.1007/s00330-023-10078-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2023] [Revised: 07/12/2023] [Accepted: 07/14/2023] [Indexed: 08/25/2023]
Abstract
OBJECTIVES Existing brain extraction models should be further optimized to provide more information for oncological analysis. We aimed to develop an nnU-Net-based deep learning model for automated brain extraction on contrast-enhanced T1-weighted (T1CE) images in the presence of brain tumors. METHODS This is a multi-center, retrospective study involving 920 patients. A total of 720 cases with four types of intracranial tumors from private institutions were collected and set as the training group and the internal test group. The Mann-Whitney U test (U test) was used to investigate whether model performance was associated with pathological type and tumor characteristics. Then, the generalization of the model was independently tested on public datasets consisting of 100 glioma and 100 vestibular schwannoma cases. RESULTS In the internal test, the model achieved promising performance with a median Dice similarity coefficient (DSC) of 0.989 (interquartile range (IQR), 0.988-0.991) and a Hausdorff distance (HD) of 6.403 mm (IQR, 5.099-8.426 mm). The U test suggested slightly lower performance in the meningioma and vestibular schwannoma groups. The results of the U test also suggested a significant difference in the peritumoral edema group, with a median DSC of 0.990 (IQR, 0.989-0.991, p = 0.002) and a median HD of 5.916 mm (IQR, 5.000-8.000 mm, p = 0.049). In the external test, our model also showed robust performance, with a median DSC of 0.991 (IQR, 0.983-0.998) and an HD of 8.972 mm (IQR, 6.164-13.710 mm). CONCLUSIONS For automated processing of MRI neuroimaging data in the presence of brain tumors, the proposed model can perform brain extraction that includes important superficial structures for oncological analysis. CLINICAL RELEVANCE STATEMENT The proposed model serves as a radiological tool for image preprocessing in tumor cases, focusing on superficial brain structures, which could streamline the workflow and enhance the efficiency of subsequent radiological assessments.
KEY POINTS • The nnU-Net-based model is capable of segmenting significant superficial structures in brain extraction. • The proposed model showed feasible performance, regardless of pathological types or tumor characteristics. • The model showed generalization in the public datasets.
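The DSC and HD values reported in abstracts like this one can be computed directly from binary segmentation masks. A minimal NumPy sketch, with illustrative function names and toy 2-D masks (not the authors' implementation, which works on 3-D volumes in millimeter units):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def hausdorff_distance(pred, truth):
    """Symmetric Hausdorff distance between mask voxels (in voxel units)."""
    p = np.argwhere(pred).astype(float)
    t = np.argwhere(truth).astype(float)
    # pairwise Euclidean distances via broadcasting
    d = np.sqrt(((p[:, None, :] - t[None, :, :]) ** 2).sum(-1))
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# toy example: two partially overlapping 4x4 squares
a = np.zeros((10, 10), dtype=int); a[2:6, 2:6] = 1
b = np.zeros((10, 10), dtype=int); b[3:7, 3:7] = 1
dsc = dice_coefficient(a, b)        # 2*9 / (16+16) = 0.5625
hd = hausdorff_distance(a, b)       # sqrt(2), corner-to-corner offset
```

Reported HDs in millimeters would additionally scale voxel coordinates by the scan's voxel spacing.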
Affiliation(s)
- Yuen Teng: Department of Neurosurgery and Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
- Chaoyue Chen: Department of Neurosurgery and Department of Radiology, West China Hospital, Sichuan University, No. 37, GuoXue Alley, Chengdu, 610041, People's Republic of China
- Xin Shu: College of Computer Science, Sichuan University, Chengdu, People's Republic of China
- Fumin Zhao: Department of Radiology, West China Second University Hospital, Sichuan University, Chengdu, China
- Lei Zhang: College of Computer Science, Sichuan University, Chengdu, People's Republic of China
- Jianguo Xu: Department of Neurosurgery and Department of Radiology, West China Hospital, Sichuan University, No. 37, GuoXue Alley, Chengdu, 610041, People's Republic of China

12
Park S, Lee ES, Shin KS, Lee JE, Ye JC. Self-supervised multi-modal training from uncurated images and reports enables monitoring AI in radiology. Med Image Anal 2024; 91:103021. [PMID: 37952385 DOI: 10.1016/j.media.2023.103021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2023] [Revised: 08/14/2023] [Accepted: 10/31/2023] [Indexed: 11/14/2023]
Abstract
The escalating demand for artificial intelligence (AI) systems that can monitor and supervise human errors and abnormalities in healthcare presents unique challenges. Recent advances in vision-language models reveal the potential of monitoring AI that understands both visual and textual concepts and their semantic correspondences. However, there has been limited success in applying vision-language models in the medical domain. Current vision-language models and learning strategies for photographic images and captions call for a web-scale data corpus of image and text pairs, which is often not feasible in the medical domain. To address this, we present the medical cross-attention vision-language model (Medical X-VL), which leverages key components tailored to the medical domain: self-supervised unimodal models in the medical domain and a fusion encoder to bridge them, momentum distillation, sentence-wise contrastive learning for medical reports, and sentence similarity-adjusted hard negative mining. We experimentally demonstrated that our model enables various zero-shot tasks for monitoring AI, ranging from zero-shot classification to zero-shot error correction. Our model outperformed current state-of-the-art models on two medical image datasets, suggesting a novel clinical application of our monitoring AI model to alleviate human errors. Our method demonstrates a more specialized capacity for fine-grained understanding, which presents a distinct advantage particularly applicable to the medical domain.
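Contrastive image-text pretraining of the kind described is commonly built on a symmetric InfoNCE objective over matched image/sentence embedding pairs. A minimal NumPy sketch under that assumption; the actual Medical X-VL training objective and names differ, and momentum distillation and hard negative mining are omitted:

```python
import numpy as np

def info_nce(img_emb, txt_emb, tau=0.07):
    """Symmetric InfoNCE loss: matched image/sentence pairs sit on the diagonal."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / tau                 # temperature-scaled cosine similarities
    idx = np.arange(len(img))

    def xent(l):                               # cross-entropy with diagonal targets
        l = l - l.max(axis=1, keepdims=True)   # numerical stability
        p = np.exp(l)
        p /= p.sum(axis=1, keepdims=True)
        return -np.log(p[idx, idx]).mean()

    # average image-to-text and text-to-image directions
    return 0.5 * (xent(logits) + xent(logits.T))

# perfectly matched toy embeddings give near-zero loss
loss = info_nce(np.eye(3), np.eye(3))
```

Sentence-wise variants apply this loss per report sentence rather than per whole report.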
Affiliation(s)
- Sangjoon Park: Department of Radiation Oncology, Yonsei College of Medicine, Seoul, Republic of Korea
- Eun Sun Lee: Chung-Ang University Hospital, Seoul, Republic of Korea
- Kyung Sook Shin, Jeong Eun Lee: Department of Radiology, Chungnam National University Hospital, Chungnam National University College of Medicine, Daejeon, Republic of Korea
- Jong Chul Ye: Kim Jaechul Graduate School of AI, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea

13
Chen C, Teng Y, Tan S, Wang Z, Zhang L, Xu J. Performance Test of a Well-Trained Model for Meningioma Segmentation in Health Care Centers: Secondary Analysis Based on Four Retrospective Multicenter Data Sets. J Med Internet Res 2023; 25:e44119. [PMID: 38100181 PMCID: PMC10757229 DOI: 10.2196/44119] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2022] [Revised: 06/21/2023] [Accepted: 11/22/2023] [Indexed: 12/18/2023] Open
Abstract
BACKGROUND Convolutional neural networks (CNNs) have produced state-of-the-art results in meningioma segmentation on magnetic resonance imaging (MRI). However, images obtained from different institutions, protocols, or scanners may show significant domain shift, leading to performance degradation and challenging model deployment in real clinical scenarios. OBJECTIVE This research aims to investigate the realistic performance of a well-trained meningioma segmentation model when deployed across different health care centers and to verify methods to enhance its generalization. METHODS This study was performed in four centers. A total of 606 patients with 606 MRIs were enrolled between January 2015 and December 2021. Manual segmentations, determined through consensus readings by neuroradiologists, were used as the ground truth masks. The model was previously trained using a standard supervised CNN, Deeplab V3+, and was deployed and tested separately in the four health care centers. To determine the appropriate approach to mitigating the observed performance degradation, two methods were used: unsupervised domain adaptation and supervised retraining. RESULTS The trained model showed state-of-the-art tumor segmentation performance in two health care institutions, with a Dice ratio of 0.887 (SD 0.108, 95% CI 0.903-0.925) in center A and a Dice ratio of 0.874 (SD 0.800, 95% CI 0.854-0.894) in center B. In the other two health care institutions, which obtained MRIs using different scanning protocols, performance declined, with Dice ratios of 0.631 (SD 0.157, 95% CI 0.556-0.707) in center C and 0.649 (SD 0.187, 95% CI 0.566-0.732) in center D. Unsupervised domain adaptation yielded a significant improvement in performance, with Dice ratios of 0.842 (SD 0.073, 95% CI 0.820-0.864) in center C and 0.855 (SD 0.097, 95% CI 0.826-0.886) in center D. Nonetheless, it did not outperform supervised retraining, which achieved Dice ratios of 0.899 (SD 0.026, 95% CI 0.889-0.906) in center C and 0.886 (SD 0.046, 95% CI 0.870-0.903) in center D. CONCLUSIONS Deploying a trained CNN model in different health care institutions may show significant performance degradation due to the domain shift of MRIs. Under this circumstance, the use of unsupervised domain adaptation or supervised retraining should be considered, taking into account the balance between clinical requirements, model performance, and the size of the available data.
Affiliation(s)
- Chaoyue Chen, Yuen Teng, Jianguo Xu: Neurosurgery Department, West China Hospital, Sichuan University, Chengdu, China
- Shuo Tan, Lei Zhang: Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Zizhou Wang: Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China; Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore

14
Kim H, Kim HG, Oh JH, Lee KM. Deep-learning model for diagnostic clue: detecting the dural tail sign for meningiomas on contrast-enhanced T1 weighted images. Quant Imaging Med Surg 2023; 13:8132-8143. [PMID: 38106283 PMCID: PMC10722041 DOI: 10.21037/qims-23-114] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2023] [Accepted: 09/06/2023] [Indexed: 12/19/2023]
Abstract
Background Meningiomas are the most common primary central nervous system tumors, and magnetic resonance imaging (MRI), especially the contrast-enhanced T1-weighted image (CE T1WI), is a fundamental imaging modality for the detection and analysis of these tumors. In this study, we propose an automated deep-learning model for meningioma detection using the dural tail sign. Methods The dataset included 123 patients with 3,824 dural tail signs on sagittal CE T1WI. The dataset was divided at a specific time point into training and test datasets comprising 78 and 45 patients, respectively. To compensate for the small size of the training dataset, 39 additional patients with 69 dural tail signs from an open dataset were appended to it. A You Only Look Once (YOLO) v4 network was trained on sagittal CE T1WI to detect dural tail signs. A normal group dataset, comprising 51 patients with no abnormal findings on MRI, was employed to evaluate the specificity of the trained model. Results The sensitivity and false-positive average were 82.22% and 29.73, respectively, in the test dataset. The specificity and false-positive average were 17.65% and 3.16, respectively, in the normal dataset. Most of the false-positive cases in the test dataset were enhancing vessels misinterpreted as dural thickening. Conclusions The proposed model provides an automated detection system for the dural tail sign to identify meningioma on general screening MRI. It can ease radiologists' reading process by flagging possible incidental dural masses based on dural tail sign detection.
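Detection metrics like the sensitivity and false-positive average reported here are typically derived by matching predicted boxes to ground-truth boxes with an IoU threshold. A hedged NumPy-free sketch with a simple greedy match; the paper's exact matching rule is not specified, and all names and toy boxes are illustrative:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def detection_stats(preds_per_image, truths_per_image, thr=0.5):
    """Sensitivity and mean false positives per image over a dataset."""
    tp = fn = fp = 0
    for preds, truths in zip(preds_per_image, truths_per_image):
        matched = set()
        for t in truths:
            hits = [i for i, p in enumerate(preds) if iou(p, t) >= thr]
            if hits:
                tp += 1
                matched.add(hits[0])        # greedy: first sufficient overlap
            else:
                fn += 1
        fp += sum(1 for i in range(len(preds)) if i not in matched)
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    return sens, fp / len(preds_per_image)

# toy dataset: image 1 has one hit and one spurious box; image 2 is missed
sens, fp_avg = detection_stats(
    [[(0, 0, 10, 10), (50, 50, 60, 60)], []],
    [[(1, 1, 10, 10)], [(0, 0, 5, 5)]],
)
```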
Affiliation(s)
- Hyunmin Kim, Hyug-Gi Kim: Department of Radiology, Kyung Hee University Hospital, Kyung Hee University College of Medicine, Seoul, Republic of Korea

15
Toader C, Eva L, Tataru CI, Covache-Busuioc RA, Bratu BG, Dumitrascu DI, Costin HP, Glavan LA, Ciurea AV. Frontiers of Cranial Base Surgery: Integrating Technique, Technology, and Teamwork for the Future of Neurosurgery. Brain Sci 2023; 13:1495. [PMID: 37891862 PMCID: PMC10605159 DOI: 10.3390/brainsci13101495] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2023] [Revised: 10/10/2023] [Accepted: 10/18/2023] [Indexed: 10/29/2023] Open
Abstract
The landscape of cranial base surgery has undergone monumental transformations over the past several decades. This article serves as a comprehensive survey, detailing both the historical and current techniques and technologies that have propelled this field into an era of unprecedented capabilities and sophistication. In the prologue, we traverse the historical evolution from rudimentary interventions to the state-of-the-art neurosurgical methodologies that define today's practice. Subsequent sections delve into the anatomical complexities of the anterior, middle, and posterior cranial fossa, shedding light on the intricacies that dictate surgical approaches. In a section dedicated to advanced techniques and modalities, we explore cutting-edge evolutions in minimally invasive procedures, pituitary surgery, and cranial base reconstruction. Here, we highlight the seamless integration of endocrinology, biomaterial science, and engineering into neurosurgical craftsmanship. The article emphasizes the paradigm shift towards "Functionally" Guided Surgery facilitated by intraoperative neuromonitoring. We explore its historical origins, current technologies, and its invaluable role in tailoring surgical interventions across diverse pathologies. Additionally, the digital era's contributions to cranial base surgery are examined. This includes breakthroughs in endoscopic technology, robotics, augmented reality, and the potential of machine learning and AI-assisted diagnostic and surgical planning. The discussion extends to radiosurgery and radiotherapy, focusing on the harmonization of precision and efficacy through advanced modalities such as Gamma Knife and CyberKnife. The article also evaluates newer protocols that optimize tumor control while preserving neural structures. In acknowledging the holistic nature of cranial base surgery, we advocate for an interdisciplinary approach. 
The ecosystem of this surgical field is presented as an amalgamation of various medical disciplines, including neurology, radiology, oncology, and rehabilitation, and is further enriched by insights from patient narratives and quality-of-life metrics. The epilogue contemplates future challenges and opportunities, pinpointing potential breakthroughs in stem cell research, regenerative medicine, and genomic tailoring. Ultimately, the article reaffirms the ethos of continuous learning, global collaboration, and patient-first principles, projecting an optimistic trajectory for the field of cranial base surgery in the coming decade.
Affiliation(s)
- Corneliu Toader: Department of Neurosurgery, “Carol Davila” University of Medicine and Pharmacy, 020021 Bucharest, Romania; Department of Vascular Neurosurgery, National Institute of Neurology and Neurovascular Diseases, 077160 Bucharest, Romania
- Lucian Eva: Department of Neurosurgery, Dunarea de Jos University, 800010 Galati, Romania; Department of Neurosurgery, Clinical Emergency Hospital “Prof. Dr. Nicolae Oblu”, 700309 Iasi, Romania
- Catalina-Ioana Tataru: Department of Ophthalmology, “Carol Davila” University of Medicine and Pharmacy, 020021 Bucharest, Romania; Clinical Hospital of Ophthalmological Emergencies, 010464 Bucharest, Romania
- Razvan-Adrian Covache-Busuioc, Bogdan-Gabriel Bratu, David-Ioan Dumitrascu, Horia Petre Costin, Luca-Andrei Glavan: Department of Neurosurgery, “Carol Davila” University of Medicine and Pharmacy, 020021 Bucharest, Romania
- Alexandru Vlad Ciurea: Department of Neurosurgery, “Carol Davila” University of Medicine and Pharmacy, 020021 Bucharest, Romania; Neurosurgery Department, Sanador Clinical Hospital, 010991 Bucharest, Romania

16
Mohammadi S, Ghaderi S, Ghaderi K, Mohammadi M, Pourasl MH. Automated segmentation of meningioma from contrast-enhanced T1-weighted MRI images in a case series using a marker-controlled watershed segmentation and fuzzy C-means clustering machine learning algorithm. Int J Surg Case Rep 2023; 111:108818. [PMID: 37716060 PMCID: PMC10514425 DOI: 10.1016/j.ijscr.2023.108818] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2023] [Revised: 09/07/2023] [Accepted: 09/09/2023] [Indexed: 09/18/2023] Open
Abstract
INTRODUCTION AND IMPORTANCE Accurate segmentation of meningiomas from contrast-enhanced T1-weighted (CE T1-w) magnetic resonance imaging (MRI) is crucial for diagnosis and treatment planning. Manual segmentation is time-consuming and prone to variability. This study evaluated an automated segmentation approach for meningiomas using marker-controlled watershed segmentation (MCWS) and fuzzy c-means (FCM) algorithms. CASE PRESENTATION AND METHODS CE T1-w MRI scans of 3 female patients (aged 59, 44, and 67 years) with right frontal meningiomas were analyzed. Images were converted to grayscale and preprocessed with Otsu's thresholding and FCM clustering, and MCWS segmentation was performed. Segmentation accuracy was assessed by comparing automated segmentations to manual delineations. CLINICAL DISCUSSION The approach successfully segmented meningiomas in all cases. Mean sensitivity was 0.8822, indicating accurate identification of tumors. The mean Dice similarity coefficient between Otsu's thresholding and FCM1 was 0.6599, suggesting good overlap between segmentation methods. CONCLUSION The MCWS and FCM approach enables accurate automated segmentation of meningiomas from CE T1-w MRI. With further validation on larger datasets, it could provide an efficient tool to assist in delineating meningioma boundaries for clinical management.
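The FCM step in such a pipeline alternates soft-membership and centroid updates on image intensities. An illustrative 1-D NumPy sketch of standard fuzzy c-means on synthetic "background vs. tumor" intensities; this is not the authors' implementation, and the watershed stage is omitted:

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=2, m=2.0, n_iter=50, seed=0):
    """Standard FCM on 1-D intensity values x; returns centers and memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0)                          # memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m                             # fuzzified memberships
        centers = um @ x / um.sum(axis=1)       # membership-weighted centroids
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1))               # inverse-distance membership update
        u /= u.sum(axis=0)
    return centers, u

# toy "image": 200 dark background pixels and 50 bright tumor pixels
rng = np.random.default_rng(1)
x = np.concatenate([np.full(200, 0.1), np.full(50, 0.9)])
x = x + rng.normal(0, 0.02, x.size)
centers, u = fuzzy_c_means(x)
labels = u.argmax(axis=0)                       # hard assignment per pixel
```

In the described pipeline, the resulting cluster map would then seed markers for the watershed transform.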
Affiliation(s)
- Sana Mohammadi: Department of Medical Sciences, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Sadegh Ghaderi: Department of Neuroscience and Addiction Studies, School of Advanced Technologies in Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Kayvan Ghaderi: Department of Information Technology and Computer Engineering, Faculty of Engineering, University of Kurdistan, Sanandaj 66177-15175, Iran
- Mahdi Mohammadi: Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran

17
Dong Y, Wang T, Ma C, Li Z, Chellali R. DE-UFormer: U-shaped dual encoder architectures for brain tumor segmentation. Phys Med Biol 2023; 68:195019. [PMID: 37699403 DOI: 10.1088/1361-6560/acf911] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2023] [Accepted: 09/12/2023] [Indexed: 09/14/2023]
Abstract
Objective. In brain tumor segmentation, a convolutional neural network (CNN) or a transformer usually serves as the encoder. The convolution operation of a CNN extracts local information well but captures global context poorly, whereas the attention mechanism of a transformer is good at establishing long-range dependencies but lacks the ability to extract high-precision local information. Both high-precision local information and global contextual information are crucial in brain tumor segmentation. The aim of this paper is to propose a brain tumor segmentation model that can simultaneously extract and fuse the two. Approach. We propose DE-Uformer, a network with dual encoders that obtains local features and global representations using both a CNN encoder and a transformer encoder. On this basis, we further propose the nested encoder-aware feature fusion (NEaFF) module for effective deep fusion of the information at each dimension. It establishes long-range dependencies among features from a single encoder via a spatial-attention transformer, and it investigates how features extracted from the two encoders are related via a cross-encoder attention transformer. Main results. The proposed algorithm was evaluated on the BraTS2020 dataset and a private meningioma dataset. Results show that it is significantly better than current state-of-the-art brain tumor segmentation methods. Significance. The method proposed in this paper greatly improves the accuracy of brain tumor segmentation. This advancement helps healthcare professionals perform a more comprehensive analysis and assessment of brain tumors, thereby improving diagnostic accuracy and reliability.
This fully automated, high-accuracy brain tumor segmentation model is of great significance for the critical decisions physicians make in selecting treatment strategies and in preoperative planning.
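The cross-encoder attention described above can be illustrated as a single-head attention step in which queries from one encoder attend over the other encoder's features. A minimal NumPy sketch; shapes, names, and toy inputs are assumptions, and the actual NEaFF module adds projections and nesting not shown here:

```python
import numpy as np

def cross_attention(q_feats, kv_feats):
    """Single-head cross-attention: rows of q_feats attend over rows of kv_feats."""
    d = q_feats.shape[-1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)   # scaled dot-product logits
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)            # softmax over key/value tokens
    return w @ kv_feats                          # attended value features

# toy case: two tokens per encoder, feature dimension 2; with identical, strongly
# peaked features each query attends almost entirely to its matching token
out = cross_attention(np.eye(2) * 10.0, np.eye(2) * 10.0)
```

In a dual-encoder model, `q_feats` would come from the CNN branch and `kv_feats` from the transformer branch (or vice versa), letting each branch condition on the other's representation.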
Affiliation(s)
- Yan Dong, Ting Wang, Ryad Chellali: College of Electrical Engineering and Control Science, Nanjing Tech University, Nanjing, People's Republic of China
- Chiyuan Ma, Zhenxing Li: Jinling Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, People's Republic of China

18
Bouget D, Alsinan D, Gaitan V, Helland RH, Pedersen A, Solheim O, Reinertsen I. Raidionics: an open software for pre- and postoperative central nervous system tumor segmentation and standardized reporting. Sci Rep 2023; 13:15570. [PMID: 37730820 PMCID: PMC10511510 DOI: 10.1038/s41598-023-42048-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2023] [Accepted: 09/05/2023] [Indexed: 09/22/2023] Open
Abstract
For patients suffering from central nervous system tumors, prognosis estimation, treatment decisions, and postoperative assessments are made from the analysis of a set of magnetic resonance (MR) scans. Currently, the lack of open tools for standardized, automatic tumor segmentation and for the generation of clinical reports incorporating relevant tumor characteristics leads to potential risks from the inherent subjectivity of these decisions. To tackle this problem, the proposed open-source software Raidionics has been developed, offering both a user-friendly graphical user interface and a stable processing backend. The software includes preoperative segmentation models for each of the most common tumor types (i.e., glioblastomas, lower-grade gliomas, meningiomas, and metastases), together with one early postoperative glioblastoma segmentation model. Preoperative segmentation performance was quite homogeneous across the four brain tumor types, with an average Dice around 85% and patient-wise recall and precision around 95%. Postoperatively, performance was lower, with an average Dice of 41%. Overall, generating a standardized clinical report, including tumor segmentation and feature computation, requires about ten minutes on a regular laptop. The proposed Raidionics software is the first open solution enabling easy use of state-of-the-art segmentation models for all major tumor types, including preoperative and postsurgical standardized reports.
Affiliation(s)
- David Bouget, Demah Alsinan, Valeria Gaitan, André Pedersen: Department of Health Research, SINTEF Digital, 7465, Trondheim, Norway
- Ragnhild Holden Helland, Ingerid Reinertsen: Department of Health Research, SINTEF Digital, 7465, Trondheim, Norway; Department of Circulation and Medical Imaging, Norwegian University of Science and Technology (NTNU), 7491, Trondheim, Norway
- Ole Solheim: Department of Neurosurgery, St. Olavs Hospital, Trondheim University Hospital, 7491, Trondheim, Norway; Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology (NTNU), 7491, Trondheim, Norway

19
Jun Y, Park YW, Shin H, Shin Y, Lee JR, Han K, Ahn SS, Lim SM, Hwang D, Lee SK. Intelligent noninvasive meningioma grading with a fully automatic segmentation using interpretable multiparametric deep learning. Eur Radiol 2023; 33:6124-6133. [PMID: 37052658 DOI: 10.1007/s00330-023-09590-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2022] [Revised: 11/30/2022] [Accepted: 02/09/2023] [Indexed: 04/14/2023]
Abstract
OBJECTIVES To establish a robust interpretable multiparametric deep learning (DL) model for automatic noninvasive grading of meningiomas along with segmentation. METHODS In total, 257 patients with pathologically confirmed meningiomas (162 low-grade, 95 high-grade) who underwent a preoperative brain MRI, including T2-weighted (T2) and contrast-enhanced T1-weighted images (T1C), were included in the institutional training set. A two-stage DL grading model was constructed for segmentation and classification based on a multiparametric three-dimensional U-net and ResNet. The models were validated in an external validation set consisting of 61 patients with meningiomas (46 low-grade, 15 high-grade). The Relevance-weighted Class Activation Mapping (RCAM) method was used to interpret the DL features contributing to the predictions of the DL grading model. RESULTS On external validation, the combined T1C and T2 model showed a Dice coefficient of 0.910 in segmentation and the highest performance for meningioma grading compared to the T2-only or T1C-only models, with an area under the curve (AUC) of 0.770 (95% confidence interval: 0.644-0.895) and accuracy, sensitivity, and specificity of 72.1%, 73.3%, and 71.7%, respectively. The AUC and accuracy of the combined DL grading model were higher than those of the human readers (AUCs of 0.675-0.690 and accuracies of 65.6-68.9%, respectively). The RCAM of the DL grading model showed activated maps at the surface regions of meningiomas, indicating that the model recognized features at the tumor margin for grading. CONCLUSIONS An interpretable multiparametric DL model combining T1C and T2 can enable fully automatic grading of meningiomas along with segmentation. KEY POINTS • The multiparametric DL model showed robustness in grading and segmentation on external validation. • The diagnostic performance of the combined DL grading model was higher than that of the human readers. • The RCAM showed that the DL grading model recognized meaningful features at the tumor margin for grading.
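The AUCs compared in such studies can be computed from raw prediction scores via the rank-sum (Mann-Whitney) formulation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one, with ties counted half. A small NumPy sketch with toy data (names and values are illustrative):

```python
import numpy as np

def auc_from_scores(scores, labels):
    """AUC as P(score_pos > score_neg), ties counted 0.5 (Mann-Whitney form)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # all positive/negative pairwise comparisons via broadcasting
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# toy example: one low-grade case (label 0) outranks one high-grade case
auc = auc_from_scores([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 1])  # 2 of 3 pairs correct
```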
Affiliation(s)
- Yohan Jun: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- Yae Won Park, Kyunghwa Han, Sung Soo Ahn, Seung-Koo Lee: Department of Radiology and Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Korea
- Hyungseob Shin, Yejee Shin, Jeong Ryong Lee: School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Korea
- Soo Mee Lim: Department of Radiology, Ewha Womans University College of Medicine, Seoul, Korea
- Dosik Hwang: Department of Radiology and Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, Seoul, Korea; School of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea; Center for Healthcare Robotics, Korea Institute of Science and Technology, Seoul, Korea; Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, Korea

20
Khan M, Hanna C, Findlay M, Lucke-Wold B, Karsy M, Jensen RL. Modeling Meningiomas: Optimizing Treatment Approach. Neurosurg Clin N Am 2023; 34:479-492. [PMID: 37210136 DOI: 10.1016/j.nec.2023.02.014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
Preclinical meningioma models offer a setting to test molecular mechanisms of tumor development and targeted treatment options, but historically they have been challenging to generate. Few spontaneous tumor models in rodents have been established, but cell culture and in vivo rodent models have emerged, alongside artificial intelligence, radiomics, and neural networks for differentiating the clinical heterogeneity of meningiomas. Following PRISMA guideline methodology, we reviewed 127 studies that addressed preclinical modeling, including laboratory and animal studies. Our evaluation identified that meningioma preclinical models provide valuable molecular insight into disease progression and effective chemotherapeutic and radiation approaches for specific tumor types.
Affiliation(s)
- Majid Khan
- Reno School of Medicine, University of Nevada, Reno, NV, USA
- Chadwin Hanna
- Department of Neurosurgery, University of Florida, Gainesville, FL, USA
- Matthew Findlay
- School of Medicine, University of Utah, Salt Lake City, UT, USA
- Michael Karsy
- Department of Neurosurgery, Clinical Neurosciences Center, University of Utah, 175 North Medical Drive East, Salt Lake City, UT 84132, USA
- Randy L Jensen
- Department of Neurosurgery, Clinical Neurosciences Center, University of Utah, 175 North Medical Drive East, Salt Lake City, UT 84132, USA

21
Koechli C, Zwahlen DR, Schucht P, Windisch P. Radiomics and machine learning for predicting the consistency of benign tumors of the central nervous system: A systematic review. Eur J Radiol 2023; 164:110866. [PMID: 37207398 DOI: 10.1016/j.ejrad.2023.110866] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2023] [Revised: 04/28/2023] [Accepted: 05/03/2023] [Indexed: 05/21/2023]
Abstract
PURPOSE Predicting the consistency of benign central nervous system (CNS) tumors prior to surgery helps to improve surgical outcomes. This review summarizes and analyzes the literature on using radiomics and/or machine learning (ML) for consistency prediction. METHOD The Medical Literature Analysis and Retrieval System Online (MEDLINE) database was screened for studies published in English from January 1, 2000. Data were extracted according to the PRISMA guidelines, and study quality was assessed with the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool. RESULTS Eight publications were included, focusing on pituitary macroadenomas (n = 5), pituitary adenomas (n = 1), and meningiomas (n = 2), using retrospective (n = 6), prospective (n = 1), and unknown (n = 1) study designs with a total of 763 patients for consistency prediction. The studies reported an area under the curve (AUC) of 0.71-0.99 for their respective best-performing models. Four articles validated their models internally, whereas none validated them externally. Two articles stated that data were available on request; the remaining publications lacked data-availability information. CONCLUSIONS Research on consistency prediction of CNS tumors is still at an early stage regarding the use of radiomics and different ML techniques. Best-practice procedures for radiomics and ML need to be followed more rigorously to facilitate comparison between publications and, accordingly, possible implementation into clinical practice in the future.
Affiliation(s)
- Carole Koechli
- Department of Radiation Oncology, Kantonsspital Winterthur, 8401 Winterthur, Switzerland; Universitätsklinik für Neurochirurgie, Bern University Hospital, 3010 Bern, Switzerland
- Daniel R Zwahlen
- Department of Radiation Oncology, Kantonsspital Winterthur, 8401 Winterthur, Switzerland
- Philippe Schucht
- Universitätsklinik für Neurochirurgie, Bern University Hospital, 3010 Bern, Switzerland
- Paul Windisch
- Department of Radiation Oncology, Kantonsspital Winterthur, 8401 Winterthur, Switzerland

22
Chen C, Zhang T, Teng Y, Yu Y, Shu X, Zhang L, Zhao F, Xu J. Automated segmentation of craniopharyngioma on MR images using U-Net-based deep convolutional neural network. Eur Radiol 2023; 33:2665-2675. [PMID: 36396792 PMCID: PMC10017618 DOI: 10.1007/s00330-022-09216-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2022] [Revised: 09/27/2022] [Accepted: 09/30/2022] [Indexed: 11/19/2022]
Abstract
OBJECTIVES To develop a U-Net-based deep learning model for automated segmentation of craniopharyngioma. METHODS A total of 264 patients diagnosed with craniopharyngioma were included in this research. Pre-treatment MRIs were collected, annotated, and used as ground truth to train and evaluate the deep learning model. Thirty-eight patients from another institution were used for independent external testing. The proposed segmentation model was constructed based on a U-Net architecture. The Dice similarity coefficient (DSC), 95th-percentile Hausdorff distance (95HD), Jaccard value, true positive rate (TPR), and false positive rate (FPR) of each case were calculated. One-way ANOVA was used to investigate whether model performance was associated with the radiological characteristics of the tumors. RESULTS The proposed model showed good segmentation performance, with average DSC of 0.840, Jaccard of 0.734, TPR of 0.820, FPR of 0.000, and 95HD of 3.669 mm. It also performed well on the independent external test set, with average DSC of 0.816, Jaccard of 0.704, TPR of 0.765, FPR of 0.000, and 95HD of 4.201 mm. One-way ANOVA suggested that performance was not statistically associated with radiological characteristics, including predominant composition (p = 0.370), lobulated shape (p = 0.353), compressed or enclosed ICA (p = 0.809), and cavernous sinus invasion (p = 0.283). CONCLUSIONS The proposed deep learning model shows promising results for the automated segmentation of craniopharyngioma. KEY POINTS • The segmentation model based on U-Net showed good performance in segmentation of craniopharyngioma. • The proposed model showed good performance regardless of the radiological characteristics of craniopharyngioma. • The model performed well on the independent external dataset obtained from another center.
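The overlap metrics this abstract reports (Jaccard, TPR, FPR) all derive from the voxel-wise confusion counts. A minimal sketch under the usual definitions, with toy masks rather than the authors' evaluation code (`overlap_metrics` is an illustrative helper):

```python
import numpy as np

def overlap_metrics(pred: np.ndarray, truth: np.ndarray):
    """Jaccard, true-positive rate, and false-positive rate for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    jaccard = tp / (tp + fp + fn)
    tpr = tp / (tp + fn)  # sensitivity
    fpr = fp / (fp + tn)
    return jaccard, tpr, fpr

pred = np.array([1, 1, 0, 0])
truth = np.array([1, 0, 1, 0])
print(overlap_metrics(pred, truth))  # (1/3, 0.5, 0.5)
```

Note that Dice and Jaccard are monotonically related per case (DSC = 2J/(1+J)), which is why the reported averages of 0.840 and 0.734 sit close to that relationship.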
Affiliation(s)
- Chaoyue Chen
- Department of Neurosurgery, Sichuan University, West China Hospital, No. 37, GuoXue Alley, Chengdu, 610041, People's Republic of China
- Department of Radiology, Sichuan University, West China Hospital, No. 37, GuoXue Alley, Chengdu, 610041, People's Republic of China
- Ting Zhang
- Department of Neurosurgery, Sichuan University, West China Hospital, No. 37, GuoXue Alley, Chengdu, 610041, People's Republic of China
- Department of Radiology, Sichuan University, West China Hospital, No. 37, GuoXue Alley, Chengdu, 610041, People's Republic of China
- Yuen Teng
- Department of Neurosurgery, Sichuan University, West China Hospital, No. 37, GuoXue Alley, Chengdu, 610041, People's Republic of China
- Department of Radiology, Sichuan University, West China Hospital, No. 37, GuoXue Alley, Chengdu, 610041, People's Republic of China
- Yijie Yu
- College of Computer Science, Sichuan University, Chengdu, 610065, People's Republic of China
- Xin Shu
- College of Computer Science, Sichuan University, Chengdu, 610065, People's Republic of China
- Lei Zhang
- College of Computer Science, Sichuan University, Chengdu, 610065, People's Republic of China
- College of Computer Science, Sichuan University, Chengdu, 610041, People's Republic of China
- Fumin Zhao
- Radiology Department, West China Second University Hospital, Sichuan University, No. 20, section 3, Renmin South Road, Wuhou District, Chengdu, 610041, People's Republic of China
- Jianguo Xu
- Department of Neurosurgery, Sichuan University, West China Hospital, No. 37, GuoXue Alley, Chengdu, 610041, People's Republic of China
- Department of Radiology, Sichuan University, West China Hospital, No. 37, GuoXue Alley, Chengdu, 610041, People's Republic of China

23
Kang H, Witanto JN, Pratama K, Lee D, Choi KS, Choi SH, Kim KM, Kim MS, Kim JW, Kim YH, Park SJ, Park CK. Fully Automated MRI Segmentation and Volumetric Measurement of Intracranial Meningioma Using Deep Learning. J Magn Reson Imaging 2023; 57:871-881. [PMID: 35775971 DOI: 10.1002/jmri.28332] [Citation(s) in RCA: 12] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2022] [Revised: 06/16/2022] [Accepted: 06/16/2022] [Indexed: 12/21/2022] Open
Abstract
BACKGROUND Accurate and rapid measurement of the MRI volume of meningiomas is essential in clinical practice to determine the growth rate of the tumor. The imperfect automation and disappointing small-meningioma performance of previous automated volumetric tools limit their use in routine clinical practice. PURPOSE To develop and validate a computational model for fully automated meningioma segmentation and volume measurement on contrast-enhanced MRI scans using deep learning. STUDY TYPE Retrospective. POPULATION A total of 659 intracranial meningioma patients (median age, 59.0 years; interquartile range: 53.0-66.0 years) including 554 women and 105 men. FIELD STRENGTH/SEQUENCE 1.0 T, 1.5 T, and 3.0 T; three-dimensional, T1-weighted gradient-echo imaging with contrast enhancement. ASSESSMENT The tumors were manually segmented by two neurosurgeons, H.K. and C.-K.P., with 10 and 26 years of clinical experience, respectively, for use as the ground truth. Deep learning models based on U-Net and nnU-Net were trained using 459 subjects and tested on 100 patients from a single institution (internal validation set [IVS]) and 100 patients from 24 other institutions (external validation set [EVS]), respectively. The performance of each model was evaluated with the Sørensen-Dice similarity coefficient (DSC) compared with the ground truth. STATISTICAL TESTS According to the normality of the data distribution verified by the Shapiro-Wilk test, variables with three or more categories were compared by the Kruskal-Wallis test with Dunn's post hoc analysis. RESULTS A two-dimensional (2D) nnU-Net showed the highest median DSCs of 0.922 and 0.893 for the IVS and EVS, respectively. The nnU-Nets achieved superior meningioma segmentation performance compared to the U-Nets. The DSCs of the 2D nnU-Net for small meningiomas smaller than 1 cm3 were 0.769 and 0.780 with the IVS and EVS, respectively.
DATA CONCLUSION A fully automated, accurate nnU-Net-based volumetric measurement tool for meningiomas was developed, with clinically applicable performance even for small meningiomas. EVIDENCE LEVEL 3 TECHNICAL EFFICACY Stage 2.
Affiliation(s)
- Ho Kang
- Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Kevin Pratama
- Research and Science Division, Research and Development Center, MEDICALIP Co. Ltd, Seoul, Korea
- Doohee Lee
- Research and Science Division, Research and Development Center, MEDICALIP Co. Ltd, Seoul, Korea
- Kyu Sung Choi
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Seung Hong Choi
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Kyung-Min Kim
- Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Min-Sung Kim
- Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Jin Wook Kim
- Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Yong Hwy Kim
- Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Sang Joon Park
- Research and Science Division, Research and Development Center, MEDICALIP Co. Ltd, Seoul, Korea
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Chul-Kee Park
- Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea

24
Volumetric measurement of intracranial meningiomas: a comparison between linear, planimetric, and machine learning with multiparametric voxel-based morphometry methods. J Neurooncol 2023; 161:235-243. [PMID: 36058985 DOI: 10.1007/s11060-022-04127-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2022] [Accepted: 08/30/2022] [Indexed: 10/14/2022]
Abstract
PURPOSE To compare the accuracy of three volumetric methods in the radiological assessment of meningiomas: linear (ABC/2), planimetric, and multiparametric machine learning-based semiautomated voxel-based morphometry (VBM), and to investigate the relevance of tumor shape in volumetric error. METHODS Retrospective imaging database analysis at the authors' institutions. We included patients with a confirmed diagnosis of meningioma and preoperative cranial magnetic resonance imaging eligible for volumetric analyses. After tumor segmentation, images underwent automated computation of shape properties such as sphericity, roundness, flatness, and elongation. RESULTS Sixty-nine patients (85 tumors) were included. Tumor volumes were significantly different using linear (13.82 cm3 [range 0.13-163.74 cm3]), planimetric (11.66 cm3 [range 0.17-196.2 cm3]) and VBM methods (10.24 cm3 [range 0.17-190.32 cm3]) (p < 0.001). Median volume and percentage errors between the planimetric and linear methods and the VBM method were 1.08 cm3 and 11.61%, and 0.23 cm3 and 5.5%, respectively. Planimetry and linear methods overestimated the actual volume in 79% and 63% of the patients, respectively. Correlation studies showed excellent reliability and volumetric agreement between manual- and computer-based methods. Larger and flatter tumors had greater accuracy on planimetry, whereas less rounded tumors contributed negatively to the accuracy of the linear method. CONCLUSION Semiautomated VBM volumetry for meningiomas is not influenced by tumor shape properties, whereas planimetry and linear methods tend to overestimate tumor volume. Furthermore, it is necessary to consider tumor roundness prior to linear measurement so as to choose the most appropriate method for each patient on an individual basis.
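The linear (ABC/2) method compared above approximates the tumor as an ellipsoid: V = π/6 · A·B·C ≈ A·B·C/2, where A, B, and C are the three orthogonal maximal diameters. A sketch with illustrative numbers, not the study's measurements (function names are hypothetical):

```python
import math

def volume_abc_over_2(a_cm: float, b_cm: float, c_cm: float) -> float:
    """Linear ABC/2 estimate (cm^3) from three orthogonal diameters (cm)."""
    return a_cm * b_cm * c_cm / 2.0

def volume_ellipsoid(a_cm: float, b_cm: float, c_cm: float) -> float:
    """Exact ellipsoid volume: pi/6 * A*B*C."""
    return math.pi / 6.0 * a_cm * b_cm * c_cm

# For a perfect ellipsoid, ABC/2 slightly underestimates (1/2 < pi/6);
# the over/underestimation the study reports comes from real tumors
# deviating from the ellipsoid assumption (flat or non-round shapes).
print(volume_abc_over_2(4.0, 3.0, 2.5))  # 15.0 cm^3
print(volume_ellipsoid(4.0, 3.0, 2.5))
```

This makes concrete why the study finds tumor roundness matters: the farther a tumor is from an ellipsoid, the worse the linear estimate.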
25
Wei L, Liu H, Xu J, Shi L, Shan Z, Zhao B, Gao Y. Quantum machine learning in medical image analysis: A Survey. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2023.01.049] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/12/2023]
26
Fully Automated Segmentation Models of Supratentorial Meningiomas Assisted by Inclusion of Normal Brain Images. J Imaging 2022; 8:jimaging8120327. [PMID: 36547492 PMCID: PMC9782766 DOI: 10.3390/jimaging8120327] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2022] [Revised: 12/09/2022] [Accepted: 12/13/2022] [Indexed: 12/23/2022] Open
Abstract
To train an automatic brain tumor segmentation model, a large amount of data is required. In this paper, we proposed a strategy to overcome the limited amount of clinically collected magnetic resonance image (MRI) data regarding meningiomas by pre-training a model using a larger public dataset of MRIs of gliomas and augmenting our meningioma training set with normal brain MRIs. Pre-operative MRIs of 91 meningioma patients (171 MRIs) and 10 non-meningioma patients (normal brains) were collected between 2016 and 2019. Three-dimensional (3D) U-Net was used as the base architecture. The model was pre-trained with BraTS 2019 data, then fine-tuned with our datasets consisting of 154 meningioma MRIs and 10 normal brain MRIs. To increase the utility of the normal brain MRIs, a novel balanced Dice loss (BDL) function was used instead of the conventional soft Dice loss function. The model performance was evaluated using the Dice scores across the remaining 17 meningioma MRIs. The segmentation performance of the model was sequentially improved via the pre-training and inclusion of normal brain images. The Dice scores improved from 0.72 to 0.76 when the model was pre-trained. The inclusion of normal brain MRIs to fine-tune the model improved the Dice score; it increased to 0.79. When employing BDL as the loss function, the Dice score reached 0.84. The proposed learning strategy for U-net showed potential for use in segmenting meningioma lesions.
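The conventional soft Dice loss that this paper's balanced Dice loss (BDL) replaces degenerates on empty (normal-brain) ground truth, which is what motivates a balanced variant; the exact BDL form is given in the paper. A sketch of the conventional loss only, using NumPy as a stand-in for a framework tensor op (illustrative code, not the authors'):

```python
import numpy as np

def soft_dice_loss(probs: np.ndarray, truth: np.ndarray, eps: float = 1e-6) -> float:
    """Conventional soft Dice loss: 1 - (2*sum(p*g) + eps) / (sum(p) + sum(g) + eps)."""
    inter = np.sum(probs * truth)
    return 1.0 - (2.0 * inter + eps) / (np.sum(probs) + np.sum(truth) + eps)

# Perfect agreement on a tumor-containing scan -> loss near 0.
perfect = soft_dice_loss(np.ones(8), np.ones(8))

# All-zero ground truth (normal brain): any predicted foreground pushes
# the loss toward 1 regardless of how small the prediction is, giving
# poor learning signal on normal images.
empty_gt = soft_dice_loss(np.full(8, 0.5), np.zeros(8))
print(perfect, empty_gt)
```

This degenerate behavior on normal brains is the gap a balanced loss is designed to close when augmenting the training set with normal MRIs.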
27
Ma X, Zhao Y, Lu Y, Li P, Li X, Mei N, Wang J, Geng D, Zhao L, Yin B. A dual-branch hybrid dilated CNN model for the AI-assisted segmentation of meningiomas in MR images. Comput Biol Med 2022; 151:106279. [PMID: 36375416 DOI: 10.1016/j.compbiomed.2022.106279] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2022] [Revised: 10/11/2022] [Accepted: 10/30/2022] [Indexed: 11/11/2022]
Abstract
BACKGROUND AND OBJECTIVE Treatment for meningiomas usually includes surgical removal, radiation therapy, and chemotherapy. Accurate tumor segmentation significantly facilitates complete surgical resection and precise radiotherapy, thereby improving patient survival. In this paper, a deep learning model is constructed for contrast-enhanced T1-weighted (T1CE) magnetic resonance images to develop an automatic processing scheme for accurate tumor segmentation. METHODS A novel Convolutional Neural Network (CNN) model is proposed for accurate meningioma segmentation in MR images. It extracts fused features in multi-scale receptive fields of the same feature map, based on the MR image characteristics of meningiomas. An attention mechanism is added to the model to optimize feature transmission. RESULTS AND CONCLUSIONS The results were evaluated on two internal testing sets and one external testing set, with mean Dice Similarity Coefficient (DSC) values of 0.886, 0.851, and 0.874, respectively. Multi-center testing sets validated the effectiveness and generalization of the method. The proposed model demonstrates state-of-the-art tumor segmentation performance.
Affiliation(s)
- Xin Ma
- The School of Engineering and Technology, Fudan University, Shanghai, 200433, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Yajing Zhao
- Department of Radiology, Huashan Hospital Affiliated to Fudan University, Shanghai, 200040, China
- Yiping Lu
- Department of Radiology, Huashan Hospital Affiliated to Fudan University, Shanghai, 200040, China
- Peng Li
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Xuanxuan Li
- Department of Radiology, Huashan Hospital Affiliated to Fudan University, Shanghai, 200040, China
- Nan Mei
- Department of Radiology, Huashan Hospital Affiliated to Fudan University, Shanghai, 200040, China
- Jiajun Wang
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Daoying Geng
- Department of Radiology, Huashan Hospital Affiliated to Fudan University, Shanghai, 200040, China
- Lingxiao Zhao
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Bo Yin
- Department of Radiology, Huashan Hospital Affiliated to Fudan University, Shanghai, 200040, China

28
Multi-instance learning based on spatial continuous category representation for case-level meningioma grading in MRI images. APPL INTELL 2022. [DOI: 10.1007/s10489-022-04114-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/03/2022]
29
Yang F, Pan X, Zhu K, Xiao Y, Yue X, Peng P, Zhang X, Huang J, Chen J, Yuan Y, Sun J. Accelerated 3D high-resolution T2-weighted breast MRI with deep learning constrained compressed sensing, comparison with conventional T2-weighted sequence on 3.0 T. Eur J Radiol 2022; 156:110562. [PMID: 36270194 DOI: 10.1016/j.ejrad.2022.110562] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2022] [Revised: 09/18/2022] [Accepted: 10/11/2022] [Indexed: 11/25/2022]
Abstract
PURPOSE To evaluate the feasibility of isotropic 3D high-resolution T2-weighted imaging (T2WI) MRI sequences and to compare the image quality of sequences reconstructed with artificial intelligence-compressed sensing (AI-CS), compressed sensing (CS), and conventional 2D T2WI. MATERIALS AND METHODS Fifty-two female patients (ages 26-80 years) with suspected breast cancer were enrolled. They underwent breast MRI examinations using three sequences: conventional T2WI, CS 3D T2WI, and AI-CS 3D T2WI. Image quality, signal-to-noise ratio (SNR), contrast-to-noise ratio, tumor volume, and maximal tumor diameter were compared using the Friedman test. Image quality was scored on a 5-point scale, with 1 indicating nonassessable quality and 5 indicating excellent quality. Tumor volume and maximal tumor diameter were compared across AI-CS 3D T2WI (slightly high signal), conventional T2WI, and dynamic contrast-enhanced (DCE) sequences. RESULTS All three T2WI sequences were successfully acquired in all patients. The CS and AI-CS 3D sequences were significantly better than conventional T2WI in terms of lesion conspicuity and morphology, structural details, overall image quality, diagnostic information for breast lesions, and breast tissue delineation (P < 0.001). The SNR of conventional T2WI was significantly higher than that of the 3D T2WI sequences, whereas the contrast-to-noise ratio was significantly higher for AI-CS 3D T2WI than for the conventional T2WI sequence. There was no significant difference in tumor volume between the DCE (8.08 ± 16.51) and AI-CS 3D T2WI (8.25 ± 16.29) sequences, and no significant differences in tumor diameter among the DCE, AI-CS 3D T2WI, and conventional T2WI sequences. CONCLUSION Isotropic-resolution 3D T2WI sequences can be acquired using AI-CS while maintaining image quality and diagnostic value, which may pave the way for clinical application of isotropic 3D high-resolution T2WI.
Affiliation(s)
- Fan Yang
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
- Xuelin Pan
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
- Ke Zhu
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
- Yitian Xiao
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
- Xun Yue
- Department of Radiology, North Sichuan Medical College, Nanchong, China
- Pengfei Peng
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
- Juan Huang
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
- Jie Chen
- Department of Breast Surgery, West China Hospital of Sichuan University, Chengdu, China
- Yuan Yuan
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
- Jiayu Sun
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, China

30
Lee J, Liu C, Kim J, Chen Z, Sun Y, Rogers JR, Chung WK, Weng C. Deep learning for rare disease: A scoping review. J Biomed Inform 2022; 135:104227. [DOI: 10.1016/j.jbi.2022.104227] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2022] [Revised: 08/22/2022] [Accepted: 10/07/2022] [Indexed: 10/31/2022]
31
Differentiation of Intracerebral Tumor Entities with Quantitative Contrast Attenuation and Iodine Mapping in Dual-Layer Computed Tomography. Diagnostics (Basel) 2022; 12:diagnostics12102494. [PMID: 36292183 PMCID: PMC9601196 DOI: 10.3390/diagnostics12102494] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2022] [Revised: 10/10/2022] [Accepted: 10/12/2022] [Indexed: 11/16/2022] Open
Abstract
Purpose: To investigate whether quantitative contrast enhancement and iodine mapping of common brain tumor (BT) entities can correctly differentiate between tumor etiologies in standardized stereotactic CT protocols. Material and Methods: A retrospective monocentric study of 139 consecutive standardized dual-layer dual-energy CT (dlDECT) scans obtained prior to stereotactic needle biopsy of untreated primary brain tumor lesions. Attenuation of contrast-enhancing BT was derived from polyenergetic images as well as spectral iodine density maps (IDM), and contrast-to-noise ratios (CNRs) were determined using ROI measurements in contrast-enhancing BT and healthy contralateral white matter. The measures were correlated with histopathology regarding tumor entity and isocitrate dehydrogenase (IDH) and MGMT mutation status. Results: The cohort included 52 female and 76 male patients, with a mean age of 59.4 (±17.1) years. Brain lymphomas showed the highest attenuation (IDM CNR 3.28 ± 1.23), significantly higher than glioblastomas (2.37 ± 1.55, p < 0.005) and metastases (1.95 ± 1.14, p < 0.02), while the differences between glioblastomas and metastases were not significant. These strongly enhancing lesions differed from oligodendrogliomas and astrocytomas (Grade II and III), which showed IDM CNRs in the range of 1.22-1.27 (±0.45-0.82). Conventional attenuation measurements in dlDECT data performed equally well or slightly better than iodine density measurements. Conclusion: Quantitative attenuation and iodine density measurements of contrast-enhancing brain tumors are feasible imaging biomarkers for discriminating cerebral tumor lesions, but not for identifying single tumor entities specifically. CNRs based on simple HU measurements performed equally well or slightly better than iodine quantification.
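The CNR values in this abstract follow the usual ROI-based pattern: the attenuation (or iodine-density) difference between tumor and contralateral normal white matter, normalized by noise. A hedged sketch of one common formulation (noise taken as the reference-ROI standard deviation; the paper's exact definition may differ, and the HU values below are toy numbers):

```python
import numpy as np

def cnr(roi_lesion: np.ndarray, roi_reference: np.ndarray) -> float:
    """Contrast-to-noise ratio: |mean difference| / reference-ROI noise (sample SD)."""
    noise = roi_reference.std(ddof=1)
    return abs(roi_lesion.mean() - roi_reference.mean()) / noise

# Toy HU samples for an enhancing tumor ROI vs. contralateral white matter.
tumor_hu = np.array([48.0, 52.0, 50.0, 54.0])
wm_hu = np.array([28.0, 32.0, 30.0, 34.0])
print(cnr(tumor_hu, wm_hu))
```

The same function applies unchanged to iodine-density maps by passing mg/mL values instead of HU, which is how the polyenergetic and spectral measurements can be compared on a common scale.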
32
Predicting Meningioma Resection Status: Use of Deep Learning. Acad Radiol 2022:S1076-6332(22)00518-9. [DOI: 10.1016/j.acra.2022.10.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2022] [Revised: 09/20/2022] [Accepted: 10/03/2022] [Indexed: 11/24/2022]
33
Boaro A, Kaczmarzyk JR, Kavouridis VK, Harary M, Mammi M, Dawood H, Shea A, Cho EY, Juvekar P, Noh T, Rana A, Ghosh S, Arnaout O. Deep neural networks allow expert-level brain meningioma segmentation and present potential for improvement of clinical practice. Sci Rep 2022; 12:15462. [PMID: 36104424 PMCID: PMC9474556 DOI: 10.1038/s41598-022-19356-5] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2021] [Accepted: 08/29/2022] [Indexed: 11/20/2022] Open
Abstract
Accurate brain meningioma segmentation and volumetric assessment are critical for serial patient follow-up, surgical planning, and monitoring response to treatment. The current gold standard, manual labeling, is a time-consuming process subject to inter-user variability. Fully automated algorithms for meningioma segmentation have the potential to bring volumetric analysis into clinical and research workflows by increasing accuracy and efficiency, reducing inter-user variability, and saving time. Previous research has focused solely on segmentation tasks without assessing the impact and usability of deep learning solutions in clinical practice. Herein, we demonstrate a three-dimensional convolutional neural network (3D-CNN) that performs expert-level, automated meningioma segmentation and volume estimation on MRI scans. The 3D-CNN was initially trained to segment entire brain volumes using a dataset of 10,099 healthy brain MRIs. Using transfer learning, the network was then specifically trained on meningioma segmentation using 806 expert-labeled MRIs. The final model achieved a median performance of 88.2%, within the range of current inter-expert variability (82.6-91.6%). We demonstrate in a simulated clinical scenario that a deep learning approach to meningioma segmentation is feasible, highly accurate, and has the potential to improve current clinical practice.
34
Liu X, Wang X, Zhang Y, Sun Z, Zhang X, Wang X. Preoperative prediction of pelvic lymph nodes metastasis in prostate cancer using an ADC-based radiomics model: comparison with clinical nomograms and PI-RADS assessment. Abdom Radiol (NY) 2022; 47:3327-3337. [PMID: 35763053 DOI: 10.1007/s00261-022-03583-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2022] [Revised: 06/07/2022] [Accepted: 06/07/2022] [Indexed: 01/18/2023]
Abstract
PURPOSE To develop and test radiomics models based on manually corrected or automatically generated masks on ADC maps for pelvic lymph node metastasis (PLNM) prediction in patients with prostate cancer (PCa). METHODS A primary cohort of 474 patients with PCa who underwent prostate mpMRI between January 2017 and January 2020 was retrospectively enrolled for PLNM prediction. They were then randomly split into training/validation (n = 332) and test (n = 142) groups for model development and internal testing. Four radiomics models were developed using four masks (manually corrected/automatic prostate gland and PCa lesion segmentation) based on the ADC maps of the primary cohort. Another cohort of 128 patients who underwent radical prostatectomy (RP) with extended pelvic lymph node dissection (ePLND) for PCa between February 2020 and October 2021 was used as the testing cohort. The performance of the models was evaluated in terms of discrimination and clinical usefulness using the area under the curve (AUC) and decision curve analysis (DCA). The optimal radiomics model was further compared with the Memorial Sloan Kettering Cancer Center (MSKCC) and Briganti 2017 nomograms and PI-RADS assessment. RESULTS Seventeen patients (13.28%) with PLNM were included in the testing cohort. The radiomics model based on the mask of the automatically segmented prostate obtained the highest AUC among the four radiomics models (0.73 vs. 0.63 vs. 0.70 vs. 0.56). The Briganti 2017 and MSKCC nomograms and PI-RADS assessment yielded AUCs of 0.69, 0.71, and 0.70, respectively, with no significant differences from the optimal radiomics model (P = 0.605-0.955). CONCLUSION The radiomics model based on the mask of the automatically segmented prostate offers a non-invasive method to predict PLNM in patients with PCa, with accuracy comparable to the current MSKCC and Briganti nomograms.
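The AUC comparisons above can be made concrete with a small sketch: AUC equals the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one (the Mann-Whitney formulation). The scores below are made-up toy values, not data from the study:

```python
import numpy as np

def auc_from_scores(pos_scores, neg_scores) -> float:
    """AUC as P(score_pos > score_neg), counting ties as half (Mann-Whitney U / (n_pos*n_neg))."""
    pos = np.asarray(pos_scores, dtype=float)
    neg = np.asarray(neg_scores, dtype=float)
    diff = pos[:, None] - neg[None, :]  # all positive-vs-negative score pairs
    wins = (diff > 0).sum() + 0.5 * (diff == 0).sum()
    return wins / (pos.size * neg.size)

# Hypothetical model scores for PLNM-positive and PLNM-negative patients
print(auc_from_scores([0.9, 0.8, 0.6], [0.7, 0.4, 0.3, 0.2]))  # 11/12 ≈ 0.917
```

This brute-force pairwise version is O(n_pos * n_neg) and is meant only to show what the reported AUC values measure; library implementations use a rank-based computation instead.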
Affiliation(s)
- Xiang Liu
- Department of Radiology, Peking University First Hospital, No. 8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Xiangpeng Wang
- Beijing Smart Tree Medical Technology Co. Ltd., No. 24, Huangsi Street, Xicheng District, Beijing, 100011, China
- Yaofeng Zhang
- Beijing Smart Tree Medical Technology Co. Ltd., No. 24, Huangsi Street, Xicheng District, Beijing, 100011, China
- Zhaonan Sun
- Department of Radiology, Peking University First Hospital, No. 8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Xiaodong Zhang
- Department of Radiology, Peking University First Hospital, No. 8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Xiaoying Wang
- Department of Radiology, Peking University First Hospital, No. 8 Xishiku Street, Xicheng District, Beijing, 100034, China.
35.
Musigmann M, Akkurt BH, Krähling H, Nacul NG, Remonda L, Sartoretti T, Henssen D, Brokinkel B, Stummer W, Heindel W, Mannil M. Testing the applicability and performance of Auto ML for potential applications in diagnostic neuroradiology. Sci Rep 2022; 12:13648. [PMID: 35953588 PMCID: PMC9366823 DOI: 10.1038/s41598-022-18028-8]
Abstract
To investigate the applicability and performance of automated machine learning (AutoML) for potential applications in diagnostic neuroradiology. In the medical sector, there is a rapidly growing demand for machine learning methods, but only a limited number of corresponding experts. The comparatively simple handling of AutoML should enable even non-experts to develop adequate machine learning models with manageable effort. We aim to investigate the feasibility as well as the advantages and disadvantages of developing AutoML models compared to developing conventional machine learning models, and we discuss the results in relation to a concrete medical prediction application. In this retrospective IRB-approved study, a cohort of 107 patients who underwent gross total meningioma resection and a second cohort of 31 patients who underwent subtotal resection were included. Image segmentation of the contrast-enhancing parts of the tumor was performed semi-automatically using the open-source software platform 3D Slicer. A total of 107 radiomic features were extracted from hand-delineated regions of interest on the pre-treatment MRI images of each patient. Within the AutoML approach, 20 different machine learning algorithms were trained and tested simultaneously. For comparison, a neural network and different conventional machine learning algorithms were trained and tested. With respect to the exemplary medical prediction application used in this study to evaluate the performance of AutoML, namely the pre-treatment prediction of the achievable resection status of meningioma, AutoML achieved remarkable performance, nearly equivalent to that of a feed-forward neural network with a single hidden layer. However, in the clinical case study considered here, logistic regression outperformed the AutoML algorithm.
Using independent test data, we observed the following classification results (AutoML/neural network/logistic regression): mean area under the curve = 0.849/0.879/0.900, mean accuracy = 0.821/0.839/0.881, mean kappa = 0.465/0.491/0.644, mean sensitivity = 0.578/0.577/0.692 and mean specificity = 0.891/0.914/0.936. The results obtained with AutoML are therefore very promising. However, the AutoML models in our study did not yet show the corresponding performance of the best models obtained with conventional machine learning methods. While AutoML may facilitate and simplify the task of training and testing machine learning algorithms as applied in the field of neuroradiology and medical imaging, a considerable amount of expert knowledge may still be needed to develop models with the highest possible discriminatory power for diagnostic neuroradiology.
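The accuracy, kappa, sensitivity and specificity figures reported here all derive from a 2×2 confusion matrix; a minimal sketch of those formulas (the counts below are invented for illustration, not the study's results):

```python
def binary_metrics(tp: int, fp: int, fn: int, tn: int):
    """Accuracy, sensitivity, specificity and Cohen's kappa from 2x2 confusion counts."""
    n = tp + fp + fn + tn
    accuracy = (tp + tn) / n
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (accuracy - p_chance) / (1 - p_chance)
    return accuracy, sensitivity, specificity, kappa

acc, sens, spec, kappa = binary_metrics(tp=9, fp=3, fn=4, tn=26)
print(f"acc={acc:.3f} sens={sens:.3f} spec={spec:.3f} kappa={kappa:.3f}")
```

Note how kappa can be markedly lower than accuracy when classes are imbalanced, which matches the pattern in the reported numbers (accuracies above 0.8 alongside kappas around 0.5-0.6).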
Affiliation(s)
- Manfred Musigmann
- University Clinic for Radiology, Westfälische Wilhelms-University Muenster and University Hospital Münster, Albert-Schweitzer-Campus 1, 48149, Muenster, Germany
- Burak Han Akkurt
- University Clinic for Radiology, Westfälische Wilhelms-University Muenster and University Hospital Münster, Albert-Schweitzer-Campus 1, 48149, Muenster, Germany
- Hermann Krähling
- University Clinic for Radiology, Westfälische Wilhelms-University Muenster and University Hospital Münster, Albert-Schweitzer-Campus 1, 48149, Muenster, Germany
- Nabila Gala Nacul
- University Clinic for Radiology, Westfälische Wilhelms-University Muenster and University Hospital Münster, Albert-Schweitzer-Campus 1, 48149, Muenster, Germany
- Luca Remonda
- Institute of Neuroradiology, Kantonsspital Aarau, Aarau, Switzerland
- Faculty of Medicine, University of Bern, Bern, Switzerland
- Dylan Henssen
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
- Benjamin Brokinkel
- Department of Neurosurgery, Westfälische Wilhelms-University Muenster and University Hospital Muenster, Albert-Schweitzer-Campus 1, 48149, Muenster, Germany
- Walter Stummer
- Department of Neurosurgery, Westfälische Wilhelms-University Muenster and University Hospital Muenster, Albert-Schweitzer-Campus 1, 48149, Muenster, Germany
- Walter Heindel
- University Clinic for Radiology, Westfälische Wilhelms-University Muenster and University Hospital Münster, Albert-Schweitzer-Campus 1, 48149, Muenster, Germany
- Manoj Mannil
- University Clinic for Radiology, Westfälische Wilhelms-University Muenster and University Hospital Münster, Albert-Schweitzer-Campus 1, 48149, Muenster, Germany.
36.
Haq EU, Jianjun H, Huarong X, Li K, Weng L. A Hybrid Approach Based on Deep CNN and Machine Learning Classifiers for the Tumor Segmentation and Classification in Brain MRI. Comput Math Methods Med 2022; 2022:6446680. [PMID: 36035291 PMCID: PMC9400402 DOI: 10.1155/2022/6446680]
Abstract
Conventional medical imaging and machine learning techniques are not sufficient to segment brain tumors in MRI reliably, as the proper identification and delineation of tumor borders is one of the most important criteria of tumor extraction. The existing approaches are time-consuming, labor-intensive, and susceptible to human error. These drawbacks highlight the importance of developing a fully automated deep learning-based approach for the segmentation and classification of brain tumors. Expedient and prompt segmentation and classification of a brain tumor are critical for accurate clinical diagnosis and adequate treatment. As a result, deep learning-based brain tumor segmentation and classification algorithms are extensively employed; among them, CNN models have shown excellent segmentation and classification performance. In this work, an integrated and hybrid approach based on a deep convolutional neural network and machine learning classifiers is proposed for the accurate segmentation and classification of brain MRI tumors. A CNN is proposed in the first stage to learn the feature map from the image space of the brain MRI into the tumor marker region. In the second step, a faster region-based CNN is developed for the localization of the tumor region, followed by a region proposal network (RPN). In the last step, a deep convolutional neural network and machine learning classifiers are incorporated in series to further refine the segmentation and classification and obtain more accurate results. The proposed model's performance is assessed using evaluation metrics extensively employed in medical image processing.
The experimental results validate that the proposed deep CNN and SVM-RBF classifier achieved an accuracy of 98.3% and a Dice similarity coefficient (DSC) of 97.8% on the task of classifying brain tumors as glioma, meningioma, or pituitary using brain dataset-1, and an accuracy of 98.0% with a DSC of 97.1% on the same task using the Figshare dataset. The segmentation and classification results demonstrate that the proposed model outperforms state-of-the-art techniques by a significant margin.
Affiliation(s)
- Ejaz Ul Haq
- Guangdong Key Laboratory of Intelligent Information Processing, School of Electronics and Information Engineering, Shenzhen University, China
- School of Computer and Information Engineering, Xiamen University of Technology, China
- Huang Jianjun
- Guangdong Key Laboratory of Intelligent Information Processing, School of Electronics and Information Engineering, Shenzhen University, China
- Xu Huarong
- School of Computer and Information Engineering, Xiamen University of Technology, China
- Kang Li
- Guangdong Key Laboratory of Intelligent Information Processing, School of Electronics and Information Engineering, Shenzhen University, China
- Lifen Weng
- School of Computer and Information Engineering, Xiamen University of Technology, China
37.
Bouget D, Pedersen A, Jakola AS, Kavouridis V, Emblem KE, Eijgelaar RS, Kommers I, Ardon H, Barkhof F, Bello L, Berger MS, Conti Nibali M, Furtner J, Hervey-Jumper S, Idema AJS, Kiesel B, Kloet A, Mandonnet E, Müller DMJ, Robe PA, Rossi M, Sciortino T, Van den Brink WA, Wagemakers M, Widhalm G, Witte MG, Zwinderman AH, De Witt Hamer PC, Solheim O, Reinertsen I. Preoperative Brain Tumor Imaging: Models and Software for Segmentation and Standardized Reporting. Front Neurol 2022; 13:932219. [PMID: 35968292 PMCID: PMC9364874 DOI: 10.3389/fneur.2022.932219]
Abstract
For patients with a brain tumor, prognosis estimation and treatment decisions are made by a multidisciplinary team based on a set of preoperative MR scans. Currently, the lack of standardized and automatic methods for tumor detection and generation of clinical reports, incorporating a wide range of tumor characteristics, represents a major hurdle. In this study, we investigate the most frequently occurring brain tumor types: glioblastomas, lower grade gliomas, meningiomas, and metastases, through four cohorts of up to 4,000 patients. Tumor segmentation models were trained using the AGU-Net architecture with different preprocessing steps and protocols. Segmentation performance was assessed in depth using a wide range of voxel-wise and patient-wise metrics covering volume, distance, and probabilistic aspects. Finally, two software solutions have been developed, enabling easy use of the trained models and standardized generation of clinical reports: Raidionics and Raidionics-Slicer. Segmentation performance was quite homogeneous across the four brain tumor types, with an average true positive Dice ranging between 80 and 90%, patient-wise recall between 88 and 98%, and patient-wise precision around 95%. In conjunction with Dice, the most relevant other metrics identified were the relative absolute volume difference, the variation of information, and the Hausdorff, Mahalanobis, and object average symmetric surface distances. With our Raidionics software, running on a desktop computer with CPU support, tumor segmentation can be performed in 16–54 s depending on the dimensions of the MRI volume. For the generation of a standardized clinical report, including the tumor segmentation and feature computation, 5–15 min are necessary. All trained models have been made open-access, together with the source code for both software solutions and for the validation metrics computation.
In the future, a method to convert results from a set of metrics into a final single score would be highly desirable for easier ranking across trained models. In addition, an automatic classification of the brain tumor type would be necessary to replace manual user input. Finally, the inclusion of post-operative segmentation in both software solutions will be key for generating complete post-operative standardized clinical reports.
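Two of the distance metrics named above, the Hausdorff distance and the average symmetric surface distance, can be sketched for small point sets as follows (a brute-force illustration on toy 2-D contours, not the validation code released with the paper):

```python
import numpy as np

def hausdorff_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between point sets a (N,d) and b (M,d):
    the worst-case distance from any point of one set to the other set."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def average_symmetric_surface_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Mean of each point's distance to the nearest point of the other set."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Toy 2-D "surfaces": one shared point, one displaced by 1.0
a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 1.0], [1.0, 0.0]])
print(hausdorff_distance(a, b))                  # 1.0
print(average_symmetric_surface_distance(a, b))  # 0.5
```

On real segmentation masks these metrics are computed over extracted surface voxels, and efficient implementations use spatial indexing rather than the O(N*M) pairwise matrix shown here.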
Affiliation(s)
- David Bouget
- Department of Health Research, SINTEF Digital, Trondheim, Norway
- *Correspondence: David Bouget
- André Pedersen
- Department of Health Research, SINTEF Digital, Trondheim, Norway
- Department of Clinical and Molecular Medicine, Norwegian University of Science and Technology, Trondheim, Norway
- Clinic of Surgery, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- Asgeir S. Jakola
- Department of Neurosurgery, Sahlgrenska University Hospital, Gothenburg, Sweden
- Department of Clinical Neuroscience, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Vasileios Kavouridis
- Department of Neurosurgery, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- Kyrre E. Emblem
- Division of Radiology and Nuclear Medicine, Department of Physics and Computational Radiology, Oslo University Hospital, Oslo, Norway
- Roelant S. Eijgelaar
- Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, Netherlands
- Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, Amsterdam, Netherlands
- Ivar Kommers
- Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, Netherlands
- Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, Amsterdam, Netherlands
- Hilko Ardon
- Department of Neurosurgery, Twee Steden Hospital, Tilburg, Netherlands
- Frederik Barkhof
- Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, Netherlands
- Institutes of Neurology and Healthcare Engineering, University College London, London, United Kingdom
- Lorenzo Bello
- Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Humanitas Research Hospital, Università degli Studi di Milano, Milan, Italy
- Mitchel S. Berger
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, United States
- Marco Conti Nibali
- Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Humanitas Research Hospital, Università degli Studi di Milano, Milan, Italy
- Julia Furtner
- Department of Biomedical Imaging and Image-Guided Therapy, Medical University Vienna, Wien, Austria
- Shawn Hervey-Jumper
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, United States
- Barbara Kiesel
- Department of Neurosurgery, Medical University Vienna, Wien, Austria
- Alfred Kloet
- Department of Neurosurgery, Haaglanden Medical Center, The Hague, Netherlands
- Domenique M. J. Müller
- Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, Netherlands
- Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, Amsterdam, Netherlands
- Pierre A. Robe
- Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, Netherlands
- Marco Rossi
- Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Humanitas Research Hospital, Università degli Studi di Milano, Milan, Italy
- Tommaso Sciortino
- Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Humanitas Research Hospital, Università degli Studi di Milano, Milan, Italy
- Michiel Wagemakers
- Department of Neurosurgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Georg Widhalm
- Department of Neurosurgery, Medical University Vienna, Wien, Austria
- Marnix G. Witte
- Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam, Netherlands
- Aeilko H. Zwinderman
- Department of Clinical Epidemiology and Biostatistics, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, Netherlands
- Philip C. De Witt Hamer
- Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, Netherlands
- Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, Amsterdam, Netherlands
- Ole Solheim
- Department of Neurosurgery, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology, Trondheim, Norway
- Ingerid Reinertsen
- Department of Health Research, SINTEF Digital, Trondheim, Norway
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
38.
Yamazawa E, Takahashi S, Shin M, Tanaka S, Takahashi W, Nakamoto T, Suzuki Y, Takami H, Saito N. MRI-Based Radiomics Differentiates Skull Base Chordoma and Chondrosarcoma: A Preliminary Study. Cancers (Basel) 2022; 14:cancers14133264. [PMID: 35805036 PMCID: PMC9265125 DOI: 10.3390/cancers14133264]
Abstract
Simple Summary: In this study, we created a novel MRI-based machine learning model to differentiate skull base chordoma and chondrosarcoma with multiparametric signatures. While these tumors share common radiographic characteristics, clinical behavior is distinct. Therefore, distinguishing these tumors before initial surgical intervention would be useful, potentially impacting the surgical strategy. Although there are some limitations, such as the risk of overfitting and the lack of an extramural cohort for truly independent final validation, our machine learning model distinguishing chordoma from chondrosarcoma yielded superior diagnostic accuracy to that achieved by 20 board-certified neurosurgeons.

Abstract: Chordoma and chondrosarcoma share common radiographic characteristics yet are distinct clinically. A radiomic machine learning model differentiating these tumors preoperatively would help plan surgery. MR images were acquired from 57 consecutive patients with chordoma (N = 32) or chondrosarcoma (N = 25) treated at the University of Tokyo Hospital between September 2012 and February 2020. Preoperative T1-weighted images with gadolinium enhancement (GdT1) and T2-weighted images were analyzed. Datasets from the first 47 cases were used for model creation, and those from the subsequent 10 cases were used for validation. Feature extraction was performed semi-automatically, and 2438 features were obtained per image sequence. Machine learning models with logistic regression and a support vector machine were created. The model with the highest accuracy incorporated seven features extracted from GdT1 in the logistic regression. The average area under the curve was 0.93 ± 0.06, and accuracy was 0.90 (9/10) in the validation dataset. The same validation dataset was assessed by 20 board-certified neurosurgeons.
Diagnostic accuracy ranged from 0.50 to 0.80 (median 0.60, 95% confidence interval 0.60 ± 0.06), which was inferior to that of the machine learning model (p = 0.03), although there are some limitations, such as the risk of overfitting and the lack of an extramural cohort for truly independent final validation. In summary, we created a novel MRI-based machine learning model to differentiate skull base chordoma and chondrosarcoma from multiparametric signatures.
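The modeling step described, fitting a logistic regression on a training split of cases and checking it on held-out cases, can be sketched in a few lines. Everything below is synthetic toy data, not the study's radiomic features, and plain gradient descent stands in for whatever fitting routine the authors used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the setup described: 7 "radiomic features" per case,
# binary label (e.g. chordoma vs. chondrosarcoma), 47 training / 10 validation cases.
X_train = rng.normal(size=(47, 7))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain logistic regression fitted by batch gradient descent on the log-loss
w, b = np.zeros(X_train.shape[1]), 0.0
for _ in range(2000):
    p = sigmoid(X_train @ w + b)
    w -= 0.5 * (X_train.T @ (p - y_train)) / len(y_train)
    b -= 0.5 * (p - y_train).mean()

# Held-out validation cases, labeled by the same synthetic rule
X_val = rng.normal(size=(10, 7))
y_val = (X_val[:, 0] + 0.5 * X_val[:, 1] > 0).astype(float)
acc = ((sigmoid(X_val @ w + b) > 0.5) == y_val).mean()
print(f"validation accuracy on toy data: {acc:.2f}")
```

In the actual study, the seven features were selected from 2438 candidates before fitting, which is precisely where the overfitting risk acknowledged above arises.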
Affiliation(s)
- Erika Yamazawa
- Department of Neurosurgery, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan; (E.Y.); (H.T.); (N.S.)
- Satoshi Takahashi
- RIKEN Center for Advanced Intelligence Project, 2-1 Hirosawa, Wako 351-0198, Japan
- Division of Medical AI Research and Development, National Cancer Center, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Masahiro Shin
- Department of Neurosurgery, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan; (E.Y.); (H.T.); (N.S.)
- Department of Neurosurgery, University of Teikyo Hospital, 2-11-1 Kaga, Itabashi-Ku, Tokyo 173-8606, Japan
- Correspondence: (M.S.); (S.T.); Tel.: +81-3-3964-1211 (M.S.); +81-3-3815-5411 (S.T.)
- Shota Tanaka
- Department of Neurosurgery, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan; (E.Y.); (H.T.); (N.S.)
- Correspondence: (M.S.); (S.T.); Tel.: +81-3-3964-1211 (M.S.); +81-3-3815-5411 (S.T.)
- Wataru Takahashi
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan; (W.T.); (T.N.); (Y.S.)
- Takahiro Nakamoto
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan; (W.T.); (T.N.); (Y.S.)
- Department of Biological Science and Engineering, Faculty of Health Sciences, Hokkaido University Kita 12, Nishi 5, Kita-ku, Sapporo-shi 060-0808, Japan
- Yuichi Suzuki
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan; (W.T.); (T.N.); (Y.S.)
- Hirokazu Takami
- Department of Neurosurgery, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan; (E.Y.); (H.T.); (N.S.)
- Nobuhito Saito
- Department of Neurosurgery, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan; (E.Y.); (H.T.); (N.S.)
39.
Machine Learning for the Detection and Segmentation of Benign Tumors of the Central Nervous System: A Systematic Review. Cancers (Basel) 2022; 14:cancers14112676. [PMID: 35681655 PMCID: PMC9179850 DOI: 10.3390/cancers14112676]
Abstract
Simple Summary: Machine learning in radiology of the central nervous system has seen many interesting publications in the past few years. Since the focus has largely been on malignant tumors such as brain metastases and high-grade gliomas, we conducted a systematic review on benign tumors to summarize what has been published and where there might be gaps in the research. We found several studies that report good results, but the descriptions of methodologies could be improved to enable better comparisons and assessment of biases.

Abstract: Objectives: To summarize the available literature on using machine learning (ML) for the detection and segmentation of benign tumors of the central nervous system (CNS) and to assess the adherence of published ML/diagnostic accuracy studies to best practice. Methods: The MEDLINE database was searched for the use of ML in patients with any benign tumor of the CNS, and the records were screened according to PRISMA guidelines. Results: Eleven retrospective studies focusing on meningioma (n = 4), vestibular schwannoma (n = 4), pituitary adenoma (n = 2) and spinal schwannoma (n = 1) were included. The majority of studies attempted segmentation. Links to repositories containing code were provided in two manuscripts, and no manuscripts shared imaging data. Only one study used an external test set, which raises the question as to whether some of the good performances that have been reported were caused by overfitting and may not generalize to data from other institutions. Conclusions: Using ML for detecting and segmenting benign brain tumors is still in its infancy. Stronger adherence to ML best practices could facilitate easier comparisons between studies and contribute to the development of models that are more likely to one day be used in clinical practice.
40.
Meningioma Radiomics: At the Nexus of Imaging, Pathology and Biomolecular Characterization. Cancers (Basel) 2022; 14:cancers14112605. [PMID: 35681585 PMCID: PMC9179263 DOI: 10.3390/cancers14112605]
Abstract
Simple Summary: Meningiomas are typically benign, common extra-axial tumors of the central nervous system. Routine clinical assessment by radiologists presents some limitations regarding long-term patient outcome prediction and risk stratification. Given the exponential growth of interest in radiomics and artificial intelligence in medical imaging, numerous studies have evaluated the potential of these tools in the setting of meningioma imaging. These were aimed at the development of reliable and reproducible models based on quantitative data. Although several limitations have yet to be overcome for their routine use in clinical practice, their innovative potential is evident. In this review, we present a wide-ranging overview of radiomics and artificial intelligence applications in meningioma imaging.

Abstract: Meningiomas are the most common extra-axial tumors of the central nervous system (CNS). Even though recurrence is uncommon after surgery and most meningiomas are benign, an aggressive behavior may still be exhibited in some cases. Although the diagnosis can be made by radiologists, typically with magnetic resonance imaging, qualitative analysis has some limitations in regard to outcome prediction and risk stratification. The acquisition of this information could help the referring clinician in the decision-making process and selection of the appropriate treatment. Following the increased attention and potential of radiomics and artificial intelligence in the healthcare domain, including oncological imaging, researchers have investigated their use over the years to overcome the current limitations of imaging. The aim of these new tools is the replacement of subjective and, therefore, potentially variable medical image analysis by more objective quantitative data, using computational algorithms. Although radiomics has not yet fully entered clinical practice, its potential for the detection, diagnostic, and prognostic characterization of tumors is evident.
In this review, we present a wide-ranging overview of radiomics and artificial intelligence applications in meningioma imaging.
41.
Brunasso L, Ferini G, Bonosi L, Costanzo R, Musso S, Benigno UE, Gerardi RM, Giammalva GR, Paolini F, Umana GE, Graziano F, Scalia G, Sturiale CL, Di Bonaventura R, Iacopino DG, Maugeri R. A Spotlight on the Role of Radiomics and Machine-Learning Applications in the Management of Intracranial Meningiomas: A New Perspective in Neuro-Oncology: A Review. Life (Basel) 2022; 12:life12040586. [PMID: 35455077 PMCID: PMC9026541 DOI: 10.3390/life12040586]
Abstract
Background: In recent decades, the application of machine learning technologies to medical imaging has opened up new perspectives in neuro-oncology, in the so-called radiomics field. Radiomics offers new insight into glioma, aiding in clinical decision-making and the evaluation of patients' prognosis. Although meningiomas represent the most common primary CNS tumor and the majority are benign and slow-growing, a minority show a more aggressive behavior, with an increased proliferation rate and a tendency to recur. Therefore, their treatment may represent a challenge. Methods: According to PRISMA guidelines, a systematic literature review was performed. We included selected articles (meta-analyses, reviews, retrospective studies, and case–control studies) concerning the application of radiomics methods in the preoperative diagnostic and prognostic algorithm and in planning for intracranial meningiomas. We also analyzed the contribution of radiomics in differentiating meningiomas from other CNS tumors with similar radiological features. Results: In the first research stage, 273 papers were identified. After careful screening according to inclusion/exclusion criteria, 39 articles were included in this systematic review. Conclusions: Several preoperative features have been identified that improve preoperative intracranial meningioma assessment and guide decision-making processes. The development of valid and reliable non-invasive diagnostic and prognostic modalities could have a significant clinical impact on meningioma treatment.
Affiliation(s)
- Lara Brunasso
- Neurosurgical Clinic AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy; (L.B.); (R.C.); (S.M.); (U.E.B.); (R.M.G.); (G.R.G.); (F.P.); (D.G.I.); (R.M.)
- Correspondence:
| | - Gianluca Ferini
- Department of Radiation Oncology, REM Radioterapia SRL, 95125 Catania, Italy;
| | - Lapo Bonosi
- Neurosurgical Clinic AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy; (L.B.); (R.C.); (S.M.); (U.E.B.); (R.M.G.); (G.R.G.); (F.P.); (D.G.I.); (R.M.)
| | - Roberta Costanzo
- Neurosurgical Clinic AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy; (L.B.); (R.C.); (S.M.); (U.E.B.); (R.M.G.); (G.R.G.); (F.P.); (D.G.I.); (R.M.)
| | - Sofia Musso
- Neurosurgical Clinic AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy; (L.B.); (R.C.); (S.M.); (U.E.B.); (R.M.G.); (G.R.G.); (F.P.); (D.G.I.); (R.M.)
| | - Umberto E. Benigno
- Neurosurgical Clinic AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy; (L.B.); (R.C.); (S.M.); (U.E.B.); (R.M.G.); (G.R.G.); (F.P.); (D.G.I.); (R.M.)
| | - Rosa M. Gerardi
- Neurosurgical Clinic AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy; (L.B.); (R.C.); (S.M.); (U.E.B.); (R.M.G.); (G.R.G.); (F.P.); (D.G.I.); (R.M.)
| | - Giuseppe R. Giammalva
- Neurosurgical Clinic AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy; (L.B.); (R.C.); (S.M.); (U.E.B.); (R.M.G.); (G.R.G.); (F.P.); (D.G.I.); (R.M.)
| | - Federica Paolini
- Neurosurgical Clinic AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy; (L.B.); (R.C.); (S.M.); (U.E.B.); (R.M.G.); (G.R.G.); (F.P.); (D.G.I.); (R.M.)
| | - Giuseppe E. Umana
- Gamma Knife Center, Trauma Center, Department of Neurosurgery, Cannizzaro Hospital, 95100 Catania, Italy;
| | - Francesca Graziano
- Unit of Neurosurgery, Garibaldi Hospital, 95124 Catania, Italy; (F.G.); (G.S.)
| | - Gianluca Scalia
- Unit of Neurosurgery, Garibaldi Hospital, 95124 Catania, Italy; (F.G.); (G.S.)
| | - Carmelo L. Sturiale
- Division of Neurosurgery, Fondazione Policlinico Universitario A. Gemelli IRCCS, Università Cattolica del Sacro Cuore, 00100 Rome, Italy; (C.L.S.); (R.D.B.)
| | - Rina Di Bonaventura
- Division of Neurosurgery, Fondazione Policlinico Universitario A. Gemelli IRCCS, Università Cattolica del Sacro Cuore, 00100 Rome, Italy; (C.L.S.); (R.D.B.)
| | - Domenico G. Iacopino
- Neurosurgical Clinic AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy; (L.B.); (R.C.); (S.M.); (U.E.B.); (R.M.G.); (G.R.G.); (F.P.); (D.G.I.); (R.M.)
| | - Rosario Maugeri
- Neurosurgical Clinic AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy; (L.B.); (R.C.); (S.M.); (U.E.B.); (R.M.G.); (G.R.G.); (F.P.); (D.G.I.); (R.M.)
| |
|
42
|
Chen H, Li S, Zhang Y, Liu L, Lv X, Yi Y, Ruan G, Ke C, Feng Y. Deep learning-based automatic segmentation of meningioma from multiparametric MRI for preoperative meningioma differentiation using radiomic features: a multicentre study. Eur Radiol 2022; 32:7248-7259. [PMID: 35420299 DOI: 10.1007/s00330-022-08749-9] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2021] [Revised: 02/18/2022] [Accepted: 03/14/2022] [Indexed: 11/29/2022]
Abstract
OBJECTIVES To develop and evaluate a deep learning-based automatic meningioma segmentation method for preoperative meningioma differentiation using radiomic features. METHODS A retrospective multicentre inclusion of MR examinations (T1/T2-weighted and contrast-enhanced T1-weighted imaging) was conducted. Data from centre 1 were allocated to training (n = 307, age = 50.94 ± 11.51) and internal testing (n = 238, age = 50.70 ± 12.72) cohorts, and data from centre 2 to an external testing cohort (n = 64, age = 48.45 ± 13.59). A modified attention U-Net was trained for meningioma segmentation. Segmentation accuracy was evaluated by five quantitative metrics. The agreement between radiomic features from manual and automatic segmentations was assessed using the intraclass correlation coefficient (ICC). After univariate and minimum-redundancy-maximum-relevance feature selection, L1-regularized logistic regression models for differentiating between low-grade (I) and high-grade (II and III) meningiomas were separately constructed using manual and automatic segmentations; their performances were evaluated using ROC analysis. RESULTS Dice coefficients of meningioma segmentation for the internal testing cohort were 0.94 ± 0.04 and 0.91 ± 0.05 for tumour volumes in contrast-enhanced T1-weighted and T2-weighted images, respectively; those for the external testing cohort were 0.90 ± 0.07 and 0.88 ± 0.07. Features extracted using manual and automatic segmentations agreed well for both the internal (ICC = 0.94, interquartile range: 0.88-0.97) and external (ICC = 0.90, interquartile range: 0.78-0.96) testing cohorts. The AUC of the radiomic model with automatic segmentation was comparable with that of the model with manual segmentation for both the internal (0.95 vs. 0.93, p = 0.176) and external (0.88 vs. 0.91, p = 0.419) testing cohorts.
CONCLUSIONS The developed deep learning-based segmentation method enables automatic and accurate extraction of meningioma from multiparametric MR images and can help deploy radiomics for preoperative meningioma differentiation in clinical practice. KEY POINTS • A deep learning-based method was developed for automatic segmentation of meningioma from multiparametric MR images. • The automatic segmentation method enabled accurate extraction of meningiomas and yielded radiomic features that were highly consistent with those that were obtained using manual segmentation. • High-grade meningiomas were preoperatively differentiated from low-grade meningiomas using a radiomic model constructed on features from automatic segmentation.
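Chen et al. above, like several other entries in this list, report segmentation accuracy as a Dice similarity coefficient between an automatic mask and a manual reference mask. As a minimal illustrative sketch (not the authors' code), Dice for binary masks can be computed as:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks.

    DSC = 2|A ∩ B| / (|A| + |B|); defined here as 1.0 when both masks
    are empty, so a correct "no tumour" prediction scores perfectly.
    """
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    total = pred.sum() + ref.sum()
    if total == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, ref).sum() / total

# Toy 2D "tumour" masks: the prediction overlaps 3 of 4 reference voxels.
ref = np.array([[1, 1], [1, 1]])
pred = np.array([[1, 1], [1, 0]])
print(dice_coefficient(pred, ref))  # 2*3 / (3+4) ≈ 0.857
```

The same formula applies voxel-wise to the 3D tumour volumes evaluated in the study.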
Affiliation(s)
- Haolin Chen
- School of Biomedical Engineering, Southern Medical University, 1023 Shatainan Road, Guangzhou, 510515, China.,Guangdong Provincial Key Laboratory of Medical Image Processing & Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China.,Guangdong-Hong Kong-Macao Greater Bay Area Centre for Brain Science and Brain-Inspired Intelligence & Key Laboratory of Mental Health of the Ministry of Education, Guangzhou, China
| | - Shuqi Li
- Department of Radiology, Sun Yat-Sen University Cancer Centre, Guangzhou, China.,State Key Laboratory of Oncology in South China, Sun Yat-Sen University Cancer Centre, Guangzhou, China.,Collaborative Innovation Centre for Cancer Medicine, Sun Yat-Sen University Cancer Centre, Guangzhou, China
| | - Youming Zhang
- Department of Radiology, Xiangya Hospital, Central South University, Changsha, China
| | - Lizhi Liu
- Department of Radiology, Sun Yat-Sen University Cancer Centre, Guangzhou, China.,State Key Laboratory of Oncology in South China, Sun Yat-Sen University Cancer Centre, Guangzhou, China.,Collaborative Innovation Centre for Cancer Medicine, Sun Yat-Sen University Cancer Centre, Guangzhou, China
| | - Xiaofei Lv
- Department of Radiology, Sun Yat-Sen University Cancer Centre, Guangzhou, China.,State Key Laboratory of Oncology in South China, Sun Yat-Sen University Cancer Centre, Guangzhou, China.,Collaborative Innovation Centre for Cancer Medicine, Sun Yat-Sen University Cancer Centre, Guangzhou, China
| | - Yongju Yi
- School of Biomedical Engineering, Southern Medical University, 1023 Shatainan Road, Guangzhou, 510515, China.,Network Information Centre, The Sixth Affiliated Hospital of Sun Yat-Sen University, Guangzhou, China
| | - Guangying Ruan
- Department of Radiology, Sun Yat-Sen University Cancer Centre, Guangzhou, China.,State Key Laboratory of Oncology in South China, Sun Yat-Sen University Cancer Centre, Guangzhou, China.,Collaborative Innovation Centre for Cancer Medicine, Sun Yat-Sen University Cancer Centre, Guangzhou, China
| | - Chao Ke
- State Key Laboratory of Oncology in South China, Sun Yat-Sen University Cancer Centre, Guangzhou, China. .,Collaborative Innovation Centre for Cancer Medicine, Sun Yat-Sen University Cancer Centre, Guangzhou, China. .,Department of Neurosurgery and Neuro-oncology, Sun Yat-Sen University Cancer Centre, 651 Dongfeng East Road, Guangzhou, 510060, China.
| | - Yanqiu Feng
- School of Biomedical Engineering, Southern Medical University, 1023 Shatainan Road, Guangzhou, 510515, China. .,Guangdong Provincial Key Laboratory of Medical Image Processing & Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China. .,Guangdong-Hong Kong-Macao Greater Bay Area Centre for Brain Science and Brain-Inspired Intelligence & Key Laboratory of Mental Health of the Ministry of Education, Guangzhou, China. .,Department of Rehabilitation, Zhujiang Hospital, Southern Medical University, Guangzhou, China.
| |
|
43
|
Deng Y, Li C, Lv X, Xia W, Shen L, Jing B, Li B, Guo X, Sun Y, Xie C, Ke L. The contrast-enhanced MRI can be substituted by unenhanced MRI in identifying and automatically segmenting primary nasopharyngeal carcinoma with the aid of deep learning models: An exploratory study in large-scale population of endemic area. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 217:106702. [PMID: 35228147 DOI: 10.1016/j.cmpb.2022.106702] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/09/2021] [Revised: 01/25/2022] [Accepted: 02/13/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVES Administration of contrast is not desirable for all cases in the clinical setting, and no consensus on sequence selection for deep learning model development has been achieved. We therefore aimed to explore whether contrast-enhanced magnetic resonance imaging (ceMRI) can be substituted by unenhanced MRI in the identification and segmentation of nasopharyngeal carcinoma (NPC) with the aid of deep learning models in a large-scale cohort. METHODS A total of 4478 eligible individuals were randomly split into training, validation and test sets, and self-constrained 3D DenseNet and V-Net models were developed using axial T1-weighted imaging (T1WI), T2WI or enhanced T1WI (T1WIC) images separately. The differential diagnostic performance between NPC and benign hyperplasia was compared among models using the chi-square test. Segmentation evaluation metrics, including the Dice similarity coefficient (DSC) and average surface distance (ASD), were compared using paired Student's t-tests between the T1WIC model and the T1WI or T2WI models, or M_T1/T2, a merged output of the malignant region derived from the T1WI and T2WI models. RESULTS All models exhibited similar, satisfactory diagnostic performance in discriminating NPC from benign hyperplasia, attaining overall accuracy over 99.00% in all T stages of NPC. The T1WIC model exhibited average DSC and ASD similar to those of M_T1/T2 (DSC, 0.768±0.070 vs 0.764±0.070; ASD, 1.573±0.954 mm vs 1.626±0.975 mm, all p > 0.0167) in primary NPC using DenseNet, but yielded a significantly higher DSC and lower ASD than either the T1WI or T2WI model (DSC, 0.759±0.065 or 0.755±0.071; ASD, 1.661±0.898 mm or 1.722±1.133 mm, respectively, all p < 0.01) in the entire test set of the NPC cohort. Moreover, the average DSCs and ASDs were not statistically significantly different between the T1WIC model and M_T1/T2 in both.
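The average surface distance (ASD) reported above measures how far apart two segmentation boundaries lie, complementing the overlap-based DSC. A minimal 2D pure-NumPy sketch (real pipelines work on 3D masks and typically use library routines such as SciPy distance transforms; the 4-neighbour surface definition here is a simplification for illustration):

```python
import numpy as np

def surface_points(mask: np.ndarray) -> np.ndarray:
    """Coordinates of foreground pixels with at least one 4-neighbour background."""
    padded = np.pad(mask.astype(bool), 1, constant_values=False)
    core = padded[1:-1, 1:-1]
    # A pixel is interior only if all four 4-neighbours are foreground.
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    return np.argwhere(core & ~interior)

def average_surface_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric average surface distance between two binary masks."""
    sa, sb = surface_points(a), surface_points(b)
    # Pairwise Euclidean distances between the two surface point sets.
    d = np.linalg.norm(sa[:, None, :] - sb[None, :, :], axis=-1)
    return float((d.min(axis=1).mean() + d.min(axis=0).mean()) / 2)

a = np.zeros((5, 5), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((5, 5), dtype=bool); b[1:3, 2:4] = True  # same block, shifted right
print(average_surface_distance(a, a))  # 0.0
print(average_surface_distance(a, b))  # 0.5
```

Identical masks give an ASD of zero; the one-pixel shift above leaves half of each surface displaced by one pixel, hence 0.5.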
Affiliation(s)
- Yishu Deng
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou 510060, China; Department of Information, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
| | - Chaofeng Li
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou 510060, China; Department of Information, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China; Precision Medicine Center, Sun Yat-Sen University, Guangzhou 510060, China
| | - Xing Lv
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou 510060, China; Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
| | - Weixiong Xia
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou 510060, China; Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
| | - Lujun Shen
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou 510060, China; Department of Minimally Invasive Therapy, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
| | - Bingzhong Jing
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou 510060, China; Department of Information, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
| | - Bin Li
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou 510060, China; Department of Information, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
| | - Xiang Guo
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou 510060, China; Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
| | - Ying Sun
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou 510060, China; Department of Radiation Oncology, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China.
| | - Chuanmiao Xie
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou 510060, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China.
| | - Liangru Ke
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou 510060, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China.
| |
|
44
|
Automated methods for diagnosis of Parkinson’s disease and predicting severity level. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06626-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
|
45
|
Fang Z, Ren J, MacLellan C, Li H, Zhao H, Hussain A, Fortino G. A Novel Multi-Stage Residual Feature Fusion Network for Detection of COVID-19 in Chest X-Ray Images. IEEE TRANSACTIONS ON MOLECULAR, BIOLOGICAL, AND MULTI-SCALE COMMUNICATIONS 2022; 8:17-27. [PMID: 35935666 PMCID: PMC9280851 DOI: 10.1109/tmbmc.2021.3099367] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/16/2021] [Revised: 06/30/2021] [Accepted: 07/07/2021] [Indexed: 12/22/2022]
Abstract
To suppress the spread of COVID-19, accurate diagnosis at an early stage is crucial; chest screening with radiography imaging plays an important role in addition to the real-time reverse transcriptase polymerase chain reaction (RT-PCR) swab test. Owing to limited data, existing models suffer from ineffective feature extraction and poor network convergence and optimization. Accordingly, a multi-stage residual network, MSRCovXNet, is proposed for effective detection of COVID-19 from chest X-ray (CXR) images. A shallow yet effective classifier with ResNet-18 as the feature extractor, MSRCovXNet is optimized by fusing two proposed feature enhancement modules (FEMs) operating on low-level and high-level feature maps (LLFMs and HLFMs), which contain more local information and richer semantic information, respectively. For effective fusion of these two features, a single-stage FEM and a multi-stage FEM (MSFEM) are proposed to enhance the semantic feature representation of the LLFMs and the local feature representation of the HLFMs, respectively. Without ensembling other deep learning models, MSRCovXNet achieves a precision of 98.9% and a recall of 94% in detecting COVID-19, outperforming several state-of-the-art models. When evaluated on the COVIDGR dataset, an average accuracy of 82.2% is achieved, leading other methods by at least 1.2%.
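The precision and recall figures quoted in abstracts like the one above follow the standard confusion-matrix definitions. A small illustrative computation (the counts below are hypothetical, chosen only to reproduce similar percentages, and are not from the paper):

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision = TP/(TP+FP); recall (sensitivity) = TP/(TP+FN)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical COVID-19 detection counts on a test set:
# 94 true positives, 1 false positive, 6 false negatives.
p, r = precision_recall(tp=94, fp=1, fn=6)
print(f"precision={p:.3f}, recall={r:.3f}")  # precision=0.989, recall=0.940
```

High precision with lower recall, as reported here, means few false alarms but some missed cases.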
Affiliation(s)
- Zhenyu Fang
- School of Computer Sciences, Guangdong Polytechnic Normal University, Guangzhou 510065, China
- School of Computer Software and Microelectronics, Northwestern Polytechnical University, Xi’an 710072, China
| | - Jinchang Ren
- School of Computer Sciences, Guangdong Polytechnic Normal University, Guangzhou 510065, China
- National Subsea Centre, Robert Gordon University, Aberdeen AB10 7QB, U.K.
| | - Calum MacLellan
- Centre for Signal and Image Processing, University of Strathclyde, Glasgow G1 1XQ, U.K.
| | - Huihui Li
- School of Computer Sciences, Guangdong Polytechnic Normal University, Guangzhou 510065, China
| | - Huimin Zhao
- School of Computer Sciences, Guangdong Polytechnic Normal University, Guangzhou 510065, China
| | - Amir Hussain
- School of Computing, Edinburgh Napier University, Edinburgh EH11 4BN, U.K.
| | - Giancarlo Fortino
- Department of Informatics, Modelling, Electronics and Systems, University of Calabria, 87036 Rende, Italy
| |
|
46
|
A Comprehensive Analysis of Recent Deep and Federated-Learning-Based Methodologies for Brain Tumor Diagnosis. J Pers Med 2022; 12:jpm12020275. [PMID: 35207763 PMCID: PMC8880689 DOI: 10.3390/jpm12020275] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2022] [Revised: 02/05/2022] [Accepted: 02/09/2022] [Indexed: 12/12/2022] Open
Abstract
Brain tumors are a deadly disease with a high mortality rate. Early diagnosis of brain tumors improves treatment, which results in a better survival rate for patients. Artificial intelligence (AI) has recently emerged as an assistive technology for the early diagnosis of tumors, and AI is the primary focus of researchers in the diagnosis of brain tumors. This study provides an overview of recent research on the diagnosis of brain tumors using federated and deep learning methods. The primary objective is to explore the performance of deep and federated learning methods and evaluate their accuracy in the diagnosis process. A systematic literature review is provided, discussing the open issues and challenges, which are likely to guide future researchers working in the field of brain tumor diagnosis.
|
47
|
Zopfs D, Laukamp K, Reimer R, Grosse Hokamp N, Kabbasch C, Borggrefe J, Pennig L, Bunck AC, Schlamann M, Lennartz S. Automated Color-Coding of Lesion Changes in Contrast-Enhanced 3D T1-Weighted Sequences for MRI Follow-up of Brain Metastases. AJNR Am J Neuroradiol 2022; 43:188-194. [PMID: 34992128 PMCID: PMC8985679 DOI: 10.3174/ajnr.a7380] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2021] [Accepted: 10/06/2021] [Indexed: 02/03/2023]
Abstract
BACKGROUND AND PURPOSE MR imaging is the technique of choice for follow-up of patients with brain metastases, yet the radiologic assessment is often tedious and error-prone, especially in examinations with multiple metastases or subtle changes. This study aimed to determine whether using automated color-coding improves the radiologic assessment of brain metastases compared with conventional reading. MATERIALS AND METHODS One hundred twenty-one pairs of follow-up examinations of patients with brain metastases were assessed. Two radiologists determined the presence of progression, regression, mixed changes, or stable disease between the follow-up examinations and indicated subjective diagnostic certainty regarding their decisions in a conventional reading and a second reading using automated color-coding after an interval of 8 weeks. RESULTS The rate of correctly classified diagnoses was higher (91.3%, 221/242, versus 74.0%, 179/242, P < .01) when using automated color-coding, and the median Likert score for diagnostic certainty improved from 2 (interquartile range, 2-3) to 4 (interquartile range, 3-5) (P < .05) compared with the conventional reading. Interrater agreement was excellent (κ = 0.80; 95% CI, 0.71-0.89) with automated color-coding compared with a moderate agreement (κ = 0.46; 95% CI, 0.34-0.58) with the conventional reading approach. When considering the time required for image preprocessing, the overall average time for reading an examination was longer in the automated color-coding approach (91.5 [SD, 23.1] seconds versus 79.4 [SD, 34.7] seconds, P < .001). CONCLUSIONS Compared with the conventional reading, automated color-coding of lesion changes in follow-up examinations of patients with brain metastases significantly increased the rate of correct diagnoses and resulted in higher diagnostic certainty.
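The colour-coding approach evaluated above overlays the change between co-registered baseline and follow-up images. A toy sketch of the underlying idea (co-registration and intensity normalisation are assumed to have been done already, and the stability threshold is an arbitrary value for this illustration, not from the paper):

```python
import numpy as np

def change_map(baseline: np.ndarray, followup: np.ndarray, tol: float = 0.1) -> np.ndarray:
    """Label each voxel as growth (+1), shrinkage (-1) or stable (0).

    Inputs are co-registered, intensity-normalised images; `tol` is an
    arbitrary stability threshold for this sketch.
    """
    diff = followup.astype(float) - baseline.astype(float)
    labels = np.zeros(diff.shape, dtype=int)
    labels[diff > tol] = 1     # e.g. rendered red in a colour overlay
    labels[diff < -tol] = -1   # e.g. rendered blue
    return labels

base = np.array([[0.0, 0.8], [0.5, 0.5]])
follow = np.array([[0.7, 0.1], [0.5, 0.5]])
print(change_map(base, follow))  # growth at (0,0), shrinkage at (0,1), rest stable
```

Rendering the +1/-1 labels in contrasting colours over the anatomical image is what lets the reader spot progression or regression at a glance.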
Affiliation(s)
- D Zopfs
- From the Institute for Diagnostic and Interventional Radiology (D.Z., K.L., R.R., N.G.H., C.K., L.P., A.C.B., M.S., S.L.), Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - K Laukamp
- From the Institute for Diagnostic and Interventional Radiology (D.Z., K.L., R.R., N.G.H., C.K., L.P., A.C.B., M.S., S.L.), Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - R Reimer
- From the Institute for Diagnostic and Interventional Radiology (D.Z., K.L., R.R., N.G.H., C.K., L.P., A.C.B., M.S., S.L.), Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - N Grosse Hokamp
- From the Institute for Diagnostic and Interventional Radiology (D.Z., K.L., R.R., N.G.H., C.K., L.P., A.C.B., M.S., S.L.), Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - C Kabbasch
- From the Institute for Diagnostic and Interventional Radiology (D.Z., K.L., R.R., N.G.H., C.K., L.P., A.C.B., M.S., S.L.), Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - J Borggrefe
- Department of Radiology (J.B.), Neuroradiology and Nuclear Medicine, Johannes Wesling University Hospital, Ruhr University Bochum, Bochum, Germany
| | - L Pennig
- From the Institute for Diagnostic and Interventional Radiology (D.Z., K.L., R.R., N.G.H., C.K., L.P., A.C.B., M.S., S.L.), Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - A C Bunck
- From the Institute for Diagnostic and Interventional Radiology (D.Z., K.L., R.R., N.G.H., C.K., L.P., A.C.B., M.S., S.L.), Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - M Schlamann
- From the Institute for Diagnostic and Interventional Radiology (D.Z., K.L., R.R., N.G.H., C.K., L.P., A.C.B., M.S., S.L.), Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - S Lennartz
- From the Institute for Diagnostic and Interventional Radiology (D.Z., K.L., R.R., N.G.H., C.K., L.P., A.C.B., M.S., S.L.), Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| |
|
48
|
A Hybrid CNN-GLCM Classifier For Detection And Grade Classification Of Brain Tumor. Brain Imaging Behav 2022; 16:1410-1427. [PMID: 35048264 DOI: 10.1007/s11682-021-00598-2] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/02/2021] [Indexed: 11/02/2022]
Abstract
A supervised CNN Deep Net classifier is proposed for the detection, classification and diagnosis of meningioma brain tumors using a deep learning approach. The proposed method includes preprocessing, classification, and segmentation of this primary brain tumor occurring in adults. The proposed CNN Deep Net classifier extracts features internally from the enhanced image and classifies images as normal or abnormal (tumor) images. Segmentation of the tumor region is performed by global thresholding along with an area morphological function. This fully automated method of brain tumor classification and segmentation preserves spatial invariance and inheritance. Furthermore, based on its feature attributes, the proposed CNN Deep Net classifier classifies the detected tumor image as either (low-grade) benign or (high-grade) malignant. The proposed CNN Deep Net classification approach with its grading system is evaluated both quantitatively and qualitatively. Quantitative measures such as sensitivity, specificity, accuracy, Dice similarity coefficient, precision and F-score indicate a segmentation accuracy of 99.4% and a classification rate of 99.5% with respect to ground-truth images.
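Hybrid CNN-GLCM approaches like the one above combine learned CNN features with hand-crafted grey-level co-occurrence matrix (GLCM) texture features. A minimal pure-NumPy sketch of a horizontal-offset GLCM and one derived Haralick feature (contrast); library routines such as scikit-image's `graycomatrix`/`graycoprops` provide a fuller implementation with multiple offsets and angles:

```python
import numpy as np

def glcm_horizontal(img: np.ndarray, levels: int) -> np.ndarray:
    """Normalised grey-level co-occurrence matrix for offset (0, 1).

    Counts how often grey level a appears immediately left of level b.
    """
    glcm = np.zeros((levels, levels), dtype=float)
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[a, b] += 1
    return glcm / glcm.sum()

def glcm_contrast(glcm: np.ndarray) -> float:
    """Haralick contrast = sum_{i,j} (i - j)^2 * P(i, j)."""
    i, j = np.indices(glcm.shape)
    return float(((i - j) ** 2 * glcm).sum())

img = np.array([[0, 0, 1],
                [1, 2, 2],
                [0, 1, 1]])  # tiny 3-level "image"
g = glcm_horizontal(img, levels=3)
print(glcm_contrast(g))  # 0.5
```

Features like contrast, energy and homogeneity computed from such matrices are what a hybrid classifier concatenates with its CNN feature vector.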
|
49
|
Overview of Artificial Intelligence in Medicine. Artif Intell Med 2022. [DOI: 10.1007/978-981-19-1223-8_2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
50
|
Bhalodiya JM, Lim Choi Keung SN, Arvanitis TN. Magnetic resonance image-based brain tumour segmentation methods: A systematic review. Digit Health 2022; 8:20552076221074122. [PMID: 35340900 PMCID: PMC8943308 DOI: 10.1177/20552076221074122] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2021] [Revised: 11/20/2021] [Accepted: 12/27/2021] [Indexed: 01/10/2023] Open
Abstract
Background Image segmentation is an essential step in the analysis and subsequent characterisation of brain tumours through magnetic resonance imaging. In the literature, segmentation methods are empowered by open-access magnetic resonance imaging datasets, such as the brain tumour segmentation dataset. Moreover, with the increased use of artificial intelligence methods in medical imaging, access to larger data repositories has become vital in method development. Purpose To determine which automated brain tumour segmentation techniques medical imaging specialists and clinicians can use to identify tumour components, compared with manual segmentation. Methods We conducted a systematic review of 572 brain tumour segmentation studies published during 2015-2020. We reviewed segmentation techniques using T1-weighted, T2-weighted, gadolinium-enhanced T1-weighted, fluid-attenuated inversion recovery, diffusion-weighted and perfusion-weighted magnetic resonance imaging sequences. Moreover, we assessed physics- or mathematics-based methods, deep learning methods, and software-based or semi-automatic methods, as applied to magnetic resonance imaging techniques. In particular, we synthesised each method as per the utilised magnetic resonance imaging sequences, study population, technical approach (such as deep learning) and performance score measures (such as the Dice score). Statistical tests We compared median Dice scores in segmenting the whole tumour, tumour core and enhanced tumour. Results We found that T1-weighted, gadolinium-enhanced T1-weighted, T2-weighted and fluid-attenuated inversion recovery magnetic resonance imaging are used the most in various segmentation algorithms. However, there is limited use of perfusion-weighted and diffusion-weighted magnetic resonance imaging. Moreover, we found that U-Net deep learning technology is cited the most and has high accuracy (Dice score 0.9) for magnetic resonance imaging-based brain tumour segmentation.
Conclusion U-Net is a promising deep learning technology for magnetic resonance imaging-based brain tumour segmentation. The community should be encouraged to contribute open-access datasets so training, testing and validation of deep learning algorithms can be improved, particularly for diffusion- and perfusion-weighted magnetic resonance imaging, where limited datasets are available.
Affiliation(s)
- Jayendra M Bhalodiya
- Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
| | - Sarah N Lim Choi Keung
- Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
| | - Theodoros N Arvanitis
- Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
| |
|