1
Awuah WA, Adebusoye FT, Wellington J, David L, Salam A, Weng Yee AL, Lansiaux E, Yarlagadda R, Garg T, Abdul-Rahman T, Kalmanovich J, Miteu GD, Kundu M, Mykolaivna NI. Recent Outcomes and Challenges of Artificial Intelligence, Machine Learning, and Deep Learning in Neurosurgery. World Neurosurg X 2024;23:100301. PMID: 38577317; PMCID: PMC10992893; DOI: 10.1016/j.wnsx.2024.100301
Abstract
Neurosurgeons receive extensive technical training, which equips them with the knowledge and skills to specialise in various fields and to manage the massive amounts of information and decision-making required throughout the stages of neurosurgery, including preoperative, intraoperative, and postoperative care and recovery. Over the past few years, artificial intelligence (AI) has become increasingly useful in neurosurgery. AI has the potential to improve patient outcomes by augmenting the capabilities of neurosurgeons, ultimately improving diagnostic and prognostic accuracy as well as decision-making during surgical procedures. By incorporating AI into both interventional and non-interventional therapies, neurosurgeons may provide the best care for their patients. AI, machine learning (ML), and deep learning (DL) have made significant progress in the field of neurosurgery: these cutting-edge methods have enhanced patient outcomes, reduced complications, and improved surgical planning.
Affiliation(s)
- Jack Wellington
  - Cardiff University School of Medicine, Cardiff University, Wales, United Kingdom
- Lian David
  - Norwich Medical School, University of East Anglia, United Kingdom
- Abdus Salam
  - Department of Surgery, Khyber Teaching Hospital, Peshawar, Pakistan
- Rohan Yarlagadda
  - Rowan University School of Osteopathic Medicine, Stratford, NJ, USA
- Tulika Garg
  - Government Medical College and Hospital, Chandigarh, India
- Mrinmoy Kundu
  - Institute of Medical Sciences and SUM Hospital, Bhubaneswar, India
2
He K, Peng B, Yu W, Liu Y, Liu S, Cheng J, Dai Y. A Novel Mis-Seg-Focus Loss Function Based on a Two-Stage nnU-Net Framework for Accurate Brain Tissue Segmentation. Bioengineering (Basel) 2024;11:427. PMID: 38790294; PMCID: PMC11118222; DOI: 10.3390/bioengineering11050427
Abstract
Brain tissue segmentation plays a critical role in the diagnosis, treatment, and study of brain diseases, and accurately identifying the boundaries between tissues is essential for segmentation accuracy. However, distinguishing boundaries between different brain tissues can be challenging, as they often overlap. Existing deep learning methods primarily optimize the overall segmentation result without adequately addressing local regions, leading to error propagation and mis-segmentation along boundaries. In this study, we propose a novel mis-segmentation-focused loss function based on a two-stage nnU-Net framework. The first stage identifies mis-segmented regions using a global loss function; the second stage defines a mis-segmentation loss function that adaptively adjusts the model, improving its ability to handle ambiguous boundaries and overlapping anatomical structures. Experimental evaluations on two datasets demonstrate that the proposed method outperforms existing approaches both quantitatively and qualitatively.
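The two-stage idea in this abstract — first locate mis-segmented pixels, then re-weight the loss so they dominate — can be illustrated with a toy NumPy sketch. This is an assumption-laden illustration, not the paper's exact formulation: the per-pixel weighting scheme, the `boost` factor, and the 0.5 threshold are all illustrative choices.

```python
import numpy as np

def cross_entropy(prob_fg, target, eps=1e-7):
    """Per-pixel binary cross-entropy between predicted foreground
    probabilities and a binary target mask."""
    p = np.clip(prob_fg, eps, 1 - eps)
    return -(target * np.log(p) + (1 - target) * np.log(1 - p))

def mis_seg_focused_loss(prob_fg, target, boost=4.0, thresh=0.5):
    """Stage 1: find mis-segmented pixels (thresholded prediction disagrees
    with ground truth). Stage 2: up-weight the loss on those pixels."""
    pred = (prob_fg >= thresh).astype(float)
    mis = (pred != target).astype(float)      # mis-segmentation mask
    weights = 1.0 + boost * mis               # errors count more
    return float(np.mean(weights * cross_entropy(prob_fg, target)))

# Toy 1-D "image" with one boundary pixel predicted wrongly (the third one).
target = np.array([0., 0., 1., 1.])
probs  = np.array([0.1, 0.2, 0.4, 0.9])
plain   = float(np.mean(cross_entropy(probs, target)))
focused = mis_seg_focused_loss(probs, target)
assert focused > plain  # the boundary error is penalised more heavily
```

In a real pipeline the weighting would apply to the network's voxel-wise loss during the second training stage rather than to a post-hoc array.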
Affiliation(s)
- Keyi He
  - Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
  - School of Electrical and Electronic Engineering, Changchun University of Technology, Changchun 130012, China
- Bo Peng
  - Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Weibo Yu
  - School of Electrical and Electronic Engineering, Changchun University of Technology, Changchun 130012, China
- Yan Liu
  - Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Surui Liu
  - Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Jian Cheng
  - State Key Laboratory of Complex & Critical Software Environment, Beihang University, Beijing 100191, China
  - International Innovation Institute, Beihang University, 166 Shuanghongqiao Street, Pingyao Town, Yuhang District, Hangzhou 311115, China
- Yakang Dai
  - Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
3
Zuo D, Yang L, Jin Y, Qi H, Liu Y, Ren L. Machine learning-based models for the prediction of breast cancer recurrence risk. BMC Med Inform Decis Mak 2023;23:276. PMID: 38031071; PMCID: PMC10688055; DOI: 10.1186/s12911-023-02377-z
Abstract
Breast cancer is the most common malignancy diagnosed in women worldwide. Its prevalence and incidence are increasing every year; therefore, early diagnosis, along with reliable relapse detection, is an important strategy for improving prognosis. This study compared different machine learning algorithms to select the best model for predicting breast cancer recurrence. The prediction model was developed using eleven machine learning (ML) algorithms: logistic regression (LR), random forest (RF), support vector classification (SVC), extreme gradient boosting (XGBoost), gradient boosting decision tree (GBDT), decision tree, multilayer perceptron (MLP), linear discriminant analysis (LDA), adaptive boosting (AdaBoost), Gaussian naive Bayes (GaussianNB), and light gradient boosting machine (LightGBM). The area under the curve (AUC), accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and F1 score were used to evaluate the performance of the prognostic models. Based on this evaluation, the optimal ML model was selected, and feature importance was ranked by Shapley Additive Explanations (SHAP) values. The AdaBoost algorithm showed the best prediction performance of the eleven algorithms and was adopted for the final prediction model. Moreover, CA125, CEA, Fbg, and tumor diameter were the most important features in our dataset for predicting breast cancer recurrence. Notably, this study is the first to use the SHAP method to improve the interpretability of an AdaBoost-based breast cancer recurrence prediction model for clinicians. The AdaBoost algorithm offers a clinical decision support model and successfully identifies the recurrence of breast cancer.
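The evaluation metrics this abstract lists (sensitivity, specificity, PPV, NPV, F1) all derive from a 2x2 confusion matrix. A minimal sketch, with purely illustrative labels (not the study's data) and assuming non-degenerate counts:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, PPV, NPV, and F1 from a binary
    confusion matrix (1 = recurrence, 0 = no recurrence)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    sens = tp / (tp + fn)                 # recall for recurrence cases
    spec = tn / (tn + fp)
    ppv  = tp / (tp + fp)
    npv  = tn / (tn + fn)
    f1   = 2 * ppv * sens / (ppv + sens)
    return dict(sensitivity=sens, specificity=spec, ppv=ppv, npv=npv, f1=f1)

# Illustrative labels only:
m = binary_metrics([1, 1, 1, 0, 0, 0, 0, 1], [1, 1, 0, 0, 0, 1, 0, 1])
assert m["sensitivity"] == 0.75 and m["specificity"] == 0.75
```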
Affiliation(s)
- Duo Zuo
  - Department of Clinical Laboratory, Tianjin Medical University Cancer Institute & Hospital, Tianjin, 300060, China
  - National Clinical Research Center for Cancer, Tianjin, 300060, China
  - Tianjin's Clinical Research Center for Cancer, Tianjin, 300060, China
  - Key Laboratory of Cancer Prevention and Therapy, Tianjin, 300060, China
  - Key Laboratory of Breast Cancer Prevention and Therapy, Tianjin Medical University, Ministry of Education, Tianjin, 300060, China
- Lexin Yang
  - Department of Clinical Laboratory, Tianjin Medical University Cancer Institute & Hospital, Tianjin, 300060, China
  - National Clinical Research Center for Cancer, Tianjin, 300060, China
  - Tianjin's Clinical Research Center for Cancer, Tianjin, 300060, China
  - Key Laboratory of Cancer Prevention and Therapy, Tianjin, 300060, China
  - Key Laboratory of Breast Cancer Prevention and Therapy, Tianjin Medical University, Ministry of Education, Tianjin, 300060, China
- Yu Jin
  - Department of Clinical Laboratory, Tianjin Medical University Cancer Institute & Hospital, Tianjin, 300060, China
  - Tongji University Cancer Center, Shanghai Tenth People's Hospital, School of Medicine, Tongji University, Shanghai, 200072, China
- Huan Qi
  - China Mobile Group Tianjin Company Limited, Tianjin, 300308, China
- Yahui Liu
  - Department of Clinical Laboratory, Tianjin Medical University Cancer Institute & Hospital, Tianjin, 300060, China
  - National Clinical Research Center for Cancer, Tianjin, 300060, China
  - Tianjin's Clinical Research Center for Cancer, Tianjin, 300060, China
  - Key Laboratory of Cancer Prevention and Therapy, Tianjin, 300060, China
  - Key Laboratory of Breast Cancer Prevention and Therapy, Tianjin Medical University, Ministry of Education, Tianjin, 300060, China
- Li Ren
  - Department of Clinical Laboratory, Tianjin Medical University Cancer Institute & Hospital, Tianjin, 300060, China
  - National Clinical Research Center for Cancer, Tianjin, 300060, China
  - Tianjin's Clinical Research Center for Cancer, Tianjin, 300060, China
  - Key Laboratory of Cancer Prevention and Therapy, Tianjin, 300060, China
  - Key Laboratory of Breast Cancer Prevention and Therapy, Tianjin Medical University, Ministry of Education, Tianjin, 300060, China
4
Ding AS, Lu A, Li Z, Sahu M, Galaiya D, Siewerdsen JH, Unberath M, Taylor RH, Creighton FX. A Self-Configuring Deep Learning Network for Segmentation of Temporal Bone Anatomy in Cone-Beam CT Imaging. Otolaryngol Head Neck Surg 2023;169:988-998. PMID: 36883992; PMCID: PMC11060418; DOI: 10.1002/ohn.317
Abstract
OBJECTIVE Preoperative planning for otologic or neurotologic procedures often requires manual segmentation of relevant structures, which can be tedious and time-consuming. Automated methods for segmenting multiple geometrically complex structures can not only streamline preoperative planning but also augment minimally invasive and/or robot-assisted procedures in this space. This study evaluates a state-of-the-art deep learning pipeline for semantic segmentation of temporal bone anatomy. STUDY DESIGN A descriptive study of a segmentation network. SETTING Academic institution. METHODS A total of 15 high-resolution cone-beam temporal bone computed tomography (CT) data sets were included in this study. All images were co-registered, with relevant anatomical structures (eg, ossicles, inner ear, facial nerve, chorda tympani, bony labyrinth) manually segmented. Predicted segmentations from no new U-Net (nnU-Net), an open-source 3-dimensional semantic segmentation neural network, were compared against ground-truth segmentations using modified Hausdorff distances (mHD) and Dice scores. RESULTS Fivefold cross-validation of nnU-Net predictions against ground-truth labels yielded: malleus (mHD: 0.044 ± 0.024 mm, Dice: 0.914 ± 0.035), incus (mHD: 0.051 ± 0.027 mm, Dice: 0.916 ± 0.034), stapes (mHD: 0.147 ± 0.113 mm, Dice: 0.560 ± 0.106), bony labyrinth (mHD: 0.038 ± 0.031 mm, Dice: 0.952 ± 0.017), and facial nerve (mHD: 0.139 ± 0.072 mm, Dice: 0.862 ± 0.039). Comparison against atlas-based segmentation propagation showed significantly higher Dice scores for all structures (p < .05). CONCLUSION Using an open-source deep learning pipeline, we demonstrate consistently submillimeter accuracy for semantic CT segmentation of temporal bone anatomy compared to hand-segmented labels. This pipeline has the potential to greatly improve preoperative planning workflows for a variety of otologic and neurotologic procedures and to augment existing image guidance and robot-assisted systems for the temporal bone.
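The two agreement measures used here can be sketched directly. This assumes the common Dubuisson-Jain definition of the modified Hausdorff distance (the larger of the two mean nearest-neighbour distances); the paper's exact variant may differ:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def modified_hausdorff(pts_a, pts_b):
    """Modified Hausdorff distance (Dubuisson & Jain): the larger of the
    two mean nearest-neighbour distances between two point sets."""
    pts_a, pts_b = np.asarray(pts_a, float), np.asarray(pts_b, float)
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())

pred  = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0]])
truth = np.array([[0, 1, 1, 1],
                  [0, 1, 1, 0]])
print(round(dice(pred, truth), 3))                             # 0.889
print(modified_hausdorff([[0, 0], [1, 0]], [[0, 0], [2, 0]]))  # 0.5
```

In practice the point sets would be the surface voxels of the predicted and ground-truth structures, scaled by the CT voxel spacing to give distances in millimetres.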
Affiliation(s)
- Andy S. Ding
  - Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
  - Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Alexander Lu
  - Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
  - Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Zhaoshuo Li
  - Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Manish Sahu
  - Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Deepa Galaiya
  - Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Jeffrey H. Siewerdsen
  - Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
  - Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Mathias Unberath
  - Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Russell H. Taylor
  - Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Francis X. Creighton
  - Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
5
Raghavendra U, Gudigar A, Paul A, Goutham TS, Inamdar MA, Hegde A, Devi A, Ooi CP, Deo RC, Barua PD, Molinari F, Ciaccio EJ, Acharya UR. Brain tumor detection and screening using artificial intelligence techniques: Current trends and future perspectives. Comput Biol Med 2023;163:107063. PMID: 37329621; DOI: 10.1016/j.compbiomed.2023.107063
Abstract
A brain tumor is an abnormal mass of tissue located inside the skull. In addition to putting pressure on healthy parts of the brain, it can lead to significant health problems that vary with the affected region. Because malignant brain tumors grow rapidly, the mortality rate of individuals with this cancer can increase substantially with each passing week; hence, it is vital to detect these tumors early so that preventive measures can be taken at the initial stages. Computer-aided diagnostic (CAD) systems, in coordination with artificial intelligence (AI) techniques, play a vital role in the early detection of this disorder. In this review, we studied 124 research articles published from 2000 to 2022. We highlight the challenges faced by CAD systems based on different imaging modalities, along with the current requirements of this domain and future prospects in this area of research.
Affiliation(s)
- U Raghavendra
  - Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104, India
- Anjan Gudigar
  - Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104, India
- Aritra Paul
  - Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104, India
- T S Goutham
  - Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104, India
- Mahesh Anil Inamdar
  - Department of Mechatronics, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104, India
- Ajay Hegde
  - Consultant Neurosurgeon, Manipal Hospitals, Sarjapur Road, Bangalore, India
- Aruna Devi
  - School of Education and Tertiary Access, University of the Sunshine Coast, Caboolture Campus, Australia
- Chui Ping Ooi
  - School of Science and Technology, Singapore University of Social Sciences, Singapore, 599494, Singapore
- Ravinesh C Deo
  - School of Mathematics, Physics, and Computing, University of Southern Queensland, Springfield, QLD, 4300, Australia
- Prabal Datta Barua
  - Cogninet Brain Team, Cogninet Australia, Sydney, NSW, 2010, Australia
  - School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Toowoomba, QLD, 4350, Australia
  - Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW, 2007, Australia
- Filippo Molinari
  - Department of Electronics and Telecommunications, Politecnico di Torino, 10129, Torino, Italy
- Edward J Ciaccio
  - Department of Medicine, Columbia University Medical Center, New York, NY, 10032, USA
- U Rajendra Acharya
  - School of Mathematics, Physics, and Computing, University of Southern Queensland, Springfield, QLD, 4300, Australia
  - International Research Organization for Advanced Science and Technology (IROAST), Kumamoto University, Kumamoto, 860-8555, Japan
6
Abdusalomov AB, Mukhiddinov M, Whangbo TK. Brain Tumor Detection Based on Deep Learning Approaches and Magnetic Resonance Imaging. Cancers (Basel) 2023;15:4172. PMID: 37627200; PMCID: PMC10453020; DOI: 10.3390/cancers15164172
Abstract
The rapid development of abnormal brain cells that characterizes a brain tumor is a major health risk for adults, since it can cause severe impairment of organ function and even death. These tumors vary widely in size, texture, and location. Magnetic resonance imaging (MRI) is a crucial tool for locating cancerous tumors, but detecting brain tumors manually is difficult, time-consuming, and prone to inaccuracies. To address this, we present a refined You Only Look Once version 7 (YOLOv7) model for the accurate detection of meningioma, glioma, and pituitary gland tumors within an improved brain tumor detection system. The visual quality of the MRI scans is enhanced by image enhancement methods that apply different filters to the original images. To further improve training, we apply data augmentation techniques to the openly accessible brain tumor dataset. The curated data cover a wide variety of cases: 2548 glioma images, 2658 pituitary tumor images, 2582 meningioma images, and 2500 non-tumor images. We incorporated the Convolutional Block Attention Module (CBAM) attention mechanism into YOLOv7 to enhance its feature extraction capabilities, allowing better emphasis on salient regions linked with brain malignancies. To further improve the model's sensitivity, we added a Spatial Pyramid Pooling Fast+ (SPPF+) layer to the network's core infrastructure. Our YOLOv7 variant also includes decoupled heads, which allow it to efficiently extract useful information from a wide variety of data, and a Bi-directional Feature Pyramid Network (BiFPN) to speed up multi-scale feature fusion and better capture tumor-associated features. The outcomes verify the efficiency of the proposed method, which achieves higher overall tumor detection accuracy than previous state-of-the-art models. This framework therefore has considerable potential as a decision-making aid for experts diagnosing brain tumors.
Affiliation(s)
- Taeg Keun Whangbo
  - Department of Computer Engineering, Gachon University, Seongnam-si 13120, Republic of Korea
7
Sahoo S, Mishra S, Panda B, Bhoi AK, Barsocchi P. An Augmented Modulated Deep Learning Based Intelligent Predictive Model for Brain Tumor Detection Using GAN Ensemble. Sensors (Basel) 2023;23:6930. PMID: 37571713; PMCID: PMC10422344; DOI: 10.3390/s23156930
Abstract
Detecting brain tumors at an early stage is becoming an intricate task for clinicians worldwide, and diagnosis in the later stages is rigorous, which is a serious concern. Although there are pragmatic clinical tools and multiple machine learning (ML) models for the effective diagnosis of patients, these models still provide limited accuracy and take considerable time for patient screening during the diagnosis process. Hence, there is still a need for a more precise model that can detect brain tumors in the beginning stages, aid clinicians in diagnosis, and make brain tumor assessment more reliable. In this research, a performance analysis of the impact of different generative adversarial networks (GANs) on the early detection of brain tumors is presented. Based on it, a novel hybrid enhanced predictive convolutional neural network (CNN) model using a hybrid GAN ensemble is proposed. Brain tumor image data are augmented using a GAN ensemble and fed to a hybrid modulated CNN for classification. The outcome is generated through a soft voting approach, where the final prediction is based on the GAN that yields the highest values across the performance metrics. This analysis demonstrated that a progressive-growing generative adversarial network (PGGAN) architecture produced the best results: PGGAN achieved accuracy, precision, recall, F1-score, and negative predictive value (NPV) of 98.85%, 98.45%, 97.2%, 98.11%, and 98.09%, respectively, with a very low latency of 3.4 s. The PGGAN model enhanced the overall performance of identifying brain cell tissues in real time. These results suggest that PGGAN augmentation combined with the proposed modulated CNN and the soft voting approach yields optimal performance for brain tumor detection.
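Soft voting, as used for the final prediction above, simply averages class-probability vectors across models and takes the argmax. A minimal sketch with hypothetical tumor/no-tumor probabilities from three models (the numbers are illustrative, not the study's outputs):

```python
import numpy as np

def soft_vote(prob_list, weights=None):
    """Average class-probability vectors from several models and return
    the class with the highest mean probability, plus the mean itself."""
    probs = np.asarray(prob_list, float)          # (n_models, n_classes)
    if weights is not None:
        probs = probs * np.asarray(weights, float)[:, None]
    mean = probs.sum(axis=0) / probs.sum()        # normalised mean
    return int(np.argmax(mean)), mean

# Three hypothetical CNN outputs for one image: [P(tumor), P(no tumor)]
label, mean = soft_vote([[0.9, 0.1],
                         [0.6, 0.4],
                         [0.4, 0.6]])
assert label == 0   # class 0 (tumor) wins on the averaged probabilities
```

The optional `weights` argument allows better-performing ensemble members to count more, a common variation of plain soft voting.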
Affiliation(s)
- Saswati Sahoo
  - School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar 751024, India
- Sushruta Mishra
  - School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar 751024, India
- Baidyanath Panda
  - LTIMindtree, 1 American Row, 3rd Floor, Hartford, CT 06103, USA
- Akash Kumar Bhoi
  - Directorate of Research, Sikkim Manipal University, Gangtok 737102, India
  - KIET Group of Institutions, Delhi-NCR, Ghaziabad 201206, India
  - Institute of Information Science and Technologies, National Research Council, 56124 Pisa, Italy
- Paolo Barsocchi
  - Institute of Information Science and Technologies, National Research Council, 56124 Pisa, Italy
8
Balasundaram A, Kavitha MS, Pratheepan Y, Akshat D, Kaushik MV. A Foreground Prototype-Based One-Shot Segmentation of Brain Tumors. Diagnostics (Basel) 2023;13:1282. PMID: 37046500; PMCID: PMC10093064; DOI: 10.3390/diagnostics13071282
Abstract
The potential for enhancing brain tumor segmentation with few-shot learning is enormous. While several deep neural networks (DNNs) show promising segmentation results, they all require a substantial amount of training data to yield appropriate results, and a prominent problem for most of these models is performing well on unseen classes. To overcome these challenges, we propose a one-shot learning model that segments brain tumors on brain magnetic resonance images (MRI) based on a single prototype similarity score. Using recently developed few-shot learning techniques, where training and testing are carried out on support and query sets of images, we attempt to acquire a definitive tumor region by focusing on slices containing foreground classes, unlike other recent DNNs that employ the entire set of images. The model is trained iteratively: in each iteration, random slices containing foreground classes from randomly sampled data are selected as the query set, along with a different random slice from the same sample as the support set. To differentiate query images from class prototypes, we used a metric learning-based approach with non-parametric thresholds. We employed the multimodal Brain Tumor Image Segmentation (BraTS) 2021 dataset with 60 training images and 350 testing images. The effectiveness of the model is evaluated using the mean Dice score and mean IoU score. The experimental results gave a Dice score of 83.42, higher than other works in the literature. Additionally, the proposed one-shot segmentation model outperforms conventional methods in terms of computational time, memory usage, and the amount of data required.
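The core of prototype-based one-shot segmentation can be sketched in a few lines: build a foreground prototype by masked average pooling over the single support slice, then label each query pixel by its similarity to that prototype. This sketch assumes cosine similarity and a fixed threshold; the paper's metric learning and non-parametric thresholding are more elaborate:

```python
import numpy as np

def foreground_prototype(features, mask):
    """Masked average pooling: the prototype is the mean feature vector
    of the foreground pixels in the support slice."""
    fg = features[mask.astype(bool)]              # (n_fg_pixels, dim)
    return fg.mean(axis=0)

def segment_by_similarity(features, prototype, thresh=0.8):
    """Label a query pixel foreground when its cosine similarity to the
    prototype exceeds a threshold."""
    flat = features.reshape(-1, features.shape[-1])
    sim = flat @ prototype / (
        np.linalg.norm(flat, axis=1) * np.linalg.norm(prototype) + 1e-8)
    return (sim > thresh).reshape(features.shape[:-1]).astype(int)

# Tiny 2x2 "feature map" with 2-dim features; the top row is tumor-like.
feats = np.array([[[1.0, 0.0], [0.9, 0.1]],
                  [[0.0, 1.0], [0.1, 0.9]]])
support_mask = np.array([[1, 1], [0, 0]])
proto = foreground_prototype(feats, support_mask)
pred = segment_by_similarity(feats, proto)
assert pred.tolist() == [[1, 1], [0, 0]]
```

In a real pipeline the features would come from a shared encoder applied to both support and query slices, not from raw intensities.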
Affiliation(s)
- Ananthakrishnan Balasundaram
  - School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, Tamil Nadu, India
- Muthu Subash Kavitha
  - School of Information and Data Sciences, Nagasaki University, Nagasaki 852-8521, Japan
- Yogarajah Pratheepan
  - School of Computing, Engineering and Intelligent System, Ulster University, Londonderry BT48 7JL, UK
- Dhamale Akshat
  - School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, Tamil Nadu, India
- Maddirala Venkata Kaushik
  - School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, Tamil Nadu, India
9
Tong J, Wang C. A dual tri-path CNN system for brain tumor segmentation. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104411
10
Sørensen PJ, Carlsen JF, Larsen VA, Andersen FL, Ladefoged CN, Nielsen MB, Poulsen HS, Hansen AE. Evaluation of the HD-GLIO Deep Learning Algorithm for Brain Tumour Segmentation on Postoperative MRI. Diagnostics (Basel) 2023;13:363. PMID: 36766468; PMCID: PMC9914320; DOI: 10.3390/diagnostics13030363
Abstract
In the context of brain tumour response assessment, deep learning-based three-dimensional (3D) tumour segmentation has shown potential to enter the routine radiological workflow. The purpose of the present study was to perform an external evaluation of a state-of-the-art deep learning 3D brain tumour segmentation algorithm (HD-GLIO) on an independent cohort of consecutive, post-operative patients. For 66 consecutive magnetic resonance imaging examinations, we compared delineations of contrast-enhancing (CE) tumour lesions and non-enhancing T2/FLAIR hyperintense abnormality (NE) lesions by the HD-GLIO algorithm and radiologists using Dice similarity coefficients (Dice). Volume agreement was assessed using concordance correlation coefficients (CCCs) and Bland-Altman plots. The algorithm performed very well regarding the segmentation of NE volumes (median Dice = 0.79) and CE tumour volumes larger than 1.0 cm3 (median Dice = 0.86). If considering all cases with CE tumour lesions, the performance dropped significantly (median Dice = 0.40). Volume agreement was excellent with CCCs of 0.997 (CE tumour volumes) and 0.922 (NE volumes). The findings have implications for the application of the HD-GLIO algorithm in the routine radiological workflow where small contrast-enhancing tumours will constitute a considerable share of the follow-up cases. Our study underlines that independent validations on clinical datasets are key to asserting the robustness of deep learning algorithms.
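The concordance correlation coefficient (CCC) used above for volume agreement combines precision (correlation) and accuracy (bias) in a single statistic. A minimal sketch of Lin's CCC with illustrative volumes (not the study's data); note that a constant offset lowers the CCC even though the Pearson correlation would remain 1:

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                     # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

vols_rad = [10.0, 20.0, 30.0, 40.0]              # illustrative volumes, cm^3
assert abs(ccc(vols_rad, vols_rad) - 1.0) < 1e-12   # perfect agreement
offset = [v + 5.0 for v in vols_rad]
assert 0 < ccc(vols_rad, offset) < 1.0              # bias penalised
```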
Affiliation(s)
- Peter Jagd Sørensen
  - Department of Radiology, Centre of Diagnostic Investigation, Copenhagen University Hospital—Rigshospitalet, 2100 Copenhagen, Denmark
  - Department of Clinical Medicine, University of Copenhagen, 2100 Copenhagen, Denmark
  - The DCCC Brain Tumor Center, 2100 Copenhagen, Denmark
- Jonathan Frederik Carlsen
  - Department of Radiology, Centre of Diagnostic Investigation, Copenhagen University Hospital—Rigshospitalet, 2100 Copenhagen, Denmark
  - Department of Clinical Medicine, University of Copenhagen, 2100 Copenhagen, Denmark
- Vibeke Andrée Larsen
  - Department of Radiology, Centre of Diagnostic Investigation, Copenhagen University Hospital—Rigshospitalet, 2100 Copenhagen, Denmark
- Flemming Littrup Andersen
  - Department of Clinical Medicine, University of Copenhagen, 2100 Copenhagen, Denmark
  - Department of Clinical Physiology and Nuclear Medicine, Centre of Diagnostic Investigation, Copenhagen University Hospital—Rigshospitalet, 2100 Copenhagen, Denmark
- Claes Nøhr Ladefoged
  - Department of Clinical Physiology and Nuclear Medicine, Centre of Diagnostic Investigation, Copenhagen University Hospital—Rigshospitalet, 2100 Copenhagen, Denmark
- Michael Bachmann Nielsen
  - Department of Radiology, Centre of Diagnostic Investigation, Copenhagen University Hospital—Rigshospitalet, 2100 Copenhagen, Denmark
  - Department of Clinical Medicine, University of Copenhagen, 2100 Copenhagen, Denmark
- Hans Skovgaard Poulsen
  - The DCCC Brain Tumor Center, 2100 Copenhagen, Denmark
  - Department of Oncology, Centre for Cancer and Organ Diseases, Copenhagen University Hospital—Rigshospitalet, 2100 Copenhagen, Denmark
- Adam Espe Hansen
  - Department of Radiology, Centre of Diagnostic Investigation, Copenhagen University Hospital—Rigshospitalet, 2100 Copenhagen, Denmark
  - Department of Clinical Medicine, University of Copenhagen, 2100 Copenhagen, Denmark
  - The DCCC Brain Tumor Center, 2100 Copenhagen, Denmark
11
Javaid Iqbal M, Waseem Iqbal M, Anwar M, Murad Khan M, Jabar Nazimi A, Nazir Ahmad M. Brain Tumor Segmentation in Multimodal MRI Using U-Net Layered Structure. Comput Mater Contin 2023;74:5267-5281. DOI: 10.32604/cmc.2023.033024
12
|
Two-Stage Deep Learning Model for Automated Segmentation and Classification of Splenomegaly. Cancers (Basel) 2022; 14:cancers14225476. [PMID: 36428569 PMCID: PMC9688308 DOI: 10.3390/cancers14225476] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2022] [Revised: 10/22/2022] [Accepted: 11/04/2022] [Indexed: 11/09/2022] Open
Abstract
Splenomegaly is a common cross-sectional imaging finding with a variety of differential diagnoses. This study aimed to evaluate whether a deep learning model could automatically segment the spleen and identify the cause of splenomegaly in patients with cirrhotic portal hypertension versus patients with lymphoma. This retrospective study included 149 patients with splenomegaly on computed tomography (CT) images (77 patients with cirrhotic portal hypertension, 72 patients with lymphoma) who underwent a CT scan between October 2020 and July 2021. The dataset was divided into a training (n = 99), a validation (n = 25) and a test cohort (n = 25). In the first stage, the spleen was automatically segmented using a modified U-Net architecture. In the second stage, the CT images were classified into two groups using a 3D DenseNet to discriminate between the causes of splenomegaly, first using the whole abdominal CT, and second using only the spleen segmentation mask. Classification performance was evaluated using the area under the receiver operating characteristic curve (AUC), accuracy (ACC), sensitivity (SEN), and specificity (SPE). Occlusion sensitivity maps were applied to the whole abdominal CT images to illustrate which regions were important for the prediction. When trained on the whole abdominal CT volume, the DenseNet was able to differentiate between lymphoma and liver cirrhosis in the test cohort with an AUC of 0.88 and an ACC of 0.88. When the model was trained on the spleen segmentation mask, the performance decreased (AUC = 0.81, ACC = 0.76). Our model was able to accurately segment splenomegaly and recognize the underlying cause. Training on whole abdominal scans outperformed training on the segmentation mask. Given this performance, a broader application to differentiating other causes of splenomegaly is also conceivable.
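The classification metrics named in this abstract (AUC, ACC, SEN, SPE) can be computed from binary labels, thresholded predictions, and continuous scores; the following is a minimal pure-Python sketch for illustration, not the study's code:

```python
def confusion_counts(y_true, y_pred):
    """Confusion-matrix counts (TP, TN, FP, FN) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def acc_sen_spe(y_true, y_pred):
    """Accuracy, sensitivity (TP rate), and specificity (TN rate)."""
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    return (tp + tn) / len(y_true), tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    """AUC via the Mann-Whitney U formulation: the probability that a
    randomly chosen positive case outranks a randomly chosen negative one."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, `auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.1])` counts three of four positive-negative score pairs correctly ordered, giving 0.75.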
|
13
|
Kihira S, Mei X, Mahmoudi K, Liu Z, Dogra S, Belani P, Tsankova N, Hormigo A, Fayad ZA, Doshi A, Nael K. U-Net Based Segmentation and Characterization of Gliomas. Cancers (Basel) 2022; 14:4457. [PMID: 36139616 PMCID: PMC9496685 DOI: 10.3390/cancers14184457] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2022] [Revised: 09/12/2022] [Accepted: 09/13/2022] [Indexed: 11/18/2022] Open
Abstract
(1) Background: Gliomas are the most common primary brain neoplasms, accounting for roughly 40–50% of all malignant primary central nervous system tumors. We aim to develop a deep learning-based framework for automated segmentation and prediction of biomarkers and prognosis in patients with gliomas. (2) Methods: In this retrospective two-center study, patients were included if they (1) had a diagnosis of glioma with known surgical histopathology and (2) had preoperative MRI with a FLAIR sequence. The entire tumor volume, including the FLAIR-hyperintense infiltrative component and the necrotic and cystic components, was segmented. A deep learning-based U-Net framework with a symmetric architecture was developed using the 512 × 512 segmented maps from FLAIR as the ground-truth mask. (3) Results: The final cohort consisted of 208 patients with a mean ± standard deviation age of 56 ± 15 years and an M/F ratio of 130/78. The DSC of the generated mask was 0.93. Prediction of IDH-1 and MGMT status had a performance of AUC 0.88 and 0.62, respectively. Survival prediction of <18 months demonstrated an AUC of 0.75. (4) Conclusions: Our deep learning-based framework can detect and segment gliomas with excellent performance for the prediction of IDH-1 biomarker status and survival.
Affiliation(s)
- Shingo Kihira
- Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Department of Radiological Sciences, David Geffen School of Medicine at University of California Los Angeles, Los Angeles, CA 90033, USA
| | - Xueyan Mei
- Biomedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
| | - Keon Mahmoudi
- Department of Radiological Sciences, David Geffen School of Medicine at University of California Los Angeles, Los Angeles, CA 90033, USA
| | - Zelong Liu
- Biomedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
| | - Siddhant Dogra
- Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
| | - Puneet Belani
- Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
| | - Nadejda Tsankova
- Department of Pathology, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
| | - Adilia Hormigo
- Department of Pathology, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
| | - Zahi A. Fayad
- Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Biomedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
| | - Amish Doshi
- Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
| | - Kambiz Nael
- Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Department of Radiological Sciences, David Geffen School of Medicine at University of California Los Angeles, Los Angeles, CA 90033, USA
| |
|
14
|
Dual attention-guided and learnable spatial transformation data augmentation multi-modal unsupervised medical image segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103849] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
|
15
|
Khorasani A, Kafieh R, Saboori M, Tavakoli MB. Glioma segmentation with DWI weighted images, conventional anatomical images, and post-contrast enhancement magnetic resonance imaging images by U-Net. Phys Eng Sci Med 2022; 45:925-934. [PMID: 35997927 DOI: 10.1007/s13246-022-01164-w] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2022] [Accepted: 07/16/2022] [Indexed: 11/24/2022]
Abstract
Glioma segmentation is believed to be one of the most important stages of treatment management. Recent developments in magnetic resonance imaging (MRI) protocols have led to renewed interest in automatic glioma segmentation with different MRI image weights, and U-Net is a major area of interest within this field. This paper examines the impact of different input MRI image weights on U-Net output performance for glioma segmentation. One hundred forty-nine glioma patients were scanned with a 1.5T MRI scanner. The MRI image weights acquired were diffusion-weighted imaging (DWI) weighted images (b50, b500, b1000, apparent diffusion coefficient (ADC) map, exponential apparent diffusion coefficient (eADC) map), anatomical image weights (T2, T1, T2-FLAIR), and post-contrast enhancement image weights (T1Gd). U-Net with data augmentation was used to segment the glioma tumors, and the Dice coefficient and accuracy enabled comparison with previous studies. A first set of analyses examined the impact of the epoch number on U-Net accuracy, and n_epoch = 20 was selected for U-Net training. The mean Dice coefficients for the b50, b500, b1000, ADC map, eADC map, T2, T1, T2-FLAIR, and T1Gd image weights for glioma segmentation with U-Net were 0.892, 0.872, 0.752, 0.931, 0.944, 0.762, 0.721, 0.896, and 0.694, respectively. This study found that DWI image weights have a higher diagnostic value for glioma segmentation with U-Net than anatomical and post-contrast enhancement image weights, with the ADC and eADC maps giving the highest segmentation performance.
Affiliation(s)
- Amir Khorasani
- Department of Medical Physics, School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
| | - Rahele Kafieh
- Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran.,Department of Engineering, Durham University, Durham, UK
| | - Masih Saboori
- Department of Neurosurgery, School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
| | - Mohamad Bagher Tavakoli
- Department of Medical Physics, School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran.
| |
|
16
|
MRF-IUNet: A Multiresolution Fusion Brain Tumor Segmentation Network Based on Improved Inception U-Net. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:6305748. [PMID: 35966244 PMCID: PMC9371863 DOI: 10.1155/2022/6305748] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/16/2022] [Revised: 07/12/2022] [Accepted: 07/20/2022] [Indexed: 11/17/2022]
Abstract
The automatic segmentation of MRI brain tumors uses computer technology to segment and label tumor areas and normal tissues, which plays an important role in assisting doctors in the clinical diagnosis and treatment of brain tumors. This paper proposes a multiresolution fusion MRI brain tumor segmentation algorithm based on an improved inception U-Net, named MRF-IUNet (multiresolution fusion inception U-Net). By replacing the original convolution modules in U-Net with inception modules, the width and depth of the network are increased. The inception module connects convolution kernels of different sizes in parallel to obtain receptive fields of different sizes, which can extract features at different scales. To reduce the loss of detailed information during downsampling, atrous convolutions are introduced in the inception module to expand the receptive field. Multiresolution feature fusion modules are connected between the encoder and decoder of the proposed network to fuse the semantic features learned by the deeper layers with the spatial detail features learned by the early layers, which improves the recognition and segmentation of local detail features and effectively improves segmentation accuracy. Experimental results on the BraTS (Multimodal Brain Tumor Segmentation Challenge) dataset show that the Dice similarity coefficient (DSC) obtained by the proposed method is 0.94 for the enhancing tumor area, 0.83 for the whole tumor area, and 0.93 for the tumor core area.
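The receptive-field expansion that atrous (dilated) convolution provides can be illustrated in one dimension; this is a toy sketch of the general technique, not the MRF-IUNet implementation:

```python
def dilated_conv1d(signal, kernel, dilation=1):
    """'Valid' 1-D convolution whose kernel taps are spaced `dilation`
    samples apart, widening coverage without adding parameters."""
    span = (len(kernel) - 1) * dilation + 1  # samples covered by one output
    return [sum(kernel[j] * signal[i + j * dilation] for j in range(len(kernel)))
            for i in range(len(signal) - span + 1)]

def receptive_field(kernel_size, dilations):
    """Receptive field of stacked stride-1 dilated conv layers."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf
```

With a 3-tap kernel, dilation 2 makes each output depend on 5 input samples instead of 3; stacking 3x3-style layers with dilations 1, 2, 4 grows the receptive field to 15 samples while the parameter count stays fixed.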
|
17
|
Amiri Tehrani Zade A, Aziz MJ, Masoudnia S, Mirbagheri A, Ahmadian A. An improved capsule network for glioma segmentation on MRI images: A curriculum learning approach. Comput Biol Med 2022; 148:105917. [DOI: 10.1016/j.compbiomed.2022.105917] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2022] [Revised: 07/07/2022] [Accepted: 07/23/2022] [Indexed: 11/03/2022]
|
18
|
Matsubara K, Ibaraki M, Kinoshita T. DeepPVC: prediction of a partial volume-corrected map for brain positron emission tomography studies via a deep convolutional neural network. EJNMMI Phys 2022; 9:50. [PMID: 35907100 PMCID: PMC9339068 DOI: 10.1186/s40658-022-00478-8] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2021] [Accepted: 07/20/2022] [Indexed: 12/04/2022] Open
Abstract
Background: Partial volume correction with anatomical magnetic resonance (MR) images (MR-PVC) is useful for accurately quantifying tracer uptake on brain positron emission tomography (PET) images. However, the MR segmentation processes required for MR-PVC are time-consuming and prevent its widespread clinical use. Here, we aimed to develop a deep learning model to directly predict PV-corrected maps from PET and MR images, ultimately improving MR-PVC throughput. Methods: We used MR T1-weighted and [11C]PiB PET images as input data from 192 participants from the Alzheimer's Disease Neuroimaging Initiative database. We calculated PV-corrected maps as the training target using the region-based voxel-wise (RBV) PVC method. A two-dimensional U-Net model was trained and validated by sixfold cross-validation on the dataset from 156 participants, and then tested using MR T1-weighted and [11C]PiB PET images from 36 participants acquired at sites other than those in the training dataset. We calculated the structural similarity index (SSIM) of the PV-corrected maps and the intraclass correlation (ICC) of the PV-corrected standardized uptake value between RBV PVC and deepPVC as indicators for validation and testing. Results: A high SSIM (0.884 ± 0.021) and ICC (0.921 ± 0.042) were observed in the validation data, as well as in the test data (SSIM, 0.876 ± 0.028; ICC, 0.894 ± 0.051). The computation time required to predict a PV-corrected map for a participant (48 s without a graphics processing unit) was much shorter than that of the RBV PVC and MR segmentation processes. Conclusion: These results suggest that the deepPVC model directly predicts PV-corrected maps from MR and PET images and improves MR-PVC throughput by skipping the MR segmentation processes.
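The structural similarity index used above for validation can be illustrated in its simplest global (single-window) form; practical SSIM uses local sliding windows, so this is only an illustrative sketch with the standard constants c1 = (0.01L)^2 and c2 = (0.03L)^2 for dynamic range L = 1:

```python
def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Single-window SSIM between two equal-length intensity lists:
    ((2*mu_x*mu_y + c1)(2*cov + c2)) / ((mu_x^2 + mu_y^2 + c1)(var_x + var_y + c2))."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score 1.0; the score falls (and can go negative, via the covariance term) as luminance, contrast, or structure diverge.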
Affiliation(s)
- Keisuke Matsubara
- Department of Management Science and Engineering, Faculty of System Science and Technology, Akita Prefectural University, 84-4 Aza Ebinokuchi Tsuchiya, Yurihonjo, 015-0055, Japan. .,Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, 010-0874, Japan.
| | - Masanobu Ibaraki
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, 010-0874, Japan
| | - Toshibumi Kinoshita
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, 010-0874, Japan
| | | |
|
19
|
Akinyelu AA, Zaccagna F, Grist JT, Castelli M, Rundo L. Brain Tumor Diagnosis Using Machine Learning, Convolutional Neural Networks, Capsule Neural Networks and Vision Transformers, Applied to MRI: A Survey. J Imaging 2022; 8:205. [PMID: 35893083 PMCID: PMC9331677 DOI: 10.3390/jimaging8080205] [Citation(s) in RCA: 23] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2022] [Revised: 06/20/2022] [Accepted: 07/12/2022] [Indexed: 02/01/2023] Open
Abstract
Management of brain tumors is based on clinical and radiological information, with the presumed grade dictating treatment. Hence, a non-invasive assessment of tumor grade is of paramount importance for choosing the best treatment plan. Convolutional Neural Networks (CNNs) represent one of the effective Deep Learning (DL)-based techniques that have been used for brain tumor diagnosis. However, they are unable to handle input modifications effectively. Capsule neural networks (CapsNets) are a novel type of machine learning (ML) architecture recently developed to address the drawbacks of CNNs. CapsNets are resistant to rotations and affine translations, which is beneficial when processing medical imaging datasets. Moreover, Vision Transformer (ViT)-based solutions have recently been proposed to address the issue of long-range dependency in CNNs. This survey provides a comprehensive overview of brain tumor classification and segmentation techniques, with a focus on ML-based, CNN-based, CapsNet-based, and ViT-based techniques. It highlights the fundamental contributions of recent studies and the performance of state-of-the-art techniques, presents an in-depth discussion of crucial issues and open challenges, and identifies key limitations and promising future research directions. We envisage that this survey will serve as a good springboard for further study.
Affiliation(s)
- Andronicus A. Akinyelu
- NOVA Information Management School (NOVA IMS), Universidade NOVA de Lisboa, Campus de Campolide, 1070-312 Lisboa, Portugal;
- Department of Computer Science and Informatics, University of the Free State, Phuthaditjhaba 9866, South Africa
| | - Fulvio Zaccagna
- Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum-University of Bologna, 40138 Bologna, Italy;
- IRCCS Istituto delle Scienze Neurologiche di Bologna, Functional and Molecular Neuroimaging Unit, 40139 Bologna, Italy
| | - James T. Grist
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, UK;
- Department of Radiology, Oxford University Hospitals NHS Foundation Trust, Oxford OX3 9DU, UK
- Oxford Centre for Clinical Magnetic Research Imaging, University of Oxford, Oxford OX3 9DU, UK
- Institute of Cancer and Genomic Sciences, University of Birmingham, Birmingham B15 2SY, UK
| | - Mauro Castelli
- NOVA Information Management School (NOVA IMS), Universidade NOVA de Lisboa, Campus de Campolide, 1070-312 Lisboa, Portugal;
| | - Leonardo Rundo
- Department of Information and Electrical Engineering and Applied Mathematics, University of Salerno, 84084 Fisciano, Italy
| |
|
20
|
Shaukat Z, Farooq QUA, Tu S, Xiao C, Ali S. A state-of-the-art technique to perform cloud-based semantic segmentation using deep learning 3D U-Net architecture. BMC Bioinformatics 2022; 23:251. [PMID: 35751030 PMCID: PMC9229514 DOI: 10.1186/s12859-022-04794-9] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2022] [Accepted: 06/15/2022] [Indexed: 11/11/2022] Open
Abstract
Glioma is the most aggressive and dangerous primary brain tumor, with a survival time of less than 14 months. Segmentation of tumors is a necessary task in the image processing of gliomas and is important for timely diagnosis and starting treatment. In this paper, we present a cloud-based 3D U-Net method to perform brain tumor segmentation using the BraTS dataset. The network was trained with the Adam optimizer over multiple hyperparameter settings. We obtained an average Dice score of 95%, computed using the Sørensen-Dice similarity coefficient, making ours the first cloud-based method to achieve this accuracy. We also performed an extensive literature review of the brain tumor segmentation methods implemented in the last five years to obtain a state-of-the-art picture of well-known methodologies with a higher Dice score. In comparison to already implemented architectures, our method ranks at the top in accuracy among cloud-based 3D U-Net frameworks for glioma segmentation.
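The Sørensen-Dice similarity coefficient used to compute the reported Dice score is, for binary masks A and B, 2|A ∩ B| / (|A| + |B|); a minimal sketch over flattened masks:

```python
def dice(mask_a, mask_b):
    """Sørensen-Dice coefficient between two flat binary masks.
    Two empty masks are treated as a perfect match by convention."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0
```

For instance, masks `[1, 1, 0, 0]` and `[1, 0, 1, 0]` overlap in one voxel out of two foreground voxels each, giving a Dice of 2 * 1 / (2 + 2) = 0.5.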
Affiliation(s)
- Zeeshan Shaukat
- Faculty of Information Technology, Beijing University of Technology, Beijing, People's Republic of China.
- Faculty of Computer Science, University of South Asia, Lahore, Pakistan.
| | - Qurat Ul Ain Farooq
- Faculty of Environmental and Life Sciences, Beijing University of Technology, Beijing, People's Republic of China
| | - Shanshan Tu
- Faculty of Information Technology, Beijing University of Technology, Beijing, People's Republic of China
| | - Chuangbai Xiao
- Faculty of Information Technology, Beijing University of Technology, Beijing, People's Republic of China.
| | - Saqib Ali
- Faculty of Information Technology, Beijing University of Technology, Beijing, People's Republic of China
| |
|
21
|
Das S, Nayak GK, Saba L, Kalra M, Suri JS, Saxena S. An artificial intelligence framework and its bias for brain tumor segmentation: A narrative review. Comput Biol Med 2022; 143:105273. [PMID: 35228172 DOI: 10.1016/j.compbiomed.2022.105273] [Citation(s) in RCA: 32] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2021] [Revised: 01/15/2022] [Accepted: 01/24/2022] [Indexed: 02/06/2023]
Abstract
BACKGROUND Artificial intelligence (AI) has become a prominent technique for medical diagnosis and plays an essential role in detecting brain tumors. Although AI-based models are widely used in brain lesion segmentation (BLS), understanding their effectiveness is challenging due to their complexity and diversity. Several reviews on brain tumor segmentation are available, but none describe a link between the threats due to risk-of-bias (RoB) in AI and its architectures. In our review, we focused on linking RoB with the different AI-based architectural clusters in popular DL frameworks. Further, given the variance in these designs and in input data types in medical imaging, it is necessary to present a narrative review considering all facets of BLS. APPROACH The proposed study uses a PRISMA strategy based on 75 relevant studies found by searching PubMed, Scopus, and Google Scholar. Based on their architectural evolution, DL studies were categorized into four classes: convolutional neural network (CNN)-based, encoder-decoder (ED)-based, transfer learning (TL)-based, and hybrid DL (HDL)-based architectures. These studies were then analyzed considering 32 AI attributes, grouped into clusters covering AI architecture, imaging modalities, hyperparameters, performance evaluation metrics, and clinical evaluation. After the studies were scored on all attributes, a composite score was computed, normalized, and ranked. Thereafter, a bias cutoff (AP(ai)Bias 1.0, AtheroPoint, Roseville, CA, USA) was established to detect low-, moderate-, and high-bias studies. CONCLUSION The four classes of architectures, from best- to worst-performing, are TL > ED > CNN > HDL. ED-based models had the lowest AI bias for BLS. This study presents a set of three primary and six secondary recommendations for lowering the RoB.
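The composite-scoring step described in the approach (score attributes per study, aggregate, normalize, rank, apply a cutoff) can be sketched generically; the mean aggregation and the 0.5 cutoff below are hypothetical stand-ins for illustration, not the AP(ai)Bias 1.0 procedure:

```python
def rank_bias(attribute_scores, low_cut=0.5):
    """Toy bias ranking: composite = mean attribute score per study,
    min-max normalized across studies; studies whose normalized score
    falls below `low_cut` are flagged as higher-bias. Both the
    aggregation rule and the cutoff are illustrative assumptions."""
    composite = {s: sum(v) / len(v) for s, v in attribute_scores.items()}
    lo, hi = min(composite.values()), max(composite.values())
    norm = {s: (c - lo) / (hi - lo) if hi > lo else 1.0
            for s, c in composite.items()}
    ranked = sorted(norm, key=norm.get, reverse=True)   # best first
    flagged = [s for s in ranked if norm[s] < low_cut]  # higher-bias studies
    return ranked, flagged
```

For three studies scored on three binary attributes, the study meeting none of them is ranked last and flagged.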
Affiliation(s)
- Suchismita Das
- CSE Department, International Institute of Information Technology, Bhubaneswar, Odisha, India; CSE Department, KIIT Deemed to be University, Bhubaneswar, Odisha, India
| | - G K Nayak
- CSE Department, International Institute of Information Technology, Bhubaneswar, Odisha, India
| | - Luca Saba
- Department of Radiology, AOU, University of Cagliari, Cagliari, Italy
| | - Mannudeep Kalra
- Department of Radiology, Massachusetts General Hospital, 55 Fruit Street, Boston, MA, USA
| | - Jasjit S Suri
- Stroke Diagnostic and Monitoring Division, AtheroPoint™ LLC, Roseville, CA, USA.
| | - Sanjay Saxena
- CSE Department, International Institute of Information Technology, Bhubaneswar, Odisha, India
| |
|
22
|
Intelligent Model for Brain Tumor Identification Using Deep Learning. APPLIED COMPUTATIONAL INTELLIGENCE AND SOFT COMPUTING 2022. [DOI: 10.1155/2022/8104054] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Brain tumors can be a major cause of psychiatric complications such as depression and panic attacks, and quick and timely recognition of a brain tumor makes its treatment more effective. The processing of medical images plays a crucial role in assisting humans to identify different diseases. The classification of brain tumors is a significant task that depends on the expertise and knowledge of the physician, so an intelligent system for detecting and classifying brain tumors is essential to help physicians. The novel feature of the study is the division of brain tumors into glioma, meningioma, and pituitary classes using a hierarchical deep learning method. Diagnosis and tumor classification are significant for a quick and productive cure, and medical image processing using a convolutional neural network (CNN) is giving excellent outcomes in this capacity. The CNN is trained on image fragments and classifies them into tumor types. Hierarchical Deep Learning-Based Brain Tumor (HDL2BT) classification is proposed with the help of a CNN for the detection and classification of brain tumors. The proposed system categorizes the tumor into four types: glioma, meningioma, pituitary, and no-tumor. The suggested model achieves 92.13% precision with a miss rate of 7.87%, superior to earlier methods for detecting and segmenting brain tumors. The proposed system will provide clinical assistance in the area of medicine.
|
23
|
Machine Learning in Medical Imaging – Clinical Applications and Challenges in Computer Vision. Artif Intell Med 2022. [DOI: 10.1007/978-981-19-1223-8_4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
|
24
|
Nowakowski A, Lahijanian Z, Panet-Raymond V, Siegel PM, Petrecca K, Maleki F, Dankner M. Radiomics as an emerging tool in the management of brain metastases. Neurooncol Adv 2022; 4:vdac141. [PMID: 36284932 PMCID: PMC9583687 DOI: 10.1093/noajnl/vdac141] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
Brain metastases (BM) are associated with significant morbidity and mortality in patients with advanced cancer. Despite significant advances in surgical, radiation, and systemic therapy in recent years, the median overall survival of patients with BM is less than 1 year. The acquisition of medical images, such as computed tomography (CT) and magnetic resonance imaging (MRI), is critical for the diagnosis and stratification of patients to appropriate treatments. Radiomic analyses have the potential to improve the standard of care for patients with BM by applying artificial intelligence (AI) with already acquired medical images to predict clinical outcomes and direct the personalized care of BM patients. Herein, we outline the existing literature applying radiomics for the clinical management of BM. This includes predicting patient response to radiotherapy and identifying radiation necrosis, performing virtual biopsies to predict tumor mutation status, and determining the cancer of origin in brain tumors identified via imaging. With further development, radiomics has the potential to aid in BM patient stratification while circumventing the need for invasive tissue sampling, particularly for patients not eligible for surgical resection.
Affiliation(s)
- Alexander Nowakowski
- Rosalind and Morris Goodman Cancer Institute, McGill University, Montreal, Québec, Canada
| | - Zubin Lahijanian
- McGill University Health Centre, Department of Diagnostic Radiology, McGill University, Montreal, Québec, Canada
| | - Valerie Panet-Raymond
- McGill University Health Centre, Department of Diagnostic Radiology, McGill University, Montreal, Québec, Canada
| | - Peter M Siegel
- Rosalind and Morris Goodman Cancer Institute, McGill University, Montreal, Québec, Canada
| | - Kevin Petrecca
- Montreal Neurological Institute-Hospital, McGill University, Montreal, Québec, Canada
| | - Farhad Maleki
- Department of Computer Science, University of Calgary, Calgary, Alberta, Canada
| | - Matthew Dankner
- Rosalind and Morris Goodman Cancer Institute, McGill University, Montreal, Québec, Canada
| |
|
25
|
Li B, Liu C, Wu S, Li G. Verte-Box: A Novel Convolutional Neural Network for Fully Automatic Segmentation of Vertebrae in CT Image. Tomography 2022; 8:45-58. [PMID: 35076631 PMCID: PMC8788501 DOI: 10.3390/tomography8010005] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2021] [Revised: 12/14/2021] [Accepted: 12/17/2021] [Indexed: 12/19/2022] Open
Abstract
Due to the complex shape of the vertebrae and the background containing a lot of interference information, it is difficult to accurately segment the vertebrae from the computed tomography (CT) volume by manual segmentation. This paper proposes a convolutional neural network for vertebrae segmentation, named Verte-Box. Firstly, in order to enhance feature representation and suppress interference information, this paper places a robust attention mechanism on the central processing unit, including a channel attention module and a dual attention module. The channel attention module is used to explore and emphasize the interdependence between channel graphs of low-level features. The dual attention module is used to enhance features along the location and channel dimensions. Secondly, we design a multi-scale convolution block to the network, which can make full use of different combinations of receptive field sizes and significantly improve the network’s perception of the shape and size of the vertebrae. In addition, we connect the rough segmentation prediction maps generated by each feature in the feature box to generate the final fine prediction result. Therefore, the deep supervision network can effectively capture vertebrae information. We evaluated our method on the publicly available dataset of the CSI 2014 Vertebral Segmentation Challenge and achieved a mean Dice similarity coefficient of 92.18 ± 0.45%, an intersection over union of 87.29 ± 0.58%, and a 95% Hausdorff distance of 7.7107 ± 0.5958, outperforming other algorithms.
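The overlap and boundary metrics reported in this abstract (intersection over union and the 95% Hausdorff distance) can be sketched in plain Python; the nearest-rank percentile convention used below is one common choice and may differ from the authors' evaluation toolkit:

```python
import math

def iou(mask_a, mask_b):
    """Intersection over union of two flat binary masks."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    union = sum(a or b for a, b in zip(mask_a, mask_b))
    return inter / union if union else 1.0

def hausdorff_95(points_a, points_b):
    """Symmetric 95th-percentile Hausdorff distance between two 2-D
    boundary point sets (brute force, nearest-rank percentile)."""
    def directed(src, dst):
        # sorted nearest-neighbor distance from each src point to dst
        return sorted(min(math.dist(p, q) for q in dst) for p in src)
    def pct95(ds):
        return ds[max(0, math.ceil(0.95 * len(ds)) - 1)]
    return max(pct95(directed(points_a, points_b)),
               pct95(directed(points_b, points_a)))
```

Taking the 95th percentile instead of the maximum makes the boundary metric robust to a few outlier points, which is why HD95 is the usual choice for segmentation evaluation.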
Affiliation(s)
- Bing Li
- School of Automation, Harbin University of Science and Technology, Harbin 150080, China; (C.L.); (S.W.); (G.L.)
- Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin 150080, China
- Correspondence:
| | - Chuang Liu
- School of Automation, Harbin University of Science and Technology, Harbin 150080, China; (C.L.); (S.W.); (G.L.)
| | - Shaoyong Wu
- School of Automation, Harbin University of Science and Technology, Harbin 150080, China; (C.L.); (S.W.); (G.L.)
| | - Guangqing Li
- School of Automation, Harbin University of Science and Technology, Harbin 150080, China; (C.L.); (S.W.); (G.L.)
| |
|
26
|
Automated Detection of Brain Tumor through Magnetic Resonance Images Using Convolutional Neural Network. BIOMED RESEARCH INTERNATIONAL 2021; 2021:3365043. [PMID: 34912889 PMCID: PMC8668304 DOI: 10.1155/2021/3365043] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/29/2021] [Revised: 10/20/2021] [Accepted: 11/16/2021] [Indexed: 12/30/2022]
Abstract
A brain tumor is a potentially fatal disease caused by the growth of abnormal cells in brain tissue; early and accurate detection can therefore save a patient's life. This paper proposes a novel framework for detecting brain tumors in magnetic resonance (MR) images, based on a fully convolutional neural network (FCNN) and transfer learning. The framework has five stages: preprocessing, skull stripping, CNN-based tumor segmentation, postprocessing, and transfer-learning-based binary classification of brain tumors. In preprocessing, the MR images are filtered to eliminate noise and improve contrast. The proposed CNN architecture is used to segment the tumor images, and in postprocessing a global threshold technique eliminates small non-tumor regions, enhancing the segmentation results. For classification, the GoogLeNet model is employed on three publicly available datasets. Experimental results show that the proposed method achieved average accuracies of 96.50%, 97.50%, and 98% for segmentation and 96.49%, 97.31%, and 98.79% for classification on the BRATS2018, BRATS2019, and BRATS2020 datasets, respectively. The framework is effective and efficient, attaining higher performance on the BRATS2020 dataset than on the other two, and outperforms other recent studies in the literature. In addition, this research can support doctors and clinicians in the automatic diagnosis of brain tumors.
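The postprocessing step described above — removing small non-tumor regions from a binary segmentation — can be sketched with a connected-component sweep. This is a simplified 2D, 4-connectivity illustration in plain Python under assumed inputs (lists of 0/1 values); the paper's actual threshold value and implementation are not specified, and production code would typically use an image-processing library.

```python
from collections import deque

def remove_small_regions(mask, min_size):
    """Zero out connected components smaller than min_size voxels (4-connectivity).

    mask: 2D list of lists of 0/1. Returns a new mask; the input is not modified.
    """
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    seen = [[False] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                # Flood-fill this component, collecting its coordinates.
                comp, queue = [], deque([(i, j)])
                seen[i][j] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                # Erase components below the size threshold.
                if len(comp) < min_size:
                    for y, x in comp:
                        out[y][x] = 0
    return out

# A 3-voxel blob survives; an isolated voxel is removed.
mask = [[1, 1, 0],
        [1, 0, 0],
        [0, 0, 1]]
print(remove_small_regions(mask, 2))  # → [[1, 1, 0], [1, 0, 0], [0, 0, 0]]
```

The same idea extends to 3D volumes by adding the two through-plane neighbors to the offset list.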
Collapse
|
27
|
Foundations of Lesion Detection Using Machine Learning in Clinical Neuroimaging. ACTA NEUROCHIRURGICA. SUPPLEMENT 2021; 134:171-182. [PMID: 34862541 DOI: 10.1007/978-3-030-85292-4_21] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
This chapter describes technical considerations and current and future clinical applications of machine-learning-based lesion detection in the clinical setting. Lesion detection is central to neuroradiology and precedes all further processes, including but not limited to lesion characterization, quantification, longitudinal disease assessment, prognosis, and prediction of treatment response. A number of machine learning algorithms focusing on lesion detection have been developed or are currently under development and may either support or extend the imaging process. Examples include machine learning applications in stroke, aneurysms, multiple sclerosis, neuro-oncology, neurodegeneration, and epilepsy.
Collapse
|
28
|
de Dios E, Ali MB, Gu IYH, Vecchio TG, Ge C, Jakola AS. Introduction to Deep Learning in Clinical Neuroscience. ACTA NEUROCHIRURGICA. SUPPLEMENT 2021; 134:79-89. [PMID: 34862531 DOI: 10.1007/978-3-030-85292-4_11] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
The use of deep learning (DL) is rapidly increasing in clinical neuroscience. The term denotes models with multiple sequential layers of learning algorithms, architecturally similar to the neural networks of the brain. We provide examples of DL in analyzing MRI data and discuss potential applications and methodological caveats. Important aspects are data pre-processing, volumetric segmentation, and specific task-performing DL methods, such as CNNs and AEs. Additionally, GAN expansion and domain mapping are useful DL techniques for generating artificial data and combining several smaller datasets. We present results of DL-based segmentation and accuracy in predicting glioma subtypes from MRI features. Dice scores range from 0.77 to 0.89. In mixed glioma cohorts, IDH mutation can be predicted with a sensitivity of 0.98 and a specificity of 0.97. Results in test cohorts have shown improvements of 5-7% in accuracy following GAN expansion of data and domain mapping of smaller datasets. The DL examples provided are promising, although not yet in clinical practice. DL has demonstrated usefulness in data augmentation and in overcoming data variability. DL methods should be further studied, developed, and validated for broader clinical use. Ultimately, DL models can serve as effective decision-support systems, and they are especially well suited for time-consuming, detail-focused, and data-rich tasks.
Collapse
Affiliation(s)
- Eddie de Dios
- Department of Neurosurgery, Sahlgrenska University Hospital, Gothenburg, Sweden
| | - Muhaddisa Barat Ali
- Department of Electrical Engineering, Chalmers University of Technology, Gothenburg, Sweden
| | - Irene Yu-Hua Gu
- Department of Electrical Engineering, Chalmers University of Technology, Gothenburg, Sweden
| | - Tomás Gomez Vecchio
- Department of Clinical Neuroscience, Institute of Neuroscience and Physiology, University of Gothenburg, Sahlgrenska Academy, Gothenburg, Sweden
| | - Chenjie Ge
- Department of Electrical Engineering, Chalmers University of Technology, Gothenburg, Sweden
| | - Asgeir S Jakola
- Department of Neurosurgery, Sahlgrenska University Hospital, Gothenburg, Sweden. .,Department of Clinical Neuroscience, Institute of Neuroscience and Physiology, University of Gothenburg, Sahlgrenska Academy, Gothenburg, Sweden. .,Department of Neurosurgery, St. Olavs University Hospital HF, Trondheim, Norway.
| |
Collapse
|
29
|
Parkinson C, Matthams C, Foley K, Spezi E. Artificial intelligence in radiation oncology: A review of its current status and potential application for the radiotherapy workforce. Radiography (Lond) 2021; 27 Suppl 1:S63-S68. [PMID: 34493445 DOI: 10.1016/j.radi.2021.07.012] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2021] [Revised: 07/05/2021] [Accepted: 07/20/2021] [Indexed: 12/15/2022]
Abstract
OBJECTIVE Radiation oncology is a continually evolving speciality. With the development of new imaging modalities and advanced image-processing techniques, an increasing amount of data is available to practitioners. In this narrative review, Artificial Intelligence (AI) is used to refer to machine learning, and its potential, along with current problems in the field of radiation oncology, is considered from a technical position. KEY FINDINGS AI has the potential to harness the available data to improve patient outcomes, reduce toxicity, and ease clinical burdens. However, problems remain, including the complexity of the required data, undefined core outcomes, and limited generalisability. CONCLUSION This review highlights considerations for the radiotherapy workforce, particularly therapeutic radiographers, who will increasingly need familiarity with AI given their unique position at the interface between imaging technology and patients. IMPLICATIONS FOR PRACTICE Collaboration between AI experts and the radiotherapy workforce is required to overcome current issues before clinical adoption. The development of educational resources and standardised reporting of AI studies may help facilitate this.
Collapse
Affiliation(s)
- C Parkinson
- School of Engineering, Cardiff University, UK.
| | | | | | - E Spezi
- School of Engineering, Cardiff University, UK
| |
Collapse
|
30
|
Wang Z, Shu X, Chen C, Teng Y, Zhang L, Xu J. A semi-symmetric domain adaptation network based on multi-level adversarial features for meningioma segmentation. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.107245] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
|
31
|
Wang YL, Zhao ZJ, Hu SY, Chang FL. CLCU-Net: Cross-level connected U-shaped network with selective feature aggregation attention module for brain tumor segmentation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 207:106154. [PMID: 34034031 DOI: 10.1016/j.cmpb.2021.106154] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/14/2020] [Accepted: 04/30/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE Brain tumors are among the deadliest cancers worldwide. Thanks to the development of deep convolutional neural networks, many brain tumor segmentation methods now help clinicians diagnose and operate. However, most of these methods make insufficient use of multi-scale features, limiting their ability to extract the features and details of brain tumors. To assist clinicians with accurate automatic segmentation of brain tumors, we built a new deep learning network that makes full use of multi-scale features to improve segmentation performance. METHODS We propose a novel cross-level connected U-shaped network (CLCU-Net) that connects features at different scales to utilize multi-scale information fully. In addition, we propose a generic attention module (Segmented Attention Module, SAM) on the connections between different-scale features for selectively aggregating them, which provides a more efficient connection across scales. Moreover, we employ deep supervision and spatial pyramid pooling (SPP) to improve the method's performance further. RESULTS We evaluated our method on the BRATS 2018 dataset using five indexes and achieved excellent performance, with a Dice score of 88.5%, a precision of 91.98%, a recall of 85.62%, 36.34M parameters, and an inference time of 8.89 ms for the whole tumor, outperforming six state-of-the-art methods. Moreover, an analysis of the heatmaps of different attention modules showed that the attention module proposed in this study is better suited to segmentation tasks than other existing popular attention modules. CONCLUSION Both the qualitative and quantitative experimental results indicate that our cross-level connected U-shaped network with a selective feature aggregation attention module achieves accurate brain tumor segmentation and could be instrumental in clinical practice.
Collapse
Affiliation(s)
- Y L Wang
- School of Control Science and Engineering, Shandong University, Jinan 250061, China
| | - Z J Zhao
- School of Control Science and Engineering, Shandong University, Jinan 250061, China.
| | - S Y Hu
- the Department of General surgery, First Affiliated Hospital of Shandong First Medical University, Jinan 250012, China
| | - F L Chang
- School of Control Science and Engineering, Shandong University, Jinan 250061, China
| |
Collapse
|
32
|
Lin M, Momin S, Lei Y, Wang H, Curran WJ, Liu T, Yang X. Fully automated segmentation of brain tumor from multiparametric MRI using 3D context deep supervised U-Net. Med Phys 2021; 48:4365-4374. [PMID: 34101845 DOI: 10.1002/mp.15032] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2020] [Revised: 05/14/2021] [Accepted: 05/31/2021] [Indexed: 12/19/2022] Open
Abstract
PURPOSE Owing to the histologic complexity of brain tumors, their diagnosis requires multiple modalities to obtain the structural information needed to delineate brain tumor subregions properly. In the current clinical workflow, physicians typically delineate brain tumor subregions slice by slice, a time-consuming process that is also susceptible to intra- and inter-rater variability, possibly leading to misclassification. To address this issue, this study aims to develop an automatic deep-learning-based segmentation of brain tumors in MR images. METHOD We develop a context deep-supervised U-Net to segment brain tumor subregions. A context block that aggregates multi-scale contextual information for dense segmentation is proposed. This approach enlarges the effective receptive field of convolutional neural networks, which in turn improves the segmentation accuracy of brain tumor subregions. We performed fivefold cross-validation on the Brain Tumor Segmentation Challenge (BraTS) 2020 training dataset; the BraTS 2020 testing datasets, obtained via the BraTS website, served as a hold-out test. For BraTS, the evaluation system divides the tumor into three regions: whole tumor (WT), tumor core (TC), and enhancing tumor (ET). The performance of the proposed method was compared against two state-of-the-art CNNs in terms of segmentation accuracy via the Dice similarity coefficient (DSC) and Hausdorff distance (HD). The tumor volumes generated by the proposed method were compared with manually contoured volumes via Bland-Altman plots and Pearson analysis. RESULTS The proposed method achieved a DSC of 0.923 ± 0.047, 0.893 ± 0.176, and 0.846 ± 0.165 and a 95% Hausdorff distance (HD95) of 3.946 ± 7.041, 3.981 ± 6.670, and 10.128 ± 51.136 mm on WT, TC, and ET, respectively. Experimental results demonstrate that our method achieved segmentation accuracies comparable to or significantly (p < 0.05) better than the two state-of-the-art CNNs. Pearson correlation analysis showed a high positive correlation between the tumor volumes generated by the proposed method and the manual contours. CONCLUSION The overall qualitative and quantitative results demonstrate the potential of translating the proposed technique into clinical practice for segmenting brain tumor subregions, further facilitating the brain tumor radiotherapy workflow.
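The Pearson correlation used above to compare automatic and manual tumor volumes can be sketched in a few lines of plain Python. The volume lists below are invented for illustration; the study's actual data are not reproduced here.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical automatic vs. manual tumor volumes (cm^3):
auto_vol = [12.1, 34.5, 8.2, 51.0, 22.7]
manual_vol = [11.8, 35.2, 8.9, 49.5, 23.1]
print(round(pearson_r(auto_vol, manual_vol), 4))
```

A value near 1 indicates the automatic volumes track the manual contours closely, which is the kind of "high positive correlation" the abstract reports; Bland-Altman analysis would additionally quantify the systematic bias between the two measurements.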
Collapse
Affiliation(s)
- Mingquan Lin
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Shadab Momin
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Hesheng Wang
- Department of Radiation Oncology, NYU Grossman School of Medicine, New York, NY, USA
| | - Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| |
Collapse
|
33
|
Fontanella MM, Bacigaluppi S, Doglietto F, Zanin L, Agosti E, Panciani P, Belotti F, Saraceno G, Spena G, Draghi R, Fiorindi A, Cornali C, Biroli A, Kivelev J, Chiesa M, Retta SF, Gasparotti R, Kato Y, Hernesniemi J, Rigamonti D. An international call for a new grading system for cerebral and cerebellar cavernomas. J Neurosurg Sci 2021; 65:239-246. [PMID: 34184861 DOI: 10.23736/s0390-5616.21.05433-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Surgical indications for cerebral cavernous malformations (CCMs) remain significantly center- and surgeon-dependent, and the available grading systems are potentially limited because they do not include epileptological and radiological data. A novel grading system is therefore proposed for supratentorial and cerebellar cavernomas that considers neuroradiological features (bleeding, increase in size), neurological status (focal deficits and seizures), lesion location, and patient age. The score ranges from -1 to 10, and surgery should be considered at a score of 4 or higher. For the neuroradiological features, 0 points are assigned if the CCM is stable in size across neuroradiological controls, 1 point if its volume increases during follow-up, 2 points if intra- or extra-lesional bleeding <1 cm is present, and 3 points if the CCM has produced a hematoma >1 cm. For focal neurological deficits, 0 points are assigned if absent and 2 points if present. For seizures, 0 points are assigned if absent, 1 point if present but controlled by medication, and 2 points if drug-resistant. For the site of the CCM, 1 point is subtracted (-1) for deep-seated lesions in a critical area (basal ganglia, thalamus), 0 points are assigned for subcortical or deep cerebellar lesions, 1 point for CCMs in a critical cortical area, and 2 points for lesions in a non-critical cortical area or a superficial cerebellar area. For age, 0 points are assigned for patients older than 50 years and 1 point for patients younger than 50. In conclusion, a novel grading system for surgical decision-making in cerebral cavernomas, based on the experience of selected neurosurgeons, basic scientists, and patients, is proposed with the aim of further improving and standardizing the treatment of CCMs. This paper is also a call for retrospective and prospective multicenter studies to test the efficacy of the grading system across centers.
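The scoring rubric described above is fully specified, so it can be encoded directly. The category key names below are illustrative choices, not terminology from the paper, and the example patient is hypothetical.

```python
# Point values transcribed from the proposed grading system.
RADIOLOGY = {"stable": 0, "growth": 1, "bleed_lt_1cm": 2, "hematoma_gt_1cm": 3}
SEIZURES = {"absent": 0, "controlled": 1, "drug_resistant": 2}
LOCATION = {
    "deep_critical": -1,             # basal ganglia, thalamus
    "subcortical_or_deep_cerebellar": 0,
    "cortical_critical": 1,
    "cortical_noncritical_or_superficial_cerebellar": 2,
}

def ccm_grade(radiology, focal_deficit, seizures, location, age):
    """Return (score, consider_surgery) per the proposed grading system.

    Score ranges from -1 to 10; surgery is considered at 4 or higher.
    """
    score = RADIOLOGY[radiology]
    score += 2 if focal_deficit else 0
    score += SEIZURES[seizures]
    score += LOCATION[location]
    score += 1 if age < 50 else 0
    return score, score >= 4

# Hypothetical patient: 45 years old, hematoma >1 cm, no focal deficit,
# medication-controlled seizures, superficial lesion: 3 + 0 + 1 + 2 + 1 = 7.
print(ccm_grade("hematoma_gt_1cm", False, "controlled",
                "cortical_noncritical_or_superficial_cerebellar", 45))  # → (7, True)
```

Encoding a rubric this way makes its extremes easy to check: the minimum (stable, deficit-free, seizure-free, deep critical lesion, age over 50) yields -1, and the maximum yields 10, matching the stated range.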
Collapse
Affiliation(s)
- Marco M Fontanella
- Unit of Neurosurgery, Department of Medical and Surgical Specialties, Radiological Sciences and Public Health, University of Brescia, Brescia, Italy
| | | | - Francesco Doglietto
- Unit of Neurosurgery, Department of Medical and Surgical Specialties, Radiological Sciences and Public Health, University of Brescia, Brescia, Italy
| | - Luca Zanin
- Unit of Neurosurgery, Department of Medical and Surgical Specialties, Radiological Sciences and Public Health, University of Brescia, Brescia, Italy -
| | - Edoardo Agosti
- Unit of Neurosurgery, Department of Medical and Surgical Specialties, Radiological Sciences and Public Health, University of Brescia, Brescia, Italy
| | - Pierpaolo Panciani
- Unit of Neurosurgery, Department of Medical and Surgical Specialties, Radiological Sciences and Public Health, University of Brescia, Brescia, Italy
| | - Francesco Belotti
- Unit of Neurosurgery, Department of Medical and Surgical Specialties, Radiological Sciences and Public Health, University of Brescia, Brescia, Italy
| | - Giorgio Saraceno
- Unit of Neurosurgery, Department of Medical and Surgical Specialties, Radiological Sciences and Public Health, University of Brescia, Brescia, Italy
| | | | - Riccardo Draghi
- Department of Neurosurgery, Maria Cecilia Hospital, GVM Care & Research, Cotignola, Ravenna, Italy
| | - Alessandro Fiorindi
- Unit of Neurosurgery, Department of Medical and Surgical Specialties, Radiological Sciences and Public Health, University of Brescia, Brescia, Italy
| | - Claudio Cornali
- Unit of Neurosurgery, Department of Medical and Surgical Specialties, Radiological Sciences and Public Health, University of Brescia, Brescia, Italy
| | - Antonio Biroli
- Unit of Neurosurgery, Department of Medical and Surgical Specialties, Radiological Sciences and Public Health, University of Brescia, Brescia, Italy
| | - Juri Kivelev
- Department of Neurosurgery, Neurocenter, Turku University Hospital, Turku, Finland
| | | | - Saverio F Retta
- Department of Clinical and Biological Sciences, University of Turin, Turin, Italy.,CCM Italian Research Network, National Coordination Center at the Department of Clinical and Biological Sciences, University of Turin, Turin, Italy
| | - Roberto Gasparotti
- Unit of Neuroradiology, Department of Medical and Surgical Specialties, Radiological Sciences and Public Health, University of Brescia, Brescia, Italy
| | - Yoko Kato
- Department of Neurosurgery, Fujita Health University Aichi, Toyoake, Japan
| | | | | |
Collapse
|
34
|
Manco L, Maffei N, Strolin S, Vichi S, Bottazzi L, Strigari L. Basic of machine learning and deep learning in imaging for medical physicists. Phys Med 2021; 83:194-205. [DOI: 10.1016/j.ejmp.2021.03.026] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/02/2020] [Revised: 03/07/2021] [Accepted: 03/16/2021] [Indexed: 02/08/2023] Open
|
35
|
Abstract
The National Cancer Institute's Quantitative Imaging Network (QIN) has thrived over the past 12 years with an emphasis on the development of image-based decision support software tools for improving measurements of imaging metrics. An overarching goal has been to develop advanced tools that could be translated into clinical trials to provide for improved prediction of response to therapeutic interventions. This article provides an overview of the successes in development and translation of new algorithms into the clinical workflow by the many research teams of the Quantitative Imaging Network.
Collapse
|