1. Batool A, Byun YC. Brain tumor detection with integrating traditional and computational intelligence approaches across diverse imaging modalities - Challenges and future directions. Comput Biol Med 2024; 175:108412. [PMID: 38691914] [DOI: 10.1016/j.compbiomed.2024.108412]
Abstract
Brain tumor segmentation and classification play a crucial role in the diagnosis and treatment planning of brain tumors. Accurate and efficient methods for identifying tumor regions and classifying different tumor types are essential for guiding medical interventions. This study comprehensively reviews brain tumor segmentation and classification techniques, exploring various approaches based on image processing, machine learning, and deep learning. Furthermore, our study aims to review existing methodologies, discuss their advantages and limitations, and highlight recent advancements in this field. The impact of existing segmentation and classification techniques for automated brain tumor detection is also critically examined using various open-source datasets of Magnetic Resonance Images (MRI) of different modalities. Moreover, our proposed study highlights the challenges related to segmentation and classification techniques and datasets having various MRI modalities to enable researchers to develop innovative and robust solutions for automated brain tumor detection. The results of this study contribute to the development of automated and robust solutions for analyzing brain tumors, ultimately aiding medical professionals in making informed decisions and providing better patient care.
Affiliation(s)
- Amreen Batool
- Department of Electronic Engineering, Institute of Information Science & Technology, Jeju National University, Jeju, 63243, South Korea
- Yung-Cheol Byun
- Department of Computer Engineering, Major of Electronic Engineering, Jeju National University, Institute of Information Science Technology, Jeju, 63243, South Korea.
2. Song J, Lu X, Gu Y. GMAlignNet: multi-scale lightweight brain tumor image segmentation with enhanced semantic information consistency. Phys Med Biol 2024; 69:115033. [PMID: 38657628] [DOI: 10.1088/1361-6560/ad4301]
Abstract
Although the U-shaped architecture, represented by UNet, has become a major network model for brain tumor segmentation, the repeated convolution and sampling operations can easily lead to the loss of crucial information. Additionally, directly fusing features from different levels without distinction can easily result in feature misalignment, affecting segmentation accuracy. On the other hand, traditional convolutional blocks used for feature extraction cannot capture the abundant multi-scale information present in brain tumor images. This paper proposes a multi-scale feature-aligned segmentation model called GMAlignNet that fully utilizes Ghost convolution to solve these problems. A Ghost hierarchical decoupled fusion unit and a Ghost hierarchical decoupled unit are used instead of standard convolutions in the encoding and decoding paths. This transformation replaces the holistic learning of volume structures by traditional convolutional blocks with multi-level learning on a specific view, facilitating the acquisition of abundant multi-scale contextual information through low-cost operations. Furthermore, a feature alignment unit is proposed that can utilize semantic information flow to guide the recovery of upsampled features. It performs pixel-level semantic information correction on features misaligned by feature fusion. The proposed method is also employed to optimize three classic networks, namely DMFNet, HDCNet, and 3D UNet, demonstrating its effectiveness in automatic brain tumor segmentation. The proposed network model was applied to the BraTS 2018 dataset, and the results indicate that the proposed GMAlignNet achieved Dice coefficients of 81.65%, 90.07%, and 85.16% for enhancing tumor, whole tumor, and tumor core segmentation, respectively. Moreover, with only 0.29 M parameters and 26.88 G FLOPs, it offers strong computational efficiency and the advantages of a lightweight design. Extensive experiments on the BraTS 2018, BraTS 2019, and BraTS 2020 datasets suggest that the proposed model exhibits better potential in handling edge details and contour recognition.
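For readers unfamiliar with Ghost convolutions, the minimal PyTorch sketch below illustrates the general idea of generating part of the feature maps with cheap depthwise operations. It is a generic GhostNet-style module, not the paper's Ghost hierarchical decoupled units, and the channel split and kernel sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GhostConv2d(nn.Module):
    """Produce half the output channels with a standard convolution and the rest
    with a cheap depthwise convolution, then concatenate the two sets of maps."""
    def __init__(self, in_ch, out_ch, kernel_size=3, cheap_kernel=3):
        super().__init__()
        primary_ch = out_ch // 2
        cheap_ch = out_ch - primary_ch
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary_ch, kernel_size, padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(primary_ch), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(primary_ch, cheap_ch, cheap_kernel, padding=cheap_kernel // 2,
                      groups=primary_ch, bias=False),  # depthwise "cheap" operation
            nn.BatchNorm2d(cheap_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        primary = self.primary(x)
        return torch.cat([primary, self.cheap(primary)], dim=1)

# Example: a 2D slice with 4 MRI channels mapped to 32 feature maps
features = GhostConv2d(4, 32)(torch.randn(1, 4, 128, 128))
print(features.shape)  # torch.Size([1, 32, 128, 128])
```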
Affiliation(s)
- Jianli Song
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Digital and Intelligent Industry, Inner Mongolia University of Science and Technology, Baotou 014010, People's Republic of China
- Xiaoqi Lu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Digital and Intelligent Industry, Inner Mongolia University of Science and Technology, Baotou 014010, People's Republic of China
- School of Information Engineering, Inner Mongolia University of Technology, Hohhot 010051, People's Republic of China
- Yu Gu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Digital and Intelligent Industry, Inner Mongolia University of Science and Technology, Baotou 014010, People's Republic of China
3. Al-Otaibi S, Rehman A, Raza A, Alyami J, Saba T. CVG-Net: novel transfer learning based deep features for diagnosis of brain tumors using MRI scans. PeerJ Comput Sci 2024; 10:e2008. [PMID: 38855235] [PMCID: PMC11157570] [DOI: 10.7717/peerj-cs.2008]
Abstract
Brain tumors present a significant medical challenge, demanding accurate and timely diagnosis for effective treatment planning. These tumors disrupt normal brain functions in various ways, giving rise to a broad spectrum of physical, cognitive, and emotional challenges. The daily increase in mortality rates attributed to brain tumors underscores the urgency of this issue. In recent years, advanced medical imaging techniques, particularly magnetic resonance imaging (MRI), have emerged as indispensable tools for diagnosing brain tumors. Brain MRI scans provide high-resolution, non-invasive visualization of brain structures, facilitating the precise detection of abnormalities such as tumors. This study aims to propose an effective neural network approach for the timely diagnosis of brain tumors. Our experiments utilized a multi-class MRI image dataset comprising 21,672 images related to glioma tumors, meningioma tumors, and pituitary tumors. We introduced a novel neural network-based feature engineering approach, combining 2D convolutional neural network (2DCNN) and VGG16. The resulting 2DCNN-VGG16 network (CVG-Net) extracted spatial features from MRI images using 2DCNN and VGG16 without human intervention. The newly created hybrid feature set is then input into machine learning models to diagnose brain tumors. We have balanced the multi-class MRI image features data using the Synthetic Minority Over-sampling Technique (SMOTE) approach. Extensive research experiments demonstrate that utilizing the proposed CVG-Net, the k-neighbors classifier outperformed state-of-the-art studies with a k-fold accuracy performance score of 0.96. We also applied hyperparameter tuning to enhance performance for multi-class brain tumor diagnosis. Our novel proposed approach has the potential to revolutionize early brain tumor diagnosis, providing medical professionals with a cost-effective and timely diagnostic mechanism.
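A minimal sketch of the overall recipe (pretrained deep features, SMOTE balancing, then a k-nearest-neighbors classifier), assuming torchvision's pretrained VGG16 as the feature extractor; the paper's 2DCNN branch, preprocessing, and hyperparameters are not reproduced here.

```python
import numpy as np
import torch
import torchvision.models as models
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
vgg.classifier = torch.nn.Identity()        # keep the flattened convolutional features
vgg.eval()

def extract_features(batch):                # batch: (N, 3, 224, 224) preprocessed slices
    with torch.no_grad():
        return vgg(batch).numpy()

X = extract_features(torch.randn(12, 3, 224, 224))   # placeholder MRI slices
y = np.array([0, 1, 2] * 4)                           # glioma / meningioma / pituitary labels

X_bal, y_bal = SMOTE(k_neighbors=3, random_state=0).fit_resample(X, y)   # class balancing
knn = KNeighborsClassifier(n_neighbors=5)
print(cross_val_score(knn, X_bal, y_bal, cv=3).mean())                   # k-fold accuracy
```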
Affiliation(s)
- Shaha Al-Otaibi
- Department of Information Systems, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Amjad Rehman
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Ali Raza
- Institute of Computer Science, Khwaja Fareed University of Engineering and Information Technology, Rahim Yar Khan, Pakistan
- Jaber Alyami
- Department of Diagnostic Radiology, Faculty of Applied Medical Sciences, King Abdulaziz University, Jeddah, Saudi Arabia
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
4. Cheng Z, Wang S, Gao Y, Zhu Z, Yan C. Invariant Content Representation for Generalizable Medical Image Segmentation. J Imaging Inform Med 2024:10.1007/s10278-024-01088-9. [PMID: 38758420] [DOI: 10.1007/s10278-024-01088-9]
Abstract
Domain generalization (DG) for medical image segmentation due to privacy preservation prefers learning from a single-source domain and expects good robustness on unseen target domains. To achieve this goal, previous methods mainly use data augmentation to expand the distribution of samples and learn invariant content from them. However, most of these methods commonly perform global augmentation, leading to limited augmented sample diversity. In addition, the style of the augmented image is more scattered than the source domain, which may cause the model to overfit the style of the source domain. To address the above issues, we propose an invariant content representation network (ICRN) to enhance the learning of invariant content and suppress the learning of variability styles. Specifically, we first design a gamma correction-based local style augmentation (LSA) to expand the distribution of samples by augmenting foreground and background styles, respectively. Then, based on the augmented samples, we introduce invariant content learning (ICL) to learn generalizable invariant content from both augmented and source-domain samples. Finally, we design domain-specific batch normalization (DSBN) based style adversarial learning (SAL) to suppress the learning of preferences for source-domain styles. Experimental results show that our proposed method improves by 8.74% and 11.33% in overall dice coefficient (Dice) and reduces 15.88 mm and 3.87 mm in overall average surface distance (ASD) on two publicly available cross-domain datasets, Fundus and Prostate, compared to the state-of-the-art DG methods. The code is available at https://github.com/ZMC-IIIM/ICRN-DG .
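A minimal sketch of gamma-correction-based local style augmentation, applying independent random gamma values to foreground and background; the gamma range and the binary mask used to define the foreground are assumptions, not the paper's settings.

```python
import numpy as np

def local_gamma_augment(image, fg_mask, gamma_range=(0.5, 2.0), rng=None):
    """image: float array scaled to [0, 1]; fg_mask: boolean foreground mask."""
    rng = np.random.default_rng() if rng is None else rng
    g_fg, g_bg = rng.uniform(*gamma_range, size=2)
    out = np.empty_like(image)
    out[fg_mask] = image[fg_mask] ** g_fg        # restyle the foreground
    out[~fg_mask] = image[~fg_mask] ** g_bg      # restyle the background differently
    return out

slice_ = np.random.rand(256, 256)                # placeholder fundus/prostate slice
mask = slice_ > 0.6                              # placeholder foreground mask
augmented = local_gamma_augment(slice_, mask)
print(augmented.shape)
```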
Affiliation(s)
- Zhiming Cheng
- School of Automation, Hangzhou Dianzi University, Hangzhou, 310018, China
- Shuai Wang
- School of Cyberspace, Hangzhou Dianzi University, Hangzhou, 310018, China.
- Suzhou Research Institute of Shandong University, SuZhou, 215123, China.
- Yuhan Gao
- School of Automation, Hangzhou Dianzi University, Hangzhou, 310018, China
- Lishui Institute of Hangzhou Dianzi University, Lishui, 323010, China
- Zunjie Zhu
- Lishui Institute of Hangzhou Dianzi University, Lishui, 323010, China
- School of Communication Engineering, Hangzhou Dianzi University, Hangzhou, 310018, China
- Chenggang Yan
- School of Communication Engineering, Hangzhou Dianzi University, Hangzhou, 310018, China
5. Ellison J, Caliva F, Damasceno P, Luks TL, LaFontaine M, Cluceru J, Kemisetti A, Li Y, Molinaro AM, Pedoia V, Villanueva-Meyer JE, Lupo JM. Improving the Generalizability of Deep Learning for T2-Lesion Segmentation of Gliomas in the Post-Treatment Setting. Bioengineering (Basel) 2024; 11:497. [PMID: 38790363] [PMCID: PMC11117752] [DOI: 10.3390/bioengineering11050497]
Abstract
Although fully automated volumetric approaches for monitoring brain tumor response have many advantages, most available deep learning models are optimized for highly curated, multi-contrast MRI from newly diagnosed gliomas, which are not representative of post-treatment cases in the clinic. Improving segmentation for treated patients is critical to accurately tracking changes in response to therapy. We investigated mixing data from newly diagnosed (n = 208) and treated (n = 221) gliomas in training, applying transfer learning (TL) from pre- to post-treatment imaging domains, and incorporating spatial regularization for T2-lesion segmentation using only T2 FLAIR images as input to improve generalization post-treatment. These approaches were evaluated on 24 patients suspected of progression who had received prior treatment. Including 26% of treated patients in training improved performance by 13.9%, and including more treated and untreated patients resulted in minimal changes. Fine-tuning with treated glioma improved sensitivity compared to data mixing by 2.5% (p < 0.05), and spatial regularization further improved performance when used with TL by 95th HD, Dice, and sensitivity (6.8%, 0.8%, 2.2%; p < 0.05). While training with ≥60 treated patients yielded the majority of performance gain, TL and spatial regularization further improved T2-lesion segmentation to treated gliomas using a single MR contrast and minimal processing, demonstrating clinical utility in response assessment.
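A minimal sketch of the pre- to post-treatment transfer-learning step: initialize from weights learned on newly diagnosed gliomas, optionally freeze early layers, and fine-tune on treated cases. The tiny stand-in network, the commented checkpoint name, and all hyperparameters are placeholders for illustration only, not the authors' code.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):               # stand-in for the FLAIR-only segmentation network
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(8, 1, 1)
    def forward(self, x):
        return self.head(self.encoder(x))

model = TinySegNet()
# model.load_state_dict(torch.load("pretrained_newly_diagnosed.pt"))  # hypothetical checkpoint

for p in model.encoder.parameters():       # optionally freeze early layers while fine-tuning
    p.requires_grad = False

optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

flair = torch.randn(2, 1, 64, 64)                           # placeholder treated-glioma FLAIR
target = torch.randint(0, 2, (2, 1, 64, 64)).float()        # placeholder T2-lesion masks
for _ in range(3):                                           # a few fine-tuning steps
    optimizer.zero_grad()
    loss = loss_fn(model(flair), target)
    loss.backward()
    optimizer.step()
```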
Affiliation(s)
- Jacob Ellison
- Department of Radiology and Biomedical Imaging, UCSF, San Francisco, CA 94143, USA; (J.E.); (F.C.); (P.D.); (T.L.L.); (M.L.); (J.C.); (A.K.); (Y.L.); (V.P.); (J.E.V.-M.)
- Center for Intelligent Imaging, UCSF, San Francisco, CA 94143, USA
- UCSF/UC Berkeley Graduate Program in Bioengineering, San Francisco, CA 94143, USA
- Francesco Caliva
- Department of Radiology and Biomedical Imaging, UCSF, San Francisco, CA 94143, USA; (J.E.); (F.C.); (P.D.); (T.L.L.); (M.L.); (J.C.); (A.K.); (Y.L.); (V.P.); (J.E.V.-M.)
- Center for Intelligent Imaging, UCSF, San Francisco, CA 94143, USA
- Pablo Damasceno
- Department of Radiology and Biomedical Imaging, UCSF, San Francisco, CA 94143, USA; (J.E.); (F.C.); (P.D.); (T.L.L.); (M.L.); (J.C.); (A.K.); (Y.L.); (V.P.); (J.E.V.-M.)
- Center for Intelligent Imaging, UCSF, San Francisco, CA 94143, USA
- Tracy L. Luks
- Department of Radiology and Biomedical Imaging, UCSF, San Francisco, CA 94143, USA; (J.E.); (F.C.); (P.D.); (T.L.L.); (M.L.); (J.C.); (A.K.); (Y.L.); (V.P.); (J.E.V.-M.)
- Marisa LaFontaine
- Department of Radiology and Biomedical Imaging, UCSF, San Francisco, CA 94143, USA; (J.E.); (F.C.); (P.D.); (T.L.L.); (M.L.); (J.C.); (A.K.); (Y.L.); (V.P.); (J.E.V.-M.)
- Julia Cluceru
- Department of Radiology and Biomedical Imaging, UCSF, San Francisco, CA 94143, USA; (J.E.); (F.C.); (P.D.); (T.L.L.); (M.L.); (J.C.); (A.K.); (Y.L.); (V.P.); (J.E.V.-M.)
- Center for Intelligent Imaging, UCSF, San Francisco, CA 94143, USA
- Anil Kemisetti
- Department of Radiology and Biomedical Imaging, UCSF, San Francisco, CA 94143, USA; (J.E.); (F.C.); (P.D.); (T.L.L.); (M.L.); (J.C.); (A.K.); (Y.L.); (V.P.); (J.E.V.-M.)
- Yan Li
- Department of Radiology and Biomedical Imaging, UCSF, San Francisco, CA 94143, USA; (J.E.); (F.C.); (P.D.); (T.L.L.); (M.L.); (J.C.); (A.K.); (Y.L.); (V.P.); (J.E.V.-M.)
- Center for Intelligent Imaging, UCSF, San Francisco, CA 94143, USA
- Valentina Pedoia
- Department of Radiology and Biomedical Imaging, UCSF, San Francisco, CA 94143, USA; (J.E.); (F.C.); (P.D.); (T.L.L.); (M.L.); (J.C.); (A.K.); (Y.L.); (V.P.); (J.E.V.-M.)
- Center for Intelligent Imaging, UCSF, San Francisco, CA 94143, USA
- UCSF/UC Berkeley Graduate Program in Bioengineering, San Francisco, CA 94143, USA
- Javier E. Villanueva-Meyer
- Department of Radiology and Biomedical Imaging, UCSF, San Francisco, CA 94143, USA; (J.E.); (F.C.); (P.D.); (T.L.L.); (M.L.); (J.C.); (A.K.); (Y.L.); (V.P.); (J.E.V.-M.)
- Center for Intelligent Imaging, UCSF, San Francisco, CA 94143, USA
- Janine M. Lupo
- Department of Radiology and Biomedical Imaging, UCSF, San Francisco, CA 94143, USA; (J.E.); (F.C.); (P.D.); (T.L.L.); (M.L.); (J.C.); (A.K.); (Y.L.); (V.P.); (J.E.V.-M.)
- Center for Intelligent Imaging, UCSF, San Francisco, CA 94143, USA
- UCSF/UC Berkeley Graduate Program in Bioengineering, San Francisco, CA 94143, USA
6. Nazir M, Shakil S, Khurshid K. End-to-End Multi-task Learning Architecture for Brain Tumor Analysis with Uncertainty Estimation in MRI Images. J Imaging Inform Med 2024:10.1007/s10278-024-01009-w. [PMID: 38565728] [DOI: 10.1007/s10278-024-01009-w]
Abstract
Brain tumors are a threat to life for every other human being, be it adults or children. Gliomas are one of the deadliest brain tumors with an extremely difficult diagnosis. The reason is their complex and heterogenous structure which gives rise to subjective as well as objective errors. Their manual segmentation is a laborious task due to their complex structure and irregular appearance. To cater to all these issues, a lot of research has been done and is going on to develop AI-based solutions that can help doctors and radiologists in the effective diagnosis of gliomas with the least subjective and objective errors, but an end-to-end system is still missing. An all-in-one framework has been proposed in this research. The developed end-to-end multi-task learning (MTL) architecture with a feature attention module can classify, segment, and predict the overall survival of gliomas by leveraging task relationships between similar tasks. Uncertainty estimation has also been incorporated into the framework to enhance the confidence level of healthcare practitioners. Extensive experimentation was performed by using combinations of MRI sequences. Brain tumor segmentation (BraTS) challenge datasets of 2019 and 2020 were used for experimental purposes. Results of the best model with four sequences show 95.1% accuracy for classification, 86.3% dice score for segmentation, and a mean absolute error (MAE) of 456.59 for survival prediction on the test data. It is evident from the results that deep learning-based MTL models have the potential to automate the whole brain tumor analysis process and give efficient results with least inference time without human intervention. Uncertainty quantification confirms the idea that more data can improve the generalization ability and in turn can produce more accurate results with less uncertainty. The proposed model has the potential to be utilized in a clinical setup for the initial screening of glioma patients.
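A minimal sketch of a shared-encoder multi-task network with classification, segmentation, and survival heads; layer sizes are illustrative, and the paper's feature-attention module and uncertainty estimation are omitted.

```python
import torch
import torch.nn as nn

class MultiTaskGliomaNet(nn.Module):
    def __init__(self, in_ch=4, n_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(                          # shared feature extractor
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.seg_head = nn.Conv2d(32, 1, 1)                    # segmentation logits
        self.cls_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(32, n_classes))   # tumor type
        self.surv_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                       nn.Linear(32, 1))          # overall survival (days)

    def forward(self, x):
        f = self.encoder(x)
        return self.seg_head(f), self.cls_head(f), self.surv_head(f)

seg, cls, surv = MultiTaskGliomaNet()(torch.randn(2, 4, 96, 96))   # 4 MRI sequences as channels
print(seg.shape, cls.shape, surv.shape)
```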
Affiliation(s)
- Maria Nazir
- Medical Imaging and Diagnostics Lab, NCAI COMSATS University Islamabad, Islamabad, Pakistan.
- iVision Lab, Department of Electrical Engineering, Institute of Space Technology, Islamabad, Pakistan.
- BiCoNeS Lab, Department of Electrical Engineering, Institute of Space Technology, Islamabad, Pakistan.
- Sadia Shakil
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Khurram Khurshid
- iVision Lab, Department of Electrical Engineering, Institute of Space Technology, Islamabad, Pakistan
7. Yang Z, Lafata K, Vaios E, Hu Z, Mullikin T, Yin FF, Wang C. Quantifying U-Net uncertainty in multi-parametric MRI-based glioma segmentation by spherical image projection. Med Phys 2024; 51:1931-1943. [PMID: 37696029] [PMCID: PMC10925552] [DOI: 10.1002/mp.16695]
Abstract
BACKGROUND Uncertainty quantification in deep learning is an important research topic. For medical image segmentation, the uncertainty measurements are usually reported as the likelihood that each pixel belongs to the predicted segmentation region. In potential clinical applications, the uncertainty result reflects the algorithm's robustness and supports the confidence and trust of the segmentation result when the ground-truth result is absent. For commonly studied deep learning models, novel methods for quantifying segmentation uncertainty are in demand. PURPOSE To develop a U-Net segmentation uncertainty quantification method based on spherical image projection of multi-parametric MRI (MP-MRI) in glioma segmentation. METHODS The projection of planar MRI data onto a spherical surface is equivalent to a nonlinear image transformation that retains global anatomical information. By incorporating this image transformation process in our proposed spherical projection-based U-Net (SPU-Net) segmentation model design, multiple independent segmentation predictions can be obtained from a single MRI. The final segmentation is the average of all available results, and the variation can be visualized as a pixel-wise uncertainty map. An uncertainty score was introduced to evaluate and compare the performance of uncertainty measurements. The proposed SPU-Net model was implemented on the basis of 369 glioma patients with MP-MRI scans (T1, T1-Ce, T2, and FLAIR). Three SPU-Net models were trained to segment enhancing tumor (ET), tumor core (TC), and whole tumor (WT), respectively. The SPU-Net model was compared with (1) the classic U-Net model with test-time augmentation (TTA) and (2) linear scaling-based U-Net (LSU-Net) segmentation models in terms of both segmentation accuracy (Dice coefficient, sensitivity, specificity, and accuracy) and segmentation uncertainty (uncertainty map and uncertainty score). RESULTS The developed SPU-Net model successfully achieved low uncertainty for correct segmentation predictions (e.g., tumor interior or healthy tissue interior) and high uncertainty for incorrect results (e.g., tumor boundaries). This model could allow the identification of missed tumor targets or segmentation errors in U-Net. Quantitatively, the SPU-Net model achieved the highest uncertainty scores for three segmentation targets (ET/TC/WT): 0.826/0.848/0.936, compared to 0.784/0.643/0.872 using the U-Net with TTA and 0.743/0.702/0.876 with the LSU-Net (scaling factor = 2). The SPU-Net also achieved statistically significantly higher Dice coefficients, underscoring the improved segmentation accuracy. CONCLUSION The SPU-Net model offers a powerful tool to quantify glioma segmentation uncertainty while improving segmentation accuracy. The proposed method can be generalized to other medical image-related deep-learning applications for uncertainty evaluation.
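A minimal sketch of the aggregation step: several independent probability maps for the same scan are averaged into a final segmentation, and their pixel-wise spread serves as the uncertainty map. The random arrays stand in for the per-projection model outputs.

```python
import numpy as np

preds = np.random.rand(8, 128, 128)          # 8 probability maps for the same slice
mean_seg = preds.mean(axis=0) > 0.5          # final segmentation: thresholded average
uncertainty = preds.std(axis=0)              # pixel-wise variation across predictions

print(int(mean_seg.sum()), float(uncertainty.max()))
```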
Affiliation(s)
- Zhenyu Yang
- Department of Radiation Oncology, Duke University, Durham, NC, USA
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu, China
- Medical Physics Graduate Program, Duke University, Durham, NC, USA
- Kyle Lafata
- Department of Radiation Oncology, Duke University, Durham, NC, USA
- Department of Radiology, Duke University, Durham, NC, USA
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA
- Eugene Vaios
- Department of Radiation Oncology, Duke University, Durham, NC, USA
- Zongsheng Hu
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- The University of Texas MD Anderson Graduate School of Biomedical Science, Houston, TX, USA
- Trey Mullikin
- Department of Radiation Oncology, Duke University, Durham, NC, USA
- Fang-Fang Yin
- Department of Radiation Oncology, Duke University, Durham, NC, USA
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu, China
- Chunhao Wang
- Department of Radiation Oncology, Duke University, Durham, NC, USA
8. Li R, Ye J, Huang Y, Jin W, Xu P, Guo L. A continuous learning approach to brain tumor segmentation: integrating multi-scale spatial distillation and pseudo-labeling strategies. Front Oncol 2024; 13:1247603. [PMID: 38260848] [PMCID: PMC10801036] [DOI: 10.3389/fonc.2023.1247603]
Abstract
Introduction This study presents a novel continuous learning framework tailored for brain tumour segmentation, addressing a critical step in both diagnosis and treatment planning. This framework addresses common challenges in brain tumour segmentation, such as computational complexity, limited generalisability, and the extensive need for manual annotation. Methods Our approach uniquely combines multi-scale spatial distillation with pseudo-labelling strategies, exploiting the coordinated capabilities of the ResNet18 and DeepLabV3+ network architectures. This integration enhances feature extraction and efficiently manages model size, promoting accurate and fast segmentation. To mitigate the problem of catastrophic forgetting during model training, our methodology incorporates a multi-scale spatial distillation scheme. This scheme is essential for maintaining model diversity and preserving knowledge from previous training phases. In addition, a confidence-based pseudo-labelling technique is employed, allowing the model to self-improve based on its predictions and ensuring a balanced treatment of data categories. Results The effectiveness of our framework has been evaluated on three publicly available datasets (BraTS2019, BraTS2020, BraTS2021) and one proprietary dataset (BraTS_FAHZU) using performance metrics such as Dice coefficient, sensitivity, specificity and Hausdorff95 distance. The results consistently show competitive performance against other state-of-the-art segmentation techniques, demonstrating improved accuracy and efficiency. Discussion This advance has significant implications for the field of medical image segmentation. Our code is freely available at https://github.com/smallboy-code/A-brain-tumor-segmentation-frameworkusing-continual-learning.
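A minimal sketch of confidence-based pseudo-labelling: only voxels with predictions far from 0.5 contribute pseudo-labels to self-training; the 0.9/0.1 thresholds are illustrative, not the paper's values.

```python
import numpy as np

probs = np.random.rand(64, 64)                     # model output on an unlabelled scan
pseudo_label = (probs >= 0.9).astype(np.uint8)     # confident foreground becomes the label
confident = (probs >= 0.9) | (probs <= 0.1)        # only these voxels enter the loss
print(int(pseudo_label.sum()), f"{confident.mean():.1%} of voxels kept")
```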
Affiliation(s)
- Ruipeng Li
- Department of Urology, Hangzhou Third People’s Hospital, Hangzhou, China
- Jianming Ye
- Department of Oncology, First Affiliated Hospital, Gannan Medical University, Ganzhou, China
- Yueqi Huang
- Department of Psychiatry, Hangzhou Seventh People’s Hospital, Hangzhou, China
- Wei Jin
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou, China
- Peng Xu
- Third Affiliated Hospital, Zhejiang Chinese Medical University, Hangzhou, China
- Lilin Guo
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou, China
9. Li C, Liu Y, Dong R, Zhang T, Song Y, Zhang Q. Deep learning radiomics on shear wave elastography and b-mode ultrasound videos of diaphragm for weaning outcome prediction. Med Eng Phys 2024; 123:104090. [PMID: 38365343] [DOI: 10.1016/j.medengphy.2023.104090]
Abstract
PURPOSE We proposed an automatic method based on deep learning radiomics (DLR) on shear wave elastography (SWE) and B-mode ultrasound videos of diaphragm for two classification tasks, one for differentiation between the control and patient groups, and the other for weaning outcome prediction. MATERIALS AND METHODS We included a total of 581 SWE and B-mode ultrasound videos, of which 466 were from the control group of 179 normal subjects, and 115 were from the patient group of 35 mechanically ventilated subjects in the intensive care unit (ICU). Among the patient group, 17 subjects successfully weaned and 18 failed. The deep neural network of U-Net was utilized to automatically segment diaphragm regions in dual-modal videos of SWE and B-mode. High-throughput radiomics features were then extracted, the statistical test and least absolute shrinkage and selection operator (LASSO) were applied for feature dimension reduction. The optimal classification models for the two tasks were established using the support vector machine (SVM). RESULTS The automatic segmentation model achieved Dice score of 87.89 %. A total of 4524 radiomics features were extracted, 10 and 20 important features were left after feature dimension reduction for constructing the two classification models. The best areas under receiver operating characteristic curves of the two models reached 84.01 % and 94.37 %, respectively. CONCLUSIONS Our proposed DLR methods are innovative for automatic segmentation of diaphragm regions in SWE and B-mode videos and deep mining of high-throughput radiomics features from dual-modal images. The approaches have been proved to be effective for prediction of weaning outcomes.
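A minimal sketch of the feature-selection and classification stage (LASSO-based dimension reduction followed by an SVM), using synthetic features in place of the radiomics features extracted from the segmented diaphragm regions; the scikit-learn components are stand-ins for the authors' exact statistical pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# 115 "videos" with 500 synthetic radiomics features each, binary weaning outcome
X, y = make_classification(n_samples=115, n_features=500, n_informative=20, random_state=0)

clf = make_pipeline(
    StandardScaler(),
    SelectFromModel(LassoCV(cv=5, max_iter=5000)),   # keep features with non-zero LASSO weights
    SVC(kernel="rbf", probability=True),
)
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```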
Affiliation(s)
- Changchun Li
- The SMART (Smart Medicine and AI-based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China; School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Yan Liu
- Department of Ultrasonography, Zhoupu Hospital Affiliated to Shanghai University of Medicine & Health Sciences, Shanghai, China
- Rui Dong
- Department of Ultrasonography, Zhoupu Hospital Affiliated to Shanghai University of Medicine & Health Sciences, Shanghai, China
- Tianjie Zhang
- Department of Ultrasonography, Zhoupu Hospital Affiliated to Shanghai University of Medicine & Health Sciences, Shanghai, China
- Ye Song
- Department of Ultrasonography, Zhoupu Hospital Affiliated to Shanghai University of Medicine & Health Sciences, Shanghai, China.
- Qi Zhang
- The SMART (Smart Medicine and AI-based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China; School of Communication and Information Engineering, Shanghai University, Shanghai, China.
10. Weninger L, Ecke J, Jütten K, Clusmann H, Wiesmann M, Merhof D, Na CH. Diffusion MRI anomaly detection in glioma patients. Sci Rep 2023; 13:20366. [PMID: 37990121] [PMCID: PMC10663596] [DOI: 10.1038/s41598-023-47563-1]
Abstract
Diffusion-MRI (dMRI) measures molecular diffusion, which makes it possible to characterize microstructural properties of the human brain. Gliomas strongly alter these microstructural properties. Delineation of brain tumors currently relies mainly on conventional MRI techniques, which are, however, known to underestimate tumor volumes in diffusely infiltrating glioma. We hypothesized that dMRI is well suited for tumor delineation, and developed two different deep-learning approaches. The first diffusion-anomaly detection architecture is a denoising autoencoder; the second consists of a reconstruction and a discrimination network. Each model was trained exclusively on non-annotated dMRI of healthy subjects and then applied to glioma patients' data. To validate these models, a state-of-the-art supervised tumor segmentation network was modified to generate groundtruth tumor volumes based on structural MRI. Compared to groundtruth segmentations, a Dice score of 0.67 ± 0.2 was obtained. Further inspection of mismatches between diffusion-anomalous regions and groundtruth segmentations revealed that these colocalized with lesions delineated only later in structural MRI follow-up data, which were not visible at the initial time of recording. Anomaly-detection methods are suitable for tumor delineation in dMRI acquisitions, and may further enhance brain-imaging analysis by detecting occult tumor infiltration in glioma patients, which could improve prognostication of disease evolution and tumor treatment strategies.
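A minimal sketch of reconstruction-based anomaly detection: an autoencoder trained only on healthy data reconstructs healthy-looking tissue, so the voxel-wise reconstruction error flags anomalous regions. The architecture, channel count, and input are illustrative, not the paper's model.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self, channels=6):                    # e.g., a few dMRI-derived maps per voxel
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, channels, 2, stride=2))
    def forward(self, x):
        return self.dec(self.enc(x))

model = ConvAutoencoder()                               # would be trained on healthy dMRI only
x = torch.randn(1, 6, 128, 128)                         # placeholder patient dMRI maps
with torch.no_grad():
    anomaly_map = (model(x) - x).pow(2).mean(dim=1)     # per-voxel reconstruction error
print(anomaly_map.shape)
```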
Affiliation(s)
- Leon Weninger
- Department of Psychiatry, Psychotherapy and Psychosomatics, RWTH Aachen University, Aachen, Germany
- Department of Electrical Engineering, RWTH Aachen University, Aachen, Germany
- Jarek Ecke
- Department of Electrical Engineering, RWTH Aachen University, Aachen, Germany
- Kerstin Jütten
- Department of Neurosurgery, RWTH Aachen University, Aachen, Germany
- Hans Clusmann
- Department of Neurosurgery, RWTH Aachen University, Aachen, Germany
- Center for Integrated Oncology Aachen Bonn Cologne Duesseldorf (CIO ABCD), Aachen, Germany
- Martin Wiesmann
- Department of Neuroradiology, RWTH Aachen University, Aachen, Germany
- Dorit Merhof
- Faculty of Informatics and Computer Science, University of Regensburg, Regensburg, Germany
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Chuh-Hyoun Na
- Department of Neurosurgery, RWTH Aachen University, Aachen, Germany.
- Center for Integrated Oncology Aachen Bonn Cologne Duesseldorf (CIO ABCD), Aachen, Germany.
11. Cheng D, Gao X, Mao Y, Xiao B, You P, Gai J, Zhu M, Kang J, Zhao F, Mao N. Brain tumor feature extraction and edge enhancement algorithm based on U-Net network. Heliyon 2023; 9:e22536. [PMID: 38034799] [PMCID: PMC10687284] [DOI: 10.1016/j.heliyon.2023.e22536]
Abstract
Background Statistics show that each year more than 100,000 patients die from brain tumors. Due to the diverse morphology, hazy boundaries, and unbalanced categories of medical imaging lesions, brain tumor segmentation poses significant challenges. Purpose In this paper, we present EAV-UNet, a system designed to accurately detect lesion regions by optimizing feature extraction, using automatic segmentation techniques to detect anomalous regions, and strengthening the network structure. We prioritize the segmentation of lesion regions, especially in cases where tumor margins are hazy. Methods The VGG-19 network structure is incorporated into the encoding stage of the U-Net, resulting in a deeper network, and an attention mechanism module is introduced to augment the feature information. Additionally, an edge detection module is added to the encoder to extract edge information from the image, which is then passed to the decoder to aid in reconstructing the original image. Our method uses VGG-19 in place of the U-Net encoder. To strengthen feature details, we integrate a CBAM (Channel and Spatial Attention Mechanism) module into the decoder. To extract vital edge details from the data, we incorporate an edge recognition section into the encoder. Results All evaluation metrics show major improvements with the proposed EAV-UNet technique, based on a thorough analysis of the experimental data. In particular, for low-contrast images with blurry lesion edges, EAV-UNet consistently produces predictions that closely match the original images. The technique reduced the Hausdorff distance to 1.82, achieved an F1 score of 96.1%, and attained a precision of 93.2% on Dataset 1. It obtained an F1 score of 76.8%, a precision of 85.3%, and a Hausdorff distance reduced to 1.31 on Dataset 2. On Dataset 3, it achieved a Hausdorff distance of 2.30, an F1 score of 86.9%, and a precision of 95.3%. Conclusions We conducted extensive segmentation experiments using various brain tumor datasets. We refined the network architecture by employing smaller convolutional kernels, and to further improve segmentation accuracy, we integrated attention modules and an edge enhancement module to reinforce edge information and boost attention scores.
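A minimal sketch of a CBAM-style block (channel attention followed by spatial attention) of the kind added to the decoder; the reduction ratio and spatial kernel size are common defaults and may differ from the paper's configuration.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=8, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                                 nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))                        # channel attention from avg pool
        mx = self.mlp(x.amax(dim=(2, 3)))                         # ... and from max pool
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        spatial_in = torch.cat([x.mean(dim=1, keepdim=True),
                                x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(spatial_in))        # spatial attention

out = CBAM(32)(torch.randn(1, 32, 64, 64))
print(out.shape)
```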
Affiliation(s)
- Dapeng Cheng
- School of Computer Science and Technology, Shandong Business and Technology University, No.191 Binhai Middle Road, Yantai City, Shandong Province, Yantai, 264000, China
- Shandong Co-Innovation Center of Future Intelligent Computing, No.191 Binhai Middle Road, Yantai City, Shandong Province, Yantai, 264000, China
- Xiaolian Gao
- School of Computer Science and Technology, Shandong Business and Technology University, No.191 Binhai Middle Road, Yantai City, Shandong Province, Yantai, 264000, China
- Yanyan Mao
- School of Computer Science and Technology, Shandong Business and Technology University, No.191 Binhai Middle Road, Yantai City, Shandong Province, Yantai, 264000, China
- Baozhen Xiao
- Early Spring Garden Primary School, 6452 Fushou East Street, Weifang City, Shandong Province, Weifang, 261000, China
- Panlu You
- School of Computer Science and Technology, Shandong Business and Technology University, No.191 Binhai Middle Road, Yantai City, Shandong Province, Yantai, 264000, China
- Jiale Gai
- School of Computer Science and Technology, Shandong Business and Technology University, No.191 Binhai Middle Road, Yantai City, Shandong Province, Yantai, 264000, China
- Minghui Zhu
- School of Computer Science and Technology, Shandong Business and Technology University, No.191 Binhai Middle Road, Yantai City, Shandong Province, Yantai, 264000, China
- Jialong Kang
- School of Information and Electronic Engineering, Shandong Business and Technology University, No.191 Binhai Middle Road, Yantai City, Shandong Province, Yantai, 264000, China
- Feng Zhao
- School of Computer Science and Technology, Shandong Business and Technology University, No.191 Binhai Middle Road, Yantai City, Shandong Province, Yantai, 264000, China
- Shandong Co-Innovation Center of Future Intelligent Computing, No.191 Binhai Middle Road, Yantai City, Shandong Province, Yantai, 264000, China
- Ning Mao
- Department of Radiology, Yantai Yuhuangding Hospital, No.20, Yudong Road, Yantai City, Shandong Province, Yantai, 264000, China
12. Rodríguez Outeiral R, Ferreira Silvério N, González PJ, Schaake EE, Janssen T, van der Heide UA, Simões R. A network score-based metric to optimize the quality assurance of automatic radiotherapy target segmentations. Phys Imaging Radiat Oncol 2023; 28:100500. [PMID: 37869474] [PMCID: PMC10587515] [DOI: 10.1016/j.phro.2023.100500]
Abstract
Background and purpose Existing methods for quality assurance of the radiotherapy auto-segmentations focus on the correlation between the average model entropy and the Dice Similarity Coefficient (DSC) only. We identified a metric directly derived from the output of the network and correlated it with clinically relevant metrics for contour accuracy. Materials and Methods Magnetic Resonance Imaging auto-segmentations were available for the gross tumor volume for cervical cancer brachytherapy (106 segmentations) and for the clinical target volume for rectal cancer external-beam radiotherapy (77 segmentations). The nnU-Net's output before binarization was taken as a score map. We defined a metric as the mean of the voxels in the score map above a threshold (λ). Comparisons were made with the mean and standard deviation over the score map and with the mean over the entropy map. The DSC, the 95th Hausdorff distance, the mean surface distance (MSD) and the surface DSC were computed for segmentation quality. Correlations between the studied metrics and model quality were assessed with the Pearson correlation coefficient (r). The area under the curve (AUC) was determined for detecting segmentations that require reviewing. Results For both tasks, our metric (λ = 0.30) correlated more strongly with the segmentation quality than the mean over the entropy map (for surface DSC, r > 0.65 vs. r < 0.60). The AUC was above 0.84 for detecting MSD values above 2 mm. Conclusions Our metric correlated strongly with clinically relevant segmentation metrics and detected segmentations that required reviewing, indicating its potential for automatic quality assurance of radiotherapy target auto-segmentations.
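A minimal sketch of the proposed quality-assurance metric: the mean of the pre-binarization score map over voxels whose score exceeds a threshold λ (0.30 in the paper); the random array stands in for an nnU-Net score map.

```python
import numpy as np

def score_map_metric(score_map, lam=0.30):
    """Mean network score over voxels above the threshold lam."""
    selected = score_map[score_map > lam]
    return float(selected.mean()) if selected.size else 0.0

score_map = np.random.rand(64, 64, 32)     # stand-in for the nnU-Net output before binarization
print(score_map_metric(score_map))
```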
Affiliation(s)
- Roque Rodríguez Outeiral
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
- Nicole Ferreira Silvério
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
- Patrick J. González
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
- Eva E. Schaake
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
- Tomas Janssen
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
- Uulke A. van der Heide
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
- Rita Simões
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
13. Mohammadi S, Ghaderi S, Ghaderi K, Mohammadi M, Pourasl MH. Automated segmentation of meningioma from contrast-enhanced T1-weighted MRI images in a case series using a marker-controlled watershed segmentation and fuzzy C-means clustering machine learning algorithm. Int J Surg Case Rep 2023; 111:108818. [PMID: 37716060] [PMCID: PMC10514425] [DOI: 10.1016/j.ijscr.2023.108818]
Abstract
INTRODUCTION AND IMPORTANCE Accurate segmentation of meningiomas from contrast-enhanced T1-weighted (CE T1-w) magnetic resonance imaging (MRI) is crucial for diagnosis and treatment planning. Manual segmentation is time-consuming and prone to variability. To evaluate an automated segmentation approach for meningiomas using marker-controlled watershed segmentation (MCWS) and fuzzy c-means (FCM) algorithms. CASE PRESENTATION AND METHODS CE T1-w MRI of 3 female patients (aged 59, 44, 67 years) with right frontal meningiomas were analyzed. Images were converted to grayscale and preprocessed with Otsu's thresholding and FCM clustering. MCWS segmentation was performed. Segmentation accuracy was assessed by comparing automated segmentations to manual delineations. CLINICAL DISCUSSION The approach successfully segmented meningiomas in all cases. Mean sensitivity was 0.8822, indicating accurate identification of tumors. Mean Dice similarity coefficient between Otsu's and FCM1 was 0.6599, suggesting good overlap between segmentation methods. CONCLUSION The MCWS and FCM approach enables accurate automated segmentation of meningiomas from CE T1-w MRI. With further validation on larger datasets, this could provide an efficient tool to assist in delineating meningioma boundaries for clinical management.
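A minimal sketch of marker-controlled watershed segmentation on a grayscale slice using scikit-image; Otsu thresholding supplies the rough mask here, and the paper's FCM clustering step is not reproduced.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed

image = np.random.rand(128, 128)                        # stand-in for a CE T1-w slice
binary = image > threshold_otsu(image)                  # rough tumor-vs-background split
distance = ndi.distance_transform_edt(binary)
peaks = peak_local_max(distance, min_distance=5, labels=binary.astype(int))
markers = np.zeros(image.shape, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)  # one marker per local maximum
labels = watershed(-distance, markers, mask=binary)     # flood from the markers within the mask
print(labels.max(), "regions")
```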
Affiliation(s)
- Sana Mohammadi
- Department of Medical Sciences, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Sadegh Ghaderi
- Department of Neuroscience and Addiction Studies, School of Advanced Technologies in Medicine, Tehran University of Medical Sciences, Tehran, Iran.
- Kayvan Ghaderi
- Department of Information Technology and Computer Engineering, Faculty of Engineering, University of Kurdistan, Sanandaj 66177-15175, Iran
- Mahdi Mohammadi
- Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
14. Boyd A, Ye Z, Prabhu S, Tjong MC, Zha Y, Zapaishchykova A, Vajapeyam S, Hayat H, Chopra R, Liu KX, Nabavidazeh A, Resnick A, Mueller S, Haas-Kogan D, Aerts HJ, Poussaint T, Kann BH. Expert-level pediatric brain tumor segmentation in a limited data scenario with stepwise transfer learning. medRxiv [Preprint] 2023:2023.06.29.23292048. [PMID: 37425854] [PMCID: PMC10327271] [DOI: 10.1101/2023.06.29.23292048]
Abstract
Purpose Artificial intelligence (AI)-automated tumor delineation for pediatric gliomas would enable real-time volumetric evaluation to support diagnosis, treatment response assessment, and clinical decision-making. Auto-segmentation algorithms for pediatric tumors are rare, due to limited data availability, and algorithms have yet to demonstrate clinical translation. Methods We leveraged two datasets from a national brain tumor consortium (n=184) and a pediatric cancer center (n=100) to develop, externally validate, and clinically benchmark deep learning neural networks for pediatric low-grade glioma (pLGG) segmentation using a novel in-domain, stepwise transfer learning approach. The best model [via Dice similarity coefficient (DSC)] was externally validated and subject to randomized, blinded evaluation by three expert clinicians wherein clinicians assessed clinical acceptability of expert- and AI-generated segmentations via 10-point Likert scales and Turing tests. Results The best AI model utilized in-domain, stepwise transfer learning (median DSC: 0.877 [IQR 0.715-0.914]) versus baseline model (median DSC 0.812 [IQR 0.559-0.888]; p<0.05). On external testing (n=60), the AI model yielded accuracy comparable to inter-expert agreement (median DSC: 0.834 [IQR 0.726-0.901] vs. 0.861 [IQR 0.795-0.905], p=0.13). On clinical benchmarking (n=100 scans, 300 segmentations from 3 experts), the experts rated the AI model higher on average compared to other experts (median Likert rating: 9 [IQR 7-9]) vs. 7 [IQR 7-9], p<0.05 for each). Additionally, the AI segmentations had significantly higher (p<0.05) overall acceptability compared to experts on average (80.2% vs. 65.4%). Experts correctly predicted the origins of AI segmentations in an average of 26.0% of cases. Conclusions Stepwise transfer learning enabled expert-level, automated pediatric brain tumor auto-segmentation and volumetric measurement with a high level of clinical acceptability. This approach may enable development and translation of AI imaging segmentation algorithms in limited data scenarios.
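A minimal sketch of the Dice similarity coefficient used to compare AI- and expert-generated segmentations; the masks are random stand-ins for binary tumor volumes.

```python
import numpy as np

def dice(a, b, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

ai_mask = np.random.rand(96, 96, 64) > 0.7
expert_mask = np.random.rand(96, 96, 64) > 0.7
print(f"DSC = {dice(ai_mask, expert_mask):.3f}")
```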
Affiliation(s)
- Aidan Boyd
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
- Zezhong Ye
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
- Sanjay Prabhu
- Department of Radiology, Boston Children’s Hospital, Harvard Medical School, Boston, MA
- Michael C. Tjong
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
- Yining Zha
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
- Anna Zapaishchykova
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
- Sridhar Vajapeyam
- Department of Radiology, Boston Children’s Hospital, Harvard Medical School, Boston, MA
- Hasaan Hayat
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
- Rishi Chopra
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
- Kevin X. Liu
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
- Ali Nabavidazeh
- Center for Data-Driven Discovery in Biomedicine (D3b), Children’s Hospital of Philadelphia, Philadelphia, PA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
- Adam Resnick
- Center for Data-Driven Discovery in Biomedicine (D3b), Children’s Hospital of Philadelphia, Philadelphia, PA
- Department of Neurosurgery, Children’s Hospital of Philadelphia, Philadelphia, PA
- Sabine Mueller
- Department of Neurology, University of California San Francisco, San Francisco, California
- Department of Pediatrics, University of California San Francisco, San Francisco, California
- Department of Neurological Surgery, University of California San Francisco, San Francisco, California
- Daphne Haas-Kogan
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
- Hugo J.W.L. Aerts
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
- Department of Radiology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Radiology and Nuclear Medicine, CARIM & GROW, Maastricht University, Maastricht, the Netherlands
- Tina Poussaint
- Department of Radiology, Boston Children’s Hospital, Harvard Medical School, Boston, MA
- Benjamin H. Kann
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
15. Asiri AA, Shaf A, Ali T, Pasha MA, Aamir M, Irfan M, Alqahtani S, Alghamdi AJ, Alghamdi AH, Alshamrani AFA, Alelyani M, Alamri S. Advancing Brain Tumor Classification through Fine-Tuned Vision Transformers: A Comparative Study of Pre-Trained Models. Sensors (Basel) 2023; 23:7913. [PMID: 37765970] [PMCID: PMC10535333] [DOI: 10.3390/s23187913]
Abstract
This paper presents a comprehensive study on the classification of brain tumor images using five pre-trained vision transformer (ViT) models, namely R50-ViT-l16, ViT-l16, ViT-l32, ViT-b16, and ViT-b32, employing a fine-tuning approach. The objective of this study is to advance the state-of-the-art in brain tumor classification by harnessing the power of these advanced models. The dataset utilized for experimentation consists of a total of 4855 images in the training set and 857 images in the testing set, encompassing four distinct tumor classes. The performance evaluation of each model is conducted through an extensive analysis encompassing precision, recall, F1-score, accuracy, and confusion matrix metrics. Among the models assessed, ViT-b32 demonstrates exceptional performance, achieving a high accuracy of 98.24% in accurately classifying brain tumor images. Notably, the obtained results outperform existing methodologies, showcasing the efficacy of the proposed approach. The contributions of this research extend beyond conventional methods, as it not only employs cutting-edge ViT models but also surpasses the performance of existing approaches for brain tumor image classification. This study not only demonstrates the potential of ViT models in medical image analysis but also provides a benchmark for future research in the field of brain tumor classification.
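A minimal sketch of fine-tuning a pretrained vision transformer for four tumor classes, assuming torchvision's vit_b_32 as a stand-in for the ViT-b32 used in the paper; preprocessing, epochs, and learning rate are illustrative.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_32, ViT_B_32_Weights

model = vit_b_32(weights=ViT_B_32_Weights.IMAGENET1K_V1)          # pretrained backbone
model.heads.head = nn.Linear(model.heads.head.in_features, 4)     # 4 tumor classes

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(2, 3, 224, 224)          # placeholder MRI slices resized to 224x224
labels = torch.tensor([0, 2])
model.train()
loss = loss_fn(model(images), labels)          # one fine-tuning step
loss.backward()
optimizer.step()
print(float(loss))
```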
Affiliation(s)
- Abdullah A. Asiri
- Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran 61441, Saudi Arabia
- Ahmad Shaf
- Department of Computer Science, COMSATS University Islamabad Sahiwal Campus, Sahiwal 57000, Pakistan
- Tariq Ali
- Department of Computer Science, COMSATS University Islamabad Sahiwal Campus, Sahiwal 57000, Pakistan
- Muhammad Ahmad Pasha
- Department of Computer Science, COMSATS University Islamabad Sahiwal Campus, Sahiwal 57000, Pakistan
- Muhammad Aamir
- Department of Computer Science, COMSATS University Islamabad Sahiwal Campus, Sahiwal 57000, Pakistan
- Muhammad Irfan
- Electrical Engineering Department, College of Engineering, Najran University, Najran 61441, Saudi Arabia
- Saeed Alqahtani
- Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran 61441, Saudi Arabia
- Ahmad Joman Alghamdi
- Radiological Sciences Department, College of Applied Medical Sciences, Taif University, Taif 21944, Saudi Arabia
- Ali H. Alghamdi
- Department of Radiological Sciences, Faculty of Applied Medical Sciences, The University of Tabuk, Tabuk 47512, Saudi Arabia
- Abdullah Fahad A. Alshamrani
- Department of Diagnostic Radiology Technology, College of Applied Medical Sciences, Taibah University, Madinah 41477, Saudi Arabia
- Magbool Alelyani
- Department of Radiological Sciences, College of Applied Medical Science, King Khalid University, Abha 61421, Saudi Arabia
- Sultan Alamri
- Radiological Sciences Department, College of Applied Medical Sciences, Taif University, Taif 21944, Saudi Arabia
16. Escudero Sanchez L, Buddenkotte T, Al Sa’d M, McCague C, Darcy J, Rundo L, Samoshkin A, Graves MJ, Hollamby V, Browne P, Crispin-Ortuzar M, Woitek R, Sala E, Schönlieb CB, Doran SJ, Öktem O. Integrating Artificial Intelligence Tools in the Clinical Research Setting: The Ovarian Cancer Use Case. Diagnostics (Basel) 2023; 13:2813. [PMID: 37685352] [PMCID: PMC10486639] [DOI: 10.3390/diagnostics13172813]
Abstract
Artificial intelligence (AI) methods applied to healthcare problems have shown enormous potential to alleviate the burden of health services worldwide and to improve the accuracy and reproducibility of predictions. In particular, developments in computer vision are creating a paradigm shift in the analysis of radiological images, where AI tools are already capable of automatically detecting and precisely delineating tumours. However, such tools are generally developed in technical departments that continue to be siloed from where the real benefit would be achieved with their usage. Significant effort still needs to be made to make these advancements available, first in academic clinical research and ultimately in the clinical setting. In this paper, we demonstrate a prototype pipeline based entirely on open-source software and free of cost to bridge this gap, simplifying the integration of tools and models developed within the AI community into the clinical research setting, ensuring an accessible platform with visualisation applications that allow end-users such as radiologists to view and interact with the outcome of these AI tools.
Collapse
Affiliation(s)
- Lorena Escudero Sanchez
- Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, UK
- Cancer Research UK Cambridge Centre, Li Ka Shing Centre, Cambridge CB2 0RE, UK
- National Cancer Imaging Translational Accelerator (NCITA) Consortium, UK
| | - Thomas Buddenkotte
- Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, UK
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, UK
- Department for Diagnostic and Interventional Radiology and Nuclear Medicine, University Hospital Hamburg-Eppendorf, 20246 Hamburg, Germany
- Jung Diagnostics GmbH, 22335 Hamburg, Germany
| | - Mohammad Al Sa’d
- National Cancer Imaging Translational Accelerator (NCITA) Consortium, UK
- Cancer Imaging Centre, Department of Surgery & Cancer, Imperial College, London SW7 2AZ, UK
| | - Cathal McCague
- Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, UK
- Cancer Research UK Cambridge Centre, Li Ka Shing Centre, Cambridge CB2 0RE, UK
- Cambridge University Hospitals NHS Foundation Trust, Cambridge CB2 0QQ, UK
| | - James Darcy
- National Cancer Imaging Translational Accelerator (NCITA) Consortium, UK
- Division of Radiotherapy and Imaging, Institute of Cancer Research, London SW7 3RP, UK
| | - Leonardo Rundo
- Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, UK
- Cancer Research UK Cambridge Centre, Li Ka Shing Centre, Cambridge CB2 0RE, UK
- Department of Information and Electrical Engineering and Applied Mathematics (DIEM), University of Salerno, 84084 Fisciano, Italy
| | - Alex Samoshkin
- Office for Translational Research, School of Clinical Medicine, University of Cambridge, Cambridge CB2 0SP, UK
| | - Martin J. Graves
- Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, UK
- Cambridge University Hospitals NHS Foundation Trust, Cambridge CB2 0QQ, UK
| | - Victoria Hollamby
- Research and Information Governance, School of Clinical Medicine, University of Cambridge, Cambridge CB2 0SP, UK
| | - Paul Browne
- High Performance Computing Department, University of Cambridge, Cambridge CB3 0RB, UK
| | - Mireia Crispin-Ortuzar
- Cancer Research UK Cambridge Centre, Li Ka Shing Centre, Cambridge CB2 0RE, UK
- Department of Oncology, University of Cambridge, Cambridge CB2 0XZ, UK
| | - Ramona Woitek
- Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, UK
- Cancer Research UK Cambridge Centre, Li Ka Shing Centre, Cambridge CB2 0RE, UK
- Research Centre for Medical Image Analysis and Artificial Intelligence (MIAAI), Department of Medicine, Faculty of Medicine and Dentistry, Danube Private University, 3500 Krems, Austria
| | - Evis Sala
- Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, UK
- Cancer Research UK Cambridge Centre, Li Ka Shing Centre, Cambridge CB2 0RE, UK
- National Cancer Imaging Translational Accelerator (NCITA) Consortium, UK
- Cambridge University Hospitals NHS Foundation Trust, Cambridge CB2 0QQ, UK
- Dipartimento di Scienze Radiologiche ed Ematologiche, Universita Cattolica del Sacro Cuore, 00168 Rome, Italy
- Dipartimento Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Policlinico Universitario A. Gemelli IRCCS, 00168 Rome, Italy
| | - Carola-Bibiane Schönlieb
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, UK
| | - Simon J. Doran
- National Cancer Imaging Translational Accelerator (NCITA) Consortium, UK
- Division of Radiotherapy and Imaging, Institute of Cancer Research, London SW7 3RP, UK
| | - Ozan Öktem
- Department of Mathematics, KTH-Royal Institute of Technology, SE-100 44 Stockholm, Sweden
| |
Collapse
|
17
|
Ismail M, Craig S, Ahmed R, de Blank P, Tiwari P. Opportunities and Advances in Radiomics and Radiogenomics for Pediatric Medulloblastoma Tumors. Diagnostics (Basel) 2023; 13:2727. [PMID: 37685265 PMCID: PMC10487205 DOI: 10.3390/diagnostics13172727] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2023] [Revised: 08/19/2023] [Accepted: 08/21/2023] [Indexed: 09/10/2023] Open
Abstract
Recent advances in artificial intelligence have greatly impacted the field of medical imaging and vastly improved the development of computational algorithms for data analysis. In the field of pediatric neuro-oncology, radiomics, the process of obtaining high-dimensional data from radiographic images, has recently been utilized in applications including survival prognostication, molecular classification, and tumor type classification. Similarly, radiogenomics, or the integration of radiomic and genomic data, has allowed for building comprehensive computational models to better understand disease etiology. While excellent review articles exist on radiomic and radiogenomic pipelines and their applications in adult solid tumors, in this review article we specifically review these computational approaches in the context of pediatric medulloblastoma tumors. Based on our systematic literature search via PubMed and Google Scholar, we provide a detailed summary of a total of 15 articles that have utilized radiomic and radiogenomic analysis for survival prognostication, tumor segmentation, and molecular subgroup classification in the context of pediatric medulloblastoma. Lastly, we shed light on the current challenges with the existing approaches as well as future directions and opportunities for using these computational radiomic and radiogenomic approaches for pediatric medulloblastoma tumors.
Collapse
Affiliation(s)
- Marwa Ismail
- Department of Radiology, University of Wisconsin-Madison, Madison, WI 53706, USA; (S.C.); (P.T.)
| | - Stephen Craig
- Department of Radiology, University of Wisconsin-Madison, Madison, WI 53706, USA; (S.C.); (P.T.)
| | - Raheel Ahmed
- Department of Neurosurgery, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53706, USA;
| | - Peter de Blank
- Department of Pediatrics, College of Medicine, University of Cincinnati, Cincinnati, OH 45267, USA;
| | - Pallavi Tiwari
- Department of Radiology, University of Wisconsin-Madison, Madison, WI 53706, USA; (S.C.); (P.T.)
- Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, WI 53706, USA
| |
Collapse
|
18
|
Beizaee F, Bona M, Desrosiers C, Dolz J, Lodygensky G. Determining regional brain growth in premature and mature infants in relation to age at MRI using deep neural networks. Sci Rep 2023; 13:13259. [PMID: 37582862 PMCID: PMC10427665 DOI: 10.1038/s41598-023-40244-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2022] [Accepted: 08/07/2023] [Indexed: 08/17/2023] Open
Abstract
Neonatal MRIs are used increasingly in preterm infants. However, it is not always feasible to analyze these data. Having a tool that assesses brain maturation during this period of extraordinary changes would be immensely helpful. Approaches based on deep learning could solve this task since, once properly trained and validated, they can be used in practically any system and provide holistic quantitative information in a matter of minutes. However, one major deterrent for radiologists is that these tools are not easily interpretable. Indeed, it is important that the structures driving the results be detailed and survive comparison to the available literature. To address these challenges, we propose an interpretable pipeline based on deep learning to predict postmenstrual age at scan, a key measure for assessing neonatal brain development. For this purpose, we train a state-of-the-art deep neural network to segment the brain into 87 different regions using normal preterm and term infants from the dHCP study. We then extract informative features for brain age estimation from the segmented MRIs and predict the brain age at scan with a regression model. The proposed framework achieves a mean absolute error of 0.46 weeks in predicting postmenstrual age at scan. While our model is based solely on structural T2-weighted images, the results are superior to recent, arguably more complex approaches. Furthermore, based on the knowledge extracted from the trained models, we found that the frontal and parietal lobes are among the most important structures for neonatal brain age estimation.
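As an illustration of the overall idea (segment, extract regional features, then regress age), the sketch below derives simple regional-volume features from an 87-label segmentation and fits a regression model to predict postmenstrual age at scan; the regressor and feature set are assumptions and not the authors' exact pipeline.

```python
# Illustrative sketch: regional-volume features from an 87-label brain segmentation,
# followed by a regression model for postmenstrual age at scan (in weeks).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

N_REGIONS = 87

def regional_volumes(segmentation, voxel_volume_mm3):
    """Volume (mm^3) of each labelled region; label 0 is assumed to be background."""
    return np.array([np.sum(segmentation == r) * voxel_volume_mm3
                     for r in range(1, N_REGIONS + 1)])

def fit_brain_age_model(X, y):
    """X: one row of regional volumes per infant; y: postmenstrual age at scan (weeks)."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = GradientBoostingRegressor().fit(X_tr, y_tr)
    mae = mean_absolute_error(y_te, model.predict(X_te))
    print(f"Mean absolute error: {mae:.2f} weeks")
    return model
```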
Collapse
Affiliation(s)
- Farzad Beizaee
- Software and IT Department, École de Technologie Supérieure, Montreal, QC, H3C 1K3, Canada.
- Department of Pediatrics, CHU Sainte-Justine, University of Montreal, Montreal, QC, H3T 1C5, Canada.
| | - Michele Bona
- Software and IT Department, École de Technologie Supérieure, Montreal, QC, H3C 1K3, Canada
| | - Christian Desrosiers
- Software and IT Department, École de Technologie Supérieure, Montreal, QC, H3C 1K3, Canada
| | - Jose Dolz
- Software and IT Department, École de Technologie Supérieure, Montreal, QC, H3C 1K3, Canada
| | - Gregory Lodygensky
- Department of Pediatrics, CHU Sainte-Justine, University of Montreal, Montreal, QC, H3T 1C5, Canada
- Canadian Neonatal Brain Platform, Montreal, QC, Canada
| |
Collapse
|
19
|
Yousef R, Khan S, Gupta G, Albahlal BM, Alajlan SA, Ali A. Bridged-U-Net-ASPP-EVO and Deep Learning Optimization for Brain Tumor Segmentation. Diagnostics (Basel) 2023; 13:2633. [PMID: 37627893 PMCID: PMC10453237 DOI: 10.3390/diagnostics13162633] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2023] [Revised: 07/25/2023] [Accepted: 07/28/2023] [Indexed: 08/27/2023] Open
Abstract
Brain tumor segmentation from Magnetic Resonance Images (MRI) is considered a major challenge due to the complexity of brain tumor tissues, and segmenting these tissues from healthy tissue is an even more tedious challenge when manual segmentation is undertaken by radiologists. In this paper, we present an experimental approach to emphasize the impact and effectiveness of deep learning elements, such as optimizers and loss functions, towards an optimal deep learning solution for brain tumor segmentation. We evaluated our performance results on the most popular brain tumor datasets (MICCAI BraTS 2020 and RSNA-ASNR-MICCAI BraTS 2021). Furthermore, a new Bridged U-Net-ASPP-EVO was introduced that exploits Atrous Spatial Pyramid Pooling to enhance the capture of multi-scale information and help in segmenting tumors of different sizes, Evolving Normalization layers, squeeze-and-excitation residual blocks, and max-average pooling for downsampling. Two variants of this architecture were constructed (Bridged U-Net_ASPP_EVO v1 and Bridged U-Net_ASPP_EVO v2). The best results were achieved using these two models when compared with other state-of-the-art models; we achieved average segmentation Dice scores of 0.84, 0.85, and 0.91 from variant 1, and 0.83, 0.86, and 0.92 from variant 2 for the Enhancing Tumor (ET), Tumor Core (TC), and Whole Tumor (WT) tumor sub-regions, respectively, on the BraTS 2021 validation dataset.
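For reference, a minimal sketch of the Dice coefficient used to score the ET, TC and WT sub-regions is given below; the BraTS label grouping (1 = necrotic core, 2 = edema, 4 = enhancing tumor) follows the common convention and is an assumption rather than a detail stated in the paper.

```python
# Minimal Dice-score sketch for the BraTS sub-regions (Enhancing Tumor, Tumor Core, Whole Tumor).
import numpy as np

def dice(pred_mask, true_mask, eps=1e-7):
    """Dice coefficient between two boolean masks."""
    intersection = np.logical_and(pred_mask, true_mask).sum()
    return (2.0 * intersection + eps) / (pred_mask.sum() + true_mask.sum() + eps)

def brats_subregion_dice(pred_labels, true_labels):
    """Dice per sub-region, assuming the common BraTS label convention."""
    regions = {
        "ET": lambda x: x == 4,                 # enhancing tumor
        "TC": lambda x: np.isin(x, (1, 4)),     # tumor core = necrotic core + enhancing tumor
        "WT": lambda x: np.isin(x, (1, 2, 4)),  # whole tumor = core + edema
    }
    return {name: dice(f(pred_labels), f(true_labels)) for name, f in regions.items()}
```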
Collapse
Affiliation(s)
- Rammah Yousef
- Yogananda School of AI, Computers and Data Sciences, Shoolini University, Solan 173229, India
| | - Shakir Khan
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia (S.A.A.)
- Department of Computer Science and Engineering, University Centre for Research and Development, Chandigarh University, Mohali 140413, India
| | - Gaurav Gupta
- Yogananda School of AI, Computers and Data Sciences, Shoolini University, Solan 173229, India
| | - Bader M. Albahlal
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia (S.A.A.)
| | - Saad Abdullah Alajlan
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia (S.A.A.)
| | - Aleem Ali
- Department of Computer Science and Engineering, University Centre for Research and Development, Chandigarh University, Mohali 140413, India
| |
Collapse
|
20
|
Ali HS, Ismail AI, El‐Rabaie EM, Abd El‐Samie FE. Deep residual architectures and ensemble learning for efficient brain tumour classification. EXPERT SYSTEMS 2023; 40. [DOI: 10.1111/exsy.13226] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/14/2022] [Accepted: 12/12/2022] [Indexed: 09/02/2023]
Abstract
The prompt and accurate detection of brain tumours is essential for disease management and life-saving. This paper introduces an efficient, robust, and completely automated system for classifying the three prominent types of brain tumour. The aim is to achieve enhanced classification accuracy with minimal pre-processing and reduced inference time. The power of deep networks is thoroughly investigated, with and without transfer learning. Fine-tuned deep Residual Networks (ResNets) with depths up to 101 are introduced to manage the complex nature of brain images and to capture their microstructural information. The proposed residual architectures with their in-depth representations are evaluated and compared to other fine-tuned networks (AlexNet, GoogLeNet and VGG16). A novel Convolutional Network (ConvNet) built and trained from scratch is also proposed for tumour type classification. Proven models are integrated by combining their decisions using majority voting to obtain the final classification accuracy. Results show that the residual architectures can be optimized efficiently, and a noticeable accuracy gain can be achieved with them. Although ResNet models are deeper than VGG16, they show lower complexity. Results also indicate that building an ensemble of models is a successful strategy to enhance system performance. Each model in the ensemble learns specific patterns with certain filters. This stochastic nature boosts the classification accuracy. The accuracies obtained from ResNet18, ResNet101, and the proposed ConvNet are 98.91%, 97.39% and 95.43%, respectively. The accuracy based on decision fusion of the three networks is 99.57%, which is better than those of all state-of-the-art techniques. The accuracy obtained with ResNet50 is 98.26%, and its fusion with ResNet18 and the designed network yields a 99.35% accuracy, which is also better than those of previous methods, while meeting minimum detection time requirements. Finally, a visual representation of the learned features is provided to understand what the models have learned.
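A minimal sketch of the decision-fusion step described above (majority voting across independently trained classifiers) is shown below; the models and inputs are placeholders, not the authors' trained networks.

```python
# Decision fusion by majority voting: each classifier votes for a class per image,
# and the most frequent class is taken as the ensemble prediction.
import torch

def majority_vote(models, images):
    """Return, for each image, the class predicted by the majority of the models.
    Ties resolve to the smallest class index (behaviour of torch.mode)."""
    votes = torch.stack([m(images).argmax(dim=1) for m in models])  # shape: (n_models, batch)
    return torch.mode(votes, dim=0).values                          # shape: (batch,)
```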
Collapse
Affiliation(s)
- Hanaa S. Ali
- Electronics & Communication Department, Faculty of Engineering Zagazig University Zagazig Egypt
| | - Asmaa I. Ismail
- Department of Electronics and Electrical Communications, Faculty of Electronic Engineering Menoufia University Menouf Egypt
| | - El‐Sayed M. El‐Rabaie
- Department of Electronics and Electrical Communications, Faculty of Electronic Engineering Menoufia University Menouf Egypt
| | - Fathi E. Abd El‐Samie
- Department of Electronics and Electrical Communications, Faculty of Electronic Engineering Menoufia University Menouf Egypt
| |
Collapse
|
21
|
Ran B, Huang B, Liang S, Hou Y. Surgical Instrument Detection Algorithm Based on Improved YOLOv7x. SENSORS (BASEL, SWITZERLAND) 2023; 23:s23115037. [PMID: 37299761 DOI: 10.3390/s23115037] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/05/2023] [Revised: 05/19/2023] [Accepted: 05/22/2023] [Indexed: 06/12/2023]
Abstract
The counting of surgical instruments is an important task to ensure surgical safety and patient health. However, due to the uncertainty of manual operations, there is a risk of missing or miscounting instruments. Applying computer vision technology to the instrument counting process can not only improve efficiency, but also reduce medical disputes and promote the development of medical informatization. However, during the counting process, surgical instruments may be densely arranged or obstruct each other, and they may be affected by different lighting environments, all of which can affect the accuracy of instrument recognition. In addition, similar instruments may have only minor differences in appearance and shape, which increases the difficulty of identification. To address these issues, this paper improves the YOLOv7x object detection algorithm and applies it to the surgical instrument detection task. First, the RepLK Block module is introduced into the YOLOv7x backbone network, which can increase the effective receptive field and guide the network to learn more shape features. Second, the ODConv structure is introduced into the neck module of the network, which can significantly enhance the feature extraction ability of the basic convolution operation of the CNN and capture richer contextual information. At the same time, we created the OSI26 data set, which contains 452 images and 26 surgical instruments, for model training and evaluation. The experimental results show that our improved algorithm exhibits higher accuracy and robustness in surgical instrument detection tasks, with F1, AP, AP50, and AP75 reaching 94.7%, 91.5%, 99.1%, and 98.2%, respectively, which are 4.6%, 3.1%, 3.6%, and 3.9% higher than the baseline. Compared to other mainstream object detection algorithms, our method has significant advantages. These results demonstrate that our method can more accurately identify surgical instruments, thereby improving surgical safety and patient health.
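As a simplified illustration of the large-kernel idea behind the RepLK Block (enlarging the effective receptive field with a depthwise convolution at low cost), a sketch is given below; it shows the principle only and is not the exact module used in the paper.

```python
# Simplified large-kernel block: a depthwise convolution with a very large kernel widens
# the effective receptive field cheaply, followed by pointwise channel mixing and a residual.
import torch.nn as nn

class LargeKernelBlock(nn.Module):
    def __init__(self, channels, kernel_size=31):
        super().__init__()
        self.dw = nn.Conv2d(channels, channels, kernel_size,
                            padding=kernel_size // 2, groups=channels)  # depthwise, large kernel
        self.pw = nn.Conv2d(channels, channels, kernel_size=1)          # pointwise mixing
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.GELU()

    def forward(self, x):
        return self.act(self.bn(self.pw(self.dw(x)))) + x               # residual connection
```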
Collapse
Affiliation(s)
- Boping Ran
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066000, China
| | - Bo Huang
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066000, China
| | - Shunpan Liang
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066000, China
| | - Yulei Hou
- School of Mechanical Engineering, Yanshan University, Qinhuangdao 066000, China
| |
Collapse
|
22
|
Sailunaz K, Bestepe D, Alhajj S, Özyer T, Rokne J, Alhajj R. Brain tumor detection and segmentation: Interactive framework with a visual interface and feedback facility for dynamically improved accuracy and trust. PLoS One 2023; 18:e0284418. [PMID: 37068084 PMCID: PMC10109523 DOI: 10.1371/journal.pone.0284418] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2023] [Accepted: 03/30/2023] [Indexed: 04/18/2023] Open
Abstract
Brain cancers caused by malignant brain tumors are one of the most fatal cancer types, with a low survival rate mostly due to the difficulties of early detection. Medical professionals therefore use various invasive and non-invasive methods for detecting and treating brain tumors at the earlier stages, thus enabling early treatment. The main non-invasive methods for brain tumor diagnosis and assessment are brain imaging techniques such as computed tomography (CT), positron emission tomography (PET) and magnetic resonance imaging (MRI) scans. In this paper, the focus is on the detection and segmentation of brain tumors from 2D and 3D brain MRIs. For this purpose, a completely automated system with a web application user interface is described which detects and segments brain tumors with accuracy and Dice scores above 90%. The user can upload brain MRIs or access brain images from hospital databases to check for the presence or absence of a brain tumor, to verify that finding from brain MRI features, and to extract the tumor region precisely from the brain MRI using deep neural networks such as CNNs, U-Net and U-Net++. The web application also provides an option for entering feedback on the detection and segmentation results, allowing healthcare professionals to add more precise information that can be used to train the model for better future predictions and segmentations.
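To illustrate the kind of interactive workflow described above (upload an MRI, run the model, record reviewer feedback), a hypothetical minimal web endpoint is sketched below; the routes, payloads, and placeholder inference function are assumptions and not the authors' implementation.

```python
# Hypothetical minimal sketch of a prediction endpoint with a feedback route.
from flask import Flask, request, jsonify

app = Flask(__name__)
feedback_store = []  # a real system would persist feedback to a database for retraining

def run_model(image_bytes):
    # Placeholder for CNN / U-Net / U-Net++ inference on the uploaded MRI.
    return {"tumor_present": True, "dice_estimate": 0.91}

@app.route("/predict", methods=["POST"])
def predict():
    mri = request.files["mri"].read()       # uploaded brain MRI
    return jsonify(run_model(mri))

@app.route("/feedback", methods=["POST"])
def feedback():
    feedback_store.append(request.get_json())  # e.g. corrected label or mask reference
    return jsonify({"stored": len(feedback_store)})
```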
Collapse
Affiliation(s)
- Kashfia Sailunaz
- Department of Computer Science, University of Calgary, Alberta, Canada
| | - Deniz Bestepe
- Department of Computer Engineering, Istanbul Medipol University, Istanbul, Turkey
| | - Sleiman Alhajj
- International School of Medicine, Istanbul Medipol University, Istanbul, Turkey
| | - Tansel Özyer
- Department of Computer Engineering, Ankara Medipol University, Ankara, Turkey
| | - Jon Rokne
- Department of Computer Science, University of Calgary, Alberta, Canada
| | - Reda Alhajj
- Department of Computer Science, University of Calgary, Alberta, Canada
- Department of Computer Engineering, Istanbul Medipol University, Istanbul, Turkey
- Department of Health Informatics, University of Southern Denmark, Odense, Denmark
| |
Collapse
|