1. Zhu J, Bolsterlee B, Chow BVY, Song Y, Meijering E. Hybrid dual mean-teacher network with double-uncertainty guidance for semi-supervised segmentation of magnetic resonance images. Comput Med Imaging Graph 2024; 115:102383. PMID: 38643551. DOI: 10.1016/j.compmedimag.2024.102383.
Abstract
Semi-supervised learning has made significant progress in medical image segmentation. However, existing methods primarily utilize information from a single dimensionality, resulting in sub-optimal performance on challenging magnetic resonance imaging (MRI) data with multiple segmentation objects and anisotropic resolution. To address this issue, we present a Hybrid Dual Mean-Teacher (HD-Teacher) model with hybrid, semi-supervised, and multi-task learning to achieve effective semi-supervised segmentation. HD-Teacher employs a 2D and a 3D mean-teacher network to produce segmentation labels and signed distance fields from the hybrid information captured in both dimensionalities. This hybrid mechanism allows HD-Teacher to utilize features from 2D, 3D, or both dimensions as needed. Outputs from the 2D and 3D teacher models are dynamically combined based on confidence scores, forming a single hybrid prediction with estimated uncertainty. We propose a hybrid regularization module that encourages both student models to produce results close to the uncertainty-weighted hybrid prediction, further improving their feature extraction capability. Extensive experiments on binary and multi-class segmentation conducted on three MRI datasets demonstrated that the proposed framework could (1) significantly outperform state-of-the-art semi-supervised methods, (2) surpass a fully-supervised VNet trained on substantially more annotated data, and (3) perform on par with human raters on muscle and bone segmentation tasks. Code will be available at https://github.com/ThisGame42/Hybrid-Teacher.
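The mean-teacher mechanism this abstract builds on can be sketched in two pieces: an exponential-moving-average (EMA) teacher update, and a confidence-weighted combination of the 2D and 3D teachers' predictions. The function names, the EMA decay, and the weighting scheme below are illustrative assumptions, not the paper's exact formulation.

```python
def ema_update(teacher_w, student_w, alpha=0.99):
    """Teacher weights track the student as an exponential moving average.

    alpha is an assumed decay; papers commonly use values near 0.99.
    """
    return [alpha * t + (1.0 - alpha) * s for t, s in zip(teacher_w, student_w)]

def hybrid_prediction(p2d, p3d, c2d, c3d):
    """Combine per-voxel 2D/3D teacher probabilities, weighted by confidence."""
    out = []
    for p2, p3, w2, w3 in zip(p2d, p3d, c2d, c3d):
        total = w2 + w3
        # Fall back to a plain average if both confidences are zero.
        out.append((w2 * p2 + w3 * p3) / total if total > 0 else 0.5 * (p2 + p3))
    return out

teacher = ema_update([1.0, 2.0], [0.0, 0.0])  # approx. [0.99, 1.98]
hybrid = hybrid_prediction([0.9, 0.2], [0.7, 0.4], [0.8, 0.5], [0.2, 0.5])
```

The hybrid output leans toward whichever teacher is more confident at each voxel, which is the intuition behind the uncertainty-weighted regularization target described above.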
Affiliation(s)
- Jiayi Zhu
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW 2052, Australia; Neuroscience Research Australia (NeuRA), Randwick, NSW 2031, Australia.
- Bart Bolsterlee
- Neuroscience Research Australia (NeuRA), Randwick, NSW 2031, Australia; Graduate School of Biomedical Engineering, University of New South Wales, Sydney, NSW 2052, Australia
- Brian V Y Chow
- Neuroscience Research Australia (NeuRA), Randwick, NSW 2031, Australia; School of Biomedical Sciences, University of New South Wales, Sydney, NSW 2052, Australia
- Yang Song
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW 2052, Australia
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW 2052, Australia
2. Iqbal S, Qureshi AN, Alhussein M, Aurangzeb K, Choudhry IA, Anwar MS. Hybrid deep spatial and statistical feature fusion for accurate MRI brain tumor classification. Front Comput Neurosci 2024; 18:1423051. PMID: 38978524. PMCID: PMC11228303. DOI: 10.3389/fncom.2024.1423051.
Abstract
The classification of medical images is crucial in the biomedical field, and despite attempts to address the issue, significant challenges persist. To effectively categorize medical images, collecting and integrating statistical information that accurately describes the image is essential. This study proposes a unique method for feature extraction that combines deep spatial characteristics with handcrafted statistical features. The approach involves extracting statistical radiomics features using advanced techniques, followed by a novel handcrafted feature fusion method inspired by the ResNet deep learning model. A new feature fusion framework (FusionNet) is then used to reduce image dimensionality and simplify computation. The proposed approach is tested on MRI images of brain tumors from the BraTS dataset, and the results show that it outperforms existing methods in terms of classification accuracy. The study presents three models, a handcrafted-feature-based model and two CNN models, all evaluated on the binary classification task. The recommended hybrid approach achieved a high F1 score of 96.12 ± 0.41, precision of 97.77 ± 0.32, and accuracy of 97.53 ± 0.24, indicating that it has the potential to serve as a valuable tool for pathologists.
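The kind of fusion this abstract describes can be sketched as a concatenation of a deep-feature vector with handcrafted statistics. Here, simple first-order statistics stand in for the radiomics features; the feature choices and names are illustrative, not the paper's FusionNet.

```python
import math

def statistical_features(pixels):
    """First-order intensity statistics over a flat list of pixel values."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    return [mean, math.sqrt(var), min(pixels), max(pixels)]

def fuse(deep_features, pixels):
    """Concatenate learned features with handcrafted statistics."""
    return deep_features + statistical_features(pixels)

fused = fuse([0.1, 0.7], [2.0, 4.0, 6.0])
# statistical part: mean 4.0, std sqrt(8/3), min 2.0, max 6.0
```

A classifier is then trained on the fused vector, so it can exploit both learned spatial patterns and explicit intensity statistics.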
Affiliation(s)
- Saeed Iqbal
- Department of Computer Science, Faculty of Information Technology and Computer Science, University of Central Punjab, Lahore, Pakistan
- Adnan N. Qureshi
- Faculty of Arts, Society, and Professional Studies, Newman University, Birmingham, United Kingdom
- Musaed Alhussein
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Khursheed Aurangzeb
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Imran Arshad Choudhry
- Department of Computer Science, Faculty of Information Technology and Computer Science, University of Central Punjab, Lahore, Pakistan
3. Nazir M, Shakil S, Khurshid K. End-to-End Multi-task Learning Architecture for Brain Tumor Analysis with Uncertainty Estimation in MRI Images. J Imaging Inform Med 2024. PMID: 38565728. DOI: 10.1007/s10278-024-01009-w.
Abstract
Brain tumors are a life-threatening condition for adults and children alike. Gliomas are among the deadliest brain tumors and are extremely difficult to diagnose, because their complex and heterogeneous structure gives rise to both subjective and objective errors. Their manual segmentation is a laborious task due to their complex structure and irregular appearance. To address these issues, considerable research has been and is being done to develop AI-based solutions that can help doctors and radiologists in the effective diagnosis of gliomas with the fewest subjective and objective errors, but an end-to-end system is still missing. An all-in-one framework has been proposed in this research. The developed end-to-end multi-task learning (MTL) architecture with a feature attention module can classify, segment, and predict the overall survival of gliomas by leveraging task relationships between similar tasks. Uncertainty estimation has also been incorporated into the framework to enhance the confidence level of healthcare practitioners. Extensive experimentation was performed using combinations of MRI sequences. Brain tumor segmentation (BraTS) challenge datasets of 2019 and 2020 were used for experimental purposes. Results of the best model with four sequences show 95.1% accuracy for classification, 86.3% dice score for segmentation, and a mean absolute error (MAE) of 456.59 for survival prediction on the test data. It is evident from the results that deep learning-based MTL models have the potential to automate the whole brain tumor analysis process and deliver efficient results with minimal inference time and without human intervention. Uncertainty quantification confirms the idea that more data can improve generalization ability and in turn produce more accurate results with less uncertainty. The proposed model has the potential to be utilized in a clinical setup for the initial screening of glioma patients.
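Multi-task setups like the one described above typically optimize several objectives at once. A common (here purely illustrative) formulation is a weighted sum of per-task losses; the weights and task names are assumptions for the sketch, not the paper's configuration.

```python
def multi_task_loss(cls_loss, seg_loss, surv_loss, w=(1.0, 1.0, 0.5)):
    """Combine classification, segmentation, and survival-prediction losses.

    The weights w are hyperparameters; (1.0, 1.0, 0.5) is an assumed example.
    """
    return w[0] * cls_loss + w[1] * seg_loss + w[2] * surv_loss

total = multi_task_loss(0.4, 0.6, 2.0)  # 0.4 + 0.6 + 0.5 * 2.0, approx. 2.0
```

Sharing a backbone while summing task losses is what lets related tasks regularize each other, which is the "leveraging task relationships" idea in the abstract.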
Affiliation(s)
- Maria Nazir
- Medical Imaging and Diagnostics Lab, NCAI COMSATS University Islamabad, Islamabad, Pakistan.
- iVision Lab, Department of Electrical Engineering, Institute of Space Technology, Islamabad, Pakistan.
- BiCoNeS Lab, Department of Electrical Engineering, Institute of Space Technology, Islamabad, Pakistan.
- Sadia Shakil
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Khurram Khurshid
- iVision Lab, Department of Electrical Engineering, Institute of Space Technology, Islamabad, Pakistan
4. Zeineldin RA, Karar ME, Burgert O, Mathis-Ullrich F. NeuroIGN: Explainable Multimodal Image-Guided System for Precise Brain Tumor Surgery. J Med Syst 2024; 48:25. PMID: 38393660. DOI: 10.1007/s10916-024-02037-3.
Abstract
Precise neurosurgical guidance is critical for successful brain surgeries and plays a vital role in all phases of image-guided neurosurgery (IGN). Neuronavigation software enables real-time tracking of surgical tools, ensuring their presentation with high precision in relation to a virtual patient model. Therefore, this work focuses on the development of a novel multimodal IGN system, leveraging deep learning and explainable AI to enhance brain tumor surgery outcomes. The study establishes the clinical and technical requirements of the system for brain tumor surgeries. NeuroIGN adopts a modular architecture, including brain tumor segmentation, patient registration, and explainable output prediction, and integrates open-source packages into an interactive neuronavigational display. The NeuroIGN system components underwent validation and evaluation in both laboratory and simulated operating room (OR) settings. Experimental results demonstrated its accuracy in tumor segmentation and the success of ExplainAI in increasing the trust of medical professionals in deep learning. The proposed system was successfully assembled and set up within 11 min in a pre-clinical OR setting with a tracking accuracy of 0.5 (± 0.1) mm. NeuroIGN was also evaluated as highly useful, with a high frame rate (19 FPS) and real-time ultrasound imaging capabilities. In conclusion, this paper describes not only the development of an open-source multimodal IGN system but also demonstrates the innovative application of deep learning and explainable AI algorithms in enhancing neuronavigation for brain tumor surgeries. By seamlessly integrating pre- and intra-operative patient image data with cutting-edge interventional devices, our experiments underscore the potential for deep learning models to improve the surgical treatment of brain tumors and long-term post-operative outcomes.
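A reported tracking accuracy such as "0.5 (± 0.1) mm" is typically the mean and spread of point-wise localization errors between tracked and reference positions. The computation below is an illustrative sketch of that metric, not the NeuroIGN evaluation code.

```python
import math

def localization_errors(measured, reference):
    """Euclidean distance between paired 3D points, in the input units (mm)."""
    return [math.dist(m, r) for m, r in zip(measured, reference)]

# Toy example: two tracked points compared against reference positions.
errs = localization_errors([(0.0, 0.0, 0.5), (0.3, 0.4, 0.0)],
                           [(0.0, 0.0, 0.0), (0.0, 0.0, 0.0)])
mean_err = sum(errs) / len(errs)  # both errors are 0.5 mm here
```

Averaging such per-point errors (and reporting their standard deviation) yields accuracy figures in the form quoted above.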
Affiliation(s)
- Ramy A Zeineldin
- Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander University Erlangen-Nürnberg, 91052, Erlangen, Germany.
- Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762, Reutlingen, Germany.
- Faculty of Electronic Engineering (FEE), Menoufia University, Minuf, 32952, Egypt.
- Mohamed E Karar
- Faculty of Electronic Engineering (FEE), Menoufia University, Minuf, 32952, Egypt
- Oliver Burgert
- Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762, Reutlingen, Germany
- Franziska Mathis-Ullrich
- Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander University Erlangen-Nürnberg, 91052, Erlangen, Germany
5. Zeineldin RA, Karar ME, Elshaer Z, Coburger J, Wirtz CR, Burgert O, Mathis-Ullrich F. Explainable hybrid vision transformers and convolutional network for multimodal glioma segmentation in brain MRI. Sci Rep 2024; 14:3713. PMID: 38355678. PMCID: PMC10866944. DOI: 10.1038/s41598-024-54186-7.
Abstract
Accurate localization of gliomas, the most common malignant primary brain cancer, and their different sub-regions from multimodal magnetic resonance imaging (MRI) volumes is highly important for interventional procedures. Recently, deep learning models have been applied widely to assist automatic lesion segmentation tasks for neurosurgical interventions. However, these models are often complex and represented as "black box" models, which limits their applicability in clinical practice. This article introduces new hybrid vision Transformers and convolutional neural networks for accurate and robust glioma segmentation in brain MRI scans. Our proposed method, TransXAI, provides surgeon-understandable heatmaps to make the neural networks transparent. TransXAI employs a post-hoc explanation technique that provides visual interpretation after the brain tumor localization is made, without any network architecture modifications or accuracy tradeoffs. Our experimental findings showed that TransXAI achieves competitive performance in extracting both local and global contexts, in addition to generating explainable saliency maps that help understand the prediction of the deep network. Further, visualization maps are obtained to realize the flow of information in the internal layers of the encoder-decoder network and understand the contribution of MRI modalities to the final prediction. The explainability process could provide medical professionals with additional information about the tumor segmentation results and therefore aid in understanding how the deep learning model is capable of processing MRI data successfully. Thus, it enables physicians' trust in such deep learning systems towards applying them clinically. To facilitate TransXAI model development and results reproducibility, we will share the source code and the pre-trained models after acceptance at https://github.com/razeineldin/TransXAI.
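Post-hoc explanation, as used by TransXAI, means the saliency map is computed after prediction without touching the network. One simple model-agnostic way to do that, shown here as an illustrative stand-in rather than the paper's actual technique, is occlusion: mask each input region and record how much the prediction score drops.

```python
def occlusion_saliency(predict, image, baseline=0.0):
    """Per-element importance = score drop when that element is occluded.

    predict is any black-box scoring function; the network is not modified.
    """
    base_score = predict(image)
    saliency = []
    for i in range(len(image)):
        occluded = list(image)
        occluded[i] = baseline       # replace one region with the baseline value
        saliency.append(base_score - predict(occluded))
    return saliency

# Toy "model": the score is just the sum of the inputs, so each element's
# importance is approximately its own contribution.
scores = occlusion_saliency(sum, [0.2, 0.5, 0.0])
```

In practice the occluded region is a patch rather than a single pixel, and `predict` is the trained segmentation or classification network.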
Affiliation(s)
- Ramy A Zeineldin
- Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-University Erlangen-Nürnberg (FAU), 91052, Erlangen, Germany.
- Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762, Reutlingen, Germany.
- Faculty of Electronic Engineering (FEE), Menoufia University, Minuf, 32952, Egypt.
- Mohamed E Karar
- Faculty of Electronic Engineering (FEE), Menoufia University, Minuf, 32952, Egypt
- Ziad Elshaer
- Department of Neurosurgery, University of Ulm, 89312, Günzburg, Germany
- Jan Coburger
- Department of Neurosurgery, University of Ulm, 89312, Günzburg, Germany
- Christian R Wirtz
- Department of Neurosurgery, University of Ulm, 89312, Günzburg, Germany
- Oliver Burgert
- Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762, Reutlingen, Germany
- Franziska Mathis-Ullrich
- Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-University Erlangen-Nürnberg (FAU), 91052, Erlangen, Germany
6. Jiao R, Zhang Y, Ding L, Xue B, Zhang J, Cai R, Jin C. Learning with limited annotations: A survey on deep semi-supervised learning for medical image segmentation. Comput Biol Med 2024; 169:107840. PMID: 38157773. DOI: 10.1016/j.compbiomed.2023.107840.
Abstract
Medical image segmentation is a fundamental and critical step in many image-guided clinical approaches. The recent success of deep learning-based segmentation methods usually relies on a large amount of labeled data, which is particularly difficult and costly to obtain, especially in the medical imaging domain where only experts can provide reliable and accurate annotations. Semi-supervised learning has emerged as an appealing strategy and has been widely applied to medical image segmentation tasks to train deep models with limited annotations. In this paper, we present a comprehensive review of recently proposed semi-supervised learning methods for medical image segmentation and summarize both the technical novelties and empirical results. Furthermore, we analyze and discuss the limitations and several unsolved problems of existing approaches. We hope this review can inspire the research community to explore solutions to this challenge and further advance the field of medical image segmentation.
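Among the semi-supervised strategies such a survey covers, pseudo-labeling is one of the simplest: predictions on unlabeled data that exceed a confidence threshold are recycled as training targets. A generic sketch, with an assumed threshold and binary labels for illustration:

```python
def pseudo_labels(probs, threshold=0.9):
    """Keep only confident predictions; return (index, label) pairs.

    probs are per-sample foreground probabilities from the current model.
    """
    selected = []
    for i, p in enumerate(probs):
        conf = max(p, 1.0 - p)          # binary prediction confidence
        if conf >= threshold:
            selected.append((i, 1 if p >= 0.5 else 0))
    return selected

labels = pseudo_labels([0.97, 0.55, 0.03])  # keeps samples 0 and 2
```

The selected pairs are then mixed into the labeled training set; the uncertain middle prediction (0.55) is excluded so that noisy labels do not propagate.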
Affiliation(s)
- Rushi Jiao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China; School of Engineering Medicine, Beihang University, Beijing, 100191, China; Shanghai Artificial Intelligence Laboratory, Shanghai, 200232, China.
- Yichi Zhang
- School of Data Science, Fudan University, Shanghai, 200433, China; Artificial Intelligence Innovation and Incubation Institute, Fudan University, Shanghai, 200433, China.
- Le Ding
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China.
- Bingsen Xue
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China; Shanghai Artificial Intelligence Laboratory, Shanghai, 200232, China.
- Jicong Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China; Hefei Innovation Research Institute, Beihang University, Hefei, 230012, China.
- Rong Cai
- School of Engineering Medicine, Beihang University, Beijing, 100191, China; Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beihang University, Beijing, 100191, China.
- Cheng Jin
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China; Shanghai Artificial Intelligence Laboratory, Shanghai, 200232, China; Beijing Anding Hospital, Capital Medical University, Beijing, 100088, China.
7. Song G, Xie G, Nie Y, Majid MS, Yavari I. Noninvasive grading of glioma brain tumors using magnetic resonance imaging and deep learning methods. J Cancer Res Clin Oncol 2023; 149:16293-16309. PMID: 37698684. DOI: 10.1007/s00432-023-05389-4.
Abstract
PURPOSE Convolutional Neural Networks (ConvNets) have quickly become popular machine learning techniques in recent years, particularly in the classification and segmentation of medical images. One of the most prevalent types of brain cancers is glioma, and early, accurate diagnosis is essential for both treatment and survival. In this study, we reviewed glioma diagnosis studies that examine MRI scans utilizing deep learning techniques. METHODS In this systematic review, keywords were used to obtain English-language studies from the arXiv, IEEE, Springer, ScienceDirect, and PubMed databases for the years 2010-2022. The material needed for review was then collected from the articles once they had been selected based on the inclusion and exclusion criteria and in accordance with the research's goal. RESULTS Finally, 77 different academic articles were chosen. According to the analysis of published articles, glioma brain tumors were detected, classified, and segmented utilizing a coordinated approach that included image collection, pre-processing, model design and execution, and model output evaluation. The majority of investigations have used publicly accessible image databases and pre-trained models. The bulk of studies have employed classification accuracy and Dice similarity coefficient metrics to assess model performance. CONCLUSION The results of this study indicate that glioma segmentation has received more attention from researchers than glioma detection and classification. It is advised that more research be done in the areas of glioma detection and, particularly, grading in order to be included in systems that support medical diagnosis.
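The Dice similarity coefficient the reviewed studies rely on is the standard overlap metric for segmentation. A minimal implementation over flat binary masks:

```python
def dice(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks given as 0/1 lists."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks count as a perfect match.
    return 2.0 * inter / total if total > 0 else 1.0

score = dice([1, 1, 0, 0], [1, 0, 1, 0])  # one overlapping voxel out of four
```

A score of 1.0 means perfect overlap and 0.0 means none; the example above overlaps in one of the two foreground voxels on each side, giving 0.5.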
Affiliation(s)
- Guanghui Song
- School of Computer and Data Engineering, Ningbo Tech University, Ningbo, 315100, Zhejiang, China.
- Guanbao Xie
- School of Computer and Data Engineering, Ningbo Tech University, Ningbo, 315100, Zhejiang, China
- Yan Nie
- College of Science & Technology, Ningbo University, Ningbo, 315100, Zhejiang, China
- Mohammed Sh Majid
- Computer Techniques Engineering Department, Al-Mustaqbal University College, Babylon, 51001, Iraq
- Iman Yavari
- School of Computing and Technology, Eastern Mediterranean University, Northern Cyprus, Famagusta, Cyprus.
8. Xu Z, Wang Y, Lu D, Luo X, Yan J, Zheng Y, Tong RKY. Ambiguity-selective consistency regularization for mean-teacher semi-supervised medical image segmentation. Med Image Anal 2023; 88:102880. PMID: 37413792. DOI: 10.1016/j.media.2023.102880.
Abstract
Semi-supervised learning has greatly advanced medical image segmentation since it effectively alleviates the need for acquiring abundant annotations from experts, wherein the mean-teacher model, known as a milestone of perturbed consistency learning, commonly serves as a standard and simple baseline. Inherently, learning from consistency can be regarded as learning from stability under perturbations. Recent improvements lean toward more complex consistency learning frameworks, yet little attention is paid to consistency target selection. Considering that the ambiguous regions from unlabeled data contain more informative complementary clues, in this paper we improve the mean-teacher model to a novel ambiguity-consensus mean-teacher (AC-MT) model. In particular, we comprehensively introduce and benchmark a family of plug-and-play strategies for ambiguous target selection from the perspectives of entropy, model uncertainty, and label noise self-identification, respectively. Then, the estimated ambiguity map is incorporated into the consistency loss to encourage consensus between the two models' predictions in these informative regions. In essence, our AC-MT aims to identify the most worthwhile voxel-wise targets from the unlabeled data, and the model especially learns from the perturbed stability of these informative regions. The proposed methods are extensively evaluated on left atrium segmentation and brain tumor segmentation. Encouragingly, our strategies bring substantial improvement over recent state-of-the-art methods. The ablation study further demonstrates our hypothesis and shows impressive results under various extreme annotation conditions.
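The entropy-based ambiguity selection described above can be sketched as follows: voxels where the teacher's prediction is most uncertain are the ones included in the consistency loss. The threshold and function names are illustrative, not the paper's exact AC-MT formulation.

```python
import math

def entropy(p):
    """Binary prediction entropy in nats; 0 at p=0 or p=1, max ln(2) at p=0.5."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log(p) + (1.0 - p) * math.log(1.0 - p))

def masked_consistency(student, teacher, threshold=0.5):
    """Mean squared difference, restricted to high-entropy (ambiguous) voxels."""
    pairs = [(s, t) for s, t in zip(student, teacher) if entropy(t) >= threshold]
    if not pairs:
        return 0.0
    return sum((s - t) ** 2 for s, t in pairs) / len(pairs)

# Only the first voxel is ambiguous (entropy(0.5) = ln 2), so only it contributes.
loss = masked_consistency([0.2, 0.9], [0.5, 0.99])
```

Restricting the consistency term to ambiguous voxels is what steers learning toward the informative regions, rather than wasting capacity on voxels both models already agree on.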
Affiliation(s)
- Zhe Xu
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Shatin, NT, Hong Kong, China.
- Yixin Wang
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Xiangde Luo
- Department of Mechanical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Jiangpeng Yan
- Department of Automation, Tsinghua University, Beijing, China
- Raymond Kai-Yu Tong
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Shatin, NT, Hong Kong, China.
9. Asiri AA, Shaf A, Ali T, Shakeel U, Irfan M, Mehdar KM, Halawani HT, Alghamdi AH, Alshamrani AFA, Alqhtani SM. Exploring the Power of Deep Learning: Fine-Tuned Vision Transformer for Accurate and Efficient Brain Tumor Detection in MRI Scans. Diagnostics (Basel) 2023; 13:2094. PMID: 37370989. DOI: 10.3390/diagnostics13122094.
Abstract
A brain tumor is a significant health concern that directly or indirectly affects thousands of people worldwide. The early and accurate detection of brain tumors is vital to successful treatment and improved quality of life for the patient. There are several imaging techniques used for brain tumor detection; among these, the most common are MRI and CT scans. To overcome the limitations associated with these traditional techniques, computer-aided analysis of brain images has gained attention in recent years as a promising approach for accurate and reliable brain tumor detection. In this study, we proposed a fine-tuned vision transformer model that uses advanced image processing and deep learning techniques to accurately identify the presence of brain tumors in input images. The proposed model, FT-ViT, involves several stages, including data processing, patch processing, concatenation, feature selection and learning, and fine-tuning. Upon training on the CE-MRI dataset containing 5712 brain tumor images, the model could accurately identify the tumors. The FT-ViT model achieved an accuracy of 98.13%. The proposed method offers high accuracy and can significantly reduce the workload of radiologists, making it a practical approach in medical science. However, further research can be conducted to diagnose more complex and rare types of tumors with greater accuracy and reliability.
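The "patch processing" stage of a vision transformer splits the image into fixed-size patches that are flattened into tokens. A minimal sketch on a 2D list; the patch size and toy image are illustrative, and the image dimensions are assumed divisible by the patch size:

```python
def extract_patches(image, size):
    """Split an HxW image (list of rows) into flattened size x size patches.

    Assumes H and W are multiples of size, as ViT-style models typically do.
    """
    h, w = len(image), len(image[0])
    patches = []
    for r in range(0, h, size):
        for c in range(0, w, size):
            patch = [image[r + i][c + j] for i in range(size) for j in range(size)]
            patches.append(patch)
    return patches

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
tokens = extract_patches(img, 2)  # 4 patches of 4 values each
```

In a real ViT each flattened patch is then linearly projected to an embedding and concatenated with a position encoding, which is the "concatenation" stage the abstract lists.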
Affiliation(s)
- Abdullah A Asiri
- Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran 61441, Saudi Arabia
- Ahmad Shaf
- Department of Computer Science, COMSATS University Islamabad, Sahiwal Campus, Sahiwal 57000, Pakistan
- Tariq Ali
- Department of Computer Science, COMSATS University Islamabad, Sahiwal Campus, Sahiwal 57000, Pakistan
- Unza Shakeel
- Department of Biosciences, COMSATS University Islamabad, Sahiwal Campus, Sahiwal 57000, Pakistan
- Muhammad Irfan
- Electrical Engineering Department, College of Engineering, Najran University, Najran 61441, Saudi Arabia
- Khlood M Mehdar
- Anatomy Department, Medicine College, Najran University, Najran 61441, Saudi Arabia
- Hanan Talal Halawani
- Computer Science Department, College of Computer Science and Information Systems, Najran University, Najran 61441, Saudi Arabia
- Ali H Alghamdi
- Department of Radiological Sciences, Faculty of Applied Medical Sciences, The University of Tabuk, Tabuk 47512, Saudi Arabia
- Abdullah Fahad A Alshamrani
- Department of Diagnostic Radiology Technology, College of Applied Medical Sciences, Taibah University, Madinah 41477, Saudi Arabia
- Samar M Alqhtani
- Department of Information Systems, College of Computer Science and Information Systems, Najran University, Najran 61441, Saudi Arabia
10. Aggarwal M, Tiwari AK, Sarathi MP, Bijalwan A. An early detection and segmentation of Brain Tumor using Deep Neural Network. BMC Med Inform Decis Mak 2023; 23:78. PMID: 37101176. PMCID: PMC10134539. DOI: 10.1186/s12911-023-02174-8.
Abstract
BACKGROUND Magnetic resonance image (MRI) brain tumor segmentation is crucial and important in the medical field, as it can help in diagnosis and prognosis, overall growth prediction, tumor density measures, and the care plans needed for patients. The difficulty in segmenting brain tumors is primarily due to the wide range of structures, shapes, frequencies, positions, and visual characteristics of tumors, such as intensity, contrast, and visual variation. With recent advancements in Deep Neural Networks (DNN) for image classification tasks, intelligent medical image segmentation is an exciting direction for brain tumor research. DNNs require substantial time and processing capability to train because of gradient diffusion difficulties and their complexity. METHODS To overcome the gradient issue of DNNs, this research work provides an efficient method for brain tumor segmentation based on an Improved Residual Network (ResNet). The existing ResNet can be improved by maintaining the details of all the available connection links or by improving the projection shortcuts. These details are fed to later phases, due to which the improved ResNet achieves higher precision and can speed up the learning process. RESULTS The proposed improved ResNet addresses all three main components of the existing ResNet: the flow of information through the network layers, the residual building block, and the projection shortcut. This approach minimizes computational costs and speeds up the process. CONCLUSION An experimental analysis of the BRATS 2020 MRI sample data reveals that the proposed methodology achieves competitive performance over traditional methods like CNN and Fully Convolutional Neural Network (FCN), with more than 10% improvement in accuracy, recall, and F-measure.
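The "projection shortcut" the abstract refers to is the ResNet mechanism that lets the identity path match a changed feature dimension before the residual addition. A scalar-level sketch; the transform and projection functions here are toy stand-ins for convolutional layers:

```python
def residual_block(x, transform, project=None):
    """y = transform(x) + shortcut(x); the shortcut is projected only
    when the feature shapes differ (otherwise it is the identity)."""
    shortcut = project(x) if project is not None else x
    return [t + s for t, s in zip(transform(x), shortcut)]

double = lambda v: [2.0 * e for e in v]  # toy stand-in for a conv layer

y_identity = residual_block([1.0, 2.0], double)                   # identity shortcut
y_projected = residual_block([1.0, 2.0], double, project=double)  # projection shortcut
```

Because the shortcut carries the input forward unchanged (or cheaply projected), gradients can flow directly through the addition, which is the standard remedy for the gradient diffusion problem the abstract mentions.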
Affiliation(s)
- Mukul Aggarwal
- Dr. A.P.J. Abdul Kalam Technical University, Lucknow, Uttar Pradesh, India
- M Partha Sarathi
- Amity School of Engineering and Technology, Amity University, Noida, Uttar Pradesh, India
- Anchit Bijalwan
- Faculty of Electrical and Computer Engineering, Arba Minch University, Arba Minch, Ethiopia.
11. Shiney TSS, Jerome SA. An Intelligent System to Enhance the Performance of Brain Tumor Diagnosis from MR Images. J Digit Imaging 2023; 36:510-525. PMID: 36385675. PMCID: PMC10039190. DOI: 10.1007/s10278-022-00715-7.
Abstract
In the human body, cancer is caused by aberrant cell proliferation. Brain tumors are created when cells in the human brain proliferate out of control. Brain tumors are of two types: benign and malignant. The aberrant parts of benign tumors, which contain dormant tumor cells, can be cured with appropriate medication. On the other hand, malignant tumors contain abnormal cells in an unorganized area that cannot be treated with medication; therefore, surgery is required to remove these brain tumors. Brain cancers are manually identified and diagnosed by a skilled radiologist using traditional procedures. It is a lengthy and error-prone procedure and, as a result, unsuitable for emerging countries with large populations, so computer-assisted automatic identification and diagnosis of brain tumors is recommended. This work proposes and implements a CAD system for the diagnosis of brain cancers using magnetic resonance imaging (MRI). Preprocessing, segmentation, feature extraction, and classification are the stages of automatic brain MRI processing that necessitate software based on a sophisticated algorithm. Image normalization with contourlet transform (INCT) is used in the preprocessing step to remove undesirable or noisy data, and the performance metrics PSNR, MSE, and RMSE are computed. Then, the modified hierarchical k-means with firefly clustering (MHKFC) technique is used in the segmentation step to precisely recover the afflicted (tumor) area from the preprocessed image. The enhanced monarch butterfly optimization (EMBO) is used to select and then extract the most important gray-level co-occurrence matrix features from the segmented image. The classification task is finally completed using the adaptive neuro-fuzzy inference system (ANFIS). Overall classification accuracies of 95.4% (BRATS 2015), 96.6% (BRATS 2021), and 93.7% (clinical data) were obtained.
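The segmentation stage above builds on k-means clustering of image intensities. A bare-bones 1D intensity k-means is sketched below; the hierarchical and firefly refinements of the paper's MHKFC technique are not reproduced, and the data and initial centers are illustrative.

```python
def kmeans_1d(values, centers, iters=10):
    """Lloyd's algorithm on scalar intensities: assign, then recompute means."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            # Assign each value to its nearest current center.
            idx = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[idx].append(v)
        # Recompute each center as the mean of its group (keep empty ones).
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers

# Two intensity clusters: dark background vs. bright (tumor-like) region.
centers = kmeans_1d([0.1, 0.2, 0.8, 0.9], [0.0, 1.0])
```

After convergence, thresholding intensities by nearest center yields a rough segmentation mask, which methods like the one above then refine.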
Affiliation(s)
- T. S. Sheela Shiney
- Department of CSE, St. Xavier’s Catholic College of Engineering, Chunkankadai, Nagercoil India
- S. Albert Jerome
- Biomedical Engineering, Noorul Islam Centre for Higher Education, Kumaracoil, Kanyakumari India
12
Nalepa J, Kotowski K, Machura B, Adamski S, Bozek O, Eksner B, Kokoszka B, Pekala T, Radom M, Strzelczak M, Zarudzki L, Krason A, Arcadu F, Tessier J. Deep learning automates bidimensional and volumetric tumor burden measurement from MRI in pre- and post-operative glioblastoma patients. Comput Biol Med 2023; 154:106603. [PMID: 36738710] [DOI: 10.1016/j.compbiomed.2023.106603]
Abstract
Tumor burden assessment by magnetic resonance imaging (MRI) is central to the evaluation of treatment response for glioblastoma. The assessment is, however, complex to perform and subject to high variability due to the heterogeneity and complexity of the disease. In this work, we tackle this issue and propose a deep learning pipeline for fully automated, end-to-end analysis of glioblastoma patients. Our approach first identifies the tumor sub-regions, including the enhancing tumor (ET), peritumoral edema (ED), and surgical cavity, and then calculates volumetric and bidimensional measurements that follow the current Response Assessment in Neuro-Oncology (RANO) criteria. We also introduce a rigorous manual annotation process, followed by human experts to delineate the tumor sub-regions and to capture the segmentation confidences that are later used while training the deep learning models. An extensive experimental study over 760 pre-operative and 504 post-operative adult glioma patients, obtained from a public database (acquired at 19 sites in years 2021-2020) and from a clinical treatment trial (47 and 69 sites for pre-/post-operative patients, 2009-2011), backed by thorough quantitative, qualitative, and statistical analysis, revealed that our pipeline performs accurate segmentation of pre- and post-operative MRIs in a fraction of the manual delineation time (up to 20 times faster than humans). Volumetric measurements were in strong agreement with experts, with intraclass correlation coefficients (ICC) of 0.959, 0.703, and 0.960 for ET, ED, and cavity. Similarly, automated RANO compared favorably with experienced readers (ICC: 0.681 and 0.866), producing consistent and accurate results. Additionally, we showed that RANO measurements are not always sufficient to quantify tumor burden. The high performance of the automated tumor burden measurement highlights the potential of the tool for considerably improving and simplifying radiological evaluation of glioblastoma in clinical trials and clinical practice.
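A RANO-style bidimensional measurement is the product of the longest tumor diameter and the longest measurable width perpendicular to it. The sketch below is a simplified illustration over a small set of 2D boundary points (exhaustive pairwise search, exact perpendicularity); the paper's pipeline operates on full MRI segmentations and is not reproduced here.

```python
import math

def bidimensional_product(points, tol=1e-6):
    """Longest diameter times the longest width perpendicular to it,
    over a set of 2D boundary points (simplified RANO-style product)."""
    pts = list(points)
    best = (0.0, None)
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            d = math.dist(pts[i], pts[j])
            if d > best[0]:
                best = (d, (pts[i], pts[j]))
    (x1, y1), (x2, y2) = best[1]
    ux, uy = (x2 - x1) / best[0], (y2 - y1) / best[0]  # unit long axis
    width = 0.0
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            vx = pts[j][0] - pts[i][0]
            vy = pts[j][1] - pts[i][1]
            if abs(vx * ux + vy * uy) < tol:           # perpendicular pair
                width = max(width, math.hypot(vx, vy))
    return best[0] * width

# hypothetical cross-shaped lesion outline: 6 units long, 4 units wide
cross = [(0, 0), (6, 0), (3, -2), (3, 2), (3, 0)]
product = bidimensional_product(cross)
```

Clinical implementations additionally restrict the measurement to the enhancing tumor on a single axial slice and enforce minimum measurable-lesion sizes.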
Affiliation(s)
- Jakub Nalepa
- Graylight Imaging, Gliwice, Poland; Department of Algorithmics and Software, Silesian University of Technology, Gliwice, Poland.
- Oskar Bozek
- Department of Radiodiagnostics and Invasive Radiology, School of Medicine in Katowice, Medical University of Silesia in Katowice, Katowice, Poland
- Bartosz Eksner
- Department of Radiology and Nuclear Medicine, ZSM Chorzów, Chorzów, Poland
- Bartosz Kokoszka
- Department of Radiodiagnostics, Interventional Radiology and Nuclear Medicine, University Clinical Centre, Katowice, Poland
- Tomasz Pekala
- Department of Radiodiagnostics, Interventional Radiology and Nuclear Medicine, University Clinical Centre, Katowice, Poland
- Mateusz Radom
- Department of Radiology and Diagnostic Imaging, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice Branch, Gliwice, Poland
- Marek Strzelczak
- Department of Radiology and Diagnostic Imaging, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice Branch, Gliwice, Poland
- Lukasz Zarudzki
- Department of Radiology and Diagnostic Imaging, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice Branch, Gliwice, Poland
- Agata Krason
- Roche Pharmaceutical Research & Early Development, Early Clinical Development Oncology, Roche Innovation Center Basel, Basel, Switzerland
- Filippo Arcadu
- Roche Pharmaceutical Research & Early Development, Early Clinical Development Informatics, Roche Innovation Center Basel, Basel, Switzerland
- Jean Tessier
- Roche Pharmaceutical Research & Early Development, Early Clinical Development Oncology, Roche Innovation Center Basel, Basel, Switzerland
13
Deep belief network assisted quadratic logit boost classifier for brain tumor detection using MR images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104415]
14
Karami G, Pascuzzo R, Figini M, Del Gratta C, Zhang H, Bizzi A. Combining Multi-Shell Diffusion with Conventional MRI Improves Molecular Diagnosis of Diffuse Gliomas with Deep Learning. Cancers (Basel) 2023; 15:482. [PMID: 36672430] [PMCID: PMC9856805] [DOI: 10.3390/cancers15020482]
Abstract
Since 2016, the WHO classification has confirmed the importance of integrating molecular diagnosis into the prognosis and treatment decisions of adult-type diffuse gliomas. This motivates the development of non-invasive diagnostic methods, in particular MRI, to predict the molecular subtypes of gliomas before surgery. At present, this development has focused on deep learning (DL)-based predictive models, mainly with conventional MRI (cMRI), despite recent studies suggesting that multi-shell diffusion MRI (dMRI) offers complementary information to cMRI for molecular subtyping. The aim of this work is to evaluate the potential benefit of combining cMRI and multi-shell dMRI in DL-based models, using a model implemented with deep residual neural networks as an illustrative example. On a dataset of 146 patients with gliomas (grades 2 to 4), the model was trained and evaluated, with nested cross-validation, on pre-operative cMRI, multi-shell dMRI, and a combination of the two for the following classification tasks: (i) IDH mutation; (ii) 1p/19q codeletion; and (iii) three molecular subtypes according to WHO 2021. On a subset of 100 patients with lower-grade gliomas (grades 2 and 3 according to WHO 2016), combining cMRI and multi-shell dMRI gave the best performance in predicting IDH mutation status, with an accuracy of 75 ± 9%, higher than using cMRI or multi-shell dMRI alone (both 70 ± 7%). Similar findings were observed for 1p/19q codeletion status, where the accuracy from combining cMRI and multi-shell dMRI (72 ± 4%) exceeded that of each modality alone (cMRI: 65 ± 6%; multi-shell dMRI: 66 ± 9%). These findings held when all 146 patients were considered, both for predicting IDH status (combined: 81 ± 5% accuracy; cMRI: 74 ± 5%; multi-shell dMRI: 73 ± 6%) and for diagnosing the three molecular subtypes according to WHO 2021 (combined: 60 ± 5%; cMRI: 57 ± 8%; multi-shell dMRI: 56 ± 7%). Together, these findings suggest that combining cMRI and multi-shell dMRI offers higher accuracy than either modality alone for predicting IDH and 1p/19q status and for diagnosing the three molecular subtypes with DL-based models.
Affiliation(s)
- Golestan Karami
- Department of Neuroscience, Imaging and Clinical Sciences, Gabriele D’Annunzio University, 66100 Chieti, Italy
- Institute for Advanced Biomedical Technologies, Gabriele D’Annunzio University, 66100 Chieti, Italy
- Riccardo Pascuzzo
- Department of Neuroradiology, Fondazione IRCCS Istituto Neurologico Carlo Besta, 20133 Milan, Italy
- Matteo Figini
- Centre for Medical Image Computing and Department of Computer Science, University College London, London WC1V 6LJ, UK
- Cosimo Del Gratta
- Department of Neuroscience, Imaging and Clinical Sciences, Gabriele D’Annunzio University, 66100 Chieti, Italy
- Institute for Advanced Biomedical Technologies, Gabriele D’Annunzio University, 66100 Chieti, Italy
- Hui Zhang
- Centre for Medical Image Computing and Department of Computer Science, University College London, London WC1V 6LJ, UK
- Alberto Bizzi
- Department of Neuroradiology, Fondazione IRCCS Istituto Neurologico Carlo Besta, 20133 Milan, Italy
15
Uncertainty-guided mutual consistency learning for semi-supervised medical image segmentation. Artif Intell Med 2022; 138:102476. [PMID: 36990583] [DOI: 10.1016/j.artmed.2022.102476]
Abstract
Medical image segmentation is a fundamental and critical step in many clinical workflows. Semi-supervised learning has been widely applied to medical image segmentation because it alleviates the heavy burden of acquiring expert-examined annotations and takes advantage of unlabeled data, which is much easier to acquire. Although consistency learning has proven effective by enforcing invariance of predictions under different distributions, existing approaches cannot make full use of region-level shape constraints and boundary-level distance information from unlabeled data. In this paper, we propose a novel uncertainty-guided mutual consistency learning framework that effectively exploits unlabeled data by integrating intra-task consistency learning from up-to-date predictions for self-ensembling with cross-task consistency learning from task-level regularization that exploits geometric shape information. The framework is guided by the models' estimated segmentation uncertainty to select relatively certain predictions for consistency learning, so as to exploit more reliable information from unlabeled data. Experiments on two publicly available benchmark datasets showed that: (1) our method achieves significant performance improvements by leveraging unlabeled data, with gains of up to 4.13% and 9.82% in Dice coefficient over the supervised baseline on left atrium segmentation and brain tumor segmentation, respectively; and (2) compared with other semi-supervised segmentation methods, our method achieves better segmentation performance under the same backbone network and task settings on both datasets, demonstrating its effectiveness, robustness, and potential transferability to other medical image segmentation tasks.
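The uncertainty-guided selection described above can be illustrated with predictive entropy: pixels whose class distribution has low entropy are treated as reliable and retained for the consistency loss. This is a minimal sketch with hypothetical per-pixel probabilities and a hand-picked threshold, not the authors' network-based estimate.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of one pixel's class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def certain_mask(prob_map, threshold):
    """Keep only pixels whose predictive entropy falls below the
    threshold; uncertain pixels would be excluded from the loss."""
    return [[entropy(p) < threshold for p in row] for row in prob_map]

# toy 2 x 2 map of per-pixel class probabilities (3 classes)
prob_map = [[(0.98, 0.01, 0.01), (0.40, 0.30, 0.30)],
            [(0.90, 0.05, 0.05), (0.34, 0.33, 0.33)]]
mask = certain_mask(prob_map, threshold=0.5)
```

In practice the probabilities come from averaged stochastic forward passes and the threshold is often ramped up over training.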
16
Ali NF, Md Nasry NN, Shah I, Mokri SS, Abd Rahni AA. Segmentation of Brain Glioma in MRI Images Using Deep Learning. 2022 IEEE 20th Student Conference on Research and Development (SCOReD) 2022. [DOI: 10.1109/scored57082.2022.9973855]
Affiliation(s)
- Nurul Fatihah Ali
- Faculty of Engineering and Built Environment, Dept. of Electrical Electronic and Systems Engineering, Selangor, Malaysia
- Nurul Nadhirah Md Nasry
- Faculty of Engineering and Built Environment, Dept. of Electrical Electronic and Systems Engineering, Selangor, Malaysia
- Idris Shah
- Faculty of Engineering and Built Environment, Dept. of Electrical Electronic and Systems Engineering, Selangor, Malaysia
- Siti Salasiah Mokri
- Faculty of Engineering and Built Environment, Dept. of Electrical Electronic and Systems Engineering, Selangor, Malaysia
- Ashrani Aizuddin Abd Rahni
- Faculty of Engineering and Built Environment, Dept. of Electrical Electronic and Systems Engineering, Selangor, Malaysia
17
Dong J, Zhang Y, Meng Y, Yang T, Ma W, Wu H. Segmentation Algorithm of Magnetic Resonance Imaging Glioma under Fully Convolutional Densely Connected Convolutional Networks. Stem Cells Int 2022; 2022:8619690. [PMID: 36299467] [PMCID: PMC9592238] [DOI: 10.1155/2022/8619690]
Abstract
This work investigated the application value of a magnetic resonance imaging (MRI) image segmentation algorithm based on a fully convolutional DenseNet neural network (FCDNN) in glioma diagnosis. Building on the fully convolutional DenseNet algorithm, a new automatic semantic segmentation method, the cerebral glioma semantic segmentation network (CGSSNet), was established and applied to glioma MRI image segmentation using the public BraTS dataset. Under identical conditions, the Dice similarity coefficient (DSC), sensitivity, and Hausdorff distance (HD) of this algorithm were compared with those of other algorithms for MRI image processing. The results showed that the CGSSNet segmentation algorithm significantly improved the segmentation accuracy of glioma MRI images, achieving DSC, sensitivity, and HD values of 0.937, 0.811, and 1.201, respectively. Across different iteration counts, the DSC, sensitivity, and HD values of the CGSSNet segmentation algorithm were significantly better than those of the other algorithms. This indicates that the DenseNet-based CGSSNet model can improve the segmentation accuracy of glioma MRI images and has potential application value in clinical practice.
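The three metrics reported above (DSC, sensitivity, HD) are standard and can be computed directly from binary masks. A minimal sketch over toy voxel coordinate sets, not tied to any particular paper's evaluation code:

```python
import math

def seg_metrics(pred, truth):
    """Dice, sensitivity, and symmetric Hausdorff distance between two
    binary masks represented as sets of voxel coordinates."""
    tp = len(pred & truth)                       # true-positive voxels
    dice = 2 * tp / (len(pred) + len(truth))
    sensitivity = tp / len(truth)

    def directed(a, b):
        # worst-case distance from a point of a to its nearest point of b
        return max(min(math.dist(p, q) for q in b) for p in a)

    hd = max(directed(pred, truth), directed(truth, pred))
    return dice, sensitivity, hd

truth = {(0, 0), (0, 1), (1, 0), (1, 1)}
pred = {(0, 0), (0, 1), (1, 1), (2, 1)}
dice, sens, hd = seg_metrics(pred, truth)
```

Production code would use distance transforms instead of the quadratic nearest-neighbor search, and often the more robust 95th-percentile Hausdorff distance.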
Affiliation(s)
- Jie Dong
- School of Information Engineering, North China University of Water Resources and Electric Power, Zhengzhou 450045, China
- Yueying Zhang
- School of Information Engineering, North China University of Water Resources and Electric Power, Zhengzhou 450045, China
- Yun Meng
- Department of Magnetic Resonance, The First Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, China
- Tingxiao Yang
- School of Information Engineering, North China University of Water Resources and Electric Power, Zhengzhou 450045, China
- Wei Ma
- School of Information Engineering, North China University of Water Resources and Electric Power, Zhengzhou 450045, China
- Huixin Wu
- School of Information Engineering, North China University of Water Resources and Electric Power, Zhengzhou 450045, China
18
Deep Learning for the Automatic Segmentation of Extracranial Venous Malformations of the Head and Neck from MR Images Using 3D U-Net. J Clin Med 2022; 11:5593. [PMID: 36233460] [PMCID: PMC9573069] [DOI: 10.3390/jcm11195593]
Abstract
Background: It is difficult to characterize extracranial venous malformations (VMs) of the head and neck region from magnetic resonance imaging (MRI) manually, one patient at a time. We attempted automatic segmentation of lesions from MRI of extracranial VMs using a convolutional neural network as a deep learning tool. Methods: T2-weighted MRI from 53 patients with extracranial VMs in the head and neck region was used for annotation. Preprocessing was performed before training, and a three-dimensional U-Net was used as the segmentation model. Dice similarity coefficients were evaluated along with other indicators. Results: The Dice similarity coefficient of the 3D U-Net was 99.75% on the training set and 60.62% on the test set. The model showed overfitting, which could be resolved with a larger number of training objects, i.e., MRI VM images. Conclusions: Our pilot study showed sufficient potential for the automatic segmentation of extracranial VMs through deep learning using MR images from VM patients; the observed overfitting should be resolved with a larger number of MRI VM images.
19
Khorasani A, Kafieh R, Saboori M, Tavakoli MB. Glioma segmentation with DWI weighted images, conventional anatomical images, and post-contrast enhancement magnetic resonance imaging images by U-Net. Phys Eng Sci Med 2022; 45:925-934. [PMID: 35997927] [DOI: 10.1007/s13246-022-01164-w]
Abstract
Glioma segmentation is believed to be one of the most important stages of treatment management. Recent developments in magnetic resonance imaging (MRI) protocols have led to renewed interest in automatic glioma segmentation with different MRI image weights, and U-Net is a major area of interest within this field. This paper examines the impact of the input MRI image weight on U-Net output performance for glioma segmentation. One hundred forty-nine glioma patients were scanned with a 1.5 T MRI scanner. The main image weights acquired were diffusion-weighted imaging (DWI) weights (b50, b500, b1000, the apparent diffusion coefficient (ADC) map, and the exponential apparent diffusion coefficient (eADC) map), anatomical image weights (T2, T1, T2-FLAIR), and a post-enhancement image weight (T1Gd). U-Net with data augmentation was used to segment the glioma tumors, and the Dice coefficient and accuracy enabled comparison with previous studies. A first set of analyses examined the impact of epoch number on U-Net accuracy, and n_epoch = 20 was selected for training. The mean Dice coefficients for the b50, b500, b1000, ADC map, eADC map, T2, T1, T2-FLAIR, and T1Gd image weights were 0.892, 0.872, 0.752, 0.931, 0.944, 0.762, 0.721, 0.896, and 0.694, respectively. This study found that DWI image weights have higher diagnostic value for glioma segmentation with U-Net than anatomical and post-enhancement image weights, with the ADC and eADC maps yielding the highest performance.
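The ADC and eADC maps used above follow the standard mono-exponential diffusion model, ADC = ln(S0/Sb)/b and eADC = Sb/S0 = exp(-b·ADC). A sketch with hypothetical voxel signals (the dataset's actual signal values are not reproduced here):

```python
import math

def adc(s0, sb, b):
    """Apparent diffusion coefficient (mm^2/s) from the b=0 signal s0
    and the diffusion-weighted signal sb at b-value b (s/mm^2)."""
    return math.log(s0 / sb) / b

def eadc(s0, sb):
    """Exponential ADC map value: the attenuation ratio exp(-b * ADC)."""
    return sb / s0

s0, s1000 = 1000.0, 368.0       # hypothetical voxel signals
a = adc(s0, s1000, b=1000)      # close to 1.0e-3 mm^2/s, typical brain tissue
e = eadc(s0, s1000)
```

The eADC map inverts the T2 shine-through contrast of raw DWI, which is one reason it can delineate lesions more cleanly than the high-b images themselves.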
Affiliation(s)
- Amir Khorasani
- Department of Medical Physics, School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Rahele Kafieh
- Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Department of Engineering, Durham University, Durham, UK
- Masih Saboori
- Department of Neurosurgery, School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Mohamad Bagher Tavakoli
- Department of Medical Physics, School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
20
Chen W, Zhou W, Zhu L, Cao Y, Gu H, Yu B. MTDCNet: A 3D multi-threading dilated convolutional network for brain tumor automatic segmentation. J Biomed Inform 2022; 133:104173. [PMID: 35998815] [DOI: 10.1016/j.jbi.2022.104173]
Abstract
Glioma is one of the most threatening tumors, and the survival rate of affected patients is low. Automatic segmentation of tumors by reliable algorithms can reduce diagnosis time. In this paper, a novel 3D multi-threading dilated convolutional network (MTDC-Net) is proposed for automatic brain tumor segmentation. First, a multi-threading dilated convolution (MTDC) strategy is introduced in the encoder so that low-dimensional structural features can be extracted and integrated better, while a pyramid matrix fusion (PMF) algorithm integrates the characteristic structural information. Second, to make better use of contextual semantic information, a spatial pyramid convolution (SPC) operation is proposed: by using convolutions with different kernel sizes, the model aggregates more semantic information. Finally, a multi-threading adaptive pooling up-sampling (MTAU) strategy increases the weight of semantic information and improves the recognition ability of the model, and a pixel-based post-processing step suppresses erroneous predictions. On the Brain Tumor Segmentation challenge 2018 (BraTS2018) public validation dataset, the Dice scores of MTDC-Net are 0.832, 0.892, and 0.809 for the core, whole, and enhancing tumor, respectively. On the BraTS2020 public validation dataset, the Dice scores are 0.833, 0.896, and 0.797 for the core tumor, whole tumor, and enhancing tumor, respectively. Extensive numerical experiments show that MTDC-Net is a state-of-the-art network for automatic brain tumor segmentation.
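The dilated convolutions at the heart of the MTDC strategy enlarge the receptive field without adding parameters: kernel tap i reads the input at stride equal to the dilation rate. A minimal 1D illustration in plain Python (the network itself uses 3D convolutions; this sketch is not the authors' code):

```python
def dilated_conv1d(signal, kernel, dilation):
    """'Valid' 1D convolution with a dilated kernel: tap i of the kernel
    reads the input at stride `dilation`, so the receptive field grows
    to (len(kernel) - 1) * dilation + 1 with the same parameter count."""
    span = (len(kernel) - 1) * dilation
    return [sum(kernel[i] * signal[t + i * dilation]
                for i in range(len(kernel)))
            for t in range(len(signal) - span)]

x = [1, 2, 3, 4, 5, 6, 7]
k = [1, 0, -1]                           # finite-difference kernel
y1 = dilated_conv1d(x, k, dilation=1)    # compares samples 2 apart
y2 = dilated_conv1d(x, k, dilation=2)    # compares samples 4 apart
```

Stacking branches with different dilation rates, as multi-branch dilated blocks do, captures structure at several scales in parallel before fusion.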
Affiliation(s)
- Wankun Chen
- College of Mathematics and Physics, Qingdao University of Science and Technology, Qingdao 266061, China
- Weifeng Zhou
- College of Mathematics and Physics, Qingdao University of Science and Technology, Qingdao 266061, China
- Ling Zhu
- College of Mathematics and Physics, Qingdao University of Science and Technology, Qingdao 266061, China
- Yuan Cao
- College of Information Science and Technology, School of Data Science, Qingdao University of Science and Technology, Qingdao 266061, China
- Haiming Gu
- College of Mathematics and Physics, Qingdao University of Science and Technology, Qingdao 266061, China
- Bin Yu
- College of Information Science and Technology, School of Data Science, Qingdao University of Science and Technology, Qingdao 266061, China
- School of Data Science, University of Science and Technology of China, Hefei 230027, China
21
Akinyelu AA, Zaccagna F, Grist JT, Castelli M, Rundo L. Brain Tumor Diagnosis Using Machine Learning, Convolutional Neural Networks, Capsule Neural Networks and Vision Transformers, Applied to MRI: A Survey. J Imaging 2022; 8:205. [PMID: 35893083] [PMCID: PMC9331677] [DOI: 10.3390/jimaging8080205]
Abstract
Management of brain tumors is based on clinical and radiological information, with presumed grade dictating treatment. Hence, a non-invasive assessment of tumor grade is of paramount importance for choosing the best treatment plan. Convolutional Neural Networks (CNNs) are among the most effective Deep Learning (DL)-based techniques used for brain tumor diagnosis; however, they are unable to handle input modifications effectively. Capsule neural networks (CapsNets) are a novel type of machine learning (ML) architecture recently developed to address this drawback of CNNs: CapsNets are resistant to rotations and affine translations, which is beneficial when processing medical imaging datasets. Moreover, Vision Transformer (ViT)-based solutions have very recently been proposed to address the issue of long-range dependency in CNNs. This survey provides a comprehensive overview of brain tumor classification and segmentation techniques, with a focus on ML-based, CNN-based, CapsNet-based, and ViT-based techniques. It highlights the fundamental contributions of recent studies and the performance of state-of-the-art techniques, presents an in-depth discussion of crucial issues and open challenges, and identifies key limitations and promising future research directions. We envisage that this survey will serve as a good springboard for further study.
Affiliation(s)
- Andronicus A. Akinyelu
- NOVA Information Management School (NOVA IMS), Universidade NOVA de Lisboa, Campus de Campolide, 1070-312 Lisboa, Portugal
- Department of Computer Science and Informatics, University of the Free State, Phuthaditjhaba 9866, South Africa
- Fulvio Zaccagna
- Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum-University of Bologna, 40138 Bologna, Italy
- IRCCS Istituto delle Scienze Neurologiche di Bologna, Functional and Molecular Neuroimaging Unit, 40139 Bologna, Italy
- James T. Grist
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, UK
- Department of Radiology, Oxford University Hospitals NHS Foundation Trust, Oxford OX3 9DU, UK
- Oxford Centre for Clinical Magnetic Research Imaging, University of Oxford, Oxford OX3 9DU, UK
- Institute of Cancer and Genomic Sciences, University of Birmingham, Birmingham B15 2SY, UK
- Mauro Castelli
- NOVA Information Management School (NOVA IMS), Universidade NOVA de Lisboa, Campus de Campolide, 1070-312 Lisboa, Portugal
- Leonardo Rundo
- Department of Information and Electrical Engineering and Applied Mathematics, University of Salerno, 84084 Fisciano, Italy
22
Shaukat Z, Farooq QUA, Tu S, Xiao C, Ali S. A state-of-the-art technique to perform cloud-based semantic segmentation using deep learning 3D U-Net architecture. BMC Bioinformatics 2022; 23:251. [PMID: 35751030] [PMCID: PMC9229514] [DOI: 10.1186/s12859-022-04794-9]
Abstract
Glioma is the most aggressive and dangerous primary brain tumor, with a survival time of less than 14 months. Segmentation of tumors is a necessary task in the image processing of gliomas and is important for timely diagnosis and the start of treatment. In this paper, we present a cloud-based 3D U-Net method to perform brain tumor segmentation using the BRATS dataset. The system was trained with the Adam optimization solver over multiple hyperparameters. We obtained an average Dice score of 95%, calculated with the Sørensen-Dice similarity coefficient, making our method the first cloud-based method to achieve this accuracy. We also performed an extensive literature review of brain tumor segmentation methods implemented in the last five years to obtain a state-of-the-art picture of well-known methodologies with high Dice scores. In comparison to the already implemented architectures, our method ranks on top in terms of accuracy among cloud-based 3D U-Net frameworks for glioma segmentation.
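The Adam solver mentioned above maintains bias-corrected first- and second-moment estimates of the gradient. A minimal scalar sketch on a toy objective, using generic hyperparameters rather than the paper's training settings:

```python
def adam(grad, x0, steps=400, lr=0.1,
         beta1=0.9, beta2=0.999, eps=1e-8):
    """Minimal scalar Adam optimizer: exponential moving averages of the
    gradient (m) and squared gradient (v), bias-corrected, drive each
    parameter update."""
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g          # first moment
        v = beta2 * v + (1 - beta2) * g * g      # second moment
        m_hat = m / (1 - beta1 ** t)             # bias correction
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (v_hat ** 0.5 + eps)
    return x

# minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3)
x_min = adam(lambda x: 2 * (x - 3), x0=0.0)
```

The per-coordinate normalization by the square root of the second moment is what makes Adam robust to the very different gradient scales of a deep 3D network's layers.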
Affiliation(s)
- Zeeshan Shaukat
- Faculty of Information Technology, Beijing University of Technology, Beijing, People's Republic of China
- Faculty of Computer Science, University of South Asia, Lahore, Pakistan
- Qurat Ul Ain Farooq
- Faculty of Environmental and Life Sciences, Beijing University of Technology, Beijing, People's Republic of China
- Shanshan Tu
- Faculty of Information Technology, Beijing University of Technology, Beijing, People's Republic of China
- Chuangbai Xiao
- Faculty of Information Technology, Beijing University of Technology, Beijing, People's Republic of China
- Saqib Ali
- Faculty of Information Technology, Beijing University of Technology, Beijing, People's Republic of China
23
Ali TM, Nawaz A, Ur Rehman A, Ahmad RZ, Javed AR, Gadekallu TR, Chen CL, Wu CM. A Sequential Machine Learning-cum-Attention Mechanism for Effective Segmentation of Brain Tumor. Front Oncol 2022; 12:873268. [PMID: 35719987] [PMCID: PMC9202559] [DOI: 10.3389/fonc.2022.873268]
Abstract
Magnetic resonance imaging is the most widely utilized imaging modality that permits radiologists to look inside the cerebrum, using radio waves and magnets, for tumor identification. However, identifying tumorous and nontumorous regions is tedious and complex due to the complexity of the tumorous region. Therefore, reliable automatic segmentation and prediction are necessary for the segmentation of brain tumors. This paper proposes a reliable and efficient neural network variant, an attention-based convolutional neural network, for brain tumor segmentation. Specifically, the encoder part of the U-Net is a pre-trained VGG19 network, followed by the adjacent decoder parts with an attention gate for the segmentation, noise induction, and a denoising mechanism to avoid overfitting. The dataset used for segmentation is BraTS'20, which comprises four different MRI modalities and one target mask file. The algorithm achieved Dice similarity coefficients of 0.83, 0.86, and 0.90 for the enhancing, core, and whole tumor, respectively.
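The attention gate can be illustrated with additive attention reduced to a single scalar channel per pixel: skip-connection features are reweighted by coefficients computed from the features themselves and a coarser gating signal from the decoder. All weights and inputs below are hypothetical; this is a sketch of the general mechanism, not the paper's architecture.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def attention_gate(x_feats, g_feats, wx, wg, psi, b=0.0):
    """Additive attention on per-pixel scalar features: an attention
    coefficient in (0, 1) is computed from the skip features x and the
    gating signal g, then used to rescale x."""
    alphas = [sigmoid(psi * max(0.0, wx * x + wg * g + b))  # ReLU inside
              for x, g in zip(x_feats, g_feats)]
    gated = [a * x for a, x in zip(alphas, x_feats)]
    return gated, alphas

x = [0.2, 1.5, 0.1, 2.0]    # skip-connection features (hypothetical)
g = [0.0, 1.0, 0.0, 1.0]    # gating signal from the decoder (hypothetical)
gated, alphas = attention_gate(x, g, wx=1.0, wg=2.0, psi=3.0)
```

Pixels where both the skip features and the gating signal are strong receive coefficients near 1, while background pixels are suppressed, which is how the gate steers the decoder toward tumor regions.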
Affiliation(s)
- Tahir Mohammad Ali
- Department of Computer Science, GULF University for Science and Technology, Mishref, Kuwait
- Ali Nawaz
- Department of Computer Science, GULF University for Science and Technology, Mishref, Kuwait
- Attique Ur Rehman
- Department of Computer Science, GULF University for Science and Technology, Mishref, Kuwait
- Department of Software Engineering, University of Sialkot, Sialkot, Pakistan
- Rana Zeeshan Ahmad
- Department of Information Technology, University of Sialkot, Sialkot, Pakistan
- Thippa Reddy Gadekallu
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India
- Chin-Ling Chen
- School of Information Engineering, Changchun Sci-Tech University, Changchun, China
- School of Computer and Information Engineering, Xiamen University of Technology, Xiamen, China
- Department of Computer Science and Information Engineering, Chaoyang University of Technology, Taichung, Taiwan
- Chih-Ming Wu
- School of Civil Engineering and Architecture, Xiamen University of Technology, Xiamen, China
24
Khan MM, Omee AS, Tazin T, Almalki FA, Aljohani M, Algethami H. A Novel Approach to Predict Brain Cancerous Tumor Using Transfer Learning. Comput Math Methods Med 2022; 2022:2702328. [PMID: 35770126] [PMCID: PMC9236785] [DOI: 10.1155/2022/2702328]
Abstract
As one of the most prevalent and deadly malignancies, brain tumors have a dismal survival rate at their most hazardous stages. Segmenting and classifying malignant brain tumors with mostly traditional medical image processing methods is a challenging and time-consuming task, and medical research reveals that manual categorization may result in inaccurate prediction and diagnosis, largely because malignancies and normal tissues can appear so similar. The brain, lung, liver, breast, and prostate are studied with imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound; this research makes significant use of CT and X-ray imaging to identify malignant brain tumors. The purpose of this article is to examine the use of convolutional neural networks (CNNs) in image-based diagnosis of brain cancers, which expedites treatment and improves its reliability. Given the abundance of research on this issue, the presented model focuses on increasing accuracy via a transfer learning method. The experiment was conducted using Python and Google Colab, with deep features extracted by two pretrained deep CNN models, VGG19 and MobileNetV2. Classification accuracy is used to evaluate performance: this research achieved an accuracy of 97 percent with MobileNetV2 and 91 percent with VGG19. This allows malignancies to be found before they have a negative effect on the body, such as paralysis.
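The transfer-learning recipe (reuse a pretrained feature extractor as-is, train only the final classifier) can be sketched with a logistic-regression head fitted on stand-in frozen features. The feature values and labels below are hypothetical; the paper's actual extractors are the convolutional bases of VGG19 and MobileNetV2, which are not reproduced here.

```python
import math

def train_head(features, labels, epochs=300, lr=0.5):
    """Logistic-regression 'head' trained on top of frozen features:
    only the weights w and bias b of the final classifier are updated,
    mirroring the transfer-learning setup."""
    dim = len(features[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y                          # gradient of the log-loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

# stand-in frozen deep features for "no tumor" (0) vs "tumor" (1) scans
feats = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
labels = [0, 0, 1, 1]
w, b = train_head(feats, labels)
acc = sum(predict(w, b, x) == y for x, y in zip(feats, labels)) / len(feats)
```

Because the extractor stays frozen, only a handful of parameters are trained, which is why transfer learning works with the small labeled datasets typical of medical imaging.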
Affiliation(s)
- Mohammad Monirujjaman Khan: Department of Electrical and Computer Engineering, North South University, Bashundhara, Dhaka 1229, Bangladesh
- Atiyea Sharmeen Omee: Department of Electrical and Computer Engineering, North South University, Bashundhara, Dhaka 1229, Bangladesh
- Tahia Tazin: Department of Electrical and Computer Engineering, North South University, Bashundhara, Dhaka 1229, Bangladesh
- Faris A. Almalki: Department of Computer Engineering, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
- Maha Aljohani: Software Engineering Department, College of Computer Science and Engineering, University of Jeddah, Jeddah 21959, Saudi Arabia
- Haneen Algethami: Department of Computer Science, College of Computers and Information Technology, Taif University, Taif 21974, Saudi Arabia
25
Wang X, Wu C, Liu S, Peng D. Combinatorial therapeutic strategies for enhanced delivery of therapeutics to brain cancer cells through nanocarriers: current trends and future perspectives. Drug Deliv 2022; 29:1370-1383. [PMID: 35532094] [PMCID: PMC9090367] [DOI: 10.1080/10717544.2022.2069881]
Abstract
Brain cancer is among the most aggressive cancers, and it has a drastic impact on patients' lives because currently employed strategies often fail to achieve treatment efficacy. Strategies used to relieve pain in brain cancer patients and to prolong survival include radiotherapy, chemotherapy, and surgery. Nevertheless, such treatments carry several inevitable limitations owing to unsatisfactory curative effects. Treating cancer is generally very challenging for many reasons, including drugs' intrinsic factors and physiological barriers. The blood-brain barrier (BBB) and blood-cerebrospinal fluid barrier (BCSFB) are two additional hurdles in the way of delivering therapeutic agents to brain tumors. Combinatorial and targeted therapies show a very promising role in cancer, where nanocarrier-based formulations are designed primarily to achieve tumor-specific drug release. A dual-targeting strategy is a versatile way of delivering chemotherapeutics to brain tumors, in which combined ligands and mediators cross the BBB and reach the target site efficiently. In contrast to single targeting, where one receptor or mediator is targeted, a dual-targeting strategy is expected to produce a multiple-fold increase in therapeutic efficacy, especially in brain tumors. In a nutshell, a dual-targeting strategy for brain tumors enhances the delivery efficiency of chemotherapeutic agents via penetration across the blood-brain barrier and enhances the targeting of tumor cells. This review article highlights the ongoing status of brain tumor therapy enhanced by nanoparticle-based delivery with the aid of dual-targeting strategies, along with future perspectives in this regard.
Affiliation(s)
- Xiande Wang: Department of Neurosurgery, Hangzhou Medical College Affiliated Lin'an People's Hospital, The First People's Hospital of Hangzhou Lin'an District, Hangzhou, China
- Cheng Wu: Cancer Center, Department of Neurosurgery, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, China
- Shiming Liu: Cancer Center, Department of Neurosurgery, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, China
- Deqing Peng: Cancer Center, Department of Neurosurgery, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, China
26
Zeineldin RA, Karar ME, Elshaer Z, Coburger J, Wirtz CR, Burgert O, Mathis-Ullrich F. Explainability of deep neural networks for MRI analysis of brain tumors. Int J Comput Assist Radiol Surg 2022; 17:1673-1683. [PMID: 35460019] [PMCID: PMC9463287] [DOI: 10.1007/s11548-022-02619-x]
Abstract
PURPOSE Artificial intelligence (AI), in particular deep neural networks, has achieved remarkable results for medical image analysis in several applications. Yet the lack of explainability of deep neural models is considered the principal barrier to applying these methods in clinical practice. METHODS In this study, we propose NeuroXAI, a framework for explainable AI of deep learning networks that aims to increase the trust of medical experts. NeuroXAI implements seven state-of-the-art explanation methods providing visualization maps to help make deep learning models transparent. RESULTS NeuroXAI has been applied to two of the most widely investigated problems in brain imaging analysis, i.e., image classification and segmentation using the magnetic resonance (MR) modality. Visual attention maps of multiple XAI methods were generated and compared for both applications. A further experiment demonstrated that NeuroXAI can provide information flow visualization on internal layers of a segmentation CNN. CONCLUSION Due to its open architecture, ease of implementation, and scalability to new XAI methods, NeuroXAI could assist radiologists and medical professionals in the detection and diagnosis of brain tumors in the clinical routine of cancer patients. The code of NeuroXAI is publicly accessible at https://github.com/razeineldin/NeuroXAI.
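NeuroXAI's seven explanation methods are not reproduced here, but the flavor of such visualization maps can be illustrated with occlusion sensitivity, one common model-agnostic explanation technique: each input region is masked in turn and the drop in model score is recorded as its importance. The `model_score` function below is a hypothetical stand-in for a trained classifier:

```python
# Occlusion-sensitivity sketch: importance of a pixel = score drop when
# that pixel is zeroed out. The "model" is a toy scorer, not a CNN.

def model_score(image):
    """Toy classifier score: responds most strongly to the centre pixel."""
    return 3 * image[1][1] + image[0][0]

def occlusion_map(image, score_fn):
    """Per-pixel score drop; larger drop means the pixel mattered more."""
    base = score_fn(image)
    heat = [[0] * len(row) for row in image]
    for i in range(len(image)):
        for j in range(len(image[i])):
            occluded = [row[:] for row in image]
            occluded[i][j] = 0
            heat[i][j] = base - score_fn(occluded)
    return heat

img = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
heat = occlusion_map(img, model_score)
print(heat)  # the centre pixel dominates the explanation, as the model suggests
```

On a real CNN the same loop would mask patches rather than single pixels, but the interpretation of the resulting heat map is identical.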
Affiliation(s)
- Ramy A Zeineldin: Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology (KIT), 76131 Karlsruhe, Germany; Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762 Reutlingen, Germany; Faculty of Electronic Engineering (FEE), Menoufia University, Menouf 32952, Egypt
- Mohamed E Karar: Faculty of Electronic Engineering (FEE), Menoufia University, Menouf 32952, Egypt
- Ziad Elshaer: Department of Neurosurgery, University of Ulm, 89312 Günzburg, Germany
- Jan Coburger: Department of Neurosurgery, University of Ulm, 89312 Günzburg, Germany
- Christian R Wirtz: Department of Neurosurgery, University of Ulm, 89312 Günzburg, Germany
- Oliver Burgert: Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762 Reutlingen, Germany
- Franziska Mathis-Ullrich: Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology (KIT), 76131 Karlsruhe, Germany
27
Das S, Nayak GK, Saba L, Kalra M, Suri JS, Saxena S. An artificial intelligence framework and its bias for brain tumor segmentation: A narrative review. Comput Biol Med 2022; 143:105273. [PMID: 35228172] [DOI: 10.1016/j.compbiomed.2022.105273]
Abstract
BACKGROUND Artificial intelligence (AI) has become a prominent technique for medical diagnosis and plays an essential role in detecting brain tumors. Although AI-based models are widely used in brain lesion segmentation (BLS), understanding their effectiveness is challenging due to their complexity and diversity. Several reviews on brain tumor segmentation are available, but none of them describe a link between the threats due to risk-of-bias (RoB) in AI and its architectures. In our review, we focus on linking RoB and the different AI-based architectural clusters in popular DL frameworks. Further, given the variance in these designs and in input data types in medical imaging, it is necessary to present a narrative review considering all facets of BLS. APPROACH The proposed study uses a PRISMA strategy based on 75 relevant studies found by searching PubMed, Scopus, and Google Scholar. Based on architectural evolution, DL studies were categorized into four classes: convolutional neural network (CNN)-based, encoder-decoder (ED)-based, transfer learning (TL)-based, and hybrid DL (HDL)-based architectures. These studies were then analyzed against 32 AI attributes, with clusters covering AI architecture, imaging modalities, hyper-parameters, performance evaluation metrics, and clinical evaluation. After the studies were scored for all attributes, a composite score was computed, normalized, and ranked. Thereafter, a bias cutoff (AP(ai)Bias 1.0, AtheroPoint, Roseville, CA, USA) was established to detect low-, moderate-, and high-bias studies. CONCLUSION The four classes of architectures, from best- to worst-performing, are TL > ED > CNN > HDL. ED-based models had the lowest AI bias for BLS. This study presents a set of three primary and six secondary recommendations for lowering the RoB.
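The score-normalize-rank-cutoff pipeline the approach describes can be sketched as follows. The study names, attribute values, and the 0.5/0.75 cutoffs are illustrative assumptions of ours, not the actual AP(ai)Bias 1.0 settings:

```python
# Hedged sketch of the review's bias scoring pipeline: per-study attribute
# scores are summed into a composite, normalized to [0, 1], ranked, and
# thresholded into low/moderate/high-bias bands.

def bias_bands(studies, low_cut=0.75, moderate_cut=0.5):
    """studies: {name: [attribute scores]} -> (ranking, {name: bias band})."""
    composites = {name: sum(attrs) for name, attrs in studies.items()}
    top = max(composites.values())
    normalized = {name: c / top for name, c in composites.items()}
    ranked = sorted(normalized, key=normalized.get, reverse=True)

    def band(score):  # higher normalized score = lower risk of bias
        if score >= low_cut:
            return "low"
        return "moderate" if score >= moderate_cut else "high"

    return ranked, {name: band(s) for name, s in normalized.items()}

studies = {               # hypothetical attribute scores (0-4 each)
    "TL-study": [4, 4, 3, 4],
    "ED-study": [3, 3, 3, 3],
    "CNN-study": [2, 2, 1, 2],
}
ranked, bands = bias_bands(studies)
print(ranked, bands)
```

The normalization step makes the cutoff thresholds comparable across reviews that score different numbers of attributes.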
Affiliation(s)
- Suchismita Das: CSE Department, International Institute of Information Technology, Bhubaneswar, Odisha, India; CSE Department, KIIT Deemed to be University, Bhubaneswar, Odisha, India
- G K Nayak: CSE Department, International Institute of Information Technology, Bhubaneswar, Odisha, India
- Luca Saba: Department of Radiology, AOU, University of Cagliari, Cagliari, Italy
- Mannudeep Kalra: Department of Radiology, Massachusetts General Hospital, 55 Fruit Street, Boston, MA, USA
- Jasjit S Suri: Stroke Diagnostic and Monitoring Division, AtheroPoint™ LLC, Roseville, CA, USA
- Sanjay Saxena: CSE Department, International Institute of Information Technology, Bhubaneswar, Odisha, India
28
Xu Z, Wang Y, Lu D, Yu L, Yan J, Luo J, Ma K, Zheng Y, Tong RKY. All-Around Real Label Supervision: Cyclic Prototype Consistency Learning for Semi-supervised Medical Image Segmentation. IEEE J Biomed Health Inform 2022; 26:3174-3184. [PMID: 35324450] [DOI: 10.1109/jbhi.2022.3162043]
Abstract
Semi-supervised learning has substantially advanced medical image segmentation since it alleviates the heavy burden of acquiring costly expert-examined annotations. In particular, consistency-based approaches have attracted attention for their superior performance, wherein the real labels are only utilized to supervise their paired images via a supervised loss, while the unlabeled images are exploited by enforcing perturbation-based "unsupervised" consistency without explicit guidance from those real labels. Intuitively, however, the expert-examined real labels contain more reliable supervision signals. Observing this, we ask an unexplored but interesting question: can we exploit the unlabeled data via explicit real label supervision for semi-supervised training? To this end, we discard the previous perturbation-based consistency but absorb the essence of non-parametric prototype learning. Based on prototypical networks, we propose a novel cyclic prototype consistency learning (CPCL) framework, constructed from a labeled-to-unlabeled (L2U) prototypical forward process and an unlabeled-to-labeled (U2L) backward process. The two processes synergistically enhance the segmentation network by encouraging more discriminative and compact features. In this way, our framework turns the previous "unsupervised" consistency into a new "supervised" consistency, giving our method its "all-around real label supervision" property. Extensive experiments on brain tumor segmentation from MRI and kidney segmentation from CT images show that CPCL can effectively exploit the unlabeled data and outperform other state-of-the-art semi-supervised medical image segmentation methods.
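The non-parametric prototype step at the heart of this style of method (a class prototype is the mean feature of labeled pixels, and unlabeled pixels are then assigned to the nearest prototype) can be sketched without a segmentation network. The feature vectors below are made-up stand-ins for pixel embeddings, not the paper's actual features:

```python
# Toy sketch of non-parametric prototype learning, the building block of
# CPCL's L2U direction: prototypes are mean labeled features; unlabeled
# features get pseudo-labels from their nearest prototype.
import math

def class_prototypes(features, labels):
    """Mean feature vector per class (the non-parametric 'prototype')."""
    groups = {}
    for f, y in zip(features, labels):
        groups.setdefault(y, []).append(f)
    return {y: tuple(sum(v) / len(v) for v in zip(*fs)) for y, fs in groups.items()}

def nearest_prototype(prototypes, feature):
    """Assign the class whose prototype is closest in feature space."""
    return min(prototypes, key=lambda y: math.dist(feature, prototypes[y]))

# Labeled pixel features (e.g. embeddings of tumor vs background pixels).
labeled_feats = [(0.9, 0.8), (1.0, 1.1), (0.1, 0.0), (0.0, 0.2)]
labeled_tags = ["tumor", "tumor", "background", "background"]
protos = class_prototypes(labeled_feats, labeled_tags)

# Pseudo-labels for unlabeled pixel features.
print([nearest_prototype(protos, f) for f in [(0.95, 1.0), (0.05, 0.1)]])
```

In CPCL these assignments then feed a consistency loss that is explicitly anchored to the real labels, rather than to perturbations.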
29
Ilhan A, Sekeroglu B, Abiyev R. Brain tumor segmentation in MRI images using nonparametric localization and enhancement methods with U-net. Int J Comput Assist Radiol Surg 2022; 17:589-600. [DOI: 10.1007/s11548-022-02566-7]
30
Krauze AV, Zhuge Y, Zhao R, Tasci E, Camphausen K. AI-Driven Image Analysis in Central Nervous System Tumors-Traditional Machine Learning, Deep Learning and Hybrid Models. J Biotechnol Biomed 2022; 5:1-19. [PMID: 35106480] [PMCID: PMC8802234] [DOI: 10.26502/jbb.2642-91280046]
Abstract
The interpretation of imaging in medicine in general, and in oncology specifically, remains problematic due to several limitations: the need to incorporate detailed clinical, patient- and disease-specific history, clinical exam features, and previous and ongoing treatment, and the dependency on reproducible human interpretation of multiple factors with incomplete data linkage. To standardize reporting, minimize bias, expedite management, and improve outcomes, the use of Artificial Intelligence (AI) has gained significant prominence in imaging analysis. In oncology, AI methods have consequently been explored in most cancer types, with ongoing progress in employing AI in imaging for oncology treatment, assessing treatment response, and understanding and communicating prognosis. Challenges remain with limited available data sets and variability in imaging changes over time, augmented by growing heterogeneity in analysis approaches. We review the imaging analysis workflow and examine how hand-crafted features (also referred to as traditional Machine Learning, ML), Deep Learning (DL) approaches, and hybrid analyses are being employed in AI-driven imaging analysis of central nervous system tumors. ML, DL, and hybrid approaches coexist, and their combination may produce superior results, although data in this space are still novel, and conclusions and pitfalls have yet to be fully explored. We note the growing technical complexity that may become increasingly separated from the clinic, and stress the acute need for clinician engagement to guide progress and ensure that conclusions derived from AI-driven imaging analysis receive the same level of scrutiny lent to other avenues of clinical research.
Affiliation(s)
- A V Krauze: Center for Cancer Research, National Cancer Institute, NIH, Building 10, Room B2-3637, Bethesda, USA
- Y Zhuge: Center for Cancer Research, National Cancer Institute, NIH, Building 10, Room B2-3637, Bethesda, USA
- R Zhao: University of British Columbia, Faculty of Medicine, 317 - 2194 Health Sciences Mall, Vancouver, Canada
- E Tasci: Center for Cancer Research, National Cancer Institute, NIH, Building 10, Room B2-3637, Bethesda, USA
- K Camphausen: Center for Cancer Research, National Cancer Institute, NIH, Building 10, Room B2-3637, Bethesda, USA
31
Bhalodiya JM, Lim Choi Keung SN, Arvanitis TN. Magnetic resonance image-based brain tumour segmentation methods: A systematic review. Digit Health 2022; 8:20552076221074122. [PMID: 35340900] [PMCID: PMC8943308] [DOI: 10.1177/20552076221074122]
Abstract
Background Image segmentation is an essential step in the analysis and subsequent characterisation of brain tumours through magnetic resonance imaging. In the literature, segmentation methods are empowered by open-access magnetic resonance imaging datasets, such as the brain tumour segmentation (BraTS) dataset. Moreover, with the increased use of artificial intelligence methods in medical imaging, access to larger data repositories has become vital in method development. Purpose To determine which automated brain tumour segmentation techniques medical imaging specialists and clinicians can use to identify tumour components, compared to manual segmentation. Methods We conducted a systematic review of 572 brain tumour segmentation studies published during 2015-2020. We reviewed segmentation techniques using T1-weighted, T2-weighted, gadolinium-enhanced T1-weighted, fluid-attenuated inversion recovery, diffusion-weighted and perfusion-weighted magnetic resonance imaging sequences. Moreover, we assessed physics- or mathematics-based methods, deep learning methods, and software-based or semi-automatic methods, as applied to magnetic resonance imaging techniques. In particular, we synthesised each method as per the utilised magnetic resonance imaging sequences, study population, technical approach (such as deep learning) and performance score measures (such as Dice score). Statistical tests We compared the median Dice score in segmenting the whole tumour, tumour core and enhanced tumour. Results We found that T1-weighted, gadolinium-enhanced T1-weighted, T2-weighted and fluid-attenuated inversion recovery magnetic resonance imaging are used the most in various segmentation algorithms. However, there is limited use of perfusion-weighted and diffusion-weighted magnetic resonance imaging. Moreover, we found that the U-Net deep learning technology is cited the most, and has high accuracy (Dice score 0.9) for magnetic resonance imaging-based brain tumour segmentation.
Conclusion U-Net is a promising deep learning technology for magnetic resonance imaging-based brain tumour segmentation. The community should be encouraged to contribute open-access datasets so training, testing and validation of deep learning algorithms can be improved, particularly for diffusion- and perfusion-weighted magnetic resonance imaging, where there are limited datasets available.
Affiliation(s)
- Jayendra M Bhalodiya: Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
- Sarah N Lim Choi Keung: Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
- Theodoros N Arvanitis: Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
32
Nalepa J. AIM and Brain Tumors. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_284]
33
Risk Factors of Restroke in Patients with Lacunar Cerebral Infarction Using Magnetic Resonance Imaging Image Features under Deep Learning Algorithm. Contrast Media Mol Imaging 2021; 2021:2527595. [PMID: 34887708] [PMCID: PMC8616697] [DOI: 10.1155/2021/2527595]
Abstract
This study aimed to explore magnetic resonance imaging (MRI) features based on the fuzzy local information C-means clustering (FLICM) image segmentation method to analyze the risk factors of restroke in patients with lacunar infarction. Based on the FLICM algorithm, the Canny edge detection algorithm and the Fourier shape descriptor were introduced to optimize the algorithm. The Jaccard coefficient, Dice coefficient, peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), running time, and segmentation accuracy of the optimized FLICM algorithm were compared with those of other algorithms when segmenting brain tissue MRI images. Thirty-six patients with lacunar infarction were selected as research subjects and divided into a control group (no restroke, 20 cases) and a stroke group (restroke, 16 cases) according to whether they had restroke. The differences in MRI imaging characteristics of the two groups were compared, and the risk factors for restroke in lacunar infarction were analyzed by logistic multivariate regression. The results showed that the Jaccard coefficient, Dice coefficient, PSNR, and SSIM of the optimized FLICM algorithm for segmenting brain tissue were all higher than those of the other algorithms; the shortest running time was 26 s, and the highest accuracy was 97.86%. The proportion of patients with a history of hypertension, the proportion with a paraventricular white matter lesion (WML) score greater than 2, the proportion with a deep WML score of 2, and the average age were all much higher in the stroke group than in the control group (P < 0.05). Logistic multivariate regression showed that age and history of hypertension were risk factors for restroke after lacunar infarction (P < 0.05). The optimized FLICM algorithm can thus effectively segment brain MRI images, and the risk factors for restroke in patients with lacunar infarction were age and history of hypertension. This study could provide a reference for the diagnosis and prognosis of lacunar infarction.
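The overlap metrics reported above (Jaccard and Dice coefficients) can be computed for binary segmentation masks as follows. This is a generic sketch, not the study's implementation, and PSNR/SSIM are omitted:

```python
# Dice and Jaccard overlap between a predicted binary mask and a
# ground-truth mask (flattened to 1-D for simplicity).

def dice_jaccard(pred, truth):
    p = {i for i, v in enumerate(pred) if v}   # predicted foreground pixels
    t = {i for i, v in enumerate(truth) if v}  # true foreground pixels
    inter, union = len(p & t), len(p | t)
    dice = 2 * inter / (len(p) + len(t)) if (p or t) else 1.0
    jaccard = inter / union if union else 1.0
    return dice, jaccard

pred = [1, 1, 1, 0, 0, 0, 1, 0]    # flattened predicted mask
truth = [1, 1, 0, 0, 0, 0, 1, 1]   # flattened ground-truth mask
print(dice_jaccard(pred, truth))   # (0.75, 0.6)
```

Dice weights the intersection twice, so it is always at least as large as Jaccard on the same pair of masks.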
34
Multi-scale brain tumor segmentation combined with deep supervision. Int J Comput Assist Radiol Surg 2021; 17:561-568. [PMID: 34894336] [DOI: 10.1007/s11548-021-02515-w]
Abstract
PURPOSE Fully convolutional neural networks (FCNNs) have achieved good performance in the field of medical image segmentation, and FCNNs that use multimodal images and multi-scale feature extraction have higher accuracy for brain tumor segmentation. We therefore made improvements to U-Net for fully automated segmentation of gliomas using multimodal images, and named the result the multi-scale dilate network with deep supervision (MSD-Net). METHODS MSD-Net is a symmetrical structure composed of a down-sampling process and an up-sampling process. In the down-sampling process, we use the multi-scale feature extraction block (ME) to extract multi-scale features and focus on primary features. Unlike other methods, ME consists of dilated convolution and standard convolution: dilated convolution extracts multi-scale information and standard convolution merges features of different scales, so the output of the ME contains both local and global information. During the up-sampling process, we add a deep supervision block (DSB), which shortens the length of back-propagation. In this paper, we pay particular attention to the importance of shallow features for feature restoration. RESULTS Our network was validated on the BraTS17 validation dataset. The DSC scores of MSD-Net for complete tumor, tumor core and enhancing tumor were 0.88, 0.81 and 0.78, respectively, which outperforms most networks. CONCLUSION This study shows that ME enhances the feature extraction ability of the network and improves segmentation accuracy, and that DSB speeds up the convergence of the network. In addition, attention should be paid to the contribution of shallow features to feature restoration.
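The key property of the dilated convolutions in the ME block (a wider receptive field from the same number of kernel weights) can be illustrated with a 1-D sketch. The signal and the edge-detector kernel below are illustrative choices, not the paper's filters:

```python
# 1-D dilated convolution: the same 3-tap kernel samples inputs that are
# `dilation` steps apart, so larger dilation rates capture wider-scale
# context without adding weights (the idea behind the ME block).

def dilated_conv1d(signal, kernel, dilation):
    span = (len(kernel) - 1) * dilation  # receptive field minus one
    return [
        sum(k * signal[i + j * dilation] for j, k in enumerate(kernel))
        for i in range(len(signal) - span)
    ]

signal = [0, 1, 2, 3, 4, 5, 6, 7]
kernel = [1, 0, -1]                       # simple difference (edge) kernel
print(dilated_conv1d(signal, kernel, 1))  # local differences
print(dilated_conv1d(signal, kernel, 2))  # differences at twice the scale
```

Stacking several dilation rates and merging the outputs with standard convolutions, as ME does, yields features that mix local and global context.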
35
Foundations of Lesion Detection Using Machine Learning in Clinical Neuroimaging. Acta Neurochir Suppl 2021; 134:171-182. [PMID: 34862541] [DOI: 10.1007/978-3-030-85292-4_21]
Abstract
This chapter describes technical considerations and current and future clinical applications of lesion detection using machine learning in the clinical setting. Lesion detection is central to neuroradiology and precedes all further processes which include but are not limited to lesion characterization, quantification, longitudinal disease assessment, prognosis, and prediction of treatment response. A number of machine learning algorithms focusing on lesion detection have been developed or are currently under development which may either support or extend the imaging process. Examples include machine learning applications in stroke, aneurysms, multiple sclerosis, neuro-oncology, neurodegeneration, and epilepsy.
36
Deepa PV, Joseph Jawhar S, Merry Geisa J. Diagnosis of Brain Tumor Using Nano Segmentation and Advanced-Convolutional Neural Networks Classification. J Med Imaging Health Inform 2021. [DOI: 10.1166/jmihi.2021.3891]
Abstract
The field of nanotechnology has lately acquired prominence owing to improved identification accuracy and performance in patients using Computer-Aided Diagnosis (CAD). Nano-scale imaging enables a high level of precision and accuracy in determining whether a brain tumour is malignant or benign, which contributes to a better standard of living for people with brain tumours. In this study, we present a Semantic Nano-segmentation methodology for the nanoscale classification of brain tumours. The suggested Advanced-Convolutional Neural Network (A-CNN)-based Semantic Nano-segmentation, which employs ResNet-50, will aid radiologists in detecting brain tumours even when lesions are small. The input is a nano-image, and the tumour image is partitioned using Semantic Nano-segmentation, which achieved average Dice and SSIM values of 0.9704 and 0.2133, respectively. The suggested Semantic Nano-segmentation achieves 93.2 percent and 92.7 percent accuracy for benign and malignant tumour images, respectively, while the A-CNN methodology achieves correct segmentation accuracy of 99.57 percent and 95.7 percent for malignant and benign images, respectively. This nano-method is designed to detect tumour areas in nanometers (nm) and hence accurately assess the illness. The ROC curve implies that the suggested technique outperforms earlier approaches with regard to True Positive values. A comparative analysis of ResNet-50 was conducted with training and testing data at rates of 90%-10%, 80%-20%, and 70%-30%, respectively, indicating the utility of the suggested work.
Affiliation(s)
- P. V. Deepa: Department of ECE, Arunachala College of Engineering for Women, Manavilai 629203, Tamilnadu, India
- S. Joseph Jawhar: Department of EEE, Arunachala College of Engineering for Women, Manavilai 629203, Tamilnadu, India
- J. Merry Geisa: Department of EEE, St. Xavier’s Catholic College of Engineering, 629003, Tamilnadu, India
37
Abstract
Brain tumor occurs owing to uncontrolled and rapid growth of cells. If not treated at an initial phase, it may lead to death. Despite many significant efforts and promising outcomes in this domain, accurate segmentation and classification remain a challenging task. A major challenge for brain tumor detection arises from the variations in tumor location, shape, and size. The objective of this survey is to deliver comprehensive literature on brain tumor detection through magnetic resonance imaging to help researchers. This survey covers the anatomy of brain tumors, publicly available datasets, enhancement techniques, segmentation, feature extraction, classification, and deep learning, transfer learning and quantum machine learning for brain tumor analysis. Finally, this survey provides all important literature for the detection of brain tumors with their advantages, limitations, developments, and future trends.
38
Brain Cancer Prediction Based on Novel Interpretable Ensemble Gene Selection Algorithm and Classifier. Diagnostics (Basel) 2021; 11:1936. [PMID: 34679634] [PMCID: PMC8535043] [DOI: 10.3390/diagnostics11101936]
Abstract
The growth of abnormal cells in the brain causes human brain tumors. Identifying the type of tumor is crucial for the prognosis and treatment of the patient. Data from cancer microarrays typically include few samples with many gene expression levels as features, reflecting the curse of dimensionality and making classifying microarray data challenging. In most of the examined studies, cancer classification (malignant vs. benign) accuracy was examined without disclosing biological information related to the classification process. A new approach was proposed to bridge the gap between cancer classification and the interpretation of the biological studies of the genes implicated in cancer. This study aims to develop a new hybrid model for cancer classification, using mRMRe feature selection as a key step to improve the performance of the classification methods, together with distributed hyperparameter optimization for gradient boosting ensemble methods. To evaluate the proposed method, NB, RF, and SVM classifiers were chosen. In terms of AUC, sensitivity, and specificity, the optimized CatBoost classifier performed better than the optimized XGBoost in 5-, 6-, 8-, and 10-fold cross-validation. With an accuracy of 0.91±0.12, the optimized CatBoost classifier is more accurate than the CatBoost classifier without optimization (0.81±0.24). By using the hybrid algorithms, SVM, RF, and NB automatically become more accurate. Furthermore, in terms of accuracy, SVM and RF (0.97±0.08) achieve equivalent and higher classification accuracy than NB (0.91±0.12). The findings of relevant biomedical studies confirm the findings for the selected genes.
39
Zhu H, Jiang L, Zhang H, Luo L, Chen Y, Chen Y. An automatic machine learning approach for ischemic stroke onset time identification based on DWI and FLAIR imaging. Neuroimage Clin 2021; 31:102744. [PMID: 34245995] [PMCID: PMC8271155] [DOI: 10.1016/j.nicl.2021.102744]
Abstract
Highlights
- Only two MR modalities (DWI and FLAIR) were used for fast time-since-stroke identification.
- A cross-modal convolutional network was constructed for lesion ROI segmentation in FLAIR.
- The network uses ROI features in DWI as prior information for better FLAIR segmentation.
- Five independent machine learning classifiers were trained and voted to obtain the final classification label.
- Voting over five classifiers effectively improves classification accuracy.
Current thrombolysis for acute ischemic stroke (AIS) treatment strictly requires a time since stroke (TSS) of less than 4.5 h. However, some patients are excluded from thrombolytic treatment because their TSS is unknown. The diffusion-weighted imaging (DWI) and fluid-attenuated inversion recovery (FLAIR) mismatch can simply identify TSS, since lesion intensities differ at different onset times. In this paper, we propose an automatic machine learning method to classify TSS as less than or more than 4.5 h. First, we develop a cross-modal convolutional neural network to accurately segment stroke lesions from DWI and FLAIR images. Second, features are extracted from DWI and FLAIR according to the segmented regions of interest (ROIs). Finally, the features are fed to machine learning models to identify TSS. In DWI and FLAIR ROI segmentation, the networks obtain high Dice coefficients of 0.803 and 0.647. The classification test results show that our model achieves an accuracy of 0.805, with a sensitivity of 0.769 and a specificity of 0.840. Our approach outperforms the human-read DWI-FLAIR mismatch model, illustrating its potential for automatic and fast TSS identification.
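The final step, combining the five independent classifiers by voting, can be sketched in a few lines (the vote values below are illustrative assumptions; the paper does not specify this exact code):

```python
from collections import Counter

def majority_vote(predictions):
    """Final label = the most common label across the classifiers' outputs."""
    return Counter(predictions).most_common(1)[0][0]

# Five hypothetical classifiers each predict TSS < 4.5 h (1) or not (0).
votes = [1, 1, 0, 1, 0]
label = majority_vote(votes)  # three of five classifiers say 1
```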
Affiliation(s)
- Haichen Zhu
- Lab of Image Science and Technology, Key Laboratory of Computer Network and Information Integration (Ministry of Education), School of Computer Science and Engineering, Southeast University, Nanjing 210096, China
- Liang Jiang
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, Nanjing 210006, China
- Hong Zhang
- Department of Radiology, Affiliated Jiangning Hospital of Nanjing Medical University, Nanjing 210000, China
- Limin Luo
- Lab of Image Science and Technology, Key Laboratory of Computer Network and Information Integration (Ministry of Education), School of Computer Science and Engineering, Southeast University, Nanjing 210096, China
- Yang Chen
- Lab of Image Science and Technology, Key Laboratory of Computer Network and Information Integration (Ministry of Education), School of Computer Science and Engineering, Southeast University, Nanjing 210096, China; School of Cyber Science and Engineering, Southeast University, Nanjing 210096, China
- Yuchen Chen
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, Nanjing 210006, China
|
40
|
An efficient brain tumor image classifier by combining multi-pathway cascaded deep neural network and handcrafted features in MR images. Med Biol Eng Comput 2021; 59:1495-1527. [PMID: 34184181 DOI: 10.1007/s11517-021-02370-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2020] [Accepted: 04/27/2021] [Indexed: 10/21/2022]
Abstract
Accurate segmentation and delineation of sub-tumor regions are very challenging tasks due to the nature of the tumor. Convolutional neural networks (CNNs) have traditionally achieved the most promising performance for brain tumor segmentation; however, handcrafted features remain very important for accurately identifying tumor boundary regions. The present work proposes a robust deep learning-based model with three different CNN architectures, combined with pre-defined handcrafted features, for brain tumor segmentation, mainly to find more prominent boundaries of the core and enhanced tumor regions. An automatic CNN architecture generally does not use pre-defined handcrafted features because it extracts features automatically. In this work, several pre-defined handcrafted features are computed from four MRI modalities (T2, FLAIR, T1c, and T1) with the help of additional handcrafted masks chosen according to user interest, and combined with the convolutional (automatic) features to improve the overall segmentation performance of the proposed CNN model. A multi-pathway CNN is explored alongside a single-pathway CNN, extracting local and global features simultaneously to identify the accurate sub-regions of the tumor with the help of handcrafted features. The work uses a cascaded CNN architecture, where the outcome of one CNN is fed as additional input information to the subsequent CNNs. To extract the handcrafted features, a convolutional operation was applied to the four MRI modalities with several pre-defined masks, producing a predefined set of handcrafted features. The work also investigates the usefulness of intensity normalization and data augmentation in the pre-processing stage to handle difficulties related to the imbalance of tumor labels.
The proposed method was evaluated on the BraTS 2018 datasets and achieved more promising results than existing (currently published) methods with respect to different metrics such as specificity, sensitivity, and Dice similarity coefficient (DSC) for complete, core, and enhanced tumor regions. Quantitatively, a notable gain is achieved around the boundaries of the sub-tumor regions using the proposed two-pathway CNN along with the handcrafted features.
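The handcrafted-feature step described above, convolving a modality with a predefined mask, can be sketched in plain Python (the vertical-edge mask and toy intensity patch are illustrative assumptions; like most deep learning frameworks, this computes cross-correlation rather than flipped convolution):

```python
def convolve2d_valid(image, mask):
    """'Valid' 2-D filtering of an intensity patch with a predefined mask."""
    ih, iw = len(image), len(image[0])
    mh, mw = len(mask), len(mask[0])
    out = []
    for r in range(ih - mh + 1):
        row = []
        for c in range(iw - mw + 1):
            acc = 0
            for i in range(mh):
                for j in range(mw):
                    acc += image[r + i][c + j] * mask[i][j]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge mask applied to a patch with a sharp left/right boundary.
patch = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
edge_mask = [
    [-1, 1],
    [-1, 1],
]
feature_map = convolve2d_valid(patch, edge_mask)  # responds only at the boundary column
```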
|
41
|
Huang D, Bai H, Wang L, Hou Y, Li L, Xia Y, Yan Z, Chen W, Chang L, Li W. The Application and Development of Deep Learning in Radiotherapy: A Systematic Review. Technol Cancer Res Treat 2021; 20:15330338211016386. [PMID: 34142614 PMCID: PMC8216350 DOI: 10.1177/15330338211016386] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023] Open
Abstract
With the massive use of computers, the growth and explosion of data have greatly promoted the development of artificial intelligence (AI). The rise of deep learning (DL) algorithms, such as convolutional neural networks (CNNs), has provided radiation oncologists with many promising tools that can simplify the complex radiotherapy process in the clinical work of radiation oncology, improve the accuracy and objectivity of diagnosis, and reduce the workload, enabling clinicians to spend more time on advanced decision-making tasks. As DL development gets closer to clinical practice, radiation oncologists will need to be more familiar with its principles to properly evaluate and use this powerful tool. In this paper, we explain the development and basic concepts of AI and discuss its application in radiation oncology based on the different task categories of DL algorithms. This work clarifies the possibilities for further development of DL in radiation oncology.
Affiliation(s)
- Danju Huang
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Han Bai
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Li Wang
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Yu Hou
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Lan Li
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Yaoxiong Xia
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Zhirui Yan
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Wenrui Chen
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Li Chang
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Wenhui Li
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
|
42
|
Lin M, Momin S, Lei Y, Wang H, Curran WJ, Liu T, Yang X. Fully automated segmentation of brain tumor from multiparametric MRI using 3D context deep supervised U-Net. Med Phys 2021; 48:4365-4374. [PMID: 34101845 DOI: 10.1002/mp.15032] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2020] [Revised: 05/14/2021] [Accepted: 05/31/2021] [Indexed: 12/19/2022] Open
Abstract
PURPOSE Owing to the histologic complexities of brain tumors, diagnosis requires multiple modalities to obtain valuable structural information so that brain tumor subregions can be properly delineated. In the current clinical workflow, physicians typically perform slice-by-slice delineation of brain tumor subregions, a time-consuming process that is also susceptible to intra- and inter-rater variabilities, possibly leading to misclassification. To deal with this issue, this study aims to develop an automatic segmentation of brain tumors in MR images using deep learning. METHOD In this study, we develop a context deep-supervised U-Net to segment brain tumor subregions. We propose a context block that aggregates multiscale contextual information for dense segmentation. This approach enlarges the effective receptive field of convolutional neural networks, which, in turn, improves the segmentation accuracy of brain tumor subregions. We performed fivefold cross-validation on the Brain Tumor Segmentation Challenge (BraTS) 2020 training dataset. The BraTS 2020 testing datasets were obtained via the BraTS online website as a hold-out test. For BraTS, the evaluation system divides the tumor into three regions: whole tumor (WT), tumor core (TC), and enhancing tumor (ET). The performance of our proposed method was compared against two state-of-the-art CNNs in terms of segmentation accuracy via the Dice similarity coefficient (DSC) and Hausdorff distance (HD). The tumor volumes generated by our proposed method were compared with manually contoured volumes via Bland-Altman plots and Pearson analysis. RESULTS The proposed method achieved segmentation results with a DSC of 0.923 ± 0.047, 0.893 ± 0.176, and 0.846 ± 0.165 and a 95th-percentile HD (HD95) of 3.946 ± 7.041, 3.981 ± 6.670, and 10.128 ± 51.136 mm on WT, TC, and ET, respectively.
Experimental results demonstrate that our method achieved segmentation accuracies comparable to, or significantly (p < 0.05) better than, those of the two state-of-the-art CNNs. Pearson correlation analysis showed a high positive correlation between the tumor volumes generated by the proposed method and the manual contours. CONCLUSION Overall, the qualitative and quantitative results of this work demonstrate the potential of translating the proposed technique into clinical practice for segmenting brain tumor subregions, further facilitating the brain tumor radiotherapy workflow.
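The Dice similarity coefficient used for evaluation here (and in several other entries in this list) reduces to a short function over binary masks; the flattened toy masks below are illustrative:

```python
def dice(pred, truth):
    """Dice similarity coefficient: 2|A intersect B| / (|A| + |B|) over binary voxels."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0  # both masks empty -> perfect agreement

# Flattened toy masks: 4 predicted voxels, 4 ground-truth voxels, 3 overlapping.
pred  = [1, 1, 1, 0, 0, 1]
truth = [1, 1, 0, 0, 1, 1]
score = dice(pred, truth)  # 2 * 3 / (4 + 4) = 0.75
```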
Affiliation(s)
- Mingquan Lin
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Shadab Momin
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Hesheng Wang
- Department of Radiation Oncology, NYU Grossman School of Medicine, New York, NY, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
|
43
|
Eweje FR, Bao B, Wu J, Dalal D, Liao WH, He Y, Luo Y, Lu S, Zhang P, Peng X, Sebro R, Bai HX, States L. Deep Learning for Classification of Bone Lesions on Routine MRI. EBioMedicine 2021; 68:103402. [PMID: 34098339 PMCID: PMC8190437 DOI: 10.1016/j.ebiom.2021.103402] [Citation(s) in RCA: 35] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2021] [Revised: 04/10/2021] [Accepted: 05/03/2021] [Indexed: 11/24/2022] Open
Abstract
Background Radiologists have difficulty distinguishing benign from malignant bone lesions because these lesions may have similar imaging appearances. The purpose of this study was to develop a deep learning algorithm that can differentiate benign and malignant bone lesions using routine magnetic resonance imaging (MRI) and patient demographics. Methods 1,060 histologically confirmed bone lesions with T1- and T2-weighted pre-operative MRI were retrospectively identified and included, with lesions from 4 institutions used for model development and internal validation, and data from a fifth institution used for external validation. Image-based models were generated using the EfficientNet-B0 architecture, and a logistic regression model was trained using patient age, sex, and lesion location. A voting ensemble was created as the final model. The performance of the model was compared to classification performance by radiology experts. Findings The cohort had a mean age of 30±23 years and was 58.3% male, with 582 benign and 478 malignant lesions. Compared to a contrived expert committee result, the ensemble deep learning model achieved (ensemble vs. experts): similar accuracy (0.76 vs. 0.73, p=0.7), sensitivity (0.79 vs. 0.81, p=1.0), and specificity (0.75 vs. 0.66, p=0.48), with a ROC AUC of 0.82. On external testing, the model achieved a ROC AUC of 0.79. Interpretation Deep learning can be used to distinguish benign and malignant bone lesions on par with experts. These findings could aid in the development of computer-aided diagnostic tools to reduce unnecessary referrals to specialized centers from community clinics and limit unnecessary biopsies. Funding This work was funded by a Radiological Society of North America Research Medical Student Grant (#RMS2013) and supported by the Amazon Web Services Diagnostic Development Initiative.
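The ROC AUC reported above has a simple pairwise interpretation: the probability that a randomly chosen positive case (here, a malignant lesion) receives a higher score than a randomly chosen negative one. A minimal sketch, with synthetic scores and labels:

```python
def roc_auc(scores, labels):
    """AUC = probability a random positive outscores a random negative (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic model outputs: 1 = malignant, 0 = benign.
scores = [0.9, 0.8, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0]
auc = roc_auc(scores, labels)  # 5 of the 6 positive/negative pairs are ranked correctly
```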
Affiliation(s)
- Feyisope R Eweje
- Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, 19104, USA; Perelman School of Medicine at University of Pennsylvania, Philadelphia, PA, 19104, USA
- Bingting Bao
- Department of Radiology, Second Xiangya Hospital of Central South University, Changsha, Hunan, 410011, China
- Jing Wu
- Department of Radiology, Second Xiangya Hospital of Central South University, Changsha, Hunan, 410011, China
- Deepa Dalal
- Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, 19104, USA
- Wei-Hua Liao
- Department of Radiology, Xiangya Hospital of Central South University, Changsha, Hunan, 410008, China
- Yu He
- Department of Radiology, Second Xiangya Hospital of Central South University, Changsha, Hunan, 410011, China
- Yongheng Luo
- Department of Radiology, Second Xiangya Hospital of Central South University, Changsha, Hunan, 410011, China
- Shaolei Lu
- Department of Pathology and Laboratory Medicine, Warren Alpert Medical School of Brown University, Providence, RI, 02903, USA
- Paul Zhang
- Department of Pathology and Laboratory Medicine, Hospital of the University of Pennsylvania, Philadelphia, PA, 19104, USA
- Xianjing Peng
- Department of Radiology, Xiangya Hospital of Central South University, Changsha, Hunan, 410008, China
- Ronnie Sebro
- Mayo Clinic Radiology, Jacksonville, FL, 32224, USA
- Harrison X Bai
- Department of Diagnostic Imaging, Rhode Island Hospital, Providence, RI, 02903, USA
- Lisa States
- Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, 19104, USA
|
44
|
Recurrent Multi-Fiber Network for 3D MRI Brain Tumor Segmentation. Symmetry (Basel) 2021. [DOI: 10.3390/sym13020320] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Automated brain tumor segmentation based on 3D magnetic resonance imaging (MRI) is critical to disease diagnosis. Moreover, achieving robust and accurate automatic extraction of brain tumors is a major challenge because of the inherent heterogeneity of tumor structure. In this paper, we present an efficient semantic segmentation network, the 3D recurrent multi-fiber network (RMFNet), which is based on an encoder–decoder architecture and comprises a 3D recurrent unit and a 3D multi-fiber unit. First, we build recurrent units by combining recurrent connections with convolutional layers, which enhances the model's ability to integrate contextual information. Then, a 3D multi-fiber unit is added to the network to reduce the high computational cost that a 3D architecture incurs when capturing local features. The 3D RMFNet thus combines the advantages of both units. Extensive experiments on the Brain Tumor Segmentation (BraTS) 2018 challenge dataset show that our RMFNet remarkably outperforms state-of-the-art methods, achieving average Dice scores of 89.62%, 83.65%, and 78.72% for the whole tumor, tumor core, and enhancing tumor, respectively. The experimental results demonstrate that our architecture is an efficient and accurate tool for brain tumor segmentation.
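The computational saving of a multi-fiber unit comes from slicing one large convolution into parallel groups ("fibers"); a quick parameter count makes this concrete (the channel and group sizes below are illustrative assumptions, not taken from the paper):

```python
def conv3d_params(c_in, c_out, k, groups=1):
    """Weight count of a 3-D convolution: each of the `groups` fibers connects only
    its own slice of input channels to its own slice of output channels."""
    return (c_in // groups) * (c_out // groups) * (k ** 3) * groups

standard = conv3d_params(96, 96, 3)             # one monolithic 3x3x3 convolution
fibered  = conv3d_params(96, 96, 3, groups=16)  # 16 parallel "fibers"
# Splitting into g groups divides the weight count by g (here, 16x fewer parameters).
```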
|
45
|
Reyad O, Karar ME. Secure CT-Image Encryption for COVID-19 Infections Using HBBS-Based Multiple Key-Streams. ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING 2021; 46:3581-3593. [PMID: 33425645 PMCID: PMC7783709 DOI: 10.1007/s13369-020-05196-w] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/16/2020] [Accepted: 11/30/2020] [Indexed: 12/25/2022]
Abstract
The task of preserving patient data is becoming more sophisticated with the evolution of technology and its integration with the medical sector in the form of telemedicine and electronic health (e-health). Secure medical image transmission requires adequate techniques for protecting patient privacy. This study aims at encrypting Coronavirus (COVID-19) chest Computed Tomography (CT) images into cipher-images for secure real-world data transmission of infected patients. Provably secure pseudo-random generators are used to produce a key-stream that achieves high privacy of patient data. The Blum Blum Shub (BBS) generator is a powerful generator of pseudo-random bit-strings. In this article, a hashing version of BBS, namely the Hash-BBS (HBBS) generator, is presented to exploit the benefits of a hash function in reinforcing the integrity of the extracted binary sequences for creating multiple key-streams. The NIST test suite has been used to analyze and verify the statistical properties of the resulting key bit-strings for all tested operations. The obtained bit-strings showed good randomness properties; consequently, a uniformly distributed binary sequence was achieved over the key length. Based on the obtained key-streams, an encryption scheme for four COVID-19 CT-images is proposed and designed to attain a high grade of confidentiality and integrity in the transmission of medical data. In addition, a comprehensive performance analysis was done using different evaluation metrics. The evaluation results demonstrated that the proposed key-stream generator outperforms the security methods of previous studies. Therefore, it can be successfully applied to satisfy the security requirements of transmitting CT-images of COVID-19 patients.
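A minimal sketch of a hash-enhanced BBS key-stream in the spirit of the HBBS generator (the small primes, seed, SHA-256 choice, and XOR cipher below are illustrative assumptions, not the paper's exact construction; real use requires large secret primes p, q ≡ 3 mod 4):

```python
import hashlib

P, Q = 499, 547          # toy Blum primes: both are congruent to 3 mod 4
M = P * Q

def hbbs_keystream(seed, nbytes):
    """BBS state x -> x^2 mod M, with each state hashed to stretch/whiten the output."""
    x = seed * seed % M  # start from a quadratic residue
    out = bytearray()
    while len(out) < nbytes:
        x = x * x % M                                    # BBS step
        out.extend(hashlib.sha256(x.to_bytes(4, "big")).digest())
    return bytes(out[:nbytes])

key = hbbs_keystream(seed=20231994, nbytes=16)
# XOR stream "encryption" of a stand-in 16-byte image fragment, then decryption.
cipher = bytes(a ^ b for a, b in zip(b"CT-slice bytes..", key))
plain  = bytes(a ^ b for a, b in zip(cipher, key))
```

Hashing each state lets one BBS trajectory yield multiple long key-streams, which is the property the abstract highlights.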
Affiliation(s)
- Omar Reyad
- College of Computing and Information Technology, Shaqra University, Shaqra, Saudi Arabia; Faculty of Science, Sohag University, Sohag 82524, Egypt
- Mohamed Esmail Karar
- College of Computing and Information Technology, Shaqra University, Shaqra, Saudi Arabia; Faculty of Electronic Engineering, Menoufia University, Menoufia, Egypt
|
46
|
Tian X, Li C, Liu H, Li P, He J, Gao W. Applications of artificial intelligence in radiophysics. J Cancer Res Ther 2021; 17:1603-1607. [DOI: 10.4103/jcrt.jcrt_1438_21] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
|
47
|
Ross BD, Chenevert TL, Meyer CR. Retrospective Registration in Molecular Imaging. Mol Imaging 2021. [DOI: 10.1016/b978-0-12-816386-3.00080-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022] Open
|
48
|
Nalepa J. AIM and Brain Tumors. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_284-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
49
|
Reyad O, Hamed K, Karar ME. Hash-enhanced elliptic curve bit-string generator for medical image encryption. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2020. [DOI: 10.3233/jifs-201146] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
A bit-string generator (BSG) is based on the hardness of known number-theoretic problems, such as the discrete logarithm problem over elliptic curves (ECDLP). Generators of this type have good randomness and unpredictability properties, as it is computationally hard to solve the underlying mathematical problem. Hash functions, in turn, play a remarkable role in many cryptographic tasks for accomplishing different security levels. A hash-enhanced elliptic curve bit-string generator (HEECBSG) mechanism is proposed in this study based on the ECDLP and a secure hash function. The cryptographic hash function is used to achieve integrity and security of the obtained bit-strings for highly sensitive plain data. The main contribution of the proposed HEECBSG is transforming the x-coordinate of the elliptic curve points with a hash function H to generate bit-strings of any desired length. The obtained pseudo-random bits are tested by the NIST test suite to analyze and verify their statistical and randomness properties. The resulting bit-string is used here to encrypt various medical images of vital organs, i.e., the brain, bone, fetuses, and lungs. Then, extensive evaluation metrics have been applied to analyze the performance of the cipher-image, including key-space, histogram, correlation, entropy, and sensitivity analyses. The results demonstrated that the proposed HEECBSG mechanism is feasible for achieving the security and privacy of medical image transmission over insecure communication networks.
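A toy version of the idea, hashing the x-coordinate of elliptic curve points to produce a bit-string, might look like this (the tiny curve over F_97, the base point, and the SHA-256 choice are illustrative assumptions; a real generator uses a standardized large curve):

```python
import hashlib

p, a, b = 97, 2, 3   # toy curve y^2 = x^3 + 2x + 3 over F_97
G = (3, 6)           # a point on the curve: 6^2 = 36 = 27 + 6 + 3 (mod 97)

def ec_add(P1, P2):
    """Affine point addition; None represents the point at infinity."""
    if P1 is None: return P2
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P1 == P2:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

def bitstring(secret, nbits):
    """Hash the x-coordinate of successive multiples kG into a bit-string."""
    out, k = "", secret
    while len(out) < nbits:
        point = ec_mul(k, G)
        k += 1
        if point is None:          # skip multiples of the point order
            continue
        digest = hashlib.sha256(bytes([point[0]])).digest()
        out += "".join(f"{byte:08b}" for byte in digest)
    return out[:nbits]

bits = bitstring(secret=11, nbits=64)
```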
Affiliation(s)
- Omar Reyad
- College of Computing and Information Technology, Shaqra University, Saudi Arabia
- Faculty of Science, Sohag University, Egypt
- Kadry Hamed
- College of Sciences and Humanities, Shaqra University, Afif, Saudi Arabia
- Faculty of Computers and Information, Minia University, Egypt
- Mohamed Esmail Karar
- College of Computing and Information Technology, Shaqra University, Saudi Arabia
- Faculty of Electronic Engineering, Menoufia University, Egypt
|
50
|
Debelee TG, Kebede SR, Schwenker F, Shewarega ZM. Deep Learning in Selected Cancers' Image Analysis-A Survey. J Imaging 2020; 6:121. [PMID: 34460565 PMCID: PMC8321208 DOI: 10.3390/jimaging6110121] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2020] [Revised: 10/19/2020] [Accepted: 10/26/2020] [Indexed: 02/08/2023] Open
Abstract
Deep learning algorithms have become the first-choice approach for medical image analysis, face recognition, and emotion recognition. In this survey, several deep-learning-based approaches applied to breast cancer, cervical cancer, brain tumors, and colon and lung cancers are studied and reviewed. Deep learning has been applied to almost all of the imaging modalities used for cervical and breast cancers, and to MRI for brain tumors. The review indicated that deep learning methods have achieved state-of-the-art performance in tumor detection, segmentation, feature extraction, and classification. As presented in this paper, deep learning approaches were used in three different modes: training from scratch, transfer learning by freezing some layers of the network, and modifying the architecture to reduce the number of parameters in the network. Moreover, the application of deep learning to imaging devices for the detection of various cancers has been studied by researchers affiliated with academic and medical institutes in economically developed countries, while the topic has received little attention in Africa despite the dramatic rise of cancer risk on the continent.
Affiliation(s)
- Taye Girma Debelee
- Artificial Intelligence Center, 40782 Addis Ababa, Ethiopia
- College of Electrical and Mechanical Engineering, Addis Ababa Science and Technology University, 120611 Addis Ababa, Ethiopia
- Samuel Rahimeto Kebede
- Artificial Intelligence Center, 40782 Addis Ababa, Ethiopia
- Department of Electrical and Computer Engineering, Debreberhan University, 445 Debre Berhan, Ethiopia
- Friedhelm Schwenker
- Institute of Neural Information Processing, University of Ulm, 89081 Ulm, Germany
|