1
Abdelwahab SI, Taha MME, Alfaifi HA, Farasani A, Hassan W. Bibliometric Analysis of Machine Learning Applications in Ischemia Research. Curr Probl Cardiol 2024;49:102754. PMID: 39079619; DOI: 10.1016/j.cpcardiol.2024.102754.
Abstract
OBJECTIVE The objective of this study is to conduct a comprehensive bibliometric analysis to elucidate the landscape of machine learning applications in ischemia research. METHODS The analysis is divided into three parts: part 1 scrutinizes articles and reviews with "ischemia" in their titles; part 2 narrows the focus to publications containing both "ischemia" and "machine learning" in their titles; and part 3 examines the top 50 most cited papers, exploring their thematic focus and co-word dynamics. RESULTS The findings reveal a significant increase in publications over the years, with notable trends identified through detailed analysis. For parts 1 and 2, the growth in publication counts over time, the leading contributors, institutions, geographical distribution of research output, and journals are presented numerically. For the top 50 most cited papers, the dynamics of co-words, which offer a nuanced understanding of thematic trends and emerging concepts, are presented. The top 10 authors were selected based on the number of citations, and for each of them the total number of publications, h-index, g-index, and m-index are provided. Additionally, figures depicting the co-authorship networks among the authors, departments, and countries involved in the top 50 cited papers enrich our comprehension of collaborative networks in ischemia research. CONCLUSION This comprehensive bibliometric analysis provides valuable insights into the evolving landscape of machine learning applications in ischemia research.
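The author-level metrics named above (h-index, g-index, m-index) can be illustrated with a short sketch; the citation counts and career length below are hypothetical values, not figures from the study.

```python
# Minimal sketch (not from the paper): computing h-index, g-index and m-index
# from a hypothetical list of per-paper citation counts and a career length.

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

def g_index(citations):
    """Largest g such that the top g papers together have at least g^2 citations."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

def m_index(citations, career_years):
    """h-index divided by the number of years since the first publication."""
    return h_index(citations) / career_years

if __name__ == "__main__":
    example_cites = [25, 18, 12, 7, 6, 3, 1, 0]   # hypothetical citation counts
    print(h_index(example_cites))                  # 5
    print(g_index(example_cites))                  # 8
    print(round(m_index(example_cites, 10), 2))    # 0.5
```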
Affiliation(s)
- Hassan Ahmad Alfaifi
- Pharmaceutical Care Administration (Jeddah Second Health Cluster), Ministry of Health, Saudi Arabia
- Abdullah Farasani
- Department of Medical Laboratory Technology, Faculty of Applied Medical Sciences, Jazan University
- Waseem Hassan
- Institute of Chemical Sciences, University of Peshawar, Peshawar 25120, Khyber Pakhtunkhwa, Pakistan.
2
Sulaiman A, Anand V, Gupta S, Al Reshan MS, Alshahrani H, Shaikh A, Elmagzoub MA. An intelligent LinkNet-34 model with EfficientNetB7 encoder for semantic segmentation of brain tumor. Sci Rep 2024;14:1345. PMID: 38228639; PMCID: PMC10792164; DOI: 10.1038/s41598-024-51472-2.
Abstract
A brain tumor is an uncontrolled, abnormal expansion of brain cells, making it one of the deadliest diseases of the nervous system. Segmenting brain tumors for earlier diagnosis is a difficult task in medical image analysis. Traditionally, segmentation was performed manually by radiologists, which requires a lot of time and effort; moreover, manual segmentation is prone to mistakes due to human intervention. It has been shown that deep learning models can outperform human experts in diagnosing brain tumors in MRI images. These algorithms employ large numbers of MRI scans to learn the complex patterns of brain tumors and segment them automatically and accurately. Here, an encoder-decoder architecture based on a deep convolutional neural network is proposed for semantic segmentation of brain tumors in MRI images. The proposed method focuses on image downsampling in the encoder part. For this, a LinkNet-34 semantic segmentation model with an EfficientNetB7 encoder is proposed. The performance of the LinkNet-34 model is compared with three other models, namely FPN, U-Net, and PSPNet. Further, the performance of EfficientNetB7 as the encoder in the LinkNet-34 model is compared with three other encoders, namely ResNet34, MobileNet_V2, and ResNet50. The proposed model is then optimized using three different optimizers: RMSProp, Adamax, and Adam. The LinkNet-34 model with the EfficientNetB7 encoder and the Adamax optimizer performed best, with a Jaccard index of 0.89 and a Dice coefficient of 0.915.
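For reference, the two overlap metrics reported above (Jaccard index and Dice coefficient) can be computed as in the generic sketch below, run on synthetic binary masks; this is not the authors' evaluation code.

```python
# Minimal sketch (assumptions, not the authors' code): Jaccard index (IoU) and
# Dice coefficient for a binary segmentation mask, computed with NumPy.
import numpy as np

def jaccard_index(pred, target, eps=1e-7):
    """Intersection over union of two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

def dice_coefficient(pred, target, eps=1e-7):
    """2*intersection / (|A| + |B|); equals 2J/(1+J) for Jaccard J."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

if __name__ == "__main__":
    pred = np.zeros((128, 128), dtype=np.uint8); pred[32:96, 32:96] = 1   # toy prediction
    gt   = np.zeros((128, 128), dtype=np.uint8); gt[40:104, 40:104] = 1   # toy ground truth
    print(jaccard_index(pred, gt), dice_coefficient(pred, gt))
```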
Affiliation(s)
- Adel Sulaiman
- Department of Computer Science, College of Computer Science and Information Systems, Najran University, 61441, Najran, Saudi Arabia
- Vatsala Anand
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, 140401, India.
- Sheifali Gupta
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, 140401, India
- Mana Saleh Al Reshan
- Department of Information Systems, College of Computer Science and Information Systems, Najran University, 61441, Najran, Saudi Arabia
- Hani Alshahrani
- Department of Computer Science, College of Computer Science and Information Systems, Najran University, 61441, Najran, Saudi Arabia
- Asadullah Shaikh
- Department of Information Systems, College of Computer Science and Information Systems, Najran University, 61441, Najran, Saudi Arabia
- M A Elmagzoub
- Department of Network and Communication Engineering, College of Computer Science and Information Systems, Najran University, 61441, Najran, Saudi Arabia
3
Ahamed MF, Hossain MM, Nahiduzzaman M, Islam MR, Islam MR, Ahsan M, Haider J. A review on brain tumor segmentation based on deep learning methods with federated learning techniques. Comput Med Imaging Graph 2023;110:102313. PMID: 38011781; DOI: 10.1016/j.compmedimag.2023.102313.
Abstract
Brain tumors have become a severe medical complication in recent years due to their high fatality rate. Radiologists segment the tumor manually, which is time-consuming, error-prone, and expensive. In recent years, automated segmentation based on deep learning has demonstrated promising results in solving computer vision problems such as image classification and segmentation. Brain tumor segmentation has recently become a prevalent task in medical imaging to determine the tumor location, size, and shape using automated methods. Many researchers have worked on various machine and deep learning approaches to determine an optimal solution using convolutional methodologies. In this review paper, we discuss the most effective segmentation techniques based on widely used, publicly available datasets. We also survey federated learning methodologies that enhance global segmentation performance and ensure privacy. A comprehensive literature review, based on more than 100 papers, is presented to generalize the most recent techniques in segmentation and multi-modality information. Finally, we concentrate on unsolved problems in brain tumor segmentation and a client-based federated model training strategy. Based on this review, future researchers will understand the optimal solution path to solve these issues.
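The client-based federated training strategy discussed in the review can be sketched with a single federated averaging (FedAvg) step; the toy model and the client dataset sizes below are placeholders, not anything taken from the paper.

```python
# Minimal sketch, not from the review: federated averaging (FedAvg) of client
# model weights, the kind of client-based training strategy discussed for
# privacy-preserving brain-tumor segmentation. Client update logic and the
# model architecture are placeholders.
import copy
import torch

def fedavg(client_state_dicts, client_sizes):
    """Weight-average client parameters, weighted by local dataset size."""
    total = float(sum(client_sizes))
    avg = copy.deepcopy(client_state_dicts[0])
    for key in avg:
        avg[key] = sum(sd[key].float() * (n / total)
                       for sd, n in zip(client_state_dicts, client_sizes))
    return avg

if __name__ == "__main__":
    # A tiny Conv2d stands in for each client's local segmentation network.
    clients = [torch.nn.Conv2d(1, 1, 3) for _ in range(3)]
    new_global = fedavg([c.state_dict() for c in clients], client_sizes=[120, 80, 200])
    global_model = torch.nn.Conv2d(1, 1, 3)
    global_model.load_state_dict(new_global)   # broadcast the averaged weights
```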
Affiliation(s)
- Md Faysal Ahamed
- Department of Computer Science & Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Munawar Hossain
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Nahiduzzaman
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Rabiul Islam
- Department of Computer Science & Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Robiul Islam
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Mominul Ahsan
- Department of Computer Science, University of York, Deramore Lane, Heslington, York YO10 5GH, UK
- Julfikar Haider
- Department of Engineering, Manchester Metropolitan University, Chester St, Manchester M1 5GD, UK.
4
Zhang Z, Wu H, Zhao H, Shi Y, Wang J, Bai H, Sun B. A Novel Deep Learning Model for Medical Image Segmentation with Convolutional Neural Network and Transformer. Interdiscip Sci 2023;15:663-677. PMID: 37665496; DOI: 10.1007/s12539-023-00585-9.
Abstract
Accurate segmentation of medical images is essential for clinical decision-making, and deep learning techniques have shown remarkable results in this area. However, existing segmentation models that combine transformer and convolutional neural networks often use skip connections in U-shaped networks, which may limit their ability to capture contextual information in medical images. To address this limitation, we propose a coordinated mobile and residual transformer UNet (MRC-TransUNet) that combines the strengths of transformer and UNet architectures. Our approach uses a lightweight MR-ViT to address the semantic gap and a reciprocal attention module to compensate for the potential loss of details. To better explore long-range contextual information, we use skip connections only in the first layer and add MR-ViT and RPA modules in the subsequent downsampling layers. In our study, we evaluated the effectiveness of our proposed method on three different medical image segmentation datasets, namely, breast, brain, and lung. Our proposed method outperformed state-of-the-art methods in terms of various evaluation metrics, including the Dice coefficient and Hausdorff distance. These results demonstrate that our proposed method can significantly improve the accuracy of medical image segmentation and has the potential for clinical applications. Illustration of the proposed MRC-TransUNet. For the input medical images, we first subject them to an intrinsic downsampling operation and then replace the original jump connection structure using MR-ViT. The output feature representations at different scales are fused by the RPA module. Finally, an upsampling operation is performed to fuse the features to restore them to the same resolution as the input image.
Affiliation(s)
- Zhuo Zhang
- Tianjin Key Laboratory of Optoelectronic Detection Technology and Systems, School of Electronic and Information Engineering, Tiangong University, Tianjin, 300387, China
- Hongbing Wu
- School of Computer Science and Technology, Tiangong University, Tianjin, 300387, China
- Huan Zhao
- Tianjin Key Laboratory of Optoelectronic Detection Technology and Systems, School of Electronic and Information Engineering, Tiangong University, Tianjin, 300387, China
- Yicheng Shi
- College of Management and Economics, Tianjin University, Tianjin, 300072, China
- Jifang Wang
- Tianjin Key Laboratory of Optoelectronic Detection Technology and Systems, School of Electronic and Information Engineering, Tiangong University, Tianjin, 300387, China
- Hua Bai
- Tianjin Key Laboratory of Optoelectronic Detection Technology and Systems, School of Electronic and Information Engineering, Tiangong University, Tianjin, 300387, China.
- Baoshan Sun
- School of Computer Science and Technology, Tiangong University, Tianjin, 300387, China.
5
Wu S, Cao Y, Li X, Liu Q, Ye Y, Liu X, Zeng L, Tian M. Attention-guided multi-scale context aggregation network for multi-modal brain glioma segmentation. Med Phys 2023;50:7629-7640. PMID: 37151131; DOI: 10.1002/mp.16452.
Abstract
BACKGROUND Accurate segmentation of brain glioma is a critical prerequisite for clinical diagnosis, surgical planning and treatment evaluation. In the current clinical workflow, physicians typically delineate brain tumor subregions slice-by-slice, which is susceptible to inter-rater variability and also time-consuming. Besides, even though convolutional neural networks (CNNs) are driving progress, the performance of standard models still has room for further improvement. PURPOSE To deal with these issues, this paper proposes an attention-guided multi-scale context aggregation network (AMCA-Net) for the accurate segmentation of brain glioma in multi-modal magnetic resonance imaging (MRI) images. METHODS AMCA-Net extracts multi-scale features from the MRI images and fuses the extracted discriminative features via a self-attention mechanism for brain glioma segmentation. The extraction is performed via a series of down-sampling and convolution layers, and global context information guidance (GCIG) modules are developed to fuse the extracted features with contextual features. At the end of the down-sampling, a multi-scale fusion (MSF) module is designed to exploit and combine all the extracted multi-scale features. Each of the GCIG and MSF modules contains a channel attention (CA) module that can adaptively calibrate feature responses and emphasize the most relevant features. Finally, multiple predictions with different resolutions are fused through different weightings given by a multi-resolution adaptation (MRA) module, instead of averaging or max-pooling, to improve the final segmentation results. RESULTS The datasets used in this paper are publicly accessible, namely the Multimodal Brain Tumor Segmentation Challenges 2018 (BraTS2018) and 2019 (BraTS2019). BraTS2018 contains 285 patient cases and BraTS2019 contains 335 cases. Simulations show that AMCA-Net has better or comparable performance against other state-of-the-art models. On the BraTS2018 dataset, the Dice score and Hausdorff 95 were 90.4% and 10.2 mm for the whole tumor region (WT), 83.9% and 7.4 mm for the tumor core region (TC), and 80.2% and 4.3 mm for the enhancing tumor region (ET); on the BraTS2019 dataset, they were 91.0% and 10.7 mm for the WT, 84.2% and 8.4 mm for the TC, and 80.1% and 4.8 mm for the ET. CONCLUSIONS The proposed AMCA-Net performs comparably well against several state-of-the-art neural network models in identifying the peritumoral edema, enhancing tumor, and necrotic and non-enhancing tumor core of brain glioma, which has great potential for clinical practice. In future research, we will further explore the feasibility of applying AMCA-Net to other similar segmentation tasks.
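The channel attention (CA) component named above follows the common squeeze-and-excitation pattern; the sketch below is a generic re-implementation of that pattern with illustrative tensor shapes, not AMCA-Net's actual module.

```python
# Hedged sketch: a squeeze-and-excitation style channel attention block that
# adaptively recalibrates feature responses channel-wise.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)          # squeeze spatial dimensions
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # per-channel weights in (0, 1)
        )

    def forward(self, x):                            # x: (B, C, D, H, W)
        b, c = x.shape[:2]
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w                                 # emphasize the most relevant channels

if __name__ == "__main__":
    feats = torch.randn(2, 32, 16, 64, 64)           # toy multi-modal MRI feature map
    print(ChannelAttention(32)(feats).shape)         # torch.Size([2, 32, 16, 64, 64])
```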
Affiliation(s)
- Shaozhi Wu
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yunjian Cao
- Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou, China
- Xinke Li
- West China School of Medicine, Sichuan University, Chengdu, China
- Qiyu Liu
- Radiology Department, Mianyang Central Hospital, Mianyang, China
- Yuyun Ye
- Department of Electrical and Computer Engineering, University of Tulsa, Tulsa, Oklahoma, USA
- Xingang Liu
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Liaoyuan Zeng
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Miao Tian
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
6
Zia MS, Baig UA, Rehman ZU, Yaqub M, Ahmed S, Zhang Y, Wang S, Khan R. Contextual information extraction in brain tumour segmentation. IET Image Processing 2023;17:3371-3391. DOI: 10.1049/ipr2.12869.
Abstract
Automatic brain tumour segmentation in MRI scans aims to separate the brain tumour's endoscopic core, edema, non-enhancing tumour core, peritumoral edema, and enhancing tumour core from three-dimensional MR voxels. Due to the wide range of brain tumour intensity, shape, location, and size, it is challenging to segment these regions automatically. UNet is the prime three-dimensional CNN network performance source for medical imaging applications like brain tumour segmentation. This research proposes a context-aware 3D ARDUNet (Attentional Residual Dropout UNet), a modified version of UNet that takes advantage of ResNet and soft attention. A novel residual dropout block (RDB) is implemented in the analytical encoder path to replace traditional UNet convolutional blocks and extract more contextual information. A unique Attentional Residual Dropout Block (ARDB) in the decoder path utilizes skip connections and attention gates to retrieve local and global contextual information. The attention gate enables the network to focus on the relevant part of the input image and suppress irrelevant details. Finally, the proposed network was assessed on BRATS2018, BRATS2019, and BRATS2020 against some best-in-class segmentation approaches. It achieved Dice scores of 0.90, 0.92, and 0.93 for the whole tumour on BRATS2018, BRATS2019, and BRATS2020, respectively; the corresponding scores are 0.90, 0.92, and 0.93 for the tumour core and 0.92, 0.93, and 0.94 for the enhancing tumour.
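The attention-gated skip connection described above is commonly implemented as an additive attention gate in the style of Attention U-Net; the following sketch shows that generic mechanism and is not the paper's ARDB code.

```python
# Minimal sketch (assumption: a standard additive attention gate), showing how
# a gate on a skip connection can suppress irrelevant encoder features.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv3d(skip_ch, inter_ch, kernel_size=1)
        self.phi = nn.Conv3d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv3d(inter_ch, 1, kernel_size=1)

    def forward(self, skip, gate):
        # gate is assumed to already match the skip connection's spatial size
        att = torch.sigmoid(self.psi(torch.relu(self.theta(skip) + self.phi(gate))))
        return skip * att                  # attenuate irrelevant skip features

if __name__ == "__main__":
    skip = torch.randn(1, 64, 32, 32, 32)   # encoder feature map (toy shape)
    gate = torch.randn(1, 128, 32, 32, 32)  # coarser decoder feature, upsampled
    print(AttentionGate(64, 128, 32)(skip, gate).shape)
```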
Affiliation(s)
- Muhammad Sultan Zia
- Department of Computer Science, NFC Institute of Engineering and Fertilizer Research, Faisalabad, Pakistan
- Department of Computer Science, The University of Chenab, Gujrat, Pakistan
- Usman Ali Baig
- Department of Computer Science, The University of Chenab, Gujrat, Pakistan
- Zaka Ur Rehman
- Department of Computer Science, The University of Chenab, Gujrat, Pakistan
- Muhammad Yaqub
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Shahzad Ahmed
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Yudong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, UK
- Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Shuihua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, UK
- Rizwan Khan
- Department of Computer Science and Technology, Zhejiang Normal University, Jinhua, Zhejiang, China
7
Tangsrivimol JA, Schonfeld E, Zhang M, Veeravagu A, Smith TR, Härtl R, Lawton MT, El-Sherbini AH, Prevedello DM, Glicksberg BS, Krittanawong C. Artificial Intelligence in Neurosurgery: A State-of-the-Art Review from Past to Future. Diagnostics (Basel) 2023;13:2429. PMID: 37510174; PMCID: PMC10378231; DOI: 10.3390/diagnostics13142429.
Abstract
In recent years, there has been a significant surge in discussions surrounding artificial intelligence (AI), along with a corresponding increase in its practical applications in various facets of everyday life, including the medical industry. Notably, even in the highly specialized realm of neurosurgery, AI has been utilized for differential diagnosis, pre-operative evaluation, and improving surgical precision. Many of these applications have begun to mitigate risks of intraoperative and postoperative complications and to support post-operative care. This article aims to present an overview of the principal published papers on the significant themes of tumor, spine, epilepsy, and vascular issues, wherein AI has been applied to assess its potential applications within neurosurgery. The method involved identifying highly cited seminal papers using PubMed and Google Scholar, conducting a comprehensive review of various study types, and summarizing machine learning applications to enhance understanding among clinicians for future utilization. Recent studies demonstrate that machine learning (ML) holds significant potential in neuro-oncological care, spine surgery, epilepsy management, and other neurosurgical applications. ML techniques have proven effective in tumor identification, surgical outcome prediction, seizure outcome prediction, aneurysm prediction, and more, highlighting their broad impact and potential in improving patient management and outcomes in neurosurgery. This review will encompass the current state of research, as well as predictions for the future of AI within neurosurgery.
Affiliation(s)
- Jonathan A Tangsrivimol
- Division of Neurosurgery, Department of Surgery, Chulabhorn Hospital, Chulabhorn Royal Academy, Bangkok 10210, Thailand
- Department of Neurological Surgery, The Ohio State University Wexner Medical Center and Jame Cancer Institute, Columbus, OH 43210, USA
- Ethan Schonfeld
- Department Biomedical Informatics, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Michael Zhang
- Department of Neurosurgery, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Anand Veeravagu
- Stanford Neurosurgical Artificial Intelligence and Machine Learning Laboratory, Department of Neurosurgery, Stanford University School of Medicine, Stanford, CA 94305, USA
- Timothy R Smith
- Department of Neurosurgery, Computational Neuroscience Outcomes Center (CNOC), Mass General Brigham, Harvard Medical School, Boston, MA 02115, USA
- Roger Härtl
- Weill Cornell Medicine Brain and Spine Center, New York, NY 10022, USA
- Michael T Lawton
- Department of Neurosurgery, Barrow Neurological Institute (BNI), Phoenix, AZ 85013, USA
- Adham H El-Sherbini
- Faculty of Health Sciences, Queen's University, Kingston, ON K7L 3N6, Canada
- Daniel M Prevedello
- Department of Neurological Surgery, The Ohio State University Wexner Medical Center and Jame Cancer Institute, Columbus, OH 43210, USA
- Benjamin S Glicksberg
- Hasso Plattner Institute for Digital Health, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Chayakrit Krittanawong
- Cardiology Division, New York University Langone Health, New York University School of Medicine, New York, NY 10016, USA
8
Zhang Z, Zhang X, Yang Y, Liu J, Zheng C, Bai H, Ma Q. Accurate segmentation algorithm of acoustic neuroma in the cerebellopontine angle based on ACP-TransUNet. Front Neurosci 2023;17:1207149. PMID: 37292160; PMCID: PMC10244508; DOI: 10.3389/fnins.2023.1207149.
Abstract
Acoustic neuroma is one of the most common tumors in the cerebellopontine angle area. Patients with acoustic neuroma have clinical manifestations of cerebellopontine angle occupying syndrome, such as tinnitus, hearing impairment and even hearing loss. Acoustic neuromas often grow in the internal auditory canal. Neurosurgeons need to observe the lesion contour with the help of MRI images, which not only takes a lot of time, but is also easily affected by subjective factors. Therefore, the automatic and accurate segmentation of acoustic neuroma in the cerebellopontine angle on MRI is of great significance for surgical treatment and expected rehabilitation. In this paper, an automatic segmentation method based on Transformer is proposed, using TransUNet as the core model. As some acoustic neuromas are irregular in shape and grow into the internal auditory canal, larger receptive fields are needed to synthesize the features. Therefore, we added Atrous Spatial Pyramid Pooling to the CNN, which can obtain a larger receptive field without losing too much resolution. Since acoustic neuromas often occur in the cerebellopontine angle area with a relatively fixed position, we combined channel attention with pixel attention in the up-sampling stage so as to make our model automatically learn different weights by adding the attention mechanism. In addition, we collected 300 MRI sequences of patients with acoustic neuromas at Tianjin Huanhu Hospital for training and verification. The ablation experiment results show that the proposed method is reasonable and effective. The comparative experiment results show that the Dice and Hausdorff 95 metrics of the proposed method reach 95.74% and 1.9476 mm respectively, indicating that it is not only superior to classical models such as UNet, PANet, PSPNet, UNet++, and DeepLabv3, but also shows better performance than newly-proposed SOTA (state-of-the-art) models such as CCNet, MANet, BiseNetv2, Swin-Unet, MedT, TransUNet, and UCTransNet.
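Atrous Spatial Pyramid Pooling, the receptive-field-enlarging component mentioned above, can be sketched generically as a set of parallel dilated convolutions; the dilation rates and channel sizes here are illustrative assumptions, not the ACP-TransUNet configuration.

```python
# Hedged sketch of an Atrous Spatial Pyramid Pooling (ASPP) block: parallel
# dilated convolutions capture context at several scales without downsampling.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3 if r > 1 else 1,
                          padding=r if r > 1 else 0, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        # concatenate multi-rate responses, then project back to out_ch channels
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

if __name__ == "__main__":
    x = torch.randn(1, 256, 32, 32)      # toy encoder feature map
    print(ASPP(256, 64)(x).shape)        # torch.Size([1, 64, 32, 32])
```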
Affiliation(s)
- Zhuo Zhang
- Tianjin Key Laboratory of Optoelectronic Detection Technology and Systems, School of Electronic and Information Engineering, Tiangong University, Tianjin, China
- Xiaochen Zhang
- Tianjin Cerebral Vascular and Neural Degenerative Disease Key Laboratory, Tianjin Huanhu Hospital, Tianjin, China
- Yong Yang
- School of Computer Science and Technology, Tiangong University, Tianjin, China
- Jieyu Liu
- Tianjin Key Laboratory of Optoelectronic Detection Technology and Systems, School of Electronic and Information Engineering, Tiangong University, Tianjin, China
- Chenzi Zheng
- College of Foreign Languages, Nankai University, Tianjin, China
- Hua Bai
- Tianjin Key Laboratory of Optoelectronic Detection Technology and Systems, School of Electronic and Information Engineering, Tiangong University, Tianjin, China
- Quanfeng Ma
- Tianjin Cerebral Vascular and Neural Degenerative Disease Key Laboratory, Tianjin Huanhu Hospital, Tianjin, China
9
Cifci MA, Hussain S, Canatalay PJ. Hybrid Deep Learning Approach for Accurate Tumor Detection in Medical Imaging Data. Diagnostics (Basel) 2023;13:1025. PMID: 36980333; PMCID: PMC10047127; DOI: 10.3390/diagnostics13061025.
Abstract
The automated extraction of critical information from electronic medical records, such as oncological medical events, has become increasingly important with the widespread use of electronic health records. However, extracting tumor-related medical events can be challenging due to their unique characteristics. To address this difficulty, we propose a novel approach that utilizes Generative Adversarial Networks (GANs) for data augmentation and pseudo-data generation algorithms to improve the model's transfer learning skills for various tumor-related medical events. Our approach involves a two-stage pre-processing and model training process, where the data is cleansed, normalized, and augmented using pseudo-data. We evaluate our approach using the i2b2/UTHealth 2010 dataset and observe promising results in extracting primary tumor site size, tumor size, and metastatic site information. The proposed method has significant implications for healthcare and medical research as it can extract vital information from electronic medical records for oncological medical events.
Affiliation(s)
- Mehmet Akif Cifci
- The Institute of Computer Technology, Tu Wien University, 1040 Vienna, Austria
- Department of Computer Engineering, Bandirma Onyedi Eylul University, Balikesir 10200, Turkey
- Engineering and Informatics Department, Klaipėdos Valstybinė Kolegija/Higher Education Institution, 92294 Klaipeda, Lithuania
- Sadiq Hussain
- Examination Branch, Dibrugarh University, Dibrugarh 786004, Assam, India
10
Mahesh Kumar G, Parthasarathy E. Development of an enhanced U-Net model for brain tumor segmentation with optimized architecture. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104427.
11
Sunsuhi G, Albin Jose S. An Adaptive Eroded Deep Convolutional neural network for brain image segmentation and classification using Inception ResnetV2. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.103863.
12
Abstract
Brain tumor segmentation is one of the most challenging problems in medical image analysis. The goal of brain tumor segmentation is to generate accurate delineation of brain tumor regions. In recent years, deep learning methods have shown promising performance in solving various computer vision problems, such as image classification, object detection and semantic segmentation. A number of deep learning based methods have been applied to brain tumor segmentation and achieved promising results. Considering the remarkable breakthroughs made by state-of-the-art technologies, we provide this survey with a comprehensive study of recently developed deep learning based brain tumor segmentation techniques. More than 150 scientific papers are selected and discussed in this survey, extensively covering technical aspects such as network architecture design, segmentation under imbalanced conditions, and multi-modality processes. We also provide insightful discussions for future development directions.
13
Yao Y, Qian P, Zhao Z, Zeng Z. Residual Channel Attention Network for Brain Glioma Segmentation. Annu Int Conf IEEE Eng Med Biol Soc 2022;2022:2132-2135. PMID: 36086010; DOI: 10.1109/embc48229.2022.9871233.
Abstract
A glioma is a malignant brain tumor that seriously affects cognitive functions and lowers patients' life quality. Segmentation of brain glioma is challenging because of inter-class ambiguities in tumor regions. Recently, deep learning approaches have achieved outstanding performance in the automatic segmentation of brain glioma. However, existing algorithms fail to exploit channel-wise feature interdependence to select semantic attributes for glioma segmentation. In this study, we implement a novel deep neural network that integrates residual channel attention modules to calibrate intermediate features for glioma segmentation. The proposed channel attention mechanism adaptively weights features channel-wise to optimize the latent representation of gliomas. We evaluate our method on the established dataset BraTS2017. Experimental results indicate the superiority of our method. Clinical relevance - While existing glioma segmentation approaches do not leverage channel-wise feature dependence for feature selection, our method can generate segmentation masks with higher accuracy and provide more insight into graphic patterns in brain MRI images for further clinical reference.
14
Cheng J, Liu J, Kuang H, Wang J. A Fully Automated Multimodal MRI-Based Multi-Task Learning for Glioma Segmentation and IDH Genotyping. IEEE Trans Med Imaging 2022;41:1520-1532. PMID: 35020590; DOI: 10.1109/tmi.2022.3142321.
Abstract
The accurate prediction of isocitrate dehydrogenase (IDH) mutation and glioma segmentation are important tasks for computer-aided diagnosis using preoperative multimodal magnetic resonance imaging (MRI). The two tasks are ongoing challenges due to the significant inter-tumor and intra-tumor heterogeneity. The existing methods to address them are mostly based on single-task approaches without considering the correlation between the two tasks. In addition, the acquisition of IDH genetic labels is expensive and costly, resulting in a limited number of IDH mutation data for modeling. To comprehensively address these problems, we propose a fully automated multimodal MRI-based multi-task learning framework for simultaneous glioma segmentation and IDH genotyping. Specifically, the task correlation and heterogeneity are tackled with a hybrid CNN-Transformer encoder that consists of a convolutional neural network and a transformer to extract the shared spatial and global information learned from a decoder for glioma segmentation and a multi-scale classifier for IDH genotyping. Then, a multi-task learning loss is designed to balance the two tasks by combining the segmentation and classification loss functions with uncertain weights. Finally, an uncertainty-aware pseudo-label selection is proposed to generate IDH pseudo-labels from larger unlabeled data for improving the accuracy of IDH genotyping by using semi-supervised learning. We evaluate our method on a multi-institutional public dataset. Experimental results show that our proposed multi-task network achieves promising performance and outperforms the single-task learning counterparts and other existing state-of-the-art methods. With the introduction of unlabeled data, the semi-supervised multi-task learning framework further improves the performance of glioma segmentation and IDH genotyping. The source codes of our framework are publicly available at https://github.com/miacsu/MTTU-Net.git.
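The multi-task loss "with uncertain weights" described above is commonly realized as homoscedastic-uncertainty weighting in the style of Kendall et al.; the sketch below shows that generic formulation and is not the released MTTU-Net code.

```python
# Minimal sketch (assumption: uncertainty-weighted multi-task loss) balancing a
# segmentation loss and a classification loss with learned log-variances.
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    def __init__(self):
        super().__init__()
        # log-variances for the two tasks, learned jointly with the network
        self.log_var_seg = nn.Parameter(torch.zeros(1))
        self.log_var_cls = nn.Parameter(torch.zeros(1))

    def forward(self, seg_loss, cls_loss):
        w_seg = torch.exp(-self.log_var_seg)   # precision weight for segmentation
        w_cls = torch.exp(-self.log_var_cls)   # precision weight for classification
        return (w_seg * seg_loss + self.log_var_seg
                + w_cls * cls_loss + self.log_var_cls)

if __name__ == "__main__":
    criterion = UncertaintyWeightedLoss()
    total = criterion(torch.tensor(0.8), torch.tensor(0.4))  # toy task losses
    print(total.item())
```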
15
Preethi Saroj S, Gurunathan P. Cascaded layer-coalescing convolution network for brain tumor segmentation. J Intell Fuzzy Syst 2022. DOI: 10.3233/jifs-220167.
Abstract
Accurate segmentation of brain tumor regions from magnetic resonance images continues to be one of the active topics of research due to the high usability of the automated process. Faster processing helps clinicians identify tumors at an initial stage and hence saves valuable time otherwise spent on manual image analysis. This work proposes a Cascaded Layer-Coalescing (CLC) model using convolutional neural networks for brain tumor segmentation. The process includes three layers of convolution networks, each with cascading inputs from the previous layer, and provides multiple outputs segmenting the complete, core and enhancing tumor regions. The initial layer identifies the complete tumor, coalesces the discriminative features with the input data, and passes them to the core tumor detection layer. The core tumor detection layer in turn passes discriminative features to the enhancing tumor identification layer. The information injection through data-coalescing voxels results in enhanced predictions and also in effective handling of data imbalance, which is a major contributor from a modeling viewpoint. Experiments were performed with Brain Tumor Segmentation (BraTS) 2015 data. A comparison with existing works in the literature indicates improvements of up to 35% in sensitivity, 27% in PPV and 28% in Dice score, indicating improvement in the segmentation process.
Affiliation(s)
- S. Preethi Saroj
- Department of Computer Science and Engineering, Anna University, Chennai, India
- Pradeep Gurunathan
- Department of Computer Applications, A.V.C College of Engineering, Mayiladuthurai, India
16
Wang P, Chung ACS. Relax and focus on brain tumor segmentation. Med Image Anal 2021;75:102259. PMID: 34800788; DOI: 10.1016/j.media.2021.102259.
Abstract
In this paper, we present a Deep Convolutional Neural Networks (CNNs) for fully automatic brain tumor segmentation for both high- and low-grade gliomas in MRI images. Unlike normal tissues or organs that usually have a fixed location or shape, brain tumors with different grades have shown great variation in terms of the location, size, structure, and morphological appearance. Moreover, the severe data imbalance exists not only between the brain tumor and non-tumor tissues, but also among the different sub-regions inside brain tumor (e.g., enhancing tumor, necrotic, edema, and non-enhancing tumor). Therefore, we introduce a hybrid model to address the challenges in the multi-modality multi-class brain tumor segmentation task. First, we propose the dynamic focal Dice loss function that is able to focus more on the smaller tumor sub-regions with more complex structures during training, and the learning capacity of the model is dynamically distributed to each class independently based on its training performance in different training stages. Besides, to better recognize the overall structure of the brain tumor and the morphological relationship among different tumor sub-regions, we relax the boundary constraints for the inner tumor regions in coarse-to-fine fashion. Additionally, a symmetric attention branch is proposed to highlight the possible location of the brain tumor from the asymmetric features caused by growth and expansion of the abnormal tissues in the brain. Generally, to balance the learning capacity of the model between spatial details and high-level morphological features, the proposed model relaxes the constraints of the inner boundary and complex details and enforces more attention on the tumor shape, location, and the harder classes of the tumor sub-regions. The proposed model is validated on the publicly available brain tumor dataset from real patients, BRATS 2019. The experimental results reveal that our model improves the overall segmentation performance in comparison with the state-of-the-art methods, with major progress on the recognition of the tumor shape, the structural relationship of tumor sub-regions, and the segmentation of more challenging tumor sub-regions, e.g., the tumor core, and enhancing tumor.
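The dynamic focal Dice loss described above builds on the idea of exponentiating the per-class Dice term so training focuses on harder, typically smaller, sub-regions; the sketch below uses a fixed exponent for illustration and is not the paper's dynamic scheme.

```python
# Hedged sketch: a focal variant of the Dice loss with a fixed focusing
# exponent gamma (the paper adapts this dynamically during training).
import torch

def focal_dice_loss(probs, target, gamma=2.0, eps=1e-6):
    """probs, target: (B, C, ...) soft predictions and one-hot ground truth."""
    dims = tuple(range(2, probs.dim()))
    intersection = (probs * target).sum(dim=dims)
    denom = probs.sum(dim=dims) + target.sum(dim=dims)
    dice_per_class = (2.0 * intersection + eps) / (denom + eps)     # (B, C)
    # raising (1 - Dice) to gamma up-weights the poorly segmented classes
    return ((1.0 - dice_per_class) ** gamma).mean()

if __name__ == "__main__":
    probs = torch.softmax(torch.randn(2, 4, 64, 64, 64), dim=1)  # 4 toy tumor classes
    target = torch.zeros_like(probs); target[:, 0] = 1.0          # toy one-hot labels
    print(focal_dice_loss(probs, target).item())
```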
Affiliation(s)
- Pei Wang
- Lo Kwee-Seong Medical Image Analysis Laboratory, Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong.
- Albert C S Chung
- Lo Kwee-Seong Medical Image Analysis Laboratory, Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong
17
Ramirez-Quintana JA, Rangel-Gonzalez R, Chacon-Murguia MI, Ramirez-Alonso G. A visual object segmentation algorithm with spatial and temporal coherence inspired by the architecture of the visual cortex. Cogn Process 2021;23:27-40. PMID: 34779948; DOI: 10.1007/s10339-021-01065-y.
Abstract
Scene analysis in video sequences is a complex task for a computer vision system. Several schemes have been addressed in this analysis, such as deep learning networks or traditional image processing methods. However, these methods require thorough training or manual adjustment of parameters to achieve accurate results. Therefore, it is necessary to develop novel methods to analyze the scenario information in video sequences. For this reason, this paper proposes a method for object segmentation in video sequences inspired by the structural layers of the visual cortex. The method is called Neuro-Inspired Object Segmentation, SegNI. SegNI has a hierarchical architecture that analyzes object features such as edges, color, and motion to generate regions that represent the objects in the scenario. The results obtained with the Video Segmentation Benchmark VSB100 dataset demonstrate that SegNI can adapt automatically to videos with scenarios that have different nature, composition, and different types of objects. Also, SegNI adapts its processing to new scenario conditions without training, which is a significant advantage over deep learning networks.
Affiliation(s)
- Juan A Ramirez-Quintana
- Graduate and Research Department, Tecnologico Nacional de Mexico / I.T. Chihuahua, Av. Tecnologico 2909, Chihuahua, 31310, Mexico.
- Mario I Chacon-Murguia
- Graduate and Research Department, Tecnologico Nacional de Mexico / I.T. Chihuahua, Av. Tecnologico 2909, Chihuahua, 31310, Mexico
18
Dong X, Chen Z, Liu YJ, Yao J, Guo X. GPU-Based Supervoxel Generation With a Novel Anisotropic Metric. IEEE Trans Image Process 2021;30:8847-8860. PMID: 34694996; DOI: 10.1109/tip.2021.3120878.
Abstract
Video over-segmentation into supervoxels is an important pre-processing technique for many computer vision tasks. Videos are an order of magnitude larger than images. Most existing methods for generating supervoxels are either memory- or time-inefficient, which limits their application in subsequent video processing tasks. In this paper, we present an anisotropic supervoxel method, which is memory-efficient and can be executed on the graphics processing unit (GPU). Therefore, our algorithm achieves good balance among segmentation quality, memory usage and processing time. In order to provide accurate segmentation for moving objects in video, we use the optical flow information to design a brand new non-Euclidean metric to calculate the anisotropic distances between seeds and voxels. To efficiently compute the anisotropic metric, we adjust the classic jump flooding algorithm (which is designed for parallel execution on the GPU) to generate anisotropic Voronoi tessellation in the combined color and spatio-temporal space. We evaluate our method and the representative supervoxel algorithms for their capability on segmentation performance, computation speed and memory efficiency. We also apply supervoxel results to the application of foreground propagation in videos to test the performance on solving practical problems. Experiments show that our algorithm is much faster than the existing methods, and achieves good balance on segmentation quality and efficiency.
19
Wang H, Hu J, Song Y, Zhang L, Bai S, Yi Z. Multi-view fusion segmentation for brain glioma on CT images. Appl Intell 2021. DOI: 10.1007/s10489-021-02784-7.
20
GSCFN: A graph self-construction and fusion network for semi-supervised brain tissue segmentation in MRI. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.05.047.
21
Yi R, Ye Z, Zhao W, Yu M, Lai YK, Liu YJ. Feature-Aware Uniform Tessellations on Video Manifold for Content-Sensitive Supervoxels. IEEE Trans Pattern Anal Mach Intell 2021;43:3183-3195. PMID: 32167886; DOI: 10.1109/tpami.2020.2979714.
Abstract
Over-segmenting a video into supervoxels has strong potential to reduce the complexity of downstream computer vision applications. Content-sensitive supervoxels (CSSs) are typically smaller in content-dense regions (i.e., with high variation of appearance and/or motion) and larger in content-sparse regions. In this paper, we propose to compute feature-aware CSSs (FCSSs) that are regularly shaped 3D primitive volumes well aligned with local object/region/motion boundaries in video. To compute FCSSs, we map a video to a 3D manifold embedded in a combined color and spatiotemporal space, in which the volume elements of video manifold give a good measure of the video content density. Then any uniform tessellation on video manifold can induce CSS in the video. Our idea is that among all possible uniform tessellations on the video manifold, FCSS finds one whose cell boundaries well align with local video boundaries. To achieve this goal, we propose a novel restricted centroidal Voronoi tessellation method that simultaneously minimizes the tessellation energy (leading to uniform cells in the tessellation) and maximizes the average boundary distance (leading to good local feature alignment). Theoretically our method has an optimal competitive ratio O(1), and its time and space complexities are O(NK) and O(N+K) for computing K supervoxels in an N-voxel video. We also present a simple extension of FCSS to streaming FCSS for processing long videos that cannot be loaded into main memory at once. We evaluate FCSS, streaming FCSS and ten representative supervoxel methods on four video datasets and two novel video applications. The results show that our method simultaneously achieves state-of-the-art performance with respect to various evaluation criteria.
22
Liu Z, Tong L, Chen L, Zhou F, Jiang Z, Zhang Q, Wang Y, Shan C, Li L, Zhou H. CANet: Context Aware Network for Brain Glioma Segmentation. IEEE Trans Med Imaging 2021;40:1763-1777. PMID: 33720830; DOI: 10.1109/tmi.2021.3065918.
Abstract
Automated segmentation of brain glioma plays an active role in diagnosis decision, progression monitoring and surgery planning. Based on deep neural networks, previous studies have shown promising technologies for brain glioma segmentation. However, these approaches lack powerful strategies to incorporate contextual information of tumor cells and their surrounding, which has been proven as a fundamental cue to deal with local ambiguity. In this work, we propose a novel approach named Context-Aware Network (CANet) for brain glioma segmentation. CANet captures high dimensional and discriminative features with contexts from both the convolutional space and feature interaction graphs. We further propose context guided attentive conditional random fields which can selectively aggregate features. We evaluate our method using publicly accessible brain glioma segmentation datasets BRATS2017, BRATS2018 and BRATS2019. The experimental results show that the proposed algorithm has better or competitive performance against several State-of-The-Art approaches under different segmentation metrics on the training and validation sets.
23
Al-Saffar ZA, Yildirim T. A hybrid approach based on multiple Eigenvalues selection (MES) for the automated grading of a brain tumor using MRI. Comput Methods Programs Biomed 2021;201:105945. PMID: 33581624; DOI: 10.1016/j.cmpb.2021.105945.
Abstract
BACKGROUND AND OBJECTIVE The manual segmentation, identification, and classification of brain tumors using magnetic resonance (MR) images are essential for making a correct diagnosis. It is, however, an exhausting and time-consuming task performed by clinical experts, and the accuracy of the results is subject to their point of view. Computer-aided technology has therefore been developed to computerize these procedures. METHODS In order to improve the outcomes and decrease the complications involved in the process of analysing medical images, this study has investigated several methods. These include: a Local Difference in Intensity - Means (LDI-Means) based brain tumor segmentation, Mutual Information (MI) based feature selection, Singular Value Decomposition (SVD) based dimensionality reduction, and both Support Vector Machine (SVM) and Multi-Layer Perceptron (MLP) based brain tumor classification. Also, this study presents a new method named Multiple Eigenvalues Selection (MES) to choose the most meaningful features as inputs to the classifiers. This combination of unsupervised and supervised techniques forms an effective system for the grading of brain glioma. RESULTS The experimental results of the proposed method showed an excellent performance in terms of accuracy, recall, specificity, precision, and error rate: 91.02%, 86.52%, 94.26%, 87.07%, and 0.0897 respectively. CONCLUSION The obtained results prove the significance and effectiveness of the proposed method in comparison to other state-of-the-art techniques and show that it can contribute to an early diagnosis of brain glioma.
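The generic building blocks named above (MI-based feature selection, SVD dimensionality reduction, SVM classification) can be chained as in the sketch below; the data are synthetic and the component settings are illustrative assumptions, not the study's configuration.

```python
# Minimal sketch (assumptions throughout): a scikit-learn pipeline with
# mutual-information feature selection, truncated SVD and an SVM classifier,
# run on synthetic data standing in for MR-derived tumor features.
from sklearn.datasets import make_classification
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=200, n_informative=20,
                           random_state=0)           # stand-in for tumor features

pipeline = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=60),           # MI-based feature selection
    TruncatedSVD(n_components=20, random_state=0),    # SVD dimensionality reduction
    SVC(kernel="rbf", C=1.0),                         # SVM grading classifier
)

scores = cross_val_score(pipeline, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.3f}")
```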
Affiliation(s)
- Zahraa A Al-Saffar
- Department of Biomedical Engineering, Al-Khwarizmi Collage of Engineering, University of Baghdad, Baghdad 10071, Iraq; Department of Electronics and Communications Engineering, Yildiz Technical University, Istanbul 34220, Turkey.
- Tülay Yildirim
- Department of Electronics and Communications Engineering, Yildiz Technical University, Istanbul 34220, Turkey
24
25
Gryska E, Schneiderman J, Björkman-Burtscher I, Heckemann RA. Automatic brain lesion segmentation on standard magnetic resonance images: a scoping review. BMJ Open 2021;11:e042660. PMID: 33514580; PMCID: PMC7849889; DOI: 10.1136/bmjopen-2020-042660.
Abstract
OBJECTIVES Medical image analysis practices face challenges that can potentially be addressed with algorithm-based segmentation tools. In this study, we map the field of automatic MR brain lesion segmentation to understand the clinical applicability of prevalent methods and study designs, as well as challenges and limitations in the field. DESIGN Scoping review. SETTING Three databases (PubMed, IEEE Xplore and Scopus) were searched with tailored queries. Studies were included based on predefined criteria. Emerging themes during consecutive title, abstract, methods and whole-text screening were identified. The full-text analysis focused on materials, preprocessing, performance evaluation and comparison. RESULTS Out of 2990 unique articles identified through the search, 441 articles met the eligibility criteria, with an estimated growth rate of 10% per year. We present a general overview and trends in the field with regard to publication sources, segmentation principles used and types of lesions. Algorithms are predominantly evaluated by measuring the agreement of segmentation results with a trusted reference. Few articles describe measures of clinical validity. CONCLUSIONS The observed reporting practices leave room for improvement with a view to studying replication, method comparison and clinical applicability. To promote this improvement, we propose a list of recommendations for future studies in the field.
Affiliation(s)
- Emilia Gryska
- Medical Radiation Sciences, Goteborgs universitet Institutionen for kliniska vetenskaper, Goteborg, Sweden
- Justin Schneiderman
- Sektionen för klinisk neurovetenskap, Goteborgs Universitet Institutionen for Neurovetenskap och fysiologi, Goteborg, Sweden
- Rolf A Heckemann
- Medical Radiation Sciences, Goteborgs universitet Institutionen for kliniska vetenskaper, Goteborg, Sweden
26
Magadza T, Viriri S. Deep Learning for Brain Tumor Segmentation: A Survey of State-of-the-Art. J Imaging 2021;7:19. PMID: 34460618; PMCID: PMC8321266; DOI: 10.3390/jimaging7020019.
Abstract
Quantitative analysis of brain tumors provides valuable information for better understanding tumor characteristics and for treatment planning. The accurate segmentation of lesions requires more than one image modality with varying contrasts. As a result, manual segmentation, which is arguably the most accurate segmentation method, would be impractical for more extensive studies. Deep learning has recently emerged as a solution for quantitative analysis due to its record-shattering performance. However, medical image analysis has its unique challenges. This paper presents a review of state-of-the-art deep learning methods for brain tumor segmentation, clearly highlighting their building blocks and various strategies. We end with a critical discussion of open challenges in medical image analysis.
Collapse
Affiliation(s)
- Serestina Viriri
- School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban 4000, South Africa
27
Abstract
This paper proposes a new approach for superpixel segmentation. It is formulated as finding a rooted spanning forest of a graph with respect to some roots and a path-cost function. The underlying graph represents an image, the roots serve as seeds for segmentation, each pixel is connected to one seed via a path, the path-cost function measures both the color similarity and spatial closeness between two pixels via a path, and each tree in the spanning forest represents one superpixel. Originating from the evenly distributed seeds, the superpixels are guided by a path-cost function to grow uniformly and adaptively, the pixel-by-pixel growing continues until they cover the whole image. The number of superpixels is controlled by the number of seeds. The connectivity is maintained by region growing. Good performances are assured by connecting each pixel to the similar seed, which are dominated by the path-cost function. It is evaluated by both the superpixel benchmark and supervoxel benchmark. Its performance is ranked as the second among top performing state-of-the-art methods. Moreover, it is much faster than the other superpixel and supervoxel methods.
28
Maruthamuthu A, Gnanapandithan G. LP. Brain tumour segmentation from MRI using superpixels based spectral clustering. J King Saud Univ Comput Inf Sci 2020. DOI: 10.1016/j.jksuci.2018.01.009.
29
Wang B, Chen Y, Liu W, Qin J, Du Y, Han G, He S. Real-Time Hierarchical Supervoxel Segmentation via a Minimum Spanning Tree. IEEE Trans Image Process 2020;29:9665-9677. PMID: 33074808; DOI: 10.1109/tip.2020.3030502.
Abstract
Supervoxel segmentation has been applied as a preprocessing step for many vision tasks. However, existing supervoxel segmentation algorithms cannot generate, in real time, hierarchical supervoxel segmentations that preserve spatiotemporal boundaries well, which prevents downstream applications from processing accurately and efficiently. In this paper, we propose a real-time hierarchical supervoxel segmentation algorithm based on the minimum spanning tree (MST), which achieves state-of-the-art accuracy while being at least 11× faster than existing methods. In particular, we introduce a dynamic graph updating operation into the iterative construction process of the MST, which geometrically decreases the numbers of vertices and edges. In this way, the proposed method is able to generate supervoxels at arbitrary scales on the fly. We prove that our algorithm can produce hierarchical supervoxels in O(n) time, where n denotes the number of voxels in the input video. Quantitative and qualitative evaluations on public benchmarks demonstrate that the proposed algorithm significantly outperforms state-of-the-art algorithms in terms of supervoxel segmentation accuracy and computational efficiency. Furthermore, we demonstrate the effectiveness of the proposed method on a downstream application of video object segmentation.
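For illustration only, the MST-based grouping idea (not the paper's O(n) dynamic-graph construction) can be sketched with a Kruskal-style edge ordering and a union-find structure; the names and thresholds below are assumptions for the example.

```python
import numpy as np

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, a):
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]   # path halving
            a = self.parent[a]
        return a
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def hierarchical_supervoxels(volume, thresholds=(2.0, 5.0, 10.0)):
    """Group 6-connected voxels by processing edges in order of increasing
    intensity difference (Kruskal's MST order) and merging below successively
    larger thresholds, giving a crude hierarchy of supervoxel labellings."""
    d, h, w = volume.shape
    n = d * h * w
    idx = np.arange(n).reshape(d, h, w)
    vals = volume.astype(float).ravel()
    edge_list = []
    for axis in range(3):
        a = np.take(idx, range(volume.shape[axis] - 1), axis=axis).ravel()
        b = np.take(idx, range(1, volume.shape[axis]), axis=axis).ravel()
        edge_list.append(np.stack([np.abs(vals[a] - vals[b]), a, b], axis=1))
    edges = np.concatenate(edge_list)
    edges = edges[np.argsort(edges[:, 0])]                  # MST edge order
    uf = UnionFind(n)
    hierarchy = []
    for t in thresholds:                                    # coarser and coarser levels
        for wgt, a, b in edges[edges[:, 0] <= t]:
            uf.union(int(a), int(b))
        hierarchy.append(np.array([uf.find(i) for i in range(n)]).reshape(d, h, w))
    return hierarchy
```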
30
Mitchell JR, Kamnitsas K, Singleton KW, Whitmire SA, Clark-Swanson KR, Ranjbar S, Rickertsen CR, Johnston SK, Egan KM, Rollison DE, Arrington J, Krecke KN, Passe TJ, Verdoorn JT, Nagelschneider AA, Carr CM, Port JD, Patton A, Campeau NG, Liebo GB, Eckel LJ, Wood CP, Hunt CH, Vibhute P, Nelson KD, Hoxworth JM, Patel AC, Chong BW, Ross JS, Boxerman JL, Vogelbaum MA, Hu LS, Glocker B, Swanson KR. Deep neural network to locate and segment brain tumors outperformed the expert technicians who created the training data. J Med Imaging (Bellingham) 2020; 7:055501. [PMID: 33102623 PMCID: PMC7567400 DOI: 10.1117/1.jmi.7.5.055501] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2020] [Accepted: 09/21/2020] [Indexed: 11/17/2022] Open
Abstract
Purpose: Deep learning (DL) algorithms have shown promising results for brain tumor segmentation in MRI. However, validation is required prior to routine clinical use. We report the first randomized and blinded comparison of DL and trained technician segmentations. Approach: We compiled a multi-institutional database of 741 pretreatment MRI exams. Each contained a postcontrast T1-weighted exam, a T2-weighted fluid-attenuated inversion recovery exam, and at least one technician-derived tumor segmentation. The database included 729 unique patients (470 males and 259 females). Of these exams, 641 were used for training the DL system, and 100 were reserved for testing. We developed a platform to enable qualitative, blinded, controlled assessment of lesion segmentations made by technicians and the DL method. On this platform, 20 neuroradiologists performed 400 side-by-side comparisons of segmentations on 100 test cases. They scored each segmentation between 0 (poor) and 10 (perfect). Agreement between segmentations from technicians and the DL method was also evaluated quantitatively using the Dice coefficient, which produces values between 0 (no overlap) and 1 (perfect overlap). Results: The neuroradiologists gave technician and DL segmentations mean scores of 6.97 and 7.31, respectively (p<0.00007). The DL method achieved a mean Dice coefficient of 0.87 on the test cases. Conclusions: This was the first objective comparison of automated and human segmentation using a blinded controlled assessment study. Our DL system learned to outperform its “human teachers” and produced output that was better, on average, than its training data.
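The Dice coefficient used above for quantitative agreement has a simple closed form, 2|A∩B| / (|A| + |B|); a minimal sketch follows (the function name is illustrative).

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice overlap between two binary masks: 0 = no overlap, 1 = perfect overlap."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0          # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```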
Affiliation(s)
- Joseph Ross Mitchell
- H. Lee Moffitt Cancer Center and Research Institute, Department of Machine Learning, Tampa, Florida, United States
- Kyle W Singleton
- Mayo Clinic, Mathematical NeuroOncology Lab, Phoenix, Arizona, United States
- Scott A Whitmire
- Mayo Clinic, Mathematical NeuroOncology Lab, Phoenix, Arizona, United States
- Sara Ranjbar
- Mayo Clinic, Mathematical NeuroOncology Lab, Phoenix, Arizona, United States
- Sandra K Johnston
- Mayo Clinic, Mathematical NeuroOncology Lab, Phoenix, Arizona, United States; University of Washington, Department of Radiology, Seattle, Washington, United States
- Kathleen M Egan
- H. Lee Moffitt Cancer Center and Research Institute, Department of Cancer Epidemiology, Tampa, Florida, United States
- Dana E Rollison
- H. Lee Moffitt Cancer Center and Research Institute, Department of Cancer Epidemiology, Tampa, Florida, United States
- John Arrington
- H. Lee Moffitt Cancer Center and Research Institute, Department of Diagnostic Imaging and Interventional Radiology, Tampa, Florida, United States
- Karl N Krecke
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Theodore J Passe
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Jared T Verdoorn
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Carrie M Carr
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- John D Port
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Alice Patton
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Norbert G Campeau
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Greta B Liebo
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Laurence J Eckel
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Christopher P Wood
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Christopher H Hunt
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Prasanna Vibhute
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Kent D Nelson
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Joseph M Hoxworth
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Ameet C Patel
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Brian W Chong
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Jeffrey S Ross
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Jerrold L Boxerman
- Rhode Island Hospital and Alpert Medical School of Brown University, Department of Diagnostic Imaging, Providence, Rhode Island, United States
- Michael A Vogelbaum
- H. Lee Moffitt Cancer Center and Research Institute, Department of Neurosurgery, Tampa, Florida, United States
- Leland S Hu
- Mayo Clinic, Mathematical NeuroOncology Lab, Phoenix, Arizona, United States; Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Ben Glocker
- Imperial College, Biomedical Image Analysis Group, London, United Kingdom
- Kristin R Swanson
- Mayo Clinic, Mathematical NeuroOncology Lab, Phoenix, Arizona, United States; Mayo Clinic, Department of Neurosurgery, Phoenix, Arizona, United States
31
Wang B, Liu W, Han G, He S. Learning Long-term Structural Dependencies for Video Salient Object Detection. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2020; PP:9017-9031. [PMID: 32941135 DOI: 10.1109/tip.2020.3023591] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Existing video salient object detection (VSOD) methods focus on exploring either short-term or long-term temporal information. However, temporal information is exploited at a global frame level or in a regular grid structure, neglecting inter-frame structural dependencies. In this paper, we propose to learn long-term structural dependencies with a structure-evolving graph convolutional network (GCN). In particular, we construct a graph for the entire video using a fast supervoxel segmentation method, in which nodes are connected according to spatio-temporal structural similarity. We infer the inter-frame structural dependencies of the salient object using convolutional operations on the graph. To prune redundant connections in the graph and better adapt to the moving salient object, we present an adaptive graph pooling that evolves the structure of the graph by dynamically merging similar nodes, learning better hierarchical representations of the graph. Experiments on six public datasets show that our method outperforms all other state-of-the-art methods. Furthermore, we demonstrate that the proposed adaptive graph pooling can effectively improve the supervoxel algorithm in terms of segmentation accuracy.
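The "merge similar nodes" step behind adaptive graph pooling can be mimicked with a small hand-rolled routine; this is only a toy stand-in for the learned, structure-evolving pooling in the paper, and the similarity threshold, boolean adjacency matrix and names are assumptions.

```python
import numpy as np

def pool_similar_nodes(features, adjacency, sim_threshold=0.95):
    """Greedily merge adjacent graph nodes whose feature vectors have cosine
    similarity above sim_threshold, averaging features and collapsing edges."""
    n = features.shape[0]
    group = np.arange(n)
    unit = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    for i in range(n):
        for j in range(i + 1, n):
            if adjacency[i, j] and unit[i] @ unit[j] > sim_threshold:
                group[group == group[j]] = group[i]        # merge j's group into i's
    ids = np.unique(group)
    pooled = np.stack([features[group == g].mean(axis=0) for g in ids])
    new_adj = np.zeros((len(ids), len(ids)), dtype=bool)
    for a, ga in enumerate(ids):
        for b, gb in enumerate(ids):
            if a != b and adjacency[np.ix_(group == ga, group == gb)].any():
                new_adj[a, b] = True
    return pooled, new_adj
```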
32
Yamanakkanavar N, Choi JY, Lee B. MRI Segmentation and Classification of Human Brain Using Deep Learning for Diagnosis of Alzheimer's Disease: A Survey. SENSORS (BASEL, SWITZERLAND) 2020; 20:E3243. [PMID: 32517304 PMCID: PMC7313699 DOI: 10.3390/s20113243] [Citation(s) in RCA: 53] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/16/2020] [Revised: 05/25/2020] [Accepted: 06/03/2020] [Indexed: 02/07/2023]
Abstract
Many neurological diseases have been analyzed, pathological regions delineated, and the anatomical structure of the brain researched with the aid of magnetic resonance imaging (MRI). It is important to identify patients with Alzheimer's disease (AD) early so that preventative measures can be taken. A detailed analysis of the tissue structures from segmented MRI leads to a more accurate classification of specific brain disorders. Several segmentation methods of varying complexity have been proposed to diagnose AD. Segmentation of brain structures and classification of AD using deep learning approaches have gained attention because they can provide effective results over large sets of data; hence, deep learning methods are now preferred over state-of-the-art machine learning methods. We aim to provide an outline of current deep learning-based segmentation approaches for the quantitative analysis of brain MRI for the diagnosis of AD. Here, we report how convolutional neural network architectures are used to analyze the anatomical brain structure and diagnose AD, discuss how brain MRI segmentation improves AD classification, describe the state-of-the-art approaches, and summarize their results using publicly available datasets. Finally, we provide insight into current issues and discuss possible future research directions in building a computer-aided diagnostic system for AD.
Affiliation(s)
- Nagaraj Yamanakkanavar
- Department of Information and Communications Engineering, Chosun University, Gwangju 61452, Korea
- Jae Young Choi
- Division of Computer & Electronic Systems Engineering, Hankuk University of Foreign Studies, Yongin 17035, Korea
- Bumshik Lee
- Department of Information and Communications Engineering, Chosun University, Gwangju 61452, Korea
33
Krishan A, Mittal D. Effective segmentation and classification of tumor on liver MRI and CT images using multi-kernel K-means clustering. ACTA ACUST UNITED AC 2020; 65:301-313. [PMID: 31747373 DOI: 10.1515/bmt-2018-0175] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2018] [Accepted: 09/19/2019] [Indexed: 11/15/2022]
Abstract
The proposed technique aims to provide effective classification of liver magnetic resonance imaging (MRI) and computed tomography (CT) scan images, which plays a significant role in medical datasets, especially for feature selection and classification. A number of existing research works classify liver tumor disease, and early detection of a liver tumor helps patients to be treated rapidly. The proposed research focuses on classifying medical images as normal or abnormal with an artificial neural network (ANN). In the pre-processing step, the input image is selected from the database and adaptive median filtering is used for noise removal. For better enhancement, histogram equalization (HE) is applied to the noise-removed images. From the pre-processed images, texture features such as the gray-level co-occurrence matrix (GLCM) and statistical features are extracted. From this extensive feature set, optimal features are selected using the optimal kernel K-means (OKK-means) clustering algorithm along with the oppositional firefly algorithm (OFA). The proposed method obtained 97.5% classification accuracy when compared with the existing method.
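As a small illustration of the GLCM texture features mentioned above (the adaptive median filtering, HE, OKK-means/OFA feature selection and ANN classifier are not reproduced), a co-occurrence matrix and two Haralick-style statistics can be computed directly with NumPy; the function name and the single offset are assumptions for the sketch.

```python
import numpy as np

def glcm_features(image, levels=16, dy=0, dx=1):
    """Grey-level co-occurrence matrix for one pixel offset (dy, dx) and two of
    the standard texture statistics derived from it (contrast, homogeneity)."""
    bins = np.linspace(image.min(), image.max(), levels + 1)[1:-1]
    img = np.digitize(image, bins)                  # quantise to `levels` grey levels
    h, w = img.shape
    src = img[:h - dy, :w - dx]
    dst = img[dy:, dx:]
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (src.ravel(), dst.ravel()), 1)  # count co-occurring level pairs
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = float((glcm * (i - j) ** 2).sum())
    homogeneity = float((glcm / (1.0 + np.abs(i - j))).sum())
    return contrast, homogeneity
```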
Affiliation(s)
- Abhay Krishan
- Department of Electrical and Instrumentation Engineering, Thapar Institute of Engineering and Technology, Patiala, 147004 Punjab, India
- Deepti Mittal
- Department of Electrical and Instrumentation Engineering, Thapar Institute of Engineering and Technology, Patiala, Punjab, India
34
Geetha A, Gomathi N. A robust grey wolf-based deep learning for brain tumour detection in MR images. ACTA ACUST UNITED AC 2020; 65:191-207. [DOI: 10.1515/bmt-2018-0244] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2018] [Accepted: 08/06/2019] [Indexed: 11/15/2022]
Abstract
In recent times, the detection of brain tumours has become more common. Generally, a brain tumour is an abnormal mass of tissue in which cells grow uncontrollably, apparently unregulated by the mechanisms that control normal cells. A number of techniques have been developed thus far; however, the time needed to detect a brain tumour is still a challenge in the field of image processing. This article proposes a new, accurate detection model. The model includes processes such as preprocessing, segmentation, feature extraction and classification. In particular, two key preprocessing steps, contrast enhancement and skull stripping, are performed in the initial phase. In the segmentation process, the fuzzy c-means clustering (FCM) algorithm is used. Both grey-level co-occurrence matrix (GLCM) and grey-level run-length matrix (GRLM) features are extracted in the feature extraction phase. Moreover, this paper uses a deep belief network (DBN) for classification. An optimized DBN is used here, for which grey wolf optimisation (GWO) is employed. The proposed model is termed the GW-DBN model. The proposed model is compared with other conventional methods in terms of accuracy, specificity, sensitivity, precision, negative predictive value (NPV), the F1 score, Matthews correlation coefficient (MCC), false negative rate (FNR), false positive rate (FPR) and false discovery rate (FDR), demonstrating the superiority of the proposed work.
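The FCM step at the heart of the segmentation stage can be written out in a few lines; this is a plain fuzzy c-means sketch on arbitrary feature vectors, not the paper's full GW-DBN pipeline, and the parameter values are illustrative.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means on X (n_samples, n_features): alternate fuzzy
    centroid updates and membership updates for n_iter iterations."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)               # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        U = 1.0 / dist ** (2.0 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```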
Affiliation(s)
- A. Geetha
- VelTech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Velachery, Chennai 600042, Tamil Nadu, India
- N. Gomathi
- VelTech Dr. Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Avadi, Chennai 600062, Tamil Nadu, India
35
Rehman ZU, Zia MS, Bojja GR, Yaqub M, Jinchao F, Arshid K. Texture based localization of a brain tumor from MR-images by using a machine learning approach. Med Hypotheses 2020; 141:109705. [PMID: 32289646 DOI: 10.1016/j.mehy.2020.109705] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2020] [Revised: 03/13/2020] [Accepted: 04/02/2020] [Indexed: 01/10/2023]
Abstract
In this paper, a machine learning approach is used for brain tumour localization on FLAIR scans from magnetic resonance imaging (MRI). The multi-modal brain image dataset BraTS 2012, which is skull-stripped and co-registered, is used. To remove noise, bilateral filtering is applied, and texton-map images are then created using a Gabor filter bank. Using the texton map, the image is segmented into superpixels and low-level features are extracted: first-order intensity statistics together with a histogram of texton labels computed at each superpixel. A significant point here is that the low-level features on their own are not very informative for localizing a brain tumour in MR images; they become meaningful when integrated with the texton-map images at the region (superpixel) level. These features are then provided to a classifier to predict three classes: background, tumour and non-tumour region, and the predicted labels are used to compute two different areas (complete tumour and non-tumour). Leave-one-out cross-validation (LOOCV) is applied and achieves a Dice overlap score of 88% for whole-tumour localization, which is similar to the scores reported in the MICCAI BraTS challenge. This brain tumour localization approach, based on superpixel features from the texton-map image, shows performance equivalent to other contemporary techniques. More broadly, medical hypothesis generation using autonomous computer-based systems has made a substantial contribution to disease diagnosis.
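The texton-map construction described above (a Gabor filter bank followed by quantisation of the per-pixel responses) can be sketched with NumPy, SciPy and scikit-learn; the kernel parameters, the number of orientations and the use of k-means to quantise responses are assumptions for this example, not the authors' exact settings.

```python
import numpy as np
from scipy.ndimage import convolve
from sklearn.cluster import KMeans

def gabor_kernel(theta, lam=8.0, sigma=3.0, size=15):
    """Real part of a Gabor kernel at orientation theta with an isotropic envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / lam)

def texton_map(image, n_textons=16, n_orientations=6):
    """Filter the image with a small Gabor bank and quantise the per-pixel
    response vectors into n_textons clusters (the texton labels)."""
    responses = [convolve(image.astype(float), gabor_kernel(t))
                 for t in np.linspace(0, np.pi, n_orientations, endpoint=False)]
    feats = np.stack(responses, axis=-1).reshape(-1, n_orientations)
    labels = KMeans(n_clusters=n_textons, n_init=10, random_state=0).fit_predict(feats)
    return labels.reshape(image.shape)
```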
Affiliation(s)
- Zaka Ur Rehman
- Department of Computer Science and IT, The University of Lahore, Gujrat Campus, Gujrat, Pakistan
- M Sultan Zia
- Department of Computer Science and IT, The University of Lahore, Gujrat Campus, Gujrat, Pakistan
- Giridhar Reddy Bojja
- College of Business and Information Systems, Dakota State University, Madison, USA
- Muhammad Yaqub
- Faculty of Information Technology, Beijing University of Technology, China
- Feng Jinchao
- Faculty of Information Technology, Beijing University of Technology, China
- Kaleem Arshid
- Faculty of Information Technology, Beijing University of Technology, China
36
Toward Effective Medical Image Analysis Using Hybrid Approaches—Review, Challenges and Applications. INFORMATION 2020. [DOI: 10.3390/info11030155] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022] Open
Abstract
Accurate medical image analysis plays a vital role in several clinical applications. Nevertheless, the immense and complex data volumes to be processed make the design of effective algorithms difficult. The first aim of this paper is to examine this area of research and to provide relevant reference sources related to medical image analysis. An effective hybrid solution to further improve the expected results is then proposed. It considers the benefits of cooperation between different complementary approaches, such as statistical-based, variational-based and atlas-based techniques, while reducing their drawbacks. In particular, a pipeline framework involving a preprocessing step, a classification step and a refinement step with a variational-based method is developed to accurately identify pathological regions in biomedical images. The preprocessing step removes noise and improves image quality. The classification is based on both a symmetry-axis detection step and non-linear learning with the SVM algorithm. Finally, a level-set-based model refines the boundary detection of the region of interest. This work shows that an accurate initialization step can enhance final performance. Results are presented for the challenging application of brain tumor segmentation.
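To make the symmetry-plus-SVM idea concrete, the sketch below computes crude left/right asymmetry features about the image midline and hands them to a non-linear SVM; it ignores the preprocessing and level-set refinement stages, and the feature design, names and training arrays (X_train, y_train, training_slices) are assumptions for the example.

```python
import numpy as np
from sklearn.svm import SVC

def asymmetry_features(slice2d, n_bands=8):
    """Band-wise mean of absolute left/right intensity differences about the
    vertical midline: a crude stand-in for symmetry-axis analysis."""
    h, w = slice2d.shape
    left = slice2d[:, :w // 2].astype(float)
    right = slice2d[:, w - w // 2:][:, ::-1].astype(float)   # mirrored right half
    diff = np.abs(left - right)
    return np.array([band.mean() for band in np.array_split(diff, n_bands, axis=0)])

# Non-linear classification of slices (1 = contains a pathological region).
# X_train = np.stack([asymmetry_features(s) for s in training_slices])   # assumed data
# clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
```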
37
Chaki J, Dey N. Data Tagging in Medical Images: A Survey of the State-of-Art. Curr Med Imaging 2020; 16:1214-1228. [PMID: 32108002 DOI: 10.2174/1573405616666200218130043] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2019] [Revised: 12/02/2019] [Accepted: 12/16/2019] [Indexed: 11/22/2022]
Abstract
A huge amount of medical data is generated every second, and a significant percentage of these data are images that need to be analyzed and processed. One of the key challenges in this regard is the retrieval of medical image data. Medical image retrieval should be performed automatically by computers, which identify object concepts and assign corresponding tags to them. Discovering the hidden concepts in medical images requires mapping low-level characteristics to high-level concepts, which is a challenging task; in any specific case, human involvement is required to determine the significance of the image. To allow machine-based reasoning on the medical evidence collected, the data must be accompanied by additional interpretive semantics: a change from a purely data-intensive methodology to a model of evidence rich in semantics. In this state-of-the-art survey, data tagging methods related to medical images are reviewed, an important aspect of recognizing a huge number of medical images. Different types of tags related to medical images, prerequisites of medical data tagging, different techniques for developing medical image tags, different medical image tagging algorithms and different tools used to create the tags are discussed in this paper. The aim of this state-of-the-art paper is to produce a summary and a set of guidelines for using tags for the identification of medical images and to identify the challenges and future research directions of tagging medical images.
Affiliation(s)
- Jyotismita Chaki
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India
- Nilanjan Dey
- Department of Information Technology, Techno India College of Technology, West Bengal, India
38
Yin B, Wang C, Abza F. New brain tumor classification method based on an improved version of whale optimization algorithm. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2019.101728] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
39
Saygili A, Albayrak S. Knee Meniscus Segmentation and Tear Detection from MRI: A Review. Curr Med Imaging 2020; 16:2-15. [DOI: 10.2174/1573405614666181017122109] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2018] [Revised: 09/20/2018] [Accepted: 09/29/2018] [Indexed: 12/22/2022]
Abstract
Background: Automatic diagnostic systems in medical imaging provide useful information to support radiologists and other relevant experts. The number of systems that help radiologists in their analysis and diagnosis appears to be increasing.
Discussion: Knee joints are intensively studied structures as well. In this review, studies that automatically segment meniscal structures from knee joint MR images and detect tears have been investigated. Some of the studies in the literature merely perform meniscus segmentation, while others include classification procedures that both segment the meniscus and detect anomalies on the menisci. The studies performed on the meniscus were categorized according to the methods they used. The methods used and the results obtained from such studies were analyzed along with their drawbacks, and the aspects to be developed were also emphasized.
Conclusion: The work that has been done in this area can effectively support the decisions made by radiology and orthopedics specialists. Furthermore, these operations, which were performed manually on MR images, can be performed in a shorter time with the help of computer-aided systems, which enables early diagnosis and treatment.
Affiliation(s)
- Ahmet Saygili
- Computer Engineering Department, Corlu Faculty of Engineering, Namık Kemal University, Tekirdağ, Turkey
- Songül Albayrak
- Computer Engineering Department, Faculty of Electric and Electronics, Yıldız Technical University, İstanbul, Turkey
40
Amin J, Sharif M, Anjum MA, Raza M, Bukhari SAC. Convolutional neural network with batch normalization for glioma and stroke lesion detection using MRI. COGN SYST RES 2020. [DOI: 10.1016/j.cogsys.2019.10.002] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
41
Gong S, Gao W, Abza F. Brain tumor diagnosis based on artificial neural network and a chaos whale optimization algorithm. Comput Intell 2019. [DOI: 10.1111/coin.12259] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Affiliation(s)
- Shu Gong
- Department of Computer Science, Guangdong University of Science and Technology, Dongguan, China
- Wei Gao
- School of Information Science and Technology, Yunnan Normal University, Kunming, China
- Francis Abza
- Department of Computer Science, University of Ghana, Legon-Accra, Ghana
42
A Weakly Supervised Multi-task Ranking Framework for Actor–Action Semantic Segmentation. Int J Comput Vis 2019. [DOI: 10.1007/s11263-019-01244-7] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
43
Walsh Hadamard Transform for Simple Linear Iterative Clustering (SLIC) Superpixel Based Spectral Clustering of Multimodal MRI Brain Tumor Segmentation. Ing Rech Biomed 2019. [DOI: 10.1016/j.irbm.2019.04.005] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
44
Li S, Tao Z, Li K, Fu Y. Visual to Text: Survey of Image and Video Captioning. IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE 2019. [DOI: 10.1109/tetci.2019.2892755] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
45
Agn M, Munck Af Rosenschöld P, Puonti O, Lundemann MJ, Mancini L, Papadaki A, Thust S, Ashburner J, Law I, Van Leemput K. A modality-adaptive method for segmenting brain tumors and organs-at-risk in radiation therapy planning. Med Image Anal 2019; 54:220-237. [PMID: 30952038 PMCID: PMC6554451 DOI: 10.1016/j.media.2019.03.005] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2018] [Revised: 03/14/2019] [Accepted: 03/21/2019] [Indexed: 12/25/2022]
Abstract
In this paper we present a method for simultaneously segmenting brain tumors and an extensive set of organs-at-risk for radiation therapy planning of glioblastomas. The method combines a contrast-adaptive generative model for whole-brain segmentation with a new spatial regularization model of tumor shape using convolutional restricted Boltzmann machines. We demonstrate experimentally that the method is able to adapt to image acquisitions that differ substantially from any available training data, ensuring its applicability across treatment sites; that its tumor segmentation accuracy is comparable to that of the current state of the art; and that it captures most organs-at-risk sufficiently well for radiation therapy planning purposes. The proposed method may be a valuable step towards automating the delineation of brain tumors and organs-at-risk in glioblastoma patients undergoing radiation therapy.
Affiliation(s)
- Mikael Agn
- Department of Applied Mathematics and Computer Science, Technical University of Denmark, Denmark.
- Per Munck Af Rosenschöld
- Radiation Physics, Department of Hematology, Oncology and Radiation Physics, Skåne University Hospital, Lund, Sweden
- Oula Puonti
- Danish Research Centre for Magnetic Resonance, Copenhagen University Hospital Hvidovre, Denmark
- Michael J Lundemann
- Department of Oncology, Copenhagen University Hospital Rigshospitalet, Denmark
- Laura Mancini
- Neuroradiological Academic Unit, Department of Brain Repair and Rehabilitation, UCL Institute of Neurology, University College London, UK; Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, UCLH NHS Foundation Trust, UK
- Anastasia Papadaki
- Neuroradiological Academic Unit, Department of Brain Repair and Rehabilitation, UCL Institute of Neurology, University College London, UK; Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, UCLH NHS Foundation Trust, UK
- Steffi Thust
- Neuroradiological Academic Unit, Department of Brain Repair and Rehabilitation, UCL Institute of Neurology, University College London, UK; Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, UCLH NHS Foundation Trust, UK
- John Ashburner
- Wellcome Centre for Human Neuroimaging, UCL Institute of Neurology, University College London, UK
- Ian Law
- Department of Clinical Physiology, Nuclear Medicine and PET, Copenhagen University Hospital Rigshospitalet, Denmark
- Koen Van Leemput
- Department of Applied Mathematics and Computer Science, Technical University of Denmark, Denmark; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, USA
46
Rank-Two NMF Clustering for Glioblastoma Characterization. JOURNAL OF HEALTHCARE ENGINEERING 2018; 2018:1048164. [PMID: 30425818 PMCID: PMC6218733 DOI: 10.1155/2018/1048164] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/29/2018] [Accepted: 09/26/2018] [Indexed: 11/17/2022]
Abstract
This study investigates a novel classification method for 3D multimodal MRI glioblastoma tumor characterization. We formulate the segmentation problem as a linear mixture model (LMM), so a nonnegative matrix M is built from the relevant MRI slice at every step of the segmentation process. This matrix is first used to extract the edema region from the T2 and FLAIR modalities. After that, in the remaining segmentation steps, we extract the edema region from the T1c modality, regenerate the matrix M, and segment the necrosis, enhanced tumor and non-enhanced tumor regions. In each segmentation step, we apply rank-two NMF clustering. We executed our tumor characterization method on the BraTS 2015 challenge dataset. Quantitative and qualitative evaluations on the public training and testing datasets from the MICCAI 2015 multimodal brain segmentation challenge (BraTS 2015) show that the proposed algorithm yields competitive performance for glioblastoma characterization (necrosis, tumor core and edema) among several competing methods.
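The rank-two factor-and-assign step can be reproduced with scikit-learn's NMF; this sketch only shows one binary split of pixels given a nonnegative matrix M, not the full multi-step characterization pipeline, and the initialisation and iteration settings are assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

def rank_two_nmf_split(M):
    """Factorise a nonnegative matrix M (one row per pixel, nonnegative features
    per column) as M ~ W @ H with rank 2, then assign each pixel to its dominant
    component, giving a two-way clustering of the pixels."""
    W = NMF(n_components=2, init="nndsvda", max_iter=500).fit_transform(M)
    return np.argmax(W, axis=1)

# Usage sketch: build M from the T2/FLAIR intensities of a slice to separate edema
# from the rest, then repeat on T1c within the extracted region, as described above.
```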
47
K-Means clustering and neural network for object detecting and identifying abnormality of brain tumor. Soft comput 2018. [DOI: 10.1007/s00500-018-3618-7] [Citation(s) in RCA: 54] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
48
Iterative spatial fuzzy clustering for 3D brain magnetic resonance image supervoxel segmentation. J Neurosci Methods 2018; 311:17-27. [PMID: 30315839 DOI: 10.1016/j.jneumeth.2018.10.007] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2018] [Revised: 09/13/2018] [Accepted: 10/08/2018] [Indexed: 02/02/2023]
Abstract
BACKGROUND Although supervoxel segmentation methods have been employed for brain Magnetic Resonance Image (MRI) processing and analysis, their performance remains unsatisfactory due to specific features of the brain, including complex-shaped internal structures and the partial volume effect. NEW METHODS To address these issues, this paper presents a novel iterative spatial fuzzy clustering (ISFC) algorithm to generate 3D supervoxels for brain MRI volumes based on prior knowledge. This work makes use of the common topology among human brains to obtain a set of seed templates from a population-based brain template MRI image. After selecting the number of supervoxels, the corresponding seed template is projected onto the considered individual brain to generate reliable seeds. Then, to deal with the influence of the partial volume effect, an efficient iterative spatial fuzzy clustering algorithm is proposed to allocate voxels to the seeds and to generate the supervoxels for the overall brain MRI volume. RESULTS The performance of the proposed algorithm is evaluated on two widely used public brain MRI datasets and compared with three other up-to-date methods. CONCLUSIONS The proposed algorithm can be utilized for several brain MRI processing and analysis tasks, including tissue segmentation, tumor detection and segmentation, functional parcellation and registration.
49
Ibrahim RW, Hasan AM, Jalab HA. A new deformable model based on fractional Wright energy function for tumor segmentation of volumetric brain MRI scans. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2018; 163:21-28. [PMID: 30119853 DOI: 10.1016/j.cmpb.2018.05.031] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/11/2018] [Revised: 05/17/2018] [Accepted: 05/24/2018] [Indexed: 06/08/2023]
Abstract
BACKGROUND AND OBJECTIVES MRI brain tumor segmentation is challenging due to variations in the size, shape, location and intensity features of the tumor. Active contours have been applied to MRI scan image segmentation due to their ability to produce regions with boundaries. The main difficulty encountered in active contour segmentation is boundary tracking, which is controlled by minimization of an energy function. Hence, this study proposes a novel fractional Wright function (FWF) as an energy-minimization technique to improve the performance of the active contour without edges method. METHOD In this study, we implement the FWF as an energy-minimization function to replace the standard gradient-descent method as the minimization function in the Chan-Vese segmentation technique. The proposed FWF is used to find the boundaries of an object by controlling the inside and outside values of the contour. In this study, objective evaluation is used to distinguish the differences between the processed segmented images and the ground truth using a set of statistical parameters: true positive, true negative, false positive and false negative. RESULTS The FWF as an energy minimization was successfully implemented on the BRATS 2013 image dataset. The achieved overall average sensitivity score of the brain tumor segmentation was 94.8 ± 4.7%. CONCLUSIONS The results demonstrate that the proposed FWF method minimized the energy function more than the gradient-descent method used in the original three-dimensional active contour without edges (3DACWE) method.
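For orientation, the energy-minimization update that the FWF replaces is the standard Chan-Vese gradient-descent step; a compact sketch is shown below (the curvature term is approximated with a Laplacian and the parameters are illustrative, so this is the baseline scheme, not the proposed fractional Wright formulation).

```python
import numpy as np

def chan_vese_step(phi, image, mu=0.2, dt=0.5, eps=1.0):
    """One gradient-descent update of the Chan-Vese energy on a 2D level set phi
    (phi > 0 is 'inside'); borders wrap because np.roll is used for simplicity."""
    image = image.astype(float)
    inside = phi > 0
    c1 = image[inside].mean() if inside.any() else 0.0       # mean inside the contour
    c2 = image[~inside].mean() if (~inside).any() else 0.0   # mean outside
    delta = eps / (np.pi * (eps ** 2 + phi ** 2))            # smoothed Dirac delta
    laplacian = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                 np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi)
    force = mu * laplacian - (image - c1) ** 2 + (image - c2) ** 2
    return phi + dt * delta * force
```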
Affiliation(s)
- Rabha W Ibrahim
- Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur 50603, Malaysia.
- Ali M Hasan
- College of Medicine, Al-Nahrain University, Baghdad, Iraq.
- Hamid A Jalab
- Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur 50603, Malaysia.
50
Liu J, Wang J, Tang Z, Hu B, Wu FX, Pan Y. Improving Alzheimer's Disease Classification by Combining Multiple Measures. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2018; 15:1649-1659. [PMID: 28749356 DOI: 10.1109/tcbb.2017.2731849] [Citation(s) in RCA: 30] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Several anatomical magnetic resonance imaging (MRI) markers for Alzheimer's disease (AD) have been identified. Cortical gray matter volume, cortical thickness, and subcortical volume have been used successfully to assist the diagnosis of Alzheimer's disease including its early warning and developing stages, e.g., mild cognitive impairment (MCI) including MCI converted to AD (MCIc) and MCI not converted to AD (MCInc). Currently, these anatomical MRI measures have mainly been used separately. Thus, the full potential of anatomical MRI scans for AD diagnosis might not yet have been used optimally. Meanwhile, most studies currently only focused on morphological features of regions of interest (ROIs) or interregional features without considering the combination of them. To further improve the diagnosis of AD, we propose a novel approach of extracting ROI features and interregional features based on multiple measures from MRI images to distinguish AD, MCI (including MCIc and MCInc), and health control (HC). First, we construct six individual networks based on six different anatomical measures (i.e., CGMV, CT, CSA, CC, CFI, and SV) and Automated Anatomical Labeling (AAL) atlas for each subject. Then, for each individual network, we extract all node (ROI) features and edge (interregional) features, and denoted as node feature set and edge feature set, respectively. Therefore, we can obtain six node feature sets and six edge feature sets from six different anatomical measures. Next, each feature within a feature set is ranked by -score in descending order, and the top ranked features of each feature set are applied to MKBoost algorithm to obtain the best classification accuracy. After obtaining the best classification accuracy, we can get the optimal feature subset and the corresponding classifier for each node or edge feature set. Afterwards, to investigate the classification performance with only node features, we proposed a weighted multiple kernel learning (wMKL) framework to combine these six optimal node feature subsets, and obtain a combined classifier to perform AD classification. Similarly, we can obtain the classification performance with only edge features. Finally, we combine both six optimal node feature subsets and six optimal edge feature subsets to further improve the classification performance. Experimental results show that the proposed method outperforms some state-of-the-art methods in AD classification, and demonstrate that different measures contain complementary information.
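The kernel-combination step underlying a weighted multiple-kernel classifier can be illustrated with precomputed RBF kernels; this sketch uses fixed weights and scikit-learn's SVC, whereas the paper learns the weights (wMKL) and uses MKBoost, so the names and settings below are assumptions for the example.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def combined_kernel(feature_sets, weights):
    """Weighted sum of one RBF kernel per anatomical measure; each element of
    feature_sets is an (n_subjects, n_features) array for one measure."""
    return sum(w * rbf_kernel(X, X) for w, X in zip(weights, feature_sets))

# Usage sketch with illustrative names: y holds the diagnostic labels (e.g. AD / MCI / HC).
# K = combined_kernel(feature_sets, weights=[1.0 / len(feature_sets)] * len(feature_sets))
# clf = SVC(kernel="precomputed").fit(K, y)
# For new subjects, the cross-kernel sum(w * rbf_kernel(X_new_m, X_train_m)) is required.
```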