1
Prasad V, Jeba Jingle ID, Sriramakrishnan GV. DTDO: Driving Training Development Optimization enabled deep learning approach for brain tumour classification using MRI. Network (Bristol, England) 2024:1-42. [PMID: 38801074; DOI: 10.1080/0954898x.2024.2351159] [Received: 12/28/2023; Accepted: 04/29/2024]
Abstract
A brain tumour is an abnormal mass of tissue. Brain tumours vary widely in size, location, and shape, which adds complexity to their detection, and the accurate delineation of tumour regions is challenging because of their irregular boundaries. In this research, these issues are addressed by introducing the DTDO-ZFNet for the detection of brain tumours. The input Magnetic Resonance Imaging (MRI) image is fed to the pre-processing stage. Tumour areas are segmented using SegNet, whose parameters are tuned by DTDO. Image augmentation is carried out using established techniques such as geometric transformation and colour space transformation. Features such as the GIST descriptor, PCA-NGIST, statistical and Haralick features, the SLBT feature, and CNN features are then extracted. Finally, the tumour is categorized by ZFNet, which is trained using DTDO. The devised DTDO is a consolidation of DTBO and CDDO. In comparison with existing methods, the proposed DTDO-ZFNet achieves the highest accuracy of 0.944, a positive predictive value (PPV) of 0.936, a true positive rate (TPR) of 0.939, a negative predictive value (NPV) of 0.937, and a minimal false-negative rate (FNR) of 0.061.
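The geometric and colour-space augmentations named in this abstract are standard operations; a minimal numpy sketch (illustrative only — `geometric_augment` and `colour_augment` are hypothetical names, not the authors' code) might look like:

```python
import numpy as np

def geometric_augment(img, flip=True, rot90=1):
    """Geometric transforms: horizontal mirror and 90-degree rotation."""
    out = img
    if flip:
        out = out[:, ::-1]              # mirror left-right
    if rot90:
        out = np.rot90(out, k=rot90)    # rotate in 90-degree steps
    return out

def colour_augment(img, brightness=1.2, shift=10):
    """Colour-space transform: scale and shift intensities, then re-clip."""
    out = img.astype(np.float32) * brightness + shift
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)   # stand-in for an MRI slice
aug = colour_augment(geometric_augment(img))
```

Each augmented copy is appended to the training set so that the downstream classifier sees more orientation and intensity variation than the raw scans provide.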
Affiliation(s)
- Vadamodula Prasad
- Department of Computer Science & Engineering, Lendi Institute of Engineering & Technology, Jonnada, India
- Issac Diana Jeba Jingle
- Department of Computer Science & Engineering, Christ (Deemed to be University), Bangalore, India
2
Aluri S, Imambi SS. Brain tumour classification using MRI images based on LeNet with golden teacher learning optimization. Network (Bristol, England) 2024; 35:27-54. [PMID: 37947040; DOI: 10.1080/0954898x.2023.2275720] [Received: 08/17/2023; Accepted: 10/22/2023]
Abstract
A brain tumour (BT) is a dangerous neurological disorder produced by abnormal cell growth within the skull or brain, and the death rate of people with BTs is growing steadily. Finding tumours at an early stage is crucial for giving treatment to patients and improves their survival rate. Hence, BT classification (BTC) is performed in this research using magnetic resonance imaging (MRI) images. The input MRI image is first pre-processed with a non-local means (NLM) filter that denoises it. To attain an effective classification result, the tumour area is segmented from the MRI image by the SegNet model. The BTC itself is accomplished by a LeNet model whose weights are optimized by the Golden Teacher Learning Optimization Algorithm (GTLO), and the LeNet model classifies the tumours as gliomas, meningiomas, or pituitary tumours. The experimental outcome shows that GTLO-LeNet achieved an accuracy of 0.896, a negative predictive value (NPV) of 0.907, a positive predictive value (PPV) of 0.821, a true negative rate (TNR) of 0.880, and a true positive rate (TPR) of 0.888.
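The non-local means (NLM) denoising step used here is a classical filter; a toy numpy version (a sketch for intuition only — far slower than production implementations such as scikit-image's `denoise_nl_means`) is:

```python
import numpy as np

def nlm_denoise(img, patch=1, search=2, h=50.0):
    """Toy non-local means: each pixel becomes a weighted average of nearby
    pixels, weighted by how similar their surrounding patches look."""
    img = img.astype(np.float64)
    pad = patch + search
    padded = np.pad(img, pad, mode='reflect')
    out = np.zeros_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            ref = padded[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
            num = den = 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    qi, qj = ci + di, cj + dj
                    cand = padded[qi - patch:qi + patch + 1,
                                  qj - patch:qj + patch + 1]
                    wgt = np.exp(-np.sum((ref - cand) ** 2) / (h * h))
                    num += wgt * padded[qi, qj]
                    den += wgt
            out[i, j] = num / den
    return out
```

Because the weights depend on patch similarity rather than plain spatial distance, repeated structure survives the smoothing while uncorrelated noise is averaged away.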
Affiliation(s)
- Srilakshmi Aluri
- Research Scholar, Computer Science & Engineering, K L Educational foundation, deemed to be University, Vaddeswaram, India
- Sagar S Imambi
- Professor, Computer Science and Engineering, K L Educational foundation, deemed to be University, Vaddeswaram, India
3
Jyothi P, Dhanasekaran S. An attention 3DUNET and visual geometry group-19 based deep neural network for brain tumor segmentation and classification from MRI. J Biomol Struct Dyn 2023:1-12. [PMID: 37979152; DOI: 10.1080/07391102.2023.2283164] [Received: 10/09/2023; Accepted: 11/06/2023]
Abstract
There has been an abrupt increase in brain tumor (BT) related medical cases during the past ten years. The BT is the tenth most common type of tumor, affecting millions of people, but the cure rate can rise if it is found early. MRI is a crucial tool when evaluating BT diagnosis and treatment options; however, segmenting the tumors from magnetic resonance (MR) images is complex. The advancement of deep learning (DL) has led to numerous automatic segmentation and classification approaches, but most need improvement since they are limited to 2D images. This article therefore proposes a novel and optimal DL system for segmenting and classifying BTs from 3D brain MR images. Preprocessing, segmentation, feature extraction, feature selection, and tumor classification are the main phases of the proposed work. Preprocessing, such as noise removal, is performed on the collected brain MR images using bilateral filtering. A spatial and channel attention-based three-dimensional u-shaped network (SC3DUNet) segments the tumor lesions from the preprocessed data. Feature extraction is then done with a dilated convolution-based visual geometry group-19 (DCVGG-19), making the classification task more manageable. The optimal features are selected from the extracted feature sets using a butterfly optimization algorithm incorporating diagonal linear uniform initialization and tangent flights. Finally, the proposed system applies a deep neural network with optimal hyperparameters to classify the tumor classes. Experiments conducted on the BraTS2020 dataset show that the suggested method can segment tumors and categorize them more accurately than existing state-of-the-art mechanisms. Communicated by Ramaswamy H. Sarma.
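The dilated convolutions in DCVGG-19 enlarge the receptive field without adding parameters; a single-channel numpy sketch of the operation (cross-correlation form, as used in deep learning frameworks; illustrative only, not the paper's network):

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=2):
    """'Same'-size 2D convolution (cross-correlation form) with a dilated
    (atrous) kernel: taps are spaced `dilation` pixels apart, so a 3x3
    kernel with dilation 2 covers a 5x5 neighbourhood."""
    kh, kw = kernel.shape
    eff_h = (kh - 1) * dilation + 1   # effective kernel extent
    eff_w = (kw - 1) * dilation + 1
    ph, pw = eff_h // 2, eff_w // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))          # zero padding
    out = np.zeros_like(x, dtype=np.float64)
    for i in range(kh):
        for j in range(kw):
            di, dj = i * dilation, j * dilation   # dilated tap position
            out += kernel[i, j] * xp[di:di + x.shape[0], dj:dj + x.shape[1]]
    return out
```

Stacking layers with growing dilation rates widens the context each output pixel sees, which is the usual motivation for atrous designs in segmentation backbones.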
Affiliation(s)
- Parvathy Jyothi
- Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education, Krishnankoil, India
- S Dhanasekaran
- Department of Information Technology, Kalasalingam Academy of Research and Education, Krishnankoil, India
4
Alhussan AA, Eid MM, Towfek SK, Khafaga DS. Breast Cancer Classification Depends on the Dynamic Dipper Throated Optimization Algorithm. Biomimetics (Basel) 2023; 8:163. [PMID: 37092415; PMCID: PMC10123690; DOI: 10.3390/biomimetics8020163] [Received: 02/27/2023; Revised: 04/12/2023; Accepted: 04/14/2023]
Abstract
According to the American Cancer Society, breast cancer is the second largest cause of cancer mortality among women, after lung cancer. Women's death rates can be decreased if breast cancer is diagnosed and treated early. Because manual breast cancer diagnosis takes a long time, an automated approach is necessary for early cancer identification. This research proposes a novel framework integrating metaheuristic optimization with deep learning and feature selection for robustly classifying breast cancer from ultrasound images. The proposed methodology consists of data augmentation to improve the learning of convolutional neural network (CNN) models, transfer learning using the GoogleNet deep network for feature extraction, selection of the best set of features using a novel optimization algorithm based on a hybrid of the dipper throated and particle swarm optimization algorithms, and classification of the selected features using a CNN optimized by the proposed algorithm. To prove the effectiveness of the approach, a set of experiments was conducted on a breast cancer dataset, freely available on Kaggle, to evaluate the performance of the proposed feature selection method and of the optimized CNN. In addition, statistical tests were established to study the stability and difference of the proposed approach compared to state-of-the-art approaches. The achieved results confirmed the superiority of the proposed approach with a classification accuracy of 98.1%, which is better than the other approaches considered in the conducted experiments.
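The abstract's feature selector hybridizes dipper throated optimization with particle swarm optimization; the PSO half can be sketched as a plain binary PSO over feature masks. This is an assumption-laden toy: `subset_score` is a cheap stand-in fitness (label correlation minus a size penalty), not the paper's objective, and the hybridization with dipper throated optimization is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def subset_score(mask, X, y):
    """Stand-in fitness: mean |correlation| of selected features with the
    label, minus a small penalty per selected feature."""
    sel = np.flatnonzero(mask)
    if sel.size == 0:
        return -1.0
    corr = [abs(np.corrcoef(X[:, f], y)[0, 1]) for f in sel]
    return float(np.mean(corr)) - 0.01 * sel.size

def binary_pso(X, y, n_particles=8, iters=20, w=0.7, c1=1.5, c2=1.5):
    d = X.shape[1]
    pos = (rng.random((n_particles, d)) < 0.5).astype(float)   # 0/1 masks
    vel = rng.normal(0.0, 0.1, (n_particles, d))
    pbest = pos.copy()
    pbest_fit = np.array([subset_score(p, X, y) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, d))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        prob = 1.0 / (1.0 + np.exp(-vel))        # sigmoid transfer function
        pos = (rng.random((n_particles, d)) < prob).astype(float)
        fit = np.array([subset_score(p, X, y) for p in pos])
        better = fit > pbest_fit
        pbest[better] = pos[better]
        pbest_fit[better] = fit[better]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest.astype(bool)
```

The sigmoid transfer function is the usual trick for binarizing PSO velocities: a large positive velocity makes selecting that feature more probable.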
Affiliation(s)
- Amel Ali Alhussan
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Marwa M. Eid
- Faculty of Artificial Intelligence, Delta University for Science and Technology, Mansoura 35712, Egypt
- S. K. Towfek
- Delta Higher Institute for Engineering and Technology, Mansoura 35111, Egypt
- Computer Science and Intelligent Systems Research Center, Blacksburg, VA 24060, USA
- Doaa Sami Khafaga
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
5
Srinivasan S, Bai PSM, Mathivanan SK, Muthukumaran V, Babu JC, Vilcekova L. Grade Classification of Tumors from Brain Magnetic Resonance Images Using a Deep Learning Technique. Diagnostics (Basel) 2023; 13:1153. [PMID: 36980463; PMCID: PMC10046932; DOI: 10.3390/diagnostics13061153] [Received: 01/17/2023; Revised: 02/14/2023; Accepted: 03/14/2023]
Abstract
To improve the accuracy of tumor identification, it is necessary to develop a reliable automated diagnostic method. To precisely categorize brain tumors, researchers have developed a variety of segmentation algorithms; segmentation of brain images is generally recognized as one of the most challenging tasks in medical image processing. In this article, a novel automated detection and classification method is proposed, consisting of pre-processing MRI images, segmenting them, extracting features, and classifying them. During pre-processing, an adaptive filter is utilized to eliminate background noise from the MRI scan. The local-binary grey level co-occurrence matrix (LBGLCM) is used for feature extraction, and enhanced fuzzy c-means clustering (EFCMC) is used for image segmentation. After extracting the scan features, a deep learning model classifies the MRI images into two groups, glioma and normal, using a convolutional recurrent neural network (CRNN). MRI scans from the REMBRANDT dataset, comprising 2480 training and 620 test images, were used for the research. The results demonstrate that the proposed method outperforms its predecessors: compared against BP, U-Net, and ResNet, three of the most prevalent classification approaches currently in use, the proposed system achieved 98.17% accuracy, 91.34% specificity, and 98.79% sensitivity for brain tumor classification.
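The LBGLCM descriptor builds on the grey level co-occurrence matrix (GLCM); a minimal numpy sketch of a GLCM and a few Haralick-style statistics derived from it (illustrative only — it omits the local-binary preprocessing the paper adds):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix: count how often grey level i occurs
    offset (dy, dx) from grey level j, then normalize to probabilities."""
    g = np.zeros((levels, levels))
    H, W = img.shape
    for y in range(H - dy):
        for x in range(W - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()

def haralick_stats(p):
    """A few classic Haralick texture descriptors from a normalized GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)            # local intensity variation
    energy = np.sum(p ** 2)                        # textural uniformity
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return contrast, energy, homogeneity
```

A flat region yields zero contrast and maximal energy, while a fine checkerboard texture maximizes contrast — the intuition behind using these statistics as tumor-texture features.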
Affiliation(s)
- Saravanan Srinivasan
- Department of Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai 600062, India
- Sandeep Kumar Mathivanan
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, India
- Venkatesan Muthukumaran
- Department of Mathematics, College of Engineering and Technology, SRM Institute of Science and Technology, Kattankulathur 603203, India
- Jyothi Chinna Babu
- Department of Electronics and Communications Engineering, Annamacharya Institute of Technology and Sciences, Rajampet 516126, India
- Lucia Vilcekova
- Faculty of Management, Comenius University Bratislava, Odbojarov 10, 820 05 Bratislava, Slovakia
- Correspondence:
6
AI-Powered Diagnosis of Skin Cancer: A Contemporary Review, Open Challenges and Future Research Directions. Cancers (Basel) 2023; 15:1183. [PMID: 36831525; PMCID: PMC9953963; DOI: 10.3390/cancers15041183] [Received: 12/03/2022; Revised: 02/07/2023; Accepted: 02/08/2023]
Abstract
Skin cancer remains one of the major healthcare issues across the globe; if diagnosed early, it can be treated successfully. While early diagnosis is paramount for an effective cure, the current process requires the involvement of skin cancer specialists, which makes it expensive and not easily accessible or affordable in developing countries. This dearth of specialists has given rise to the need for automated diagnosis systems. In this context, Artificial Intelligence (AI)-based methods have been proposed. These systems can assist in the early detection of skin cancer and can consequently lower its morbidity and, in turn, the mortality rate associated with it. Machine learning and deep learning are branches of AI that deal with statistical modeling and inference, progressively learning from data fed into them to predict desired objectives and characteristics. This survey focuses on machine learning and deep learning techniques deployed in the field of skin cancer diagnosis, while maintaining a balance between the two. A comparison is made with widely used datasets and prevalent review papers discussing automated skin cancer diagnosis, and the insights and lessons yielded by prior works are discussed. The survey culminates with future directions and scope, which will subsequently help in addressing the challenges faced in automated skin cancer diagnosis.
7
Hossain A, Islam MT, Abdul Rahim SK, Rahman MA, Rahman T, Arshad H, Khandakar A, Ayari MA, Chowdhury MEH. A Lightweight Deep Learning Based Microwave Brain Image Network Model for Brain Tumor Classification Using Reconstructed Microwave Brain (RMB) Images. Biosensors 2023; 13:238. [PMID: 36832004; PMCID: PMC9954219; DOI: 10.3390/bios13020238] [Received: 11/21/2022; Revised: 01/23/2023; Accepted: 01/30/2023]
Abstract
Computerized brain tumor classification from reconstructed microwave brain (RMB) images is important for examining and monitoring the development of brain disease. In this paper, an eight-layered lightweight classifier model called the microwave brain image network (MBINet), using a self-organized operational neural network (Self-ONN), is proposed to classify RMB images into six classes. Initially, an experimental antenna sensor-based microwave brain imaging (SMBI) system was implemented, and RMB images were collected to create a dataset of 1320 images in total: 300 images for the non-tumor class, 215 images each for the single malignant and single benign tumor classes, 200 images each for the double benign and double malignant tumor classes, and 190 images for the class containing one benign and one malignant tumor. Image resizing and normalization were used for preprocessing, and augmentation techniques were then applied to produce 13,200 training images per fold for 5-fold cross-validation. The MBINet model was trained and achieved accuracy, precision, recall, F1-score, and specificity of 96.97%, 96.93%, 96.85%, 96.83%, and 97.95%, respectively, for six-class classification using original RMB images. Compared with four Self-ONNs, two vanilla CNNs, and pre-trained ResNet50, ResNet101, and DenseNet201 models, the MBINet model showed better classification outcomes (almost 98%). The MBINet model can therefore be used for reliably classifying tumors from RMB images in the SMBI system.
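The evaluation protocol above — augmenting the training split within each of 5 folds — can be sketched with scikit-learn. This is a toy stand-in: Gaussian jitter on random feature vectors replaces the paper's image augmentation, and logistic regression replaces MBINet.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))          # stand-in feature vectors
y = (X[:, 0] > 0).astype(int)          # toy binary labels

def augment(Xtr, ytr, copies=2, sigma=0.05):
    """Augment only the training split (here: Gaussian jitter)."""
    Xa = [Xtr] + [Xtr + rng.normal(0, sigma, Xtr.shape) for _ in range(copies)]
    return np.vstack(Xa), np.tile(ytr, copies + 1)

scores = []
for tr, te in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    Xa, ya = augment(X[tr], y[tr])                 # never augment the test fold
    model = LogisticRegression(max_iter=1000).fit(Xa, ya)
    scores.append(accuracy_score(y[te], model.predict(X[te])))
```

Keeping augmentation inside the fold loop, applied only to training data, is what prevents augmented near-duplicates of test images from leaking into training — the property the abstract's "13,200 training images per fold" phrasing implies.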
Affiliation(s)
- Amran Hossain
- Centre for Advanced Electronic and Communication Engineering, Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Malaysia
- Department of Computer Science and Engineering, Dhaka University of Engineering and Technology, Gazipur, Gazipur 1707, Bangladesh
- Mohammad Tariqul Islam
- Centre for Advanced Electronic and Communication Engineering, Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Malaysia
- Md Atiqur Rahman
- Centre for Advanced Electronic and Communication Engineering, Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Malaysia
- Tawsifur Rahman
- Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
- Haslina Arshad
- Institute of IR4.0, Universiti Kebangsaan Malaysia (UKM), Bangi 43600, Malaysia
- Amit Khandakar
- Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
- Mohamed Arslane Ayari
- Department of Civil and Architectural Engineering, Qatar University, Doha 2713, Qatar
8
Srivastava J, Prakash J, Srivastava A. Hybrid deep learning algorithm for brain tumour detection. The Imaging Science Journal 2023. [DOI: 10.1080/13682199.2023.2167624]
Affiliation(s)
- Jyoti Srivastava
- Department of ITCA, Madan Mohan Malaviya University of Technology, Gorakhpur, UP, India
- Jay Prakash
- Department of ITCA, Madan Mohan Malaviya University of Technology, Gorakhpur, UP, India
- Ashish Srivastava
- Department of Computer Engineering & Application, GLA University, Mathura, UP, India
9
Computer-Aided Early Melanoma Brain-Tumor Detection Using Deep-Learning Approach. Biomedicines 2023; 11:184. [PMID: 36672693; PMCID: PMC9856126; DOI: 10.3390/biomedicines11010184] [Received: 10/26/2022; Revised: 01/06/2023; Accepted: 01/09/2023]
Abstract
Brain tumors affect the normal functioning of the brain, and if not treated in time the cancerous cells may affect the surrounding tissues, blood vessels, and nerves. Today, a large population worldwide is affected by brain tumors. Tumors can damage healthy brain tissue and have become a significant cause of death, so their early detection is necessary to prevent loss of life. The manual detection of brain tumors is a challenging task due to discrepancies in appearance in terms of shape, size, nucleus, and so on; as a result, an automatic system is required for early detection. In this paper, the detection of tumors in brain cells is carried out using a deep convolutional neural network with the stochastic gradient descent (SGD) optimization algorithm. Multi-classification of brain tumors is performed using the ResNet-50 model and evaluated on the public Kaggle brain-tumor dataset. The method achieved 99.82% training and 99.5% testing accuracy. The experimental results indicate that the proposed model outperforms baseline methods and provides a compelling reason to apply it to other diseases.
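The SGD optimizer named here is the standard update rule; a minimal numpy sketch of SGD with momentum on a toy quadratic loss (illustrative only — not the paper's ResNet-50 training loop):

```python
import numpy as np

def sgd_step(w, grad, velocity, lr=0.1, momentum=0.9):
    """One SGD-with-momentum update: v <- m*v - lr*grad, then w <- w + v."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Minimize f(w) = ||w||^2 (gradient 2w) as a stand-in for a network loss.
w = np.array([4.0, -2.0])
v = np.zeros_like(w)
for _ in range(200):
    w, v = sgd_step(w, 2.0 * w, v)
```

In real training the gradient comes from backpropagation over a mini-batch; the momentum term dampens oscillation across steep loss directions while accelerating progress along shallow ones.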
10
Mgbejime GT, Hossin MA, Nneji GU, Monday HN, Ekong F. Parallelistic Convolution Neural Network Approach for Brain Tumor Diagnosis. Diagnostics (Basel) 2022; 12:2484. [PMID: 36292173; PMCID: PMC9600759; DOI: 10.3390/diagnostics12102484] [Received: 09/14/2022; Revised: 09/27/2022; Accepted: 10/03/2022]
Abstract
Magnetic Resonance Imaging (MRI) is a prominent technique in medicine that produces a significant and varied range of tissue contrasts in each imaging modality and is frequently employed by medical professionals to identify brain malignancies. With brain tumors being a very deadly disease, early detection increases the likelihood that the patient will receive appropriate medical care, leading either to full elimination of the tumor or to prolongation of the patient's life. However, manually examining the enormous volume of MRI images to identify a brain tumor is extremely time-consuming and requires a trained medical expert to detect and diagnose brain cancer from multiple MR images with various modalities. This underlying issue creates a growing need to automate the detection and diagnosis process without human intervention. Another major concern, which most research articles do not consider, is the low quality of MRI images, attributable to noise and artifacts. This article presents a Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm to handle the problem of low-quality MRI images by eliminating noisy elements and enhancing the visible trainable features of the image. The enhanced image is then fed to the proposed parallelistic convolutional neural network (PCNN) to learn the features and classify the tumor using a sigmoid classifier. To properly train the model, a publicly available dataset is collected and utilized, and different optimizers and different values of dropout and learning rate are explored in the course of the study. The proposed PCNN with CLAHE achieved an accuracy of 98.7%, a sensitivity of 99.7%, and a specificity of 97.4%. In comparison with other state-of-the-art brain tumor methods and pre-trained deep transfer learning models, the proposed PCNN model obtained satisfactory performance.
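CLAHE's defining ingredient is the clipped histogram; a numpy sketch of clip-limited histogram equalization (global, deliberately omitting CLAHE's per-tile processing and bilinear blending; `clip_limited_equalize` is an illustrative name, not a library function):

```python
import numpy as np

def clip_limited_equalize(img, clip_limit=0.02, bins=256):
    """Core of CLAHE: clip the intensity histogram at `clip_limit`,
    redistribute the clipped excess uniformly, then map pixels through
    the resulting CDF so contrast amplification stays bounded."""
    hist, _ = np.histogram(img.ravel(), bins=bins, range=(0, 255))
    hist = hist / hist.sum()
    excess = np.clip(hist - clip_limit, 0, None).sum()
    hist = np.minimum(hist, clip_limit) + excess / bins   # redistribute excess
    cdf = np.cumsum(hist)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())     # normalize to [0, 1]
    idx = np.clip((img.astype(int) * bins) // 256, 0, bins - 1)
    return (cdf[idx] * 255).astype(np.uint8)
```

Full CLAHE (for example scikit-image's `equalize_adapthist`) applies this per tile and blends the tile mappings, which is what prevents noise in flat regions from being over-amplified.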
Affiliation(s)
- Goodness Temofe Mgbejime
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Md Altab Hossin
- School of Innovation and Entrepreneurship, Chengdu University, Chengdu 610106, China
- Grace Ugochi Nneji
- Department of Computing, Oxford Brookes College of Chengdu University of Technology, Chengdu 610059, China
- Deep Learning and Intelligent Computing Lab, HACE SOFTTECH, Lagos 102241, Nigeria
- Correspondence: (G.U.N.); (H.N.M.)
- Happy Nkanta Monday
- Department of Computing, Oxford Brookes College of Chengdu University of Technology, Chengdu 610059, China
- Deep Learning and Intelligent Computing Lab, HACE SOFTTECH, Lagos 102241, Nigeria
- Correspondence: (G.U.N.); (H.N.M.)
- Favour Ekong
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
11
Ramzan M, Raza M, Sharif MI, Kadry S. Gastrointestinal Tract Polyp Anomaly Segmentation on Colonoscopy Images Using Graft-U-Net. J Pers Med 2022; 12:1459. [PMID: 36143244; PMCID: PMC9503374; DOI: 10.3390/jpm12091459] [Received: 08/04/2022; Revised: 08/28/2022; Accepted: 09/01/2022]
Abstract
Computer-aided polyp segmentation is a crucial task that supports gastroenterologists in examining and resecting anomalous tissue in the gastrointestinal tract. Polyps grow mainly in the colorectal area of the gastrointestinal tract, as protrusions of abnormal tissue on the mucous membrane that increase the risk of incurable diseases such as cancer; early examination can therefore catch polyps, such as adenomas, before they progress to cancer. Deep learning-based diagnostic systems play a vital role in diagnosing diseases at early stages. A deep learning method, Graft-U-Net, is proposed to segment polyps in colonoscopy frames. Graft-U-Net is a modified version of UNet comprising three stages: preprocessing, encoder, and decoder. The preprocessing technique improves the contrast of the colonoscopy frames, while the encoder analyzes features and the decoder synthesizes them. The experiments were conducted using two open-access datasets, Kvasir-SEG and CVC-ClinicDB, prepared from colonoscopy procedures on the large bowel. The proposed model achieves a mean Dice of 96.61% and a mean Intersection over Union (mIoU) of 82.45% on the Kvasir-SEG dataset, outperforming existing deep learning models; on the CVC-ClinicDB dataset, it achieves a mean Dice of 89.95% and an mIoU of 81.38%.
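The reported mean Dice and mIoU are standard overlap metrics between predicted and ground-truth segmentation masks; in numpy:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient for binary masks: 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(pred, target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    """Intersection over Union (Jaccard index): |A ∩ B| / |A ∪ B|."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)
```

The two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), so Dice always reads higher than IoU on the same masks, which is why the paper's Dice scores exceed its mIoU scores.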
Affiliation(s)
- Muhammad Ramzan
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Islamabad 47040, Pakistan
- Mudassar Raza
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Islamabad 47040, Pakistan
- Correspondence:
- Muhammad Imran Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Islamabad 47040, Pakistan
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos 999095, Lebanon
12
Kalpana R, Bennet MA, Rahmani AW. Metaheuristic Optimization-Driven Novel Deep Learning Approach for Brain Tumor Segmentation. Biomed Research International 2022; 2022:2980691. [PMID: 36033583; PMCID: PMC9410780; DOI: 10.1155/2022/2980691] [Received: 06/16/2022; Revised: 07/21/2022; Accepted: 07/27/2022]
Abstract
Brain tumors are a foremost cause of high mortality, and categorizing a neoplasm is very effective for distinguishing it and determining its exact location in the brain. Magnetic resonance imaging (MRI) is an efficient noninvasive technique for the anatomical examination of brain tumors; tumor tissues have a distinguishable appearance in MRI images, so MRI is widely used for brain tumor feature extraction. Existing algorithms for brain tumors have limitations such as inconsistent quality, low sensitivity, and difficulty diagnosing the tumor at its early stages. In this research, an innovative optimization method known as the procedure for lightning attachment algorithm (PLA) is used, together with a CNN model known as DenseNet-169 for classification. First, the brain MR images are preprocessed to remove any outliers. Next, the DenseNet-169 CNN model is used to extract features from the MR images and also acts as a classifier, identifying each scan as either an abnormal (tumor) or a normal case, with PLA used to optimize the model. The presented algorithm is validated on widely utilized, publicly benchmarked datasets. The planned system demonstrates satisfactory accuracy on the dataset and outperforms many notable current techniques.
Affiliation(s)
- R. Kalpana
- Department of Electronics and Communication Engineering, VelTech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Avadi, Chennai, Tamil Nadu 600062, India
- M. Anto Bennet
- Department of Electronics and Communication Engineering, VelTech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Avadi, Chennai, Tamil Nadu 600062, India
13
BrainNet: Optimal Deep Learning Feature Fusion for Brain Tumor Classification. Computational Intelligence and Neuroscience 2022; 2022:1465173. [PMID: 35965745; PMCID: PMC9371837; DOI: 10.1155/2022/1465173] [Received: 05/16/2022; Accepted: 07/05/2022]
Abstract
Early detection of brain tumors can save precious human lives. This work presents a fully automated design to classify brain tumors. The proposed scheme employs optimal deep learning features for the classification of FLAIR, T1, T2, and T1CE tumors. Initially, we normalize the dataset and pass it to the pretrained ResNet101 model to perform transfer learning, fine-tuning ResNet101 for brain tumor classification. The problem with this approach is the generation of redundant features, which degrade accuracy and cause computational overhead. To tackle this problem, we find optimal features using differential evolution and particle swarm optimization algorithms. The obtained optimal feature vectors are then serially fused into a single fused feature vector, and PCA is applied to this fused vector to get the final optimized feature vector, which is fed as input to various classifiers. Performance is analyzed at various stages. The results show that the proposed technique achieved a speedup of 25.5x in prediction time on the medium neural network with an accuracy of 94.4%, a significant improvement over state-of-the-art techniques in terms of computational overhead while maintaining approximately the same accuracy.
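The serial fusion plus PCA compression step described above is plain concatenation followed by projection; sketched with scikit-learn on random stand-in feature sets (all shapes here are illustrative, not the paper's dimensions):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
feat_a = rng.normal(size=(50, 32))   # stand-in for one optimized feature set
feat_b = rng.normal(size=(50, 48))   # stand-in for a second optimized set

fused = np.hstack([feat_a, feat_b])                 # serial (concatenation) fusion
final = PCA(n_components=16).fit_transform(fused)   # compress the fused vector
```

Concatenation preserves everything from both sets, so the PCA afterwards is what removes the redundancy the abstract complains about, at the cost of discarding low-variance directions.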
14
Bag of Tricks for Improving Deep Learning Performance on Multimodal Image Classification. Bioengineering (Basel) 2022; 9:312. [PMID: 35877363; PMCID: PMC9311779; DOI: 10.3390/bioengineering9070312] [Received: 06/01/2022; Revised: 07/10/2022; Accepted: 07/11/2022]
Abstract
A comprehensive medical image-based diagnosis is usually performed across various image modalities before a final decision is reached; hence, designing a deep learning model that can use any medical image modality to diagnose a particular disease is of great interest. The available methods are multi-staged, with many computational bottlenecks in between. This paper presents an improved end-to-end method of multimodal image classification using deep learning models. We present top research methods developed over the years to improve models trained from scratch and transfer learning approaches. We show that, when fully trained, a model can first implicitly discriminate the imaging modality and then diagnose the relevant disease. Our models were applied to COVID-19 classification from chest X-ray, CT scan, and lung ultrasound image modalities. The model that achieved the highest accuracy correctly maps all input images to their respective modality, then classifies the disease, achieving 91.07% overall accuracy.
15
Sharif MI, Li JP, Khan MA, Kadry S, Tariq U. M3BTCNet: multi model brain tumor classification using metaheuristic deep neural network features optimization. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07204-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 12/01/2022]
16
Latif G, Ben Brahim G, Iskandar DNFA, Bashar A, Alghazo J. Glioma Tumors' Classification Using Deep-Neural-Network-Based Features with SVM Classifier. Diagnostics (Basel) 2022; 12:diagnostics12041018. [PMID: 35454066 PMCID: PMC9032951 DOI: 10.3390/diagnostics12041018] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Received: 03/12/2022] [Accepted: 04/08/2022] [Indexed: 11/16/2022] Open
Abstract
The complexity of brain tissue requires skillful technicians and expert medical doctors to manually analyze and diagnose Glioma brain tumors using multiple Magnetic Resonance (MR) images with multiple modalities. Unfortunately, manual diagnosis suffers from being a lengthy process, as well as elevated cost. With this type of cancerous disease, early detection increases the chances of suitable medical procedures leading to either a full recovery or the prolongation of the patient's life. This has increased the efforts to automate the detection and diagnosis process without human intervention, allowing the detection of multiple types of tumors from MR images. This research paper proposes a multi-class Glioma tumor classification technique using deep-learning-based features with a Support Vector Machine (SVM) classifier. A deep convolutional neural network is used to extract features of the MR images, which are then fed to the SVM classifier. With the proposed technique, 96.19% accuracy was achieved for the HGG Glioma type when considering the FLAIR modality, and 95.46% for the LGG Glioma tumor type when considering the T2 modality, for the classification of four Glioma classes (Edema, Necrosis, Enhancing, and Non-enhancing). The accuracies achieved using the proposed method were higher than those reported by similar methods in the extant literature using the same BraTS dataset. In addition, the accuracy results obtained in this work are better than those achieved by the GoogleNet and LeNet pre-trained models on the same dataset.
Affiliation(s)
- Ghazanfar Latif
- Faculty of Computer Science and Information Technology, Université du Québec à Chicoutimi, 555 Boulevard de l’Université, Chicoutimi, QC G7H2B1, Canada
- Department of Computer Science, Prince Mohammad bin Fahd University, Khobar 31952, Saudi Arabia
- Ghassen Ben Brahim
- Department of Computer Science, Prince Mohammad bin Fahd University, Khobar 31952, Saudi Arabia
- Correspondence:
- D. N. F. Awang Iskandar
- Faculty of Computer Science and Information Technology, Universiti Malaysia Sarawak, Kota Samarahan 94300, Malaysia
- Abul Bashar
- Department of Computer Engineering, Prince Mohammad bin Fahd University, Khobar 31952, Saudi Arabia
- Jaafar Alghazo
- Department of Electrical and Computer Engineering, Virginia Military Institute, Lexington, VA 24450, USA
17
Sandhya S, Senthil Kumar M. Automated Multimodal Fusion Based Hyperparameter Tuned Deep Learning Model for Brain Tumor Diagnosis. JOURNAL OF MEDICAL IMAGING AND HEALTH INFORMATICS 2022. [DOI: 10.1166/jmihi.2022.3942] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 12/12/2022]
Abstract
As medical image processing research has progressed, image fusion has emerged as a realistic solution, automatically extracting relevant data from many images before fusing them into a single, unified image. Medical imaging techniques, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), etc., play a crucial role in the diagnosis and classification of brain tumors (BT). A single imaging technique is not sufficient for a correct diagnosis of the disease. If the scans are ambiguous, they can lead doctors to incorrect diagnoses, which can be unsafe for the patient. The solution to this problem is fusing images from different scans containing complementary information to generate accurate images with minimum uncertainty. This research presents a novel method for the automated identification and classification of brain tumors using multi-modal deep learning (AMDL-BTDC). The proposed AMDL-BTDC model initially performs image pre-processing using the bilateral filtering (BF) technique. Next, feature vectors are generated using a pair of pre-trained deep learning models called EfficientNet and SqueezeNet. The Slime Mold Algorithm (SMA) is used to acquire the DL models' optimal hyperparameter settings. Finally, an autoencoder (AE) model is used for BT classification once the features have been fused. The suggested model's superior performance over other techniques under diverse measures was validated by extensive testing on a benchmark medical imaging dataset.
Affiliation(s)
- S. Sandhya
- Research Scholar, Department of Information Technology, Anna University Chennai, SRM Valliammai Engineering College, Chennai 603203, India
- M. Senthil Kumar
- Department of Computer Science and Engineering, SRM Valliammai Engineering College, Chennai 603203, India
18
Aqeel A, Hassan A, Khan MA, Rehman S, Tariq U, Kadry S, Majumdar A, Thinnukool O. A Long Short-Term Memory Biomarker-Based Prediction Framework for Alzheimer's Disease. SENSORS (BASEL, SWITZERLAND) 2022; 22:s22041475. [PMID: 35214375 PMCID: PMC8874990 DOI: 10.3390/s22041475] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Received: 12/23/2021] [Revised: 01/31/2022] [Accepted: 02/13/2022] [Indexed: 05/08/2023]
Abstract
The early prediction of Alzheimer's disease (AD) can be vital for patient survival and serves as a helpful, facilitative factor for specialists. The proposed work presents an automated predictive framework, based on machine learning (ML) methods, for the forecast of AD. Neuropsychological measures (NM) and magnetic resonance imaging (MRI) biomarkers are derived and passed on to a recurrent neural network (RNN). In the RNN, we have used long short-term memory (LSTM), and the proposed model predicts the biomarkers (feature vectors) of patients after 6, 12, 18, 24, and 36 months. These predicted biomarkers then pass through fully connected neural network layers, which predict whether the RNN-predicted biomarkers belong to an AD patient or a patient with mild cognitive impairment (MCI). The developed methodology has been evaluated on the openly available ADNI dataset and achieved an accuracy of 88.24%, which is superior to the next-best available algorithms.
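The forecast-then-classify idea above (predict a patient's future biomarker vector, then decide AD vs. MCI from the predicted vector) can be sketched as below. For a self-contained illustration the LSTM is replaced by a plain ridge regressor and the data are synthetic; all names, dimensions, and labels are assumptions, not the paper's pipeline:

```python
import numpy as np
from sklearn.linear_model import Ridge, LogisticRegression

rng = np.random.default_rng(1)

# Synthetic stand-ins: biomarker vectors at baseline and at a later visit,
# plus an AD (1) vs. MCI (0) label for each patient.
n_patients, n_biomarkers = 300, 8
baseline = rng.standard_normal((n_patients, n_biomarkers))
drift = 0.3 * baseline + 0.1 * rng.standard_normal((n_patients, n_biomarkers))
followup = baseline + drift                      # "future" biomarkers
labels = (followup.sum(axis=1) > 0).astype(int)  # toy AD/MCI ground truth

# Stage 1: forecast the follow-up biomarkers from baseline.
# (The paper uses an LSTM over visit sequences; Ridge is a stand-in.)
forecaster = Ridge().fit(baseline, followup)
predicted_followup = forecaster.predict(baseline)

# Stage 2: classify AD vs. MCI from the *predicted* biomarkers,
# mirroring the fully connected layers on top of the RNN output.
clf = LogisticRegression().fit(predicted_followup, labels)
accuracy = clf.score(predicted_followup, labels)
print(f"toy in-sample accuracy: {accuracy:.2f}")
```

The point of the two stages is that the classifier never sees measured future visits, only forecasts, which is what lets the framework flag patients months ahead of time.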
Affiliation(s)
- Anza Aqeel
- Department of Computer & Software Engineering, CEME, NUST, Islamabad 44800, Pakistan
- Ali Hassan
- Department of Computer & Software Engineering, CEME, NUST, Islamabad 44800, Pakistan
- Muhammad Attique Khan
- Department of Computer Engineering, HITEC University, Taxila 47080, Pakistan
- Saad Rehman
- Department of Computer Engineering, HITEC University, Taxila 47080, Pakistan
- Usman Tariq
- College of Computer Engineering and Science, Prince Sattam Bin Abdulaziz University, Al-Kharaj 16242, Saudi Arabia
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4608 Kristiansand, Norway
- Arnab Majumdar
- Department of Civil Engineering, Imperial College London, London SW7 2AZ, UK
- Orawit Thinnukool
- College of Arts, Media and Technology, Chiang Mai University, Chiang Mai 50200, Thailand
- Correspondence:
19
A Comprehensive Analysis of Recent Deep and Federated-Learning-Based Methodologies for Brain Tumor Diagnosis. J Pers Med 2022; 12:jpm12020275. [PMID: 35207763 PMCID: PMC8880689 DOI: 10.3390/jpm12020275] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Received: 01/19/2022] [Revised: 02/05/2022] [Accepted: 02/09/2022] [Indexed: 12/12/2022] Open
Abstract
Brain tumors are a deadly disease with a high mortality rate. Early diagnosis of brain tumors improves treatment, which results in a better survival rate for patients. Artificial intelligence (AI) has recently emerged as an assistive technology for the early diagnosis of tumors, and AI is the primary focus of researchers in the diagnosis of brain tumors. This study provides an overview of recent research on the diagnosis of brain tumors using federated and deep learning methods. The primary objective is to explore the performance of deep and federated learning methods and evaluate their accuracy in the diagnosis process. A systematic literature review is provided, discussing the open issues and challenges, which are likely to guide future researchers working in the field of brain tumor diagnosis.
20
Jabeen K, Khan MA, Alhaisoni M, Tariq U, Zhang YD, Hamza A, Mickus A, Damaševičius R. Breast Cancer Classification from Ultrasound Images Using Probability-Based Optimal Deep Learning Feature Fusion. SENSORS 2022; 22:s22030807. [PMID: 35161552 PMCID: PMC8840464 DOI: 10.3390/s22030807] [Citation(s) in RCA: 48] [Impact Index Per Article: 24.0] [Received: 11/25/2021] [Revised: 01/12/2022] [Accepted: 01/17/2022] [Indexed: 12/11/2022]
Abstract
After lung cancer, breast cancer is the second leading cause of death in women. If breast cancer is detected early, mortality rates in women can be reduced. Because manual breast cancer diagnosis takes a long time, an automated system is required for early cancer detection. This paper proposes a new framework for breast cancer classification from ultrasound images that employs deep learning and the fusion of the best selected features. The proposed framework is divided into five major steps: (i) data augmentation is performed to increase the size of the original dataset for better learning of Convolutional Neural Network (CNN) models; (ii) a pre-trained DarkNet-53 model is considered and the output layer is modified based on the augmented dataset classes; (iii) the modified model is trained using transfer learning and features are extracted from the global average pooling layer; (iv) the best features are selected using two improved optimization algorithms known as reformed differential evolution (RDE) and reformed gray wolf (RGW); and (v) the best selected features are fused using a new probability-based serial approach and classified using machine learning algorithms. The experiment was conducted on an augmented Breast Ultrasound Images (BUSI) dataset, and the best accuracy was 99.1%. When compared with recent techniques, the proposed framework outperforms them.
Affiliation(s)
- Kiran Jabeen
- Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan
- Muhammad Attique Khan
- Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan
- Majed Alhaisoni
- College of Computer Science and Engineering, University of Ha’il, Ha’il 55211, Saudi Arabia
- Usman Tariq
- College of Computer Engineering and Science, Prince Sattam Bin Abdulaziz University, Al-Kharaj 11942, Saudi Arabia
- Yu-Dong Zhang
- Department of Informatics, University of Leicester, Leicester LE1 7RH, UK
- Ameer Hamza
- Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan
- Artūras Mickus
- Department of Applied Informatics, Vytautas Magnus University, LT-44404 Kaunas, Lithuania
- Robertas Damaševičius
- Department of Applied Informatics, Vytautas Magnus University, LT-44404 Kaunas, Lithuania
- Correspondence:
21
Khan MA, Rajinikanth V, Satapathy SC, Taniar D, Mohanty JR, Tariq U, Damaševičius R. VGG19 Network Assisted Joint Segmentation and Classification of Lung Nodules in CT Images. Diagnostics (Basel) 2021; 11:2208. [PMID: 34943443 PMCID: PMC8699868 DOI: 10.3390/diagnostics11122208] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Received: 10/28/2021] [Revised: 11/17/2021] [Accepted: 11/24/2021] [Indexed: 12/27/2022] Open
Abstract
A pulmonary nodule is a lung disease, and its early diagnosis and treatment are essential to cure the patient. This paper introduces a deep learning framework to support the automated detection of lung nodules in computed tomography (CT) images. The proposed framework employs VGG-SegNet-supported nodule mining and pre-trained DL-based classification to support automated lung nodule detection. The classification of lung CT images is implemented using the attained deep features, which are then serially concatenated with handcrafted features, such as the Grey Level Co-Occurrence Matrix (GLCM), Local Binary Pattern (LBP), and Pyramid Histogram of Oriented Gradients (PHOG), to enhance the disease detection accuracy. The images used for experiments are collected from the LIDC-IDRI and Lung-PET-CT-Dx datasets. The experimental results show that the VGG19 architecture with concatenated deep and handcrafted features can achieve an accuracy of 97.83% with the SVM-RBF classifier.
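The deep-plus-handcrafted concatenation feeding an SVM-RBF classifier can be sketched as below. The "deep" and "handcrafted" vectors are random placeholders (the paper uses VGG19 features with GLCM, LBP, and PHOG), so the accuracy printed by this sketch carries no meaning; only the fusion-and-classify mechanics are illustrated:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Stand-ins: "deep" CNN features and handcrafted texture descriptors
# (GLCM / LBP / PHOG in the paper; random vectors here).
n = 240
deep = rng.standard_normal((n, 128))
handcrafted = rng.standard_normal((n, 32))
y = rng.integers(0, 2, size=n)  # toy nodule / non-nodule labels

# Serial concatenation of deep and handcrafted descriptors per image.
X = np.concatenate([deep, handcrafted], axis=1)  # shape (240, 160)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

# RBF-kernel SVM on the concatenated vector, as in the paper's final stage.
svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(Xtr, ytr)
test_acc = svm.score(Xte, yte)
print("feature dim:", X.shape[1], "test accuracy:", test_acc)
```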
Affiliation(s)
- Venkatesan Rajinikanth
- Department of Electronics and Instrumentation Engineering, St. Joseph’s College of Engineering, Chennai, Tamilnadu 600119, India
- Suresh Chandra Satapathy
- School of Computer Engineering, Kalinga Institute of Industrial Technology (Deemed to Be University), Bhubaneswar, Odisha 751024, India
- David Taniar
- Faculty of Information Technology, Monash University, Clayton, VIC 3800, Australia
- Jnyana Ranjan Mohanty
- School of Computer Applications, Kalinga Institute of Industrial Technology (Deemed to Be University), Bhubaneswar, Odisha 751024, India
- Usman Tariq
- College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- Robertas Damaševičius
- Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
22
Abstract
Brain tumors occur owing to uncontrolled and rapid growth of cells. If not treated at an initial phase, they may lead to death. Despite many significant efforts and promising outcomes in this domain, accurate segmentation and classification remain a challenging task. A major challenge for brain tumor detection arises from the variations in tumor location, shape, and size. The objective of this survey is to deliver comprehensive literature on brain tumor detection through magnetic resonance imaging to help researchers. This survey covers the anatomy of brain tumors, publicly available datasets, enhancement techniques, segmentation, feature extraction, classification, and deep learning, transfer learning, and quantum machine learning for brain tumor analysis. Finally, this survey provides all the important literature on the detection of brain tumors with their advantages, limitations, developments, and future trends.
23
Categorizing white blood cells by utilizing deep features of proposed 4B-AdditionNet-based CNN network with ant colony optimization. COMPLEX INTELL SYST 2021. [DOI: 10.1007/s40747-021-00564-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/20/2022]
Abstract
White blood cells, WBCs for short, are an essential component of the human immune system. These cells are our body's first line of defense against infections and diseases caused by bacteria, viruses, and fungi, as well as abnormal and external substances that may enter the bloodstream. A wrong WBC count can signify dangerous viral infections, autoimmune disorders, cancer, sarcoidosis, aplastic anemia, leukemia, tuberculosis, etc. A lot of these diseases and disorders can be extremely painful and often result in death. Leukemia is among the more common types of blood cancer and when left undetected leads to death. An early diagnosis is necessary which is possible by looking at the shapes and determining the numbers of young and immature WBCs to see if they are normal or not. Performing this task manually is a cumbersome, expensive, and time-consuming process for hematologists, and therefore computer-aided systems have been developed to help with this problem. This paper proposes an improved method of classification of WBCs utilizing a combination of preprocessing, convolutional neural networks (CNNs), feature selection algorithms, and classifiers. In preprocessing, contrast-limited adaptive histogram equalization (CLAHE) is applied to the input images. A CNN is designed and trained to be used for feature extraction along with ResNet50 and EfficientNetB0 networks. Ant colony optimization is used to select the best features which are then serially fused and passed onto classifiers such as support vector machine (SVM) and quadratic discriminant analysis (QDA) for classification. The classification accuracy achieved on the Blood Cell Images dataset is 98.44%, which shows the robustness of the proposed work.
24
Nawaz M, Nazir T, Masood M, Mehmood A, Mahum R, Khan MA, Kadry S, Thinnukool O. Analysis of Brain MRI Images Using Improved CornerNet Approach. Diagnostics (Basel) 2021; 11:diagnostics11101856. [PMID: 34679554 PMCID: PMC8535141 DOI: 10.3390/diagnostics11101856] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Received: 08/10/2021] [Revised: 09/24/2021] [Accepted: 09/27/2021] [Indexed: 01/18/2023] Open
Abstract
A brain tumor is a deadly disease caused by the abnormal growth of brain cells, which affects the human blood cells and nerves. Timely and precise detection of brain tumors is an important task to avoid complex and painful treatment procedures, as it can assist doctors in surgical planning. Manual brain tumor detection is a time-consuming activity and highly dependent on the availability of area experts. Therefore, there is a pressing need to design accurate automated systems for the detection and classification of various types of brain tumors. However, the exact localization and categorization of brain tumors is a challenging job due to extensive variations in their size, position, and structure. To deal with these challenges, we have presented a novel approach, namely, a DenseNet-41-based CornerNet framework. The proposed solution comprises three steps. Initially, we develop annotations to locate the exact region of interest. In the second step, a custom CornerNet with DenseNet-41 as a base network is introduced to extract the deep features from the suspected samples. In the last step, the one-stage detector CornerNet is employed to locate and classify several brain tumors. To evaluate the proposed method, we have utilized two databases, namely, the Figshare and Brain MRI datasets, and attained an average accuracy of 98.8% and 98.5%, respectively. Both qualitative and quantitative analysis show that our approach is more proficient and consistent with detecting and classifying various types of brain tumors than other latest techniques.
Affiliation(s)
- Marriam Nawaz
- Department of Computer Science, University of Engineering and Technology, Taxila 47050, Pakistan
- Tahira Nazir
- Department of Computer Science, University of Engineering and Technology, Taxila 47050, Pakistan
- Momina Masood
- Department of Computer Science, University of Engineering and Technology, Taxila 47050, Pakistan
- Awais Mehmood
- Department of Computer Science, University of Engineering and Technology, Taxila 47050, Pakistan
- Rabbia Mahum
- Department of Computer Science, University of Engineering and Technology, Taxila 47050, Pakistan
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway
- Orawit Thinnukool
- Research Group of Embedded Systems and Mobile Application in Health Science, College of Arts, Media and Technology, Chiang Mai University, Chiang Mai 50200, Thailand
- Correspondence:
25
Biratu ES, Schwenker F, Ayano YM, Debelee TG. A Survey of Brain Tumor Segmentation and Classification Algorithms. J Imaging 2021; 7:jimaging7090179. [PMID: 34564105 PMCID: PMC8465364 DOI: 10.3390/jimaging7090179] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Received: 06/29/2021] [Revised: 08/25/2021] [Accepted: 08/28/2021] [Indexed: 01/16/2023] Open
Abstract
A brain Magnetic Resonance Imaging (MRI) scan of a single individual consists of several slices across the 3D anatomical view. Therefore, manual segmentation of brain tumors from magnetic resonance (MR) images is a challenging and time-consuming task. In addition, automated brain tumor classification from an MRI scan is non-invasive, so it avoids biopsy and makes the diagnosis process safer. Since the late nineties and the beginning of this millennium, the effort of the research community to come up with automatic brain tumor segmentation and classification methods has been tremendous. As a result, there is ample literature in the area focusing on segmentation using region growing, traditional machine learning, and deep learning methods. Similarly, a number of studies have classified brain tumors into their respective histological types, and impressive performance results have been obtained. Considering state-of-the-art methods and their performance, the purpose of this paper is to provide a comprehensive survey of three recently proposed major brain tumor segmentation and classification techniques, namely region growing, shallow machine learning, and deep learning. The established works included in this survey also cover technical aspects such as the strengths and weaknesses of different approaches, pre- and post-processing techniques, feature extraction, datasets, and models' performance evaluation metrics.
Affiliation(s)
- Erena Siyoum Biratu
- College of Electrical and Mechanical Engineering, Addis Ababa Science and Technology University, Addis Ababa 120611, Ethiopia
- Friedhelm Schwenker
- Institute of Neural Information Processing, Ulm University, 89081 Ulm, Germany
- Correspondence:
- Taye Girma Debelee
- College of Electrical and Mechanical Engineering, Addis Ababa Science and Technology University, Addis Ababa 120611, Ethiopia
- Ethiopian Artificial Intelligence Center, Addis Ababa 40782, Ethiopia
26
Miao F, Yao L, Zhao X. Evolving convolutional neural networks by symbiotic organisms search algorithm for image classification. Appl Soft Comput 2021. [DOI: 10.1016/j.asoc.2021.107537] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Indexed: 11/28/2022]
27
Wang K, Niu X, Dou Y, Xie D, Yang T. A siamese network with adaptive gated feature fusion for individual knee OA features grades prediction. Sci Rep 2021; 11:16833. [PMID: 34413365 PMCID: PMC8376929 DOI: 10.1038/s41598-021-96240-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 02/19/2021] [Accepted: 08/06/2021] [Indexed: 12/19/2022] Open
Abstract
Grading individual knee osteoarthritis (OA) features is a fine-grained knee OA severity assessment. Existing methods ignore the following problems: (1) more accurately located knee joints benefit subsequent grade prediction; (2) they do not consider the knee joints' symmetry and semantic information, which help to improve grade prediction performance. To this end, we propose a SE-ResNext50-32x4d-based Siamese network with an adaptive gated feature fusion method to simultaneously assess eight tasks. In our method, two cascaded small convolutional neural networks are designed to locate knee joints more accurately. Detected knee joints are further cropped and split into left and right patches via their symmetry, which are fed into the SE-ResNext50-32x4d-based Siamese network with shared weights, extracting more detailed knee features. The adaptive gated feature fusion method is used here to capture richer semantic information for better feature representation. Meanwhile, a knee OA/non-knee OA classification task is added, helping extract richer features. We specially introduce a new evaluation metric (top±1 accuracy) aiming to measure model performance with ambiguous data labels. Our model is evaluated on two public datasets, the OAI and MOST datasets, achieving state-of-the-art results compared to competing approaches. It has the potential to be a tool to assist clinical decision making.
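A minimal numpy sketch of the two ideas named above, shared-weight (Siamese) embedding of the left/right patches and adaptive gated fusion, is given below. The weights and dimensions are random stand-ins for illustration only; nothing here reflects the paper's actual SE-ResNext50-32x4d implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One shared weight matrix embeds both patches: the defining property
# of a Siamese branch (illustrative dimensions, not a real backbone).
W_shared = rng.standard_normal((16, 64)) * 0.1

def embed(patch):
    # Shared-weight feature extractor applied to either patch.
    return np.tanh(patch @ W_shared)

left_patch = rng.standard_normal((1, 16))   # left half of the knee joint
right_patch = rng.standard_normal((1, 16))  # mirrored right half

f_left, f_right = embed(left_patch), embed(right_patch)

# Adaptive gated fusion: a gate computed from both embeddings decides,
# per feature, how much of each branch to keep (gate weights are random
# stand-ins for learned parameters).
W_gate = rng.standard_normal((128, 64)) * 0.1
gate = sigmoid(np.concatenate([f_left, f_right], axis=1) @ W_gate)
fused = gate * f_left + (1.0 - gate) * f_right

print(fused.shape)  # one fused descriptor per joint
```

Because the gate is a sigmoid, each fused feature is a convex blend of the two branches, which is what lets the network weight the more informative side of the joint per feature.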
Affiliation(s)
- Kang Wang
- National Laboratory for Parallel and Distributed Processing, School of Computer, National University of Defense Technology, Changsha 410073, China
- Xin Niu
- National Laboratory for Parallel and Distributed Processing, School of Computer, National University of Defense Technology, Changsha 410073, China
- Yong Dou
- National Laboratory for Parallel and Distributed Processing, School of Computer, National University of Defense Technology, Changsha 410073, China
- Dongxing Xie
- Department of Orthopaedics, Xiangya Hospital, Central South University, Changsha 410008, China
- Tuo Yang
- Department of Health Management Center, Xiangya Hospital, Central South University, Changsha 410008, China
28
Abstract
Machine learning (ML) has been recognized as a feasible and reliable technique for the modeling of multi-parametric datasets. In real applications, there are different relationships with various complexities between sets of inputs and their corresponding outputs. As a result, various models have been developed with different levels of complexity in the input–output relationships. The group method of data handling (GMDH) employs a family of inductive algorithms for computer-based mathematical modeling, grounded on a combination of quadratic and higher-order neurons in a certain number of variable layers. In this method, a vector of input features is mapped to the expected response by creating a multistage nonlinear pattern. Usually, each neuron of the GMDH is considered a quadratic partial function. In this paper, the basic structure of the GMDH technique is adapted by changing the partial functions to enhance its ability to model complexity. To accomplish this, popular ML models that have shown reasonable function approximation performance, such as support vector regression and random forest, are used, and the basic polynomial functions in the GMDH are replaced by these ML models. The regression feasibility and validity of the ML-based GMDH models are confirmed by computer simulation.
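One GMDH layer, with the quadratic partial functions swapped for an ML model as described above, can be sketched as follows. A candidate model is fit on every pair of input features and the best candidates (by validation error) survive to feed the next layer; the random forest, dataset, and survivor count are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np
from itertools import combinations
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(4)

# Synthetic regression task with 4 input features.
X = rng.standard_normal((300, 4))
y = X[:, 0] * X[:, 1] + np.sin(X[:, 2]) + 0.1 * rng.standard_normal(300)
X_tr, X_va, y_tr, y_va = X[:200], X[200:], y[:200], y[200:]

# One GMDH layer: fit a partial model on every pair of input features.
# Classic GMDH uses quadratic polynomials; here the partial function is
# a small random forest, following the ML-based variant described above.
candidates = []
for i, j in combinations(range(X.shape[1]), 2):
    model = RandomForestRegressor(n_estimators=30, random_state=0)
    model.fit(X_tr[:, [i, j]], y_tr)
    err = mean_squared_error(y_va, model.predict(X_va[:, [i, j]]))
    candidates.append((err, (i, j), model))

# Selection step: keep the best pairs; their predictions become the
# inputs of the next GMDH layer.
candidates.sort(key=lambda c: c[0])
survivors = candidates[:2]
next_layer_inputs = np.column_stack(
    [m.predict(X_va[:, list(pair)]) for _, pair, m in survivors]
)
print("best pairs:", [pair for _, pair, _ in survivors],
      "next-layer input shape:", next_layer_inputs.shape)
```

Stacking such layers, each one re-fitting pairwise partial models on the previous layer's surviving outputs, is what builds the multistage nonlinear mapping the abstract describes.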