1
Sharma V, Samant SS, Singh T, Fekete G. An Integrative Framework for Healthcare Recommendation Systems: Leveraging the Linear Discriminant Wolf-Convolutional Neural Network (LDW-CNN) Model. Diagnostics (Basel) 2024; 14:2511. [PMID: 39594176] [PMCID: PMC11592656] [DOI: 10.3390/diagnostics14222511]
Abstract
In the evolving healthcare landscape, recommender systems have gained significant importance due to their role in predicting and anticipating a wide range of health-related data for both patients and healthcare professionals. These systems are crucial for delivering precise information while adhering to high standards of quality, reliability, and authentication. Objectives: The primary objective of this research is to address the challenge of class imbalance in healthcare recommendation systems. This is achieved by improving the prediction and diagnostic capabilities of these systems through a novel approach that integrates linear discriminant wolf (LDW) with convolutional neural networks (CNNs), forming the LDW-CNN model. Methods: The LDW-CNN model incorporates the grey wolf optimizer with linear discriminant analysis to enhance prediction accuracy. The model's performance is evaluated using multi-disease datasets, covering heart, liver, and kidney diseases. Established error metrics are used to compare the effectiveness of the LDW-CNN model against conventional methods, such as CNNs and multi-level support vector machines (MSVMs). Results: The proposed LDW-CNN system demonstrates remarkable accuracy, achieving a rate of 98.1%, which surpasses existing deep learning approaches. In addition, the model improves specificity to 99.18% and sensitivity to 99.008%, outperforming traditional CNN and MSVM techniques in terms of predictive performance. Conclusions: The LDW-CNN model emerges as a robust solution for multidisciplinary disease prediction and recommendation, offering superior performance in healthcare recommender systems. Its high accuracy, alongside its improved specificity and sensitivity, positions it as a valuable tool for enhancing prediction and diagnosis across multiple disease domains.
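For readers unfamiliar with the error metrics quoted above, accuracy, sensitivity, and specificity all derive from confusion-matrix counts. A minimal sketch with illustrative counts (not the paper's data):

```python
# Sketch: the metrics reported in the abstract (accuracy, sensitivity,
# specificity) computed from raw confusion-matrix counts.
# The counts below are illustrative only.

def confusion_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Return accuracy, sensitivity (recall), and specificity as fractions."""
    total = tp + tn + fp + fn
    return {
        "accuracy": (tp + tn) / total,   # correct over all predictions
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
    }

m = confusion_metrics(tp=95, tn=90, fp=5, fn=10)
```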
Affiliation(s)
- Vedna Sharma
- Department of Computer Science, Graphic Era (Deemed to be University), Dehradun 248002, India
- Surender Singh Samant
- Department of Computer Science, Graphic Era (Deemed to be University), Dehradun 248002, India
- Tej Singh
- Savaria Institute of Technology, Faculty of Informatics, Eötvös Loránd University, H-1117 Budapest, Hungary
- Gusztáv Fekete
- Department of Material Science and Technology, AUDI Hungaria Faculty of Vehicle Engineering, Széchenyi István University, H-9026 Győr, Hungary
2
Bian C, Hu C, Cao N. Exploiting K-Space in Magnetic Resonance Imaging Diagnosis: Dual-Path Attention Fusion for K-Space Global and Image Local Features. Bioengineering (Basel) 2024; 11:958. [PMID: 39451334] [PMCID: PMC11504126] [DOI: 10.3390/bioengineering11100958]
Abstract
Magnetic resonance imaging (MRI) diagnosis, enhanced by deep learning methods, plays a crucial role in medical image processing, facilitating precise clinical diagnosis and optimal treatment planning. Current methodologies predominantly focus on feature extraction from the image domain, which often results in the loss of global features during down-sampling processes. However, the unique global representational capacity of MRI K-space is often overlooked. In this paper, we present a novel MRI K-space-based global feature extraction and dual-path attention fusion network. Our proposed method extracts global features from MRI K-space data and fuses them with local features from the image domain using a dual-path attention mechanism, thereby achieving accurate MRI segmentation for diagnosis. Specifically, our method consists of four main components: an image-domain feature extraction module, a K-space domain feature extraction module, a dual-path attention feature fusion module, and a decoder. We conducted ablation studies and comprehensive comparisons on the Brain Tumor Segmentation (BraTS) MRI dataset to validate the effectiveness of each module. The results demonstrate that our method exhibits superior performance in segmentation diagnostics, outperforming state-of-the-art methods with improvements of up to 63.82% in the HD95 distance evaluation metric. Furthermore, we performed generalization testing and complexity analysis on the Automated Cardiac Diagnosis Challenge (ACDC) MRI cardiac segmentation dataset. The findings indicate robust performance across different datasets, highlighting strong generalizability and favorable algorithmic complexity. Collectively, these results suggest that our proposed method holds significant potential for practical clinical applications.
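The k-space/image duality this paper exploits can be sketched in a few lines: k-space is the 2-D Fourier transform of the image, so every k-space sample aggregates information from the whole image (the "global" view), while image pixels are local. A minimal NumPy sketch with a random stand-in image (illustrative, not the paper's pipeline):

```python
# Image domain <-> k-space round trip via the 2-D FFT.
# The random "image" is a stand-in for an MRI slice.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))

kspace = np.fft.fftshift(np.fft.fft2(image))    # image -> k-space (DC centered)
recon = np.fft.ifft2(np.fft.ifftshift(kspace))  # k-space -> image

# The round trip is exact up to floating-point error.
err = np.max(np.abs(recon.real - image))
```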
Affiliation(s)
- Congchao Bian
- College of Information Science and Engineering, Hohai University, Nanjing 210098, China
- Can Hu
- College of Computer Science and Software Engineering, Hohai University, Nanjing 210098, China
- Ning Cao
- College of Information Science and Engineering, Hohai University, Nanjing 210098, China
3
Wan Q, Kim J, Lindsay C, Chen X, Li J, Iorgulescu JB, Huang RY, Zhang C, Reardon D, Young GS, Qin L. Auto-segmentation of Adult-Type Diffuse Gliomas: Comparison of Transfer Learning-Based Convolutional Neural Network Model vs. Radiologists. Journal of Imaging Informatics in Medicine 2024; 37:1401-1410. [PMID: 38383806] [PMCID: PMC11300742] [DOI: 10.1007/s10278-024-01044-7]
Abstract
Segmentation of glioma is crucial for quantitative brain tumor assessment, to guide therapeutic research and clinical management, but is very time-consuming, so fully automated tools for the segmentation of multi-sequence MRI are needed. We developed and pretrained a deep learning (DL) model using publicly available datasets A (n = 210) and B (n = 369) containing FLAIR, T2WI, and contrast-enhanced (CE)-T1WI. This was then fine-tuned with our institutional dataset (n = 197) containing ADC, T2WI, and CE-T1WI, manually annotated by radiologists, and split into training (n = 100) and testing (n = 97) sets. The Dice similarity coefficient (DSC) was used to compare model outputs and manual labels. A third independent radiologist assessed segmentation quality on a semi-quantitative 5-point scale. Differences in DSC between new and recurrent gliomas, and between unifocal and multifocal gliomas, were analyzed using the Mann-Whitney test. Semi-quantitative analyses were compared using the chi-square test. We found good agreement between segmentations from the fine-tuned DL model and ground-truth manual segmentations (median DSC: 0.729, std-dev: 0.134). DSC was higher for newly diagnosed (0.807) than recurrent (0.698) (p < 0.001), and higher for unifocal (0.747) than multifocal (0.613) cases (p = 0.001). Semi-quantitative scores of DL and manual segmentation were not significantly different (mean: 3.567 vs. 3.639; 93.8% vs. 97.9% scoring ≥ 3, p = 0.107). In conclusion, the proposed transfer learning DL model performed similarly to human radiologists in glioma segmentation on both structural and ADC sequences. Further improvement in segmenting challenging postoperative and multifocal glioma cases is needed.
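The Dice similarity coefficient used as the agreement metric here has a simple closed form, DSC = 2|A ∩ B| / (|A| + |B|). A minimal NumPy sketch on toy binary masks (illustrative values, not from the study):

```python
# Dice similarity coefficient between two binary segmentation masks.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|); defined as 1.0 when both masks are empty."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True    # 16-pixel square
truth = np.zeros((8, 8), dtype=bool); truth[3:7, 3:7] = True  # same square, shifted
score = dice(pred, truth)  # overlap is 3x3 = 9 -> 2*9/(16+16) = 0.5625
```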
Affiliation(s)
- Qi Wan
- Department of Imaging, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Department of Radiology, the Key Laboratory of Advanced Interdisciplinary Studies Center, the First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Jisoo Kim
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Clifford Lindsay
- Image Processing and Analysis Core (iPAC), Department of Radiology, University of Massachusetts Chan Medical School, Worcester, MA, USA
- Xin Chen
- School of Medicine, Guangzhou First People's Hospital, South China University of Technology, Guangzhou, Guangdong, China
- Jing Li
- Department of Radiology, the Affiliated Cancer Hospital of Zhengzhou University (Henan Cancer Hospital), Zhengzhou, China
- J Bryan Iorgulescu
- Molecular Diagnostics Laboratory, Department of Hematopathology, Division of Pathology and Laboratory Medicine, The University of Texas MD Anderson Cancer Center, Houston, USA
- Raymond Y Huang
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Chenxi Zhang
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, China
- David Reardon
- Center for Neuro-Oncology, Dana-Farber Cancer Institute, Boston, MA, USA
- Geoffrey S Young
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Lei Qin
- Department of Imaging, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
4
El Hachimy I, Kabelma D, Echcharef C, Hassani M, Benamar N, Hajji N. A comprehensive survey on the use of deep learning techniques in glioblastoma. Artif Intell Med 2024; 154:102902. [PMID: 38852314] [DOI: 10.1016/j.artmed.2024.102902]
Abstract
Glioblastoma, characterized as a grade 4 astrocytoma, stands out as the most aggressive brain tumor, often leading to dire outcomes. The challenge of treating glioblastoma is exacerbated by the convergence of genetic mutations and disruptions in gene expression, driven by alterations in epigenetic mechanisms. The integration of artificial intelligence, including machine learning algorithms, has emerged as an indispensable asset in medical analyses and beyond. Current research on glioblastoma predominantly revolves around non-omics data modalities, prominently including magnetic resonance imaging, computed tomography, and positron emission tomography. Nonetheless, the assimilation of omic data, encompassing gene expression through transcriptomics and epigenomics, offers pivotal insights into patients' conditions. These insights, in turn, hold significant value in refining diagnoses, guiding decision-making processes, and devising efficacious treatment strategies. This survey's core objective is a comprehensive exploration of noteworthy applications of machine learning methodologies in the domain of glioblastoma, alongside closely associated research pursuits. The study accentuates the deployment of artificial intelligence techniques for both non-omics and omics data across a range of tasks. Furthermore, the survey underscores the intricate challenges posed by the inherent heterogeneity of glioblastoma, delving into strategies aimed at addressing its multifaceted nature.
Affiliation(s)
- Mohamed Hassani
- Cancer Division, Faculty of Medicine, Department of Biomolecular Medicine, Imperial College, London, United Kingdom
- Nabil Benamar
- Moulay Ismail University of Meknes, Meknes, Morocco; Al Akhawayn University in Ifrane, Ifrane, Morocco
- Nabil Hajji
- Cancer Division, Faculty of Medicine, Department of Biomolecular Medicine, Imperial College, London, United Kingdom; Department of Medical Biochemistry, Molecular Biology and Immunology, School of Medicine, Virgen Macarena University Hospital, University of Seville, Seville, Spain
5
Yao B, Chao L, Asadi M, Alnowibet KA. Modified osprey algorithm for optimizing capsule neural network in leukemia image recognition. Sci Rep 2024; 14:15402. [PMID: 38965305] [PMCID: PMC11224281] [DOI: 10.1038/s41598-024-66187-7]
Abstract
The diagnosis of leukemia is a serious matter that requires immediate and accurate attention. This research presents a revolutionary method for diagnosing leukemia using a Capsule Neural Network (CapsNet) with an optimized design. CapsNet is a cutting-edge neural network that effectively captures complex features and spatial relationships within images. To improve the CapsNet's performance, a modified version of the Osprey Optimization Algorithm (MOA) has been utilized. The suggested approach has been tested on the ALL-IDB database, a widely recognized dataset for leukemia image classification. Comparative analysis with various machine learning techniques, including a combined MobileNetV2 and ResNet18 (MBV2/Res) network, a depth-wise convolution model, a hybrid model that combines a genetic algorithm with ResNet-50V2 (ResNet/GA), and SVM/JAYA, demonstrated the superiority of our method across multiple metrics. As a result, the proposed method is a robust and powerful tool for diagnosing leukemia from medical images.
Affiliation(s)
- Bingying Yao
- Software Engineering Department, Software Engineering Institute of Guangzhou, Guangzhou, 510000, China
- Li Chao
- College of Information Technology, Guangdong Industry Polytechnic, Foshan, 510300, China
- Mehdi Asadi
- Ankara Yıldırım Beyazıt University (AYBU), 06010, Ankara, Turkey
- Khalid A Alnowibet
- Statistics and Operations Research Department, College of Science, King Saud University, Riyadh, 11451, Kingdom of Saudi Arabia
6
Chukwujindu E, Faiz H, Ai-Douri S, Faiz K, De Sequeira A. Role of artificial intelligence in brain tumour imaging. Eur J Radiol 2024; 176:111509. [PMID: 38788610] [DOI: 10.1016/j.ejrad.2024.111509]
Abstract
Artificial intelligence (AI) is a rapidly evolving field with many neuro-oncology applications. In this review, we discuss how AI can assist in brain tumour imaging, focusing on machine learning (ML) and deep learning (DL) techniques. We describe how AI can help in lesion detection, differential diagnosis, anatomic segmentation, molecular marker identification, prognostication, and pseudo-progression evaluation. We also cover AI applications in non-glioma brain tumours, such as brain metastasis, posterior fossa, and pituitary tumours. We highlight the challenges and limitations of AI implementation in radiology, such as data quality, standardization, and integration. Based on the findings in the aforementioned areas, we conclude that AI can potentially improve the diagnosis and treatment of brain tumours and provide a path towards personalized medicine and better patient outcomes.
Affiliation(s)
- Khunsa Faiz
- McMaster University, Department of Radiology, L8S 4L8, Canada
7
Abdusalomov A, Rakhimov M, Karimberdiyev J, Belalova G, Cho YI. Enhancing Automated Brain Tumor Detection Accuracy Using Artificial Intelligence Approaches for Healthcare Environments. Bioengineering (Basel) 2024; 11:627. [PMID: 38927863] [PMCID: PMC11201188] [DOI: 10.3390/bioengineering11060627]
Abstract
Medical imaging and deep learning models are essential to the early identification and diagnosis of brain cancers, facilitating timely intervention and improving patient outcomes. This research paper investigates the integration of YOLOv5, a state-of-the-art object detection framework, with non-local neural networks (NLNNs) to improve the robustness and accuracy of brain tumor detection. The study begins by curating a comprehensive dataset comprising brain MRI scans from various sources. To facilitate effective fusion, the YOLOv5, NLNN, K-means+, and spatial pyramid pooling fast+ (SPPF+) modules are integrated within a unified framework. The brain tumor dataset is used to refine the YOLOv5 model through the application of transfer learning techniques, adapting it specifically to the task of tumor detection. The results indicate that combining YOLOv5 with the other modules enhances detection capabilities compared with using YOLOv5 exclusively, yielding recall rates of 86% and 83%, respectively. Moreover, the research explores the interpretability aspect of the combined model. By visualizing the attention maps generated by the NLNN module, the regions of interest associated with tumor presence are highlighted, aiding in the understanding and validation of the methodology's decision-making procedure. Additionally, the impact of hyperparameters, such as NLNN kernel size, fusion strategy, and training data augmentation, is investigated to optimize the performance of the combined model.
Affiliation(s)
- Akmalbek Abdusalomov
- Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 13120, Gyeonggi-do, Republic of Korea
- Mekhriddin Rakhimov
- Department of Artificial Intelligence, Tashkent University of Information Technologies Named after Muhammad Al-Khwarizmi, Tashkent 100200, Uzbekistan
- Jakhongir Karimberdiyev
- Department of Artificial Intelligence, Tashkent University of Information Technologies Named after Muhammad Al-Khwarizmi, Tashkent 100200, Uzbekistan
- Guzal Belalova
- Department of Information Systems and Technologies, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
- Young Im Cho
- Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 13120, Gyeonggi-do, Republic of Korea
- Department of Information Systems and Technologies, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
8
Jiang Z, Gandomkar Z, Trieu PDY, Taba ST, Barron ML, Lewis SJ. AI for interpreting screening mammograms: implications for missed cancer in double reading practices and challenging-to-locate lesions. Sci Rep 2024; 14:11893. [PMID: 38789575] [PMCID: PMC11126609] [DOI: 10.1038/s41598-024-62324-4]
Abstract
Although the value of adding AI as a surrogate second reader in various scenarios has been investigated, it is unknown whether implementing an AI tool within double reading practice would capture additional subtle cancers missed by both radiologists who independently assessed the mammograms. This paper assesses the effectiveness of two state-of-the-art Artificial Intelligence (AI) models in detecting retrospectively-identified missed cancers within a screening program employing double reading practices. The study also explores the agreement between AI and radiologists in locating the lesions, considering various levels of concordance among the radiologists in locating the lesions. The Globally-aware Multiple Instance Classifier (GMIC) and Global-Local Activation Maps (GLAM) models were fine-tuned for our dataset. We evaluated the sensitivity of both models on missed cancers retrospectively identified by a panel of three radiologists who reviewed prior examinations of 729 cancer cases detected in a screening program with double reading practice. Two of these experts annotated the lesions, and based on their concordance levels, cases were categorized as 'almost perfect,' 'substantial,' 'moderate,' and 'poor.' We employed Similarity or Histogram Intersection (SIM) and Kullback-Leibler Divergence (KLD) metrics to compare saliency maps of malignant cases from the AI model with annotations from radiologists in each category. In total, 24.82% of cancers were labeled as "missed." The performance of GMIC and GLAM on the missed cancer cases was 82.98% and 79.79%, respectively, while for the true screen-detected cancers, the performances were 89.54% and 87.25%, respectively (p-values for the difference in sensitivity < 0.05). As anticipated, SIM and KLD from saliency maps were best in 'almost perfect,' followed by 'substantial,' 'moderate,' and 'poor.' Both GMIC and GLAM (p-values < 0.05) exhibited greater sensitivity at higher concordance. Even in a screening program with independent double reading, adding AI could potentially identify missed cancers. However, the challenging-to-locate lesions for radiologists impose a similar challenge for AI.
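The SIM and KLD metrics named in this abstract have compact definitions once both saliency maps are normalised to probability distributions. A minimal NumPy sketch on toy maps (note that the KL direction varies across papers, so this is one common convention, not necessarily the exact one used in the study):

```python
# Histogram intersection (SIM) and KL divergence (KLD) between two
# saliency maps treated as probability distributions.
import numpy as np

def _norm(p):
    """Flatten a map and normalise it to sum to 1."""
    p = np.asarray(p, dtype=float).ravel()
    return p / p.sum()

def sim(p, q):
    """Histogram intersection: 1.0 for identical distributions, lower otherwise."""
    return float(np.minimum(_norm(p), _norm(q)).sum())

def kld(p, q, eps=1e-12):
    """KL(q || p); 0.0 for identical maps (direction conventions vary)."""
    p, q = _norm(p) + eps, _norm(q) + eps
    return float(np.sum(q * np.log(q / p)))

a = np.array([[1.0, 2.0], [3.0, 4.0]])
s_same, k_same = sim(a, a), kld(a, a)  # perfect agreement: SIM = 1, KLD = 0
```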
Affiliation(s)
- Zhengqiang Jiang
- Discipline of Medical Imaging Sciences, School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Ziba Gandomkar
- Discipline of Medical Imaging Sciences, School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Phuong Dung Yun Trieu
- Discipline of Medical Imaging Sciences, School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Seyedamir Tavakoli Taba
- Discipline of Medical Imaging Sciences, School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Melissa L Barron
- Discipline of Medical Imaging Sciences, School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Sarah J Lewis
- Discipline of Medical Imaging Sciences, School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- School of Health Sciences, Western Sydney University, Campbelltown, Australia
9
Holguin-Garcia SA, Guevara-Navarro E, Daza-Chica AE, Patiño-Claro MA, Arteaga-Arteaga HB, Ruz GA, Tabares-Soto R, Bravo-Ortiz MA. A comparative study of CNN-capsule-net, CNN-transformer encoder, and Traditional machine learning algorithms to classify epileptic seizure. BMC Med Inform Decis Mak 2024; 24:60. [PMID: 38429718] [PMCID: PMC10908140] [DOI: 10.1186/s12911-024-02460-z]
Abstract
INTRODUCTION Epilepsy is a disease characterized by excessive neuronal discharges, generally provoked without any external stimulus, that manifest as convulsions. About 2 million people are diagnosed worldwide each year. Diagnosis is carried out by a neurologist using an electroencephalogram (EEG), which is a lengthy process. METHOD To make these processes more efficient, we turned to innovative artificial intelligence methods for classifying EEG signals. Comparing traditional models, such as machine learning or deep learning, with cutting-edge architectures, in this case Capsule-Net and Transformer Encoder, plays a crucial role in finding the most accurate model and helping the doctor reach a faster diagnosis. RESULT In this paper, a comparison was made between different models for binary and multiclass classification on the epileptic seizure detection database, achieving a binary accuracy of 99.92% with the Capsule-Net model and a multiclass accuracy of 87.30% with the Transformer Encoder model. CONCLUSION Artificial intelligence is essential in diagnosing this pathology. Comparison between models is helpful, as it allows inefficient ones to be discarded. State-of-the-art models overshadow conventional models, but data processing also plays an essential role in the models achieving higher accuracy.
Affiliation(s)
- Ernesto Guevara-Navarro
- Departamento de Electrónica y Automatización, Universidad Autónoma de Manizales, Manizales, 170001, Caldas, Colombia
- Alvaro Eduardo Daza-Chica
- Departamento de Electrónica y Automatización, Universidad Autónoma de Manizales, Manizales, 170001, Caldas, Colombia
- Maria Alejandra Patiño-Claro
- Departamento de Electrónica y Automatización, Universidad Autónoma de Manizales, Manizales, 170001, Caldas, Colombia
- Harold Brayan Arteaga-Arteaga
- Departamento de Electrónica y Automatización, Universidad Autónoma de Manizales, Manizales, 170001, Caldas, Colombia
- Gonzalo A Ruz
- Facultad de Ingeniería y Ciencias, Universidad Adolfo Ibáñez, Santiago, 7941169, Chile
- Center of Applied Ecology and Sustainability (CAPES), Santiago, 8331150, Chile
- Data Observatory Foundation, Santiago, 7510277, Chile
- Reinel Tabares-Soto
- Departamento de Electrónica y Automatización, Universidad Autónoma de Manizales, Manizales, 170001, Caldas, Colombia
- Departamento de Sistemas e Informática, Universidad de Caldas, Manizales, 170004, Caldas, Colombia
- Facultad de Ingeniería y Ciencias, Universidad Adolfo Ibáñez, Santiago, 7941169, Chile
- Mario Alejandro Bravo-Ortiz
- Departamento de Electrónica y Automatización, Universidad Autónoma de Manizales, Manizales, 170001, Caldas, Colombia
- Centro de Bioinformática y Biología Computacional (BIOS), Manizales, 170001, Colombia
10
Jiang Z, Gandomkar Z, Trieu PDY, Tavakoli Taba S, Barron ML, Obeidy P, Lewis SJ. Evaluating Recalibrating AI Models for Breast Cancer Diagnosis in a New Context: Insights from Transfer Learning, Image Enhancement and High-Quality Training Data Integration. Cancers (Basel) 2024; 16:322. [PMID: 38254813] [PMCID: PMC10814142] [DOI: 10.3390/cancers16020322]
Abstract
This paper investigates the adaptability of four state-of-the-art artificial intelligence (AI) models to the Australian mammographic context through transfer learning, explores the impact of image enhancement on model performance and analyses the relationship between AI outputs and histopathological features for clinical relevance and accuracy assessment. A total of 1712 screening mammograms (n = 856 cancer cases and n = 856 matched normal cases) were used in this study. The 856 cases with cancer lesions were annotated by two expert radiologists and the level of concordance between their annotations was used to establish two sets: a 'high-concordance subset' with 99% agreement of cancer location and an 'entire dataset' with all cases included. The area under the receiver operating characteristic curve (AUC) was used to evaluate the performance of Globally aware Multiple Instance Classifier (GMIC), Global-Local Activation Maps (GLAM), I&H and End2End AI models, both in the pretrained and transfer learning modes, with and without applying the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm. The four AI models with and without transfer learning in the high-concordance subset outperformed those in the entire dataset. Applying the CLAHE algorithm to mammograms improved the performance of the AI models. In the high-concordance subset with the transfer learning and CLAHE algorithm applied, the AUC of the GMIC model was highest (0.912), followed by the GLAM model (0.909), I&H (0.893) and End2End (0.875). There were significant differences (p < 0.05) in the performances of the four AI models between the high-concordance subset and the entire dataset. The AI models demonstrated significant differences in malignancy probability concerning different tumour size categories in mammograms. The performance of AI models was affected by several factors such as concordance classification, image enhancement and transfer learning. Mammograms with a strong concordance with radiologists' annotations, applying image enhancement and transfer learning could enhance the accuracy of AI models.
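CLAHE builds on ordinary histogram equalisation, applying it per image tile with a clip limit so local contrast is boosted without over-amplifying noise. As a dependency-free sketch, the global (non-adaptive, non-clipped) version looks like this; the tiled, clipped variant used in the paper is typically called from an imaging library such as OpenCV (`cv2.createCLAHE`):

```python
# Global histogram equalisation of an 8-bit image: map each intensity
# through the (stretched) cumulative histogram. CLAHE does the same thing
# per tile, with a clip limit, then blends tile mappings bilinearly.
import numpy as np

def equalize(img: np.ndarray) -> np.ndarray:
    """Histogram-equalise an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())  # stretch to [0, 255]
    return cdf[img].astype(np.uint8)                           # remap pixels

rng = np.random.default_rng(1)
low_contrast = rng.integers(100, 140, size=(32, 32), dtype=np.uint8)
enhanced = equalize(low_contrast)

spread_before = int(low_contrast.max()) - int(low_contrast.min())  # narrow range
spread_after = int(enhanced.max()) - int(enhanced.min())           # much wider
```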
Affiliation(s)
- Zhengqiang Jiang
- Discipline of Medical Imaging Science, School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney 2006, Australia
- Ziba Gandomkar
- Discipline of Medical Imaging Science, School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney 2006, Australia
- Phuong Dung (Yun) Trieu
- Discipline of Medical Imaging Science, School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney 2006, Australia
- Seyedamir Tavakoli Taba
- Discipline of Medical Imaging Science, School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney 2006, Australia
- Melissa L. Barron
- Discipline of Medical Imaging Science, School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney 2006, Australia
- Peyman Obeidy
- Discipline of Medical Imaging Science, School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney 2006, Australia
- Sarah J. Lewis
- Discipline of Medical Imaging Science, School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney 2006, Australia
- School of Health Sciences, Western Sydney University, Campbelltown 2560, Australia
11
Ali H, Qureshi R, Shah Z. Artificial Intelligence-Based Methods for Integrating Local and Global Features for Brain Cancer Imaging: Scoping Review. JMIR Med Inform 2023; 11:e47445. [PMID: 37976086] [PMCID: PMC10692876] [DOI: 10.2196/47445]
Abstract
BACKGROUND Transformer-based models are gaining popularity in medical imaging and cancer imaging applications. Many recent studies have demonstrated the use of transformer-based models for brain cancer imaging applications such as diagnosis and tumor segmentation. OBJECTIVE This study aims to review how different vision transformers (ViTs) contributed to advancing brain cancer diagnosis and tumor segmentation using brain image data. It examines the different architectures developed for enhancing the task of brain tumor segmentation and explores how ViT-based models augmented the performance of convolutional neural networks for brain cancer imaging. METHODS This review performed the study search and study selection following the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines. The search comprised 4 popular scientific databases: PubMed, Scopus, IEEE Xplore, and Google Scholar. The search terms were formulated to cover the interventions (ie, ViTs) and the target application (ie, brain cancer imaging). Title and abstract screening for study selection was performed by 2 reviewers independently and validated by a third reviewer. Data extraction was performed by 2 reviewers and validated by a third reviewer. Finally, the data were synthesized using a narrative approach. RESULTS Of the 736 retrieved studies, 22 (3%) were included in this review. These studies were published in 2021 and 2022. The most commonly addressed task in these studies was tumor segmentation using ViTs. No study reported early detection of brain cancer. Among the different ViT architectures, Shifted Window transformer-based architectures have recently become the most popular choice of the research community. Among the included architectures, UNet transformer and TransUNet had the highest number of parameters and thus needed a cluster of as many as 8 graphics processing units for model training. The brain tumor segmentation (BraTS) challenge data set was the most popular data set used in the included studies. ViTs were used in different combinations with convolutional neural networks to capture both the global and local context of the input brain imaging data. CONCLUSIONS It can be argued that the computational complexity of transformer architectures is a bottleneck to advancing the field and enabling clinical translation. This review provides the current state of knowledge on the topic, and its findings will be helpful for researchers in the field of medical artificial intelligence and its applications in brain cancer.
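The recurring pattern in the reviewed hybrids, convolutional layers for local context combined with transformer self-attention for global context, can be illustrated with a toy single-head self-attention step applied to CNN-style patch features. This is a minimal sketch with made-up shapes and random weights, not any specific reviewed architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(patches, d_k=16):
    """One single-head self-attention step: every patch attends to all
    other patches, mixing in the global context that convolutions miss."""
    n, d = patches.shape
    w_q, w_k, w_v = (rng.standard_normal((d, d_k)) / np.sqrt(d) for _ in range(3))
    q, k, v = patches @ w_q, patches @ w_k, patches @ w_v
    attn = softmax(q @ k.T / np.sqrt(d_k))  # (n, n) attention weights
    return attn @ v                         # (n, d_k) globally mixed features

# 49 patch embeddings of dim 32, e.g. a flattened 7x7 CNN feature map
local_feats = rng.standard_normal((49, 32))
global_feats = self_attention(local_feats)
print(global_feats.shape)  # (49, 16)
```

A full ViT stacks many such blocks with feed-forward layers and residual connections; the quadratic (n, n) attention matrix is the computational bottleneck the review highlights.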
Affiliation(s)
- Hazrat Ali: College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Rizwan Qureshi: Department of Imaging Physics, MD Anderson Cancer Center, University of Texas, Houston, TX, United States
- Zubair Shah: College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar

12
Ali H, Mohsen F, Shah Z. Improving diagnosis and prognosis of lung cancer using vision transformers: a scoping review. BMC Med Imaging 2023; 23:129. PMID: 37715137; PMCID: PMC10503208; DOI: 10.1186/s12880-023-01098-z.
Abstract
BACKGROUND Vision transformer-based methods are advancing the field of medical artificial intelligence and cancer imaging, including lung cancer applications. Recently, many researchers have developed vision transformer-based AI methods for lung cancer diagnosis and prognosis. OBJECTIVE This scoping review aims to identify the recent developments in vision transformer-based AI methods for lung cancer imaging applications. It provides key insights into how vision transformers complemented the performance of AI and deep learning methods for lung cancer. Furthermore, the review identifies the datasets that contributed to advancing the field. METHODS In this review, we searched the PubMed, Scopus, IEEE Xplore, and Google Scholar online databases. The search terms included intervention terms (vision transformers) and the task (i.e., lung cancer, adenocarcinoma, etc.). Two reviewers independently screened the title and abstract to select relevant studies and performed the data extraction. A third reviewer was consulted to validate the inclusion and exclusion. Finally, a narrative approach was used to synthesize the data. RESULTS Of the 314 retrieved studies, this review included 34 studies published from 2020 to 2022. The most commonly addressed task in these studies was the classification of lung cancer types, such as lung squamous cell carcinoma versus lung adenocarcinoma, and identifying benign versus malignant pulmonary nodules. Other applications included survival prediction of lung cancer patients and segmentation of lungs. The studies lacked clear strategies for clinical transformation. The Swin transformer was a popular choice of the researchers; however, many other architectures were also reported in which a vision transformer was combined with convolutional neural networks or a UNet model. Researchers have used the publicly available lung cancer datasets of the Lung Imaging Database Consortium and The Cancer Genome Atlas. One study used a cluster of 48 GPUs, while other studies used one, two, or four GPUs. CONCLUSION It can be concluded that vision transformer-based models are increasing in popularity for developing AI methods for lung cancer applications. However, their computational complexity and clinical relevance are important factors to consider in future research. This review provides valuable insights for researchers in the field of AI and healthcare to advance the state-of-the-art in lung cancer diagnosis and prognosis. We provide an interactive dashboard on lung-cancer.onrender.com/.
Affiliation(s)
- Hazrat Ali: College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- Farida Mohsen: College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- Zubair Shah: College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar

13
Raghavendra U, Gudigar A, Paul A, Goutham TS, Inamdar MA, Hegde A, Devi A, Ooi CP, Deo RC, Barua PD, Molinari F, Ciaccio EJ, Acharya UR. Brain tumor detection and screening using artificial intelligence techniques: Current trends and future perspectives. Comput Biol Med 2023; 163:107063. PMID: 37329621; DOI: 10.1016/j.compbiomed.2023.107063.
Abstract
A brain tumor is an abnormal mass of tissue located inside the skull. In addition to putting pressure on the healthy parts of the brain, it can lead to significant health problems. Depending on the region of the brain tumor, it can cause a wide range of health issues. As malignant brain tumors grow rapidly, the mortality rate of individuals with this cancer can increase substantially with each passing week. Hence it is vital to detect these tumors early so that preventive measures can be taken at the initial stages. Computer-aided diagnostic (CAD) systems, in coordination with artificial intelligence (AI) techniques, have a vital role in the early detection of this disorder. In this review, we studied 124 research articles published from 2000 to 2022. Here, the challenges faced by CAD systems based on different modalities are highlighted along with the current requirements of this domain and future prospects in this area of research.
Affiliation(s)
- U Raghavendra: Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104, India
- Anjan Gudigar: Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104, India
- Aritra Paul: Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104, India
- T S Goutham: Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104, India
- Mahesh Anil Inamdar: Department of Mechatronics, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104, India
- Ajay Hegde: Consultant Neurosurgeon, Manipal Hospitals, Sarjapur Road, Bangalore, India
- Aruna Devi: School of Education and Tertiary Access, University of the Sunshine Coast, Caboolture Campus, Australia
- Chui Ping Ooi: School of Science and Technology, Singapore University of Social Sciences, Singapore, 599494, Singapore
- Ravinesh C Deo: School of Mathematics, Physics, and Computing, University of Southern Queensland, Springfield, QLD, 4300, Australia
- Prabal Datta Barua: Cogninet Brain Team, Cogninet Australia, Sydney, NSW, 2010, Australia; School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Toowoomba, QLD, 4350, Australia; Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW, 2007, Australia
- Filippo Molinari: Department of Electronics and Telecommunications, Politecnico di Torino, 10129, Torino, Italy
- Edward J Ciaccio: Department of Medicine, Columbia University Medical Center, New York, NY, 10032, USA
- U Rajendra Acharya: School of Mathematics, Physics, and Computing, University of Southern Queensland, Springfield, QLD, 4300, Australia; International Research Organization for Advanced Science and Technology (IROAST), Kumamoto University, Kumamoto, 860-8555, Japan

14
Ortega-Martorell S, Olier I, Hernandez O, Restrepo-Galvis PD, Bellfield RAA, Candiota AP. Tracking Therapy Response in Glioblastoma Using 1D Convolutional Neural Networks. Cancers (Basel) 2023; 15:4002. PMID: 37568818; PMCID: PMC10417313; DOI: 10.3390/cancers15154002.
Abstract
BACKGROUND Glioblastoma (GB) is a malignant brain tumour that is challenging to treat, often relapsing even after aggressive therapy. Evaluating therapy response relies on magnetic resonance imaging (MRI) following the Response Assessment in Neuro-Oncology (RANO) criteria. However, early assessment is hindered by phenomena such as pseudoprogression and pseudoresponse. Magnetic resonance spectroscopy (MRS/MRSI) provides metabolomic information but is underutilised owing to a lack of familiarity and standardisation. METHODS This study explores the potential of spectroscopic imaging (MRSI), in combination with several machine learning approaches, including one-dimensional convolutional neural networks (1D-CNNs), to improve therapy response assessment. A preclinical GB model (GL261 tumour-bearing mice) was used for method optimisation and validation. RESULTS The proposed 1D-CNN models successfully identify the different regions of tumours sampled by MRSI, i.e., normal brain (N), control/unresponsive tumour (T), and tumour responding to treatment (R). Class activation maps generated with Grad-CAM enabled the study of the key areas relevant to the models, providing model explainability. The generated colour-coded maps showing the N, T and R regions were highly accurate (according to Dice scores) when compared against ground truth and outperformed our previous method. CONCLUSIONS The proposed methodology may provide new and better opportunities for therapy response assessment, potentially providing earlier hints of tumour relapse.
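A 1D-CNN such as the one described applies learned convolution kernels along a spectrum, followed by a nonlinearity and pooling. The toy sketch below shows one such layer on a made-up 12-point "spectrum" with a hand-picked peak-detecting kernel; the real models learn their kernels from MRSI data:

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1D cross-correlation, the core operation of a 1D-CNN layer."""
    k = len(kernel)
    return np.array([signal[i:i + k] @ kernel for i in range(len(signal) - k + 1)])

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Downsample by keeping the maximum of each non-overlapping window."""
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

# Toy spectrum with one sharp peak; the kernel [-1, 2, -1] responds to peaks
spectrum = np.array([0, 0, 1, 3, 7, 3, 1, 0, 0, 0, 0, 0], dtype=float)
kernel = np.array([-1.0, 2.0, -1.0])
features = max_pool(relu(conv1d(spectrum, kernel)))
print(features)  # [0. 8. 0. 0. 0.]
```

Stacking several such layers and ending with a small dense classifier yields the N/T/R region predictions described in the abstract.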
Affiliation(s)
- Sandra Ortega-Martorell: Data Science Research Centre, Liverpool John Moores University, Liverpool L3 3AF, UK
- Ivan Olier: Data Science Research Centre, Liverpool John Moores University, Liverpool L3 3AF, UK
- Orlando Hernandez: Escuela Colombiana de Ingeniería Julio Garavito, Bogota 111166, Colombia
- Ryan A. A. Bellfield: Data Science Research Centre, Liverpool John Moores University, Liverpool L3 3AF, UK
- Ana Paula Candiota: Centro de Investigación Biomédica en Red: Bioingeniería, Biomateriales y Nanomedicina, 08193 Cerdanyola del Vallès, Spain; Departament de Bioquímica i Biologia Molecular, Facultat de Biociències, Universitat Autònoma de Barcelona, 08193 Cerdanyola del Vallès, Spain

15
Sahoo S, Mishra S, Panda B, Bhoi AK, Barsocchi P. An Augmented Modulated Deep Learning Based Intelligent Predictive Model for Brain Tumor Detection Using GAN Ensemble. Sensors (Basel) 2023; 23:6930. PMID: 37571713; PMCID: PMC10422344; DOI: 10.3390/s23156930.
Abstract
Brain tumor detection in the initial stage is becoming an intricate task for clinicians worldwide. The diagnosis of brain tumor patients is rigorous in the later stages, which is a serious concern. Although there are related pragmatic clinical tools and multiple models based on machine learning (ML) for the effective diagnosis of patients, these models still provide lower accuracy and take considerable time for patient screening during the diagnosis process. Hence, there is still a need for a more precise model that screens patients more accurately, detects brain tumors in the beginning stages, and aids clinicians in diagnosis, making brain tumor assessment more reliable. In this research, a performance analysis of the impact of different generative adversarial networks (GANs) on the early detection of brain tumors is presented. Based on it, a novel hybrid enhanced predictive convolutional neural network (CNN) model using a hybrid GAN ensemble is proposed. Brain tumor image data are augmented using a GAN ensemble and fed for classification to a hybrid modulated CNN technique. The outcome is generated through a soft voting approach, where the final prediction is based on the GAN that yields the highest values for the different performance metrics. This analysis demonstrated that evaluation with a progressive-growing generative adversarial network (PGGAN) architecture produced the best result. In the analysis, PGGAN outperformed the others, with accuracy, precision, recall, F1-score, and negative predictive value (NPV) of 98.85%, 98.45%, 97.2%, 98.11%, and 98.09%, respectively. Additionally, a very low latency of 3.4 s was determined with PGGAN. The PGGAN model enhanced the overall performance of the identification of brain cell tissues in real time. Therefore, it may be inferred that brain tumor detection using PGGAN augmentation with the proposed modulated CNN technique generates the optimum performance under the soft voting approach.
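The soft voting step described above averages class-probability vectors across models and picks the class with the highest mean probability. A minimal stdlib sketch, with hypothetical per-model probabilities (the GAN names in the comments are illustrative, not the paper's reported numbers):

```python
def soft_vote(prob_lists):
    """Average class-probability vectors from several models and return the
    class with the highest mean probability (soft voting)."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    mean = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=mean.__getitem__), mean

# Hypothetical P(no tumour), P(tumour) for one MRI slice from three models
probs = [
    [0.30, 0.70],  # e.g. CNN trained on PGGAN-augmented data
    [0.45, 0.55],  # e.g. CNN trained on DCGAN-augmented data
    [0.20, 0.80],  # e.g. CNN trained on vanilla-GAN-augmented data
]
label, mean = soft_vote(probs)
print(label)  # 1 (tumour), since mean P(tumour) = 2.05/3 > 0.5
```

Unlike hard (majority) voting, soft voting keeps each model's confidence, so a very confident model can outweigh two lukewarm ones.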
Affiliation(s)
- Saswati Sahoo: School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar 751024, India
- Sushruta Mishra: School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar 751024, India
- Baidyanath Panda: LTIMindtree, 1 American Row, 3rd Floor, Hartford, CT 06103, USA
- Akash Kumar Bhoi: Directorate of Research, Sikkim Manipal University, Gangtok 737102, India; KIET Group of Institutions, Delhi-NCR, Ghaziabad 201206, India; Institute of Information Science and Technologies, National Research Council, 56124 Pisa, Italy
- Paolo Barsocchi: Institute of Information Science and Technologies, National Research Council, 56124 Pisa, Italy

16
Saidani O, Aljrees T, Umer M, Alturki N, Alshardan A, Khan SW, Alsubai S, Ashraf I. Enhancing Prediction of Brain Tumor Classification Using Images and Numerical Data Features. Diagnostics (Basel) 2023; 13:2544. PMID: 37568907; PMCID: PMC10417332; DOI: 10.3390/diagnostics13152544.
Abstract
Brain tumors, along with other diseases that harm the neurological system, are a significant contributor to global mortality. Early diagnosis plays a crucial role in effectively treating brain tumors. To distinguish individuals with tumors from those without, this study employs a combination of image- and data-based features. In the initial phase, the image dataset is enhanced, followed by the application of a UNet transfer-learning-based model to accurately classify patients as either having tumors or being normal. In the second phase, this research utilizes 13 features in conjunction with a voting classifier. The voting classifier incorporates features extracted from deep convolutional layers and combines stochastic gradient descent with logistic regression to achieve better classification results. The reported accuracy score of 0.99 achieved by both proposed models shows their superior performance. Comparing the results with other supervised learning algorithms and state-of-the-art models further validates this performance.
Affiliation(s)
- Oumaima Saidani: Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Turki Aljrees: College of Computer Science and Engineering, University of Hafr Al-Batin, Hafar Al-Batin 39524, Saudi Arabia
- Muhammad Umer: Department of Computer Science & Information Technology, The Islamia University of Bahawalpur, Bahawalpur 63100, Pakistan
- Nazik Alturki: Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Amal Alshardan: Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Sardar Waqar Khan: Department of Computer Science & Information Technology, The University of Lahore, Lahore 54000, Pakistan
- Shtwai Alsubai: Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- Imran Ashraf: Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea

17
Azeem M, Javaid S, Khalil RA, Fahim H, Althobaiti T, Alsharif N, Saeed N. Neural Networks for the Detection of COVID-19 and Other Diseases: Prospects and Challenges. Bioengineering (Basel) 2023; 10:850. PMID: 37508877; PMCID: PMC10416184; DOI: 10.3390/bioengineering10070850.
Abstract
The ability of artificial neural networks (ANNs) to learn, correct errors, and transform large amounts of raw data into beneficial medical decisions for treatment and care has increased their popularity for enhancing patient safety and quality of care. Therefore, this paper reviews the critical role of ANNs in providing valuable insights for patients' healthcare decisions and efficient disease diagnosis. We study different types of ANNs in the existing literature that advance ANNs' adaptation for complex applications. Specifically, we investigate ANNs' advances for predicting viral, cancer, skin, and COVID-19 diseases. Furthermore, we propose a deep convolutional neural network (CNN) model called ConXNet, based on chest radiography images, to improve the detection accuracy of COVID-19 disease. ConXNet is trained and tested using a chest radiography image dataset obtained from Kaggle, achieving more than 97% accuracy and 98% precision, which is better than other existing state-of-the-art models, such as DeTraC, U-Net, COVID MTNet, and COVID-Net, having 93.1%, 94.10%, 84.76%, and 90% accuracy and 94%, 95%, 85%, and 92% precision, respectively. The results show that the ConXNet model performed significantly well for a relatively large dataset compared with the aforementioned models. Moreover, the ConXNet model reduces time complexity by using dropout layers and batch normalization techniques. Finally, we highlight future research directions and challenges, such as the complexity of the algorithms, insufficient available data, privacy and security, and the integration of biosensing with ANNs. These research directions require considerable attention for improving the scope of ANNs for medical diagnostic and treatment applications.
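Batch normalization, one of the techniques the abstract credits for ConXNet's training speed, standardises each activation over the batch and then applies a learnable scale and shift. A minimal sketch of the inference-free (training-mode) computation on a toy batch; the values and shapes are illustrative, not taken from ConXNet:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Batch normalisation: standardise each feature over the batch
    dimension, then apply a learnable scale (gamma) and shift (beta)."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

# 3 samples, 2 activations with very different scales
batch = np.array([[1.0, 50.0],
                  [3.0, 60.0],
                  [5.0, 70.0]])
out = batch_norm(batch)
print(out.mean(axis=0), out.std(axis=0))  # approximately [0, 0] and [1, 1]
```

Because every layer then sees inputs on a comparable scale regardless of what earlier layers did, gradient descent can use larger learning rates and converge in fewer epochs, which is where the "reduced time complexity" comes from.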
Affiliation(s)
- Muhammad Azeem: School of Science, Engineering & Environment, University of Salford, Manchester M5 4WT, UK
- Shumaila Javaid: Department of Control Science and Engineering, College of Electronics and Information Engineering, Tongji University, Shanghai 201804, China
- Ruhul Amin Khalil: Department of Electrical Engineering, University of Engineering and Technology, Peshawar 25120, Pakistan; Department of Electrical and Communication Engineering, United Arab Emirates University (UAEU), Al-Ain 15551, United Arab Emirates
- Hamza Fahim: Department of Control Science and Engineering, College of Electronics and Information Engineering, Tongji University, Shanghai 201804, China
- Turke Althobaiti: Department of Computer Science, Faculty of Science, Northern Border University, Arar 73222, Saudi Arabia
- Nasser Alsharif: Department of Administrative and Financial Sciences, Ranyah University College, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
- Nasir Saeed: Department of Electrical and Communication Engineering, United Arab Emirates University (UAEU), Al-Ain 15551, United Arab Emirates

18
Al-Hammuri K, Gebali F, Kanan A, Chelvan IT. Vision transformer architecture and applications in digital health: a tutorial and survey. Vis Comput Ind Biomed Art 2023; 6:14. PMID: 37428360; PMCID: PMC10333157; DOI: 10.1186/s42492-023-00140-9.
Abstract
The vision transformer (ViT) is a state-of-the-art architecture for image recognition tasks that plays an important role in digital health applications. Medical images account for 90% of the data in digital medicine applications. This article discusses the core foundations of the ViT architecture and its digital health applications. These applications include image segmentation, classification, detection, prediction, reconstruction, synthesis, and telehealth such as report generation and security. This article also presents a roadmap for implementing the ViT in digital health systems and discusses its limitations and challenges.
Affiliation(s)
- Khalid Al-Hammuri: Electrical and Computer Engineering, University of Victoria, Victoria, V8W 2Y2, Canada
- Fayez Gebali: Electrical and Computer Engineering, University of Victoria, Victoria, V8W 2Y2, Canada
- Awos Kanan: Computer Engineering, Princess Sumaya University for Technology, Amman, 11941, Jordan

19
Al-Azzwi ZHN, Nazarov A. Brain Tumor Classification based on Improved Stacked Ensemble Deep Learning Methods. Asian Pac J Cancer Prev 2023; 24:2141-2148. PMID: 37378946; PMCID: PMC10505861; DOI: 10.31557/apjcp.2023.24.6.2141.
Abstract
OBJECTIVE Brain tumor diagnostic prediction is essential for assisting radiologists and other healthcare professionals in identifying and classifying brain tumors. For the diagnosis and treatment of cancer diseases, prediction and classification accuracy are crucial. The aim of this study was to improve ensemble deep learning models for classifying brain tumors and to increase performance by combining different deep learning models into one with more accurate predictions than the individual models. METHODS Convolutional neural networks (CNNs) are the foundation of most current methods for classifying cancer disease images. Combining a CNN with other models yields a classification approach known as an ensemble method, and ensemble machine learning models are more accurate than a single machine learning algorithm. This study used stacked ensemble deep learning technology. The dataset used in this study was obtained from Kaggle and included two categories: abnormal and normal brains. The dataset was trained with three models: VGG19, Inception v3, and ResNet 10. RESULT The stacked ensemble deep learning model achieved 96.6% accuracy for binary classification (0, 1), using binary cross-entropy loss and the Adam optimizer for the stacked models. CONCLUSION The stacked ensemble deep learning model improves over a single framework.
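In stacking, the base models' predictions become the input features of a second-level (meta) learner. A toy stdlib sketch with a tiny logistic-regression meta-learner trained by gradient descent; the base-model probabilities and labels below are hypothetical, and the real study's meta-learner and training setup may differ:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_meta(base_probs, labels, lr=0.5, epochs=2000):
    """Level-1 stacking learner: a tiny logistic regression trained on the
    base models' predicted probabilities (the meta-features)."""
    w = [0.0] * len(base_probs[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(base_probs, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y  # gradient of the binary cross-entropy loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

# Hypothetical tumour probabilities from three base CNNs, plus true labels
meta_X = [[0.9, 0.8, 0.7], [0.2, 0.3, 0.4], [0.8, 0.9, 0.6], [0.1, 0.2, 0.3]]
meta_y = [1, 0, 1, 0]
w, b = train_meta(meta_X, meta_y)
preds = [round(sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)) for x in meta_X]
print(preds)  # [1, 0, 1, 0]
```

The meta-learner effectively learns how much to trust each base model, which is why stacking can beat simple averaging when the base models have different error patterns.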
Affiliation(s)
- Zobeda Hatif Naji Al-Azzwi: School of Radio Engineering and Computer Technology, Moscow Institute of Physics and Technology, Moscow, Russian Federation
- A. N. Nazarov: Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences, Moscow, Russian Federation

20
Sailunaz K, Bestepe D, Alhajj S, Özyer T, Rokne J, Alhajj R. Brain tumor detection and segmentation: Interactive framework with a visual interface and feedback facility for dynamically improved accuracy and trust. PLoS One 2023; 18:e0284418. PMID: 37068084; PMCID: PMC10109523; DOI: 10.1371/journal.pone.0284418.
Abstract
Brain cancers caused by malignant brain tumors are one of the most fatal cancer types, with a low survival rate mostly due to the difficulties of early detection. Medical professionals therefore use various invasive and non-invasive methods for detecting and treating brain tumors at the earlier stages, thus enabling early treatment. The main non-invasive methods for brain tumor diagnosis and assessment are brain imaging techniques such as computed tomography (CT), positron emission tomography (PET) and magnetic resonance imaging (MRI) scans. In this paper, the focus is on the detection and segmentation of brain tumors from 2D and 3D brain MRIs. For this purpose, a complete automated system with a web application user interface is described, which detects and segments brain tumors with more than 90% accuracy and Dice scores. The user can upload brain MRIs, or access brain images from hospital databases, to check for the presence or absence of a brain tumor from brain MRI features and to extract the tumor region precisely from the brain MRI using deep neural networks such as CNN, U-Net and U-Net++. The web application also provides an option for entering feedback on the detection and segmentation results, allowing healthcare professionals to add more precise information that can be used to train the model for better future predictions and segmentations.
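The Dice score used to evaluate segmentations like those above measures the overlap between the predicted and ground-truth tumour masks: 2|A∩B| / (|A| + |B|). A minimal stdlib sketch on toy 4x4 masks (illustrative data, not from the paper):

```python
def dice_score(pred, truth):
    """Dice coefficient between two binary masks given as flat 0/1 lists:
    2*|intersection| / (|pred| + |truth|). 1.0 = perfect overlap, 0.0 = none."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0  # two empty masks agree

# Flattened 4x4 masks: predicted tumour region vs ground truth
pred  = [0, 1, 1, 0,  0, 1, 1, 0,  0, 0, 0, 0,  0, 0, 0, 0]
truth = [0, 1, 1, 0,  0, 1, 0, 0,  0, 0, 0, 0,  0, 0, 0, 0]
print(dice_score(pred, truth))  # 2*3 / (4+3) = 0.857...
```

Unlike pixel accuracy, Dice is insensitive to the large background region, which is why segmentation papers report it alongside accuracy.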
Affiliation(s)
- Kashfia Sailunaz: Department of Computer Science, University of Calgary, Alberta, Canada
- Deniz Bestepe: Department of Computer Engineering, Istanbul Medipol University, Istanbul, Turkey
- Sleiman Alhajj: International School of Medicine, Istanbul Medipol University, Istanbul, Turkey
- Tansel Özyer: Department of Computer Engineering, Ankara Medipol University, Ankara, Turkey
- Jon Rokne: Department of Computer Science, University of Calgary, Alberta, Canada
- Reda Alhajj: Department of Computer Science, University of Calgary, Alberta, Canada; Department of Computer Engineering, Istanbul Medipol University, Istanbul, Turkey; Department of Health Informatics, University of Southern Denmark, Odense, Denmark

21
Srinivasan S, Bai PSM, Mathivanan SK, Muthukumaran V, Babu JC, Vilcekova L. Grade Classification of Tumors from Brain Magnetic Resonance Images Using a Deep Learning Technique. Diagnostics (Basel) 2023; 13:1153. PMID: 36980463; PMCID: PMC10046932; DOI: 10.3390/diagnostics13061153.
Abstract
To improve the accuracy of tumor identification, it is necessary to develop a reliable automated diagnostic method. To precisely categorize brain tumors, researchers have developed a variety of segmentation algorithms. Segmentation of brain images is generally recognized as one of the most challenging tasks in medical image processing. In this article, a novel automated detection and classification method is proposed. The proposed approach consists of several phases: pre-processing the MRI images, segmenting the images, extracting features, and classifying the images. During pre-processing, an adaptive filter is utilized to eliminate background noise. For feature extraction, the local-binary grey level co-occurrence matrix (LBGLCM) is used, and for image segmentation, enhanced fuzzy c-means clustering (EFCMC) is used. After extracting the scan features, a deep learning model classifies the MRI images into two groups: glioma and normal. The classifications are created using a convolutional recurrent neural network (CRNN). The proposed technique improved brain image classification from a defined input dataset. MRI scans from the REMBRANDT dataset, consisting of 620 testing and 2480 training samples, were used for the research. The results demonstrate that the newly proposed method outperformed its predecessors. The proposed CRNN strategy was compared against BP, U-Net, and ResNet, three of the most prevalent classification approaches currently in use. For brain tumor classification, the proposed system achieved 98.17% accuracy, 91.34% specificity, and 98.79% sensitivity.
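The texture features behind LBGLCM build on the plain grey level co-occurrence matrix (GLCM), which counts how often grey level i occurs next to grey level j at a fixed offset. A minimal stdlib sketch of the basic GLCM on a toy 4-level image (the paper's local-binary variant adds an LBP preprocessing step not shown here):

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Grey level co-occurrence matrix: m[i][j] counts how often grey level j
    appears at offset (dx, dy) from a pixel with grey level i."""
    h, w = len(image), len(image[0])
    m = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[image[y][x]][image[ny][nx]] += 1
    return m

# Toy 4x4 image with 4 grey levels; count horizontal-neighbour co-occurrences
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
for row in glcm(img):
    print(row)
```

Statistics of this matrix (contrast, homogeneity, energy, correlation) then serve as the texture features fed to the classifier.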
Affiliation(s)
- Saravanan Srinivasan: Department of Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai 600062, India
- Sandeep Kumar Mathivanan: School of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, India
- Venkatesan Muthukumaran: Department of Mathematics, College of Engineering and Technology, SRM Institute of Science and Technology, Kattankulathur 603203, India
- Jyothi Chinna Babu: Department of Electronics and Communications Engineering, Annamacharya Institute of Technology and Sciences, Rajampet 516126, India
- Lucia Vilcekova: Faculty of Management, Comenius University Bratislava, Odbojarov 10, 820 05 Bratislava, Slovakia

22
Alturki N, Umer M, Ishaq A, Abuzinadah N, Alnowaiser K, Mohamed A, Saidani O, Ashraf I. Combining CNN Features with Voting Classifiers for Optimizing Performance of Brain Tumor Classification. Cancers (Basel) 2023; 15:1767. PMID: 36980653; PMCID: PMC10046217; DOI: 10.3390/cancers15061767.
Abstract
Brain tumors and other nervous system cancers are among the top ten leading fatal diseases. The effective treatment of brain tumors depends on their early detection. This research work makes use of 13 features with a voting classifier that combines logistic regression with stochastic gradient descent, using features extracted by deep convolutional layers, for the efficient classification of tumorous patients from normal ones. From the first- and second-order brain tumor features, deep convolutional features are extracted for model training. Using deep convolutional features helps to increase the precision of tumor and non-tumor patient classification. The proposed voting classifier along with the convolutional features produces results showing the highest accuracy of 99.9%. Compared to cutting-edge methods, the proposed approach demonstrates improved accuracy.
Affiliation(s)
- Nazik Alturki
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Muhammad Umer
- Department of Computer Science & Information Technology, The Islamia University of Bahawalpur, Bahawalpur 63100, Pakistan
- Abid Ishaq
- Department of Computer Science & Information Technology, The Islamia University of Bahawalpur, Bahawalpur 63100, Pakistan
- Nihal Abuzinadah
- Faculty of Computer Science and Information Technology, King Abdulaziz University, P.O. Box. 80200, Jeddah 21589, Saudi Arabia
- Khaled Alnowaiser
- Department of Computer Engineering, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- Abdullah Mohamed
- Research Centre, Future University in Egypt, New Cairo 11745, Egypt
- Oumaima Saidani
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Imran Ashraf
- Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea
- Correspondence:
23
Surianarayanan C, Lawrence JJ, Chelliah PR, Prakash E, Hewage C. Convergence of Artificial Intelligence and Neuroscience towards the Diagnosis of Neurological Disorders-A Scoping Review. SENSORS (BASEL, SWITZERLAND) 2023; 23:3062. [PMID: 36991773 PMCID: PMC10053494 DOI: 10.3390/s23063062] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/25/2023] [Revised: 03/09/2023] [Accepted: 03/09/2023] [Indexed: 06/19/2023]
Abstract
Artificial intelligence (AI) is a field of computer science that deals with the simulation of human intelligence using machines so that such machines gain problem-solving and decision-making capabilities similar to those of the human brain. Neuroscience is the scientific study of the structure and cognitive functions of the brain. Neuroscience and AI are mutually interrelated, and the two fields help each other advance. The theory of neuroscience has brought many distinct innovations into the AI field. The biological neural network has led to the realization of complex deep neural network architectures that are used to develop versatile applications, such as text processing, speech recognition, object detection, etc. Additionally, neuroscience helps to validate existing AI-based models. Reinforcement learning in humans and animals has inspired computer scientists to develop algorithms for reinforcement learning in artificial systems, which enables those systems to learn complex strategies without explicit instruction. Such learning helps in building complex applications, like robot-based surgery, autonomous vehicles, gaming applications, etc. In turn, with its ability to intelligently analyze complex data and extract hidden patterns, AI is a natural fit for analyzing neuroscience data, which are very complex. Large-scale AI-based simulations help neuroscientists test their hypotheses. Through an interface with the brain, an AI-based system can extract brain signals and the commands generated from those signals. These commands are fed into devices, such as a robotic arm, which helps in the movement of paralyzed muscles or other body parts. AI has several use cases in analyzing neuroimaging data and reducing the workload of radiologists. The study of neuroscience helps in the early detection and diagnosis of neurological disorders. In the same way, AI can effectively be applied to the prediction and detection of neurological disorders. Thus, in this paper, a scoping review has been carried out on the mutual relationship between AI and neuroscience, emphasizing the convergence between the two fields in order to detect and predict various neurological disorders.
Affiliation(s)
- Edmond Prakash
- Research Center for Creative Arts, University for the Creative Arts (UCA), Farnham GU9 7DS, UK
- Chaminda Hewage
- Cardiff School of Technologies, Cardiff Metropolitan University, Cardiff CF5 2YB, UK
24
Zhang Y, Nie R, Cao J, Ma C. Self-Supervised Fusion for Multi-Modal Medical Images via Contrastive Auto-Encoding and Convolutional Information Exchange. IEEE COMPUT INTELL M 2023. [DOI: 10.1109/mci.2022.3223487] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
25
CNN-Based Classification for Highly Similar Vehicle Model Using Multi-Task Learning. J Imaging 2022; 8:jimaging8110293. [DOI: 10.3390/jimaging8110293] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2022] [Revised: 10/12/2022] [Accepted: 10/16/2022] [Indexed: 11/05/2022] Open
Abstract
Vehicle make and model classification is crucial to the operation of an intelligent transportation system (ITS). Fine-grained vehicle information such as make and model can help officers uncover traffic violations when license plate information cannot be obtained. Various techniques have been developed to perform vehicle make and model classification. However, it is very hard to identify the make and model of vehicles with highly similar visual appearances: a classifier is prone to mistakes because vehicles of different makes and models can look very much alike. To solve this problem, a fine-grained classifier based on convolutional neural networks with a multi-task learning approach is proposed in this paper. The proposed method takes a vehicle image as input and extracts features using the VGG-16 architecture. The extracted features are then sent to two different branches, one used to classify the vehicle model and the other to classify the vehicle make. The performance of the proposed method was evaluated using the InaV-Dash dataset, which contains Indonesian vehicle models with highly similar visual appearances. The experimental results show that the proposed method achieves 98.73% accuracy for vehicle make and 97.69% accuracy for vehicle model. The study also demonstrates that the proposed method improves on the baseline for highly similar vehicle classification problems.
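The two-branch design described in this abstract can be sketched as follows. This is not the paper's code: a minimal PyTorch stand-in in which a small convolutional stack substitutes for the VGG-16 backbone, with one head for vehicle make and one for vehicle model; all class counts, names, and data are hypothetical.

```python
# Sketch of a multi-task classifier: one shared feature extractor,
# two classification heads (vehicle make and vehicle model).
import torch
import torch.nn as nn

class MultiTaskVehicleNet(nn.Module):
    def __init__(self, n_makes=5, n_models=20):
        super().__init__()
        self.backbone = nn.Sequential(          # stand-in for VGG-16 features
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.make_head = nn.Linear(32, n_makes)    # branch 1: vehicle make
        self.model_head = nn.Linear(32, n_models)  # branch 2: vehicle model

    def forward(self, x):
        feats = self.backbone(x)               # shared features feed both heads
        return self.make_head(feats), self.model_head(feats)

net = MultiTaskVehicleNet()
imgs = torch.randn(4, 3, 64, 64)               # dummy batch of vehicle images
make_logits, model_logits = net(imgs)

# Multi-task training would minimize the sum of the two cross-entropy terms:
targets = torch.zeros(4, dtype=torch.long)
loss = (nn.functional.cross_entropy(make_logits, targets)
        + nn.functional.cross_entropy(model_logits, targets))
print(make_logits.shape, model_logits.shape)
```

Sharing the backbone lets both tasks regularize each other, which is the usual motivation for multi-task learning on fine-grained classes.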