1. ul Hassan M, Al-Awady AA, Ahmed N, Saeed M, Alqahtani J, Alahmari AMM, Javed MW. A transfer learning enabled approach for ocular disease detection and classification. Health Inf Sci Syst 2024; 12:36. [PMID: 38868156; PMCID: PMC11164840; DOI: 10.1007/s13755-024-00293-8]
Abstract
Ocular diseases pose significant challenges for timely diagnosis and effective treatment. Deep learning has emerged as a promising technique in medical image analysis, offering potential solutions for accurately detecting and classifying ocular diseases. In this research, we propose Ocular Net, a novel deep learning model for detecting and classifying ocular diseases, including Cataracts, Diabetic, Uveitis, and Glaucoma, using a large dataset of ocular images. The study utilized an image dataset comprising 6200 images of both eyes of patients; 70% of these images (4000 images) were allocated for model training and the remaining 30% (2200 images) for testing. The dataset spans five categories: four diseases and one normal category. The proposed model uses transfer learning, average pooling layers, Clipped ReLU, Leaky ReLU, and various other layers to accurately detect ocular diseases from images. Our approach involves training the novel Ocular Net model on diverse ocular images and evaluating its accuracy and other performance metrics for disease detection. We also employ data augmentation to improve model performance and mitigate overfitting. The proposed model is tested on different training and testing ratios with varied parameters, and we compare its performance with previous methods on various evaluation parameters, assessing its potential for enhancing the accuracy and efficiency of ocular disease diagnosis. The results demonstrate that Ocular Net achieves 98.89% accuracy and a 0.12% loss value in detecting and classifying ocular diseases, outperforming existing methods.
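The abstract names Clipped ReLU and Leaky ReLU among Ocular Net's layers but does not give their parameters; a minimal sketch of the two activations in plain Python follows, where the slope `alpha` and `ceiling` values are illustrative assumptions, not the authors' settings:

```python
def leaky_relu(x, alpha=0.01):
    # Leaky ReLU: passes positive inputs through unchanged and scales
    # negative inputs by a small slope instead of zeroing them, which
    # keeps a gradient flowing for negative activations.
    return x if x > 0 else alpha * x

def clipped_relu(x, ceiling=6.0):
    # Clipped ReLU: a standard ReLU whose output is capped at a ceiling,
    # bounding activations (often used for numerical stability).
    return min(max(0.0, x), ceiling)
```

For example, `leaky_relu(-2.0)` yields -0.02 rather than 0, and `clipped_relu(10.0)` saturates at 6.0.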
Affiliation(s)
- Mahmood ul Hassan
- Department of Computer Skills, Deanship of Preparatory Year, Najran University, Najran 61441, Kingdom of Saudi Arabia
- Amin A. Al-Awady
- Department of Computer Skills, Deanship of Preparatory Year, Najran University, Najran 61441, Kingdom of Saudi Arabia
- Naeem Ahmed
- Department of Computer Science, University of Engineering and Technology Taxila, Taxila, Pakistan
- Muhammad Saeed
- Department of Computer Science, University of Engineering and Technology Taxila, Taxila, Pakistan
- Jarallah Alqahtani
- Department of Computer Science, College of Computer Science and Information Systems, Najran University, Najran 61441, Kingdom of Saudi Arabia
- Ali Mousa Mohamed Alahmari
- Department of Computer Skills, Deanship of Preparatory Year, Najran University, Najran 61441, Kingdom of Saudi Arabia
- Muhammad Wasim Javed
- Department of Computer Science, Applied College Mohyail Asir, King Khalid University, Abha, Kingdom of Saudi Arabia
2. Al-Kadi OS, Al-Emaryeen R, Al-Nahhas S, Almallahi I, Braik R, Mahafza W. Empowering brain cancer diagnosis: harnessing artificial intelligence for advanced imaging insights. Rev Neurosci 2024; 35:399-419. [PMID: 38291768; DOI: 10.1515/revneuro-2023-0115]
Abstract
Artificial intelligence (AI) is increasingly being used in the medical field, specifically for brain cancer imaging. In this review, we explore how AI-powered medical imaging can impact the diagnosis, prognosis, and treatment of brain cancer. We discuss various AI techniques, including deep learning and causality learning, and their relevance. Additionally, we examine current applications that provide practical solutions for detecting, classifying, segmenting, and registering brain tumors. Although challenges such as data quality, availability, interpretability, transparency, and ethics persist, we emphasise the enormous potential of intelligent applications in standardising procedures and enhancing personalised treatment, leading to improved patient outcomes. Innovative AI solutions have the power to revolutionise neuro-oncology by enhancing the quality of routine clinical practice.
Affiliation(s)
- Omar S Al-Kadi
- King Abdullah II School for Information Technology, University of Jordan, Amman 11942, Jordan
- Roa'a Al-Emaryeen
- King Abdullah II School for Information Technology, University of Jordan, Amman 11942, Jordan
- Sara Al-Nahhas
- King Abdullah II School for Information Technology, University of Jordan, Amman 11942, Jordan
- Isra'a Almallahi
- Department of Diagnostic Radiology, Jordan University Hospital, Amman 11942, Jordan
- Ruba Braik
- Department of Diagnostic Radiology, Jordan University Hospital, Amman 11942, Jordan
- Waleed Mahafza
- Department of Diagnostic Radiology, Jordan University Hospital, Amman 11942, Jordan
3. Asiri AA, Shaf A, Ali T, Pasha MA, Khan A, Irfan M, Alqahtani S, Alghamdi A, Alghamdi AH, Alshamrani AFA, Alelyani M, Alamri S. Advancing brain tumor detection: harnessing the Swin Transformer's power for accurate classification and performance analysis. PeerJ Comput Sci 2024; 10:e1867. [PMID: 38435590; PMCID: PMC10909192; DOI: 10.7717/peerj-cs.1867]
Abstract
The accurate detection of brain tumors through medical imaging is paramount for precise diagnosis and effective treatment planning. In this study, we introduce a robust methodology that capitalizes on the Swin Transformer architecture for brain tumor image classification. Our approach classifies brain tumors into four categories: glioma, meningioma, non-tumor, and pituitary, using a dataset comprising 2,870 images. Built on the Swin Transformer, our method integrates a pipeline of preprocessing, feature extraction, and classification. Twenty-one evaluation metrics computed across all four classes provide detailed insight into the model's behavior throughout the learning process, complemented by graphical representations of the confusion matrix and of training and validation loss and accuracy. The standout performance measure, accuracy, stands at 97%, outperforming established models such as CNN, DCNN, ViT, and their variants in brain tumor classification. The robustness and accuracy of our methodology demonstrate its potential in this domain, promising substantial advancements in accurate tumor identification and classification and contributing significantly to medical image analysis.
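The Swin Transformer's defining mechanism, self-attention computed within non-overlapping windows that are cyclically shifted between successive blocks, can be illustrated with a minimal pure-Python sketch of the partitioning step; the 4x4 grid and window size 2 below are toy values for illustration, whereas the actual model operates on feature maps of learned embeddings:

```python
def window_partition(x, ws):
    # Split an H x W grid into non-overlapping ws x ws windows,
    # each returned as a flattened list (attention is computed
    # independently inside each window).
    H, W = len(x), len(x[0])
    windows = []
    for i in range(0, H, ws):
        for j in range(0, W, ws):
            windows.append([x[i + di][j + dj]
                            for di in range(ws) for dj in range(ws)])
    return windows

def cyclic_shift(x, s):
    # Roll the grid by s positions in both dimensions; partitioning the
    # shifted grid groups elements that previously sat in different
    # windows, letting information flow across window boundaries.
    H, W = len(x), len(x[0])
    return [[x[(i + s) % H][(j + s) % W] for j in range(W)]
            for i in range(H)]
```

On a 4x4 grid with window size 2, partitioning yields four windows of four elements each; shifting by one before partitioning changes which elements share a window, which is the mechanism that connects neighboring windows in alternating blocks.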
Affiliation(s)
- Abdullah A. Asiri
- Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran, Saudi Arabia
- Ahmad Shaf
- Department of Computer Science, COMSATS University Islamabad, Sahiwal Campus, Sahiwal, Pakistan
- Tariq Ali
- Department of Computer Science, COMSATS University Islamabad, Sahiwal Campus, Sahiwal, Pakistan
- Muhammad Ahmad Pasha
- Department of Computer Science, COMSATS University Islamabad, Sahiwal Campus, Sahiwal, Pakistan
- Aiza Khan
- Department of Computer Science, COMSATS University Islamabad, Sahiwal Campus, Sahiwal, Pakistan
- Muhammad Irfan
- Faculty of Electrical Engineering, Najran University, Najran, Saudi Arabia
- Saeed Alqahtani
- Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran, Saudi Arabia
- Ahmad Alghamdi
- Radiological Sciences Department, College of Applied Medical Sciences, Taif University, Taif, Saudi Arabia
- Ali H. Alghamdi
- Department of Radiological Sciences, Faculty of Applied Medical Sciences, University of Tabuk, Tabuk, Saudi Arabia
- Abdullah Fahad A. Alshamrani
- Department of Diagnostic Radiology Technology, College of Applied Medical Sciences, Taibah University, Taibah, Saudi Arabia
- Magbool Alelyani
- Department of Radiological Sciences, College of Applied Medical Science, King Khalid University, Abha, Saudi Arabia
- Sultan Alamri
- Radiological Sciences Department, College of Applied Medical Sciences, Taif University, Taif, Saudi Arabia
4. Çetin-Kaya Y, Kaya M. A Novel Ensemble Framework for Multi-Classification of Brain Tumors Using Magnetic Resonance Imaging. Diagnostics (Basel) 2024; 14:383. [PMID: 38396422; PMCID: PMC10888105; DOI: 10.3390/diagnostics14040383]
Abstract
Brain tumors can have fatal consequences, affecting many body functions. It is therefore essential to detect brain tumor types accurately and at an early stage so that appropriate treatment can begin. Although convolutional neural networks (CNNs) are widely used for disease detection from medical images, they are prone to overfitting when trained on limited, insufficiently diverse labeled datasets. Existing studies use transfer learning and ensemble models to overcome these problems, but an examination of these studies reveals a lack of guidance on which models and weight ratios to combine in an ensemble. With the framework proposed in this study, several CNN models with different architectures are trained with transfer learning and fine-tuning on three brain tumor datasets. A particle swarm optimization-based algorithm then determines the optimum weights for combining the five most successful CNN models into an ensemble. The results across the three datasets are as follows: Dataset 1, 99.35% accuracy and 99.20 F1-score; Dataset 2, 98.77% accuracy and 98.92 F1-score; and Dataset 3, 99.92% accuracy and 99.92 F1-score. These consistent results across three brain tumor datasets show that the proposed framework is reliable for classification. As a result, the proposed framework outperforms existing studies, offering clinicians enhanced decision-making support through its high-accuracy classification performance.
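The framework's core step, using particle swarm optimization (PSO) to find weights for combining per-model class probabilities, can be sketched in pure Python. The inertia and acceleration constants, particle and iteration counts, and the simplex projection below are illustrative assumptions, not the authors' settings:

```python
import random

def ensemble_predict(probs_list, weights):
    # Weighted-average each model's per-class probabilities, then argmax.
    n, k = len(probs_list[0]), len(probs_list[0][0])
    preds = []
    for i in range(n):
        row = [sum(w * p[i][c] for w, p in zip(weights, probs_list))
               for c in range(k)]
        preds.append(row.index(max(row)))
    return preds

def accuracy(pred, true):
    return sum(p == t for p, t in zip(pred, true)) / len(true)

def pso_ensemble_weights(probs_list, labels, n_particles=15, iters=40, seed=0):
    # Search for non-negative weights summing to 1 that maximize
    # validation accuracy, using a basic PSO velocity/position update.
    rng = random.Random(seed)
    m = len(probs_list)

    def norm(w):  # clip to non-negative and renormalize onto the simplex
        w = [max(v, 0.0) for v in w]
        s = sum(w) or 1.0
        return [v / s for v in w]

    pos = [norm([rng.random() for _ in range(m)]) for _ in range(n_particles)]
    vel = [[0.0] * m for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pfit = [accuracy(ensemble_predict(probs_list, p), labels) for p in pos]
    g = pfit.index(max(pfit))
    gbest, gfit = pbest[g][:], pfit[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(m):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            pos[i] = norm(pos[i])
            fit = accuracy(ensemble_predict(probs_list, pos[i]), labels)
            if fit > pfit[i]:
                pbest[i], pfit[i] = pos[i][:], fit
                if fit > gfit:
                    gbest, gfit = pos[i][:], fit
    return gbest, gfit
```

With two toy models, one reliable and one not, the search quickly concentrates weight on the reliable model; in the real framework the fitness would be evaluated on held-out validation predictions from the five fine-tuned CNNs.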
Affiliation(s)
- Yasemin Çetin-Kaya
- Department of Computer Engineering, Faculty of Engineering and Architecture, Tokat Gaziosmanpaşa University, Tokat 60250, Turkey
- Mahir Kaya
- Department of Computer Engineering, Faculty of Engineering and Architecture, Tokat Gaziosmanpaşa University, Tokat 60250, Turkey
5. Chen YY, Yu PN, Lai YC, Hsieh TC, Cheng DC. Bone Metastases Lesion Segmentation on Breast Cancer Bone Scan Images with Negative Sample Training. Diagnostics (Basel) 2023; 13:3042. [PMID: 37835785; PMCID: PMC10572884; DOI: 10.3390/diagnostics13193042]
Abstract
The use of deep learning methods for the automatic detection and quantification of bone metastases in bone scan images holds significant clinical value, and a fast, accurate automated system for segmenting bone metastatic lesions can assist clinical physicians in diagnosis. In this study, a small internal dataset comprising 100 breast cancer patients (90 cases of bone metastasis and 10 cases of non-metastasis) and 100 prostate cancer patients (50 cases of bone metastasis and 50 cases of non-metastasis) was used for model training. Initially, all image labels were binary; we used the Otsu thresholding method or negative mining to generate a non-metastasis mask, thereby transforming the labels into three classes. We adopted the Double U-Net as the baseline model and changed its output activation function to SoftMax to accommodate multi-class segmentation. Several methods were used to enhance model performance: background pre-processing to remove background information, adding negative samples to improve precision, and transfer learning to leverage features shared between the two datasets. Performance was investigated via 10-fold cross-validation and computed at the pixel level. The best model achieved a precision of 69.96%, a sensitivity of 63.55%, and an F1-score of 66.60%, improvements over the baseline model of 8.40% in precision, 0.56% in sensitivity, and 4.33% in F1-score. Combined with bone skeleton segmentation, the developed system has the potential to provide pre-diagnostic reports that support physicians' final decisions and the calculation of the bone scan index (BSI).
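The non-metastasis mask generation step relies on Otsu thresholding, which selects the gray level that maximizes the between-class variance of the resulting foreground/background split. A minimal pure-Python version for 8-bit intensities is sketched below; the surrounding bone-scan pipeline details remain the authors':

```python
def otsu_threshold(pixels, levels=256):
    # Build an intensity histogram, then sweep every candidate threshold
    # and keep the one maximizing between-class variance
    # w_b * w_f * (mean_b - mean_f)^2.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_b = sum_b = best_t = 0
    best_var = -1.0
    for t in range(levels):
        w_b += hist[t]          # background pixel count at threshold t
        if w_b == 0:
            continue
        w_f = total - w_b       # foreground pixel count
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b
        m_f = (sum_all - sum_b) / w_f
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a cleanly bimodal intensity distribution the returned level separates the two modes, which is the property the mask-generation step exploits to split non-metastatic background from candidate lesion regions.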
Affiliation(s)
- Yi-You Chen
- Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung 404, Taiwan
- Po-Nien Yu
- Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung 404, Taiwan
- Yung-Chi Lai
- Department of Nuclear Medicine, Feng Yuan Hospital, Ministry of Health and Welfare, Taichung 420, Taiwan
- Te-Chun Hsieh
- Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung 404, Taiwan
- Department of Nuclear Medicine and PET Center, China Medical University Hospital, Taichung 404, Taiwan
- Da-Chuan Cheng
- Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung 404, Taiwan