1. Zhu Q, Wang Q, Hu X, Dang X, Yu X, Chen L, Hu H. Differentiation of Early Sacroiliitis Using Machine-Learning-Supported Texture Analysis. Diagnostics (Basel) 2025; 15:209. PMID: 39857093; PMCID: PMC11763746; DOI: 10.3390/diagnostics15020209.
Abstract
Objectives: We aimed to compare the diagnostic performance of texture analysis (TA) against that of visual qualitative assessment in identifying early sacroiliitis (nr-axSpA). Methods: A total of 92 participants were retrospectively included at our university hospital: 30 controls and 62 patients with axSpA, comprising 32 with nr-axSpA and 30 with r-axSpA, all of whom underwent MR examination of the sacroiliac joints. 3T MRI of the lumbar spine and sacroiliac joints was performed using oblique coronal T1-weighted imaging (T1WI) and fluid-sensitive, fat-saturated T2-weighted imaging (FsT2WI). The modified New York criteria for ankylosing spondylitis (AS) were used. Patients were classified into the nr-axSpA group if digital radiography (DR) and/or CT within 7 days of the MR examination showed a grade < 2 for the bilateral sacroiliac joints or a grade < 3 for a unilateral sacroiliac joint. Patients were classified into the r-axSpA group if their DR and/or CT grade was 2 to 3 for the bilateral sacroiliac joints or 3 for a unilateral sacroiliac joint. Patients with a DR or CT grade of 4 for the sacroiliac joints were considered to have a confirmed diagnosis and were excluded. A control group of healthy individuals matched to the patients for age and sex was included. First, two readers independently and qualitatively scored the oblique coronal T1WI and FsT2WI non-enhanced sacroiliac joint images; their diagnostic performance was assessed and compared using assigned Likert scores, with a kappa test evaluating inter-reader consistency. Texture analysis models (the T1WI-TA model and the FsT2WI-TA model) were then constructed through feature extraction and feature screening. The qualitative and quantitative results were evaluated for their diagnostic performance against a clinical reference standard. Results: The qualitative scores of both readers significantly distinguished the healthy controls from the nr-axSpA group and the nr-axSpA from the r-axSpA group (both p < 0.05). Both TA models likewise significantly distinguished the healthy controls from the nr-axSpA group and the nr-axSpA from the r-axSpA group (both p < 0.05). There was no significant difference between the two TA models in differentiating the healthy controls from the nr-axSpA group (AUC: 0.934 vs. 0.976; p = 0.1838) or the nr-axSpA from the r-axSpA group (AUC: 0.917 vs. 0.848; p = 0.2592). In distinguishing the healthy controls from the nr-axSpA group, both TA models were superior to the qualitative scores of the two readers (all p < 0.05). In distinguishing the nr-axSpA from the r-axSpA group, the T1WI-TA model was superior to the qualitative scores of the two readers (p = 0.023 and p = 0.007), whereas the FsT2WI-TA model did not differ significantly from them (p = 0.134 and p = 0.065). Conclusions: Based on MR imaging, the T1WI-TA and FsT2WI-TA models were highly effective for the early diagnosis of sacroiliitis. The T1WI-TA model significantly improved early diagnostic efficacy compared with the readers' qualitative scores, while the FsT2WI-TA model performed comparably to the readers.
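
The TA workflow the abstract describes (feature extraction from ROIs, feature screening, then AUC comparison) can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes GLCM texture features and a logistic-regression classifier, and the synthetic ROIs stand in for real T1WI/FsT2WI sacroiliac-joint crops.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

PROPS = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation"]

def glcm_features(roi):
    """Rotation-averaged GLCM texture features for one 8-bit grayscale ROI."""
    glcm = graycomatrix(roi, distances=[1, 2],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).mean() for p in PROPS])

# Synthetic stand-ins: 40 ROIs, label 0 = healthy control, 1 = nr-axSpA.
rng = np.random.default_rng(0)
rois = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(40)]
y = np.repeat([0, 1], 20)

X = np.array([glcm_features(r) for r in rois])
probs = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=5, method="predict_proba")[:, 1]
print("cross-validated AUC:", roc_auc_score(y, probs))
```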
Affiliation(s)
- Qingqing Zhu
  - Department of Radiology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou 310016, China
- Qi Wang
  - Department of Radiology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou 310016, China
- Xi Hu
  - Department of Radiology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou 310016, China
- Xin Dang
  - Department of Rheumatology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou 310016, China
- Xiaojing Yu
  - Department of Radiology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou 310016, China
- Liye Chen
  - Department of Radiology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou 310016, China
- Hongjie Hu
  - Department of Radiology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou 310016, China

2. Ayikpa KJ, Ballo AB, Mamadou D, Gouton P. Optimization of Cocoa Pods Maturity Classification Using Stacking and Voting with Ensemble Learning Methods in RGB and LAB Spaces. J Imaging 2024; 10:327. PMID: 39728224; PMCID: PMC11727684; DOI: 10.3390/jimaging10120327.
Abstract
Determining cocoa pod maturity early is not only about guaranteeing harvest quality and optimizing yield; it is also about efficient resource management. Rapid identification of the maturity stage helps avoid losses from premature or late harvesting, since immature or overripe pods cannot produce premium cocoa beans. Our research harnesses artificial intelligence and computer vision to offer the cocoa industry precise tools for assessing pod maturity. An objective and rapid assessment enables farmers to make informed decisions about the optimal harvest time, helping to maximize plantation yield; by automating the process, these technologies also reduce the margin for human error and improve the management of agricultural resources. With this in mind, our study exploits a computer vision method based on the gray level co-occurrence matrix (GLCM) algorithm to extract image features in the RGB (red, green, blue) and LAB (L: lightness; A: green-red axis; B: blue-yellow axis) color spaces. This approach allows for the in-depth image analysis needed to capture the nuances of cocoa pod maturity. Next, we apply classification algorithms and identify the best performers. These algorithms are then combined via stacking and voting techniques, optimizing our model by exploiting the strengths of each method and thus yielding more robust and precise results. The combination of algorithms produced superior performance, especially in the LAB color space, where voting scored 98.49% and stacking 98.71%; in the RGB color space, voting scored 96.59% and stacking 97.06%. These results surpass those generally reported in the literature, showing the increased effectiveness of combined approaches in improving the accuracy of classification models and highlighting the importance of exploring ensemble techniques to maximize performance in complex contexts such as cocoa pod maturity classification.
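
A minimal sketch of the described pipeline: GLCM features computed per LAB channel feed a set of base classifiers that are combined by soft voting and by stacking. The base learners, feature choices, and synthetic images are illustrative assumptions; the paper's exact configuration may differ.

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier, VotingClassifier, StackingClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

PROPS = ["contrast", "homogeneity", "energy", "correlation"]

def lab_glcm_features(rgb):
    """GLCM features per LAB channel, each channel rescaled to 8-bit levels."""
    lab = rgb2lab(rgb)
    feats = []
    for c in range(3):
        ch = lab[..., c]
        ch = (255 * (ch - ch.min()) / (ch.max() - ch.min() + 1e-9)).astype(np.uint8)
        glcm = graycomatrix(ch, [1], [0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        feats += [graycoprops(glcm, p).mean() for p in PROPS]
    return np.array(feats)

# Synthetic stand-ins for cocoa-pod photos across 3 maturity classes.
rng = np.random.default_rng(0)
X = np.array([lab_glcm_features(rng.random((64, 64, 3))) for _ in range(60)])
y = np.repeat([0, 1, 2], 20)

base = [("rf", RandomForestClassifier()), ("svm", SVC(probability=True)),
        ("knn", KNeighborsClassifier())]
voting = VotingClassifier(base, voting="soft")
stacking = StackingClassifier(base, final_estimator=LogisticRegression(max_iter=1000))
print("voting   acc:", cross_val_score(voting, X, y, cv=3).mean())
print("stacking acc:", cross_val_score(stacking, X, y, cv=3).mean())
```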
Affiliation(s)
- Kacoutchy Jean Ayikpa
  - Laboratoire Imagerie et Vision Artificielle (ImVia), Université de Bourgogne, 21000 Dijon, France
  - Unité de Recherche et d’Expertise Numérique (UREN), Université Virtuelle de Côte d’Ivoire, Abidjan 28 BP 536, Côte d’Ivoire
- Abou Bakary Ballo
  - Laboratoire de Mécanique et Information (LaMI), Université Felix Houphouët-Boigny, Abidjan 22 BP 801, Côte d’Ivoire
- Diarra Mamadou
  - Laboratoire Imagerie et Vision Artificielle (ImVia), Université de Bourgogne, 21000 Dijon, France
  - Laboratoire de Mécanique et Information (LaMI), Université Felix Houphouët-Boigny, Abidjan 22 BP 801, Côte d’Ivoire
- Pierre Gouton
  - Laboratoire Imagerie et Vision Artificielle (ImVia), Université de Bourgogne, 21000 Dijon, France

3. ul Hassan M, Al-Awady AA, Ahmed N, Saeed M, Alqahtani J, Alahmari AMM, Javed MW. A transfer learning enabled approach for ocular disease detection and classification. Health Inf Sci Syst 2024; 12:36. PMID: 38868156; PMCID: PMC11164840; DOI: 10.1007/s13755-024-00293-8.
Abstract
Ocular diseases pose significant challenges for timely diagnosis and effective treatment. Deep learning has emerged as a promising technique in medical image analysis, offering potential solutions for accurately detecting and classifying ocular diseases. In this research, we propose Ocular Net, a novel deep learning model for detecting and classifying ocular diseases, including cataracts, diabetic eye disease, uveitis, and glaucoma, using a large dataset of ocular images. The study utilized an image dataset comprising 6200 images of both eyes of patients: 4000 images were allocated for model training and the remaining 2200 for testing. The dataset contains five categories: four diseases and one normal category. The proposed model uses transfer learning, average pooling layers, clipped ReLU, leaky ReLU, and various other layers to accurately detect ocular diseases from images. Our approach involves training the Ocular Net model on diverse ocular images and evaluating its accuracy and other performance metrics for disease detection. We also employ data augmentation techniques to improve model performance and mitigate overfitting. The proposed model is tested on different training and testing ratios with varied parameters. Additionally, we compare the performance of Ocular Net with previous methods across various evaluation parameters, assessing its potential to enhance the accuracy and efficiency of ocular disease diagnosis. The results demonstrate that Ocular Net achieves 98.89% accuracy and a loss value of 0.12 in detecting and classifying ocular diseases, outperforming existing methods.
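
Ocular Net itself is not publicly specified, so the sketch below only illustrates the generic recipe the abstract names: a frozen pretrained backbone (transfer learning), average pooling, leaky and clipped ReLU activations, and augmentation. The backbone choice (ResNet-18) and all layer sizes are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

class OcularHead(nn.Module):
    """Classification head combining LeakyReLU and a clipped ReLU (ReLU6-like)."""
    def __init__(self, in_features, num_classes=5):
        super().__init__()
        self.fc1 = nn.Linear(in_features, 256)
        self.leaky = nn.LeakyReLU(0.01)
        self.clipped = nn.Hardtanh(min_val=0.0, max_val=6.0)  # clipped ReLU
        self.fc2 = nn.Linear(256, num_classes)

    def forward(self, x):
        return self.fc2(self.clipped(self.leaky(self.fc1(x))))

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():                      # freeze pretrained features
    p.requires_grad = False
backbone.fc = OcularHead(backbone.fc.in_features)    # ResNet already ends in avg-pool

# Augmentation of the kind the abstract mentions to mitigate overfitting.
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2),
    transforms.ToTensor(),
])

logits = backbone(torch.randn(2, 3, 224, 224))       # smoke test
print(logits.shape)                                  # -> torch.Size([2, 5])
```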
Affiliation(s)
- Mahmood ul Hassan
  - Department of Computer Skills, Deanship of Preparatory Year, Najran University, Najran 61441, Kingdom of Saudi Arabia
- Amin A. Al-Awady
  - Department of Computer Skills, Deanship of Preparatory Year, Najran University, Najran 61441, Kingdom of Saudi Arabia
- Naeem Ahmed
  - Department of Computer Science, University of Engineering and Technology Taxila, Taxila, Pakistan
- Muhammad Saeed
  - Department of Computer Science, University of Engineering and Technology Taxila, Taxila, Pakistan
- Jarallah Alqahtani
  - Department of Computer Science, College of Computer Science and Information Systems, Najran University, Najran 61441, Kingdom of Saudi Arabia
- Ali Mousa Mohamed Alahmari
  - Department of Computer Skills, Deanship of Preparatory Year, Najran University, Najran 61441, Kingdom of Saudi Arabia
- Muhammad Wasim Javed
  - Department of Computer Science, Applied College Mohyail Asir, King Khalid University, Abha, Kingdom of Saudi Arabia

4. Al-Kadi OS, Al-Emaryeen R, Al-Nahhas S, Almallahi I, Braik R, Mahafza W. Empowering brain cancer diagnosis: harnessing artificial intelligence for advanced imaging insights. Rev Neurosci 2024; 35:399-419. PMID: 38291768; DOI: 10.1515/revneuro-2023-0115.
Abstract
Artificial intelligence (AI) is increasingly being used in the medical field, specifically for brain cancer imaging. In this review, we explore how AI-powered medical imaging can impact the diagnosis, prognosis, and treatment of brain cancer. We discuss various AI techniques, including deep learning and causality learning, and their relevance. Additionally, we examine current applications that provide practical solutions for detecting, classifying, segmenting, and registering brain tumors. Although challenges such as data quality, availability, interpretability, transparency, and ethics persist, we emphasise the enormous potential of intelligent applications in standardising procedures and enhancing personalised treatment, leading to improved patient outcomes. Innovative AI solutions have the power to revolutionise neuro-oncology by enhancing the quality of routine clinical practice.
Affiliation(s)
- Omar S Al-Kadi
  - King Abdullah II School for Information Technology, University of Jordan, Amman 11942, Jordan
- Roa'a Al-Emaryeen
  - King Abdullah II School for Information Technology, University of Jordan, Amman 11942, Jordan
- Sara Al-Nahhas
  - King Abdullah II School for Information Technology, University of Jordan, Amman 11942, Jordan
- Isra'a Almallahi
  - Department of Diagnostic Radiology, Jordan University Hospital, Amman 11942, Jordan
- Ruba Braik
  - Department of Diagnostic Radiology, Jordan University Hospital, Amman 11942, Jordan
- Waleed Mahafza
  - Department of Diagnostic Radiology, Jordan University Hospital, Amman 11942, Jordan

5. Asiri AA, Shaf A, Ali T, Pasha MA, Khan A, Irfan M, Alqahtani S, Alghamdi A, Alghamdi AH, Alshamrani AFA, Alelyani M, Alamri S. Advancing brain tumor detection: harnessing the Swin Transformer's power for accurate classification and performance analysis. PeerJ Comput Sci 2024; 10:e1867. PMID: 38435590; PMCID: PMC10909192; DOI: 10.7717/peerj-cs.1867.
Abstract
The accurate detection of brain tumors through medical imaging is paramount for precise diagnoses and effective treatment strategies. In this study, we introduce a robust methodology that capitalizes on the Swin Transformer architecture for brain tumor image classification. Our approach handles classification across four distinct categories: glioma, meningioma, non-tumor, and pituitary, leveraging a dataset comprising 2,870 images. Built on the Swin Transformer, our method integrates a pipeline encompassing preprocessing, feature extraction, and classification. We use 21 performance metrics for evaluation across all four classes; these metrics provide detailed insight into the model's behavior throughout the learning process, alongside graphical representations of the confusion matrix and the training and validation loss and accuracy curves. The model achieves an accuracy of 97%, outperforming established models such as CNN, DCNN, ViT, and their variants in brain tumor classification. The robustness and accuracy of our methodology showcase its potential in this domain, promising substantial advances in tumor identification and classification and contributing to the landscape of medical image analysis.
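
Fine-tuning a Swin Transformer for the four classes above follows a standard pattern; a minimal sketch using torchvision's swin_t is shown below. The optimizer settings and the synthetic batch are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.swin_t(weights=models.Swin_T_Weights.DEFAULT)
model.head = nn.Linear(model.head.in_features, 4)   # replace 1000-way ImageNet head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)

# One illustrative training step on a synthetic batch of 224x224 MRI slices.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 4, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"batch loss: {loss.item():.4f}")
```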
Affiliation(s)
- Abdullah A. Asiri
  - Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran, Saudi Arabia
- Ahmad Shaf
  - Department of Computer Science, COMSATS University Islamabad, Sahiwal Campus, Sahiwal, Pakistan
- Tariq Ali
  - Department of Computer Science, COMSATS University Islamabad, Sahiwal Campus, Sahiwal, Pakistan
- Muhammad Ahmad Pasha
  - Department of Computer Science, COMSATS University Islamabad, Sahiwal Campus, Sahiwal, Pakistan
- Aiza Khan
  - Department of Computer Science, COMSATS University Islamabad, Sahiwal Campus, Sahiwal, Pakistan
- Muhammad Irfan
  - Faculty of Electrical Engineering, Najran University, Najran, Saudi Arabia
- Saeed Alqahtani
  - Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran, Saudi Arabia
- Ahmad Alghamdi
  - Radiological Sciences Department, College of Applied Medical Sciences, Taif University, Taif, Saudi Arabia
- Ali H. Alghamdi
  - Department of Radiological Sciences, Faculty of Applied Medical Sciences, University of Tabuk, Tabuk, Saudi Arabia
- Abdullah Fahad A. Alshamrani
  - Department of Diagnostic Radiology Technology, College of Applied Medical Sciences, Taibah University, Taibah, Saudi Arabia
- Magbool Alelyani
  - Department of Radiological Sciences, College of Applied Medical Science, King Khalid University, Abha, Saudi Arabia
- Sultan Alamri
  - Radiological Sciences Department, College of Applied Medical Sciences, Taif University, Taif, Saudi Arabia

6. Çetin-Kaya Y, Kaya M. A Novel Ensemble Framework for Multi-Classification of Brain Tumors Using Magnetic Resonance Imaging. Diagnostics (Basel) 2024; 14:383. PMID: 38396422; PMCID: PMC10888105; DOI: 10.3390/diagnostics14040383.
Abstract
Brain tumors can have fatal consequences, affecting many body functions, so it is essential to detect tumor types accurately and at an early stage to start the appropriate treatment. Although convolutional neural networks (CNNs) are widely used for disease detection from medical images, they are prone to overfitting when trained on limited, insufficiently diverse labeled datasets. Existing studies use transfer learning and ensemble models to overcome these problems, but they offer little guidance on which models, and which weight ratios, to combine in an ensemble. With the framework proposed in this study, several CNN models with different architectures are trained with transfer learning and fine-tuning on three brain tumor datasets. A particle swarm optimization-based algorithm then determines the optimum weights for combining the five most successful CNN models into an ensemble. The results across the three datasets are as follows: Dataset 1, 99.35% accuracy and 99.20 F1-score; Dataset 2, 98.77% accuracy and 98.92 F1-score; and Dataset 3, 99.92% accuracy and 99.92 F1-score. These performances show that the proposed framework yields reliable classification and outperforms existing studies, offering clinicians enhanced decision-making support through high-accuracy classification.
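
The core idea, weighted soft voting over per-model class probabilities with the weights found by particle swarm optimization (PSO), can be sketched compactly. The probability arrays below are synthetic stand-ins for the outputs of the five fine-tuned CNNs, and the PSO hyperparameters are assumptions rather than the paper's values.

```python
import numpy as np

rng = np.random.default_rng(42)
n_models, n_samples, n_classes = 5, 200, 4
# Synthetic validation-set probabilities, shape (models, samples, classes).
probs = rng.dirichlet(np.ones(n_classes), (n_models, n_samples))
y_val = rng.integers(0, n_classes, n_samples)

def ensemble_acc(w):
    """Validation accuracy of the weighted soft-voting ensemble."""
    w = np.abs(w) / (np.abs(w).sum() + 1e-12)      # normalize to a convex mix
    fused = np.tensordot(w, probs, axes=1)         # (n_samples, n_classes)
    return (fused.argmax(1) == y_val).mean()

# Plain global-best PSO over the weight vector.
n_particles, iters = 30, 100
pos = rng.random((n_particles, n_models))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([ensemble_acc(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()
for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, n_models))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    fit = np.array([ensemble_acc(p) for p in pos])
    better = fit > pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    gbest = pbest[pbest_fit.argmax()].copy()

print("best ensemble validation accuracy:", ensemble_acc(gbest))
```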
Affiliation(s)
- Yasemin Çetin-Kaya
  - Department of Computer Engineering, Faculty of Engineering and Architecture, Tokat Gaziosmanpaşa University, Tokat 60250, Turkey
- Mahir Kaya
  - Department of Computer Engineering, Faculty of Engineering and Architecture, Tokat Gaziosmanpaşa University, Tokat 60250, Turkey

7. Khan F, Gulzar Y, Ayoub S, Majid M, Mir MS, Soomro AB. Least square-support vector machine based brain tumor classification system with multi model texture features. Front Appl Math Stat 2023; 9. DOI: 10.3389/fams.2023.1324054.
Abstract
Radiologists face formidable challenges in the intricate task of classifying brain tumors from MRI images. This manuscript introduces an effective methodology that combines Least Squares Support Vector Machines (LS-SVM) with Multi-Scale Morphological Texture Features (MMTF) extracted from T1-weighted MR images. The methodology was evaluated on a dataset of 139 cases, consisting of 119 tumor cases and 20 normal brain images. Our LS-SVM-based approach outperforms competing classifiers, achieving an accuracy of 98.97%: a 3.97% improvement over alternative methods, accompanied by a 2.48% enhancement in sensitivity and a 10% increase in specificity. These results surpass the classification accuracy of traditional classifiers such as Support Vector Machines (SVM), Radial Basis Function (RBF) networks, and Artificial Neural Networks (ANN). This performance represents a substantial step forward in brain tumor diagnosis, promising more precise and dependable tools for radiologists and healthcare professionals identifying and classifying brain tumors with MRI.
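
LS-SVM replaces the SVM's quadratic program with a single linear system. Since scikit-learn has no LS-SVM, the sketch below implements the standard least-squares formulation from scratch with an RBF kernel; the feature matrix is a synthetic stand-in for the paper's MMTF features, and the gamma/sigma values are assumptions.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y] for b and alpha."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                     # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return np.sign(rbf_kernel(X_new, X_train, sigma) @ alpha + b)

# Synthetic stand-in for MMTF texture features; labels in {-1, +1}.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 8)), rng.normal(2, 1, (30, 8))])
y = np.concatenate([-np.ones(30), np.ones(30)])
b, alpha = lssvm_fit(X, y)
print("training accuracy:", (lssvm_predict(X, b, alpha, X) == y).mean())
```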

8. Chen YY, Yu PN, Lai YC, Hsieh TC, Cheng DC. Bone Metastases Lesion Segmentation on Breast Cancer Bone Scan Images with Negative Sample Training. Diagnostics (Basel) 2023; 13:3042. PMID: 37835785; PMCID: PMC10572884; DOI: 10.3390/diagnostics13193042.
Abstract
The use of deep learning for the automatic detection and quantification of bone metastases in bone scan images holds significant clinical value, as a fast and accurate automated system for segmenting metastatic lesions can assist physicians in diagnosis. In this study, a small internal dataset comprising 100 breast cancer patients (90 with bone metastasis and 10 without) and 100 prostate cancer patients (50 with bone metastasis and 50 without) was used for model training. Initially, all image labels were binary; we used Otsu thresholding or negative mining to generate a non-metastasis mask, transforming the labels into three classes. We adopted the Double U-Net as the baseline model and changed its output activation function to SoftMax to accommodate multi-class segmentation. Several methods were used to enhance performance: background pre-processing to remove background information, adding negative samples to improve precision, and transfer learning to leverage features shared between the two datasets. Performance was investigated via 10-fold cross-validation and computed at the pixel level. The best model achieved a precision of 69.96%, a sensitivity of 63.55%, and an F1-score of 66.60%, representing improvements over the baseline of 8.40% in precision, 0.56% in sensitivity, and 4.33% in F1-score. Combined with bone skeleton segmentation, the developed system has the potential to provide pre-diagnostic reports to support physicians' final decisions and the calculation of the bone scan index (BSI).
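
Two of the abstract's ideas, deriving a three-class label from binary masks via Otsu thresholding and training a SoftMax multi-class segmentation output with cross-entropy, can be sketched as follows. The tiny convolutional head and synthetic scan are placeholders for the actual Double U-Net and real bone scan data.

```python
import numpy as np
import torch
import torch.nn as nn
from skimage.filters import threshold_otsu

def three_class_label(scan, lesion_mask):
    """0 = background, 1 = non-metastasis body region, 2 = metastatic lesion."""
    body = scan > threshold_otsu(scan)
    label = np.zeros(scan.shape, dtype=np.int64)
    label[body] = 1
    label[lesion_mask > 0] = 2
    return label

# Synthetic bone-scan stand-in with a crude "lesion" mask.
scan = np.random.rand(128, 128).astype(np.float32)
lesion = (scan > 0.95).astype(np.uint8)
target = torch.from_numpy(three_class_label(scan, lesion))

head = nn.Conv2d(16, 3, kernel_size=1)      # 3-channel logits for SoftMax classes
features = torch.randn(1, 16, 128, 128)     # stand-in for Double U-Net features
logits = head(features)                     # (1, 3, 128, 128)
loss = nn.CrossEntropyLoss()(logits, target.unsqueeze(0))
probs = logits.softmax(dim=1)               # per-pixel class probabilities
print("loss:", float(loss), "predicted classes:", probs.argmax(1).unique().tolist())
```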
Affiliation(s)
- Yi-You Chen
  - Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung 404, Taiwan
- Po-Nien Yu
  - Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung 404, Taiwan
- Yung-Chi Lai
  - Department of Nuclear Medicine, Feng Yuan Hospital, Ministry of Health and Welfare, Taichung 420, Taiwan
- Te-Chun Hsieh
  - Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung 404, Taiwan
  - Department of Nuclear Medicine and PET Center, China Medical University Hospital, Taichung 404, Taiwan
- Da-Chuan Cheng
  - Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung 404, Taiwan