1.
Liu T, Mao Y, Dou H, Zhang W, Yang J, Wu P, Li D, Mu X. Emerging Wearable Acoustic Sensing Technologies. Advanced Science (Weinheim, Baden-Wurttemberg, Germany) 2025:e2408653. PMID: 39749384. DOI: 10.1002/advs.202408653.
Abstract
Sound signals not only serve as the primary medium of human communication but also find application in fields such as medical diagnosis and fault detection. With public healthcare resources increasingly under pressure, and with the daily challenges faced by disabled individuals, solutions that enable low-cost private healthcare hold considerable promise. Acoustic methods have been widely studied because of their lower technical complexity compared to other medical solutions and the human body's high safety threshold for acoustic energy. Furthermore, with recent advances in artificial intelligence applied to speech recognition, devices and systems that help disabled individuals interact with their surroundings are constantly being updated. This review summarizes the sensing mechanisms, materials, structural designs, and multidisciplinary applications of wearable acoustic devices for human health and human-computer interaction. It then examines the advantages and disadvantages of the approaches used in flexible acoustic devices across these fields. Finally, the current challenges and a roadmap for future research are analyzed on the basis of existing progress toward more comprehensive and personalized healthcare.
Affiliation(s)
- Tao Liu, Yuchen Mao, Hanjie Dou, Wangyang Zhang, Jiaqian Yang, Pengfan Wu, Dongxiao Li, Xiaojing Mu: Key Laboratory of Optoelectronic Technology & Systems of Ministry of Education, International R&D Center of Micro-Nano Systems and New Materials Technology, Chongqing University, Chongqing 400044, China
2.
Yahiaoui ME, Derdour M, Abdulghafor R, Turaev S, Gasmi M, Bennour A, Aborujilah A, Sarem MA. Federated Learning with Privacy Preserving for Multi-Institutional Three-Dimensional Brain Tumor Segmentation. Diagnostics (Basel) 2024; 14:2891. PMID: 39767253. PMCID: PMC11675895. DOI: 10.3390/diagnostics14242891.
Abstract
BACKGROUND AND OBJECTIVES: Brain tumors are complex diseases that require careful diagnosis and treatment; even a minor diagnostic error can have significant consequences, so accurate identification is paramount. However, deep learning (DL) models often struggle to obtain sufficient medical imaging data because legal, privacy, and technical barriers hinder data sharing between institutions. This study implements a federated learning (FL) approach with privacy-preserving techniques (PPTs) to segment brain tumor lesions in a distributed and privacy-aware manner. METHODS: The approach employs a 3D U-Net model trained with federated learning on the BraTS 2020 dataset. PPTs such as differential privacy are included to ensure data confidentiality while managing privacy and heterogeneity challenges with minimal communication overhead. Model performance is measured with Dice similarity coefficients (DSCs) and 95th-percentile Hausdorff distances (HD95) over the tumor target regions: whole tumor (WT), tumor core (TC), and enhancing tumor core (ET). RESULTS: In the validation phase, the partial federated model achieved DSCs of 86.1%, 83.3%, and 79.8%, with HD95 values of 25.3 mm, 8.61 mm, and 9.16 mm for WT, TC, and ET, respectively. On the final test set, the model improved to DSCs of 89.85%, 87.55%, and 86.6%, with HD95 values of 22.95 mm, 8.68 mm, and 8.32 mm for WT, TC, and ET, respectively, indicating the effectiveness of the segmentation approach and its privacy preservation. CONCLUSION: This study presents a highly competitive, collaborative federated learning model with PPTs that segments brain tumor lesions without compromising patient data confidentiality. Future work will improve model generalizability and extend the framework to other medical imaging tasks.
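The Dice similarity coefficient used here measures the overlap between predicted and ground-truth segmentation masks; a minimal stand-alone sketch (flat 0/1 lists standing in for 3D voxel masks, not the study's code):

```python
def dice(pred, target):
    """Dice similarity coefficient for two flat binary masks (lists of 0/1).

    DSC = 2 * |P intersect T| / (|P| + |T|); 1.0 is perfect overlap, 0.0 none.
    """
    inter = sum(1 for p, t in zip(pred, target) if p and t)
    denom = sum(pred) + sum(target)
    return 2.0 * inter / denom if denom else 1.0  # two empty masks agree

# Toy example: 4 predicted voxels, 4 true voxels, 3 overlapping.
pred   = [1, 1, 1, 1, 0, 0, 0, 0]
target = [0, 1, 1, 1, 1, 0, 0, 0]
print(dice(pred, target))  # 2*3 / (4+4) = 0.75
```

Values near 1.0, like the ~0.86-0.90 DSCs reported above, indicate close agreement between predicted and reference tumor regions.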
Affiliation(s)
- Mohammed Elbachir Yahiaoui: Mathematics, Informatics and Systems Laboratory (LAMIS), University of Echahid Cheikh Larbi Tebessi, Tebessa 12000, Algeria
- Makhlouf Derdour: Artificial Intelligence and Autonomous Things Laboratory (LIAOA), University of Oum el Bouaghi, Oum El Bouaghi 04000, Algeria
- Rawad Abdulghafor: Faculty of Computer Studies (FCS), Arab Open University (Oman), Muscat 130, Oman
- Sherzod Turaev: Department of Computer Science and Software Engineering, College of Information Technology, United Arab Emirates University, Al Ain 15551, United Arab Emirates
- Mohamed Gasmi: Mathematics, Informatics and Systems Laboratory (LAMIS), University of Echahid Cheikh Larbi Tebessi, Tebessa 12000, Algeria
- Akram Bennour: Mathematics, Informatics and Systems Laboratory (LAMIS), University of Echahid Cheikh Larbi Tebessi, Tebessa 12000, Algeria
- Abdulaziz Aborujilah: Department of Management Information System, College of Commerce & Business Administration, Dhofar University, Salalah 211, Oman
- Mohamed Al Sarem: Department of Information Technology, Aylol University College, Yarim 547, Yemen
3.
Adamu MJ, Kawuwa HB, Qiang L, Nyatega CO, Younis A, Fahad M, Dauya SS. Efficient and Accurate Brain Tumor Classification Using Hybrid MobileNetV2-Support Vector Machine for Magnetic Resonance Imaging Diagnostics in Neoplasms. Brain Sci 2024; 14:1178. PMID: 39766377. PMCID: PMC11674380. DOI: 10.3390/brainsci14121178.
Abstract
BACKGROUND/OBJECTIVES: Magnetic Resonance Imaging (MRI) plays a vital role in brain tumor diagnosis by providing clear visualization of soft tissues without ionizing radiation. Given the increasing incidence of brain tumors, there is an urgent need for reliable diagnostic tools, as misdiagnoses can lead to harmful treatment decisions and poor outcomes. While machine learning has significantly advanced medical diagnostics, achieving both high accuracy and computational efficiency remains a critical challenge. METHODS: This study proposes a hybrid model that integrates MobileNetV2 for feature extraction with a Support Vector Machine (SVM) classifier for the classification of brain tumors. The model was trained and validated on the Kaggle MRI brain tumor dataset, which includes 7023 images categorized into four types: glioma, meningioma, pituitary tumor, and no tumor. MobileNetV2's efficient architecture was leveraged for feature extraction, and the SVM was used to enhance classification accuracy. RESULTS: The proposed hybrid model achieved Area Under the Curve (AUC) scores of 0.99 for glioma, 0.97 for meningioma, and 1.0 for both pituitary tumors and the no-tumor class. These findings highlight that the MobileNetV2-SVM hybrid not only improves classification accuracy but also reduces computational overhead, making it suitable for broader clinical use. CONCLUSIONS: The MobileNetV2-SVM hybrid model demonstrates substantial potential for enhancing brain tumor diagnostics by balancing precision and computational efficiency. Its ability to maintain high accuracy while operating efficiently could improve outcomes in medical practice, particularly in resource-limited settings.
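The hybrid design separates representation learning from classification: a frozen backbone maps each image to a feature vector, and a classical classifier is then fit on those vectors. The sketch below shows only the pipeline shape; the stub extractor and the nearest-centroid classifier stand in for MobileNetV2 and the SVM, and all names and toy "images" are hypothetical:

```python
def extract_features(image):
    """Stub standing in for a frozen MobileNetV2 backbone: maps a 2D image
    (list of pixel rows) to a small summary vector (mean, max, min)."""
    flat = [px for row in image for px in row]
    return [sum(flat) / len(flat), max(flat), min(flat)]

def fit_centroids(features, labels):
    """Stage 2: a nearest-centroid classifier standing in for the SVM."""
    groups = {}
    for f, y in zip(features, labels):
        groups.setdefault(y, []).append(f)
    return {y: [sum(c) / len(c) for c in zip(*fs)] for y, fs in groups.items()}

def predict(centroids, feature):
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda y: dist(centroids[y], feature))

# Toy 2x2 "images": bright images labelled "tumor", dark ones "no_tumor".
train = [([[9, 8], [9, 7]], "tumor"), ([[1, 0], [2, 1]], "no_tumor"),
         ([[8, 9], [7, 8]], "tumor"), ([[0, 1], [1, 0]], "no_tumor")]
feats = [extract_features(img) for img, _ in train]
model = fit_centroids(feats, [y for _, y in train])
print(predict(model, extract_features([[9, 9], [8, 8]])))  # tumor
```

The design point is that the expensive network runs once per image to produce features, while the lightweight classical classifier does the final decision, which is what keeps the computational overhead low.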
Affiliation(s)
- Mohammed Jajere Adamu: Department of Electronic Science and Technology, School of Microelectronics, Tianjin University, Tianjin 300072, China; Department of Computer Science, Yobe State University, Damaturu 600213, Nigeria; Center for Distance and Online Education, Lovely Professional University, Phagwara 144411, India
- Halima Bello Kawuwa: Department of Biomedical Engineering, School of Precision Instruments and Opto-Electronics Engineering, Tianjin University, Tianjin 300072, China
- Li Qiang: Department of Electronic Science and Technology, School of Microelectronics, Tianjin University, Tianjin 300072, China
- Charles Okanda Nyatega: Department of Electronic Science and Technology, School of Microelectronics, Tianjin University, Tianjin 300072, China; Department of Electronics and Telecommunication Engineering, Mbeya University of Science and Technology, Mbeya P.O. Box 131, Tanzania
- Ayesha Younis: Department of Electronic Science and Technology, School of Microelectronics, Tianjin University, Tianjin 300072, China
- Muhammad Fahad: School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Salisu Samaila Dauya: Department of Computer Science, Yobe State University, Damaturu 600213, Nigeria
4.
Alharbi H, Sampedro GA, Juanatas RA, Lim SJ. Enhanced skin cancer diagnosis: a deep feature extraction-based framework for the multi-classification of skin cancer utilizing dermoscopy images. Front Med (Lausanne) 2024; 11:1495576. PMID: 39606634. PMCID: PMC11601079. DOI: 10.3389/fmed.2024.1495576.
Abstract
Skin cancer is one of the most common, deadly, and widespread cancers worldwide, and early detection can reduce death rates. A dermatologist or primary care physician can use a dermatoscope to inspect a patient and diagnose skin disorders visually; to confirm the diagnosis and determine the most appropriate course of therapy, patients should then undergo a biopsy and histological evaluation. Significant advancements have been made recently, with the accuracy of automated deep learning systems for skin cancer categorization matching that of dermatologists. Though progress has been made, there is still no widely accepted, clinically reliable method for diagnosing skin cancer. This article presents four variants of the Convolutional Neural Network (CNN) model (original CNN, CNN without batch normalization, CNN with few filters, and strided CNN) for the classification and prediction of skin cancer in lesion images, with the aim of helping physicians in their diagnosis. Further, it presents the hybrid models CNN-Support Vector Machine (CNNSVM), CNN-Random Forest (CNNRF), and CNN-Logistic Regression (CNNLR), using a grid search for the best parameters. Exploratory Data Analysis (EDA) and random oversampling are performed to normalize and balance the data. The CNN models (original CNN, strided CNN, and CNNSVM) obtained an accuracy rate of 98%, while CNNRF and CNNLR obtained 99% for skin cancer prediction on the HAM10000 dataset of 10,015 dermoscopic images. These encouraging outcomes demonstrate the effectiveness of the proposed method and show that further improving skin cancer diagnosis requires including the patient's metadata with the lesion image.
Affiliation(s)
- Hadeel Alharbi: College of Computer Science and Engineering, University of Hail, Ha'il, Saudi Arabia
- Roben A. Juanatas: College of Computing and Information Technologies, National University, Manila, Philippines
- Se-jung Lim: School of Electrical and Computer Engineering, Yeosu Campus, Chonnam National University, Gwangju, Republic of Korea
5.
Sathya R, Mahesh TR, Bhatia Khan S, Malibari AA, Asiri F, Rehman AU, Malwi WA. Employing Xception convolutional neural network through high-precision MRI analysis for brain tumor diagnosis. Front Med (Lausanne) 2024; 11:1487713. PMID: 39606635. PMCID: PMC11601128. DOI: 10.3389/fmed.2024.1487713.
Abstract
The classification of brain tumors from medical imaging is pivotal for accurate diagnosis but remains challenging due to the intricate morphologies of tumors and the precision required. Existing methodologies, including manual MRI evaluation and computer-assisted systems, primarily utilize conventional machine learning and pre-trained deep learning models. These systems often suffer from overfitting on modest medical imaging datasets, exhibit limited generalizability on unseen data, and impose substantial computational demands that hinder real-time application. To enhance diagnostic accuracy and reliability, this research introduces an advanced model based on the Xception architecture, enriched with additional batch normalization and dropout layers to mitigate overfitting. The model is further refined by leveraging large-scale data through transfer learning and by employing a customized dense-layer setup tailored to distinguish between meningioma, glioma, and pituitary tumor categories. This hybrid method capitalizes on the strengths of pre-trained network features while adapting training to the target dataset, improving the model's generalization across different imaging conditions. Demonstrating a marked improvement in diagnostic performance, the proposed model achieves a classification accuracy of 98.039% on the test dataset, with precision and recall above 96% for all categories. These results underscore the model's potential as a reliable diagnostic tool in clinical settings, significantly surpassing existing diagnostic protocols for brain tumors.
Affiliation(s)
- R. Sathya: Department of Computer Science and Engineering, SRM Institute of Science and Technology, Ramapuram, Chennai, India
- T. R. Mahesh: Department of Computer Science and Engineering, JAIN (Deemed-to-be University), Bengaluru, India
- Surbhi Bhatia Khan: School of Science, Engineering and Environment, University of Salford, Manchester, United Kingdom; Centre for Research Impact and Outcome, Chitkara University, Chandigarh, Punjab, India
- Areej A. Malibari: Department of Industrial and Systems Engineering, College of Engineering, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Fatima Asiri: College of Computer Science, Informatics and Computer Systems Department, King Khalid University, Abha, Saudi Arabia
- Attique ur Rehman: Suleman Dawood School of Business, Lahore University of Management Sciences, Lahore, Pakistan
- Wajdan Al Malwi: College of Computer Science, Informatics and Computer Systems Department, King Khalid University, Abha, Saudi Arabia
6.
Ullah Z, Jamjoom M, Thirumalaisamy M, Alajmani SH, Saleem F, Sheikh-Akbari A, Khan UA. A Deep Learning Based Intelligent Decision Support System for Automatic Detection of Brain Tumor. Biomed Eng Comput Biol 2024; 15:11795972241277322. PMID: 39238891. PMCID: PMC11375672. DOI: 10.1177/11795972241277322.
Abstract
Brain tumor (BT) is a devastating disease and one of the foremost causes of death in human beings. BT develops mainly in two stages, varies by volume, form, and structure, and can be treated with special clinical procedures such as chemotherapy, radiotherapy, and surgical intervention. With revolutionary advancements in radiomics and medical imaging research in the past few years, computer-aided diagnostic (CAD) systems, especially deep learning, have played a key role in the automatic detection and diagnosis of various diseases and have provided accurate decision support for medical clinicians. The convolutional neural network (CNN) is a commonly utilized methodology for detecting diseases from medical images because it can extract distinct features from the image under investigation. In this study, a deep learning approach is utilized to extract distinct features from brain images in order to detect BT: a CNN trained from scratch and transfer learning models (VGG-16, VGG-19, and LeNet-5) are developed and tested on brain images to build an intelligent decision support system for detecting BT. Since deep learning models require large volumes of data, data augmentation is used to expand the existing dataset synthetically so that the best-fitting detection models can be trained. Hyperparameter tuning was conducted to set the optimum parameters for training the models. The achieved results show that the VGG models outperformed the others with an accuracy rate of 99.24%, average precision of 99%, average recall of 99%, average specificity of 99%, and average F1-score of 99%. Compared to other state-of-the-art models in the literature, the proposed models perform better in terms of accuracy, sensitivity, specificity, and F1-score. Moreover, comparative analysis shows that the proposed models are reliable and can be used for detecting BT as well as helping medical practitioners diagnose it.
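The synthetic augmentation step described above can be illustrated generically. The abstract does not specify which transforms were used, so the flip and rotation below are assumptions; the point is that each label-preserving transform yields an extra training image at no labeling cost:

```python
def hflip(image):
    """Horizontal flip of a 2D image stored as a list of rows."""
    return [list(reversed(row)) for row in image]

def rot90(image):
    """Rotate a 2D image 90 degrees clockwise."""
    return [list(row) for row in zip(*image[::-1])]

def augment(image):
    """One original image -> four variants (identity, flip, two rotations)."""
    return [image, hflip(image), rot90(image), rot90(rot90(image))]

img = [[1, 2],
       [3, 4]]
print(len(augment(img)))  # 4 training samples from one labelled image
```

In practice frameworks apply such transforms on the fly during training rather than materializing the enlarged dataset.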
Affiliation(s)
- Zahid Ullah: Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University, Riyadh, Saudi Arabia
- Mona Jamjoom: Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Samah H Alajmani: Department of Information Technology, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
- Farrukh Saleem: School of Built Environment, Engineering, and Computing, Leeds Beckett University, Leeds, UK
- Akbar Sheikh-Akbari: School of Built Environment, Engineering, and Computing, Leeds Beckett University, Leeds, UK
- Usman Ali Khan: Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
7.
Zahoor MM, Khan SH, Alahmadi TJ, Alsahfi T, Mazroa ASA, Sakr HA, Alqahtani S, Albanyan A, Alshemaimri BK. Brain Tumor MRI Classification Using a Novel Deep Residual and Regional CNN. Biomedicines 2024; 12:1395. PMID: 39061969. PMCID: PMC11274019. DOI: 10.3390/biomedicines12071395.
Abstract
Brain tumor classification is essential for clinical diagnosis and treatment planning. Deep learning models have shown great promise in this task, but they are often challenged by the complex and diverse nature of brain tumors. To address this challenge, we propose a novel deep residual and region-based convolutional neural network (CNN) architecture, called Res-BRNet, for brain tumor classification using magnetic resonance imaging (MRI) scans. Res-BRNet employs a systematic combination of regional and boundary-based operations within modified spatial and residual blocks. The spatial blocks extract homogeneity, heterogeneity, and boundary-related features of brain tumors, while the residual blocks significantly capture local and global texture variations. We evaluated the performance of Res-BRNet on a challenging dataset collected from Kaggle repositories, Br35H, and figshare, containing various tumor categories, including meningioma, glioma, pituitary, and healthy images. Res-BRNet outperformed standard CNN models, achieving excellent accuracy (98.22%), sensitivity (0.9811), F1-score (0.9841), and precision (0.9822). Our results suggest that Res-BRNet is a promising tool for brain tumor classification, with the potential to improve the accuracy and efficiency of clinical diagnosis and treatment planning.
Affiliation(s)
- Mirza Mumtaz Zahoor: Faculty of Computer Sciences, Ibadat International University, Islamabad 44000, Pakistan
- Saddam Hussain Khan: Department of Computer System Engineering, University of Engineering and Applied Science (UEAS), Swat 19060, Pakistan
- Tahani Jaser Alahmadi: Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Tariq Alsahfi: Department of Information Systems and Technology, College of Computer Science and Engineering, University of Jeddah, Jeddah 21959, Saudi Arabia
- Alanoud S. Al Mazroa: Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Hesham A. Sakr: Nile Higher Institute for Engineering and Technology, Mansoura 35511, Dakahlia, Egypt
- Saeed Alqahtani: Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran 61441, Saudi Arabia
- Abdullah Albanyan: College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj 16278, Saudi Arabia
8.
Zhang J, Guo J, Lu D, Cao Y. ASD-SWNet: a novel shared-weight feature extraction and classification network for autism spectrum disorder diagnosis. Sci Rep 2024; 14:13696. PMID: 38871844. DOI: 10.1038/s41598-024-64299-8.
Abstract
The traditional diagnostic process for autism spectrum disorder (ASD) is subjective, where early and accurate diagnosis significantly affects treatment outcomes and life quality. Thus, improving ASD diagnostic methods is critical. This paper proposes ASD-SWNet, a new shared-weight feature extraction and classification network. It resolves the issue found in previous studies of inefficiently integrating unsupervised and supervised learning, thereby enhancing diagnostic precision. The approach utilizes functional magnetic resonance imaging to improve diagnostic accuracy, featuring an autoencoder (AE) with Gaussian noise for robust feature extraction and a tailored convolutional neural network (CNN) for classification. The shared-weight mechanism utilizes features learned by the AE to initialize the convolutional layer weights of the CNN, thereby integrating AE and CNN for joint training. A novel data augmentation strategy for time-series medical data is also introduced, tackling the problem of small sample sizes. Tested on the ABIDE-I dataset through nested ten-fold cross-validation, the method achieved an accuracy of 76.52% and an AUC of 0.81. This approach surpasses existing methods, showing significant enhancements in diagnostic accuracy and robustness. The contribution of this paper lies not only in proposing new methods for ASD diagnosis but also in offering new approaches for other neurological brain diseases.
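The shared-weight mechanism, initializing the CNN's convolutional weights from the autoencoder's learned encoder, can be sketched as follows. This is an illustrative stand-in, not the paper's code: weights are plain nested lists, and the layer names (`encoder_conv`, `conv1`) are hypothetical:

```python
def pretrain_autoencoder():
    """Stand-in for unsupervised AE training on noisy inputs; in the real
    method this would return encoder kernels learned from fMRI data."""
    return {"encoder_conv": [[0.2, -0.1], [0.05, 0.3]]}

def build_cnn():
    """Stand-in CNN whose conv layer starts with placeholder weights."""
    return {"conv1": [[0.0, 0.0], [0.0, 0.0]], "dense": [0.1, -0.2]}

ae = pretrain_autoencoder()
cnn = build_cnn()

# The shared-weight step: initialise conv1 from the AE encoder. A deep copy
# is taken so that supervised fine-tuning of the CNN does not silently
# mutate the stored autoencoder weights.
cnn["conv1"] = [row[:] for row in ae["encoder_conv"]]

print(cnn["conv1"])  # [[0.2, -0.1], [0.05, 0.3]]
```

After this hand-off, AE and CNN are trained jointly, so the supervised signal refines features that started from the unsupervised representation rather than from random initialization.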
Affiliation(s)
- Jian Zhang: School of Internet of Things and Artificial Intelligence, Wuxi Vocational College of Science and Technology, Wuxi 214028, China
- Jifeng Guo: College of Computer Science and Engineering, Guilin University of Aerospace Technology, Guilin 540004, China
- Donglei Lu: School of Internet of Things and Artificial Intelligence, Wuxi Vocational College of Science and Technology, Wuxi 214028, China
- Yuanyuan Cao: School of Internet of Things and Artificial Intelligence, Wuxi Vocational College of Science and Technology, Wuxi 214028, China
9.
Abraham A, Jose R, Farooqui N, Mayer J, Ahmad J, Satti Z, Jacob TJ, Syed F, Toma M. The Role of Artificial Intelligence in Brain Tumor Diagnosis: An Evaluation of a Machine Learning Model. Cureus 2024; 16:e61483. PMID: 38952601. PMCID: PMC11215798. DOI: 10.7759/cureus.61483.
Abstract
This research study explores the effectiveness of a machine learning image classification model in accurately identifying various types of brain tumors: gliomas, meningiomas, and pituitary tumors, which are among the most common brain tumors and pose significant challenges for accurate diagnosis and treatment. The model under study is built on the Google Teachable Machine platform (Alphabet Inc., Mountain View, CA), an image classification platform built on TensorFlow, the popular open-source machine learning framework. The model was evaluated for its ability to differentiate between normal brains and the aforementioned tumor types in MRI images. MRI is a common tool in brain tumor diagnosis, but the challenge lies in classifying tumors accurately; the model is trained to recognize the patterns in MRI images that correspond to the different tumor types. Performance was assessed with precision (the model's ability to correctly identify positive instances among all instances it labeled positive), recall (its ability to correctly identify positive instances among all actual positives), and the F1 score (a single metric combining precision and recall), all generated from a confusion matrix analysis and performance graphs. The results were promising: the model demonstrated accuracy, precision, recall, and F1 scores ranging between 0.84 and 1.00, suggesting it is highly effective at classifying the different types of brain tumors. This study provides insight into the potential of machine learning models for accurate brain tumor classification, lays the groundwork for further research, and has implications for the diagnosis and treatment of brain tumors. It also highlights the potential of machine learning to enhance medical imaging and diagnosis: with the increasing complexity and volume of medical data, models like the one evaluated here could play a crucial role in improving the accuracy and efficiency of diagnoses. Continued research and development will be needed to refine such models and overcome their remaining limitations.
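The confusion-matrix-derived metrics described above follow directly from per-class true/false positive counts; a minimal sketch (the toy two-class matrix is illustrative, not the study's data):

```python
def per_class_metrics(cm, classes):
    """Precision, recall and F1 for each class from a confusion matrix.

    cm[i][j] = number of samples whose true class is i, predicted as j.
    """
    metrics = {}
    for k, name in enumerate(classes):
        tp = cm[k][k]
        fp = sum(cm[i][k] for i in range(len(cm)) if i != k)  # column k, off-diagonal
        fn = sum(cm[k][j] for j in range(len(cm)) if j != k)  # row k, off-diagonal
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        metrics[name] = (precision, recall, f1)
    return metrics

# Toy 2-class matrix: rows = true (normal, glioma), columns = predicted.
cm = [[45, 5],
      [10, 40]]
m = per_class_metrics(cm, ["normal", "glioma"])
print(m["glioma"])  # precision 40/45, recall 40/50 = 0.8, F1 between the two
```

F1 is the harmonic mean of precision and recall, so it rewards models only when both are high, which is why it is reported alongside accuracy for imbalanced medical datasets.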
Affiliation(s)
- Adriel Abraham: Department of Internal Medicine, New York Institute of Technology College of Osteopathic Medicine, New York, USA
- Rejath Jose: Department of Internal Medicine, New York Institute of Technology College of Osteopathic Medicine, New York, USA
- Nabeel Farooqui: Department of Computer and Information Science, University of Pennsylvania School of Engineering and Applied Science, Philadelphia, USA
- Jonathan Mayer: Department of Clinical Sciences, New York Institute of Technology College of Osteopathic Medicine, New York, USA
- Jawad Ahmad: Department of Clinical Sciences, New York Institute of Technology College of Osteopathic Medicine, New York, USA
- Zain Satti: Department of Clinical Sciences, New York Institute of Technology College of Osteopathic Medicine, New York, USA
- Thomas J Jacob: Department of Internal Medicine, New York Institute of Technology College of Osteopathic Medicine, New York, USA
- Faiz Syed: Department of Internal Medicine, New York Institute of Technology College of Osteopathic Medicine, New York, USA
- Milan Toma: Department of Osteopathic Manipulative Medicine, New York Institute of Technology College of Osteopathic Medicine, New York, USA
10.
Albalawi E, T R M, Thakur A, Kumar VV, Gupta M, Khan SB, Almusharraf A. Integrated approach of federated learning with transfer learning for classification and diagnosis of brain tumor. BMC Med Imaging 2024; 24:110. PMID: 38750436. PMCID: PMC11097560. DOI: 10.1186/s12880-024-01261-0.
Abstract
Brain tumor classification using MRI images is a crucial yet challenging task in medical imaging. Accurate diagnosis is vital for effective treatment planning but is often hindered by the complex nature of tumor morphology and variations in imaging. Traditional methodologies primarily rely on manual interpretation of MRI images, supplemented by conventional machine learning techniques. These approaches often lack the robustness and scalability needed for precise and automated tumor classification. The major limitations include a high degree of manual intervention, potential for human error, limited ability to handle large datasets, and lack of generalizability to diverse tumor types and imaging conditions. To address these challenges, we propose a federated learning-based deep learning model that leverages the power of Convolutional Neural Networks (CNN) for automated and accurate brain tumor classification. This innovative approach not only emphasizes the use of a modified VGG16 architecture optimized for brain MRI images but also highlights the significance of federated learning and transfer learning in the medical imaging domain. Federated learning enables decentralized model training across multiple clients without compromising data privacy, addressing the critical need for confidentiality in medical data handling. This model architecture benefits from the transfer learning technique by utilizing a pre-trained CNN, which significantly enhances its ability to classify brain tumors accurately by leveraging knowledge gained from vast and diverse datasets. Our model is trained on a diverse dataset combining the figshare, SARTAJ, and Br35H datasets, employing a federated learning approach for decentralized, privacy-preserving model training. The adoption of transfer learning further bolsters the model's performance, making it adept at handling the intricate variations in MRI images associated with different types of brain tumors.
The model demonstrates high precision (0.99 for glioma, 0.95 for meningioma, 1.00 for no tumor, and 0.98 for pituitary), recall, and F1-scores in classification, outperforming existing methods. The overall accuracy stands at 98%, showcasing the model's efficacy in classifying various tumor types accurately, thus highlighting the transformative potential of federated learning and transfer learning in enhancing brain tumor classification using MRI images.
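The federated averaging step at the heart of such decentralized training can be illustrated with a minimal sketch. This is a generic FedAvg aggregation over synthetic single-layer weights, not the authors' implementation; the client names and sample counts are invented for the example.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One FedAvg aggregation round: average client parameters,
    weighted by each client's number of training samples."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]

# Two hypothetical clients sharing a one-layer "model"
w_a = [np.array([1.0, 2.0])]   # client A, 100 local samples
w_b = [np.array([3.0, 4.0])]   # client B, 300 local samples
global_w = fedavg([w_a, w_b], client_sizes=[100, 300])
# global_w[0] is [2.5, 3.5]: client B's weights dominate 3:1
```

In a full system, each client would fine-tune the pre-trained CNN locally and send only these weight arrays, never the MRI data, back for aggregation.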
Affiliation(s)
- Eid Albalawi, Department of Computer Science, College of Computer Science and Information Technology, King Faisal University, Hofuf 31982, Saudi Arabia
- Mahesh T R, Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore 562112, India
- Arastu Thakur, Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore 562112, India
- V Vinoth Kumar, School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore 632014, India
- Muskan Gupta, Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore 562112, India
- Surbhi Bhatia Khan, School of Science, Engineering and Environment, University of Salford, Manchester M5 4WT, UK; Department of Electrical and Computer Engineering, Lebanese American University, Byblos, Lebanon
- Ahlam Almusharraf, Department of Business Administration, College of Business and Administration, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia

11
Ejiyi CJ, Qin Z, Nneji GU, Monday HN, Agbesi VK, Ejiyi MB, Ejiyi TU, Bamisile OO. Enhanced Cardiovascular Disease Prediction Modelling using Machine Learning Techniques: A Focus on CardioVitalnet. NETWORK (BRISTOL, ENGLAND) 2024:1-33. [PMID: 38626055 DOI: 10.1080/0954898x.2024.2343341] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/23/2023] [Accepted: 04/07/2024] [Indexed: 04/18/2024]
Abstract
Aiming at early detection and accurate prediction of cardiovascular disease (CVD) to reduce mortality rates, this study focuses on the development of an intelligent predictive system to identify individuals at risk of CVD. The primary objective of the proposed system is to combine deep learning models with advanced data mining techniques to facilitate informed decision-making and precise CVD prediction. This approach involves several essential steps, including preprocessing of the acquired data, optimized feature selection, and disease classification, all aimed at enhancing the effectiveness of the system. The chosen optimal features are fed as input to the disease classification models and to several Machine Learning (ML) algorithms for improved performance in CVD classification. The experiments were implemented in Python, and evaluation metrics such as accuracy, sensitivity, and F1-score were employed to assess the models' performance. The ML classifiers (Extra Trees (ET), Random Forest (RF), AdaBoost, and XGBoost) achieved high accuracies of 94.35%, 97.87%, 96.44%, and 99.00%, respectively, on the test set, while the proposed CardioVitalNet (CVN) achieved 87.45% accuracy. These results offer valuable insights into the process of selecting models for medical data analysis, ultimately enhancing the ability to make more accurate diagnoses and predictions.
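As a rough illustration of how such tree-based classifiers are trained and compared, the sketch below runs three of the named ensembles on synthetic stand-in data with 13 features (XGBoost is omitted because it lives outside scikit-learn; the scores produced here are not the paper's results):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, ExtraTreesClassifier,
                              RandomForestClassifier)
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a tabular CVD dataset with 13 clinical features
X, y = make_classification(n_samples=500, n_features=13, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "ET": ExtraTreesClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}
# Fit each ensemble and record its held-out accuracy
scores = {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
```

The same loop structure extends naturally to sensitivity and F1-score via `recall_score` and `f1_score`.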
Affiliation(s)
- Chukwuebuka Joseph Ejiyi, Network and Data Security Key Laboratory, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
- Zhen Qin, Network and Data Security Key Laboratory, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
- Grace Ugochi Nneji, Department of Computer Science and Software Engineering, Oxford Brookes College of Chengdu University of Technology, China
- Happy Nkanta Monday, Department of Computer Science and Software Engineering, Oxford Brookes College of Chengdu University of Technology, China
- Victor K Agbesi, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Thomas Ugochukwu Ejiyi, Department of Pure and Industrial Chemistry, University of Nigeria Nsukka, Enugu, Nigeria
- Olusola O Bamisile, Sichuan Industrial Internet Intelligent Monitoring and Application Engineering Research Center, Chengdu University of Technology, Chengdu, China

12
Liu J, Wang P, Zhang H, Wu N. Distinguishing brain tumors by Label-free confocal micro-Raman spectroscopy. Photodiagnosis Photodyn Ther 2024; 45:104010. [PMID: 38336147 DOI: 10.1016/j.pdpdt.2024.104010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2023] [Revised: 01/31/2024] [Accepted: 02/06/2024] [Indexed: 02/12/2024]
Abstract
BACKGROUND Brain tumors have serious adverse effects on public health and the social economy. Accurate detection of brain tumor type is critical for effective and proactive treatment, thereby improving patient survival. METHODS Four types of brain tumor tissue sections were examined by Raman spectroscopy. Principal component analysis (PCA) was used to reduce the dimensionality of the Raman spectral data. Linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA) were used to discriminate the different types of brain tumors. RESULTS Raman spectra were collected from 40 brain tumors. Variations in intensity and shift were observed in the Raman bands at 721, 854, 1004, 1032, 1128, 1248, and 1449 cm-1 across the different brain tumor tissues. The PCA results indicated that glioma, pituitary adenoma, and meningioma are difficult to differentiate from each other, whereas acoustic neuroma is clearly distinguished from the other three tumors. Multivariate analysis showed that the classification accuracy of the QDA model (99.47%) exceeded that of the LDA model (95.07%). CONCLUSIONS Raman spectroscopy can extract valuable fingerprint-type molecular and chemical information from biological samples. The demonstrated technique has the potential to be developed into a rapid, label-free, and intelligent approach for distinguishing brain tumor types with high accuracy.
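The PCA-then-discriminant-analysis pipeline described above can be sketched with scikit-learn. The synthetic matrix below merely stands in for measured Raman spectra (rows are spectra, columns are wavenumber bins), so the accuracies it yields are not the paper's, and the component count is an illustrative assumption:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Rows stand in for spectra, columns for wavenumber bins, labels for 4 tumor types
X, y = make_classification(n_samples=200, n_features=300, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

# Reduce dimensionality first, then fit each discriminant model on the scores
lda = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
qda = make_pipeline(PCA(n_components=10), QuadraticDiscriminantAnalysis())
lda_acc = cross_val_score(lda, X, y, cv=5).mean()
qda_acc = cross_val_score(qda, X, y, cv=5).mean()
```

Wrapping PCA and the classifier in one pipeline keeps the dimensionality reduction inside each cross-validation fold, avoiding leakage from the held-out spectra.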
Affiliation(s)
- Jie Liu, Chongqing Medical University, Chongqing, 400016, China; Department of Neurosurgery, Chongqing General Hospital, Chongqing University, Chongqing, 401147, China; Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing, 400714, China; Chongqing School, University of Academy of Sciences, Chongqing, 400714, China
- Pan Wang, Department of Neurosurgery, Chongqing General Hospital, Chongqing University, Chongqing, 401147, China
- Hua Zhang, Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing, 400714, China
- Nan Wu, Chongqing Medical University, Chongqing, 400016, China; Department of Neurosurgery, Chongqing General Hospital, Chongqing University, Chongqing, 401147, China; Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing, 400714, China; Chongqing School, University of Academy of Sciences, Chongqing, 400714, China

13
Pitarch C, Ungan G, Julià-Sapé M, Vellido A. Advances in the Use of Deep Learning for the Analysis of Magnetic Resonance Image in Neuro-Oncology. Cancers (Basel) 2024; 16:300. [PMID: 38254790 PMCID: PMC10814384 DOI: 10.3390/cancers16020300] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2023] [Revised: 12/28/2023] [Accepted: 01/08/2024] [Indexed: 01/24/2024] Open
Abstract
Machine Learning is entering a phase of maturity, but its medical applications still lag behind in terms of practical use. The field of oncological radiology (and neuro-oncology in particular) is at the forefront of these developments, now boosted by the success of Deep-Learning methods for the analysis of medical images. This paper reviews in detail some of the most recent advances in the use of Deep Learning in this field, from the broader topic of the development of Machine-Learning-based analytical pipelines to specific instantiations of the use of Deep Learning in neuro-oncology; the latter including its use in the groundbreaking field of ultra-low field magnetic resonance imaging.
Affiliation(s)
- Carla Pitarch, Department of Computer Science, Universitat Politècnica de Catalunya (UPC BarcelonaTech) and Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Center, 08034 Barcelona, Spain; Eurecat, Digital Health Unit, Technology Centre of Catalonia, 08005 Barcelona, Spain
- Gulnur Ungan, Departament de Bioquímica i Biologia Molecular and Institut de Biotecnologia i Biomedicina (IBB), Universitat Autònoma de Barcelona (UAB), 08193 Barcelona, Spain; Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
- Margarida Julià-Sapé, Departament de Bioquímica i Biologia Molecular and Institut de Biotecnologia i Biomedicina (IBB), Universitat Autònoma de Barcelona (UAB), 08193 Barcelona, Spain; Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
- Alfredo Vellido, Department of Computer Science, Universitat Politècnica de Catalunya (UPC BarcelonaTech) and Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Center, 08034 Barcelona, Spain; Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain

14
Shamsan A, Senan EM, Ahmad Shatnawi HS. Predicting of diabetic retinopathy development stages of fundus images using deep learning based on combined features. PLoS One 2023; 18:e0289555. [PMID: 37862328 PMCID: PMC10588832 DOI: 10.1371/journal.pone.0289555] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2023] [Accepted: 07/20/2023] [Indexed: 10/22/2023] Open
Abstract
The number of diabetic retinopathy (DR) patients is increasing every year, which poses a public health problem. Regular diagnosis of diabetes patients is therefore necessary to prevent the progression of DR to advanced stages that lead to blindness. Manual diagnosis requires effort and expertise and is prone to errors and differing expert diagnoses; artificial intelligence techniques can help doctors make a proper diagnosis and resolve differing opinions. This study developed three approaches, each with two systems, for early diagnosis of DR disease progression. All colour fundus images were subjected to image enhancement, and the contrast of the region of interest was increased through filters. All features extracted by DenseNet-121 and AlexNet (Dense-121 and Alex) were fed to the Principal Component Analysis (PCA) method to select important features and reduce their dimensions. In the first approach, DR images are analysed for early prediction of disease progression by an Artificial Neural Network (ANN) using the selected, low-dimensional features of the Dense-121 and Alex models. In the second approach, the important, low-dimensional features of the Dense-121 and Alex models are integrated before and after PCA. In the third approach, an ANN analyses radiomic features: combinations of the features of the CNN models (Dense-121 and Alex), taken separately, with handcrafted features extracted by the Discrete Wavelet Transform (DWT), Local Binary Pattern (LBP), Fuzzy colour histogram (FCH), and Gray Level Co-occurrence Matrix (GLCM) methods. With the radiomic features of the Alex model and the handcrafted features, the ANN reached a sensitivity of 97.92%, an AUC of 99.56%, an accuracy of 99.1%, a specificity of 99.4%, and a precision of 99.06%.
Affiliation(s)
- Ahlam Shamsan, Computer Department, Applied College, Najran University, Najran, Saudi Arabia
- Ebrahim Mohammed Senan, Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana'a, Yemen

15
Alshahrani M, Al-Jabbar M, Senan EM, Ahmed IA, Saif JAM. Hybrid Methods for Fundus Image Analysis for Diagnosis of Diabetic Retinopathy Development Stages Based on Fusion Features. Diagnostics (Basel) 2023; 13:2783. [PMID: 37685321 PMCID: PMC10486790 DOI: 10.3390/diagnostics13172783] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2023] [Revised: 08/22/2023] [Accepted: 08/24/2023] [Indexed: 09/10/2023] Open
Abstract
Diabetic retinopathy (DR) is a complication of diabetes that damages the delicate blood vessels of the retina and leads to blindness. Ophthalmologists rely on diagnosing the retina by imaging the fundus. The process takes a long time and needs skilled doctors to diagnose and determine the stage of DR. Therefore, automatic techniques using artificial intelligence play an important role in analyzing fundus images for the detection of the stages of DR development. However, diagnosis using artificial intelligence techniques is a difficult task and passes through many stages, and the extraction of representative features is important in reaching satisfactory results. Convolutional Neural Network (CNN) models play an important and distinct role in extracting features with high accuracy. In this study, fundus images were used for the detection of the developmental stages of DR by two proposed methods, each with two systems. The first proposed method uses GoogLeNet with SVM and ResNet-18 with SVM. The second method uses Feed-Forward Neural Networks (FFNN) based on the hybrid features extracted by first using GoogLeNet, Fuzzy color histogram (FCH), Gray Level Co-occurrence Matrix (GLCM), and Local Binary Pattern (LBP); followed by ResNet-18, FCH, GLCM and LBP. All the proposed methods obtained superior results. The FFNN network with hybrid features of ResNet-18, FCH, GLCM, and LBP obtained 99.7% accuracy, 99.6% precision, 99.6% sensitivity, 100% specificity, and 99.86% AUC.
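Fusing deep and handcrafted features by concatenation before a feed-forward network, as described above, can be sketched as follows. Random arrays stand in for the ResNet-18 and FCH/GLCM/LBP feature vectors, and the layer sizes are illustrative assumptions; the sketch shows only the fusion and training mechanics, not the reported accuracy:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 300
deep_feats = rng.normal(size=(n, 64))    # stand-in for ResNet-18 features
handcrafted = rng.normal(size=(n, 24))   # stand-in for FCH + GLCM + LBP features
y = rng.integers(0, 5, size=n)           # five DR development stages

# Fusion by simple concatenation, then a feed-forward neural network (FFNN)
X = np.hstack([deep_feats, handcrafted])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ffnn = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
ffnn.fit(X_tr, y_tr)
preds = ffnn.predict(X_te)
```

With real feature extractors in place of the random arrays, only the two `size=` lines would change; the fusion and classifier code stays the same.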
Affiliation(s)
- Mohammed Alshahrani, Computer Department, Applied College, Najran University, Najran 66462, Saudi Arabia
- Mohammed Al-Jabbar, Computer Department, Applied College, Najran University, Najran 66462, Saudi Arabia
- Ebrahim Mohammed Senan, Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana'a, Yemen

16
Azam H, Tariq H, Shehzad D, Akbar S, Shah H, Khan ZA. Fully Automated Skull Stripping from Brain Magnetic Resonance Images Using Mask RCNN-Based Deep Learning Neural Networks. Brain Sci 2023; 13:1255. [PMID: 37759856 PMCID: PMC10526767 DOI: 10.3390/brainsci13091255] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2023] [Revised: 08/09/2023] [Accepted: 08/21/2023] [Indexed: 09/29/2023] Open
Abstract
This research comprises experiments with a deep learning framework for fully automating skull stripping from brain magnetic resonance (MR) images. Conventional techniques for segmentation have progressed to the extent of Convolutional Neural Networks (CNN). We proposed and experimented with a contemporary variant of the deep learning framework based on the mask region convolutional neural network (Mask-RCNN) for all anatomical orientations of brain MR images. We trained the system from scratch to build a model for classification, detection, and segmentation. It was validated with images taken from three different datasets: BrainWeb, NAMIC, and a local hospital. We opted for purposive sampling to select 2000 images of T1 modality from the data volumes, followed by a multi-stage random sampling technique to segregate the dataset into three batches for training (75%), validation (15%), and testing (10%), respectively. We utilized a robust backbone architecture, namely ResNet-101 with a Feature Pyramid Network (FPN), to achieve optimal performance with higher accuracy. We subjected the same data to two traditional methods, namely the Brain Extraction Tool (BET) and Brain Surface Extractor (BSE), to compare performance. Our proposed method achieved a higher mean average precision (mAP = 93%) and content validity index (CVI = 0.95) than the comparable methods. We contributed by training Mask-RCNN from scratch to generate reusable learning weights for transfer learning. We contributed methodological novelty by applying a pragmatic research lens and used a mixed-method triangulation technique to validate results on all anatomical modalities of brain MR images. Our proposed method improved the accuracy and precision of skull stripping by fully automating it, reducing its processing time, operational cost, and reliance on technicians. This research also provides grounds for extending the work toward explainable artificial intelligence (XAI).
Affiliation(s)
- Humera Azam, Department of Computer Science, University of Karachi, Karachi 75270, Pakistan
- Humera Tariq, Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Danish Shehzad, Department of Computer Science, The Superior University, Lahore 54590, Pakistan
- Saad Akbar, College of Computing and Information Sciences, Karachi Institute of Economics and Technology, Karachi 75190, Pakistan
- Habib Shah, Department of Computer Science, College of Computer Science, King Khalid University, Abha 61421, Saudi Arabia
- Zamin Ali Khan, Department of Computer Science, IQRA University, Karachi 71500, Pakistan

17
Mathew NA, Stanley IM, Jose R. Machine Learning based tumor diagnosis using compressive sensing in MRI images. Biomed Phys Eng Express 2023; 9:055023. [PMID: 37524065 DOI: 10.1088/2057-1976/acebf1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2023] [Accepted: 07/31/2023] [Indexed: 08/02/2023]
Abstract
Despite the widespread use of Magnetic Resonance Imaging (MRI) analysis for disease diagnosis, processing and analyzing the substantial amount of acquired data may be challenging. Compressive Sensing (CS) offers a promising solution to this problem. MRI diagnosis can be performed faster and more accurately using CS since it requires fewer data for image analysis. A combination of CS with conventional and Deep Learning (DL) models, specifically VGGNet-16, is proposed for categorizing reconstructed MRI images into healthy and unhealthy. The model is properly trained using a dataset containing both normal and tumor images. The method is evaluated using a variety of parameters, including recall, F1-score, accuracy, and precision. Using the VGGNet-16 model, the proposed work achieved a classification accuracy of 98.7%, which is comparable with another state-of-the-art method based on traditionally acquired MRI images. The results indicate that CS may be useful in clinical settings for improving the efficiency and accuracy of MRI-based tumor diagnosis. Furthermore, the approach could be extended to other medical imaging modalities, possibly improving diagnosis accuracy. The study illustrates how CS can enhance medical imaging analysis, particularly in the context of tumor diagnosis using MRI images. It is necessary to conduct further research to investigate the potential applications of CS in other medical imaging contexts.
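The core CS idea, recovering a sparse signal from far fewer measurements than its length via l1-regularized least squares, can be sketched independently of the MRI specifics. The signal, sensing matrix, and regularization strength below are synthetic assumptions, and basis pursuit via Lasso is one common reconstruction choice rather than necessarily the one used in the paper:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, m, k = 200, 80, 5                      # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)  # k-sparse signal

A = rng.normal(size=(m, n)) / np.sqrt(m)  # random Gaussian sensing matrix
y = A @ x                                 # m << n compressed measurements

# l1-regularized least squares recovers the sparse signal from the short vector y
lasso = Lasso(alpha=1e-3, max_iter=50_000)
lasso.fit(A, y)
x_hat = lasso.coef_
```

The recovered `x_hat` closely tracks the true sparse `x` even though only 80 of 200 samples were measured, which is the data saving CS offers before any classifier sees the image.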
Affiliation(s)
- Nimmy Ann Mathew, Department of Electronics and Communication, Rajiv Gandhi Institute of Technology, Kottayam, Kerala 686501 (affiliated to APJ Abdul Kalam Technological University), India
- Ishita Maria Stanley, Department of Electronics and Communication, Rajiv Gandhi Institute of Technology, Kottayam, Kerala 686501 (affiliated to APJ Abdul Kalam Technological University), India
- Renu Jose, Department of Electronics and Communication, Rajiv Gandhi Institute of Technology, Kottayam, Kerala 686501 (affiliated to APJ Abdul Kalam Technological University), India

18
Ortega-Martorell S, Olier I, Hernandez O, Restrepo-Galvis PD, Bellfield RAA, Candiota AP. Tracking Therapy Response in Glioblastoma Using 1D Convolutional Neural Networks. Cancers (Basel) 2023; 15:4002. [PMID: 37568818 PMCID: PMC10417313 DOI: 10.3390/cancers15154002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2023] [Revised: 07/26/2023] [Accepted: 08/05/2023] [Indexed: 08/13/2023] Open
Abstract
BACKGROUND Glioblastoma (GB) is a malignant brain tumour that is challenging to treat, often relapsing even after aggressive therapy. Evaluating therapy response relies on magnetic resonance imaging (MRI) following the Response Assessment in Neuro-Oncology (RANO) criteria. However, early assessment is hindered by phenomena such as pseudoprogression and pseudoresponse. Magnetic resonance spectroscopy (MRS/MRSI) provides metabolomics information but is underutilised due to a lack of familiarity and standardisation. METHODS This study explores the potential of spectroscopic imaging (MRSI) in combination with several machine learning approaches, including one-dimensional convolutional neural networks (1D-CNNs), to improve therapy response assessment. A preclinical GB model (GL261-bearing mice) was studied for method optimisation and validation. RESULTS The proposed 1D-CNN models successfully identify different regions of tumours sampled by MRSI, i.e., normal brain (N), control/unresponsive tumour (T), and tumour responding to treatment (R). Class activation maps using Grad-CAM enabled the study of the key areas relevant to the models, providing model explainability. The generated colour-coded maps showing the N, T and R regions were highly accurate (according to Dice scores) when compared against ground truth and outperformed our previous method. CONCLUSIONS The proposed methodology may provide new and better opportunities for therapy response assessment, potentially providing earlier hints of tumour relapsing stages.
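The basic operation a 1D-CNN applies to a spectrum, convolution with learned filters followed by a nonlinearity and pooling, can be sketched in plain NumPy. The sine wave and random kernels below are stand-ins (a trained model would learn the kernel weights from labelled MRSI voxels):

```python
import numpy as np

def conv1d(x, kernels, stride=1):
    """'Valid' 1-D convolution of a single-channel signal with a bank of kernels."""
    k = kernels.shape[1]
    n_out = (len(x) - k) // stride + 1
    out = np.empty((kernels.shape[0], n_out))
    for i in range(n_out):
        out[:, i] = kernels @ x[i * stride:i * stride + k]
    return out

spectrum = np.sin(np.linspace(0, 8 * np.pi, 256))       # stand-in for one MRSI voxel
kernels = np.random.default_rng(0).normal(size=(4, 9))  # 4 filters of width 9
features = np.maximum(conv1d(spectrum, kernels), 0)     # convolution + ReLU
pooled = features.max(axis=1)                           # global max pooling -> 4 values
```

Stacking several such layers and ending in a small dense classifier yields the per-voxel N/T/R prediction; because filters slide along the frequency axis, the same filter detects a metabolite peak wherever it appears in the spectrum.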
Affiliation(s)
- Sandra Ortega-Martorell, Data Science Research Centre, Liverpool John Moores University, Liverpool L3 3AF, UK
- Ivan Olier, Data Science Research Centre, Liverpool John Moores University, Liverpool L3 3AF, UK
- Orlando Hernandez, Escuela Colombiana de Ingeniería Julio Garavito, Bogota 111166, Colombia
- Ryan A. A. Bellfield, Data Science Research Centre, Liverpool John Moores University, Liverpool L3 3AF, UK
- Ana Paula Candiota, Centro de Investigación Biomédica en Red: Bioingeniería, Biomateriales y Nanomedicina, 08193 Cerdanyola del Vallès, Spain; Departament de Bioquímica i Biologia Molecular, Facultat de Biociències, Universitat Autònoma de Barcelona, 08193 Cerdanyola del Vallès, Spain

19
Sahoo S, Mishra S, Panda B, Bhoi AK, Barsocchi P. An Augmented Modulated Deep Learning Based Intelligent Predictive Model for Brain Tumor Detection Using GAN Ensemble. SENSORS (BASEL, SWITZERLAND) 2023; 23:6930. [PMID: 37571713 PMCID: PMC10422344 DOI: 10.3390/s23156930] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/26/2023] [Revised: 07/25/2023] [Accepted: 07/28/2023] [Indexed: 08/13/2023]
Abstract
Brain tumor detection in the initial stage is becoming an intricate task for clinicians worldwide, and diagnosis in the later stages is rigorous, which is a serious concern. Although there are pragmatic clinical tools and multiple machine learning (ML) models for the effective diagnosis of patients, these models still provide limited accuracy and take immense time for patient screening during the diagnosis process. Hence, there is still a need for a more precise model for screening patients to detect brain tumors in the beginning stages and aid clinicians in diagnosis, making brain tumor assessment more reliable. In this research, a performance analysis of the impact of different generative adversarial networks (GANs) on the early detection of brain tumors is presented. Based on it, a novel hybrid enhanced predictive convolutional neural network (CNN) model using a hybrid GAN ensemble is proposed. Brain tumor image data are augmented using the GAN ensemble and fed for classification to a hybrid modulated CNN technique. The outcome is generated through a soft voting approach in which the final prediction is based on the GAN that attains the highest values across the performance metrics. The analysis demonstrated that the progressive-growing generative adversarial network (PGGAN) architecture produced the best results: accuracy, precision, recall, F1-score, and negative predictive value (NPV) of 98.85%, 98.45%, 97.2%, 98.11%, and 98.09%, respectively, together with a low latency of 3.4 s. The PGGAN model enhanced the overall performance of the identification of brain cell tissues in real time. These results suggest that brain tumor detection using PGGAN augmentation with the proposed modulated CNN technique yields optimal performance under the soft voting approach.
Affiliation(s)
- Saswati Sahoo, School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar 751024, India
- Sushruta Mishra, School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar 751024, India
- Baidyanath Panda, LTIMindtree, 1 American Row, 3rd Floor, Hartford, CT 06103, USA
- Akash Kumar Bhoi, Directorate of Research, Sikkim Manipal University, Gangtok 737102, India; KIET Group of Institutions, Delhi-NCR, Ghaziabad 201206, India; Institute of Information Science and Technologies, National Research Council, 56124 Pisa, Italy
- Paolo Barsocchi, Institute of Information Science and Technologies, National Research Council, 56124 Pisa, Italy

20
Saidani O, Aljrees T, Umer M, Alturki N, Alshardan A, Khan SW, Alsubai S, Ashraf I. Enhancing Prediction of Brain Tumor Classification Using Images and Numerical Data Features. Diagnostics (Basel) 2023; 13:2544. [PMID: 37568907 PMCID: PMC10417332 DOI: 10.3390/diagnostics13152544] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2023] [Revised: 07/23/2023] [Accepted: 07/26/2023] [Indexed: 08/13/2023] Open
Abstract
Brain tumors, along with other diseases that harm the neurological system, are a significant contributor to global mortality. Early diagnosis plays a crucial role in effectively treating brain tumors. To distinguish individuals with tumors from those without, this study employs a combination of image and numerical data features. In the initial phase, the image dataset is enhanced, followed by the application of a UNet transfer-learning-based model to accurately classify patients as either having tumors or being normal. In the second phase, this research utilizes 13 features in conjunction with a voting classifier. The voting classifier incorporates features extracted from deep convolutional layers and combines stochastic gradient descent with logistic regression to achieve better classification results. The accuracy score of 0.99 achieved by both proposed models shows their superior performance, and comparisons with other supervised learning algorithms and state-of-the-art models further validate it.
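A soft voting classifier combining stochastic gradient descent with logistic regression, in the spirit of the second phase above, can be sketched with scikit-learn. The 13-feature data here is synthetic, and `modified_huber` is one loss that gives SGD the probability estimates soft voting needs; the authors' exact configuration may differ:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for 13 features extracted from deep convolutional layers
X, y = make_classification(n_samples=400, n_features=13, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Soft voting averages predict_proba outputs, so the SGD member needs a
# probabilistic loss; modified_huber provides probability estimates.
vote = VotingClassifier(
    estimators=[("sgd", SGDClassifier(loss="modified_huber", random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft",
)
vote.fit(X_tr, y_tr)
acc = vote.score(X_te, y_te)
```

Soft voting lets a confident member outvote an uncertain one, which is the usual motivation for averaging probabilities rather than hard labels.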
Affiliation(s)
- Oumaima Saidani, Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Turki Aljrees, College of Computer Science and Engineering, University of Hafr Al-Batin, Hafar Al-Batin 39524, Saudi Arabia
- Muhammad Umer, Department of Computer Science & Information Technology, The Islamia University of Bahawalpur, Bahawalpur 63100, Pakistan
- Nazik Alturki, Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Amal Alshardan, Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Sardar Waqar Khan, Department of Computer Science & Information Technology, The University of Lahore, Lahore 54000, Pakistan
- Shtwai Alsubai, Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- Imran Ashraf, Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea

21
Alalayah KM, Senan EM, Atlam HF, Ahmed IA, Shatnawi HSA. Effective Early Detection of Epileptic Seizures through EEG Signals Using Classification Algorithms Based on t-Distributed Stochastic Neighbor Embedding and K-Means. Diagnostics (Basel) 2023; 13:diagnostics13111957. [PMID: 37296809 DOI: 10.3390/diagnostics13111957] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2023] [Revised: 05/22/2023] [Accepted: 06/02/2023] [Indexed: 06/12/2023] Open
Abstract
Epilepsy is a neurological disorder in the activity of brain cells that leads to seizures. An electroencephalogram (EEG) can detect seizures because it contains physiological information about the neural activity of the brain. However, visual examination of EEG by experts is time-consuming, and their diagnoses may even contradict each other. Thus, automated computer-aided EEG diagnosis is necessary, and this paper proposes an effective approach for the early detection of epilepsy. The proposed approach involves the extraction of important features and classification. First, signal components are decomposed to extract the features via the discrete wavelet transform (DWT) method. Principal component analysis (PCA) and the t-distributed stochastic neighbor embedding (t-SNE) algorithm were applied to reduce the dimensions and focus on the most important features. Subsequently, K-means clustering + PCA and K-means clustering + t-SNE were used to divide the dataset into subgroups and focus on the most representative features of epilepsy. The features extracted from these steps were fed to extreme gradient boosting, K-nearest neighbors (K-NN), decision tree (DT), random forest (RF) and multilayer perceptron (MLP) classifiers. The experimental results demonstrated that the proposed approach provides superior results to those of existing studies. During the testing phase, the RF classifier with DWT and PCA achieved an accuracy of 97.96%, precision of 99.1%, recall of 94.41% and F1 score of 97.41%. Moreover, the RF classifier with DWT and t-SNE attained an accuracy of 98.09%, precision of 99.1%, recall of 93.9% and F1 score of 96.21%. In comparison, the MLP classifier with PCA + K-means reached an accuracy of 98.98%, precision of 99.16%, recall of 95.69% and F1 score of 97.4%.
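The DWT feature-extraction step can be illustrated with the simplest wavelet. Below is a minimal numpy sketch of one Haar decomposition level for an even-length signal; the Haar wavelet is chosen here for brevity and is not necessarily the wavelet family used in the paper:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the discrete wavelet transform with the Haar
    wavelet: split an even-length signal into approximation (low-pass)
    and detail (high-pass) coefficients, which can serve as features."""
    pairs = signal.reshape(-1, 2)                      # consecutive sample pairs
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)  # smoothed trend
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)  # local fluctuation
    return approx, detail

a, d = haar_dwt(np.array([4.0, 6.0, 10.0, 12.0]))
```

Repeating the transform on the approximation coefficients yields the multi-level sub-bands from which statistical features are typically computed.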
Collapse
Affiliation(s)
- Khaled M Alalayah
- Department of Computer Science, College of Science and Arts, Najran University, Sharurah 68341, Saudi Arabia
| | - Ebrahim Mohammed Senan
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana'a P.O. Box 1152, Yemen
| | - Hany F Atlam
- Cyber Security Centre, WMG, University of Warwick, Coventry CV4 7AL, UK
| | | | | |
Collapse
|
22
|
Muezzinoglu T, Baygin N, Tuncer I, Barua PD, Baygin M, Dogan S, Tuncer T, Palmer EE, Cheong KH, Acharya UR. PatchResNet: Multiple Patch Division-Based Deep Feature Fusion Framework for Brain Tumor Classification Using MRI Images. J Digit Imaging 2023; 36:973-987. [PMID: 36797543 PMCID: PMC10287865 DOI: 10.1007/s10278-023-00789-x] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2022] [Revised: 01/30/2023] [Accepted: 01/31/2023] [Indexed: 02/18/2023] Open
Abstract
Modern computer vision algorithms are based on convolutional neural networks (CNNs), and both end-to-end learning and transfer learning modes have been used with CNNs for image classification. Thus, automated brain tumor classification models have been proposed by deploying CNNs to help medical professionals. Our primary objective is to increase the classification performance using CNN. Therefore, a patch-based deep feature engineering model has been proposed in this work. Nowadays, patch division techniques are used to attain high classification performance, and variable-sized patches have achieved good results. In this work, we have used three types of patches of different sizes (32 × 32, 56 × 56, 112 × 112). Six feature vectors have been obtained using these patches and two layers of the pretrained ResNet50 (global average pooling and fully connected layers). In the feature selection phase, three selectors (neighborhood component analysis (NCA), Chi2, and ReliefF) have been used, and 18 final feature vectors have been obtained. By deploying k nearest neighbors (kNN), 18 results have been calculated. Iterative hard majority voting (IHMV) has been applied to compute the general classification accuracy of this framework. This model uses different patches, feature extractors (two layers of the ResNet50 have been utilized as feature extractors), and selectors, making this a framework that we have named PatchResNet. A public brain image dataset containing four classes (glioblastoma multiforme (GBM), meningioma, pituitary tumor, healthy) has been used to develop the proposed PatchResNet model. The proposed PatchResNet attained 98.10% classification accuracy on this public brain tumor image dataset. The developed PatchResNet model obtained high classification accuracy and has the advantage of being a self-organized framework. Therefore, the proposed method can choose the best-resulting validation prediction vectors and achieve high image classification performance.
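The iterative hard majority voting (IHMV) step can be sketched as follows: sort the prediction vectors by their individual accuracy, then vote over the k most accurate vectors for every k starting from three, keeping the best voted result. This minimal numpy version assumes integer class labels and is an illustration, not the authors' code:

```python
import numpy as np

def ihmv(pred_vectors, y_true):
    """Iterative hard majority voting: sort prediction vectors by their
    individual accuracy, vote over the top-k vectors for every k >= 3,
    and keep the voted result with the highest accuracy."""
    accs = np.array([(p == y_true).mean() for p in pred_vectors])
    order = np.argsort(accs)[::-1]                   # most accurate first
    stacked = np.stack([pred_vectors[i] for i in order])
    best_preds, best_acc = None, -1.0
    for k in range(3, len(pred_vectors) + 1):
        # per-sample hard majority vote over the k best vectors
        voted = np.array([np.bincount(col).argmax() for col in stacked[:k].T])
        acc = (voted == y_true).mean()
        if acc > best_acc:
            best_preds, best_acc = voted, acc
    return best_preds, best_acc

y_true = np.array([0, 1, 1, 0])
vectors = [np.array([0, 1, 1, 0]), np.array([0, 1, 0, 0]),
           np.array([1, 1, 1, 0]), np.array([1, 0, 1, 0])]
voted, acc = ihmv(vectors, y_true)
```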
Collapse
Affiliation(s)
- Taha Muezzinoglu
- Department of Computer Engineering, Faculty of Engineering, Munzur University, Tunceli, Turkey
| | - Nursena Baygin
- Department of Computer Engineering, Faculty of Engineering, Erzurum Technical University, Erzurum, Turkey
| | | | - Prabal Datta Barua
- School of Management & Enterprise, University of Southern Queensland, Toowoomba, Australia
- Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, Australia
| | - Mehmet Baygin
- Department of Computer Engineering, Faculty of Engineering, Ardahan University, Ardahan, Turkey
| | - Sengul Dogan
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
| | - Turker Tuncer
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
| | - Elizabeth Emma Palmer
- Centre of Clinical Genetics, Sydney Children’s Hospitals Network, Randwick, 2031 Australia
- School of Women’s and Children’s Health, University of New South Wales, Randwick, 2031 Australia
| | - Kang Hao Cheong
- Science, Mathematics and Technology Cluster, Singapore University of Technology and Design, Singapore, S487372 Singapore
| | - U. Rajendra Acharya
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore, 599489 Singapore
- Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore, Singapore
- Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan
| |
Collapse
|
23
|
Alalayah KM, Senan EM, Atlam HF, Ahmed IA, Shatnawi HSA. Automatic and Early Detection of Parkinson's Disease by Analyzing Acoustic Signals Using Classification Algorithms Based on Recursive Feature Elimination Method. Diagnostics (Basel) 2023; 13:diagnostics13111924. [PMID: 37296776 DOI: 10.3390/diagnostics13111924] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2023] [Revised: 05/23/2023] [Accepted: 05/27/2023] [Indexed: 06/12/2023] Open
Abstract
Parkinson's disease (PD) is a neurodegenerative condition caused by the dysfunction of brain cells, which lose 60-80% of their ability to produce dopamine, an organic chemical responsible for controlling a person's movement; this loss causes PD symptoms to appear. Diagnosis involves many physical and psychological tests and specialist examinations of the patient's nervous system, which causes several issues. One method for the early diagnosis of PD is based on analysing voice disorders. This method extracts a set of features from a recording of the person's voice. Then machine-learning (ML) methods are used to analyse and diagnose the recorded voice to distinguish Parkinson's cases from healthy ones. This paper proposes novel techniques for the early diagnosis of PD by evaluating selected features and tuning the hyperparameters of ML algorithms for diagnosing PD based on voice disorders. The dataset was balanced by the synthetic minority oversampling technique (SMOTE), and features were ranked according to their contribution to the target characteristic by the recursive feature elimination (RFE) algorithm. We applied two algorithms, t-distributed stochastic neighbour embedding (t-SNE) and principal component analysis (PCA), to reduce the dimensions of the dataset. Both t-SNE and PCA finally fed the resulting features into the classifiers: support-vector machine (SVM), K-nearest neighbours (KNN), decision tree (DT), random forest (RF), and multilayer perceptron (MLP). Experimental results proved that the proposed techniques were superior to those of existing studies: RF with the t-SNE algorithm yielded an accuracy of 97%, precision of 96.50%, recall of 94%, and F1-score of 95%. In addition, MLP with the PCA algorithm yielded an accuracy of 98%, precision of 97.66%, recall of 96%, and F1-score of 96.66%.
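The SMOTE balancing step can be sketched in numpy. This is a minimal version that interpolates each synthetic sample between a random minority point and one of its k nearest minority neighbours; the parameters and tiny dataset below are illustrative, not the paper's configuration:

```python
import numpy as np

def smote(minority, n_new, k=2, seed=0):
    """Minimal SMOTE: each synthetic sample interpolates between a
    random minority sample and one of its k nearest minority
    neighbours, so new points stay inside the minority region."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        x = minority[i]
        dist = np.linalg.norm(minority - x, axis=1)   # distances to all points
        neighbours = np.argsort(dist)[1:k + 1]        # skip the point itself
        j = rng.choice(neighbours)
        lam = rng.random()                            # interpolation factor in [0, 1)
        samples.append(x + lam * (minority[j] - x))
    return np.array(samples)

minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
synthetic = smote(minority, n_new=5)
```

Because every synthetic point is a convex combination of two minority samples, the oversampled class stays inside its original feature-space region.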
Collapse
Affiliation(s)
- Khaled M Alalayah
- Department of Computer Science, Faculty of Science and Arts, Najran University, Sharurah 68341, Saudi Arabia
| | - Ebrahim Mohammed Senan
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana'a, Yemen
| | - Hany F Atlam
- Cyber Security Centre, WMG, University of Warwick, Coventry CV4 7AL, UK
| | | | | |
Collapse
|
24
|
Olayah F, Senan EM, Ahmed IA, Awaji B. Blood Slide Image Analysis to Classify WBC Types for Prediction Haematology Based on a Hybrid Model of CNN and Handcrafted Features. Diagnostics (Basel) 2023; 13:diagnostics13111899. [PMID: 37296753 DOI: 10.3390/diagnostics13111899] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2023] [Revised: 05/24/2023] [Accepted: 05/26/2023] [Indexed: 06/12/2023] Open
Abstract
White blood cells (WBCs) are one of the main components of blood produced by the bone marrow. WBCs are part of the immune system that protects the body from infectious diseases, and an increase or decrease in the amount of any type indicates a particular disease. Thus, recognizing the WBC types is essential for diagnosing the patient's health and identifying the disease. Analyzing blood samples to determine the amount and types of WBCs requires experienced doctors. Artificial intelligence techniques were applied to analyze blood samples and classify WBC types to help doctors distinguish between infectious diseases associated with increased or decreased WBC amounts. This study developed three strategies for analyzing blood slide images to classify WBC types. The first strategy classifies WBC types by the SVM-CNN technique. The second strategy classifies WBC types by SVM based on hybrid CNN features, using the VGG19-ResNet101-SVM, ResNet101-MobileNet-SVM, and VGG19-ResNet101-MobileNet-SVM techniques. The third strategy classifies WBC types by FFNN based on a hybrid model of CNN and handcrafted features. With MobileNet and handcrafted features, FFNN achieved an AUC of 99.43%, accuracy of 99.80%, precision of 99.75%, specificity of 99.75%, and sensitivity of 99.68%.
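The "SVM on hybrid CNN features" idea can be sketched with a toy linear SVM trained by subgradient descent on concatenated feature vectors. The feature arrays below are illustrative stand-ins for CNN outputs, and the trainer is a generic hinge-loss routine, not the authors' implementation:

```python
import numpy as np

def train_linear_svm(X, y, lr=0.01, reg=0.01, epochs=200):
    """Minimal linear SVM fitted by subgradient descent on the
    L2-regularised hinge loss; it stands in for the SVM head that
    classifies fused CNN feature vectors. Labels must be +1/-1."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:          # margin violated: hinge gradient
                w += lr * (yi * xi - reg * w)
                b += lr * yi
            else:                              # only the regulariser acts
                w -= lr * reg * w
    return w, b

# toy "fused" features: concatenation of two one-dimensional
# (standardised) feature sets from two hypothetical extractors
f_a = np.array([[1.0], [0.8], [-1.0], [-0.8]])
f_b = np.array([[1.0], [1.2], [-1.0], [-1.2]])
X = np.hstack([f_a, f_b])                      # feature-level fusion
y = np.array([1, 1, -1, -1])
w, b = train_linear_svm(X, y)
```

Concatenating the extractors' vectors before training is the fusion step; the SVM then sees one wider feature space.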
Collapse
Affiliation(s)
- Fekry Olayah
- Department of Information System, Faculty Computer Science and information System, Najran University, Najran 66462, Saudi Arabia
| | - Ebrahim Mohammed Senan
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana'a, Yemen
| | | | - Bakri Awaji
- Department of Computer Science, Faculty of Computer Science and Information System, Najran University, Najran 66462, Saudi Arabia
| |
Collapse
|
25
|
Al-Jabbar M, Alshahrani M, Senan EM, Ahmed IA. Analyzing Histological Images Using Hybrid Techniques for Early Detection of Multi-Class Breast Cancer Based on Fusion Features of CNN and Handcrafted. Diagnostics (Basel) 2023; 13:diagnostics13101753. [PMID: 37238243 DOI: 10.3390/diagnostics13101753] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2023] [Revised: 05/09/2023] [Accepted: 05/11/2023] [Indexed: 05/28/2023] Open
Abstract
Breast cancer is the second most common type of cancer among women, and it can threaten women's lives if it is not diagnosed early. There are many methods for detecting breast cancer, but they cannot distinguish between benign and malignant tumors. Therefore, a biopsy taken from the patient's abnormal tissue is an effective way to distinguish between malignant and benign breast cancer tumors. There are many challenges facing pathologists and experts in diagnosing breast cancer, including the addition of medical fluids of various colors, the direction of the sample, the small number of doctors, and their differing opinions. Thus, artificial intelligence techniques solve these challenges and help clinicians resolve their diagnostic differences. In this study, three techniques, each with three systems, were developed to diagnose multi-class and binary-class breast cancer datasets and distinguish between benign and malignant types at 40× and 400× magnification factors. The first technique diagnoses the breast cancer dataset using an artificial neural network (ANN) with selected features from VGG-19 and ResNet-18. The second technique uses an ANN with combined features of VGG-19 and ResNet-18 before and after principal component analysis (PCA). The third technique uses an ANN with hybrid features: a hybrid between VGG-19 and handcrafted features, and a hybrid between ResNet-18 and handcrafted features. The handcrafted features are mixed features extracted using fuzzy color histogram (FCH), local binary pattern (LBP), discrete wavelet transform (DWT), and gray-level co-occurrence matrix (GLCM) methods. With the multi-class dataset, the ANN with the hybrid features of VGG-19 and handcrafted features reached a precision of 95.86%, an accuracy of 97.3%, a sensitivity of 96.75%, an AUC of 99.37%, and a specificity of 99.81% with images at magnification factor 400×. With the binary-class dataset, the ANN with the hybrid features of VGG-19 and handcrafted features reached a precision of 99.74%, an accuracy of 99.7%, a sensitivity of 100%, an AUC of 99.85%, and a specificity of 100% with images at magnification factor 400×.
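Among the handcrafted descriptors listed above, the local binary pattern is easy to sketch. Below is a minimal numpy LBP with a 3 × 3 neighbourhood; the clockwise bit ordering is one common convention, not necessarily the one used in the paper:

```python
import numpy as np

def lbp_3x3(img):
    """Local binary pattern with a 3x3 neighbourhood: each interior
    pixel is encoded by thresholding its 8 neighbours against the
    centre and reading the resulting bits as one code in 0..255."""
    # neighbour offsets in clockwise order, starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    centre = img[1:h - 1, 1:w - 1]
    codes = np.zeros((h - 2, w - 2), dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes += (neigh >= centre).astype(int) << bit   # set bit if neighbour >= centre
    return codes

flat = np.array([[9, 9, 9], [9, 5, 9], [9, 9, 9]])
codes = lbp_3x3(flat)
```

A histogram of these codes over the image (or over image cells) is the texture feature vector that gets fused with the CNN features.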
Collapse
Affiliation(s)
- Mohammed Al-Jabbar
- Computer Department, Applied College, Najran University, Najran 66462, Saudi Arabia
| | - Mohammed Alshahrani
- Computer Department, Applied College, Najran University, Najran 66462, Saudi Arabia
| | - Ebrahim Mohammed Senan
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana'a, Yemen
| | | |
Collapse
|
26
|
Ahmed IA, Senan EM, Shatnawi HSA. Hybrid Models for Endoscopy Image Analysis for Early Detection of Gastrointestinal Diseases Based on Fused Features. Diagnostics (Basel) 2023; 13:diagnostics13101758. [PMID: 37238241 DOI: 10.3390/diagnostics13101758] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2023] [Revised: 05/12/2023] [Accepted: 05/13/2023] [Indexed: 05/28/2023] Open
Abstract
The gastrointestinal system comprises the upper and lower gastrointestinal tracts. Its main tasks are to break down food, convert it into essential elements that the body can benefit from, and expel waste in the form of feces. If any organ is affected and does not work well, the whole body is affected. Many gastrointestinal diseases, such as infections, ulcers, and benign and malignant tumors, threaten human life. Endoscopy techniques are the gold standard for detecting infected parts within the organs of the gastrointestinal tract. They produce videos that are converted into thousands of frames, of which only some show the disease's characteristics; this represents a challenge for doctors because reviewing them is a tedious task that requires time, effort, and experience. Computer-assisted automated diagnostic techniques help doctors identify the disease and give the patient the appropriate treatment. In this study, several efficient methodologies for analyzing endoscopy images to diagnose gastrointestinal diseases were developed for the Kvasir dataset. The Kvasir dataset was classified by three pre-trained models: GoogLeNet, MobileNet, and DenseNet121. The images were optimized, and the gradient vector flow (GVF) algorithm was applied to segment the regions of interest (ROIs), isolating them from healthy regions; the resulting endoscopy images were saved as Kvasir-ROI. The Kvasir-ROI dataset was likewise classified by the three pre-trained GoogLeNet, MobileNet, and DenseNet121 models. Hybrid methodologies (CNN-FFNN and CNN-XGBoost) were developed based on the GVF algorithm and achieved promising results for diagnosing disease from gastroenterology endoscopy images. The last methodology is based on fused CNN features classified by FFNN and XGBoost networks. The hybrid methodology based on the fused CNN features, called GoogLeNet-MobileNet-DenseNet121-XGBoost, achieved an AUC of 97.54%, accuracy of 97.25%, sensitivity of 96.86%, precision of 97.25%, and specificity of 99.48%.
Collapse
Affiliation(s)
| | - Ebrahim Mohammed Senan
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana'a, Yemen
| | | |
Collapse
|
27
|
Shamsan A, Senan EM, Shatnawi HSA. Automatic Classification of Colour Fundus Images for Prediction Eye Disease Types Based on Hybrid Features. Diagnostics (Basel) 2023; 13:diagnostics13101706. [PMID: 37238190 DOI: 10.3390/diagnostics13101706] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2023] [Revised: 05/06/2023] [Accepted: 05/08/2023] [Indexed: 05/28/2023] Open
Abstract
Early detection of eye diseases is the only way to receive timely treatment and prevent blindness. Colour fundus photography (CFP) is an effective fundus examination technique. Because of the similarity of eye disease symptoms in the early stages and the difficulty of distinguishing between the types of disease, there is a need for computer-assisted automated diagnostic techniques. This study focuses on classifying an eye disease dataset using hybrid techniques based on feature extraction with fusion methods. Three strategies were designed to classify CFP images for the diagnosis of eye disease. The first method classifies the eye disease dataset using an Artificial Neural Network (ANN) with features from the MobileNet and DenseNet121 models separately, after reducing the high dimensionality and repetitive features using Principal Component Analysis (PCA). The second method classifies the eye disease dataset using an ANN on the basis of fused features from the MobileNet and DenseNet121 models before and after reducing features. The third method classifies the eye disease dataset using an ANN based on the features of the MobileNet and DenseNet121 models separately fused with handcrafted features. Based on the fused MobileNet and handcrafted features, the ANN attained an AUC of 99.23%, an accuracy of 98.5%, a precision of 98.45%, a specificity of 99.4%, and a sensitivity of 98.75%.
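The PCA reduction applied to the deep feature vectors can be sketched via the singular value decomposition. This is a minimal numpy version that projects feature vectors onto their top principal components; the toy matrix below is illustrative:

```python
import numpy as np

def pca_reduce(features, n_components):
    """Project high-dimensional (e.g. fused CNN) feature vectors onto
    their top principal components to drop redundant dimensions."""
    centered = features - features.mean(axis=0)            # zero-mean columns
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T                  # scores in PC space

# toy feature matrix: the second column is constant (redundant),
# so all variance lies along the first principal component
X = np.array([[1.0, 5.0], [2.0, 5.0], [3.0, 5.0], [4.0, 5.0]])
Z = pca_reduce(X, 1)
```

In practice the projection matrix is fitted on the training features and reused for the test features, so both live in the same reduced space.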
Collapse
Affiliation(s)
- Ahlam Shamsan
- Computer Department, Applied College, Najran University, Najran 66462, Saudi Arabia
| | - Ebrahim Mohammed Senan
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana'a, Yemen
| | | |
Collapse
|
28
|
Khalid A, Senan EM, Al-Wagih K, Ali Al-Azzam MM, Alkhraisha ZM. Hybrid Techniques of X-ray Analysis to Predict Knee Osteoarthritis Grades Based on Fusion Features of CNN and Handcrafted. Diagnostics (Basel) 2023; 13:diagnostics13091609. [PMID: 37175000 PMCID: PMC10178472 DOI: 10.3390/diagnostics13091609] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2023] [Revised: 04/25/2023] [Accepted: 04/28/2023] [Indexed: 05/15/2023] Open
Abstract
Knee osteoarthritis (KOA) is a chronic disease that impedes movement, especially in the elderly, affecting more than 5% of people worldwide. KOA progresses through many stages, from the mild grade that can be treated to the severe grade in which the knee must be replaced. Therefore, early diagnosis of KOA is essential to avoid its development to the advanced stages. X-rays are one of the vital techniques for the early detection of knee problems, but distinguishing Kellgren-Lawrence (KL) grades requires highly experienced doctors and radiologists. Thus, artificial intelligence techniques solve the shortcomings of manual diagnosis. This study developed three methodologies for the X-ray analysis of both the Osteoarthritis Initiative (OAI) and Rani Channamma University (RCU) datasets for diagnosing KOA and discriminating between KL grades. In all methodologies, the Principal Component Analysis (PCA) algorithm was applied after the CNN models to delete the unimportant and redundant features and keep the essential features. The first methodology analyzes X-rays and diagnoses the degree of knee inflammation using the VGG-19-FFNN and ResNet-101-FFNN systems. The second methodology diagnoses the KOA grade by a Feed Forward Neural Network (FFNN) based on the combined features of VGG-19 and ResNet-101 before and after PCA. The third methodology diagnoses the KOA grade by FFNN based on the fusion of VGG-19 features with handcrafted features and of ResNet-101 features with handcrafted features. For the OAI dataset with the fusion of VGG-19 and handcrafted features, FFNN obtained an AUC of 99.25%, an accuracy of 99.1%, a sensitivity of 98.81%, a specificity of 100%, and a precision of 98.24%. For the RCU dataset with the fusion of VGG-19 and handcrafted features, FFNN obtained an AUC of 99.07%, an accuracy of 98.20%, a sensitivity of 98.16%, a specificity of 99.73%, and a precision of 98.08%.
Collapse
Affiliation(s)
- Ahmed Khalid
- Computer Department, Applied College, Najran University, Najran 66462, Saudi Arabia
| | - Ebrahim Mohammed Senan
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana'a, Yemen
| | - Khalil Al-Wagih
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana'a, Yemen
| | | | | |
Collapse
|
29
|
Ghaleb Al-Mekhlafi Z, Mohammed Senan E, Sulaiman Alshudukhi J, Abdulkarem Mohammed B. Hybrid Techniques for Diagnosing Endoscopy Images for Early Detection of Gastrointestinal Disease Based on Fusion Features. INT J INTELL SYST 2023. [DOI: 10.1155/2023/8616939] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/05/2023]
Abstract
Gastrointestinal (GI) diseases, particularly tumours, are considered among the most widespread and dangerous diseases and thus need timely health care for early detection to reduce deaths. Endoscopy technology is an effective technique for diagnosing GI diseases, producing a video containing thousands of frames. However, it is difficult for a gastroenterologist to analyse all the images, and keeping track of all the frames takes a long time. Thus, artificial intelligence systems provide solutions to this challenge by analysing thousands of images with high speed and effective accuracy. Hence, systems with different methodologies are developed in this work. The first methodology for diagnosing endoscopy images of GI diseases uses VGG-16 + SVM and DenseNet-121 + SVM. The second methodology diagnoses endoscopy images of gastrointestinal diseases by an artificial neural network (ANN) based on fused features of VGG-16 and DenseNet-121 before and after high-dimensionality reduction by principal component analysis (PCA). The third methodology uses an ANN based on the features of VGG-16 fused with handcrafted features and the features of DenseNet-121 fused with handcrafted features. Herein, the handcrafted features combine the gray-level co-occurrence matrix (GLCM), discrete wavelet transform (DWT), fuzzy colour histogram (FCH), and local binary pattern (LBP) methods. All systems achieved promising results for diagnosing endoscopy images of the gastroenterology dataset. The ANN reached an accuracy, sensitivity, precision, specificity, and AUC of 98.9%, 98.70%, 98.94%, 99.69%, and 99.51%, respectively, based on the fused features of VGG-16 and the handcrafted features.
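The gray-level co-occurrence matrix underlying the handcrafted texture features can be computed directly. This is a minimal numpy sketch for a single pixel offset; the offset and the tiny image are illustrative:

```python
import numpy as np

def glcm(img, levels, dy=0, dx=1):
    """Gray-level co-occurrence matrix: count how often gray level j
    occurs at offset (dy, dx) from gray level i, then normalise so
    the entries form a joint probability table for texture statistics
    (contrast, homogeneity, energy, ...)."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(max(0, -dy), min(h, h - dy)):      # stay inside the image
        for x in range(max(0, -dx), min(w, w - dx)):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()                                # joint probabilities

# 2x3 image with two gray levels; offset (0, 1) = "pixel to the right"
img = np.array([[0, 0, 1],
                [1, 0, 1]])
P = glcm(img, levels=2)
```

Real pipelines average several offsets and angles and then derive scalar statistics from each normalised matrix.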
Collapse
Affiliation(s)
- Zeyad Ghaleb Al-Mekhlafi
- Department of Information and Computer Science, College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia
| | - Ebrahim Mohammed Senan
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana’a, Yemen
| | - Jalawi Sulaiman Alshudukhi
- Department of Information and Computer Science, College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia
| | - Badiea Abdulkarem Mohammed
- Department of Computer Engineering, College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia
| |
Collapse
|
30
|
Olayah F, Senan EM, Ahmed IA, Awaji B. AI Techniques of Dermoscopy Image Analysis for the Early Detection of Skin Lesions Based on Combined CNN Features. Diagnostics (Basel) 2023; 13:diagnostics13071314. [PMID: 37046532 PMCID: PMC10093624 DOI: 10.3390/diagnostics13071314] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2023] [Revised: 03/23/2023] [Accepted: 03/29/2023] [Indexed: 04/05/2023] Open
Abstract
Melanoma is one of the deadliest types of skin cancer and leads to death if not diagnosed early. Many skin lesions are similar in the early stages, which causes inaccurate diagnoses. Accurate diagnosis of the types of skin lesions helps dermatologists save patients' lives. In this paper, we propose hybrid systems based on the advantages of fused CNN models. The CNN models receive dermoscopy images of the ISIC 2019 dataset after the lesion areas have been segmented and isolated from healthy skin by the Geometric Active Contour (GAC) algorithm. An artificial neural network (ANN) and Random Forest (RF) receive the fused CNN features and classify them with high accuracy. The first methodology involved analyzing the area of skin lesions and diagnosing their type early using the hybrid models CNN-ANN and CNN-RF. The CNN models (AlexNet, GoogLeNet and VGG16) receive the lesion area only and produce high-depth feature maps, which were reduced by PCA and then classified by the ANN and RF networks. The second methodology involved analyzing the area of skin lesions and diagnosing their type early using the hybrid CNN-ANN and CNN-RF models based on the features of the fused CNN models; the features of the CNN models were serially integrated after their high dimensions were reduced by Principal Component Analysis (PCA). The hybrid models based on fused CNN features achieved promising results for diagnosing dermoscopic images of the ISIC 2019 dataset and distinguishing skin cancer from other skin lesions. The AlexNet-GoogLeNet-VGG16-ANN hybrid model achieved an AUC of 94.41%, sensitivity of 88.90%, accuracy of 96.10%, precision of 88.69%, and specificity of 99.44%.
Collapse
Affiliation(s)
- Fekry Olayah
- Department of Information System, Faculty Computer Science and Information System, Najran University, Najran 66462, Saudi Arabia
| | - Ebrahim Mohammed Senan
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana’a, Yemen
| | | | - Bakri Awaji
- Department of Computer Science, Faculty of Computer Science and Information System, Najran University, Najran 66462, Saudi Arabia
| |
Collapse
|
31
|
Al-Jabbar M, Alshahrani M, Senan EM, Ahmed IA. Histopathological Analysis for Detecting Lung and Colon Cancer Malignancies Using Hybrid Systems with Fused Features. Bioengineering (Basel) 2023; 10:bioengineering10030383. [PMID: 36978774 PMCID: PMC10045080 DOI: 10.3390/bioengineering10030383] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2023] [Revised: 03/05/2023] [Accepted: 03/16/2023] [Indexed: 03/30/2023] Open
Abstract
Lung and colon cancer are among humanity's most common and deadly cancers. In 2020, 4.19 million people were diagnosed with lung and colon cancer, and more than 2.7 million died worldwide. Some people develop lung and colon cancer simultaneously because of shared risk factors: smoking causes lung cancer, and an abnormal diet contributes to colon cancer. There are many techniques for diagnosing lung and colon cancer, most notably the biopsy technique and its analysis in laboratories, but health centers and medical staff are scarce, especially in developing countries. Moreover, manual diagnosis takes a long time and is subject to differing opinions of doctors. Thus, artificial intelligence techniques solve these challenges. In this study, three strategies were developed, each with two systems, for early diagnosis of histological images of the LC25000 dataset. The histological images were enhanced, and the contrast of affected areas was increased. The GoogLeNet and VGG-19 models of all systems produced high-dimensional features, so redundant and unnecessary features were removed by the PCA method to reduce the high dimensionality and retain the essential features. The first strategy diagnoses the histological images of the LC25000 dataset by ANN using the crucial features of the GoogLeNet and VGG-19 models separately. The second strategy uses ANN with the combined features of GoogLeNet and VGG-19: one system reduced the dimensions and then combined the features, while the other combined the features and then reduced the high dimensions. The third strategy uses ANN with fused features of the CNN models (GoogLeNet and VGG-19) and handcrafted features. With the fused VGG-19 and handcrafted features, the ANN reached a sensitivity of 99.85%, a precision of 100%, an accuracy of 99.64%, a specificity of 100%, and an AUC of 99.86%.
Affiliation(s)
- Mohammed Al-Jabbar
- Computer Department, Applied College, Najran University, Najran 66462, Saudi Arabia
- Mohammed Alshahrani
- Computer Department, Applied College, Najran University, Najran 66462, Saudi Arabia
- Ebrahim Mohammed Senan
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana'a, Yemen
32
Multi-Models of Analyzing Dermoscopy Images for Early Detection of Multi-Class Skin Lesions Based on Fused Features. Processes (Basel) 2023. [DOI: 10.3390/pr11030910] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/19/2023] Open
Abstract
Melanoma is a cancer that threatens life and leads to death. Effective detection of skin lesion types from images is a challenging task. Dermoscopy is an effective technique for detecting skin lesions, and early diagnosis of skin cancer is essential for proper treatment. Skin lesions are similar in their early stages, so manual diagnosis is difficult. Artificial intelligence techniques can therefore analyze images of skin lesions and discover hidden features not seen by the naked eye. This study developed hybrid techniques based on hybrid features to effectively analyze dermoscopic images and classify two skin-lesion datasets, HAM10000 and PH2. The images were optimized for all techniques, and the class imbalance in the two datasets was resolved. The HAM10000 and PH2 datasets were classified by the pre-trained MobileNet and ResNet101 models. For effective detection of early-stage skin lesions, the hybrid techniques SVM-MobileNet, SVM-ResNet101 and SVM-MobileNet-ResNet101 were applied; these performed better than the pre-trained CNN models owing to the effectiveness of the handcrafted features describing color, texture and shape. The handcrafted features were then combined with the features of the MobileNet and ResNet101 models to form highly discriminative feature vectors. Finally, the MobileNet-handcrafted and ResNet101-handcrafted features were sent to an ANN for classification with high accuracy. For the HAM10000 dataset, the ANN with MobileNet and handcrafted features achieved an AUC of 97.53%, accuracy of 98.4%, sensitivity of 94.46%, precision of 93.44% and specificity of 99.43%. Using the same technique, the PH2 dataset achieved 100% for all metrics.
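A minimal sketch, under invented feature dimensions, of the serial feature fusion this abstract describes — concatenating deep CNN feature vectors with handcrafted descriptors (color, texture, shape) into one vector per sample; this is not the authors' code:

```python
import numpy as np

def fuse_features(deep, handcrafted):
    """Serially fuse deep CNN features with handcrafted descriptors by
    concatenating them along the feature axis (one vector per sample)."""
    return np.concatenate([deep, handcrafted], axis=1)

rng = np.random.default_rng(1)
mobilenet_feats = rng.normal(size=(10, 1280))   # hypothetical MobileNet vectors
color_hist = rng.random(size=(10, 64))          # hypothetical color histograms
fused = fuse_features(mobilenet_feats, color_hist)
print(fused.shape)  # (10, 1344)
```

The fused vectors are what a downstream classifier (an ANN in the paper) would consume.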
33
Alturki N, Umer M, Ishaq A, Abuzinadah N, Alnowaiser K, Mohamed A, Saidani O, Ashraf I. Combining CNN Features with Voting Classifiers for Optimizing Performance of Brain Tumor Classification. Cancers (Basel) 2023; 15:cancers15061767. [PMID: 36980653 PMCID: PMC10046217 DOI: 10.3390/cancers15061767] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2023] [Revised: 02/20/2023] [Accepted: 03/04/2023] [Indexed: 03/17/2023] Open
Abstract
Brain tumors and other nervous system cancers are among the top ten leading fatal diseases. The effective treatment of brain tumors depends on their early detection. This research work uses 13 features with a voting classifier that combines logistic regression with stochastic gradient descent, operating on features extracted by deep convolutional layers, for the efficient classification of tumor patients versus normal cases. Deep convolutional features are extracted from the first- and second-order brain tumor features for model training. Using deep convolutional features helps to increase the precision of tumor and non-tumor patient classification. The proposed voting classifier together with the convolutional features produces the highest accuracy of 99.9%. Compared to cutting-edge methods, the proposed approach demonstrates improved accuracy.
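A rough numpy illustration of the soft-voting idea behind this abstract's classifier: average the class-probability outputs of two base models (here standing in for logistic regression and an SGD-trained classifier) and take the argmax. The probability values are made up for the example; this is a sketch, not the paper's implementation:

```python
import numpy as np

def soft_vote(prob_a, prob_b, weights=(0.5, 0.5)):
    """Combine two classifiers' class probabilities by a weighted
    average and predict the class with the highest combined score."""
    combined = weights[0] * prob_a + weights[1] * prob_b
    return combined, combined.argmax(axis=1)

# Hypothetical per-class probabilities from two base classifiers
p_lr  = np.array([[0.9, 0.1], [0.4, 0.6]])   # e.g. logistic regression
p_sgd = np.array([[0.7, 0.3], [0.2, 0.8]])   # e.g. SGD classifier
probs, labels = soft_vote(p_lr, p_sgd)
print(labels)  # [0 1]
```

Unequal weights would let the stronger base model dominate the vote.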
Affiliation(s)
- Nazik Alturki
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Muhammad Umer
- Department of Computer Science & Information Technology, The Islamia University of Bahawalpur, Bahawalpur 63100, Pakistan
- Abid Ishaq
- Department of Computer Science & Information Technology, The Islamia University of Bahawalpur, Bahawalpur 63100, Pakistan
- Nihal Abuzinadah
- Faculty of Computer Science and Information Technology, King Abdulaziz University, P.O. Box. 80200, Jeddah 21589, Saudi Arabia
- Khaled Alnowaiser
- Department of Computer Engineering, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- Abdullah Mohamed
- Research Centre, Future University in Egypt, New Cairo 11745, Egypt
- Oumaima Saidani
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Imran Ashraf
- Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea
- Correspondence:
34
Ahmed IA, Senan EM, Shatnawi HSA, Alkhraisha ZM, Al-Azzam MMA. Hybrid Techniques for the Diagnosis of Acute Lymphoblastic Leukemia Based on Fusion of CNN Features. Diagnostics (Basel) 2023; 13:diagnostics13061026. [PMID: 36980334 PMCID: PMC10047564 DOI: 10.3390/diagnostics13061026] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2023] [Revised: 03/03/2023] [Accepted: 03/06/2023] [Indexed: 03/30/2023] Open
Abstract
Acute lymphoblastic leukemia (ALL) is one of the deadliest forms of leukemia, caused by the bone marrow producing a large number of abnormal white blood cells (WBC). ALL is one of the most common types of cancer in children and adults. Doctors determine the treatment of leukemia according to its stage and its spread in the body, relying on the analysis of blood samples under a microscope. Pathologists face challenges such as the similarity between infected and normal WBC in the early stages; manual diagnosis is prone to errors and differences of opinion, and experienced pathologists are scarce relative to the number of patients. Thus, computer-assisted systems play an essential role in helping pathologists detect ALL early. In this study, systems with high efficiency and high accuracy were developed to analyze the images of the C-NMC 2019 and ALL-IDB2 datasets. In all proposed systems, blood micrographs were enhanced and then fed to the active contour method to extract WBC-only regions for further analysis by three CNN models (DenseNet121, ResNet50, and MobileNet). The first strategy for analyzing the ALL images of the two datasets is the hybrid CNN-RF and CNN-XGBoost technique: the DenseNet121, ResNet50, and MobileNet models extract deep feature maps, which contain redundant and non-significant features, so they were fed to the Principal Component Analysis (PCA) method to select highly representative features and then sent to RF and XGBoost classifiers, given the high similarity between infected and normal WBC in the early stages. The second strategy analyzes the ALL images using serially fused features of the CNN models: the deep feature maps of DenseNet121-ResNet50, ResNet50-MobileNet, DenseNet121-MobileNet, and DenseNet121-ResNet50-MobileNet were merged and then classified by the RF and XGBoost classifiers. The RF classifier with the fused DenseNet121-ResNet50-MobileNet features reached an AUC of 99.1%, accuracy of 98.8%, sensitivity of 98.45%, precision of 98.7%, and specificity of 98.85% for the C-NMC 2019 dataset. With the ALL-IDB2 dataset, the hybrid systems achieved 100% for AUC, accuracy, sensitivity, precision, and specificity.
Affiliation(s)
- Ebrahim Mohammed Senan
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana'a, Yemen
35
Ahmed IA, Senan EM, Shatnawi HSA, Alkhraisha ZM, Al-Azzam MMA. Multi-Techniques for Analyzing X-ray Images for Early Detection and Differentiation of Pneumonia and Tuberculosis Based on Hybrid Features. Diagnostics (Basel) 2023; 13:diagnostics13040814. [PMID: 36832302 PMCID: PMC9955018 DOI: 10.3390/diagnostics13040814] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2023] [Revised: 02/16/2023] [Accepted: 02/19/2023] [Indexed: 02/23/2023] Open
Abstract
Tuberculosis (TB) is an infectious disease that exhibits pneumonia-like symptoms and traits. X-ray imaging is one of the most important methods for identifying and diagnosing pneumonia and tuberculosis. However, early discrimination is difficult for radiologists and doctors because of the similarities between the two diseases; as a result, patients do not receive the proper care, which in turn fails to prevent the disease from spreading. The goal of this study is to extract hybrid features using a variety of techniques in order to achieve promising results in differentiating between pneumonia and tuberculosis. Several approaches for early identification and discrimination of tuberculosis from pneumonia were proposed. The first proposed system uses the hybrid techniques VGG16 + support vector machine (SVM) and ResNet18 + SVM. The second proposed system uses an artificial neural network (ANN) based on the integrated features of VGG16 and ResNet18, before and after reducing the high dimensionality with the principal component analysis (PCA) method. The third proposed system uses an ANN based on the features of VGG16 and ResNet18, each integrated separately with handcrafted features extracted by the local binary pattern (LBP), discrete wavelet transform (DWT) and gray level co-occurrence matrix (GLCM) algorithms. All the proposed systems achieved superior results in the early differentiation between pneumonia and tuberculosis. An ANN based on the features of VGG16 with LBP, DWT and GLCM (LDG) reached an accuracy of 99.6%, sensitivity of 99.17%, specificity of 99.42%, precision of 99.63%, and an AUC of 99.58%.
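Of the handcrafted descriptors this abstract names, the local binary pattern (LBP) is the simplest to sketch: each pixel's eight neighbours are thresholded against the centre and the resulting bits are packed into a code. The numpy toy below uses the basic 3x3 variant on a made-up image and is illustrative only, not the paper's implementation:

```python
import numpy as np

def lbp_3x3(image):
    """Basic 3x3 local binary pattern: threshold each interior pixel's
    8 neighbours against the centre and pack the bits into one code."""
    h, w = image.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Neighbour offsets, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = image[1:h-1, 1:w-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = image[1+dy:h-1+dy, 1+dx:w-1+dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    return codes

img = np.array([[5, 5, 5],
                [5, 4, 5],
                [5, 5, 5]], dtype=np.uint8)
print(lbp_3x3(img))  # [[255]] -- every neighbour >= the centre pixel
```

In a full pipeline, a histogram of these codes over the image (or over regions) forms the texture feature vector that gets fused with the CNN features.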
Affiliation(s)
- Ibrahim Abdulrab Ahmed
- Computer Department, Applied College, Najran University, Najran 66462, Saudi Arabia
- Correspondence: (I.A.A.); (E.M.S.)
- Ebrahim Mohammed Senan
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana’a, Yemen
- Correspondence: (I.A.A.); (E.M.S.)
36
Hybrid Techniques of Analyzing MRI Images for Early Diagnosis of Brain Tumours Based on Hybrid Features. Processes (Basel) 2023. [DOI: 10.3390/pr11010212] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/11/2023] Open
Abstract
Brain tumours are considered one of the deadliest tumours in humans and have a low survival rate due to their heterogeneous nature. Several types of benign and malignant brain tumours need to be diagnosed early to administer appropriate treatment. Magnetic resonance (MR) images provide details of the brain’s internal structure, which allow radiologists and doctors to diagnose brain tumours. However, MR images contain complex details that require highly qualified experts and a long time to analyse. Artificial intelligence techniques address these challenges. This paper presents four proposed systems, each combining more than one technology; the techniques span machine, deep and hybrid learning. The first system comprises artificial neural network (ANN) and feedforward neural network (FFNN) algorithms based on the hybrid features of the local binary pattern (LBP), grey-level co-occurrence matrix (GLCM) and discrete wavelet transform (DWT) algorithms. The second system comprises the pre-trained GoogLeNet and ResNet-50 models for dataset classification; the two models achieved superior results in distinguishing between the types of brain tumours. The third system is a hybrid technique combining a convolutional neural network with a support vector machine, which also achieved superior results in distinguishing brain tumours. The fourth proposed system is a hybrid of the features of GoogLeNet and ResNet-50 with the LBP, GLCM and DWT (handcrafted) features, which obtains representative features and classifies them using the ANN and FFNN. This method achieved superior results in distinguishing between brain tumours and performed better than the other methods. With the hybrid features of GoogLeNet and the handcrafted features, the FFNN achieved an accuracy of 99.9%, a precision of 99.84%, a sensitivity of 99.95%, a specificity of 99.85% and an AUC of 99.9%.
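Another of the handcrafted descriptors recurring in these entries is the grey-level co-occurrence matrix (GLCM): for a fixed pixel displacement, it counts how often grey level i co-occurs with grey level j. A minimal numpy sketch for a single horizontal offset, on a tiny invented image (not the paper's code):

```python
import numpy as np

def glcm(image, levels, dy=0, dx=1):
    """Grey-level co-occurrence matrix for one offset: count how often
    grey level i is followed by grey level j at displacement (dy, dx)."""
    h, w = image.shape
    m = np.zeros((levels, levels), dtype=np.int64)
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m

img = np.array([[0, 0, 1],
                [1, 2, 2],
                [2, 2, 2]])
m = glcm(img, levels=3)
print(m)  # [[1 1 0] [0 0 1] [0 0 3]]
```

Texture statistics such as contrast, energy and homogeneity are then computed from the (usually normalised) matrix to form the feature vector.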
37
Liao J, Li X, Gan Y, Han S, Rong P, Wang W, Li W, Zhou L. Artificial intelligence assists precision medicine in cancer treatment. Front Oncol 2023; 12:998222. [PMID: 36686757 PMCID: PMC9846804 DOI: 10.3389/fonc.2022.998222] [Citation(s) in RCA: 29] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2022] [Accepted: 11/22/2022] [Indexed: 01/06/2023] Open
Abstract
Cancer is a major medical problem worldwide. Because of its high heterogeneity, the same drugs or surgical methods may have different curative effects in patients with the same tumor, creating a need for more accurate treatment methods and personalized therapy. Precise treatment of tumors is essential, which makes it urgent to obtain an in-depth understanding of the changes tumors undergo, including changes in their genes, proteins and cancer cell phenotypes, in order to develop targeted treatment strategies for patients. Artificial intelligence (AI) based on big data can extract the hidden patterns, important information, and corresponding knowledge behind enormous amounts of data. For example, machine learning (ML) and deep learning, subsets of AI, can mine the deep-level information in genomics, transcriptomics, proteomics, radiomics, digital pathological images, and other data, helping clinicians understand tumors comprehensively. In addition, AI can find new biomarkers in data to assist tumor screening, detection, diagnosis, treatment and prognosis prediction, so as to provide the best treatment for individual patients and improve their clinical outcomes.
Affiliation(s)
- Jinzhuang Liao
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
- Xiaoying Li
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
- Yu Gan
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
- Shuangze Han
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
- Pengfei Rong
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
- Cell Transplantation and Gene Therapy Institute, The Third Xiangya Hospital, Central South University, Changsha, Hunan, China
- Wei Wang
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
- Cell Transplantation and Gene Therapy Institute, The Third Xiangya Hospital, Central South University, Changsha, Hunan, China
- Wei Li
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
- Cell Transplantation and Gene Therapy Institute, The Third Xiangya Hospital, Central South University, Changsha, Hunan, China
- Li Zhou
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
- Cell Transplantation and Gene Therapy Institute, The Third Xiangya Hospital, Central South University, Changsha, Hunan, China
- Department of Pathology, The Xiangya Hospital of Central South University, Changsha, Hunan, China
38
Alsubai S, Khan HU, Alqahtani A, Sha M, Abbas S, Mohammad UG. Ensemble deep learning for brain tumor detection. Front Comput Neurosci 2022; 16:1005617. [PMID: 36118133 PMCID: PMC9480978 DOI: 10.3389/fncom.2022.1005617] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Accepted: 08/18/2022] [Indexed: 11/29/2022] Open
Abstract
With the quick evolution of medical technology, the era of big data in medicine is fast approaching. The analysis and mining of these data significantly influence the prediction, monitoring, diagnosis, and treatment of tumor disorders. Because of their wide range of traits, low survival rate, and aggressive nature, brain tumors are regarded as among the deadliest and most devastating diseases. Misdiagnosed brain tumors lead to inadequate medical treatment, reducing the patient's chances of survival. Brain tumor detection is highly challenging because of the difficulty of distinguishing between aberrant and normal tissues. Effective therapy and long-term survival are made possible for the patient by a correct diagnosis. Despite extensive research, there are still certain limitations in detecting brain tumors because of the unusual distribution pattern of the lesions. Finding a region with a small number of lesions can be difficult because small areas tend to look healthy, which directly reduces classification accuracy, and extracting and selecting informative features is challenging. Automatic classification of early-stage brain tumors using deep and machine learning approaches therefore plays a significant role. This paper proposes a hybrid deep learning model, Convolutional Neural Network-Long Short Term Memory (CNN-LSTM), for classifying and predicting brain tumors from Magnetic Resonance Images (MRI). We experiment on an MRI brain image dataset. First, the data are preprocessed efficiently; then a Convolutional Neural Network (CNN) is applied to extract the significant features from the images. The proposed model predicts brain tumors with a classification accuracy of 99.1%, a precision of 98.8%, a recall of 98.9%, and an F1-measure of 99.0%.
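The LSTM half of the CNN-LSTM model this abstract proposes is, at its core, a gated recurrence. As a framework-free illustration (not the authors' model), one LSTM time step can be written directly in numpy; the hidden size, input size, and all-zero parameters below are arbitrary choices for a sanity check:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W, U, b stack the input (i), forget (f),
    cell-candidate (g) and output (o) gate parameters along axis 0,
    so their first dimension is 4 * hidden_size."""
    hid = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[:hid])            # input gate
    f = sigmoid(z[hid:2*hid])       # forget gate
    g = np.tanh(z[2*hid:3*hid])     # candidate cell state
    o = sigmoid(z[3*hid:])          # output gate
    c = f * c_prev + i * g          # new cell state
    h = o * np.tanh(c)              # new hidden state
    return h, c

# With all-zero parameters every gate sits at 0.5 and the candidate at 0
hid, inp = 3, 4
h, c = lstm_step(np.ones(inp), np.zeros(hid), np.zeros(hid),
                 np.zeros((4*hid, inp)), np.zeros((4*hid, hid)), np.zeros(4*hid))
print(c)  # [0. 0. 0.] -- f*c_prev + i*g = 0.5*0 + 0.5*0
```

In the paper's pipeline, the x fed to each step would be a CNN feature vector for one slice or time position rather than the dummy ones used here.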
Affiliation(s)
- Shtwai Alsubai
- College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, AlKharj, Saudi Arabia
- Habib Ullah Khan
- Department of Accounting and Information Systems, College of Business and Economics, Qatar University, Doha, Qatar
- Correspondence: Habib Ullah Khan
- Abdullah Alqahtani
- College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, AlKharj, Saudi Arabia
- Mohemmed Sha
- College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, AlKharj, Saudi Arabia
- Sidra Abbas
- Department of Computer Science, COMSATS University, Islamabad, Pakistan
- Uzma Ghulam Mohammad
- Department of Computer Science and Software Engineering, International Islamic University, Islamabad, Pakistan
39
Early Diagnosis of Oral Squamous Cell Carcinoma Based on Histopathological Images Using Deep and Hybrid Learning Approaches. Diagnostics (Basel) 2022; 12:diagnostics12081899. [PMID: 36010249 PMCID: PMC9406837 DOI: 10.3390/diagnostics12081899] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2022] [Revised: 07/30/2022] [Accepted: 08/03/2022] [Indexed: 11/17/2022] Open
Abstract
Oral squamous cell carcinoma (OSCC) is one of the most common head and neck cancer types and is ranked the seventh most common cancer overall. As OSCC is a histological tumor, histopathological images are the gold standard for diagnosis. However, such diagnosis takes a long time and requires extensive human expertise because of tumor heterogeneity. Artificial intelligence techniques therefore help doctors and experts to make an accurate diagnosis. This study aimed to achieve satisfactory results for the early diagnosis of OSCC by applying hybrid techniques based on fused features. The first proposed method is based on a hybrid of CNN models (AlexNet and ResNet-18) and the support vector machine (SVM) algorithm; it achieved superior results in diagnosing the OSCC dataset. The second proposed method is based on the hybrid features extracted by the CNN models (AlexNet and ResNet-18) combined with the color, texture, and shape features extracted using the fuzzy color histogram (FCH), discrete wavelet transform (DWT), local binary pattern (LBP), and gray-level co-occurrence matrix (GLCM) algorithms. Because of the high dimensionality of the dataset features, the principal component analysis (PCA) algorithm was applied to reduce the dimensionality before sending the features to the artificial neural network (ANN) algorithm for classification with promising accuracy. All the proposed systems achieved superior results in the histological image diagnosis of OSCC; the ANN based on the hybrid features of AlexNet, DWT, LBP, FCH, and GLCM achieved an accuracy of 99.1%, specificity of 99.61%, sensitivity of 99.5%, precision of 99.71%, and AUC of 99.52%.
40
Deep and Hybrid Learning Technique for Early Detection of Tuberculosis Based on X-ray Images Using Feature Fusion. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12147092] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/02/2023]
Abstract
Tuberculosis (TB) is a fatal disease in developing countries, with the infection spreading through direct contact or the air. Despite its seriousness, early detection of tuberculosis by reliable techniques can save patients' lives. A chest X-ray is a recommended screening technique for locating pulmonary abnormalities. However, analyzing X-ray images to detect abnormalities requires highly experienced radiologists. Therefore, artificial intelligence techniques come into play to help radiologists perform an accurate diagnosis at the early stages of TB. Hence, this study focuses on applying two AI techniques, CNN and ANN, and proposes two approaches, with two systems each, to diagnose tuberculosis from two datasets. The first approach hybridizes two CNN models, ResNet-50 and GoogLeNet. Prior to the classification stage, it applies the principal component analysis (PCA) algorithm to reduce the dimensionality of the extracted deep features; the SVM algorithm then classifies the features with high accuracy. This hybrid approach achieved superior results in diagnosing tuberculosis from the X-ray images of both datasets. The second approach applies an artificial neural network (ANN) to the fused features extracted by the ResNet-50 and GoogLeNet models combined with the features extracted by the gray level co-occurrence matrix (GLCM), discrete wavelet transform (DWT) and local binary pattern (LBP) algorithms. The ANN achieved superior results on both tuberculosis datasets. On the first dataset, the ANN with ResNet-50, GLCM, DWT and LBP features achieved an accuracy of 99.2%, a sensitivity of 99.23%, a specificity of 99.41%, and an AUC of 99.78%. On the second dataset, it reached an accuracy of 99.8%, a sensitivity of 99.54%, a specificity of 99.68%, and an AUC of 99.82%. Thus, the proposed methods help doctors and radiologists diagnose tuberculosis early and increase patients' chances of survival.
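The discrete wavelet transform used as a handcrafted feature extractor across these entries can be illustrated with its simplest instance, a single-level 2-D Haar transform: averages and differences along rows and then columns yield the approximation (LL) and three detail sub-bands. A numpy toy on a 2x2 image, for illustration only (real pipelines typically use a wavelet library):

```python
import numpy as np

def haar_dwt2(image):
    """Single-level 2-D Haar wavelet transform: returns the
    approximation (LL) and horizontal/vertical/diagonal detail bands."""
    a = image.astype(float)
    # Row pass: low-pass (pairwise mean) and high-pass (pairwise difference)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2
    hi = (a[:, 0::2] - a[:, 1::2]) / 2
    # Column pass on each row-filtered band
    ll = (lo[0::2] + lo[1::2]) / 2
    lh = (lo[0::2] - lo[1::2]) / 2
    hl = (hi[0::2] + hi[1::2]) / 2
    hh = (hi[0::2] - hi[1::2]) / 2
    return ll, lh, hl, hh

img = np.array([[4, 2],
                [2, 0]])
ll, lh, hl, hh = haar_dwt2(img)
print(ll)  # [[2.]] -- the mean of the 2x2 block
```

Statistics of the sub-band coefficients (energy, entropy, and so on) then serve as the DWT features fused with the CNN features.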