1
Zhang L, Huang Y, Chen J, Xu X, Xu F, Yao J. Multimodal deep transfer learning to predict retinal vein occlusion macular edema recurrence after anti-VEGF therapy. Heliyon 2024; 10:e29334. PMID: 38655307; PMCID: PMC11036002; DOI: 10.1016/j.heliyon.2024.e29334.
Abstract
Purpose: To develop a multimodal deep transfer learning (DTL) fusion model using optical coherence tomography angiography (OCTA) images to predict the recurrence of macular edema (ME) secondary to retinal vein occlusion (RVO) after three consecutive anti-VEGF treatments. Methods: This retrospective cross-sectional study comprised 2800 B-scan OCTA macular images collected from 140 patients with RVO-ME. Central macular thickness (CMT) > 250 μm during the three-month follow-up after three anti-VEGF injections was used as the criterion for recurrence. OCTA image preprocessing and lesion-area segmentation were performed by senior ophthalmologists. We developed and validated clinical, DTL, and multimodal fusion models based on clinical and extracted OCTA imaging features. Model and expert predictions were evaluated using several performance metrics, including the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity. Results: The DTL models exhibited higher predictive efficacy than the clinical models and the experts' predictions. Among the DTL models, VGG19 performed best, with an AUC of 0.968 (95% CI, 0.943-0.994), accuracy of 0.913, sensitivity of 0.922, and specificity of 0.902 in the validation cohort. Moreover, the fusion VGG19 model showed the highest predictive efficacy of all models, with an AUC of 0.972 (95% CI, 0.946-0.997), accuracy of 0.935, sensitivity of 0.935, and specificity of 0.934 in the validation cohort. Conclusions: Multimodal fusion DTL models showed robust performance in predicting RVO-ME recurrence and may assist clinicians in determining patients' follow-up schedules after anti-VEGF therapy.
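The fusion step this abstract describes — combining deep image features with clinical variables before producing a recurrence score — can be sketched as a simple late-fusion head. The feature dimensions, random weights, and clinical-variable count below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fuse_and_predict(image_features, clinical_features, w, b):
    """Late fusion: concatenate deep image features with clinical
    variables and score recurrence risk with a logistic head."""
    fused = np.concatenate([image_features, clinical_features], axis=1)
    return sigmoid(fused @ w + b)

# Illustrative dimensions: 512-d deep features (e.g. from a CNN
# backbone such as VGG19) plus 4 clinical variables (age, CMT, ...).
n_patients = 8
img_feats = rng.normal(size=(n_patients, 512))
clin_feats = rng.normal(size=(n_patients, 4))
w = rng.normal(scale=0.01, size=516)  # untrained placeholder weights
b = 0.0

risk = fuse_and_predict(img_feats, clin_feats, w, b)
print(risk.shape)  # (8,)
```

In practice the head would be trained jointly with, or on top of, the frozen transfer-learned backbone; the sketch only shows how the two modalities meet.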
Affiliation(s)
- Laihe Zhang
- The Affiliated Eye Hospital, Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Ying Huang
- The Affiliated Eye Hospital, Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Jiaqin Chen
- The Affiliated Eye Hospital, Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Xiangzhong Xu
- The Affiliated Eye Hospital, Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Fan Xu
- The Affiliated Eye Hospital, Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Jin Yao
- The Affiliated Eye Hospital, Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
2
Abdullah AA, Hassan MM, Mustafa YT. Leveraging Bayesian deep learning and ensemble methods for uncertainty quantification in image classification: A ranking-based approach. Heliyon 2024; 10:e24188. PMID: 38293520; PMCID: PMC10825337; DOI: 10.1016/j.heliyon.2024.e24188.
Abstract
Bayesian deep learning (BDL) has emerged as a powerful technique for quantifying uncertainty in classification tasks, surpassing the effectiveness of traditional models by aligning with the probabilistic nature of real-world data. This alignment allows for informed decision-making by not only identifying the most likely outcome but also quantifying the surrounding uncertainty. Such capabilities hold great significance in fields like medical diagnosis and autonomous driving, where the consequences of misclassification are substantial. To further improve uncertainty quantification, the research community has introduced Bayesian model ensembles, which combine multiple Bayesian models to enhance predictive accuracy and uncertainty quantification. These ensembles have exhibited superior performance compared to individual Bayesian models and even non-Bayesian counterparts. In this study, we propose a novel approach that leverages the power of Bayesian ensembles for enhanced uncertainty quantification. The proposed method exploits the disparity between the predicted positive and negative classes and employs it as a ranking metric for model selection: for each instance, the ensemble's output for each class is determined by selecting the top k models according to this ranking. Experimental results on different medical image classification tasks demonstrate that the proposed method consistently outperforms or matches conventional Bayesian ensembles. This investigation highlights the practical application of Bayesian ensemble techniques in refining predictive performance and enhancing uncertainty evaluation in image classification tasks.
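The ranking rule described above can be sketched directly: per sample, rank the member models by the gap between their positive- and negative-class probabilities and average only the top-k outputs. The toy probabilities below are invented for illustration; in the paper they would come from Bayesian deep models:

```python
import numpy as np

def topk_ensemble(probs, k):
    """probs: (n_models, n_samples, 2) per-model class probabilities
    for a binary task. For each sample, rank models by the disparity
    between predicted positive and negative class probabilities, then
    average only the top-k models' outputs."""
    disparity = np.abs(probs[..., 1] - probs[..., 0])  # (n_models, n_samples)
    # Indices of the k models with the largest disparity, per sample.
    top = np.argsort(-disparity, axis=0)[:k]           # (k, n_samples)
    selected = np.take_along_axis(probs, top[..., None], axis=0)
    return selected.mean(axis=0)                       # (n_samples, 2)

# Three toy "Bayesian" models scoring two samples.
probs = np.array([
    [[0.60, 0.40], [0.5, 0.5]],
    [[0.90, 0.10], [0.2, 0.8]],
    [[0.55, 0.45], [0.4, 0.6]],
])
print(topk_ensemble(probs, k=2))  # [[0.75 0.25] [0.3 0.7]]
```

Note how the least confident model is dropped independently per sample, which is what distinguishes this rule from a fixed ensemble subset.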
Affiliation(s)
- Abdullah A. Abdullah
- Computer Science Department, Faculty of Science, University of Zakho, Duhok, Iraq
- Masoud M. Hassan
- Computer Science Department, Faculty of Science, University of Zakho, Duhok, Iraq
- Yaseen T. Mustafa
- Environmental Science Department, Faculty of Science, University of Zakho, Duhok, Iraq
3
Rajendran S, Panneerselvam RK, Kumar PJ, Rajasekaran VA, Suganya P, Mathivanan SK, Jayagopal P. Prescreening and Triage of COVID-19 Patients Through Chest X-Ray Images Using Deep Learning Model. Big Data 2023; 11:408-419. PMID: 36103285; DOI: 10.1089/big.2022.0028.
Abstract
Deep learning models deliver a fast diagnosis during triage prescreening for COVID-19 patients, reducing waiting time for hospital admission during health emergencies. The Ministry of Health and Family Welfare, Government of India, provides guidelines from the Indian Council of Medical Research (ICMR) for triage requirements and emergency response, with faster allotment of oxygen beds for COVID-19 patients requiring immediate treatment in Tamil Nadu, India. A combination of pretrained models provides a faster screening rate and identifies patients with severe lung infections who need to be attended to and allotted oxygen beds. Deep learning (DL) algorithms need to be accurate in triaging undifferentiated patients entering the emergency care system (ECS). The major goal of this work is to analyze the accuracy of machine learning approaches applied to triage the acuity of patients arriving in the ECS. The proposed triage model has an accuracy of 93% in classifying COVID/non-COVID patients; it effectively reduces the time needed for the triage procedure and streamlines screening and allocation of beds for high-risk patients.
Affiliation(s)
- Sukumar Rajendran
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India
- Vijay Anand Rajasekaran
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India
- Pandy Suganya
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India
- Sandeep Kumar Mathivanan
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India
- Prabhu Jayagopal
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India
4
Bhandari M, Shahi TB, Neupane A. Evaluating Retinal Disease Diagnosis with an Interpretable Lightweight CNN Model Resistant to Adversarial Attacks. J Imaging 2023; 9:219. PMID: 37888326; PMCID: PMC10607865; DOI: 10.3390/jimaging9100219.
Abstract
Optical coherence tomography (OCT) is an essential diagnostic tool that enables the diagnosis of retinal diseases and anomalies. Manual assessment of these anomalies by specialists is the norm, but its labor-intensive nature calls for more efficient strategies. Consequently, this study employs a convolutional neural network (CNN) to classify images from an OCT dataset into distinct categories: choroidal neovascularization (CNV), diabetic macular edema (DME), drusen, and normal. The average k-fold (k = 10) training accuracy, test accuracy, validation accuracy, training loss, test loss, and validation loss values of the proposed model are 96.33%, 94.29%, 94.12%, 0.1073, 0.2002, and 0.1927, respectively. The Fast Gradient Sign Method (FGSM) is employed to introduce non-random noise aligned with the data gradient of the cost function, with varying epsilon values scaling the noise; the model correctly handles all noise levels below an epsilon of 0.1. Explainable AI algorithms, Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), are utilized to provide human-interpretable explanations approximating the model's behaviour within the region of a particular retinal image. Additionally, two supplementary datasets, COVID-19 and kidney stone, are incorporated to enhance the model's robustness and versatility, achieving precision comparable to state-of-the-art methodologies. Incorporating a lightweight CNN model with 983,716 parameters and 2.37×10^8 floating-point operations (FLOPs), and leveraging explainable AI strategies, this study contributes to efficient OCT-based diagnosis, underscores its potential in advancing medical diagnostics, and offers assistance in the Internet of Medical Things.
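The FGSM perturbation mentioned above follows x' = x + ε·sign(∇ₓJ(x, y)). A minimal sketch on a logistic model, where the loss gradient has a closed form; this stands in for the paper's CNN, whose gradient would come from backpropagation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, epsilon):
    """Fast Gradient Sign Method on a logistic model: step the input
    in the sign direction of the loss gradient. For cross-entropy
    with p = sigmoid(w.x + b), the gradient is dJ/dx = (p - y) * w."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w
    return x + epsilon * np.sign(grad_x)

# Toy weights and a single input with label y = 1.
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.2, 0.4, 0.6])
y = 1.0

x_adv = fgsm_attack(x, y, w, b, epsilon=0.05)
print(np.abs(x_adv - x))  # each coordinate moved by exactly epsilon
```

By construction the attack raises the loss for the true label, which is why sweeping epsilon (as the study does) probes how much perturbation the classifier tolerates.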
Affiliation(s)
- Mohan Bhandari
- Department of Science and Technology, Samriddhi College, Bhaktapur 44800, Nepal
- Tej Bahadur Shahi
- School of Engineering and Technology, Central Queensland University, Norman Gardens, Rockhampton, QLD 4701, Australia
- Central Department of Computer Science and IT, Tribhuvan University, Kathmandu 44600, Nepal
- Arjun Neupane
- School of Engineering and Technology, Central Queensland University, Norman Gardens, Rockhampton, QLD 4701, Australia
5
Li Z, Han Y, Yang X. Multi-Fundus Diseases Classification Using Retinal Optical Coherence Tomography Images with Swin Transformer V2. J Imaging 2023; 9:203. PMID: 37888310; PMCID: PMC10607340; DOI: 10.3390/jimaging9100203.
Abstract
Fundus diseases can damage any part of the retina, and left untreated they can lead to severe vision loss and even blindness. Analyzing optical coherence tomography (OCT) images with deep learning methods can provide early screening and diagnosis of fundus diseases. In this paper, a deep learning model based on Swin Transformer V2 is proposed to diagnose fundus diseases rapidly and accurately. In this method, self-attention is computed within local windows to reduce computational complexity and improve classification efficiency. Meanwhile, the PolyLoss function is introduced to further improve the model's accuracy, and heat maps are generated to visualize the model's predictions. Two independent public datasets, OCT2017 and OCT-C8, were used to train the model and evaluate its performance, respectively. The proposed model achieved an average accuracy of 99.9% on OCT2017 and 99.5% on OCT-C8, performing well in the automatic classification of multi-fundus diseases from retinal OCT images.
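Window-based self-attention of the kind used by Swin-style models computes attention only inside non-overlapping local windows, so the cost grows with the number of windows rather than quadratically with all tokens. A bare numpy sketch of the window partition and a single-head, projection-free attention; dimensions are illustrative, and the real model adds projections, relative position bias, and shifted windows:

```python
import numpy as np

def window_partition(x, ws):
    """Split an (H, W, C) feature map into non-overlapping
    (ws*ws, C) token windows, as in Swin-style local attention."""
    H, W, C = x.shape
    x = x.reshape(H // ws, ws, W // ws, ws, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, ws * ws, C)

def local_self_attention(windows):
    """Plain scaled dot-product attention computed independently
    inside each window (no learned projections, single head)."""
    scale = windows.shape[-1] ** -0.5
    scores = windows @ windows.transpose(0, 2, 1) * scale
    scores = scores - scores.max(axis=-1, keepdims=True)  # stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ windows

rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 8, 16))   # toy 8x8 feature map, 16 channels
wins = window_partition(feat, ws=4)  # -> (4 windows, 16 tokens, 16)
out = local_self_attention(wins)
print(wins.shape, out.shape)
```

With 64 tokens split into four 16-token windows, attention touches 4×16² score entries instead of 64², which is the complexity saving the abstract refers to.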
Affiliation(s)
- Zhenwei Li
- College of Medical Technology and Engineering, Henan University of Science and Technology, Luoyang 471023, China (Y.H.; X.Y.)
6
He J, Wang J, Han Z, Ma J, Wang C, Qi M. An interpretable transformer network for the retinal disease classification using optical coherence tomography. Sci Rep 2023; 13:3637. PMID: 36869160; PMCID: PMC9984386; DOI: 10.1038/s41598-023-30853-z.
Abstract
Retinal illnesses such as age-related macular degeneration and diabetic macular edema can lead to irreversible blindness. With optical coherence tomography (OCT), doctors are able to see cross-sections of the retinal layers and provide patients with a diagnosis. Manual reading of OCT images is time-consuming, labor-intensive, and even error-prone. Computer-aided diagnosis algorithms improve efficiency by automatically analyzing and diagnosing retinal OCT images. However, the accuracy and interpretability of these algorithms can be further improved through effective feature extraction, loss optimization, and visualization analysis. In this paper, we propose an interpretable Swin-Poly Transformer network for automatic retinal OCT image classification. By shifting the window partition, the Swin-Poly Transformer constructs connections between neighboring non-overlapping windows of the previous layer and thus has the flexibility to model multi-scale features. Besides, the Swin-Poly Transformer modifies the importance of polynomial bases to refine the cross-entropy loss for better retinal OCT image classification. In addition, the proposed method provides confidence score maps, assisting medical practitioners in understanding the model's decision-making process. Experiments on OCT2017 and OCT-C8 reveal that the proposed method outperforms both the convolutional neural network approach and ViT, with an accuracy of 99.80% and an AUC of 99.99%.
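The polynomial refinement of cross entropy referenced here corresponds to the Poly-1 form, L = CE + ε(1 − p_t), which re-weights the leading polynomial basis of CE's Taylor expansion. The ε value below is illustrative; the coefficients actually used in the paper may differ:

```python
import numpy as np

def poly1_cross_entropy(logits, target, eps=2.0):
    """Poly-1 loss: softmax cross entropy plus eps * (1 - p_t),
    where p_t is the probability assigned to the target class."""
    z = logits - logits.max()           # numerically stable softmax
    probs = np.exp(z) / np.exp(z).sum()
    pt = probs[target]
    ce = -np.log(pt)
    return ce + eps * (1.0 - pt)

logits = np.array([2.0, 0.5, -1.0])
loss = poly1_cross_entropy(logits, target=0)
print(float(loss))
```

Setting eps to 0 recovers plain cross entropy; a positive eps penalizes low target confidence more strongly, which is the "importance of polynomial bases" knob the abstract mentions.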
Affiliation(s)
- Jingzhen He
- Department of Radiology, Qilu Hospital of Shandong University, Jinan, 250012, China
- Junxia Wang
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, China
- Zeyu Han
- School of Mathematics and Statistics, Shandong University, Weihai, 264209, China
- Jun Ma
- School of Cyber Science and Engineering, Southeast University, Nanjing, 211189, China
- Chongjing Wang
- China Academy of Information and Communications Technology, Beijing, 100191, China
- Meng Qi
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, China
7
Retinal disease prediction through blood vessel segmentation and classification using ensemble-based deep learning approaches. Neural Comput Appl 2023. DOI: 10.1007/s00521-023-08402-6.
8
Saravanan S, Kumar VV, Sarveshwaran V, Indirajithu A, Elangovan D, Allayear SM. Glioma Brain Tumor Detection and Classification Using Convolutional Neural Network. Comput Math Methods Med 2022; 2022:4380901. PMID: 36277002; PMCID: PMC9586767; DOI: 10.1155/2022/4380901.
Abstract
Classification of brain tumor images plays a vital role in the medical imaging domain, directly assisting clinicians in understanding severity and choosing an appropriate course of action. Magnetic resonance imaging is used to analyze brain tissues and examine different regions of the brain. We propose the convolutional neural network database learning with neighboring network limitation (CDBLNL) technique for brain tumor image classification. The proposed architecture is built on multilayer metadata learning integrated with a CNN layer to deliver accurate information. Metadata-based vector encoding is used, with a sparse coding estimate for the extra dimension. To maintain the supervised data in a geometric format, the atoms of the neighboring limitation are built on a well-structured k-neighbored network. The resulting representation is robust and well suited for classification. The system was evaluated on two datasets, BRATS and REMBRANDT, and the proposed brain MRI classification technique is more efficient than other existing techniques.
Affiliation(s)
- S. Saravanan
- Department of Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Avadi, Chennai, India
- V. Vinoth Kumar
- Department of Computer Science and Engineering, Jain (Deemed to Be University), Bangalore, India
- Velliangiri Sarveshwaran
- Department of Computational Intelligence, SRM Institute of Science and Technology, Kattankulathur Campus, Chennai, India
- Alagiri Indirajithu
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, 632014 Tamil Nadu, India
- D. Elangovan
- Department of Computer Science and Engineering, Panimalar Engineering College, Chennai, Tamil Nadu, India
- Shaikh Muhammad Allayear
- Department of Multimedia and Creative Technology, Daffodil International University, Daffodil Smart City, Khagan, Ashulia, Dhaka, Bangladesh
9
Investigation of Applying Machine Learning and Hyperparameter Tuned Deep Learning Approaches for Arrhythmia Detection in ECG Images. Comput Math Methods Med 2022; 2022:8571970. PMID: 36132548; PMCID: PMC9484938; DOI: 10.1155/2022/8571970.
Abstract
The severity of a patient's illness is determined by diagnosing the problem through physical examination, laboratory test data, patient history, and clinical experience; proper diagnosis is essential for treatment. Arrhythmias are irregular variations in the normal heart rhythm, and detecting them manually takes a long time and relies on clinical skill. Machine learning and deep learning models are now used to automate diagnosis by capturing unseen patterns in datasets. This work uses an augmentation technique to expand the dataset by generating additional images, and develops a medical diagnosis system that classifies arrhythmia into different categories. Initially, machine learning techniques such as support vector machine (SVM), naïve Bayes (NB), and logistic regression (LR) are used for diagnosis. Deep learning models are then applied to extract high-level features and improve on the machine learning algorithms: the proposed system employs a baseline convolutional neural network (CNN) for arrhythmia detection and also adopts a novel hyperparameter-tuned CNN model to acquire the combination of parameters that minimizes the loss function and produces better results. The results show that the hyper-tuned model outperforms the other machine learning models and the CNN baseline in accurately classifying normal rhythm and five different arrhythmia types.
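Hyperparameter tuning of the kind described, searching for the parameter combination that maximizes validation performance, can be sketched as a grid search. The `evaluate` function and the grid values below are placeholders for actually training the CNN on ECG images:

```python
import itertools

def evaluate(lr, dropout, batch_size):
    """Stand-in for training the CNN with the given hyperparameters
    and returning validation accuracy. This deterministic toy
    surrogate just makes the sketch runnable."""
    return (0.9
            - abs(lr - 0.001) * 50
            - abs(dropout - 0.3)
            - abs(batch_size - 32) / 1000)

grid = {
    "lr": [0.01, 0.001, 0.0001],
    "dropout": [0.2, 0.3, 0.5],
    "batch_size": [16, 32, 64],
}

# Exhaustively score every combination and keep the best one.
best = max(
    (dict(zip(grid, combo)) for combo in itertools.product(*grid.values())),
    key=lambda p: evaluate(**p),
)
print(best)
```

Random or Bayesian search is usually preferred when each evaluation means a full training run, but the selection logic is the same.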
10
Prediction of the Age and Gender Based on Human Face Images Based on Deep Learning Algorithm. Comput Math Methods Med 2022; 2022:1413597. PMID: 36060657; PMCID: PMC9433232; DOI: 10.1155/2022/1413597.
Abstract
In recent times, nutrition recommendation systems have gained increasing attention due to the need for healthy living. Current studies in the food domain deal with recommendation systems that focus on independent users and their health problems but lack nutritional advice for individual users. The proposed system suggests nutritional food to people based on age and gender predicted from a face image. The methodology preprocesses the input image before performing feature extraction with a deep convolutional neural network (DCNN), which extracts D-dimensional characteristics from the source face image; a feature selection step follows. The face's distinctive and identifiable traits are chosen using a hybrid particle swarm optimization (HPSO) technique, and a support vector machine (SVM) classifies the person's age and gender. The nutrition recommendation relies on these age and gender classes. Evaluated by classification rate, precision, and recall on the Adience and UTKFace datasets and on real-world images, the proposed system exhibits excellent performance, achieving good prediction results and computation time.
11
Multiconvolutional Transfer Learning for 3D Brain Tumor Magnetic Resonance Images. Comput Intell Neurosci 2022; 2022:8722476. PMID: 36052054; PMCID: PMC9427231; DOI: 10.1155/2022/8722476.
Abstract
In applications like medical imaging, progress has been slower because of the difficulty and cost of obtaining data and labels. If deep learning techniques can be implemented reliably, automated workflows and more sophisticated analysis may be possible in previously unexplored areas of medical imaging. In addition, numerous characteristics of medical images, such as their high resolution, three-dimensional nature, and anatomical detail across multiple size scales, can increase the complexity of their analysis. This study employs multiconvolutional transfer learning (MCTL) to apply deep learning to small medical imaging datasets and thereby address these issues. MCTL is a transfer learning-based model that enables deep learning with small datasets: an initial baseline is used in the transfer learning process to learn new features on a smaller target dataset. In this study, 3D MRI images of brain tumors are classified using a convolutional autoencoder method; using unenhanced magnetic resonance imaging (MRI) for clinical diagnosis avoids expensive and invasive contrast-enhancement procedures. MCTL is shown to increase accuracy by 1.5%, indicating that small targets are more easily detected with MCTL. This research can be applied to a wide range of medical imaging and diagnostic procedures, including improving the accuracy of brain tumor severity diagnosis through MRI.
12
Analysis on COVID-19 Infection Spread Rate during Relief Schemes Using Graph Theory and Deep Learning. Comput Math Methods Med 2022; 2022:8131193. PMID: 35991144; PMCID: PMC9391156; DOI: 10.1155/2022/8131193.
Abstract
The novel coronavirus 2019 (COVID-19) disease is a pandemic affecting thousands of people throughout the world. It has spread rapidly throughout India since the country's first case was reported on 30 January 2020; official reports as of 21 June 2020 count 4,11,773 positive cases, 2,28,307 recoveries, and 12,948 deaths. Vaccination is the only way to prevent the spread of COVID-19, but for various reasons there is vaccine hesitancy among many people. Hence, the Indian government seeks to limit the spread of the disease by instructing citizens to maintain social distancing, wear masks, avoid crowds, and wash their hands. Moreover, many cases of poverty have been reported due to social distancing, and hence both the central government and the respective state governments decided to issue relief funds to all citizens. The government is unable to maintain social distancing during the relief schemes, as the population is huge and the available support staff are few. In this paper, the proposed algorithm uses graph theory to schedule the timing of the relief funds so that, with the available support staff, the government can implement its relief scheme while maintaining social distancing. Furthermore, we use an LSTM deep learning model to predict the spread rate and analyze the daily positive COVID-19 cases.
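One way to schedule relief-fund distribution with graph theory is greedy graph coloring: groups that would violate social distancing if served together are joined by an edge and must receive different time slots. This is a generic sketch, not necessarily the paper's exact algorithm; the conflict graph below is invented for illustration:

```python
def greedy_schedule(conflicts):
    """Assign each group the smallest time slot not used by any
    conflicting neighbour (greedy graph coloring). `conflicts` maps
    a group to the set of groups it must not share a slot with.
    Higher-degree groups are colored first (Welsh-Powell order)."""
    slot = {}
    for node in sorted(conflicts, key=lambda n: -len(conflicts[n])):
        taken = {slot[nb] for nb in conflicts[node] if nb in slot}
        s = 0
        while s in taken:
            s += 1
        slot[node] = s
    return slot

# Toy conflict graph: wards sharing support staff cannot be
# scheduled in the same slot.
conflicts = {
    "ward_A": {"ward_B", "ward_C"},
    "ward_B": {"ward_A", "ward_C"},
    "ward_C": {"ward_A", "ward_B", "ward_D"},
    "ward_D": {"ward_C"},
}
slots = greedy_schedule(conflicts)
print(slots)
```

The number of distinct slots produced bounds how many distribution rounds the available staff must run, which is the scheduling trade-off the abstract describes.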
13
An Empirical Analysis of an Optimized Pretrained Deep Learning Model for COVID-19 Diagnosis. Comput Math Methods Med 2022; 2022:9771212. PMID: 35928972; PMCID: PMC9344483; DOI: 10.1155/2022/9771212.
Abstract
As a result of the COVID-19 outbreak, which has put the world in an unprecedented predicament, thousands of people have died. In an integrated bioinformatics approach, data from structured and unstructured sources are combined to create user-friendly platforms for clinicians and researchers, and AI-based platforms can accelerate the diagnosis and treatment of COVID-19. In the battle against the virus, however, researchers and decision-makers must contend with an ever-increasing volume of data, referred to as "big data." The VGG19 and ResNet152V2 pretrained deep learning architectures were used in this study, trained and fine-tuned on lung ultrasound frames from healthy people as well as from patients with COVID-19 and pneumonia. In two separate experiments, we evaluated two classes of predictive models: one against pneumonia and the other against non-COVID-19. According to the findings, COVID-19 can be detected and diagnosed accurately and efficiently using these models. Therefore, these inexpensive and affordable deep learning methods should be considered a reliable means of diagnosing COVID-19.
14
Diagnosing Breast Cancer Based on the Adaptive Neuro-Fuzzy Inference System. Comput Math Methods Med 2022; 2022:9166873. PMID: 35602339; PMCID: PMC9117043; DOI: 10.1155/2022/9166873.
Abstract
In this work, a novel hybrid neuro-fuzzy classifier (HNFC) technique is proposed to classify input data more accurately. The inputs are fuzzified using a generalized membership function, and the fuzzification matrix establishes the connection between an input pattern and its degree of membership in the various classes of the dataset; classification is then performed on this basis. The method is applied to ten benchmark datasets. During preprocessing, missing data are replaced with the mean value, statistical correlation is applied to select the important features from the dataset, and a data transformation technique normalizes the values. Fuzzy logic is first applied to the input dataset; a neural network is then applied to measure performance. The proposed method is evaluated against supervised classification techniques such as the radial basis function neural network (RBFNN) and the adaptive neuro-fuzzy inference system (ANFIS), with classifier performance measured by accuracy and error rate. In the investigation, the proposed approach provided 86.2% classification accuracy on the breast cancer dataset, compared with the other two approaches.
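The fuzzification matrix described above can be sketched with a Gaussian generalized membership function: each row gives one input pattern's degree of membership in each class. The class centroids and σ below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def gaussian_membership(x, centers, sigma=1.0):
    """Degree of membership of sample x in each class, using a
    Gaussian membership function centred on each class centroid."""
    d2 = ((centers - x) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fuzzification_matrix(X, centers, sigma=1.0):
    """Rows: input patterns; columns: membership degree per class."""
    return np.vstack([gaussian_membership(x, centers, sigma) for x in X])

centers = np.array([[0.0, 0.0], [3.0, 3.0]])  # two toy class centroids
X = np.array([[0.1, -0.1], [2.9, 3.2]])       # two input patterns
M = fuzzification_matrix(X, centers)
print(M.argmax(axis=1))  # most-supported class per pattern
```

In the full HNFC pipeline this matrix would feed the neural network stage rather than being thresholded directly; the sketch only shows the fuzzification step.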