1
AlRyalat SA, Musleh AM, Kahook MY. Evaluating the strengths and limitations of multimodal ChatGPT-4 in detecting glaucoma using fundus images. Frontiers in Ophthalmology 2024; 4:1387190. PMID: 38984105; PMCID: PMC11182172; DOI: 10.3389/fopht.2024.1387190. Received 02/16/2024; accepted 05/17/2024.
Abstract
Overview: This study evaluates the diagnostic accuracy of a multimodal large language model (LLM), ChatGPT-4, in recognizing glaucoma from color fundus photographs (CFPs) on a benchmark dataset, without prior training or fine-tuning.
Methods: The publicly accessible Retinal Fundus Glaucoma Challenge (REFUGE) dataset was used for the analyses. The input data consisted of the entire 400-image test set. The task was to classify each fundus image as either 'Likely Glaucomatous' or 'Likely Non-Glaucomatous'. We constructed a confusion matrix to visualize the predictions of ChatGPT-4, focusing on the accuracy of the binary classification (glaucoma vs. non-glaucoma).
Results: ChatGPT-4 demonstrated an accuracy of 90% (95% confidence interval (CI): 87.06%-92.94%). Sensitivity was 50% (95% CI: 34.51%-65.49%), while specificity was 94.44% (95% CI: 92.08%-96.81%). Precision was 50% (95% CI: 34.51%-65.49%), and the F1 score was 0.50.
Conclusion: ChatGPT-4 achieved relatively high diagnostic accuracy without prior fine-tuning on CFPs. Given the scarcity of data in specialized medical fields, including ophthalmology, advanced AI techniques such as LLMs may require less training data than other forms of AI, with potential savings in time and financial resources. They may also pave the way for innovative tools to support specialized medical care, particularly care dependent on multimodal data for diagnosis and follow-up, irrespective of resource constraints.
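The reported metrics can be reproduced from a 2x2 confusion matrix. The counts below are inferred from the reported percentages and the REFUGE test split (400 images, of which 40 are glaucomatous); they are an illustrative assumption, not values stated verbatim in the abstract:

```python
# Confusion-matrix counts inferred from the reported percentages
# (assumption: REFUGE test set = 40 glaucoma + 360 non-glaucoma images).
TP, FN = 20, 20    # sensitivity 50% of 40 glaucomatous images
TN, FP = 340, 20   # specificity ~94.44% of 360 non-glaucomatous images

accuracy = (TP + TN) / (TP + TN + FP + FN)
sensitivity = TP / (TP + FN)
specificity = TN / (TN + FP)
precision = TP / (TP + FP)
f1 = 2 * precision * sensitivity / (precision + sensitivity)

print(f"accuracy={accuracy:.2%}, sensitivity={sensitivity:.2%}, "
      f"specificity={specificity:.2%}, precision={precision:.2%}, F1={f1:.2f}")
```

With these assumed counts, every figure in the Results section is recovered exactly: accuracy 90%, sensitivity 50%, specificity 94.44%, precision 50%, F1 0.50.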
Affiliation(s)
- Saif Aldeen AlRyalat
- Department of Ophthalmology, The University of Jordan, Amman, Jordan
- Department of Ophthalmology, Houston Methodist Hospital, Houston, TX, United States
- Malik Y. Kahook
- Department of Ophthalmology, University of Colorado School of Medicine, Sue Anschutz-Rodgers Eye Center, Aurora, CO, United States
2
Hasan MM, Phu J, Sowmya A, Meijering E, Kalloniatis M. Artificial intelligence in the diagnosis of glaucoma and neurodegenerative diseases. Clin Exp Optom 2024; 107:130-146. PMID: 37674264; DOI: 10.1080/08164622.2023.2235346. Received 02/23/2023; accepted 07/07/2023.
Abstract
Artificial intelligence is a rapidly expanding field within computer science that encompasses the emulation of human intelligence by machines. Machine learning and deep learning, the two primary data-driven pattern analysis approaches under the umbrella of artificial intelligence, have attracted considerable interest in the last few decades. Technological advances have produced a substantial body of artificial intelligence research on diagnosing ophthalmic and neurodegenerative diseases from retinal images. Various artificial intelligence-based techniques have been used for diagnosis, including traditional machine learning, deep learning, and their combinations. Presented here is a review of the literature from the last 10 years on this topic, discussing the use of artificial intelligence to analyse data from different modalities, and their combinations, for the diagnosis of glaucoma and neurodegenerative diseases. The performance of published artificial intelligence methods varies due to several factors, yet the results suggest that such methods can facilitate clinical diagnosis. Generally, the accuracy of artificial intelligence-assisted diagnosis ranges from 67% to 98%, and the area under the sensitivity-specificity curve (AUC) ranges from 0.71 to 0.98, outperforming typical human performance of 71.5% accuracy and 0.86 AUC. This indicates that artificial intelligence-based tools can provide clinicians with useful information to improve diagnosis. The review suggests that existing artificial intelligence-based models using retinal imaging modalities still have room for improvement before they are incorporated into clinical practice.
Affiliation(s)
- Md Mahmudul Hasan
- School of Computer Science and Engineering, University of New South Wales, Kensington, New South Wales, Australia
- Jack Phu
- School of Optometry and Vision Science, University of New South Wales, Kensington, Australia
- Centre for Eye Health, University of New South Wales, Sydney, New South Wales, Australia
- School of Medicine (Optometry), Deakin University, Waurn Ponds, Victoria, Australia
- Arcot Sowmya
- School of Computer Science and Engineering, University of New South Wales, Kensington, New South Wales, Australia
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Kensington, New South Wales, Australia
- Michael Kalloniatis
- School of Optometry and Vision Science, University of New South Wales, Kensington, Australia
- School of Medicine (Optometry), Deakin University, Waurn Ponds, Victoria, Australia
3
Zedan MJM, Zulkifley MA, Ibrahim AA, Moubark AM, Kamari NAM, Abdani SR. Automated Glaucoma Screening and Diagnosis Based on Retinal Fundus Images Using Deep Learning Approaches: A Comprehensive Review. Diagnostics (Basel) 2023; 13:2180. PMID: 37443574; DOI: 10.3390/diagnostics13132180. Received 05/20/2023; revised 06/16/2023; accepted 06/17/2023.
Abstract
Glaucoma is a chronic eye disease that may lead to permanent vision loss if it is not diagnosed and treated at an early stage. The disease originates from irregular drainage of fluid from the eye, which raises intraocular pressure and, in the severe stage of the disease, damages the optic nerve head and causes vision loss. Periodic follow-up examinations of the retinal area by ophthalmologists are needed, and interpreting the results appropriately requires an extensive degree of skill and experience. To address this, algorithms based on deep learning techniques have been designed to screen and diagnose glaucoma from retinal fundus images and to analyse images of the optic nerve and retinal structures. The objective of this paper is therefore to provide a systematic analysis of 52 state-of-the-art studies on the screening and diagnosis of glaucoma, covering the datasets used in developing the algorithms, the performance metrics, and the modalities employed in each article. Furthermore, this review analyses and evaluates the methods used, comparing their strengths and weaknesses in an organized manner. It also explores a wide range of diagnostic procedures, such as image pre-processing, localization, classification, and segmentation. In conclusion, automated glaucoma diagnosis has shown considerable promise when deep learning algorithms are applied; such algorithms could make glaucoma diagnosis both more accurate and more efficient.
Affiliation(s)
- Mohammad J M Zedan
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Computer and Information Engineering Department, College of Electronics Engineering, Ninevah University, Mosul 41002, Iraq
- Mohd Asyraf Zulkifley
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Ahmad Asrul Ibrahim
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Asraf Mohamed Moubark
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Nor Azwan Mohamed Kamari
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Siti Raihanah Abdani
- School of Computing Sciences, College of Computing, Informatics and Media, Universiti Teknologi MARA, Shah Alam 40450, Selangor, Malaysia
4
Bhavani R, Vasanth K. Brain image fusion-based tumour detection using grey level co-occurrence matrix Tamura feature extraction with backpropagation network classification. Mathematical Biosciences and Engineering 2023; 20:8727-8744. PMID: 37161219; DOI: 10.3934/mbe.2023383.
Abstract
One of the most challenging tasks in medical image analysis is the detection of brain tumours, which can be imaged with modalities such as MRI, CT and PET. In this work, MRI and CT images are selected, preprocessed, decomposed in a stationary wavelet transform (SWT) stage, and fused to increase efficiency; the fused image is reconstructed through the inverse SWT (ISWT). Its features are then extracted through the GLCM-Tamura method and fed to a backpropagation network (BPN) classifier, which performs supervised, non-knowledge-based image classification. The classifier is trained on databases of tumours labelled benign or malignant, and the tumour region is segmented via k-means clustering. Once the software is deployed, the patient's health status is communicated through GSM. Our method integrates image fusion, feature extraction, and classification to distinguish and segment the tumour-affected area and to notify the affected person. The experimental analysis has been carried out in terms of accuracy, precision, recall, F1 score, RMSE and MAP.
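As a rough illustration of the GLCM step named above (a generic sketch, not the authors' implementation), a grey level co-occurrence matrix counts how often pairs of grey levels co-occur at a fixed pixel offset; texture features such as contrast are then computed from the normalized matrix:

```python
import numpy as np

def glcm(img, levels, offset=(0, 1)):
    """Count co-occurrences of grey-level pairs at a fixed pixel offset."""
    m = np.zeros((levels, levels), dtype=int)
    dr, dc = offset
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[img[r, c], img[r2, c2]] += 1
    return m

def contrast(m):
    """GLCM contrast: sum of P(i, j) * (i - j)^2 over the normalized matrix."""
    p = m / m.sum()
    i, j = np.indices(m.shape)
    return float((p * (i - j) ** 2).sum())

# Tiny 2-level example, counting horizontal neighbours only.
img = np.array([[0, 0, 1],
                [1, 1, 0]])
m = glcm(img, levels=2)
```

In practice, several GLCM statistics (contrast, energy, homogeneity, correlation) over multiple offsets are concatenated to form the texture feature vector fed to the classifier.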
Affiliation(s)
- R Bhavani
- Department of ECE, Sathyabama Institute of Science and Technology, Chennai 600119, India
- K Vasanth
- Department of ECE, Vidya Jyothi Institute of Technology, Hyderabad 500075, India
5
Prediction of the Age and Gender Based on Human Face Images Based on Deep Learning Algorithm. Computational and Mathematical Methods in Medicine 2022; 2022:1413597. PMID: 36060657; PMCID: PMC9433232; DOI: 10.1155/2022/1413597. Received 04/10/2022; revised 06/14/2022; accepted 06/19/2022.
Abstract
In recent times, nutrition recommendation systems have gained increasing attention due to the need for healthy living. Current studies in the food domain deal with recommendation systems that focus on independent users and their health problems but lack nutritional advice tailored to individual users. The proposed system suggests nutritional food to people based on age and gender predicted from their face image. The designed methodology preprocesses the input image before performing feature extraction using a deep convolutional neural network (DCNN), which extracts D-dimensional characteristics from the source face image. A feature selection stage follows: the face's distinctive and identifiable traits are chosen using a hybrid particle swarm optimization (HPSO) technique, and a support vector machine (SVM) classifies the person's age and gender. The nutrition recommendation then relies on the predicted age and gender classes. The proposed system is evaluated using classification rate, precision, and recall on the Adience and UTKFace datasets as well as real-world images, and exhibits excellent performance in both prediction quality and computation time.
6
Diagnosis of Retinal Diseases Based on Bayesian Optimization Deep Learning Network Using Optical Coherence Tomography Images. Computational Intelligence and Neuroscience 2022; 2022:8014979. PMID: 35463234; PMCID: PMC9033334; DOI: 10.1155/2022/8014979. Received 02/10/2022; accepted 03/17/2022.
Abstract
Retinal abnormalities have emerged as a serious public health concern in recent years and can manifest gradually and without warning. These diseases can affect any part of the retina, causing vision impairment and even blindness in extreme cases. This necessitates automated approaches that detect retinal diseases more precisely and, preferably, earlier. In this paper, we examine transfer learning of pretrained convolutional neural networks (CNNs) to detect retinal diseases from optical coherence tomography (OCT) images. Pretrained CNN models, namely VGG16, DenseNet201, InceptionV3, and Xception, are used to classify seven different retinal diseases from a dataset of images with and without retinal diseases. In addition, Bayesian optimization is applied to choose optimal hyperparameter values, and image augmentation is used to increase the generalization capability of the developed models. This research also provides a comparison and analysis of the proposed models. The accuracy achieved using DenseNet201 on the Retinal OCT Image dataset is more than 99%, a good level of accuracy in classifying retinal diseases compared to other approaches, which detect only a small number of retinal diseases.
7
Gampala V, Maram B, Vigneshwari S, Cristin R. Glaucoma detection using hybrid architecture based on optimal deep neuro fuzzy network. Int J Intell Syst 2022. DOI: 10.1002/int.22845.
Affiliation(s)
- Veerraju Gampala
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Guntur, Andhra Pradesh, India
- Balajee Maram
- Department of Computer Science and Engineering, GMR Institute of Technology, Rajam, Andhra Pradesh, India
- S. Vigneshwari
- Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, Tamil Nadu, India
- R. Cristin
- Department of Computer Science and Engineering, GMR Institute of Technology, Rajam, Andhra Pradesh, India
8
Investigating the Role of Image Fusion in Brain Tumor Classification Models Based on Machine Learning Algorithm for Personalized Medicine. Computational and Mathematical Methods in Medicine 2022; 2022:7137524. PMID: 35178119; PMCID: PMC8843791; DOI: 10.1155/2022/7137524. Received 10/10/2021; revised 12/13/2021; accepted 12/20/2021.
Abstract
Image fusion can be performed using either spatial-domain or frequency-domain methods. Frequency-domain methods are generally preferred because they can improve the quality of edges in an image. In image fusion, the resultant fused images are more informative than the individual input images and thus more suitable for classification problems. Artificial intelligence (AI) algorithms play a significant role in improving patient treatment in the health care industry and thus in advancing personalized medicine. This work analyses the role of image fusion in an improved brain tumor classification model; this novel fusion-based cancer classification model can be used for personalized medicine more effectively, since high-quality fused images provide better classification results than individual input images. Initially, input MRI and SPECT images are preprocessed with the contrast limited adaptive histogram equalization (CLAHE) technique. A discrete cosine transform (DCT)-based fusion method is then applied to benign- and malignant-class brain tumor images to obtain fused images. AI algorithms, namely support vector machine (SVM), KNN, and decision tree classifiers, are tested with features obtained from the fused images and compared with the results obtained from the individual input images. Classifier performance is measured using accuracy, precision, recall, specificity, and F1 score. The SVM classifier performed better than the KNN and decision tree classifiers when features extracted from fused images were used, achieving a maximum accuracy of 96.8%, precision of 95%, recall of 94%, specificity of 93%, and F1 score of 91%. The proposed method is compared with existing methods and provides satisfactory results.
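A DCT-based fusion step like the one described can be sketched as follows; this is a minimal illustration assuming a simple "maximum absolute coefficient" fusion rule, which is one common choice and not necessarily the exact rule used in the paper:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_fuse(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Fuse two same-size grayscale images in the DCT domain by keeping,
    for each frequency, the coefficient with the larger magnitude."""
    ca = dctn(img_a, norm="ortho")
    cb = dctn(img_b, norm="ortho")
    fused = np.where(np.abs(ca) >= np.abs(cb), ca, cb)
    return idctn(fused, norm="ortho")

# Stand-in arrays for registered MRI and SPECT slices (hypothetical data).
rng = np.random.default_rng(0)
mri = rng.random((8, 8))
spect = rng.random((8, 8))
fused = dct_fuse(mri, spect)
```

Because the orthonormal DCT is invertible, fusing an image with itself recovers that image, a useful sanity check; in a full pipeline the fused slice would then feed the feature-extraction and classification stages.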
9
Kogilavani SV, Prabhu J, Sandhiya R, Kumar MS, Subramaniam U, Karthick A, Muhibbullah M, Imam SBS. COVID-19 Detection Based on Lung CT Scan Using Deep Learning Techniques. Computational and Mathematical Methods in Medicine 2022; 2022:7672196. PMID: 35116074; PMCID: PMC8805449; DOI: 10.1155/2022/7672196. Received 10/12/2021; accepted 01/07/2022.
Abstract
SARS-CoV-2 is a novel virus responsible for the COVID-19 pandemic that has emerged in recent years. In 2019, the city of Wuhan reported the first incidence of COVID-19. Infected people show symptoms related to pneumonia, as the virus affects the body's respiratory organs and makes breathing difficult. The disease is diagnosed with a real-time reverse transcriptase-polymerase chain reaction (RT-PCR) kit, but due to kit shortages, suspected patients cannot always be treated promptly, allowing the disease to spread. To develop an alternative, radiologists examined changes in radiological imaging, such as CT scans, which produce comprehensive, high-quality pictures of the body. In this approach, the suspected patient's computed tomography (CT) scan is used to distinguish between a healthy individual and a COVID-19 patient using deep learning algorithms, many of which have been proposed for COVID-19. The proposed work utilizes CNN architectures, namely VGG16, DenseNet121, MobileNet, NASNet, Xception, and EfficientNet. The dataset contains 3873 CT scan images labelled "COVID" and "Non-COVID" and is divided into training, test, and validation sets. The accuracies obtained are 97.68% for VGG16, 97.53% for DenseNet121, 96.38% for MobileNet, 89.51% for NASNet, 92.47% for Xception, and 80.19% for EfficientNet. These results show that the VGG16 architecture gives the best accuracy among the tested architectures.
Affiliation(s)
- S. V. Kogilavani
- Department of Computer Science and Engineering, Kongu Engineering College, Perundurai, Erode 638060, Tamil Nadu, India
- J. Prabhu
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- R. Sandhiya
- Department of Computer Science and Engineering, Kongu Engineering College, Perundurai, Erode 638060, Tamil Nadu, India
- M. Sandeep Kumar
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- UmaShankar Subramaniam
- Renewable Energy Lab, College of Engineering, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Department of Energy and Environmental Engineering, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Saveetha Nagar, Thandalam, Chennai 602105, Tamil Nadu, India
- Alagar Karthick
- Renewable Energy Lab, Department of Electrical and Electronics Engineering, KPR Institute of Engineering and Technology, Coimbatore 641407, Tamil Nadu, India
- M. Muhibbullah
- Department of Electrical and Electronic Engineering, Bangladesh University, Dhaka 1207, Bangladesh
- Sharmila Banu Sheik Imam
- College of Computer Science & Information Technology (CCSIT), King Faisal University, Alahsa 31982, Saudi Arabia