1. Muhsin ZJ, Qahwaji R, AlShawabkeh M, AlRyalat SA, Al Bdour M, Al-Taee M. Smart decision support system for keratoconus severity staging using corneal curvature and thinnest pachymetry indices. Eye Vis (Lond) 2024; 11:28. PMID: 38978067; PMCID: PMC11229244; DOI: 10.1186/s40662-024-00394-1. Received 12/15/2023; accepted 06/17/2024.
Abstract
BACKGROUND This study proposes a decision support system, created in collaboration with machine learning experts and ophthalmologists, for detecting keratoconus (KC) severity. The system employs an ensemble machine learning model and a minimal set of corneal measurements. METHODS A clinical dataset is first obtained from Pentacam corneal tomography imaging devices; it undergoes pre-processing, and imbalanced sampling is addressed by oversampling the minority classes. Subsequently, a combination of statistical methods, visual analysis, and expert input is used to identify the Pentacam indices most correlated with the severity class labels. These selected features are then used to develop and validate three distinct machine learning models. The model with the best classification performance is integrated into a real-world web-based application and deployed on a web application server. This deployment allows the proposed system to be evaluated with new data while considering human factors relevant to the user experience. RESULTS The performance of the developed system was evaluated experimentally, and the results revealed an overall accuracy of 98.62%, precision of 98.70%, recall of 98.62%, F1-score of 98.66%, and F2-score of 98.64%. The deployed application also demonstrated precise and smooth end-to-end functionality. CONCLUSION The developed decision support system establishes a robust basis for subsequent assessment by ophthalmologists before potential deployment as a screening tool for keratoconus severity detection in a clinical setting.
Affiliation(s)
- Zahra J Muhsin: Department of Computer Science, University of Bradford, Bradford, BD7 1DP, UK
- Rami Qahwaji: Department of Computer Science, University of Bradford, Bradford, BD7 1DP, UK
- Muawyah Al Bdour: School of Medicine, The University of Jordan, Amman, 11942, Jordan
- Majid Al-Taee: Department of Computer Science, University of Bradford, Bradford, BD7 1DP, UK
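The imbalance-handling step in the abstract above (oversampling minority severity classes) can be sketched as follows. This is a generic random-oversampling illustration, not the authors' exact technique; the severity labels and counts are invented for the example.

```python
import random
from collections import Counter

def oversample_minorities(samples, labels, seed=42):
    """Randomly duplicate minority-class samples until every class
    reaches the majority-class count (simple random oversampling)."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out_samples, out_labels = list(samples), list(labels)
    for cls, n in counts.items():
        pool = [s for s, l in zip(samples, labels) if l == cls]
        for _ in range(target - n):
            out_samples.append(rng.choice(pool))
            out_labels.append(cls)
    return out_samples, out_labels

# Hypothetical severity labels; the class counts are deliberately skewed.
X = [[i] for i in range(10)]
y = ["normal"] * 6 + ["mild"] * 3 + ["severe"] * 1
Xb, yb = oversample_minorities(X, y)
balanced = Counter(yb)
```

Techniques such as SMOTE, which synthesizes new minority samples rather than duplicating existing ones, are a common refinement of this idea.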
2. Goodman D, Zhu AY. Utility of artificial intelligence in the diagnosis and management of keratoconus: a systematic review. Front Ophthalmol (Lausanne) 2024; 4:1380701. PMID: 38984114; PMCID: PMC11182163; DOI: 10.3389/fopht.2024.1380701. Received 02/02/2024; accepted 04/23/2024.
Abstract
Introduction The application of artificial intelligence (AI) systems in ophthalmology is rapidly expanding. Early detection and management of keratoconus is important for preventing disease progression and the need for corneal transplant. We review studies regarding the utility of AI in the diagnosis and management of keratoconus and other corneal ectasias. Methods We conducted a systematic search for relevant original, English-language research studies in the PubMed, Web of Science, Embase, and Cochrane databases from inception to October 31, 2023, using a combination of the following keywords: artificial intelligence, deep learning, machine learning, keratoconus, and corneal ectasia. Case reports, literature reviews, conference proceedings, and editorials were excluded. We extracted the following data from each eligible study: type of AI, input used for training, output, ground truth or reference, dataset size, availability of algorithm/model, availability of dataset, and major study findings. Results Ninety-three original research studies were included in this review, with publication dates ranging from 1994 to 2023. The majority of studies concerned the use of AI in detecting keratoconus or subclinical keratoconus (n=61). Among studies of keratoconus diagnosis, the most common inputs were corneal topography, Scheimpflug-based corneal tomography, and anterior segment optical coherence tomography. This review also summarized 16 original research studies on AI-based assessment of severity and clinical features, 7 studies on the prediction of disease progression, and 6 studies on the characterization of treatment response. There were only three studies on the use of AI in identifying susceptibility genes involved in the etiology and pathogenesis of keratoconus.
Discussion Algorithms trained on Scheimpflug-based tomography appear to be promising tools for the early diagnosis of keratoconus that could be particularly valuable in low-resource communities. Future studies could investigate the application of AI models trained on multimodal patient information for staging keratoconus severity and tracking disease progression.
3. Xu X, Liu D, Huang G, Wang M, Lei M, Jia Y. Computer aided diagnosis of diabetic retinopathy based on multi-view joint learning. Comput Biol Med 2024; 174:108428. PMID: 38631117; DOI: 10.1016/j.compbiomed.2024.108428. Received 08/08/2023; revised 04/02/2024; accepted 04/04/2024.
Abstract
Diabetic retinopathy (DR) is an ocular complication of diabetes, and its severity grade is an essential basis for early diagnosis. Manual diagnosis is a long and expensive process with a certain risk of misdiagnosis; computer-aided diagnosis can provide more accurate and practical treatment recommendations. In this paper, we propose a multi-view joint learning DR diagnostic model called RT2Net, which integrates the global features of fundus images with the local detailed features of vascular images to overcome the limitations of learning from fundus images alone. First, the original image is preprocessed with operations such as contrast-limited adaptive histogram equalization, and the vascular structure of the DR image is segmented. Then, the vascular image and the fundus image are fed into the two branch networks of RT2Net for feature extraction, and a feature fusion module adaptively fuses the feature vectors output by the branch networks. Finally, the optimized classification model identifies the five DR categories. Extensive experiments on the public EyePACS and APTOS 2019 datasets demonstrate the method's effectiveness: RT2Net reaches accuracies of 88.2% and 85.4% on the two datasets, with areas under the receiver operating characteristic curve (AUC) of 0.98 and 0.96, respectively. The strong classification ability of RT2Net can help patients have lesions detected and treated early and provides doctors with a more reliable basis for diagnosis, which has significant clinical value for diagnosing DR.
Affiliation(s)
- Xuebin Xu: School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an 710121, Shaanxi, China; Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an 710121, Shaanxi, China; Xi'an Key Laboratory of Big Data and Intelligent Computing, Xi'an 710121, Shaanxi, China
- Dehua Liu: School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an 710121, Shaanxi, China; Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an 710121, Shaanxi, China; Xi'an Key Laboratory of Big Data and Intelligent Computing, Xi'an 710121, Shaanxi, China
- Guohua Huang: Weinan Central Hospital, Xi'an 714099, Shaanxi, China
- Muyu Wang: School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an 710121, Shaanxi, China; Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an 710121, Shaanxi, China; Xi'an Key Laboratory of Big Data and Intelligent Computing, Xi'an 710121, Shaanxi, China
- Meng Lei: School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an 710121, Shaanxi, China; Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an 710121, Shaanxi, China; Xi'an Key Laboratory of Big Data and Intelligent Computing, Xi'an 710121, Shaanxi, China
- Yang Jia: School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an 710121, Shaanxi, China; Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an 710121, Shaanxi, China; Xi'an Key Laboratory of Big Data and Intelligent Computing, Xi'an 710121, Shaanxi, China
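The preprocessing step named in the abstract above, contrast-limited histogram equalization, can be sketched as follows. This is a minimal global (single-tile) version for illustration; true CLAHE, as used in the paper, applies the same clip-and-remap idea per tile with interpolation between tiles. The sample patch values are invented.

```python
def clahe_like_equalize(img, levels=256, clip_limit=40):
    """Global contrast-limited histogram equalization: clip the
    histogram at clip_limit, redistribute the excess uniformly,
    then map pixels through the resulting CDF."""
    flat = [p for row in img for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Clip the histogram and collect the excess counts.
    excess = 0
    for i, h in enumerate(hist):
        if h > clip_limit:
            excess += h - clip_limit
            hist[i] = clip_limit
    # Redistribute the excess uniformly across all bins.
    bonus = excess // levels
    hist = [h + bonus for h in hist]
    # Build the cumulative distribution and the remapping lookup table.
    total = sum(hist)
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    lut = [round((c * (levels - 1)) / total) for c in cdf]
    return [[lut[p] for p in row] for row in img]

# Hypothetical low-contrast 4x4 patch (values clustered near 100).
patch = [[100, 101, 102, 100],
         [101, 100, 103, 102],
         [102, 103, 100, 101],
         [100, 102, 101, 103]]
enhanced = clahe_like_equalize(patch, clip_limit=4)
```

After equalization the narrow value range around 100 is stretched across most of the 0-255 range, which is what makes faint vascular structure easier to segment.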
4. Yaraghi S, Khatibi T. Keratoconus disease classification with multimodel fusion and vision transformer: a pretrained model approach. BMJ Open Ophthalmol 2024; 9:e001589. PMID: 38653536; PMCID: PMC11043764; DOI: 10.1136/bmjophth-2023-001589. Received 12/06/2023; accepted 03/28/2024.
Abstract
OBJECTIVE Our objective is to develop a novel keratoconus image classification system that leverages multiple pretrained models and a transformer architecture to achieve state-of-the-art performance in detecting keratoconus. METHODS AND ANALYSIS Three pretrained models were used to extract features from the input images. These models have been trained on large datasets and have demonstrated strong performance in various computer vision tasks. The extracted features from the three pretrained models were fused using a feature fusion technique; this fusion aimed to combine the strengths of each model and capture a more comprehensive representation of the input images. The fused features were then used as input to a vision transformer, a powerful architecture that has shown excellent performance in image classification tasks. The vision transformer learnt to classify the input images as either indicative of keratoconus or not. The proposed method was applied to the Shahroud Cohort Eye collection and a keratoconus detection dataset, and the model's performance was evaluated using standard metrics such as accuracy, precision, recall and F1 score. RESULTS The results demonstrated that the proposed model achieved higher accuracy than using each model individually. CONCLUSION The findings of this study suggest that the proposed approach can significantly improve the accuracy of image classification models for keratoconus detection. This approach can serve as an effective decision support system alongside physicians, aiding in the diagnosis of keratoconus and potentially reducing the need for invasive procedures such as corneal transplantation in severe cases.
Affiliation(s)
- Shokufeh Yaraghi: Industrial and Systems Engineering, Tarbiat Modares University, Tehran, Iran
- Toktam Khatibi: Industrial and Systems Engineering, Tarbiat Modares University, Tehran, Iran
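The fusion step described above can be sketched as follows. The paper does not specify the fusion technique, so this sketch assumes simple concatenation, one common choice; the three "backbones" are stub functions standing in for real pretrained CNN feature extractors, and all names and values are invented.

```python
def fuse_features(extractors, image):
    """Concatenate the feature vectors produced by several
    (here stubbed) pretrained backbones into one fused vector."""
    fused = []
    for extract in extractors:
        fused.extend(extract(image))
    return fused

# Stub "backbones": each maps an image (a flat pixel list here)
# to a fixed-length feature vector. Real systems would call
# pretrained CNNs and return their penultimate-layer activations.
backbone_a = lambda img: [sum(img) % 7, len(img)]
backbone_b = lambda img: [max(img), min(img), sum(img) / len(img)]
backbone_c = lambda img: [float(p > 128) for p in img[:4]]

image = [10, 200, 130, 90, 60]
vec = fuse_features([backbone_a, backbone_b, backbone_c], image)
```

The fused vector then serves as the input token sequence for a downstream classifier such as the vision transformer described in the abstract.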
5. Kanellopoulos AJ, Kanellopoulos AJ. Topographic Keratoconus Incidence in Greece Diagnosed in Routine Consecutive Cataract Procedures: A Consecutive Case Series of 1250 Cases over 5 Years. J Clin Med 2024; 13:2378. PMID: 38673651; PMCID: PMC11051409; DOI: 10.3390/jcm13082378. Received 03/08/2024; revised 04/11/2024; accepted 04/15/2024.
Abstract
Background: Scheimpflug tomography has for many years been an integral part of our pre-operative assessment for cataract extraction. We retrospectively reviewed the incidence of topographic keratoconus and keratoconus suspicion in our routine cataract surgery population over 5 years. Setting: The Laservision Clinical and Research Institute, Athens, Greece. Methods: In 1250 consecutive cataract surgery cases in otherwise naïve eyes, covering the years 2017 to 2021, we retrospectively evaluated preoperative Pentacam HR imaging. Cases already classified as keratoconus were included in group A. The remaining cases were assessed by five experienced evaluators (two ophthalmic surgeons and three optometrists) for topographic and tomographic keratoconus suspicion, based on irregular pachymetry distribution, astigmatism truncation, and/or astigmatic imaging irregularity, and included in group B. Corneas judged regular by this assessment were included in group C; corneas judged irregular by the evaluators but for reasons unrelated to keratoconus were included in group D. Results: On this basis, 138 cases (11.08%) were classified by Pentacam tomography as keratoconus and by default included in group A. Of the remaining cases, 314 (25.12%) were classified as suspect keratoconus and included in group B; 725 (58%) were classified as normal and non-keratoconus and included in group C; and 73 (5.84%) were placed in group D as non-keratoconus but abnormal. There was no disagreement between the five evaluators over any of the cases in groups C and D, and little variance among them for cases included in group B (less than 5% by ANOVA).
Conclusions: The incidence of keratoconus and of corneas suspicious for keratoconus in Greece appears to be much higher than in reports from other regions: one in ten Greeks appears to have topographic keratoconus, most not diagnosed even by the age of cataract surgery, and almost an additional one in four may have corneal imaging suspicious for keratoconus. These data strongly imply that routine screening for the disease should be promoted among Greeks, especially during puberty, to halt possible progression; moreover, careful screening should be performed when laser vision correction is being considered.
Affiliation(s)
- Anastasios John Kanellopoulos: Ophthalmology Department, LaserVision Ambulatory Eye Surgery Unit, 115 21 Athens, Greece; Ophthalmology Department, NYU Grossman School of Medicine, New York, NY 10016, USA
- Alexander J. Kanellopoulos: Ophthalmology Department, LaserVision Ambulatory Eye Surgery Unit, 115 21 Athens, Greece; School of Medicine, European University Cyprus, Engomi, Nicosia 2404, Cyprus
6. Afifah A, Syafira F, Afladhanti PM, Dharmawidiarini D. Artificial intelligence as diagnostic modality for keratoconus: A systematic review and meta-analysis. J Taibah Univ Med Sci 2024; 19:296-303. PMID: 38283379; PMCID: PMC10821587; DOI: 10.1016/j.jtumed.2023.12.007. Received 07/20/2023; revised 11/13/2023; accepted 12/25/2023.
Abstract
Objectives The challenges in diagnosing keratoconus (KC) have led researchers to explore the use of artificial intelligence (AI) as a diagnostic tool, and AI has emerged as a way to improve the efficiency of KC diagnosis. This study analyzed the use of AI as a diagnostic modality for KC. Methods This systematic review and meta-analysis followed the 2020 Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. We searched PubMed, Medline, and ScienceDirect for the last 5 years (2018-2023) using the combination of search terms "((Artificial Intelligence) OR (Diagnostic Modality)) AND (Keratoconus)". Following the systematic review protocol, we selected 11 articles, of which 6 were eligible for the final analysis. The relevant data were analyzed with Review Manager 5.4 software, and the final output was presented in a forest plot. Results Neural networks were the AI model most often used for diagnosing KC. Neural networks and naïve Bayes showed the highest accuracy in diagnosing KC, with a sensitivity of 1.00, while random forests exceeded 0.90. All studies in each group demonstrated sensitivity and specificity above 0.90. Conclusions AI can potentially improve the diagnosis of KC through its high performance, particularly its sensitivity and specificity, which can help clinicians make medical decisions about individual patients.
Affiliation(s)
- Azzahra Afifah: Undaan Eye Hospital, Surabaya, Indonesia; Medical Profession Program, Faculty of Medicine, Universitas Sriwijaya, Palembang, South Sumatra, Indonesia
- Fara Syafira: Medical Profession Program, Faculty of Medicine, Universitas Sriwijaya, Palembang, South Sumatra, Indonesia
- Putri Mahirah Afladhanti: Medical Profession Program, Faculty of Medicine, Universitas Sriwijaya, Palembang, South Sumatra, Indonesia
- Dini Dharmawidiarini: Lens, Cornea and Refractive Surgery Division, Undaan Eye Hospital, Surabaya, Indonesia
7. Hashemi H, Doroodgar F, Niazi S, Khabazkhoob M, Heidari Z. Comparison of different corneal imaging modalities using artificial intelligence for diagnosis of keratoconus: a systematic review and meta-analysis. Graefes Arch Clin Exp Ophthalmol 2024; 262:1017-1039. PMID: 37418053; DOI: 10.1007/s00417-023-06154-6. Received 11/12/2022; revised 04/18/2023; accepted 06/16/2023.
Abstract
PURPOSE This review was designed to compare different corneal imaging modalities using artificial intelligence (AI) for the diagnosis of keratoconus (KCN), subclinical KCN (SKCN), and forme fruste KCN (FFKCN). METHODS A comprehensive systematic search was conducted in scientific databases, including Web of Science, PubMed, Scopus, and Google Scholar, based on the PRISMA statement. Two independent reviewers assessed all potential publications on AI and KCN up to March 2022. The Critical Appraisal Skills Programme (CASP) 11-item checklist was used to evaluate the validity of the studies. Eligible articles were categorized into three groups (KCN, SKCN, and FFKCN) and included in the meta-analysis, and the pooled estimate of accuracy (PEA) was calculated for all selected articles. RESULTS The initial search yielded 575 relevant publications, of which 36 met the CASP quality criteria and were included in the analysis. Qualitative assessment showed that Scheimpflug and Placido imaging combined with biomechanical and wavefront evaluations improved KCN detection (PEA 99.2 and 99.0, respectively). The Scheimpflug system (PEA 92.25; 95% CI 94.76-97.51) and the combination of Scheimpflug and Placido (PEA 96.44; 95% CI 93.13-98.19) had the highest diagnostic accuracy for the detection of SKCN and FFKCN, respectively. The meta-analysis showed no significant relationship between the CASP score and the accuracy of the publications (all P > 0.05). CONCLUSIONS Combined Scheimpflug and Placido corneal imaging provides high diagnostic accuracy for early detection of keratoconus, and the use of AI models improves the discrimination of keratoconic eyes from normal corneas.
Affiliation(s)
- Hassan Hashemi: Noor Research Center for Ophthalmic Epidemiology, Noor Eye Hospital, Tehran, Iran
- Farideh Doroodgar: Translational Ophthalmology Research Center, Tehran University of Medical Sciences, Tehran, Iran; Negah Eye Hospital Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Sana Niazi: Research Institute for Ophthalmology and Vision Science, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mehdi Khabazkhoob: Department of Medical Surgical Nursing, School of Nursing and Midwifery, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Zahra Heidari: Department of Ophthalmology, Bu-Ali Sina Hospital, Mazandaran University of Medical Sciences, Sari, Iran; Psychiatry and Behavioral Sciences Research Center, Mazandaran University of Medical Sciences, Sari, Iran
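The pooled estimate of accuracy (PEA) used in the review above can be illustrated with a simple sample-size-weighted pooling across studies. This is a fixed-effect-style sketch with invented per-study counts, not the review's actual random-effects computation or data.

```python
def pooled_accuracy(studies):
    """Sample-size-weighted pooled accuracy (percent) across studies.
    Each study is a (n_correct, n_total) pair; weighting by study
    size is the simplest form of pooling, shown for illustration."""
    correct = sum(c for c, n in studies)
    total = sum(n for c, n in studies)
    return 100.0 * correct / total

# Hypothetical per-study counts: (correctly classified eyes, total eyes).
studies = [(95, 100), (180, 200), (48, 50)]
pea = pooled_accuracy(studies)
```

A real meta-analysis would typically weight by inverse variance and model between-study heterogeneity, but the pooled value always lies between the smallest and largest individual study accuracy, as the assertion below checks.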
8. Tey KY, Cheong EZK, Ang M. Potential applications of artificial intelligence in image analysis in cornea diseases: a review. Eye Vis (Lond) 2024; 11:10. PMID: 38448961; PMCID: PMC10919022; DOI: 10.1186/s40662-024-00376-3. Received 08/30/2023; accepted 02/09/2024.
Abstract
Artificial intelligence (AI) is an emerging field that could make an intelligent healthcare model a reality, and it has been gaining traction in medicine with promising results. There have been recent developments in machine learning and deep learning algorithms for applications in ophthalmology, primarily for diabetic retinopathy and age-related macular degeneration. However, AI research in the field of cornea diseases is relatively new. Algorithms have been described to assist clinicians in the diagnosis or detection of cornea conditions such as keratoconus, infectious keratitis and dry eye disease. AI may also be used for segmentation and analysis of cornea imaging or tomography as an adjunctive tool. Despite the potential advantages that these new technologies offer, there are challenges that need to be addressed before they can be integrated into clinical practice. In this review, we aim to summarize the current literature and provide an update on recent advances in AI technologies pertaining to corneal diseases and their potential future applications, in particular image analysis.
Affiliation(s)
- Kai Yuan Tey: Singapore National Eye Centre, 11 Third Hospital Ave, Singapore, 168751, Singapore; Singapore Eye Research Institute, Singapore, Singapore
- Marcus Ang: Singapore National Eye Centre, 11 Third Hospital Ave, Singapore, 168751, Singapore; Singapore Eye Research Institute, Singapore, Singapore; Duke-NUS Medical School, Singapore, Singapore
9. Niazi S, Gatzioufas Z, Doroodgar F, Findl O, Baradaran-Rafii A, Liechty J, Moshirfar M. Keratoconus: exploring fundamentals and future perspectives - a comprehensive systematic review. Ther Adv Ophthalmol 2024; 16:25158414241232258. PMID: 38516169; PMCID: PMC10956165; DOI: 10.1177/25158414241232258. Received 04/07/2023; accepted 01/22/2024.
Abstract
Background New developments in artificial intelligence, particularly promising results in the early detection and management of keratoconus, have favorably altered the natural history of the disease over the last few decades. Artificial intelligence features in different devices, such as anterior segment optical coherence tomography and the femtosecond laser technique, have improved the safety, precision, effectiveness, and predictability of keratoconus treatment modalities (from contact lenses to keratoplasty techniques). These options, grounded in artificial intelligence, are already underway and allow ophthalmologists to approach the disease in the most non-invasive way. Objectives This study comprehensively describes all treatment modalities for keratoconus in light of machine learning strategies. Design A multidimensional comprehensive systematic narrative review. Data sources and methods A comprehensive search was done in the five main electronic databases (PubMed, Scopus, Web of Science, Embase, and Cochrane), without restrictions on language, time, or type of study. Eligible articles were then selected by screening titles and abstracts against the main MeSH keywords; for potentially eligible articles, the full text was also reviewed. Results Artificial intelligence demonstrates promise in keratoconus diagnosis and clinical management, spanning early detection (especially in subclinical cases), preoperative screening, postoperative ectasia prediction after keratorefractive surgery, and guiding surgical decisions. The majority of studies employed a single machine learning algorithm, whereas a minority assessed multiple algorithms, evaluating the association of various keratoconus staging and management strategies. Finally, AI has proven effective in guiding the implantation of intracorneal ring segments in keratoconus corneas and predicting surgical outcomes.
Conclusion The efficient and widespread clinical translation of machine learning models in keratoconus management is a crucial goal for future approaches aimed at better visual performance in keratoconus patients. Trial registration The article has been registered through PROSPERO, an international database of prospectively registered systematic reviews, with the ID CRD42022319338.
Affiliation(s)
- Sana Niazi: Translational Ophthalmology Research Center, Tehran University of Medical Sciences, Tehran, Iran; Ophthalmic Research Center, Research Institute for Ophthalmology and Vision Science, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Zisis Gatzioufas: Department of Ophthalmology, University Eye Hospital Basel, Basel, Switzerland
- Farideh Doroodgar: Translational Ophthalmology Research Center, Tehran University of Medical Sciences, Pour Sina St, Tehran 1416753955, Iran; Negah Aref Ophthalmic Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Oliver Findl: Department of Ophthalmology, Hanusch Hospital, Vienna Institute for Research in Ocular Surgery (VIROS), Vienna, Austria
- Alireza Baradaran-Rafii: Department of Ophthalmology, Morsani College of Medicine, University of South Florida, Tampa, FL, USA
- Jacob Liechty: Department of Ophthalmology, Morsani College of Medicine, University of South Florida, Tampa, FL, USA
- Majid Moshirfar: John A. Moran Eye Center, University of Utah, Salt Lake City, UT, USA
|
10. Agharezaei Z, Firouzi R, Hassanzadeh S, Zarei-Ghanavati S, Bahaadinbeigy K, Golabpour A, Akbarzadeh R, Agharezaei L, Bakhshali MA, Sedaghat MR, Eslami S. Computer-aided diagnosis of keratoconus through VAE-augmented images using deep learning. Sci Rep 2023; 13:20586. PMID: 37996439; PMCID: PMC10667539; DOI: 10.1038/s41598-023-46903-5. Received 08/17/2023; accepted 11/07/2023.
Abstract
Detecting clinical keratoconus (KCN) is a challenging and time-consuming task: during the diagnostic process, ophthalmologists must review demographic data and clinical ophthalmic examinations to make an accurate diagnosis. This study aims to develop and evaluate the accuracy of deep convolutional neural network (CNN) models for the detection of KCN using corneal topographic maps. We retrospectively collected 1758 corneal images (978 normal and 780 keratoconus) from 1010 subjects, comprising a KCN group with clinically evident keratoconus and a normal group with regular astigmatism. To expand the dataset, we developed a Variational Auto-Encoder (VAE) model to generate and augment images, resulting in a dataset of 4000 samples. Four deep learning models were used to extract and identify deep corneal features in the original and synthesized images. We demonstrated that using synthesized images during training increased classification performance. The overall average accuracy of the deep learning models ranged from 95% for EfficientNet-B0 to 99% for VGG16. All CNN models exhibited sensitivity and specificity above 0.94, with the VGG16 model achieving an AUC of 0.99. A customized CNN model achieved satisfactory results, with an accuracy and AUC of 0.97, at a much faster processing speed than the other models. In conclusion, the DL models showed high accuracy in screening for keratoconus based on corneal topography images. This is a step toward the potential clinical implementation of an enhanced computer-aided diagnosis (CAD) system for KCN detection, which would aid ophthalmologists in validating clinical decisions and carrying out prompt and precise KCN treatment.
Affiliation(s)
- Zhila Agharezaei: Pharmaceutical Research Center, Pharmaceutical Technology Institute, Mashhad University of Medical Sciences, Mashhad, Iran; Department of Medical Informatics, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran; Medical Informatics Research Center, Institute for Future Studies in Health, Kerman University of Medical Sciences, Kerman, Iran
- Reza Firouzi: Department of Computer Engineering, Ferdowsi University of Mashhad, Mashhad, Iran
- Samira Hassanzadeh: School of Paramedical Sciences and Rehabilitation, Mashhad University of Medical Sciences, Mashhad, Iran
- Kambiz Bahaadinbeigy: Medical Informatics Research Center, Institute for Future Studies in Health, Kerman University of Medical Sciences, Kerman, Iran
- Amin Golabpour: School of Medicine, Shahroud University of Medical Sciences, Shahroud, Iran
- Reyhaneh Akbarzadeh: Department of Optometry, School of Paramedical Sciences, Mashhad University of Medical Sciences, Mashhad, Iran
- Laleh Agharezaei: Modeling in Health Research Center, Institute for Future Studies in Health, Kerman University of Medical Sciences, Kerman, Iran
- Mohamad Amin Bakhshali: Department of Medical Informatics, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
- Saeid Eslami: Pharmaceutical Research Center, Pharmaceutical Technology Institute, Mashhad University of Medical Sciences, Mashhad, Iran; Department of Medical Informatics, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
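The VAE-based augmentation described above hinges on sampling new latent vectors and decoding them into synthetic images. The sketch below shows only that sampling step (the reparameterization trick, z = mu + sigma * eps); the encoder and decoder networks are stubbed out, and the latent statistics are invented, so this illustrates the mechanism rather than the authors' trained model.

```python
import math
import random

def sample_latent(mu, logvar, rng):
    """Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, 1),
    where sigma = exp(0.5 * logvar). A trained VAE uses this to draw
    new points from its learned latent distribution."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, logvar)]

def augment(n, mu, logvar, decode, seed=0):
    """Draw n latent vectors and decode each into a synthetic sample."""
    rng = random.Random(seed)
    return [decode(sample_latent(mu, logvar, rng)) for _ in range(n)]

# Stub decoder: a trained VAE decoder network would go here,
# mapping a latent vector to a synthetic topography image.
decode = lambda z: [round(v, 3) for v in z]

# Hypothetical 4-D latent statistics from an encoder forward pass.
mu, logvar = [0.0, 0.5, -0.2, 1.0], [0.0, -1.0, -2.0, 0.5]
synthetic = augment(10, mu, logvar, decode)
```

Generating synthetic samples this way, then mixing them into the training set, is what let the study grow 1758 images into a 4000-sample dataset.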
11. Vandevenne MM, Favuzza E, Veta M, Lucenteforte E, Berendschot TT, Mencucci R, Nuijts RM, Virgili G, Dickman MM. Artificial intelligence for detecting keratoconus. Cochrane Database Syst Rev 2023; 11:CD014911. PMID: 37965960; PMCID: PMC10646985; DOI: 10.1002/14651858.cd014911.pub2. Indexed 11/16/2023.
Abstract
BACKGROUND Keratoconus remains difficult to diagnose, especially in the early stages. It is a progressive disorder of the cornea that starts at a young age. Diagnosis is based on clinical examination and corneal imaging; though in the early stages, when there are no clinical signs, diagnosis depends on the interpretation of corneal imaging (e.g. topography and tomography) by trained cornea specialists. Using artificial intelligence (AI) to analyse the corneal images and detect cases of keratoconus could help prevent visual acuity loss and even corneal transplantation. However, a missed diagnosis in people seeking refractive surgery could lead to weakening of the cornea and keratoconus-like ectasia. There is a need for a reliable overview of the accuracy of AI for detecting keratoconus and the applicability of this automated method to the clinical setting. OBJECTIVES To assess the diagnostic accuracy of artificial intelligence (AI) algorithms for detecting keratoconus in people presenting with refractive errors, especially those whose vision can no longer be fully corrected with glasses, those seeking corneal refractive surgery, and those suspected of having keratoconus. AI could help ophthalmologists, optometrists, and other eye care professionals to make decisions on referral to cornea specialists. Secondary objectives: to assess the following potential causes of heterogeneity in diagnostic performance across studies.
- Different AI algorithms (e.g. neural networks, decision trees, support vector machines)
- Index test methodology (preprocessing techniques, core AI method, and postprocessing techniques)
- Sources of input to train algorithms (topography and tomography images from Placido disc system, Scheimpflug system, slit-scanning system, or optical coherence tomography (OCT); number of training and testing cases/images; label/endpoint variable used for training)
- Study setting
- Study design
- Ethnicity, or geographic area as its proxy
- Different index test positivity criteria provided by the topography or tomography device
- Reference standard, topography or tomography, one or two cornea specialists
- Definition of keratoconus
- Mean age of participants
- Recruitment of participants
- Severity of keratoconus (clinically manifest or subclinical)
SEARCH METHODS We searched CENTRAL (which contains the Cochrane Eyes and Vision Trials Register), Ovid MEDLINE, Ovid Embase, OpenGrey, the ISRCTN registry, ClinicalTrials.gov, and the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP). There were no date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 29 November 2022. SELECTION CRITERIA We included cross-sectional and diagnostic case-control studies that investigated AI for the diagnosis of keratoconus using topography, tomography, or both. We included studies that diagnosed manifest keratoconus, subclinical keratoconus, or both. The reference standard was the interpretation of topography or tomography images by at least two cornea specialists. DATA COLLECTION AND ANALYSIS Two review authors independently extracted the study data and assessed the quality of studies using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. When an article contained multiple AI algorithms, we selected the algorithm with the highest Youden's index. We assessed the certainty of evidence using the GRADE approach.
MAIN RESULTS We included 63 studies, published between 1994 and 2022, that developed and investigated the accuracy of AI for the diagnosis of keratoconus. There were three different units of analysis in the studies: eyes, participants, and images. Forty-four studies analysed 23,771 eyes, four studies analysed 3843 participants, and 15 studies analysed 38,832 images. Fifty-four articles evaluated the detection of manifest keratoconus, defined as a cornea that showed any clinical sign of keratoconus. The accuracy of AI seems almost perfect, with a summary sensitivity of 98.6% (95% confidence interval (CI) 97.6% to 99.1%) and a summary specificity of 98.3% (95% CI 97.4% to 98.9%). However, accuracy varied across studies and the certainty of the evidence was low. Twenty-eight articles evaluated the detection of subclinical keratoconus, although the definition of subclinical varied. We grouped subclinical keratoconus, forme fruste, and very asymmetrical eyes together. The tests showed good accuracy, with a summary sensitivity of 90.0% (95% CI 84.5% to 93.8%) and a summary specificity of 95.5% (95% CI 91.9% to 97.5%). However, the certainty of the evidence was very low for sensitivity and low for specificity. In both groups, we graded most studies at high risk of bias, with high applicability concerns, in the domain of patient selection, since most were case-control studies. Moreover, we graded the certainty of evidence as low to very low due to selection bias, inconsistency, and imprecision. We could not explain the heterogeneity between the studies. The sensitivity analyses based on study design, AI algorithm, imaging technique (topography versus tomography), and data source (parameters versus images) showed no differences in the results. AUTHORS' CONCLUSIONS AI appears to be a promising triage tool in ophthalmologic practice for diagnosing keratoconus. 
Test accuracy was very high for manifest keratoconus and slightly lower for subclinical keratoconus, indicating a higher chance of missing a diagnosis in people without clinical signs. This could lead to progression of keratoconus or an erroneous indication for refractive surgery, which would worsen the disease. We are unable to draw clear and reliable conclusions due to the high risk of bias, the unexplained heterogeneity of the results, and high applicability concerns, all of which reduced our confidence in the evidence. Greater standardization in future research would increase the quality of studies and improve comparability between studies.
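The review's model-selection rule (choosing, within each article, the algorithm with the highest Youden's index) comes down to a one-line statistic. The sketch below computes it for the two summary sensitivity/specificity pairs reported above; the per-algorithm numbers in the selection example are invented for illustration.

```python
def youden_j(sensitivity: float, specificity: float) -> float:
    """Youden's J statistic: J = sensitivity + specificity - 1."""
    return sensitivity + specificity - 1.0

# Summary estimates reported by the review:
j_manifest = youden_j(0.986, 0.983)      # manifest keratoconus
j_subclinical = youden_j(0.900, 0.955)   # subclinical keratoconus

# Selection rule for articles reporting multiple algorithms
# (these sensitivity/specificity pairs are hypothetical):
algorithms = {"cnn": (0.97, 0.95), "svm": (0.93, 0.96), "tree": (0.90, 0.91)}
best = max(algorithms, key=lambda name: youden_j(*algorithms[name]))
```

J ranges from 0 (uninformative test) to 1 (perfect test), which is why it is a convenient single number for ranking algorithms that trade sensitivity against specificity differently.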
Affiliation(s)
- Magali MS Vandevenne
- University Eye Clinic Maastricht, Maastricht University Medical Center (MUMC+), Maastricht, Netherlands
- Eleonora Favuzza
- Department of Neurosciences, Psychology, Pharmacology and Child Health, University of Florence, Florence, Italy
- Mitko Veta
- Biomedical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Ersilia Lucenteforte
- Department of Statistics, Computer Science and Applications «G. Parenti», University of Florence, Florence, Italy
- Tos TJM Berendschot
- University Eye Clinic Maastricht, Maastricht University Medical Center (MUMC+), Maastricht, Netherlands
- Rita Mencucci
- Department of Neurosciences, Psychology, Pharmacology and Child Health, University of Florence, Florence, Italy
- Rudy MMA Nuijts
- University Eye Clinic Maastricht, Maastricht University Medical Center (MUMC+), Maastricht, Netherlands
- Gianni Virgili
- Department of Neurosciences, Psychology, Pharmacology and Child Health, University of Florence, Florence, Italy
- Queen's University Belfast, Belfast, UK
- Mor M Dickman
- University Eye Clinic Maastricht, Maastricht University Medical Center (MUMC+), Maastricht, Netherlands

12
Niazi S, Jiménez-García M, Findl O, Gatzioufas Z, Doroodgar F, Shahriari MH, Javadi MA. Keratoconus Diagnosis: From Fundamentals to Artificial Intelligence: A Systematic Narrative Review. Diagnostics (Basel) 2023; 13:2715. [PMID: 37627975 PMCID: PMC10453081 DOI: 10.3390/diagnostics13162715] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2023] [Revised: 07/21/2023] [Accepted: 07/26/2023] [Indexed: 08/27/2023] Open
Abstract
The remarkable recent advances in managing keratoconus, the most common corneal ectasia, encouraged researchers to conduct further studies on the disease. Despite the abundance of information about keratoconus, debates persist regarding the detection of mild cases. Early detection plays a crucial role in facilitating less invasive treatments. This review encompasses corneal data ranging from the basic sciences to the application of artificial intelligence in keratoconus patients. Diagnostic systems utilize automated decision trees, support vector machines, and various types of neural networks, incorporating input from various corneal imaging equipment. Although the integration of artificial intelligence techniques into corneal imaging devices may take time, their popularity in clinical practice is increasing. Most of the studies reviewed herein demonstrate a high discriminatory power between normal and keratoconus cases, with a relatively lower discriminatory power for subclinical keratoconus.
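The tabular-index approach the review describes (support vector machines and similar classifiers over corneal measurements) can be illustrated with a deliberately simplified sketch. The two features and their class means/spreads below are loosely modeled on typical Kmax and thinnest-pachymetry values and are assumptions for this toy example, not data from any reviewed study.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for two corneal indices per eye:
# column 0: maximum keratometry (D), column 1: thinnest pachymetry (um).
normal = np.column_stack([rng.normal(45.3, 1.7, 200), rng.normal(550.0, 34.0, 200)])
keratoconus = np.column_stack([rng.normal(59.3, 11.3, 200), rng.normal(460.0, 63.0, 200)])
X = np.vstack([normal, keratoconus])
y = np.array([0] * 200 + [1] * 200)  # 0 = normal, 1 = keratoconus

# Standardize the features (they have different units), then fit an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
train_accuracy = clf.score(X, y)
```

On well-separated classes like these, the classifier is nearly perfect; the genuinely hard case, as the review notes, is subclinical keratoconus, where the feature distributions overlap far more.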
Affiliation(s)
- Sana Niazi
- Translational Ophthalmology Research Center, Tehran University of Medical Sciences, Tehran P.O. Box 1336616351, Iran
- Marta Jiménez-García
- Department of Ophthalmology, Antwerp University Hospital (UZA), 2650 Edegem, Belgium
- Department of Medicine and Health Sciences, University of Antwerp, 2000 Antwerp, Belgium
- Oliver Findl
- Department of Ophthalmology, Vienna Institute for Research in Ocular Surgery (VIROS), Hanusch Hospital, 1140 Vienna, Austria
- Zisis Gatzioufas
- Department of Ophthalmology, University Hospital Basel, 4031 Basel, Switzerland
- Farideh Doroodgar
- Translational Ophthalmology Research Center, Tehran University of Medical Sciences, Tehran P.O. Box 1336616351, Iran
- Negah Aref Ophthalmic Research Center, Shahid Beheshti University of Medical Sciences, Tehran P.O. Box 1544914599, Iran
- Mohammad Hasan Shahriari
- Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran P.O. Box 1971653313, Iran
- Mohammad Ali Javadi
- Ophthalmic Research Center, Labbafinezhad Hospital, Shahid Beheshti University of Medical Sciences, Tehran P.O. Box 19395-4741, Iran

13
Prakash G, Perera C, Jhanji V. Comparison of machine learning-based algorithms using corneal asymmetry vs. single-metric parameters for keratoconus detection. Graefes Arch Clin Exp Ophthalmol 2023; 261:2335-2342. [PMID: 37022493 DOI: 10.1007/s00417-023-06049-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2022] [Revised: 03/09/2023] [Accepted: 03/24/2023] [Indexed: 04/07/2023] Open
Abstract
PURPOSE To evaluate the diagnostic performance of three different parameter sets relevant to corneal asymmetry in comparison to conventional parameters, including maximum anterior corneal curvature (Kmax) and thinnest corneal thickness, for the diagnosis of keratoconus. METHODS In this retrospective case-control study, 290 eyes with keratoconus and 847 normal eyes were included in the analyses. Corneal tomography data were acquired from Scheimpflug tomography. The sklearn and FastAI libraries were used in a Python 3 environment to create all machine learning models. The original topography metrics and derived metrics, together with the clinical diagnoses, were used as the dataset for model training. The data were first split to assign 20% of the data to an isolated test set. The remaining data were then split 80/20 into training and validation sets for model training. Sensitivity and specificity outcomes with standard parameters (Kmax, central curvature, and thinnest pachymetry) and with the ratio of asymmetry across the horizontal, apex-centered, and flat axis-centered axes of reflection were studied via various machine learning models. RESULTS Thinnest corneal pachymetry and Kmax were 549.8 ± 34.3 µm and 45.3 ± 1.7 D in normal eyes and 460.5 ± 62.6 µm and 59.3 ± 11.3 D in keratoconic eyes. Using only the corneal asymmetry ratios across all four meridians yielded a mean sensitivity of 99.0% and a mean specificity of 94.0%, better than utilizing Kmax alone or traditional measures combined (Kmax, thinnest cornea, and inferior-superior asymmetry). CONCLUSIONS By using the ratio of asymmetry between corneal axes alone, a machine learning model could identify patients with keratoconus in our dataset with satisfactory sensitivity and specificity. Further studies on pooled/larger datasets or more borderline populations can help validate or refine these parameters.
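The nested splitting scheme described in the Methods (an isolated 20% test set, then an 80/20 training/validation split of the remainder) can be sketched with scikit-learn. The array sizes and feature count below are placeholders standing in for the study's data, not the data itself.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1137, 8))     # 1137 eyes x 8 derived metrics (placeholder values)
y = rng.integers(0, 2, size=1137)  # 0 = normal, 1 = keratoconus (placeholder labels)

# Step 1: set aside 20% of all eyes as an isolated test set,
# untouched during model development.
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=42)

# Step 2: split the remaining eyes 80/20 into training and validation sets.
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.20, stratify=y_rest, random_state=42)
```

Stratifying both splits (an addition here, not stated in the abstract) keeps the keratoconus/normal ratio comparable across subsets, which matters when one class is much rarer than the other.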
Affiliation(s)
- Gaurav Prakash
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Chandrashan Perera
- Ophthalmology, Byers Eye Institute at Stanford University School of Medicine, Palo Alto, CA, USA
- Vishal Jhanji
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA

14
Paterson T, Azizoglu S, Gokhale M, Chambers M, Suphioglu C. Preserved Ophthalmic Anti-Allergy Medication in Cumulatively Increasing Risk Factors of Corneal Ectasia. BIOLOGY 2023; 12:1036. [PMID: 37508465 PMCID: PMC10376818 DOI: 10.3390/biology12071036] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/19/2023] [Revised: 07/12/2023] [Accepted: 07/20/2023] [Indexed: 07/30/2023]
Abstract
The prevalence of allergies is rising every year. For those who suffer from it, ocular inflammation and irritation can be inconvenient and unpleasant. Anti-allergy eyedrops are a readily available treatment for symptoms of ocular allergy (OA) and can help allergy sufferers regain normal function. However, the eye is a delicate organ, and multiuse eyedrops often utilise preservatives to deter microbial growth. Preservatives such as benzalkonium chloride (BAK) have been shown to induce decreased cell viability. Therefore, during a period of high localised inflammation and eye rubbing, it is important that the preservatives used in topical medicines do not contribute to the weakening of the corneal structure. This review explores ocular allergy and the thinning and protrusion of the cornea that is characteristic of the disease keratoconus (KC) and how it relates to a weakened corneal structure. It also describes the use of BAK and its documented effects on the integrity of the cornea. It was found that atopy and eye rubbing are significant risk factors for KC, and BAK can severely decrease the integrity of the corneal structure when compared to other preservatives and preservative-free alternatives.
Affiliation(s)
- Tom Paterson
- NeuroAllergy Research Laboratory (NARL), School of Life and Environmental Sciences (LES), Faculty of Science, Engineering and Built Environment (SEBE), Deakin University, 75 Pigdons Road, Geelong, VIC 3216, Australia
- Serap Azizoglu
- NeuroAllergy Research Laboratory (NARL), School of Life and Environmental Sciences (LES), Faculty of Science, Engineering and Built Environment (SEBE), Deakin University, 75 Pigdons Road, Geelong, VIC 3216, Australia
- Deakin Optometry, School of Medicine, Faculty of Health, Deakin University, 75 Pigdons Road, Geelong, VIC 3216, Australia
- Moneisha Gokhale
- NeuroAllergy Research Laboratory (NARL), School of Life and Environmental Sciences (LES), Faculty of Science, Engineering and Built Environment (SEBE), Deakin University, 75 Pigdons Road, Geelong, VIC 3216, Australia
- Deakin Optometry, School of Medicine, Faculty of Health, Deakin University, 75 Pigdons Road, Geelong, VIC 3216, Australia
- Madeline Chambers
- NeuroAllergy Research Laboratory (NARL), School of Life and Environmental Sciences (LES), Faculty of Science, Engineering and Built Environment (SEBE), Deakin University, 75 Pigdons Road, Geelong, VIC 3216, Australia
- Cenk Suphioglu
- NeuroAllergy Research Laboratory (NARL), School of Life and Environmental Sciences (LES), Faculty of Science, Engineering and Built Environment (SEBE), Deakin University, 75 Pigdons Road, Geelong, VIC 3216, Australia

15
Li Z, Wang L, Wu X, Jiang J, Qiang W, Xie H, Zhou H, Wu S, Shao Y, Chen W. Artificial intelligence in ophthalmology: The path to the real-world clinic. Cell Rep Med 2023:101095. [PMID: 37385253 PMCID: PMC10394169 DOI: 10.1016/j.xcrm.2023.101095] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Revised: 04/17/2023] [Accepted: 06/07/2023] [Indexed: 07/01/2023]
Abstract
Artificial intelligence (AI) has great potential to transform healthcare by enhancing the workflow and productivity of clinicians, enabling existing staff to serve more patients, improving patient outcomes, and reducing health disparities. In the field of ophthalmology, AI systems have shown performance comparable with or even better than experienced ophthalmologists in tasks such as diabetic retinopathy detection and grading. However, despite these promising results, very few AI systems have been deployed in real-world clinical settings, which calls the true value of these systems into question. This review provides an overview of the current main AI applications in ophthalmology, describes the challenges that need to be overcome prior to clinical implementation of the AI systems, and discusses the strategies that may pave the way to the clinical translation of these systems.
Affiliation(s)
- Zhongwen Li
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Lei Wang
- School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Xuefang Wu
- Guizhou Provincial People's Hospital, Guizhou University, Guiyang 550002, China
- Jiewei Jiang
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
- Wei Qiang
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- He Xie
- School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Hongjian Zhou
- Department of Computer Science, University of Oxford, Oxford, Oxfordshire OX1 2JD, UK
- Shanjun Wu
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- Yi Shao
- Department of Ophthalmology, the First Affiliated Hospital of Nanchang University, Nanchang 330006, China
- Wei Chen
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China

16
Deshmukh R, Ong ZZ, Rampat R, Alió del Barrio JL, Barua A, Ang M, Mehta JS, Said DG, Dua HS, Ambrósio R, Ting DSJ. Management of keratoconus: an updated review. Front Med (Lausanne) 2023; 10:1212314. [PMID: 37409272 PMCID: PMC10318194 DOI: 10.3389/fmed.2023.1212314] [Citation(s) in RCA: 16] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2023] [Accepted: 05/30/2023] [Indexed: 07/07/2023] Open
Abstract
Keratoconus is the most common corneal ectatic disorder. It is characterized by progressive corneal thinning with resultant irregular astigmatism and myopia. Its prevalence has been estimated at 1:375 to 1:2,000 people globally, with a considerably higher rate in younger populations. Over the past two decades, there has been a paradigm shift in the management of keratoconus. Treatment has expanded significantly from conservative management (e.g., spectacle and contact lens wear) and penetrating keratoplasty to many other therapeutic and refractive modalities, including corneal cross-linking (CXL, with various protocols/techniques), combined CXL-keratorefractive surgeries, intracorneal ring segments, anterior lamellar keratoplasty, and, more recently, Bowman's layer transplantation, stromal keratophakia, and stromal regeneration. Several recent large genome-wide association studies (GWAS) have identified important genetic mutations relevant to keratoconus, facilitating the development of potential gene therapies targeting keratoconus and halting disease progression. In addition, attempts have been made to leverage artificial intelligence-assisted algorithms to enable earlier detection and progression prediction in keratoconus. In this review, we provide a comprehensive overview of the current and emerging treatments of keratoconus and propose a treatment algorithm for systematically guiding the management of this common clinical entity.
Affiliation(s)
- Rashmi Deshmukh
- Department of Cornea and Refractive Surgery, LV Prasad Eye Institute, Hyderabad, India
- Zun Zheng Ong
- Department of Ophthalmology, Queen's Medical Centre, Nottingham, United Kingdom
- Radhika Rampat
- Department of Ophthalmology, Royal Free London NHS Foundation Trust, London, United Kingdom
- Jorge L. Alió del Barrio
- Cornea, Cataract and Refractive Surgery Unit, Vissum (Miranza Group), Alicante, Spain
- Division of Ophthalmology, School of Medicine, Universidad Miguel Hernández, Alicante, Spain
- Ankur Barua
- Birmingham and Midland Eye Centre, Birmingham, United Kingdom
- Marcus Ang
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Jodhbir S. Mehta
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Dalia G. Said
- Department of Ophthalmology, Queen's Medical Centre, Nottingham, United Kingdom
- Academic Ophthalmology, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Harminder S. Dua
- Department of Ophthalmology, Queen's Medical Centre, Nottingham, United Kingdom
- Academic Ophthalmology, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Renato Ambrósio
- Department of Cornea and Refractive Surgery, Instituto de Olhos Renato Ambrósio, Rio de Janeiro, Brazil
- Department of Ophthalmology, Federal University of the State of Rio de Janeiro (UNIRIO), Rio de Janeiro, Brazil
- Federal University of São Paulo (UNIFESP), São Paulo, Brazil
- Darren Shu Jeng Ting
- Birmingham and Midland Eye Centre, Birmingham, United Kingdom
- Academic Ophthalmology, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, United Kingdom

17
Ting DSJ, Deshmukh R, Ting DSW, Ang M. Big data in corneal diseases and cataract: Current applications and future directions. Front Big Data 2023; 6:1017420. [PMID: 36818823 PMCID: PMC9929069 DOI: 10.3389/fdata.2023.1017420] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2022] [Accepted: 01/16/2023] [Indexed: 02/04/2023] Open
Abstract
The accelerated growth of electronic health records (EHR), the Internet of Things, mHealth, telemedicine, and artificial intelligence (AI) in recent years has significantly fuelled interest and development in big data research. Big data refer to complex datasets characterized by the attributes of the "5 Vs": variety, volume, velocity, veracity, and value. Big data analytics research has so far benefitted many fields of medicine, including ophthalmology. The availability of these big data not only allows for comprehensive and timely examination of the epidemiology, trends, characteristics, outcomes, and prognostic factors of many diseases, but also enables the development of highly accurate AI algorithms for diagnosing a wide range of medical diseases and for discovering patterns or associations of diseases that were previously unknown to clinicians and researchers. Within the field of ophthalmology, there is a rapidly expanding pool of large clinical registries, epidemiological studies, omics studies, and biobanks through which big data can be accessed. National corneal transplant registries, genome-wide association studies, national cataract databases, and large ophthalmology-related EHR-based registries (e.g., the AAO IRIS Registry) are some of the key resources. In this review, we aim to provide a succinct overview of the availability and clinical applicability of big data in ophthalmology, particularly from the perspective of corneal diseases and cataract; the synergistic potential of big data, AI technologies, the Internet of Things, mHealth, and wearable smart devices; and the potential barriers to realizing the clinical and research potential of big data in this field.
Affiliation(s)
- Darren S. J. Ting
- Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, University of Birmingham, Birmingham, United Kingdom
- Birmingham and Midland Eye Centre, Birmingham, United Kingdom
- Academic Ophthalmology, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Correspondence: Darren S. J. Ting
- Rashmi Deshmukh
- Department of Cornea and Refractive Surgery, LV Prasad Eye Institute, Hyderabad, India
- Daniel S. W. Ting
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Department of Ophthalmology and Visual Sciences, Duke-National University of Singapore (NUS) Medical School, Singapore, Singapore
- Marcus Ang
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Department of Ophthalmology and Visual Sciences, Duke-National University of Singapore (NUS) Medical School, Singapore, Singapore

18
Zorto AD, Sharif MS, Wall J, Brahma A, Alzahrani AI, Alalwan N. An innovative approach based on machine learning to evaluate the risk factors importance in diagnosing keratoconus. INFORMATICS IN MEDICINE UNLOCKED 2023. [DOI: 10.1016/j.imu.2023.101208] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/29/2023] Open
19
Zhang Z, Wang Y, Zhang H, Samusak A, Rao H, Xiao C, Abula M, Cao Q, Dai Q. Artificial intelligence-assisted diagnosis of ocular surface diseases. Front Cell Dev Biol 2023; 11:1133680. [PMID: 36875760 PMCID: PMC9981656 DOI: 10.3389/fcell.2023.1133680] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2022] [Accepted: 02/08/2023] [Indexed: 02/19/2023] Open
Abstract
With the rapid development of computer technology, the application of artificial intelligence (AI) in ophthalmology research has gained prominence in modern medicine. AI-related research in ophthalmology previously focused on the screening and diagnosis of fundus diseases, particularly diabetic retinopathy, age-related macular degeneration, and glaucoma. Because fundus images are relatively standardized, unified imaging criteria are comparatively easy to establish. AI research related to ocular surface diseases has also increased. The main challenge in research on ocular surface diseases is that the images involved are complex and span many modalities. Therefore, this review aims to summarize current artificial intelligence research and technologies used to diagnose ocular surface diseases such as pterygium, keratoconus, infectious keratitis, and dry eye, in order to identify mature artificial intelligence models that are suitable for research on ocular surface diseases and potential algorithms that may be used in the future.
Affiliation(s)
- Zuhui Zhang
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Ying Wang
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
- Hongzhen Zhang
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
- Arzigul Samusak
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
- Huimin Rao
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
- Chun Xiao
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
- Muhetaer Abula
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
- Qixin Cao
- Huzhou Traditional Chinese Medicine Hospital Affiliated to Zhejiang University of Traditional Chinese Medicine, Huzhou, China
- Qi Dai
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China

20
Naseem MT, Hussain T, Lee CS, Khan MA. Classification and Detection of COVID-19 and Other Chest-Related Diseases Using Transfer Learning. SENSORS (BASEL, SWITZERLAND) 2022; 22:7977. [PMID: 36298328 PMCID: PMC9610066 DOI: 10.3390/s22207977] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/01/2022] [Revised: 09/17/2022] [Accepted: 10/14/2022] [Indexed: 06/16/2023]
Abstract
COVID-19 has infected millions of people worldwide over the past few years. The main technique used for COVID-19 detection is reverse transcription-polymerase chain reaction (RT-PCR), which is expensive and sensitive, and requires medical expertise. X-ray imaging is an alternative and more accessible technique. This study aimed to improve detection accuracy to create a computer-aided diagnostic tool. Combining artificial intelligence techniques with radiological imaging can help detect different diseases. This study proposes a technique for the automatic detection of COVID-19 and other chest-related diseases from digital chest X-ray images of suspected patients by applying transfer learning (TL) algorithms. For this purpose, two balanced datasets, Dataset-1 and Dataset-2, were created by combining four public databases and collecting images from recently published articles. Dataset-1 consisted of 6000 chest X-ray images, with 1500 for each class. Dataset-2 consisted of 7200 images, with 1200 for each class. To train and test the model, TL with nine pretrained convolutional neural networks (CNNs) was used, with augmentation as a preprocessing method. The network was trained to classify using five classifiers: a two-class classifier (normal and COVID-19); a three-class classifier (normal, COVID-19, and viral pneumonia); a four-class classifier (normal, viral pneumonia, COVID-19, and tuberculosis (Tb)); a five-class classifier (normal, bacterial pneumonia, COVID-19, Tb, and pneumothorax); and a six-class classifier (normal, bacterial pneumonia, COVID-19, viral pneumonia, Tb, and pneumothorax). For two, three, four, five, and six classes, our model achieved maximum accuracies of 99.83%, 98.11%, 97.00%, 94.66%, and 87.29%, respectively.
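The core transfer-learning idea (reuse a pretrained feature extractor, train only a new classification head) can be sketched without any deep learning framework. Here the frozen backbone is faked with a fixed random projection, and the images and labels are synthetic; in the study this role is played by pretrained CNNs over real chest X-rays.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for a frozen, pretrained backbone: a fixed mapping from flattened
# pixels to a feature vector (random projection + ReLU, never updated).
W = rng.normal(size=(64 * 64, 128))

def extract_features(images: np.ndarray) -> np.ndarray:
    """Frozen feature extractor: its weights are never trained."""
    return np.maximum(images @ W, 0.0)

# Toy grayscale "X-rays" with synthetic two-class labels (e.g. normal vs. COVID-19).
images = rng.normal(size=(200, 64 * 64))
features = extract_features(images)
labels = (features[:, 0] > np.median(features[:, 0])).astype(int)  # synthetic labels

# The transfer-learning step: fit only a new classification head on the
# frozen features, instead of training the whole network from scratch.
head = LogisticRegression(max_iter=1000).fit(features, labels)
train_accuracy = head.score(features, labels)
```

Training only the head is what makes TL practical on small medical datasets: the expensive representation learning is inherited from the pretrained network, and only a low-dimensional classifier is fitted to the new classes.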
Affiliation(s)
- Muhammad Tahir Naseem
- Department of Electronic Engineering, Yeungnam University, Gyeongsan 38541, Korea
- Riphah School of Computing & Applied Sciences (RSCI), Riphah International University, Lahore 55150, Pakistan
- Tajmal Hussain
- Riphah School of Computing & Applied Sciences (RSCI), Riphah International University, Lahore 55150, Pakistan
- Chan-Su Lee
- Department of Electronic Engineering, Yeungnam University, Gyeongsan 38541, Korea
- Muhammad Adnan Khan
- Riphah School of Computing & Applied Sciences (RSCI), Riphah International University, Lahore 55150, Pakistan

21
Gao HB, Pan ZG, Shen MX, Lu F, Li H, Zhang XQ. KeratoScreen: Early Keratoconus Classification With Zernike Polynomial Using Deep Learning. Cornea 2022; 41:1158-1165. [PMID: 35543584 DOI: 10.1097/ico.0000000000003038] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2021] [Accepted: 02/16/2022] [Indexed: 12/13/2022]
Abstract
PURPOSE We aimed to investigate the usefulness of Zernike coefficients (ZCs) for distinguishing subclinical keratoconus (KC) from normal corneas and to evaluate how well ZCs, used as the screening feature input set of artificial neural networks, capture the characteristics of the entire corneal topography and tomography. METHODS This retrospective study was conducted at the Affiliated Eye Hospital of Wenzhou Medical University, China. A total of 208 patients (1040 corneal topography images) were evaluated. Data were collected between 2012 and 2018 using the Pentacam system and analyzed from February 2019 to December 2021. An artificial neural network (KeratoScreen) was trained using a data set of ZCs generated from corneal topography and tomography. Each image was previously assigned to 3 groups: normal (70 eyes; average age, 28.7 ± 2.6 years), subclinical KC (48 eyes; average age, 24.6 ± 5.7 years), and KC (90 eyes; average age, 25.9 ± 5.4 years). The data set was randomly split into 70% for training and 30% for testing. We evaluated the precision of screening symptoms and examined the discriminative capability of several combinations of the input set and nodes. RESULTS The best results were achieved using ZCs generated from corneal thickness as the input parameter, determining the 3 categories of clinical classification for each subject. The sensitivity and precision rates were 93.9% and 96.1% in subclinical KC cases and 97.6% and 95.1% in KC cases, respectively. CONCLUSIONS Deep learning algorithms based on ZCs could be used to screen for early KC and other corneal ectasias during preoperative screening for corneal refractive surgery.
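The study's pipeline (Zernike coefficients as a compact feature vector, a neural network classifier, and a 70/30 split) can be approximated with a small scikit-learn sketch. The coefficients below are synthetic stand-ins with an invented class shift, not ZCs fitted to real corneal maps, and the network architecture is an assumption rather than KeratoScreen's.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for Zernike coefficients (ZCs); 28 coefficients
# corresponds to a 6th-order Zernike expansion. The mean shift between the
# classes is invented so that the toy problem is learnable.
n_per_class, n_zc = 70, 28
normal = rng.normal(0.0, 1.0, size=(n_per_class, n_zc))
keratoconus = rng.normal(0.8, 1.0, size=(n_per_class, n_zc))
X = np.vstack([normal, keratoconus])
y = np.array([0] * n_per_class + [1] * n_per_class)

# 70/30 train/test split, as in the study.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
net.fit(X_train, y_train)
test_accuracy = net.score(X_test, y_test)
```

Feeding the network a few dozen ZCs instead of raw corneal maps is the key compression step: the expansion summarizes the shape of the whole surface in a vector small enough for a modest network and dataset.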
Affiliation(s)
- He-Bei Gao
- Division of Health Sciences, Hangzhou Normal University, Hangzhou, China
- Department of Information, Wenzhou Polytechnic, Wenzhou, China
| | - Zhi-Geng Pan
- School of Artificial Intelligence, Nanjing University of Information Science & Technology, Nanjing, China
| | - Mei-Xiao Shen
- School of Ophthalmology and Optometry, Wenzhou Medical University, Wenzhou, China
| | - Fan Lu
- School of Ophthalmology and Optometry, Wenzhou Medical University, Wenzhou, China
| | - Hong Li
- College of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou, China
| | - Xiao-Qin Zhang
- College of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou, China
|
22
|
Tan Z, Chen X, Li K, Liu Y, Cao H, Li J, Jhanji V, Zou H, Liu F, Wang R, Wang Y. Artificial Intelligence-Based Diagnostic Model for Detecting Keratoconus Using Videos of Corneal Force Deformation. Transl Vis Sci Technol 2022; 11:32. [PMID: 36178782 PMCID: PMC9527334 DOI: 10.1167/tvst.11.9.32] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
Abstract
Purpose To develop a novel method based on biomechanical parameters calculated from raw corneal dynamic deformation videos to quickly and accurately diagnose keratoconus using machine learning. Methods The keratoconus group was included according to Rabinowitz's criteria, and the normal group included corneal refractive surgery candidates. Independent biomechanical parameters were calculated from dynamic corneal deformation videos. A novel neural network model was trained to diagnose keratoconus. Tenfold cross-validation was performed, and the sample set was divided into a training set for training, a validation set for parameter validation, and a testing set for performance evaluation. External validation was performed to evaluate the model's generalizability. Results A novel intelligent diagnostic model for keratoconus based on a five-layer feedforward network was constructed by calculating four biomechanical characteristics, including time of the first applanation, deformation amplitude at the highest concavity, central corneal thickness, and radius at the highest concavity. The model was able to diagnose keratoconus with 99.6% accuracy, 99.3% sensitivity, 100% specificity, and 100% precision in the sample set (n = 276), and it achieved an accuracy of 98.7%, sensitivity of 97.4%, specificity of 100%, and precision of 100% in the external validation set (n = 78). Conclusions In the absence of corneal topographic examination, rapid and accurate diagnosis of keratoconus is possible with the aid of machine learning. Our study provides a new potential approach and sheds light on the diagnosis of keratoconus from a purely corneal biomechanical perspective. Translational Relevance Our findings could help improve the diagnosis of keratoconus based on corneal biomechanical properties.
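A rough sketch of the kind of model the paper describes, a small feedforward network over four biomechanical parameters evaluated with tenfold cross-validation, using entirely synthetic stand-in features (the class shifts are invented, not the paper's data):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 200
labels = rng.integers(0, 2, n)  # 0 = normal, 1 = keratoconus

# Stand-ins for: A1 time, deformation amplitude at highest concavity,
# central corneal thickness, radius at highest concavity (shifts invented).
shift = labels[:, None] * np.array([-1.0, 1.2, -1.5, -1.0])
X = rng.standard_normal((n, 4)) + shift

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(8, 8, 8),
                                    max_iter=2000, random_state=0))
scores = cross_val_score(model, X, labels, cv=10)  # tenfold cross-validation
mean_acc = scores.mean()
```

Keeping the scaler inside the pipeline ensures each cross-validation fold is standardized using only its own training split.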
Affiliation(s)
- Zuoping Tan
- Wenzhou University of Technology, Wenzhou, Zhejiang, China
| | - Xuan Chen
- Clinical College of Ophthalmology, Tianjin Medical University, Tianjin, China
| | - Kangsheng Li
- Tianjin University of Technology, Tianjin, China
| | - Yan Liu
- Tianjin University of Technology, Tianjin, China
| | - Huazheng Cao
- Clinical College of Ophthalmology, Tianjin Medical University, Tianjin, China
| | - Jing Li
- Shanxi Eye Hospital, Xi'an People's Hospital, Xi'an, Shanxi, China
| | - Vishal Jhanji
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
| | - Haohan Zou
- Clinical College of Ophthalmology, Tianjin Medical University, Tianjin, China
| | - Fenglian Liu
- Tianjin University of Technology, Tianjin, China
| | - Riwei Wang
- Wenzhou University of Technology, Wenzhou, Zhejiang, China
| | - Yan Wang
- Clinical College of Ophthalmology, Tianjin Medical University, Tianjin, China.,Tianjin Eye Hospital, Tianjin Eye Institute, Tianjin Key Laboratory of Ophthalmology and Visual Science, Nankai University Affiliated Eye Hospital, Tianjin, China.,https://orcid.org/0000-0002-1257-6635
|
23
|
Xu Z, Feng R, Jin X, Hu H, Ni S, Xu W, Zheng X, Wu J, Yao K. Evaluation of artificial intelligence models for the detection of asymmetric keratoconus eyes using Scheimpflug tomography. Clin Exp Ophthalmol 2022; 50:714-723. [PMID: 35704615 DOI: 10.1111/ceo.14126] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2021] [Revised: 05/26/2022] [Accepted: 06/11/2022] [Indexed: 11/29/2022]
Abstract
BACKGROUND To evaluate artificial intelligence (AI) models based on objective indices and raw corneal data from the Scheimpflug Pentacam HR system (OCULUS Optikgeräte GmbH, Wetzlar, Germany) for the detection of clinically unaffected eyes in patients with asymmetric keratoconus (AKC). METHODS A total of 1108 eyes of 1108 patients were enrolled, including 430 eyes from normal control subjects, 231 clinically unaffected eyes from patients with AKC, and 447 eyes from keratoconus (KC) patients. Eyes were divided into a training set (664 eyes), a test set (222 eyes) and a validation set (222 eyes). AI models were built based on objective indices (XGBoost, LGBM, LR and RF) and entire corneal raw data (KerNet). The discriminating performances of the AI models were evaluated by accuracy and the area under the ROC curve (AUC). RESULTS The KerNet model showed great overall discriminating power in the test (accuracy = 94.67%, AUC = 0.985) and validation (accuracy = 94.12%, AUC = 0.990) sets, higher than that of the index-derived AI models (accuracy = 84.02%-86.98%, AUC = 0.944-0.968). In the test set, the KerNet model demonstrated good diagnostic power for the AKC group (accuracy = 95.24%, AUC = 0.984). The validation set also proved that the KerNet model was useful for AKC group diagnosis (accuracy = 94.12%, AUC = 0.983). CONCLUSIONS KerNet outperformed all the index-derived AI models. Based on the raw data of the entire cornea, KerNet was helpful for distinguishing clinically unaffected eyes in patients with AKC from normal eyes.
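Evaluating index-derived models by accuracy and AUC follows a standard pattern; this sketch substitutes synthetic "indices" and two scikit-learn classifiers for the paper's XGBoost/LGBM/LR/RF models:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic "objective indices" for normal vs clinically unaffected AKC eyes.
X, y = make_classification(n_samples=600, n_features=20, n_informative=6,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

results = {}
for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("RF", RandomForestClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    prob = clf.predict_proba(X_te)[:, 1]          # class-1 probability for AUC
    results[name] = (accuracy_score(y_te, clf.predict(X_te)),
                     roc_auc_score(y_te, prob))
```

Note that AUC is computed from predicted probabilities, not hard labels, which is what allows the ROC comparison across model families reported in the paper.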
Affiliation(s)
- Zhe Xu
- Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
| | - Ruiwei Feng
- College of Computer Science and Technology, Zhejiang University, Hangzhou, Zhejiang, China
| | - Xiuming Jin
- Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
| | - Heping Hu
- College of Computer Science and Technology, Zhejiang University, Hangzhou, Zhejiang, China
| | - Shuang Ni
- Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
| | - Wen Xu
- Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
| | - Xiangshang Zheng
- College of Computer Science and Technology, Zhejiang University, Hangzhou, Zhejiang, China
| | - Jian Wu
- Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China.,School of Public Health, Zhejiang University, Hangzhou, Zhejiang, China
| | - Ke Yao
- Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
|
24
|
Dong Y, Li D, Guo Z, Liu Y, Lin P, Lv B, Lv C, Xie G, Xie L. Dissecting the Profile of Corneal Thickness With Keratoconus Progression Based on Anterior Segment Optical Coherence Tomography. Front Neurosci 2022; 15:804273. [PMID: 35173574 PMCID: PMC8842478 DOI: 10.3389/fnins.2021.804273] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Accepted: 12/23/2021] [Indexed: 12/02/2022] Open
Abstract
Purpose To characterize the corneal and epithelial thickness at different stages of keratoconus (KC), using a deep learning based corneal segmentation algorithm for anterior segment optical coherence tomography (AS-OCT). Methods An AS-OCT dataset was constructed in this study with 1,430 images from 715 eyes, which included 118 normal eyes, 134 mild KC, 239 moderate KC, 153 severe KC, and 71 scarring KC. A deep learning based corneal segmentation algorithm was applied to isolate the epithelial and corneal tissues from the background. Based on the segmentation results, the thickness of epithelial and corneal tissues was automatically measured in the central 6 mm area. One-way ANOVA and linear regression were performed in 20 equally divided zones to explore the trend of the thickness changes at different locations with KC progression. The 95% confidence intervals (CI) of epithelial thickness and corneal thickness in a specific zone were calculated to reveal the difference of thickness distribution among different groups. Results Our data showed that the deep learning based corneal segmentation algorithm achieved accurate tissue segmentation, with a thickness measurement error of less than 4 μm relative to the results from clinical experts, approximately one image pixel. Statistical analyses revealed significant corneal thickness differences in all the divided zones (P < 0.05). The entire cornea grew gradually thinner with KC progression, and the trend was more pronounced around the pupil center, with a slight shift toward the temporal and inferior side. In particular, epithelial thickness decreased progressively from normal eyes to severe KC. Owing to corneal scar formation, epithelial thickness fluctuated irregularly in scarring KC.
Conclusion Our study demonstrates that our deep learning method based on AS-OCT images could accurately delineate the corneal tissues and further successfully characterize the epithelial and corneal thickness changes at different stages of the KC progression.
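The per-zone statistics reported here, a one-way ANOVA across severity groups plus 95% confidence intervals of mean thickness, can be reproduced in outline with SciPy; the thickness values below are synthetic illustrations, not the study's measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic epithelial thickness (um) within one zone for three illustrative
# groups; KC groups are made progressively thinner, as the study reports.
groups = {"normal": 53 + 2.0 * rng.standard_normal(100),
          "mild":   51 + 2.5 * rng.standard_normal(100),
          "severe": 46 + 3.0 * rng.standard_normal(100)}

# One-way ANOVA across the groups within this zone.
F, p = stats.f_oneway(*groups.values())

# 95% CI of the mean thickness per group (normal-theory t interval).
ci = {name: stats.t.interval(0.95, len(v) - 1,
                             loc=v.mean(), scale=stats.sem(v))
      for name, v in groups.items()}
```

In the study this computation would be repeated for each of the 20 zones, with the per-zone p-values deciding where thickness differs significantly between stages.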
Affiliation(s)
- Yanling Dong
- Qingdao Eye Hospital of Shandong First Medical University, Qingdao, China
- State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Eye Institute of Shandong First Medical University, Qingdao, China
| | - Dongfang Li
- Qingdao Eye Hospital of Shandong First Medical University, Qingdao, China
- State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Eye Institute of Shandong First Medical University, Qingdao, China
| | - Zhen Guo
- Qingdao Eye Hospital of Shandong First Medical University, Qingdao, China
- State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Eye Institute of Shandong First Medical University, Qingdao, China
| | - Yang Liu
- Ping An Technology (Shenzhen) Co. Ltd., Shenzhen, China
| | - Ping Lin
- Qingdao Eye Hospital of Shandong First Medical University, Qingdao, China
- State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Eye Institute of Shandong First Medical University, Qingdao, China
| | - Bin Lv
- Ping An Technology (Shenzhen) Co. Ltd., Shenzhen, China
| | - Chuanfeng Lv
- Ping An Technology (Shenzhen) Co. Ltd., Shenzhen, China
| | - Guotong Xie
- Ping An Technology (Shenzhen) Co. Ltd., Shenzhen, China
- Ping An Health Cloud Co. Ltd., Shenzhen, China
- Ping An International Smart City Technology Co. Ltd., Shenzhen, China
- *Correspondence: Guotong Xie,
| | - Lixin Xie
- Qingdao Eye Hospital of Shandong First Medical University, Qingdao, China
- State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Eye Institute of Shandong First Medical University, Qingdao, China
- *Correspondence: Guotong Xie,
|
25
|
Cao K, Verspoor K, Sahebjada S, Baird PN. Accuracy of Machine Learning Assisted Detection of Keratoconus: A Systematic Review and Meta-Analysis. J Clin Med 2022; 11:jcm11030478. [PMID: 35159930 PMCID: PMC8836961 DOI: 10.3390/jcm11030478] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2021] [Revised: 01/10/2022] [Accepted: 01/13/2022] [Indexed: 12/26/2022] Open
Abstract
(1) Background: The objective of this review was to synthesize available data on the use of machine learning to evaluate its accuracy (as determined by pooled sensitivity and specificity) in detecting keratoconus (KC), and to measure the reporting completeness of machine learning models in KC based on the TRIPOD (transparent reporting of multivariable prediction models for individual prognosis or diagnosis) statement. (2) Methods: Two independent reviewers searched the electronic databases for all potential articles on machine learning and KC published prior to 2021. The TRIPOD 29-item checklist was used to evaluate the adherence to reporting guidelines of the studies, and the adherence rate to each item was computed. We conducted a meta-analysis to determine the pooled sensitivity and specificity of machine learning models for detecting KC. (3) Results: Thirty-five studies were included in this review. Thirty studies evaluated machine learning models for detecting KC eyes from controls and 14 studies evaluated machine learning models for detecting early KC eyes from controls. The pooled sensitivity for detecting KC was 0.970 (95% CI 0.949–0.982), with a pooled specificity of 0.985 (95% CI 0.971–0.993), whereas the pooled sensitivity for detecting early KC was 0.882 (95% CI 0.822–0.923), with a pooled specificity of 0.947 (95% CI 0.914–0.967). Between 3% and 48% of TRIPOD items were adhered to across studies, and the median adherence rate for a single TRIPOD item was 23% across all studies. (4) Conclusions: Application of machine learning models has the potential to make the diagnosis and monitoring of KC more efficient, reducing vision loss for patients. This review provides current information on the machine learning models that have been developed for detecting KC and early KC. At present, machine learning models performed poorly in identifying early KC from control eyes, and many of the studies did not follow established reporting standards, hindering the clinical translation of these machine learning models. We outline possible approaches for future studies of both KC and early KC models so that machine learning can be used more efficiently and widely in the diagnostic process.
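Pooling per-study sensitivities, as such a meta-analysis does, is commonly performed by inverse-variance weighting on the logit scale. A minimal fixed-effect sketch with invented study counts follows (the review itself may use a more sophisticated random-effects or bivariate approach):

```python
import math

# Illustrative per-study (true positives, total diseased) counts; these are
# NOT the actual studies from the review.
studies = [(95, 100), (47, 50), (180, 190), (60, 65)]

def logit(p):
    return math.log(p / (1 - p))

# Fixed-effect inverse-variance pooling on the logit scale, with a 0.5
# continuity correction; var(logit sens) is approximately 1/tp + 1/fn.
num = den = 0.0
for tp, n in studies:
    fn = n - tp
    tp_c, fn_c = tp + 0.5, fn + 0.5
    w = 1.0 / (1.0 / tp_c + 1.0 / fn_c)   # inverse-variance weight
    num += w * logit(tp_c / (tp_c + fn_c))
    den += w

pooled_logit = num / den
pooled_sens = 1.0 / (1.0 + math.exp(-pooled_logit))  # back-transform
```

The back-transformed pooled estimate lies near the weighted centre of the individual study sensitivities.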
Affiliation(s)
- Ke Cao
- Centre for Eye Research Australia, Melbourne, VIC 3002, Australia; (K.C.); (S.S.)
- Department of Surgery, Ophthalmology, The University of Melbourne, Melbourne, VIC 3002, Australia
| | - Karin Verspoor
- School of Computing Technologies, RMIT University, Melbourne, VIC 3000, Australia;
- School of Computing and Information Systems, The University of Melbourne, Melbourne, VIC 3010, Australia
| | - Srujana Sahebjada
- Centre for Eye Research Australia, Melbourne, VIC 3002, Australia; (K.C.); (S.S.)
- Department of Surgery, Ophthalmology, The University of Melbourne, Melbourne, VIC 3002, Australia
| | - Paul N. Baird
- Department of Surgery, Ophthalmology, The University of Melbourne, Melbourne, VIC 3002, Australia
- Correspondence: ; Tel.: +61-3-9929-8613
|
26
|
Maile H, Li JPO, Gore D, Leucci M, Mulholland P, Hau S, Szabo A, Moghul I, Balaskas K, Fujinami K, Hysi P, Davidson A, Liskova P, Hardcastle A, Tuft S, Pontikos N. Machine Learning Algorithms to Detect Subclinical Keratoconus: Systematic Review. JMIR Med Inform 2021; 9:e27363. [PMID: 34898463 PMCID: PMC8713097 DOI: 10.2196/27363] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2021] [Revised: 05/10/2021] [Accepted: 10/14/2021] [Indexed: 12/18/2022] Open
Abstract
BACKGROUND Keratoconus is a disorder characterized by progressive thinning and distortion of the cornea. If detected at an early stage, corneal collagen cross-linking can prevent disease progression and further visual loss. Although advanced forms are easily detected, reliable identification of subclinical disease can be problematic. Several different machine learning algorithms have been used to improve the detection of subclinical keratoconus based on the analysis of multiple types of clinical measures, such as corneal imaging, aberrometry, or biomechanical measurements. OBJECTIVE The aim of this study is to survey and critically evaluate the literature on the algorithmic detection of subclinical keratoconus and equivalent definitions. METHODS For this systematic review, we performed a structured search of the following databases: MEDLINE, Embase, Web of Science, and Cochrane Library from January 1, 2010, to October 31, 2020. We included all full-text studies that have used algorithms for the detection of subclinical keratoconus and excluded studies that did not perform validation. This systematic review followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) recommendations. RESULTS We compared the measured parameters and the design of the machine learning algorithms reported in 26 papers that met the inclusion criteria. All salient information required for detailed comparison, including diagnostic criteria, demographic data, sample size, acquisition system, validation details, parameter inputs, machine learning algorithm, and key results, is reported in this study. CONCLUSIONS Machine learning has the potential to improve the detection of subclinical keratoconus or early keratoconus in routine ophthalmic practice. Currently, there is no consensus regarding the corneal parameters that should be included for assessment and the optimal design for the machine learning algorithm.
We have identified avenues for further research to improve early detection and stratification of patients for early treatment to prevent disease progression.
Affiliation(s)
- Howard Maile
- UCL Institute of Ophthalmology, University College London, London, United Kingdom
| | | | - Daniel Gore
- Moorfields Eye Hospital, London, United Kingdom
| | | | - Padraig Mulholland
- UCL Institute of Ophthalmology, University College London, London, United Kingdom.,Moorfields Eye Hospital, London, United Kingdom.,Centre for Optometry & Vision Science, Biomedical Sciences Research Institute, Ulster University, Coleraine, United Kingdom
| | - Scott Hau
- Moorfields Eye Hospital, London, United Kingdom
| | - Anita Szabo
- UCL Institute of Ophthalmology, University College London, London, United Kingdom
| | | | | | - Kaoru Fujinami
- UCL Institute of Ophthalmology, University College London, London, United Kingdom.,Moorfields Eye Hospital, London, United Kingdom.,Laboratory of Visual Physiology, Division of Vision Research, National Institute of Sensory Organs, National Hospital Organization Tokyo Medical Center, Tokyo, Japan.,Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan
| | - Pirro Hysi
- Section of Ophthalmology, School of Life Course Sciences, King's College London, London, United Kingdom.,Department of Twin Research and Genetic Epidemiology, King's College London, London, United Kingdom
| | - Alice Davidson
- UCL Institute of Ophthalmology, University College London, London, United Kingdom
| | - Petra Liskova
- Department of Paediatrics and Inherited Metabolic Disorders, First Faculty of Medicine, Charles University and General University Hospital, Prague, Czech Republic.,Department of Ophthalmology, First Faculty of Medicine, Charles University and General University Hospital, Prague, Czech Republic
| | - Alison Hardcastle
- UCL Institute of Ophthalmology, University College London, London, United Kingdom
| | - Stephen Tuft
- UCL Institute of Ophthalmology, University College London, London, United Kingdom.,Moorfields Eye Hospital, London, United Kingdom
| | - Nikolas Pontikos
- UCL Institute of Ophthalmology, University College London, London, United Kingdom.,Moorfields Eye Hospital, London, United Kingdom
|
27
|
Keratoconus Severity Classification Using Features Selection and Machine Learning Algorithms. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2021; 2021:9979560. [PMID: 34824602 PMCID: PMC8610665 DOI: 10.1155/2021/9979560] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/30/2021] [Accepted: 10/28/2021] [Indexed: 02/03/2023]
Abstract
Keratoconus is a noninflammatory disease characterized by thinning and bulging of the cornea, generally appearing during adolescence and slowly progressing, causing vision impairment. However, the detection of keratoconus remains difficult in the early stages of the disease because the patient does not feel any pain. The development of a method for detecting this disease based on machine and deep learning methods is therefore necessary for early detection, in order to provide appropriate treatment as early as possible. Thus, the objective of this work is to determine the most relevant parameters with respect to the different classifiers used for keratoconus classification, based on the keratoconus dataset of Harvard Dataverse. A total of 446 parameters are analyzed across 3162 observations by 11 different feature selection algorithms. The results showed that the sequential forward selection (SFS) method provided a subset of the 10 most relevant variables, generating the highest classification performance with the random forest (RF) classifier, with an accuracy of 98% and 95% considering 2 and 4 keratoconus classes, respectively. The classification accuracy achieved by applying the RF classifier to the variables selected by the SFS method matches that obtained using all features of the original dataset.
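The SFS-plus-RF pipeline can be sketched with scikit-learn's SequentialFeatureSelector; the dataset, its dimensions, and the number of selected features here are synthetic stand-ins for the 446-parameter Harvard Dataverse data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the topography parameter matrix.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)

rf = RandomForestClassifier(n_estimators=50, random_state=0)
sfs = SequentialFeatureSelector(rf, n_features_to_select=5,
                                direction="forward", cv=2)
X_sel = sfs.fit_transform(X, y)  # greedy forward selection of 5 features

acc = cross_val_score(rf, X_sel, y, cv=5).mean()
```

Forward selection greedily adds the feature that most improves cross-validated performance at each step, which is why a small subset can match the full feature set's accuracy.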
|
28
|
Cao K, Verspoor K, Chan E, Daniell M, Sahebjada S, Baird PN. Machine learning with a reduced dimensionality representation of comprehensive Pentacam tomography parameters to identify subclinical keratoconus. Comput Biol Med 2021; 138:104884. [PMID: 34607273 DOI: 10.1016/j.compbiomed.2021.104884] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2021] [Revised: 09/15/2021] [Accepted: 09/19/2021] [Indexed: 12/26/2022]
Abstract
PURPOSE To investigate the performance of a machine learning model based on a reduced dimensionality parameter space derived from complete Pentacam parameters to identify subclinical keratoconus (KC). METHODS All 1692 available parameters were obtained from the Pentacam imaging machine on 145 subclinical KC and 122 control eyes. We applied a principal component analysis (PCA) to the complete Pentacam dataset to reduce its parameter dimensionality. Subsequently, we investigated the machine learning performance of the random forest algorithm with increasing numbers of components to identify their optimal number for detecting subclinical KC from control eyes. RESULTS The dimensionality of the complete set of 1692 Pentacam parameters was reduced to 267 principal components using PCA. A subsequent selection of 15 of these principal components explained over 85% of the variance of the original Pentacam-derived parameters and was used as input to train a random forest machine learning model, achieving the best accuracy of 98% in detecting subclinical KC eyes. The established model also reached a high sensitivity of 97% in identification of subclinical KC and a specificity of 98% in recognizing control eyes. CONCLUSIONS A random forest-based model trained using a modest number of components derived from a reduced dimensionality representation of complete Pentacam system parameters allowed for high accuracy of subclinical KC identification.
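A reduced-dimensionality pipeline of this kind, PCA down to a handful of components followed by a random forest, looks roughly like this in scikit-learn; the synthetic matrix stands in for the 1692 Pentacam parameters and the sample and component counts are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the high-dimensional Pentacam parameter set.
X, y = make_classification(n_samples=267, n_features=100, n_informative=20,
                           class_sep=2.0, random_state=0)

model = make_pipeline(StandardScaler(),
                      PCA(n_components=15, random_state=0),
                      RandomForestClassifier(random_state=0))
acc = cross_val_score(model, X, y, cv=5).mean()

# Share of variance retained by the 15 components (illustration only).
explained = (PCA(n_components=15)
             .fit(StandardScaler().fit_transform(X))
             .explained_variance_ratio_.sum())
```

Fitting PCA inside the pipeline keeps the projection learned on each fold's training data only, avoiding leakage into the cross-validation estimate.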
Affiliation(s)
- Ke Cao
- Centre for Eye Research Australia, Melbourne, Victoria, Australia; Department of Surgery, Ophthalmology, The University of Melbourne, Melbourne, Victoria, Australia
| | - Karin Verspoor
- School of Computing Technologies, RMIT University, Melbourne, Australia; School of Computing and Information Systems, The University of Melbourne, Melbourne, Australia
| | - Elsie Chan
- Centre for Eye Research Australia, Melbourne, Victoria, Australia; Department of Surgery, Ophthalmology, The University of Melbourne, Melbourne, Victoria, Australia; Royal Victorian Eye and Ear Hospital, Melbourne, Victoria, Australia
| | - Mark Daniell
- Centre for Eye Research Australia, Melbourne, Victoria, Australia; Department of Surgery, Ophthalmology, The University of Melbourne, Melbourne, Victoria, Australia; Royal Victorian Eye and Ear Hospital, Melbourne, Victoria, Australia
| | - Srujana Sahebjada
- Centre for Eye Research Australia, Melbourne, Victoria, Australia; Department of Surgery, Ophthalmology, The University of Melbourne, Melbourne, Victoria, Australia
| | - Paul N Baird
- Department of Surgery, Ophthalmology, The University of Melbourne, Melbourne, Victoria, Australia.
|
29
|
Wang L, Chen K, Wen H, Zheng Q, Chen Y, Pu J, Chen W. Feasibility assessment of infectious keratitis depicted on slit-lamp and smartphone photographs using deep learning. Int J Med Inform 2021; 155:104583. [PMID: 34560490 DOI: 10.1016/j.ijmedinf.2021.104583] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2021] [Revised: 09/09/2021] [Accepted: 09/14/2021] [Indexed: 12/26/2022]
Abstract
BACKGROUND This study aims to investigate how infectious keratitis depicted on slit-lamp and smartphone photographs can be reliably assessed using deep learning. MATERIALS AND METHODS We retrospectively collected a dataset consisting of 5,673 slit-lamp photographs and 400 smartphone photographs acquired on different subjects. Based on multiple clinical tests (e.g., cornea scraping), these photographs were diagnosed and classified into four categories: normal (i.e., no keratitis), bacterial keratitis (BK), fungal keratitis (FK), and herpes simplex virus stromal keratitis (HSK). We preprocessed these slit-lamp images into two separate subgroups: (1) global images and (2) regional images. The cases in each group were randomly split into training, internal validation, and independent testing sets. Then, we implemented a deep learning network based on InceptionV3 by fine-tuning its architecture and used the developed network to classify these slit-lamp images. Additionally, we investigated the performance of the InceptionV3 model in classifying infectious keratitis depicted on smartphone images. We, in particular, clarified whether the model trained on the global images outperformed the one trained on the regional images. The quadratic weighted kappa (QWK) and receiver operating characteristic (ROC) analysis were used to assess the performance of the developed models. RESULTS Our experiments on the independent testing sets showed that the developed models achieved QWKs of 0.9130 (95% CI: 88.99-93.61%), 0.8872 (95% CI: 86.13-91.31%), and 0.5379 (95% CI: 48.89-58.69%) for the global images, the regional images, and the smartphone images, respectively. The areas under the ROC curves (AUCs) were 0.9588 (95% CI: 94.28-97.48%), 0.9425 (95% CI: 92.35-96.15%), and 0.8529 (95% CI: 81.79-88.79%) for the same test sets, respectively.
CONCLUSION The deep learning solution demonstrated very promising performance in assessing infectious keratitis depicted on slit-lamp photographs and the images acquired by smartphones. In particular, the model trained on the global images outperformed that trained on the regional images.
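The two headline metrics here, quadratic weighted kappa and ROC AUC, are available directly in scikit-learn; the labels and scores below are fabricated purely for illustration:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, roc_auc_score

rng = np.random.default_rng(3)
# Illustrative 4-class labels (0=normal, 1=BK, 2=FK, 3=HSK) and predictions
# that agree with the truth except for ~15% random replacements.
y_true = rng.integers(0, 4, 500)
noisy = rng.random(500) < 0.15
y_pred = np.where(noisy, rng.integers(0, 4, 500), y_true)

# Quadratic weighting penalizes predictions further from the true class more.
qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")

# One-vs-rest AUC for the "normal" class from illustrative scores.
scores = np.where(y_true == 0, 0.8, 0.3) + 0.2 * rng.standard_normal(500)
auc = roc_auc_score((y_true == 0).astype(int), scores)
```

QWK is the appropriate agreement statistic when the four categories have an implied ordering of clinical severity, since plain accuracy treats all confusions as equal.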
Affiliation(s)
- Lei Wang
- School of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China; Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, Nanjing 211189, China.
| | - Kuan Chen
- School of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
| | - Han Wen
- School of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
| | - Qinxiang Zheng
- School of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
| | - Yang Chen
- Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, Nanjing 211189, China
| | - Jiantao Pu
- Departments of Radiology and Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
| | - Wei Chen
- School of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China.
|
30
|
Diagnosis of Subclinical Keratoconus Based on Machine Learning Techniques. J Clin Med 2021; 10:jcm10184281. [PMID: 34575391 PMCID: PMC8468312 DOI: 10.3390/jcm10184281] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2021] [Revised: 08/24/2021] [Accepted: 09/17/2021] [Indexed: 11/25/2022] Open
Abstract
(1) Background: Keratoconus is a non-inflammatory corneal disease characterized by gradual thinning of the stroma, resulting in an irreversible decline in visual quality and quantity. Early detection of keratoconus and subsequent prevention of possible risks are crucial factors in limiting its progression. Random forest is a machine learning technique for classification based on the construction of thousands of decision trees. The aim of this study was to use the random forest technique in the classification and prediction of subclinical keratoconus, considering the metrics proposed by Pentacam and Corvis. (2) Methods: The design was a retrospective cross-sectional study. A total of 81 eyes of 81 patients were enrolled: sixty-one eyes with healthy corneas and twenty eyes with subclinical keratoconus (SCKC). This initial stage includes patients with the following conditions: (1) minor topographic signs of keratoconus and suspicious topographic findings (mild asymmetric bow tie, with or without deviation); (2) average K (mean corneal curvature) < 46.5 D; (3) minimum corneal thickness (ECM) > 490 μm; (4) no slit-lamp findings; and (5) clinical keratoconus in the contralateral eye. Pentacam topographic and Corvis biomechanical variables were collected. Decision tree and random forest were used as machine learning techniques for classification. Random forest ranked the variables most critical for classification. (3) Results: The most important variable was SP A1 (stiffness parameter A1), followed by A2 time, posterior coma 0°, A2 velocity and peak distance. The model efficiently predicted all patients with subclinical keratoconus (Sp = 93%) and was also a good model for classifying healthy cases (Sen = 86%). The overall accuracy rate of the model was 89%. (4) Conclusions: The random forest model was a good model for classifying subclinical keratoconus.
The SP A1 variable was the most critical determinant in classifying and identifying subclinical keratoconus, followed by A2 time.
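Ranking predictors by random forest importance, as done here to surface SP A1 and A2 time, can be sketched as follows; the feature names, class shifts, and filler columns are invented:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
names = ["SP_A1", "A2_time", "post_coma_0", "A2_velocity", "peak_distance",
         "filler_1", "filler_2"]  # hypothetical column labels
n = 300
y = rng.integers(0, 2, n)  # 0 = healthy, 1 = subclinical keratoconus
X = rng.standard_normal((n, len(names)))
# Give the first two columns most of the class signal (invented effect sizes).
X[:, 0] += 2.0 * y
X[:, 1] += 1.0 * y

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = sorted(zip(names, rf.feature_importances_),
                 key=lambda t: t[1], reverse=True)
```

Impurity-based importances are fast but can be biased toward high-cardinality features; permutation importance is a common cross-check.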
|
31
|
Shanthi S, Aruljyothi L, Balasundaram MB, Janakiraman A, Nirmaladevi K, Pyingkodi M. Artificial intelligence applications in different imaging modalities for corneal topography. Surv Ophthalmol 2021; 67:801-816. [PMID: 34450134 DOI: 10.1016/j.survophthal.2021.08.004] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2020] [Revised: 08/13/2021] [Accepted: 08/16/2021] [Indexed: 12/26/2022]
Abstract
Interpretation of the topographical maps used to detect corneal ectasias requires a high level of expertise. Several artificial intelligence (AI) technologies have attempted to interpret topographic maps. The purpose of this study is to provide a review of AI algorithms in corneal topography from the perspectives of an eye care professional, a biomedical engineer, and a data scientist. A systematic literature review using Web of Science, PubMed, and Google Scholar was performed from 2010 to 2020 on themes regarding imaging modalities and their parameters, the purpose and conclusions of each study, and their samples and performance related to AI in corneal topography. We provide a comprehensive summary of advances in corneal imaging and its applications in AI. Combined metrics from the dual Scheimpflug and Placido device could be a good starting point for trying AI models in corneal imaging systems. The area under the receiver operating characteristic curve for AI in keratoconus detection and classification ranged from 0.87 to 1, sensitivity from 0.89 to 1, and specificity from 0.82 to 1. A combination of different types of AI applications to corneal ectasia diagnosis is recommended.
Affiliation(s)
- S Shanthi
- Kongu Engineering College, Erode, Tamil Nadu, India.
- M Pyingkodi
- Kongu Engineering College, Erode, Tamil Nadu, India
32
Consejo A, Jiménez-García M, Issarti I, Rozema JJ. Detection of Subclinical Keratoconus With a Validated Alternative Method to Corneal Densitometry. Transl Vis Sci Technol 2021; 10:32. [PMID: 34436543] [PMCID: PMC8399563] [DOI: 10.1167/tvst.10.9.32]
Abstract
Purpose To enhance the current standards of subclinical keratoconus screening based on statistical modeling of the pixel intensity distribution of Scheimpflug images. Methods Scheimpflug corneal tomographies corresponding to 25 corneal meridians of 60 participants were retrospectively collected and divided into three groups: controls (20 eyes), subclinical keratoconus (20 eyes), and clinical keratoconus (20 eyes). Only right eyes were selected. After corneal segmentation, pixel intensities of the stromal tissue were statistically modeled using a Weibull probability density function, from which parameter α (pixel brightness) was derived. Further, data were transformed to polar coordinates, smoothed, and interpolated to build a map of the corneal α parameter. The discriminative power of the method was analyzed using receiver operating characteristic curves. Results The proposed platform-independent method achieved higher performance in discriminating subclinical keratoconus from control eyes (90.0% sensitivity, 95.0% specificity, 0.97 area under the curve [AUC]) than the standard method (Belin-Ambrósio enhanced ectasia display), which uses only corneal morphometry (85.0% sensitivity, 85.0% specificity, 0.80 AUC). Conclusions Analysis of light backscatter at the cornea successfully discriminates subclinical keratoconus from control eyes, improving on results previously reported in the literature. Translational Relevance The proposed methodology has the potential to support clinicians in detecting keratoconus before clinical signs appear.
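A minimal sketch of the statistical step described above, assuming SciPy's `weibull_min` as the Weibull model (not the authors' implementation): fit a two-parameter Weibull to stand-in stromal pixel intensities and recover the scale parameter, playing the role of α (pixel brightness). All values are synthetic:

```python
from scipy.stats import weibull_min

# Synthetic stand-in for stromal pixel intensities along one corneal meridian;
# true_alpha plays the role of the Weibull "pixel brightness" scale parameter.
true_beta, true_alpha = 2.0, 120.0      # shape, scale (assumed values)
pixels = weibull_min.rvs(true_beta, loc=0, scale=true_alpha,
                         size=5000, random_state=1)

# Fit a two-parameter Weibull (location fixed at 0) and recover alpha
beta_hat, _, alpha_hat = weibull_min.fit(pixels, floc=0)
print(f"alpha ~ {alpha_hat:.1f}, beta ~ {beta_hat:.2f}")
```

Repeating such a fit per meridian and per corneal location, then interpolating the fitted α values, would give the kind of corneal α map the study builds.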
Affiliation(s)
- Alejandra Consejo
- Department of Applied Physics, University of Zaragoza, Zaragoza, Spain; Institute of Physical Chemistry, Polish Academy of Sciences, Warsaw, Poland
- Marta Jiménez-García
- Department of Ophthalmology, Antwerp University Hospital, Edegem, Belgium; Department of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Ikram Issarti
- Department of Ophthalmology, Antwerp University Hospital, Edegem, Belgium; Department of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Jos J Rozema
- Department of Ophthalmology, Antwerp University Hospital, Edegem, Belgium; Department of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
33
Forecasting Progressive Trends in Keratoconus by Means of a Time Delay Neural Network. J Clin Med 2021; 10:jcm10153238. [PMID: 34362023] [PMCID: PMC8347247] [DOI: 10.3390/jcm10153238]
Abstract
Early and accurate detection of keratoconus progression is particularly important for the prudent, cost-effective use of corneal cross-linking and judicious timing of clinical follow-up visits. The aim of this study was to verify whether progression could be predicted based on two prior tomography measurements and to verify the accuracy of the system when labelling the eye as stable or suspect progressive. Data from 743 patients measured by Pentacam (Oculus, Wetzlar, Germany) were available; they were filtered and preprocessed according to data quality needs. The time delay neural network received six features as input, measured in two consecutive examinations, predicted the future values, and determined the classification (stable or suspect progressive) based on the significance of the change from baseline. The system showed a sensitivity of 70.8% and a specificity of 80.6%. On average, the positive and negative predictive values were 71.4% and 80.2%. Including data of lower quality (as defined by the software) did not significantly worsen the results. This predictive system constitutes another step towards personalized management of keratoconus. While the results obtained were modest and perhaps insufficient to decide on a surgical procedure such as cross-linking, they may be useful for customizing the timing of the patient's next follow-up.
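The four reported metrics follow directly from a 2x2 confusion matrix of stable versus suspect-progressive calls. A short sketch; the counts below are hypothetical, chosen only to roughly mirror the reported sensitivity and specificity:

```python
# Hedged illustration (hypothetical counts, not the study's data): how
# sensitivity, specificity, PPV and NPV follow from a 2x2 confusion matrix.
def binary_metrics(tp, fn, fp, tn):
    """Metrics for a stable vs suspect-progressive classification."""
    return {
        "sensitivity": tp / (tp + fn),   # progressive eyes flagged as progressive
        "specificity": tn / (tn + fp),   # stable eyes labelled stable
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Example: 100 truly progressive eyes and 200 truly stable eyes (assumed)
m = binary_metrics(tp=71, fn=29, fp=39, tn=161)
print({k: round(v, 3) for k, v in m.items()})
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on how many progressive versus stable eyes are in the cohort, which is why the paper reports them separately.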
34
Chen X, Zhao J, Iselin KC, Borroni D, Romano D, Gokul A, McGhee CNJ, Zhao Y, Sedaghat MR, Momeni-Moghaddam H, Ziaei M, Kaye S, Romano V, Zheng Y. Keratoconus detection of changes using deep learning of colour-coded maps. BMJ Open Ophthalmol 2021; 6:e000824. [PMID: 34337155] [PMCID: PMC8278890] [DOI: 10.1136/bmjophth-2021-000824]
Abstract
Objective To evaluate the accuracy of the convolutional neural network (CNN) technique in detecting keratoconus using colour-coded corneal maps obtained by a Scheimpflug camera. Design Multicentre retrospective study. Methods and analysis We included the images of keratoconic and healthy volunteers' eyes provided by three centres: Royal Liverpool University Hospital (Liverpool, UK), Sedaghat Eye Clinic (Mashhad, Iran) and the New Zealand National Eye Centre (New Zealand). Corneal tomography scans, including healthy controls, were used to train and test CNN models. Keratoconic scans were classified according to the Amsler-Krumeich classification. Keratoconic scans from Iran were used as an independent testing set. Four maps were considered for each scan: axial map, anterior and posterior elevation map, and pachymetry map. Results A CNN model detected keratoconus versus healthy eyes with an accuracy of 0.9785 on the testing set when all four maps were concatenated. Considering each map independently, the accuracy was 0.9283 for the axial map, 0.9642 for the thickness map, 0.9642 for the front elevation map and 0.9749 for the back elevation map. Using the concatenated maps, the accuracy of the models in distinguishing between healthy controls and stage 1 was 0.90, between stages 1 and 2 was 0.9032, and between stages 2 and 3 was 0.8537. Conclusion CNNs provide excellent detection performance for keratoconus and accurately grade different severities of disease using the colour-coded maps obtained by the Scheimpflug camera. CNNs have the potential to be further developed, validated and adopted for screening and management of keratoconus.
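The "four maps concatenated" input can be illustrated as a simple channel-wise stack; the 224x224 resolution and RGB encoding below are assumptions for illustration, not the authors' preprocessing:

```python
import numpy as np

# Four colour-coded maps per scan (assumed 224x224 RGB each), stacked
# channel-wise into one CNN input tensor.
H = W = 224
axial = np.zeros((H, W, 3))
front_elev = np.zeros((H, W, 3))
back_elev = np.zeros((H, W, 3))
pachy = np.zeros((H, W, 3))

# Concatenate along the channel axis: 4 RGB maps -> one 12-channel input
x = np.concatenate([axial, front_elev, back_elev, pachy], axis=-1)
print(x.shape)
```

The per-map accuracies reported above correspond to feeding each 3-channel map alone, whereas the best result uses the full 12-channel stack.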
Affiliation(s)
- Xu Chen
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Jiaxin Zhao
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Katja C Iselin
- Department of Ophthalmology, St Paul's Eye Unit, Royal Liverpool University Hospital, Liverpool, UK
- Davide Borroni
- Department of Ophthalmology, St Paul's Eye Unit, Royal Liverpool University Hospital, Liverpool, UK
- Davide Romano
- Department of Ophthalmology, St Paul's Eye Unit, Royal Liverpool University Hospital, Liverpool, UK
- Akilesh Gokul
- Department of Ophthalmology, New Zealand National Eye Centre, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Charles N J McGhee
- Department of Ophthalmology, New Zealand National Eye Centre, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Yitian Zhao
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Mohammad-Reza Sedaghat
- Eye Research Center, Mashhad University of Medical Sciences, Mashhad, Iran; Health Promotion Research Center, Zahedan University of Medical Sciences, Zahedan, Iran
- Hamed Momeni-Moghaddam
- Eye Research Center, Mashhad University of Medical Sciences, Mashhad, Iran; Health Promotion Research Center, Zahedan University of Medical Sciences, Zahedan, Iran
- Mohammed Ziaei
- Department of Ophthalmology, New Zealand National Eye Centre, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Stephen Kaye
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK; Department of Ophthalmology, St Paul's Eye Unit, Royal Liverpool University Hospital, Liverpool, UK
- Vito Romano
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK; Department of Ophthalmology, St Paul's Eye Unit, Royal Liverpool University Hospital, Liverpool, UK
- Yalin Zheng
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
35
Rampat R, Deshmukh R, Chen X, Ting DSW, Said DG, Dua HS, Ting DSJ. Artificial Intelligence in Cornea, Refractive Surgery, and Cataract: Basic Principles, Clinical Applications, and Future Directions. Asia Pac J Ophthalmol (Phila) 2021; 10:268-281. [PMID: 34224467] [PMCID: PMC7611495] [DOI: 10.1097/apo.0000000000000394]
Abstract
Corneal diseases, uncorrected refractive errors, and cataract represent the major causes of blindness globally. The number of refractive surgeries, either cornea- or lens-based, is also on the rise as the demand for perfect vision continues to increase. With the recent advancement and potential promise of artificial intelligence (AI) technologies demonstrated in the realm of ophthalmology, particularly retinal diseases and glaucoma, AI researchers and clinicians are now channeling their focus toward the less explored ophthalmic areas related to the anterior segment of the eye. Conditions that rely on anterior segment imaging modalities, including slit-lamp photography, anterior segment optical coherence tomography, corneal tomography, in vivo confocal microscopy and/or optical biometers, are the most commonly explored areas. These include infectious keratitis, keratoconus, corneal grafts, ocular surface pathologies, preoperative screening before refractive surgery, intraocular lens calculation, and automated refraction, among others. In this review, we aim to provide a comprehensive update on the utilization of AI in anterior segment diseases, with particular emphasis on advancements of the past few years. In addition, we demystify some of the basic principles and terminologies related to AI, particularly machine learning and deep learning, to help improve the understanding, research, and clinical implementation of these AI technologies among ophthalmologists and vision scientists. As we march toward the era of digital health, guidelines such as CONSORT-AI, SPIRIT-AI, and STARD-AI will play crucial roles in guiding and standardizing the conduct and reporting of AI-related trials, ultimately promoting their potential for clinical translation.
Affiliation(s)
- Rashmi Deshmukh
- Department of Ophthalmology, Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK
- Xin Chen
- School of Computer Science, University of Nottingham, Nottingham, UK
- Daniel S. W. Ting
- Duke-NUS Medical School, National University of Singapore, Singapore
- Singapore National Eye Centre / Singapore Eye Research Institute, Singapore
- Dalia G. Said
- Academic Ophthalmology, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, UK
- Department of Ophthalmology, Queen’s Medical Centre, Nottingham, UK
- Harminder S. Dua
- Academic Ophthalmology, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, UK
- Department of Ophthalmology, Queen’s Medical Centre, Nottingham, UK
- Darren S. J. Ting
- Singapore National Eye Centre / Singapore Eye Research Institute, Singapore
- Academic Ophthalmology, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, UK
- Department of Ophthalmology, Queen’s Medical Centre, Nottingham, UK
36
Cui T, Yun D, Wu X, Lin H. Anterior Segment and Others in Teleophthalmology: Past, Present, and Future. Asia Pac J Ophthalmol (Phila) 2021; 10:234-243. [PMID: 34224468] [DOI: 10.1097/apo.0000000000000396]
Abstract
Teleophthalmology, a subfield of telemedicine, has recently been widely applied in ophthalmic disease management, accelerated by ubiquitous connectivity via mobile computing and communication applications. Teleophthalmology has strengths in overcoming geographic barriers and broadening access to medical resources, as a supplement to face-to-face clinical settings. The eye, especially the anterior segment, is one of the most researched superficial parts of the human body. Therefore, ophthalmic images, easily captured by portable devices, have been widely applied in teleophthalmology, boosted by advancements in software and hardware in recent years. This review aims to survey current teleophthalmology applications in the anterior segment and other diseases from a temporal and spatial perspective, and to summarize common scenarios in teleophthalmology, including screening, diagnosis, treatment, monitoring, postoperative follow-up, and tele-education of patients and clinical practitioners. Further, challenges in the current application of teleophthalmology and its future development are discussed.
Affiliation(s)
- Tingxin Cui
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Dongyuan Yun
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Center for Precision Medicine, Sun Yat-sen University, Guangzhou, China
- School of Biomedical Engineering, Sun Yat-sen University, Guangzhou, China
37
Abdelmotaal H, Abdou AA, Omar AF, El-Sebaity DM, Abdelazeem K. Pix2pix Conditional Generative Adversarial Networks for Scheimpflug Camera Color-Coded Corneal Tomography Image Generation. Transl Vis Sci Technol 2021; 10:21. [PMID: 34132759] [PMCID: PMC8242686] [DOI: 10.1167/tvst.10.7.21]
Abstract
Purpose To assess the ability of the pix2pix conditional generative adversarial network (pix2pix cGAN) to create plausible synthesized Scheimpflug camera color-coded corneal tomography images from a modest-sized original dataset, for use in image augmentation when training a deep convolutional neural network (DCNN) to classify keratoconus and normal corneal images. Methods Original images of 1778 eyes of 923 nonconsecutive patients with or without keratoconus were retrospectively analyzed. Images were labeled and preprocessed for use in training the proposed pix2pix cGAN. The best-quality synthesized images were selected based on the Fréchet inception distance score, and their quality was studied by calculating the mean square error, structural similarity index, and peak signal-to-noise ratio. We used original, traditionally augmented original, and synthesized images to train a DCNN for image classification and compared classification performance metrics. Results The pix2pix cGAN synthesized images showed plausible quality, as assessed both subjectively and objectively. Training the DCNN with a combination of real and synthesized images yielded better classification performance than training with original images only or with traditional augmentation. Conclusions Using the pix2pix cGAN to synthesize corneal tomography images can overcome issues related to small datasets and class imbalance when training computer-aided diagnostic models. Translational Relevance Pix2pix cGAN can provide an unlimited supply of plausible synthetic Scheimpflug camera color-coded corneal tomography images at levels useful for experimental and clinical applications.
Affiliation(s)
- Hazem Abdelmotaal
- Department of Ophthalmology, Faculty of Medicine, Assiut University, Assiut, Egypt
- Ahmed A Abdou
- Department of Ophthalmology, Faculty of Medicine, Assiut University, Assiut, Egypt
- Ahmed F Omar
- Department of Ophthalmology, Faculty of Medicine, Assiut University, Assiut, Egypt
- Khaled Abdelazeem
- Department of Ophthalmology, Faculty of Medicine, Assiut University, Assiut, Egypt
38
Al-Timemy AH, Ghaeb NH, Mosa ZM, Escudero J. Deep Transfer Learning for Improved Detection of Keratoconus using Corneal Topographic Maps. Cognit Comput 2021. [DOI: 10.1007/s12559-021-09880-3]
Abstract
Clinical keratoconus (KCN) detection is a challenging and time-consuming task. In the diagnosis process, ophthalmologists must review demographic data and clinical ophthalmic examinations; the latter include slit-lamp examination, corneal topographic maps, and Pentacam indices (PI). We propose an Ensemble of Deep Transfer Learning (EDTL) based on corneal topographic maps. We consider four pretrained networks, SqueezeNet (SqN), AlexNet (AN), ShuffleNet (SfN), and MobileNet-v2 (MN), and fine-tune them on a dataset of KCN and normal cases, each including four topographic maps. We also consider a PI classifier. Our EDTL method then combines the output probabilities of each of the five classifiers to obtain a decision based on the fusion of probabilities. Individually, the classifier based on PI achieved 93.1% accuracy, whereas the deep classifiers reached classification accuracies over 90% only in isolated cases. Overall, the average accuracy of the deep networks over the four corneal maps ranged from 86% (SfN) to 89.9% (AN). The classifier ensemble increased the accuracy of the deep classifiers based on corneal maps, to ranges of 92.2%-93.1% for SqN and 93.1%-94.8% for AN. Including specific combinations of corneal-map classifiers and PI in the ensemble increased the accuracy to 98.3%. Moreover, visualization of first-layer filters in the networks and Grad-CAMs confirmed that the networks had learned relevant clinical features. This study shows the potential of creating ensembles of deep classifiers fine-tuned with a transfer learning strategy, as it resulted in improved accuracy while showing learnable filters and Grad-CAMs that agree with clinical knowledge. This is a step further towards the potential clinical deployment of an improved computer-assisted diagnosis system for KCN detection to help ophthalmologists confirm the clinical decision and perform fast and accurate KCN treatment.
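The probability-fusion step of an ensemble like this can be sketched in a few lines; the per-classifier probabilities below are hypothetical, and the simple average is one common fusion rule, not necessarily the exact rule the authors used:

```python
import numpy as np

# Minimal sketch (not the authors' code): five classifiers (four deep
# networks plus one Pentacam-index classifier) each emit [P(normal), P(KCN)];
# the ensemble averages these vectors and takes the argmax.
CLASSES = ["normal", "KCN"]

def fuse(probabilities):
    """Average the per-classifier probability vectors and pick the class."""
    avg = np.mean(np.asarray(probabilities), axis=0)
    return CLASSES[int(np.argmax(avg))], avg

# Hypothetical outputs for one eye
probs = [
    [0.30, 0.70],  # SqueezeNet
    [0.45, 0.55],  # AlexNet
    [0.60, 0.40],  # ShuffleNet
    [0.20, 0.80],  # MobileNet-v2
    [0.10, 0.90],  # Pentacam-index classifier
]
label, avg = fuse(probs)
print(label, avg)
```

The point of the fusion is visible even here: ShuffleNet alone votes "normal", but the averaged evidence of all five classifiers still yields "KCN".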
39
Logistic Regression Model Using Scheimpflug-Placido Cornea Topographer Parameters to Diagnose Keratoconus. J Ophthalmol 2021; 2021:5528927. [PMID: 34113464] [PMCID: PMC8154304] [DOI: 10.1155/2021/5528927]
Abstract
Purpose To diagnose keratoconus by establishing an effective logistic regression model from the data obtained with a Scheimpflug-Placido cornea topographer. Methods Topographical parameters of 125 eyes of 70 patients diagnosed with keratoconus by clinical or topographical findings were compared with 120 eyes of 63 patients who were defined as keratorefractive surgery candidates. Receiver operating characteristic (ROC) curve analysis was performed to determine the diagnostic ability of the topographic parameters. The data set of parameters with an AUROC (area under the ROC curve) value greater than 0.9 was analyzed with logistic regression analysis (LRA) to determine the most predictive model for diagnosing keratoconus. A logit formula of the model was built, the logit values of every eye in the study were calculated according to this formula, and an ROC analysis of the logit values was then performed. Results The Baiocchi Calossi Versaci front index (BCVf) had the highest AUROC value (0.976) in the study. The LRA model with the highest predictive ability had 97.5% accuracy, 96.8% sensitivity, and 99.2% specificity. The most significant parameters were found to be BCVf (p=0.001), BCVb (Baiocchi Calossi Versaci back) (p=0.002), posterior rf (apical radius of the flattest meridian of the aspherotoric surface in the central 4.5 mm diameter of the cornea) (p=0.005), central corneal thickness (p=0.072), and minimum corneal thickness (p=0.494). Conclusions The LRA model can distinguish keratoconus corneas from normal ones with high accuracy without the need for complex computer algorithms.
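A hedged sketch of the two-stage procedure described above: fit a logistic regression on high-AUROC parameters, compute the logit for every eye, then run ROC analysis on the logit values. scikit-learn is assumed, and the synthetic stand-ins for BCVf and BCVb (values, separations, sample sizes) are illustrative assumptions, not the study's data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for two predictors playing the roles of BCVf and BCVb
n_kc, n_norm = 125, 120
X_kc = np.column_stack([rng.normal(2.5, 1.0, n_kc),
                        rng.normal(1.5, 0.8, n_kc)])
X_norm = np.column_stack([rng.normal(0.5, 0.6, n_norm),
                          rng.normal(0.3, 0.5, n_norm)])
X = np.vstack([X_kc, X_norm])
y = np.array([1] * n_kc + [0] * n_norm)   # 1 = keratoconus

model = LogisticRegression().fit(X, y)

# The "logit formula" of the fitted model: logit = b0 + b1*x1 + b2*x2,
# evaluated for every eye, then an ROC analysis of the logit values.
logits = model.decision_function(X)
auc = roc_auc_score(y, logits)
print(f"AUROC of the logit values: {auc:.3f}")
```

The logit is a monotone function of the predicted probability, so ranking eyes by logit and ranking them by predicted probability give the same ROC curve.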
40
Feng R, Xu Z, Zheng X, Hu H, Jin X, Chen DZ, Yao K, Wu J. KerNet: A Novel Deep Learning Approach for Keratoconus and Sub-clinical Keratoconus Detection Based on Raw Data of the Pentacam System. IEEE J Biomed Health Inform 2021; 25:3898-3910. [PMID: 33979295] [DOI: 10.1109/jbhi.2021.3079430]
Abstract
Keratoconus is one of the most severe corneal diseases; it is difficult to detect at the early stage (i.e., sub-clinical keratoconus) and can result in vision loss. In this paper, we propose a novel end-to-end deep learning approach, called KerNet, which processes the raw data of the Pentacam system to detect keratoconus and sub-clinical keratoconus. First, we collect raw data from the Pentacam system. The raw data have a specific format: each sample consists of five numerical matrices, corresponding to the front and back surface curvature, the front and back surface elevation, and the pachymetry of an eye. We then propose a novel convolutional neural network, called KerNet, containing five branches as the backbone with a multi-level fusion architecture. The five branches receive the five slices separately and effectively capture the features of the different slices via several cascaded residual blocks. The multi-level fusion architecture (i.e., low-level and high-level fusion) moderately takes into account the correlation among the five slices and fuses the extracted features for better prediction. Specifically, five spatial attention modules, one per branch, guide the operation of the low-level fusion. The high-level fusion is implemented by simply concatenating the output feature maps of the last residual block in each branch. Experimental results show that: (1) our approach outperforms state-of-the-art methods on an in-house dataset, by ~1% for keratoconus detection accuracy and ~4% for sub-clinical keratoconus detection accuracy; (2) the attention maps visualized by Grad-CAM show that KerNet places more attention on the inferior temporal part for sub-clinical keratoconus, which previous clinical studies have identified as the region ophthalmologists use to detect sub-clinical keratoconus. To the best of our knowledge, we are the first to propose an end-to-end deep learning approach utilizing raw data obtained by the Pentacam system for keratoconus and sub-clinical keratoconus detection. Furthermore, the prediction performance and clinical significance of KerNet were evaluated and confirmed by two clinical experts. Our code is available at https://github.com/upzheng/Keratoconus.
41
Dong Y, Li D, Guo Z, Liu Y, Lin P, Lv B, Lv C, Xie G, Xie L. Dissecting the Profile of Corneal Thickness With Keratoconus Progression Based on Anterior Segment Optical Coherence Tomography. Front Neurosci 2021; 15:804273. [PMID: 35173574] [PMCID: PMC8842478] [DOI: 10.3389/fnins.2021.804273]
Abstract
PURPOSE To characterize corneal and epithelial thickness at different stages of keratoconus (KC), using a deep learning based corneal segmentation algorithm for anterior segment optical coherence tomography (AS-OCT). METHODS An AS-OCT dataset was constructed with 1,430 images from 715 eyes, comprising 118 normal eyes, 134 mild KC, 239 moderate KC, 153 severe KC, and 71 scarring KC. A deep learning based corneal segmentation algorithm was applied to isolate the epithelial and corneal tissues from the background. Based on the segmentation results, the thickness of the epithelial and corneal tissues was automatically measured in the central 6 mm area. One-way ANOVA and linear regression were performed in 20 equally divided zones to explore the trend of thickness changes at different locations with KC progression. The 95% confidence intervals (CI) of epithelial and corneal thickness in each zone were calculated to reveal differences in thickness distribution among the groups. RESULTS Our data showed that the deep learning based corneal segmentation algorithm achieved accurate tissue segmentation; the error of the measured thickness relative to results from clinical experts was less than 4 μm, approximately one image pixel. Statistical analyses revealed significant corneal thickness differences in all divided zones (P < 0.05). The entire cornea grew gradually thinner with KC progression, and this trend was more pronounced around the pupil center, with a slight shift toward the temporal and inferior side. In particular, epithelial thickness decreased gradually from normal eyes to severe KC. Owing to the formation of corneal scarring, epithelial thickness fluctuated irregularly in scarring KC. CONCLUSION Our study demonstrates that our deep learning method based on AS-OCT images can accurately delineate corneal tissues and successfully characterize epithelial and corneal thickness changes at different stages of KC progression.
Affiliation(s)
- Yanling Dong
- Qingdao Eye Hospital of Shandong First Medical University, Qingdao, China
- State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Eye Institute of Shandong First Medical University, Qingdao, China
- Dongfang Li
- Qingdao Eye Hospital of Shandong First Medical University, Qingdao, China
- State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Eye Institute of Shandong First Medical University, Qingdao, China
- Zhen Guo
- Qingdao Eye Hospital of Shandong First Medical University, Qingdao, China
- State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Eye Institute of Shandong First Medical University, Qingdao, China
- Yang Liu
- Ping An Technology (Shenzhen) Co. Ltd., Shenzhen, China
- Ping Lin
- Qingdao Eye Hospital of Shandong First Medical University, Qingdao, China
- State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Eye Institute of Shandong First Medical University, Qingdao, China
- Bin Lv
- Ping An Technology (Shenzhen) Co. Ltd., Shenzhen, China
- Chuanfeng Lv
- Ping An Technology (Shenzhen) Co. Ltd., Shenzhen, China
- Guotong Xie (correspondence)
- Ping An Technology (Shenzhen) Co. Ltd., Shenzhen, China
- Ping An Health Cloud Co. Ltd., Shenzhen, China
- Ping An International Smart City Technology Co. Ltd., Shenzhen, China
- Lixin Xie
- Qingdao Eye Hospital of Shandong First Medical University, Qingdao, China
- State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Eye Institute of Shandong First Medical University, Qingdao, China
42
Abdelmotaal H, Mostafa MM, Mostafa ANR, Mohamed AA, Abdelazeem K. Classification of Color-Coded Scheimpflug Camera Corneal Tomography Images Using Deep Learning. Transl Vis Sci Technol 2020; 9:30. [PMID: 33384884] [PMCID: PMC7757611] [DOI: 10.1167/tvst.9.13.30]
Abstract
Purpose To assess the use of deep learning for high-performance image classification of color-coded corneal maps obtained using a Scheimpflug camera. Methods We used a domain-specific convolutional neural network (CNN) to implement deep learning. CNN performance was assessed using standard metrics and detailed error analyses, including network activation maps. Results The CNN classified four map-selectable display images with average accuracies of 0.983 and 0.958 for the training and test sets, respectively. Network activation maps revealed that the model was heavily influenced by clinically relevant spatial regions. Conclusions Deep learning using color-coded Scheimpflug images achieved high diagnostic performance with regard to discriminating keratoconus, subclinical keratoconus, and normal corneal images at levels that may be useful in clinical practice when screening refractive surgery candidates. Translational Relevance Deep learning can assist human graders in keratoconus detection in Scheimpflug camera color-coded corneal tomography maps.
Affiliation(s)
- Hazem Abdelmotaal
- Magdi M Mostafa
- Ali N R Mostafa
- Abdelsalam A Mohamed
- Khaled Abdelazeem
- All authors: Department of Ophthalmology, Faculty of Medicine, Assiut University, Assiut, Egypt
43
Shi C, Wang M, Zhu T, Zhang Y, Ye Y, Jiang J, Chen S, Lu F, Shen M. Machine learning helps improve diagnostic ability of subclinical keratoconus using Scheimpflug and OCT imaging modalities. Eye and Vision 2020; 7:48. [PMID: 32974414 PMCID: PMC7507244 DOI: 10.1186/s40662-020-00213-3] [Received: 12/20/2019] [Accepted: 08/19/2020] [Indexed: 12/26/2022]
Abstract
Purpose To develop an automated classification system using a machine learning classifier to distinguish clinically unaffected eyes in patients with keratoconus from a normal control population, based on a combination of Scheimpflug camera images and ultra-high-resolution optical coherence tomography (UHR-OCT) imaging data. Methods A total of 121 eyes from 121 participants were classified by 2 cornea experts into 3 groups: normal (50 eyes), keratoconus (38 eyes) or subclinical keratoconus (33 eyes). All eyes were imaged with a Scheimpflug camera and UHR-OCT, and corneal morphological features were extracted from the imaging data. A neural network was trained on these features to distinguish eyes with subclinical keratoconus from normal eyes. Fisher's score was used to rank the discriminative power of each feature, and receiver operating characteristic (ROC) curves were calculated to obtain the areas under the ROC curves (AUCs). Results The classification model combining all features from the Scheimpflug camera and UHR-OCT markedly improved discrimination between normal eyes and eyes with subclinical keratoconus (AUC = 0.93). The within-individual variation in the corneal epithelial thickness profile extracted from UHR-OCT imaging ranked highest in differentiating eyes with subclinical keratoconus from normal eyes. Conclusion The automated classification system using machine learning on combined Scheimpflug camera and UHR-OCT imaging data showed excellent performance in discriminating eyes with subclinical keratoconus from normal eyes, and the epithelial features extracted from the OCT images were the most valuable in the discrimination process. This classification system has the potential to improve the detection of subclinical keratoconus and the efficiency of keratoconus screening.
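The Fisher-score feature ranking mentioned above has a simple closed form: the ratio of between-class scatter to within-class scatter, computed per feature. The following NumPy sketch illustrates it on synthetic data; it is a generic illustration, not the authors' implementation:

```python
import numpy as np

def fisher_score(X, y):
    """Fisher score per feature: between-class scatter over within-class scatter."""
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += len(Xc) * Xc.var(axis=0)
    return between / within

rng = np.random.default_rng(0)
# Synthetic two-feature data: feature 0 separates the classes, feature 1 is noise.
X = np.vstack([rng.normal([0, 0], 1, (50, 2)), rng.normal([3, 0], 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
scores = fisher_score(X, y)
ranking = np.argsort(scores)[::-1]  # feature indices, most discriminative first
```

Features with large between-class mean separation relative to their within-class spread rank first, which is why the epithelial thickness variation dominated in the study above.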
Affiliation(s)
- Ce Shi
- School of Ophthalmology and Optometry, Wenzhou Medical University, 270 Xueyuan Road, Wenzhou, Zhejiang, 325027 China
- Mengyi Wang
- School of Ophthalmology and Optometry, Wenzhou Medical University, 270 Xueyuan Road, Wenzhou, Zhejiang, 325027 China
- Tiantian Zhu
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, Zhejiang 12624 China
- Ying Zhang
- School of Ophthalmology and Optometry, Wenzhou Medical University, 270 Xueyuan Road, Wenzhou, Zhejiang, 325027 China
- Yufeng Ye
- School of Ophthalmology and Optometry, Wenzhou Medical University, 270 Xueyuan Road, Wenzhou, Zhejiang, 325027 China
- Jun Jiang
- School of Ophthalmology and Optometry, Wenzhou Medical University, 270 Xueyuan Road, Wenzhou, Zhejiang, 325027 China
- Sisi Chen
- School of Ophthalmology and Optometry, Wenzhou Medical University, 270 Xueyuan Road, Wenzhou, Zhejiang, 325027 China
- Fan Lu
- School of Ophthalmology and Optometry, Wenzhou Medical University, 270 Xueyuan Road, Wenzhou, Zhejiang, 325027 China
- Meixiao Shen
- School of Ophthalmology and Optometry, Wenzhou Medical University, 270 Xueyuan Road, Wenzhou, Zhejiang, 325027 China
44
Consejo A, Solarski J, Karnowski K, Rozema JJ, Wojtkowski M, Iskander DR. Keratoconus Detection Based on a Single Scheimpflug Image. Transl Vis Sci Technol 2020; 9:36. [PMID: 32832241 PMCID: PMC7414642 DOI: 10.1167/tvst.9.7.36] [Received: 02/21/2020] [Accepted: 05/10/2020] [Indexed: 02/06/2023]
Abstract
Purpose To introduce a new approach for keratoconus detection based on corneal microstructure observed in vivo, derived from a single Scheimpflug image. Methods Scheimpflug single-image snapshots from 25 control subjects and 25 keratoconus eyes were analyzed; from each group, five subjects were randomly selected to provide out-of-sample data. Each corneal image was segmented, after which the stromal pixel intensities were statistically modeled with a Weibull distribution. The estimated distribution parameters α and β, characterizing corneal microstructure, were used in combination with a macrostructure parameter, central corneal thickness (CCT), for the detection of keratoconus. In addition, receiver operating characteristic curves were used to determine the sensitivity and specificity of each parameter for keratoconus detection. Results Combining CCT (sensitivity = 88%; specificity = 84%) with the microscopic parameters extracted from statistical modeling of the light intensity distribution, α (sensitivity = 76%; specificity = 76%) and β (sensitivity = 96%; specificity = 88%), differentiated keratoconus from control eyes with no misclassifications (sensitivity = 100%; specificity = 100%) and coefficients of variation up to 2.5%. Conclusions The combination of microscopic and macroscopic corneal parameters extracted from a static Scheimpflug image is a promising, non-invasive tool to differentiate corneal diseases without the need for measurements based on induced deformation of the corneal structure. Translational Relevance The proposed methodology has the potential to support clinicians in the detection of keratoconus without compromising patient comfort.
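The Weibull-modelling step above can be illustrated with a small maximum-likelihood fit: bisect the standard MLE equation for the shape parameter, then recover the scale in closed form. This is a generic sketch on synthetic intensity data, not the authors' pipeline:

```python
import numpy as np

def fit_weibull(x, lo=0.05, hi=20.0, iters=60):
    """Maximum-likelihood fit of a two-parameter Weibull distribution.

    Bisects the standard MLE shape equation, then recovers the scale."""
    x = np.asarray(x, dtype=float)
    logx = np.log(x)

    def g(k):  # monotone increasing in k; its root is the MLE shape
        xk = x ** k
        return (xk * logx).sum() / xk.sum() - 1.0 / k - logx.mean()

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if g(mid) > 0 else (mid, hi)
    shape = 0.5 * (lo + hi)
    scale = (x ** shape).mean() ** (1.0 / shape)
    return shape, scale

rng = np.random.default_rng(1)
# Synthetic stand-in for segmented stromal pixel intensities.
sample = 3.0 * rng.weibull(2.0, 5000)  # true shape 2.0, true scale 3.0
k_hat, lam_hat = fit_weibull(sample)
```

On real data, `x` would be the segmented stromal pixel intensities, and the fitted (shape, scale) pair plays the role of the paper's (α, β) microstructure descriptors.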
Affiliation(s)
- Alejandra Consejo
- Institute of Physical Chemistry, Polish Academy of Sciences, Warsaw, Poland
- Jędrzej Solarski
- Institute of Physical Chemistry, Polish Academy of Sciences, Warsaw, Poland
- Karol Karnowski
- Institute of Physical Chemistry, Polish Academy of Sciences, Warsaw, Poland; School of Electrical, Electronic and Computer Engineering, The University of Western Australia, Perth, Australia
- Jos J Rozema
- Department of Ophthalmology, Antwerp University Hospital, Edegem, Belgium; Department of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Maciej Wojtkowski
- Institute of Physical Chemistry, Polish Academy of Sciences, Warsaw, Poland
- D Robert Iskander
- Department of Biomedical Engineering, Wroclaw University of Science and Technology, Wroclaw, Poland
45
Ting DSJ, Foo VH, Yang LWY, Sia JT, Ang M, Lin H, Chodosh J, Mehta JS, Ting DSW. Artificial intelligence for anterior segment diseases: Emerging applications in ophthalmology. Br J Ophthalmol 2020; 105:158-168. [PMID: 32532762 DOI: 10.1136/bjophthalmol-2019-315651] [Received: 12/02/2019] [Revised: 02/21/2020] [Accepted: 03/24/2020] [Indexed: 12/12/2022]
Abstract
With the advancement of computational power, refinement of learning algorithms and architectures, and availability of big data, artificial intelligence (AI) technology, particularly with machine learning and deep learning, is paving the way for 'intelligent' healthcare systems. AI-related research in ophthalmology previously focused on the screening and diagnosis of posterior segment diseases, particularly diabetic retinopathy, age-related macular degeneration and glaucoma. There is now emerging evidence demonstrating the application of AI to the diagnosis and management of a variety of anterior segment conditions. In this review, we provide an overview of AI applications to the anterior segment addressing keratoconus, infectious keratitis, refractive surgery, corneal transplant, adult and paediatric cataracts, angle-closure glaucoma and iris tumour, and highlight important clinical considerations for adoption of AI technologies, potential integration with telemedicine and future directions.
Affiliation(s)
- Darren Shu Jeng Ting
- Academic Ophthalmology, University of Nottingham, Nottingham, UK; Department of Ophthalmology, Queen's Medical Centre, Nottingham, UK; Singapore Eye Research Institute, Singapore
- Josh Tjunrong Sia
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Marcus Ang
- Singapore Eye Research Institute, Singapore; Cornea and External Disease, Singapore National Eye Centre, Singapore
- Haotian Lin
- Sun Yat-Sen University Zhongshan Ophthalmic Center, Guangzhou, China
- James Chodosh
- Ophthalmology, Massachusetts Eye and Ear Infirmary, Howe Laboratory, Harvard Medical School, Boston, Massachusetts, USA
- Jodhbir S Mehta
- Singapore Eye Research Institute, Singapore; Cornea and External Disease, Singapore National Eye Centre, Singapore
- Daniel Shu Wei Ting
- Singapore Eye Research Institute, Singapore; Vitreo-retinal Department, Singapore National Eye Center, Singapore
46
Issarti I, Consejo A, Jiménez-García M, Kreps EO, Koppen C, Rozema JJ. Logistic index for keratoconus detection and severity scoring (Logik). Comput Biol Med 2020; 122:103809. [PMID: 32658727 DOI: 10.1016/j.compbiomed.2020.103809] [Received: 03/04/2020] [Revised: 05/03/2020] [Accepted: 05/03/2020] [Indexed: 12/26/2022]
Abstract
PURPOSE To develop an objective severity scoring system for keratoconus for use in clinical practice. METHODS Corneal elevation and minimum thickness data of 812 subjects were retrospectively collected and divided into two groups: a control group with normal topography in both eyes (304 eyes) and a keratoconus group (508 eyes). Keratoconus cases ranged from suspect to moderate and had at least one examination in one of the two recruiting centres. The elevation data were fitted with Zernike polynomials up to the 8th order, and an adapted machine learning algorithm was then applied to derive a platform-independent severity scoring and identification system for keratoconus. RESULTS The resulting logistic index for keratoconus (Logik) provided consistent, progressive scoring that reflected keratoconus severity. Moreover, the system provided an accurate classification of suspect keratoconus versus normal (sensitivity 85.2%, specificity 70.0%) compared with the Belin/Ambrósio Display Deviation (BAD_D; sensitivity 75.0%, specificity 74.4%) and the Pentacam Topographical Keratoconus Classification (TKC; sensitivity 9.3%, specificity 97.0%). Logik also showed better accuracy for grading keratoconus stages, with an average accuracy of 99.9% versus 98.2% and 94.7% for BAD_D and TKC, respectively. CONCLUSION Logik is a reliable index to identify suspect keratoconus and to score the severity of the disease. It agrees with existing approaches while achieving better performance.
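The core idea of a logistic severity index, a linear model over shape coefficients squashed to a continuous 0-1 score, can be sketched with plain gradient-descent logistic regression. The data below are synthetic stand-ins rather than Zernike fits, and this is an illustration of the general technique, not the Logik index itself:

```python
import numpy as np

def train_logistic(X, y, lr=0.1, epochs=500):
    """Plain gradient-descent logistic regression; returns weights and bias."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * (p - y).mean()
    return w, b

rng = np.random.default_rng(2)
# Stand-ins for per-eye shape coefficients: the "diseased" class is shifted.
X = np.vstack([rng.normal(0, 1, (80, 2)), rng.normal(2, 1, (80, 2))])
y = np.array([0.0] * 80 + [1.0] * 80)
w, b = train_logistic(X, y)
score = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # continuous 0-1 severity-style index
acc = ((score > 0.5) == y).mean()
```

The continuous `score` is what makes a logistic formulation attractive for staging: thresholds can be placed along it to define severity grades rather than a single binary decision.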
Affiliation(s)
- Ikram Issarti
- Department of Ophthalmology, Antwerp University Hospital (UZA), Edegem, Belgium; Department of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Alejandra Consejo
- Institute of Physical Chemistry, Polish Academy of Sciences, Warsaw, Poland
- Marta Jiménez-García
- Department of Ophthalmology, Antwerp University Hospital (UZA), Edegem, Belgium; Department of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Elke O Kreps
- Department of Ophthalmology, Ghent University Hospital, Ghent, Belgium; Faculty of Medicine and Health Sciences, Ghent University, Ghent, Belgium
- Carina Koppen
- Department of Ophthalmology, Antwerp University Hospital (UZA), Edegem, Belgium; Department of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Jos J Rozema
- Department of Ophthalmology, Antwerp University Hospital (UZA), Edegem, Belgium; Department of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
47
Cao K, Verspoor K, Sahebjada S, Baird PN. Evaluating the Performance of Various Machine Learning Algorithms to Detect Subclinical Keratoconus. Transl Vis Sci Technol 2020; 9:24. [PMID: 32818085 PMCID: PMC7396174 DOI: 10.1167/tvst.9.2.24] [Received: 09/30/2019] [Accepted: 02/05/2020] [Indexed: 12/26/2022]
Abstract
Purpose Keratoconus (KC) represents one of the leading causes of corneal transplantation worldwide. Detecting subclinical KC would lead to better management and avoid the need for corneal grafts, but the condition is clinically challenging to diagnose. We compared eight commonly used machine learning algorithms across a range of parameter combinations on our KC dataset and built models to better differentiate subclinical KC from non-KC eyes. Methods The Oculus Pentacam was used to obtain corneal parameters on 49 subclinical KC and 39 control eyes, along with clinical and demographic parameters. Eight machine learning methods were applied to build models differentiating subclinical KC from control eyes. The best-performing algorithms were trained with all combinations of the considered parameters to select important parameter combinations, and the performance of each model was evaluated and compared. Results Using a total of eleven parameters, random forest, support vector machine and k-nearest neighbours performed best in detecting subclinical KC. The highest area under the curve for detecting subclinical KC (0.97) was achieved with five parameters by the random forest method. The highest sensitivity (0.94) and specificity (0.90) were obtained by the support vector machine and the k-nearest neighbour model, respectively. Conclusions This study showed that machine learning algorithms can identify subclinical KC using a minimal parameter set that is routinely collected during clinical eye examination. Translational Relevance Machine learning models built from routinely collected clinical parameters can assist in the objective detection of subclinical KC.
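Comparisons like the one above are usually scored with k-fold cross-validation. As a self-contained sketch of one of the reported methods, here is a plain NumPy k-nearest-neighbour classifier evaluated by 5-fold cross-validation; the data, fold scheme and k are illustrative only, not the study's protocol:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """Majority-vote k-nearest-neighbour prediction (Euclidean distance)."""
    d = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    nearest = np.argsort(d, axis=1)[:, :k]
    return (y_train[nearest].mean(axis=1) > 0.5).astype(int)

def cv_accuracy(X, y, folds=5, k=5):
    """Mean accuracy over interleaved k-fold cross-validation splits."""
    idx = np.arange(len(y))
    accs = []
    for f in range(folds):
        test = idx % folds == f
        pred = knn_predict(X[~test], y[~test], X[test], k=k)
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))

rng = np.random.default_rng(3)
# Synthetic "subclinical vs control" features with a modest class separation.
X = np.vstack([rng.normal(0, 1, (60, 4)), rng.normal(1.5, 1, (60, 4))])
y = np.array([0] * 60 + [1] * 60)
acc = cv_accuracy(X, y)
```

Swapping `knn_predict` for other classifiers while keeping `cv_accuracy` fixed is the basic pattern behind this kind of algorithm comparison.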
Affiliation(s)
- Ke Cao
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, Victoria, Australia; Department of Surgery, Ophthalmology, The University of Melbourne, Melbourne, Victoria, Australia
- Karin Verspoor
- Department of Computing and Information Systems, The University of Melbourne, Melbourne, Victoria, Australia
- Srujana Sahebjada
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, Victoria, Australia; Department of Surgery, Ophthalmology, The University of Melbourne, Melbourne, Victoria, Australia
- Paul N Baird
- Department of Surgery, Ophthalmology, The University of Melbourne, Melbourne, Victoria, Australia
48
Robust keratoconus detection with Bayesian network classifier for Placido-based corneal indices. Cont Lens Anterior Eye 2019; 43:366-372. [PMID: 31866403 DOI: 10.1016/j.clae.2019.12.006] [Received: 06/29/2019] [Revised: 12/03/2019] [Accepted: 12/06/2019] [Indexed: 02/05/2023]
Abstract
PURPOSE To evaluate, in a sample of normal and keratoconic eyes, a simple Bayesian network classifier for keratoconus identification that uses previously developed topographic indices calculated directly from digital analysis of the Placido ring images. METHODS A comparative study was performed on a total of 60 eyes from 60 patients (age 20-60 years) from the keratoconus department of the INVISION Ophthalmology clinic (Almería, Spain). Patients were divided into two groups depending on their preliminary diagnosis based on classical topographic criteria: a control group without topographic alteration (30 eyes) and a keratoconus group (30 eyes). The keratoconus group included all grades except grade IV, which has excessively distorted corneal topography. All cases were examined using the CSO topography system (CSO, Firenze, Italy), and primary corneal Placido indices were computed as described in the literature. Finally, a classifier was built by fitting a conditional linear Gaussian Bayesian network to the data, using 5- and 10-fold cross-validation. For comparison, the original data were perturbed with random white noise of different magnitudes. RESULTS The naïve Bayes classifier showed perfect discrimination between normal and keratoconic corneas, with 100% sensitivity and specificity, even in the presence of very significant noise. CONCLUSIONS Bayesian network classifiers are highly accurate and proved to be a stable screening method to assist ophthalmologists with the detection of keratoconus, even in the presence of noise or incomplete data. The algorithm is easily implemented for any Placido topographic system.
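As a simplified stand-in for the conditional linear Gaussian Bayesian network (reduced here to Gaussian naive Bayes, the empty-graph special case), the sketch below fits per-class Gaussians to synthetic indices and repeats the paper's white-noise robustness check. Data, separations and noise level are all illustrative assumptions:

```python
import numpy as np

def gnb_fit(X, y):
    """Per-class feature means, variances and priors for Gaussian naive Bayes."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(y))
    return params

def gnb_predict(params, X):
    """Class with the highest Gaussian log-likelihood plus log-prior."""
    scores = []
    for mu, var, prior in params.values():
        logp = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var).sum(axis=1)
        scores.append(logp + np.log(prior))
    classes = np.array(list(params))
    return classes[np.argmax(scores, axis=0)]

rng = np.random.default_rng(4)
# Synthetic Placido-style indices: 30 normal vs 30 keratoconic eyes, well separated.
X = np.vstack([rng.normal(0, 1, (30, 3)), rng.normal(4, 1, (30, 3))])
y = np.array([0] * 30 + [1] * 30)
params = gnb_fit(X, y)
clean_acc = (gnb_predict(params, X) == y).mean()
noisy_acc = (gnb_predict(params, X + rng.normal(0, 0.5, X.shape)) == y).mean()
```

Because the decision depends on log-likelihood ratios rather than exact index values, moderate additive noise barely moves well-separated cases across the boundary, which is the intuition behind the robustness result above.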
49
Chen Z, Pang M, Zhao Z, Li S, Miao R, Zhang Y, Feng X, Feng X, Zhang Y, Duan M, Huang L, Zhou F. Feature selection may improve deep neural networks for the bioinformatics problems. Bioinformatics 2019; 36:1542-1552. [DOI: 10.1093/bioinformatics/btz763] [Received: 05/24/2019] [Revised: 09/03/2019] [Accepted: 10/02/2019] [Indexed: 12/22/2022]
Abstract
Motivation
Deep neural network (DNN) algorithms have recently been used to predict various biomedical phenotypes and have demonstrated very good prediction performance without feature selection. This study proposed the hypothesis that DNN models may be further improved by feature selection algorithms.
Results
A comprehensive comparative study was carried out by evaluating 11 feature selection algorithms on three conventional DNN algorithms, i.e., convolutional neural network (CNN), deep belief network (DBN) and recurrent neural network (RNN), and three recent DNNs, i.e., MobileNetV2, ShuffleNetV2 and SqueezeNet. Five binary classification methylomic datasets were chosen to calculate the prediction performance of CNN/DBN/RNN models using features selected by the 11 feature selection algorithms. Seventeen binary classification transcriptome datasets and two multi-class transcriptome datasets were also used to evaluate how the hypothesis generalizes to different data types. The experimental data supported our hypothesis that feature selection algorithms may improve DNN models, and the DBN models using features selected by SVM-RFE usually achieved the best prediction accuracies on the five methylomic datasets.
Availability and implementation
All the algorithms were implemented and tested under the programming environment Python version 3.6.6.
Supplementary information
Supplementary data are available at Bioinformatics online.
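SVM-RFE, the best-performing selector above, ranks features by the magnitude of a linear model's weights and recursively drops the weakest. The sketch below substitutes a plain logistic model for the linear SVM (an assumption made to keep the example dependency-free) and runs on synthetic data where only feature 0 is informative:

```python
import numpy as np

def linear_weights(X, y, lr=0.1, epochs=300):
    """Weights of a bias-free logistic model (stand-in for the linear SVM in SVM-RFE)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * (X.T @ (p - y)) / len(y)
    return w

def rfe(X, y, n_keep):
    """Recursive feature elimination: refit, drop the smallest-|weight| feature, repeat."""
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        w = linear_weights(X[:, keep], y)
        keep.pop(int(np.argmin(np.abs(w))))
    return keep

rng = np.random.default_rng(5)
y = np.array([0.0] * 100 + [1.0] * 100)
informative = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
X = np.column_stack([informative, rng.normal(0, 1, (200, 4))])  # feature 0 is informative
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize so |weights| are comparable
selected = rfe(X, y, n_keep=1)
```

The key design point, refitting after every elimination rather than ranking once, lets the model re-weigh correlated features as competitors are removed.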
Affiliation(s)
- Zheng Chen
- Meng Pang
- Zixin Zhao
- Shuainan Li
- Rui Miao
- Yifan Zhang
- Xiaoyue Feng
- Xin Feng
- Yexian Zhang
- Meiyu Duan
- Lan Huang
- Fengfeng Zhou
- All authors: BioKnow Health Informatics Lab, College of Computer Science and Technology, and Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, China
50
Kamiya K, Ayatsuka Y, Kato Y, Fujimura F, Takahashi M, Shoji N, Mori Y, Miyata K. Keratoconus detection using deep learning of colour-coded maps with anterior segment optical coherence tomography: a diagnostic accuracy study. BMJ Open 2019; 9:e031313. [PMID: 31562158 PMCID: PMC6773416 DOI: 10.1136/bmjopen-2019-031313] [Indexed: 12/26/2022]
Abstract
OBJECTIVE To evaluate the diagnostic accuracy of keratoconus using deep learning of the colour-coded maps measured with the swept-source anterior segment optical coherence tomography (AS-OCT). DESIGN A diagnostic accuracy study. SETTING A single-centre study. PARTICIPANTS A total of 304 keratoconic eyes (grade 1 (108 eyes), 2 (75 eyes), 3 (42 eyes) and 4 (79 eyes)) according to the Amsler-Krumeich classification, and 239 age-matched healthy eyes. MAIN OUTCOME MEASURES The diagnostic accuracy of keratoconus using deep learning of six colour-coded maps (anterior elevation, anterior curvature, posterior elevation, posterior curvature, total refractive power and pachymetry map). RESULTS Deep learning of the arithmetical mean output data of these six maps showed an accuracy of 0.991 in discriminating between normal and keratoconic eyes. For single map analysis, posterior elevation map (0.993) showed the highest accuracy, followed by posterior curvature map (0.991), anterior elevation map (0.983), corneal pachymetry map (0.982), total refractive power map (0.978) and anterior curvature map (0.976), in discriminating between normal and keratoconic eyes. This deep learning also showed an accuracy of 0.874 in classifying the stage of the disease. Posterior curvature map (0.869) showed the highest accuracy, followed by corneal pachymetry map (0.845), anterior curvature map (0.836), total refractive power map (0.836), posterior elevation map (0.829) and anterior elevation map (0.820), in classifying the stage. CONCLUSIONS Deep learning using the colour-coded maps obtained by the AS-OCT effectively discriminates keratoconus from normal corneas, and furthermore classifies the grade of the disease. It is suggested that this will become an aid for improving the diagnostic accuracy of keratoconus in daily practice. CLINICAL TRIAL REGISTRATION NUMBER 000034587.
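The "arithmetical mean output" of the six per-map networks amounts to soft voting: average the per-map class probabilities, then take the argmax. A minimal NumPy sketch with synthetic stand-ins for the network outputs (real values would come from the six trained CNNs):

```python
import numpy as np

rng = np.random.default_rng(6)
# Synthetic stand-ins for the softmax outputs of six per-map networks on 5 eyes,
# two classes (normal vs keratoconus).
per_map = rng.dirichlet([1.0, 1.0], size=(6, 5))  # shape (maps, eyes, classes)
ensemble = per_map.mean(axis=0)                    # arithmetic mean across maps
pred = ensemble.argmax(axis=1)                     # final class per eye
```

Averaging probabilities rather than hard votes preserves each map's confidence, so a strongly abnormal posterior elevation map can outweigh several weakly normal maps.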
Affiliation(s)
- Kazutaka Kamiya
- Visual Physiology, School of Allied Health Sciences, Kitasato University, Sagamihara, Japan
- Yudai Kato
- Cresco Ltd, Technology Laboratory, Tokyo, Japan
- Fusako Fujimura
- Visual Physiology, School of Allied Health Sciences, Kitasato University, Sagamihara, Japan
- Masahide Takahashi
- Department of Ophthalmology, School of Medicine, Kitasato University, Sagamihara, Japan
- Nobuyuki Shoji
- Department of Ophthalmology, School of Medicine, Kitasato University, Sagamihara, Japan
- Yosai Mori
- Miyata Eye Hospital, Department of Ophthalmology, Miyakonojo, Japan
- Kazunori Miyata
- Miyata Eye Hospital, Department of Ophthalmology, Miyakonojo, Japan