1. Coca-Serrano R, Sánchez-Tena MA, Álvarez-Peregrina C, Martínez-Pérez C, Moriche-Carretero M. [Bibliometric study and analysis of citation networks of visual screening in primary care]. Semergen 2024; 50:102225. [PMID: 38603945 DOI: 10.1016/j.semerg.2024.102225]
Abstract
AIM Screening makes it possible to detect anomalies that can be treated and to identify patients who require referral to a specialist. The objective was to identify the different areas of research and determine the most cited publications on screening in primary care. METHODS An analysis of publications and visualization of citation networks was carried out using the Citation Network Explorer software. The bibliographic search was carried out in the Web of Science (WOS) database using the search term: "screening AND (vision OR eye OR ocular OR visual)". RESULTS We analyzed 16,707 publications across all fields, among which 23,919 citation links were found. The number of publications has increased over time, with 2021 the year with the highest output. Most are scientific articles, and the predominant language is English. The most cited article is a global meta-analysis on the prevalence of glaucoma, underscoring the importance of screening for its early detection, which is essential to avoid blindness. Using the clustering function, we found eight groups with a significant number of publications, each covering particular eye diseases: glaucoma, diabetic retinopathy, pediatric amblyopia, keratoconus, and dry eye. CONCLUSIONS The main areas of study in relation to screening are the detection of diseases such as glaucoma, retinopathy of prematurity, keratoconus, and dry eye, as well as the detection, through visual analysis, of childhood amblyopia and of vision loss in elderly patients. The literature also emphasizes the importance of ocular motility testing in cases of acquired brain injury.
Affiliation(s)
- M A Sánchez-Tena
  - Departamento de Optometría y Visión, Universidad Complutense de Madrid, Madrid, España; ISEC LISBOA - Instituto Superior de Educação e Ciências, Lisboa, Portugal
- C Álvarez-Peregrina
  - Departamento de Optometría y Visión, Universidad Complutense de Madrid, Madrid, España
- C Martínez-Pérez
  - ISEC LISBOA - Instituto Superior de Educação e Ciências, Lisboa, Portugal
2. Bowd C, Belghith A, Rezapour J, Christopher M, Jonas JB, Hyman L, Fazio MA, Weinreb RN, Zangwill LM. Multimodal Deep Learning Classifier for Primary Open Angle Glaucoma Diagnosis Using Wide-Field Optic Nerve Head Cube Scans in Eyes With and Without High Myopia. J Glaucoma 2023; 32:841-847. [PMID: 37523623 DOI: 10.1097/ijg.0000000000002267]
Abstract
PRCIS An optical coherence tomography (OCT)-based multimodal deep learning (DL) classification model incorporating texture information is introduced that outperforms single-modal models, and multimodal models without texture information, for glaucoma diagnosis in eyes with and without high myopia. BACKGROUND/AIMS To evaluate the diagnostic accuracy of a multimodal DL classifier using wide OCT optic nerve head cube scans in eyes with and without axial high myopia. MATERIALS AND METHODS Three hundred seventy-one primary open angle glaucoma (POAG) eyes and 86 healthy eyes, all without axial high myopia [axial length (AL) ≤ 26 mm], and 92 POAG eyes and 44 healthy eyes, all with axial high myopia (AL > 26 mm), were included. The multimodal DL classifier combined features of 3 individual VGG-16 models: (1) a texture-based en face image, (2) a retinal nerve fiber layer (RNFL) thickness map image, and (3) a confocal scanning laser ophthalmoscope (cSLO) image. Age-, AL-, and disc area-adjusted areas under the receiver operating characteristic curve were used to compare model accuracy. RESULTS The adjusted area under the receiver operating characteristic curve for the multimodal DL model was 0.91 (95% CI = 0.87, 0.95). This value was significantly higher than those of the individual models [0.83 (0.79, 0.86) for the texture-based en face image; 0.84 (0.81, 0.87) for the RNFL thickness map; and 0.68 (0.61, 0.74) for the cSLO image; all P ≤ 0.05]. Using only highly myopic eyes, the multimodal DL model showed significantly higher diagnostic accuracy [0.89 (0.86, 0.92)] than the texture en face image [0.83 (0.78, 0.85)], RNFL [0.85 (0.81, 0.86)], and cSLO image models [0.69 (0.63, 0.76)] (all P ≤ 0.05). CONCLUSIONS Combining OCT-based RNFL thickness maps with texture-based en face images discriminated between healthy and POAG eyes better than thickness maps alone, particularly in eyes with high axial myopia.
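The late-fusion idea behind this multimodal classifier — extract a feature vector per imaging modality, concatenate, then classify — can be sketched in a few lines. This is a minimal NumPy illustration of the data flow only, not the authors' implementation: `extract_features` is a random-projection stand-in for the per-modality VGG-16 backbones, and all weights are random.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(images, dim=64):
    # Stand-in for a per-modality VGG-16 backbone: maps a batch of images
    # to fixed-length feature vectors via a fixed random projection,
    # purely so the sketch runs end to end.
    flat = images.reshape(images.shape[0], -1)
    w = rng.standard_normal((flat.shape[1], dim)) / np.sqrt(flat.shape[1])
    return np.maximum(flat @ w, 0.0)  # ReLU-style nonlinearity

def multimodal_poag_probability(enface, rnfl, cslo):
    # Late fusion: concatenate the three per-modality feature vectors,
    # then apply a single (here random) linear classification head.
    feats = np.concatenate(
        [extract_features(enface), extract_features(rnfl), extract_features(cslo)],
        axis=1,
    )
    head = rng.standard_normal(feats.shape[1]) / np.sqrt(feats.shape[1])
    logits = feats @ head
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid -> POAG probability

batch = rng.standard_normal((4, 32, 32))  # 4 toy "scans" per modality
probs = multimodal_poag_probability(batch, batch, batch)
print(probs.shape)  # prints (4,)
```

In the study, each VGG-16 stream is trained on its own modality and the fused head is what gains from combining texture with thickness information.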
Affiliation(s)
- Christopher Bowd
  - Viterbi Family Department of Ophthalmology, Shiley Eye Institute, Hamilton Glaucoma Center
- Akram Belghith
  - Viterbi Family Department of Ophthalmology, Shiley Eye Institute, Hamilton Glaucoma Center
- Jasmin Rezapour
  - Viterbi Family Department of Ophthalmology, Shiley Eye Institute, Hamilton Glaucoma Center
  - Department of Ophthalmology, University Medical Center of the Johannes Gutenberg University Mainz
- Mark Christopher
  - Viterbi Family Department of Ophthalmology, Shiley Eye Institute, Hamilton Glaucoma Center
- Jost B Jonas
  - Department of Ophthalmology, Heidelberg University, Mannheim, Germany
- Leslie Hyman
  - Vickie and Jack Farber Vision Research Center, Wills Eye Hospital, Thomas Jefferson University, Philadelphia, PA
- Massimo A Fazio
  - Department of Ophthalmology and Visual Sciences, The University of Alabama at Birmingham, Birmingham, AL
- Robert N Weinreb
  - Viterbi Family Department of Ophthalmology, Shiley Eye Institute, Hamilton Glaucoma Center
- Linda M Zangwill
  - Viterbi Family Department of Ophthalmology, Shiley Eye Institute, Hamilton Glaucoma Center
3. Chen B, Fang XW, Wu MN, Zhu SJ, Zheng B, Liu BQ, Wu T, Hong XQ, Wang JT, Yang WH. Artificial intelligence assisted pterygium diagnosis: current status and perspectives. Int J Ophthalmol 2023; 16:1386-1394. [PMID: 37724272 PMCID: PMC10475638 DOI: 10.18240/ijo.2023.09.04]
Abstract
Pterygium is a prevalent ocular disease that can cause discomfort and vision impairment. Early and accurate diagnosis is essential for effective management. Recently, artificial intelligence (AI) has shown promising potential in assisting clinicians with pterygium diagnosis. This paper provides an overview of AI-assisted pterygium diagnosis, including the AI techniques used, such as machine learning, deep learning, and computer vision. Recent studies that evaluated the diagnostic performance of AI-based systems for pterygium detection, classification, and segmentation are summarized. The advantages and limitations of AI-assisted pterygium diagnosis are analyzed, and potential future developments in this field are discussed. The review aims to provide insights into the current state of the art of AI and its potential applications in pterygium diagnosis, which may facilitate the development of more efficient and accurate diagnostic tools for this common ocular disease.
Affiliation(s)
- Bang Chen
  - School of Information Engineering, Huzhou University, Huzhou 313000, Zhejiang Province, China
  - Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou 313000, Zhejiang Province, China
- Xin-Wen Fang
  - School of Information Engineering, Huzhou University, Huzhou 313000, Zhejiang Province, China
  - Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou 313000, Zhejiang Province, China
- Mao-Nian Wu
  - School of Information Engineering, Huzhou University, Huzhou 313000, Zhejiang Province, China
  - Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou 313000, Zhejiang Province, China
- Shao-Jun Zhu
  - School of Information Engineering, Huzhou University, Huzhou 313000, Zhejiang Province, China
  - Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou 313000, Zhejiang Province, China
- Bo Zheng
  - School of Information Engineering, Huzhou University, Huzhou 313000, Zhejiang Province, China
  - Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou 313000, Zhejiang Province, China
- Bang-Quan Liu
  - College of Digital Technology and Engineering, Ningbo University of Finance & Economics, Ningbo 315000, Zhejiang Province, China
- Tao Wu
  - Huzhou Institute, Zhejiang University of Technology, Huzhou 313000, Zhejiang Province, China
- Xiang-Qian Hong
  - Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen 518040, Guangdong Province, China
- Jian-Tao Wang
  - Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen 518040, Guangdong Province, China
- Wei-Hua Yang
  - Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen 518040, Guangdong Province, China
4. Islam MT, Khan HA, Naveed K, Nauman A, Gulfam SM, Kim SW. LUVS-Net: A Lightweight U-Net Vessel Segmentor for Retinal Vasculature Detection in Fundus Images. Electronics 2023; 12:1786. [DOI: 10.3390/electronics12081786]
Abstract
This paper presents LUVS-Net, a lightweight convolutional network for retinal vessel segmentation in fundus images, designed for resource-constrained devices that typically cannot meet the computational requirements of large neural networks. The computational challenges arise from low-quality retinal images, wide variance in image acquisition conditions, and disparities in intensity; consequently, existing segmentation methods require a multitude of trainable parameters, resulting in high computational complexity. The proposed Lightweight U-Net for Vessel Segmentation Network (LUVS-Net) achieves high segmentation performance with only a few trainable parameters. The network uses an encoder–decoder framework in which edge data are transposed from the first layers of the encoder to the last layer of the decoder, greatly improving convergence. Additionally, LUVS-Net’s design allows a dual-stream information flow both inside and outside of the encoder–decoder pair. The network width is enhanced using group convolutions, which allow the network to learn a larger number of low- and intermediate-level features. Spatial information loss is minimized using skip connections, and class imbalance is mitigated using Dice loss for pixel-wise classification. The performance of the proposed network is evaluated on the publicly available retinal blood vessel datasets DRIVE, CHASE_DB1 and STARE. LUVS-Net proves highly competitive, outperforming alternative state-of-the-art segmentation methods and achieving comparable accuracy with two to three orders of magnitude fewer trainable parameters than comparable state-of-the-art methods.
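The Dice loss mentioned above is the standard remedy for the vessel/background class imbalance in fundus images, since it scores overlap rather than per-pixel correctness. A minimal NumPy sketch of the usual soft Dice formulation (not LUVS-Net's exact training code):

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss for pixel-wise vessel segmentation.
    # pred:   predicted vessel probabilities in [0, 1]
    # target: binary ground-truth vessel mask
    # Dice overlap ignores the dominant background class, so thin vessels
    # still contribute strongly to the loss.
    p, t = pred.ravel(), target.ravel()
    intersection = np.sum(p * t)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(p) + np.sum(t) + eps)

# Perfect prediction -> loss ~ 0; fully disjoint prediction -> loss ~ 1.
mask = np.zeros((8, 8))
mask[2:4, :] = 1.0
print(round(soft_dice_loss(mask, mask), 6))        # prints 0.0
print(round(soft_dice_loss(1.0 - mask, mask), 6))  # prints 1.0
```

In training, `pred` would be the sigmoid output of the decoder and the loss would be minimized per batch alongside backpropagation.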
Affiliation(s)
- Muhammad Talha Islam
  - Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad 45550, Pakistan
- Haroon Ahmed Khan
  - Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad 45550, Pakistan
- Khuram Naveed
  - Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad 45550, Pakistan
  - Department of Electrical and Computer Engineering, Aarhus University, 8000 Aarhus, Denmark
- Ali Nauman
  - Department of Information and Communication Engineering, Yeungnam University, Gyeongsan-si 38541, Republic of Korea
- Sardar Muhammad Gulfam
  - Department of Electrical and Computer Engineering, Abbottabad Campus, COMSATS University Islamabad (CUI), Abbottabad 22060, Pakistan
- Sung Won Kim
  - Department of Information and Communication Engineering, Yeungnam University, Gyeongsan-si 38541, Republic of Korea
5. Chalkidou A, Shokraneh F, Kijauskaite G, Taylor-Phillips S, Halligan S, Wilkinson L, Glocker B, Garrett P, Denniston AK, Mackie A, Seedat F. Recommendations for the development and use of imaging test sets to investigate the test performance of artificial intelligence in health screening. Lancet Digit Health 2022; 4:e899-e905. [PMID: 36427951 DOI: 10.1016/s2589-7500(22)00186-8]
Abstract
Rigorous evaluation of artificial intelligence (AI) systems for image classification is essential before deployment into health-care settings, such as screening programmes, so that adoption is effective and safe. A key step in the evaluation process is the external validation of diagnostic performance using a test set of images. We conducted a rapid literature review on methods to develop test sets, published from 2012 to 2020, in English. Using thematic analysis, we mapped themes and coded the principles using the Population, Intervention, and Comparator or Reference standard, Outcome, and Study design framework. A group of screening and AI experts assessed the evidence-based principles for completeness and provided further considerations. Of the final 15 principles recommended here, five concern the population, one the intervention, two the comparator, one the reference standard, and one both the reference standard and the comparator; four are applicable to the outcome and one to the study design. Principles from the literature were useful for addressing biases from AI; however, they did not account for screening-specific biases, which we now incorporate. The principles set out here should be used to support the development and use of test sets for studies that assess the accuracy of AI within screening programmes, to ensure they are fit for purpose and minimise bias.
Affiliation(s)
- Farhad Shokraneh
  - King's Technology Evaluation Centre, King's College London, London, UK
- Goda Kijauskaite
  - UK National Screening Committee, Office for Health Improvement and Disparities, Department of Health and Social Care, London, UK
- Steve Halligan
  - Centre for Medical Imaging, Division of Medicine, University College London, London, UK
- Ben Glocker
  - Department of Computing, Imperial College London, London, UK
- Peter Garrett
  - Department of Chemical Engineering and Analytical Science, University of Manchester, Manchester, UK
- Alastair K Denniston
  - Department of Ophthalmology, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- Anne Mackie
  - UK National Screening Committee, Office for Health Improvement and Disparities, Department of Health and Social Care, London, UK
- Farah Seedat
  - UK National Screening Committee, Office for Health Improvement and Disparities, Department of Health and Social Care, London, UK
6. Cheung R, So S, Malvankar-Mehta MS. Diagnostic accuracy of machine learning classifiers for cataracts: a systematic review and meta-analysis. Expert Review of Ophthalmology 2022. [DOI: 10.1080/17469899.2022.2142120]
Affiliation(s)
- Ronald Cheung
  - Department of Epidemiology and Biostatistics, Schulich School of Medicine and Dentistry, The University of Western Ontario, London, ON, Canada
- Samantha So
  - Department of Epidemiology and Biostatistics, Schulich School of Medicine and Dentistry, The University of Western Ontario, London, ON, Canada
- Monali S. Malvankar-Mehta
  - Department of Epidemiology and Biostatistics, Schulich School of Medicine and Dentistry, The University of Western Ontario, London, ON, Canada
  - Department of Ophthalmology, Schulich School of Medicine and Dentistry, The University of Western Ontario, London, ON, Canada
7. Balasubramanian K, Ramya K, Gayathri Devi K. Improved swarm optimization of deep features for glaucoma classification using SEGSO and VGGNet. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103845]
8. Balasubramanian K, N.P. A. Correlation-based feature selection using bio-inspired algorithms and optimized KELM classifier for glaucoma diagnosis. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109432]
9. Corneal Hysteresis, Intraocular Pressure, and Progression of Glaucoma: Time for a “Hyst-Oric” Change in Clinical Practice? J Clin Med 2022; 11:jcm11102895. [PMID: 35629021 PMCID: PMC9148097 DOI: 10.3390/jcm11102895]
Abstract
It is known that as people age their tissues become less compliant, and the ocular structures are no different. Corneal hysteresis (CH) is a surrogate marker for ocular compliance. Low hysteresis values are associated with optic nerve damage and visual field loss, the structural and functional components of glaucomatous optic neuropathy. Presently, a range of parameters are measured to monitor and stratify glaucoma, including intraocular pressure (IOP), central corneal thickness (CCT), optical coherence tomography (OCT) scans of the retinal nerve fibre layer (RNFL) and the ganglion cell layer (GCL), and subjective measurements such as visual fields. The purpose of this review is to summarise the current evidence that CH values are a risk factor for the development of glaucoma and a marker for its progression. The authors explain what precisely CH is, how it can be measured, and the influence that medication and surgery can have on its value. CH is likely to play an integral role in glaucoma care and could potentially be incorporated synergistically with IOP, CCT, and visual field testing to establish risk stratification modelling and progression algorithms in glaucoma management in the future.
10. Diagnostic accuracy of current machine learning classifiers for age-related macular degeneration: a systematic review and meta-analysis. Eye (Lond) 2022; 36:994-1004. [PMID: 33958739 PMCID: PMC9046206 DOI: 10.1038/s41433-021-01540-y]
Abstract
BACKGROUND AND OBJECTIVE The objective of this study was to systematically review and meta-analyze the diagnostic accuracy of current machine learning classifiers for age-related macular degeneration (AMD). Artificial intelligence diagnostic algorithms can automatically detect and diagnose AMD after training on large sets of fundus or OCT images, offering a cost-effective, simple, and fast route to diagnosis. METHODS MEDLINE, EMBASE, CINAHL, and ProQuest Dissertations and Theses were searched systematically. Conference proceedings of the Association for Research in Vision and Ophthalmology, the American Academy of Ophthalmology, and the Canadian Society of Ophthalmology were also searched. Studies were screened using Covidence software, and data on sensitivity, specificity, and area under the curve were extracted from the included studies. STATA 15.0 was used to conduct the meta-analysis. RESULTS Our search strategy identified 307 records from online databases and 174 records from gray literature. A total of 13 records, covering 64,798 subjects (612,429 images), were used for the quantitative analysis. The pooled estimate for sensitivity was 0.918 [95% CI: 0.678, 0.98] and for specificity 0.888 [95% CI: 0.578, 0.98] for AMD screening using machine learning classifiers. The odds of a positive screen test were 89.74 [95% CI: 3.05-2641.59] times higher in AMD cases than in non-AMD cases (diagnostic odds ratio). The positive likelihood ratio was 8.22 [95% CI: 1.52-44.48] and the negative likelihood ratio was 0.09 [95% CI: 0.02-0.52]. CONCLUSION The included studies show promising results for the diagnostic accuracy of machine learning classifiers for AMD and their implementation in clinical settings.
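The likelihood ratios above follow directly from the pooled sensitivity and specificity; recomputing them from the rounded pooled estimates (0.918, 0.888) reproduces the reported values to within rounding of the inputs:

```python
def likelihood_ratios(sensitivity, specificity):
    # LR+ = sens / (1 - spec): how much a positive test raises the odds of disease.
    # LR- = (1 - sens) / spec: how much a negative test lowers them.
    # Their ratio LR+/LR- is the diagnostic odds ratio (DOR).
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg, lr_pos / lr_neg

# Pooled estimates reported above: sensitivity 0.918, specificity 0.888.
lr_pos, lr_neg, dor = likelihood_ratios(0.918, 0.888)
print(round(lr_pos, 2), round(lr_neg, 2), round(dor, 1))  # prints 8.2 0.09 88.8
```

These agree with the reported LR+ 8.22, LR- 0.09, and DOR 89.74; the small differences come from the meta-analysis using unrounded pooled inputs.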
11. Chaurasia AK, Greatbatch CJ, Hewitt AW. Diagnostic Accuracy of Artificial Intelligence in Glaucoma Screening and Clinical Practice. J Glaucoma 2022; 31:285-299. [PMID: 35302538 DOI: 10.1097/ijg.0000000000002015]
Abstract
PURPOSE Artificial intelligence (AI) has shown promise as a diagnostic tool for glaucoma detection through imaging modalities. However, these tools are yet to be deployed into clinical practice. This meta-analysis determined overall AI performance for glaucoma diagnosis and identified potential factors affecting implementation. METHODS We searched databases (Embase, Medline, Web of Science, and Scopus) for studies that developed or investigated the use of AI for glaucoma detection using fundus and optical coherence tomography (OCT) images. A bivariate random-effects model was used to determine the summary estimates for diagnostic outcomes. The Preferred Reporting Items for Systematic Reviews and Meta-Analysis of Diagnostic Test Accuracy (PRISMA-DTA) extension was followed, and the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool was used for bias and applicability assessment. RESULTS Seventy-nine articles met the inclusion criteria, with a subset of 66 containing adequate data for quantitative analysis. The pooled area under the receiver operating characteristic curve across all studies for glaucoma detection was 96.3%, with a sensitivity of 92.0% (95% confidence interval: 89.0-94.0) and a specificity of 94.0% (95% confidence interval: 92.0-95.0). The pooled area under the receiver operating characteristic curve on fundus and OCT images was 96.2% and 96.0%, respectively. Mixed datasets and external data validation had unsatisfactory diagnostic outcomes. CONCLUSION Although AI has the potential to revolutionize glaucoma care, this meta-analysis highlights a number of issues that need to be addressed before such algorithms can be implemented in clinical care. With substantial heterogeneity across studies, many factors were found to affect diagnostic performance. We recommend implementing a standard diagnostic protocol for grading, external data validation, and analysis across different ethnic groups.
Affiliation(s)
- Abadh K Chaurasia
  - Menzies Institute for Medical Research, School of Medicine, University of Tasmania, Tasmania
- Connor J Greatbatch
  - Menzies Institute for Medical Research, School of Medicine, University of Tasmania, Tasmania
- Alex W Hewitt
  - Menzies Institute for Medical Research, School of Medicine, University of Tasmania, Tasmania
  - Centre for Eye Research Australia, University of Melbourne, Melbourne, Australia
12. Singh LK, Garg H, Khanna M. Performance evaluation of various deep learning based models for effective glaucoma evaluation using optical coherence tomography images. Multimedia Tools and Applications 2022; 81:27737-27781. [PMID: 35368855 PMCID: PMC8962290 DOI: 10.1007/s11042-022-12826-y]
Abstract
Glaucoma is the leading cause of irreversible blindness worldwide, and its best remedy is early and timely detection. Optical coherence tomography (OCT) has become the most commonly used imaging modality for detecting glaucomatous damage in recent years. Deep learning applied to OCT helps predict glaucoma more accurately and less tediously. This experimental study performs glaucoma prediction using eight different ImageNet-pretrained models on OCT images of glaucoma. A thorough investigation evaluates these models' performance on various efficiency metrics to discover the best-performing model. Each network is tested with three different optimizers, namely Adam, Root Mean Squared Propagation (RMSProp), and Stochastic Gradient Descent, to find the most relevant results. An attempt has been made to improve the models' performance using transfer learning and fine-tuning. The models were initially trained and tested on a private database of 4220 images (2110 normal OCT and 2110 glaucoma OCT). Based on the results, the four best-performing models were shortlisted and then tested on the well-recognized standard public Mendeley dataset. Experimental results illustrate that VGG16 with the RMSProp optimizer attains promising performance with 95.68% accuracy. The study concludes that ImageNet-pretrained models are a good basis for a computer-based automatic glaucoma screening system. This fully automated system has considerable potential to distinguish normal from glaucomatous OCT, helping to detect the condition in suspected patients for better diagnosis, avoiding vision loss, and reducing the precious time and involvement required of senior ophthalmologists (experts).
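RMSProp, the optimizer the study found most effective with VGG16, scales each parameter's learning rate by a running average of its squared gradients. A minimal NumPy sketch on hypothetical toy data (not the study's training pipeline): RMSProp updates fit a logistic classification head, the role such a head plays when fine-tuning a frozen ImageNet backbone.

```python
import numpy as np

def rmsprop_step(w, grad, cache, lr=1e-3, decay=0.9, eps=1e-8):
    # One RMSProp update: divide the step by the root of a running
    # average of squared gradients, giving each parameter its own scale.
    cache = decay * cache + (1.0 - decay) * grad ** 2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache

# Toy "fine-tuning": logistic head on frozen backbone features.
rng = np.random.default_rng(1)
X = rng.standard_normal((64, 16))        # stand-in for frozen features
y = (X[:, 0] > 0).astype(float)          # toy labels, separable by feature 0
w, cache = np.zeros(16), np.zeros(16)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid predictions
    grad = X.T @ (p - y) / len(y)        # logistic-loss gradient
    w, cache = rmsprop_step(w, grad, cache, lr=0.05)

acc = np.mean((1.0 / (1.0 + np.exp(-(X @ w))) > 0.5) == (y == 1))
print(acc)  # training accuracy after 200 RMSProp steps
```

In a real pipeline the same update rule would be applied (by the framework's optimizer) to the unfrozen layers of VGG16 rather than to a single linear head.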
Affiliation(s)
- Law Kumar Singh
  - Department of Computer Science and Engineering, Sharda University, Greater Noida, India
  - Department of Computer Science and Engineering, Hindustan College of Science and Technology, Mathura, India
- Hitendra Garg
  - Department of Computer Engineering and Applications, GLA University, Mathura, India
- Munish Khanna
  - Department of Computer Science and Engineering, Hindustan College of Science and Technology, Mathura, India
13. Jayakumar S, Sounderajah V, Normahani P, Harling L, Markar SR, Ashrafian H, Darzi A. Quality assessment standards in artificial intelligence diagnostic accuracy systematic reviews: a meta-research study. NPJ Digit Med 2022; 5:11. [PMID: 35087178 PMCID: PMC8795185 DOI: 10.1038/s41746-021-00544-y]
Abstract
Artificial intelligence (AI) centred diagnostic systems are increasingly recognised as robust solutions in healthcare delivery pathways. In turn, there has been a concurrent rise in secondary research studies regarding these technologies in order to influence key clinical and policymaking decisions. It is therefore essential that these studies accurately appraise methodological quality and risk of bias within shortlisted trials and reports. In order to assess whether this critical step is performed, we undertook a meta-research study evaluating adherence to the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool within AI diagnostic accuracy systematic reviews. A literature search was conducted on all studies published from 2000 to December 2020. Of 50 included reviews, 36 performed the quality assessment, of which 27 utilised the QUADAS-2 tool. Bias was reported across all four domains of QUADAS-2. Two hundred forty-three of 423 studies (57.5%) across all systematic reviews utilising QUADAS-2 reported a high or unclear risk of bias in the patient selection domain, 110 (26%) reported a high or unclear risk of bias in the index test domain, 121 (28.6%) in the reference standard domain and 157 (37.1%) in the flow and timing domain. This study demonstrates the incomplete uptake of quality assessment tools in reviews of AI-based diagnostic accuracy studies and highlights inconsistent reporting across all domains of quality assessment. Poor standards of reporting act as barriers to clinical implementation. The creation of an AI-specific extension for quality assessment tools of diagnostic accuracy AI studies may facilitate the safe translation of AI tools into clinical practice.
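The per-domain proportions quoted above can be recomputed from the raw counts. A small sketch; recomputing gives 57.4%, 26.0%, 28.6%, and 37.1%, matching the abstract's figures up to a 0.1-point rounding difference in the first domain.

```python
# High/unclear risk-of-bias counts per QUADAS-2 domain, out of the 423
# primary studies covered by systematic reviews that applied the tool.
domains = {
    "patient selection": 243,
    "index test": 110,
    "reference standard": 121,
    "flow and timing": 157,
}
total = 423
shares = {d: 100.0 * n / total for d, n in domains.items()}
for domain, pct in shares.items():
    print(f"{domain}: {pct:.1f}% at high or unclear risk of bias")
```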
Affiliation(s)
- Shruti Jayakumar
  - Department of Surgery and Cancer, Imperial College London, London, UK
- Viknesh Sounderajah
  - Department of Surgery and Cancer, Imperial College London, London, UK
  - Institute of Global Health Innovation, Imperial College London, London, UK
- Pasha Normahani
  - Department of Surgery and Cancer, Imperial College London, London, UK
  - Institute of Global Health Innovation, Imperial College London, London, UK
- Leanne Harling
  - Department of Surgery and Cancer, Imperial College London, London, UK
  - Department of Thoracic Surgery, Guy's Hospital, London, UK
- Sheraz R Markar
  - Department of Surgery and Cancer, Imperial College London, London, UK
  - Institute of Global Health Innovation, Imperial College London, London, UK
- Hutan Ashrafian
  - Department of Surgery and Cancer, Imperial College London, London, UK
- Ara Darzi
  - Department of Surgery and Cancer, Imperial College London, London, UK
  - Institute of Global Health Innovation, Imperial College London, London, UK
14. Sunija A, Gopi VP, Palanisamy P. Redundancy reduced depthwise separable convolution for glaucoma classification using OCT images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103192]
15. Buisson M, Navel V, Labbé A, Watson SL, Baker JS, Murtagh P, Chiambaretta F, Dutheil F. Deep learning versus ophthalmologists for screening for glaucoma on fundus examination: A systematic review and meta-analysis. Clin Exp Ophthalmol 2021; 49:1027-1038. [PMID: 34506041 DOI: 10.1111/ceo.14000]
Abstract
BACKGROUND In this systematic review and meta-analysis, we aimed to compare deep learning versus ophthalmologists in glaucoma diagnosis on fundus examinations. METHOD PubMed, Cochrane, Embase, ClinicalTrials.gov and ScienceDirect databases were searched for studies reporting a comparison between the glaucoma diagnosis performance of deep learning and ophthalmologists on fundus examinations on the same datasets, up to 10 December 2020. Studies had to report an area under the receiver operating characteristic curve (AUC) with SD, or enough data to generate one. RESULTS We included six studies in our meta-analysis. There was no difference in AUC between ophthalmologists (AUC = 82.0, 95% confidence interval [CI] 65.4-98.6) and deep learning (97.0, 89.4-104.5). There was also no difference using several pessimistic and optimistic variants of our meta-analysis: the best (82.2, 60.0-104.3) or worst (77.7, 53.1-102.3) ophthalmologists versus the best (97.1, 89.5-104.7) or worst (97.1, 88.5-105.6) deep learning model of each study. We did not identify any factors influencing these results. CONCLUSION Deep learning performed similarly to ophthalmologists in glaucoma diagnosis from fundus examinations. Further studies should evaluate deep learning in clinical situations.
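Pooled AUC estimates of this kind come from combining per-study estimates weighted by their precision. As a rough illustration, here is a minimal pure-Python sketch of DerSimonian-Laird random-effects pooling; the function name `pool_auc` and the example inputs are hypothetical and do not reproduce the study's data.

```python
import math

def pool_auc(aucs, ses):
    """Inverse-variance random-effects (DerSimonian-Laird) pooling.
    aucs: per-study AUC estimates; ses: their standard errors."""
    w = [1 / se**2 for se in ses]                          # fixed-effect weights
    fe = sum(wi * a for wi, a in zip(w, aucs)) / sum(w)    # fixed-effect mean
    q = sum(wi * (a - fe)**2 for wi, a in zip(w, aucs))    # Cochran's Q
    df = len(aucs) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                          # between-study variance
    w_re = [1 / (se**2 + tau2) for se in ses]              # random-effects weights
    pooled = sum(wi * a for wi, a in zip(w_re, aucs)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Hypothetical per-study AUCs (percent scale, as in the abstract) and SEs
pooled, ci = pool_auc([95.0, 97.5, 98.2], [2.0, 1.5, 1.8])
```

With homogeneous studies the between-study variance collapses to zero and the estimate reduces to the fixed-effect mean, which is why wide CIs (like 65.4-98.6 above) signal substantial heterogeneity.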
Collapse
Affiliation(s)
- Mathieu Buisson
- CHU Clermont-Ferrand, Ophthalmology, University Hospital of Clermont-Ferrand, Clermont-Ferrand, France
| | - Valentin Navel
- CHU Clermont-Ferrand, Ophthalmology, University Hospital of Clermont-Ferrand, Clermont-Ferrand, France.,CNRS UMR 6293, INSERM U1103, Genetic Reproduction and Development Laboratory (GReD), Translational Approach to Epithelial Injury and Repair Team, Université Clermont Auvergne, Clermont-Ferrand, France
| | - Antoine Labbé
- Department of Ophthalmology III, Quinze-Vingts National Ophthalmology Hospital, IHU FOReSIGHT, Paris, France.,Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France.,Department of Ophthalmology, Ambroise Paré Hospital, APHP, Université de Versailles Saint-Quentin en Yvelines, Versailles, France
| | - Stephanie L Watson
- Save Sight Institute, Discipline of Ophthalmology, Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia.,Corneal Unit, Sydney Eye Hospital, Sydney, New South Wales, Australia
| | - Julien S Baker
- Centre for Health and Exercise Science Research, Department of Sport, Physical Education and Health, Hong Kong Baptist University, Kowloon Tong, Hong Kong
| | - Patrick Murtagh
- Department of Ophthalmology, Royal Victoria Eye and Ear Hospital, Dublin, Ireland
| | - Frédéric Chiambaretta
- CHU Clermont-Ferrand, Ophthalmology, University Hospital of Clermont-Ferrand, Clermont-Ferrand, France.,CNRS UMR 6293, INSERM U1103, Genetic Reproduction and Development Laboratory (GReD), Translational Approach to Epithelial Injury and Repair Team, Université Clermont Auvergne, Clermont-Ferrand, France
| | - Frédéric Dutheil
- Université Clermont Auvergne, CNRS, LaPSCo, Physiological and Psychosocial Stress, CHU Clermont-Ferrand, University Hospital of Clermont-Ferrand, Preventive and Occupational Medicine, Witty Fit, Clermont-Ferrand, France
| |
Collapse
|
16
|
Ong J, Selvam A, Chhablani J. Artificial intelligence in ophthalmology: Optimization of machine learning for ophthalmic care and research. Clin Exp Ophthalmol 2021; 49:413-415. [PMID: 34279854 DOI: 10.1111/ceo.13952] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
Affiliation(s)
- Joshua Ong
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
| | - Amrish Selvam
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
| | - Jay Chhablani
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
| |
Collapse
|
17
|
Olivas LG, Alférez GH, Castillo J. Glaucoma detection in Latino population through OCT's RNFL thickness map using transfer learning. Int Ophthalmol 2021; 41:3727-3741. [PMID: 34212255 DOI: 10.1007/s10792-021-01931-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2020] [Accepted: 06/19/2021] [Indexed: 11/25/2022]
Abstract
PURPOSE Glaucoma is the leading cause of irreversible blindness worldwide. It is estimated that over 60 million people around the world have this disease, only a fraction of whom know they have it. Timely and early diagnosis is vital to delay or prevent patient blindness. Deep learning (DL) could be a tool for ophthalmologists to give a more informed and objective diagnosis. However, there is a lack of studies applying DL to glaucoma detection in the Latino population. Our contribution is to use transfer learning to retrain MobileNet and Inception V3 models with images of the retinal nerve fiber layer (RNFL) thickness map of Mexican patients, obtained with optical coherence tomography (OCT) at the Instituto de la Visión, a clinic in northern Mexico. METHODS The IBM Foundational Methodology for Data Science was used in this study. The MobileNet and Inception V3 topologies were chosen as the analytical approaches to classify OCT images into two classes, glaucomatous and non-glaucomatous. The OCT files were collected from a Zeiss OCT machine at the Instituto de la Visión and classified by an expert into the two classes under study. These images comprise a dataset of 333 files in total. Since this work focuses on RNFL thickness map images, the OCT files were cropped to obtain only the RNFL thickness map of the corresponding eye, for images in both classes. Some images were damaged (with black spots where data was missing) and were excluded. After this preparation process, 50 images per class were used for training. Fifteen images per class, different from those used in training, were used for running predictions. In total, 260 images were used in the experiments, 130 per eye. Four models were generated: two trained with MobileNet, one for the left eye and one for the right eye, and another two trained with Inception V3.
TensorFlow was used for running transfer learning. RESULTS The MobileNet model achieved accuracy 86%, precision 87%, recall 87%, and F1 score 87% for the left eye, and 90% on all four metrics for the right eye. The Inception V3 model achieved 90% on all four metrics for both the left and right eyes. CONCLUSION On average, the evaluation results for right-eye images were the same for both models. The Inception V3 model showed slightly better average results than the MobileNet model when classifying left-eye images.
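The accuracy, precision, recall and F1 figures reported above all follow from a binary confusion matrix over the held-out prediction set. A minimal sketch of that computation is below; the function `binary_metrics` and the example counts are illustrative assumptions, not the paper's actual confusion matrices.

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F1 from confusion-matrix counts
    (positive class = glaucomatous)."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # of predicted glaucomatous, how many truly are
    recall = tp / (tp + fn)             # of truly glaucomatous, how many were caught
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Hypothetical counts for a 15-vs-15 prediction set like the one described
acc, prec, rec, f1 = binary_metrics(tp=14, fp=2, fn=1, tn=13)
```

In a screening context recall matters most, since a false negative is a missed glaucoma case, whereas a false positive only triggers a specialist referral.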
Collapse
Affiliation(s)
- Liza G Olivas
- School of Engineering and Technology, Universidad de Montemorelos, Montemorelos, NL, Mexico
| | - Germán H Alférez
- School of Engineering and Technology, Universidad de Montemorelos, Montemorelos, NL, Mexico.
| | - Javier Castillo
- School of Medicine, Universidad de Montemorelos, Montemorelos, NL, Mexico
| |
Collapse
|
18
|
Xiao X, Xue L, Ye L, Li H, He Y. Health care cost and benefits of artificial intelligence-assisted population-based glaucoma screening for the elderly in remote areas of China: a cost-offset analysis. BMC Public Health 2021; 21:1065. [PMID: 34088286 PMCID: PMC8178835 DOI: 10.1186/s12889-021-11097-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2020] [Accepted: 05/17/2021] [Indexed: 12/04/2022] Open
Abstract
Background Population-based screening is essential for glaucoma management. Although various studies have investigated the cost-effectiveness of glaucoma screening, policymakers facing uncontrolled growth in total health expenditure are deeply concerned about the potential financial consequences of glaucoma screening. This study aimed to explore the impact of glaucoma screening with artificial intelligence (AI) automated diagnosis from a budgetary standpoint in Changjiang county, China. Methods A Markov model taking the health care system's perspective was adapted from previously published studies to predict disease progression and healthcare costs. A cohort of 19,395 individuals aged 65 and above was simulated over a 15-year timeframe. For illustrative purposes, we only considered primary angle-closure glaucoma (PACG) in this study. Prevalence, disease progression risks between stages, and compliance rates were obtained from published studies. We did a meta-analysis to estimate the diagnostic performance of the AI automated diagnosis system from fundus images. Screening costs were provided by the Changjiang screening programme, whereas treatment costs were derived from electronic medical records from two county hospitals. Main outcomes included the number of PACG patients and health care costs. Cost-offset analysis was employed to compare projected health outcomes and medical care costs under the screening with what they would have been without screening. One-way sensitivity analysis was conducted to quantify uncertainties around the model results. Results Among people aged 65 and above in Changjiang county, the model predicted 1940 PACG patients under the AI-assisted screening scenario, compared with 2104 patients without screening over 15 years. Specifically, the screening would reduce patients with primary angle closure suspect by 7.7%, primary angle closure by 8.8%, PACG by 16.7%, and visual blindness by 33.3%.
Due to early diagnosis and treatment under screening, healthcare costs surged to $107,761.4 in the first year and then declined steadily over time, whereas without screening, costs grew from $14,759.8 in the second year until peaking at $17,900.9 in the ninth year. However, cost-offset analysis revealed that the additional healthcare costs resulting from the screening could not be offset by decreased disease progression. The 5-, 10-, and 15-year accumulated incremental costs of screening versus no screening were estimated to be $396,362.8, $424,907.9, and $434,903.2, respectively. As a result, the incremental cost per PACG case of any stage prevented was $1464.3. Conclusions This study represents the first attempt to address decision-makers' budgetary concerns when adopting glaucoma screening by developing a Markov prediction model to project health outcomes and costs. Population screening combined with AI automated diagnosis for PACG in China was able to reduce disease progression risks. However, the excess costs of screening could not be offset by the reduction in disease progression. Further studies examining the cost-effectiveness or cost-utility of AI-assisted glaucoma screening are needed. Supplementary Information The online version contains supplementary material available at 10.1186/s12889-021-11097-w.
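A Markov cohort model of this kind propagates a state-occupancy vector through an annual transition matrix, one cycle per year. The sketch below shows only the mechanics for a cohort of 19,395 starting individuals over a 15-year horizon; the state names and every transition probability are hypothetical placeholders, not the published progression risks the study actually used.

```python
# Illustrative disease states (ordered by severity; "blind" is absorbing)
STATES = ["well", "suspect", "pac", "pacg", "blind"]

# Hypothetical annual transition probabilities; each row sums to 1.
P = [
    [0.97, 0.03, 0.00, 0.00, 0.00],
    [0.00, 0.95, 0.05, 0.00, 0.00],
    [0.00, 0.00, 0.93, 0.07, 0.00],
    [0.00, 0.00, 0.00, 0.96, 0.04],
    [0.00, 0.00, 0.00, 0.00, 1.00],
]

def run_cohort(cohort, years):
    """Propagate a state-occupancy vector through the Markov chain."""
    for _ in range(years):
        cohort = [sum(cohort[i] * P[i][j] for i in range(len(STATES)))
                  for j in range(len(STATES))]
    return dict(zip(STATES, cohort))

# 19,395 individuals all starting healthy, 15 annual cycles (as in the study)
result = run_cohort([19395, 0, 0, 0, 0], years=15)
```

A screening scenario would be modelled the same way with lower progression probabilities (reflecting earlier treatment), and per-state annual costs attached to each cycle's occupancy counts; the cost-offset comparison is then the difference in accumulated costs between the two runs.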
Collapse
Affiliation(s)
- Xuan Xiao
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, 430060, China
| | - Long Xue
- School of Public Health, Fudan University, Shanghai, 200433, China
| | - Lin Ye
- Department of Eye Plastic and Lacrimal Disease, Shenzhen Eye Hospital of Jinan University, Shenzhen, 518040, China
| | - Hongzheng Li
- School of Public Health, Fudan University, Shanghai, 200433, China
| | - Yunzhen He
- School of Public Health, Fudan University, Shanghai, 200433, China.
| |
Collapse
|
19
|
Stagg BC, Stein JD, Medeiros FA, Wirostko B, Crandall A, Hartnett ME, Cummins M, Morris A, Hess R, Kawamoto K. Special Commentary: Using Clinical Decision Support Systems to Bring Predictive Models to the Glaucoma Clinic. Ophthalmol Glaucoma 2020; 4:5-9. [PMID: 32810611 DOI: 10.1016/j.ogla.2020.08.006] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2020] [Revised: 08/12/2020] [Accepted: 08/12/2020] [Indexed: 01/29/2023]
Abstract
Advances in the field of predictive modeling using artificial intelligence and machine learning have the potential to improve clinical care and outcomes, but only if the results of these models are presented appropriately to clinicians at the time they make decisions for individual patients. Clinical decision support (CDS) systems could be used to accomplish this. Modern CDS systems are computer-based tools designed to improve clinician decision making for individual patients. However, not all CDS systems are effective. Four principles that have been shown in other medical fields to be important for successful CDS system implementation are (1) integration into clinician workflow, (2) user-centered interface design, (3) evaluation of CDS systems and rules, and (4) standards-based development so the tools can be deployed across health systems.
Collapse
Affiliation(s)
- Brian C Stagg
- John Moran Eye Center, Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, Utah; Department of Population Health Sciences, University of Utah, Salt Lake City, Utah.
| | - Joshua D Stein
- Center for Eye Policy &amp; Innovation, Kellogg Eye Center, Department of Ophthalmology and Visual Sciences, University of Michigan, Ann Arbor, Michigan; Institute for Healthcare Policy and Innovation, University of Michigan, Ann Arbor, Michigan; Department of Health Management and Policy, University of Michigan School of Public Health, Ann Arbor, Michigan
| | | | - Barbara Wirostko
- John Moran Eye Center, Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, Utah
| | - Alan Crandall
- John Moran Eye Center, Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, Utah
| | - M Elizabeth Hartnett
- John Moran Eye Center, Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, Utah
| | - Mollie Cummins
- College of Nursing, University of Utah, Salt Lake City, Utah
| | - Alan Morris
- Division of Respiratory, Critical Care and Occupational Pulmonary Medicine, Department of Internal Medicine, University of Utah, Salt Lake City, Utah
| | - Rachel Hess
- Department of Population Health Sciences, University of Utah, Salt Lake City, Utah; Department of Internal Medicine, University of Utah, Salt Lake City, Utah
| | - Kensaku Kawamoto
- Department of Biomedical Informatics, University of Utah, Salt Lake City, Utah
| |
Collapse
|