1
Tian Y, Sharma A, Mehta S, Kaushal S, Liebmann JM, Cioffi GA, Thakoor KA. Automated Identification of Clinically Relevant Regions in Glaucoma OCT Reports Using Expert Eye Tracking Data and Deep Learning. Transl Vis Sci Technol 2024; 13:24. [PMID: 39405074; PMCID: PMC11482640; DOI: 10.1167/tvst.13.10.24]
Abstract
Purpose To propose a deep learning-based approach for predicting the most-fixated regions on optical coherence tomography (OCT) reports using eye tracking data of ophthalmologists, assisting them in finding medically salient image regions. Methods We collected eye tracking data of ophthalmology residents, fellows, and faculty as they viewed OCT reports to detect glaucoma. We used a U-Net model as the deep learning backbone and quantized eye tracking coordinates by dividing the input report into an 11 × 11 grid. The model was trained to predict the grids on which fixations would land in unseen OCT reports. We investigated the contribution of different variables, including the viewer's level of expertise, model architecture, and number of eye gaze patterns included in training. Results Our approach predicted most-fixated regions in OCT reports with precision of 0.723, recall of 0.562, and f1-score of 0.609. We found that using a grid-based eye tracking structure enabled efficient training and using a U-Net backbone led to the best performance. Conclusions Our approach has the potential to assist ophthalmologists in diagnosing glaucoma by predicting the most medically salient regions on OCT reports. Our study suggests the value of eye tracking in guiding deep learning algorithms toward informative regions when experts may not be accessible. Translational Relevance By suggesting important OCT report regions for a glaucoma diagnosis, our model could aid in medical education and serve as a precursor for self-supervised deep learning approaches to expedite early detection of irreversible vision loss owing to glaucoma.
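The grid-based target described in the Methods can be illustrated with a short sketch: fixation coordinates are quantized into an 11 x 11 grid that a U-Net-style predictor can be trained against. This is a hypothetical illustration, not the authors' code; the report dimensions, coordinates, and function name are assumptions.

```python
import numpy as np

def fixations_to_grid(fixations, report_w, report_h, grid=11):
    """Quantize (x, y) fixation coordinates into a grid x grid binary target map."""
    target = np.zeros((grid, grid), dtype=np.float32)
    for x, y in fixations:
        col = min(int(x / report_w * grid), grid - 1)
        row = min(int(y / report_h * grid), grid - 1)
        target[row, col] = 1.0  # cell receives at least one fixation
    return target

# Three hypothetical fixations on a 1650 x 1100 pixel OCT report
grid_target = fixations_to_grid([(120, 80), (900, 550), (1600, 1050)], 1650, 1100)
print(grid_target.sum())  # 3.0; each cell is a label for the grid-level predictor
```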
Affiliation(s)
- Ye Tian
- Department of Biomedical Engineering, Columbia University, New York, New York, USA
- Artificial Intelligence for Vision Science Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York, New York, USA
- Anurag Sharma
- Department of Biomedical Engineering, Columbia University, New York, New York, USA
- Artificial Intelligence for Vision Science Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York, New York, USA
- Shubh Mehta
- Department of Biomedical Engineering, Columbia University, New York, New York, USA
- Shubham Kaushal
- Data Science Institute, Columbia University, New York, New York, USA
- Artificial Intelligence for Vision Science Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York, New York, USA
- Jeffrey M. Liebmann
- Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York, New York, USA
- George A. Cioffi
- Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York, New York, USA
- Kaveri A. Thakoor
- Department of Biomedical Engineering, Columbia University, New York, New York, USA
- Data Science Institute, Columbia University, New York, New York, USA
- Artificial Intelligence for Vision Science Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York, New York, USA
- Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York, New York, USA
- Department of Computer Science, Columbia University, New York, New York, USA
2
Upadhyaya S, Rao DP, Kavitha S, Ballae Ganeshrao S, Negiloni K, Bhandary S, Savoy FM, Venkatesh R. Diagnostic Performance of the Offline Medios Artificial Intelligence for Glaucoma Detection in a Rural Tele-Ophthalmology Setting. Ophthalmol Glaucoma 2024:S2589-4196(24)00173-X. [PMID: 39277171; DOI: 10.1016/j.ogla.2024.09.002]
Abstract
PURPOSE This study assesses the diagnostic efficacy of offline Medios Artificial Intelligence (AI) glaucoma software in a primary eye care setting, using nonmydriatic fundus images from Remidio's Fundus-on-Phone (FOP NM-10). AI results were compared with tele-ophthalmologists' diagnoses and with a glaucoma specialist's assessment for those participants referred to a tertiary eye care hospital. DESIGN Prospective cross-sectional study. PARTICIPANTS Three hundred three participants from 6 satellite vision centers of a tertiary eye hospital. METHODS At the vision center, participants underwent comprehensive eye evaluations, including clinical history, visual acuity measurement, slit lamp examination, intraocular pressure measurement, and fundus photography using the FOP NM-10 camera. Medios AI-Glaucoma software analyzed 42-degree disc-centric fundus images, categorizing them as normal, glaucoma, or suspect. Tele-ophthalmologists, who were glaucoma fellows with a minimum of 3 years of ophthalmology training and 1 year of glaucoma fellowship training and who were masked to AI results, remotely diagnosed participants based on history and disc appearance. All participants labeled as disc suspects or glaucoma by AI or tele-ophthalmologists underwent further comprehensive glaucoma evaluation at the base hospital, including clinical examination, Humphrey visual field analysis, and OCT. AI and tele-ophthalmologist diagnoses were then compared with the glaucoma specialist's diagnosis. MAIN OUTCOME MEASURES Sensitivity and specificity of Medios AI. RESULTS Of 303 participants, 299 with at least one eye of sufficient image quality were included in the study; the remaining 4 did not have sufficient image quality in either eye. Medios AI identified 39 participants (13%) with referable glaucoma. The AI exhibited a sensitivity of 0.91 (95% confidence interval [CI], 0.71-0.99) and a specificity of 0.93 (95% CI, 0.89-0.96) in detecting referable glaucoma (definite perimetric glaucoma) when compared with the tele-ophthalmologists. Agreement between AI and the glaucoma specialist was 80.3%, surpassing the 55.3% agreement between the tele-ophthalmologists and the glaucoma specialist among the participants referred to the base hospital. Both AI and the tele-ophthalmologists relied on fundus photographs for diagnosis, whereas the glaucoma specialist's assessments at the base hospital were aided by additional tools such as Humphrey visual field analysis and OCT. Furthermore, AI had fewer false-positive referrals (2 out of 10) than the tele-ophthalmologists (9 out of 10). CONCLUSIONS Offline Medios AI showed promising sensitivity and specificity in detecting referable glaucoma from remote vision centers in southern India when compared with tele-ophthalmologists, and it demonstrated better agreement with the glaucoma specialist's diagnosis for referable glaucoma participants. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
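For readers less familiar with the reported metrics, the sketch below shows how sensitivity, specificity, and percent agreement are derived from simple counts; the numbers are invented for illustration and are not the study's data.

```python
# Hypothetical confusion counts for an AI referral versus a reference diagnosis
def sens_spec(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

def percent_agreement(n_agree, n_total):
    return 100.0 * n_agree / n_total

sens, spec = sens_spec(tp=20, fn=2, tn=250, fp=18)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
print(f"agreement={percent_agreement(241, 300):.1f}%")
```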
Affiliation(s)
- Swati Upadhyaya
- Department of Glaucoma, Aravind Eye Hospital, Pondicherry, India
- Kalpa Negiloni
- Remidio Innovative Solutions Private Limited, Bengaluru, India
- Shreya Bhandary
- Remidio Innovative Solutions Private Limited, Bengaluru, India
- Florian M Savoy
- Medios Technologies, Remidio Innovative Solutions, Singapore
3
Huang S, Jin K, Gao Z, Yang B, Shi X, Zhou J, Grzybowski A, Gawecki M, Ye J. Automated interpretation of retinal vein occlusion based on fundus fluorescein angiography images using deep learning: A retrospective, multi-center study. Heliyon 2024; 10:e33108. [PMID: 39027617; PMCID: PMC11255597; DOI: 10.1016/j.heliyon.2024.e33108]
Abstract
Purpose Fundus fluorescein angiography (FFA) is the gold standard for retinal vein occlusion (RVO) diagnosis. This study aims to develop a deep learning-based system to diagnose and classify RVO using FFA images, addressing the challenges of time-consuming and variable interpretations by ophthalmologists. Methods A total of 4028 FFA images of 467 eyes from 463 patients were collected and annotated. Three convolutional neural network (CNN) models (ResNet50, VGG19, InceptionV3) were trained to generate labels for image quality, eye, location, phase, lesions, diagnosis, and macular involvement. The performance of the models was evaluated by accuracy, precision, recall, F1 score, area under the curve, confusion matrix, human-machine comparison, and clinical validation on three external data sets. Results The InceptionV3 model outperformed ResNet50 and VGG19 in labeling and interpreting FFA images for RVO diagnosis, achieving 77.63%-96.45% accuracy for basic information labels and 81.72%-96.45% for RVO-relevant labels. The comparison between the best CNN and ophthalmologists showed up to 19% higher accuracy with InceptionV3. Conclusion This study developed a deep learning model capable of automated multi-label classification of FFA images for RVO diagnosis. The proposed system is anticipated to serve as a new tool for diagnosing RVO in settings with limited medical resources.
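As a rough illustration of the transfer-learning setup such a study implies, the sketch below adapts an ImageNet-pretrained InceptionV3 to a multi-label head in PyTorch. The number of labels and the loss choice are assumptions; this is not the authors' implementation.

```python
import torch.nn as nn
from torchvision import models

num_labels = 7  # assumed: quality, eye, location, phase, lesions, diagnosis, macular involvement
model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, num_labels)                       # main head
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, num_labels)   # auxiliary head (train-time only)
criterion = nn.BCEWithLogitsLoss()  # one independent logit per label
# In model.train() mode the forward pass returns (logits, aux_logits); both feed the loss.
```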
Affiliation(s)
- Shenyu Huang
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, China
- Kai Jin
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, China
- Zhiyuan Gao
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, China
- Boyuan Yang
- Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Xin Shi
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, China
- Jingxin Zhou
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, China
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland
- Maciej Gawecki
- Department of Ophthalmology of Specialist Hospital in Chojnice, Lesna 10, 89-600, Chojnice, Poland
- Dobry Wzrok Ophthalmological Clinic, Zabi Kruk 10, 80-402, Gdańsk, Poland
- Juan Ye
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, China
4
Kang D, Wu H, Yuan L, Shi Y, Jin K, Grzybowski A. A Beginner's Guide to Artificial Intelligence for Ophthalmologists. Ophthalmol Ther 2024; 13:1841-1855. [PMID: 38734807; PMCID: PMC11178755; DOI: 10.1007/s40123-024-00958-3]
Abstract
The integration of artificial intelligence (AI) in ophthalmology has promoted the development of the discipline, offering opportunities for enhancing diagnostic accuracy, patient care, and treatment outcomes. This paper aims to provide a foundational understanding of AI applications in ophthalmology, with a focus on interpreting studies related to AI-driven diagnostics. The core of our discussion is to explore various AI methods, including deep learning (DL) frameworks for detecting and quantifying ophthalmic features in imaging data, as well as using transfer learning for effective model training in limited datasets. The paper highlights the importance of high-quality, diverse datasets for training AI models and the need for transparent reporting of methodologies to ensure reproducibility and reliability in AI studies. Furthermore, we address the clinical implications of AI diagnostics, emphasizing the balance between minimizing false negatives to avoid missed diagnoses and reducing false positives to prevent unnecessary interventions. The paper also discusses the ethical considerations and potential biases in AI models, underscoring the importance of continuous monitoring and improvement of AI systems in clinical settings. In conclusion, this paper serves as a primer for ophthalmologists seeking to understand the basics of AI in their field, guiding them through the critical aspects of interpreting AI studies and the practical considerations for integrating AI into clinical practice.
Affiliation(s)
- Daohuan Kang
- Department of Ophthalmology, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China
- Hongkang Wu
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Lu Yuan
- Department of Ophthalmology, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China
- Yu Shi
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Zhejiang University School of Medicine, Hangzhou, China
- Kai Jin
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland
5
Hashemian H, Peto T, Ambrósio Jr R, Lengyel I, Kafieh R, Muhammed Noori A, Khorrami-Nejad M. Application of Artificial Intelligence in Ophthalmology: An Updated Comprehensive Review. J Ophthalmic Vis Res 2024; 19:354-367. [PMID: 39359529; PMCID: PMC11444002; DOI: 10.18502/jovr.v19i3.15893]
Abstract
Artificial intelligence (AI) holds immense promise for transforming ophthalmic care through automated screening, precision diagnostics, and optimized treatment planning. This paper reviews recent advances and challenges in applying AI techniques such as machine learning and deep learning to major eye diseases. In diabetic retinopathy, AI algorithms analyze retinal images to accurately identify lesions, which helps clinicians in ophthalmology practice. Systems like IDx-DR (IDx Technologies Inc, USA) are FDA-approved for autonomous detection of referable diabetic retinopathy. For glaucoma, deep learning models assess optic nerve head morphology in fundus photographs to detect damage. In age-related macular degeneration, AI can quantify drusen and diagnose disease severity from both color fundus and optical coherence tomography images. AI has also been used in screening for retinopathy of prematurity, keratoconus, and dry eye disease. Beyond screening, AI can aid treatment decisions by forecasting disease progression and anti-VEGF response. However, potential limitations such as the quality and diversity of training data, lack of rigorous clinical validation, and challenges in regulatory approval and clinician trust must be addressed for the widespread adoption of AI. Two other significant hurdles include the integration of AI into existing clinical workflows and ensuring transparency in AI decision-making processes. With continued research to address these limitations, AI promises to enable earlier diagnosis, optimized resource allocation, personalized treatment, and improved patient outcomes. Besides, synergistic human-AI systems could set a new standard for evidence-based, precise ophthalmic care.
Affiliation(s)
- Hesam Hashemian
- Translational Ophthalmology Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Tunde Peto
- School of Medicine, Dentistry and Biomedical Sciences, Centre for Public Health, Queen's University Belfast, Northern Ireland, UK
- Renato Ambrósio Jr
- Department of Ophthalmology, Federal University of the State of Rio de Janeiro (UNIRIO), Brazil
- Department of Ophthalmology, Federal University of São Paulo, São Paulo, Brazil
- Brazilian Study Group of Artificial Intelligence and Corneal Analysis (BrAIN), Rio de Janeiro and Maceió, Brazil
- Rio Vision Hospital, Rio de Janeiro, Brazil
- Instituto de Olhos Renato Ambrósio, Rio de Janeiro, Brazil
- Imre Lengyel
- School of Medicine, Dentistry and Biomedical Sciences, Queen's University Belfast, Northern Ireland
- Rahele Kafieh
- Department of Engineering, Durham University, United Kingdom
- Masoud Khorrami-Nejad
- School of Rehabilitation, Tehran University of Medical Sciences, Tehran, Iran
- Department of Optical Techniques, Al-Mustaqbal University College, Hillah, Babylon 51001, Iraq
6
Daniyal M, Qureshi M, Marzo RR, Aljuaid M, Shahid D. Exploring clinical specialists' perspectives on the future role of AI: evaluating replacement perceptions, benefits, and drawbacks. BMC Health Serv Res 2024; 24:587. [PMID: 38725039; PMCID: PMC11080164; DOI: 10.1186/s12913-024-10928-x]
Abstract
BACKGROUND Over the past few decades, the utilization of Artificial Intelligence (AI) has surged in popularity, and its application in the medical field is increasing globally. Nevertheless, the implementation of AI-based healthcare solutions has been slow in developing nations like Pakistan. This study assesses the opinions of clinical specialists on the future replacement of clinical roles by AI, its associated benefits, and its drawbacks, in the southern region of Pakistan. MATERIAL AND METHODS A cross-sectional study with selective sampling was conducted among 140 clinical specialists (Surgery = 24, Pathology = 31, Radiology = 35, Gynecology = 35, Pediatrics = 17) from the neglected southern Punjab region of Pakistan. Data were analyzed using the χ2 test of association, and the relationships between factors were examined by multinomial logistic regression. RESULTS Out of 140 respondents, 34 (24.3%) believed hospitals were ready for AI, while 81 (57.9%) disagreed. Additionally, 42 (30.0%) were concerned about privacy violations, and 70 (50%) feared AI could lead to unemployment. Specialists with less than 6 years of experience were more likely to embrace AI (p = 0.0327, OR = 3.184, 95% CI: 0.262-3.556), and those who firmly believed that AI will not replace their future tasks exhibited a lower likelihood of accepting AI (p = 0.015, OR = 0.235, 95% CI: 0.073-0.758). Clinical specialists who perceived AI as a technology with both drawbacks and benefits demonstrated a higher likelihood of accepting its adoption (p = 0.084, OR = 2.969, 95% CI: 0.865-5.187). CONCLUSION Clinical specialists have embraced AI as the future of the medical field while acknowledging concerns about privacy and unemployment.
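The reported statistics follow a standard pattern: a chi-square test of association on a specialty-by-response table, then odds ratios from a multinomial logistic model. The sketch below illustrates only the first step with invented cell counts (the row totals mirror the specialty sizes, but the splits are hypothetical).

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: Surgery, Pathology, Radiology, Gynecology, Pediatrics
# Columns: "hospital ready for AI" yes / no / unsure (cell splits are invented)
table = np.array([[8, 12, 4],
                  [10, 16, 5],
                  [9, 20, 6],
                  [5, 22, 8],
                  [2, 11, 4]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
# Odds ratios such as OR = exp(coef) would come from a multinomial logistic regression,
# e.g. statsmodels.api.MNLogit(y, X).fit(), on the survey covariates.
```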
Affiliation(s)
- Muhammad Daniyal
- Department of Statistics, Faculty of Computing, Islamia University of Bahawalpur, Bahawalpur, Pakistan
- Moiz Qureshi
- Government Degree College, TandoJam, Hyderabad, Sindh, Pakistan
- Roy Rillera Marzo
- Faculty of Humanities and Health Sciences, Curtin University Malaysia, Miri, Sarawak, Malaysia
- Jeffrey Cheah School of Medicine and Health Sciences, Global Public Health, Monash University Malaysia, Subang Jaya, Selangor, Malaysia
- Mohammed Aljuaid
- Department of Health Administration, College of Business Administration, King Saud University, Riyadh, Saudi Arabia
- Duaa Shahid
- Hult International Business School, Cambridge, MA 02141, USA
7
Kwon HJ, Heo J, Park SH, Park SW, Byon I. Accuracy of generative deep learning model for macular anatomy prediction from optical coherence tomography images in macular hole surgery. Sci Rep 2024; 14:6913. [PMID: 38519532; PMCID: PMC10959933; DOI: 10.1038/s41598-024-57562-5]
Abstract
This study aims to propose a generative deep learning model (GDLM) based on a variational autoencoder that predicts macular optical coherence tomography (OCT) images following full-thickness macular hole (FTMH) surgery and evaluate its clinical accuracy. Preoperative and 6-month postoperative swept-source OCT data were collected from 150 patients with successfully closed FTMH using 6 × 6 mm2 macular volume scan datasets. Randomly selected and augmented 120,000 training and 5000 validation pairs of OCT images were used to train the GDLM. We assessed the accuracy and F1 score of concordance for neurosensory retinal areas, performed Bland-Altman analysis of foveolar height (FH) and mean foveal thickness (MFT), and predicted postoperative external limiting membrane (ELM) and ellipsoid zone (EZ) restoration accuracy between artificial intelligence (AI)-OCT and ground truth (GT)-OCT images. Accuracy and F1 scores were 94.7% and 0.891, respectively. Average FH (228.2 vs. 233.4 μm, P = 0.587) and MFT (271.4 vs. 273.3 μm, P = 0.819) were similar between AI- and GT-OCT images, within 30.0% differences of 95% limits of agreement. ELM and EZ recovery prediction accuracy was 88.0% and 92.0%, respectively. The proposed GDLM accurately predicted macular OCT images following FTMH surgery, aiding patient and surgeon understanding of postoperative macular features.
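The Bland-Altman comparison mentioned above reduces to a bias (mean difference) and 95% limits of agreement; a minimal sketch with invented foveolar-height values is shown below.

```python
import numpy as np

def bland_altman(a, b):
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

ai_fh = [231.0, 225.4, 240.2, 218.9]   # hypothetical AI-OCT foveolar heights (um)
gt_fh = [233.5, 228.0, 236.7, 221.3]   # hypothetical ground-truth heights (um)
bias, lo, hi = bland_altman(ai_fh, gt_fh)
print(f"bias={bias:.1f} um, 95% LoA=({lo:.1f}, {hi:.1f})")
```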
Affiliation(s)
- Han Jo Kwon
- Department of Ophthalmology, Biomedical Research Institute, Pusan National University Hospital, Pusan National University School of Medicine, Gudeok-ro 179, Seo-gu, Busan, 49241, South Korea
- Jun Heo
- Department of Ophthalmology, Biomedical Research Institute, Pusan National University Hospital, Pusan National University School of Medicine, Gudeok-ro 179, Seo-gu, Busan, 49241, South Korea
- Su Hwan Park
- Department of Ophthalmology, Research Institute for Convergence of Biomedical Science and Technology, Pusan National University Yangsan Hospital, Geumo-ro 20, Mulgeum-eup, Yangsan-si, Gyeongsangnam-do, 50612, South Korea
- Sung Who Park
- Department of Ophthalmology, Biomedical Research Institute, Pusan National University Hospital, Pusan National University School of Medicine, Gudeok-ro 179, Seo-gu, Busan, 49241, South Korea
- Iksoo Byon
- Department of Ophthalmology, Biomedical Research Institute, Pusan National University Hospital, Pusan National University School of Medicine, Gudeok-ro 179, Seo-gu, Busan, 49241, South Korea
8
Feng L, Zhang Y, Wei W, Qiu H, Shi M. Applying deep learning to recognize the properties of vitreous opacity in ophthalmic ultrasound images. Eye (Lond) 2024; 38:380-385. [PMID: 37596401; PMCID: PMC10810903; DOI: 10.1038/s41433-023-02705-7]
Abstract
BACKGROUND To explore the feasibility of artificial intelligence technology based on deep learning to automatically recognize the properties of vitreous opacities in ophthalmic ultrasound images. METHODS A total of 2000 greyscale Doppler ultrasound images containing non-pathological eyes and three typical vitreous opacities, confirmed as physiological vitreous opacity (VO), asteroid hyalosis (AH), and vitreous haemorrhage (VH), were selected and labelled for each lesion type. Five residual network (ResNet) and two GoogLeNet models were trained to recognize vitreous lesions. Seventy-five percent of the images were randomly selected as the training set, and the remaining 25% were used as the test set. The accuracy and number of parameters were recorded and compared among these seven deep learning (DL) models. The precision, recall, F1 score, and area under the receiver operating characteristic curve (AUC) values for recognizing vitreous lesions were calculated for the most accurate DL model. RESULTS The seven DL models differed significantly in accuracy and number of parameters. GoogLeNet Inception V1 achieved the highest accuracy (95.5%) with the fewest parameters (10,315,580) in vitreous lesion recognition. GoogLeNet Inception V1 achieved precision values of 0.94, 0.94, 0.96, and 0.96, recall values of 0.94, 0.93, 0.97 and 0.98, and F1 scores of 0.94, 0.93, 0.96 and 0.97 for normal, VO, AH, and VH recognition, respectively. The AUC values for these four vitreous lesion types were 0.99, 1.0, 0.99, and 0.99, respectively. CONCLUSIONS GoogLeNet Inception V1 has shown promising results in ophthalmic ultrasound image recognition. With increasing ultrasound image data, a wide variety of information on eye diseases can be detected automatically by artificial intelligence technology based on deep learning.
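The per-class metrics quoted above can be reproduced from predicted probabilities with a few scikit-learn calls; the sketch uses toy labels, not the study's data.

```python
import numpy as np
from sklearn.metrics import classification_report, roc_auc_score

classes = ["normal", "VO", "AH", "VH"]
y_true = np.array([0, 1, 2, 3, 0, 1, 2, 3])                  # toy ground-truth labels
y_prob = np.array([[.8, .1, .05, .05], [.1, .7, .1, .1],     # toy predicted probabilities
                   [.05, .1, .8, .05], [.05, .05, .1, .8],
                   [.6, .2, .1, .1],  [.2, .6, .1, .1],
                   [.1, .1, .7, .1],  [.1, .1, .1, .7]])
y_pred = y_prob.argmax(axis=1)

print(classification_report(y_true, y_pred, target_names=classes))  # precision/recall/F1 per class
print("macro one-vs-rest AUC:", roc_auc_score(y_true, y_prob, multi_class="ovr"))
```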
Affiliation(s)
- Li Feng
- Department of Ophthalmology, The Fourth Affiliated Hospital of China Medical University, Eye Hospital of China Medical University, The Key Laboratory of Lens in Liaoning Province, Shenyang, China
- Wei Wei
- Hebei Eye Hospital, Xingtai, China
- Hui Qiu
- Department of Ophthalmology, The Fourth Affiliated Hospital of China Medical University, Eye Hospital of China Medical University, The Key Laboratory of Lens in Liaoning Province, Shenyang, China
- Mingyu Shi
- Department of Ophthalmology, The Fourth Affiliated Hospital of China Medical University, Eye Hospital of China Medical University, The Key Laboratory of Lens in Liaoning Province, Shenyang, China
9
Soleimani M, Esmaili K, Rahdar A, Aminizadeh M, Cheraqpour K, Tabatabaei SA, Mirshahi R, Bibak Z, Mohammadi SF, Koganti R, Yousefi S, Djalilian AR. From the diagnosis of infectious keratitis to discriminating fungal subtypes; a deep learning-based study. Sci Rep 2023; 13:22200. [PMID: 38097753; PMCID: PMC10721811; DOI: 10.1038/s41598-023-49635-8]
Abstract
Infectious keratitis (IK) is a major cause of corneal opacity. IK can be caused by a variety of microorganisms; typically, fungal ulcers carry the worst prognosis. Fungal cases can be subdivided into filamentous and yeast forms, which show fundamental differences. Delays in diagnosis or initiation of treatment increase the risk of ocular complications. Currently, the diagnosis of IK is mainly based on slit-lamp examination and corneal scrapings. Notably, these diagnostic methods have drawbacks, including experience dependency, tissue damage, and time consumption. Artificial intelligence (AI) is designed to mimic and enhance human decision-making, and an increasing number of studies have utilized AI in the diagnosis of IK. In this paper, we propose using AI to diagnose IK (model 1), differentiate between bacterial keratitis and fungal keratitis (model 2), and discriminate the filamentous type from the yeast type of fungal cases (model 3). Overall, 9329 slit-lamp photographs gathered from 977 patients were included in the study. The models exhibited remarkable accuracy, with model 1 achieving 99.3%, model 2 at 84%, and model 3 reaching 77.5%. In conclusion, our study offers valuable support in the early identification of potential fungal and bacterial keratitis cases and helps enable timely management.
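The three models form a simple decision cascade. The sketch below shows that control flow only; the predict_* callables are hypothetical stand-ins for the trained classifiers.

```python
def classify_slit_lamp(image, predict_ik, predict_bact_vs_fungal, predict_fungal_subtype):
    """Cascade: model 1 (IK?), model 2 (bacterial vs. fungal), model 3 (fungal subtype)."""
    if not predict_ik(image):
        return "no infectious keratitis"
    if predict_bact_vs_fungal(image) == "bacterial":
        return "bacterial keratitis"
    return f"fungal keratitis ({predict_fungal_subtype(image)})"

# Toy usage with stand-in predictors
print(classify_slit_lamp("photo.jpg",
                         predict_ik=lambda img: True,
                         predict_bact_vs_fungal=lambda img: "fungal",
                         predict_fungal_subtype=lambda img: "filamentous"))
```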
Affiliation(s)
- Mohammad Soleimani
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, USA
- Kosar Esmaili
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Amir Rahdar
- Department of Telecommunication, Faculty of Electrical Engineering, Shahid Beheshti University, Tehran, Iran
- Mehdi Aminizadeh
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Kasra Cheraqpour
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Seyed Ali Tabatabaei
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Reza Mirshahi
- Eye Research Center, The Five Senses Health Institute, Rasoul Akram Hospital, Iran University of Medical Sciences, Tehran, Iran
- Zahra Bibak
- Translational Ophthalmology Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Seyed Farzad Mohammadi
- Translational Ophthalmology Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Raghuram Koganti
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, USA
- Siamak Yousefi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, USA
- Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, USA
- Ali R Djalilian
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, USA
- Cornea Service, Stem Cell Therapy and Corneal Tissue Engineering Laboratory, Illinois Eye and Ear Infirmary, 1855 W. Taylor Street, M/C 648, Chicago, IL, 60612, USA
10
Feng HW, Chen JJ, Zhang ZC, Zhang SC, Yang WH. Bibliometric analysis of artificial intelligence and optical coherence tomography images: research hotspots and frontiers. Int J Ophthalmol 2023; 16:1431-1440. [PMID: 37724282; PMCID: PMC10475613; DOI: 10.18240/ijo.2023.09.09]
Abstract
AIM To explore the latest applications of artificial intelligence (AI) in optical coherence tomography (OCT) images, to analyze the current research status of AI in OCT, and to discuss future research trends. METHODS On June 1, 2023, a bibliometric analysis of the Web of Science Core Collection was performed to explore the utilization of AI in OCT imagery. Key parameters such as papers, countries/regions, citations, databases, organizations, keywords, journal names, and research hotspots were extracted and then visualized using the VOSviewer and CiteSpace V bibliometric platforms. RESULTS Fifty-five nations reported studies on AI biotechnology and its application in analyzing OCT images. The United States was the country with the largest number of published papers. Furthermore, 197 institutions worldwide contributed published articles, with the University of London having the most publications. The reference clusters from the study could be divided into four categories: thickness and eyes, diabetic retinopathy (DR), images and segmentation, and OCT classification. CONCLUSION The latest hot topics and future directions in this field are identified, and the dynamic evolution of AI-based OCT imaging is outlined. AI-based OCT imaging holds great potential for revolutionizing clinical care.
Affiliation(s)
- Hai-Wen Feng
- Department of Software Engineering, School of Software, Shenyang University of Technology, Shenyang 110870, Liaoning Province, China
- Jun-Jie Chen
- Department of Software Engineering, School of Software, Shenyang University of Technology, Shenyang 110870, Liaoning Province, China
- Zhi-Chang Zhang
- Department of Computer, School of Intelligent Medicine, China Medical University, Shenyang 110122, Liaoning Province, China
- Shao-Chong Zhang
- Shenzhen Eye Hospital, Jinan University, Shenzhen 518040, Guangdong Province, China
- Wei-Hua Yang
- Shenzhen Eye Hospital, Jinan University, Shenzhen 518040, Guangdong Province, China
11
Wiedemann P. Artificial intelligence in ophthalmology. Int J Ophthalmol 2023; 16:1357-1360. [PMID: 37724277; PMCID: PMC10409517; DOI: 10.18240/ijo.2023.09.01]
12
Bujoreanu Bezman L, Tiutiuca C, Totolici G, Carneciu N, Bujoreanu FC, Ciortea DA, Niculet E, Fulga A, Alexandru AM, Stan DJ, Nechita A. Latest Trends in Retinopathy of Prematurity: Research on Risk Factors, Diagnostic Methods and Therapies. Int J Gen Med 2023; 16:937-949. [PMID: 36942030; PMCID: PMC10024537; DOI: 10.2147/ijgm.s401122]
Abstract
Retinopathy of prematurity (ROP) is a vasoproliferative disorder with an imminent risk of blindness in cases where early diagnosis and treatment are not performed. The doctors' constant motivation to give these fragile beings a chance at life with optimal visual acuity has never stopped since Terry first described this condition. Thus, throughout time, several specific advancements have been made in the management of ROP. Apart from the most known risk factors, this narrative review brings to light the latest research on new potential risk factors, such as proteinuria, insulin-like growth factor 1 (IGF-1), and blood transfusions. Digital imaging has revolutionized the management of retinal pathologies, and it is increasingly used in identifying and staging ROP, particularly in disadvantaged regions by means of telescreening. Moreover, optical coherence tomography (OCT) and automated diagnostic tools based on deep learning offer new perspectives on ROP diagnosis. The new therapeutic trend based on the use of anti-VEGF agents is increasingly adopted in the treatment of ROP patients, and recent research sustains the theory that these agents do not interfere with the neurodevelopment of premature babies.
Affiliation(s)
- Laura Bujoreanu Bezman
- Department of Ophthalmology, “Sfantul Apostol Andrei” Emergency Clinical Hospital, Galati, Romania
- Department of Morphological and Functional Sciences, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Carmen Tiutiuca
- Department of Ophthalmology, “Sfantul Apostol Andrei” Emergency Clinical Hospital, Galati, Romania
- Clinical Surgical Department, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Correspondence: Carmen Tiutiuca, Clinical Surgical Department, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, 800008, Romania, Tel +40741330788
- Geanina Totolici
- Department of Ophthalmology, “Sfantul Apostol Andrei” Emergency Clinical Hospital, Galati, Romania
- Clinical Surgical Department, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Nicoleta Carneciu
- Department of Ophthalmology, “Sfantul Apostol Andrei” Emergency Clinical Hospital, Galati, Romania
- Department of Morphological and Functional Sciences, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Florin Ciprian Bujoreanu
- Doctoral School of Biomedical Sciences, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Correspondence: Florin Ciprian Bujoreanu, Doctoral School of Biomedical Sciences, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, 800008, Romania, Tel +40741395844
- Diana Andreea Ciortea
- Department of Pediatrics, “Sfantul Ioan” Emergency Clinical Hospital for Children, Galati, Romania
- Clinical Medical Department, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Elena Niculet
- Department of Morphological and Functional Sciences, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Doctoral School of Biomedical Sciences, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Ana Fulga
- Clinical Surgical Department, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Doctoral School of Biomedical Sciences, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Anamaria Madalina Alexandru
- Doctoral School of Biomedical Sciences, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Department of Neonatology, “Sfantul Apostol Andrei” Emergency Clinical Hospital, Galati, Romania
- Daniela Jicman Stan
- Doctoral School of Biomedical Sciences, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Aurel Nechita
- Department of Pediatrics, “Sfantul Ioan” Emergency Clinical Hospital for Children, Galati, Romania
- Clinical Medical Department, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
13
Puneet, Kumar R, Gupta M. Optical coherence tomography image based eye disease detection using deep convolutional neural network. Health Inf Sci Syst 2022; 10:13. [PMID: 35756852; PMCID: PMC9213631; DOI: 10.1007/s13755-022-00182-y]
Abstract
Over the past few decades, healthcare industries and medical practitioners have faced many obstacles in diagnosing medical problems due to inadequate technology and limited availability of equipment. In the present era, computer science technologies such as IoT, cloud computing, artificial intelligence, and their allied techniques play a crucial role in the identification of medical diseases, especially in the domain of ophthalmology. Despite this, ophthalmologists still have to perform various disease diagnosis tasks manually, which is time-consuming, and the chances of error are high because some ocular abnormalities present with the same symptoms. Furthermore, multiple autonomous systems exist to categorize these diseases, but their prediction rates do not reach state-of-the-art accuracy. By implementing the concepts of attention and transfer learning with a deep convolutional neural network, the proposed approach accomplished an accuracy of 97.79% and 95.6% on the training and testing data, respectively. This autonomous model efficiently classifies various ocular disorders, namely choroidal neovascularization, diabetic macular edema, and drusen, from optical coherence tomography images. It may provide a realistic solution for the healthcare sector, reducing the ophthalmologist's burden in the screening of diabetic retinopathy.
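A minimal sketch of the attention-plus-transfer-learning idea is given below, using an SE-style channel attention block on a pretrained ResNet-50; the backbone choice, attention form, and layer sizes are assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed form)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
    def forward(self, x):
        w = self.fc(x).view(x.size(0), -1, 1, 1)
        return x * w   # re-weight feature channels

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
features = nn.Sequential(*list(backbone.children())[:-2])    # pretrained conv features
model = nn.Sequential(features, ChannelAttention(2048),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                      nn.Linear(2048, 4))                     # CNV, DME, drusen, normal
print(model(torch.randn(1, 3, 224, 224)).shape)               # torch.Size([1, 4])
```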
Affiliation(s)
- Puneet
- Department of Computer Science and Engineering, Chandigarh University, Mohali, Punjab, India
- Rakesh Kumar
- Department of Computer Science and Engineering, Chandigarh University, Mohali, Punjab, India
- Meenu Gupta
- Department of Computer Science and Engineering, Chandigarh University, Mohali, Punjab, India
14
Thompson AC, Falconi A, Sappington RM. Deep learning and optical coherence tomography in glaucoma: Bridging the diagnostic gap on structural imaging. Front Ophthalmol 2022; 2:937205. [PMID: 38983522; PMCID: PMC11182271; DOI: 10.3389/fopht.2022.937205]
Abstract
Glaucoma is a leading cause of progressive blindness and visual impairment worldwide. Microstructural evidence of glaucomatous damage to the optic nerve head and associated tissues can be visualized using optical coherence tomography (OCT). In recent years, development of novel deep learning (DL) algorithms has led to innovative advances and improvements in automated detection of glaucomatous damage and progression on OCT imaging. DL algorithms have also been trained utilizing OCT data to improve detection of glaucomatous damage on fundus photography, thus improving the potential utility of color photos which can be more easily collected in a wider range of clinical and screening settings. This review highlights ten years of contributions to glaucoma detection through advances in deep learning models trained utilizing OCT structural data and posits future directions for translation of these discoveries into the field of aging and the basic sciences.
Affiliation(s)
- Atalie C. Thompson
- Department of Surgical Ophthalmology, Wake Forest School of Medicine, Winston Salem, NC, United States
- Department of Internal Medicine, Gerontology, and Geriatric Medicine, Wake Forest School of Medicine, Winston Salem, NC, United States
- Aurelio Falconi
- Wake Forest School of Medicine, Winston Salem, NC, United States
- Rebecca M. Sappington
- Department of Surgical Ophthalmology, Wake Forest School of Medicine, Winston Salem, NC, United States
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston Salem, NC, United States
15
Khan NC, Perera C, Dow ER, Chen KM, Mahajan VB, Mruthyunjaya P, Do DV, Leng T, Myung D. Predicting Systemic Health Features from Retinal Fundus Images Using Transfer-Learning-Based Artificial Intelligence Models. Diagnostics (Basel) 2022; 12:1714. [PMID: 35885619; PMCID: PMC9322827; DOI: 10.3390/diagnostics12071714]
Abstract
While color fundus photos are used in routine clinical practice to diagnose ophthalmic conditions, evidence suggests that ocular imaging contains valuable information regarding the systemic health features of patients. These features can be identified through computer vision techniques including deep learning (DL) artificial intelligence (AI) models. We aim to construct a DL model that can predict systemic features from fundus images and to determine the optimal method of model construction for this task. Data were collected from a cohort of patients undergoing diabetic retinopathy screening between March 2020 and March 2021. Two models were created for each of 12 systemic health features based on the DenseNet201 architecture: one utilizing transfer learning with images from ImageNet and another from 35,126 fundus images. Here, 1277 fundus images were used to train the AI models. Area under the receiver operating characteristics curve (AUROC) scores were used to compare the model performance. Models utilizing the ImageNet transfer learning data were superior to those using retinal images for transfer learning (mean AUROC 0.78 vs. 0.65, p-value < 0.001). Models using ImageNet pretraining were able to predict systemic features including ethnicity (AUROC 0.93), age > 70 (AUROC 0.90), gender (AUROC 0.85), ACE inhibitor (AUROC 0.82), and ARB medication use (AUROC 0.78). We conclude that fundus images contain valuable information about the systemic characteristics of a patient. To optimize DL model performance, we recommend that even domain specific models consider using transfer learning from more generalized image sets to improve accuracy.
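The ImageNet transfer-learning setup the study compares can be sketched in a few lines of PyTorch; whether to freeze the backbone, and the single-logit head, are assumptions for illustration.

```python
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained DenseNet201 adapted to one binary systemic feature (e.g. age > 70)
model = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                                        # reuse pretrained features
model.classifier = nn.Linear(model.classifier.in_features, 1)      # one logit per feature
criterion = nn.BCEWithLogitsLoss()
```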
Affiliation(s)
- Nergis C. Khan
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Chandrashan Perera
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Department of Ophthalmology, Fremantle Hospital, Perth, WA 6004, Australia
- Eliot R. Dow
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Karen M. Chen
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Vinit B. Mahajan
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Prithvi Mruthyunjaya
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Diana V. Do
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Theodore Leng
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- David Myung
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- VA Palo Alto Health Care System, Palo Alto, CA 94304, USA
- Correspondence: Tel.: +1-650-724-3948
16
Sharma P, Ninomiya T, Omodaka K, Takahashi N, Miya T, Himori N, Okatani T, Nakazawa T. A lightweight deep learning model for automatic segmentation and analysis of ophthalmic images. Sci Rep 2022; 12:8508. [PMID: 35595784; PMCID: PMC9122907; DOI: 10.1038/s41598-022-12486-w]
Abstract
Detection, diagnosis, and treatment of ophthalmic diseases depend on the extraction of information (features and/or their dimensions) from images. Deep learning (DL) models are crucial for automating this process. Here, we report the development of a lightweight DL model that can precisely segment and detect the required features automatically. The model uses dimensionality reduction of the image to extract important features, and channel contraction to allow only the high-level features necessary for reconstruction of the segmented feature image. The performance of the present model in detecting glaucoma from optical coherence tomography angiography (OCTA) images of the retina is high (area under the receiver-operator characteristic curve, AUC ~ 0.81). Bland-Altman analysis gave exceptionally low bias (~ 0.00185) and a high Pearson's correlation coefficient (p = 0.9969) between the parameters determined from manual and DL-based segmentation. On the same dataset, the bias is an order of magnitude higher (~ 0.0694, p = 0.8534) for commercial software. The present model is 10 times lighter than U-Net (popular for biomedical image segmentation) and has better segmentation accuracy and model-training reproducibility (based on the analysis of 3670 OCTA images). High Dice similarity coefficients (D) for a variety of ophthalmic images suggest its wider scope for precise segmentation of images even from other fields. Our concept of channel narrowing is not only important for segmentation problems; it can also significantly reduce the number of parameters in object classification models. Enhanced disease diagnostic accuracy can be achieved on resource-limited devices (such as mobile phones, NVIDIA Jetson, and Raspberry Pi) used in self-monitoring and tele-screening (memory size of the trained model ~ 35 MB).
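The Dice similarity coefficient used to judge segmentation quality is easy to state in code; the masks below are toy arrays, not OCTA data.

```python
import numpy as np

def dice(mask_a, mask_b, eps=1e-8):
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

manual = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])   # toy manual segmentation
dl_out = np.array([[0, 1, 1], [0, 0, 0], [0, 0, 0]])   # toy model segmentation
print(f"Dice = {dice(manual, dl_out):.2f}")            # 0.80
```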
Affiliation(s)
- Parmanand Sharma
- Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan
- Advanced Research Center for Innovations in Next-Generation Medicine, Tohoku University Graduate School of Medicine, Sendai, Japan
- Takahiro Ninomiya
- Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan
- Kazuko Omodaka
- Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan
- Department of Ophthalmic Imaging and Information Analytics, Tohoku University Graduate School of Medicine, Sendai, Japan
- Naoki Takahashi
- Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan
- Takehiro Miya
- Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan
- Department of Ophthalmic Imaging and Information Analytics, Tohoku University Graduate School of Medicine, Sendai, Japan
- Noriko Himori
- Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan
- Department of Aging Vision Healthcare, Tohoku University Graduate School of Biomedical Engineering, Sendai, Japan
- Takayuki Okatani
- Graduate School of Information Sciences, Tohoku University, Sendai, Japan
- Toru Nakazawa
- Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan
- Advanced Research Center for Innovations in Next-Generation Medicine, Tohoku University Graduate School of Medicine, Sendai, Japan
- Department of Retinal Disease Control, Tohoku University Graduate School of Medicine, Sendai, Japan
- Department of Ophthalmic Imaging and Information Analytics, Tohoku University Graduate School of Medicine, Sendai, Japan
- Department of Advanced Ophthalmic Medicine, Tohoku University Graduate School of Medicine, Sendai, Japan
17
Han J, Choi S, Park JI, Hwang JS, Han JM, Lee HJ, Ko J, Yoon J, Hwang DDJ. Classifying neovascular age-related macular degeneration with a deep convolutional neural network based on optical coherence tomography images. Sci Rep 2022; 12:2232. [PMID: 35140257; PMCID: PMC8828755; DOI: 10.1038/s41598-022-05903-7]
Abstract
Neovascular age-related macular degeneration (nAMD) is among the main causes of visual impairment worldwide. We built a deep learning model to distinguish the subtypes of nAMD using spectral domain optical coherence tomography (SD-OCT) images. Data from SD-OCT images of nAMD (polypoidal choroidal vasculopathy, retinal angiomatous proliferation, and typical nAMD) and normal healthy controls were analyzed using a convolutional neural network (CNN). The model was trained and validated on 4749 SD-OCT images from 347 patients and 50 healthy controls. To adopt an accurate and robust image classification architecture, we evaluated three well-known CNN structures (VGG-16, VGG-19, and ResNet) and two customized classification layers (fully connected layer with dropout vs. global average pooling). Following the test set performance, the model with the highest classification accuracy was used. Transfer learning and data augmentation were applied to improve the robustness and accuracy of the model. Our proposed model showed an accuracy of 87.4% on the test data (920 images), scoring higher than ten ophthalmologists on the same data. Additionally, the image regions that our model judged to be important for classification were confirmed through Grad-CAM images, showing that its judgment criteria are similar to those of ophthalmologists. Thus, we believe that our model can be used as an auxiliary tool in clinical practice.
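The two classification heads compared in the study (fully connected with dropout versus global average pooling) can be sketched on a shared VGG-16 feature extractor; the hidden sizes are assumptions.

```python
import torch.nn as nn
from torchvision import models

backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features  # conv blocks only

head_fc = nn.Sequential(nn.Flatten(), nn.Dropout(0.5),              # fully connected + dropout head
                        nn.Linear(512 * 7 * 7, 256), nn.ReLU(inplace=True),
                        nn.Linear(256, 4))                           # PCV, RAP, typical nAMD, normal

head_gap = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # global average pooling head
                         nn.Linear(512, 4))

model_fc, model_gap = nn.Sequential(backbone, head_fc), nn.Sequential(backbone, head_gap)
```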
Affiliation(s)
- Jinyoung Han
- Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul, Korea
- RAON DATA, Seoul, Korea
- Seong Choi
- Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul, Korea
- RAON DATA, Seoul, Korea
- Ji In Park
- Department of Medicine, Kangwon National University Hospital, Kangwon National University School of Medicine, Chuncheon, Gangwon-do, South Korea
- Hak Jun Lee
- Department of Ophthalmology, Hangil Eye Hospital, 35 Bupyeong-daero, Bupyeong-gu, Incheon, 21388, Korea
- Junseo Ko
- Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul, Korea
- RAON DATA, Seoul, Korea
- Jeewoo Yoon
- Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul, Korea
- RAON DATA, Seoul, Korea
- Daniel Duck-Jin Hwang
- Department of Ophthalmology, Hangil Eye Hospital, 35 Bupyeong-daero, Bupyeong-gu, Incheon, 21388, Korea
- Lux Mind, Incheon, Korea
- Department of Ophthalmology, Catholic Kwandong University College of Medicine, Incheon, Korea
18
Lichtenegger A, Salas M, Sing A, Duelk M, Licandro R, Gesperger J, Baumann B, Drexler W, Leitgeb RA. Reconstruction of visible light optical coherence tomography images retrieved from discontinuous spectral data using a conditional generative adversarial network. Biomed Opt Express 2021; 12:6780-6795. [PMID: 34858680; PMCID: PMC8606123; DOI: 10.1364/boe.435124]
Abstract
Achieving high resolution in optical coherence tomography typically requires the continuous extension of the spectral bandwidth of the light source. This work demonstrates an alternative approach: combining two discrete spectral windows located in the visible spectrum with a trained conditional generative adversarial network (cGAN) to reconstruct a high-resolution image equivalent to that generated using a continuous spectral band. The cGAN was trained using OCT image pairs acquired with the continuous and discontinuous visible range spectra to learn the relation between low- and high-resolution data. The reconstruction performance was tested using 6000 B-scans of a layered phantom, micro-beads, and ex vivo mouse ear tissue. The resulting cGAN-generated images demonstrate image quality and axial resolution that approach those of the high-resolution system.
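A minimal sketch of the conditional-GAN objective described above, in the pix2pix style: the generator maps a B-scan reconstructed from the discontinuous spectrum to a high-resolution equivalent, and the discriminator judges (input, output) pairs. The tiny stand-in networks, the L1 weight, and the optimizer settings are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),  # patch-wise real/fake scores
        )
    def forward(self, low_res, candidate):
        # Conditioning: the discriminator sees the input and the candidate output together.
        return self.net(torch.cat([low_res, candidate], dim=1))

gen, disc = TinyGenerator(), TinyDiscriminator()
g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
LAMBDA_L1 = 100.0  # assumed weight on the reconstruction term

def train_step(low_res, high_res):
    # Discriminator update: real pairs vs. generated pairs.
    fake = gen(low_res)
    d_real = disc(low_res, high_res)
    d_fake = disc(low_res, fake.detach())
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator update: fool the discriminator while staying close to the target image.
    d_fake = disc(low_res, fake)
    g_loss = bce(d_fake, torch.ones_like(d_fake)) + LAMBDA_L1 * l1(fake, high_res)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

# Example with random tensors standing in for paired low-/high-resolution B-scans:
print(train_step(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)))
```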
Collapse
Affiliation(s)
- Antonia Lichtenegger
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Austria
- Christian Doppler Laboratory for Innovative Optical Imaging and Its Translation to Medicine, Medical University of Vienna, Austria
- These authors contributed equally
| | - Matthias Salas
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Austria
- Christian Doppler Laboratory for Innovative Optical Imaging and Its Translation to Medicine, Medical University of Vienna, Austria
- These authors contributed equally
| | - Alexander Sing
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Austria
| | | | - Roxane Licandro
- Department of Biomedical Imaging and Image-guided Therapy, Computational Imaging Research, Medical University of Vienna, Austria
- Institute of Visual Computing and Human-Centered Technology, Computer Vision Lab, TU Wien, Austria
| | - Johanna Gesperger
- Division of Neuropathology and Neurochemistry, Department of Neurology, Medical University of Vienna, Austria
| | - Bernhard Baumann
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Austria
| | - Wolfgang Drexler
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Austria
| | - Rainer A. Leitgeb
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Austria
- Christian Doppler Laboratory for Innovative Optical Imaging and Its Translation to Medicine, Medical University of Vienna, Austria
| |
Collapse
|
19
|
Jahangir S, Khan HA. Artificial intelligence in ophthalmology and visual sciences: Current implications and future directions. Artif Intell Med Imaging 2021; 2:95-103. [DOI: 10.35711/aimi.v2.i5.95] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/03/2021] [Revised: 06/30/2021] [Accepted: 10/27/2021] [Indexed: 02/06/2023] Open
Abstract
Since its inception in 1959, artificial intelligence (AI) has evolved at an unprecedented rate and has revolutionized the world of medicine. Ophthalmology, being an image-driven field of medicine, is well-suited for the implementation of AI. Machine learning (ML) and deep learning (DL) models are being utilized to screen for vision-threatening ocular conditions. These models have proven to be accurate and reliable for diagnosing anterior and posterior segment diseases, screening large populations, and even predicting the natural course of various ocular morbidities. With the increase in population and the global burden of managing irreversible blindness, AI offers a unique solution when implemented in clinical practice. In this review, we discuss what AI, ML, and DL are, their uses, future directions for AI, and its limitations in ophthalmology.
Collapse
Affiliation(s)
- Smaha Jahangir
- School of Optometry, The University of Faisalabad, Faisalabad, Punjab 38000, Pakistan
| | - Hashim Ali Khan
- Department of Ophthalmology, SEHHAT Foundation, Gilgit 15100, Gilgit-Baltistan, Pakistan
| |
Collapse
|
20
|
Leitgeb R, Placzek F, Rank E, Krainz L, Haindl R, Li Q, Liu M, Andreana M, Unterhuber A, Schmoll T, Drexler W. Enhanced medical diagnosis for dOCTors: a perspective of optical coherence tomography. JOURNAL OF BIOMEDICAL OPTICS 2021; 26:JBO-210150-PER. [PMID: 34672145 PMCID: PMC8528212 DOI: 10.1117/1.jbo.26.10.100601] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/02/2021] [Accepted: 09/23/2021] [Indexed: 05/17/2023]
Abstract
SIGNIFICANCE After three decades, more than 75,000 publications, tens of companies involved in its commercialization, and a global market perspective of about USD 1.5 billion in 2023, optical coherence tomography (OCT) has become one of the most rapidly and successfully translated imaging techniques, with substantial clinical and economic impact and acceptance. AIM Our perspective focuses on disruptive forward-looking innovations and key technologies to further boost OCT performance and thereby enable significantly enhanced medical diagnosis. APPROACH A comprehensive review of state-of-the-art accomplishments in OCT has been performed. RESULTS The most disruptive future OCT innovations include improvements in imaging resolution and speed (single-beam raster scanning versus parallelization), new implementations for dual-modality or even multimodality systems, and the use of endogenous or exogenous contrast in these hybrid OCT systems targeting molecular and metabolic imaging. Aside from OCT angiography, no other functional or contrast-enhancing OCT extension has accomplished comparable clinical and commercial impact. Some more recently developed extensions, e.g., optical coherence elastography, dynamic contrast OCT, optoretinography, and artificial intelligence-enhanced OCT, are also considered to have high potential for the future. In addition, OCT miniaturization for portable, compact, handheld, and/or cost-effective capsule-based OCT applications, home-OCT, and self-OCT systems based on micro-optic assemblies or photonic integrated circuits will enable new applications and broader availability in the near future. Finally, clinical translation of OCT, including medical device regulatory challenges, will continue to be absolutely essential. CONCLUSIONS With its exquisite non-invasive, micrometer-resolution depth sectioning capability, OCT has especially revolutionized ophthalmic diagnosis and hence is the most rapidly adopted imaging technology in the history of ophthalmology. Nonetheless, OCT has not been fully exploited and has substantial growth potential, in academia as well as in industry. This applies not only to the ophthalmic application field, but especially to the original motivation of OCT: to enable optical biopsy, i.e., the in situ imaging of tissue microstructure with a resolution approaching that of histology but without the need for tissue excision.
Collapse
Affiliation(s)
- Rainer Leitgeb
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
- Medical University of Vienna, Christian Doppler Laboratory OPTRAMED, Vienna, Austria
| | - Fabian Placzek
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
| | - Elisabet Rank
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
| | - Lisa Krainz
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
| | - Richard Haindl
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
| | - Qian Li
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
| | - Mengyang Liu
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
| | - Marco Andreana
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
| | - Angelika Unterhuber
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
| | - Tilman Schmoll
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
- Carl Zeiss Meditec, Inc., Dublin, California, United States
| | - Wolfgang Drexler
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
- Address all correspondence to Wolfgang Drexler,
| |
Collapse
|
21
|
Baxter SL, Lee AY. Gaps in standards for integrating artificial intelligence technologies into ophthalmic practice. Curr Opin Ophthalmol 2021; 32:431-438. [PMID: 34231531 PMCID: PMC8373825 DOI: 10.1097/icu.0000000000000781] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
Abstract
PURPOSE OF REVIEW The purpose of this review is to provide an overview of healthcare standards and their relevance to multiple ophthalmic workflows, with a specific emphasis on describing gaps in standards development needed for improved integration of artificial intelligence technologies into ophthalmic practice. RECENT FINDINGS Healthcare standards are an essential component of data exchange and critical for clinical practice, research, and public health surveillance activities. Standards enable interoperability between clinical information systems, healthcare information exchange between institutions, and clinical decision support in a complex health information technology ecosystem. There are several gaps in standards in ophthalmology, including relatively low adoption of imaging standards, lack of use cases for integrating apps providing artificial intelligence-based decision support, lack of common data models to harmonize big data repositories, and no standards regarding interfaces and algorithmic outputs. SUMMARY These gaps in standards represent opportunities for future work to develop improved data flow between various elements of the digital health ecosystem. This will enable more widespread adoption and integration of artificial intelligence-based tools into clinical practice. Engagement and support from the ophthalmology community for standards development will be important for advancing this work.
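As a concrete illustration of what "adoption of imaging standards" can look like in practice, the following minimal sketch reads metadata from an OCT scan exported as DICOM rather than in a proprietary vendor format, using the open-source pydicom reader. The file path is hypothetical, and the example is not drawn from the review itself.

```python
import pydicom

# Hypothetical DICOM export of an OCT acquisition.
ds = pydicom.dcmread("oct_volume.dcm")

# Standardized tags make the same query work across vendors and institutions.
print(ds.get("Modality"))        # typically "OPT" for ophthalmic tomography
print(ds.get("Manufacturer"))
print(ds.get("Rows"), ds.get("Columns"))
print(ds.get("NumberOfFrames"))  # number of B-scans in the volume, when present
```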
Collapse
Affiliation(s)
- Sally L. Baxter
- Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA
- Health Department of Biomedical Informatics, University of California San Diego, La Jolla, CA, USA
| | - Aaron Y. Lee
- Department of Ophthalmology, University of Washington, Seattle, WA, USA
| |
Collapse
|
22
|
Oke I, VanderVeen D. Machine Learning Applications in Pediatric Ophthalmology. Semin Ophthalmol 2021; 36:210-217. [PMID: 33641598 DOI: 10.1080/08820538.2021.1890151] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
Purpose: To describe emerging applications of machine learning (ML) in pediatric ophthalmology with an emphasis on the diagnosis and treatment of disorders affecting visual development. Methods: Literature review of studies applying ML algorithms to problems in pediatric ophthalmology. Results: At present, the ML literature emphasizes applications in retinopathy of prematurity. However, there are increasing efforts to apply ML techniques in the diagnosis of amblyogenic conditions such as pediatric cataracts, strabismus, and high refractive error. Conclusions: A greater understanding of the principles governing ML will enable pediatric eye care providers to apply the methodology to unexplored challenges within the subspecialty.
Collapse
Affiliation(s)
- Isdin Oke
- Department of Ophthalmology, Boston Children's Hospital, Boston, MA, USA
- Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
| | - Deborah VanderVeen
- Department of Ophthalmology, Boston Children's Hospital, Boston, MA, USA
- Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
| |
Collapse
|
23
|
Artificial Intelligence and the Medical Physicist: Welcome to the Machine. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app11041691] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Artificial intelligence (AI) is a branch of computer science dedicated to giving machines or computers the ability to perform human-like cognitive functions, such as learning, problem-solving, and decision making. Since it shows performance superior to that of well-trained humans in many areas, such as image classification, object detection, speech recognition, and decision-making, AI is expected to profoundly change every area of science, including healthcare and the clinical application of physics to healthcare, referred to as medical physics. As a result, the Italian Association of Medical Physics (AIFM) has created the “AI for Medical Physics” (AI4MP) group with the aims of coordinating efforts, facilitating communication, and sharing knowledge on AI among the medical physicists (MPs) in Italy. The purpose of this review is to summarize the main applications of AI in medical physics, describe the skills of MPs in research and clinical applications of AI, and define the major challenges of AI in healthcare.
Collapse
|
24
|
Terasaki H, Sonoda S, Tomita M, Sakamoto T. Recent Advances and Clinical Application of Color Scanning Laser Ophthalmoscope. J Clin Med 2021; 10:jcm10040718. [PMID: 33670287 PMCID: PMC7917686 DOI: 10.3390/jcm10040718] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2020] [Revised: 02/05/2021] [Accepted: 02/09/2021] [Indexed: 12/14/2022] Open
Abstract
Scanning laser ophthalmoscopes (SLOs) have been available since the early 1990s, but they were not commonly used because their advantages were not sufficient to replace conventional color fundus photography. In recent years, color SLOs have improved significantly; color SLO images are obtained by combining multiple SLO images taken with lasers of different wavelengths, and the combination of these images can create an image close to that of the real ocular fundus. One advantage of the advanced SLOs is that they can obtain images with a wider view of the ocular fundus while maintaining high resolution, even through non-dilated pupils. Current SLOs are superior to conventional fundus photography in their ability to image abnormal alterations of the retina and choroid. Thus, the purpose of this review is to present the characteristics of current color SLOs and to show how they can help in diagnosis and in following changes after treatment. To accomplish these goals, we present our findings in patients with different types of retinochoroidal disorders.
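A minimal sketch of the channel-combination idea described above: reflectance images acquired with different laser wavelengths are stacked into a pseudo-color composite. The wavelength-to-RGB mapping, the min-max normalization, and the image size are illustrative assumptions, not a device manufacturer's actual processing pipeline.

```python
import numpy as np

def normalize(img: np.ndarray) -> np.ndarray:
    # Rescale a single-wavelength reflectance image to [0, 1].
    img = img.astype(np.float32)
    return (img - img.min()) / (img.max() - img.min() + 1e-8)

def pseudo_color_slo(red_laser: np.ndarray,
                     green_laser: np.ndarray,
                     blue_laser: np.ndarray) -> np.ndarray:
    # Map each laser reflectance image to one color channel of the composite.
    return np.stack([normalize(red_laser),
                     normalize(green_laser),
                     normalize(blue_laser)], axis=-1)

# Example with random arrays standing in for the single-wavelength captures:
composite = pseudo_color_slo(*(np.random.rand(768, 768) for _ in range(3)))
print(composite.shape)  # (768, 768, 3)
```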
Collapse
Affiliation(s)
- Hiroto Terasaki
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima 890-8544, Japan; (S.S.); (M.T.); (T.S.)
- Correspondence: ; Tel.: +81-99-275-5402; Fax: +81-99-265-4894
| | - Shozo Sonoda
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima 890-8544, Japan; (S.S.); (M.T.); (T.S.)
- Kagoshima Sonoda Eye & Plastic Surgery Clinic, Kagoshima 890-0053, Japan
| | - Masatoshi Tomita
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima 890-8544, Japan; (S.S.); (M.T.); (T.S.)
| | - Taiji Sakamoto
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima 890-8544, Japan; (S.S.); (M.T.); (T.S.)
| |
Collapse
|