1
Moraes G, Struyven R, Wagner SK, Liu T, Chong D, Abbas A, Chopra R, Patel PJ, Balaskas K, Keenan TD, Keane PA. Quantifying Changes on OCT in Eyes Receiving Treatment for Neovascular Age-Related Macular Degeneration. Ophthalmology Science 2024; 4:100570. PMID: 39224530; PMCID: PMC11367487; DOI: 10.1016/j.xops.2024.100570.
Abstract
Purpose: To apply artificial intelligence (AI) to macular OCT scans to segment and quantify volumetric change in anatomical and pathological features during intravitreal treatment for neovascular age-related macular degeneration (AMD).
Design: Retrospective analysis of OCT images from the Moorfields Eye Hospital AMD Database.
Participants: A total of 2115 eyes from 1801 patients starting anti-VEGF treatment between June 1, 2012, and June 30, 2017.
Methods: The Moorfields Eye Hospital neovascular AMD database was queried for first and second eyes that received anti-VEGF treatment and had an OCT scan at baseline and 12 months. Follow-up scans were input into the AI system, and volumes of OCT variables were studied at different time points and compared with baseline volume groups. Cross-sectional comparisons between time points were conducted using the Mann-Whitney U test.
Main Outcome Measures: Volume outputs of the following variables: intraretinal fluid, subretinal fluid, pigment epithelial detachment (PED), subretinal hyperreflective material (SHRM), hyperreflective foci, neurosensory retina, and retinal pigment epithelium.
Results: Mean volumes of the analyzed features decreased significantly from baseline to both 4 and 12 months, in both first-treated and second-treated eyes. Pathological features that reflect exudation, including pure fluid components (intraretinal and subretinal fluid) and those combining fluid with fibrovascular tissue (PED and SHRM), displayed similar responses to treatment over 12 months. Mean PED and SHRM volumes showed less pronounced but still substantial decreases over the first 2 months, reached a plateau after the loading phase, and changed minimally to 12 months. Neurosensory retina and retinal pigment epithelium volumes showed gradual reductions over time that were less substantial than those of the exudative features.
Conclusions: We report a quantitative analysis of change in segmented retinal features over time, enabled by an AI segmentation system. Cross-sectional analysis at multiple time points demonstrated significant associations between baseline OCT-derived segmented features and the volume of biomarkers at follow-up. Demonstrating how certain OCT biomarkers progress with treatment, and how pretreatment retinal morphology affects different structural volumes, may provide novel insights into disease mechanisms and aid the personalization of care. Data will be made public for future studies.
Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
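[Editor's note] The cross-sectional comparisons in this study used the Mann-Whitney U test on feature volumes at different time points. A minimal sketch of such a comparison on synthetic data (the distributions, sample sizes, and variable names are illustrative placeholders, not the study's data):

```python
# Hypothetical subretinal-fluid volumes (mm^3) at baseline and month 12;
# gamma-distributed placeholders standing in for the study's measurements.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
baseline = rng.gamma(shape=2.0, scale=0.15, size=200)
month12 = rng.gamma(shape=2.0, scale=0.05, size=200)  # smaller volumes after treatment

# Two-sided rank-based comparison, as described in the Methods.
stat, p = mannwhitneyu(baseline, month12, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3g}")
```

The Mann-Whitney U test is a reasonable default here because volume distributions are typically skewed, making rank-based tests preferable to the t-test.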
Affiliation(s)
- Gabriella Moraes
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Robbert Struyven
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Siegfried K. Wagner
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Timing Liu
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- David Chong
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Abdallah Abbas
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Reena Chopra
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Praveen J. Patel
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Konstantinos Balaskas
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Tiarnan D.L. Keenan
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Pearse A. Keane
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
2
Zhang H, Zhang K, Wang J, Yu S, Li Z, Yin S, Zhu J, Wei W. Quickly diagnosing Bietti crystalline dystrophy with deep learning. iScience 2024; 27:110579. PMID: 39220263; PMCID: PMC11365386; DOI: 10.1016/j.isci.2024.110579.
Abstract
Bietti crystalline dystrophy (BCD) is an autosomal recessive inherited retinal disease (IRD) whose early and precise diagnosis is challenging. This study aims to diagnose BCD and classify its clinical stage from ultra-wide-field (UWF) color fundus photographs (CFPs) via deep learning (DL). All CFPs were labeled as BCD, retinitis pigmentosa (RP), or normal, and the BCD patients were further divided into three stages. The DL models ResNeXt, Wide ResNet, and ResNeSt were developed, and model performance was evaluated using accuracy and confusion matrices. Diagnostic interpretability was then verified with heatmaps. The models achieved good classification results. Our study established the largest BCD database of a Chinese population. We developed a quick diagnostic method for BCD and evaluated the potential efficacy of an automatic diagnosis and grading DL algorithm based on UWF fundus photography in a Chinese cohort of BCD patients.
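[Editor's note] The evaluation step described above (accuracy plus a confusion matrix for a three-way BCD / RP / normal classifier) can be sketched as follows; the predictions here are hypothetical stand-ins for a trained network's output, not the paper's results:

```python
# Three-way evaluation: 0 = BCD, 1 = RP, 2 = normal.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

y_true = np.array([0, 0, 1, 1, 2, 2, 0, 1, 2, 0])
y_pred = np.array([0, 0, 1, 2, 2, 2, 0, 1, 2, 0])  # one RP scan misread as normal

acc = accuracy_score(y_true, y_pred)
cm = confusion_matrix(y_true, y_pred, labels=[0, 1, 2])  # rows: truth, cols: prediction
print("accuracy:", acc)   # 9 of 10 correct -> 0.9
print(cm)
```

Off-diagonal cells of the confusion matrix show which disease pairs the model confuses, which is exactly the information a plain accuracy number hides.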
Affiliation(s)
- Haihan Zhang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Kai Zhang
- Chongqing Chang’an Industrial Group Co. Ltd, Chongqing, China
- Jinyuan Wang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- School of Clinical Medicine, Tsinghua University, Beijing, China
- Shicheng Yu
- Department of Ophthalmology, Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Peking University Third Hospital, Beijing, China
- Zhixi Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou 510060, China
- Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Shiyi Yin
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Jingyuan Zhu
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Wenbin Wei
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
3
Li K, Yang J, Liang W, Li X, Zhang C, Chen L, Wu C, Zhang X, Xu Z, Wang Y, Meng L, Zhang Y, Chen Y, Zhou SK. O-PRESS: Boosting OCT axial resolution with Prior guidance, Recurrence, and Equivariant Self-Supervision. Med Image Anal 2024; 99:103319. PMID: 39270466; DOI: 10.1016/j.media.2024.103319.
Abstract
Optical coherence tomography (OCT) is a noninvasive technology that enables real-time imaging of tissue microanatomy. The axial resolution of OCT is intrinsically constrained by the spectral bandwidth of the light source while maintaining a fixed center wavelength for a specific application. Physically extending this bandwidth faces strong limitations and incurs substantial cost. We present a novel computational approach, called O-PRESS, for boosting the axial resolution of OCT with Prior guidance, a Recurrent mechanism, and Equivariant Self-Supervision. Diverging from conventional deconvolution methods that rely on physical models or data-driven techniques, our method seamlessly integrates OCT modeling and deep learning, enabling real-time axial-resolution enhancement exclusively from measurements, without the need for paired images. Our approach solves the two primary tasks of resolution enhancement and noise reduction in one treatment. Both tasks are executed in a self-supervised manner, with equivariant imaging and free-space priors guiding their respective processes. Experimental evaluations, encompassing both quantitative metrics and visual assessments, consistently verify the efficacy and superiority of our approach, which performs on par with fully supervised methods. Importantly, the robustness of our model is affirmed, showcasing its dual capability to enhance axial resolution while concurrently improving the signal-to-noise ratio.
Affiliation(s)
- Kaiyan Li
- School of Biomedical Engineering, Division of Life Sciences and Medicine, University of Science and Technology of China (USTC), Hefei Anhui, 230026, China; Center for Medical Imaging, Robotics, Analytic Computing & Learning (MIRACLE), Suzhou Institute for Advanced Research, USTC, Suzhou Jiangsu, 215123, China
- Jingyuan Yang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Wenxuan Liang
- School of Biomedical Engineering, Division of Life Sciences and Medicine, University of Science and Technology of China (USTC), Hefei Anhui, 230026, China; Center for Medical Imaging, Robotics, Analytic Computing & Learning (MIRACLE), Suzhou Institute for Advanced Research, USTC, Suzhou Jiangsu, 215123, China; School of Physical Sciences, University of Science and Technology of China, Hefei Anhui, 230026, China
- Xingde Li
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, 21287, USA
- Chenxi Zhang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Lulu Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Chan Wu
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Xiao Zhang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Zhiyan Xu
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Yueling Wang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Lihui Meng
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Yue Zhang
- School of Biomedical Engineering, Division of Life Sciences and Medicine, University of Science and Technology of China (USTC), Hefei Anhui, 230026, China; Center for Medical Imaging, Robotics, Analytic Computing & Learning (MIRACLE), Suzhou Institute for Advanced Research, USTC, Suzhou Jiangsu, 215123, China
- Youxin Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China.
- S Kevin Zhou
- School of Biomedical Engineering, Division of Life Sciences and Medicine, University of Science and Technology of China (USTC), Hefei Anhui, 230026, China; Center for Medical Imaging, Robotics, Analytic Computing & Learning (MIRACLE), Suzhou Institute for Advanced Research, USTC, Suzhou Jiangsu, 215123, China; Key Laboratory of Precision and Intelligent Chemistry, USTC, Hefei Anhui, 230026, China; Key Laboratory of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing, 100190, China.
4
Gim N, Wu Y, Blazes M, Lee CS, Wang RK, Lee AY. A Clinician's Guide to Sharing Data for AI in Ophthalmology. Invest Ophthalmol Vis Sci 2024; 65:21. PMID: 38864811; PMCID: PMC11174091; DOI: 10.1167/iovs.65.6.21.
Abstract
Data are the cornerstone of AI models, because model performance depends directly on the diversity, quantity, and quality of the training data. AI holds particular promise for data-rich medical fields such as ophthalmology, which encompasses a variety of imaging methods, medical records, and eye-tracking data. However, sharing medical data is challenging because of regulatory issues and privacy concerns. This review explores traditional and nontraditional data sharing methods in medicine, focusing on previous work in ophthalmology. Traditional methods involve direct data transfer, whereas newer approaches prioritize security and privacy by sharing derived datasets, creating secure research environments, or using model-to-data strategies. We examine each method's mechanisms, variations, recent applications in ophthalmology, and respective advantages and disadvantages. By giving medical researchers insight into data sharing methods and considerations, this review aims to support informed decision-making while upholding ethical standards and patient privacy in medical AI development.
Affiliation(s)
- Nayoon Gim
- Department of Ophthalmology, University of Washington, Seattle, WA, United States
- The Roger and Angie Karalis Retina Center, Seattle, Washington, United States
- Department of Bioengineering, University of Washington, Seattle, WA, United States
- Yue Wu
- Department of Ophthalmology, University of Washington, Seattle, WA, United States
- The Roger and Angie Karalis Retina Center, Seattle, Washington, United States
- Marian Blazes
- Department of Ophthalmology, University of Washington, Seattle, WA, United States
- The Roger and Angie Karalis Retina Center, Seattle, Washington, United States
- Cecilia S. Lee
- Department of Ophthalmology, University of Washington, Seattle, WA, United States
- The Roger and Angie Karalis Retina Center, Seattle, Washington, United States
- Ruikang K. Wang
- Department of Ophthalmology, University of Washington, Seattle, WA, United States
- Department of Bioengineering, University of Washington, Seattle, WA, United States
- Aaron Y. Lee
- Department of Ophthalmology, University of Washington, Seattle, WA, United States
- The Roger and Angie Karalis Retina Center, Seattle, Washington, United States
5
Roubelat FP, Soler V, Varenne F, Gualino V. Real-world artificial intelligence-based interpretation of fundus imaging as part of an eyewear prescription renewal protocol. J Fr Ophtalmol 2024; 47:104130. PMID: 38461084; DOI: 10.1016/j.jfo.2024.104130.
Abstract
Objective: A real-world evaluation of the diagnostic accuracy of the Opthai® software for artificial intelligence-based detection of fundus image abnormalities in the context of the French eyewear prescription renewal protocol (RNO).
Methods: A single-center, retrospective review of the sensitivity and specificity of the software in detecting fundus abnormalities among consecutive patients seen in our ophthalmology center under the RNO protocol from July 28 through October 22, 2021. We compared abnormalities detected by the software operated by ophthalmic technicians (index test) to diagnoses confirmed by the ophthalmologist following additional examinations and/or consultation (reference test).
Results: The study included 2056 eyes/fundus images of 1028 patients aged 6-50 years. The software detected fundus abnormalities in 149 (7.2%) eyes or 107 (10.4%) patients. After examining the same fundus images, the ophthalmologist detected abnormalities in 35 (1.7%) eyes or 20 (1.9%) patients. The ophthalmologist did not detect abnormalities in fundus images deemed normal by the software. The most frequent diagnoses made by the ophthalmologist were glaucoma suspect (0.5% of eyes), peripapillary atrophy (0.44% of eyes), and drusen (0.39% of eyes). The software showed an overall sensitivity of 100% (95% CI 0.879-1.00) and an overall specificity of 94.4% (95% CI 0.933-0.953). The majority of the false-positive software detections (5.6%) were glaucoma suspect, with the differential diagnosis of large physiological optic cups. Immediate OCT imaging by the technician allowed diagnosis by the ophthalmologist without a separate consultation for 43/53 (81%) patients.
Conclusion: Ophthalmic technicians can use this software for highly sensitive screening for fundus abnormalities that require evaluation by an ophthalmologist.
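[Editor's note] The headline metrics follow directly from the counts reported in the abstract: 2056 eyes, 35 truly abnormal, all 35 flagged by the software, and 149 total software detections (hence 114 false positives). A quick check:

```python
# Per-eye counts taken from the abstract above.
tp, fn = 35, 0          # all 35 abnormal eyes were flagged
fp = 149 - 35           # software detections minus true abnormals
tn = 2056 - 35 - fp     # remaining normal eyes correctly passed

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity = {sensitivity:.3f}, specificity = {specificity:.3f}")
# Matches the reported 100% sensitivity and 94.4% specificity.
```

The confidence intervals quoted in the abstract (e.g. 0.879-1.00 for sensitivity) would additionally require an exact or Wilson binomial interval on these proportions, which is not computed here.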
Affiliation(s)
- F-P Roubelat
- Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital, Toulouse, France
- V Soler
- Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital, Toulouse, France
- F Varenne
- Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital, Toulouse, France
- V Gualino
- Ophthalmology Department, Clinique Honoré-Cave, Montauban, France.
6
Adithiya SV, Dharani Bai G, Raman R. Automatic Identification and Severity Classification of Retinal Biomarkers in SD-OCT Using Dilated Depthwise Separable Convolution ResNet with SVM Classifier. Curr Eye Res 2024; 49:513-523. PMID: 38251704; DOI: 10.1080/02713683.2024.2303713.
Abstract
Purpose: Diagnosis of uveitic macular edema (UME) using spectral-domain OCT (SD-OCT) is a promising method for early detection and monitoring of sight-threatening visual impairment. Viewing multiple B-scans and identifying biomarkers is challenging and time-consuming for clinical practitioners. To overcome these challenges, this paper proposes a hybrid image-classification framework for predicting the presence of biomarkers such as intraretinal cysts (IRC), hyperreflective foci (HRF), hard exudates (HE), and neurosensory detachment (NSD) in OCT B-scans, along with their severity.
Methods: A dataset of 10,880 B-scans from 85 uveitic patients was collected and graded by two board-certified ophthalmologists for the presence of biomarkers. A novel image classification framework, Dilated Depthwise Separable Convolution ResNet (DDSC-RN) with an SVM classifier, was developed to achieve network compression with a larger receptive field that captures both low- and high-level features of the biomarkers without loss of classification accuracy. The severity level of each biomarker was predicted from the feature map extracted by the proposed DDSC-RN network.
Results: The proposed hybrid model was evaluated against ground-truth labels from the hospital. The deep learning model first identified the presence of biomarkers in B-scans, achieving an overall accuracy of 98.64%, comparable to the performance of other state-of-the-art models such as DRN-C-42 and ResNet-34. The SVM classifier then predicted the severity of each biomarker, achieving an overall accuracy of 89.3%.
Conclusions: The new hybrid model accurately identifies four retinal biomarkers and predicts their severity. The model outperforms other methods for identifying multiple biomarkers in complex OCT B-scans, helping clinicians screen multiple B-scans of UME more effectively and leading to better treatment outcomes.
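[Editor's note] The hybrid design described above (a CNN extracts features, an SVM predicts severity from them) is a common pattern. A generic sketch follows; the CNN is replaced here by random placeholder feature vectors, and all names, shapes, and labels are illustrative, not the paper's DDSC-RN:

```python
# Stand-in for "CNN feature map -> SVM severity classifier".
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)
features = rng.normal(size=(300, 64))        # placeholder for extracted feature vectors
severity = (features[:, 0] > 0).astype(int)  # hypothetical binary mild/severe labels

X_tr, X_te, y_tr, y_te = train_test_split(features, severity, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)      # RBF SVM on the deep features
acc = clf.score(X_te, y_te)
print("held-out severity accuracy:", acc)
```

Splitting the pipeline this way lets the SVM be retrained on new severity gradings without touching the (expensive) feature extractor.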
Affiliation(s)
- Adithiya S V
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Dharani Bai G
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Rajiv Raman
- Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Chennai, Tamil Nadu, India
7
Kulyabin M, Zhdanov A, Nikiforova A, Stepichev A, Kuznetsova A, Ronkin M, Borisov V, Bogachev A, Korotkich S, Constable PA, Maier A. OCTDL: Optical Coherence Tomography Dataset for Image-Based Deep Learning Methods. Sci Data 2024; 11:365. PMID: 38605088; PMCID: PMC11009408; DOI: 10.1038/s41597-024-03182-7.
Abstract
Optical coherence tomography (OCT) is a non-invasive imaging technique with extensive clinical applications in ophthalmology. OCT enables visualization of the retinal layers, playing a vital role in the early detection and monitoring of retinal diseases. OCT uses the principle of light-wave interference to create detailed images of retinal microstructures, making it a valuable tool for diagnosing ocular conditions. This work presents an open-access OCT dataset (OCTDL) comprising over 2000 OCT images labeled according to disease group and retinal pathology. The dataset consists of OCT records of patients with Age-related Macular Degeneration (AMD), Diabetic Macular Edema (DME), Epiretinal Membrane (ERM), Retinal Artery Occlusion (RAO), Retinal Vein Occlusion (RVO), and Vitreomacular Interface Disease (VID). The images were acquired with an Optovue Avanti RTVue XR using raster scanning protocols with dynamic scan length and image resolution. Each retinal B-scan was acquired centered on the fovea, then interpreted and cataloged by an experienced retinal specialist. In this work, we applied deep learning classification techniques to this new open-access dataset.
Affiliation(s)
- Mikhail Kulyabin
- Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, Martensstr. 3, 91058, Erlangen, Germany.
- Aleksei Zhdanov
- Engineering School of Information Technologies, Telecommunications and Control Systems, Ural Federal University Named after the First President of Russia B. N. Yeltsin, Mira, 32, Yekaterinburg, 620078, Russia
- Anastasia Nikiforova
- Ophthalmosurgery Clinic "Professorskaya Plus", Vostochnaya, 30, Yekaterinburg, 620075, Russia
- Ural State Medical University, Repina, 3, Yekaterinburg, 620028, Russia
- Andrey Stepichev
- Ophthalmosurgery Clinic "Professorskaya Plus", Vostochnaya, 30, Yekaterinburg, 620075, Russia
- Anna Kuznetsova
- Ophthalmosurgery Clinic "Professorskaya Plus", Vostochnaya, 30, Yekaterinburg, 620075, Russia
- Mikhail Ronkin
- Engineering School of Information Technologies, Telecommunications and Control Systems, Ural Federal University Named after the First President of Russia B. N. Yeltsin, Mira, 32, Yekaterinburg, 620078, Russia
- Vasilii Borisov
- Engineering School of Information Technologies, Telecommunications and Control Systems, Ural Federal University Named after the First President of Russia B. N. Yeltsin, Mira, 32, Yekaterinburg, 620078, Russia
- Alexander Bogachev
- Ophthalmosurgery Clinic "Professorskaya Plus", Vostochnaya, 30, Yekaterinburg, 620075, Russia
- Ural State Medical University, Repina, 3, Yekaterinburg, 620028, Russia
- Sergey Korotkich
- Ophthalmosurgery Clinic "Professorskaya Plus", Vostochnaya, 30, Yekaterinburg, 620075, Russia
- Ural State Medical University, Repina, 3, Yekaterinburg, 620028, Russia
- Paul A Constable
- Flinders University, College of Nursing and Health Sciences, Caring Futures Institute, Adelaide, SA 5042, Australia
- Andreas Maier
- Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, Martensstr. 3, 91058, Erlangen, Germany
8
Cuk A, Bezdan T, Jovanovic L, Antonijevic M, Stankovic M, Simic V, Zivkovic M, Bacanin N. Tuning attention based long-short term memory neural networks for Parkinson's disease detection using modified metaheuristics. Sci Rep 2024; 14:4309. PMID: 38383690; PMCID: PMC10881563; DOI: 10.1038/s41598-024-54680-y.
Abstract
Parkinson's disease (PD) is a progressively debilitating neurodegenerative disorder that primarily affects the dopaminergic system in the basal ganglia, impacting millions of individuals globally. Clinical manifestations include resting tremor, muscle rigidity, bradykinesia, and postural instability. Diagnosis relies mainly on clinical evaluation, lacks reliable diagnostic tests, and is inherently imprecise and subjective. Early detection of PD is crucial for initiating treatments that, while unable to cure the chronic condition, can enhance patients' quality of life and alleviate symptoms. This study explores the potential of long short-term memory (LSTM) neural networks with attention mechanisms to detect Parkinson's disease from dual-task walking test data. Given that network performance is significantly influenced by architecture and training parameter choices, a modified version of the recently introduced crayfish optimization algorithm (COA) is proposed, specifically tailored to the requirements of this investigation. The proposed optimizer is assessed on a publicly accessible real-world clinical gait dataset in Parkinson's disease, and the results demonstrate its promise, achieving an accuracy of 87.4187% for the best-constructed models.
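[Editor's note] The paper tunes LSTM hyperparameters with a modified crayfish optimization algorithm. The sketch below is only a generic population-based search skeleton on a toy objective, to show the overall loop such metaheuristics share (evaluate, keep elites, perturb); it is not the COA itself, and the parameter ranges and objective are invented:

```python
# Generic elitist population search over two toy hyperparameters
# (learning rate, hidden units); optimum placed at (0.01, 64).
import random

def objective(params):
    # Toy stand-in for the validation error of a trained model.
    lr, units = params
    return (lr - 0.01) ** 2 + (units - 64) ** 2 / 1e4

def random_params():
    return [random.uniform(1e-4, 0.1), random.randint(16, 256)]

random.seed(1)
population = [random_params() for _ in range(20)]
for _ in range(30):                      # generations
    population.sort(key=objective)
    elite = population[:5]               # keep the best solutions
    # perturb elites to form the rest of the next generation
    population = elite + [
        [max(1e-4, e[0] + random.gauss(0, 0.005)),
         min(256, max(16, e[1] + random.randint(-8, 8)))]
        for e in random.choices(elite, k=15)
    ]
best = min(population, key=objective)
print("best (lr, units):", best, "score:", objective(best))
```

In the actual study, `objective` would train and validate an attention-LSTM for each candidate configuration, which is why sample-efficient metaheuristics matter there.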
Affiliation(s)
- Aleksa Cuk
- Singidunum University, Danijelova 32, Belgrade, 11010, Serbia
- Timea Bezdan
- Singidunum University, Danijelova 32, Belgrade, 11010, Serbia
- Luka Jovanovic
- Singidunum University, Danijelova 32, Belgrade, 11010, Serbia
- Milos Stankovic
- Singidunum University, Danijelova 32, Belgrade, 11010, Serbia
- Vladimir Simic
- Faculty of Transport and Traffic Engineering, University of Belgrade, Vojvode Stepe 305, Belgrade, 11010, Serbia
- College of Engineering, Department of Industrial Engineering and Management, Yuan Ze University, Taoyuan City, 320315, Taiwan
- College of Informatics, Korea University, 145, Anam-ro, Seongbuk-gu, Seoul, Republic of Korea
- Nebojsa Bacanin
- Singidunum University, Danijelova 32, Belgrade, 11010, Serbia.
- MEU Research Unit, Middle East University, Amman, Jordan.
- Faculty of Data Science and Information Technology, INTI International University, 71800, Nilai, Malaysia.
9
Wang YZ, Juroch K, Birch DG. Deep Learning-Assisted Measurements of Photoreceptor Ellipsoid Zone Area and Outer Segment Volume as Biomarkers for Retinitis Pigmentosa. Bioengineering (Basel) 2023; 10:1394. PMID: 38135984; PMCID: PMC10740805; DOI: 10.3390/bioengineering10121394.
Abstract
The manual segmentation of retinal layers from OCT scan images is time-consuming and costly. The deep learning approach has potential for the automatic delineation of retinal layers to significantly reduce the burden of human graders. In this study, we compared deep learning model (DLM) segmentation with manual correction (DLM-MC) to conventional manual grading (MG) for the measurements of the photoreceptor ellipsoid zone (EZ) area and outer segment (OS) volume in retinitis pigmentosa (RP) to assess whether DLM-MC can be a new gold standard for retinal layer segmentation and for the measurement of retinal layer metrics. Ninety-six high-speed 9 mm 31-line volume scans obtained from 48 patients with RPGR-associated XLRP were selected based on the following criteria: the presence of an EZ band within the scan limit and a detectable EZ in at least three B-scans in a volume scan. All the B-scan images in each volume scan were manually segmented for the EZ and proximal retinal pigment epithelium (pRPE) by two experienced human graders to serve as the ground truth for comparison. The test volume scans were also segmented by a DLM and then manually corrected for EZ and pRPE by the same two graders to obtain DLM-MC segmentation. The EZ area and OS volume were determined by interpolating the discrete two-dimensional B-scan EZ-pRPE layer over the scan area. Dice similarity, Bland-Altman analysis, correlation, and linear regression analyses were conducted to assess the agreement between DLM-MC and MG for the EZ area and OS volume measurements. For the EZ area, the overall mean dice score (SD) between DLM-MC and MG was 0.8524 (0.0821), which was comparable to 0.8417 (0.1111) between two MGs. For the EZ area > 1 mm2, the average dice score increased to 0.8799 (0.0614). 
When comparing DLM-MC to MG, the Bland-Altman plots revealed a mean difference (SE) of 0.0132 (0.0953) mm2 and a coefficient of repeatability (CoR) of 1.8303 mm2 for the EZ area and a mean difference (SE) of 0.0080 (0.0020) mm3 and a CoR of 0.0381 mm3 for the OS volume. The correlation coefficients (95% CI) were 0.9928 (0.9892-0.9952) and 0.9938 (0.9906-0.9958) for the EZ area and OS volume, respectively. The linear regression slopes (95% CI) were 0.9598 (0.9399-0.9797) and 1.0104 (0.9909-1.0298), respectively. The results from this study suggest that the manual correction of deep learning model segmentation can generate EZ area and OS volume measurements in excellent agreement with those of conventional manual grading in RP. Because DLM-MC is more efficient for retinal layer segmentation from OCT scan images, it has the potential to reduce the burden of human graders in obtaining quantitative measurements of biomarkers for assessing disease progression and treatment outcomes in RP.
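The agreement statistics reported above are standard and compact enough to sketch. The following Python functions are illustrative only, not the authors' code: a Dice score for binary masks, and a Bland-Altman mean difference with the common 1.96 × SD definition of the coefficient of repeatability.

```python
import math

def dice(mask_a, mask_b):
    """Sørensen-Dice similarity between two binary masks (flat 0/1 sequences)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

def bland_altman(measure_1, measure_2):
    """Mean difference and coefficient of repeatability (1.96 x sample SD of
    the paired differences; one common definition among several in use)."""
    diffs = [m1 - m2 for m1, m2 in zip(measure_1, measure_2)]
    n = len(diffs)
    mean_diff = sum(diffs) / n
    sd = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (n - 1))
    return mean_diff, 1.96 * sd
```

In the study these statistics are computed over per-eye EZ area and OS volume measurements rather than raw masks.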
Affiliation(s)
- Yi-Zhong Wang
- Retina Foundation of the Southwest, 9600 North Central Expressway, Suite 200, Dallas, TX 75231, USA
- Department of Ophthalmology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, USA
- Katherine Juroch
- Retina Foundation of the Southwest, 9600 North Central Expressway, Suite 200, Dallas, TX 75231, USA
- David Geoffrey Birch
- Retina Foundation of the Southwest, 9600 North Central Expressway, Suite 200, Dallas, TX 75231, USA
- Department of Ophthalmology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, USA

10
Ye T, Wang J, Yi J. Deep learning network for parallel self-denoising and segmentation in visible light optical coherence tomography of the human retina. BIOMEDICAL OPTICS EXPRESS 2023; 14:6088-6099. [PMID: 38021135 PMCID: PMC10659798 DOI: 10.1364/boe.501848] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/26/2023] [Revised: 09/25/2023] [Accepted: 10/16/2023] [Indexed: 12/01/2023]
Abstract
Visible light optical coherence tomography (VIS-OCT) of the human retina is an emerging imaging modality that uses shorter wavelengths, in the visible light range, than conventional near-infrared (NIR) OCT. It provides one-micron-level axial resolution to better separate stratified retinal layers, as well as microvascular oximetry. However, due to the practical limits of laser safety and comfort, the permissible illumination power is much lower than in NIR OCT, which makes it challenging to obtain high-quality VIS-OCT images and perform subsequent image analysis. Improving VIS-OCT image quality by denoising is therefore an essential step in the overall workflow of VIS-OCT clinical applications. In this paper, we provide the first VIS-OCT retinal image dataset from normal eyes, including retinal layer annotations and "noisy-clean" image pairs. We propose an efficient co-learning deep learning framework for parallel self-denoising and segmentation. The two tasks synergize within the same network and improve each other's performance. A significant improvement in segmentation (a 2% higher Dice coefficient than the segmentation-only process) is observed for the ganglion cell layer (GCL), inner plexiform layer (IPL), and inner nuclear layer (INL) when the available annotation drops to 25%, suggesting annotation-efficient training. We also show that the denoising model trained on our dataset generalizes well to a different scanning protocol.
Affiliation(s)
- Tianyi Ye
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21231, USA
- Jingyu Wang
- Department of Ophthalmology, Johns Hopkins University, Baltimore, MD, 21231, USA
- Ji Yi
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21231, USA
- Department of Ophthalmology, Johns Hopkins University, Baltimore, MD, 21231, USA

11
Saha PK, Nadeem SA, Comellas AP. A Survey on Artificial Intelligence in Pulmonary Imaging. WILEY INTERDISCIPLINARY REVIEWS. DATA MINING AND KNOWLEDGE DISCOVERY 2023; 13:e1510. [PMID: 38249785 PMCID: PMC10796150 DOI: 10.1002/widm.1510] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/18/2022] [Accepted: 06/21/2023] [Indexed: 01/23/2024]
Abstract
Over the last decade, deep learning (DL) has driven a paradigm shift in computer vision and image recognition, creating widespread opportunities for using artificial intelligence in research as well as in industrial applications. DL has been extensively studied in medical imaging, including applications related to pulmonary diseases. Chronic obstructive pulmonary disease, asthma, lung cancer, pneumonia, and, more recently, COVID-19 are common lung diseases affecting nearly 7.4% of the world population. Pulmonary imaging has been widely investigated toward improving our understanding of disease etiologies, early diagnosis, and assessment of disease progression and clinical outcomes. DL has been broadly applied to solve various pulmonary image processing challenges, including classification, recognition, registration, and segmentation. This paper presents a survey of pulmonary diseases, the roles of imaging in translational and clinical pulmonary research, and applications of different DL architectures and methods in pulmonary imaging, with emphasis on DL-based segmentation of major pulmonary anatomies such as lung volumes, lung lobes, pulmonary vessels, and airways, as well as thoracic musculoskeletal anatomies related to pulmonary diseases.
Affiliation(s)
- Punam K Saha
- Departments of Radiology and Electrical and Computer Engineering, University of Iowa, Iowa City, IA, 52242

12
Abbas Q, Albathan M, Altameem A, Almakki RS, Hussain A. Deep-Ocular: Improved Transfer Learning Architecture Using Self-Attention and Dense Layers for Recognition of Ocular Diseases. Diagnostics (Basel) 2023; 13:3165. [PMID: 37891986 PMCID: PMC10605427 DOI: 10.3390/diagnostics13203165] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2023] [Revised: 10/07/2023] [Accepted: 10/09/2023] [Indexed: 10/29/2023] Open
Abstract
It is difficult for clinicians or less-experienced ophthalmologists to detect eye-related diseases early. Manual eye-disease diagnosis is labor-intensive, prone to mistakes, and challenging because of the variety of ocular conditions, such as glaucoma (GA), diabetic retinopathy (DR), and cataract (CT), which must be distinguished from the normal fundus (NL). An automated ocular disease detection system with computer-aided diagnosis (CAD) tools is required to recognize eye-related diseases. Nowadays, deep learning (DL) algorithms enhance the classification results of retinograph images. To address these issues, we developed an intelligent detection system based on retinal fundus images. To create this system, we used the ODIR and RFMiD datasets, which include retinographs of distinct fundus classes, together with cutting-edge image classification algorithms such as ensemble-based transfer learning. In this paper, we propose a three-step hybrid ensemble model that combines a feature extractor, a feature selector, and a classifier. The original image features are first extracted using a pre-trained AlexNet model with an enhanced structure. The improved AlexNet (iAlexNet) architecture, with attention and dense layers, offers enhanced feature extraction, task adaptability, interpretability, and potential accuracy benefits compared with other transfer learning architectures, making it particularly suited to tasks like retinograph classification. The extracted features are then selected using the ReliefF method, keeping only the most crucial elements to minimize the feature dimension. Finally, an XgBoost classifier produces classification outcomes based on the selected features; these classifications represent different ocular illnesses. We utilized data augmentation techniques to control class imbalance. The deep-ocular model, based mainly on the AlexNet-ReliefF-XgBoost pipeline, achieves an accuracy of 95.13%. The results indicate that the proposed ensemble model can assist ophthalmologists in making early decisions for the diagnosis and screening of eye-related diseases.
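The ReliefF step in the pipeline above scores features by how well they separate each sample's nearest neighbour of the same class from its nearest neighbour of a different class. A toy sketch of the basic Relief idea (binary classes, one neighbour, plain Python; the study applies the full ReliefF variant to deep iAlexNet features) might look like:

```python
def relief_weights(X, y):
    """Toy Relief feature scoring: reward features whose values differ more
    from the nearest miss (other class) than from the nearest hit (same class).
    X is a list of feature vectors, y a list of class labels."""
    n_feat = len(X[0])
    w = [0.0] * n_feat

    def dist(a, b):
        # Manhattan distance between two feature vectors
        return sum(abs(ai - bi) for ai, bi in zip(a, b))

    for i, xi in enumerate(X):
        hits = [x for j, x in enumerate(X) if j != i and y[j] == y[i]]
        misses = [x for j, x in enumerate(X) if y[j] != y[i]]
        near_hit = min(hits, key=lambda h: dist(xi, h))
        near_miss = min(misses, key=lambda m: dist(xi, m))
        for f in range(n_feat):
            w[f] += abs(xi[f] - near_miss[f]) - abs(xi[f] - near_hit[f])
    return [wf / len(X) for wf in w]
```

Features with the highest weights are kept, shrinking the feature dimension before the classifier stage.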
Affiliation(s)
- Qaisar Abbas
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Mubarak Albathan
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Abdullah Altameem
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Riyad Saleh Almakki
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Ayyaz Hussain
- Department of Computer Science, Quaid-i-Azam University, Islamabad 44000, Pakistan

13
Feng HW, Chen JJ, Zhang ZC, Zhang SC, Yang WH. Bibliometric analysis of artificial intelligence and optical coherence tomography images: research hotspots and frontiers. Int J Ophthalmol 2023; 16:1431-1440. [PMID: 37724282 PMCID: PMC10475613 DOI: 10.18240/ijo.2023.09.09] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2023] [Accepted: 07/05/2023] [Indexed: 09/20/2023] Open
Abstract
AIM To explore the latest applications of artificial intelligence (AI) to optical coherence tomography (OCT) images, analyze the current state of AI research in OCT, and discuss future research trends. METHODS On June 1, 2023, a bibliometric analysis of the Web of Science Core Collection was performed to explore the utilization of AI in OCT imagery. Key parameters such as papers, countries/regions, citations, databases, organizations, keywords, journal names, and research hotspots were extracted and then visualized using the VOSviewer and CiteSpace V bibliometric platforms. RESULTS Fifty-five nations reported studies on AI biotechnology and its application to the analysis of OCT images. The United States was the country with the largest number of published papers. Furthermore, 197 institutions worldwide contributed published articles, among which the University of London had the most publications. The reference clusters from the study could be divided into four categories: thickness and eyes, diabetic retinopathy (DR), images and segmentation, and OCT classification. CONCLUSION The latest hot topics and future directions in this field are identified, and the dynamic evolution of AI-based OCT imaging is outlined. AI-based OCT imaging holds great potential for revolutionizing clinical care.
Affiliation(s)
- Hai-Wen Feng
- Department of Software Engineering, School of Software, Shenyang University of Technology, Shenyang 110870, Liaoning Province, China
- Jun-Jie Chen
- Department of Software Engineering, School of Software, Shenyang University of Technology, Shenyang 110870, Liaoning Province, China
- Zhi-Chang Zhang
- Department of Computer, School of Intelligent Medicine, China Medical University, Shenyang 110122, Liaoning Province, China
- Shao-Chong Zhang
- Shenzhen Eye Hospital, Jinan University, Shenzhen 518040, Guangdong Province, China
- Wei-Hua Yang
- Shenzhen Eye Hospital, Jinan University, Shenzhen 518040, Guangdong Province, China

14
Leandro I, Lorenzo B, Aleksandar M, Dario M, Rosa G, Agostino A, Daniele T. OCT-based deep-learning models for the identification of retinal key signs. Sci Rep 2023; 13:14628. [PMID: 37670066 PMCID: PMC10480174 DOI: 10.1038/s41598-023-41362-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2023] [Accepted: 08/25/2023] [Indexed: 09/07/2023] Open
Abstract
A new system based on binary Deep Learning (DL) convolutional neural networks has been developed to recognize, on Optical Coherence Tomography (OCT) images, specific retinal abnormality signs that are useful for clinical practice. Images were retrospectively selected from the local hospital database from 2017 to 2022, labeled by two retinal specialists, and included central fovea cross-section OCTs. Nine models were developed using the Visual Geometry Group 16 (VGG16) architecture to distinguish healthy from abnormal retinas and to identify eight different retinal abnormality signs. A total of 21,500 OCT images were screened, and 10,770 central fovea cross-section OCTs were included in the study. The system achieved high accuracy in identifying healthy retinas and specific pathological signs, ranging from 93% to 99%. Accurately detecting abnormal retinal signs on OCT images is crucial for patient care. This study aimed to identify specific signs related to retinal pathologies, aiding ophthalmologists in diagnosis. The high accuracy achieved makes the system a useful diagnostic aid. Obtaining labelled OCT images remains a challenge, but our approach reduces dataset creation time and shows the potential of DL models to improve ocular pathology diagnosis and clinical decision-making.
Affiliation(s)
- Inferrera Leandro
- Department of Medicine, Surgery and Health Sciences, Eye Clinic, Ophthalmology Clinic, University of Trieste, Piazza Dell'Ospitale 1, 34125, Trieste, Italy
- Borsatti Lorenzo
- Department of Medicine, Surgery and Health Sciences, Eye Clinic, Ophthalmology Clinic, University of Trieste, Piazza Dell'Ospitale 1, 34125, Trieste, Italy
- Marangoni Dario
- Department of Medicine, Surgery and Health Sciences, Eye Clinic, Ophthalmology Clinic, University of Trieste, Piazza Dell'Ospitale 1, 34125, Trieste, Italy
- Giglio Rosa
- Department of Medicine, Surgery and Health Sciences, Eye Clinic, Ophthalmology Clinic, University of Trieste, Piazza Dell'Ospitale 1, 34125, Trieste, Italy
- Accardo Agostino
- Department of Engineering and Architecture, University of Trieste, Trieste, Italy
- Tognetto Daniele
- Department of Medicine, Surgery and Health Sciences, Eye Clinic, Ophthalmology Clinic, University of Trieste, Piazza Dell'Ospitale 1, 34125, Trieste, Italy

15
Nawaz M, Uvaliyev A, Bibi K, Wei H, Abaxi SMD, Masood A, Shi P, Ho HP, Yuan W. Unraveling the complexity of Optical Coherence Tomography image segmentation using machine and deep learning techniques: A review. Comput Med Imaging Graph 2023; 108:102269. [PMID: 37487362 DOI: 10.1016/j.compmedimag.2023.102269] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2023] [Revised: 06/30/2023] [Accepted: 07/03/2023] [Indexed: 07/26/2023]
Abstract
Optical Coherence Tomography (OCT) is an emerging technology that provides three-dimensional images of the microanatomy of biological tissue in vivo at micrometer-scale resolution. OCT imaging has been widely used to diagnose and manage various medical diseases, such as macular degeneration, glaucoma, and coronary artery disease. Despite its wide range of applications, the segmentation of OCT images remains difficult owing to the complexity of tissue structures and the presence of artifacts. In recent years, different approaches have been used for OCT image segmentation, including intensity-based, region-based, and deep learning-based methods. This paper reviews the major advances in state-of-the-art OCT image segmentation techniques. It provides an overview of the advantages and limitations of each method and presents the most relevant research works related to OCT image segmentation. It also provides an overview of existing datasets and discusses potential clinical applications. Additionally, this review gives an in-depth analysis of machine learning and deep learning approaches for OCT image segmentation and outlines challenges and opportunities for further research in this field.
Affiliation(s)
- Mehmood Nawaz
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China
- Adilet Uvaliyev
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China
- Khadija Bibi
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China
- Hao Wei
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China
- Sai Mu Dalike Abaxi
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China
- Anum Masood
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
- Peilun Shi
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China
- Ho-Pui Ho
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China
- Wu Yuan
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China

16
Bryan JM, Bryar PJ, Mirza RG. Convolutional Neural Networks Accurately Identify Ungradable Images in a Diabetic Retinopathy Telemedicine Screening Program. Telemed J E Health 2023; 29:1349-1355. [PMID: 36730708 DOI: 10.1089/tmj.2022.0357] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023] Open
Abstract
Purpose: Diabetic retinopathy (DR) is a microvascular complication of diabetes mellitus (DM). Standard of care for patients with DM is an annual eye examination or retinal imaging to assess for DR, the latter of which may be completed through telemedicine approaches. One significant issue is poor-quality images that prevent adequate screening and are thus ungradable. We used artificial intelligence to enable point-of-care (at time of imaging) identification of ungradable images in a DR screening program. Methods: Nonmydriatic retinal images were gathered from patients with DM imaged during a primary care or endocrinology visit from September 1, 2017, to June 1, 2021. The Topcon TRC-NW400 retinal camera (Topcon Corp., Tokyo, Japan) was used. Images were interpreted by five ophthalmologists for gradeability, presence and stage of DR, and presence of non-DR pathologies. A convolutional neural network with the Inception V3 architecture was trained to assess image gradeability. Images were divided into training and test sets, and 10-fold cross-validation was performed. Results: A total of 1,377 images from 537 patients (56.1% female, median age 58) were analyzed. Ophthalmologists classified 25.9% of images as ungradable. Of the gradable images, 18.6% had DR of varying degrees and 26.5% had non-DR pathology. Ten-fold cross-validation produced an average area under the receiver operating characteristic curve (AUC) of 0.922 (standard deviation: 0.027, range: 0.882 to 0.961). The final model exhibited similar test set performance, with an AUC of 0.924. Conclusions: This model accurately assesses the gradeability of nonmydriatic retinal images. It could increase the efficiency of DR screening programs by enabling point-of-care identification of poor-quality images.
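The headline metric above, AUC estimated with 10-fold cross-validation, can be sketched with the rank-sum identity for AUC and a simple fold splitter. This is an illustrative sketch (no tie handling), not the study's code:

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity.
    Assumes no tied scores; labels are 0/1."""
    pairs = sorted(zip(scores, labels))
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    rank_sum = sum(rank for rank, (_, lab) in enumerate(pairs, start=1) if lab == 1)
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def k_fold_indices(n, k=10):
    """Split range(n) into k contiguous, near-equal test folds."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds
```

In a cross-validation loop, each fold in turn serves as the test set; the per-fold AUCs are then averaged and their spread reported, as in the abstract.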
Affiliation(s)
- John M Bryan
- Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
- Paul J Bryar
- Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
- Rukhsana G Mirza
- Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA

17
Linde G, Chalakkal R, Zhou L, Huang JL, O’Keeffe B, Shah D, Davidson S, Hong SC. Automatic Refractive Error Estimation Using Deep Learning-Based Analysis of Red Reflex Images. Diagnostics (Basel) 2023; 13:2810. [PMID: 37685347 PMCID: PMC10486607 DOI: 10.3390/diagnostics13172810] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2023] [Revised: 08/23/2023] [Accepted: 08/26/2023] [Indexed: 09/10/2023] Open
Abstract
Purpose/Background: We evaluated how a deep learning model can be applied to extract refractive error metrics from pupillary red reflex images taken with a low-cost handheld fundus camera. This could provide a rapid and economical vision-screening method, allowing early intervention to prevent myopic progression and reduce the socioeconomic burden associated with vision impairment in later life. Methods: Infrared and color images of pupillary crescents were extracted from eccentric photorefraction images of participants from Choithram Hospital in India and Dargaville Medical Center in New Zealand. The pre-processed images were then used to train different convolutional neural networks to predict refractive error in terms of spherical and cylindrical power. Results: The best-performing trained model achieved an overall accuracy of 75% for predicting spherical power using infrared images and a multiclass classifier. Conclusions: Although the model's performance is not yet strong, the proposed method demonstrates the usability of red reflex images for estimating refractive error. Such an approach has not been tried before and can help guide researchers, especially as the future of eye care moves toward highly portable and smartphone-based devices.
Affiliation(s)
- Lydia Zhou
- University of Sydney, Sydney, NSW 2050, Australia
- Sheng Chiong Hong
- Public Health Unit, Dunedin Hospital, Te Whatu Ora Southern, Dunedin 9016, New Zealand

18
Dan Y, Jin W, Wang Z, Sun C. Optimization of U-shaped pure transformer medical image segmentation network. PeerJ Comput Sci 2023; 9:e1515. [PMID: 37705654 PMCID: PMC10495965 DOI: 10.7717/peerj-cs.1515] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2023] [Accepted: 07/13/2023] [Indexed: 09/15/2023]
Abstract
In recent years, neural networks have made pioneering achievements in the field of medical imaging. In particular, deep neural networks based on U-shaped structures are widely used in different medical image segmentation tasks. Using neural networks for lung segmentation, to assist in locating and observing lung shape, has become a key step in improving early diagnosis and clinical decision-making systems for lung diseases, but segmentation precision remains low. To achieve better segmentation accuracy, an optimized pure-Transformer U-shaped segmentation network is proposed in this article. The optimized network adds skip connections and performs special splicing processing, which reduces information loss during encoding and increases information during decoding, thereby improving segmentation accuracy. Experiments show that our improved network achieves 97.86% accuracy in segmenting the "Chest Xray Masks and Labels" dataset, better than a fully convolutional network or a combination of Transformer and convolution.
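The skip connections and splicing described above can be illustrated in a few lines of NumPy: the decoder's upsampled feature is stacked channel-wise with the same-resolution encoder feature, so detail lost during downsampling is reintroduced. This is a shape-level sketch, not the paper's Transformer implementation:

```python
import numpy as np

def down(x):
    """2x average pooling, standing in for an encoder downsampling stage."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(x):
    """2x nearest-neighbour upsampling, standing in for a decoder stage."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

# Encoder feature at full resolution, bottleneck at half resolution.
enc = np.arange(64, dtype=float).reshape(8, 8)
bottleneck = down(enc)

# Skip connection: upsample the decoder path and splice (stack) it with the
# same-resolution encoder feature, restoring detail lost in the encoder.
decoded = up(bottleneck)
spliced = np.stack([enc, decoded])  # shape (2, 8, 8): channel-wise concat
```

Subsequent decoder layers then mix the two channels, which is the "special splicing processing" role in the U-shaped design.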
Affiliation(s)
- Yongping Dan
- School of Electronic and Information, Zhongyuan University of Technology, Zhengzhou, Henan, China
- Weishou Jin
- School of Electronic and Information, Zhongyuan University of Technology, Zhengzhou, Henan, China
- Zhida Wang
- School of Electronic and Information, Zhongyuan University of Technology, Zhengzhou, Henan, China
- Changhao Sun
- School of Electronic and Information, Zhongyuan University of Technology, Zhengzhou, Henan, China

19
Rahdar A, Ahmadi MJ, Naseripour M, Akhtari A, Sedaghat A, Hosseinabadi VZ, Yarmohamadi P, Hajihasani S, Mirshahi R. Semi-supervised segmentation of retinoblastoma tumors in fundus images. Sci Rep 2023; 13:13010. [PMID: 37563285 PMCID: PMC10415254 DOI: 10.1038/s41598-023-39909-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2023] [Accepted: 08/02/2023] [Indexed: 08/12/2023] Open
Abstract
Retinoblastoma is a rare form of cancer that predominantly affects young children as the primary intraocular malignancy. Studies conducted in developed and some developing countries have shown that early detection can successfully cure over 90% of children with retinoblastoma. An unusual white reflection in the pupil is the most common presenting symptom. Depending on tumor size, shape, and location, medical experts may opt for different approaches and treatments, with results varying significantly owing to the high reliance on prior knowledge and experience. This study presents a semi-supervised machine learning model that yields segmentation results comparable to those achieved by medical experts. First, a Gaussian mixture model is utilized to detect abnormalities in approximately 4200 fundus images. Because of the high computational cost of this process, its results are then used to train a cost-effective model for the same purpose. The proposed model demonstrated promising results in extracting highly detailed boundaries in fundus images, achieving an average Sørensen-Dice coefficient of 93% on evaluation data.
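The first stage above fits a Gaussian mixture model to flag abnormal regions. A minimal one-dimensional, two-component EM fit (an illustrative sketch on scalar intensities; the study works on full fundus images) can be written as:

```python
import math

def fit_gmm_1d(xs, iters=100):
    """Two-component 1D Gaussian mixture fitted by EM (illustrative sketch).
    Returns component means, variances, and mixing weights."""
    mu = [min(xs), max(xs)]      # initialise means at the data extremes
    var = [0.05, 0.05]
    pi = [0.5, 0.5]

    def pdf(x, m, v):
        return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

    for _ in range(iters):
        # E-step: responsibility of component 0 for each point
        r0 = []
        for x in xs:
            p0 = pi[0] * pdf(x, mu[0], var[0])
            p1 = pi[1] * pdf(x, mu[1], var[1])
            r0.append(p0 / (p0 + p1))
        # M-step: re-estimate weights, means, variances
        n0 = sum(r0)
        n1 = len(xs) - n0
        pi = [n0 / len(xs), n1 / len(xs)]
        mu = [sum(r * x for r, x in zip(r0, xs)) / n0,
              sum((1 - r) * x for r, x in zip(r0, xs)) / n1]
        var = [max(sum(r * (x - mu[0]) ** 2 for r, x in zip(r0, xs)) / n0, 1e-6),
               max(sum((1 - r) * (x - mu[1]) ** 2 for r, x in zip(r0, xs)) / n1, 1e-6)]
    return mu, var, pi
```

Pixels assigned high responsibility under the brighter component would then be treated as candidate abnormalities, which is the kind of output used to supervise the cheaper downstream model.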
Affiliation(s)
- Masood Naseripour
- Eye Research Center, The Five Senses Institute, Rassoul Akram Hospital, Iran University of Medical Sciences, Tehran, Iran
- Abtin Akhtari
- School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Ahad Sedaghat
- Eye Research Center, The Five Senses Institute, Rassoul Akram Hospital, Iran University of Medical Sciences, Tehran, Iran
- Vahid Zare Hosseinabadi
- Eye Research Center, The Five Senses Institute, Rassoul Akram Hospital, Iran University of Medical Sciences, Tehran, Iran
- Parsa Yarmohamadi
- Young Researchers and Elite Club, Tehran Medical Sciences, Islamic Azad University, Tehran, Iran
- Samin Hajihasani
- Student Research Committee, Shahrood Branch, Islamic Azad University, Shahrood, Iran
- Reza Mirshahi
- Eye Research Center, The Five Senses Institute, Rassoul Akram Hospital, Iran University of Medical Sciences, Tehran, Iran

20
Hanson RLW, Airody A, Sivaprasad S, Gale RP. Optical coherence tomography imaging biomarkers associated with neovascular age-related macular degeneration: a systematic review. Eye (Lond) 2023; 37:2438-2453. [PMID: 36526863 PMCID: PMC9871156 DOI: 10.1038/s41433-022-02360-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2022] [Revised: 10/13/2022] [Accepted: 12/06/2022] [Indexed: 12/23/2022] Open
Abstract
The aim of this systematic literature review is twofold: (1) to detail the impact of retinal biomarkers identifiable via optical coherence tomography (OCT) on disease progression and response to treatment in neovascular age-related macular degeneration (nAMD), and (2) to establish which biomarkers are currently identifiable by artificial intelligence (AI) models and how this technology is being utilised. Following the PRISMA guidelines, PubMed was searched for peer-reviewed publications dated between January 2016 and January 2022. POPULATION Patients diagnosed with nAMD with OCT imaging. SETTINGS Comparable settings to NHS hospitals. STUDY DESIGNS Randomised controlled trials, prospective/retrospective cohort studies and review articles. Of 228 articles, 130 were full-text reviewed and 50 were removed for falling outside the scope of this review, with 10 added from the authors' inventory, resulting in the inclusion of 90 articles. Of the 9 biomarkers identified (intraretinal fluid (IRF), subretinal fluid, pigment epithelial detachment, subretinal hyperreflective material (SHRM), retinal pigment epithelium (RPE) atrophy, drusen, outer retinal tubulation (ORT), hyperreflective foci (HF) and retinal thickness), 5 are considered pertinent to nAMD disease progression: IRF, SHRM, drusen, ORT and HF. A number of these biomarkers can be classified using current AI models. Significant retinal biomarkers pertinent to disease activity and progression in nAMD are identifiable via OCT, IRF being the most important in terms of its impact on visual outcome. Incorporating AI into ophthalmology practice is a promising advancement toward automated and reproducible analyses of OCT data, with the ability to diagnose disease and predict future disease conversion. SYSTEMATIC REVIEW REGISTRATION This review has been registered with PROSPERO (registration ID: CRD42021233200).
Affiliation(s)
- Rachel L W Hanson
- Academic Unit of Ophthalmology, York and Scarborough Teaching Hospitals NHS Foundation Trust, York, UK
- Archana Airody
- Academic Unit of Ophthalmology, York and Scarborough Teaching Hospitals NHS Foundation Trust, York, UK
- Sobha Sivaprasad
- Moorfields National Institute of Health Research, Biomedical Research Centre, London, UK
- Richard P Gale
- Academic Unit of Ophthalmology, York and Scarborough Teaching Hospitals NHS Foundation Trust, York, UK
- Hull York Medical School, University of York, York, UK
- York Biomedical Research Institute, University of York, York, UK

21
Bar-David D, Bar-David L, Shapira Y, Leibu R, Dori D, Gebara A, Schneor R, Fischer A, Soudry S. Elastic Deformation of Optical Coherence Tomography Images of Diabetic Macular Edema for Deep-Learning Models Training: How Far to Go? IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE 2023; 11:487-494. [PMID: 37817823 PMCID: PMC10561735 DOI: 10.1109/jtehm.2023.3294904] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/30/2022] [Revised: 05/09/2023] [Accepted: 07/04/2023] [Indexed: 10/12/2023]
Abstract
- Objective: To explore the clinical validity of elastic deformation of optical coherence tomography (OCT) images for data augmentation in the development of deep-learning model for detection of diabetic macular edema (DME). METHODS Prospective evaluation of OCT images of DME (n = 320) subject to elastic transformation, with the deformation intensity represented by ([Formula: see text]). Three sets of images, each comprising 100 pairs of scans (100 original & 100 modified), were grouped according to the range of ([Formula: see text]), including low-, medium- and high-degree of augmentation; ([Formula: see text] = 1-6), ([Formula: see text] = 7-12), and ([Formula: see text] = 13-18), respectively. Three retina specialists evaluated all datasets in a blinded manner and designated each image as 'original' versus 'modified'. The rate of assignment of 'original' value to modified images (false-negative) was determined for each grader in each dataset. RESULTS The false-negative rates ranged between 71-77% for the low-, 63-76% for the medium-, and 50-75% for the high-augmentation categories. The corresponding rates of correct identification of original images ranged between 75-85% ([Formula: see text]0.05) in the low-, 73-85% ([Formula: see text]0.05 for graders 1 & 2, p = 0.01 for grader 3) in the medium-, and 81-91% ([Formula: see text]) in the high-augmentation categories. In the subcategory ([Formula: see text] = 7-9) the false-negative rates were 93-83%, whereas the rates of correctly identifying original images ranged between 89-99% ([Formula: see text]0.05 for all graders). CONCLUSIONS Deformation of low-medium intensity ([Formula: see text] = 1-9) may be applied without compromising OCT image representativeness in DME. 
Clinical and Translational Impact Statement: Elastic deformation may efficiently augment the size, robustness, and diversity of training datasets without altering their clinical value, enhancing the development of high-accuracy algorithms for automated interpretation of OCT images.
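The elastic transformation evaluated above can be illustrated with a minimal Simard-style sketch: random per-pixel displacements are Gaussian-smoothed and scaled by an intensity factor `alpha` before resampling. The paper's exact transformation is not reproduced here, so the smoothing width `sigma` and the nearest-neighbour resampling are illustrative assumptions.

```python
import math
import random

def gaussian_kernel(sigma, radius):
    # normalised 1-D Gaussian kernel of half-width `radius`
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth(field, kernel):
    # separable 1-D convolution along rows then columns (clamped borders)
    r = len(kernel) // 2
    h, w = len(field), len(field[0])
    tmp = [[sum(kernel[r + d] * field[y][min(max(x + d, 0), w - 1)]
                for d in range(-r, r + 1)) for x in range(w)] for y in range(h)]
    return [[sum(kernel[r + d] * tmp[min(max(y + d, 0), h - 1)][x]
                 for d in range(-r, r + 1)) for x in range(w)] for y in range(h)]

def elastic_deform(img, alpha, sigma, rng):
    # random per-pixel displacements, Gaussian-smoothed, scaled by alpha
    h, w = len(img), len(img[0])
    k = gaussian_kernel(sigma, int(3 * sigma))
    dx = smooth([[rng.uniform(-1, 1) for _ in range(w)] for _ in range(h)], k)
    dy = smooth([[rng.uniform(-1, 1) for _ in range(w)] for _ in range(h)], k)
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy = min(max(int(round(y + alpha * dy[y][x])), 0), h - 1)
            sx = min(max(int(round(x + alpha * dx[y][x])), 0), w - 1)
            out[y][x] = img[sy][sx]  # nearest-neighbour resampling
    return out
```

With `alpha = 0` the transform is the identity; larger `alpha` warps anatomy more aggressively, which is exactly the trade-off the graders were asked to detect.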
Affiliation(s)
- Daniel Bar-David
- Faculty of Mechanical EngineeringTechnion Israel Institute of TechnologyHaifa3200003Israel
| | - Laura Bar-David
- Department of OphthalmologyRambam Health Care CampusHaifa3109601Israel
| | - Yinon Shapira
- Department of OphthalmologyCarmel Medical CenterHaifa3436212Israel
| | - Rina Leibu
- Department of OphthalmologyRambam Health Care CampusHaifa3109601Israel
| | - Dalia Dori
- Department of OphthalmologyRambam Health Care CampusHaifa3109601Israel
| | - Aseel Gebara
- Department of OphthalmologyRambam Health Care CampusHaifa3109601Israel
| | - Ronit Schneor
- Faculty of Mechanical EngineeringTechnion Israel Institute of TechnologyHaifa3200003Israel
| | - Anath Fischer
- Faculty of Mechanical EngineeringTechnion Israel Institute of TechnologyHaifa3200003Israel
| | - Shiri Soudry
- Department of OphthalmologyRambam Health Care CampusHaifa3109601Israel
- Clinical Research Institute at RambamRambam Health Care CampusHaifa3109601Israel
- The Ruth and Bruce Rappaport Faculty of MedicineTechnion Israel Institute of TechnologyHaifa3525433Israel
22
|
Xie H, Xu W, Wang YX, Wu X. Deep learning network with differentiable dynamic programming for retina OCT surface segmentation. BIOMEDICAL OPTICS EXPRESS 2023; 14:3190-3202. [PMID: 37497505 PMCID: PMC10368040 DOI: 10.1364/boe.492670] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/10/2023] [Revised: 05/19/2023] [Accepted: 05/23/2023] [Indexed: 07/28/2023]
Abstract
Multiple-surface segmentation in optical coherence tomography (OCT) images is a challenging problem, further complicated by the frequent presence of weak image boundaries. Recently, many deep learning-based methods have been developed for this task and yield remarkable performance. Unfortunately, due to the scarcity of training data in medical imaging, it is challenging for deep learning networks to learn the global structure of the target surfaces, including surface smoothness. To bridge this gap, this study proposes to seamlessly unify a U-Net for feature learning with a constrained differentiable dynamic programming module to achieve end-to-end learning for retina OCT surface segmentation while explicitly enforcing surface smoothness. It effectively utilizes the feedback from the downstream model optimization module to guide feature learning, yielding better enforcement of global structures of the target surfaces. Experiments on Duke AMD (age-related macular degeneration) and JHU MS (multiple sclerosis) OCT data sets for retinal layer segmentation demonstrated that the proposed method was able to achieve subvoxel accuracy on both datasets, with mean absolute surface distance (MASD) errors of 1.88 ± 1.96 μm and 2.75 ± 0.94 μm, respectively, over all the segmented surfaces.
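A toy version of the smoothness-constrained surface extraction (the dynamic-programming half of the approach, without the U-Net features or the differentiable relaxation) can be sketched as follows; the per-pixel cost map, the L1 smoothness penalty `lam`, and the single-surface restriction are simplifying assumptions.

```python
def segment_surface(cost, lam):
    # cost[r][c]: per-pixel boundary cost; returns one row index per column,
    # minimising sum of costs plus lam * |row jump| between adjacent columns
    h, w = len(cost), len(cost[0])
    INF = float("inf")
    dp = [[cost[r][0] for r in range(h)]]  # dp[c][r]: best cost of a surface ending at (r, c)
    back = []
    for c in range(1, w):
        col, bk = [], []
        for r in range(h):
            best, arg = INF, 0
            for rp in range(h):  # transition from row rp in the previous column
                v = dp[-1][rp] + lam * abs(r - rp)
                if v < best:
                    best, arg = v, rp
            col.append(best + cost[r][c])
            bk.append(arg)
        dp.append(col)
        back.append(bk)
    # backtrack from the cheapest terminal row
    r = min(range(h), key=lambda i: dp[-1][i])
    surf = [r]
    for c in range(w - 2, -1, -1):
        r = back[c][r]
        surf.append(r)
    return surf[::-1]
```

With `lam = 0` the surface snaps to each column's cheapest pixel independently; a positive `lam` suppresses row jumps, which is the smoothness prior the paper builds into training end to end.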
Affiliation(s)
- Hui Xie
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA, USA
| | - Weiyu Xu
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA, USA
| | - Ya Xing Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital University of Medical Science, Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing, China
| | - Xiaodong Wu
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA, USA
23
|
Gu B, Sidhu S, Weinreb RN, Christopher M, Zangwill LM, Baxter SL. Review of Visualization Approaches in Deep Learning Models of Glaucoma. Asia Pac J Ophthalmol (Phila) 2023; 12:392-401. [PMID: 37523431 DOI: 10.1097/apo.0000000000000619] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2023] [Accepted: 05/11/2023] [Indexed: 08/02/2023] Open
Abstract
Glaucoma is a major cause of irreversible blindness worldwide. As glaucoma often presents without symptoms, early detection and intervention are important in delaying progression. Deep learning (DL) has emerged as a rapidly advancing tool to help achieve these objectives. In this narrative review, data types and visualization approaches for presenting model predictions, including models based on tabular data, functional data, and/or structural data, are summarized, and the importance of data source diversity for improving the utility and generalizability of DL models is explored. Examples of innovative approaches to understanding predictions of artificial intelligence (AI) models and alignment with clinicians are provided. In addition, methods to enhance the interpretability of clinical features from tabular data used to train AI models are investigated. Examples of published DL models that include interfaces to facilitate end-user engagement and minimize cognitive and time burdens are highlighted. The stages of integrating AI models into existing clinical workflows are reviewed, and challenges are discussed. Reviewing these approaches may help inform the generation of user-friendly interfaces that are successfully integrated into clinical information systems. This review details key principles regarding visualization approaches in DL models of glaucoma. The articles reviewed here focused on usability, explainability, and promotion of clinician trust to encourage wider adoption for clinical use. These studies demonstrate important progress in addressing visualization and explainability issues required for successful real-world implementation of DL models in glaucoma.
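One model-agnostic visualization technique in the family this review surveys is occlusion sensitivity: slide an occluding patch over the input and record how much the model's score drops. A minimal sketch follows; `score_fn`, the patch size, and the fill value are illustrative assumptions, not details from the review.

```python
def occlusion_map(img, score_fn, patch=2, fill=0.0):
    # slide an occluding patch over the image and record the score drop:
    # large drops mark regions the model relies on for its prediction
    h, w = len(img), len(img[0])
    base = score_fn(img)
    heat = [[0.0] * w for _ in range(h)]
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = [row[:] for row in img]  # copy, then blank one patch
            for i in range(y, min(y + patch, h)):
                for j in range(x, min(x + patch, w)):
                    occluded[i][j] = fill
            drop = base - score_fn(occluded)
            for i in range(y, min(y + patch, h)):
                for j in range(x, min(x + patch, w)):
                    heat[i][j] = drop
    return heat
```

The resulting heat map can be overlaid on the fundus or OCT input, which is the kind of clinician-facing visualization the reviewed studies use to promote trust.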
Affiliation(s)
- Byoungyoung Gu
- Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US
- Division of Biomedical Informatics, Department of Medicine, University of California San Diego, La Jolla, CA, US
| | - Sophia Sidhu
- Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US
- Division of Biomedical Informatics, Department of Medicine, University of California San Diego, La Jolla, CA, US
| | - Robert N Weinreb
- Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US
| | - Mark Christopher
- Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US
| | - Linda M Zangwill
- Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US
| | - Sally L Baxter
- Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US
- Division of Biomedical Informatics, Department of Medicine, University of California San Diego, La Jolla, CA, US
24
|
Darooei R, Nazari M, Kafieh R, Rabbani H. Optimal Deep Learning Architecture for Automated Segmentation of Cysts in OCT Images Using X-Let Transforms. Diagnostics (Basel) 2023; 13:1994. [PMID: 37370889 PMCID: PMC10297540 DOI: 10.3390/diagnostics13121994] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2023] [Revised: 05/22/2023] [Accepted: 06/02/2023] [Indexed: 06/29/2023] Open
Abstract
The retina is a thin, light-sensitive membrane with a multilayered structure found in the back of the eyeball. There are many types of retinal disorders. The two most prevalent retinal illnesses are Age-Related Macular Degeneration (AMD) and Diabetic Macular Edema (DME). Optical Coherence Tomography (OCT) is a vital retinal imaging technology. X-lets (such as curvelet, DTCWT, contourlet, etc.) have several benefits in image processing and analysis. They can capture both local and non-local features of an image simultaneously. The aim of this paper is to propose an optimal deep learning architecture based on sparse basis functions for the automated segmentation of cystic areas in OCT images. Different X-let transforms were used to produce different network inputs, including curvelet, Dual-Tree Complex Wavelet Transform (DTCWT), circlet, and contourlet. Additionally, three different combinations of these transforms are suggested to achieve more accurate segmentation results. Various metrics, including Dice coefficient, sensitivity, false positive ratio, Jaccard index, and qualitative results, were evaluated to find the optimal networks and combinations of the X-let sub-bands. The proposed network was tested on both original and noisy datasets. The results show the following facts: (1) contourlet achieves the optimal results among the different combinations; (2) the five-channel decomposition using high-pass sub-bands of the contourlet transform achieves the best performance; and (3) the five-channel decomposition using high-pass sub-bands outperforms the state-of-the-art methods, especially on the noisy dataset. The proposed method has the potential to improve the accuracy and speed of the segmentation process in clinical settings, facilitating the diagnosis and treatment of retinal diseases.
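As a simpler stand-in for the contourlet and DTCWT transforms used in the paper, a one-level 2-D Haar decomposition illustrates how an image is split into a low-pass and several high-pass sub-bands that can be stacked as extra network input channels; the Haar choice is an assumption made for brevity, not the authors' transform.

```python
def haar_subbands(img):
    # one-level 2-D Haar decomposition computed from 2x2 blocks:
    # LL = local average, LH/HL = horizontal/vertical detail, HH = diagonal detail
    h, w = len(img) // 2, len(img[0]) // 2
    LL = [[0.0] * w for _ in range(h)]
    LH = [[0.0] * w for _ in range(h)]
    HL = [[0.0] * w for _ in range(h)]
    HH = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            a, b = img[2 * y][2 * x], img[2 * y][2 * x + 1]
            c, d = img[2 * y + 1][2 * x], img[2 * y + 1][2 * x + 1]
            LL[y][x] = (a + b + c + d) / 4
            LH[y][x] = (a - b + c - d) / 4   # horizontal high-pass
            HL[y][x] = (a + b - c - d) / 4   # vertical high-pass
            HH[y][x] = (a - b - c + d) / 4   # diagonal high-pass
    return LL, LH, HL, HH
```

Feeding several high-pass sub-bands as parallel channels is the multi-channel-decomposition idea behind the paper's five-channel contourlet input.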
Affiliation(s)
- Reza Darooei
- Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan 8174673461, Iran; (R.D.); (R.K.)
- Department of Bioelectrics and Biomedical Engineering, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan 8174673461, Iran
| | - Milad Nazari
- Department of Molecular Biology and Genetics, Aarhus University, 8200 Aarhus, Denmark;
- The Danish Research Institute of Translational Neuroscience (DANDRITE), Aarhus University, 8200 Aarhus, Denmark
| | - Rahele Kafieh
- Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan 8174673461, Iran; (R.D.); (R.K.)
- Department of Engineering, Durham University, South Road, Durham DH1 3RW, UK
| | - Hossein Rabbani
- Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan 8174673461, Iran; (R.D.); (R.K.)
- Department of Bioelectrics and Biomedical Engineering, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan 8174673461, Iran
25
|
Feng H, Chen J, Zhang Z, Lou Y, Zhang S, Yang W. A bibliometric analysis of artificial intelligence applications in macular edema: exploring research hotspots and Frontiers. Front Cell Dev Biol 2023; 11:1174936. [PMID: 37255600 PMCID: PMC10225517 DOI: 10.3389/fcell.2023.1174936] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2023] [Accepted: 05/02/2023] [Indexed: 06/01/2023] Open
Abstract
Background: Artificial intelligence (AI) is used in ophthalmological disease screening and diagnostics, medical image diagnostics, and predicting late-disease progression rates. We reviewed all AI publications associated with macular edema (ME) research between 2011 and 2022 and performed modeling, quantitative, and qualitative investigations. Methods: On 1st February 2023, we screened the Web of Science Core Collection for AI applications related to ME, from which 297 studies were identified and analyzed (2011-2022). We collected information on publications, institutions, country/region, keywords, journal name, references, and research hotspots. Literature clustering networks and Frontier knowledge bases were investigated using the bibliometrix-BiblioShiny, VOSviewer, and CiteSpace bibliometric platforms. We used the R "bibliometrix" package to synopsize our observations, enumerate keywords, visualize collaboration networks between countries/regions, and generate a topic trends plot. VOSviewer was used to examine cooperation between institutions and identify citation relationships between journals. We used CiteSpace to identify clustering keywords over the timeline and identify keywords with the strongest citation bursts. Results: In total, 47 countries published AI studies related to ME; the United States had the highest H-index and thus the greatest influence. China and the United States cooperated most closely among all countries. Also, 613 institutions generated publications; the Medical University of Vienna had the highest number of studies. This publication record and H-index meant the university was the most influential in the ME field. 
Reference clusters were also categorized into 10 headings: retinal Optical Coherence Tomography (OCT) fluid detection, convolutional network models, deep learning (DL)-based single-shot predictions, retinal vascular disease, diabetic retinopathy (DR), convolutional neural networks (CNNs), automated macular pathology diagnosis, dry age-related macular degeneration (DARMD), class weight, and advanced DL architecture systems. The frontier keyword for 2021-2022 was diabetic macular edema (DME). Conclusion: Our review of the AI-related ME literature was comprehensive, systematic, and objective, and identified future trends and current hotspots. With increased DL outputs, the ME research focus has gradually shifted from manual ME examinations to automatic ME detection and associated symptoms. In this review, we present a comprehensive and dynamic overview of AI in ME and identify future research areas.
Affiliation(s)
- Haiwen Feng
- Department of Software Engineering, School of Software, Shenyang University of Technology, Shenyang, Liaoning, China
| | - Jiaqi Chen
- Department of Software Engineering, School of Software, Shenyang University of Technology, Shenyang, Liaoning, China
| | - Zhichang Zhang
- Department of Computer, School of Intelligent Medicine, China Medical University, Shenyang, Liaoning, China
| | - Yan Lou
- Department of Computer, School of Intelligent Medicine, China Medical University, Shenyang, Liaoning, China
| | - Shaochong Zhang
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
| | - Weihua Yang
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
26
|
Rasti R, Biglari A, Rezapourian M, Yang Z, Farsiu S. RetiFluidNet: A Self-Adaptive and Multi-Attention Deep Convolutional Network for Retinal OCT Fluid Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:1413-1423. [PMID: 37015695 DOI: 10.1109/tmi.2022.3228285] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Optical coherence tomography (OCT) helps ophthalmologists assess macular edema, accumulation of fluids, and lesions at microscopic resolution. Quantification of retinal fluids is necessary for OCT-guided treatment management, which relies on a precise image segmentation step. As manual analysis of retinal fluids is a time-consuming, subjective, and error-prone task, there is increasing demand for fast and robust automatic solutions. In this study, a new convolutional neural architecture named RetiFluidNet is proposed for multi-class retinal fluid segmentation. The model benefits from hierarchical representation learning of textural, contextual, and edge features using a new self-adaptive dual-attention (SDA) module, multiple self-adaptive attention-based skip connections (SASC), and a novel multi-scale deep self-supervision learning (DSL) scheme. The attention mechanism in the proposed SDA module enables the model to automatically extract deformation-aware representations at different levels, and the introduced SASC paths further consider spatial-channel interdependencies for concatenation of counterpart encoder and decoder units, which improve representational capability. RetiFluidNet is also optimized using a joint loss function comprising a weighted version of dice overlap and edge-preserved connectivity-based losses, where several hierarchical stages of multi-scale local losses are integrated into the optimization process. The model is validated based on three publicly available datasets: RETOUCH, OPTIMA, and DUKE, with comparisons against several baselines. Experimental results on the datasets prove the effectiveness of the proposed model in retinal OCT fluid segmentation and reveal that the suggested method is more effective than existing state-of-the-art fluid segmentation algorithms in adapting to retinal OCT scans recorded by various image scanning instruments.
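The weighted Dice component of RetiFluidNet's joint loss can be sketched as follows; flattened per-class lists, the uniform weighting interface, and the `eps` smoothing constant are assumptions for illustration, and the paper's edge-preserved connectivity loss and multi-scale deep supervision are omitted.

```python
def weighted_dice_loss(preds, targets, weights, eps=1e-6):
    # preds: per-class lists of predicted probabilities (flattened pixels)
    # targets: per-class binary masks of the same shape
    # weights: per-class importance weights
    total = 0.0
    for p, t, w in zip(preds, targets, weights):
        inter = sum(pi * ti for pi, ti in zip(p, t))
        dice = (2.0 * inter + eps) / (sum(p) + sum(t) + eps)  # soft Dice overlap
        total += w * (1.0 - dice)
    return total / sum(weights)
```

A perfect prediction drives the loss to zero; class weights let rare fluid types (e.g., intraretinal fluid) count as much as abundant ones during optimization.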
27
|
Wu Y, Olvera-Barrios A, Yanagihara R, Kung TPH, Lu R, Leung I, Mishra AV, Nussinovitch H, Grimaldi G, Blazes M, Lee CS, Egan C, Tufail A, Lee AY. Training Deep Learning Models to Work on Multiple Devices by Cross-Domain Learning with No Additional Annotations. Ophthalmology 2023; 130:213-222. [PMID: 36154868 PMCID: PMC9868052 DOI: 10.1016/j.ophtha.2022.09.014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Revised: 09/07/2022] [Accepted: 09/16/2022] [Indexed: 01/25/2023] Open
Abstract
PURPOSE To create an unsupervised cross-domain segmentation algorithm for segmenting intraretinal fluid and retinal layers on normal and pathologic macular OCT images from different manufacturers and camera devices. DESIGN We sought to use generative adversarial networks (GANs) to generalize a segmentation model trained on one OCT device to segment B-scans obtained from a different OCT device manufacturer in a fully unsupervised approach without labeled data from the latter manufacturer. PARTICIPANTS A total of 732 OCT B-scans from 4 different OCT devices (Heidelberg Spectralis, Topcon 1000, Maestro2, and Zeiss Plex Elite 9000). METHODS We developed an unsupervised GAN model, GANSeg, to segment 7 retinal layers and intraretinal fluid in Topcon 1000 OCT images (domain B) that had access only to labeled data on Heidelberg Spectralis images (domain A). GANSeg was unsupervised because it had access only to 110 Heidelberg labeled OCTs and 556 raw and unlabeled Topcon 1000 OCTs. To validate GANSeg segmentations, 3 masked graders manually segmented 60 OCTs from an external Topcon 1000 test dataset independently. To test the limits of GANSeg, graders also manually segmented 3 OCTs from Zeiss Plex Elite 9000 and Topcon Maestro2. A U-Net was trained on the same labeled Heidelberg images as baseline. The GANSeg repository with labeled annotations is at https://github.com/uw-biomedical-ml/ganseg. MAIN OUTCOME MEASURES Dice scores comparing segmentation results from GANSeg and the U-Net model with the manually segmented images. RESULTS Although GANSeg and U-Net achieved Dice score performance comparable with human experts on the labeled Heidelberg test dataset, only GANSeg achieved Dice scores comparable with human experts on the Topcon 1000 test dataset, with the best performance for the ganglion cell layer plus inner plexiform layer (90%; 95% confidence interval [CI], 68%-96%) and the worst performance for intraretinal fluid (58%; 95% CI, 18%-89%), which was statistically similar to human graders (79%; 95% CI, 43%-94%). 
GANSeg significantly outperformed the U-Net model. Moreover, GANSeg generalized to both Zeiss and Topcon Maestro2 swept-source OCT domains, which it had never encountered before. CONCLUSIONS GANSeg enables the transfer of supervised deep learning algorithms across OCT devices without labeled data, thereby greatly expanding the applicability of deep learning algorithms.
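Per-scan Dice scores with percentile-bootstrap confidence intervals, of the kind reported above, can be computed along these lines; the resampling scheme, `n_boot`, and the fixed seed are illustrative assumptions rather than the authors' exact procedure.

```python
import random

def dice(a, b):
    # Dice overlap between two binary masks (flattened 0/1 lists)
    inter = sum(1 for x, y in zip(a, b) if x and y)
    return 2.0 * inter / (sum(a) + sum(b))

def bootstrap_ci(scores, n_boot=2000, alpha=0.05, seed=0):
    # percentile bootstrap over per-scan Dice scores
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choice(scores) for _ in scores) / len(scores)
        for _ in range(n_boot)
    )
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi
```

Wide intervals such as the 18%-89% reported for intraretinal fluid typically reflect a small test set with highly variable per-scan overlap.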
Affiliation(s)
- Yue Wu
- Department of Ophthalmology, University of Washington, Seattle, Washington
| | - Abraham Olvera-Barrios
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom; Institute of Ophthalmology, University College London, London, United Kingdom
| | - Ryan Yanagihara
- Department of Ophthalmology, University of Washington, Seattle, Washington
| | | | - Randy Lu
- Department of Ophthalmology, University of Washington, Seattle, Washington
| | - Irene Leung
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
| | - Amit V Mishra
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
| | | | - Gabriela Grimaldi
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
| | - Marian Blazes
- Department of Ophthalmology, University of Washington, Seattle, Washington
| | - Cecilia S Lee
- Department of Ophthalmology, University of Washington, Seattle, Washington; Roger and Angie Karalis Johnson Retina Center, Seattle, Washington
| | - Catherine Egan
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom; Institute of Ophthalmology, University College London, London, United Kingdom
| | - Adnan Tufail
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom; Institute of Ophthalmology, University College London, London, United Kingdom
| | - Aaron Y Lee
- Department of Ophthalmology, University of Washington, Seattle, Washington; Roger and Angie Karalis Johnson Retina Center, Seattle, Washington.
28
|
Philippi D, Rothaus K, Castelli M. A vision transformer architecture for the automated segmentation of retinal lesions in spectral domain optical coherence tomography images. Sci Rep 2023; 13:517. [PMID: 36627357 PMCID: PMC9832034 DOI: 10.1038/s41598-023-27616-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2022] [Accepted: 01/04/2023] [Indexed: 01/12/2023] Open
Abstract
Neovascular age-related macular degeneration (nAMD) is one of the major causes of irreversible blindness and is characterized by accumulations of different lesions inside the retina. AMD biomarkers enable experts to grade the AMD and could be used for therapy prognosis and individualized treatment decisions. In particular, intra-retinal fluid (IRF), sub-retinal fluid (SRF), and pigment epithelium detachment (PED) are prominent biomarkers for grading neovascular AMD. Spectral-domain optical coherence tomography (SD-OCT) revolutionized nAMD early diagnosis by providing cross-sectional images of the retina. Automatic segmentation and quantification of IRF, SRF, and PED in SD-OCT images can be extremely useful for clinical decision-making. Despite the excellent performance of convolutional neural network (CNN)-based methods, the task still presents some challenges due to relevant variations in the location, size, shape, and texture of the lesions. This work adopts a transformer-based method to automatically segment retinal lesions from SD-OCT images and qualitatively and quantitatively evaluate its performance against CNN-based methods. The method combines the efficient long-range feature extraction and aggregation capabilities of Vision Transformers with the data-efficient training of CNNs. The proposed method was tested on a private dataset containing 3842 2-dimensional SD-OCT retina images, manually labeled by experts of the Franziskus Eye-Center, Muenster. While one of the competitors presents a better performance in terms of Dice score, the proposed method is significantly less computationally expensive. Thus, future research will focus on the proposed network's architecture to increase its segmentation performance while maintaining its computational efficiency.
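The patch tokenisation at the heart of any Vision Transformer encoder can be sketched as follows; the patch size, the toy projection matrix, and the omission of positional embeddings and attention layers are deliberate simplifications, not the paper's architecture.

```python
def patchify(img, p):
    # split an H x W image into flattened p x p patches (ViT tokenisation);
    # rows/columns that do not fill a whole patch are dropped
    h, w = len(img), len(img[0])
    patches = []
    for y in range(0, h - h % p, p):
        for x in range(0, w - w % p, p):
            patches.append([img[y + i][x + j] for i in range(p) for j in range(p)])
    return patches

def embed(patches, W):
    # linear projection of each flattened patch to the model dimension;
    # W is a list of weight columns, one per output feature
    return [[sum(v * wc for v, wc in zip(patch, col)) for col in W]
            for patch in patches]
```

The resulting token sequence is what self-attention then processes globally, which is the source of the long-range feature aggregation the abstract credits to transformers.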
Affiliation(s)
- Daniel Philippi
- NOVA Information Management School (NOVA IMS), Universidade Nova de Lisboa, 1070-312 Lisbon, Portugal
| | - Kai Rothaus
- Department of Ophthalmology, St. Franziskus Hospital, 48145 Muenster, Germany
| | - Mauro Castelli
- NOVA Information Management School (NOVA IMS), Universidade Nova de Lisboa, 1070-312 Lisbon, Portugal
- School of Economics and Business, University of Ljubljana, Ljubljana, Slovenia
29
|
Yousefi S. Clinical Applications of Artificial Intelligence in Glaucoma. J Ophthalmic Vis Res 2023; 18:97-112. [PMID: 36937202 PMCID: PMC10020779 DOI: 10.18502/jovr.v18i1.12730] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2022] [Accepted: 11/05/2022] [Indexed: 02/25/2023] Open
Abstract
Ophthalmology is one of the major imaging-intensive fields of medicine and thus has potential for extensive applications of artificial intelligence (AI) to advance diagnosis, drug efficacy, and other treatment-related aspects of ocular disease. AI has made impressive progress in ophthalmology within the past few years and two autonomous AI-enabled systems have received US regulatory approvals for autonomously screening for mid-level or advanced diabetic retinopathy and macular edema. While no autonomous AI-enabled system for glaucoma screening has yet received US regulatory approval, numerous assistive AI-enabled software tools are already employed in commercialized instruments for quantifying retinal images and visual fields to augment glaucoma research and clinical practice. In this literature review (non-systematic), we provide an overview of AI applications in glaucoma, and highlight some limitations and considerations for AI integration and adoption into clinical practice.
Affiliation(s)
- Siamak Yousefi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA
- Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, TN, USA
30
|
Pavithra K, Kumar P, Geetha M, Bhandary SV. Computer aided diagnosis of diabetic macular edema in retinal fundus and OCT images: A review. Biocybern Biomed Eng 2023. [DOI: 10.1016/j.bbe.2022.12.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
31
|
Celebi ARC, Bulut E, Sezer A. Artificial intelligence based detection of age-related macular degeneration using optical coherence tomography with unique image preprocessing. Eur J Ophthalmol 2023; 33:65-73. [PMID: 35469472 DOI: 10.1177/11206721221096294] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/11/2023]
Abstract
PURPOSE The aim of the study is to improve the accuracy of detecting age-related macular degeneration (AMD) in its earlier phases with a proposed Capsule Network (CapsNet) architecture trained on speckle-noise-reduced spectral domain optical coherence tomography (SD-OCT) images, based on an optimized Bayesian non-local mean (OBNLM) filter and augmentation techniques. METHODS A total of 726 local SD-OCT images were collected and labelled as 159 drusen, 145 dry AMD, 156 wet AMD and 266 normal. The region of interest (ROI) was identified. Speckle noise in SD-OCT images was reduced with the OBNLM filter. The processed images were fed to the proposed CapsNet architecture to classify SD-OCT images. Accuracy rates were calculated on both the public and local datasets. RESULTS An accuracy rate of 96.39% was achieved on the local SD-OCT image dataset after performing data augmentation and speckle noise reduction with OBNLM. The performance of the proposed CapsNet was also evaluated on the public Kaggle dataset under the same processing procedures, and the accuracy rate was calculated as 98.07%. The sensitivity and specificity rates were 96.72% and 99.98%, respectively. CONCLUSIONS The classification success of the proposed CapsNet may be improved with robust pre-processing steps such as determination of the ROI and denoising of SD-OCT images based on OBNLM. These impactful image preprocessing steps yielded higher accuracy rates for determining different types of AMD, including its precursor lesion, on both the local and public datasets with the proposed CapsNet architecture.
Affiliation(s)
- Ali Riza Cenk Celebi
- Department of Ophthalmology, Acibadem University School of Medicine, Istanbul, Turkey
| | - Erkan Bulut
- Department of Ophthalmology, Beylikduzu Public Hospital, Istanbul, Turkey
| | - Aysun Sezer
- Unité d'Informatique et d'Ingénierie des Systèmes, ENSTA-ParisTech, Université Paris-Saclay, Villefranche-sur-Mer, Provence-Alpes-Côte d'Azur, France
32
|
He X, Zhong Z, Fang L, He M, Sebe N. Structure-Guided Cross-Attention Network for Cross-Domain OCT Fluid Segmentation. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2022; PP:309-320. [PMID: 37015552 DOI: 10.1109/tip.2022.3228163] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Accurate retinal fluid segmentation on Optical Coherence Tomography (OCT) images plays an important role in diagnosing and treating various eye diseases. State-of-the-art deep models have shown promising performance on OCT image segmentation given pixel-wise annotated training data. However, the learned model will achieve poor performance on OCT images that are obtained from different devices (domains) due to the domain shift issue. This problem largely limits the real-world application of OCT image segmentation since the types of devices usually are different in each hospital. In this paper, we study the task of cross-domain OCT fluid segmentation, where we are given a labeled dataset of the source device (domain) and an unlabeled dataset of the target device (domain). The goal is to learn a model that can perform well on the target domain. To solve this problem, in this paper, we propose a novel Structure-guided Cross-Attention Network (SCAN), which leverages the retinal layer structure to facilitate domain alignment. Our SCAN is inspired by the fact that the retinal layer structure is robust to domains and can reflect regions that are important to fluid segmentation. In light of this, we build our SCAN in a multi-task manner by jointly learning the retinal structure prediction and fluid segmentation. To exploit the mutual benefit between layer structure and fluid segmentation, we further introduce a cross-attention module to measure the correlation between the layer-specific feature and the fluid-specific feature, encouraging the model to concentrate on highly relevant regions during domain alignment. Moreover, an adaptation difficulty map is evaluated based on the retinal structure predictions from different domains, which forces the model to focus on hard regions during structure-aware adversarial learning. 
Extensive experiments on the three domains of the RETOUCH dataset demonstrate the effectiveness of the proposed method and show that our approach produces state-of-the-art performance on cross-domain OCT fluid segmentation.
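The cross-attention step described above, with fluid-branch queries attending over layer-branch keys and values, can be sketched with identity projections; real modules learn the Wq/Wk/Wv projections and operate on spatial feature maps, so this single-head, list-based reduction is an illustrative assumption.

```python
import math

def softmax(xs):
    # numerically stable softmax
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(fluid_feats, layer_feats, d):
    # each fluid-branch query attends over layer-branch keys/values
    # (identity projections for brevity; learned Wq/Wk/Wv in practice)
    out = []
    for q in fluid_feats:
        scores = softmax([sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                          for k in layer_feats])
        out.append([sum(a * v[j] for a, v in zip(scores, layer_feats))
                    for j in range(d)])
    return out
```

Each output vector is a similarity-weighted mixture of layer-branch features, which is how the fluid branch is steered toward anatomically relevant regions during alignment.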
|
33
|
Schwartz R, Khalid H, Liakopoulos S, Ouyang Y, de Vente C, González-Gonzalo C, Lee AY, Guymer R, Chew EY, Egan C, Wu Z, Kumar H, Farrington J, Müller PL, Sánchez CI, Tufail A. A Deep Learning Framework for the Detection and Quantification of Reticular Pseudodrusen and Drusen on Optical Coherence Tomography. Transl Vis Sci Technol 2022; 11:3. [PMID: 36458946 PMCID: PMC9728496 DOI: 10.1167/tvst.11.12.3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2022] [Accepted: 10/26/2022] [Indexed: 12/05/2022] Open
Abstract
Purpose The purpose of this study was to develop and validate a deep learning (DL) framework for the detection and quantification of reticular pseudodrusen (RPD) and drusen on optical coherence tomography (OCT) scans. Methods A DL framework was developed consisting of a classification model and an out-of-distribution (OOD) detection model for the identification of ungradable scans; a classification model to identify scans with drusen or RPD; and an image segmentation model to independently segment lesions as RPD or drusen. Data were obtained from 1284 participants in the UK Biobank (UKBB) with a self-reported diagnosis of age-related macular degeneration (AMD) and 250 UKBB controls. Drusen and RPD were manually delineated by five retina specialists. The main outcome measures were sensitivity, specificity, area under the receiver operating characteristic (ROC) curve (AUC), kappa, accuracy, intraclass correlation coefficient (ICC), and free-response receiver operating characteristic (FROC) curves. Results The classification models performed strongly at their respective tasks (0.95, 0.93, and 0.99 AUC, respectively, for the ungradable scans classifier, the OOD model, and the drusen and RPD classification models). The mean ICC for the drusen and RPD area versus graders was 0.74 and 0.61, respectively, compared with 0.69 and 0.68 for intergrader agreement. FROC curves showed that the model's sensitivity was close to human performance. Conclusions The models achieved high classification and segmentation performance, similar to human performance. Translational Relevance Application of this robust framework will further our understanding of RPD as a separate entity from drusen in both research and clinical settings.
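AUC, the headline metric for these classifiers, reduces to a rank statistic: the probability that a randomly chosen positive case scores above a randomly chosen negative one. A minimal sketch with toy labels and scores (not study data):

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC as the Mann-Whitney statistic: P(positive score > negative
    score), with ties counted as 0.5. O(n_pos * n_neg), fine for toys."""
    labels = np.asarray(labels, bool)
    scores = np.asarray(scores, float)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

y = np.array([0, 0, 1, 1, 1])
s = np.array([0.1, 0.4, 0.35, 0.8, 0.9])
print(roc_auc(y, s))   # 5/6 ≈ 0.833: one positive is outranked by one negative
```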
Affiliation(s)
- Roy Schwartz
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Health Informatics, University College London, London, UK
- Quantitative Healthcare Analysis (qurAI) Group, Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands
| | - Hagar Khalid
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Tanta University Hospital, Tanta, Egypt
| | - Sandra Liakopoulos
- Cologne Image Reading Center, Department of Ophthalmology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Department of Ophthalmology, Goethe University, Frankfurt, Germany
| | - Yanling Ouyang
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
| | - Coen de Vente
- Quantitative Healthcare Analysis (qurAI) Group, Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands
- Amsterdam UMC location University of Amsterdam, Biomedical Engineering and Physics, Amsterdam, The Netherlands
- Diagnostic Image Analysis Group (DIAG), Department of Radiology and Nuclear Medicine, Radboud UMC, Nijmegen, The Netherlands
| | - Cristina González-Gonzalo
- Quantitative Healthcare Analysis (qurAI) Group, Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands
- Diagnostic Image Analysis Group (DIAG), Department of Radiology and Nuclear Medicine, Radboud UMC, Nijmegen, The Netherlands
| | - Aaron Y. Lee
- Roger and Angie Karalis Johnson Retina Center, University of Washington, Seattle, WA, USA
- Department of Ophthalmology, University of Washington, Seattle, WA, USA
| | - Robyn Guymer
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
| | - Emily Y. Chew
- National Eye Institute (NEI), National Institutes of Health (NIH), Bethesda, MD, USA
| | - Catherine Egan
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
| | - Zhichao Wu
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
| | - Himeesh Kumar
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Ophthalmology, Department of Surgery, The University of Melbourne, Melbourne, Australia
| | - Joseph Farrington
- Institute of Health Informatics, University College London, London, UK
| | - Philipp L. Müller
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Makula Center, Südblick Eye Centers, Augsburg, Germany
- Department of Ophthalmology, University of Bonn, Bonn, Germany
| | - Clara I. Sánchez
- Quantitative Healthcare Analysis (qurAI) Group, Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands
- Amsterdam UMC location University of Amsterdam, Biomedical Engineering and Physics, Amsterdam, The Netherlands
| | - Adnan Tufail
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
| |
|
34
|
Higgins BE, Montesano G, Crabb DP, Naskas TT, Graham KW, Chakravarthy U, Kee F, Wright DM, Hogg RE. Assessment of the Classification of Age-Related Macular Degeneration Severity from the Northern Ireland Sensory Ageing Study Using a Measure of Dark Adaptation. OPHTHALMOLOGY SCIENCE 2022; 2:100204. [PMID: 36531574 PMCID: PMC9754971 DOI: 10.1016/j.xops.2022.100204] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/16/2021] [Revised: 05/27/2022] [Accepted: 07/12/2022] [Indexed: 06/17/2023]
Abstract
Purpose To assess the differences in rod-mediated dark adaptation (RMDA) between different grades of age-related macular degeneration (AMD) severity using an OCT-based criterion compared with those of AMD severity using the Beckman color fundus photography (CFP)-based classification and to assess the association between the presence of subretinal drusenoid deposits (SDDs) and RMDA at different grades of AMD severity using an OCT-based classification. Design Cross-sectional study. Participants Participants from the Northern Ireland Sensory Ageing study (Queen's University Belfast). Methods Complete RMDA (rod-intercept time [RIT]) data, CFP, and spectral-domain OCT images were extracted. Participants were stratified into 4 Beckman groups (omitting late-stage AMD) and 3 OCT-based groups. The presence and stage of SDDs were identified using OCT. Main Outcome Measures Rod-intercept time data (age-corrected). Results Data from 459 participants (median [interquartile range] age, 65 [59-71] years) were stratified by both classifications. Subretinal drusenoid deposits were detected in 109 eyes. The median (interquartile range) RMDA for the Beckman classification (Beckman 0-3, with 3 being intermediate age-related macular degeneration [iAMD]) groups was 6.0 (4.5-8.7), 6.6 (4.7-10.5), 5.7 (4.4-7.4), and 13.2 (6-21.1) minutes, respectively. OCT classifications OCT0-OCT2 yielded different median (interquartile range) values: 5.8 (4.5-8.5), 8.4 (5.2-13.3), and 11.1 (5.3-20.1) minutes, respectively. After correcting for age, eyes in Beckman 3 (iAMD) had statistically significantly worse RMDA than eyes in the other Beckman groups (P ≤ 0.005 for all), with no statistically significant differences between the other Beckman groups. Similarly, after age correction, eyes in OCT2 had worse RMDA than eyes in OCT0 (P ≤ 0.001) and OCT1 (P < 0.01); however, there was no statistically significant difference between eyes in OCT0 and eyes in OCT1 (P = 0.195). 
The presence of SDDs was associated with worse RMDA in OCT2 (P < 0.01) but not in OCT1 (P = 0.285). Conclusions Eyes with a structural definition of iAMD have delayed RMDA, regardless of whether a CFP- or OCT-based criterion is used. In this study, after correcting for age, the RMDA did not differ between groups of eyes defined to have early AMD or normal aging, regardless of the classification. The presence of SDDs has some effect on RMDA at different grades of AMD severity.
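An age correction of the kind applied before these group comparisons can be done by regressing the measure on age and comparing the residuals. A hypothetical sketch with synthetic ages and rod-intercept times (the study's actual correction method is not specified here):

```python
import numpy as np

def age_corrected(values, ages):
    """Residuals after an ordinary least-squares regression of a measure
    (e.g. rod-intercept time) on age, removing the age confound before
    group comparisons."""
    A = np.column_stack([np.ones_like(ages, dtype=float), ages])
    coef, *_ = np.linalg.lstsq(A, values, rcond=None)
    return values - A @ coef

ages = np.array([55.0, 60.0, 65.0, 70.0, 75.0])
rit = np.array([5.0, 5.8, 6.5, 7.4, 8.1])   # minutes, illustrative only
resid = age_corrected(rit, ages)
print(np.allclose(resid.mean(), 0))   # True: residuals are centred by construction
```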
Affiliation(s)
- Bethany E. Higgins
- Optometry and Visual Sciences, City, University of London, London, United Kingdom
| | - Giovanni Montesano
- Optometry and Visual Sciences, City, University of London, London, United Kingdom
- National Institute for Health and Care Research, Biomedical Research Centre, Moorfields Eye Hospital, National Health Service Foundation Trust and University College London, Institute of Ophthalmology, London, United Kingdom
| | - David P. Crabb
- Optometry and Visual Sciences, City, University of London, London, United Kingdom
| | - Timos T. Naskas
- Centre for Public Health, Queen’s University Belfast, Northern Ireland, United Kingdom
| | - Katie W. Graham
- Centre for Public Health, Queen’s University Belfast, Northern Ireland, United Kingdom
| | - Usha Chakravarthy
- Centre for Public Health, Queen’s University Belfast, Northern Ireland, United Kingdom
| | - Frank Kee
- Centre for Public Health, Queen’s University Belfast, Northern Ireland, United Kingdom
| | - David M. Wright
- Centre for Public Health, Queen’s University Belfast, Northern Ireland, United Kingdom
| | - Ruth E. Hogg
- Centre for Public Health, Queen’s University Belfast, Northern Ireland, United Kingdom
| |
|
35
|
Kar SS, Cetin H, Lunasco L, Le TK, Zahid R, Meng X, Srivastava SK, Madabhushi A, Ehlers JP. OCT-Derived Radiomic Features Predict Anti-VEGF Response and Durability in Neovascular Age-Related Macular Degeneration. OPHTHALMOLOGY SCIENCE 2022; 2:100171. [PMID: 36531588 PMCID: PMC9754979 DOI: 10.1016/j.xops.2022.100171] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/09/2022] [Revised: 04/15/2022] [Accepted: 05/12/2022] [Indexed: 06/17/2023]
Abstract
PURPOSE No established biomarkers currently exist for therapeutic efficacy and durability of anti-VEGF therapy in neovascular age-related macular degeneration (nAMD). This study evaluated radiomic-based quantitative OCT biomarkers that may be predictive of anti-VEGF treatment response and durability. DESIGN Assessment of baseline biomarkers using machine learning (ML) classifiers to predict tolerance to anti-VEGF therapy. PARTICIPANTS Eighty-one participants with treatment-naïve nAMD from the OSPREY study, including 15 super responders (patients who achieved and maintained retinal fluid resolution) and 66 non-super responders (patients who did not achieve or maintain retinal fluid resolution). METHODS A total of 962 texture-based radiomic features were extracted from fluid, subretinal hyperreflective material (SHRM), and different retinal tissue compartments of OCT scans. The top 8 features, chosen by the minimum redundancy maximum relevance feature selection method, were evaluated using 4 ML classifiers in a cross-validated approach to distinguish between the 2 patient groups. Longitudinal assessment of changes in different texture-based radiomic descriptors (delta-texture features) between baseline and month 3 also was performed to evaluate their association with treatment response. Additionally, 8 baseline clinical parameters and a combination of baseline OCT, delta-texture features, and the clinical parameters were evaluated in a cross-validated approach in terms of association with therapeutic response. MAIN OUTCOME MEASURES The cross-validated area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity were calculated to validate the classifier performance. RESULTS The cross-validated AUC by the quadratic discriminant analysis classifier was 0.75 ± 0.09 using texture-based baseline OCT features. The delta-texture features within different OCT compartments between baseline and month 3 yielded an AUC of 0.78 ± 0.08. 
The baseline clinical parameters, sub-retinal pigment epithelium volume and intraretinal fluid volume, yielded an AUC of 0.62 ± 0.07. When all the baseline, delta, and clinical features were combined, a statistically significant improvement in classifier performance (AUC, 0.81 ± 0.07) was obtained. CONCLUSIONS Radiomic-based quantitative assessment of OCT images was shown to distinguish between super responders and non-super responders to anti-VEGF therapy in nAMD. The baseline fluid and SHRM delta-texture features were found to be the most discriminating across groups.
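Texture-based radiomic features of the kind extracted here are typically built from gray-level co-occurrence statistics. A toy sketch of one classic descriptor (Haralick contrast) on a tiny quantized image — illustrative only, not the study's 962-feature pipeline:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset,
    normalized into a joint probability table."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()

def contrast(p):
    """Haralick contrast: sum_ij (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return ((i - j) ** 2 * p).sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
p = glcm(img, levels=4)
print(round(contrast(p), 3))   # 0.333: only 4 of 12 horizontal pairs differ, by 1 level
```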
Key Words
- 3D, 3-dimensional
- AMD, age-related macular degeneration
- AUC, area under the receiver operating characteristic curve
- AUC-PRC, area under the precision recall curve
- IAI, intravitreal aflibercept injection
- ILM, internal limiting membrane
- IRF, intraretinal fluid
- ML, machine learning
- OCT
- QDA, quadratic discriminant analysis
- RFI, retinal fluid index
- RPE, retinal pigment epithelium
- Radiomics
- SHRM, subretinal hyperreflective material
- SRF, subretinal fluid
- SRFI, subretinal fluid index
- TRFI, total retinal fluid index
- Wet age-related macular degeneration
- mRmR, minimum redundancy maximum relevance
- nAMD, neovascular age-related macular degeneration
Affiliation(s)
- Sudeshna Sil Kar
- The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cole Eye Institute, Cleveland Clinic, Cleveland, Ohio
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio
| | - Hasan Cetin
- The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cole Eye Institute, Cleveland Clinic, Cleveland, Ohio
| | - Leina Lunasco
- The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cole Eye Institute, Cleveland Clinic, Cleveland, Ohio
| | - Thuy K. Le
- The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cole Eye Institute, Cleveland Clinic, Cleveland, Ohio
| | - Robert Zahid
- Novartis Pharmaceuticals, East Hanover, New Jersey
| | - Xiangyi Meng
- Novartis Pharmaceuticals, East Hanover, New Jersey
| | - Sunil K. Srivastava
- The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cole Eye Institute, Cleveland Clinic, Cleveland, Ohio
- Vitreoretinal Service, Cole Eye Institute, Cleveland Clinic, Cleveland, Ohio
| | - Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio
- Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, Ohio
| | - Justis P. Ehlers
- The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cole Eye Institute, Cleveland Clinic, Cleveland, Ohio
- Vitreoretinal Service, Cole Eye Institute, Cleveland Clinic, Cleveland, Ohio
| |
|
36
|
Sheng B, Chen X, Li T, Ma T, Yang Y, Bi L, Zhang X. An overview of artificial intelligence in diabetic retinopathy and other ocular diseases. Front Public Health 2022; 10:971943. [PMID: 36388304 PMCID: PMC9650481 DOI: 10.3389/fpubh.2022.971943] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Accepted: 10/04/2022] [Indexed: 01/25/2023] Open
Abstract
Artificial intelligence (AI), also known as machine intelligence, is a branch of science that empowers machines with human-like intelligence. AI refers to the technology of rendering human intelligence through computer programs. From healthcare to the precise prevention, diagnosis, and management of diseases, AI is progressing rapidly in various interdisciplinary fields, including ophthalmology. Ophthalmology is at the forefront of AI in medicine because the diagnosis of ocular diseases relies heavily on imaging. Recently, deep learning-based AI screening and prediction models have been applied to the most common causes of visual impairment and blindness, including glaucoma, cataract, age-related macular degeneration (ARMD), and diabetic retinopathy (DR). The success of AI in medicine is primarily attributed to the development of deep learning algorithms, which are computational models composed of multiple layers of simulated neurons. These models can learn representations of data at multiple levels of abstraction. The Inception-v3 algorithm and the transfer learning concept have been applied in DR and ARMD to reuse fundus image features learned from natural images (non-medical images) to train an AI system with a fraction of the commonly used training data (<1%). The trained AI system achieved performance comparable to that of human experts in classifying ARMD and diabetic macular edema on optical coherence tomography images. In this study, we highlight the fundamental concepts of AI and its application in these four major ocular diseases and further discuss the current challenges, as well as the prospects, in ophthalmology.
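The transfer learning idea described here — reuse a frozen feature extractor trained elsewhere and fit only a small head on limited labels — can be illustrated schematically. Everything below is synthetic: the "pretrained" weights are random stand-ins, not an actual Inception-v3 model:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Pretrained" feature extractor: in real transfer learning these weights
# come from a network trained on natural images and are kept frozen.
W_frozen = rng.normal(size=(8, 4))
def features(x):
    return np.tanh(x @ W_frozen)

# Tiny labeled dataset (synthetic stand-in for scarce medical labels).
X = rng.normal(size=(40, 8))
y = (X[:, 0] > 0).astype(float)

# Train only a small logistic-regression head on the frozen features.
w = np.zeros(4); b = 0.0
def loss(w, b):
    p = 1 / (1 + np.exp(-(features(X) @ w + b)))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

l0 = loss(w, b)
for _ in range(200):                         # plain gradient descent
    p = 1 / (1 + np.exp(-(features(X) @ w + b)))
    g = features(X).T @ (p - y) / len(y)
    w -= 0.5 * g
    b -= 0.5 * np.mean(p - y)
print(loss(w, b) < l0)   # True: training the head alone reduces the loss
```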
Affiliation(s)
- Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
| | - Xiaosi Chen
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Tingyao Li
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
| | - Tianxing Ma
- Chongqing University-University of Cincinnati Joint Co-op Institute, Chongqing University, Chongqing, China
| | - Yang Yang
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Lei Bi
- School of Computer Science, University of Sydney, Sydney, NSW, Australia
| | - Xinyuan Zhang
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| |
|
37
|
|
38
|
He X, Ren P, Lu L, Tang X, Wang J, Yang Z, Han W. Development of a deep learning algorithm for myopic maculopathy classification based on OCT images using transfer learning. Front Public Health 2022; 10:1005700. [PMID: 36211704 PMCID: PMC9532624 DOI: 10.3389/fpubh.2022.1005700] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Accepted: 08/29/2022] [Indexed: 01/27/2023] Open
Abstract
Purpose To apply deep learning (DL) techniques to develop an automatic intelligent classification system that identifies the specific types of myopic maculopathy (MM) on macular optical coherence tomography (OCT) images using transfer learning (TL). Method In this retrospective study, a total of 3,945 macular OCT images from 2,866 myopic patients were collected from the ophthalmic outpatient clinics of three hospitals. After excluding 545 images with poor quality, a dataset containing 3,400 macular OCT images was manually classified according to the ATN system, covering four types of MM with high OCT diagnostic value. Two DL classification algorithms were trained to identify the targeted lesion categories: algorithm A was trained from scratch, and algorithm B using the TL approach initiated from the classification algorithm developed in our previous study. After comparing the training processes, the algorithm with better performance was tested and validated. The performance of the classification algorithm in the test and validation sets was evaluated using metrics including sensitivity, specificity, accuracy, quadratic-weighted kappa score, and the area under the receiver operating characteristic curve (AUC). Moreover, a human-machine comparison was conducted. To better evaluate the algorithm and clarify the direction for optimization, dimensionality reduction analysis and heat map analysis were also used to visually analyze the algorithm. Results Algorithm B showed better performance during training. In the test set, algorithm B achieved relatively robust performance with macro AUC, accuracy, and quadratic-weighted kappa of 0.986, 96.04% (95% CI: 0.951, 0.969), and 0.940 (95% CI: 0.909-0.971), respectively. In the external validation set, the performance of algorithm B was slightly inferior to that in the test set. 
In the human-machine comparison test, the algorithm's metrics were inferior to those of the retinal specialists but comparable to those of the ordinary ophthalmologists. In addition, dimensionality reduction and heatmap visualization analyses showed excellent performance of the algorithm. Conclusion Our macular OCT image classification algorithm, developed using the TL approach, exhibited excellent performance. The automatic DL-based diagnosis system for macular OCT images of MM showed potential for clinical application.
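The quadratic-weighted kappa reported above penalizes disagreements by the squared distance between ordinal grades, so near-misses cost less than gross errors. A self-contained sketch with toy grades (not study data):

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa with quadratic weights: 1 for perfect agreement,
    0 for chance-level, negative for worse than chance."""
    O = np.zeros((n_classes, n_classes))           # observed confusion matrix
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    i, j = np.indices((n_classes, n_classes))
    W = (i - j) ** 2 / (n_classes - 1) ** 2        # quadratic penalty
    hist_t, hist_p = O.sum(axis=1), O.sum(axis=0)
    E = np.outer(hist_t, hist_p) / O.sum()         # chance-agreement table
    return 1 - (W * O).sum() / (W * E).sum()

y_true = [0, 1, 2, 3, 2, 1]
y_pred = [0, 1, 2, 3, 2, 1]
print(quadratic_weighted_kappa(y_true, y_pred, 4))   # 1.0 (perfect agreement)
```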
Affiliation(s)
- Xiaoying He
- Department of Ophthalmology, Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
| | - Peifang Ren
- Department of Ophthalmology, The First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
| | - Li Lu
- Department of Ophthalmology, The First Affiliated Hospital of University of Science and Technology of China, Hefei, Anhui, China
| | - Xuyuan Tang
- Department of Ophthalmology, The First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
| | - Jun Wang
- Department of Ophthalmology, Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
| | - Zixuan Yang
- Department of Ophthalmology, Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
| | - Wei Han
- Department of Ophthalmology, Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
| |
|
39
|
González-Gonzalo C, Thee EF, Klaver CCW, Lee AY, Schlingemann RO, Tufail A, Verbraak F, Sánchez CI. Trustworthy AI: Closing the gap between development and integration of AI systems in ophthalmic practice. Prog Retin Eye Res 2022; 90:101034. [PMID: 34902546 DOI: 10.1016/j.preteyeres.2021.101034] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2021] [Revised: 12/03/2021] [Accepted: 12/06/2021] [Indexed: 01/14/2023]
Abstract
An increasing number of artificial intelligence (AI) systems are being proposed in ophthalmology, motivated by the variety and amount of clinical and imaging data, as well as their potential benefits at the different stages of patient care. Despite achieving performance close or even superior to that of experts, there is a critical gap between development and integration of AI systems in ophthalmic practice. This work focuses on the importance of trustworthy AI to close that gap. We identify the main aspects or challenges that need to be considered along the AI design pipeline so as to generate systems that meet the requirements to be deemed trustworthy, including those concerning accuracy, resiliency, reliability, safety, and accountability. We elaborate on mechanisms and considerations to address those aspects or challenges, and define the roles and responsibilities of the different stakeholders involved in AI for ophthalmic care, i.e., AI developers, reading centers, healthcare providers, healthcare institutions, ophthalmological societies and working groups or committees, patients, regulatory bodies, and payers. Generating trustworthy AI is not the responsibility of a sole stakeholder. There is a pressing need for a collaborative approach in which the different stakeholders are represented along the AI design pipeline, from the definition of the intended use to post-market surveillance after regulatory approval. This work contributes to establishing such multi-stakeholder interaction and the main action points to be taken so that the potential benefits of AI reach real-world ophthalmic settings.
Affiliation(s)
- Cristina González-Gonzalo
- Eye Lab, qurAI Group, Informatics Institute, University of Amsterdam, Amsterdam, the Netherlands; Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands.
| | - Eric F Thee
- Department of Ophthalmology, Erasmus Medical Center, Rotterdam, the Netherlands; Department of Epidemiology, Erasmus Medical Center, Rotterdam, the Netherlands
| | - Caroline C W Klaver
- Department of Ophthalmology, Erasmus Medical Center, Rotterdam, the Netherlands; Department of Epidemiology, Erasmus Medical Center, Rotterdam, the Netherlands; Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands; Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
| | - Aaron Y Lee
- Department of Ophthalmology, School of Medicine, University of Washington, Seattle, WA, USA
| | - Reinier O Schlingemann
- Department of Ophthalmology, Amsterdam University Medical Center, Amsterdam, the Netherlands; Department of Ophthalmology, University of Lausanne, Jules Gonin Eye Hospital, Fondation Asile des Aveugles, Lausanne, Switzerland
| | - Adnan Tufail
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom; Institute of Ophthalmology, University College London, London, United Kingdom
| | - Frank Verbraak
- Department of Ophthalmology, Amsterdam University Medical Center, Amsterdam, the Netherlands
| | - Clara I Sánchez
- Eye Lab, qurAI Group, Informatics Institute, University of Amsterdam, Amsterdam, the Netherlands; Department of Biomedical Engineering and Physics, Amsterdam University Medical Center, Amsterdam, the Netherlands
| |
|
40
|
Young LH, Kim J, Yakin M, Lin H, Dao DT, Kodati S, Sharma S, Lee AY, Lee CS, Sen HN. Automated Detection of Vascular Leakage in Fluorescein Angiography - A Proof of Concept. Transl Vis Sci Technol 2022; 11:19. [PMID: 35877095 PMCID: PMC9339697 DOI: 10.1167/tvst.11.7.19] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Purpose The purpose of this paper was to develop a deep learning algorithm to detect retinal vascular leakage (leakage) in fluorescein angiography (FA) of patients with uveitis and use the trained algorithm to determine clinically notable leakage changes. Methods An algorithm was trained and tested to detect leakage on a set of 200 FA images (61 patients) and evaluated on a separate 50-image test set (21 patients). The ground truth was leakage segmentation by two clinicians. The Dice Similarity Coefficient (DSC) was used to measure concordance. Results During training, the algorithm achieved a best average DSC of 0.572 (95% confidence interval [CI] = 0.548–0.596). The trained algorithm achieved a DSC of 0.563 (95% CI = 0.543–0.582) when tested on an additional set of 50 images. The trained algorithm was then used to detect leakage on pairs of FA images from longitudinal patient visits. Longitudinal leakage follow-up showed that a >2.21% change in the visible retinal area covered by leakage (as detected by the algorithm) had a sensitivity and specificity of 90% (area under the curve [AUC] = 0.95) for detecting a clinically notable change compared to the gold standard, an expert clinician's assessment. Conclusions This deep learning algorithm showed modest concordance in identifying vascular leakage compared to ground truth but was able to aid in identifying vascular FA leakage changes over time. Translational Relevance This is a proof-of-concept study showing that vascular leakage can be detected in a more standardized way and that tools can be developed to help clinicians more objectively compare vascular leakage between FAs.
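The Dice Similarity Coefficient used here to score segmentation concordance is simply twice the mask overlap divided by the total mask area. A minimal sketch with toy binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|); 1.0 means identical masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2 * inter / denom if denom else 1.0

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
truth = np.array([[1, 0, 0],
                  [0, 1, 1]])
print(dice(pred, truth))   # 2*2/(3+3) ≈ 0.667
```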
Affiliation(s)
- LeAnne H Young
- National Eye Institute, Bethesda, MD, USA
- Cleveland Clinic Lerner College of Medicine, Cleveland, OH, USA
| | - Jongwoo Kim
- National Library of Medicine, Bethesda, MD, USA
| | | | - Henry Lin
- National Eye Institute, Bethesda, MD, USA
| | | | | | - Sumit Sharma
- Cole Eye Institute, Cleveland Clinic, Cleveland, OH, USA
| | | | | | - H Nida Sen
- National Eye Institute, Bethesda, MD, USA
| |
|
41
|
Wang YZ, Birch DG. Performance of Deep Learning Models in Automatic Measurement of Ellipsoid Zone Area on Baseline Optical Coherence Tomography (OCT) Images From the Rate of Progression of USH2A-Related Retinal Degeneration (RUSH2A) Study. Front Med (Lausanne) 2022; 9:932498. [PMID: 35865175 PMCID: PMC9294240 DOI: 10.3389/fmed.2022.932498] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Accepted: 06/14/2022] [Indexed: 11/13/2022] Open
Abstract
Purpose Previously, we have shown the capability of a hybrid deep learning (DL) model that combines a U-Net and a sliding-window (SW) convolutional neural network (CNN) for automatic segmentation of retinal layers from OCT scan images in retinitis pigmentosa (RP). We found that one of the shortcomings of the hybrid model is that it tends to underestimate ellipsoid zone (EZ) width or area, especially when EZ extends toward or beyond the edge of the macula. In this study, we trained the model with additional data which included more OCT scans having extended EZ. We evaluated its performance in automatic measurement of EZ area on SD-OCT volume scans obtained from the participants of the RUSH2A natural history study by comparing the model's performance to the reading center's manual grading. Materials and Methods De-identified Spectralis high-resolution 9-mm 121-line macular volume scans as well as their EZ area measurements by a reading center were transferred from the management center of the RUSH2A study under the data transfer and processing agreement. A total of 86 baseline volume scans from 86 participants of the RUSH2A study were included to evaluate two hybrid models: the original RP240 model trained on 480 mid-line B-scans from 220 patients with retinitis pigmentosa (RP) and 20 participants with normal vision from a single site, and the new RP340 model trained on a revised RP340 dataset which included the RP240 dataset plus an additional 200 mid-line B-scans from another 100 patients with RP. There was no overlap of patients between training and evaluation datasets. EZ and apical RPE in each B-scan image were automatically segmented by the hybrid model. EZ areas were determined by interpolating the discrete 2-dimensional B-scan EZ-RPE layer over the scan area. 
Dice similarity, correlation, linear regression, and Bland-Altman analyses were conducted to assess the agreement between the EZ areas measured by the hybrid model and by the reading center. Results For EZ area > 1 mm2, average Dice coefficients ± SD between the EZ band segmentations determined by the DL model and the manual grading were 0.835 ± 0.132 and 0.867 ± 0.105 for the RP240 and RP340 hybrid models, respectively (p < 0.0005; n = 51). When compared to the manual grading, correlation coefficients (95% CI) were 0.991 (0.987–0.994) and 0.994 (0.991–0.996) for the RP240 and RP340 hybrid models, respectively. Linear regression slopes (95% CI) were 0.918 (0.896–0.940) and 0.995 (0.975–1.014), respectively. Bland-Altman analysis revealed a mean difference ± SD of -0.137 ± 1.131 mm2 and 0.082 ± 0.825 mm2, respectively. Conclusion Additional training data improved the hybrid model's performance, especially reducing the bias and narrowing the range of the 95% limit of agreement when compared to manual grading. The close agreement of the DL models with manual grading suggests that DL may provide effective tools to significantly reduce the burden on reading centers in analyzing OCT scan images. In addition to EZ area, our DL models can also provide measurements of photoreceptor outer segment volume and thickness to further help assess disease progression and to facilitate the study of structure-function relationships in RP.
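The Bland-Altman analysis reported above summarizes agreement between two measurement methods as the mean difference (bias) plus 95% limits of agreement. A sketch with hypothetical paired measurements (not the study's data):

```python
import numpy as np

def bland_altman(x, y):
    """Bias (mean difference) and 95% limits of agreement
    between two paired measurement methods."""
    d = np.asarray(x, float) - np.asarray(y, float)
    bias = d.mean()
    sd = d.std(ddof=1)                     # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

model = np.array([1.0, 2.1, 3.0, 4.2, 5.1])    # e.g. EZ areas, mm^2 (toy)
grader = np.array([1.1, 2.0, 3.1, 4.0, 5.0])   # paired manual gradings (toy)
bias, (lo, hi) = bland_altman(model, grader)
print(round(bias, 3))   # 0.04
```

A narrower (lo, hi) interval is what the abstract means by a "narrower 95% limit of agreement": the two methods disagree less across the range of measurements.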
Affiliation(s)
- Yi-Zhong Wang
- Retina Foundation of the Southwest, Dallas, TX, United States
- Department of Ophthalmology, University of Texas Southwestern Medical Center, Dallas, TX, United States
- *Correspondence: Yi-Zhong Wang,
- David G Birch
- Retina Foundation of the Southwest, Dallas, TX, United States
- Department of Ophthalmology, University of Texas Southwestern Medical Center, Dallas, TX, United States
42
Yaghy A, Lee AY, Keane PA, Keenan TDL, Mendonca LSM, Lee CS, Cairns AM, Carroll J, Chen H, Clark J, Cukras CA, de Sisternes L, Domalpally A, Durbin MK, Goetz KE, Grassmann F, Haines JL, Honda N, Hu ZJ, Mody C, Orozco LD, Owsley C, Poor S, Reisman C, Ribeiro R, Sadda SR, Sivaprasad S, Staurenghi G, Ting DS, Tumminia SJ, Zalunardo L, Waheed NK. Artificial intelligence-based strategies to identify patient populations and advance analysis in age-related macular degeneration clinical trials. Exp Eye Res 2022; 220:109092. [PMID: 35525297 PMCID: PMC9405680 DOI: 10.1016/j.exer.2022.109092] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2022] [Revised: 03/18/2022] [Accepted: 04/20/2022] [Indexed: 11/04/2022]
Affiliation(s)
- Antonio Yaghy
- New England Eye Center, Tufts University Medical Center, Boston, MA, USA
- Aaron Y Lee
- Department of Ophthalmology, University of Washington, Seattle, WA, USA; Karalis Johnson Retina Center, Seattle, WA, USA
- Pearse A Keane
- Moorfields Eye Hospital & UCL Institute of Ophthalmology, London, UK
- Tiarnan D L Keenan
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Cecilia S Lee
- Department of Ophthalmology, University of Washington, Seattle, WA, USA; Karalis Johnson Retina Center, Seattle, WA, USA
- Joseph Carroll
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, 925 N 87th Street, Milwaukee, WI, 53226, USA
- Hao Chen
- Genentech, South San Francisco, CA, USA
- Catherine A Cukras
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Amitha Domalpally
- Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, WI, USA
- Kerry E Goetz
- Office of the Director, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Jonathan L Haines
- Department of Population and Quantitative Health Sciences, Case Western Reserve University School of Medicine, Cleveland, OH, USA; Cleveland Institute of Computational Biology, Case Western Reserve University School of Medicine, Cleveland, OH, USA
- Zhihong Jewel Hu
- Doheny Eye Institute, University of California, Los Angeles, CA, USA
- Luz D Orozco
- Department of Bioinformatics, Genentech, South San Francisco, CA, 94080, USA
- Cynthia Owsley
- Department of Ophthalmology and Visual Sciences, Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, AL, USA
- Stephen Poor
- Department of Ophthalmology, Novartis Institutes for Biomedical Research, Cambridge, MA, USA
- Srinivas R Sadda
- Doheny Eye Institute, David Geffen School of Medicine, University of California-Los Angeles, Los Angeles, CA, USA
- Sobha Sivaprasad
- NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital, London, UK
- Giovanni Staurenghi
- Department of Biomedical and Clinical Sciences Luigi Sacco, Luigi Sacco Hospital, University of Milan, Italy
- Daniel S W Ting
- Singapore Eye Research Institute, Singapore National Eye Center, Duke-NUS Medical School, National University of Singapore, Singapore
- Santa J Tumminia
- Office of the Director, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Nadia K Waheed
- New England Eye Center, Tufts University Medical Center, Boston, MA, USA
43
Tang W, Ye Y, Chen X, Shi F, Xiang D, Chen Z, Zhu W. Multi-class retinal fluid joint segmentation based on cascaded convolutional neural networks. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac7378] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2022] [Accepted: 05/25/2022] [Indexed: 11/12/2022]
Abstract
Objective. Retinal fluid mainly includes intra-retinal fluid (IRF), sub-retinal fluid (SRF) and pigment epithelial detachment (PED), whose accurate segmentation in optical coherence tomography (OCT) images is of great importance to the diagnosis and treatment of the related fundus diseases. Approach. In this paper, a novel two-stage multi-class retinal fluid joint segmentation framework based on cascaded convolutional neural networks is proposed. In the pre-segmentation stage, a U-shape encoder–decoder network is adopted to acquire the retinal mask and generate a retinal relative distance map, which provides spatial prior information for the subsequent fluid segmentation. In the fluid segmentation stage, an improved context attention and fusion network (ICAF-Net), based on a context shrinkage encode module and a multi-scale, multi-category semantic supervision module, is proposed to jointly segment IRF, SRF and PED. Main results. The proposed segmentation framework was evaluated on the dataset of the RETOUCH challenge. The average Dice similarity coefficient, intersection over union and accuracy reached 76.39%, 64.03% and 99.32%, respectively. Significance. The proposed framework achieves good performance in the joint segmentation of multi-class fluid in retinal OCT images and outperforms some state-of-the-art segmentation networks.
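A "retinal relative distance map" of the kind described above can be built per A-scan column from a binary retina mask, encoding each pixel's normalized depth between the inner and outer retinal boundaries. The construction below is a minimal numpy sketch under that assumption, not the paper's implementation:

```python
import numpy as np

def relative_distance_map(retina_mask: np.ndarray) -> np.ndarray:
    """Per-column normalized depth within the retina: 0 at the inner
    boundary, 1 at the outer boundary, 0 outside the mask."""
    h, w = retina_mask.shape
    out = np.zeros((h, w), dtype=float)
    for col in range(w):
        rows = np.flatnonzero(retina_mask[:, col])
        if rows.size < 2:  # column with no (or degenerate) retina: leave as 0
            continue
        top, bottom = rows[0], rows[-1]
        depth = (np.arange(top, bottom + 1) - top) / (bottom - top)
        out[top:bottom + 1, col] = depth
    return out
```

Fed to the fluid-segmentation stage as an extra input channel, such a map tells the network roughly where a pixel sits between the ILM and RPE, which is a strong prior since IRF, SRF and PED occur at characteristic depths.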
44
Hui VWK, Szeto SKH, Tang F, Yang D, Chen H, Lai TYY, Rong A, Zhang S, Zhao P, Ruamviboonsuk P, Lai CC, Chang A, Das T, Ohji M, Huang SS, Sivaprasad S, Wong TY, Lam DSC, Cheung CY. Optical Coherence Tomography Classification Systems for Diabetic Macular Edema and Their Associations With Visual Outcome and Treatment Responses - An Updated Review. Asia Pac J Ophthalmol (Phila) 2022; 11:247-257. [PMID: 34923521 DOI: 10.1097/apo.0000000000000468] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
Optical coherence tomography (OCT) is an invaluable imaging tool for detecting and assessing diabetic macular edema (DME). Over the past decade, different OCT-based classification systems for DME have been proposed. In this review, we present an update of spectral-domain OCT (SD-OCT)-based DME classifications over the past 5 years. In addition, we summarize the proposed OCT qualitative and quantitative parameters from different classification systems in relation to disease severity, risk of progression, and treatment outcome. Although some OCT-based measurements have prognostic value for visual outcome, there is no consensus or guideline on which parameters can be reliably used to predict treatment outcomes. We also summarize recent literature on the prognostic value of these parameters, including quantitative measures such as macular thickness or volume, central subfield thickness or foveal thickness, and qualitative features such as the morphology of the vitreoretinal interface, disorganization of the retinal inner layers, ellipsoid zone disruption, and hyperreflective foci. In addition, we discuss why a framework to assess the validity of biomarkers for treatment outcome is essential when assessing prognosis before deciding on treatment in DME. Finally, we echo other experts' call to update the current diabetic retinal disease classification.
Affiliation(s)
- Vivian W K Hui
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Hong Kong Eye Hospital, Hong Kong, China
- Simon K H Szeto
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Hong Kong Eye Hospital, Hong Kong, China
- Fangyao Tang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Dawei Yang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Haoyu Chen
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Timothy Y Y Lai
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- 2010 Retina & Macula Center, Kowloon, Hong Kong
- Ao Rong
- Department of Ophthalmology, Tongji Hospital Affiliated to Tongji University, Shanghai, China
- Shanghai Xin Shi Jie Eye Hospital, Shanghai, China
- Peiquan Zhao
- Department of Ophthalmology, Xin Hua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Paisan Ruamviboonsuk
- Department of Ophthalmology, College of Medicine, Rangsit University, Rajavithi Hospital, Bangkok, Thailand
- Chi-Chun Lai
- Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan, Taiwan
- Andrew Chang
- Sydney Retina Clinic, Sydney Eye Hospital, University of Sydney, Sydney, NSW, Australia
- Taraprasad Das
- Smt. Kanuri Santhamma Center for Vitreoretinal Diseases, Kallam Anji Reddy Campus, LV Prasad Eye Institute, Hyderabad, India
- Masahito Ohji
- Department of Ophthalmology, Shiga University of Medical Science, Otsu, Japan
- Suber S Huang
- Retina Center of Ohio, Cleveland, OH, USA
- Bascom Palmer Eye Institute, University of Miami, Miami, FL, USA
- Sobha Sivaprasad
- NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital, London, UK
- Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Duke-NUS Medical School, Singapore
- Dennis S C Lam
- C-MER International Eye Research Center of The Chinese University of Hong Kong (Shenzhen), Shenzhen, China
- C-MER Dennis Lam & Partners Eye Center, C-MER International Eye Care Group, Hong Kong, China
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
45
Hsu HY, Chou YB, Jheng YC, Kao ZK, Huang HY, Chen HR, Hwang DK, Chen SJ, Chiou SH, Wu YT. Automatic Segmentation of Retinal Fluid and Photoreceptor Layer from Optical Coherence Tomography Images of Diabetic Macular Edema Patients Using Deep Learning and Associations with Visual Acuity. Biomedicines 2022; 10:1269. [PMID: 35740291 PMCID: PMC9220118 DOI: 10.3390/biomedicines10061269] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2022] [Revised: 05/25/2022] [Accepted: 05/27/2022] [Indexed: 01/27/2023] Open
Abstract
Diabetic macular edema (DME) is a highly common cause of vision loss in patients with diabetes. Optical coherence tomography (OCT) is crucial in classifying DME and tracking the results of DME treatment. The presence of intraretinal cystoid fluid (IRC) and subretinal fluid (SRF) and the disruption of the ellipsoid zone (EZ), which is part of the photoreceptor layer, are three crucial factors affecting the best corrected visual acuity (BCVA). However, the manual segmentation of retinal fluid and the EZ from retinal OCT images is laborious and time-consuming. Current methods focus only on the segmentation of retinal features, lacking a correlation with visual acuity. Therefore, we proposed a modified U-net, a deep learning algorithm, to segment these features from OCT images of patients with DME, and correlated these features with visual acuity. The IRC, SRF, and EZ of the OCT retinal images were manually labeled and checked by doctors. We trained the modified U-net model on these labeled images. Our model achieved Sørensen-Dice coefficients of 0.80 and 0.89 for IRC and SRF, respectively. The area under the receiver operating characteristic (ROC) curve for EZ disruption was 0.88. Linear regression indicated that EZ disruption was the factor most strongly correlated with BCVA. This finding agrees with that of previous studies on OCT images. Thus, we demonstrate that our segmentation network can be feasibly applied to OCT image segmentation and assist physicians in assessing the severity of the disease.
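The ROC area reported for EZ disruption has a simple rank-based reading: it is the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A compact sketch of that Mann-Whitney formulation, with made-up toy scores:

```python
def roc_auc(pos_scores, neg_scores):
    """AUC via the Mann-Whitney statistic: fraction of (positive, negative)
    pairs where the positive outscores the negative (ties count half)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

print(roc_auc([0.8, 0.4], [0.4, 0.2]))  # 0.875
```

An AUC of 0.88, as reported above, therefore means the model ranks a disrupted-EZ image above an intact one about 88% of the time.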
Affiliation(s)
- Huan-Yu Hsu
- Institute of Biophotonics, National Yang Ming Chiao Tung University, 155, Sec-2, Li Nong Street, Taipei 112304, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei 112304, Taiwan
- Yu-Bai Chou
- School of Medicine, National Yang Ming Chiao Tung University, Taipei 112304, Taiwan
- Department of Ophthalmology, Taipei Veterans General Hospital, 201, Sec-2, Shipai Rd., Taipei 112201, Taiwan
- Ying-Chun Jheng
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Big Data Center, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Zih-Kai Kao
- Institute of Biophotonics, National Yang Ming Chiao Tung University, 155, Sec-2, Li Nong Street, Taipei 112304, Taiwan
- Hsin-Yi Huang
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Big Data Center, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Institute of Clinical Medicine, National Yang Ming Chiao Tung University, Taipei 112304, Taiwan
- Hung-Ruei Chen
- School of Medicine, National Yang Ming Chiao Tung University, Taipei 112304, Taiwan
- De-Kuang Hwang
- School of Medicine, National Yang Ming Chiao Tung University, Taipei 112304, Taiwan
- Department of Ophthalmology, Taipei Veterans General Hospital, 201, Sec-2, Shipai Rd., Taipei 112201, Taiwan
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Institute of Clinical Medicine, National Yang Ming Chiao Tung University, Taipei 112304, Taiwan
- Shih-Jen Chen
- School of Medicine, National Yang Ming Chiao Tung University, Taipei 112304, Taiwan
- Department of Ophthalmology, Taipei Veterans General Hospital, 201, Sec-2, Shipai Rd., Taipei 112201, Taiwan
- Shih-Hwa Chiou
- School of Medicine, National Yang Ming Chiao Tung University, Taipei 112304, Taiwan
- Department of Ophthalmology, Taipei Veterans General Hospital, 201, Sec-2, Shipai Rd., Taipei 112201, Taiwan
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Institute of Clinical Medicine, National Yang Ming Chiao Tung University, Taipei 112304, Taiwan
- Institute of Pharmacology, National Yang Ming Chiao Tung University, Taipei 112304, Taiwan
- Yu-Te Wu
- Institute of Biophotonics, National Yang Ming Chiao Tung University, 155, Sec-2, Li Nong Street, Taipei 112304, Taiwan
- Brain Research Center, National Yang Ming Chiao Tung University, Taipei 112304, Taiwan
46
Deep-Learning-Based Algorithm for the Removal of Electromagnetic Interference Noise in Photoacoustic Endoscopic Image Processing. SENSORS 2022; 22:s22103961. [PMID: 35632370 PMCID: PMC9147354 DOI: 10.3390/s22103961] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/21/2022] [Revised: 05/18/2022] [Accepted: 05/21/2022] [Indexed: 12/10/2022]
Abstract
Despite all the expectations for photoacoustic endoscopy (PAE), there are still several technical issues that must be resolved before the technique can be successfully translated into clinics. Among these, electromagnetic interference (EMI) noise, in addition to the limited signal-to-noise ratio (SNR), have hindered the rapid development of related technologies. Unlike endoscopic ultrasound, in which the SNR can be increased by simply applying a higher pulsing voltage, there is a fundamental limitation in leveraging the SNR of PAE signals because they are mostly determined by the optical pulse energy applied, which must be within the safety limits. Moreover, a typical PAE hardware situation requires a wide separation between the ultrasonic sensor and the amplifier, meaning that it is not easy to build an ideal PAE system that would be unaffected by EMI noise. With the intention of expediting the progress of related research, in this study, we investigated the feasibility of deep-learning-based EMI noise removal involved in PAE image processing. In particular, we selected four fully convolutional neural network architectures, U-Net, Segnet, FCN-16s, and FCN-8s, and observed that a modified U-Net architecture outperformed the other architectures in the EMI noise removal. Classical filter methods were also compared to confirm the superiority of the deep-learning-based approach. Still, it was by the U-Net architecture that we were able to successfully produce a denoised 3D vasculature map that could even depict the mesh-like capillary networks distributed in the wall of a rat colorectum. 
As the development of a low-cost laser diode or LED-based photoacoustic tomography (PAT) system is now emerging as one of the important topics in PAT, we expect that the presented AI strategy for the removal of EMI noise could be broadly applicable to many areas of PAT, in which the ability to apply a hardware-based prevention method is limited and thus EMI noise appears more prominently due to poor SNR.
47
Bareja R, Mojahed D, Hibshoosh H, Hendon C. Classifying breast cancer in ultrahigh-resolution optical coherence tomography images using convolutional neural networks. APPLIED OPTICS 2022; 61:4458-4462. [PMID: 36256284 DOI: 10.1364/ao.455626] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/09/2022] [Accepted: 04/29/2022] [Indexed: 06/16/2023]
Abstract
Optical coherence tomography (OCT) is being investigated in breast cancer diagnostics as a real-time histology evaluation tool. We present a customized deep convolutional neural network (CNN) for classification of breast tissues in OCT B-scans. Images of human breast samples from mastectomies and breast reductions were acquired using a custom ultrahigh-resolution OCT system with 2.72 µm axial resolution and 5.52 µm lateral resolution. The network achieved 96.7% accuracy, 92% sensitivity, and 99.7% specificity on a dataset of 23 patients. The usage of deep learning will be important for the practical integration of OCT into clinical practice.
48
Kaskar OG, Wells-Gray E, Fleischman D, Grace L. Evaluating machine learning classifiers for glaucoma referral decision support in primary care settings. Sci Rep 2022; 12:8518. [PMID: 35595794 PMCID: PMC9122936 DOI: 10.1038/s41598-022-12270-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2021] [Accepted: 04/18/2022] [Indexed: 11/09/2022] Open
Abstract
Several artificial intelligence algorithms have been proposed to help diagnose glaucoma by analyzing the functional and/or structural changes in the eye. These algorithms require carefully curated datasets with access to ocular images. In the current study, we have modeled and evaluated classifiers to predict self-reported glaucoma using a single, easily obtained ocular feature (intraocular pressure (IOP)) and non-ocular features (age, gender, race, body mass index, systolic and diastolic blood pressure, and comorbidities). The classifiers were trained on publicly available data of 3015 subjects without a glaucoma diagnosis at the time of enrollment. 337 subjects subsequently self-reported a glaucoma diagnosis in a span of 1–12 years after enrollment. The classifiers were evaluated on the ability to identify these subjects by only using their features recorded at the time of enrollment. Support vector machine, logistic regression, and adaptive boosting performed similarly on the dataset with F1 scores of 0.31, 0.30, and 0.28, respectively. Logistic regression had the highest sensitivity at 60% with a specificity of 69%. Predictive classifiers using primarily non-ocular features have the potential to be used for identifying suspected glaucoma in non-eye care settings, including primary care. Further research into finding additional features that improve the performance of predictive classifiers is warranted.
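The F1, sensitivity, and specificity figures above all derive from a 2x2 confusion matrix; a small helper makes the relationships explicit (the counts below are hypothetical, chosen only to roughly match the reported 60% sensitivity and 69% specificity):

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard binary-classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # recall / true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "f1": f1}

# Hypothetical screen: 60 of 100 future glaucoma cases flagged,
# 31 false alarms among 100 controls
m = classification_metrics(tp=60, fp=31, fn=40, tn=69)
print(round(m["sensitivity"], 2), round(m["specificity"], 2))  # 0.6 0.69
```

Because glaucoma is rare in primary-care populations, precision (and hence F1) stays low even at reasonable sensitivity/specificity, which is consistent with the modest F1 scores of around 0.3 reported above.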
Affiliation(s)
- Omkar G Kaskar
- North Carolina State University, Raleigh, NC, 27695, USA
- David Fleischman
- University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Landon Grace
- North Carolina State University, Raleigh, NC, 27695, USA
49
López-Varela E, Vidal PL, Pascual NO, Novo J, Ortega M. Fully-Automatic 3D Intuitive Visualization of Age-Related Macular Degeneration Fluid Accumulations in OCT Cubes. J Digit Imaging 2022; 35:1271-1282. [PMID: 35513586 PMCID: PMC9582110 DOI: 10.1007/s10278-022-00643-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2021] [Revised: 04/06/2022] [Accepted: 04/13/2022] [Indexed: 11/16/2022] Open
Abstract
Age-related macular degeneration (AMD) is the leading cause of vision loss in developed countries, and wet-type AMD requires prompt diagnosis and urgent treatment because it causes rapid, irreversible vision loss. Currently, AMD diagnosis is mainly carried out using images obtained by optical coherence tomography. This diagnostic process is performed by human clinicians, so human error may occur in some cases; fully automatic methodologies are therefore highly desirable, adding a layer of robustness to the diagnosis. In this work, a novel computer-aided diagnosis and visualization methodology is proposed for the rapid identification and visualization of wet AMD. We adapted a convolutional neural network trained for segmentation in a similar domain of medical images to the problem of wet AMD segmentation, taking advantage of transfer learning, which allows us to work with a reduced number of samples. We generate a 3D visualization in which the existence, position and severity of the fluid are represented in a clear and intuitive way to facilitate the clinicians' analysis. The 3D visualization is robust and accurate, obtaining satisfactory Dice coefficients of 0.949 and 0.960 in the different evaluated OCT cube configurations, allowing clinicians to quickly assess the presence and extent of the fluid associated with wet AMD.
Affiliation(s)
- Emilio López-Varela
- Grupo VARPA, Instituto de investigación Biomédica de A Coruña (INIBIC), Xubias de Arriba, 84, A Coruña, 15006 Spain
- Centro de investigación CITIC, Universidade da Coruña, Campus de Elviña, s/n, A Coruña, 15071 Spain
- Plácido L. Vidal
- Grupo VARPA, Instituto de investigación Biomédica de A Coruña (INIBIC), Xubias de Arriba, 84, A Coruña, 15006 Spain
- Centro de investigación CITIC, Universidade da Coruña, Campus de Elviña, s/n, A Coruña, 15071 Spain
- Nuria Olivier Pascual
- Servizo de Oftalmoloxía, Complexo Hospitalario Universitario de Ferrol, CHUF, Av. da Residencia, S/N, Ferrol, 15405 Spain
- Jorge Novo
- Grupo VARPA, Instituto de investigación Biomédica de A Coruña (INIBIC), Xubias de Arriba, 84, A Coruña, 15006 Spain
- Centro de investigación CITIC, Universidade da Coruña, Campus de Elviña, s/n, A Coruña, 15071 Spain
- Marcos Ortega
- Grupo VARPA, Instituto de investigación Biomédica de A Coruña (INIBIC), Xubias de Arriba, 84, A Coruña, 15006 Spain
- Centro de investigación CITIC, Universidade da Coruña, Campus de Elviña, s/n, A Coruña, 15071 Spain
50
Sotoudeh-Paima S, Jodeiri A, Hajizadeh F, Soltanian-Zadeh H. Multi-scale convolutional neural network for automated AMD classification using retinal OCT images. Comput Biol Med 2022; 144:105368. [DOI: 10.1016/j.compbiomed.2022.105368] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2021] [Revised: 02/28/2022] [Accepted: 02/28/2022] [Indexed: 11/29/2022]