1
Nouri H, Abtahi SH, Mazloumi M, Samadikhadem S, Arevalo JF, Ahmadieh H. Optical coherence tomography angiography in diabetic retinopathy: A major review. Surv Ophthalmol 2024;69:558-574. [PMID: 38521424] [DOI: 10.1016/j.survophthal.2024.03.004]
Abstract
Diabetic retinopathy (DR) is characterized by retinal vasculopathy and is a leading cause of visual impairment. Optical coherence tomography angiography (OCTA) is an innovative imaging technology that can detect various pathologies and quantifiable changes in the retinal microvasculature. We briefly describe its functional principles and advantages over fluorescein angiography and comprehensively review its clinical applications in the screening and management of people with prediabetes, diabetes without clinical retinopathy (NDR), nonproliferative DR (NPDR), proliferative DR (PDR), and diabetic macular edema (DME). OCTA reveals early microvascular alterations in prediabetic and NDR eyes, which may coexist with subclinical neuroretinal dysfunction. Its applications in NPDR include measuring ischemia, detecting retinal neovascularization, and guiding the timing of early treatment by predicting the risk of retinopathy worsening or DME development. In PDR, OCTA helps characterize the flow within neovascular complexes and evaluate their progression or regression in response to treatment. In eyes with DME, OCTA perfusion parameters may predict the visual and anatomical gains associated with treatment. We further discuss the limitations of OCTA and the benefits of incorporating it into an updated DR severity scale.
Affiliation(s)
- Hosein Nouri
- Ophthalmic Research Center, Research Institute for Ophthalmology and Vision Science, Shahid Beheshti University of Medical Sciences, Tehran, Iran; School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Seyed-Hossein Abtahi
- Ophthalmic Research Center, Research Institute for Ophthalmology and Vision Science, Shahid Beheshti University of Medical Sciences, Tehran, Iran; Department of Ophthalmology, Labbafinejad Medical Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mehdi Mazloumi
- Eye Research Center, Rasoul Akram Hospital, Iran University of Medical Sciences, Tehran, Iran
- Sanam Samadikhadem
- Department of Ophthalmology, Imam Hossein Medical Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- J Fernando Arevalo
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Hamid Ahmadieh
- Ophthalmic Research Center, Research Institute for Ophthalmology and Vision Science, Shahid Beheshti University of Medical Sciences, Tehran, Iran
2
Ebrahimi B, Le D, Abtahi M, Dadzie AK, Rossi A, Rahimi M, Son T, Ostmo S, Campbell JP, Paul Chan RV, Yao X. Assessing spectral effectiveness in color fundus photography for deep learning classification of retinopathy of prematurity. J Biomed Opt 2024;29:076001. [PMID: 38912212] [PMCID: PMC11188587] [DOI: 10.1117/1.jbo.29.7.076001]
Abstract
Significance: Retinopathy of prematurity (ROP) poses a significant global threat to childhood vision, necessitating effective screening strategies. This study addresses the impact of color channels in fundus imaging on ROP diagnosis, emphasizing the efficacy and safety of utilizing longer wavelengths, such as red or green, for enhanced depth information and improved diagnostic capability. Aim: To assess the spectral effectiveness of color fundus photography for deep learning classification of ROP. Approach: An end-to-end convolutional neural network classifier was used to classify normal, stage 1, stage 2, and stage 3 ROP fundus images. Classification performance with individual-color-channel inputs, i.e., red, green, and blue, and with multi-color-channel fusion architectures, including early fusion, intermediate fusion, and late fusion, was quantitatively compared. Results: Among individual-color-channel inputs, the green channel (88.00% accuracy, 76.00% sensitivity, 92.00% specificity) and red channel (87.25% accuracy, 74.50% sensitivity, 91.50% specificity) performed similarly, and both substantially outperformed the blue channel (78.25% accuracy, 56.50% sensitivity, 85.50% specificity). Among the fusion options, the early-fusion and intermediate-fusion architectures performed almost identically to the green/red channel inputs, and both outperformed the late-fusion architecture. Conclusions: ROP stages can be classified effectively using either the green or the red image alone. This finding enables the exclusion of blue images, which are acknowledged for their increased susceptibility to light toxicity.
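The early/late fusion distinction above can be illustrated at the data level. The sketch below is not the paper's CNN: `toy_score` is a hypothetical stand-in for a trained single-channel classifier, and intermediate fusion (merging feature maps inside the network) is deliberately omitted because it has no data-level analogue. Early fusion merges the colour channels before scoring; late fusion scores each channel separately and merges the outputs.

```python
import numpy as np

def split_channels(rgb):
    """Split an H x W x 3 fundus image into red, green, and blue planes."""
    return rgb[..., 0], rgb[..., 1], rgb[..., 2]

def toy_score(plane):
    """Stand-in for a single-channel classifier: mean intensity as a 'probability'."""
    return float(plane.mean()) / 255.0

def early_fusion_score(rgb):
    """Early fusion: channels are combined before the model sees them."""
    fused = rgb.astype(float).mean(axis=-1)  # one fused plane
    return toy_score(fused)

def late_fusion_score(rgb):
    """Late fusion: each channel is scored separately, then scores are merged."""
    r, g, b = split_channels(rgb.astype(float))
    return float(np.mean([toy_score(r), toy_score(g), toy_score(b)]))

img = np.full((8, 8, 3), 128, dtype=np.uint8)  # synthetic mid-gray "fundus image"
print(early_fusion_score(img), late_fusion_score(img))
```

For a real classifier the two strategies generally differ; on this constant synthetic image they coincide, which is the degenerate case that makes the sketch easy to check.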
Affiliation(s)
- Behrouz Ebrahimi
- University of Illinois, Chicago, Department of Biomedical Engineering, Chicago, Illinois, United States
- David Le
- University of Illinois, Chicago, Department of Biomedical Engineering, Chicago, Illinois, United States
- Mansour Abtahi
- University of Illinois, Chicago, Department of Biomedical Engineering, Chicago, Illinois, United States
- Albert K. Dadzie
- University of Illinois, Chicago, Department of Biomedical Engineering, Chicago, Illinois, United States
- Alfa Rossi
- University of Illinois, Chicago, Department of Biomedical Engineering, Chicago, Illinois, United States
- Mojtaba Rahimi
- University of Illinois, Chicago, Department of Biomedical Engineering, Chicago, Illinois, United States
- Taeyoon Son
- University of Illinois, Chicago, Department of Biomedical Engineering, Chicago, Illinois, United States
- Susan Ostmo
- Oregon Health and Science University, Casey Eye Institute, Department of Ophthalmology, Portland, Oregon, United States
- J. Peter Campbell
- Oregon Health and Science University, Casey Eye Institute, Department of Ophthalmology, Portland, Oregon, United States
- R. V. Paul Chan
- University of Illinois, Chicago, Department of Biomedical Engineering, Chicago, Illinois, United States
- University of Illinois Chicago, Department of Ophthalmology and Visual Sciences, Chicago, Illinois, United States
- Xincheng Yao
- University of Illinois, Chicago, Department of Biomedical Engineering, Chicago, Illinois, United States
- University of Illinois Chicago, Department of Ophthalmology and Visual Sciences, Chicago, Illinois, United States
3
Sinha S, Ramesh PV, Nishant P, Morya AK, Prasad R. Novel automated non-invasive detection of ocular surface squamous neoplasia using artificial intelligence. World J Methodol 2024;14:92267. [DOI: 10.5662/wjm.v14.i2.92267]
Abstract
Ocular surface squamous neoplasia (OSSN) is a common tumour of the ocular surface, characterized by the growth of abnormal cells. OSSN includes invasive squamous cell carcinoma (SCC), in which tumour cells penetrate the basement membrane and infiltrate the stroma, as well as non-invasive conjunctival intraepithelial neoplasia, dysplasia, and SCC in situ, thereby presenting a challenge in early detection and diagnosis. Early identification and precise demarcation of the OSSN border allow straightforward and curative treatments, such as topical medicines, whereas advanced invasive lesions may require orbital exenteration, which carries a risk of death. Artificial intelligence (AI) has emerged as a promising tool in the field of eye care and holds potential for application in OSSN management. AI algorithms trained on large datasets can analyze ocular surface images to identify suspicious lesions associated with OSSN, aiding ophthalmologists in early detection and diagnosis. AI can also track and monitor lesion progression over time, providing objective measurements to guide treatment decisions. Furthermore, AI can assist in treatment planning by offering personalized recommendations based on patient data and predicting the treatment response. This manuscript highlights the role of AI in OSSN, specifically focusing on its contributions to early detection and diagnosis, assessment of lesion progression, treatment planning, telemedicine and remote monitoring, and research and data analysis.
Affiliation(s)
- Sony Sinha
- Department of Ophthalmology–Vitreo Retina, Neuro Ophthalmology and Oculoplasty, All India Institute of Medical Sciences, Patna 801507, India
- Prateek Nishant
- Department of Ophthalmology, ESIC Medical College, Patna 801113, India
- Arvind Kumar Morya
- Department of Ophthalmology, All India Institute of Medical Sciences, Hyderabad 508126, India
- Ripunjay Prasad
- Department of Ophthalmology, RP Eye Institute, Delhi 110001, India
4
Abtahi M, Le D, Ebrahimi B, Dadzie AK, Rahimi M, Hsieh YT, Heiferman MJ, Lim JI, Yao X. Differential artery-vein analysis improves the OCTA classification of diabetic retinopathy. Biomed Opt Express 2024;15:3889-3899. [PMID: 38867785] [PMCID: PMC11166441] [DOI: 10.1364/boe.521657]
Abstract
This study investigates the impact of differential artery-vein (AV) analysis in optical coherence tomography angiography (OCTA) on machine learning classification of diabetic retinopathy (DR). Leveraging deep learning for arterial-venous area (AVA) segmentation, six quantitative features, namely perfusion intensity density (PID), blood vessel density (BVD), vessel area flux (VAF), blood vessel caliber (BVC), blood vessel tortuosity (BVT), and vessel perimeter index (VPI), were derived from OCTA images before and after AV differentiation. A support vector machine (SVM) classifier was used to assess both binary and multiclass classification of control, diabetic without DR (NoDR), mild DR, moderate DR, and severe DR groups. Initially, one-region features, i.e., quantitative features extracted from the entire OCTA image, were evaluated for DR classification. Differential AV analysis improved classification accuracies from 78.86% to 87.63% and from 79.62% to 85.66% for binary and multiclass classification, respectively. Additionally, three-region features, derived from the entire image, parafovea, and perifovea, were incorporated for DR classification. Differential AV analysis further enhanced classification accuracies from 84.43% to 93.33% and from 83.40% to 89.25% for binary and multiclass classification, respectively. These findings highlight the potential of differential AV analysis to augment disease diagnosis and treatment assessment using OCTA.
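Two of the quantitative features listed above can be computed directly from a binary vessel map. This is a minimal sketch, not the authors' implementation: the mask is synthetic, and the perimeter estimate uses a simple 4-neighbour boundary test rather than a calibrated contour length.

```python
import numpy as np

def blood_vessel_density(mask):
    """BVD: fraction of the image area occupied by vessel pixels (binary mask)."""
    return mask.sum() / mask.size

def vessel_perimeter_index(mask):
    """VPI (simplified): vessel boundary pixels per unit image area.
    A vessel pixel is on the boundary if any 4-neighbour is background."""
    padded = np.pad(mask, 1)
    up, down = padded[:-2, 1:-1], padded[2:, 1:-1]
    left, right = padded[1:-1, :-2], padded[1:-1, 2:]
    boundary = mask & ((up == 0) | (down == 0) | (left == 0) | (right == 0))
    return boundary.sum() / mask.size

# A 3-pixel-wide horizontal "vessel" crossing a 10 x 10 field of view
mask = np.zeros((10, 10), dtype=int)
mask[4:7, :] = 1
print(blood_vessel_density(mask))   # 30 vessel pixels out of 100
print(vessel_perimeter_index(mask))
```

In the study these features are computed separately for arteries and veins after AV segmentation; here a single undifferentiated mask stands in for both.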
Affiliation(s)
- Mansour Abtahi
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- David Le
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Behrouz Ebrahimi
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Albert K. Dadzie
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Mojtaba Rahimi
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Yi-Ting Hsieh
- Department of Ophthalmology, National Taiwan University Hospital, Taipei, Taiwan
- Michael J. Heiferman
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
- Jennifer I. Lim
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
- Xincheng Yao
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
5
Dadzie AK, Iddir SP, Abtahi M, Ebrahimi B, Le D, Ganesh S, Son T, Heiferman MJ, Yao X. Colour fusion effect on deep learning classification of uveal melanoma. Eye (Lond) 2024. [PMID: 38773261] [DOI: 10.1038/s41433-024-03148-4]
Abstract
BACKGROUND: Reliable differentiation of uveal melanoma (UM) and choroidal naevi is crucial to guide appropriate treatment, preventing unnecessary procedures for benign lesions and ensuring timely treatment for potentially malignant cases. The purpose of this study is to validate deep learning classification of UM and choroidal naevi, and to evaluate the effect of colour fusion options on classification performance. METHODS: A total of 798 ultra-widefield retinal images of 438 patients were included in this retrospective study, comprising 157 patients diagnosed with UM and 281 patients diagnosed with choroidal naevus. Colour fusion options, including early fusion, intermediate fusion, and late fusion, were tested for deep learning image classification with a convolutional neural network (CNN). F1-score, accuracy, and the area under the curve (AUC) of a receiver operating characteristic (ROC) were used to evaluate classification performance. RESULTS: Colour fusion options significantly affected deep learning performance. For single-colour learning, the red channel showed superior performance compared with the green and blue channels. For multi-colour learning, intermediate fusion was better than the early and late fusion options. CONCLUSION: Deep learning is a promising approach for automated classification of UM and choroidal naevi. Colour fusion options can significantly affect classification performance.
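The ROC AUC used above as an evaluation metric can be computed without plotting a curve, via the rank-statistic identity: AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (ties counted as half). A self-contained sketch with made-up scores, not data from the study:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC of the ROC curve via the Mann-Whitney U statistic: the probability
    that a random positive outranks a random negative (ties count 0.5)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()   # positive beats negative
    ties = (pos[:, None] == neg[None, :]).sum()     # tied pairs
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
print(roc_auc(scores, labels))  # 8 of 9 positive/negative pairs correctly ranked
```

The pairwise comparison is quadratic in the number of cases, which is fine for study-sized datasets; sorted-rank implementations scale better.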
Affiliation(s)
- Albert K Dadzie
- Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, 60607, USA
- Sabrina P Iddir
- Department of Ophthalmology and Visual Sciences, University of Illinois Chicago, Chicago, IL, 60612, USA
- Mansour Abtahi
- Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, 60607, USA
- Behrouz Ebrahimi
- Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, 60607, USA
- David Le
- Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, 60607, USA
- Sanjay Ganesh
- Department of Ophthalmology and Visual Sciences, University of Illinois Chicago, Chicago, IL, 60612, USA
- Taeyoon Son
- Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, 60607, USA
- Michael J Heiferman
- Department of Ophthalmology and Visual Sciences, University of Illinois Chicago, Chicago, IL, 60612, USA
- Xincheng Yao
- Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, 60607, USA
- Department of Ophthalmology and Visual Sciences, University of Illinois Chicago, Chicago, IL, 60612, USA
6
Badhon RH, Thompson AC, Lim JI, Leng T, Alam MN. Quantitative Characterization of Retinal Features in Translated OCTA. medRxiv 2024:2024.02.23.24303275. [Preprint] [PMID: 38464168] [PMCID: PMC10925340] [DOI: 10.1101/2024.02.23.24303275]
Abstract
Purpose: This study explores the feasibility of using generative machine learning (ML) to translate optical coherence tomography (OCT) images into OCT angiography (OCTA) images, potentially bypassing the need for specialized OCTA hardware. Methods: We implemented a generative adversarial network framework comprising a 2D vascular segmentation model and a 2D OCTA image translation model. A public dataset of 500 patients, divided into subsets based on resolution and disease status, was used to validate the quality of translated OCTA (TR-OCTA) images. Validation employed several quality and quantitative metrics to compare the translated images with ground-truth OCTAs (GT-OCTA). We then quantitatively compared the vascular features generated in TR-OCTAs against GT-OCTAs to assess the feasibility of using TR-OCTA for objective disease diagnosis. Results: TR-OCTAs showed high image quality in both the 3 mm and 6 mm datasets (high resolution, with moderate structural similarity and contrast quality relative to GT-OCTAs). There were slight discrepancies in vascular metrics, especially in diseased patients. Blood vessel features such as tortuosity and vessel perimeter index tracked GT-OCTA better than density features, which are affected by local vascular distortions. Conclusion: This study presents a promising solution to the limitations of OCTA adoption in clinical practice by using vascular features from TR-OCTA for disease detection. Translational relevance: This work has the potential to significantly enhance the diagnostic process for retinal diseases by making detailed vascular imaging more widely available and reducing dependency on costly OCTA equipment.
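Quality comparisons between translated and ground-truth images typically rest on full-reference metrics. As one hedged example (PSNR only; the study's actual metric set, including structural similarity, is richer), assuming 8-bit intensity scaling and synthetic arrays:

```python
import numpy as np

def psnr(gt, pred, peak=255.0):
    """Peak signal-to-noise ratio between a ground-truth and a translated image.
    Higher is better; identical images give infinity."""
    mse = np.mean((gt.astype(float) - pred.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

gt = np.zeros((4, 4))            # toy "ground-truth OCTA"
pred = np.full((4, 4), 16.0)     # toy "translated OCTA", off by 16 levels
print(psnr(gt, pred))
```

A uniform 16-level error out of 255 yields roughly 24 dB, which gives a feel for the scale of the metric.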
Affiliation(s)
- Rashadul Hasan Badhon
- Department of Electrical and Computer Engineering, University of North Carolina at Charlotte, Charlotte, NC, United States
- Atalie Carina Thompson
- Department of Surgical Ophthalmology, Atrium-Health Wake Forest Baptist, Winston-Salem, NC, United States
- Jennifer I. Lim
- Department of Ophthalmology and Visual Science, University of Illinois at Chicago, Chicago, IL, United States
- Theodore Leng
- Department of Ophthalmology, Stanford University School of Medicine, Stanford, CA, United States
- Minhaj Nur Alam
- Department of Electrical and Computer Engineering, University of North Carolina at Charlotte, Charlotte, NC, United States
7
El Habib Daho M, Li Y, Zeghlache R, Boité HL, Deman P, Borderie L, Ren H, Mannivanan N, Lepicard C, Cochener B, Couturier A, Tadayoni R, Conze PH, Lamard M, Quellec G. DISCOVER: 2-D multiview summarization of Optical Coherence Tomography Angiography for automatic diabetic retinopathy diagnosis. Artif Intell Med 2024;149:102803. [PMID: 38462293] [DOI: 10.1016/j.artmed.2024.102803]
Abstract
Diabetic Retinopathy (DR), an ocular complication of diabetes, is a leading cause of blindness worldwide. Traditionally, DR is monitored using Color Fundus Photography (CFP), a widespread 2-D imaging modality. However, DR classifications based on CFP have poor predictive power, resulting in suboptimal DR management. Optical Coherence Tomography Angiography (OCTA) is a recent 3-D imaging modality offering enhanced structural and functional information (blood flow) with a wider field of view. This paper investigates automatic DR severity assessment using 3-D OCTA. A straightforward solution to this task is a 3-D neural network classifier. However, 3-D architectures have numerous parameters and typically require many training samples. A lighter solution consists of using 2-D neural network classifiers that process 2-D en-face (or frontal) projections and/or 2-D cross-sectional slices. Such an approach mimics the way ophthalmologists analyze OCTA acquisitions: (1) en-face flow maps are often used to detect avascular zones and neovascularization, and (2) cross-sectional slices are commonly analyzed to detect macular edema, for instance. However, arbitrary data reduction or selection might result in information loss. Two complementary strategies are thus proposed to optimally summarize OCTA volumes with 2-D images: (1) a parametric en-face projection optimized through deep learning and (2) a cross-sectional slice selection process controlled through gradient-based attribution. The full summarization and DR classification pipeline is trained from end to end. The automatic 2-D summary can be displayed in a viewer or printed in a report to support the decision. We show that the proposed 2-D summarization and classification pipeline outperforms direct 3-D classification with the advantage of improved interpretability.
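In their simplest fixed form, the en-face projections discussed above are reductions of the 3-D volume along the depth axis; the paper's contribution is to learn a parametric projection instead of using a fixed one. A minimal sketch of the fixed baseline with a synthetic volume (axis ordering depth, height, width is an assumption for illustration):

```python
import numpy as np

# Toy OCTA volume: (depth, height, width), with flow signal in one slab
volume = np.zeros((16, 32, 32))
volume[5, 10:20, 10:20] = 1.0  # a perfused patch at one depth layer

# Fixed en-face projections collapse the depth axis (axis 0) into 2-D flow maps
mean_proj = volume.mean(axis=0)  # average flow across depth
max_proj = volume.max(axis=0)    # brightest flow across depth

print(mean_proj.shape, max_proj.max())
```

Maximum projection preserves the full contrast of the perfused patch, while mean projection dilutes it across the 16 depth layers; a learned parametric projection can weight depths adaptively between these extremes.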
Affiliation(s)
- Mostafa El Habib Daho
- Univ Bretagne Occidentale, Brest, F-29200, France; Inserm, UMR 1101, Brest, F-29200, France
- Yihao Li
- Univ Bretagne Occidentale, Brest, F-29200, France; Inserm, UMR 1101, Brest, F-29200, France
- Rachid Zeghlache
- Univ Bretagne Occidentale, Brest, F-29200, France; Inserm, UMR 1101, Brest, F-29200, France
- Hugo Le Boité
- Sorbonne University, Paris, F-75006, France; Service d'Ophtalmologie, Hôpital Lariboisière, APHP, Paris, F-75475, France
- Pierre Deman
- ADCIS, Saint-Contest, F-14280, France; Evolucare Technologies, Le Pecq, F-78230, France
- Hugang Ren
- Carl Zeiss Meditec, Dublin, CA 94568, USA
- Capucine Lepicard
- Service d'Ophtalmologie, Hôpital Lariboisière, APHP, Paris, F-75475, France
- Béatrice Cochener
- Univ Bretagne Occidentale, Brest, F-29200, France; Inserm, UMR 1101, Brest, F-29200, France; Service d'Ophtalmologie, CHRU Brest, Brest, F-29200, France
- Aude Couturier
- Service d'Ophtalmologie, Hôpital Lariboisière, APHP, Paris, F-75475, France
- Ramin Tadayoni
- Service d'Ophtalmologie, Hôpital Lariboisière, APHP, Paris, F-75475, France; Paris Cité University, Paris, F-75006, France
- Pierre-Henri Conze
- Inserm, UMR 1101, Brest, F-29200, France; IMT Atlantique, Brest, F-29200, France
- Mathieu Lamard
- Univ Bretagne Occidentale, Brest, F-29200, France; Inserm, UMR 1101, Brest, F-29200, France
8
Li W, Bian L, Ma B, Sun T, Liu Y, Sun Z, Zhao L, Feng K, Yang F, Wang X, Chan S, Dou H, Qi H. Interpretable Detection of Diabetic Retinopathy, Retinal Vein Occlusion, Age-Related Macular Degeneration, and Other Fundus Conditions. Diagnostics (Basel) 2024;14:121. [PMID: 38247998] [DOI: 10.3390/diagnostics14020121]
Abstract
Diabetic retinopathy (DR), retinal vein occlusion (RVO), and age-related macular degeneration (AMD) pose significant global health challenges, often resulting in vision impairment and blindness. Automatic detection of these conditions is crucial, particularly in underserved rural areas with limited access to ophthalmic services. Despite remarkable advancements in artificial intelligence, especially convolutional neural networks (CNNs), their complexity can make interpretation difficult. In this study, we curated a dataset of 15,089 color fundus photographs (CFPs) obtained from 8110 patients who underwent fundus fluorescein angiography (FFA) examination. The primary objective was to construct integrated models that merge CNNs with an attention mechanism. These models were designed for a hierarchical multilabel classification task, focusing on the detection of DR, RVO, AMD, and other fundus conditions, and extending to the detailed classification of DR, RVO, and AMD according to their respective subclasses. Our methodology translates diagnostic information obtained from FFA results into CFPs, and we evaluated the models' ability to achieve precise diagnoses based on CFPs alone. Our models showed improvements across diverse fundus conditions, with the ConvNeXt-base + attention model standing out for its performance. For DR detection, it achieved an area under the receiver operating characteristic curve (AUC) of 0.943, a referable F1 score of 0.870, and a Cohen's kappa of 0.778. For RVO, it attained an AUC of 0.960, a referable F1 score of 0.854, and a Cohen's kappa of 0.819. For AMD detection, it achieved an AUC of 0.959, an F1 score of 0.727, and a Cohen's kappa of 0.686. The model also subclassified RVO and AMD with commendable sensitivity and specificity. Moreover, our models enhance interpretability by visualizing attention weights on fundus images, aiding the identification of disease findings. These outcomes underscore the substantial impact of our models in advancing the detection of DR, RVO, and AMD, offering the potential for improved patient outcomes and positively influencing the healthcare landscape.
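Cohen's kappa, reported above for each detection task, corrects raw agreement for the agreement expected by chance. A small sketch for the binary case with made-up labels, not data from the study:

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa for binary labels: (observed - expected) / (1 - expected),
    where 'expected' is the agreement two independent raters with these
    marginal rates would reach by chance."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    po = np.mean(y_true == y_pred)                     # observed agreement
    p_yes = y_true.mean() * y_pred.mean()              # chance both say 1
    p_no = (1 - y_true.mean()) * (1 - y_pred.mean())   # chance both say 0
    pe = p_yes + p_no                                  # expected agreement
    return (po - pe) / (1 - pe)

y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 1]
print(cohens_kappa(y_true, y_pred))
```

With 6 of 8 agreements and balanced marginals (chance agreement 0.5), kappa lands at 0.5, illustrating why kappa is well below raw accuracy when classes are balanced.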
Affiliation(s)
- Wenlong Li
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Linbo Bian
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Baikai Ma
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Tong Sun
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Yiyun Liu
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Zhengze Sun
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Lin Zhao
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Kang Feng
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Fan Yang
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Xiaona Wang
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Szyyann Chan
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Hongliang Dou
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Hong Qi
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
9
Pradeep K, Jeyakumar V, Bhende M, Shakeel A, Mahadevan S. Artificial intelligence and hemodynamic studies in optical coherence tomography angiography for diabetic retinopathy evaluation: A review. Proc Inst Mech Eng H 2024;238:3-21. [PMID: 38044619] [DOI: 10.1177/09544119231213443]
Abstract
Diabetic retinopathy (DR) is a rapidly emerging retinal abnormality worldwide, which can cause significant vision loss by disrupting the vascular structure of the retina. Recently, optical coherence tomography angiography (OCTA) has emerged as an effective imaging tool for diagnosing and monitoring DR. OCTA produces high-quality 3-dimensional images and provides deeper visualization of retinal vessel capillaries and plexuses. The clinical relevance of OCTA in detecting, classifying, and planning therapeutic procedures for DR patients has been highlighted in various studies. Quantitative indicators obtained from OCTA, such as retinal blood vessel segmentation, foveal avascular zone (FAZ) extraction, retinal blood vessel density, blood velocity, flow rate, capillary vessel pressure, and retinal oxygen extraction, have been identified as crucial hemodynamic features for screening DR using computer-aided systems in artificial intelligence (AI). AI has the potential to assist physicians and ophthalmologists in developing new treatment options. In this review, we explore how OCTA has impacted the future of DR screening and early diagnosis, and how analysis methods have evolved over time in clinical trials. The future of OCTA imaging and its continued use in AI-assisted analysis is promising and will undoubtedly enhance the clinical management of DR.
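Of the quantitative indicators listed above, FAZ area is among the simplest to derive once a segmentation exists: count the mask pixels and scale by the physical pixel size. A sketch under the assumption of a square 3 mm en-face scan and a synthetic mask, not any particular device's convention:

```python
import numpy as np

def faz_area_mm2(faz_mask, scan_width_mm=3.0):
    """Foveal avascular zone area in mm^2 from a binary FAZ mask, given the
    physical width of the (assumed square) en-face scan."""
    h, w = faz_mask.shape
    pixel_area = (scan_width_mm / w) * (scan_width_mm / h)  # mm^2 per pixel
    return faz_mask.sum() * pixel_area

mask = np.zeros((300, 300), dtype=int)
mask[140:160, 140:160] = 1  # 20 x 20 px central avascular region
print(faz_area_mm2(mask))
```

At 300 pixels over 3 mm, each pixel covers 0.0001 mm^2, so the 400-pixel patch corresponds to 0.04 mm^2; real FAZ masks are irregular, but the scaling logic is the same.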
Affiliation(s)
- K Pradeep
- Department of Biomedical Engineering, Chennai Institute of Technology, Chennai, Tamil Nadu, India
- Vijay Jeyakumar
- Department of Biomedical Engineering, Sri Sivasubramaniya Nadar College of Engineering, Chennai, Tamil Nadu, India
- Muna Bhende
- Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya Medical Research Foundation, Chennai, Tamil Nadu, India
- Areeba Shakeel
- Vitreoretina Department, Sankara Nethralaya Medical Research Foundation, Chennai, Tamil Nadu, India
- Shriraam Mahadevan
- Department of Endocrinology, Sri Ramachandra Institute of Higher Education and Research, Chennai, Tamil Nadu, India
10
Zang P, Hormel TT, Wang J, Guo Y, Bailey ST, Flaxel CJ, Huang D, Hwang TS, Jia Y. Interpretable Diabetic Retinopathy Diagnosis Based on Biomarker Activation Map. IEEE Trans Biomed Eng 2024;71:14-25. [PMID: 37405891] [PMCID: PMC10796196] [DOI: 10.1109/tbme.2023.3290541]
Abstract
OBJECTIVE: Deep learning classifiers provide the most accurate means of automatically diagnosing diabetic retinopathy (DR) based on optical coherence tomography (OCT) and its angiography (OCTA). The power of these models is attributable in part to the inclusion of hidden layers that provide the complexity required to achieve a desired task. However, hidden layers also render algorithm outputs difficult to interpret. Here we introduce a novel biomarker activation map (BAM) framework based on generative adversarial learning that allows clinicians to verify and understand classifiers' decision-making. METHODS: A data set of 456 macular scans was graded as non-referable or referable DR based on current clinical standards. A DR classifier used to evaluate our BAM was first trained on this data set. The BAM generation framework was designed by combining two U-shaped generators to provide meaningful interpretability to this classifier. The main generator was trained to take referable scans as input and produce an output that the classifier would grade as non-referable. The BAM is then constructed as the difference image between the output and input of the main generator. To ensure that the BAM highlights only classifier-utilized biomarkers, an assistant generator was trained to do the opposite: produce scans that the classifier would grade as referable from non-referable scans. RESULTS: The generated BAMs highlighted known pathologic features, including nonperfusion area and retinal fluid. CONCLUSION/SIGNIFICANCE: A fully interpretable classifier based on these highlights could help clinicians better utilize and verify automated DR diagnosis.
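The final BAM construction step described above (a difference image between the main generator's input and output) can be sketched in a few lines. The generators themselves are omitted here, and the "translated" scan is synthetic; only the difference-map arithmetic is shown:

```python
import numpy as np

def biomarker_activation_map(input_scan, generator_output):
    """BAM as a difference image: the main generator turns a referable scan
    into a non-referable one, so large absolute differences mark the regions
    the classifier relied on (the 'removed' biomarkers)."""
    return np.abs(generator_output.astype(float) - input_scan.astype(float))

scan = np.zeros((8, 8))          # toy referable input scan
translated = scan.copy()
translated[2:4, 2:4] = 0.8       # pretend the generator altered a lesion here
bam = biomarker_activation_map(scan, translated)
print(bam.max(), bam.sum())
```

In the paper the assistant generator constrains training so that only classifier-utilized regions differ; this sketch shows only why the resulting difference map localizes those regions.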
Affiliation(s)
- Pengxiao Zang
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239 USA; Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239 USA
- Tristan T. Hormel
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239 USA
- Jie Wang
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239 USA; Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239 USA
- Yukun Guo
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239 USA; Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239 USA
- Steven T. Bailey
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239 USA
- Christina J. Flaxel
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239 USA
- David Huang
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239 USA; Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239 USA
- Thomas S. Hwang
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239 USA
- Yali Jia
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239 USA; Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239 USA
11
Huang S, Bacchi S, Chan W, Macri C, Selva D, Wong CX, Sun MT. Detection of systemic cardiovascular illnesses and cardiometabolic risk factors with machine learning and optical coherence tomography angiography: a pilot study. Eye (Lond) 2023; 37:3629-3633. [PMID: 37221360] [PMCID: PMC10686409] [DOI: 10.1038/s41433-023-02570-4]
Abstract
BACKGROUND/OBJECTIVES Optical coherence tomography angiography (OCTA) can identify changes in the retinal microvasculature of people with various cardiometabolic risk factors. Machine learning has previously been applied within ophthalmic imaging but has not yet been applied to these risk factors. This study assessed the feasibility of predicting the presence or absence of cardiovascular conditions and their associated risk factors using machine learning and OCTA. METHODS This was a cross-sectional study. Demographic and co-morbidity data were collected for each participant undergoing 3 × 3 mm, 6 × 6 mm and 8 × 8 mm OCTA scanning using the Carl Zeiss CIRRUS HD-OCT model 5000. The data were then pre-processed and randomly split into training and testing datasets (75%/25% split) before being applied to two models (a convolutional neural network (CNN) and MobileNetV2). Once developed on the training dataset, their performance was assessed on the unseen test dataset. RESULTS Two hundred forty-seven participants were included. Both models performed best in predicting the presence of hyperlipidaemia in 3 × 3 mm scans, with AUCs of 0.74 and 0.81 and accuracy of 0.79 for the CNN and MobileNetV2, respectively. Modest performance was achieved in the identification of diabetes mellitus, hypertension and congestive heart failure in 3 × 3 mm scans (all with AUC and accuracy >0.5). There was no significant recognition in 6 × 6 and 8 × 8 mm scans for any cardiometabolic risk factor. CONCLUSION This study demonstrates the potential of machine learning to identify the presence of cardiometabolic risk factors, in particular hyperlipidaemia, in high-resolution 3 × 3 mm OCTA scans. Early detection of risk factors prior to a clinically significant event may assist in preventing adverse outcomes.
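The random 75%/25% train/test partition described above can be sketched as below. The seed, shuffling details, and participant indexing are assumptions for illustration, not the study's actual pipeline.

```python
import random

def train_test_split(items, train_frac=0.75, seed=0):
    """Shuffle a dataset and split it into train/test partitions."""
    rng = random.Random(seed)
    shuffled = items[:]              # copy, so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

participants = list(range(247))      # 247 participants, as in the study
train, test = train_test_split(participants)
```

Splitting at the participant level (rather than the scan level) avoids leaking the same eye into both partitions, which would inflate the reported test performance.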
Affiliation(s)
- Sonia Huang
- South Australian Institute of Ophthalmology, The University of Adelaide and Royal Adelaide Hospital, Adelaide, SA, Australia
- Stephen Bacchi
- Department of Neurology, Royal Adelaide Hospital, Adelaide, SA, Australia
- WengOnn Chan
- South Australian Institute of Ophthalmology, The University of Adelaide and Royal Adelaide Hospital, Adelaide, SA, Australia
- Carmelo Macri
- South Australian Institute of Ophthalmology, The University of Adelaide and Royal Adelaide Hospital, Adelaide, SA, Australia
- Dinesh Selva
- South Australian Institute of Ophthalmology, The University of Adelaide and Royal Adelaide Hospital, Adelaide, SA, Australia
- Christopher X Wong
- Department of Cardiology, University of Adelaide and Royal Adelaide Hospital, Adelaide, SA, Australia
- Michelle T Sun
- South Australian Institute of Ophthalmology, The University of Adelaide and Royal Adelaide Hospital, Adelaide, SA, Australia
12
Yao X, Dadzie A, Iddir S, Abtahi M, Ebrahimi B, Le D, Ganesh S, Son T, Heiferman M. Color Fusion Effect on Deep Learning Classification of Uveal Melanoma. Res Sq 2023:rs.3.rs-3399214. [PMID: 37986860] [PMCID: PMC10659548] [DOI: 10.21203/rs.3.rs-3399214/v1]
Abstract
Background Reliable differentiation of uveal melanoma (UM) and choroidal nevi is crucial to guide appropriate treatment, preventing unnecessary procedures for benign lesions and ensuring timely treatment for potentially malignant cases. The purpose of this study was to validate deep learning classification of UM and choroidal nevi, and to evaluate the effect of color fusion options on classification performance. Methods A total of 798 ultra-widefield retinal images of 438 patients were included in this retrospective study, comprising 157 patients diagnosed with UM and 281 patients diagnosed with choroidal nevus. Color fusion options, including early fusion, intermediate fusion and late fusion, were tested for deep learning image classification with a convolutional neural network (CNN). Specificity, sensitivity, F1-score, accuracy, and the area under the receiver operating characteristic (ROC) curve (AUC) were used to evaluate classification performance. The saliency map visualization technique was used to understand which areas of the image had the most influence on the CNN's classification decisions. Results Color fusion options were observed to affect deep learning performance significantly. For single-color learning, the red channel showed superior performance compared to the green and blue channels. For multi-color learning, intermediate fusion was better than the early and late fusion options. Conclusion Deep learning is a promising approach for automated classification of uveal melanoma and choroidal nevi, and color fusion options can significantly affect classification performance.
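Of the three fusion options compared above, early fusion is the simplest: the single-color channels are stacked into one multi-channel input before any learning takes place. A minimal sketch, with an illustrative nested-list image representation rather than the study's actual tensors:

```python
def early_fusion(red, green, blue):
    """Early fusion: stack the single-color images into one
    multi-channel input before any network layer sees them."""
    return [[(r, g, b) for r, g, b in zip(rr, gr, br)]
            for rr, gr, br in zip(red, green, blue)]

# 1x2 toy channels fused into a single RGB-like input.
fused = early_fusion([[0.1, 0.2]], [[0.3, 0.4]], [[0.5, 0.6]])
```

Intermediate and late fusion differ only in where the merge happens: after per-channel feature extraction, or after per-channel classification, respectively.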
13
Xue H, Sun Y, Chen J, Tian H, Liu Z, Shen M, Liu L. CAT-CBAM-Net: An Automatic Scoring Method for Sow Body Condition Based on CNN and Transformer. Sensors (Basel) 2023; 23:7919. [PMID: 37765975] [PMCID: PMC10535612] [DOI: 10.3390/s23187919]
Abstract
Sow body condition scoring is a vital procedure in sow management. A timely and accurate assessment of a sow's body condition helps determine nutritional supply and is critical to enhancing sow reproductive performance. Manual sow body condition scoring methods, which are time-consuming and labor-intensive, have been extensively employed on large-scale sow farms. To address this problem, a dual neural network-based automatic scoring method for sow body condition was developed in this study. The method enhances the ability to capture local features and global information in sow images by combining CNN and transformer networks. Moreover, it introduces a CBAM module to help the network attend to crucial feature channels while suppressing attention to irrelevant ones. To tackle the problem of imbalanced categories and mislabeled body condition data, the original loss function was replaced with an optimized focal loss function. In model testing, sow body condition classification achieved an average precision of 91.06%, an average recall of 91.58%, and an average F1 score of 91.31%. Comprehensive comparative experiments suggested that the proposed method yielded the best performance on this dataset. The method developed in this study can automatically score sow body condition and shows broad, promising applications.
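The focal loss substituted for the standard loss above down-weights well-classified examples so that training concentrates on hard and minority-class samples. A minimal binary sketch — the gamma and alpha values are the common defaults from the focal loss literature, not necessarily the paper's tuned settings:

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: the (1 - p_t)^gamma factor shrinks the loss of
    easy, well-classified examples so training focuses on hard or
    minority-class ones. p is the predicted positive probability, y in {0, 1}."""
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

easy = focal_loss(0.9, 1)   # confident, correct prediction
hard = focal_loss(0.3, 1)   # badly misclassified positive
```

With gamma = 0 and alpha = 0.5 this reduces to (half the) ordinary cross-entropy; increasing gamma suppresses the easy examples more aggressively.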
Affiliation(s)
- Hongxiang Xue
- College of Engineering, Nanjing Agricultural University, Nanjing 210031, China; Key Laboratory of Breeding Equipment, Ministry of Agriculture and Rural Affairs, Nanjing 210031, China
- Yuwen Sun
- College of Engineering, Nanjing Agricultural University, Nanjing 210031, China; Key Laboratory of Breeding Equipment, Ministry of Agriculture and Rural Affairs, Nanjing 210031, China
- Jinxin Chen
- College of Engineering, Nanjing Agricultural University, Nanjing 210031, China; Key Laboratory of Breeding Equipment, Ministry of Agriculture and Rural Affairs, Nanjing 210031, China
- Haonan Tian
- Key Laboratory of Breeding Equipment, Ministry of Agriculture and Rural Affairs, Nanjing 210031, China; College of Artificial Intelligence, Nanjing Agricultural University, Nanjing 210031, China
- Zihao Liu
- College of Engineering, Nanjing Agricultural University, Nanjing 210031, China; Key Laboratory of Breeding Equipment, Ministry of Agriculture and Rural Affairs, Nanjing 210031, China
- Mingxia Shen
- Key Laboratory of Breeding Equipment, Ministry of Agriculture and Rural Affairs, Nanjing 210031, China; College of Artificial Intelligence, Nanjing Agricultural University, Nanjing 210031, China
- Longshen Liu
- Key Laboratory of Breeding Equipment, Ministry of Agriculture and Rural Affairs, Nanjing 210031, China; College of Artificial Intelligence, Nanjing Agricultural University, Nanjing 210031, China
14
Hormel TT, Jia Y. OCT angiography and its retinal biomarkers [Invited]. Biomed Opt Express 2023; 14:4542-4566. [PMID: 37791289] [PMCID: PMC10545210] [DOI: 10.1364/boe.495627]
Abstract
Optical coherence tomography angiography (OCTA) is a high-resolution, depth-resolved imaging modality with important applications in ophthalmic practice. An extension of structural OCT, OCTA enables non-invasive, high-contrast imaging of retinal and choroidal vasculature that is amenable to quantification. As such, OCTA offers the capability to identify and characterize biomarkers important for clinical practice and therapeutic research. Here, we review new methods for analyzing biomarkers and discuss new insights provided by OCTA.
Affiliation(s)
- Tristan T. Hormel
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon, USA
- Yali Jia
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon, USA; Department of Biomedical Engineering, Oregon Health & Science University, Portland, Oregon, USA
15
Ebrahimi B, Le D, Abtahi M, Dadzie AK, Lim JI, Chan RVP, Yao X. Optimizing the OCTA layer fusion option for deep learning classification of diabetic retinopathy. Biomed Opt Express 2023; 14:4713-4724. [PMID: 37791267] [PMCID: PMC10545199] [DOI: 10.1364/boe.495999]
Abstract
The purpose of this study was to evaluate layer fusion options for deep learning classification of optical coherence tomography (OCT) angiography (OCTA) images. A convolutional neural network (CNN) end-to-end classifier was used to classify OCTA images from healthy control subjects and diabetic patients with no retinopathy (NoDR) and non-proliferative diabetic retinopathy (NPDR). For each eye, three en-face OCTA images were acquired from the superficial capillary plexus (SCP), deep capillary plexus (DCP), and choriocapillaris (CC) layers. The performance of the CNN classifier with individual layer inputs and multi-layer fusion architectures, including early fusion, intermediate fusion, and late fusion, was quantitatively compared. Among individual layer inputs, the superficial OCTA performed best, with 87.25% accuracy, 78.26% sensitivity, and 90.10% specificity in differentiating control, NoDR, and NPDR. Among multi-layer fusion options, the intermediate-fusion architecture performed best, achieving 92.65% accuracy, 87.01% sensitivity, and 94.37% specificity. To interpret the deep learning performance, Gradient-weighted Class Activation Mapping (Grad-CAM) was used to identify spatial characteristics for OCTA classification. Comparative analysis indicates that layer data fusion options can affect deep learning classification performance and that the intermediate-fusion approach is optimal for OCTA classification of DR.
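The winning intermediate-fusion architecture encodes each OCTA layer in its own branch and merges the branch features before the classification head. A schematic sketch — the per-branch "encoder" here is a trivial global mean, a hypothetical stand-in for the study's CNN branches:

```python
def branch_features(image):
    """Stand-in for a per-layer CNN encoder branch: here just a global mean."""
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)

def intermediate_fusion(scp, dcp, cc):
    """Encode each OCTA layer (SCP, DCP, CC) in its own branch, then
    concatenate the branch features for a shared classification head."""
    return [branch_features(scp), branch_features(dcp), branch_features(cc)]

features = intermediate_fusion([[1, 1], [1, 1]],
                               [[2, 2], [2, 2]],
                               [[3, 3], [3, 3]])
```

Early fusion would instead stack the three layers at the input; late fusion would combine three independent per-layer predictions at the output.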
Affiliation(s)
- Behrouz Ebrahimi
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- David Le
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Mansour Abtahi
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Albert K. Dadzie
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Jennifer I. Lim
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
- R. V. Paul Chan
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
- Xincheng Yao
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA; Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
16
Li Y, El Habib Daho M, Conze PH, Zeghlache R, Le Boité H, Bonnin S, Cosette D, Magazzeni S, Lay B, Le Guilcher A, Tadayoni R, Cochener B, Lamard M, Quellec G. Hybrid Fusion of High-Resolution and Ultra-Widefield OCTA Acquisitions for the Automatic Diagnosis of Diabetic Retinopathy. Diagnostics (Basel) 2023; 13:2770. [PMID: 37685306] [PMCID: PMC10486731] [DOI: 10.3390/diagnostics13172770]
Abstract
Optical coherence tomography angiography (OCTA) can deliver enhanced diagnosis for diabetic retinopathy (DR). This study evaluated a deep learning (DL) algorithm for automatic DR severity assessment using high-resolution and ultra-widefield (UWF) OCTA. Diabetic patients were examined with 6×6 mm² high-resolution OCTA and 15×15 mm² UWF-OCTA using the PLEX® Elite 9000. A novel DL algorithm was trained for automatic DR severity inference using both OCTA acquisitions. The algorithm employed a unique hybrid fusion framework, integrating structural and flow information from both acquisitions. It was trained on data from 875 eyes of 444 patients. Tested on 53 patients (97 eyes), the algorithm achieved a good area under the receiver operating characteristic curve (AUC) for detecting DR (0.8868), moderate non-proliferative DR (0.8276), severe non-proliferative DR (0.8376), and proliferative/treated DR (0.9070). These results significantly outperformed detection with the 6×6 mm² (AUC = 0.8462, 0.7793, 0.7889, and 0.8104, respectively) or 15×15 mm² (AUC = 0.8251, 0.7745, 0.7967, and 0.8786, respectively) acquisitions alone. Thus, combining high-resolution and UWF-OCTA acquisitions holds the potential for improved early- and late-stage DR detection, offering a foundation for enhancing DR management and a clear path for future works involving expanded datasets and additional imaging modalities.
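The hybrid framework above merges the two acquisitions inside the network; the simplest baseline way to combine them is decision-level (late) fusion of the two single-acquisition models. A hedged sketch — the equal weighting and the probability values are illustrative assumptions, not the paper's method:

```python
def late_fusion(probs_highres, probs_uwf, weight=0.5):
    """Decision-level (late) fusion: weighted average of the class
    probabilities predicted separately from the high-resolution
    acquisition and the ultra-widefield acquisition."""
    return [weight * p + (1.0 - weight) * q
            for p, q in zip(probs_highres, probs_uwf)]

# Hypothetical per-severity probabilities from the two single-acquisition models.
combined = late_fusion([0.7, 0.2, 0.1], [0.5, 0.3, 0.2])
```

The paper's hybrid approach fuses earlier, at the feature level, which lets the classifier exploit correlations between the fine-detail and wide-field views that a probability average cannot capture.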
Affiliation(s)
- Yihao Li
- Inserm, UMR 1101 LaTIM, F-29200 Brest, France; Univ Bretagne Occidentale, F-29200 Brest, France
- Mostafa El Habib Daho
- Inserm, UMR 1101 LaTIM, F-29200 Brest, France; Univ Bretagne Occidentale, F-29200 Brest, France
- Pierre-Henri Conze
- Inserm, UMR 1101 LaTIM, F-29200 Brest, France; IMT Atlantique, ITI Department, F-29200 Brest, France
- Rachid Zeghlache
- Inserm, UMR 1101 LaTIM, F-29200 Brest, France; Univ Bretagne Occidentale, F-29200 Brest, France
- Hugo Le Boité
- Sorbonne University, F-75006 Paris, France; Service d’Ophtalmologie, Hôpital Lariboisière, AP-HP, F-75475 Paris, France
- Sophie Bonnin
- Service d’Ophtalmologie, Hôpital Lariboisière, AP-HP, F-75475 Paris, France
- Bruno Lay
- ADCIS, F-14280 Saint-Contest, France
- Ramin Tadayoni
- Service d’Ophtalmologie, Hôpital Lariboisière, AP-HP, F-75475 Paris, France
- Béatrice Cochener
- Inserm, UMR 1101 LaTIM, F-29200 Brest, France; Univ Bretagne Occidentale, F-29200 Brest, France; Service d’Ophtalmologie, CHRU Brest, F-29200 Brest, France
- Mathieu Lamard
- Inserm, UMR 1101 LaTIM, F-29200 Brest, France; Univ Bretagne Occidentale, F-29200 Brest, France
17
Alharbi AH, Towfek SK, Abdelhamid AA, Ibrahim A, Eid MM, Khafaga DS, Khodadadi N, Abualigah L, Saber M. Diagnosis of Monkeypox Disease Using Transfer Learning and Binary Advanced Dipper Throated Optimization Algorithm. Biomimetics (Basel) 2023; 8:313. [PMID: 37504202] [PMCID: PMC10807651] [DOI: 10.3390/biomimetics8030313]
Abstract
The virus that causes monkeypox has been observed in Africa for several years and has been linked to the development of skin lesions. Public panic and anxiety have resulted from the deadly repercussions of virus infections following the COVID-19 pandemic, and rapid detection approaches are crucial in a pandemic setting. This study's overarching goal is to use metaheuristic optimization to boost the performance of feature selection and classification methods for identifying skin lesions as indicators of monkeypox in the event of an outbreak. Deep learning and transfer learning approaches are used to extract the necessary features, with the GoogLeNet network serving as the feature extraction framework. In addition, a binary implementation of the dipper throated optimization (DTO) algorithm is used for feature selection. A decision tree classifier then labels the selected set of features, and it is optimized using the continuous version of the DTO algorithm to improve classification accuracy. The proposed approach and competing methods are compared using the following metrics: accuracy, sensitivity, specificity, p-value, N-value, and F1-score. Through feature selection and the decision tree classifier, the proposed approach achieves an F1-score of 0.92, sensitivity of 0.95, specificity of 0.61, p-value of 0.89, and N-value of 0.79. The overall accuracy after optimizing the parameters of the decision tree classifier is 94.35%. Furthermore, analysis of variance (ANOVA) and the Wilcoxon signed-rank test were applied to the results to investigate the statistical distinction between the proposed methodology and the alternatives. This comparison verified the uniqueness and importance of the proposed approach to monkeypox case detection.
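Binary metaheuristic feature selection of the kind described above is a wrapper method: each candidate binary mask is scored by the accuracy of a classifier trained on the masked features. The sketch below illustrates the wrapper idea only — the search loop is a random-mask stand-in for the actual DTO update rules, and the nearest-centroid scorer is a hypothetical stand-in for the decision tree:

```python
import random

def centroid_accuracy(X, y, mask):
    """Wrapper fitness: accuracy of a nearest-centroid classifier
    restricted to the features where mask == 1."""
    idx = [i for i, m in enumerate(mask) if m]
    if not idx:
        return 0.0
    proj = lambda row: [row[i] for i in idx]
    centroids = {}
    for label in set(y):
        rows = [proj(x) for x, t in zip(X, y) if t == label]
        centroids[label] = [sum(col) / len(col) for col in zip(*rows)]
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    preds = [min(centroids, key=lambda L: dist(proj(x), centroids[L]))
             for x in X]
    return sum(p == t for p, t in zip(preds, y)) / len(y)

def binary_selection(X, y, n_iters=50, seed=1):
    """Stand-in for the binary DTO loop: propose candidate binary
    masks and keep the best-scoring one."""
    rng = random.Random(seed)
    n = len(X[0])
    best_mask = [1] * n
    best_fit = centroid_accuracy(X, y, best_mask)
    for _ in range(n_iters):
        mask = [rng.randint(0, 1) for _ in range(n)]
        fit = centroid_accuracy(X, y, mask)
        if fit > best_fit:
            best_mask, best_fit = mask, fit
    return best_mask, best_fit

X = [[0, 1], [0, 0], [1, 0], [1, 1]]   # feature 0 separates the classes
y = [0, 0, 1, 1]
mask, fitness = binary_selection(X, y)
```

The real DTO algorithm replaces the random proposals with guided position updates, but the fitness evaluation per mask is the same wrapper pattern.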
Affiliation(s)
- Amal H Alharbi
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- S K Towfek
- Computer Science and Intelligent Systems Research Center, Blacksburg, VA 24060, USA; Department of Communications and Electronics, Delta Higher Institute of Engineering and Technology, Mansoura 35111, Egypt
- Abdelaziz A Abdelhamid
- Department of Computer Science, College of Computing and Information Technology, Shaqra University, Shaqra 11961, Saudi Arabia; Department of Computer Science, Faculty of Computer and Information Sciences, Ain Shams University, Cairo 11566, Egypt
- Abdelhameed Ibrahim
- Computer Engineering and Control Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
- Marwa M Eid
- Faculty of Artificial Intelligence, Delta University for Science and Technology, Mansoura P.O. Box 11152, Egypt
- Doaa Sami Khafaga
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Nima Khodadadi
- Department of Civil and Architectural Engineering, University of Miami, Coral Gables, FL 33146, USA
- Laith Abualigah
- Computer Science Department, Prince Hussein Bin Abdullah Faculty for Information Technology, Al al-Bayt University, Mafraq 25113, Jordan; Department of Electrical and Computer Engineering, Lebanese American University, Byblos 13-5053, Lebanon; Hourani Center for Applied Scientific Research, Al-Ahliyya Amman University, Amman 19328, Jordan; MEU Research Unit, Middle East University, Amman 11831, Jordan; Applied Science Research Center, Applied Science Private University, Amman 11931, Jordan; School of Computer Sciences, Universiti Sains Malaysia, Gelugor 11800, Malaysia; School of Engineering and Technology, Sunway University Malaysia, Petaling Jaya 27500, Malaysia
- Mohamed Saber
- Electronics and Communications Engineering Department, Faculty of Engineering, Delta University for Science and Technology, Mansoura P.O. Box 11152, Egypt
18
Hassan E, Elmougy S, Ibraheem MR, Hossain MS, AlMutib K, Ghoneim A, AlQahtani SA, Talaat FM. Enhanced Deep Learning Model for Classification of Retinal Optical Coherence Tomography Images. Sensors (Basel) 2023; 23:5393. [PMID: 37420558] [DOI: 10.3390/s23125393]
Abstract
Retinal optical coherence tomography (OCT) imaging is a valuable tool for assessing the condition of the posterior segment of the eye. This assessment strongly affects the specificity of diagnosis, the monitoring of many physiological and pathological processes, and the evaluation of therapeutic effectiveness across clinical practice, including primary eye diseases and systemic diseases such as diabetes. Therefore, precise diagnosis, classification, and automated image analysis models are crucial. In this paper, we propose an enhanced optical coherence tomography (EOCT) model to classify retinal OCT images based on modified ResNet-50 and random forest algorithms, which are used in the proposed training strategy to enhance performance. The Adam optimizer is applied during training to increase the efficiency of the ResNet-50 model compared with common pre-trained models, such as spatially separable convolution networks and the visual geometry group (VGG-16) network. The experimental results show that the sensitivity, specificity, precision, negative predictive value, false positive rate, false discovery rate, false negative rate, accuracy, F1-score, and Matthews correlation coefficient are 0.9836, 0.9615, 0.9740, 0.9756, 0.0385, 0.0260, 0.0164, 0.9747, 0.9788, and 0.9474, respectively.
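All of the reported metrics derive from the four counts of the binary confusion matrix; the internal consistency of the list above (false positive rate = 1 − specificity, false discovery rate = 1 − precision, false negative rate = 1 − sensitivity) can be checked directly. A minimal sketch with illustrative counts, not the study's data:

```python
import math

def confusion_metrics(tp, fp, tn, fn):
    """Derive the reported binary-classification metrics from the
    four confusion-matrix counts."""
    metrics = {
        "sensitivity": tp / (tp + fn),        # true positive rate (recall)
        "specificity": tn / (tn + fp),
        "precision":   tp / (tp + fp),        # positive predictive value
        "npv":         tn / (tn + fn),        # negative predictive value
        "fpr":         fp / (fp + tn),        # false positive rate
        "fdr":         fp / (fp + tp),        # false discovery rate
        "fnr":         fn / (fn + tp),        # false negative rate
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
    }
    metrics["f1"] = 2 * metrics["precision"] * metrics["sensitivity"] / (
        metrics["precision"] + metrics["sensitivity"])
    metrics["mcc"] = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return metrics

m = confusion_metrics(tp=90, fp=10, tn=95, fn=5)
```

Reporting the complementary rates alongside their base metrics is redundant but common; the Matthews correlation coefficient is the single number least distorted by class imbalance.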
Affiliation(s)
- Esraa Hassan
- Faculty of Artificial Intelligence, Kafrelsheikh University, Kafrelsheikh 33516, Egypt
- Samir Elmougy
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Mai R Ibraheem
- Department of Information Technology, Faculty of Computers and Information, Kafrelsheikh University, Kafrelsheikh 33516, Egypt
- M Shamim Hossain
- Research Chair of Pervasive and Mobile Computing, Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
- Khalid AlMutib
- Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11574, Saudi Arabia
- Ahmed Ghoneim
- Research Chair of Pervasive and Mobile Computing, Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
- Salman A AlQahtani
- Research Chair of Pervasive and Mobile Computing, Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11574, Saudi Arabia
- Fatma M Talaat
- Faculty of Artificial Intelligence, Kafrelsheikh University, Kafrelsheikh 33516, Egypt
19
Lee T, Rivera A, Brune M, Kundu A, Haystead A, Winslow L, Kundu R, Wisely CE, Robbins CB, Henao R, Grewal DS, Fekrat S. Convolutional Neural Network-Based Automated Quality Assessment of OCT and OCT Angiography Image Maps in Individuals With Neurodegenerative Disease. Transl Vis Sci Technol 2023; 12:30. [PMID: 37389540] [PMCID: PMC10318591] [DOI: 10.1167/tvst.12.6.30]
Abstract
Purpose To train and test convolutional neural networks (CNNs) to automate quality assessment of optical coherence tomography (OCT) and OCT angiography (OCTA) images in patients with neurodegenerative disease. Methods Patients with neurodegenerative disease were enrolled in the Duke Eye Multimodal Imaging in Neurodegenerative Disease Study. Image inputs were ganglion cell-inner plexiform layer (GC-IPL) thickness maps and fovea-centered 6-mm × 6-mm OCTA scans of the superficial capillary plexus (SCP). Two trained graders manually labeled all images for quality (good versus poor). Interrater reliability (IRR) of manual quality assessment was calculated for a subset of each image type. Images were split into train, validation, and test sets in a 70%/15%/15% split. An AlexNet-based CNN was trained using these labels and evaluated with area under the receiver operating characteristic (AUC) and summaries of the confusion matrix. Results A total of 1465 GC-IPL thickness maps (1217 good and 248 poor quality) and 2689 OCTA scans of the SCP (1797 good and 892 poor quality) served as model inputs. The IRR of quality assessment agreement by two graders was 97% and 90% for the GC-IPL maps and OCTA scans, respectively. The AlexNet-based CNNs trained to assess quality of the GC-IPL images and OCTA scans achieved AUCs of 0.990 and 0.832, respectively. Conclusions CNNs can be trained to accurately differentiate good- from poor-quality GC-IPL thickness maps and OCTA scans of the macular SCP. Translational Relevance Since good-quality retinal images are critical for the accurate assessment of microvasculature and structure, incorporating an automated image quality sorter may obviate the need for manual image review.
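The AUC values reported above have a useful interpretation: the probability that a randomly chosen good-quality image receives a higher classifier score than a randomly chosen poor-quality one. A rank-based (Mann-Whitney) sketch with hypothetical scores:

```python
def auc(pos_scores, neg_scores):
    """Mann-Whitney formulation of ROC AUC: the fraction of
    (positive, negative) pairs ranked correctly; ties count 0.5."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

good = [0.9, 0.8, 0.75, 0.6]   # classifier scores for good-quality images
poor = [0.4, 0.3, 0.6]         # scores for poor-quality images
score = auc(good, poor)
```

This pairwise formulation makes clear why AUC is threshold-free and robust to the class imbalance present in the quality labels (e.g., 1217 good vs. 248 poor GC-IPL maps).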
Affiliation(s)
- Terry Lee
- iMIND Study Group, Duke University School of Medicine, Durham, NC, USA; Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- Alexandra Rivera
- iMIND Study Group, Duke University School of Medicine, Durham, NC, USA; Pratt School of Engineering, Duke University, Durham, NC, USA
- Matthew Brune
- iMIND Study Group, Duke University School of Medicine, Durham, NC, USA; Pratt School of Engineering, Duke University, Durham, NC, USA
- Anita Kundu
- iMIND Study Group, Duke University School of Medicine, Durham, NC, USA; Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- Alice Haystead
- iMIND Study Group, Duke University School of Medicine, Durham, NC, USA; Pratt School of Engineering, Duke University, Durham, NC, USA
- Lauren Winslow
- iMIND Study Group, Duke University School of Medicine, Durham, NC, USA; Pratt School of Engineering, Duke University, Durham, NC, USA
- Raj Kundu
- iMIND Study Group, Duke University School of Medicine, Durham, NC, USA; Pratt School of Engineering, Duke University, Durham, NC, USA
- C. Ellis Wisely
- iMIND Study Group, Duke University School of Medicine, Durham, NC, USA; Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- Cason B. Robbins
- iMIND Study Group, Duke University School of Medicine, Durham, NC, USA; Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- Ricardo Henao
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA; Department of Biostatistics & Bioinformatics, Duke University, Durham, NC, USA
- Dilraj S. Grewal
- iMIND Study Group, Duke University School of Medicine, Durham, NC, USA; Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- Sharon Fekrat
- iMIND Study Group, Duke University School of Medicine, Durham, NC, USA; Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA; Department of Neurology, Duke University School of Medicine, Durham, NC, USA
20
Matten P, Scherer J, Schlegl T, Nienhaus J, Stino H, Niederleithner M, Schmidt-Erfurth UM, Leitgeb RA, Drexler W, Pollreisz A, Schmoll T. Multiple instance learning based classification of diabetic retinopathy in weakly-labeled widefield OCTA en face images. Sci Rep 2023; 13:8713. [PMID: 37248309] [DOI: 10.1038/s41598-023-35713-4]
Abstract
Diabetic retinopathy (DR), a pathologic change of the human retinal vasculature, is the leading cause of blindness in working-age adults with diabetes mellitus. Optical coherence tomography angiography (OCTA), a functional extension of optical coherence tomography, has shown potential as a tool for early diagnosis of DR through its ability to visualize the retinal vasculature in all spatial dimensions. Previously introduced deep learning-based classifiers were able to support the detection of DR in OCTA images, but require expert labeling at the pixel level, a labor-intensive and expensive process. We present a multiple instance learning-based network, MIL-ResNet14, that is capable of detecting biomarkers in an OCTA dataset with high accuracy, without the need for annotations beyond whether or not a scan is from a diabetic patient. The dataset used for this study was acquired with a diagnostic ultra-widefield swept-source OCT device with an A-scan rate in the MHz range. We show that our proposed method outperforms the previous state-of-the-art networks for this classification task, ResNet14 and VGG16. In addition, our network pays special attention to clinically relevant biomarkers and is robust against adversarial attacks. Therefore, we believe that it could serve as a powerful diagnostic decision support tool for clinical ophthalmic screening.
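The multiple instance learning setup described above needs only a scan-level (bag) label. A common aggregation rule, max pooling over instance scores, can be sketched as follows — the instance-level scorer and patch scores here are hypothetical stand-ins, and MIL-ResNet14's actual aggregation may differ:

```python
def mil_bag_prediction(instance_scores, threshold=0.5):
    """Max-pooling multiple instance learning: a scan (bag) is flagged
    as pathological if any patch (instance) scores high, so only the
    bag-level label is needed for supervision."""
    bag_score = max(instance_scores)
    return bag_score, bag_score >= threshold

# Patch scores from a hypothetical instance-level scorer.
healthy_scan = [0.05, 0.10, 0.08, 0.12]
dr_scan      = [0.07, 0.85, 0.11, 0.09]   # one patch contains a biomarker
```

Because the gradient of the bag prediction flows back only through the highest-scoring patches, training with scan-level labels still pushes the network to localize the biomarkers — which is why no pixel-level annotation is required.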
Affiliation(s)
- Philipp Matten
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Waehringer Guertel 18-20 (4L), 1090 Vienna, Austria
- Julius Scherer
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Waehringer Guertel 18-20 (4L), 1090 Vienna, Austria
- Thomas Schlegl
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Waehringer Guertel 18-20 (4L), 1090 Vienna, Austria
- Jonas Nienhaus
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Waehringer Guertel 18-20 (4L), 1090 Vienna, Austria
- Heiko Stino
- Department of Ophthalmology and Optometry, Medical University of Vienna, Waehringer Guertel 18-20, 1090 Vienna, Austria
- Michael Niederleithner
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Waehringer Guertel 18-20 (4L), 1090 Vienna, Austria
- Ursula M Schmidt-Erfurth
- Department of Ophthalmology and Optometry, Medical University of Vienna, Waehringer Guertel 18-20, 1090 Vienna, Austria
- Rainer A Leitgeb
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Waehringer Guertel 18-20 (4L), 1090 Vienna, Austria
- Wolfgang Drexler
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Waehringer Guertel 18-20 (4L), 1090 Vienna, Austria
- Andreas Pollreisz
- Department of Ophthalmology and Optometry, Medical University of Vienna, Waehringer Guertel 18-20, 1090 Vienna, Austria
- Tilman Schmoll
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Waehringer Guertel 18-20 (4L), 1090 Vienna, Austria
- Carl Zeiss Meditec Inc., 5300 Central Pkwy, Dublin, CA 94568, USA
21
Ong CJT, Wong MYZ, Cheong KX, Zhao J, Teo KYC, Tan TE. Optical Coherence Tomography Angiography in Retinal Vascular Disorders. Diagnostics (Basel) 2023; 13:1620. [PMID: 37175011] [PMCID: PMC10178415] [DOI: 10.3390/diagnostics13091620]
Abstract
Traditionally, abnormalities of the retinal vasculature and perfusion in retinal vascular disorders, such as diabetic retinopathy and retinal vascular occlusions, have been visualized with dye-based fluorescein angiography (FA). Optical coherence tomography angiography (OCTA) is a newer, alternative modality for imaging the retinal vasculature, which has some advantages over FA, such as its dye-free, non-invasive nature and its depth resolution. The depth resolution of OCTA allows for characterization of the retinal microvasculature in distinct anatomic layers, and commercial OCTA platforms also provide automated quantitative vascular and perfusion metrics. Quantitative and qualitative OCTA analysis in various retinal vascular disorders has facilitated the detection of pre-clinical vascular changes, greater understanding of known clinical signs, and the development of imaging biomarkers to prognosticate and guide treatment. With further technological improvements, such as a greater field of view and better image quality processing algorithms, it is likely that OCTA will play an integral role in the study and management of retinal vascular disorders. Artificial intelligence methods, in particular deep learning, show promise in refining the insights to be gained from the use of OCTA in retinal vascular disorders. This review aims to summarize the current literature on this imaging modality in relation to common retinal vascular disorders.
Affiliation(s)
- Charles Jit Teng Ong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 168751, Singapore
- Mark Yu Zheng Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 168751, Singapore
- Kai Xiong Cheong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 168751, Singapore
- Jinzhi Zhao
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 168751, Singapore
- Kelvin Yi Chong Teo
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 168751, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (EYE ACP), Duke-NUS Medical School, Singapore 169857, Singapore
- Tien-En Tan
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 168751, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (EYE ACP), Duke-NUS Medical School, Singapore 169857, Singapore
22
Abtahi M, Le D, Ebrahimi B, Dadzie AK, Lim JI, Yao X. An open-source deep learning network AVA-Net for arterial-venous area segmentation in optical coherence tomography angiography. Commun Med (Lond) 2023; 3:54. [PMID: 37069396] [PMCID: PMC10110614] [DOI: 10.1038/s43856-023-00287-9]
Abstract
BACKGROUND Differential artery-vein (AV) analysis in optical coherence tomography angiography (OCTA) holds promise for the early detection of eye diseases. However, currently available methods for AV analysis are limited to binary processing of the retinal vasculature in OCTA, without quantitative information on vascular perfusion intensity. This study aimed to develop and validate a method for quantitative AV analysis of vascular perfusion intensity. METHODS A deep learning network, AVA-Net, was developed for automated AV area (AVA) segmentation in OCTA. Seven new OCTA features, including arterial area (AA), venous area (VA), AVA ratio (AVAR), total perfusion intensity density (T-PID), arterial PID (A-PID), venous PID (V-PID), and arterial-venous PID ratio (AV-PIDR), were extracted and tested for early detection of diabetic retinopathy (DR). Each feature was evaluated on OCTA images from healthy controls, diabetic patients without DR (NoDR), and patients with mild DR. RESULTS The area features, i.e., AA, VA, and AVAR, revealed significant differences between the control and mild DR groups. Vascular perfusion parameters, including T-PID and A-PID, differentiated mild DR from the control group. AV-PIDR disclosed significant differences among all three groups, i.e., control, NoDR, and mild DR. After Bonferroni correction, the combination of A-PID and AV-PIDR revealed significant differences across all three groups. CONCLUSIONS AVA-Net, which is available on GitHub for open access, enables quantitative analysis of AV area and vascular perfusion intensity. Comparative analysis revealed AV-PIDR as the most sensitive feature for OCTA detection of early DR. Ensemble AV feature analysis, e.g., the combination of A-PID and AV-PIDR, can further improve performance for early DR assessment.
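The area and perfusion-intensity features described above can be read as simple statistics over an arterial-venous segmentation. The sketch below is one plausible formulation under that reading, not the paper's exact definitions; `octa`, `artery_mask`, and `vein_mask` are hypothetical inputs.

```python
import numpy as np

def av_features(octa, artery_mask, vein_mask):
    """Area and perfusion-intensity-density (PID) features from an
    arterial-venous area segmentation of an en face OCTA image.
    octa: grayscale image scaled to [0, 1]; masks: boolean arrays."""
    aa = artery_mask.mean()            # arterial area fraction (AA)
    va = vein_mask.mean()              # venous area fraction (VA)
    a_pid = octa[artery_mask].mean()   # arterial perfusion intensity density
    v_pid = octa[vein_mask].mean()     # venous perfusion intensity density
    return {
        "AA": aa, "VA": va, "AVAR": aa / va,   # AV area ratio
        "A-PID": a_pid, "V-PID": v_pid,
        "AV-PIDR": a_pid / v_pid,              # AV PID ratio
    }

# Toy example: brighter (higher-flow) arterial half, dimmer venous half.
octa = np.zeros((4, 4))
octa[:, :2], octa[:, 2:] = 0.8, 0.4
arteries = np.zeros((4, 4), dtype=bool)
arteries[:, :2] = True
veins = ~arteries
features = av_features(octa, arteries, veins)
```

Ratio features such as AV-PIDR are attractive precisely because, in a formulation like this, a global brightness change in the scan multiplies A-PID and V-PID equally and cancels out.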
Affiliation(s)
- Mansour Abtahi
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- David Le
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Behrouz Ebrahimi
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Albert K Dadzie
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Jennifer I Lim
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
- Xincheng Yao
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
23
Alam MN, Yamashita R, Ramesh V, Prabhune T, Lim JI, Chan RVP, Hallak J, Leng T, Rubin D. Contrastive learning-based pretraining improves representation and transferability of diabetic retinopathy classification models. Sci Rep 2023; 13:6047. [PMID: 37055475] [PMCID: PMC10102012] [DOI: 10.1038/s41598-023-33365-y]
Abstract
Diabetic retinopathy (DR) is a major cause of vision impairment in diabetic patients worldwide. Due to its prevalence, early clinical diagnosis is essential to improve treatment management of DR patients. Despite recent demonstrations of successful machine learning (ML) models for automated DR detection, there is a significant clinical need for robust models that can be trained with smaller datasets and still perform with high diagnostic accuracy in independent clinical datasets (i.e., high model generalizability). Towards this need, we have developed a self-supervised contrastive learning (CL) based pipeline for classification of referable vs. non-referable DR. Self-supervised CL based pretraining allows enhanced data representation and, therefore, the development of robust and generalized deep learning (DL) models, even with small, labeled datasets. We have integrated a neural style transfer (NST) augmentation into the CL pipeline to produce models with better representations and initializations for the detection of DR in color fundus images. We compare our CL pretrained model performance with two state-of-the-art baseline models pretrained with ImageNet weights. We further investigate the model performance with reduced labeled training data (down to 10 percent) to test the robustness of the model when trained with small, labeled datasets. The model is trained and validated on the EyePACS dataset and tested independently on clinical datasets from the University of Illinois Chicago (UIC). Compared to the baseline models, our CL pretrained FundusNet model had higher area under the receiver operating characteristic (ROC) curve (AUC) values: 0.91 (CI 0.898-0.930) vs. 0.80 (0.783-0.820) and 0.83 (0.801-0.853) on UIC data. At 10 percent labeled training data, the FundusNet AUC was 0.81 (0.78-0.84) vs. 0.58 (0.56-0.64) and 0.63 (0.60-0.66) for the baseline models when tested on the UIC dataset. CL based pretraining with NST significantly improves DL classification performance, helps the model generalize well (transferable from EyePACS to UIC data), and allows training with small, annotated datasets, thereby reducing the ground truth annotation burden on clinicians.
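Contrastive pretraining of the kind described rests on an InfoNCE-style objective that pulls two augmented views of the same fundus image together in embedding space while pushing other images apart. Below is a minimal NumPy sketch of such a loss (SimCLR-style NT-Xent); it illustrates the principle only, is not the paper's implementation, and the temperature value is an arbitrary choice.

```python
import numpy as np

def nt_xent_loss(z1, z2, tau=0.5):
    """NT-Xent contrastive loss for a batch of paired embeddings.
    z1[i] and z2[i] are embeddings of two augmented views of image i."""
    n = len(z1)
    z = np.concatenate([z1, z2], axis=0).astype(float)
    z /= np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize rows
    sim = z @ z.T / tau                             # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                  # exclude self-similarity
    # The positive for row i is its other view, at index (i + n) mod 2n.
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-log_prob[np.arange(2 * n), targets].mean())

# Aligned view pairs give a lower loss than mismatched pairs.
views = np.eye(4)
aligned = nt_xent_loss(views, views.copy())
mismatched = nt_xent_loss(views, np.roll(views, 1, axis=0))
```

No labels appear anywhere in the loss, which is what lets the encoder pretrain on unlabeled fundus images before the small labeled set is used for fine-tuning.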
Affiliation(s)
- Minhaj Nur Alam
- Department of Biomedical Data Science, Stanford University School of Medicine, 1265 Welch Road, Stanford, CA 94305, USA
- Department of Electrical and Computer Engineering, University of North Carolina at Charlotte, 9201 University City Boulevard, Charlotte, NC 28223, USA
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
- Rikiya Yamashita
- Department of Biomedical Data Science, Stanford University School of Medicine, 1265 Welch Road, Stanford, CA 94305, USA
- Vignav Ramesh
- Department of Biomedical Data Science, Stanford University School of Medicine, 1265 Welch Road, Stanford, CA 94305, USA
- Tejas Prabhune
- Department of Biomedical Data Science, Stanford University School of Medicine, 1265 Welch Road, Stanford, CA 94305, USA
- Jennifer I Lim
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
- R V P Chan
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
- Joelle Hallak
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
- Theodore Leng
- Department of Ophthalmology, Stanford University School of Medicine, Stanford, CA 94305, USA
- Daniel Rubin
- Department of Biomedical Data Science, Stanford University School of Medicine, 1265 Welch Road, Stanford, CA 94305, USA
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305, USA
24
Dadzie AK, Le D, Abtahi M, Ebrahimi B, Son T, Lim JI, Yao X. Normalized Blood Flow Index in Optical Coherence Tomography Angiography Provides a Sensitive Biomarker of Early Diabetic Retinopathy. Transl Vis Sci Technol 2023; 12:3. [PMID: 37017960] [PMCID: PMC10082385] [DOI: 10.1167/tvst.12.4.3]
Abstract
Purpose To evaluate the sensitivity of the normalized blood flow index (NBFI) for detecting early diabetic retinopathy (DR). Methods Optical coherence tomography angiography (OCTA) images of healthy controls, diabetic patients without DR (NoDR), and patients with mild nonproliferative DR (NPDR) were analyzed in this study. The OCTA images were centered on the fovea and covered a 6 mm × 6 mm area. En face projections of the superficial vascular plexus (SVP) and the deep capillary plexus (DCP) were obtained for quantitative OCTA feature analysis. Three quantitative OCTA features were examined: blood vessel density (BVD), blood flow flux (BFF), and NBFI. Each feature was calculated from both the SVP and DCP, and their sensitivities for distinguishing the three cohorts of the study were evaluated. Results The only quantitative feature capable of distinguishing all three cohorts was NBFI in the DCP image. Comparative analysis revealed that both BVD and BFF were able to distinguish the controls and NoDR from mild NPDR. However, neither BVD nor BFF was sensitive enough to separate NoDR from the healthy controls. Conclusions The NBFI has been demonstrated to be a sensitive biomarker of early DR, revealing retinal blood flow abnormality better than traditional BVD and BFF. The NBFI in the DCP was verified as the most sensitive biomarker, supporting that diabetes affects the DCP earlier than the SVP in DR. Translational Relevance NBFI provides a robust biomarker for quantitative analysis of DR-caused blood flow abnormalities, promising early detection and objective classification of DR.
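As a rough sketch, BVD, BFF, and a normalized flow index can be related as below. The normalization shown (mean vessel flow signal divided by the scan's peak signal) is an assumption made for illustration and may not match the paper's exact definition of NBFI; all names are hypothetical.

```python
import numpy as np

def flow_features(octa, vessel_mask):
    """Illustrative OCTA flow features.
    BVD: fraction of pixels classified as vessel.
    BFF: mean OCTA (flow) signal within vessel pixels.
    NBFI: BFF normalized by the scan's peak signal, so that global
    brightness differences between scans cancel out."""
    bvd = vessel_mask.mean()
    bff = octa[vessel_mask].mean()
    nbfi = bff / octa.max()
    return bvd, bff, nbfi

# Toy en face map: one bright vessel row, one dimmer vessel row.
octa = np.zeros((4, 4))
octa[0, :], octa[1, :] = 1.0, 0.5
vessels = np.zeros((4, 4), dtype=bool)
vessels[:2, :] = True
bvd, bff, nbfi = flow_features(octa, vessels)
```

The point of such a normalization is that two scans of the same eye acquired at different overall signal strengths would yield the same index, whereas raw BFF would differ.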
Affiliation(s)
- Albert K. Dadzie
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL, USA
- David Le
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL, USA
- Mansour Abtahi
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL, USA
- Behrouz Ebrahimi
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL, USA
- Taeyoon Son
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL, USA
- Jennifer I. Lim
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, USA
- Xincheng Yao
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL, USA
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, USA
25
Cheng CY, Hsiao CC, Hsieh YT. Image processing and quantification analysis for optical coherence tomography angiography in epiretinal membrane. Photodiagnosis Photodyn Ther 2023; 42:103534. [PMID: 36965759] [DOI: 10.1016/j.pdpdt.2023.103534]
Abstract
BACKGROUND To explore image processing methods for optical coherence tomography angiography (OCTA) in epiretinal membrane (ERM), and to evaluate the impact of ERM on vision by analyzing the retinal vasculature. METHODS Thirty eyes of 30 patients with idiopathic ERM who underwent OCTA were retrospectively evaluated. Image processing of OCTA, including the Mexican hat filter (MHF) and exclusion of the foveal avascular zone (FAZ), was performed using Fiji. OCTA parameters, including vessel density (VD), fractal dimension (FD), and vessel tortuosity (VT), were measured for large vessels only, capillaries only, and the whole vasculature. Pearson correlation analysis was used to evaluate the correlations between best-corrected visual acuity (BCVA) and OCTA parameters. RESULTS The correlations between BCVA and the retinal vasculature were markedly stronger when only the capillaries, rather than the whole vasculature, were used for analysis. Both higher VD and FD of capillaries were correlated with better BCVA, and MHF greatly strengthened these correlations (P < 0.0001 for both). In contrast, both higher VD and FD of the large vessels were associated with poorer BCVA (P = 0.042 and 0.049, respectively). A higher VT of capillaries was correlated with better BCVA, and both MHF and exclusion of the FAZ were necessary to reveal this correlation (P = 0.028). CONCLUSIONS Separation of large vessels and capillaries was necessary to reveal the correlation between the retinal vasculature and BCVA in ERM. MHF was necessary to elucidate all microvascular parameters of capillaries, and exclusion of the FAZ was mandatory for evaluation of VT.
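The two preprocessing steps named above, Mexican hat filtering and FAZ exclusion, can be sketched with plain NumPy. The kernel size, sigma, and circular FAZ model below are illustrative choices, not the authors' Fiji settings.

```python
import numpy as np

def mexican_hat_kernel(size=9, sigma=1.5):
    """Discrete zero-mean Mexican hat (inverted Laplacian-of-Gaussian)
    kernel, which emphasizes thin bright structures such as capillaries."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    u = (x**2 + y**2) / (2 * sigma**2)
    k = (1 - u) * np.exp(-u)
    return k - k.mean()          # zero mean: flat background maps to ~0

def filter2d(img, kernel):
    """'Same'-size correlation with zero padding (no SciPy needed)."""
    r = kernel.shape[0] // 2
    padded = np.pad(img.astype(float), r)
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + 2*r + 1, j:j + 2*r + 1] * kernel)
    return out

def exclude_faz(mask, center, radius):
    """Zero out a circular foveal avascular zone before computing
    vessel metrics such as tortuosity."""
    yy, xx = np.indices(mask.shape)
    faz = (yy - center[0])**2 + (xx - center[1])**2 <= radius**2
    out = mask.copy()
    out[faz] = 0
    return out

# A thin bright line (capillary-like) responds far more than background.
img = np.zeros((21, 21))
img[10, :] = 1.0
response = filter2d(img, mexican_hat_kernel())
```

Because the kernel has zero mean, uniform regions produce near-zero output, which is why this filter helps separate capillary signal from background before computing VD, FD, and VT.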
Affiliation(s)
- Chia-Chieh Hsiao
- Department of Ophthalmology, National Taiwan University Hospital, Taipei, Taiwan; Universal Eye Center, Kaohsiung, Taiwan
- Yi-Ting Hsieh
- Department of Ophthalmology, National Taiwan University Hospital, Taipei, Taiwan
26
Altun M, Gürüler H, Özkaraca O, Khan F, Khan J, Lee Y. Monkeypox Detection Using CNN with Transfer Learning. Sensors (Basel) 2023; 23:1783. [PMID: 36850381] [PMCID: PMC9964526] [DOI: 10.3390/s23041783]
Abstract
Monkeypox disease is caused by a virus that produces skin lesions and has been observed on the African continent in recent years. The fatal consequences of virus infections after the COVID pandemic have caused fear and panic among the public. As COVID reached pandemic dimensions, the development and implementation of rapid detection methods became important. In this context, our study aims to detect monkeypox disease from skin lesions with deep-learning methods in a fast and safe way in case of a possible outbreak. The deep-learning methods were supported with transfer learning tools, and hyperparameter optimization was applied. In the CNN structure, a hybrid function learning model was developed by customizing the transfer learning model together with its hyperparameters. The approach was implemented on custom MobileNetV3-s, EfficientNetV2, ResNet50, VGG19, DenseNet121, and Xception models. In our study, AUC, accuracy, recall, loss, and F1-score metrics were used for evaluation and comparison. The optimized hybrid MobileNetV3-s model achieved the best score, with an average F1-score of 0.98, AUC of 0.99, accuracy of 0.96, and recall of 0.97. By combining convolutional neural networks with hyperparameter optimization and a customized hybrid transfer learning model, the custom CNN model achieved striking results. The proposed custom CNN design demonstrates how successfully and quickly deep-learning methods can achieve results in classification and discrimination.
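Hyperparameter optimization of the kind described can be as simple as an exhaustive grid search over candidate settings, keeping the configuration with the best validation score. The grid, parameter names, and toy objective below are hypothetical stand-ins for a real validation metric, not the authors' search space.

```python
import itertools

def grid_search(objective, grid):
    """Evaluate every combination in the grid and keep the best scorer."""
    best_score, best_cfg = float("-inf"), None
    for values in itertools.product(*grid.values()):
        cfg = dict(zip(grid.keys(), values))
        score = objective(cfg)
        if score > best_score:
            best_score, best_cfg = score, cfg
    return best_cfg, best_score

# Toy "validation F1" that peaks at lr=1e-3, dropout=0.2.
def val_f1(cfg):
    return 1.0 - abs(cfg["lr"] - 1e-3) - abs(cfg["dropout"] - 0.2)

grid = {"lr": [1e-4, 1e-3, 1e-2], "dropout": [0.0, 0.2, 0.5]}
best_cfg, best_score = grid_search(val_f1, grid)
```

In practice the objective would train and validate a model per configuration, so the grid is usually kept small or replaced with random or Bayesian search.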
Affiliation(s)
- Murat Altun
- Department of Information Systems Engineering, Faculty of Technology, Mugla Sitki Kocman University, Mugla 48000, Turkey
- Hüseyin Gürüler
- Department of Information Systems Engineering, Faculty of Technology, Mugla Sitki Kocman University, Mugla 48000, Turkey
- Osman Özkaraca
- Department of Information Systems Engineering, Faculty of Technology, Mugla Sitki Kocman University, Mugla 48000, Turkey
- Faheem Khan
- Department of Computer Engineering, Gachon University, Seongnam-si 13120, Republic of Korea
- Jawad Khan
- Department of Robotics, Hanyang University, Ansan 15588, Republic of Korea
- Youngmoon Lee
- Department of Robotics, Hanyang University, Ansan 15588, Republic of Korea
27
Chen X, Xue Y, Wu X, Zhong Y, Rao H, Luo H, Weng Z. Deep Learning-Based System for Disease Screening and Pathologic Region Detection From Optical Coherence Tomography Images. Transl Vis Sci Technol 2023; 12:29. [PMID: 36716039] [PMCID: PMC9896901] [DOI: 10.1167/tvst.12.1.29]
Abstract
Purpose This study was designed to apply deep learning models to retinal disease screening and lesion detection based on optical coherence tomography (OCT) images. Methods We collected 37,138 OCT images from 775 patients, which were labelled by ophthalmologists. Multiple deep learning models, including ResNet50 and YOLOv3, were developed to identify the types and locations of diseases or lesions based on the images. Results The models were evaluated using a patient-based independent holdout set. For binary classification of OCT images with or without lesions, the accuracy was 98.5%, sensitivity 98.7%, specificity 98.4%, and F1 score 97.7%. For multiclass, multilabel disease classification, the model was able to detect vitreomacular traction syndrome and age-related macular degeneration, both with an accuracy of more than 99%, sensitivity of more than 98%, specificity of more than 98%, and an F1 score of more than 97%. For lesion location detection, the recalls for different lesion types ranged from 87.0% (epiretinal membrane) to 98.2% (macular pucker). Conclusions Deep learning-based models have the potential to aid retinal disease screening, classification, and diagnosis with excellent performance, which may serve as a useful reference for ophthalmologists. Translational Relevance The deep learning-based models are capable of identifying and predicting different eye diseases and lesions from OCT images and may have clinical application in assisting ophthalmologists with fast and accurate retinal disease screening.
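The screening metrics reported here are all derived from the binary confusion matrix; the helper below (hypothetical names, not from the paper) shows how accuracy, sensitivity, specificity, and F1 relate to the four counts.

```python
def screening_metrics(tp, fp, tn, fn):
    """Standard binary classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)           # recall on diseased eyes
    specificity = tn / (tn + fp)           # recall on healthy eyes
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "f1": f1}

# Toy holdout: 90/100 diseased and 80/100 healthy scans called correctly.
m = screening_metrics(tp=90, fp=20, tn=80, fn=10)
```

Note that F1 is the harmonic mean of precision and sensitivity, so it can be far below accuracy on an imbalanced screening set; reporting all four, as the study does, guards against that.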
Affiliation(s)
- Xiaoming Chen
- College of Mathematics and Computer Science, Fuzhou University, Fujian Province, China; The Centre for Big Data Research in Burns and Trauma, College of Mathematics and Computer Science, Fuzhou University, Fujian Province, China
- Ying Xue
- Department of Ophthalmology, Fujian Provincial Hospital, Fuzhou, China
- Xiaoyan Wu
- Department of Ophthalmology, Fujian Provincial Hospital, Fuzhou, China
- Yi Zhong
- The Centre for Big Data Research in Burns and Trauma, College of Mathematics and Computer Science, Fuzhou University, Fujian Province, China; College of Biological Science and Engineering, Fuzhou University, Fujian Province, China
- Huiying Rao
- Department of Ophthalmology, Fujian Provincial Hospital, Fuzhou, China
- Heng Luo
- The Centre for Big Data Research in Burns and Trauma, College of Mathematics and Computer Science, Fuzhou University, Fujian Province, China; College of Biological Science and Engineering, Fuzhou University, Fujian Province, China; MetaNovas Biotech Inc., Foster City, CA, USA
- Zuquan Weng
- The Centre for Big Data Research in Burns and Trauma, College of Mathematics and Computer Science, Fuzhou University, Fujian Province, China; College of Biological Science and Engineering, Fuzhou University, Fujian Province, China
28
Deep Learning in Optical Coherence Tomography Angiography: Current Progress, Challenges, and Future Directions. Diagnostics (Basel) 2023; 13:326. [PMID: 36673135] [PMCID: PMC9857993] [DOI: 10.3390/diagnostics13020326]
Abstract
Optical coherence tomography angiography (OCT-A) provides depth-resolved visualization of the retinal microvasculature without intravenous dye injection. It facilitates investigations of various retinal vascular diseases and glaucoma by assessment of qualitative and quantitative microvascular changes in the different retinal layers and radial peripapillary layer non-invasively, individually, and efficiently. Deep learning (DL), a subset of artificial intelligence (AI) based on deep neural networks, has been applied in OCT-A image analysis in recent years and achieved good performance for different tasks, such as image quality control, segmentation, and classification. DL technologies have further facilitated the potential implementation of OCT-A in eye clinics in an automated and efficient manner and enhanced its clinical values for detecting and evaluating various vascular retinopathies. Nevertheless, the deployment of this combination in real-world clinics is still in the "proof-of-concept" stage due to several limitations, such as small training sample size, lack of standardized data preprocessing, insufficient testing in external datasets, and absence of standardized results interpretation. In this review, we introduce the existing applications of DL in OCT-A, summarize the potential challenges of the clinical deployment, and discuss future research directions.
29
Selvachandran G, Quek SG, Paramesran R, Ding W, Son LH. Developments in the detection of diabetic retinopathy: a state-of-the-art review of computer-aided diagnosis and machine learning methods. Artif Intell Rev 2023; 56:915-964. [PMID: 35498558] [PMCID: PMC9038999] [DOI: 10.1007/s10462-022-10185-6]
Abstract
The exponential increase in the number of diabetics around the world has led to an equally large increase in the number of diabetic retinopathy (DR) cases, one of the major complications of diabetes. Left unattended, DR worsens vision and can lead to partial or complete blindness. As the number of diabetics continues to increase exponentially in the coming years, the number of qualified ophthalmologists needs to increase in tandem to meet the demand for screening of the growing number of diabetic patients. This makes it pertinent to develop ways to automate the DR detection process. A computer-aided diagnosis system has the potential to significantly reduce the burden currently placed on ophthalmologists. Hence, this review paper aims to summarize, classify, and analyze all recent developments in automated DR detection using fundus images from 2015 to date. It offers an unprecedentedly thorough review of recent work on DR, which will potentially increase the understanding of recent studies on automated DR detection, particularly those that deploy machine learning algorithms. Firstly, a comprehensive state-of-the-art review of the methods that have been introduced for the detection of DR is presented, with a focus on machine learning models such as convolutional neural networks (CNN), artificial neural networks (ANN), and various hybrid models. Each model is then classified according to its type (e.g., CNN, ANN, SVM) and its specific task(s) in performing DR detection. In particular, the models that deploy CNNs are further analyzed and classified according to some important properties of their respective CNN architectures. A total of 150 research articles related to the aforementioned areas, published in the last 5 years, have been utilized in this review to provide a comprehensive overview of the latest developments in DR detection. Supplementary Information The online version contains supplementary material available at 10.1007/s10462-022-10185-6.
Affiliation(s)
- Ganeshsree Selvachandran
- Department of Actuarial Science and Applied Statistics, Faculty of Business & Management, UCSI University, Jalan Menara Gading, Cheras, 56000 Kuala Lumpur, Malaysia
- Shio Gai Quek
- Department of Actuarial Science and Applied Statistics, Faculty of Business & Management, UCSI University, Jalan Menara Gading, Cheras, 56000 Kuala Lumpur, Malaysia
- Raveendran Paramesran
- Institute of Computer Science and Digital Innovation, UCSI University, Jalan Menara Gading, Cheras, 56000 Kuala Lumpur, Malaysia
- Weiping Ding
- School of Information Science and Technology, Nantong University, Nantong 226019, People's Republic of China
- Le Hoang Son
- VNU Information Technology Institute, Vietnam National University, Hanoi, Vietnam
30
Schottenhamml J, Hohberger B, Mardin CY. Applications of Artificial Intelligence in Optical Coherence Tomography Angiography Imaging. Klin Monbl Augenheilkd 2022; 239:1412-1426. [DOI: 10.1055/a-1961-7137]
Abstract
Optical coherence tomography angiography (OCTA) and artificial intelligence (AI) are two emerging fields that complement each other. OCTA enables the noninvasive, in vivo, 3D visualization of retinal blood flow with micrometer resolution, which has been impossible with other imaging modalities. As it does not require dye-based injections, it is also a safer procedure for patients. AI has attracted great interest in many fields of daily life by enabling automatic processing of huge amounts of data with a performance that greatly surpasses previous algorithms. It has featured in many breakthrough studies in recent years, such as the demonstration that AlphaGo can beat humans at the strategic board game Go. This paper gives a short introduction to both fields and then explores the manifold applications of AI in OCTA imaging that have been presented in recent years. These range from signal generation and signal enhancement to interpretation tasks such as segmentation and classification. In all these areas, AI-based algorithms have achieved state-of-the-art performance that has the potential to improve standard care in ophthalmology when integrated into the daily clinical routine.
Affiliation(s)
- Julia Schottenhamml
- Augenklinik, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Bettina Hohberger
- Augenklinik, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
31
Puneet, Kumar R, Gupta M. Optical coherence tomography image based eye disease detection using deep convolutional neural network. Health Inf Sci Syst 2022; 10:13. [PMID: 35756852] [PMCID: PMC9213631] [DOI: 10.1007/s13755-022-00182-y]
Abstract
Over the past few decades, healthcare providers and medical practitioners faced many obstacles in diagnosing medical conditions owing to inadequate technology and equipment. In the present era, computer science technologies such as IoT, cloud computing, artificial intelligence, and allied techniques play a crucial role in identifying medical diseases, especially in ophthalmology. Even so, ophthalmologists must perform many disease-diagnosis tasks manually, which is time-consuming and error-prone because some eye-disease abnormalities present with the same symptoms. Existing autonomous classification systems, moreover, do not reach state-of-the-art accuracy. The proposed approach combines attention and transfer learning with a deep convolutional neural network, achieving accuracies of 97.79% and 95.6% on the training and testing data, respectively. This autonomous model efficiently classifies several ocular disorders, namely choroidal neovascularization, diabetic macular edema, and drusen, from optical coherence tomography images. It may offer the healthcare sector a realistic way to reduce the ophthalmologist's burden in diabetic retinopathy screening.
Affiliation(s)
- Puneet
- Department of Computer Science and Engineering, Chandigarh University, Mohali, Punjab, India
- Rakesh Kumar
- Department of Computer Science and Engineering, Chandigarh University, Mohali, Punjab, India
- Meenu Gupta
- Department of Computer Science and Engineering, Chandigarh University, Mohali, Punjab, India
32
Zang P, Hormel TT, Hwang TS, Bailey ST, Huang D, Jia Y. Deep-Learning-Aided Diagnosis of Diabetic Retinopathy, Age-Related Macular Degeneration, and Glaucoma Based on Structural and Angiographic OCT. Ophthalmol Sci 2022; 3:100245. [PMID: 36579336] [PMCID: PMC9791595] [DOI: 10.1016/j.xops.2022.100245]
Abstract
Purpose Timely diagnosis of eye diseases is paramount to obtaining the best treatment outcomes. OCT and OCT angiography (OCTA) have several advantages that lend themselves to early detection of ocular pathology; furthermore, the techniques produce large, feature-rich data volumes. However, the full clinical potential of both OCT and OCTA is stymied when complex data acquired using the techniques must be manually processed. Here, we propose an automated diagnostic framework based on structural OCT and OCTA data volumes that could substantially support the clinical application of these technologies. Design Cross-sectional study. Participants Five hundred twenty-six OCT and OCTA volumes were scanned from the eyes of 91 healthy participants, 161 patients with diabetic retinopathy (DR), 95 patients with age-related macular degeneration (AMD), and 108 patients with glaucoma. Methods The diagnosis framework was constructed based on semisequential 3-dimensional (3D) convolutional neural networks. The trained framework classifies combined structural OCT and OCTA scans as normal, DR, AMD, or glaucoma. Fivefold cross-validation was performed, with 60% of the data reserved for training, 20% for validation, and 20% for testing. The training, validation, and test data sets were independent, with no shared patients. For scans diagnosed as DR, AMD, or glaucoma, 3D class activation maps were generated to highlight subregions that were considered important by the framework for automated diagnosis. Main Outcome Measures The area under the curve (AUC) of the receiver operating characteristic curve and quadratic-weighted kappa were used to quantify the diagnostic performance of the framework. Results For the diagnosis of DR, the framework achieved an AUC of 0.95 ± 0.01. For the diagnosis of AMD, the framework achieved an AUC of 0.98 ± 0.01. For the diagnosis of glaucoma, the framework achieved an AUC of 0.91 ± 0.02.
Conclusions Deep learning frameworks can provide reliable, sensitive, interpretable, and fully automated diagnosis of eye diseases. Financial Disclosures Proprietary or commercial disclosure may be found after the references.
Affiliation(s)
- Pengxiao Zang
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon; Department of Biomedical Engineering, Oregon Health & Science University, Portland, Oregon
- Tristan T. Hormel
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Thomas S. Hwang
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Steven T. Bailey
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- David Huang
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Yali Jia
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon; Department of Biomedical Engineering, Oregon Health & Science University, Portland, Oregon. Correspondence: Yali Jia, PhD, Casey Eye Institute & Department of Biomedical Engineering, Oregon Health & Science University, 515 SW Campus Dr., CEI 3154, Portland, OR 97239-4197.
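The framework above, like several entries that follow, reports quadratic-weighted kappa as its agreement measure for ordinal diagnostic classes. As an editor's illustrative sketch only (not code from any cited paper), the metric for integer labels 0..k-1 can be computed as:

```python
def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa with quadratic disagreement weights for ordinal labels 0..n_classes-1."""
    n = len(y_true)
    # Observed confusion matrix between the two label sequences
    obs = [[0.0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        obs[t][p] += 1
    row = [sum(r) for r in obs]
    col = [sum(obs[i][j] for i in range(n_classes)) for j in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2  # quadratic weight
            num += w * obs[i][j]                      # observed weighted disagreement
            den += w * row[i] * col[j] / n            # expected disagreement by chance
    return 1.0 - num / den

# Perfect agreement yields kappa = 1; one off-by-one error on four cases yields 0.8
print(quadratic_weighted_kappa([0, 0, 1, 2], [0, 1, 1, 2], 3))  # → 0.8
```

Unlike raw accuracy, this penalizes a prediction two grades away more heavily than one grade away, which is why it is favored for ordinal DR severity scales.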
33
Elgafi M, Sharafeldeen A, Elnakib A, Elgarayhi A, Alghamdi NS, Sallah M, El-Baz A. Detection of Diabetic Retinopathy Using Extracted 3D Features from OCT Images. Sensors (Basel) 2022; 22:7833. [PMID: 36298186] [PMCID: PMC9610651] [DOI: 10.3390/s22207833]
Abstract
Diabetic retinopathy (DR) is a major health problem that can lead to vision loss if not treated early. In this study, a three-step system for DR detection utilizing optical coherence tomography (OCT) is presented. First, the proposed system segments the retinal layers from the input OCT images. Second, 3D features are extracted from each retinal layer, including the first-order reflectivity and the 3D thickness of the individual OCT layers. Finally, backpropagation neural networks are used to classify OCT images. Experimental studies on 188 cases confirm the advantages of the proposed system over related methods, achieving an accuracy of 96.81% using leave-one-subject-out (LOSO) cross-validation. These outcomes show the potential of the suggested method for DR detection using OCT images.
Affiliation(s)
- Mahmoud Elgafi
- Applied Mathematical Physics Research Group, Physics Department, Faculty of Science, Mansoura University, Mansoura 35516, Egypt
- Ahmed Sharafeldeen
- BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ahmed Elnakib
- BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ahmed Elgarayhi
- Applied Mathematical Physics Research Group, Physics Department, Faculty of Science, Mansoura University, Mansoura 35516, Egypt
- Norah S. Alghamdi
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Mohammed Sallah
- Applied Mathematical Physics Research Group, Physics Department, Faculty of Science, Mansoura University, Mansoura 35516, Egypt
- Higher Institute of Engineering and Technology, New Damietta 34517, Egypt
- Ayman El-Baz
- BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
34
Zeng P, Wang J, Tian P, Peng YY, Liang JQ, Wang M, Zhou SY. Macular and peripapillary optical coherence tomography angiography metrics in thyroid-associated ophthalmopathy with chorioretinal folds. Photodiagnosis Photodyn Ther 2022; 42:103146. [PMID: 36210040] [DOI: 10.1016/j.pdpdt.2022.103146]
Abstract
PURPOSE To evaluate macular and radial peripapillary capillary (RPC) microvascular densities and retinal nerve fiber layer (RNFL) thickness in thyroid-associated ophthalmopathy (TAO) with chorioretinal folds (CRFs), and the associations of these characteristics with visual function. METHODS A cross-sectional study was performed at the Ophthalmology Department of Sun Yat-sen Memorial Hospital from March 2018 to August 2021. All patients underwent ocular examination, ophthalmic function tests, and optical coherence tomography angiography (OCTA). Microvascular densities in the macula and optic papilla were compared between TAO eyes with and without CRFs. Correlation analyses were used to examine the association between microvascular density and visual function. RESULTS Ten TAO patients with CRFs (CRF group, 20 eyes) and 10 TAO patients without CRFs (NCRF group, 20 eyes) were recruited. Visual function measurements, including best-corrected visual acuity (BCVA), were worse in the CRF group (all p < 0.05). The macular whole-image vessel density in the superficial layer (SLR-mwiVD) was significantly decreased in the CRF group (p < 0.05). The RPC whole-image vessel density (RPC-wiVD) was also significantly decreased in the CRF group (p < 0.05), particularly in the temporal subfields. The P100 amplitude of visual evoked potentials (VEPs) was positively associated with SLR-mwiVD and RPC-wiVD. The RNFL was significantly thicker in the CRF group than in the NCRF group (p < 0.05). CONCLUSIONS Our study showed decreased microvascular density of the macula and RPC and a thicker RNFL in TAO patients with CRFs. CRFs with decreased microvascular density should be regarded as an indicator of vision-threatening conditions.
Affiliation(s)
- Peng Zeng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China; Department of Ophthalmology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou 510120, China
- Jing Wang
- Department of Ophthalmology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou 510120, China
- Peng Tian
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou 510120, China
- Yuan-Yu Peng
- Department of Ophthalmology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou 510120, China
- Jia-Qi Liang
- Department of Ophthalmology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou 510120, China
- Mei Wang
- Department of Ophthalmology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou 510120, China
- Shi-You Zhou
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
35
Abtahi M, Le D, Lim JI, Yao X. MF-AV-Net: an open-source deep learning network with multimodal fusion options for artery-vein segmentation in OCT angiography. Biomed Opt Express 2022; 13:4870-4888. [PMID: 36187235] [PMCID: PMC9484445] [DOI: 10.1364/boe.468483]
Abstract
This study demonstrates the effect of multimodal fusion on the performance of deep learning artery-vein (AV) segmentation in optical coherence tomography (OCT) and OCT angiography (OCTA), and explores the OCT/OCTA characteristics used in deep learning AV segmentation. We quantitatively evaluated multimodal architectures with early and late OCT-OCTA fusion, compared to unimodal architectures with OCT-only and OCTA-only inputs. The OCTA-only, early OCT-OCTA fusion, and late OCT-OCTA fusion architectures yielded competitive performances. For the 6 mm×6 mm and 3 mm×3 mm datasets, the late fusion architecture achieved overall accuracies of 96.02% and 94.00%, slightly better than the OCTA-only architecture, which achieved 95.76% and 93.79%. 6 mm×6 mm OCTA images show AV information at the pre-capillary level, while 3 mm×3 mm OCTA images reveal AV information at capillary-level detail. To interpret the deep learning performance, saliency maps were produced to identify the OCT/OCTA image characteristics used for AV segmentation. Comparative OCT and OCTA saliency maps support the capillary-free zone as one possible feature for AV segmentation in OCTA. The deep learning network MF-AV-Net used in this study is available on GitHub for open access.
Affiliation(s)
- Mansour Abtahi
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- These authors contributed equally to this work
- David Le
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- These authors contributed equally to this work
- Jennifer I. Lim
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
- Xincheng Yao
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
36
Zang P, Hormel TT, Wang X, Tsuboi K, Huang D, Hwang TS, Jia Y. A Diabetic Retinopathy Classification Framework Based on Deep-Learning Analysis of OCT Angiography. Transl Vis Sci Technol 2022; 11:10. [PMID: 35822949] [PMCID: PMC9288155] [DOI: 10.1167/tvst.11.7.10]
Abstract
Purpose Reliable classification of referable and vision-threatening diabetic retinopathy (DR) is essential for patients with diabetes to prevent blindness. Optical coherence tomography (OCT) and its angiography (OCTA) have several advantages over fundus photographs. We evaluated a deep-learning-aided DR classification framework using volumetric OCT and OCTA. Methods Four hundred fifty-six OCT and OCTA volumes were scanned from eyes of 50 healthy participants and 305 patients with diabetes. Retina specialists labeled the eyes as non-referable (nrDR), referable (rDR), or vision-threatening DR (vtDR). Each eye underwent a 3 × 3-mm scan using a commercial 70 kHz spectral-domain OCT system. We developed a DR classification framework and trained it using volumetric OCT and OCTA to classify eyes into rDR and vtDR. For the scans identified as rDR or vtDR, 3D class activation maps were generated to highlight the subregions the framework considered important for DR classification. Results For rDR classification, the framework achieved a 0.96 ± 0.01 area under the receiver operating characteristic curve (AUC) and a 0.83 ± 0.04 quadratic-weighted kappa. For vtDR classification, the framework achieved a 0.92 ± 0.02 AUC and a 0.73 ± 0.04 quadratic-weighted kappa. In addition, multi-class DR classification (non-rDR, rDR but non-vtDR, or vtDR) achieved a 0.83 ± 0.03 quadratic-weighted kappa. Conclusions A deep learning framework based only on OCT and OCTA can provide specialist-level DR classification using a single imaging modality. Translational Relevance The proposed framework can be used to develop a clinically valuable automated DR diagnosis system, given the specialist-level performance shown in this study.
Affiliation(s)
- Pengxiao Zang
- Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA; Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR, USA
- Tristan T Hormel
- Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Kotaro Tsuboi
- Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA; Department of Ophthalmology, Aichi Medical University, Nagakute, Japan
- David Huang
- Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Thomas S Hwang
- Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Yali Jia
- Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA; Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR, USA
37
Li B, Ding Y, Wei Z, Fu Z, Sun P, Sun Q, Zhang H, Mo H. A Self-Supervised Model Advance OCTA Image Disease Diagnosis. Int J Pattern Recogn 2022. [DOI: 10.1142/s0218001422570038]
Abstract
Due to the lack of large medical image datasets, transfer learning/fine-tuning (mainly from models pretrained on ImageNet) is generally used for disease detection. Significant domain differences between natural and medical images, however, seriously restrict model performance. In this paper, a contrastive learning method (BY-OCTA) combined with patient metadata is proposed to detect pathology in fundus OCTA images. The method uses patient metadata to construct positive sample pairs; by introducing hyperparameters into the loss function, the approximate proportion of sample pairs sharing the same patient metadata can be adjusted, producing a better representation and initialization model. We evaluate downstream-task performance by fine-tuning the model's multi-layer perceptron. Experiments show that the linear model pretrained with BY-OCTA outperforms models pretrained on ImageNet or with BYOL on multiple datasets. Furthermore, when labeled training data are limited, BY-OCTA provides the most significant benefit. This shows that the BY-OCTA pretrained model has better feature extraction ability and transferability. The method allows a flexible combination of medical opinions and uses metadata to construct positive sample pairs, so it can be widely applied to medical image interpretation.
Affiliation(s)
- Bingbing Li
- College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin, Heilongjiang, P. R. China
- College of Engineering, Jilin Business and Technology College, Changchun, Jilin, P. R. China
- Yiheng Ding
- Department of Ophthalmology, The First Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang, P. R. China
- Ziqiang Wei
- College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin, Heilongjiang, P. R. China
- Zhijie Fu
- College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin, Heilongjiang, P. R. China
- Peng Sun
- College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin, Heilongjiang, P. R. China
- Qi Sun
- College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin, Heilongjiang, P. R. China
- Hong Zhang
- Department of Ophthalmology, The First Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang, P. R. China
- Hongwei Mo
- College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin, Heilongjiang, P. R. China
38
Yasser I, Khalifa F, Abdeltawab H, Ghazal M, Sandhu HS, El-Baz A. Automated Diagnosis of Optical Coherence Tomography Angiography (OCTA) Based on Machine Learning Techniques. Sensors (Basel) 2022; 22:2342. [PMID: 35336513] [PMCID: PMC8952189] [DOI: 10.3390/s22062342]
Abstract
Diabetic retinopathy (DR) refers to the ophthalmological complications of diabetes mellitus. It is primarily a disease of the retinal vasculature that can lead to vision loss. Optical coherence tomography angiography (OCTA) can detect changes in the retinal vascular system, which can help in the early detection of DR. In this paper, we describe a novel framework that detects DR from OCTA by capturing appearance and morphological markers of the retinal vascular system. The framework consists of the following main steps: (1) extracting the retinal vascular system from OCTA images using a joint Markov-Gibbs random field (MGRF) model of OCTA image appearance, and (2) estimating the distance map inside the extracted vascular system as an imaging marker describing the morphology of the retinal vascular (RV) system. The OCTA images, the extracted vascular system, and the RV-estimated distance map are then composed into a three-dimensional matrix used as input to a convolutional neural network (CNN). The main motivation for this data representation is that it combines low-level data with high-level processed data, allowing the CNN to capture significant features and increasing its ability to distinguish DR from the normal retina. This was applied at multiple scales, including the original full-dimension images as well as sub-images extracted from the original OCTA images. The proposed approach was tested on in vivo data from about 91 patients, qualitatively graded by retinal experts, and was quantitatively validated using three metrics: sensitivity, specificity, and overall accuracy. Results showed the capability of the proposed approach, outperforming current deep-learning- and feature-based DR detection approaches.
Affiliation(s)
- Ibrahim Yasser
- Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
- Fahmi Khalifa
- Department of Bioengineering, University of Louisville, Louisville, KY 40292, USA
- Hisham Abdeltawab
- Department of Bioengineering, University of Louisville, Louisville, KY 40292, USA
- Mohammed Ghazal
- Electrical and Computer Engineering Department, Abu Dhabi University, Abu Dhabi P.O. Box 59911, United Arab Emirates
- Harpal Singh Sandhu
- Department of Bioengineering, University of Louisville, Louisville, KY 40292, USA
- Ayman El-Baz
- Department of Bioengineering, University of Louisville, Louisville, KY 40292, USA
39
Elsharkawy M, Sharafeldeen A, Soliman A, Khalifa F, Ghazal M, El-Daydamony E, Atwan A, Sandhu HS, El-Baz A. A Novel Computer-Aided Diagnostic System for Early Detection of Diabetic Retinopathy Using 3D-OCT Higher-Order Spatial Appearance Model. Diagnostics (Basel) 2022; 12:461. [PMID: 35204552] [PMCID: PMC8871295] [DOI: 10.3390/diagnostics12020461]
Abstract
Early diagnosis of diabetic retinopathy (DR) is of critical importance to prevent severe damage to the retina and/or vision loss. In this study, an optical coherence tomography (OCT)-based computer-aided diagnosis (CAD) method is proposed to detect DR early using structural 3D retinal scans. The system uses prior shape knowledge to automatically segment all retinal layers of the 3D-OCT scans with an adaptive, appearance-based method. After segmentation, novel texture features are extracted from the segmented layers of the OCT B-scan volume for DR diagnosis. For every layer, a Markov–Gibbs random field (MGRF) model is used to extract the 2nd-order reflectivity, and the extracted image-derived features are represented with cumulative distribution function (CDF) descriptors. For layer-wise classification in the 3D volume, an artificial neural network (ANN) is fed the extracted Gibbs energy feature for every layer. Finally, the classification outputs for all twelve layers are fused using a majority voting scheme for a global subject diagnosis. A cohort of 188 3D-OCT subjects is used for system evaluation with different k-fold validation techniques and validation metrics. Accuracies of 90.56%, 93.11%, and 96.88% are achieved using 4-, 5-, and 10-fold cross-validation, respectively. Additional comparison with state-of-the-art deep learning networks documented our system's promise for early DR diagnosis.
Affiliation(s)
- Mohamed Elsharkawy
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ahmed Sharafeldeen
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ahmed Soliman
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Fahmi Khalifa
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Mohammed Ghazal
- Electrical and Computer Engineering Department, College of Engineering, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Eman El-Daydamony
- Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Ahmed Atwan
- Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Harpal Singh Sandhu
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ayman El-Baz
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
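The fusion step described in the abstract above — per-layer classifier outputs combined into a subject-level call by majority voting — can be sketched as follows. This is an editor's illustration assuming binary layer labels; the tie-breaking rule is an assumption of this sketch, not stated in the cited paper:

```python
from collections import Counter

def fuse_layer_votes(layer_labels):
    """Fuse per-layer binary decisions (0 = normal, 1 = DR) into one subject-level
    diagnosis by majority voting. Breaking ties toward the DR class is an assumption
    made here for illustration; the cited paper does not specify its tie rule."""
    counts = Counter(layer_labels)
    return 1 if counts[1] >= counts[0] else 0

# Twelve layer-wise ANN outputs, matching the twelve retinal layers in the cited system
votes = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1]
print(fuse_layer_votes(votes))  # → 1 (nine of twelve layers vote DR)
```

Majority voting is a deliberately simple fusion rule: it requires no calibration of the per-layer networks, at the cost of ignoring each layer's confidence.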
40
Ji Y, Yang S, Zhou K, Rocliffe HR, Pellicoro A, Cash JL, Wang R, Li C, Huang Z. Deep-learning approach for automated thickness measurement of epithelial tissue and scab using optical coherence tomography. J Biomed Opt 2022; 27:015002. [PMID: 35043611] [PMCID: PMC8765552] [DOI: 10.1117/1.jbo.27.1.015002]
Abstract
SIGNIFICANCE To elucidate therapeutic treatments that accelerate wound healing, it is crucial to understand the processes underlying skin wound healing, especially re-epithelialization. Epidermis and scab detection is important in the wound healing process, as their thickness is a vital indicator of whether re-epithelialization is proceeding normally. Since optical coherence tomography (OCT) is a real-time, non-invasive imaging technique that can perform cross-sectional evaluation of tissue microstructure, it is an ideal modality for monitoring thickness changes of epidermal and scab tissues during wound healing at micron-level resolution. Traditionally, segmentation of epidermal and scab regions was performed manually, which is time-consuming and impractical in real time. AIM We aim to develop a deep-learning-based skin layer segmentation method for automated quantitative assessment of the thickness of in vivo epidermis and scab tissues over a time course of healing in a rodent model. APPROACH Five convolutional neural networks were trained using manually labeled epidermis and scab segmentations from 1000 OCT B-scan images (assisted by corresponding angiographic information). The segmentation performance of the five architectures was compared qualitatively and quantitatively on a validation set. RESULTS Our results show higher accuracy and higher speed of thickness calculation compared with human experts. The U-Net architecture performed best among the deep neural network architectures, with an F1-score of 0.894, mean intersection over union of 0.875, Dice similarity coefficient of 0.933, and average symmetric surface distance of 18.28 μm. Furthermore, our algorithm provides abundant quantitative wound parameters based on the corresponding thickness maps in different healing phases. Among them, normalized epidermal thickness is recommended as an essential hallmark to describe the re-epithelialization process in the rodent model. CONCLUSIONS The automatic segmentation and thickness measurements across different phases of wound healing demonstrate that our pipeline provides a robust, quantitative, and accurate method that can serve as a standard model for further research into the effects of external pharmacological and physical factors.
Affiliation(s)
- Yubo Ji
- University of Dundee, School of Science and Engineering, Dundee, United Kingdom
- Shufan Yang
- Edinburgh Napier University, School of Computing, Edinburgh, United Kingdom
- University of Glasgow, Center of Medical and Industrial Ultrasonics, Glasgow, United Kingdom
- Kanheng Zhou
- University of Dundee, School of Science and Engineering, Dundee, United Kingdom
- Holly R. Rocliffe
- The University of Edinburgh, The Queen’s Medical Research Institute, MRC Centre for Inflammation Research, Edinburgh, United Kingdom
- Antonella Pellicoro
- The University of Edinburgh, The Queen’s Medical Research Institute, MRC Centre for Inflammation Research, Edinburgh, United Kingdom
- Jenna L. Cash
- The University of Edinburgh, The Queen’s Medical Research Institute, MRC Centre for Inflammation Research, Edinburgh, United Kingdom
- Ruikang Wang
- University of Washington, Department of Bioengineering, Seattle, Washington, United States
- Chunhui Li
- University of Dundee, School of Science and Engineering, Dundee, United Kingdom
- Zhihong Huang
- University of Dundee, School of Science and Engineering, Dundee, United Kingdom
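Several segmentation entries above report the Dice similarity coefficient and intersection over union (IoU). For binary masks both are simple set-overlap ratios, related by Dice = 2·IoU/(1 + IoU); a minimal illustrative sketch (not code from any cited paper) assuming masks supplied as flat 0/1 sequences:

```python
def dice_and_iou(pred, truth):
    """Dice similarity coefficient and intersection-over-union for binary masks
    supplied as flat 0/1 integer sequences of equal length."""
    inter = sum(p & t for p, t in zip(pred, truth))  # pixels labeled 1 in both masks
    total = sum(pred) + sum(truth)                   # |pred| + |truth|
    dice = 2 * inter / total
    iou = inter / (total - inter)                    # |pred ∪ truth| = |pred| + |truth| - inter
    return dice, iou

# Half-overlapping masks: Dice = 0.5, IoU = 1/3
print(dice_and_iou([1, 1, 0, 0], [1, 0, 1, 0]))
```

Dice weights the overlap more generously than IoU, which is why a segmentation always scores at least as high on Dice as on IoU.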
41
Wang R, Zuo G, Li K, Li W, Xuan Z, Han Y, Yang W. Systematic bibliometric and visualized analysis of research hotspots and trends on the application of artificial intelligence in diabetic retinopathy. Front Endocrinol (Lausanne) 2022; 13:1036426. [PMID: 36387891] [PMCID: PMC9659570] [DOI: 10.3389/fendo.2022.1036426]
Abstract
BACKGROUND Artificial intelligence (AI), which has been used to diagnose diabetic retinopathy (DR), may impact future medical and ophthalmic practice. This study therefore explored the general applications and research frontiers of AI in the detection and grading of DR. METHODS Citation data were obtained from the Web of Science Core Collection (WoSCC) database to assess the application of AI in diagnosing DR in the literature published from January 1, 2012, to June 30, 2022. These data were processed with CiteSpace 6.1.R3 software. RESULTS Overall, 858 publications from 77 countries and regions were examined, with the United States the leading country in this domain. The largest cluster, labeled "automated detection", formed during the generative stage from 2007 to 2014. The burst keywords from 2020 to 2022 were artificial intelligence and transfer learning. CONCLUSION Initial research focused on intelligent algorithms that localize or recognize lesions on fundus images to assist in diagnosing DR. The focus has since shifted from upgrading the accuracy and efficiency of DR lesion detection and classification to research on DR diagnostic systems. Further studies bridging DR and computer engineering are still required.
Affiliation(s)
- Ruoyu Wang
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Guangxi Zuo
- The First School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Kunke Li
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Wangting Li
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Zhiqiang Xuan
- Institute of Occupational Health and Radiation Protection, Zhejiang Provincial Center for Disease Control and Prevention, Hangzhou, China
- *Correspondence: Zhiqiang Xuan; Yongzhao Han; Weihua Yang
- Yongzhao Han
- Affiliated Jiangning Hospital, Nanjing Medical University, Nanjing, China
- Weihua Yang
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
42
Optical Coherence Tomography Angiography in Diabetic Patients: A Systematic Review. Biomedicines 2021; 10:biomedicines10010088. [PMID: 35052768] [PMCID: PMC8773551] [DOI: 10.3390/biomedicines10010088] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Received: 12/08/2021] [Revised: 12/28/2021] [Accepted: 12/29/2021] [Indexed: 01/20/2023] [Open Access]
Abstract
Background: Diabetic retinopathy (DR) is the leading cause of legal blindness in the working-age population in developed countries. Optical coherence tomography (OCT) angiography (OCTA) has risen as an essential tool in the diagnosis and monitoring of diabetic patients, with and without DR, allowing visualisation of the retinal and choroidal microvasculature, their qualitative and quantitative changes, the progression of vascular disease, quantification of ischaemic areas, and the detection of preclinical changes. The aim of this article is to analyse the current applications of OCTA and provide an updated overview of them in the evaluation of DR. Methods: A systematic literature search was performed in PubMed and Embase, including the keywords “OCTA” OR “OCT angiography” OR “optical coherence tomography angiography” AND “diabetes” OR “diabetes mellitus” OR “diabetic retinopathy” OR “diabetic maculopathy” OR “diabetic macular oedema” OR “diabetic macular ischaemia”. Of the 1456 studies initially identified, 107 studies were screened after removal of duplicates, and articles that did not meet the selection criteria were excluded. Finally, after looking for missing data, we included 135 studies in this review. Results: We present the common and distinctive findings of the analysed papers, including the diagnostic use of OCTA in diabetes mellitus (DM) patients. We describe previous findings in retinal vascularisation, including microaneurysms, foveal avascular zone (FAZ) changes in both size and morphology, changes in vascular perfusion, the appearance of retinal microvascular abnormalities or new vessels, and diabetic macular oedema (DME), as well as the use of deep learning technology applied to this disease. Conclusion: OCTA findings enable the diagnosis and follow-up of DM patients, including those with no lesions detectable by other devices. The evaluation of the retinal and choroidal plexuses using OCTA is a fundamental tool for the diagnosis and prognosis of DR.
43
Deep Learning to Distinguish ABCA4-Related Stargardt Disease from PRPH2-Related Pseudo-Stargardt Pattern Dystrophy. J Clin Med 2021; 10:jcm10245742. [PMID: 34945039] [PMCID: PMC8708395] [DOI: 10.3390/jcm10245742] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/11/2021] [Revised: 10/18/2021] [Accepted: 12/06/2021] [Indexed: 11/17/2022] [Open Access]
Abstract
(1) Background: Recessive Stargardt disease (STGD1) and multifocal pattern dystrophy simulating Stargardt disease (“pseudo-Stargardt pattern dystrophy”, PSPD) share phenotypic similarities, making clinical diagnosis difficult. Our aim was to assess whether a deep learning classifier pretrained on fundus autofluorescence (FAF) images can assist in distinguishing ABCA4-related STGD1 from PRPH2/RDS-related PSPD, and to compare its performance with that of retina specialists. (2) Methods: We trained a convolutional neural network (CNN) using 729 FAF images from normal patients or patients with inherited retinal diseases (IRDs). Transfer learning was then used to update the weights of a ResNet50V2 that classified 370 FAF images into STGD1 and PSPD. Retina specialists evaluated the same dataset. The performance of the CNN and that of the retina specialists were compared in terms of accuracy, sensitivity, and precision. (3) Results: The CNN's accuracy on the test dataset of 111 images was 0.882. The AUROC was 0.890, the precision was 0.883, and the sensitivity was 0.883. Accuracy averaged 0.816 for retina experts and 0.724 for retina fellows. (4) Conclusions: This proof-of-concept study demonstrates that, even with small databases, a pretrained CNN can distinguish between STGD1 and PSPD with good accuracy.
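The accuracy, precision, and sensitivity used above to compare the CNN with retina specialists are standard confusion-matrix metrics. A minimal sketch, using made-up labels rather than the study's data (1 = STGD1, 0 = PSPD):

```python
# Confusion-matrix metrics for a binary classifier.
# Labels below are illustrative only, not from the study.

def confusion_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # a.k.a. recall
    return accuracy, precision, sensitivity

acc, prec, sens = confusion_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
print(acc, prec, sens)
```

The same three numbers can be computed for each human grader, which is how model-versus-specialist comparisons of this kind are typically reported.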
44
Wang Z, Lin L, Wu J, Tang X. Multi-task Learning Based Ocular Disease Discrimination and FAZ Segmentation Utilizing OCTA Images. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:2790-2793. [PMID: 34891828] [DOI: 10.1109/embc46164.2021.9631043] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 06/14/2023]
Abstract
In this paper, we proposed and validated a multi-task deep learning method for simultaneously segmenting the foveal avascular zone (FAZ) and classifying three ocular disease-related states (normal, diabetic, and myopic) from optical coherence tomography angiography (OCTA) images. The essential motivation of this work is that reliable predictions of disease states may be made from features extracted by a segmentation network, by sharing the same encoder between the classification and segmentation networks. In this study, a co-training network structure was designed for simultaneous ocular disease discrimination and FAZ segmentation. Specifically, we attached a classification head to the segmentation network's encoder, so that the classification branch used the feature information extracted in the segmentation branch to improve the classification results. The performance of the proposed network structure was tested and validated on the FAZID dataset, with the best Dice and Jaccard being 0.9031 ± 0.0772 and 0.8302 ± 0.0990 for FAZ segmentation, and the best accuracy and kappa being 0.7533 and 0.6282 for classifying the three ocular disease-related states. Clinical relevance: This work provides a useful tool for segmenting the FAZ and discriminating three ocular disease-related states from OCTA images, with great clinical potential for ocular disease screening and biomarker delivery.
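The Dice and Jaccard indices reported above are standard overlap measures between a predicted mask and ground truth. A minimal NumPy sketch on toy masks (illustrative, not the paper's pipeline):

```python
import numpy as np

# Dice and Jaccard overlap between a predicted FAZ mask and ground truth.
# Toy 4x4 binary masks stand in for real OCTA segmentations.

def dice_jaccard(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum())
    jaccard = inter / union
    return dice, jaccard

pred = np.zeros((4, 4), int); pred[1:3, 1:3] = 1   # 4-pixel square
gt = np.zeros((4, 4), int);   gt[1:3, 1:4] = 1     # 6-pixel rectangle
d, j = dice_jaccard(pred, gt)
print(d, j)   # the two always satisfy j = d / (2 - d)
```

Because Jaccard is a strictly increasing function of Dice, papers that report both (as here) are giving two views of the same overlap.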
45
Jiao S, Jia Y, Yao X. Emerging imaging developments in experimental vision sciences and ophthalmology. Exp Biol Med (Maywood) 2021; 246:2137-2139. [PMID: 34404253] [PMCID: PMC8718248] [DOI: 10.1177/15353702211038891] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/15/2022] [Open Access]
Affiliation(s)
- Shuliang Jiao
- Department of Biomedical Engineering, Florida International University, Miami, FL 33174, USA
- Yali Jia
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Xincheng Yao
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
46
Le D, Son T, Yao X. Machine learning in optical coherence tomography angiography. Exp Biol Med (Maywood) 2021; 246:2170-2183. [PMID: 34279136] [DOI: 10.1177/15353702211026581] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Indexed: 12/16/2022] [Open Access]
Abstract
Optical coherence tomography angiography (OCTA) offers a noninvasive label-free solution for imaging retinal vasculatures at the capillary level resolution. In principle, improved resolution implies a better chance to reveal subtle microvascular distortions associated with eye diseases that are asymptomatic in early stages. However, massive screening requires experienced clinicians to manually examine retinal images, which may result in human error and hinder objective screening. Recently, quantitative OCTA features have been developed to standardize and document retinal vascular changes. The feasibility of using quantitative OCTA features for machine learning classification of different retinopathies has been demonstrated. Deep learning-based applications have also been explored for automatic OCTA image analysis and disease classification. In this article, we summarize recent developments of quantitative OCTA features, machine learning image analysis, and classification.
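One representative quantitative OCTA feature of the kind this review covers is vessel (perfusion) density: the fraction of en-face image area occupied by binarized vessel pixels. A minimal sketch on a synthetic image, with an illustrative fixed threshold (real pipelines typically use adaptive or Otsu thresholding):

```python
import numpy as np

# Vessel density = fraction of pixels classified as vessel after
# binarizing an en-face OCTA image. The threshold here is illustrative.

def vessel_density(enface, threshold=0.5):
    vessel_mask = enface > threshold
    return vessel_mask.mean()

# Synthetic 8x8 "angiogram": two bright vessel rows on a dark background.
img = np.full((8, 8), 0.1)
img[2, :] = 0.9
img[5, :] = 0.9
print(vessel_density(img))
```

Scalar features like this (alongside FAZ area, vessel tortuosity, and similar measures) are what make OCTA images amenable to the classical machine learning classifiers the article surveys.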
Affiliation(s)
- David Le
- Department of Bioengineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Taeyoon Son
- Department of Bioengineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Xincheng Yao
- Department of Bioengineering, University of Illinois at Chicago, Chicago, IL 60607, USA; Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
47
Song J, Zheng Y, Wang J, Zakir Ullah M, Jiao W. Multicolor image classification using the multimodal information bottleneck network (MMIB-Net) for detecting diabetic retinopathy. Opt Express 2021; 29:22732-22748. [PMID: 34266030] [DOI: 10.1364/oe.430508] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 05/05/2021] [Accepted: 06/26/2021] [Indexed: 06/13/2023]
Abstract
Multicolor (MC) imaging is an imaging modality that records confocal scanning laser ophthalmoscope (cSLO) fundus images, which can be used for diabetic retinopathy (DR) detection. With this imaging technique, multiple modal images can be obtained in a single case, and additional symptomatic features can be obtained if these images are considered during the diagnosis of DR. However, few studies have classified MC images using deep learning methods, let alone analyzed them with multimodal features. In this work, we propose a novel model that uses the multimodal information bottleneck network (MMIB-Net) to classify MC images for the detection of DR. Our model can extract the features of multiple modalities simultaneously while finding concise feature representations of each modality using information bottleneck theory. MC image classification is achieved by combining the representations and features of all modalities. Our experiments show that the proposed method achieves accurate classification of MC images. Comparative experiments also demonstrate that the use of multimodality and the information bottleneck improves the performance of MC image classification. To the best of our knowledge, this is the first report of DR identification in MC images using a multimodal information bottleneck convolutional neural network.
48
Singh H, Saini SS, Lakshminarayanan V. Rapid classification of glaucomatous fundus images. J Opt Soc Am A Opt Image Sci Vis 2021; 38:765-774. [PMID: 34143145] [DOI: 10.1364/josaa.415395] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 11/23/2020] [Accepted: 04/13/2021] [Indexed: 06/12/2023]
Abstract
We propose a new method for training convolutional neural networks (CNNs) and use it to classify glaucoma from fundus images. The method integrates reinforcement learning with supervised learning and applies it to transfer learning. Training uses hill-climbing techniques via two climber types, "random movement" and "random detection", integrated with a supervised learning model through stochastic gradient descent with momentum. The model was trained and tested on the Drishti-GS and RIM-ONE-r2 datasets of glaucomatous and normal fundus images. Predictive performance was tested by transfer learning on five CNN architectures: GoogLeNet, DenseNet-201, NASNet, VGG-19, and Inception-ResNet v2. Five-fold cross-validation was used to evaluate performance, achieving high sensitivity while maintaining high accuracy. Of the models tested, the DenseNet-201 architecture performed best in terms of sensitivity and area under the curve. This training method allows transfer learning on small datasets and can be applied to tele-ophthalmology applications, including training with local datasets.
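The hill-climbing idea behind the training scheme above can be illustrated generically. This sketch optimizes a toy 1-D objective with random perturbations and greedy acceptance; the paper's specific "random movement"/"random detection" climbers and their coupling to SGD with momentum are not reproduced here:

```python
import random

# Generic stochastic hill climbing on a toy objective:
# keep a candidate, accept a random perturbation only if it improves.

def hill_climb(f, x0, step=0.5, iters=2000, seed=0):
    rng = random.Random(seed)
    x, best = x0, f(x0)
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)   # random perturbation
        val = f(cand)
        if val > best:                        # greedy acceptance
            x, best = cand, val
    return x, best

# Maximize f(x) = -(x - 3)^2, whose optimum is at x = 3.
x, best = hill_climb(lambda v: -(v - 3) ** 2, x0=0.0)
print(round(x, 2))
```

In the paper's setting the "position" being perturbed is a set of network weights rather than a scalar, and the objective is the supervised loss, but the accept-if-better loop is the same.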
49
Ran A, Cheung CY. Deep Learning-Based Optical Coherence Tomography and Optical Coherence Tomography Angiography Image Analysis: An Updated Summary. Asia Pac J Ophthalmol (Phila) 2021; 10:253-260. [PMID: 34383717] [DOI: 10.1097/apo.0000000000000405] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Indexed: 12/14/2022] [Open Access]
Abstract
Deep learning (DL) is a subset of artificial intelligence based on deep neural networks. It has made remarkable breakthroughs in medical imaging, particularly for image classification and pattern recognition. In ophthalmology, there is rising interest in applying DL methods to analyze optical coherence tomography (OCT) and optical coherence tomography angiography (OCTA) images. Studies showed that OCT and OCTA image evaluation by DL algorithms achieved good performance for disease detection, prognosis prediction, and image quality control, suggesting that the incorporation of DL technology could potentially enhance the accuracy of disease evaluation and the efficiency of clinical workflow. However, substantial issues, such as small training sample size, data preprocessing standardization, model robustness, results explanation, and performance cross-validation, are yet to be tackled before deploying these DL models in real-time clinics. This review summarized recent studies on DL-based image analysis models for OCT and OCTA images and discussed the potential challenges of clinical deployment and future research directions.
Affiliation(s)
- Anran Ran
- Department of Ophthalmology and Visual Sciences, the Chinese University of Hong Kong, Hong Kong SAR
50
Precise higher-order reflectivity and morphology models for early diagnosis of diabetic retinopathy using OCT images. Sci Rep 2021; 11:4730. [PMID: 33633139] [PMCID: PMC7907116] [DOI: 10.1038/s41598-021-83735-7] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Received: 10/29/2020] [Accepted: 01/29/2021] [Indexed: 12/15/2022] [Open Access]
Abstract
This study proposes a novel computer-assisted diagnostic (CAD) system for early diagnosis of diabetic retinopathy (DR) using optical coherence tomography (OCT) B-scans. The CAD system is based on fusing novel OCT markers that describe both the morphology/anatomy and the reflectivity of retinal layers to improve DR diagnosis. The system separates retinal layers automatically using a segmentation approach based on an adaptive appearance model and prior shape information. High-order morphological and novel reflectivity markers are extracted from each segmented layer: the morphological markers are layer thickness and tortuosity, while the reflectivity markers are the first-order reflectivity of the layer in addition to local and global high-order reflectivity based on a Markov-Gibbs random field (MGRF) and a gray-level co-occurrence matrix (GLCM), respectively. The extracted image-derived markers are represented using cumulative distribution function (CDF) descriptors. The constructed CDFs are then described by their statistical measures, i.e., the 10th through 90th percentiles in 10% increments. For individual layer classification, each extracted descriptor of a given layer is fed to a support vector machine (SVM) classifier with a linear kernel. The results of the four classifiers are then fused using a backpropagation neural network (BNN) to diagnose each retinal layer. For global subject diagnosis, the classification outputs (probabilities) of the twelve layers are fused using another BNN to make the final diagnosis of the B-scan. The system was validated and tested on 130 patients, with two scans for both eyes (i.e., 260 OCT images) and a balanced number of normal and DR subjects, using different validation schemes: 2-fold, 4-fold, 10-fold, and leave-one-subject-out (LOSO) cross-validation. Performance was evaluated using sensitivity, specificity, F1-score, and accuracy. After fusion of the different markers, the system outperformed individual markers and other machine learning fusion methods, achieving [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text], respectively, using LOSO cross-validation. These results, based on the integration of morphology and reflectivity markers and state-of-the-art machine learning classifiers, demonstrate the ability of the proposed system to diagnose DR early.
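The CDF descriptor described above reduces each marker's distribution to its 10th through 90th percentiles in 10% increments, giving a fixed-length feature vector for the SVM stage. A minimal sketch on synthetic thickness values (the data are illustrative, not from the study):

```python
import numpy as np

# Summarize a marker distribution (e.g., per-A-scan layer thickness)
# by its 10th..90th percentiles in 10% increments, yielding a
# fixed-length descriptor for a downstream classifier.

def cdf_descriptor(values):
    return np.percentile(values, np.arange(10, 100, 10))

thickness = np.linspace(40.0, 60.0, 201)   # synthetic values in micrometers
desc = cdf_descriptor(thickness)
print(desc)   # 9 values, from the 10th to the 90th percentile
```

Fixing the descriptor length this way lets markers computed from layers of different sizes feed the same linear-kernel SVM.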