1
Kang C, Lo JE, Zhang H, Ng SM, Lin JC, Scott IU, Kalpathy-Cramer J, Liu SHA, Greenberg PB. Artificial intelligence for diagnosing exudative age-related macular degeneration. Cochrane Database Syst Rev 2024; 10:CD015522. [PMID: 39417312] [PMCID: PMC11483348] [DOI: 10.1002/14651858.CD015522.pub2]
Abstract
BACKGROUND: Age-related macular degeneration (AMD) is a retinal disorder characterized by central retinal (macular) damage. Approximately 10% to 20% of non-exudative AMD cases progress to the exudative form, which may result in rapid deterioration of central vision. Individuals with exudative AMD (eAMD) need prompt consultation with retinal specialists to minimize the risk and extent of vision loss. Traditional methods of diagnosing ophthalmic disease rely on clinical evaluation and multiple imaging techniques, which can be resource-intensive. Tests leveraging artificial intelligence (AI) hold the promise of automatically identifying and categorizing pathological features, enabling timely diagnosis and treatment of eAMD.

OBJECTIVES: To determine the diagnostic accuracy of AI as a triaging tool for eAMD.

SEARCH METHODS: We searched CENTRAL, MEDLINE, Embase, three clinical trials registries, and Data Archiving and Networked Services (DANS) for gray literature. We did not restrict searches by language or publication date. The date of the last search was April 2024.

SELECTION CRITERIA: Included studies compared the test performance of algorithms with that of human readers in detecting eAMD on retinal images collected from people with AMD who were evaluated at eye clinics in community or academic medical centers, and who were not receiving treatment for eAMD when the images were taken. We included algorithms that were internally validated, externally validated, or both.

DATA COLLECTION AND ANALYSIS: Pairs of review authors independently extracted data and assessed study quality using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool with revised signaling questions.
For studies that reported more than one set of performance results, we extracted one set of diagnostic accuracy data per study, based on the last development stage or the optimal algorithm as indicated by the study authors. For two-class algorithms, we collected data from the 2x2 table whenever feasible. For multi-class algorithms, we first consolidated data from all classes other than eAMD before constructing the corresponding 2x2 tables. Assuming a common positivity threshold across the included studies, we chose random-effects bivariate logistic models to estimate summary sensitivity and specificity as the primary performance metrics.

MAIN RESULTS: We identified 36 eligible studies reporting 40 sets of algorithm performance data, encompassing over 16,000 participants and 62,000 images. We included 28 studies (78%) reporting 31 algorithms with performance data in the meta-analysis. The remaining eight studies (22%) described nine algorithms that lacked usable performance data; we reported them in the qualitative synthesis.

Study characteristics and risk of bias: Most studies were conducted in Asia, followed by Europe, the USA, and collaborative efforts spanning multiple countries. Most studies identified participants in the hospital setting, while others used retinal images from public repositories; a few did not specify image sources. Based on the four of 36 studies reporting demographic information, participant age ranged from 62 to 82 years. The included algorithms used various retinal image types as model input, such as optical coherence tomography (OCT) images (N = 15), fundus images (N = 6), and multimodal imaging (N = 7). The predominant core method was deep neural networks.
All studies that reported externally validated algorithms were at high risk of bias, mainly due to potential selection bias from either a two-gate design or the inappropriate exclusion of potentially eligible retinal images (or participants).

Findings: Only three of the 40 included algorithms were externally validated (7.5%, 3/40). The summary sensitivity and specificity were 0.94 (95% confidence interval (CI) 0.90 to 0.97) and 0.99 (95% CI 0.76 to 1.00), respectively, when compared to human graders (3 studies; 27,872 images; low-certainty evidence). The prevalence of images with eAMD ranged from 0.3% to 49%. Twenty-eight algorithms were reportedly either internally validated (20%, 8/40) or tested on a development set (50%, 20/40); the pooled sensitivity and specificity were 0.93 (95% CI 0.89 to 0.96) and 0.96 (95% CI 0.94 to 0.98), respectively, when compared to human graders (28 studies; 33,409 images; low-certainty evidence). We did not identify significant sources of heterogeneity among these 28 algorithms. Although algorithms using OCT images appeared more homogeneous and had the highest summary specificity (0.97, 95% CI 0.93 to 0.98), they were not superior to algorithms using fundus images alone (0.94, 95% CI 0.89 to 0.97) or multimodal imaging (0.96, 95% CI 0.88 to 0.99; P for meta-regression = 0.239). The median prevalence of images with eAMD was 30% (interquartile range [IQR] 22% to 39%). We did not include in the meta-analysis eight studies describing nine algorithms (one study reported two sets of algorithm results) that distinguished eAMD from normal images, images of other AMD, or other non-AMD retinal lesions. Five of these algorithms were generally based on smaller datasets (range 21 to 218 participants per study) yet with a higher prevalence of eAMD images (range 33% to 66%). Relative to human graders, reported sensitivity in these studies ranged from 0.95 to 0.97, while specificity ranged from 0.94 to 0.99.
Similarly, using small datasets (range 46 to 106), an additional four algorithms for detecting eAMD among other retinal lesions showed high sensitivity (range 0.96 to 1.00) and specificity (range 0.77 to 1.00).

AUTHORS' CONCLUSIONS: Low- to very low-certainty evidence suggests that an algorithm-based test may correctly identify most individuals with eAMD without increasing unnecessary referrals (false positives) in either the primary or the specialty care setting. There were significant concerns about applying the review findings, owing to variation in eAMD prevalence across the included studies. In addition, among the included algorithm-based tests, diagnostic accuracy estimates were at risk of bias because study participants did not reflect real-world characteristics, model validation was inadequate, and results were likely selectively reported. The limited quality and quantity of externally validated algorithms highlight the need for high-certainty evidence, which will require a standardized definition of eAMD across imaging modalities and external validation of algorithms to assess generalizability.
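The review's handling of multi-class algorithms (collapsing all non-eAMD classes before building a 2x2 table) can be sketched as below. The counts and class names are invented for illustration; the review pools real per-study data.

```python
# Sketch of consolidating a multi-class confusion matrix into the 2x2
# table the review extracts, then computing sensitivity/specificity.
# All counts are hypothetical.

def consolidate_to_2x2(confusion, positive="eAMD"):
    """Collapse {true_class: {predicted_class: n}} into (TP, FP, FN, TN)
    for one positive class, treating every other class as 'not eAMD'."""
    tp = fp = fn = tn = 0
    for truth, row in confusion.items():
        for pred, n in row.items():
            if truth == positive and pred == positive:
                tp += n
            elif truth == positive:
                fn += n
            elif pred == positive:
                fp += n
            else:
                tn += n
    return tp, fp, fn, tn

def sensitivity_specificity(tp, fp, fn, tn):
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical three-class output: eAMD, non-exudative AMD, normal.
counts = {
    "eAMD":   {"eAMD": 94, "neAMD": 4,  "normal": 2},
    "neAMD":  {"eAMD": 3,  "neAMD": 90, "normal": 7},
    "normal": {"eAMD": 1,  "neAMD": 5,  "normal": 94},
}
tp, fp, fn, tn = consolidate_to_2x2(counts)
se, sp = sensitivity_specificity(tp, fp, fn, tn)
print(f"TP={tp} FP={fp} FN={fn} TN={tn}  Se={se:.3f} Sp={sp:.3f}")
```

With these made-up counts the collapsed table is TP=94, FP=4, FN=6, TN=196, giving sensitivity 0.940 and specificity 0.980. The review's summary estimates additionally come from a random-effects bivariate model across studies, which this per-study sketch does not attempt.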
Affiliation(s)
- Chaerim Kang
- Division of Ophthalmology, Brown University, Providence, RI, USA
- Jui-En Lo
- Department of Internal Medicine, MetroHealth Medical Center/Case Western Reserve University, Cleveland, USA
- Helen Zhang
- Program in Liberal Medical Education, Brown University, Providence, RI, USA
- Sueko M Ng
- Department of Ophthalmology, University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA
- John C Lin
- Department of Medicine, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Ingrid U Scott
- Department of Ophthalmology and Public Health Sciences, Penn State College of Medicine, Hershey, PA, USA
- Su-Hsun Alison Liu
- Department of Ophthalmology, University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA
- Department of Epidemiology, University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA
- Paul B Greenberg
- Division of Ophthalmology, Brown University, Providence, RI, USA
- Section of Ophthalmology, VA Providence Healthcare System, Providence, RI, USA
2
Ghalibafan S, Taylor Gonzalez DJ, Cai LZ, Graham Chou B, Panneerselvam S, Conrad Barrett S, Djulbegovic MB, Yannuzzi NA. Applications of multimodal generative artificial intelligence in a real-world retina clinic setting. Retina 2024; 44:1732-1740. [PMID: 39287535] [DOI: 10.1097/IAE.0000000000004204]
Abstract
PURPOSE: This study evaluates a large language model, Generative Pre-trained Transformer 4 with vision (GPT-4V), for diagnosing vitreoretinal diseases in real-world ophthalmology settings.

METHODS: A retrospective cross-sectional study at Bascom Palmer Eye Clinic, analyzing patient data from January 2010 to March 2023, assessed GPT-4V's performance on retinal image analysis and International Classification of Diseases, 10th revision (ICD-10) coding across two patient groups: simpler cases (Group A) and complex cases requiring more in-depth analysis (Group B). Diagnostic accuracy was assessed through open-ended questions and multiple-choice questions, independently verified by three retina specialists.

RESULTS: In 256 eyes from 143 patients, GPT-4V demonstrated 13.7% accuracy on open-ended questions and 31.3% on multiple-choice questions, with ICD-10 code accuracies of 5.5% and 31.3%, respectively. The model accurately diagnosed posterior vitreous detachment, nonexudative age-related macular degeneration, and retinal detachment. ICD-10 coding was most accurate for nonexudative age-related macular degeneration, central retinal vein occlusion, and macular hole on open-ended questions, and for posterior vitreous detachment, nonexudative age-related macular degeneration, and retinal detachment on multiple-choice questions. No significant difference in diagnostic or coding accuracy was found between Groups A and B.

CONCLUSION: GPT-4V has potential in clinical care and record keeping, particularly with standardized questions. Its effectiveness in open-ended scenarios is limited, indicating a significant limitation in providing complex medical advice.
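The per-format accuracy comparison above can be sketched as follows. The abstract does not state which statistical test the authors used for the Group A vs. B comparison; a pooled two-proportion z-test is shown here as one common choice, and all counts are invented.

```python
import math

# Sketch of the evaluation pattern: specialist-verified correct counts per
# question format, plus a two-proportion z-test (an assumed, not the
# authors', choice of test). Counts are hypothetical.

def accuracy(n_correct, n_total):
    """Diagnostic accuracy as the fraction of verified-correct answers."""
    return n_correct / n_total

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test (pooled, normal approximation)."""
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (x1 / n1 - x2 / n2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical grading of 100 eyes per question format.
z, p = two_proportion_z(14, 100, 31, 100)
print(f"OEQ {accuracy(14, 100):.1%} vs MCQ {accuracy(31, 100):.1%}: "
      f"z = {z:.2f}, p = {p:.4f}")
```

With identical proportions the test returns p = 1.0, which is the "no significant difference" pattern the study reports between its two case-complexity groups.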
Affiliation(s)
- Seyyedehfatemeh Ghalibafan
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida
- David J Taylor Gonzalez
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida
- Louis Z Cai
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida
- Brandon Graham Chou
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida
- Sugi Panneerselvam
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida
- Spencer Conrad Barrett
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida
- Mak B Djulbegovic
- Wills Eye Hospital, Thomas Jefferson University, Philadelphia, Pennsylvania
- Nicolas A Yannuzzi
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida
3
Li W, Bian L, Ma B, Sun T, Liu Y, Sun Z, Zhao L, Feng K, Yang F, Wang X, Chan S, Dou H, Qi H. Interpretable detection of diabetic retinopathy, retinal vein occlusion, age-related macular degeneration, and other fundus conditions. Diagnostics (Basel) 2024; 14:121. [PMID: 38247998] [PMCID: PMC11487407] [DOI: 10.3390/diagnostics14020121]
Abstract
Diabetic retinopathy (DR), retinal vein occlusion (RVO), and age-related macular degeneration (AMD) pose significant global health challenges, often resulting in vision impairment and blindness. Automatic detection of these conditions is crucial, particularly in underserved rural areas with limited access to ophthalmic services. Despite remarkable advances in artificial intelligence, especially convolutional neural networks (CNNs), their complexity can make interpretation difficult. In this study, we curated a dataset of 15,089 color fundus photographs (CFPs) from 8110 patients who underwent fundus fluorescein angiography (FFA) examination. The primary objective was to construct integrated models that merge CNNs with an attention mechanism for a hierarchical multilabel classification task: detecting DR, RVO, AMD, and other fundus conditions, and further classifying DR, RVO, and AMD into their respective subclasses. Diagnostic information obtained from the FFA results was translated into CFP labels, and we evaluated the models' ability to make precise diagnoses from CFPs alone. Our models improved performance across diverse fundus conditions, with the ConvNeXt-base + attention model performing best. For DR detection, it achieved an area under the receiver operating characteristic curve (AUC) of 0.943, a referable F1 score of 0.870, and a Cohen's kappa of 0.778; for RVO, an AUC of 0.960, a referable F1 score of 0.854, and a Cohen's kappa of 0.819; and for AMD, an AUC of 0.959, an F1 score of 0.727, and a Cohen's kappa of 0.686. The model also subclassified RVO and AMD with good sensitivity and specificity. Moreover, our models enhanced interpretability by visualizing attention weights on fundus images, aiding the identification of disease findings. These outcomes underscore the value of our models for detecting DR, RVO, and AMD, with the potential to improve patient outcomes.
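Two of the metrics reported above, F1 score and Cohen's kappa, can be computed directly from a binary confusion matrix. A minimal sketch with invented counts (the paper's actual figures, e.g. F1 0.870 and kappa 0.778 for referable DR, come from its own test set):

```python
# F1 and Cohen's kappa from binary confusion-matrix counts.
# The counts below are hypothetical.

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall; equals 2*TP/(2*TP+FP+FN)."""
    return 2 * tp / (2 * tp + fp + fn)

def cohens_kappa(tp, fp, fn, tn):
    """Agreement beyond chance between model output and reference labels."""
    n = tp + fp + fn + tn
    p_obs = (tp + tn) / n                    # observed agreement
    p_chance = ((tp + fp) * (tp + fn)        # both say positive by chance
                + (fn + tn) * (fp + tn)) / n ** 2  # both say negative
    return (p_obs - p_chance) / (1 - p_chance)

tp, fp, fn, tn = 80, 10, 20, 90  # hypothetical counts
print(f"F1 = {f1_score(tp, fp, fn):.3f}, "
      f"kappa = {cohens_kappa(tp, fp, fn, tn):.3f}")
```

With these counts F1 is 160/190 ≈ 0.842 and kappa is 0.700; kappa is lower than raw accuracy (0.85) because it discounts the agreement expected by chance.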
Affiliation(s)
- Wenlong Li
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Linbo Bian
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Baikai Ma
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Tong Sun
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Yiyun Liu
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Zhengze Sun
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Lin Zhao
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Kang Feng
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Fan Yang
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Xiaona Wang
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Szyyann Chan
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Hongliang Dou
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Hong Qi
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
4
Wu M, Lu Y, Hong X, Zhang J, Zheng B, Zhu S, Chen N, Zhu Z, Yang W. Classification of dry and wet macular degeneration based on the ConvNeXT model. Front Comput Neurosci 2022; 16:1079155. [PMID: 36568576] [PMCID: PMC9773079] [DOI: 10.3389/fncom.2022.1079155]
Abstract
Purpose: To assess the value of an automated classification model for dry and wet macular degeneration based on the ConvNeXT model.

Methods: A total of 672 fundus images of normal, dry, and wet macular degeneration were collected from the Affiliated Eye Hospital of Nanjing Medical University, and the dry macular degeneration images were augmented. The ConvNeXT three-category model was trained on the original and expanded datasets and compared with VGG16, ResNet18, ResNet50, EfficientNetB7, and RegNet three-category models. A total of 289 fundus images were used to test the models, and their classification results on the different datasets were compared. The main evaluation indicators were sensitivity, specificity, F1-score, area under the curve (AUC), accuracy, and kappa.

Results: On the 289 test images, the ConvNeXT model trained on the expanded dataset was the most effective, with a diagnostic accuracy of 96.89% and a kappa value of 94.99%, indicating high diagnostic consistency. For normal fundus images, the sensitivity, specificity, F1-score, and AUC were 100.00%, 99.41%, 99.59%, and 99.80%, respectively; for dry macular degeneration, 87.50%, 98.76%, 90.32%, and 97.10%; and for wet macular degeneration, 97.52%, 97.02%, 96.72%, and 99.10%.

Conclusion: The ConvNeXT-based model automatically identified dry and wet macular degeneration, aiding rapid and accurate clinical diagnosis.
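The per-class sensitivity and specificity figures above are one-vs-rest quantities derived from a three-way confusion matrix. A sketch of that derivation, with an invented matrix (only its total of 289 images matches the study's test set):

```python
# One-vs-rest sensitivity/specificity per class from a 3-class
# confusion matrix. The matrix below is hypothetical.

def per_class_metrics(confusion, labels):
    """confusion[i][j] = images with true class i predicted as class j.
    Returns {label: (sensitivity, specificity)}, one-vs-rest."""
    total = sum(sum(row) for row in confusion)
    out = {}
    for k, label in enumerate(labels):
        tp = confusion[k][k]
        fn = sum(confusion[k]) - tp                      # missed class k
        fp = sum(row[k] for row in confusion) - tp       # wrongly called k
        tn = total - tp - fn - fp
        out[label] = (tp / (tp + fn), tn / (tn + fp))
    return out

labels = ["normal", "dry", "wet"]
confusion = [
    [119, 1, 0],   # true normal
    [2, 28, 2],    # true dry
    [1, 2, 134],   # true wet
]
for label, (se, sp) in per_class_metrics(confusion, labels).items():
    print(f"{label:>6}: sensitivity {se:.4f}, specificity {sp:.4f}")
```

Note how the rarest class (dry) has the lowest sensitivity in this toy matrix; class imbalance of exactly this kind is why the study augmented its dry-AMD images.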
Affiliation(s)
- Maonian Wu
- School of Information Engineering, Huzhou University, Huzhou, China
- Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou University, Huzhou, China
- Ying Lu
- School of Information Engineering, Huzhou University, Huzhou, China
- Xiangqian Hong
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Jie Zhang
- Advanced Ophthalmology Laboratory, Brightview Medical Technologies (Nanjing) Co., Ltd., Nanjing, China
- Bo Zheng
- School of Information Engineering, Huzhou University, Huzhou, China
- Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou University, Huzhou, China
- Shaojun Zhu
- School of Information Engineering, Huzhou University, Huzhou, China
- Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou University, Huzhou, China
- Naimei Chen
- Department of Ophthalmology, Huaian Hospital of Huaian City, Huaian, China
- Zhentao Zhu
- Department of Ophthalmology, Huaian Hospital of Huaian City, Huaian, China
- Correspondence: Zhentao Zhu
- Weihua Yang
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Correspondence: Weihua Yang
5
Pucchio A, Krance SH, Pur DR, Miranda RN, Felfeli T. Artificial intelligence analysis of biofluid markers in age-related macular degeneration: a systematic review. Clin Ophthalmol 2022; 16:2463-2476. [PMID: 35968055] [PMCID: PMC9369085] [DOI: 10.2147/OPTH.S377262]
Abstract
This systematic review explores the use of artificial intelligence (AI) in the analysis of biofluid markers in age-related macular degeneration (AMD). We detail the accuracy and validity of AI in diagnostic and prognostic models and identify biofluid markers that provide insight into AMD pathogenesis and progression. The review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A comprehensive search was conducted across five electronic databases (Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, EMBASE, Medline, and Web of Science) from inception to July 14, 2021. Studies pertaining to biofluid marker analysis using AI or bioinformatics in AMD were included. Identified studies were assessed for risk of bias and critically appraised using the Joanna Briggs Institute Critical Appraisal tools. A total of 10,264 articles were retrieved from all databases, and 37 studies met the inclusion criteria: 15 cross-sectional studies, 15 prospective cohort studies, five retrospective cohort studies, one randomized controlled trial, and one case-control study. The majority of studies focused on AMD in general (58%), while neovascular AMD (nAMD) was the focus of 11 studies (30%) and geographic atrophy (GA) of three. Fifteen studies examined disease characteristics, 15 studied risk factors, and seven guided treatment decisions. Altered lipid metabolism (HDL cholesterol, total serum triglycerides), inflammation (C-reactive protein), oxidative stress, and protein digestion were implicated in AMD development and progression. AI tools differentiated controls from AMD patients with accuracies as high as 87% and predicted responsiveness to anti-VEGF therapy in nAMD patients. AI models such as discriminant analysis could inform prognostic and diagnostic decision-making in a clinical setting. The identified pathways provide opportunities for future studies of AMD development and could be valuable in the advancement of novel treatments.
Affiliation(s)
- Aidan Pucchio
- School of Medicine, Queen's University, Kingston, ON, Canada
- Saffire H Krance
- Schulich School of Medicine & Dentistry, Western University, London, ON, Canada
- Daiana R Pur
- Schulich School of Medicine & Dentistry, Western University, London, ON, Canada
- Rafael N Miranda
- Toronto Health Economics and Technology Assessment Collaborative, Toronto, ON, Canada
- Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada
- Tina Felfeli
- Toronto Health Economics and Technology Assessment Collaborative, Toronto, ON, Canada
- Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, ON, Canada
- Correspondence: Tina Felfeli, Department of Ophthalmology and Vision Sciences, University of Toronto, 340 College Street, Suite 400, Toronto, ON, M5T 3A9, Canada, Fax +416-978-4590, Email