1.
Li H, Liu H, Fu H, Xu Y, Shu H, Niu K, Hu Y, Liu J. A generic fundus image enhancement network boosted by frequency self-supervised representation learning. Med Image Anal 2023; 90:102945. [PMID: 37703674] [DOI: 10.1016/j.media.2023.102945]
Abstract
Fundus photography is prone to image quality degradation that impacts clinical examination by ophthalmologists or intelligent systems. Although enhancement algorithms have been developed to improve fundus observation on degraded images, high data demands and limited applicability hinder their clinical deployment. To circumvent this bottleneck, a generic fundus image enhancement network (GFE-Net) is developed in this study to robustly correct unknown fundus images without supervised or extra data. Leveraging image frequency information, self-supervised representation learning is conducted to learn robust structure-aware representations from degraded images. Then, with a seamless architecture that couples representation learning and image enhancement, GFE-Net can accurately correct fundus images while preserving retinal structures. Comprehensive experiments demonstrate the effectiveness and advantages of GFE-Net. Compared with state-of-the-art algorithms, GFE-Net achieves superior performance in data dependency, enhancement performance, deployment efficiency, and scale generalizability. GFE-Net also facilitates follow-up fundus image analysis, and its modules are respectively verified to be effective for image enhancement.
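The abstract does not spell out how frequency information drives the self-supervision, but the basic operation involved is a frequency decomposition of the image. A minimal numpy sketch of such a split (the `frequency_split` helper and its circular-mask radius are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def frequency_split(img, radius=8):
    """Split a grayscale image into low- and high-frequency components
    using a circular mask in the centered 2-D FFT domain (illustrative)."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    low = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(f * (~mask))).real
    return low, high

img = np.random.default_rng(0).random((64, 64))
low, high = frequency_split(img)
# By linearity of the FFT, the two components sum back to the original.
print(np.allclose(low + high, img))
```

Low-frequency content roughly carries illumination and color, while high-frequency content carries vessel edges and fine structure, which is why a frequency split is a plausible self-supervision signal for structure-aware representations.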
Affiliation(s)
- Heng Li
- Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Haofeng Liu
- Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Huazhu Fu
- Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), Singapore
- Yanwu Xu
- School of Future Technology, South China University of Technology, Guangzhou, China; Pazhou Lab, Guangzhou, China
- Hai Shu
- Department of Biostatistics, School of Global Public Health, New York University, NY, USA
- Ke Niu
- Computer School, Beijing Information Science and Technology University, Beijing, China
- Yan Hu
- Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Jiang Liu
- Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China; Guangdong Provincial Key Laboratory of Brain-inspired Intelligent Computation, Southern University of Science and Technology, Shenzhen, China
2.
Chan E, Tang Z, Najjar RP, Narayanaswamy A, Sathianvichitr K, Newman NJ, Biousse V, Milea D. A Deep Learning System for Automated Quality Evaluation of Optic Disc Photographs in Neuro-Ophthalmic Disorders. Diagnostics (Basel) 2023; 13:160. [PMID: 36611452] [PMCID: PMC9818957] [DOI: 10.3390/diagnostics13010160]
Abstract
The quality of ocular fundus photographs can affect the accuracy of the morphologic assessment of the optic nerve head (ONH), whether by humans or by deep learning systems (DLS). In order to automatically identify ONH photographs of optimal quality, we developed, trained, and tested a DLS using an international, multicentre, multi-ethnic dataset of 5015 ocular fundus photographs from 31 centres in 20 countries participating in the Brain and Optic Nerve Study with Artificial Intelligence (BONSAI). The reference standard for image quality was established by three experts who independently classified photographs as of "good", "borderline", or "poor" quality. The DLS was trained on 4208 fundus photographs and tested on an independent external dataset of 807 photographs, using a multi-class model evaluated with a one-vs-rest classification strategy. In the external-testing dataset, the DLS identified "good" quality photographs with excellent performance (AUC = 0.93 (95% CI, 0.91-0.95), accuracy = 91.4% (95% CI, 90.0-92.9%), sensitivity = 93.8% (95% CI, 92.5-95.2%), specificity = 75.9% (95% CI, 69.7-82.1%)), as well as "poor" quality photographs (AUC = 1.00 (95% CI, 0.99-1.00), accuracy = 99.1% (95% CI, 98.6-99.6%), sensitivity = 81.5% (95% CI, 70.6-93.8%), specificity = 99.7% (95% CI, 99.6-100.0%)). "Borderline" quality images were also accurately classified (AUC = 0.90 (95% CI, 0.88-0.93), accuracy = 90.6% (95% CI, 89.1-92.2%), sensitivity = 65.4% (95% CI, 56.6-72.9%), specificity = 93.4% (95% CI, 92.1-94.8%)). The overall accuracy for distinguishing among the three classes was 90.6% (95% CI, 89.1-92.1%), suggesting that this DLS could select optimal-quality fundus photographs in patients with neuro-ophthalmic and neurological disorders affecting the ONH.
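The one-vs-rest evaluation described above scores each quality class against the other two and computes a per-class AUC. A self-contained sketch using a rank-based (Mann-Whitney) AUC on toy three-class scores; the class names mirror the paper's labels, while the `auc` helper and all data are invented for illustration:

```python
import numpy as np

def auc(scores, labels):
    """Mann-Whitney AUC: probability that a positive outranks a negative."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

classes = ["good", "borderline", "poor"]
y_true = np.array([0, 0, 1, 2, 1, 0, 2, 1])   # true class index per image
probs = np.array([                             # softmax-style class scores
    [0.80, 0.10, 0.10], [0.70, 0.20, 0.10], [0.20, 0.60, 0.20],
    [0.10, 0.20, 0.70], [0.30, 0.50, 0.20], [0.40, 0.45, 0.15],
    [0.10, 0.10, 0.80], [0.45, 0.35, 0.20],
])
# One-vs-rest: for each class k, positives are images of class k.
for k, name in enumerate(classes):
    print(f"{name}: AUC = {auc(probs[:, k], (y_true == k).astype(int)):.3f}")
```

With real softmax outputs this per-class loop is exactly the one-vs-rest strategy the study reports; confidence intervals would additionally require bootstrapping or an analytic variance estimate.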
Affiliation(s)
- Ebenezer Chan
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Duke-NUS School of Medicine, Singapore 169857, Singapore
- Zhiqun Tang
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Raymond P. Najjar
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Duke-NUS School of Medicine, Singapore 169857, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 117597, Singapore
- Center for Innovation & Precision Eye Health, National University of Singapore, Singapore 119077, Singapore
- Arun Narayanaswamy
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Glaucoma Department, Singapore National Eye Centre, Singapore 168751, Singapore
- Nancy J. Newman
- Departments of Ophthalmology and Neurology, Emory University, Atlanta, GA 30322, USA
- Valérie Biousse
- Departments of Ophthalmology and Neurology, Emory University, Atlanta, GA 30322, USA
- Dan Milea
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Duke-NUS School of Medicine, Singapore 169857, Singapore
- Department of Ophthalmology, Rigshospitalet, University of Copenhagen, 2600 Copenhagen, Denmark
- Department of Ophthalmology, Angers University Hospital, 49100 Angers, France
- Neuro-Ophthalmology Department, Singapore National Eye Centre, Singapore 168751, Singapore
3.
Deng Z, Cai Y, Chen L, Gong Z, Bao Q, Yao X, Fang D, Yang W, Zhang S, Ma L. RFormer: Transformer-Based Generative Adversarial Network for Real Fundus Image Restoration on a New Clinical Benchmark. IEEE J Biomed Health Inform 2022; 26:4645-4655. [PMID: 35767498] [DOI: 10.1109/jbhi.2022.3187103]
Abstract
Ophthalmologists use fundus images to screen for and diagnose eye diseases. However, differences in equipment and between ophthalmologists introduce large variations in fundus image quality. Low-quality (LQ) degraded fundus images easily lead to uncertainty in clinical screening and generally increase the risk of misdiagnosis. Thus, real fundus image restoration is worth studying. Unfortunately, no real clinical benchmark has been established for this task so far. In this paper, we investigate the real clinical fundus image restoration problem. First, we establish a clinical dataset, Real Fundus (RF), comprising 120 low- and high-quality (HQ) image pairs. We then propose a novel Transformer-based Generative Adversarial Network (RFormer) to restore the real degradation of clinical fundus images. The key component of our network is the Window-based Self-Attention Block (WSAB), which captures non-local self-similarity and long-range dependencies. To produce more visually pleasing results, a Transformer-based discriminator is introduced. Extensive experiments on our clinical benchmark show that the proposed RFormer significantly outperforms state-of-the-art (SOTA) methods. In addition, experiments on downstream tasks such as vessel segmentation and optic disc/cup detection demonstrate that RFormer benefits clinical fundus image analysis and applications. The dataset, code, and models will be made publicly available at https://github.com/dengzhuo-AI/Real-Fundus.
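Window-based self-attention restricts attention to local windows, which is what keeps its cost manageable on full-resolution fundus images. The partitioning step can be sketched in a few lines of numpy (the `window_partition` helper and window size are illustrative assumptions; RFormer's actual block also includes the attention computation itself and window merging):

```python
import numpy as np

def window_partition(x, win=8):
    """Split a feature map of shape (H, W, C) into non-overlapping
    win x win windows; attention is then computed within each window
    over win*win tokens instead of over all H*W positions."""
    h, w, c = x.shape
    assert h % win == 0 and w % win == 0, "spatial dims must be divisible by win"
    x = x.reshape(h // win, win, w // win, win, c)
    # Reorder to (window_row, window_col, in-window row, in-window col, C),
    # then flatten windows and in-window positions.
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, win * win, c)

feat = np.random.default_rng(0).random((32, 32, 16))
windows = window_partition(feat)   # 16 windows of 64 tokens, 16 channels each
```

For a window of size `w`, per-window attention costs O(w⁴·C) instead of O(H²W²·C) for global attention, which is the practical motivation for this design.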
4.
Zhao J, Lu Y, Qian Y, Luo Y, Yang W. Emerging Trends and Research Foci in Artificial Intelligence for Retinal Diseases: A Bibliometric and Visualized Study. J Med Internet Res 2022; 24:e37532. [PMID: 35700021] [PMCID: PMC9240965] [DOI: 10.2196/37532]
Abstract
Background: Patients with retinal diseases may develop serious complications that cause severe visual impairment, owing to a lack of awareness of retinal diseases and limited medical resources. Understanding how artificial intelligence (AI) is used to make predictions and perform relevant analyses is a very active area of research on retinal diseases. In this study, the relevant Science Citation Index (SCI) literature on AI for retinal diseases published from 2012 to 2021 was integrated and analyzed. Objective: The aim of this study was to gain insights into the overall application of AI technology to research on retinal diseases across set time and space dimensions. Methods: Citation data downloaded from the Web of Science Core Collection database for publications on AI in retinal disease from January 1, 2012, to December 31, 2021, were considered for this analysis. The retrieved information was analyzed using the online bibliometric analysis platforms Bibliometric, CiteSpace V, and VOSviewer. Results: A total of 197 institutions from 86 countries contributed relevant publications; China had the largest number, and researchers from University College London had the highest H-index. The cited references of the SCI papers were clustered into 12 categories, of which "deep learning" had the widest range of co-cited references. The burst keywords, representing the research frontiers in 2018-2021, were "eye disease" and "enhancement." Conclusions: This study provides a systematic method for analyzing the literature on AI in retinal diseases. Bibliometric analysis enabled objective and comprehensive results. In the future, high-quality, highly stable, and clinically applicable AI technology for retinal imaging will continue to be encouraged.
Affiliation(s)
- Junqiang Zhao
- Department of Medical Engineering, Xinxiang Medical University, Xinxiang, Henan, China
- Department of Nursing, Xinxiang Medical University, Xinxiang, Henan, China
- Yi Lu
- Department of Nursing, Xinxiang Medical University, Xinxiang, Henan, China
- Yong Qian
- Jiangsu Testing and Inspection Institute for Medical Devices, Nanjing, Jiangsu, China
- Yuxin Luo
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
- Weihua Yang
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
5.
Ma Y, Liu J, Liu Y, Fu H, Hu Y, Cheng J, Qi H, Wu Y, Zhang J, Zhao Y. Structure and Illumination Constrained GAN for Medical Image Enhancement. IEEE Trans Med Imaging 2021; 40:3955-3967. [PMID: 34339369] [DOI: 10.1109/tmi.2021.3101937]
Abstract
The development of medical imaging techniques has greatly supported clinical decision making. However, poor imaging quality, such as non-uniform illumination or imbalanced intensity, brings challenges for automated screening, analysis, and diagnosis of diseases. Previously, bi-directional GANs (e.g., CycleGAN) have been proposed to improve the quality of input images without requiring paired images. However, these methods focus on global appearance without imposing constraints on structure or illumination, which are essential features for medical image interpretation. In this paper, we propose a novel and versatile bi-directional GAN, named Structure and Illumination Constrained GAN (StillGAN), for medical image quality enhancement. Our StillGAN treats low- and high-quality images as two distinct domains, and introduces local structure and illumination constraints to learn both overall characteristics and local details. Extensive experiments on three medical image datasets (corneal confocal microscopy, retinal color fundus, and endoscopy images) demonstrate that our method performs better than both conventional methods and other deep learning-based methods. In addition, we investigate the impact of the proposed method on downstream medical image analysis and clinical tasks such as nerve segmentation, tortuosity grading, fovea localization, and disease classification.
6.
Shen Z, Fu H, Shen J, Shao L. Modeling and Enhancing Low-Quality Retinal Fundus Images. IEEE Trans Med Imaging 2021; 40:996-1006. [PMID: 33296301] [DOI: 10.1109/tmi.2020.3043495]
Abstract
Retinal fundus images are widely used for the clinical screening and diagnosis of eye diseases. However, fundus images captured by operators with various levels of experience vary widely in quality. Low-quality fundus images increase uncertainty in clinical observation and raise the risk of misdiagnosis. Moreover, because of the special optical beam of fundus imaging and the structure of the retina, natural image enhancement methods cannot be applied directly. In this article, we first analyze the ophthalmoscope imaging system and simulate a reliable degradation model covering the major quality-degrading factors, including uneven illumination, image blurring, and artifacts. Then, based on the degradation model, a clinically oriented fundus enhancement network (cofe-Net) is proposed to suppress global degradation factors while preserving anatomical retinal structures and pathological characteristics for clinical observation and analysis. Experiments on both synthetic and real images demonstrate that our algorithm effectively corrects low-quality fundus images without losing retinal details. Moreover, we show that the fundus correction method can benefit medical image analysis applications, e.g., retinal vessel segmentation and optic disc/cup detection.
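The degradation model described above combines uneven illumination, blur, and artifacts. A hedged sketch of that kind of synthetic degradation (the `degrade` function, its radial vignetting profile, and all parameter values are illustrative assumptions, not cofe-Net's calibrated model of the ophthalmoscope):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur via two 1-D convolutions (numpy only)."""
    k = int(3 * sigma)
    x = np.arange(-k, k + 1)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    rows = np.apply_along_axis(np.convolve, 1, img, g, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, g, mode="same")

def degrade(img, sigma=2.0, vignette=0.6, seed=0):
    """Toy fundus-style degradation: radial illumination falloff
    (bright centre, dark rim), Gaussian blur, and mild sensor noise."""
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    illum = 1.0 - vignette * (r / r.max()) ** 2
    blurred = gaussian_blur(img * illum, sigma)
    noisy = blurred + np.random.default_rng(seed).normal(0, 0.01, img.shape)
    return np.clip(noisy, 0.0, 1.0)

clean = np.random.default_rng(1).random((64, 64))
low_quality = degrade(clean)
```

Pairing each clean image with such a synthetically degraded copy is what makes supervised training of an enhancement network possible without collecting real paired photographs.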
7.
Wang X, Zhang S, Liang X, Zheng C, Zheng J, Sun M. A CNN-Based Retinal Image Quality Assessment System for Teleophthalmology. J Mech Med Biol 2019. [DOI: 10.1142/s0219519419500301]
Abstract
Oculopathy is widespread among people of all ages around the world. Teleophthalmology can facilitate ophthalmological diagnosis in less developed countries that lack medical resources, and the assessment of retinal image quality is of great importance in this setting. In this paper, we propose a no-reference retinal image quality assessment system based on DenseNet, a convolutional neural network architecture. The system classifies fundus images either into good and bad quality or into five categories: adequate, just noticeable blur, inappropriate illumination, incomplete optic disc, and opacity. The proposed system was evaluated on different datasets and compared with systems based on two other networks, VGG-16 and GoogLeNet. For binary classification, the good-versus-bad classifier achieves an AUC of 1.000, and the degradation-specific classifiers that distinguish one specified degradation from the rest achieve AUC values of 0.972, 0.990, 0.982, and 0.982 for the four degradation categories, respectively. The multi-class classification based on DenseNet achieves an overall accuracy of 0.927, significantly higher than the 0.549 and 0.757 obtained with VGG-16 and GoogLeNet, respectively. The experimental results indicate that the proposed approach performs very well in retinal image quality assessment and is worth applying in ophthalmological telemedicine; it is also robust to image noise. This study fills the gap of multi-class classification in retinal image quality assessment.
Affiliation(s)
- Xuewei Wang
- Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei 230022, P. R. China
- Shulin Zhang
- Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei 230022, P. R. China
- Xiao Liang
- School of Mechanical Engineering, Shijiazhuang Tiedao University, Shijiazhuang 050043, P. R. China
- Chun Zheng
- The 105 Hospital of PLA, Hefei 230031, P. R. China
- Jinjin Zheng
- Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei 230022, P. R. China
- Mingzhai Sun
- Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei 230022, P. R. China
8.
Jin K, Zhou M, Wang S, Lou L, Xu Y, Ye J, Qian D. Computer-aided diagnosis based on enhancement of degraded fundus photographs. Acta Ophthalmol 2018; 96:e320-e326. [PMID: 29090844] [DOI: 10.1111/aos.13573]
Abstract
PURPOSE: Retinal imaging is an important and effective tool for detecting retinal diseases. However, degraded images caused by aberrations of the eye can disguise lesions, so that a diseased eye may be mistakenly diagnosed as normal. In this work, we propose a new image enhancement method to improve the quality of degraded images. METHODS: In this method, the image is converted from the input RGB colour space to LAB colour space, and each normalized component is then enhanced using contrast-limited adaptive histogram equalization (CLAHE). Human visual system (HVS)-based fundus image quality assessment, combined with diagnosis by experts, is used to evaluate the enhancement. RESULTS: The study included 191 degraded-quality fundus photographs of 143 subjects with optic media opacity. Objective quality assessment of image enhancement (range: 0-1) indicated that our method improved colour retinal image quality from an average of 0.0773 (variance 0.0801) to an average of 0.3973 (variance 0.0756). Following enhancement, areas under the curve (AUC) were 0.996 for the glaucoma classifier, 0.989 for the diabetic retinopathy (DR) classifier, 0.975 for the age-related macular degeneration (AMD) classifier, and 0.979 for the classifier for other retinal diseases. CONCLUSION: This relatively simple method for enhancing degraded-quality fundus images achieves superior image enhancement, as demonstrated in a qualitative HVS-based image quality assessment. It may therefore be employed to assist ophthalmologists in more efficient screening of retinal diseases and in the development of computer-aided diagnosis.
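The core enhancement step here is contrast-limited histogram equalization applied to each normalized channel after RGB-to-LAB conversion. A simplified, numpy-only illustration of the contrast-limiting idea on a single channel (real CLAHE additionally operates on local tiles with bilinear interpolation between tile mappings; this single-tile `clipped_equalize` is a sketch under those simplifying assumptions, not the study's implementation):

```python
import numpy as np

def clipped_equalize(channel, clip_limit=0.01, bins=256):
    """Contrast-limited histogram equalization of one channel in [0, 1]:
    clip the histogram at clip_limit, redistribute the clipped excess
    uniformly, then map values through the resulting CDF."""
    hist, _ = np.histogram(channel, bins=bins, range=(0.0, 1.0))
    hist = hist / channel.size                         # normalize to a pmf
    excess = np.clip(hist - clip_limit, 0, None).sum()
    hist = np.minimum(hist, clip_limit) + excess / bins
    cdf = np.cumsum(hist)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])          # rescale to [0, 1]
    idx = np.clip((channel * (bins - 1)).astype(int), 0, bins - 1)
    return cdf[idx]

# A low-contrast channel crowded around 0.5 spreads out after equalization.
rng = np.random.default_rng(0)
lum = np.clip(rng.normal(0.5, 0.05, (64, 64)), 0, 1)
enhanced = clipped_equalize(lum)
```

The clip limit is what distinguishes CLAHE from plain histogram equalization: it bounds the slope of the mapping, which prevents the noise amplification that unconstrained equalization causes in nearly uniform regions.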
Affiliation(s)
- Kai Jin
- Department of Ophthalmology, College of Medicine, the Second Affiliated Hospital of Zhejiang University, Hangzhou, China
- Mei Zhou
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, China
- Shaoze Wang
- Institute of VLSI Design, Zhejiang University, Hangzhou, China
- Lixia Lou
- Department of Ophthalmology, College of Medicine, the Second Affiliated Hospital of Zhejiang University, Hangzhou, China
- Yufeng Xu
- Department of Ophthalmology, College of Medicine, the Second Affiliated Hospital of Zhejiang University, Hangzhou, China
- Juan Ye
- Department of Ophthalmology, College of Medicine, the Second Affiliated Hospital of Zhejiang University, Hangzhou, China
- Dahong Qian
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
9.
Besenczi R, Tóth J, Hajdu A. A review on automatic analysis techniques for color fundus photographs. Comput Struct Biotechnol J 2016; 14:371-384. [PMID: 27800125] [PMCID: PMC5072151] [DOI: 10.1016/j.csbj.2016.10.001]
Abstract
In this paper, we review automatic image processing tools for recognizing diseases that cause specific distortions of the human retina. After a brief summary of the biology of the retina, we give an overview of the types of lesions that may appear as biomarkers of both eye and non-eye diseases. We present several state-of-the-art procedures for extracting the anatomic components and lesions in color fundus photographs, and decision support methods to aid clinical diagnosis. We list publicly available databases and appropriate measurement techniques for quantitatively comparing the performance of these approaches. Furthermore, we discuss how the performance of image processing-based systems can be improved by fusing the outputs of individual detector algorithms. Retinal image analysis using mobile phones is also addressed as an expected future trend in this field.
Key Words
- ACC, accuracy
- AMD, age-related macular degeneration
- AUC, area under the receiver operator characteristics curve
- Biomedical imaging
- Clinical decision support
- DR, diabetic retinopathy
- FN, false negative
- FOV, field-of-view
- FP, false positive
- FPI, false positive per image
- Fundus image analysis
- MA, microaneurysm
- NA, not available
- OC, optic cup
- OD, optic disc
- PPV, positive predictive value (precision)
- ROC, Retinopathy Online Challenge
- RS, Retinopathy Online Challenge score
- Retinal diseases
- SCC, Spearman's rank correlation coefficient
- SE, sensitivity
- SP, specificity
- TN, true negative
- TP, true positive
- kNN, k-nearest neighbor
Affiliation(s)
- Renátó Besenczi
- Faculty of Informatics, University of Debrecen, 4002 Debrecen, PO Box 400, Hungary
- János Tóth
- Faculty of Informatics, University of Debrecen, 4002 Debrecen, PO Box 400, Hungary
- András Hajdu
- Faculty of Informatics, University of Debrecen, 4002 Debrecen, PO Box 400, Hungary
10.
Leese GP, Stratton IM, Land M, Bachmann MO, Jones C, Scanlon P, Looker HC, Ferguson B. Progression of diabetes retinal status within community screening programs and potential implications for screening intervals. Diabetes Care 2015; 38:488-494. [PMID: 25524948] [DOI: 10.2337/dc14-1778]
Abstract
OBJECTIVE: This study aimed to follow the natural progression of retinal changes in patients with diabetes. Such information should inform decisions regarding screening intervals for these patients. RESEARCH DESIGN AND METHODS: An observational study was undertaken linking data from seven diabetes retinal screening programs across the U.K. for retinal grading results between 2005 and 2012. Patients with absent or background retinopathy were followed up for progression to the end points of referable retinopathy and treatable retinopathy (proliferative retinopathy). RESULTS: In total, 354,549 patients were observed for up to 4 years, during which 16,196 patients progressed to referable retinopathy. Of patients with no retinopathy in either eye at two successive screening episodes at least 12 months apart, between 0.3% (95% CI 0.3-0.8%) and 1.3% (95% CI 1.0-1.6%) progressed to referable retinopathy, and rates of treatable eye disease were <0.3% at 2 years. The corresponding progression rates for patients with bilateral background retinopathy at successive screening episodes were 13-29% and up to 4%, respectively, in the different programs. CONCLUSIONS: It may be possible to stratify patients, according to baseline retinal criteria, into groups at low and high risk of progression to proliferative retinopathy. Screening intervals for such diverse groups of patients could safely be modified according to their risk.
Affiliation(s)
- Colin Jones
- Norwich and Norfolk University Hospital, Norwich, U.K.
11.
Looker HC, Nyangoma SO, Cromie DT, Olson JA, Leese GP, Black MW, Doig J, Lee N, Lindsay RS, McKnight JA, Morris AD, Pearson DWM, Philip S, Wild SH, Colhoun HM. Rates of referable eye disease in the Scottish National Diabetic Retinopathy Screening Programme. Br J Ophthalmol 2014; 98:790-795. [PMID: 24599419] [PMCID: PMC4033179] [DOI: 10.1136/bjophthalmol-2013-303948]
Abstract
AIMS: Diabetic retinopathy screening aims to detect people at risk of visual loss due to proliferative diabetic retinopathy, but it also refers cases of suspected macular oedema (maculopathy). At the introduction of screening, ophthalmology services were concerned that referral rates would be unmanageable. We report the yield of referable disease by referral reason for the first 5 years of the programme. METHODS: We extracted screening results from a nationwide clinical diabetes database to calculate annual referral rates to ophthalmic clinics. We used logistic regression to examine associations between clinical measures and referable disease. RESULTS: 182,397 people underwent ≥1 successful retinal screening between 2006 and 2010. The yield of referable eye disease was highest in the first 2 years of screening (7.0% and 6.0%) before stabilising at approximately 4.3%. The majority of referrals were due to maculopathy, with 73% of referrals in 2010 based on a finding of maculopathy. CONCLUSIONS: The commonest cause for referral is suspected macular oedema (maculopathy). Referral rates for retinopathy have stabilised, as predicted, at relatively low rates. However, ophthalmology workload continues to rise as new treatment options (i.e., monthly intraocular injections) have unexpectedly increased the impact on ophthalmology. A review of the screening referral path for maculopathy may be timely.
Affiliation(s)
- G P Leese
- Ninewells Hospital & Medical School, Dundee, UK
- M W Black
- Diabetic Retinopathy Screening Collaborative, NHS Highland, UK
- J Doig
- Forth Valley Royal Hospital, Edinburgh, UK
- N Lee
- Diabetic Retinopathy Screening Collaborative, NHS Highland, UK
- J A McKnight
- Western General Hospital, Edinburgh, UK; University of Edinburgh, Edinburgh, UK
- S Philip
- Grampian Diabetes Research Unit, NHS Grampian, Aberdeen, UK
- S H Wild
- University of Edinburgh, Edinburgh, UK
12.
Looker HC, Nyangoma SO, Cromie DT, Olson JA, Leese GP, Philip S, Black MW, Doig J, Lee N, Briggs A, Hothersall EJ, Morris AD, Lindsay RS, McKnight JA, Pearson DWM, Sattar NA, Wild SH, McKeigue P, Colhoun HM. Predicted impact of extending the screening interval for diabetic retinopathy: the Scottish Diabetic Retinopathy Screening programme. Diabetologia 2013; 56:1716-1725. [PMID: 23689796] [PMCID: PMC3699707] [DOI: 10.1007/s00125-013-2928-7]
Abstract
AIMS/HYPOTHESIS: The aim of our study was to identify subgroups of patients attending the Scottish Diabetic Retinopathy Screening (DRS) programme who might safely move from annual to two-yearly retinopathy screening. METHODS: This was a retrospective cohort study of screening data from the DRS programme, collected between 2005 and 2011, for people aged ≥12 years with type 1 or type 2 diabetes in Scotland. We used hidden Markov models to calculate the probabilities of transition to referable diabetic retinopathy (referable background or proliferative retinopathy) or referable maculopathy. RESULTS: The study included 155,114 individuals with no referable diabetic retinopathy or maculopathy at their first DRS examination and one or more further DRS examinations. There were 11,275 incident cases of referable diabetic eye disease (9,204 referable maculopathy, 2,071 referable background or proliferative retinopathy). The observed transitions to referable background or proliferative retinopathy were lower for people with no visible retinopathy than for those with mild background retinopathy at their prior examination (1.2% vs 8.1% for type 1 diabetes and 0.6% vs 5.1% for type 2 diabetes, respectively). The probability of transitioning to referable background or proliferative retinopathy was lowest among people with two consecutive screens showing no visible retinopathy: <0.3% for type 1 and <0.2% for type 2 diabetes at 2 years. CONCLUSIONS/INTERPRETATION: Transition rates to referable diabetic eye disease were lowest among people with type 2 diabetes and two consecutive screens showing no visible retinopathy. If such people had been offered two-yearly screening, the DRS service would have needed to screen 40% fewer people in 2009.
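The study's hidden Markov models estimate transition probabilities between retinopathy grades across screening visits. The fully observed special case, a maximum-likelihood transition matrix from graded screening histories, can be sketched directly (the states, toy histories, and `transition_matrix` helper are illustrative; HMMs additionally model grading error as an emission process):

```python
import numpy as np

STATES = ["none", "mild", "referable"]   # simplified grading states

def transition_matrix(sequences, n_states=3):
    """Maximum-likelihood transition probabilities estimated by counting
    consecutive state pairs and normalizing each row to sum to 1."""
    counts = np.zeros((n_states, n_states))
    for seq in sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Toy screening histories: one state index per annual screen.
histories = [[0, 0, 0, 1], [0, 1, 1, 2], [0, 0, 1, 0], [1, 1, 2, 2], [0, 0, 0, 0]]
P = transition_matrix(histories)
```

A row of `P` gives the one-visit outlook for patients currently in that state; the screening-interval argument is exactly that the "referable" entry in the "none" row is very small, so those patients can safely wait longer between screens.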
13.
Looker HC, Nyangoma SO, Cromie D, Olson JA, Leese GP, Black M, Doig J, Lee N, Lindsay RS, McKnight JA, Morris AD, Philip S, Sattar N, Wild SH, Colhoun HM. Diabetic retinopathy at diagnosis of type 2 diabetes in Scotland. Diabetologia 2012; 55:2335-2342. [PMID: 22688348] [PMCID: PMC3411303] [DOI: 10.1007/s00125-012-2596-z]
Abstract
AIMS/HYPOTHESIS The aim of this study was to examine the prevalence of and risk factors for diabetic retinopathy in people with newly diagnosed type 2 diabetes mellitus, using Scottish national data. METHODS We identified individuals diagnosed with type 2 diabetes mellitus in Scotland between January 2005 and May 2008 using data from the national diabetes database. We calculated the prevalence of retinopathy and ORs for risk factors associated with retinopathy at first screening. RESULTS Of the 51,526 people with newly diagnosed type 2 diabetes mellitus identified, 91.4% had been screened by 31 December 2010. The median time to first screening was 315 days (interquartile range [IQR] 111-607 days), but by 2008 the median was 83 days (IQR 51-135 days). The prevalence at first screening of any retinopathy was 19.3%, and for referable retinopathy it was 1.9%. For individuals screened after a year the prevalence of any retinopathy was 20.5% and referable retinopathy was 2.3%. Any retinopathy at screening was associated with male sex (OR 1.19, 95% CI 1.14, 1.25), HbA1c (OR 1.07, 95% CI 1.06, 1.08 per 1% [11 mmol/mol] increase), systolic BP (OR 1.06, 95% CI 1.05, 1.08 per 10 mmHg increase), time to screening (OR for screening >1 year post diagnosis = 1.12, 95% CI 1.07, 1.17) and obesity (OR 0.87, 95% CI 0.82, 0.93) in multivariate analysis. CONCLUSIONS/INTERPRETATION The prevalence of retinopathy at first screening is lower than in previous UK studies, consistent with earlier diagnosis of diabetes. Most newly diagnosed type 2 diabetic patients in Scotland are screened within an acceptable interval and the prevalence of referable disease is low, even in those with delayed screening.
Affiliation(s)
- H C Looker
- Medical Research Institute, University of Dundee, The Mackenzie Building, Kirsty Semple Way, Dundee, DD2 4BF, UK
14
Dias JMP, Oliveira CM, Cruz LADS. Evaluation of Retinal Image Gradability by Image Features Classification. Procedia Technology 2012. [DOI: 10.1016/j.protcy.2012.09.096]
15
Radha V, Kanthimathi S, Mohan V. Genetics of Type 2 diabetes in Asian Indians. Diabetes Management 2011. [DOI: 10.2217/dmt.11.14]
16
Joshi GD, Sivaswamy J. DrishtiCare: a telescreening platform for diabetic retinopathy powered with fundus image analysis. J Diabetes Sci Technol 2011; 5:23-31. [PMID: 21303621 PMCID: PMC3045228 DOI: 10.1177/193229681100500104]
Abstract
OBJECTIVE Diabetic retinopathy is the leading cause of blindness in urban populations. Early diagnosis through regular screening and timely treatment has been shown to prevent visual loss and blindness. It is very difficult to cater to this vast set of diabetes patients, primarily because of the high cost of reaching out to patients and a scarcity of skilled personnel. Telescreening offers a cost-effective way to reach patients but remains inadequate because too few experts serve the diabetes population. Developments in fundus image analysis have shown promise in addressing the scarcity of skilled personnel for large-scale screening. This article addresses the underlying issues in traditional telescreening to develop a solution that leverages these developments in fundus image analysis. METHOD We propose a novel Web-based telescreening solution (called DrishtiCare) integrating various value-added fundus image analysis components. A Web-based platform on the software as a service (SaaS) delivery model is chosen to make the service cost-effective, easy to use, and scalable. A server-based prescreening system is employed to scrutinize the fundus images of patients and to refer them to the experts. An automatic quality assessment module ensures transfer of fundus images that meet grading standards. An easy-to-use interface, enabled with new visualization features, is designed for case examination by experts. RESULTS Three local primary eye hospitals have participated in and used DrishtiCare's telescreening service. A preliminary evaluation of the proposed platform was performed on a set of 119 patients, of whom 23% were identified with sight-threatening retinopathy. Evaluation at a larger scale is currently in progress, with a total of 450 patients enrolled. CONCLUSION The proposed approach provides an innovative way of integrating automated fundus image analysis into the telescreening framework to address well-known challenges in large-scale disease screening. It offers a low-cost, effective, and easily adoptable screening solution for primary care providers.
Affiliation(s)
- Gopal Datt Joshi
- Centre for Visual Information Technology (CVIT), International Institute of Information Technology, Hyderabad (IIIT-H), Gachibowli, Hyderabad, Andhra Pradesh, India
17
Abstract
PURPOSE Retinal images acquired by means of digital photography are often used for evaluation and documentation of the ocular fundus, especially in patients with diabetes, glaucoma or age-related macular degeneration. The clinical usefulness of an image is highly dependent on its quality. We set out to develop and evaluate an automatic method of evaluating the quality of digital fundus photographs. METHODS A method for making a numerical quantification of image sharpness and illumination was developed using Matlab image analysis functions. Based on their sharpness and illumination measures, 1000 fundus photographs, randomly selected from a clinical database, were assigned to four predefined quality groups (not acceptable, acceptable, good, very good). Six independent observers, comprising three experienced ophthalmologists and three ophthalmic nurses with extensive experience in fundus image acquisition, classified a selection of 100 of these images into the corresponding quality groups. RESULTS Automatic quality evaluation was more sensitive than evaluation by human observers in terms of ability to discriminate between good and very good images. The median concordance between the six human observers and the automatic evaluation was substantial (kappa = 0.64). CONCLUSIONS The proposed method provides an objective quality assessment of digital fundus photographs which agrees well with evaluations made by qualified human observers and which may be useful in clinical practice.
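The agreement statistic reported above (kappa = 0.64) is Cohen's kappa, which corrects observed rater agreement for agreement expected by chance. A minimal illustrative implementation with toy rater labels (not the study's data or its Matlab code):

```python
import numpy as np

def cohens_kappa(a, b, n_categories):
    """Cohen's kappa for two raters' categorical labels (illustrative)."""
    a, b = np.asarray(a), np.asarray(b)
    # Confusion matrix: rows are rater A's labels, columns rater B's
    cm = np.zeros((n_categories, n_categories))
    for i, j in zip(a, b):
        cm[i, j] += 1
    total = cm.sum()
    # Observed agreement: proportion of items on the diagonal
    p_observed = np.trace(cm) / total
    # Chance agreement: dot product of the raters' marginal distributions
    p_chance = (cm.sum(axis=1) / total) @ (cm.sum(axis=0) / total)
    return (p_observed - p_chance) / (1 - p_chance)

# Toy example: two raters assigning 10 images to 4 quality groups
# (0 = not acceptable ... 3 = very good); labels are made up.
rater_a = [0, 1, 1, 2, 2, 2, 3, 3, 3, 3]
rater_b = [0, 1, 2, 2, 2, 3, 3, 3, 3, 3]
print(round(cohens_kappa(rater_a, rater_b, 4), 3))
```

Kappa values around 0.61-0.80 are conventionally read as "substantial" agreement, which matches the abstract's wording.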
Affiliation(s)
- Herman Bartling
- Department of Clinical Neuroscience, Ophthalmology and Vision, Karolinska Institute, Stockholm, Sweden
18
Pradeepa R, Anitha B, Mohan V, Ganesan A, Rema M. Risk factors for diabetic retinopathy in a South Indian Type 2 diabetic population--the Chennai Urban Rural Epidemiology Study (CURES) Eye Study 4. Diabet Med 2008; 25:536-42. [PMID: 18346159 DOI: 10.1111/j.1464-5491.2008.02423.x]
Abstract
AIMS To determine risk factors for diabetic retinopathy (DR) in an urban South Indian Type 2 diabetic population. METHODS The Chennai Urban Rural Epidemiology Study is a large cross-sectional study conducted in Chennai, South India. A total of 1736 Type 2 diabetic subjects were recruited for this study, which included 1382 known diabetic subjects (90.4% response rate) and 354 randomly selected, newly detected diabetic subjects diagnosed by oral glucose tolerance test. All subjects underwent four-field stereo retinal colour photography, graded by the Early Treatment Diabetic Retinopathy Study protocol. RESULTS Of the 1736 Type 2 diabetic subjects photographed, photographs could be graded in 1715 subjects. Stepwise ordinal logistic regression analysis revealed that male gender (P = 0.041), duration of diabetes (P < 0.0001), glycated haemoglobin (HbA1c; P < 0.0001), macroalbuminuria (P = 0.0002) and insulin therapy (P = 0.0001) were significantly associated with severity of DR. The risk for developing DR was 7.7 times (95% confidence interval 4.71-12.48, P < 0.0001) for elevated postprandial plasma glucose levels compared with 4.2 times (95% confidence interval 2.78-6.34, P < 0.0001) for elevated fasting plasma glucose when the fourth quartile values were compared with the first quartile glucose values. CONCLUSIONS In South Indian Type 2 diabetic subjects, duration of diabetes, HbA1c, male gender, macroalbuminuria and insulin therapy were independent risk factors for severity of DR. Postprandial hyperglycaemia indicated a higher risk for DR compared with elevated fasting plasma glucose levels.
Affiliation(s)
- R Pradeepa
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai, India
19
Philip S, Fleming AD, Goatman KA, Fonseca S, McNamee P, Scotland GS, Prescott GJ, Sharp PF, Olson JA. The efficacy of automated "disease/no disease" grading for diabetic retinopathy in a systematic screening programme. Br J Ophthalmol 2007; 91:1512-7. [PMID: 17504851 PMCID: PMC2095421 DOI: 10.1136/bjo.2007.119453]
Abstract
AIM To assess the efficacy of automated "disease/no disease" grading for diabetic retinopathy within a systematic screening programme. METHODS Anonymised images were obtained from consecutive patients attending a regional primary care based diabetic retinopathy screening programme. A training set of 1067 images was used to develop automated grading algorithms. The final software was tested using a separate set of 14 406 images from 6722 patients. The sensitivity and specificity of manual and automated systems operating as "disease/no disease" graders (detecting poor quality images and any diabetic retinopathy) were determined relative to a clinical reference standard. RESULTS The reference standard classified 8.2% of the patients as having ungradeable images (technical failures) and 62.5% as having no retinopathy. Detection of technical failures or any retinopathy was achieved by manual grading with 86.5% sensitivity (95% confidence interval 85.1 to 87.8) and 95.3% specificity (94.6 to 95.9) and by automated grading with 90.5% sensitivity (89.3 to 91.6) and 67.4% specificity (66.0 to 68.8). Manual and automated grading detected 99.1% and 97.9%, respectively, of patients with referable or observable retinopathy/maculopathy. Manual and automated grading detected 95.7% and 99.8%, respectively, of technical failures. CONCLUSION Automated "disease/no disease" grading of diabetic retinopathy could safely reduce the burden of grading in diabetic retinopathy screening programmes.
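The sensitivity and specificity figures above are simple proportions with 95% confidence intervals. A small sketch computing both from made-up counts, using a Wald normal approximation (the paper does not state which CI method it used):

```python
import math

def sens_spec_ci(tp, fn, tn, fp):
    """Sensitivity and specificity with approximate 95% Wald CIs.
    Illustrative helper; counts and CI method are assumptions."""
    def prop_ci(k, n):
        p = k / n
        half = 1.96 * math.sqrt(p * (1 - p) / n)  # Wald half-width
        return p, max(0.0, p - half), min(1.0, p + half)
    return {
        "sensitivity": prop_ci(tp, tp + fn),  # TP / (TP + FN)
        "specificity": prop_ci(tn, tn + fp),  # TN / (TN + FP)
    }

# Toy counts for a "disease/no disease" grader (made-up numbers)
result = sens_spec_ci(tp=905, fn=95, tn=674, fp=326)
print(result["sensitivity"])
print(result["specificity"])
```

For proportions near 0 or 1 a Wilson score interval is usually preferred over the Wald interval, since the latter can behave poorly at the extremes.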
Affiliation(s)
- S Philip
- Biomedical Physics and Grampian Retinal Screening Programme, University of Aberdeen, Foresterhill, Aberdeen