1
Dos Reis Carvalho A, da Silva MV, Comin CH. Artificial vascular image generation using blood vessel texture maps. Comput Biol Med 2024;183:109226. [PMID: 39378578] [DOI: 10.1016/j.compbiomed.2024.109226]
Abstract
BACKGROUND Current methods for identifying blood vessels in digital images typically involve training neural networks on pixel-wise annotated data. However, manually outlining whole vessel trees in images tends to be very costly. One approach for reducing the amount of manual annotation is to pre-train networks on artificially generated vessel images. Recent pre-training approaches focus on generating proper artificial geometries for the vessels, while the appearance of the vessels is defined using general statistics of the real samples or generative networks requiring an additional training procedure to be defined. In contrast, we propose a methodology for generating blood vessels with realistic textures extracted directly from manually annotated vessel segments from real samples. The method allows the generation of artificial images having blood vessels with similar geometry and texture to the real samples using only a handful of manually annotated vessels. METHODS The first step of the method is the manual annotation of the borders of a small vessel segment, which takes only a few seconds. The annotation is then used for creating a reference image containing the texture of the vessel, called a texture map. A procedure is then defined to allow texture maps to be placed on top of any smooth curve using a piecewise linear transformation. Artificial images are then created by generating a set of vessel geometries using Bézier curves and assigning vessel texture maps to the curves. RESULTS The method is validated on a fluorescence microscopy (CORTEX) and a fundus photography (DRIVE) dataset. We show that manually annotating only 0.03% of the vessels in the CORTEX dataset allows pre-training a network to reach, on average, a Dice score of 0.87 ± 0.02, which is close to the baseline score of 0.92 obtained when all vessels of the training split of the dataset are annotated. 
For the DRIVE dataset, on average, a Dice score of 0.74 ± 0.02 is obtained by annotating only 0.29% of the vessels, which is also close to the baseline Dice score of 0.81 obtained when all vessels are annotated. CONCLUSION The proposed method can be used for disentangling the geometry and texture of blood vessels, which allows a significant improvement of network pre-training performance when compared to other pre-training methods commonly used in the literature.
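For context, the Dice score reported above measures the overlap between a predicted and a ground-truth segmentation mask. A minimal NumPy sketch (the toy masks are illustrative, not data from the paper):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Two toy 4x4 vessel masks: 8 and 4 foreground pixels, 4 in common.
a = np.array([[1, 1, 0, 0]] * 4)
b = np.array([[1, 0, 0, 0]] * 4)
print(dice_score(a, b))  # 2*4 / (8 + 4) ≈ 0.667
```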
Affiliation(s)
- Matheus Viana da Silva
- Department of Computer Science, Federal University of São Carlos, São Carlos, SP, Brazil
- Cesar H Comin
- Department of Computer Science, Federal University of São Carlos, São Carlos, SP, Brazil
2
Cleland CR, Rwiza J, Evans JR, Gordon I, MacLeod D, Burton MJ, Bascaran C. Artificial intelligence for diabetic retinopathy in low-income and middle-income countries: a scoping review. BMJ Open Diabetes Res Care 2023;11:e003424. [PMID: 37532460] [PMCID: PMC10401245] [DOI: 10.1136/bmjdrc-2023-003424]
Abstract
Diabetic retinopathy (DR) is a leading cause of blindness globally. There is growing evidence to support the use of artificial intelligence (AI) in diabetic eye care, particularly for screening populations at risk of sight loss from DR in low-income and middle-income countries (LMICs), where resources are most stretched. However, implementation into clinical practice remains limited. We conducted a scoping review to identify which AI tools have been used for DR in LMICs and to report their performance and relevant characteristics. 81 articles were included. The reported sensitivities and specificities were generally high, providing evidence to support use in clinical practice. However, the majority of studies focused on sensitivity and specificity only, and there was limited information on cost, regulatory approvals, and whether the use of AI improved health outcomes. Further research that goes beyond reporting sensitivities and specificities is needed prior to wider implementation.
Affiliation(s)
- Charles R Cleland
- International Centre for Eye Health, Faculty of Infectious and Tropical Diseases, London School of Hygiene & Tropical Medicine, London, UK
- Eye Department, Kilimanjaro Christian Medical Centre, Moshi, United Republic of Tanzania
- Justus Rwiza
- Eye Department, Kilimanjaro Christian Medical Centre, Moshi, United Republic of Tanzania
- Jennifer R Evans
- International Centre for Eye Health, Faculty of Infectious and Tropical Diseases, London School of Hygiene & Tropical Medicine, London, UK
- Iris Gordon
- International Centre for Eye Health, Faculty of Infectious and Tropical Diseases, London School of Hygiene & Tropical Medicine, London, UK
- David MacLeod
- Tropical Epidemiology Group, Department of Infectious Disease Epidemiology, London School of Hygiene & Tropical Medicine, London, UK
- Matthew J Burton
- International Centre for Eye Health, Faculty of Infectious and Tropical Diseases, London School of Hygiene & Tropical Medicine, London, UK
- National Institute for Health Research Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Covadonga Bascaran
- International Centre for Eye Health, Faculty of Infectious and Tropical Diseases, London School of Hygiene & Tropical Medicine, London, UK
3
Sangeethaa S. Presumptive discerning of the severity level of glaucoma through clinical fundus images using hybrid PolyNet. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104347]
4
Biswas S, Khan MIA, Hossain MT, Biswas A, Nakai T, Rohdin J. Which color channel is better for diagnosing retinal diseases automatically in color fundus photographs? Life (Basel) 2022;12:973. [PMID: 35888063] [PMCID: PMC9321111] [DOI: 10.3390/life12070973]
Abstract
Color fundus photographs are the most common type of image used for the automatic diagnosis of retinal diseases and abnormalities. Like all color photographs, these images contain information about three primary colors, i.e., red, green, and blue, in three separate color channels. This work aims to understand the impact of each channel on the automatic diagnosis of retinal diseases and abnormalities. To this end, the existing works are surveyed extensively to explore which color channel is used most commonly for automatically detecting four leading causes of blindness and one retinal abnormality, along with segmenting three retinal landmarks. From this survey, it is clear that all channels together are typically used for neural network-based systems, whereas for non-neural network-based systems the green channel is most commonly used. However, no conclusion can be drawn from the previous works regarding the importance of the different channels. Therefore, systematic experiments are conducted to analyse this. A well-known U-shaped deep neural network (U-Net) is used to investigate which color channel is best for segmenting one retinal abnormality and three retinal landmarks.
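As a concrete illustration of the channel selection discussed above, the sketch below splits an RGB image into its three channels and keeps the green one, the channel most often favoured by non-neural-network systems. The array layout (H×W×3, RGB order) and the toy pixel values are assumptions for the example:

```python
import numpy as np

def split_channels(rgb: np.ndarray):
    """Split an H x W x 3 RGB image into its red, green, and blue channels."""
    assert rgb.ndim == 3 and rgb.shape[2] == 3, "expected an H x W x 3 array"
    return rgb[..., 0], rgb[..., 1], rgb[..., 2]

# Toy 2x2 "fundus" patch; vessels typically show highest contrast in green.
img = np.array([[[120, 60, 30], [130, 65, 35]],
                [[110, 20, 25], [125, 58, 33]]], dtype=np.uint8)
_, g, _ = split_channels(img)
print(g)  # the green channel only: [[60 65], [20 58]]
```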
Affiliation(s)
- Sangeeta Biswas
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Iqbal Aziz Khan
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Tanvir Hossain
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Angkan Biswas
- CAPM Company Limited, Bonani, Dhaka 1213, Bangladesh
- Takayoshi Nakai
- Faculty of Engineering, Shizuoka University, Hamamatsu 432-8561, Japan
- Johan Rohdin
- Faculty of Information Technology, Brno University of Technology, 61200 Brno, Czech Republic
5
Rajarajeswari P, Moorthy J, Bég OA. Simulation of diabetic retinopathy utilizing convolutional neural networks. J Mech Med Biol 2022. [DOI: 10.1142/s0219519422500117]
Abstract
Currently, diabetic retinopathy is still screened as a three-stage classification, which is a tedious strategy; this paper therefore focuses on developing an improved methodology. A convolutional neural network was trained on a dataset of around 45 images to perform mathematical analysis and classification. The DR model takes HRF fundus photographs of the eye as its input parameters. Three classes of patients are considered: healthy patients, diabetic retinopathy patients, and glaucoma patients. A trained convolutional neural network without a fully connected layer separates the features of the fundus pixels with the help of activation functions such as ReLU and softmax for classification. The output obtained from the convolutional neural network (CNN) model and patient data achieves a standardized accuracy of 97%. The resulting methodology therefore has great potential to benefit ophthalmic specialists in clinical medicine by diagnosing the symptoms of DR earlier and mitigating its effects.
Affiliation(s)
- P. Rajarajeswari
- Department of Computer Science and Engineering, Sreenivasa Institute of Technology and Management Studies, Chittoor, Andhra Pradesh, India
- Jayashree Moorthy
- Department of Computer Science and Engineering, Sreenivasa Institute of Technology and Management Studies, Chittoor, Andhra Pradesh, India
- O. Anwar Bég
- Professor of Engineering Science and Director, Multi-Physical Engineering Sciences Group (MPESG), School of Science, Engineering and Environment (SEE), University of Salford, Manchester M5 4WT, UK
6
Detection of exudates from clinical fundus images using machine learning algorithms in diabetic maculopathy. Int J Diabetes Dev Ctries 2022. [DOI: 10.1007/s13410-021-01039-y]
7
Wu JH, Liu TYA, Hsu WT, Ho JHC, Lee CC. Performance and Limitation of Machine Learning Algorithms for Diabetic Retinopathy Screening: Meta-analysis. J Med Internet Res 2021;23:e23863. [PMID: 34407500] [PMCID: PMC8406115] [DOI: 10.2196/23863]
Abstract
Background Diabetic retinopathy (DR), whose standard diagnosis is performed by human experts, has high prevalence and requires a more efficient screening method. Although machine learning (ML)-based automated DR diagnosis has gained attention following the recent approval of IDx-DR, the performance of such tools has not been examined systematically, and the best ML technique for use in a real-world setting has not been established. Objective The aim of this study was to systematically examine the overall diagnostic accuracy of ML in diagnosing DR of different categories based on color fundus photographs and to determine the state-of-the-art ML approach. Methods Published studies in PubMed and EMBASE were searched from inception to June 2020. Studies were screened for relevant outcomes, publication types, and data sufficiency, and a total of 60 out of 2128 (2.82%) studies were retained after study selection. Data extraction was performed by 2 authors according to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses), and quality assessment was performed according to the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2). Diagnostic accuracy was pooled using a bivariate random effects model. The main outcomes included the diagnostic accuracy, sensitivity, and specificity of ML in diagnosing DR based on color fundus photographs, as well as the performance of the major types of ML algorithms. Results The primary meta-analysis included 60 color fundus photograph studies (445,175 interpretations). Overall, ML demonstrated high accuracy in diagnosing DR of various categories, with a pooled area under the receiver operating characteristic curve (AUROC) ranging from 0.97 (95% CI 0.96-0.99) to 0.99 (95% CI 0.98-1.00).
The performance of ML in detecting more-than-mild DR was robust (sensitivity 0.95; AUROC 0.97), and subgroup analyses showed that this robust performance was not limited to benchmark data sets (sensitivity 0.92; AUROC 0.96) but generalized to images collected in clinical practice (sensitivity 0.97; AUROC 0.97). Neural networks were the most widely used method, and the subgroup analysis revealed a pooled AUROC of 0.98 (95% CI 0.96-0.99) for studies that used neural networks to diagnose more-than-mild DR. Conclusions This meta-analysis demonstrated the high diagnostic accuracy of ML algorithms in detecting DR on color fundus photographs, suggesting that state-of-the-art, ML-based DR screening algorithms are likely ready for clinical application. However, a significant portion of the earlier published studies had methodological flaws, such as a lack of external validation and the presence of spectrum bias; the results of those studies should be interpreted with caution.
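For reference, the sensitivity and specificity figures pooled in meta-analyses like the one above derive from per-study confusion matrices. A minimal sketch with hypothetical screening counts (not data from the review):

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screen: 95 of 100 diseased eyes flagged,
# 180 of 200 healthy eyes correctly passed.
sens, spec = sensitivity_specificity(tp=95, fn=5, tn=180, fp=20)
print(sens, spec)  # 0.95 0.9
```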
Affiliation(s)
- Jo-Hsuan Wu
- Shiley Eye Institute and Viterbi Family Department of Ophthalmology, University of California San Diego, La Jolla, CA, United States
- T Y Alvin Liu
- Retina Division, Wilmer Eye Institute, The Johns Hopkins Medicine, Baltimore, MD, United States
- Wan-Ting Hsu
- Harvard TH Chan School of Public Health, Boston, MA, United States
- Chien-Chang Lee
- Health Data Science Research Group, National Taiwan University Hospital, Taipei, Taiwan
- The Centre for Intelligent Healthcare, National Taiwan University Hospital, Taipei, Taiwan
- Department of Emergency Medicine, National Taiwan University Hospital, Taipei, Taiwan
8
Du X, Wang J, Sun W. Densely connected U-Net retinal vessel segmentation algorithm based on multi-scale feature convolution extraction. Med Phys 2021;48:3827-3841. [PMID: 34028030] [DOI: 10.1002/mp.14944]
Abstract
PURPOSE The segmentation of retinal blood vessels has a significant impact on the automatic diagnosis of various ophthalmic diseases. To further improve the segmentation accuracy of retinal vessels, we propose an improved algorithm based on multiscale vessel detection that extracts features through densely connected networks and reuses them. METHODS Two densely connected multiscale-feature U-Net structures are designed: parallel fusion and serial embedding. In the parallel fusion method, features of the input images are extracted by Inception multiscale convolution and dense-block convolution separately, after which the features are fused and passed to the subsequent network. In the serial embedding mode, the Inception multiscale convolution structure is embedded in the densely connected network module, and the dense connection structure replaces the classical convolution block in the encoder part of the U-Net, achieving multiscale feature extraction and efficient utilization of vessels with complex structure and thereby improving segmentation performance. RESULTS On the standard DRIVE database, the parallel fusion and serial embedding methods reach a sensitivity, specificity, accuracy, and AUC of 0.7854, 0.9813, 0.9563, and 0.9794 versus 0.7876, 0.9811, 0.9565, and 0.9793, respectively; on CHASE_DB1, the corresponding values are 0.8110, 0.9737, 0.9547, and 0.9667 versus 0.8113, 0.9717, 0.9574, and 0.9750. CONCLUSIONS The experimental results show that multiscale feature detection and dense feature connection effectively enhance the network model's ability to detect blood vessels and improve segmentation performance, outperforming the U-Net algorithm and several current mainstream retinal blood vessel segmentation algorithms.
Affiliation(s)
- Xinfeng Du
- School of Electronic and Information Engineering, University of Science and Technology Liaoning, Anshan, 114051, China
- Jiesheng Wang
- School of Electronic and Information Engineering, University of Science and Technology Liaoning, Anshan, 114051, China
- Weizhen Sun
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, Jiangsu, 210000, China
9
Gegundez-Arias ME, Marin-Santos D, Perez-Borrero I, Vasallo-Vazquez MJ. A new deep learning method for blood vessel segmentation in retinal images based on convolutional kernels and modified U-Net model. Comput Methods Programs Biomed 2021;205:106081. [PMID: 33882418] [DOI: 10.1016/j.cmpb.2021.106081]
Abstract
BACKGROUND AND OBJECTIVE Automatic monitoring of retinal blood vessels proves very useful for the clinical assessment of ocular vascular anomalies or retinopathies. This paper presents an efficient and accurate deep learning-based method for vessel segmentation in eye fundus images. METHODS The approach consists of a convolutional neural network based on a simplified version of the U-Net architecture that combines residual blocks and batch normalization in the up- and downscaling phases. The network receives patches extracted from the original image as input and is trained with a novel loss function that considers the distance of each pixel to the vascular tree. At its output, it generates the probability of each pixel of the input patch belonging to the vascular structure. Applying the network to the patches into which a retinal image can be divided yields the pixel-wise probability map of the complete image. This probability map is then binarized with a certain threshold to generate the blood vessel segmentation provided by the method. RESULTS The method was developed and evaluated on the DRIVE, STARE and CHASE_Db1 databases, each of which provides a manual segmentation of the vascular tree for its images. Using this set of images as ground truth, the accuracy of the vessel segmentations obtained for a proposed operating point (established by a single threshold value for each database) was quantified. The overall performance was measured using the area under the receiver operating characteristic curve. The method proved robust to the variability of fundus images of diverse origin and achieved the highest accuracy across the entire range of possible operating points when compared with the most accurate methods found in the literature.
CONCLUSIONS The analysis of results concludes that the proposed method outperforms the remaining state-of-the-art methods and can be considered the most promising for integration into a real tool for vascular structure segmentation.
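The final binarization step described above (probability map to vessel mask) can be sketched as follows; the threshold value is illustrative, not the paper's per-database operating point:

```python
import numpy as np

def binarize(prob_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Turn a pixel-wise vessel-probability map into a binary segmentation."""
    return (prob_map >= threshold).astype(np.uint8)

# Toy 2x2 probability map produced by a hypothetical network.
probs = np.array([[0.10, 0.80],
                  [0.55, 0.40]])
print(binarize(probs))  # [[0 1], [1 0]]
```

Sweeping the threshold over [0, 1] traces out the receiver operating characteristic curve whose area the paper reports.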
Affiliation(s)
- Manuel E Gegundez-Arias
- Vision, Prediction, Optimisation and Control Systems Department, Science and Technology Research Centre, University of Huelva, Avenida de las Fuerzas Armadas s/n, 21007, Huelva, Spain
- Diego Marin-Santos
- Vision, Prediction, Optimisation and Control Systems Department, Science and Technology Research Centre, University of Huelva, Avenida de las Fuerzas Armadas s/n, 21007, Huelva, Spain
- Isaac Perez-Borrero
- Vision, Prediction, Optimisation and Control Systems Department, Science and Technology Research Centre, University of Huelva, Avenida de las Fuerzas Armadas s/n, 21007, Huelva, Spain
- Manuel J Vasallo-Vazquez
- Vision, Prediction, Optimisation and Control Systems Department, Science and Technology Research Centre, University of Huelva, Avenida de las Fuerzas Armadas s/n, 21007, Huelva, Spain
10
Abstract
Accurate segmentation of retinal blood vessels is a key step in the diagnosis of fundus diseases, among which cataracts, glaucoma, and diabetic retinopathy (DR) are the main causes of blindness. Most segmentation methods based on deep convolutional neural networks can effectively extract features. However, convolution and pooling operations also filter out some useful information, and the final segmented retinal vessels suffer from problems such as low classification accuracy. In this paper, we propose a multi-scale residual attention network called MRA-UNet. Multi-scale inputs enable the network to learn features at different scales, which increases its robustness. In the encoding phase, we reduce the negative influence of the background and eliminate noise by using a residual attention module. We use a bottom reconstruction module to aggregate the feature information under different receptive fields, so that the model can extract information about vessels of different thicknesses. Finally, a spatial activation module processes the up-sampled image to further increase the difference between blood vessels and background, which promotes the recovery of small blood vessels at the edges. Our method was verified on the DRIVE, CHASE, and STARE datasets. The segmentation accuracy reached 96.98%, 97.58%, and 97.63%; the specificity reached 98.28%, 98.54%, and 98.73%; and the F-measure scores reached 82.93%, 81.27%, and 84.22%, respectively. We compared the experimental results with some state-of-the-art methods, such as U-Net, R2U-Net, and AG-UNet, in terms of accuracy, sensitivity, specificity, F-measure, and AUC-ROC. In particular, MRA-UNet outperformed U-Net by 1.51%, 3.44%, and 0.49% on the DRIVE, CHASE, and STARE datasets, respectively.
11
12
Machine learning and artificial intelligence based Diabetes Mellitus detection and self-management: a systematic review. Journal of King Saud University - Computer and Information Sciences 2020. [DOI: 10.1016/j.jksuci.2020.06.013]
13
Yang WH, Zheng B, Wu MN, Zhu SJ, Fei FQ, Weng M, Zhang X, Lu PR. An Evaluation System of Fundus Photograph-Based Intelligent Diagnostic Technology for Diabetic Retinopathy and Applicability for Research. Diabetes Ther 2019;10:1811-1822. [PMID: 31290125] [PMCID: PMC6778552] [DOI: 10.1007/s13300-019-0652-0]
Abstract
INTRODUCTION In April 2018, the US Food and Drug Administration (FDA) approved the world's first artificial intelligence (AI) medical device for detecting diabetic retinopathy (DR), the IDx-DR. However, there is a lack of evaluation systems for DR intelligent diagnostic technology. METHODS Five hundred color fundus photographs of diabetic patients were selected, with DR severity varying from grade 0 to 4 and 100 photographs for each grade. These were then diagnosed by both ophthalmologists and the intelligent technology, and the results were compared by applying the evaluation system. The system includes primary, intermediate, and advanced evaluations, of which the intermediate evaluation incorporates two methods. The main evaluation indicators were sensitivity, specificity, and kappa value. RESULTS The AI technology diagnosed 93 photographs with no DR, 107 with mild non-proliferative DR (NPDR), 107 with moderate NPDR, 108 with severe NPDR, and 85 with proliferative DR (PDR). The sensitivity, specificity, and kappa value of the AI diagnoses in the primary evaluation were 98.8%, 88.0%, and 0.89, respectively. According to method 1 of the intermediate evaluation, the sensitivity of the AI diagnosis was 98.0%, the specificity 97.0%, and the kappa value 0.95. In method 2 of the intermediate evaluation, the sensitivity of the AI diagnosis was 95.5%, the specificity 99.3%, and the kappa value 0.95. In the advanced evaluation, the kappa value of the intelligent diagnosis was 0.86. CONCLUSIONS This article proposes an evaluation system for color fundus photograph-based intelligent diagnostic technology for DR and demonstrates an application of this system in a clinical setting. The results from this evaluation system serve as the basis for selecting scenarios in which DR intelligent diagnostic technology can be applied.
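The kappa values reported above quantify agreement between the AI and the ophthalmologists beyond chance (Cohen's kappa). A minimal sketch with hypothetical grade labels (not the study's data):

```python
def cohens_kappa(a: list, b: list) -> float:
    """Cohen's kappa between two raters' labels: (p_o - p_e) / (1 - p_e)."""
    assert len(a) == len(b) and a, "need two equal-length, non-empty label lists"
    n = len(a)
    labels = set(a) | set(b)
    p_o = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    p_e = sum((a.count(k) / n) * (b.count(k) / n)        # chance agreement
              for k in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical DR grades (0-2) from an AI and an ophthalmologist.
ai  = [0, 0, 1, 1, 2, 2, 2, 1]
doc = [0, 0, 1, 2, 2, 2, 2, 1]
print(cohens_kappa(ai, doc))  # ≈ 0.810
```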
Affiliation(s)
- Wei-Hua Yang
- Department of Ophthalmology, The First Affiliated Hospital of Soochow University, Suzhou, Jiangsu, China
- Department of Ophthalmology, The First People's Hospital of Huzhou, Huzhou, Zhejiang, China
- Key Laboratory of Medical Artificial Intelligence, Huzhou University, Huzhou, Zhejiang, China
- Bo Zheng
- The Information Engineering College of Huzhou University, Huzhou, Zhejiang, China
- Key Laboratory of Medical Artificial Intelligence, Huzhou University, Huzhou, Zhejiang, China
- Mao-Nian Wu
- The Information Engineering College of Huzhou University, Huzhou, Zhejiang, China
- Key Laboratory of Medical Artificial Intelligence, Huzhou University, Huzhou, Zhejiang, China
- Shao-Jun Zhu
- The Information Engineering College of Huzhou University, Huzhou, Zhejiang, China
- Key Laboratory of Medical Artificial Intelligence, Huzhou University, Huzhou, Zhejiang, China
- Fang-Qin Fei
- Key Laboratory of Medical Artificial Intelligence, Huzhou University, Huzhou, Zhejiang, China
- Department of Endocrinology, The First Affiliated Hospital of Huzhou University, Huzhou, Zhejiang, China
- Ming Weng
- Department of Ophthalmology, Wuxi Third People's Hospital, Wuxi, Jiangsu, China
- Xian Zhang
- Department of Ophthalmology, Ningbo Medical Center Lihuili Eastern Hospital, Ningbo, Zhejiang, China
- Pei-Rong Lu
- Department of Ophthalmology, The First Affiliated Hospital of Soochow University, Suzhou, Jiangsu, China
14
Pandey SK, Sharma V. World Diabetes Day 2018: battling the emerging epidemic of diabetic retinopathy. Indian J Ophthalmol 2018;66:1652-1653. [PMID: 30355895] [PMCID: PMC6213704] [DOI: 10.4103/ijo.ijo_1681_18]
Affiliation(s)
- Suresh K Pandey
- SuVi Eye Institute & Lasik Laser Center, Kota, Rajasthan, India
- Vidushi Sharma
- SuVi Eye Institute & Lasik Laser Center, Kota, Rajasthan, India