1. Zhang S, Webers CAB, Berendschot TTJM. Computational single fundus image restoration techniques: a review. Frontiers in Ophthalmology 2024; 4:1332197. PMID: 38984141; PMCID: PMC11199880; DOI: 10.3389/fopht.2024.1332197.
Abstract
Fundus cameras are widely used by ophthalmologists to monitor and diagnose retinal pathologies. No optical system is perfect, however, and the visibility of retinal images can be greatly degraded by problematic illumination, intraocular scattering, or blurriness caused by sudden movements. To improve image quality, various retinal image restoration/enhancement techniques have been developed, which play an important role in improving the performance of clinical and computer-assisted applications. This paper gives a comprehensive review of these techniques, discusses their underlying mathematical models, and shows how they can be applied in practice to increase the visual quality of retinal images for clinical applications such as diagnosis and retinal structure recognition. All three main topics of retinal image restoration/enhancement are addressed: illumination correction, dehazing, and deblurring. Finally, challenges and the future scope of these techniques are discussed.
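Of the three topics this review covers, illumination correction is the simplest to illustrate. The sketch below is a generic baseline, not any specific method from the review: it estimates the slowly varying illumination field with a wide blur and divides it out. The `box_blur` helper, the kernel size, and the synthetic ramp are all illustrative choices.

```python
import numpy as np

def box_blur(img, k=31):
    """Separable k x k box blur with edge padding (k odd, pure NumPy)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    # Blur along rows, then along columns.
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

def correct_illumination(img, k=31, eps=1e-6):
    """Divide out a low-frequency estimate of the illumination field,
    rescaling so the overall brightness is preserved."""
    background = box_blur(img, k)
    return img / (background + eps) * background.mean()

# Synthetic check: a flat scene under a left-to-right illumination ramp.
h, w = 128, 128
ramp = np.linspace(0.3, 1.0, w)[None, :] * np.ones((h, 1))
observed = 0.5 * np.ones((h, w)) * ramp
restored = correct_illumination(observed)

# Column means should be far more uniform after correction.
print(observed.mean(axis=0).std(), restored.mean(axis=0).std())
```

Real methods surveyed in the review go well beyond this (e.g. Retinex decompositions and learned enhancers in the entries below), but they share the same multiplicative image-formation assumption.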
Affiliation(s)
- Shuhe Zhang
  - University Eye Clinic Maastricht, Maastricht University Medical Center, Maastricht, Netherlands
- Carroll A B Webers
  - University Eye Clinic Maastricht, Maastricht University Medical Center, Maastricht, Netherlands
- Tos T J M Berendschot
  - University Eye Clinic Maastricht, Maastricht University Medical Center, Maastricht, Netherlands
2. Elsawy A, Keenan TDL, Chen Q, Thavikulwat AT, Bhandari S, Quek TC, Goh JHL, Tham YC, Cheng CY, Chew EY, Lu Z. A deep network DeepOpacityNet for detection of cataracts from color fundus photographs. Communications Medicine 2023; 3:184. PMID: 38104223; PMCID: PMC10725427; DOI: 10.1038/s43856-023-00410-w.
Abstract
BACKGROUND Cataract diagnosis typically requires in-person evaluation by an ophthalmologist. However, color fundus photography (CFP) is widely performed outside ophthalmology clinics and could be exploited to increase the accessibility of cataract screening through automated detection. METHODS DeepOpacityNet was developed to detect cataracts from CFPs and to highlight the CFP features most relevant to cataracts. We used 17,514 CFPs from 2573 participants curated from the Age-Related Eye Diseases Study 2 (AREDS2) dataset, of which 8681 CFPs were labeled with cataracts. Ground-truth labels were transferred from slit-lamp examination for nuclear cataracts and from reading-center grading of anterior segment photographs for cortical and posterior subcapsular cataracts. DeepOpacityNet was internally validated on an independent test set (20%), compared to three ophthalmologists on a subset of the test set (100 CFPs), externally validated on three datasets from the Singapore Epidemiology of Eye Diseases (SEED) study, and visualized to highlight important features. RESULTS Internally, DeepOpacityNet achieved an accuracy of 0.66 (95% confidence interval (CI): 0.64-0.68) and an area under the curve (AUC) of 0.72 (95% CI: 0.70-0.74), superior to other state-of-the-art methods. On the subset graded by ophthalmologists, DeepOpacityNet achieved an accuracy of 0.75, compared to 0.67 for the best-performing ophthalmologist. Externally, DeepOpacityNet achieved AUC scores of 0.86, 0.88, and 0.89 on the SEED datasets, demonstrating the generalizability of the method. Visualizations show that visible blood vessels could be characteristic of cataract absence, while blurred regions could be characteristic of cataract presence. CONCLUSIONS DeepOpacityNet detected cataracts from CFPs in AREDS2 with performance superior to that of ophthalmologists and generated interpretable results.
The code and models are available at https://github.com/ncbi/DeepOpacityNet ( https://doi.org/10.5281/zenodo.10127002 ).
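Since AUC is the headline metric in this entry and several below, a compact reference implementation may help. This is the standard Mann-Whitney formulation of AUC (probability that a random positive outscores a random negative, ties counting one half); it is illustrative, not code from the DeepOpacityNet repository.

```python
import numpy as np

def auc_score(labels, scores):
    """ROC AUC via the Mann-Whitney statistic: fraction of
    (positive, negative) pairs where the positive scores higher,
    with ties contributing one half."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

print(auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # prints 0.75
```

The pairwise comparison is O(n_pos * n_neg); for large test sets one would rank-sort instead, but the result is identical.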
Affiliation(s)
- Amr Elsawy
  - National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, 20894, USA
- Tiarnan D L Keenan
  - National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, 20894, USA
- Qingyu Chen
  - National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, 20894, USA
- Alisa T Thavikulwat
  - Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
- Sanjeeb Bhandari
  - Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
- Ten Cheer Quek
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Jocelyn Hui Lin Goh
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Yih-Chung Tham
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
  - Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
  - Centre for Innovation and Precision Eye Health & Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Ching-Yu Cheng
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
  - Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
  - Centre for Innovation and Precision Eye Health & Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Emily Y Chew
  - Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
- Zhiyong Lu
  - National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, 20894, USA
3. Ganokratanaa T, Ketcham M, Pramkeaw P. Advancements in Cataract Detection: The Systematic Development of LeNet-Convolutional Neural Network Models. J Imaging 2023; 9:197. PMID: 37888304; PMCID: PMC10607181; DOI: 10.3390/jimaging9100197.
Abstract
Regular screening and timely treatment play a crucial role in addressing the progression and visual impairment caused by cataracts, the leading cause of blindness in Thailand and many other countries. Although cataracts can often be prevented or treated successfully, patients frequently delay seeking medical attention because the disease progresses gradually and with few symptoms. To address this challenge, this research identifies cataract abnormalities using image processing and machine learning for preliminary assessment. A LeNet convolutional neural network (LeNet-CNN) is trained on a dataset of digital camera images, and its performance in categorizing cataract abnormalities is compared to a support vector machine (SVM). In testing, the LeNet-CNN model attains an accuracy of 96%, with a sensitivity of 95% for detecting positive cases and a specificity of 96% for identifying negative cases, surpassing previous studies in this field. By combining image processing with a convolutional neural network, this research provides an effective tool for initial cataract screening: patients can independently assess their eye health by capturing self-images, facilitating early intervention and medical consultation.
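The three reported figures are related by standard confusion-matrix formulas, sketched here with made-up counts chosen only to land near the reported values (these are not the study's data):

```python
def screening_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix
    counts; the three figures quoted for the LeNet-CNN screen above
    (formulas only, not the study's code)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)  # true-positive rate: cataracts caught
    specificity = tn / (tn + fp)  # true-negative rate: healthy eyes cleared
    return accuracy, sensitivity, specificity

# Hypothetical counts roughly consistent with the reported 96%/95%/96%:
acc, sens, spec = screening_metrics(tp=95, fp=4, tn=96, fn=5)
print(acc, sens, spec)
```

For a screening tool, sensitivity is usually the figure to protect: a missed cataract (false negative) costs more than an unnecessary referral (false positive).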
Affiliation(s)
- Thittaporn Ganokratanaa
  - Applied Computer Science Programme, King Mongkut’s University of Technology Thonburi, Bangkok 10140, Thailand
- Mahasak Ketcham
  - Department of Information Technology Management, King Mongkut’s University of Technology North Bangkok, Bangkok 10800, Thailand
- Patiyuth Pramkeaw
  - Media Technology Programme, King Mongkut’s University of Technology Thonburi, Bangkok 10150, Thailand
4. Xie H, Li Z, Wu C, Zhao Y, Lin C, Wang Z, Wang C, Gu Q, Wang M, Zheng Q, Jiang J, Chen W. Deep learning for detecting visually impaired cataracts using fundus images. Front Cell Dev Biol 2023; 11:1197239. PMID: 37576595; PMCID: PMC10416247; DOI: 10.3389/fcell.2023.1197239.
Abstract
Purpose: To develop a visual function-based deep learning system (DLS) using fundus images to screen for visually impaired cataracts. Materials and methods: A total of 8,395 fundus images (5,245 subjects) with corresponding visual function parameters, collected from three clinical centers, were used to develop and evaluate a DLS for classifying non-cataracts, mild cataracts, and visually impaired cataracts. Three deep learning algorithms (DenseNet121, Inception V3, and ResNet50) were trained to select the best model for the system. Performance was evaluated using the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. Results: The AUCs of the best algorithm (DenseNet121) ranged from 0.998 (95% CI, 0.996-0.999) to 0.999 (95% CI, 0.998-1.000) on the internal test dataset, from 0.938 (95% CI, 0.924-0.951) to 0.966 (95% CI, 0.946-0.983) on the first external test dataset, and from 0.937 (95% CI, 0.918-0.953) to 0.977 (95% CI, 0.962-0.989) on the second. In a comparison with cataract specialists, the system performed better at detecting visually impaired cataracts (p < 0.05). Conclusion: Our study shows the potential of a function-focused screening tool to identify visually impaired cataracts from fundus images, enabling timely patient referral to tertiary eye hospitals.
Affiliation(s)
- He Xie
  - National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Zhongwen Li
  - Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China
- Chengchao Wu
  - School of Electronic Engineering, Xi’an University of Posts and Telecommunications, Xi’an, China
- Yitian Zhao
  - Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China
  - Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Chengmin Lin
  - Department of Ophthalmology, Wenzhou Hospital of Integrated Traditional Chinese and Western Medicine, Wenzhou, China
- Zhouqian Wang
  - National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Chenxi Wang
  - National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Qinyi Gu
  - National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Minye Wang
  - National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Qinxiang Zheng
  - National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
  - Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China
- Jiewei Jiang
  - School of Electronic Engineering, Xi’an University of Posts and Telecommunications, Xi’an, China
- Wei Chen
  - National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
  - Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China
5. Automated cataract disease detection on anterior segment eye images using adaptive thresholding and fine tuned inception-v3 model. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104550.
6. Xu Z, Xu J, Shi C, Xu W, Jin X, Han W, Jin K, Grzybowski A, Yao K. Artificial Intelligence for Anterior Segment Diseases: A Review of Potential Developments and Clinical Applications. Ophthalmol Ther 2023; 12:1439-1455. PMID: 36884203; PMCID: PMC10164195; DOI: 10.1007/s40123-023-00690-4.
Abstract
Artificial intelligence (AI) technology is promising in the field of healthcare. With the developments of big data and image-based analysis, AI shows potential value in ophthalmology applications. Recently, machine learning and deep learning algorithms have made significant progress. Emerging evidence has demonstrated the capability of AI in the diagnosis and management of anterior segment diseases. In this review, we provide an overview of AI applications and potential future applications in anterior segment diseases, focusing on cornea, refractive surgery, cataract, anterior chamber angle detection, and refractive error prediction.
Affiliation(s)
- Zhe Xu
  - Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, No. 88 Jiefang Road, Hangzhou, 310009, Zhejiang, China
- Jia Xu
  - Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, No. 88 Jiefang Road, Hangzhou, 310009, Zhejiang, China
- Ce Shi
  - Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, No. 88 Jiefang Road, Hangzhou, 310009, Zhejiang, China
- Wen Xu
  - Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, No. 88 Jiefang Road, Hangzhou, 310009, Zhejiang, China
- Xiuming Jin
  - Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, No. 88 Jiefang Road, Hangzhou, 310009, Zhejiang, China
- Wei Han
  - Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, No. 88 Jiefang Road, Hangzhou, 310009, Zhejiang, China
- Kai Jin
  - Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, No. 88 Jiefang Road, Hangzhou, 310009, Zhejiang, China
- Andrzej Grzybowski
  - Department of Ophthalmology, University of Warmia and Mazury, Olsztyn, Poland
  - Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland
- Ke Yao
  - Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, No. 88 Jiefang Road, Hangzhou, 310009, Zhejiang, China
7. Zhang S, Webers CAB, Berendschot TTJM. Luminosity rectified blind Richardson-Lucy deconvolution for single retinal image restoration. Computer Methods and Programs in Biomedicine 2023; 229:107297. PMID: 36563648; DOI: 10.1016/j.cmpb.2022.107297.
Abstract
BACKGROUND AND OBJECTIVE Due to imperfect imaging conditions, retinal images can be degraded by uneven or insufficient illumination and by blurriness caused by optical aberrations and unintentional motion. Degraded images reduce the effectiveness of diagnosis by an ophthalmologist. To restore image quality, we propose the luminosity rectified Richardson-Lucy (LRRL) blind deconvolution framework for single retinal image restoration. METHODS We established an image formation model based on the double-pass fundus reflection feature and developed a differentiable non-convex cost function that jointly achieves illumination correction and blind deconvolution. To solve this non-convex optimization problem, we derived the closed-form expression of the gradients and used gradient descent with Nesterov-accelerated adaptive momentum estimation, which is more efficient than the traditional half-quadratic splitting method. RESULTS The LRRL was tested on 1719 images from three public databases. Four image quality metrics, namely image definition, image sharpness, image entropy, and image multiscale contrast, were used for objective assessment, and the LRRL was compared against state-of-the-art retinal image blind deconvolution methods. CONCLUSIONS Our LRRL corrects problematic illumination and improves the clarity of the retinal image simultaneously, showing its superiority in terms of restoration quality and implementation efficiency. The MATLAB code is available on GitHub.
Affiliation(s)
- Shuhe Zhang
  - University Eye Clinic Maastricht, Maastricht University Medical Center +, P.O. Box 5800, Maastricht, AZ 6202, the Netherlands
- Carroll A B Webers
  - University Eye Clinic Maastricht, Maastricht University Medical Center +, P.O. Box 5800, Maastricht, AZ 6202, the Netherlands
- Tos T J M Berendschot
  - University Eye Clinic Maastricht, Maastricht University Medical Center +, P.O. Box 5800, Maastricht, AZ 6202, the Netherlands
8. Han R, Tang C, Xu M, Liang B, Wu T, Lei Z. Enhancement method with naturalness preservation and artifact suppression based on an improved Retinex variational model for color retinal images. Journal of the Optical Society of America A 2023; 40:155-164. PMID: 36607085; DOI: 10.1364/josaa.474020.
Abstract
Retinal images are widely used in the diagnosis of various diseases, but low-quality images with uneven illumination, low contrast, or blurring may seriously interfere with diagnosis by ophthalmologists. This study proposes an enhancement method for low-quality color retinal images. An improved variational Retinex model for color retinal images is first proposed and applied to each channel of the RGB color space to obtain the illuminance and reflectance layers. The Naka-Rushton equation is then introduced to correct the illuminance layer, and an enhancement operator is constructed to improve the clarity of the reflectance layer. Finally, the corrected illuminance and enhanced reflectance are recombined, and contrast-limited adaptive histogram equalization is applied to further improve clarity and contrast. To demonstrate its effectiveness, the method was tested on 527 images from four publicly available datasets and 40 local clinical images from Tianjin Eye Hospital (China). Experimental results show that the proposed method outperforms four other enhancement methods, with clear advantages in naturalness preservation and artifact suppression.
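The Naka-Rushton correction mentioned above maps luminance L to L^n / (L^n + sigma^n), lifting dark regions while compressing highlights. The sketch below uses generic defaults (n = 1, sigma set to the mean luminance), which are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

def naka_rushton(lum, sigma=None, n=1.0):
    """Naka-Rushton response curve V = L^n / (L^n + sigma^n).
    sigma is the semi-saturation constant (the input mapped to 0.5);
    by default it is set to the mean luminance of the input."""
    lum = np.asarray(lum, dtype=float)
    if sigma is None:
        sigma = lum.mean()
    ln = lum ** n
    return ln / (ln + sigma ** n)

out = naka_rushton([0.1, 0.5, 0.9], sigma=0.5)
print(out)  # the semi-saturation input 0.5 maps exactly to 0.5
```

Note the asymmetry: with sigma = 0.5, a dark value of 0.1 is boosted to about 0.17 (a 1.7x gain) while a bright value of 0.9 is compressed to about 0.64, which is exactly the shadow-lifting behavior wanted for unevenly lit fundus images.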
9. Fu J, Cao L, Wei S, Xu M, Song Y, Li H, You Y. A GAN-based deep enhancer for quality enhancement of retinal images photographed by a handheld fundus camera. Advances in Ophthalmology Practice and Research 2022; 2:100077. PMID: 37846289; PMCID: PMC10577846; DOI: 10.1016/j.aopr.2022.100077.
Abstract
Objective Due to limited imaging conditions, the quality of fundus images is often unsatisfactory, especially for images photographed by handheld fundus cameras. Here, we developed an automated enhancement method based on combining two mirror-symmetric generative adversarial networks (GANs). Methods A total of 1047 retinal images were included. The raw images were enhanced by the GAN-based deep enhancer and by another method based on luminosity and contrast adjustment. All raw and enhanced images were anonymously assessed by three experienced ophthalmologists and classified into six quality levels. Quality classification and quality change were compared, as were detailed reading results for the number of dubiously pathological fundi. Results After GAN enhancement, 42.9% of images increased in quality, 37.5% remained stable, and 19.6% decreased. After excluding images already at the highest level (level 0) before enhancement, a large majority (75.6%) of images increased in quality classification and only a minority (9.3%) decreased. The GAN-based method was superior to the luminosity and contrast adjustment method for quality improvement (P < 0.001). In terms of image reading results, the consistency rate ranged from 86.6% to 95.6%, and for specific disease subtypes both the discrepancy number and discrepancy rate were below 15 and 15%, respectively, for two ophthalmologists. Conclusions Learning the style of high-quality retinal images with the proposed deep enhancer may be an effective way to improve the quality of retinal images photographed by handheld fundus cameras.
Affiliation(s)
- Junxia Fu
  - Beijing Aier Intech Eye Hospital, Beijing, China
  - Aier Eye Hospital Group, Hunan, China
  - Department of Ophthalmology, The Chinese People's Liberation Army General Hospital, Beijing, China
- Lvchen Cao
  - School of Artificial Intelligence, Henan University, Zhengzhou, China
- Shihui Wei
  - Department of Ophthalmology, The Chinese People's Liberation Army General Hospital, Beijing, China
- Ming Xu
  - Aier Eye Hospital Group, Hunan, China
- Yali Song
  - Aier Eye Hospital Group, Hunan, China
- Huiqi Li
  - School of Information and Electronics, Beijing Institute of Technology, Beijing, China
- Yuxia You
  - Beijing Aier Intech Eye Hospital, Beijing, China
  - Aier Eye Hospital Group, Hunan, China
10. Ahn H, Jun I, Seo KY, Kim EK, Kim TI. Artificial Intelligence for the Estimation of Visual Acuity Using Multi-Source Anterior Segment Optical Coherence Tomographic Images in Senile Cataract. Front Med (Lausanne) 2022; 9:871382. PMID: 35655854; PMCID: PMC9152093; DOI: 10.3389/fmed.2022.871382.
Abstract
Purpose To investigate the performance of an artificial intelligence (AI) model that uses multi-source anterior segment optical coherence tomographic (OCT) images to estimate preoperative best-corrected visual acuity (BCVA) in patients with senile cataract. Design Retrospective, cross-instrument validation study. Subjects A total of 2,332 anterior segment images obtained using swept-source OCT, optical biometry for intraocular lens calculation, and a femtosecond laser platform in patients with senile cataract and postoperative BCVA ≥ 0.0 logMAR were included in the training/validation dataset. A total of 1,002 images obtained using optical biometry and another femtosecond laser platform in patients who underwent cataract surgery in 2021 were used for the test dataset. Methods AI modeling was based on an ensemble of Inception-v4 and ResNet. The training/validation dataset was used for model training, and performance was evaluated on the test dataset. The absolute error (AE) between true and estimated preoperative BCVA was classified as ≥ 0.1 logMAR (AE ≥ 0.1) or < 0.1 logMAR (AE < 0.1); AE ≥ 0.1 cases were further divided into underestimation and overestimation groups on the logMAR scale. Outcome Measurements Mean absolute error (MAE), root mean square error (RMSE), mean percentage error (MPE), and the correlation coefficient between true and estimated preoperative BCVA. Results On the test dataset, the MAE, RMSE, and MPE were 0.050 ± 0.130 logMAR, 0.140 ± 0.134 logMAR, and 1.3 ± 13.9%, respectively, and the correlation coefficient was 0.969 (p < 0.001). The percentage of cases with AE ≥ 0.1 was 8.4%. The incidence of postoperative BCVA > 0.1 was 21.4% in the AE ≥ 0.1 group, of which 88.9% were in the underestimation group; the incidence of vision-impairing disease in the underestimation group was 95.7%. Preoperative corneal astigmatism and lens thickness were higher and nuclear cataract was more severe in the AE ≥ 0.1 group than in the AE < 0.1 group (p < 0.001, 0.007, and 0.024, respectively). The longer the axial length and the more severe the cortical/posterior subcapsular opacity, the more the estimated BCVA exceeded the true BCVA. Conclusions The AI model achieved high-level visual acuity estimation in patients with senile cataract. This quantification method encompasses both the visual acuity and cataract severity of the OCT image, the main indications for cataract surgery, showing potential for objective evaluation of cataract severity.
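The outcome measures above are standard regression-error formulas. The sketch below restates them together with the AE ≥ 0.1 flag used to split the error groups; the logMAR values are invented for illustration, not study data.

```python
import numpy as np

def bcva_error_summary(true_logmar, pred_logmar, threshold=0.1):
    """MAE, RMSE, and the fraction of eyes whose absolute estimation
    error reaches the 0.1-logMAR threshold (the study's AE >= 0.1 group).
    Illustrative formulas only, not the study's pipeline."""
    err = np.asarray(pred_logmar, float) - np.asarray(true_logmar, float)
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    frac_large = (np.abs(err) >= threshold).mean()
    return mae, rmse, frac_large

mae, rmse, frac = bcva_error_summary([0.0, 0.1, 0.3, 0.5],
                                     [0.05, 0.1, 0.45, 0.5])
print(mae, rmse, frac)
```

RMSE exceeds MAE whenever errors are unevenly distributed, which is why the study reports both: a low MAE with a much larger RMSE, as here, signals a minority of eyes with large errors rather than uniformly mediocre estimates.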
Affiliation(s)
- Hyunmin Ahn
  - Department of Ophthalmology, Institute of Vision Research, Yonsei University College of Medicine, Seoul, South Korea
- Ikhyun Jun
  - Department of Ophthalmology, Institute of Vision Research, Yonsei University College of Medicine, Seoul, South Korea
  - Corneal Dystrophy Research Institute, Yonsei University College of Medicine, Seoul, South Korea
- Kyoung Yul Seo
  - Department of Ophthalmology, Institute of Vision Research, Yonsei University College of Medicine, Seoul, South Korea
- Eung Kweon Kim
  - Corneal Dystrophy Research Institute, Yonsei University College of Medicine, Seoul, South Korea
  - Saevit Eye Hospital, Goyang, South Korea
- Tae-Im Kim
  - Department of Ophthalmology, Institute of Vision Research, Yonsei University College of Medicine, Seoul, South Korea
  - Corneal Dystrophy Research Institute, Yonsei University College of Medicine, Seoul, South Korea
11. Son KY, Ko J, Kim E, Lee SY, Kim MJ, Han J, Shin E, Chung TY, Lim DH. Deep Learning-Based Cataract Detection and Grading from Slit-Lamp and Retro-Illumination Photographs. Ophthalmology Science 2022; 2:100147. PMID: 36249697; PMCID: PMC9559082; DOI: 10.1016/j.xops.2022.100147.
Abstract
Purpose To develop and validate an automated deep learning (DL)-based artificial intelligence (AI) platform for diagnosing and grading cataracts from slit-lamp and retroillumination lens photographs based on the Lens Opacities Classification System (LOCS) III. Design Cross-sectional study in which a convolutional neural network was trained and tested on slit-lamp and retroillumination lens photographs. Participants One thousand three hundred thirty-five slit-lamp images and 637 retroillumination lens images from 596 patients. Methods Slit-lamp and retroillumination lens photographs were graded by two trained graders using LOCS III. Image datasets were labeled and divided into training, validation, and test datasets. We trained and validated AI platforms using four key strategies: (1) a region detection network to discard redundant information in the data, (2) data augmentation and transfer learning for the small dataset size, (3) generalized cross-entropy loss for dataset bias, and (4) class-balanced loss for class imbalance. The performance of the platform was reinforced with an ensemble of three algorithms: ResNet18, WideResNet50-2, and ResNext50. Main Outcome Measures Diagnostic and LOCS III-based grading prediction performance of the AI platform. Results The AI platform showed robust diagnostic performance (area under the receiver operating characteristic curve [AUC], 0.9992 [95% confidence interval (CI), 0.9986–0.9998] and 0.9994 [95% CI, 0.9989–0.9998]; accuracy, 98.82% [95% CI, 97.7%–99.9%] and 98.51% [95% CI, 97.4%–99.6%]) and LOCS III-based grading prediction performance (AUC, 0.9567 [95% CI, 0.9501–0.9633] and 0.9650 [95% CI, 0.9509–0.9792]; accuracy, 91.22% [95% CI, 89.4%–93.0%] and 90.26% [95% CI, 88.6%–91.9%]) for nuclear opalescence (NO) and nuclear color (NC), respectively, using slit-lamp photographs. For cortical opacity (CO) and posterior subcapsular opacity (PSC), the platform achieved high diagnostic performance (AUC, 0.9680 [95% CI, 0.9579–0.9781] and 0.9465 [95% CI, 0.9348–0.9582]; accuracy, 96.21% [95% CI, 94.4%–98.0%] and 92.17% [95% CI, 88.6%–95.8%]) and good grading prediction performance (AUC, 0.9044 [95% CI, 0.8958–0.9129] and 0.9174 [95% CI, 0.9055–0.9295]; accuracy, 91.33% [95% CI, 89.7%–93.0%] and 87.89% [95% CI, 85.6%–90.2%]) using retroillumination images. Conclusions Our DL-based AI platform yielded accurate detection and grading of NO and NC in 7-level classification and of CO and PSC in 6-level classification, overcoming common limitations of medical databases such as scarce training data and biased label distributions.
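Of the four training strategies, class-balanced loss is the easiest to make concrete. A common formulation (Cui et al., CVPR 2019, weighting by the inverse "effective number" of samples) is assumed here, since the abstract does not give the exact weighting; the class counts are invented for illustration.

```python
import numpy as np

def class_balanced_weights(counts, beta=0.999):
    """Class-balanced loss weights: each class c is weighted by
    (1 - beta) / (1 - beta**n_c), then normalized so the weights
    sum to the number of classes. As n_c grows, the effective
    number of samples saturates, so rare classes are up-weighted."""
    counts = np.asarray(counts, dtype=float)
    weights = (1.0 - beta) / (1.0 - np.power(beta, counts))
    return weights / weights.sum() * len(counts)

# A rare grade (40 images) gets a far larger weight than a common one.
print(class_balanced_weights([2000, 500, 40]))
```

These per-class weights multiply the cross-entropy terms during training, so a misclassified image from a rare LOCS III grade contributes more to the gradient than one from an abundant grade.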
12. Yadav JKPS, Yadav S. Computer-aided diagnosis of cataract severity using retinal fundus images and deep learning. Comput Intell 2022. DOI: 10.1111/coin.12518.
Affiliation(s)
- Jay Kant Pratap Singh Yadav
  - Department of Computer Science & Engineering, Ajay Kumar Garg Engineering College (affiliated to Dr. A.P.J. Abdul Kalam Technical University, Lucknow), Ghaziabad, Uttar Pradesh, India
- Sunita Yadav
  - Department of Computer Science & Engineering, Ajay Kumar Garg Engineering College (affiliated to Dr. A.P.J. Abdul Kalam Technical University, Lucknow), Ghaziabad, Uttar Pradesh, India
13. Gutierrez L, Lim JS, Foo LL, Ng WY, Yip M, Lim GYS, Wong MHY, Fong A, Rosman M, Mehta JS, Lin H, Ting DSJ, Ting DSW. Application of artificial intelligence in cataract management: current and future directions. Eye and Vision 2022; 9:3. PMID: 34996524; PMCID: PMC8739505; DOI: 10.1186/s40662-021-00273-z.
Abstract
The rise of artificial intelligence (AI) has brought breakthroughs in many areas of medicine. In ophthalmology, AI has delivered robust results in the screening and detection of diabetic retinopathy, age-related macular degeneration, glaucoma, and retinopathy of prematurity. Cataract management is another field that can benefit from greater AI application. Cataract is the leading cause of reversible visual impairment with a rising global clinical burden. Improved diagnosis, monitoring, and surgical management are necessary to address this challenge. In addition, patients in large developing countries often suffer from limited access to tertiary care, a problem further exacerbated by the ongoing COVID-19 pandemic. AI on the other hand, can help transform cataract management by improving automation, efficacy and overcoming geographical barriers. First, AI can be applied as a telediagnostic platform to screen and diagnose patients with cataract using slit-lamp and fundus photographs. This utilizes a deep-learning, convolutional neural network (CNN) to detect and classify referable cataracts appropriately. Second, some of the latest intraocular lens formulas have used AI to enhance prediction accuracy, achieving superior postoperative refractive results compared to traditional formulas. Third, AI can be used to augment cataract surgical skill training by identifying different phases of cataract surgery on video and to optimize operating theater workflows by accurately predicting the duration of surgical procedures. Fourth, some AI CNN models are able to effectively predict the progression of posterior capsule opacification and eventual need for YAG laser capsulotomy. These advances in AI could transform cataract management and enable delivery of efficient ophthalmic services. 
The key challenges include ethical management of data, ensuring data security and privacy, demonstrating clinically acceptable performance, improving the generalizability of AI models across heterogeneous populations, and improving the trust of end-users.
Affiliation(s)
- Jane Sujuan Lim
- Singapore Eye Research Institute, Singapore, Singapore; Singapore National Eye Center, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Li Lian Foo
- Singapore Eye Research Institute, Singapore, Singapore; Singapore National Eye Center, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Wei Yan Ng
- Singapore Eye Research Institute, Singapore, Singapore; Singapore National Eye Center, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Michelle Yip
- Singapore Eye Research Institute, Singapore, Singapore; Singapore National Eye Center, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Melissa Hsing Yi Wong
- Singapore National Eye Center, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Allan Fong
- Singapore National Eye Center, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Mohamad Rosman
- Singapore Eye Research Institute, Singapore, Singapore; Singapore National Eye Center, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Jodhbir Singh Mehta
- Singapore Eye Research Institute, Singapore, Singapore; Singapore National Eye Center, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Haotian Lin
- Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Darren Shu Jeng Ting
- Academic Ophthalmology, School of Medicine, University of Nottingham, Nottingham, UK
- Daniel Shu Wei Ting
- Singapore Eye Research Institute, Singapore, Singapore; Singapore National Eye Center, 11 Third Hospital Avenue, Singapore, 168751, Singapore
|
14
|
Guergueb T, Akhloufi MA. Ocular Diseases Detection using Recent Deep Learning Techniques. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY 2021; 2021:3336-3339. [PMID: 34891954 DOI: 10.1109/embc46164.2021.9629763] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Early fundus screening is a cost-effective and efficient approach to reducing ophthalmic disease-related blindness, and manual evaluation is time-consuming. Ophthalmic disease detection studies have shown promising results thanks to advances in deep learning techniques, but the majority are limited to a single disease. In this paper we study various deep learning models for eye disease detection, applying several optimizations. The results show that the best model achieves high scores, with an AUC of 98.31% for six diseases and an AUC of 96.04% for eight diseases.
|
15
|
Xu X, Li J, Guan Y, Zhao L, Zhao Q, Zhang L, Li L. GLA-Net: A global-local attention network for automatic cataract classification. J Biomed Inform 2021; 124:103939. [PMID: 34752858 DOI: 10.1016/j.jbi.2021.103939] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2021] [Revised: 10/02/2021] [Accepted: 10/25/2021] [Indexed: 10/19/2022]
Abstract
Cataract is the leading cause of blindness among all ophthalmic diseases. Convenient and cost-effective early cataract screening is urgently needed to reduce the risk of visual loss. To date, many studies have investigated automatic cataract classification based on fundus images. However, existing methods mainly rely on global image information while ignoring various local and subtle features, even though these local features are highly helpful for identifying cataracts of different severities. To address this limitation, we introduce a deep learning technique that learns multilevel feature representations of the fundus image simultaneously. Specifically, a global-local attention network (GLA-Net) is proposed to handle the cataract classification task, consisting of two levels of subnets: the global-level attention subnet attends to the global structure of the fundus image, while the local-level attention subnet focuses on the discriminative features of specific local regions. These two subnets extract retinal features at different attention levels, which are then combined for the final cataract classification. Our GLA-Net achieves the best performance in all metrics (90.65% detection accuracy, 83.47% grading accuracy, and 81.11% classification accuracy for grades 1 and 2). The experimental results on a real clinical dataset show that the combination of global-level and local-level attention models is effective for cataract screening and holds significant potential for other medical tasks.
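The global-local fusion described in this abstract can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' GLA-Net: the "branches" here are simple intensity statistics standing in for CNN attention subnets, and the function names and region boxes are assumptions.

```python
import numpy as np

def global_branch(img):
    # Global-level descriptor: coarse statistics over the whole image
    # (stand-in for a CNN backbone's global attention features).
    return np.array([img.mean(), img.std()])

def local_branch(img, boxes):
    # Local-level descriptor: the same statistics restricted to candidate
    # regions of interest, capturing subtle local cues.
    feats = []
    for (r0, r1, c0, c1) in boxes:
        patch = img[r0:r1, c0:c1]
        feats.extend([patch.mean(), patch.std()])
    return np.array(feats)

def gla_features(img, boxes):
    # Concatenate global and local descriptors before the final classifier,
    # mirroring the two-subnet fusion described in the abstract.
    return np.concatenate([global_branch(img), local_branch(img, boxes)])

img = np.arange(64, dtype=float).reshape(8, 8)
f = gla_features(img, [(0, 4, 0, 4), (4, 8, 4, 8)])
print(f.shape)  # (6,) -> 2 global + 2x2 local features
```

A downstream classifier would then operate on the concatenated vector, so both coarse structure and region-level detail inform the grade.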
Affiliation(s)
- Xi Xu
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Yu Guan
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Linna Zhao
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Qing Zhao
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Li Zhang
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Li Li
- National Center for Children's Health, Beijing Children's Hospital, Capital Medical University, Beijing, China
|
16
|
Pratap T, Kokil P. Deep neural network based robust computer-aided cataract diagnosis system using fundus retinal images. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102985] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
17
|
Detail-richest-channel based enhancement for retinal image and beyond. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102933] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
|
18
|
Rampat R, Deshmukh R, Chen X, Ting DSW, Said DG, Dua HS, Ting DSJ. Artificial Intelligence in Cornea, Refractive Surgery, and Cataract: Basic Principles, Clinical Applications, and Future Directions. Asia Pac J Ophthalmol (Phila) 2021; 10:268-281. [PMID: 34224467 PMCID: PMC7611495 DOI: 10.1097/apo.0000000000000394] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022] Open
Abstract
Corneal diseases, uncorrected refractive errors, and cataract represent the major causes of blindness globally. The number of refractive surgeries, either cornea- or lens-based, is also on the rise as the demand for perfect vision continues to increase. With the recent advancement and potential promises of artificial intelligence (AI) technologies demonstrated in the realm of ophthalmology, particularly retinal diseases and glaucoma, AI researchers and clinicians are now channeling their focus toward the less explored ophthalmic areas related to the anterior segment of the eye. Conditions that rely on anterior segment imaging modalities, including slit-lamp photography, anterior segment optical coherence tomography, corneal tomography, in vivo confocal microscopy and/or optical biometers, are the most commonly explored areas. These include infectious keratitis, keratoconus, corneal grafts, ocular surface pathologies, preoperative screening before refractive surgery, intraocular lens calculation, and automated refraction, among others. In this review, we aimed to provide a comprehensive update on the utilization of AI in anterior segment diseases, with particular emphasis on the recent advancement in the past few years. In addition, we demystify some of the basic principles and terminologies related to AI, particularly machine learning and deep learning, to help improve the understanding, research and clinical implementation of these AI technologies among the ophthalmologists and vision scientists. As we march toward the era of digital health, guidelines such as CONSORT-AI, SPIRIT-AI, and STARD-AI will play crucial roles in guiding and standardizing the conduct and reporting of AI-related trials, ultimately promoting their potential for clinical translation.
Affiliation(s)
- Rashmi Deshmukh
- Department of Ophthalmology, Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK
- Xin Chen
- School of Computer Science, University of Nottingham, Nottingham, UK
- Daniel S. W. Ting
- Duke-NUS Medical School, National University of Singapore, Singapore
- Singapore National Eye Centre / Singapore Eye Research Institute, Singapore
- Dalia G. Said
- Academic Ophthalmology, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, UK
- Department of Ophthalmology, Queen’s Medical Centre, Nottingham, UK
- Harminder S. Dua
- Academic Ophthalmology, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, UK
- Department of Ophthalmology, Queen’s Medical Centre, Nottingham, UK
- Darren S. J. Ting
- Singapore National Eye Centre / Singapore Eye Research Institute, Singapore
- Academic Ophthalmology, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, UK
- Department of Ophthalmology, Queen’s Medical Centre, Nottingham, UK
|
19
|
Imran A, Li J, Pei Y, Akhtar F, Yang JJ, Dang Y. Automated identification of cataract severity using retinal fundus images. COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING: IMAGING & VISUALIZATION 2020. [DOI: 10.1080/21681163.2020.1806733] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Affiliation(s)
- Azhar Imran
- School of Software Engineering, Beijing University of Technology, Beijing, China
- Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Yan Pei
- Computer Science Division, University of Aizu, Fukushima, Japan
- Faheem Akhtar
- School of Software Engineering, Beijing University of Technology, Beijing, China
- Department of Computer Science, Sukkur IBA University, Sukkur, Pakistan
- Ji-Jiang Yang
- Research Institute of Information Technology, Tsinghua University, Beijing, China
- Yanping Dang
- General Internal Medicine, Beijing Moslem Hospital, Beijing, China
|
20
|
Zhou Y, Li G, Li H. Automatic Cataract Classification Using Deep Neural Network With Discrete State Transition. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:436-446. [PMID: 31295110 DOI: 10.1109/tmi.2019.2928229] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Cataract, the clouding of the lens, affects vision and is the leading cause of blindness in the world's population. Accurate and convenient cataract detection and severity evaluation would improve this situation. Automatic cataract detection and grading methods are proposed in this paper. With prior knowledge, improved Haar features and visible structure features are combined, and multilayer perceptrons with discrete state transition (DST-MLP) or exponential DST (EDST-MLP) are designed as classifiers. Without prior knowledge, residual neural networks with DST (DST-ResNet) or EDST (EDST-ResNet) are proposed. Whether prior knowledge is used or not, the proposed DST and EDST strategies prevent overfitting and reduce storage memory during network training and implementation, and neural networks with these strategies achieve state-of-the-art accuracy in cataract detection and grading. The experimental results indicate that combined features always achieve better performance than a single type of feature, and that classification methods with feature extraction based on prior knowledge are more suitable for complicated medical image classification tasks. These analyses can provide constructive advice for other medical image processing applications.
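The discrete-state idea behind DST can be sketched as constraining weights to a fixed grid so each update moves a weight only between discrete states, which bounds storage. This is a simplified illustration of the concept, not the paper's DST/EDST rules; the grid spacing, learning rate, and update scheme here are assumptions.

```python
import numpy as np

def dst_project(w, delta=0.1):
    # Snap weights to the nearest discrete state on a grid of spacing delta.
    return np.round(w / delta) * delta

def dst_update(w, grad, lr=0.1, delta=0.1):
    # Quantize the gradient step itself, so every weight transitions only
    # between discrete states and never leaves the grid.
    step = dst_project(-lr * grad, delta)
    return dst_project(w + step, delta)

w = np.array([0.23, -0.41, 0.07])
w2 = dst_update(w, np.array([1.0, -1.0, 0.0]))
print(w2)  # all values lie on the 0.1 grid
```

Because every stored weight is an integer multiple of the grid spacing, weights can be kept as small integers, which is one way such strategies reduce storage memory.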
|
21
|
Cao L, Li H. Enhancement of blurry retinal image based on non-uniform contrast stretching and intensity transfer. Med Biol Eng Comput 2020; 58:483-496. [PMID: 31897799 DOI: 10.1007/s11517-019-02106-7] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2019] [Accepted: 12/18/2019] [Indexed: 11/26/2022]
Abstract
Proper contrast and sufficient illuminance are important for clearly identifying retinal structures, yet the required quality cannot always be guaranteed, chiefly because of the acquisition process and disease. To ensure effective enhancement, two solutions are developed for blurry retinal images: one for sufficient and one for insufficient illuminance. The proposed contrast stretching and intensity transfer are the main steps in both solutions. The contrast stretching is based on base-intensity removal and non-uniform addition. We assume that a base-intensity exists in an image, which mainly supports the basic illuminance but contributes little texture information. The base-intensity is estimated by a constrained Gaussian function and then removed. A non-uniform addition using a compressed Gamma map is further developed to improve the contrast. Additionally, an effective intensity transfer strategy is introduced, which provides the required illuminance for a single channel after contrast stretching; performing the intensity transfer on all three channels achieves color correction. Results show that the proposed solutions effectively improve contrast and illuminance, and good visual perception is obtained for quality-degraded retinal images. Illustration of contrast stretching is based on a single colour channel.
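The base-intensity-removal-plus-non-uniform-addition pipeline can be sketched for a single channel as follows. This is a rough illustration under stated assumptions: a box blur stands in for the paper's constrained Gaussian base-intensity estimate, and the `gamma` and `gain` parameters of the compressed gamma map are invented for the sketch.

```python
import numpy as np

def box_blur(img, k):
    # Crude low-pass base-intensity estimate (stand-in for the paper's
    # constrained Gaussian fit): mean over a k x k neighbourhood.
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def stretch(img, k=5, gamma=0.5, gain=0.6):
    # 1) Remove the base intensity, which carries illumination but
    #    contributes little texture.
    base = box_blur(img, k)
    detail = img - base
    # 2) Non-uniform addition: add back a compressed gamma map of the base,
    #    boosting dark regions more than bright ones.
    add = gain * (base / max(base.max(), 1e-6)) ** gamma
    return np.clip(detail + add, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((16, 16))   # one colour channel in [0, 1]
out = stretch(img)
```

Repeating the intensity transfer per channel, as the abstract notes, then yields the colour-corrected image.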
Affiliation(s)
- Lvchen Cao
- School of Information and Electronics, Beijing Institute of Technology, Beijing, 100081, China
- Huiqi Li
- School of Information and Electronics, Beijing Institute of Technology, Beijing, 100081, China
|
22
|
Zhang H, Niu K, Xiong Y, Yang W, He Z, Song H. Automatic cataract grading methods based on deep learning. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2019; 182:104978. [PMID: 31450174 DOI: 10.1016/j.cmpb.2019.07.006] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/14/2019] [Revised: 06/20/2019] [Accepted: 07/04/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVE The shortage of ophthalmologists in rural areas of China means many cataract patients do not receive timely diagnosis and effective treatment. We develop an algorithm and platform to automatically diagnose and grade cataract based on fundus images of patients. This method can help the government assist poor populations more accurately. METHODS The novel six-level cataract grading method proposed in this paper focuses on multi-feature fusion based on stacking. We extract two kinds of features that effectively distinguish different levels of cataract: high-level features extracted from a residual network (ResNet18), and texture features extracted from the gray level co-occurrence matrix (GLCM). A framework is then proposed to automatically grade cataract from the extracted features. In the framework, two support vector machine (SVM) classifiers are used as base-learners to obtain probability outputs for each fundus image, and a fully connected neural network (FCNN) consisting of two fully-connected layers is used as the meta-learner to output the final classification result. RESULTS The accuracy of six-level grading achieved by the proposed method is up to 92.66% on average, the highest reaching 93.33%. The proposed method achieves 94.75% accuracy on four-level cataract grading, at least 1.75% higher than existing methods. CONCLUSIONS The six-level classification experiments show that the multi-feature stacking proposed in this paper achieves higher grading performance and lower volatility than grading with high-level features or texture features alone. Applying our algorithm to a four-level cataract grading system also shows higher accuracy compared with previous reports.
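The stacking step described above (two base-learner probability vectors combined by a fully connected meta-learner) can be sketched with numpy. This is an illustrative sketch, not the authors' trained model: the probability values are made up, and the identity-stacked weight matrix simply averages the two base-learners where a trained FCNN would learn its weights.

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Probability outputs of the two base-learners for two fundus images
# (stand-ins for the SVMs fed with ResNet18 deep features and GLCM
# texture features, respectively; values are illustrative).
p_deep = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.6, 0.3]])
p_texture = np.array([[0.6, 0.3, 0.1],
                      [0.2, 0.5, 0.3]])

# Meta-learner: a fully connected layer over the stacked probabilities.
x = np.hstack([p_deep, p_texture])     # shape (2, 6): stacked meta-features
W = np.vstack([np.eye(3), np.eye(3)])  # shape (6, 3): averages the learners
probs = softmax(x @ W)
grades = probs.argmax(axis=1)
print(grades)  # [0 1]
```

A real meta-learner would be trained on held-out base-learner outputs, letting it weight the deep and texture views differently per grade.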
Affiliation(s)
- Hongyan Zhang
- Beijing Tongren Eye Center, Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Key Laboratory of Ophthalmology and Visual Sciences, National Engineering Research Center for Ophthalmology, Beijing, China
- Kai Niu
- Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Yanmin Xiong
- Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Weihua Yang
- The First People's Hospital of Huzhou, Huzhou, Zhejiang, China
- ZhiQiang He
- Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing 100876, China; College of Big Data and Information Engineering, Guizhou University, Guizhou, China
- Hongxin Song
- Beijing Tongren Eye Center, Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Key Laboratory of Ophthalmology and Visual Sciences, National Engineering Research Center for Ophthalmology, Beijing, China
|
23
|
|
24
|
Abstract
PURPOSE OF REVIEW To provide a comprehensive summary of past cataract grading systems, how they have shaped current grading systems, and the developing technologies that are being used to assess and grade cataracts. RECENT FINDINGS This summary of cataract grading systems examines the development and limitations that existed in past grading systems and how they have shaped the grading systems of present time. The Lens Opacities Classification System III (LOCS III) system is currently used both clinically and for research purposes. Recent advancements in imaging technologies have allowed researchers to create automatic systems that can locate lens landmarks and provide cataract grading scores that correlate well with LOCS III clinical grades. Utilizing existing technologies, researchers demonstrate that fundus photography and optical coherence tomography can be used as cataract grading tools. Lastly, deep learning has proved to be a powerful tool that can provide objective and reproducible cataract grading scores. SUMMARY Cataract grading schemes have provided ophthalmologists with a way to communicate clinical findings and to compare new developments in diagnostic technologies. As technologies advance, cataract grading can become more objective and standardized, allowing for improved patient care.
|
25
|
Mitra A, Roy S, Roy S, Setua SK. Enhancement and restoration of non-uniform illuminated Fundus Image of Retina obtained through thin layer of cataract. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2018; 156:169-178. [PMID: 29428069 DOI: 10.1016/j.cmpb.2018.01.001] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/24/2017] [Revised: 01/07/2018] [Accepted: 01/07/2018] [Indexed: 06/08/2023]
Abstract
BACKGROUND AND OBJECTIVES Retinal fundus images are used extensively, either manually or without human intervention, to identify and analyze various diseases. Due to the complex imaging arrangement, there is large radiance, reflectance and contrast inconsistency within and across images. METHOD A novel method is proposed, based on a physical model of cataract, to reduce the blurriness introduced when the fundus camera acquires the image through a thin layer of cataract. After the blurriness reduction, an enhancement procedure is proposed that targets contrast improvement without introducing artifacts. Because the thickness of the cataract is unevenly distributed, the cataract surroundings are first predicted in the frequency domain. Second, the resulting image is enhanced by intensity histogram equalization in an adapted Hue Saturation Intensity (HSI) color space, so that the gamut problem is avoided. The final image, with suitable color and contrast, is obtained using the proposed max-min color correction approach. RESULTS The results indicate not only that the proposed method can more effectively enhance non-uniform retinal images obtained through a thin layer of cataract, but also that the resulting images show appropriate brightness and saturation and maintain complete color space information. The proposed enhancement method has been tested on openly available datasets, and the results were evaluated against standard image enhancement algorithms and the cataract removal method, showing noticeable improvement over existing methods. CONCLUSIONS Cataract often prevents the clinician from objectively evaluating fundus features and also affects subjective tests. Enhancement and restoration of non-uniformly illuminated retinal fundus images obtained through a thin layer of cataract is shown here to be potentially beneficial.
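The reason equalizing in HSI space avoids the gamut problem can be shown concretely: if only the intensity channel is modified and R, G, B are rescaled by one common factor, the channel ratios (hence hue and saturation) are preserved. The sketch below uses a simplified HSI model (intensity = mean of R, G, B) and is an illustration of that principle, not the paper's exact pipeline.

```python
import numpy as np

def equalize_intensity(rgb):
    # Intensity channel of a simplified HSI model.
    I = rgb.mean(axis=2)
    # Histogram equalization of I alone via its cumulative distribution.
    hist, _ = np.histogram(I, bins=256, range=(0.0, 1.0))
    cdf = hist.cumsum() / hist.sum()
    I_eq = cdf[np.clip((I * 255).astype(int), 0, 255)]
    # One common per-pixel scale factor keeps the R:G:B ratios, so hue and
    # saturation are untouched and no colour shifts out of gamut balance.
    scale = I_eq / np.maximum(I, 1e-6)
    return np.clip(rgb * scale[..., None], 0.0, 1.0)

rng = np.random.default_rng(1)
img = rng.random((8, 8, 3))   # RGB image in [0, 1]
out = equalize_intensity(img)
```

Equalizing each RGB channel independently, by contrast, would change the channel ratios and distort colour, which is exactly the failure mode the HSI route sidesteps.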
Affiliation(s)
- Anirban Mitra
- Department of Computer Science & Engineering, Academy of Technology, Adisaptagram, West Bengal 712121, India; Department of Computer Science & Engineering, Calcutta University Technology Campus, JD-2, Sector-III, Salt Lake, Kolkata 700098, India
- Sudipta Roy
- Department of Computer Science & Engineering, Ganpat University, Kherva, Mehsana, Gujarat 384012, India; Department of Computer Science & Engineering, Calcutta University Technology Campus, JD-2, Sector-III, Salt Lake, Kolkata 700098, India
- Somais Roy
- Department of Computer Science & Engineering, Calcutta University Technology Campus, JD-2, Sector-III, Salt Lake, Kolkata 700098, India
- Sanjit Kumar Setua
- Department of Computer Science & Engineering, Calcutta University Technology Campus, JD-2, Sector-III, Salt Lake, Kolkata 700098, India
|