1
Leitão Guerra RL, Leitão Guerra CL, Bastos Meirelles MG, Sandoval Barbosa GC, Novais EA, Badaró E, Adami Lucatto LF, Roisman L. Exploring Retinal Conditions Through Blue Light Reflectance Imaging. Prog Retin Eye Res 2025:101326. [PMID: 39756669] [DOI: 10.1016/j.preteyeres.2024.101326]
Abstract
Blue light reflectance (BLR) imaging offers a non-invasive, cost-effective method for evaluating retinal structures by analyzing the reflectance and absorption characteristics of the inner retinal layers. By leveraging blue light's interaction with retinal tissues, BLR enhances visualization beyond the retinal nerve fiber layer, improving detection of structures such as the outer plexiform layer and macular pigment. Its diagnostic utility has been demonstrated in several retinal conditions: hyperreflectance in early macular telangiectasia, hyporeflectance in non-perfused areas indicative of ischemia, identification of pseudodrusen patterns (notably the ribbon type), and detection of peripheral retinal tears and degenerative retinoschisis in eyes with reduced retinal pigment epithelial pigmentation. Best practices for image acquisition and interpretation are discussed, with an emphasis on standardization to minimize variability. Common artifacts and mitigation strategies are also addressed to ensure image reliability. BLR's clinical utility, limitations, and future research directions are highlighted, particularly its potential for automated image analysis and quantitative assessment. Different BLR acquisition methods, such as fundus photography, confocal scanning laser ophthalmoscopy, and broad line fundus imaging, are evaluated for their respective advantages and limitations. As research advances, BLR's integration into multimodal workflows is expected to improve early detection and precise monitoring of retinal diseases.
Affiliation(s)
- Ricardo Luz Leitão Guerra
- Department of Ophthalmology Leitão Guerra, Oftalmologia (Salvador, Brazil), Rua Rio de São Pedro, no 256 Graça. Salvador BA, CEP 40.150-350, Brazil; Orbit Ophthalmo Learning, Rua Rio de São Pedro, no 256 Graça. Salvador BA, CEP 40.150-350, Brazil.
- Cezar Luz Leitão Guerra
- Department of Ophthalmology Leitão Guerra, Oftalmologia (Salvador, Brazil), Rua Rio de São Pedro, no 256 Graça. Salvador BA, CEP 40.150-350, Brazil
- Mariana Gouveia Bastos Meirelles
- Department of Ophthalmology Leitão Guerra, Oftalmologia (Salvador, Brazil), Rua Rio de São Pedro, no 256 Graça. Salvador BA, CEP 40.150-350, Brazil
- Eduardo Amorim Novais
- Orbit Ophthalmo Learning, Rua Rio de São Pedro, no 256 Graça. Salvador BA, CEP 40.150-350, Brazil
- Emmerson Badaró
- Orbit Ophthalmo Learning, Rua Rio de São Pedro, no 256 Graça. Salvador BA, CEP 40.150-350, Brazil
- Luiz Roisman
- Orbit Ophthalmo Learning, Rua Rio de São Pedro, no 256 Graça. Salvador BA, CEP 40.150-350, Brazil
2
Zhang S, Webers CAB, Berendschot TTJM. Computational single fundus image restoration techniques: a review. Front Ophthalmol 2024; 4:1332197. [PMID: 38984141] [PMCID: PMC11199880] [DOI: 10.3389/fopht.2024.1332197]
Abstract
Fundus cameras are widely used by ophthalmologists for monitoring and diagnosing retinal pathologies. Unfortunately, no optical system is perfect, and the visibility of retinal images can be greatly degraded due to the presence of problematic illumination, intraocular scattering, or blurriness caused by sudden movements. To improve image quality, different retinal image restoration/enhancement techniques have been developed, which play an important role in improving the performance of various clinical and computer-assisted applications. This paper gives a comprehensive review of these restoration/enhancement techniques, discusses their underlying mathematical models, and shows how they may be effectively applied in real-life practice to increase the visual quality of retinal images for potential clinical applications including diagnosis and retinal structure recognition. All three main topics of retinal image restoration/enhancement techniques, i.e., illumination correction, dehazing, and deblurring, are addressed. Finally, some considerations about challenges and the future scope of retinal image restoration/enhancement techniques will be discussed.
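For orientation, below is a minimal Python/OpenCV sketch of one technique from the first of these families: illumination correction by dividing out a smooth background estimate. The function name, Gaussian sigma, and clipping values are illustrative choices of mine, not taken from the review.

```python
import cv2
import numpy as np

def correct_illumination(bgr, sigma=60):
    """Divide out a smooth per-channel illumination field estimated by heavy blurring."""
    img = bgr.astype(np.float32) / 255.0
    # A large-sigma Gaussian blur approximates the slowly varying illumination field.
    illum = cv2.GaussianBlur(img, (0, 0), sigmaX=sigma, sigmaY=sigma)
    corrected = img / np.clip(illum, 1e-3, None)
    corrected /= corrected.max()          # rescale so bright regions do not clip
    return (corrected * 255).astype(np.uint8)

# enhanced = correct_illumination(cv2.imread("fundus.png"))   # hypothetical file name
```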
Affiliation(s)
- Shuhe Zhang
- University Eye Clinic Maastricht, Maastricht University Medical Center, Maastricht, Netherlands
- Carroll A B Webers
- University Eye Clinic Maastricht, Maastricht University Medical Center, Maastricht, Netherlands
- Tos T J M Berendschot
- University Eye Clinic Maastricht, Maastricht University Medical Center, Maastricht, Netherlands
3
Zhang S, Mohan A, Webers CAB, Berendschot TTJM. MUTE: A multilevel-stimulated denoising strategy for single cataractous retinal image dehazing. Med Image Anal 2023; 88:102848. [PMID: 37263110] [DOI: 10.1016/j.media.2023.102848]
Abstract
In this research, we studied the duality between cataractous retinal image dehazing and image denoising and proposed that the dehazing task for cataractous retinal images can be achieved by combining image denoising with a sigmoid function. To do so, we introduced the double-pass fundus reflection model in the YPbPr color space and developed a multilevel stimulated denoising strategy termed MUTE. The transmission matrix of the cataract layer is expressed as the superposition of denoised raw images of different levels, weighted by pixel-wise sigmoid functions. We further designed an intensity-based cost function that guides the updating of the model parameters. They are updated by gradient descent with adaptive momentum estimation, which gives the final refined transmission matrix of the cataract layer. We tested our method on cataractous retinal images from both public and proprietary databases and compared its performance with other state-of-the-art enhancement methods. Both visual and objective assessments show the superiority of the proposed method. We further demonstrated three potential applications, including blood vessel segmentation, retinal image registration, and diagnosis with enhanced images, that may largely benefit from the proposed method.
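As a rough, non-authoritative illustration of the idea as stated in the abstract, the sketch below forms a transmission-like map as a pixel-wise sigmoid-weighted superposition of progressively smoothed versions of the input. The paper's actual denoiser, level count, sigmoid parameters, and cost-function refinement are not reproduced; Gaussian blurring merely stands in for the denoising step.

```python
import cv2
import numpy as np

def sigmoid(x, k=10.0, x0=0.5):
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

def multilevel_transmission(gray, n_levels=4):
    """Sigmoid-weighted superposition of increasingly smoothed (stand-in 'denoised') images."""
    img = gray.astype(np.float32) / 255.0
    levels = [cv2.GaussianBlur(img, (0, 0), sigmaX=2.0 ** i) for i in range(n_levels)]
    weights = [sigmoid(lv) for lv in levels]                  # pixel-wise weight per level
    t = np.sum([w * lv for w, lv in zip(weights, levels)], axis=0)
    t /= np.sum(weights, axis=0) + 1e-6
    return np.clip(t, 0.05, 1.0)                              # keep the transmission map bounded

# Recovery would then follow the usual J = (I - A) / t + A form, with A and t
# refined by gradient descent in the actual method.
```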
Affiliation(s)
- Shuhe Zhang
- University Eye Clinic Maastricht, Maastricht University Medical Center +, P.O. Box 5800, Maastricht 6202 AZ, The Netherlands.
- Ashwin Mohan
- University Eye Clinic Maastricht, Maastricht University Medical Center +, P.O. Box 5800, Maastricht 6202 AZ, The Netherlands
- Carroll A B Webers
- University Eye Clinic Maastricht, Maastricht University Medical Center +, P.O. Box 5800, Maastricht 6202 AZ, The Netherlands
- Tos T J M Berendschot
- University Eye Clinic Maastricht, Maastricht University Medical Center +, P.O. Box 5800, Maastricht 6202 AZ, The Netherlands
4
Priyadharsini C, Jagadeesh Kannan R. Retinal image enhancement based on color dominance of image. Sci Rep 2023; 13:7172. [PMID: 37138000] [PMCID: PMC10156681] [DOI: 10.1038/s41598-023-34212-w]
Abstract
Real-time fundus images captured to detect multiple diseases are prone to quality issues such as uneven illumination and noise, which reduce the visibility of anomalies. Enhancing retinal fundus images is therefore essential for a better prediction rate of eye diseases. In this paper, we propose a Lab color space-based enhancement technique for retinal images. Existing research does not consider the relation between the color spaces of the fundus image when selecting a specific channel for retinal image enhancement. Our unique contribution in this work is to use the color dominance of an image to quantify the distribution of information in the blue channel and to perform enhancement in Lab space, followed by a series of steps to optimize overall brightness and contrast. The test set of the Retinal Fundus Multi-disease Image Dataset is used to evaluate the performance of the proposed enhancement technique in identifying the presence or absence of retinal abnormality. The proposed technique achieved an accuracy of 89.53%.
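A minimal sketch of the two stated ingredients, assuming OpenCV: a blue-channel dominance measure and a Lab-space contrast enhancement. The dominance threshold and CLAHE settings below are placeholders of mine, not the paper's values.

```python
import cv2
import numpy as np

def blue_dominance(bgr):
    """Fraction of the image's total intensity carried by the blue channel."""
    b, g, r = cv2.split(bgr.astype(np.float32))
    total = b.sum() + g.sum() + r.sum() + 1e-6
    return b.sum() / total

def enhance_in_lab(bgr, clip_limit=2.0):
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(8, 8))
    l = clahe.apply(l)                    # boost local contrast on lightness only
    return cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

# bgr = cv2.imread("fundus.png")
# if blue_dominance(bgr) < 0.25:          # illustrative threshold, not the paper's
#     bgr = enhance_in_lab(bgr, clip_limit=3.0)
```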
Affiliation(s)
- Priyadharsini C
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamilnadu, 600127, India
- Jagadeesh Kannan R
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamilnadu, 600127, India
5
Zhang S, Webers CAB, Berendschot TTJM. Luminosity rectified blind Richardson-Lucy deconvolution for single retinal image restoration. Comput Methods Programs Biomed 2023; 229:107297. [PMID: 36563648] [DOI: 10.1016/j.cmpb.2022.107297]
Abstract
BACKGROUND AND OBJECTIVE: Due to imperfect imaging conditions, retinal images can be degraded by uneven or insufficient illumination and by blurriness caused by optical aberrations and unintentional motion. Degraded images reduce the effectiveness of diagnosis by an ophthalmologist. To restore image quality, in this research we propose the luminosity rectified Richardson-Lucy (LRRL) blind deconvolution framework for single retinal image restoration. METHODS: We established an image formation model based on the double-pass fundus reflection feature and developed a differentiable non-convex cost function that jointly achieves illumination correction and blind deconvolution. To solve this non-convex optimization problem, we derived the closed-form expression of the gradients and used gradient descent with Nesterov-accelerated adaptive momentum estimation to accelerate the optimization, which is more efficient than the traditional half-quadratic splitting method. RESULTS: The LRRL was tested on 1719 images from three public databases. Four image quality metrics, namely image definition, image sharpness, image entropy, and image multiscale contrast, were used for objective assessment. The LRRL was compared against state-of-the-art retinal image blind deconvolution methods. CONCLUSIONS: Our LRRL corrects problematic illumination and improves the clarity of the retinal image simultaneously, showing its superiority in terms of restoration quality and implementation efficiency. The MATLAB code is available on GitHub.
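For context, the sketch below shows the textbook (non-blind) Richardson-Lucy update that LRRL builds on; the paper's blind PSF estimation, luminosity rectification, and accelerated optimizer are not shown, and the box-blur PSF is only an illustrative assumption.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=30, eps=1e-7):
    """Standard RL: estimate <- estimate * [ (observed / (estimate * psf)) convolved with flipped psf ]."""
    estimate = np.full_like(observed, 0.5, dtype=np.float64)
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)
        estimate *= fftconvolve(ratio, psf_flip, mode="same")
    return np.clip(estimate, 0, 1)

# psf = np.ones((5, 5)) / 25.0                     # illustrative box-blur kernel
# restored = richardson_lucy(blurred_gray, psf)    # blurred_gray: float image in [0, 1]
```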
Affiliation(s)
- Shuhe Zhang
- University Eye Clinic Maastricht, Maastricht University Medical Center +, P.O. Box 5800, Maastricht, AZ 6202, the Netherlands.
- Carroll A B Webers
- University Eye Clinic Maastricht, Maastricht University Medical Center +, P.O. Box 5800, Maastricht, AZ 6202, the Netherlands
- Tos T J M Berendschot
- University Eye Clinic Maastricht, Maastricht University Medical Center +, P.O. Box 5800, Maastricht, AZ 6202, the Netherlands
6
Han R, Tang C, Xu M, Liang B, Wu T, Lei Z. Enhancement method with naturalness preservation and artifact suppression based on an improved Retinex variational model for color retinal images. J Opt Soc Am A Opt Image Sci Vis 2023; 40:155-164. [PMID: 36607085] [DOI: 10.1364/josaa.474020]
Abstract
Retinal images are widely used for the diagnosis of various diseases. However, low-quality retinal images with uneven illumination, low contrast, or blurring may seriously interfere with diagnosis by ophthalmologists. This study proposes an enhancement method for low-quality color retinal images. First, an improved variational Retinex model for color retinal images is proposed and applied to each channel of the RGB color space to obtain the illuminance and reflectance layers. Subsequently, the Naka-Rushton equation is introduced to correct the illumination layer, and an enhancement operator is constructed to improve the clarity of the reflectance layer. Finally, the corrected illuminance and enhanced reflectance are recombined. Contrast-limited adaptive histogram equalization is introduced to further improve clarity and contrast. To demonstrate its effectiveness, the proposed method is tested on 527 images from four publicly available datasets and 40 local clinical images from Tianjin Eye Hospital (China). Experimental results show that the proposed method outperforms four other enhancement methods and has clear advantages in naturalness preservation and artifact suppression.
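A minimal sketch of the Naka-Rushton style illumination correction mentioned above (with exponent n = 1, i.e. R = I / (I + sigma)), using a simple Gaussian-blur illumination estimate in place of the paper's variational Retinex decomposition; the smoothing sigma and semi-saturation constant are illustrative assumptions.

```python
import cv2
import numpy as np

def naka_rushton(illum, sigma=None):
    """R = I / (I + sigma); sigma defaults to the mean illumination."""
    if sigma is None:
        sigma = float(illum.mean())
    return illum / (illum + sigma + 1e-6)

def correct_channel(channel):
    img = channel.astype(np.float32) / 255.0
    illum = cv2.GaussianBlur(img, (0, 0), sigmaX=30)   # crude illumination estimate
    reflectance = img / np.clip(illum, 1e-3, None)
    corrected = naka_rushton(illum) * reflectance      # recombine corrected layers
    return np.clip(corrected * 255, 0, 255).astype(np.uint8)

# enhanced = cv2.merge([correct_channel(c) for c in cv2.split(cv2.imread("fundus.png"))])
```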
7
Toptaş B, Hanbay D. Separation of arteries and veins in retinal fundus images with a new CNN architecture. Comput Methods Biomech Biomed Eng Imaging Vis 2022. [DOI: 10.1080/21681163.2022.2151066]
Affiliation(s)
- Buket Toptaş
- Computer Engineering Department, Engineering and Natural Science Faculty, Bandırma Onyedi Eylül University, Balıkesir, Turkey
- Davut Hanbay
- Computer Engineering Department, Engineering Faculty, Inonu University, Malatya, Turkey
8
Qayyum A, Sultani W, Shamshad F, Tufail R, Qadir J. Single-shot retinal image enhancement using untrained and pretrained neural networks priors integrated with analytical image priors. Comput Biol Med 2022; 148:105879. [PMID: 35863248] [DOI: 10.1016/j.compbiomed.2022.105879]
Abstract
Retinal images acquired using fundus cameras are often visually blurred due to imperfect imaging conditions, refractive medium turbidity, and motion blur. In addition, ocular diseases such as cataracts also result in blurred retinal images. The presence of blur in retinal fundus images reduces the effectiveness of diagnosis by an expert ophthalmologist or a computer-aided detection/diagnosis system. In this paper, we put forward a single-shot deep image prior (DIP)-based approach for retinal image enhancement. Unlike typical deep learning-based approaches, our method does not require any training data. Instead, our DIP-based method can learn the underlying image prior while using a single degraded image. To perform retinal image enhancement, we frame it as a layer decomposition problem and investigate the use of two well-known analytical priors, i.e., the dark channel prior (DCP) and the bright channel prior (BCP), for atmospheric light estimation. We show that both untrained and pretrained neural networks can be used to generate an enhanced image from only a single degraded image. The proposed approach is time- and memory-efficient, which makes the solution feasible for real-world resource-constrained environments. We evaluate our proposed framework quantitatively on five datasets using three widely used metrics and complement that with a subjective qualitative assessment of the enhancement by two expert ophthalmologists. For instance, our method achieved average PSNR, SSIM, and BRISQUE values of 40.41, 0.97, and 34.2, respectively, for untrained CDIPs coupled with DCP, and 40.22, 0.98, and 36.38, respectively, for untrained CDIPs coupled with BCP. Our extensive experimental comparison with several competitive baselines on public and non-public proprietary datasets validates the proposed ideas and framework.
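The dark channel prior (DCP) invoked here for atmospheric light estimation is the standard formulation from the dehazing literature; the sketch below shows that standard computation only, not the paper's DIP networks or layer-decomposition pipeline, and the patch size and top-pixel fraction are conventional defaults rather than the paper's settings.

```python
import cv2
import numpy as np

def dark_channel(bgr, patch=15):
    """Per-pixel minimum over color channels, followed by a minimum filter over a patch."""
    min_rgb = np.min(bgr.astype(np.float32) / 255.0, axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def estimate_atmospheric_light(bgr, top_fraction=0.001):
    """Average the pixels with the brightest dark-channel values to estimate A."""
    dc = dark_channel(bgr)
    flat = dc.ravel()
    n_top = max(1, int(top_fraction * flat.size))
    idx = np.argpartition(flat, -n_top)[-n_top:]
    pixels = (bgr.astype(np.float32) / 255.0).reshape(-1, 3)[idx]
    return pixels.mean(axis=0)            # one value per color channel

# A = estimate_atmospheric_light(cv2.imread("fundus.png"))
```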
Affiliation(s)
- Adnan Qayyum
- Information Technology University of the Punjab, Lahore, Pakistan
- Waqas Sultani
- Information Technology University of the Punjab, Lahore, Pakistan
- Fahad Shamshad
- Information Technology University of the Punjab, Lahore, Pakistan
9
Deng Z, Cai Y, Chen L, Gong Z, Bao Q, Yao X, Fang D, Yang W, Zhang S, Ma L. RFormer: Transformer-Based Generative Adversarial Network for Real Fundus Image Restoration on a New Clinical Benchmark. IEEE J Biomed Health Inform 2022; 26:4645-4655. [PMID: 35767498] [DOI: 10.1109/jbhi.2022.3187103]
Abstract
Ophthalmologists use fundus images to screen and diagnose eye diseases. However, differences in equipment and ophthalmologists introduce large variations in the quality of fundus images. Low-quality (LQ), degraded fundus images easily lead to uncertainty in clinical screening and generally increase the risk of misdiagnosis. Thus, real fundus image restoration is worth studying. Unfortunately, no real clinical benchmark has been established for this task so far. In this paper, we investigate the real clinical fundus image restoration problem. First, we establish a clinical dataset, Real Fundus (RF), including 120 low- and high-quality (HQ) image pairs. Then we propose a novel Transformer-based Generative Adversarial Network (RFormer) to restore the real degradation of clinical fundus images. The key component in our network is the Window-based Self-Attention Block (WSAB), which captures non-local self-similarity and long-range dependencies. To produce more visually pleasing results, a Transformer-based discriminator is introduced. Extensive experiments on our clinical benchmark show that the proposed RFormer significantly outperforms state-of-the-art (SOTA) methods. In addition, experiments on downstream tasks such as vessel segmentation and optic disc/cup detection demonstrate that our proposed RFormer benefits clinical fundus image analysis and applications. The dataset, code, and models will be made publicly available at https://github.com/dengzhuo-AI/Real-Fundus.
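To make the window-based self-attention idea concrete, here is a generic Swin-style sketch in PyTorch: partition the feature map into non-overlapping windows and run standard multi-head attention within each window. This is not the RFormer implementation (which the authors release at the linked repository); the window size, head count, and use of nn.MultiheadAttention are assumptions for illustration.

```python
import torch
import torch.nn as nn

def window_partition(x, ws):
    """(B, H, W, C) -> (B * num_windows, ws*ws, C) non-overlapping windows."""
    B, H, W, C = x.shape
    x = x.reshape(B, H // ws, ws, W // ws, ws, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)

class WindowSelfAttention(nn.Module):
    def __init__(self, dim, num_heads=4, ws=8):
        super().__init__()
        self.ws = ws
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                          # x: (B, H, W, C), H and W divisible by ws
        B, H, W, C = x.shape
        win = window_partition(x, self.ws)
        out, _ = self.attn(win, win, win)          # attention restricted to each window
        out = out.reshape(B, H // self.ws, W // self.ws, self.ws, self.ws, C)
        return out.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)

# y = WindowSelfAttention(dim=64)(torch.randn(1, 64, 64, 64))
```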
10
A Novel Un-Supervised GAN for Fundus Image Enhancement with Classification Prior Loss. Electronics 2022. [DOI: 10.3390/electronics11071000]
Abstract
Fundus images captured for clinical diagnosis usually suffer from degradation caused by variation in equipment, operators, or environment. These degraded fundus images need to be enhanced to achieve better diagnosis and improve the results of downstream tasks. As there are no paired low- and high-quality fundus images, existing methods mainly focus on supervised or semi-supervised learning for color fundus image enhancement (CFIE) by utilizing synthetic image pairs. Consequently, domain gaps arise between real and synthetic images. In existing unsupervised methods, the most important small-scale pathological features and structural information in degraded fundus images are prone to being erased after enhancement. To solve these problems, an unsupervised GAN is proposed for CFIE tasks, using adversarial training to enhance low-quality fundus images. Synthetic image pairs are no longer required during training. A specially designed U-Net with skip connections in our enhancement network effectively removes degradation factors while preserving pathological features and structural information. Global and local discriminators adopted in the GAN lead to better illumination uniformity in the enhanced fundus image. To further improve the visual quality of enhanced fundus images, a novel non-reference loss function based on a pretrained fundus image quality classification network was designed to guide the enhancement network to produce high-quality images. Experiments demonstrate that our method effectively removes degradation factors in low-quality fundus images and produces results competitive with previous methods in both quantitative and qualitative metrics.
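One plausible reading of the classification-prior idea, sketched below: penalize the enhancer whenever a frozen, pretrained fundus-quality classifier does not rate its output as high quality. The classifier interface, class index, and loss weighting are my own assumptions; the paper defines its own network and loss formulation.

```python
import torch
import torch.nn.functional as F

def classification_prior_loss(enhanced, quality_net, good_class=1):
    """Cross-entropy between the frozen classifier's prediction and the 'good quality' label."""
    quality_net.eval()
    for p in quality_net.parameters():
        p.requires_grad_(False)            # the prior network stays frozen
    logits = quality_net(enhanced)         # assumed shape: (B, num_classes)
    target = torch.full((enhanced.shape[0],), good_class,
                        dtype=torch.long, device=enhanced.device)
    return F.cross_entropy(logits, target)

# total_loss = adv_loss + lambda_q * classification_prior_loss(fake, quality_net)
```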
11
Enhance Contrast and Balance Color of Retinal Image. Symmetry (Basel) 2021. [DOI: 10.3390/sym13112089]
Abstract
This paper proposes a simple and effective retinal fundus image enhancement method that improves contrast and adjusts color balance for symmetric information in biomedicine. The aim of the study is reliable screening for AMD (age-related macular degeneration). The method consists of a few simple steps. First, local image contrast is refined with the CLAHE (Contrast Limited Adaptive Histogram Equalization) technique operating in the CIE L*a*b* color space. Then, the contrast-enhanced image is stretched and rescaled by a histogram scaling equation to adjust the overall brightness offset of the image and standardize it to Hubbard's retinal image brightness range. The proposed method was assessed with retinal images from the DiaretDB0 and STARE datasets. The experimental findings indicate that the proposed method yields pleasing color naturalness along with standardized coloring of retinal lesions.
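A minimal sketch of the two stated steps, assuming OpenCV: CLAHE on the L* channel of CIE L*a*b*, followed by a linear rescaling of overall brightness. The target range below is a placeholder, since Hubbard's brightness range is not reproduced here, and the CLAHE settings are illustrative.

```python
import cv2
import numpy as np

def clahe_then_rescale(bgr, lo=40, hi=210):
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    # Linearly stretch lightness into the placeholder brightness range [lo, hi].
    l = l.astype(np.float32)
    l = (l - l.min()) / max(l.max() - l.min(), 1e-6) * (hi - lo) + lo
    lab = cv2.merge((l.astype(np.uint8), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

# enhanced = clahe_then_rescale(cv2.imread("fundus.png"))
```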