1. Alksas A, Sharafeldeen A, Balaha HM, Haq MZ, Mahmoud A, Ghazal M, Alghamdi NS, Alhalabi M, Yousaf J, Sandhu H, El-Baz A. Advanced OCTA imaging segmentation: Unsupervised, non-linear retinal vessel detection using modified self-organizing maps and joint MGRF modeling. Comput Methods Programs Biomed 2024; 254:108309. [PMID: 39002431] [DOI: 10.1016/j.cmpb.2024.108309]
Abstract
BACKGROUND AND OBJECTIVE This paper proposes a fully automated and unsupervised stochastic segmentation approach using a two-level joint Markov-Gibbs Random Field (MGRF) to detect the vascular system from retinal Optical Coherence Tomography Angiography (OCTA) images, a critical step in developing Computer-Aided Diagnosis (CAD) systems for detecting retinal diseases. METHODS Using a new probabilistic model based on a Linear Combination of Discrete Gaussians (LCDG), the first level models the appearance of OCTA images and their spatially smoothed counterparts. The parameters of the LCDG model are estimated using a modified Expectation Maximization (EM) algorithm. The second level models the maps of OCTA images, including the vascular system and other retinal tissues, using an MGRF with parameters estimated analytically from the input images. The proposed approach employs modified self-organizing maps as a MAP-based optimizer maximizing the joint likelihood and handles the joint MGRF model in a new, unsupervised way. It deviates from traditional stochastic optimization approaches and leverages non-linear optimization to achieve more accurate segmentation results. RESULTS The proposed segmentation framework was evaluated quantitatively on a dataset of 204 subjects, achieving a Dice similarity coefficient of 0.92 ± 0.03, a 95th-percentile bidirectional Hausdorff distance of 0.69 ± 0.25, and an accuracy of 0.93 ± 0.03, confirming its superior performance. CONCLUSIONS The proposed unsupervised and fully automated approach detects the vascular system from OCTA images more accurately than traditional methods, demonstrating its potential in aiding the development of CAD systems for detecting retinal diseases.
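The first-level appearance model above is fitted with a modified EM algorithm. As a minimal, hedged sketch of the general idea only (a plain two-component Gaussian mixture standing in for the paper's LCDG model, with an illustrative input file name), intensity modeling followed by a crude MAP-style labeling could look like this:

```python
# Hedged sketch: a standard two-component Gaussian mixture fitted by EM as a
# stand-in for the paper's LCDG appearance model; the file name and component
# count are illustrative assumptions, not taken from the paper.
import numpy as np
from skimage import io
from sklearn.mixture import GaussianMixture

octa = io.imread("octa_slab.png", as_gray=True).astype(np.float64)
intensities = octa.reshape(-1, 1)

# EM estimation of a two-class intensity mixture (vessel vs. background).
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(intensities)

# Posterior probability of the brighter (vessel-like) component per pixel,
# thresholded into a crude initial vessel map.
vessel_class = int(np.argmax(gmm.means_.ravel()))
posterior = gmm.predict_proba(intensities)[:, vessel_class]
initial_map = posterior.reshape(octa.shape) > 0.5
```

A spatial (MGRF-style) prior would then refine this purely intensity-based map; that second level is not reproduced here.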
Affiliation(s)
- Ahmed Alksas
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ahmed Sharafeldeen
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Hossam Magdy Balaha
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Mohammad Z Haq
- School of Medicine, University of Louisville, Louisville, KY 40292, USA
- Ali Mahmoud
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Mohamed Ghazal
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Norah Saleh Alghamdi
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Marah Alhalabi
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Jawad Yousaf
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Harpal Sandhu
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ayman El-Baz
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
2. Attention-Driven Cascaded Network for Diabetic Retinopathy Grading from Fundus Images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104370]
3. Al Ashoor M, Al Hamza A, Zaboon I, Almomin A, Mansour A. Prevalence and risk factors of diabetic retinopathy in Basrah, Iraq. J Med Life 2023; 16:299-306. [PMID: 36937483] [PMCID: PMC10015581] [DOI: 10.25122/jml-2022-0170]
Abstract
This study aimed to measure the prevalence and risk factors of diabetic retinopathy (DR) among patients with diabetes mellitus aged 20 to 82 years attending the Faiha Diabetes, Endocrine, and Metabolism Center (FDEMC) in Basrah. A cross-sectional study was conducted at FDEMC from January 2019 to December 2019, including 1542 participants aged 20 to 82. Both eyes were examined for evidence of DR with a mobile nonmydriatic camera, and statistical analysis was performed to estimate prevalence rates (95% CI) for patients with different characteristics. The mean age of participants was 35.9 years, with 689 males (44.7%; 95% CI: 42.2-47.2%) and 853 females (55.3%; 95% CI: 52.8-57.8%). The prevalence rate of DR was 30.5% (95% CI: 28.1-32.8%), and 11.27% of cases were proliferative retinopathy. DR increased significantly with age (p<0.001), was higher in females (p=0.005), and increased significantly with longer duration of diabetes (p<0.001), hyperglycemia (p<0.001), hypertension (p=0.004), dyslipidemia (p<0.001), nephropathy (p<0.001), and smoking (p<0.001). There was no statistical association between DR and the type of diabetes or obesity. One-third of the participants in this study had DR. Screening and early detection of DR using a simple tool such as a digital camera should be a priority to improve patients' health status.
Affiliation(s)
- Mohammed Al Ashoor
- Department of Ophthalmology, Al Zahraa Medical College, University of Basrah, Basrah, Iraq
- Department of Ophthalmology, Basrah Teaching Hospital, Basrah, Iraq
- Corresponding Author: Mohammed Al Ashoor, Department of Ophthalmology, Basrah Teaching Hospital, Basrah, Iraq; Department of Ophthalmology, Al Zahraa Medical College, University of Basrah, Basrah, Iraq
- Ali Al Hamza
- Department of Medicine, University of Basrah, Basrah, Iraq
- Ibrahim Zaboon
- Department of Medicine, University of Basrah, Basrah, Iraq
- Ammar Almomin
- Department of Medicine, University of Basrah, Basrah, Iraq
- Abbas Mansour
- Department of Medicine, University of Basrah, Basrah, Iraq
4. Iqbal S, Khan TM, Naveed K, Naqvi SS, Nawaz SJ. Recent trends and advances in fundus image analysis: A review. Comput Biol Med 2022; 151:106277. [PMID: 36370579] [DOI: 10.1016/j.compbiomed.2022.106277]
Abstract
Automated retinal image analysis holds prime significance in the accurate diagnosis of various critical eye diseases, including diabetic retinopathy (DR), age-related macular degeneration (AMD), atherosclerosis, and glaucoma. Manual diagnosis of retinal diseases by ophthalmologists takes time, effort, and financial resources, and is prone to error compared with computer-aided diagnosis systems. In this context, robust classification and segmentation of retinal images are primary operations that aid clinicians in the early screening of patients to ensure the prevention and/or treatment of these diseases. This paper conducts an extensive review of state-of-the-art methods for the detection and segmentation of retinal image features. Existing notable techniques for the detection of retinal features are categorized into essential groups and compared in depth. Additionally, a summary of quantifiable performance measures for various important stages of retinal image analysis, such as image acquisition and preprocessing, is provided. Finally, the datasets widely used in the literature for analyzing retinal images are described and their significance is emphasized.
Affiliation(s)
- Shahzaib Iqbal
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
- Tariq M Khan
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia
- Khuram Naveed
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan; Department of Electrical and Computer Engineering, Aarhus University, Aarhus, Denmark
- Syed S Naqvi
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
- Syed Junaid Nawaz
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
5. Four Severity Levels for Grading the Tortuosity of a Retinal Fundus Image. J Imaging 2022; 8:258. [PMID: 36286352] [PMCID: PMC9605460] [DOI: 10.3390/jimaging8100258]
Abstract
Hypertensive retinopathy severity classification is proportionally related to tortuosity severity grading. No existing tortuosity severity scale enables a computer-aided system to classify the tortuosity severity of a retinal image. This work aimed to introduce a machine learning model that can identify the severity of a retinal image automatically and hence contribute to developing an automated hypertensive retinopathy or diabetic retinopathy grading system. First, tortuosity is quantified using fourteen tortuosity measurement formulas for the retinal images of the AV-Classification dataset to create the tortuosity feature set. Secondly, manual labeling is performed and reviewed by two ophthalmologists to construct a tortuosity severity ground-truth grading for each image in the AV-Classification dataset. Finally, the feature set is used to train and validate the machine learning models (J48 decision tree, ensemble rotation forest, and distributed random forest). The best-performing learned model is used as the tortuosity severity classifier to identify the tortuosity severity (normal, mild, moderate, or severe) of any given retinal image. The distributed random forest model reported the highest accuracy (99.4%) compared to the J48 decision tree and rotation forest models, with the lowest root mean square error (0.0000192) and the lowest mean absolute error (0.0000182). The proposed tortuosity severity grading matched the ophthalmologists' judgment. Moreover, detecting the tortuosity severity at the level of individual retinal vessels, optimizing the vessel segmentation, the vessel segment extraction, and the constructed feature set increased the accuracy of the automatic tortuosity severity detection model.
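Many of the fourteen tortuosity formulas referenced above are variants of comparing a vessel segment's arc length with its chord length. A hedged sketch of that basic measure feeding a random forest severity classifier (standing in for the distributed random forest; the arrays and labels below are illustrative placeholders, not the paper's data) might look like this:

```python
# Hedged sketch: arc-to-chord tortuosity for one vessel centreline plus a
# random-forest severity classifier; inputs are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def arc_chord_tortuosity(points: np.ndarray) -> float:
    """points: (N, 2) array of (x, y) centreline coordinates."""
    arc = np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))
    chord = np.linalg.norm(points[-1] - points[0])
    return arc / chord if chord > 0 else 1.0

# Toy feature matrix: one aggregated tortuosity value per image.
X = np.array([[1.02], [1.15], [1.40], [1.85]])   # per-image mean tortuosity
y = np.array([0, 1, 2, 3])                       # normal, mild, moderate, severe

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[1.3]]))  # predicted severity grade for a new image
```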
6. Clinical Validation of Saliency Maps for Understanding Deep Neural Networks in Ophthalmology. Med Image Anal 2022; 77:102364. [DOI: 10.1016/j.media.2022.102364]
7. Fast and efficient retinal blood vessel segmentation method based on deep learning network. Comput Med Imaging Graph 2021; 90:101902. [PMID: 33892389] [DOI: 10.1016/j.compmedimag.2021.101902]
Abstract
The segmentation of the retinal vascular tree is a major step in detecting ocular pathologies. The clinical context demands higher segmentation performance with reduced processing time. For more accurate segmentation, several automated methods have been based on Deep Learning (DL) networks. However, the convolutional layers they use lead to higher computational complexity and longer execution times. To address this need, this work presents a new DL-based method for retinal vessel tree segmentation. Our main contribution is a new U-shaped DL architecture using lightweight convolution blocks in order to preserve high segmentation performance while reducing computational complexity. As a second main contribution, preprocessing and data augmentation steps are proposed with respect to the retinal image and blood vessel characteristics. Tested on the DRIVE and STARE databases, the proposed method achieves a better trade-off between retinal blood vessel detection rate and detection time, with average accuracies of 0.978 and 0.98 in 0.59 s and 0.48 s per fundus image, respectively, on an NVIDIA GTX 980 GPU.
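The stated contribution is a U-shaped network built from lightweight convolution blocks. As a hedged sketch of one plausible choice of such a block, a depthwise-separable convolution, which may differ from the paper's exact design, consider:

```python
# Hedged sketch: a depthwise-separable convolution block as one plausible
# "lightweight" replacement for a standard U-Net convolution block; the
# actual block used in the paper may differ.
import torch
import torch.nn as nn

class LightweightBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch),  # depthwise
            nn.Conv2d(in_ch, out_ch, kernel_size=1),                          # pointwise
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)

x = torch.randn(1, 3, 64, 64)            # a toy fundus patch batch
print(LightweightBlock(3, 16)(x).shape)  # torch.Size([1, 16, 64, 64])
```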
8. Avendaño-Valencia LD, Yderstræde KB, Nadimi ES, Blanes-Vidal V. Video-based eye tracking performance for computer-assisted diagnostic support of diabetic neuropathy. Artif Intell Med 2021; 114:102050. [PMID: 33875161] [DOI: 10.1016/j.artmed.2021.102050]
Abstract
Diabetes is currently one of the major public health threats. The essential components for effective treatment of diabetes include early diagnosis and regular monitoring. However, health-care providers are often short of the human resources needed to closely monitor populations at risk. In this work, a video-based eye-tracking method is proposed as a low-cost alternative for detection of diabetic neuropathy. The method is based on tracking the eye trajectories recorded on video while the subject follows a target on a screen, forcing saccadic movements. Upon extraction of the eye trajectories, the obtained time series are represented with heteroscedastic ARX (H-ARX) models, which capture the dynamics and latency of the subject's response; features based on the H-ARX models' predictive ability are subsequently used for classification. The methodology is evaluated on a population of 11 control subjects and 20 insulin-treated diabetic individuals suffering from diverse diabetic complications including neuropathy and retinopathy. Results show significant differences in latency and eye-movement precision between the control and diabetic groups, while demonstrating that both groups can be classified with an accuracy of 95%. Although this study is limited by the small sample size, the results align with other findings in the literature and encourage further research.
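The core representation is an ARX model of the gaze trajectory driven by the on-screen target, with features derived from the model's predictive ability. The hedged sketch below fits an ordinary (homoscedastic) ARX model by least squares and uses the one-step-ahead residual RMS as such a feature; the model orders and signals are illustrative, and the paper's heteroscedastic variant is not reproduced:

```python
# Hedged sketch: least-squares ARX fit of gaze position y driven by target
# position u, with the residual RMS as a crude "predictive ability" feature.
import numpy as np

def fit_arx(y: np.ndarray, u: np.ndarray, na: int = 2, nb: int = 2):
    """Fit y[t] = sum_i a_i*y[t-i] + sum_j b_j*u[t-j] by least squares."""
    n = max(na, nb)
    rows = [np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]])
            for t in range(n, len(y))]
    Phi, target = np.vstack(rows), y[n:]
    theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
    residual_rms = np.sqrt(np.mean((target - Phi @ theta) ** 2))
    return theta, residual_rms

t = np.arange(500)
u = np.sign(np.sin(0.05 * t)).astype(float)        # saccade-forcing target signal
y = np.convolve(u, [0.3, 0.25, 0.2], mode="same") + 0.05 * np.random.randn(500)
theta, rms = fit_arx(y, u)
print(rms)  # lower values suggest more precise target following
```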
Affiliation(s)
- Luis David Avendaño-Valencia
- Group of Applied AI and Data Science, Maersk-McKinney-Moller Institute, University of Southern Denmark, Campusvej 55, 5230 Odense M, Denmark.
- Knud B Yderstræde
- Steno Diabetes Center and Center for Innovative Medical Technology, Odense University Hospital, Sdr. Boulevard 29, 5000 Odense C, Denmark.
- Esmaeil S Nadimi
- Group of Applied AI and Data Science, Maersk-McKinney-Moller Institute, University of Southern Denmark, Campusvej 55, 5230 Odense M, Denmark.
- Victoria Blanes-Vidal
- Group of Applied AI and Data Science, Maersk-McKinney-Moller Institute, University of Southern Denmark, Campusvej 55, 5230 Odense M, Denmark.
9. Yu Q, Wang F, Zhou L, Yang J, Liu K, Xu X. Quantification of Diabetic Retinopathy Lesions in DME Patients With Intravitreal Conbercept Treatment Using Deep Learning. Ophthalmic Surg Lasers Imaging Retina 2020; 51:95-100. [PMID: 32084282] [DOI: 10.3928/23258160-20200129-05]
Abstract
BACKGROUND AND OBJECTIVES To quantitatively evaluate diabetic retinopathy (DR) lesions using the authors' validated machine learning algorithms and provide physicians with an automated and precise method to follow the progression of DR and the outcome of interventions. PATIENTS AND METHODS Retrospective analyses were conducted of 3,496 color fundus photography images from 19 patients with clinically significant diabetic macular edema receiving conbercept treatment. The modified seven-field fundus images were obtained at baseline and at the third, sixth, and twelfth month visits, whereas the modified two-field fundus images were obtained at the other monthly visits. The areas of intraretinal hemorrhage and hard exudate lesions were traced by the authors' validated algorithms. RESULTS The mean central foveal thickness was 459.9 μm ± 127.5 μm at baseline and 316.5 μm ± 53.0 μm at the twelfth month visit, a decrease of 143.4 μm compared with the baseline optical coherence tomography. The mean total area of intraretinal hemorrhage in the study eye in seven fields was 5.656 ± 1.176 mm2 at baseline, 2.438 ± 0.976 mm2 at the third month, 2.901 ± 0.521 mm2 at the sixth month, and 2.122 ± 0.582 mm2 at the end of the study, a reduction of 62.49% from baseline (P < .0001). The mean total area of hard exudates in the study eye was 2.549 ± 0.776 mm2 at baseline, 2.233 ± 0.576 mm2 at the third month, 2.710 ± 0.621 mm2 at the sixth month, and 1.473 ± 0.564 mm2 at the end of the study, a decrease of 41.1% at the twelfth month compared with the first visit (P < .0001). A significant decrease was observed in the area of intraretinal hemorrhage during conbercept treatment. The hard exudate area fluctuated during the loading phase and subsequently decreased at the twelfth month. CONCLUSIONS The present study quantitatively analyzed the change in area of intraretinal hemorrhage and hard exudate lesions during the course of conbercept treatment. The automated system shows promise as a precise and objective method for monitoring the progression of DR and the outcomes of interventions in clinical settings.
10. Algorithms for Diagnosis of Diabetic Retinopathy and Diabetic Macula Edema - A Review. Adv Exp Med Biol 2020; 1307:357-373. [PMID: 32166636] [DOI: 10.1007/5584_2020_499]
Abstract
The human eye is one of the most important organs in the human body, comprising the iris, pupil, sclera, cornea, lens, retina, and optic nerve. Many important eye diseases, as well as systemic diseases, manifest themselves in the retina. The most widespread causes of blindness in the industrialized world are glaucoma, Age-Related Macular Degeneration (ARMD), Diabetic Retinopathy (DR), and Diabetic Macula Edema (DME). The development of a retinal image analysis system is a demanding research topic for early detection, progression analysis, and diagnosis of eye diseases. Early diagnosis and treatment of retinal diseases are essential to prevent vision loss. The huge and growing number of patients affected by retinal disease, the cost of current hospital-based detection methods (by eye care specialists), and the scarcity of ophthalmologists are barriers to achieving the recommended screening compliance in patients at risk of retinal diseases. Developing an automated system that uses pattern recognition, computer vision, and machine learning to diagnose retinal diseases is a potential solution to this problem. Damage to the tiny blood vessels in the retina in the posterior part of the eye due to diabetes is termed DR. Diabetes is a disease that occurs when the pancreas does not secrete enough insulin or the body does not utilize it properly. This disease slowly affects the circulatory system, including that of the retina. As diabetes intensifies, the vision of a patient may start deteriorating, leading to DR. Retinal landmarks such as the optic disc (OD) and blood vessels, together with white lesions and red lesions, are segmented to develop automated screening systems for DR. DME is an advanced symptom of DR that can lead to irreversible vision loss. DME is a general term defined as retinal thickening or exudates present within 2 disc diameters of the fovea center; it can be either focal or diffuse in distribution. In this paper, we review the algorithms used in the diagnosis of DR and DME.
11. Randive SN, Senapati RK, Rahulkar AD. A review on computer-aided recent developments for automatic detection of diabetic retinopathy. J Med Eng Technol 2019; 43:87-99. [PMID: 31198073] [DOI: 10.1080/03091902.2019.1576790]
Abstract
Diabetic retinopathy is a serious microvascular disorder that might result in loss of vision and blindness. It seriously damages the retinal blood vessels and affects the light-sensitive inner layer of the eye. Manual inspection of retinal fundus images to detect the morphological abnormalities of microaneurysms (MAs), exudates (EXs), haemorrhages (HMs), and intraretinal microvascular abnormalities (IRMA) is a very difficult and time-consuming process. Regular follow-up screening and early automatic diabetic retinopathy detection are therefore necessary. This paper discusses various methods for automatic retinopathy detection and for classification into different grades based on severity levels. In addition, retinal blood vessel detection techniques are also discussed for the ultimate detection and diagnosis of proliferative diabetic retinopathy. Furthermore, the paper discusses in detail the systematic review performed by the authors on various publicly available databases collected from different medical sources. In the survey, a meta-analysis of several methods for diabetic feature extraction, segmentation, and various types of classifiers is used to evaluate system performance metrics for the diagnosis of DR. This survey will be helpful for technical personnel and researchers who want to focus on building more powerful diagnosis systems for real-life use.
Affiliation(s)
- Santosh Nagnath Randive
- Department of Electronics & Communication Engineering, Koneru Lakshmaiah Education Foundation, Green Fields, Vaddeswaram, Guntur, Andhra Pradesh, India
- Ranjan K Senapati
- Department of Electronics & Communication Engineering, Koneru Lakshmaiah Education Foundation, Green Fields, Vaddeswaram, Guntur, Andhra Pradesh, India
- Amol D Rahulkar
- Department of Electrical and Electronics Engineering, National Institute of Technology, Goa, India
12. Karkuzhali S, Manimegalai D. Distinguising Proof of Diabetic Retinopathy Detection by Hybrid Approaches in Two Dimensional Retinal Fundus Images. J Med Syst 2019; 43:173. [PMID: 31069550] [DOI: 10.1007/s10916-019-1313-6]
Abstract
Diabetes is characterized by constantly high levels of blood glucose. The human body needs to maintain insulin within a very narrow range. Patients who have been affected by diabetes for a long time can develop an eye disease called Diabetic Retinopathy (DR). The optic disc, a retinal landmark, is predicted and masked to decrease false positives in exudate detection. Abnormalities such as exudates, microaneurysms, and hemorrhages are segmented to classify the various stages of DR. The proposed approach separates the retinal landmarks and retinal lesions for the classification of DR stages. Segmentation algorithms such as Gabor double-sided hysteresis thresholding, maximum intensity variation, inverse surface adaptive thresholding, a multi-agent approach, and toboggan segmentation are used to detect and segment blood vessels, optic discs, exudates, microaneurysms, and hemorrhages. The feature vector formation and machine learning algorithm used to classify the various stages of DR are evaluated using images available in various retinal databases, and their performance measures are presented in this paper.
Affiliation(s)
- Karkuzhali S
- Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education ( Deemed to be University), Srivilliputtur, Tamilnadu, India.
- Manimegalai D
- Department of Information Technology, National Engineering College, Kovilpatti, Tamilnadu, India
13. Uribe-Valencia LJ, Martínez-Carballido JF. Automated Optic Disc region location from fundus images: Using local multi-level thresholding, best channel selection, and an Intensity Profile Model. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2019.02.006]
14. Khojasteh P, Aliahmad B, Kumar DK. A novel color space of fundus images for automatic exudates detection. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2018.12.004]
15. Mazlan N, Yazid H. An improved retinal blood vessel segmentation for diabetic retinopathy detection. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2019. [DOI: 10.1080/21681163.2017.1402711]
Affiliation(s)
- Noratikah Mazlan
- School of Mechatronic Engineering, University Malaysia Perlis (UniMAP), Arau, Malaysia
- Haniza Yazid
- School of Mechatronic Engineering, University Malaysia Perlis (UniMAP), Arau, Malaysia
16. Exudate detection in fundus images using deeply-learnable features. Comput Biol Med 2018; 104:62-69. [PMID: 30439600] [DOI: 10.1016/j.compbiomed.2018.10.031]
Abstract
The presence of exudates on a retina is an early sign of diabetic retinopathy, and automatic detection of these can improve the diagnosis of the disease. Convolutional Neural Networks (CNNs) have been used for automatic exudate detection, but with poor performance. This study investigated different deep learning techniques to maximize sensitivity and specificity. We compared multiple deep learning methods and both supervised and unsupervised classifiers, i.e., CNNs, pre-trained residual networks (ResNet-50), and Discriminative Restricted Boltzmann Machines, for improving the performance of automatic exudate detection. The experiments were conducted on two publicly available databases: (i) DIARETDB1 and (ii) e-Ophtha. The results show that ResNet-50 with Support Vector Machines outperformed the other networks, with an accuracy of 98% and a sensitivity of 0.99. This shows that ResNet-50 can be used for the analysis of fundus images to detect exudates.
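The best-performing pipeline pairs a pre-trained ResNet-50 feature extractor with an SVM. A hedged sketch of that general pipeline on image patches, where the patch tensors, labels, and preprocessing are illustrative placeholders rather than the paper's exact protocol, could look like this:

```python
# Hedged sketch: pooled ResNet-50 features feeding an SVM for exudate vs.
# non-exudate patches; patch tensors and labels are illustrative placeholders.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
extractor = nn.Sequential(*list(resnet.children())[:-1]).eval()  # drop the FC head

patches = torch.randn(8, 3, 224, 224)   # stand-in for preprocessed fundus patches
labels = [0, 1, 0, 1, 1, 0, 1, 0]       # stand-in exudate / non-exudate labels

with torch.no_grad():
    feats = extractor(patches).flatten(1).numpy()  # (8, 2048) feature vectors

svm = SVC(kernel="rbf").fit(feats, labels)
print(svm.predict(feats[:2]))
```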
17. Diabetic Retinopathy Diagnosis from Retinal Images Using Modified Hopfield Neural Network. J Med Syst 2018; 42:247. [DOI: 10.1007/s10916-018-1111-6]
18. Bode C, Kranz H, Siepmann F, Siepmann J. In-situ forming PLGA implants for intraocular dexamethasone delivery. Int J Pharm 2018; 548:337-348. [PMID: 29981408] [DOI: 10.1016/j.ijpharm.2018.07.013]
Abstract
Different types of in-situ forming implants based on poly(lactic-co-glycolic acid) (PLGA) and N-methyl-pyrrolidone (NMP) were prepared for controlled ocular delivery of dexamethasone. The impact of the volume of the release medium, initial drug content, polymer molecular weight, and PLGA concentration on the resulting drug release kinetics was studied and explained based on a thorough physico-chemical characterization of the systems. This included, for instance, monitoring of dynamic changes in the implants' wet and dry mass, morphology, PLGA polymer molecular weight, pH of the surrounding bulk fluid, and water/NMP contents upon exposure to phosphate buffer pH 7.4. Importantly, the systems can be expected to be rather robust with respect to variations in the vitreous humor volumes encountered in vivo. Interestingly, limited drug solubility effects within the implants as well as in the surrounding aqueous medium play an important role in the control of drug release at a drug loading of only 7.5%. Furthermore, the polymer molecular weight and PLGA concentration in the liquid formulations are decisive for how the polymer precipitates during solvent exchange and for the swelling behavior of the systems. These features determine the resulting inner system structure and the conditions for mass transport. Consequently, they affect the degradation and drug release of the in-situ formed implants.
Affiliation(s)
- C Bode
- Univ. Lille, Inserm, CHU Lille, U1008, 59000 Lille, France
- H Kranz
- Bayer AG, Muellerstraße 178, 13353 Berlin, Germany
- F Siepmann
- Univ. Lille, Inserm, CHU Lille, U1008, 59000 Lille, France
- J Siepmann
- Univ. Lille, Inserm, CHU Lille, U1008, 59000 Lille, France
19. Leveraging uncertainty information from deep neural networks for disease detection. Sci Rep 2017; 7:17816. [PMID: 29259224] [PMCID: PMC5736701] [DOI: 10.1038/s41598-017-17876-z]
Abstract
Deep learning (DL) has revolutionized the field of computer vision and image processing. In medical imaging, algorithmic solutions based on DL have been shown to achieve high performance on tasks that previously required medical experts. However, DL-based solutions for disease detection have been proposed without methods to quantify and control their uncertainty in a decision. In contrast, a physician knows whether she is uncertain about a case and will consult more experienced colleagues if needed. Here we evaluate dropout-based Bayesian uncertainty measures for DL in diagnosing diabetic retinopathy (DR) from fundus images and show that they capture uncertainty better than straightforward alternatives. Furthermore, we show that uncertainty-informed decision referral can improve diagnostic performance. Experiments across different networks, tasks, and datasets show robust generalization. Depending on network capacity and task/dataset difficulty, we surpass the 85% sensitivity and 80% specificity recommended by the NHS when referring 0-20% of the most uncertain decisions for further inspection. We analyse causes of uncertainty by relating intuitions from 2D visualizations to the high-dimensional image space. While uncertainty is sensitive to clinically relevant cases, sensitivity to unfamiliar data samples is task dependent, but can be rendered more robust.
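The dropout-based uncertainty measure amounts to Monte Carlo dropout: dropout stays active at test time, and the spread of repeated stochastic predictions scores each case. A minimal hedged sketch on a toy stand-in network (the architecture, number of passes, and referral fraction are illustrative, not the paper's setup) is:

```python
# Hedged sketch: Monte Carlo dropout at test time; the per-image standard
# deviation of the predicted DR probability acts as the uncertainty score.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Dropout(p=0.5),
                      nn.Linear(64, 1), nn.Sigmoid())  # toy stand-in network
model.train()  # keep dropout stochastic during inference (MC dropout)

x = torch.randn(16, 128)                 # stand-in image feature vectors
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(50)])  # 50 stochastic passes

mean_prob = samples.mean(dim=0).squeeze(1)   # predictive mean per image
uncertainty = samples.std(dim=0).squeeze(1)  # predictive std per image

# Refer the most uncertain 20% of cases for manual inspection.
refer = uncertainty >= uncertainty.quantile(0.8)
print(mean_prob, refer)
```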
20. Multiscale segmentation of exudates in retinal images using contextual cues and ensemble classification. Biomed Signal Process Control 2017. [DOI: 10.1016/j.bspc.2017.02.012]