1. Kim S, Chung H, Park SH, Chung ES, Yi K, Ye JC. Fundus Image Enhancement Through Direct Diffusion Bridges. IEEE J Biomed Health Inform 2024; 28:7275-7286. [PMID: 39167517] [DOI: 10.1109/jbhi.2024.3446866]
Abstract
We propose FD3, a fundus image enhancement method based on direct diffusion bridges, which can cope with a wide range of complex degradations, including haze, blur, noise, and shadow. We first propose a synthetic forward model, developed through a human feedback loop with board-certified ophthalmologists, for maximal quality improvement of low-quality in vivo images. Using the proposed forward model, we train a robust and flexible diffusion-based image enhancement network that is highly effective as a stand-alone method, unlike previous diffusion model-based approaches that act only as a refiner on top of pre-trained models. Through extensive experiments, we show that FD3 achieves superior quality not only on synthetic degradations but also in in vivo studies with low-quality fundus photographs taken from patients with cataracts or small pupils.
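As an illustration of the kind of synthetic forward degradation model described in this abstract, the minimal sketch below degrades a clean fundus image with haze, blur, and noise (shadow is omitted); the blending scheme and parameter values are illustrative assumptions, not the model designed through the ophthalmologist feedback loop.

```python
import cv2
import numpy as np

def degrade_fundus(img, haze=0.3, blur_sigma=2.0, noise_std=0.02, rng=None):
    """Degrade a clean RGB fundus image (float array in [0, 1]) with haze,
    defocus blur, and sensor noise. Parameter values are illustrative."""
    rng = rng if rng is not None else np.random.default_rng()
    out = img.astype(np.float32)
    # Haze: blend the image toward a bright, uniform veil.
    veil = np.full_like(out, float(out.mean()) + 0.4)
    out = (1.0 - haze) * out + haze * veil
    # Blur: Gaussian defocus (kernel size derived from sigma).
    out = cv2.GaussianBlur(out, (0, 0), blur_sigma)
    # Noise: additive Gaussian sensor noise.
    out = out + rng.normal(0.0, noise_std, size=out.shape).astype(np.float32)
    return np.clip(out, 0.0, 1.0)
```

Paired clean/degraded images produced by such a forward model could then supervise an enhancement network.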
2. Steffi S, Sam Emmanuel WR. Resilient back-propagation machine learning-based classification on fundus images for retinal microaneurysm detection. Int Ophthalmol 2024; 44:91. [PMID: 38367192] [DOI: 10.1007/s10792-024-02982-5]
Abstract
BACKGROUND: The timely diagnosis of medical conditions, particularly diabetic retinopathy, relies on the identification of retinal microaneurysms. However, the commonly used retinography method poses a challenge due to the diminutive dimensions and limited differentiation of microaneurysms in images.
PROBLEM STATEMENT: Automated identification of microaneurysms becomes crucial, necessitating the use of comprehensive ad-hoc processing techniques. Although fluorescein angiography enhances detectability, its invasiveness limits its suitability for routine preventative screening.
OBJECTIVE: This study proposes a novel approach for detecting retinal microaneurysms using a fundus scan, leveraging circular reference-based shape features (CR-SF) and radial gradient-based texture features (RG-TF).
METHODOLOGY: The proposed technique involves extracting CR-SF and RG-TF for each candidate microaneurysm, employing a robust back-propagation machine learning method for training. During testing, extracted features from test images are compared with training features to categorize microaneurysm presence.
RESULTS: The experimental assessment utilized four datasets (MESSIDOR, Diaretdb1, e-ophtha-MA, and ROC), employing various measures. The proposed approach demonstrated high accuracy (98.01%), sensitivity (98.74%), specificity (97.12%), and area under the curve (91.72%).
CONCLUSION: The presented approach showcases a successful method for detecting retinal microaneurysms using a fundus scan, providing promising accuracy and sensitivity. This non-invasive technique holds potential for effective screening in diabetic retinopathy and other related medical conditions.
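To make the radial-gradient idea concrete, here is a minimal numpy sketch that samples intensity gradients along rings around a candidate microaneurysm center; the ring radii, angle count, and inner/outer sampling scheme are illustrative assumptions rather than the paper's exact CR-SF/RG-TF definitions, and the candidate is assumed to lie away from the image border.

```python
import numpy as np

def radial_gradient_features(green, center, radii=(2, 4, 6), n_angles=16):
    """Mean outward intensity gradient on rings around a candidate microaneurysm.

    `green` is the green channel of a fundus image as a 2-D float array and
    `center` is a (row, col) location assumed to be away from the image border.
    """
    r0, c0 = center
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    feats = []
    for radius in radii:
        grads = []
        for a in angles:
            dr, dc = np.sin(a), np.cos(a)
            inner = green[int(round(r0 + (radius - 1) * dr)), int(round(c0 + (radius - 1) * dc))]
            outer = green[int(round(r0 + (radius + 1) * dr)), int(round(c0 + (radius + 1) * dc))]
            grads.append(outer - inner)  # a dark lesion core gives a positive outward gradient
        feats.append(float(np.mean(grads)))
    return np.array(feats)
```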
Affiliation(s)
- S Steffi: Department of Computer Science, Nesamony Memorial Christian College Affiliated to Manonmaniam Sundaranar University, Abishekapatti, Tirunelveli, Tamil Nadu, 627012, India
- W R Sam Emmanuel: Department of PG Computer Science, Nesamony Memorial Christian College Affiliated to Manonmaniam Sundaranar University, Abishekapatti, Tirunelveli, Tamil Nadu, 627012, India
3. Dao QT, Trinh HQ, Nguyen VA. An effective and comprehensible method to detect and evaluate retinal damage due to diabetes complications. PeerJ Comput Sci 2023; 9:e1585. [PMID: 37810367] [PMCID: PMC10557496] [DOI: 10.7717/peerj-cs.1585]
Abstract
The leading cause of vision loss globally is diabetic retinopathy, and researchers are making great efforts to automatically and correctly detect and diagnose it. Diabetic retinopathy includes five stages: no diabetic retinopathy, mild diabetic retinopathy, moderate diabetic retinopathy, severe diabetic retinopathy, and proliferative diabetic retinopathy. Recent studies have offered several multi-tasking deep learning models to detect and assess the level of diabetic retinopathy. However, these models provide only limited explanation of their severity assessments, stopping at showing lesions in images; they do not explain on what basis the appraisal of disease severity is made. In this article, we present a system for assessing and interpreting the five stages of diabetic retinopathy. The proposed system is built from two internal models: a deep learning model that detects lesions and an explanatory model that assesses the disease stage. The lesion detection model uses the Mask R-CNN deep learning network to specify the location and shape of each lesion and classify the lesion types; it combines two networks, one that detects hemorrhagic and exudative lesions and one that detects vascular lesions such as aneurysms and proliferation. The explanatory model appraises disease severity based on the severity of each lesion type and the associations between types, deciding severity from the number, density, and area of the lesions. Experimental results on real-world datasets show that our proposed method assesses the five stages of diabetic retinopathy with accuracy comparable to existing state-of-the-art methods and is capable of explaining the causes of disease severity.
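As a sketch of how a rule-based explanatory model can map detected lesion statistics to the five stages, consider the following; the thresholds and lesion categories are illustrative placeholders, not the decision rules used in the paper.

```python
def grade_dr(lesions):
    """Map detected lesion statistics to one of five DR stages.

    `lesions` is a dict such as {"microaneurysm": [...areas...], "hemorrhage": [...],
    "exudate": [...], "neovascularization": [...]} with one pixel-area entry per
    detected lesion. All thresholds below are illustrative placeholders.
    """
    n_ma = len(lesions.get("microaneurysm", []))
    n_he = len(lesions.get("hemorrhage", []))
    n_ex = len(lesions.get("exudate", []))
    n_nv = len(lesions.get("neovascularization", []))
    if n_nv > 0:
        return "proliferative DR"       # any neovascularization dominates
    if n_he >= 20 or n_ex >= 10:
        return "severe DR"
    if n_he >= 5 or n_ex >= 1:
        return "moderate DR"
    if n_ma >= 1:
        return "mild DR"
    return "no DR"
```

Because the decision is a readable rule over lesion counts and areas, the same statistics that drive the grade can be reported back to the clinician as the explanation.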
Affiliation(s)
- Quang Toan Dao: Institute of Information Technology, Vietnam Academy of Science and Technology, Hanoi, Vietnam
- Hoang Quan Trinh: Vietnam Space Center, Vietnam Academy of Science and Technology, Hanoi, Vietnam
- Viet Anh Nguyen: Institute of Information Technology, Vietnam Academy of Science and Technology, Hanoi, Vietnam
4. ExpACVO-Hybrid Deep learning: Exponential Anti Corona Virus Optimization enabled Hybrid Deep learning for tongue image segmentation towards diabetes mellitus detection. Biomed Signal Process Control 2023; 83:104635. [PMID: 36741196] [PMCID: PMC9886667] [DOI: 10.1016/j.bspc.2023.104635]
Abstract
Diabetes mellitus (DM) is a metabolic disease primarily brought on by an increase in blood sugar levels. DM and the complications it causes, such as diabetic retinopathy (DR), are quickly emerging as one of the major health challenges of the twenty-first century, placing a huge economic burden on health authorities and governments. Detecting DM at an early stage enables early diagnosis and a considerable drop in mortality, so an efficient DM detection system is required. In this research work, an effective classification method named Exponential Anti Corona Virus Optimization (ExpACVO) is devised for detecting diabetes mellitus from tongue images. Here, the UNet-Conditional Random Field-Recurrent Neural Network (UNet-CRF-RNN) is used to segment the images, and the proposed ExpACVO algorithm is used to train the UNet-CRF-RNN. A Deep Q Network (DQN) classifier is used for DM detection, and the proposed ExpACVO is also used for DQN training. The ExpACVO algorithm is a new formulation that combines Anti Corona Virus Optimization (ACVO) with the Exponential Weighted Moving Average (EWMA). The developed technique achieved maximum testing accuracy, sensitivity, and specificity values of 0.932, 0.950, and 0.914, respectively.
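To illustrate the general idea of blending a population-based metaheuristic with an EWMA update (the two ingredients named for ExpACVO), here is a minimal, generic random-search sketch; the update rule, step size, and smoothing factor are illustrative assumptions and do not reproduce the ACVO operators or the ExpACVO formula.

```python
import numpy as np

def ewma_smoothed_search(objective, dim, pop_size=20, iters=100, alpha=0.3, seed=0):
    """Generic population-based random search whose best-position estimate is
    smoothed with an exponential weighted moving average (EWMA)."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, dim))
    smoothed_best = pop[0].copy()
    for _ in range(iters):
        scores = np.apply_along_axis(objective, 1, pop)
        best = pop[np.argmin(scores)]
        # EWMA update: s_t = alpha * x_t + (1 - alpha) * s_{t-1}
        smoothed_best = alpha * best + (1.0 - alpha) * smoothed_best
        # Resample the population around the smoothed best position.
        pop = smoothed_best + 0.1 * rng.standard_normal(size=pop.shape)
    return smoothed_best

# Example: minimize a simple quadratic objective in 5 dimensions.
print(ewma_smoothed_search(lambda x: float(np.sum(x ** 2)), dim=5))
```

In a training context, the "position" being optimized would be the network weights and the objective a segmentation or classification loss.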
5. Upadhyay K, Agrawal M, Vashist P. Characteristic patch-based deep and handcrafted feature learning for red lesion segmentation in fundus images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104123]
6. Soares I, Castelo-Branco M, Pinheiro A. Microaneurysms detection in retinal images using a multi-scale approach. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104184]
7. Yang Y, Lv H, Chen N. A Survey on ensemble learning under the era of deep learning. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10283-5]
8. Morya AK, Janti SS, Sisodiya P, Tejaswini A, Prasad R, Mali KR, Gurnani B. Everything real about unreal artificial intelligence in diabetic retinopathy and in ocular pathologies. World J Diabetes 2022; 13:822-834. [PMID: 36311999] [PMCID: PMC9606792] [DOI: 10.4239/wjd.v13.i10.822]
Abstract
Artificial intelligence (AI) is a multidisciplinary field that aims to build platforms enabling machines to act, perceive, and reason intelligently, automating activities that presently require human intelligence. From the cornea to the retina, AI is expected to help ophthalmologists diagnose and treat ocular diseases. In ophthalmology, computerized analytics are viewed as efficient and more objective ways to interpret series of images and reach a conclusion. AI can be used to diagnose and grade diabetic retinopathy, glaucoma, age-related macular degeneration, cataracts, retinopathy of prematurity, and keratoconus, and to assist in IOL power calculation. This review article discusses various aspects of artificial intelligence in ophthalmology.
Affiliation(s)
- Arvind Kumar Morya: Department of Ophthalmology, All India Institute of Medical Sciences Bibinagar, Hyderabad 508126, Telangana, India
- Siddharam S Janti: Department of Ophthalmology, All India Institute of Medical Sciences Bibinagar, Hyderabad 508126, Telangana, India
- Priya Sisodiya: Department of Ophthalmology, Sadguru Netra Chikitsalaya, Chitrakoot 485001, Madhya Pradesh, India
- Antervedi Tejaswini: Department of Ophthalmology, All India Institute of Medical Sciences Bibinagar, Hyderabad 508126, Telangana, India
- Rajendra Prasad: Department of Ophthalmology, R P Eye Institute, New Delhi 110001, New Delhi, India
- Kalpana R Mali: Department of Pharmacology, All India Institute of Medical Sciences Bibinagar, Hyderabad 508126, Telangana, India
- Bharat Gurnani: Department of Ophthalmology, Aravind Eye Hospital and Post Graduate Institute of Ophthalmology, Pondicherry 605007, Pondicherry, India
9. Detection of microaneurysms in color fundus images based on local Fourier transform. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103648]
10. Xia H, Rao Z, Zhou Z. A multi-scale gated network for retinal hemorrhage detection. Appl Intell 2022. [DOI: 10.1007/s10489-022-03476-6]
11. Huang S, Li J, Xiao Y, Shen N, Xu T. RTNet: Relation Transformer Network for Diabetic Retinopathy Multi-Lesion Segmentation. IEEE Trans Med Imaging 2022; 41:1596-1607. [PMID: 35041595] [DOI: 10.1109/tmi.2022.3143833]
Abstract
Automatic segmentation of diabetic retinopathy (DR) lesions is of great value in assisting ophthalmologists in diagnosis. Although much research has been conducted on this task, most prior works paid more attention to network design than to the pathological associations among lesions. By investigating the pathogenic causes of DR lesions, we found that certain lesions lie close to specific vessels and present patterns relative to each other. Motivated by this observation, we propose a relation transformer block (RTB) that incorporates attention mechanisms at two main levels: a self-attention transformer exploits global dependencies among lesion features, while a cross-attention transformer allows interactions between lesion and vessel features, integrating valuable vascular information to alleviate the ambiguity in lesion detection caused by complex fundus structures. In addition, to capture small lesion patterns first, we propose a global transformer block (GTB) that preserves detailed information in the deep network. By integrating the above blocks in a dual-branch design, our network segments the four kinds of lesions simultaneously. Comprehensive experiments on the IDRiD and DDR datasets demonstrate the superiority of our approach, which achieves competitive performance compared to state-of-the-art methods.
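As a sketch of the cross-attention idea described here (lesion features querying vessel features), the following PyTorch module uses standard multi-head attention; the dimensions, normalization layout, and the absence of a feed-forward sub-layer are simplifications rather than the exact RTB design.

```python
import torch
import torch.nn as nn

class LesionVesselCrossAttention(nn.Module):
    """Self-attention over lesion tokens followed by cross-attention in which
    lesion tokens query vessel tokens. A minimal illustrative block."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, lesion_tokens, vessel_tokens):
        # Self-attention captures global dependencies among lesion features.
        x, _ = self.self_attn(lesion_tokens, lesion_tokens, lesion_tokens)
        x = self.norm1(lesion_tokens + x)
        # Cross-attention lets lesion tokens pull in vascular context.
        y, _ = self.cross_attn(x, vessel_tokens, vessel_tokens)
        return self.norm2(x + y)

# Tokens could come from flattening CNN feature maps: (batch, H*W, dim).
block = LesionVesselCrossAttention()
out = block(torch.randn(2, 196, 256), torch.randn(2, 196, 256))
```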
12. Latha D, Bell TB, Sheela CJJ. Red lesion in fundus image with hexagonal pattern feature and two-level segmentation. Multimed Tools Appl 2022; 81:26143-26161. [PMID: 35368859] [PMCID: PMC8959564] [DOI: 10.1007/s11042-022-12667-9]
Abstract
Red lesion identification at an early stage is essential in the treatment of diabetic retinopathy to prevent loss of vision. This work proposes a red lesion detection algorithm that uses hexagonal pattern-based features with two-level segmentation and can detect hemorrhages and microaneurysms in fundus images. The proposed scheme initially pre-processes the fundus image and then applies a two-level segmentation: level 1 eliminates the background, whereas level 2 eliminates the blood vessels that introduce most false positives. A hexagonal pattern-based feature is extracted from the red lesion candidates, which strongly differentiates lesion from non-lesion regions. The hexagonal pattern features are then used to train a recurrent neural network and classified to eliminate false negatives. The proposed algorithm is evaluated on the ROC challenge, e-ophtha, DiaretDB1, and Messidor datasets using metrics such as Accuracy, Recall, Precision, F1 score, Specificity, and AUC. The scheme provides average Accuracy, Recall (Sensitivity), Precision, F1 score, Specificity, and AUC of 95.48%, 84.54%, 97.3%, 90.47%, 86.81%, and 93.43%, respectively.
Affiliation(s)
- D. Latha: Department of PG Computer Science, Nesamony Memorial Christian College, Marthandam, India
- T. Beula Bell: Department of Computer Applications, Nesamony Memorial Christian College, Marthandam, India
- C. Jaspin Jeba Sheela: Department of PG Computer Science, Nesamony Memorial Christian College, Marthandam, India
13. Das D, Biswas SK, Bandyopadhyay S. A critical review on diagnosis of diabetic retinopathy using machine learning and deep learning. Multimed Tools Appl 2022; 81:25613-25655. [PMID: 35342328] [PMCID: PMC8940593] [DOI: 10.1007/s11042-022-12642-4]
Abstract
Diabetic Retinopathy (DR) is a health condition caused by Diabetes Mellitus (DM). It causes vision problems and blindness due to disfigurement of the human retina. According to statistics, 80% of diabetes patients who have battled diabetes for 15 to 20 years suffer from DR, making it a dangerous threat to people's health and lives. Manual diagnosis of the disease is feasible but overwhelming and cumbersome, and hence a revolutionary method is required. Such a health condition necessitates early recognition and diagnosis to prevent DR from developing into severe stages and causing blindness. Innumerable Machine Learning (ML) models and feature extraction techniques have been proposed by researchers across the globe for early detection of DR features. However, traditional ML models have shown either meagre generalization in feature extraction and classification when deployed on smaller datasets, or excessive training time and inefficient prediction when using larger datasets. Hence Deep Learning (DL), a subdomain of ML, has been introduced. DL models can handle smaller datasets with the help of efficient data processing techniques, although they generally require larger datasets for their deep architectures to enhance performance in feature extraction and image classification. This paper gives a detailed review of DR, its features, causes, ML models, state-of-the-art DL models, challenges, comparisons, and future directions for the early detection of DR.
Affiliation(s)
- Dolly Das: National Institute of Technology Silchar, Cachar, Assam, India
14. Deep Red Lesion Classification for Early Screening of Diabetic Retinopathy. Mathematics 2022. [DOI: 10.3390/math10050686]
Abstract
Diabetic retinopathy (DR) is an asymptomatic and vision-threatening complication among working-age adults. To prevent blindness, a deep convolutional neural network (CNN) based diagnosis can help classify less-discriminative and small-sized red lesions in early screening of DR patients. However, training deep models with minimal data is a challenging task. Fine-tuning through transfer learning is a useful alternative, but performance degradation, overfitting, and domain adaptation issues further demand architectural amendments to effectively train deep models. Various pre-trained CNNs are fine-tuned on an augmented set of image patches. The best-performing ResNet50 model is modified by introducing reinforced skip connections, a global max-pooling layer, and the sum-of-squared-error loss function. The performance of the modified model (DR-ResNet50) on five public datasets is found to be better than state-of-the-art methods in terms of well-known metrics. The highest scores (0.9851, 0.991, 0.991, 0.991, 0.991, 0.9939, 0.0029, 0.9879, and 0.9879) for sensitivity, specificity, AUC, accuracy, precision, F1-score, false-positive rate, Matthews's correlation coefficient, and kappa coefficient are obtained within a 95% confidence interval for unseen test instances from e-Ophtha_MA. This high sensitivity and low false-positive rate demonstrate the worth of the proposed framework, which is suitable for early screening due to its performance, simplicity, and robustness.
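A minimal sketch of the kind of modification described, assuming a recent torchvision ResNet50: the average-pooling head is swapped for global max pooling and a sum-reduced squared-error loss is used. The reinforced skip connections are not reproduced, and the two-class patch setup and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn
from torchvision import models

# Fine-tune a pretrained ResNet50 for red-lesion patch classification.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.avgpool = nn.AdaptiveMaxPool2d(1)          # global max pooling instead of average pooling
model.fc = nn.Linear(model.fc.in_features, 2)    # lesion vs. non-lesion patch

criterion = nn.MSELoss(reduction="sum")          # sum-of-squared-error style loss on one-hot targets
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(patches, one_hot_targets):
    """One optimization step on a batch of image patches (N, 3, 224, 224)."""
    optimizer.zero_grad()
    logits = model(patches)
    loss = criterion(torch.softmax(logits, dim=1), one_hot_targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```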
15. Yadav Y, Chand S, Sahoo RC, Sahoo BM, Kumar S. Comparative analysis of detection and classification of diabetic retinopathy by using transfer learning of CNN based models. J Intell Fuzzy Syst 2022. [DOI: 10.3233/jifs-212771]
Abstract
Machine learning and deep learning methods have become far more accurate and are now as precise as experts in their respective fields, so they are being used in almost all areas of life. People increasingly place their trust in such automated systems; in this vein, deep learning models with transfer learning of CNNs are used to detect and classify diabetic retinopathy and its different stages. The backbones of various CNN-based models, including InceptionResNetV2, InceptionV3, Xception, MobileNetV2, VGG19, and DenseNet201, are used to classify this vision-loss disease. Transfer learning is applied to these base models by adding layers such as batch normalization, dropout, and dense layers to make the models more effective and accurate for the given problem. The resulting models are trained on the Kaggle retinopathy 2019 dataset of about 3662 colored fundus fluorescein angiography images. The performance of all six trained models is measured on the test dataset in terms of precision, recall, F1 score, macro average, weighted average, confusion matrix, and accuracy. A confusion matrix is based on the maximum class probability prediction, which is a limitation of the confusion matrix, so the ROC-AUC of the different classes and models is also analyzed, since ROC-AUC is based on the actual probabilities of the different categories. The results show that InceptionResNetV2 proves to be the best model for diabetic retinopathy detection and classification among the models considered here, and it works accurately even with limited training data. This model may therefore detect and classify diabetic retinopathy automatically and accurately at an early stage, helping to minimize the impact of diabetes on vision loss.
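As a sketch of the transfer-learning setup described (a frozen ImageNet backbone with added batch normalization, dropout, and dense layers), here is a minimal Keras example; the pooling layer, layer widths, dropout rate, and optimizer are illustrative choices, not the paper's exact configuration.

```python
import tensorflow as tf

def build_transfer_model(num_classes=5, input_shape=(224, 224, 3)):
    """Frozen ImageNet backbone with added BatchNorm/Dropout/Dense layers."""
    base = tf.keras.applications.InceptionResNetV2(
        include_top=False, weights="imagenet", input_shape=input_shape)
    base.trainable = False  # transfer learning: keep pretrained features fixed
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

The same head can be reused with any of the listed backbones by swapping the `tf.keras.applications` constructor.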
Affiliation(s)
- Yadavendra Yadav: School of Computer and Systems Sciences, Jawaharlal Nehru University, New Delhi, India
- Satish Chand: School of Computer and Systems Sciences, Jawaharlal Nehru University, New Delhi, India
- Ramesh Ch. Sahoo: Faculty of Engineering and Technology, MRIIRS, Faridabad, Haryana, India
- Biswa Mohan Sahoo: School of Computing and Information Technology, Manipal University Jaipur, India
16. Xu X, Li J, Guan Y, Zhao L, Zhao Q, Zhang L, Li L. GLA-Net: A global-local attention network for automatic cataract classification. J Biomed Inform 2021; 124:103939. [PMID: 34752858] [DOI: 10.1016/j.jbi.2021.103939]
Abstract
Cataract is the leading cause of blindness among all ophthalmic diseases, so convenient and cost-effective early cataract screening is urgently needed to reduce the risk of visual loss. To date, many studies have investigated automatic cataract classification based on fundus images. However, existing methods mainly rely on global image information while ignoring various local and subtle features, even though these local features are highly helpful for identifying cataracts of different severities. To address this limitation, we introduce a deep learning technique that learns multilevel feature representations of the fundus image simultaneously. Specifically, a global-local attention network (GLA-Net) is proposed for the cataract classification task, consisting of two levels of subnets: the global-level attention subnet attends to the global structure information of the fundus image, while the local-level attention subnet focuses on the local discriminative features of specific regions. These two types of subnets extract retinal features at different attention levels, which are then combined for the final cataract classification. Our GLA-Net achieves the best performance in all metrics (90.65% detection accuracy, 83.47% grading accuracy, and 81.11% classification accuracy of grades 1 and 2). The experimental results on a real clinical dataset show that the combination of global-level and local-level attention models is effective for cataract screening and holds significant potential for other medical tasks.
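A rough sketch of the global-local idea (one branch sees the whole image, another a local crop, and their features are fused for classification), using plain ResNet-18 backbones without attention modules; the backbones, fusion by concatenation, and class count are illustrative assumptions rather than the GLA-Net architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class GlobalLocalNet(nn.Module):
    """Two-branch classifier: a global branch for the whole fundus image and a
    local branch for a cropped region of interest, fused by concatenation."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.global_branch = models.resnet18(weights=None)
        self.local_branch = models.resnet18(weights=None)
        feat = self.global_branch.fc.in_features
        self.global_branch.fc = nn.Identity()
        self.local_branch.fc = nn.Identity()
        self.classifier = nn.Linear(2 * feat, num_classes)

    def forward(self, full_image, local_crop):
        g = self.global_branch(full_image)   # global structure information
        l = self.local_branch(local_crop)    # local discriminative features
        return self.classifier(torch.cat([g, l], dim=1))
```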
Affiliation(s)
- Xi Xu: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Jianqiang Li: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Yu Guan: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Linna Zhao: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Qing Zhao: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Li Zhang: Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Li Li: National Center for Children's Health, Beijing Children's Hospital, Capital Medical University, Beijing, China
17. Red-lesion extraction in retinal fundus images by directional intensity changes' analysis. Sci Rep 2021; 11:18223. [PMID: 34521886] [PMCID: PMC8440775] [DOI: 10.1038/s41598-021-97649-x]
Abstract
Diabetic retinopathy (DR) is an important retinal disease threatening people with a long diabetic history. Blood leakage in the retina leads to the formation of red lesions, the analysis of which is helpful in determining the severity of the disease. In this paper, a novel red-lesion extraction method is proposed. The new method first determines the boundary pixels of blood vessels and red lesions. It then determines the distinguishing features of red-lesion boundary pixels to discriminate them from other boundary pixels. The main point utilized here is that a red lesion appears as a significant intensity change in almost all directions in the fundus image, which is made feasible by considering special neighborhood windows around the extracted boundary pixels. The performance of the proposed method has been evaluated on three datasets: Diaretdb0, Diaretdb1, and Kaggle. The method provides sensitivity and specificity of 0.87 and 0.88 on Diaretdb1, 0.89 and 0.9 on Diaretdb0, and 0.82 and 0.9 on Kaggle. The proposed method is also time-efficient in the red-lesion extraction process.
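To make the directional-intensity-change idea concrete, the following numpy sketch checks whether the green channel brightens in almost all directions when moving outward from a candidate pixel, as expected around a dark red lesion; the sampling radius, contrast threshold, and the 7-of-8 direction rule are illustrative assumptions.

```python
import numpy as np

def is_red_lesion_like(green, row, col, radius=4, drop=0.04):
    """Return True if intensity increases significantly in almost all
    directions when moving outward from (row, col).

    `green` is the green channel as a 2-D float array in [0, 1]; red lesions
    appear as dark blobs, so their surroundings should be brighter in every
    direction. `drop` is an illustrative contrast threshold.
    """
    center = green[row, col]
    directions = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    hits = 0
    for dr, dc in directions:
        r, c = row + dr * radius, col + dc * radius
        if 0 <= r < green.shape[0] and 0 <= c < green.shape[1] and green[r, c] - center > drop:
            hits += 1
    return hits >= 7  # significant change in almost all 8 directions
```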
18. Xia H, Lan Y, Song S, Li H. A multi-scale segmentation-to-classification network for tiny microaneurysm detection in fundus images. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.107140]
19. Gegundez-Arias ME, Marin-Santos D, Perez-Borrero I, Vasallo-Vazquez MJ. A new deep learning method for blood vessel segmentation in retinal images based on convolutional kernels and modified U-Net model. Comput Methods Programs Biomed 2021; 205:106081. [PMID: 33882418] [DOI: 10.1016/j.cmpb.2021.106081]
Abstract
BACKGROUND AND OBJECTIVE: Automatic monitoring of retinal blood vessels proves very useful for the clinical assessment of ocular vascular anomalies or retinopathies. This paper presents an efficient and accurate deep learning-based method for vessel segmentation in eye fundus images.
METHODS: The approach consists of a convolutional neural network based on a simplified version of the U-Net architecture that combines residual blocks and batch normalization in the up- and downscaling phases. The network receives patches extracted from the original image as input and is trained with a novel loss function that considers the distance of each pixel to the vascular tree. At its output, it generates the probability of each pixel of the input patch belonging to the vascular structure. Applying the network to the patches into which a retinal image can be divided yields the pixel-wise probability map of the complete image. This probability map is then binarized with a certain threshold to generate the blood vessel segmentation produced by the method.
RESULTS: The method has been developed and evaluated on the DRIVE, STARE and CHASE_Db1 databases, which provide a manual segmentation of the vascular tree for each of their images. Using this set of images as ground truth, the accuracy of the vessel segmentations obtained for a proposed operating point (established by a single threshold value for each database) was quantified, and the overall performance was measured using the area under the receiver operating characteristic curve. The method demonstrated robustness to the variability of fundus images of diverse origin and worked with the highest level of accuracy over the entire set of possible operating points, compared to the most accurate methods found in the literature.
CONCLUSIONS: The analysis of the results concludes that the proposed method outperforms the other state-of-the-art methods and can be considered the most promising for integration into a real tool for vascular structure segmentation.
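A minimal sketch of the patch-based inference described in METHODS: the image is tiled, per-patch probabilities are predicted by the trained network (represented here by a `predict_patch` callable), overlapping predictions are averaged into a full probability map, and the map is thresholded. The patch size, 50% overlap, and threshold are illustrative assumptions.

```python
import numpy as np

def segment_vessels(image, predict_patch, patch=64, threshold=0.5):
    """Tile a fundus image, predict per-pixel vessel probabilities patch by
    patch, reassemble the full probability map, and binarize it.

    `predict_patch` stands in for the trained U-Net-style network: it maps a
    (patch, patch) array to a (patch, patch) array of probabilities. Border
    strips narrower than a patch are left unprocessed in this simplified version.
    """
    h, w = image.shape[:2]
    prob = np.zeros((h, w), dtype=np.float32)
    count = np.zeros((h, w), dtype=np.float32)
    for r in range(0, h - patch + 1, patch // 2):      # 50% overlap
        for c in range(0, w - patch + 1, patch // 2):
            prob[r:r + patch, c:c + patch] += predict_patch(image[r:r + patch, c:c + patch])
            count[r:r + patch, c:c + patch] += 1.0
    prob /= np.maximum(count, 1.0)                      # average overlapping predictions
    return (prob >= threshold).astype(np.uint8), prob
```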
Affiliation(s)
- Manuel E Gegundez-Arias: Vision, Prediction, Optimisation and Control Systems Department, Science and Technology Research Centre, University of Huelva, Avenida de las Fuerzas Armadas s/n, 21007 Huelva, Spain
- Diego Marin-Santos: Vision, Prediction, Optimisation and Control Systems Department, Science and Technology Research Centre, University of Huelva, Avenida de las Fuerzas Armadas s/n, 21007 Huelva, Spain
- Isaac Perez-Borrero: Vision, Prediction, Optimisation and Control Systems Department, Science and Technology Research Centre, University of Huelva, Avenida de las Fuerzas Armadas s/n, 21007 Huelva, Spain
- Manuel J Vasallo-Vazquez: Vision, Prediction, Optimisation and Control Systems Department, Science and Technology Research Centre, University of Huelva, Avenida de las Fuerzas Armadas s/n, 21007 Huelva, Spain
20. Alam MN, Le D, Yao X. Differential artery-vein analysis in quantitative retinal imaging: a review. Quant Imaging Med Surg 2021; 11:1102-1119. [PMID: 33654680] [PMCID: PMC7829162] [DOI: 10.21037/qims-20-557]
Abstract
Quantitative retinal imaging is essential for eye disease detection, staging classification, and treatment assessment. It is known that different eye diseases or severity stages can affect the artery and vein systems in different ways. Therefore, differential artery-vein (AV) analysis can improve the performance of quantitative retinal imaging. In this article, we provide a brief summary of technical rationales and clinical applications of differential AV analysis in fundus photography, optical coherence tomography (OCT), and OCT angiography (OCTA).
Affiliation(s)
- Minhaj Nur Alam: Department of Bioengineering, University of Illinois at Chicago, Chicago, IL, USA
- David Le: Department of Bioengineering, University of Illinois at Chicago, Chicago, IL, USA
- Xincheng Yao: Department of Bioengineering, University of Illinois at Chicago, Chicago, IL, USA; Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, USA
21. Gilbert MJ, Sun JK. Artificial Intelligence in the assessment of diabetic retinopathy from fundus photographs. Semin Ophthalmol 2021; 35:325-332. [PMID: 33539253] [DOI: 10.1080/08820538.2020.1855358]
Abstract
Background: Over the next 25 years, the global prevalence of diabetes is expected to grow to affect 700 million individuals. Consequently, an unprecedented number of patients will be at risk for vision loss from diabetic eye disease. This demand will almost certainly exceed the supply of eye care professionals to individually evaluate each patient on an annual basis, signaling the need for 21st century tools to assist our profession in meeting this challenge.
Methods: Review of available literature on artificial intelligence (AI) as applied to diabetic retinopathy (DR) detection and prediction.
Results: The field of AI has seen exponential growth in evaluating fundus photographs for DR. AI systems employ machine learning and artificial neural networks to teach themselves how to grade DR from libraries of tens of thousands of images and may be able to predict future DR progression based on baseline fundus photographs.
Conclusions: AI algorithms are highly promising for the purposes of DR detection and will likely be able to reliably predict DR worsening in the future. A deeper understanding of these systems and how they interpret images is critical as they transition from the bench into the clinic.
Affiliation(s)
- Michael J Gilbert: Joslin Diabetes Center, Beetham Eye Institute, Boston, MA, United States
- Jennifer K Sun: Joslin Diabetes Center, Beetham Eye Institute, Boston, MA, United States; Department of Ophthalmology, Harvard Medical School, Boston, MA, United States
22. Lessons learnt from harnessing deep learning for real-world clinical applications in ophthalmology: detecting diabetic retinopathy from retinal fundus photographs. Artif Intell Med 2021. [DOI: 10.1016/b978-0-12-821259-2.00013-2]
23. Romero-Oraá R, García M, Oraá-Pérez J, López-Gálvez MI, Hornero R. Effective Fundus Image Decomposition for the Detection of Red Lesions and Hard Exudates to Aid in the Diagnosis of Diabetic Retinopathy. Sensors (Basel) 2020; 20:E6549. [PMID: 33207825] [PMCID: PMC7698181] [DOI: 10.3390/s20226549]
Abstract
Diabetic retinopathy (DR) is characterized by the presence of red lesions (RLs), such as microaneurysms and hemorrhages, and bright lesions, such as exudates (EXs). Early DR diagnosis is paramount to prevent serious sight damage. Computer-assisted diagnostic systems are based on the detection of those lesions through the analysis of fundus images. In this paper, a novel method is proposed for the automatic detection of RLs and EXs. As the main contribution, the fundus image was decomposed into various layers, including the lesion candidates, the reflective features of the retina, and the choroidal vasculature visible in tigroid retinas. We used a proprietary database containing 564 images, randomly divided into a training set and a test set, and the public database DiaretDB1 to verify the robustness of the algorithm. Lesion detection results were computed per pixel and per image. Using the proprietary database, 88.34% per-image accuracy (ACCi), 91.07% per-pixel positive predictive value (PPVp), and 85.25% per-pixel sensitivity (SEp) were reached for the detection of RLs. Using the public database, 90.16% ACCi, 96.26% PPVp, and 84.79% SEp were obtained. As for the detection of EXs, 95.41% ACCi, 96.01% PPVp, and 89.42% SEp were reached with the proprietary database. Using the public database, 91.80% ACCi, 98.59% PPVp, and 91.65% SEp were obtained. The proposed method could be useful to aid in the diagnosis of DR, reducing the workload of specialists and improving the attention to diabetic patients.
Affiliation(s)
- Roberto Romero-Oraá: Biomedical Engineering Group, Universidad de Valladolid, 47011 Valladolid, Spain; Centro de Investigación Biomédica en Red de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), 28029 Madrid, Spain
- María García: Biomedical Engineering Group, Universidad de Valladolid, 47011 Valladolid, Spain; Centro de Investigación Biomédica en Red de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), 28029 Madrid, Spain
- Javier Oraá-Pérez: Biomedical Engineering Group, Universidad de Valladolid, 47011 Valladolid, Spain
- María I. López-Gálvez: Biomedical Engineering Group, Universidad de Valladolid, 47011 Valladolid, Spain; Centro de Investigación Biomédica en Red de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), 28029 Madrid, Spain; Department of Ophthalmology, Hospital Clínico Universitario de Valladolid, 47003 Valladolid, Spain; Instituto Universitario de Oftalmobiología Aplicada (IOBA), Universidad de Valladolid, 47011 Valladolid, Spain
- Roberto Hornero: Biomedical Engineering Group, Universidad de Valladolid, 47011 Valladolid, Spain; Centro de Investigación Biomédica en Red de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), 28029 Madrid, Spain; Instituto de Investigación en Matemáticas (IMUVA), Universidad de Valladolid, 47011 Valladolid, Spain
24. Melo T, Mendonça AM, Campilho A. Microaneurysm detection in color eye fundus images for diabetic retinopathy screening. Comput Biol Med 2020; 126:103995. [PMID: 33007620] [DOI: 10.1016/j.compbiomed.2020.103995]
Abstract
Diabetic retinopathy (DR) is a diabetes complication which, in extreme situations, may lead to blindness. Since the first stages are often asymptomatic, regular eye examinations are required for an early diagnosis. As microaneurysms (MAs) are one of the first signs of DR, several automated methods have been proposed for their detection in order to reduce the ophthalmologists' workload. Although local convergence filters (LCFs) have already been applied for feature extraction, their potential as MA enhancement operators had not yet been explored. In this work, we propose a sliding band filter for MA enhancement aimed at obtaining a set of initial MA candidates. Then, a combination of the filter responses with color, contrast and shape information is used by an ensemble of classifiers for final candidate classification. Finally, for each eye fundus image, a score is computed from the confidence values assigned to the MAs detected in the image. The performance of the proposed methodology was evaluated on four datasets. At the lesion level, sensitivities of 64% and 81% were achieved for an average of 8 false positives per image (FPIs) in e-ophtha MA and SCREEN-DR, respectively. In the latter dataset, an AUC of 0.83 was also obtained for DR detection.
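As a sketch of the final per-image scoring step (a score computed from the confidence values of the detected MA candidates), here is one simple aggregation; averaging the top-k confidences is an illustrative choice and not necessarily the rule used in the paper.

```python
import numpy as np

def image_score(candidate_confidences, top_k=5):
    """Aggregate per-candidate MA confidences into a single per-image score."""
    if len(candidate_confidences) == 0:
        return 0.0
    conf = np.sort(np.asarray(candidate_confidences, dtype=float))[::-1]
    return float(conf[:top_k].mean())

# Example: an image with three detected candidates.
print(image_score([0.9, 0.4, 0.2]))  # 0.5
```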
Affiliation(s)
- Tânia Melo: Institute for Systems and Computer Engineering, Technology and Science, Campus da Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal; Faculty of Engineering of the University of Porto, Rua Dr. Roberto Frias, s/n, 4200-465 Porto, Portugal
- Ana Maria Mendonça: Institute for Systems and Computer Engineering, Technology and Science, Campus da Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal; Faculty of Engineering of the University of Porto, Rua Dr. Roberto Frias, s/n, 4200-465 Porto, Portugal
- Aurélio Campilho: Institute for Systems and Computer Engineering, Technology and Science, Campus da Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal; Faculty of Engineering of the University of Porto, Rua Dr. Roberto Frias, s/n, 4200-465 Porto, Portugal
25. Automatic detection of non-perfusion areas in diabetic macular edema from fundus fluorescein angiography for decision making using deep learning. Sci Rep 2020; 10:15138. [PMID: 32934283] [PMCID: PMC7492239] [DOI: 10.1038/s41598-020-71622-6]
Abstract
Vision loss caused by diabetic macular edema (DME) can be prevented by early detection and laser photocoagulation. As there is no comprehensive technique to recognize non-perfusion areas (NPA), we propose an automatic method for detecting NPA on fundus fluorescein angiography (FFA) in DME. The study included 3,014 FFA images of 221 patients with DME. We use three convolutional neural networks (CNNs), DenseNet, ResNet50, and VGG16, to identify non-perfusion regions (NP), microaneurysms, and leakages in FFA images, and the NPA is segmented using attention U-Net. To validate its performance, we applied our detection algorithm to 249 FFA images in which the NPA had been manually delineated by three ophthalmologists. For DR lesion classification, the area under the curve is 0.8855 for NP regions, 0.9782 for microaneurysms, and 0.9765 for the leakage classifier, and the average precision of the NP region overlap ratio is 0.643. NP regions of DME in FFA images are thus identified by a new automated deep learning algorithm. This study spans computer-aided diagnosis through treatment and provides a theoretical basis for the application of intelligent guided laser treatment.
26. Roy Chowdhury A, Banerjee S, Chatterjee T. A cybernetic systems approach to abnormality detection in retina images using case based reasoning. SN Appl Sci 2020. [DOI: 10.1007/s42452-020-3187-0]
27. Wang H, Yuan G, Zhao X, Peng L, Wang Z, He Y, Qu C, Peng Z. Hard exudate detection based on deep model learned information and multi-feature joint representation for diabetic retinopathy screening. Comput Methods Programs Biomed 2020; 191:105398. [PMID: 32092614] [DOI: 10.1016/j.cmpb.2020.105398]
Abstract
BACKGROUND AND OBJECTIVE: Diabetic retinopathy (DR), which is generally diagnosed by the presence of hemorrhages and hard exudates, is one of the most prevalent causes of visual impairment and blindness. Early detection of hard exudates (HEs) in color fundus photographs can help prevent such destructive damage. However, this is a challenging task due to high intra-class diversity and high similarity with other structures in fundus images. Most existing methods for detecting HEs characterize HEs using hand-crafted features (HCFs) only, which cannot characterize HEs accurately. Deep learning methods are scarce in this domain because they require large-scale sample sets for training, which are not generally available for most routine medical imaging research.
METHODS: To address these challenges, we propose a novel methodology for HE detection using a deep convolutional neural network (DCNN) and multi-feature joint representation. Specifically, we present a new optimized mathematical morphological approach that first segments HE candidates accurately. Then, each candidate is characterized using combined features based on deep features with HCFs incorporated, implemented by a ridge regression-based feature fusion. The method employs multi-space-based intensity features, geometric features, a gray-level co-occurrence matrix (GLCM)-based texture descriptor and a gray-level size zone matrix (GLSZM)-based texture descriptor to construct the HCFs, and a DCNN to automatically learn the deep information of HEs. Finally, a random forest is employed to identify the true HEs among the candidates.
RESULTS: The proposed method is evaluated on two benchmark databases. It obtains an F-score of 0.8929 with an area under the curve (AUC) of 0.9644 on the e-optha database and an F-score of 0.9326 with an AUC of 0.9323 on the HEI-MED database. These results demonstrate that our approach outperforms state-of-the-art methods. Our model also proves suitable for clinical applications based on private clinical images from a local hospital.
CONCLUSIONS: This newly proposed method integrates traditional HCFs and deep features learned from a DCNN for detecting HEs. It achieves a new state of the art in both HE detection and DR screening. Furthermore, the proposed feature selection and fusion strategy reduces feature dimension and improves HE detection performance.
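As a sketch of the final classification stage (fused deep and handcrafted candidate features fed to a random forest), here is a minimal scikit-learn example; plain concatenation stands in for the ridge-regression-based fusion, and the array names and forest size are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_candidates(deep_feats, handcrafted_feats, labels, test_deep, test_hand):
    """Fuse deep and handcrafted features per HE candidate and classify them.

    `deep_feats`/`handcrafted_feats` are (n_candidates, d) arrays for training
    candidates with binary `labels` (1 = true hard exudate); the test arrays
    follow the same layout.
    """
    X_train = np.hstack([deep_feats, handcrafted_feats])
    X_test = np.hstack([test_deep, test_hand])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, labels)
    return clf.predict_proba(X_test)[:, 1]   # probability of being a true HE
```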
Affiliation(s)
- Hui Wang: School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu 611731, China
- Guohui Yuan: School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu 611731, China
- Xuegong Zhao: School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu 611731, China
- Lingbing Peng: School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Zhuoran Wang: School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu 611731, China
- Yanmin He: School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu 611731, China
- Chao Qu: Department of Ophthalmology, Sichuan Academy of Medical Sciences and Sichuan Provincial People's Hospital, Chengdu 610072, China
- Zhenming Peng: School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu 611731, China
28. Kingkosol P, Pooprasert P, Choopong P, Hunchangsith B, Laksanaphuk V, Tantibundhit C. Automated Cytomegalovirus Retinitis Screening in Fundus Images. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:1996-2002. [PMID: 33018395] [DOI: 10.1109/embc44109.2020.9175461]
Abstract
This work proposes an automated algorithm for classifying retinal fundus images as cytomegalovirus retinitis (CMVR), normal, or other diseases. The adaptive wavelet packet transform (AWPT) was used to extract features. The retinal fundus images were transformed using a 4-level Haar wavelet packet (WP) transform. The first two best trees were obtained using Shannon and log energy entropy, while the third best tree was obtained using the Daubechies-4 mother wavelet with Shannon entropy. The coefficients of each node were extracted, where the feature value of each leaf node of the best tree was the average of the WP coefficients in that node, while those of other non-leaf nodes were set to zero. The feature vector was classified using an artificial neural network (ANN). The effectiveness of the algorithm was evaluated using ten-fold cross-validation on a dataset of 1,011 images (310 CMVR, 240 normal, and 461 other diseases). In testing on a dataset of 101 images (31 CMVR, 24 normal, and 46 other diseases), the AWPT-based ANN had sensitivities of 90.32%, 83.33%, and 91.30% and specificities of 95.71%, 94.81%, and 92.73%. In conclusion, the proposed algorithm has promising potential for CMVR screening, where the AWPT-based ANN is applicable with scarce data and limited resources.
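A minimal sketch of wavelet packet feature extraction with PyWavelets, feeding a small ANN; it uses the full fixed level-4 Haar tree and per-node mean coefficients as a simplification, rather than the entropy-selected best trees described above.

```python
import numpy as np
import pywt

def awpt_features(green_channel, level=4, wavelet="haar"):
    """Average wavelet-packet coefficients per node at the given level.

    Simplification: the full fixed tree at `level` is used instead of the
    entropy-selected best trees described in the abstract.
    """
    wp = pywt.WaveletPacket2D(data=green_channel, wavelet=wavelet, maxlevel=level)
    return np.array([node.data.mean() for node in wp.get_level(level, order="natural")])

# The resulting vectors can train a small ANN, e.g. with scikit-learn:
#   from sklearn.neural_network import MLPClassifier
#   X = np.stack([awpt_features(img) for img in images]); y = labels
#   clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)
```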
29. Stolte S, Fang R. A survey on medical image analysis in diabetic retinopathy. Med Image Anal 2020; 64:101742. [PMID: 32540699] [DOI: 10.1016/j.media.2020.101742]
Abstract
Diabetic Retinopathy (DR) represents a highly prevalent complication of diabetes in which individuals suffer from damage to the blood vessels in the retina. The disease manifests itself through lesion presence, starting with microaneurysms at the nonproliferative stage before being characterized by neovascularization in the proliferative stage. Retinal specialists strive to detect DR early so that the disease can be treated before substantial, irreversible vision loss occurs. The level of DR severity indicates the extent of treatment necessary: vision loss may be preventable by effective diabetes management in mild (early) stages, rather than subjecting the patient to invasive laser surgery. Using artificial intelligence (AI), highly accurate and efficient systems can be developed to assist medical professionals in screening and diagnosing DR earlier and without the full resources that are available in specialty clinics. In particular, deep learning facilitates diagnosis earlier and with higher sensitivity and specificity. Such systems make decisions based on minimally handcrafted features and pave the way for personalized therapies. Thus, this survey provides a comprehensive description of the current technology used in each step of DR diagnosis. It begins with an introduction to the disease and the current technologies and resources available in this space, and proceeds to discuss the frameworks that different teams have used to detect and classify DR. Ultimately, we conclude that deep learning systems offer revolutionary potential for DR identification and the prevention of vision loss.
Affiliation(s)
- Skylar Stolte: J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, 1275 Center Drive, Biomedical Sciences Building JG56, P.O. Box 116131, Gainesville, FL 32611-6131, USA
- Ruogu Fang: J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, 1275 Center Drive, Biomedical Sciences Building JG56, P.O. Box 116131, Gainesville, FL 32611-6131, USA
30. Jiang H, Yang K, Gao M, Zhang D, Ma H, Qian W. An Interpretable Ensemble Deep Learning Model for Diabetic Retinopathy Disease Classification. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2019:2045-2048. [PMID: 31946303] [DOI: 10.1109/embc.2019.8857160]
Abstract
Diabetic retinopathy (DR) is an eye disease caused by long-term diabetes. Many patients around the world suffer from DR, which may lead to blindness, so early detection of DR is urgently needed to remind patients to seek treatment in time. This paper presents an automatic image-level DR detection system using multiple well-trained deep learning models. In addition, several deep learning models are integrated using the Adaboost algorithm in order to reduce the bias of any single model. To explain the results of DR detection, this paper provides weighted class activation maps (CAMs) that illustrate the suspected positions of lesions. In the pre-processing stage, eight image transformations are also introduced to augment the diversity of the fundus images. Experiments demonstrate that the proposed method is more robust and performs better than any individual deep learning model.
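As a sketch of how several models' outputs can be fused with AdaBoost-style weights (the same weights could also blend per-model CAMs), here is a minimal numpy example; the weights and the two-model, two-class setup are illustrative.

```python
import numpy as np

def ensemble_predict(prob_list, weights):
    """Combine per-model class probabilities with AdaBoost-style weights.

    `prob_list` holds one (n_samples, n_classes) softmax array per base model
    and `weights` the corresponding model weights (e.g., derived from each
    model's weighted training error).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    stacked = np.stack(prob_list, axis=0)     # (n_models, n_samples, n_classes)
    fused = np.tensordot(w, stacked, axes=1)  # weighted average over models
    return fused.argmax(axis=1), fused

# Example with two models on three samples and two classes.
p1 = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])
p2 = np.array([[0.6, 0.4], [0.7, 0.3], [0.1, 0.9]])
labels, fused = ensemble_predict([p1, p2], weights=[0.7, 0.3])
```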
31. Lim G, Bellemo V, Xie Y, Lee XQ, Yip MYT, Ting DSW. Different fundus imaging modalities and technical factors in AI screening for diabetic retinopathy: a review. Eye Vis (Lond) 2020; 7:21. [PMID: 32313813] [PMCID: PMC7155252] [DOI: 10.1186/s40662-020-00182-7]
Abstract
BACKGROUND: Effective screening is a desirable method for the early detection and successful treatment of diabetic retinopathy, and fundus photography is currently the dominant medium for retinal imaging due to its convenience and accessibility. Manual screening using fundus photographs has, however, involved considerable costs for patients, clinicians and national health systems, which has limited its application, particularly in less-developed countries. The advent of artificial intelligence, and in particular deep learning techniques, has raised the possibility of widespread automated screening.
MAIN TEXT: In this review, we first briefly survey major published advances in retinal analysis using artificial intelligence. We take care to separately describe standard multiple-field fundus photography and the newer modalities of ultra-wide field photography and smartphone-based photography. Finally, we consider several machine learning concepts that have been particularly relevant to the domain and illustrate their usage with extant works.
CONCLUSIONS: In the ophthalmology field, it has been demonstrated that deep learning tools for diabetic retinopathy show clinically acceptable diagnostic performance when using colour retinal fundus images. Artificial intelligence models are among the most promising solutions to tackle the burden of diabetic retinopathy management in a comprehensive manner. However, future research is crucial to assess potential clinical deployment, evaluate the cost-effectiveness of different deep learning systems in clinical practice and improve clinical acceptance.
Collapse
Affiliation(s)
- Gilbert Lim
- School of Computing, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
| | - Valentina Bellemo
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, National University of Singapore, 11 Third Hospital Avenue, Singapore 168751, Singapore
| | - Yuchen Xie
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
| | - Xin Q. Lee
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
| | - Michelle Y. T. Yip
- Duke-NUS Medical School, National University of Singapore, 11 Third Hospital Avenue, Singapore 168751, Singapore
| | - Daniel S. W. Ting
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, National University of Singapore, 11 Third Hospital Avenue, Singapore 168751, Singapore
- Vitreo-Retinal Service, Singapore National Eye Centre, 11 Third Hospital Avenue, Singapore 168751, Singapore
- Artificial Intelligence in Ophthalmology, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore 168751, Singapore
| |
Collapse
|
32
|
He Y, Jiao W, Shi Y, Lian J, Zhao B, Zou W, Zhu Y, Zheng Y. Segmenting Diabetic Retinopathy Lesions in Multispectral Images Using Low-Dimensional Spatial-Spectral Matrix Representation. IEEE J Biomed Health Inform 2020; 24:493-502. [DOI: 10.1109/jbhi.2019.2912668] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
33
|
Zhou Y, Li G, Li H. Automatic Cataract Classification Using Deep Neural Network With Discrete State Transition. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:436-446. [PMID: 31295110 DOI: 10.1109/tmi.2019.2928229] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Cataract is a clouding of the lens that affects vision and is the leading cause of blindness worldwide. Accurate and convenient cataract detection and severity evaluation would improve this situation. Automatic cataract detection and grading methods are proposed in this paper. When prior knowledge is available, improved Haar features and visible structure features are combined, and a multilayer perceptron with discrete state transition (DST-MLP) or exponential DST (EDST-MLP) is designed as the classifier. Without prior knowledge, residual neural networks with DST (DST-ResNet) or EDST (EDST-ResNet) are proposed. With or without prior knowledge, the proposed DST and EDST strategies prevent overfitting and reduce storage memory during network training and deployment, and neural networks using these strategies achieve state-of-the-art accuracy in cataract detection and grading. The experimental results indicate that combined features consistently outperform any single type of feature, and that classification methods whose feature extraction is based on prior knowledge are better suited to complicated medical image classification tasks. These analyses can provide constructive advice for other medical image processing applications.
Collapse
|
34
|
Li Z, Guo C, Nie D, Lin D, Zhu Y, Chen C, Xiang Y, Xu F, Jin C, Zhang X, Yang Y, Zhang K, Zhao L, Zhang P, Han Y, Yun D, Wu X, Yan P, Lin H. Development and Evaluation of a Deep Learning System for Screening Retinal Hemorrhage Based on Ultra-Widefield Fundus Images. Transl Vis Sci Technol 2020; 9:3. [PMID: 32518708 PMCID: PMC7255628 DOI: 10.1167/tvst.9.2.3] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2019] [Accepted: 11/21/2019] [Indexed: 12/15/2022] Open
Abstract
Purpose To develop and evaluate a deep learning (DL) system for retinal hemorrhage (RH) screening using ultra-widefield fundus (UWF) images. Methods A total of 16,827 UWF images from 11,339 individuals were used to develop the DL system. Three experienced retina specialists were recruited to grade UWF images independently. Three independent data sets from 3 different institutions were used to validate the effectiveness of the DL system. The data set from Zhongshan Ophthalmic Center (ZOC) was selected to compare the classification performance of the DL system and general ophthalmologists. A heatmap was generated to identify the most important area used by the DL model to classify RH and to discern whether the RH involved the anatomical macula. Results In the three independent data sets, the DL model for detecting RH achieved areas under the curve of 0.997, 0.998, and 0.999, with sensitivities of 97.6%, 96.7%, and 98.9% and specificities of 98.0%, 98.7%, and 99.4%. In the ZOC data set, the sensitivity of the DL model was better than that of the general ophthalmologists, although the general ophthalmologists had slightly higher specificities. The heatmaps highlighted RH regions in all true-positive images, and the RH within the anatomical macula was determined based on heatmaps. Conclusions Our DL system showed reliable performance for detecting RH and could be used to screen for RH-related diseases. Translational Relevance As a screening tool, this automated system may aid early diagnosis and management of RH-related retinal and systemic diseases by allowing timely referral.
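The abstract notes that heatmaps were used to decide whether a retinal hemorrhage involves the anatomical macula. A minimal sketch of that kind of overlap check is given below, assuming a hypothetical macular mask, activation threshold and overlap threshold; none of these values are taken from the published system.

    import numpy as np

    def hemorrhage_involves_macula(heatmap, macula_mask,
                                   act_thresh=0.6, overlap_thresh=0.05):
        """Decide whether the model's attention (heatmap) overlaps the macula.
        The thresholds are illustrative, not the published values."""
        active = heatmap >= act_thresh                      # high-attention pixels
        overlap = np.logical_and(active, macula_mask).sum()
        return overlap / (macula_mask.sum() + 1e-8) >= overlap_thresh

    # toy heatmap and a circular macular mask
    h, w = 128, 128
    yy, xx = np.mgrid[:h, :w]
    macula = (yy - 64) ** 2 + (xx - 64) ** 2 < 20 ** 2
    heatmap = np.random.rand(h, w)
    print(hemorrhage_involves_macula(heatmap, macula))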
Collapse
Affiliation(s)
- Zhongwen Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Chong Guo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Danyao Nie
- Shenzhen Eye Hospital, Shenzhen Key Laboratory of Ophthalmology, Affiliated Shenzhen Eye Hospital of Jinan University, Shenzhen, China
| | - Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Yi Zhu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China.,Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, FL, USA
| | - Chuan Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China.,Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, FL, USA
| | - Yifan Xiang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Fabao Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Chenjin Jin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Xiayin Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Yahan Yang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Kai Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China.,School of Computer Science and Technology, Xidian University, Xi'an, China
| | - Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Ping Zhang
- Xudong Ophthalmic Hospital, Inner Mongolia, China
| | - Yu Han
- EYE & ENT Hospital of Fudan University, Shanghai, China
| | - Dongyuan Yun
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Pisong Yan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| |
Collapse
|
35
|
Srivastava V, Purwar RK. Classification of eye-fundus images with diabetic retinopathy using shape based features integrated into a convolutional neural network. JOURNAL OF INFORMATION & OPTIMIZATION SCIENCES 2020. [DOI: 10.1080/02522667.2020.1714186] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Affiliation(s)
- Varun Srivastava
- University School of Information, Communication & Technology, Guru Gobind Singh Indraprastha University Sector 16C, Dwarka, New Delhi 110078 India,
| | - Ravindra Kumar Purwar
- University School of Information, Communication & Technology, Guru Gobind Singh Indraprastha University Sector 16C, Dwarka, New Delhi 110078 India,
| |
Collapse
|
36
|
Accelerating Retinal Fundus Image Classification Using Artificial Neural Networks (ANNs) and Reconfigurable Hardware (FPGA). ELECTRONICS 2019. [DOI: 10.3390/electronics8121522] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Diabetic retinopathy (DR) and glaucoma are common eye diseases that affect the blood vessels of the retina and are two of the leading causes of vision loss around the world. Glaucoma is a common eye condition in which the optic nerve that connects the eye to the brain becomes damaged, whereas DR is a complication of diabetes caused by high blood sugar levels damaging the back of the eye. In order to produce an accurate and early diagnosis, an extremely high number of retinal images needs to be processed. Given the computational complexity of the required image processing algorithms and the need for high-performance architectures, this paper proposes and demonstrates the use of fully parallel field programmable gate arrays (FPGAs) to overcome the burden of real-time computing in conventional software architectures. The experimental results achieved through software implementation were validated on an FPGA device. The results showed a remarkable improvement in terms of computational speed and power consumption. This paper presents various preprocessing methods for analysing fundus images, which can serve as a diagnostic tool for the detection of glaucoma and diabetic retinopathy. In the proposed adaptive thresholding-based preprocessing method, features were selected by calculating the area of the segmented optic disk, which was then classified using a feedforward neural network (NN). The analysis was carried out using feature extraction through existing methodologies such as adaptive thresholding, histogram analysis and the wavelet transform. Results obtained through these methods were quantified to obtain optimum performance in terms of classification accuracy. The proposed hardware implementation outperforms existing methods and offers a significant improvement in terms of computational speed and power consumption.
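A minimal sketch of the adaptive-thresholding preprocessing idea using OpenCV; the blur kernel, block size, offset and the use of the largest bright connected component as a crude optic-disc-area feature are illustrative assumptions, not the paper's exact pipeline.

    import cv2
    import numpy as np

    # Toy grayscale fundus-like image; in practice this would be a loaded photo.
    img = (np.random.rand(256, 256) * 255).astype(np.uint8)
    img = cv2.GaussianBlur(img, (9, 9), 0)      # suppress noise before thresholding

    # Adaptive thresholding: local mean minus a constant decides the threshold.
    binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 31, -5)

    # Area of the largest bright region as a crude optic-disc-size feature.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    areas = stats[1:, cv2.CC_STAT_AREA]         # skip background label 0
    disc_area = int(areas.max()) if areas.size else 0
    print("candidate optic-disc area (pixels):", disc_area)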
Collapse
|
37
|
Diabetic retinopathy detection using red lesion localization and convolutional neural networks. Comput Biol Med 2019; 116:103537. [PMID: 31747632 DOI: 10.1016/j.compbiomed.2019.103537] [Citation(s) in RCA: 45] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2019] [Revised: 11/08/2019] [Accepted: 11/10/2019] [Indexed: 11/21/2022]
Abstract
Detecting the early signs of diabetic retinopathy (DR) is essential, as timely treatment might reduce or even prevent vision loss. Moreover, automatically localizing the regions of the retinal image that might contain lesions can favorably assist specialists in the task of detection. In this study, we designed a lesion localization model using a deep network patch-based approach. Our goal was to reduce the complexity of the model while improving its performance. For this purpose, we designed an efficient procedure (including two convolutional neural network models) for selecting the training patches, such that the challenging examples would be given special attention during the training process. Using the labeling of the region, a DR decision can be given to the initial image, without the need for special training. The model is trained on the Standard Diabetic Retinopathy Database, Calibration Level 1 (DIARETDB1) database and is tested on several databases (including Messidor) without any further adaptation. It reaches an area under the receiver operating characteristic curve of 0.912 (95% CI: 0.897-0.928) for DR screening, and a sensitivity of 0.940 (95% CI: 0.921-0.959). These values are competitive with other state-of-the-art approaches.
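The patch-selection idea, in which challenging examples receive special attention during training, can be sketched as follows; the selection criterion, the keep ratio and the dummy model are assumptions for illustration only.

    import numpy as np

    def select_hard_patches(patches, labels, predict_proba, keep_ratio=0.3, rng=None):
        """Keep the patches the current model finds hardest, plus a random remainder.

        patches: array of image patches, shape (N, H, W, C).
        labels: binary ground-truth labels, shape (N,).
        predict_proba: callable returning P(lesion) for each patch, shape (N,).
        """
        rng = np.random.default_rng(rng)
        p = predict_proba(patches)
        difficulty = np.abs(p - labels)            # distance from the true label
        order = np.argsort(-difficulty)            # hardest first
        n_hard = int(keep_ratio * len(patches))
        hard_idx = order[:n_hard]
        easy_idx = rng.choice(order[n_hard:], size=n_hard, replace=False)
        keep = np.concatenate([hard_idx, easy_idx])
        return patches[keep], labels[keep]

    # toy usage with a random "model" standing in for the first-stage CNN
    patches = np.random.rand(100, 32, 32, 3)
    labels = np.random.randint(0, 2, size=100)
    dummy_model = lambda x: np.random.rand(len(x))
    x_sel, y_sel = select_hard_patches(patches, labels, dummy_model, keep_ratio=0.2)
    print(x_sel.shape, y_sel.shape)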
Collapse
|
38
|
Porwal P, Pachade S, Kokare M, Deshmukh G, Son J, Bae W, Liu L, Wang J, Liu X, Gao L, Wu T, Xiao J, Wang F, Yin B, Wang Y, Danala G, He L, Choi YH, Lee YC, Jung SH, Li Z, Sui X, Wu J, Li X, Zhou T, Toth J, Baran A, Kori A, Chennamsetty SS, Safwan M, Alex V, Lyu X, Cheng L, Chu Q, Li P, Ji X, Zhang S, Shen Y, Dai L, Saha O, Sathish R, Melo T, Araújo T, Harangi B, Sheng B, Fang R, Sheet D, Hajdu A, Zheng Y, Mendonça AM, Zhang S, Campilho A, Zheng B, Shen D, Giancardo L, Quellec G, Mériaudeau F. IDRiD: Diabetic Retinopathy - Segmentation and Grading Challenge. Med Image Anal 2019; 59:101561. [PMID: 31671320 DOI: 10.1016/j.media.2019.101561] [Citation(s) in RCA: 86] [Impact Index Per Article: 14.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2019] [Revised: 09/09/2019] [Accepted: 09/16/2019] [Indexed: 02/07/2023]
Abstract
Diabetic Retinopathy (DR) is the most common cause of avoidable vision loss, predominantly affecting the working-age population across the globe. Screening for DR, coupled with timely consultation and treatment, is a globally trusted policy to avoid vision loss. However, implementation of DR screening programs is challenging due to the scarcity of medical professionals able to screen a growing global diabetic population at risk for DR. Computer-aided disease diagnosis in retinal image analysis could provide a sustainable approach for such a large-scale screening effort. The recent scientific advances in computing capacity and machine learning approaches provide an avenue for biomedical scientists to reach this goal. Aiming to advance the state-of-the-art in automatic DR diagnosis, a grand challenge on "Diabetic Retinopathy - Segmentation and Grading" was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI - 2018). In this paper, we report the set-up and results of this challenge, which is primarily based on the Indian Diabetic Retinopathy Image Dataset (IDRiD). There were three principal sub-challenges: lesion segmentation, disease severity grading, and localization and segmentation of retinal landmarks. The multiple tasks in this challenge allow the generalizability of algorithms to be tested, which is what distinguishes it from existing challenges. The challenge received a positive response from the scientific community, with 148 submissions from 495 registrations effectively entered. This paper outlines the challenge, its organization, the dataset used, the evaluation methods and the results of the top-performing participating solutions. The top-performing approaches utilized a blend of clinical information, data augmentation, and an ensemble of models. These findings have the potential to enable new developments in retinal image analysis and image-based DR screening in particular.
Collapse
Affiliation(s)
- Prasanna Porwal
- Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India; School of Biomedical Informatics, University of Texas Health Science Center at Houston, USA.
| | - Samiksha Pachade
- Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India; School of Biomedical Informatics, University of Texas Health Science Center at Houston, USA
| | - Manesh Kokare
- Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India
| | | | | | | | - Lihong Liu
- Ping An Technology (Shenzhen) Co.,Ltd, China
| | | | - Xinhui Liu
- Ping An Technology (Shenzhen) Co.,Ltd, China
| | | | - TianBo Wu
- Ping An Technology (Shenzhen) Co.,Ltd, China
| | - Jing Xiao
- Ping An Technology (Shenzhen) Co.,Ltd, China
| | | | | | - Yunzhi Wang
- School of Electrical and Computer Engineering, University of Oklahoma, USA
| | - Gopichandh Danala
- School of Electrical and Computer Engineering, University of Oklahoma, USA
| | - Linsheng He
- School of Electrical and Computer Engineering, University of Oklahoma, USA
| | - Yoon Ho Choi
- Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Seoul, Republic of Korea
| | - Yeong Chan Lee
- Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Seoul, Republic of Korea
| | - Sang-Hyuk Jung
- Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Seoul, Republic of Korea
| | - Zhongyu Li
- Department of Computer Science, University of North Carolina at Charlotte, USA
| | - Xiaodan Sui
- School of Information Science and Engineering, Shandong Normal University, China
| | - Junyan Wu
- Cleerly Inc., New York, United States
| | | | - Ting Zhou
- University at Buffalo, New York, United States
| | - Janos Toth
- University of Debrecen, Faculty of Informatics 4002 Debrecen, POB 400, Hungary
| | - Agnes Baran
- University of Debrecen, Faculty of Informatics 4002 Debrecen, POB 400, Hungary
| | | | | | | | | | - Xingzheng Lyu
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China; Machine Learning for Bioimage Analysis Group, Bioinformatics Institute, A*STAR, Singapore
| | - Li Cheng
- Machine Learning for Bioimage Analysis Group, Bioinformatics Institute, A*STAR, Singapore; Department of Electric and Computer Engineering, University of Alberta, Canada
| | - Qinhao Chu
- School of Computing, National University of Singapore, Singapore
| | - Pengcheng Li
- School of Computing, National University of Singapore, Singapore
| | - Xin Ji
- Beijing Shanggong Medical Technology Co., Ltd., China
| | - Sanyuan Zhang
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
| | - Yaxin Shen
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
| | - Ling Dai
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
| | | | | | - Tânia Melo
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal
| | - Teresa Araújo
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; FEUP - Faculty of Engineering of the University of Porto, Porto, Portugal
| | - Balazs Harangi
- University of Debrecen, Faculty of Informatics 4002 Debrecen, POB 400, Hungary
| | - Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
| | - Ruogu Fang
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, USA
| | | | - Andras Hajdu
- University of Debrecen, Faculty of Informatics 4002 Debrecen, POB 400, Hungary
| | - Yuanjie Zheng
- School of Information Science and Engineering, Shandong Normal University, China
| | - Ana Maria Mendonça
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; FEUP - Faculty of Engineering of the University of Porto, Porto, Portugal
| | - Shaoting Zhang
- Department of Computer Science, University of North Carolina at Charlotte, USA
| | - Aurélio Campilho
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; FEUP - Faculty of Engineering of the University of Porto, Porto, Portugal
| | - Bin Zheng
- School of Electrical and Computer Engineering, University of Oklahoma, USA
| | - Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
| | - Luca Giancardo
- School of Biomedical Informatics, University of Texas Health Science Center at Houston, USA
| | | | - Fabrice Mériaudeau
- Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Malaysia; ImViA/IFTIM, Université de Bourgogne, Dijon, France
| |
Collapse
|
39
|
Playout C, Duval R, Cheriet F. A Novel Weakly Supervised Multitask Architecture for Retinal Lesions Segmentation on Fundus Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:2434-2444. [PMID: 30908197 DOI: 10.1109/tmi.2019.2906319] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Obtaining the complete segmentation map of retinal lesions is the first step toward an automated diagnosis tool for retinopathy that is interpretable in its decision-making. However, the limited availability of ground truth lesion detection maps at a pixel level restricts the ability of deep segmentation neural networks to generalize over large databases. In this paper, we propose a novel approach for training a convolutional multi-task architecture with supervised learning and reinforcing it with weakly supervised learning. The architecture is trained simultaneously for three tasks: segmentation of red lesions, segmentation of bright lesions, and lesion detection. In addition, we propose and discuss the advantages of a new preprocessing method that guarantees color consistency between the raw image and its enhanced version. Our complete system produces segmentations of both red and bright lesions. The method is validated at the pixel level and per image using four databases and a cross-validation strategy. When evaluated on the task of screening for the presence or absence of lesions on the Messidor image set, the proposed method achieves an area under the ROC curve of 0.839, comparable with the state of the art.
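A minimal PyTorch sketch of a weighted multi-task loss that combines two pixel-wise segmentation objectives with an image-level detection objective, in the spirit of the architecture described above; the task weights and tensor shapes are illustrative assumptions, not the published configuration.

    import torch
    import torch.nn.functional as F

    def multitask_loss(red_logits, bright_logits, det_logits,
                       red_mask, bright_mask, det_label,
                       w_red=1.0, w_bright=1.0, w_det=0.5):
        """Weighted sum of two segmentation losses and a detection loss.
        The task weights here are illustrative, not the published values."""
        loss_red = F.binary_cross_entropy_with_logits(red_logits, red_mask)
        loss_bright = F.binary_cross_entropy_with_logits(bright_logits, bright_mask)
        loss_det = F.binary_cross_entropy_with_logits(det_logits, det_label)
        return w_red * loss_red + w_bright * loss_bright + w_det * loss_det

    # toy tensors standing in for network outputs and ground truth
    red_logits = torch.randn(2, 1, 64, 64)
    bright_logits = torch.randn(2, 1, 64, 64)
    det_logits = torch.randn(2, 1)
    red_mask = torch.randint(0, 2, (2, 1, 64, 64)).float()
    bright_mask = torch.randint(0, 2, (2, 1, 64, 64)).float()
    det_label = torch.randint(0, 2, (2, 1)).float()
    print(multitask_loss(red_logits, bright_logits, det_logits,
                         red_mask, bright_mask, det_label).item())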
Collapse
|
40
|
Manjaramkar A, Kokare M. Statistical Geometrical Features for Microaneurysm Detection. J Digit Imaging 2019; 31:224-234. [PMID: 28785874 DOI: 10.1007/s10278-017-0008-0] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/02/2023] Open
Abstract
Automated microaneurysm (MA) detection is still an open challenge because of the small size of MAs and their similarity to blood vessels. In this paper, we present a novel method that is simple, efficient, and real-time for segmenting and detecting MAs in color fundus images (CFI). To do this, a novel set of features based on statistics of the geometrical properties of connected regions, which can easily discriminate lesion and non-lesion pixels, is used. For large-scale evaluation, the proposed method is validated on the DIARETDB1, ROC, STARE, and MESSIDOR datasets. It proves robust with respect to different image characteristics and camera settings. The best performance was achieved in the per-image evaluation on the DIARETDB1 dataset, with a sensitivity of 88.09% at 92.65% specificity, which is quite encouraging for clinical use.
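A minimal sketch of computing geometric descriptors for connected candidate regions with scikit-image; the specific properties listed are illustrative stand-ins for the paper's statistical geometrical feature set.

    import numpy as np
    from skimage.measure import label, regionprops

    def geometric_features(binary_candidates):
        """Per-region geometric descriptors for microaneurysm candidates.
        The chosen properties are illustrative, not the paper's exact features."""
        feats = []
        for region in regionprops(label(binary_candidates)):
            feats.append([
                region.area,
                region.eccentricity,
                region.solidity,
                region.extent,
                4 * np.pi * region.area / (region.perimeter ** 2 + 1e-8),  # circularity
            ])
        return np.asarray(feats)

    # toy binary candidate map with two blobs
    mask = np.zeros((64, 64), dtype=bool)
    mask[10:14, 10:14] = True
    mask[40:43, 30:36] = True
    print(geometric_features(mask))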
Collapse
Affiliation(s)
- Arati Manjaramkar
- Department of Information Technology, SGGS Institute of Engineering & Technology, Nanded, Maharashtra, 431606, India.
| | - Manesh Kokare
- Department of Electronics & Telecommunication, SGGS Institute of Engineering & Technology, Nanded, Maharashtra, 431606, India
| |
Collapse
|
41
|
Bellemo V, Lim G, Rim TH, Tan GSW, Cheung CY, Sadda S, He MG, Tufail A, Lee ML, Hsu W, Ting DSW. Artificial Intelligence Screening for Diabetic Retinopathy: the Real-World Emerging Application. Curr Diab Rep 2019; 19:72. [PMID: 31367962 DOI: 10.1007/s11892-019-1189-3] [Citation(s) in RCA: 81] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
PURPOSE OF REVIEW This paper systematically reviews the recent progress in diabetic retinopathy screening. It provides an integrated overview of the current state of knowledge of emerging techniques using artificial intelligence integration in national screening programs around the world. Existing methodological approaches and research insights are evaluated, and an understanding of existing gaps and future directions is developed. RECENT FINDINGS Over the past decades, artificial intelligence has emerged into the scientific consciousness with breakthroughs that are sparking increasing interest among the computer science and medical communities. Specifically, machine learning and deep learning (a subtype of machine learning) applications of artificial intelligence are spreading into areas that were previously thought to be the purview of humans alone, and a number of applications in the ophthalmology field have been explored. Multiple studies around the world have demonstrated that such systems can perform on par with clinical experts, with robust diagnostic performance in diabetic retinopathy diagnosis. However, only a few tools have been evaluated in prospective clinical studies. Given the rapid and impressive progress of artificial intelligence technologies, the implementation of deep learning systems into routinely practiced diabetic retinopathy screening could represent a cost-effective alternative to help reduce the incidence of preventable blindness around the world.
Collapse
Affiliation(s)
- Valentina Bellemo
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
| | - Gilbert Lim
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- School of Computing, National University of Singapore, Singapore, Singapore
| | - Tyler Hyungtaek Rim
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Duke-NUS Medical School, Singapore, Singapore
| | - Gavin S W Tan
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Duke-NUS Medical School, Singapore, Singapore
| | - Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Shatin, Hong Kong
| | - SriniVas Sadda
- Doheny Eye Institute, University of California, Los Angeles, CA, USA
| | - Ming-Guang He
- Center of Eye Research Australia, Melbourne, Victoria, Australia
| | - Adnan Tufail
- Moorfields Eye Hospital & Institute of Ophthalmology, UCL, London, UK
| | - Mong Li Lee
- School of Computing, National University of Singapore, Singapore, Singapore
| | - Wynne Hsu
- School of Computing, National University of Singapore, Singapore, Singapore
| | - Daniel Shu Wei Ting
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore.
- Duke-NUS Medical School, Singapore, Singapore.
| |
Collapse
|
42
|
Randive SN, Senapati RK, Rahulkar AD. A review on computer-aided recent developments for automatic detection of diabetic retinopathy. J Med Eng Technol 2019; 43:87-99. [PMID: 31198073 DOI: 10.1080/03091902.2019.1576790] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
Abstract
Diabetic retinopathy is a serious microvascular disorder that can result in vision loss and blindness. It damages the retinal blood vessels and the light-sensitive inner layer of the eye. Manual inspection of retinal fundus images for diabetic retinopathy, to detect morphological abnormalities such as microaneurysms (MAs), exudates (EXs), haemorrhages (HMs), and intraretinal microvascular abnormalities (IRMA), is a difficult and time-consuming process. To address this, regular follow-up screening and early automatic diabetic retinopathy detection are necessary. This paper discusses various methods for automatic retinopathy detection and classification into different grades based on severity levels. In addition, retinal blood vessel detection techniques are discussed for the detection and diagnosis of proliferative diabetic retinopathy. Furthermore, the paper presents in detail a systematic review based on various publicly available databases collected from different medical sources. In the survey, a meta-analysis of several methods for diabetic feature extraction, segmentation, and various types of classifiers is used to evaluate system performance metrics for the diagnosis of DR. This survey will be helpful for practitioners and researchers who want to focus on building diagnostic systems that are more powerful in real-life settings.
Collapse
Affiliation(s)
- Santosh Nagnath Randive
- Department of Electronics & Communication Engineering, Koneru Lakshmaiah Education Foundation, Green Fields, Vaddeswaram, Guntur, Andhra Pradesh, India
| | - Ranjan K Senapati
- Department of Electronics & Communication Engineering, Koneru Lakshmaiah Education Foundation, Green Fields, Vaddeswaram, Guntur, Andhra Pradesh, India
| | - Amol D Rahulkar
- Department of Electrical and Electronics Engineering, National Institute of Technology, Goa, India
| |
Collapse
|
43
|
Derwin DJ, Selvi ST, Singh OJ. Secondary Observer System for Detection of Microaneurysms in Fundus Images Using Texture Descriptors. J Digit Imaging 2019; 33:159-167. [PMID: 31144148 DOI: 10.1007/s10278-019-00225-z] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/02/2023] Open
Abstract
The worldwide increase in diabetic retinopathy and diabetes mellitus patients poses many challenges to ophthalmologists in screening for diabetic retinopathy. Different signs of diabetic retinopathy can be identified in retinal images taken through fundus photography. Among these, microaneurysms, which mark the earliest stage of the disease, play a vital role in diabetic retinopathy patients. To assist ophthalmologists and to avoid vision loss among diabetic retinopathy patients, a computer-aided diagnosis that can serve as a second opinion during screening is essential. To this end, a new methodology is proposed to detect microaneurysms and non-microaneurysms through the stages of image pre-processing, candidate extraction, feature extraction, and classification. The feature extractor, a generalized rotational invariant local binary pattern, extracts texture-based features of microaneurysms. As a result, the proposed system achieved a free-response receiver operating characteristic score of 0.421 on the Retinopathy Online Challenge database.
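A minimal sketch of a rotation-invariant local binary pattern (LBP) texture descriptor, using scikit-image's uniform LBP as a stand-in for the paper's generalized rotational invariant LBP; the radius and number of sampling points are assumptions.

    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_histogram(patch, radius=2, n_points=16):
        """Rotation-invariant uniform LBP histogram for a candidate patch.
        scikit-image's 'uniform' LBP is a stand-in for the paper's descriptor."""
        codes = local_binary_pattern(patch, n_points, radius, method="uniform")
        hist, _ = np.histogram(codes, bins=n_points + 2,
                               range=(0, n_points + 2), density=True)
        return hist

    # toy grayscale candidate patch
    patch = (np.random.rand(25, 25) * 255).astype(np.uint8)
    print(lbp_histogram(patch).round(3))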
Collapse
Affiliation(s)
- D Jeba Derwin
- Department of ECE, Arunachala College of Engineering for Women, Kanyakumari, Tamilnadu, India.
| | - S Tami Selvi
- Department of ECE, National Engineering College, Tuticorin, Tamilnadu, India
| | - O Jeba Singh
- Department of EEE, Arunachala College of Engineering for Women, Kanyakumari, Tamilnadu, India
| |
Collapse
|
44
|
Eftekhari N, Pourreza HR, Masoudi M, Ghiasi-Shirazi K, Saeedi E. Microaneurysm detection in fundus images using a two-step convolutional neural network. Biomed Eng Online 2019; 18:67. [PMID: 31142335 PMCID: PMC6542103 DOI: 10.1186/s12938-019-0675-9] [Citation(s) in RCA: 46] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2018] [Accepted: 04/30/2019] [Indexed: 11/29/2022] Open
Abstract
Background and objectives Diabetic retinopathy (DR) is the leading cause of blindness worldwide, and therefore its early detection is important in order to reduce disease-related eye injuries. DR is diagnosed by inspecting fundus images. Since microaneurysms (MA) are one of the main symptoms of the disease, distinguishing this complication within the fundus images facilitates early DR detection. In this paper, an automatic analysis of retinal images using a convolutional neural network (CNN) is presented. Methods Our method incorporates a novel technique utilizing a two-stage process with two online datasets, which results in accurate detection while solving the data imbalance problem and decreasing training time in comparison with previous studies. We have implemented our proposed CNNs using the Keras library. Results In order to evaluate our proposed method, an experiment was conducted on two standard publicly available datasets, i.e., the Retinopathy Online Challenge dataset and the E-Ophtha-MA dataset. Our results demonstrated a promising sensitivity of about 0.8 at an average of >6 false positives per image, which is competitive with state-of-the-art approaches. Conclusion Our method indicates significant improvement in MA detection using retinal fundus images for monitoring diabetic retinopathy.
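A minimal Keras sketch of a patch-level CNN classifier of the kind used in such two-stage pipelines; the architecture, patch size and toy data are assumptions, not the authors' published network.

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    # Minimal patch classifier standing in for one stage of the two-stage pipeline.
    model = keras.Sequential([
        layers.Input(shape=(25, 25, 3)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),   # P(microaneurysm)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # toy patches and labels; real training uses candidate patches from fundus images
    x = np.random.rand(128, 25, 25, 3).astype("float32")
    y = np.random.randint(0, 2, size=(128, 1))
    model.fit(x, y, epochs=1, batch_size=32, verbose=0)
    print(model.predict(x[:4], verbose=0).ravel())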
Collapse
Affiliation(s)
- Noushin Eftekhari
- Machine Vision Lab., Computer Engineering Department, Faculty of Engineering, Ferdowsi University of Mashhad (FUM), Azadi Sqr., Mashhad, Iran
| | - Hamid-Reza Pourreza
- Machine Vision Lab., Computer Engineering Department, Faculty of Engineering, Ferdowsi University of Mashhad (FUM), Azadi Sqr., Mashhad, Iran.
| | - Mojtaba Masoudi
- Machine Vision Lab., Computer Engineering Department, Faculty of Engineering, Ferdowsi University of Mashhad (FUM), Azadi Sqr., Mashhad, Iran
| | - Kamaledin Ghiasi-Shirazi
- Machine Vision Lab., Computer Engineering Department, Faculty of Engineering, Ferdowsi University of Mashhad (FUM), Azadi Sqr., Mashhad, Iran
| | - Ehsan Saeedi
- Machine Vision Lab., Computer Engineering Department, Faculty of Engineering, Ferdowsi University of Mashhad (FUM), Azadi Sqr., Mashhad, Iran
| |
Collapse
|
45
|
Detection of microaneurysms using ant colony algorithm in the early diagnosis of diabetic retinopathy. Med Hypotheses 2019; 129:109242. [PMID: 31371092 DOI: 10.1016/j.mehy.2019.109242] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2019] [Revised: 05/11/2019] [Accepted: 05/19/2019] [Indexed: 11/20/2022]
Abstract
Microaneurysms are lesions in the shape of small circular dilations that result from thinning of peripheral retinal blood vessels due to diabetes and increasing intra-retinal blood pressure. Because they are considered the most important clinical finding in the diagnosis of diabetic retinopathy, accurate detection of these lesions is of utmost importance for the early diagnosis of diabetic retinopathy. The present study aims to detect, accurately, effectively and automatically, microaneurysms that are difficult to identify in color fundus images at an early stage. To this aim, the ant colony algorithm, an important optimization method, was used instead of conventional image processing techniques. First, the retinal vascular structure was extracted from color fundus images in the Messidor and DiaretDB1 data sets. Afterwards, the segmentation of microaneurysms was carried out effectively using the ant colony algorithm. The same procedure was also applied to five different image processing and clustering algorithms (watershed, random walker, k-means, maximum entropy and region growing) in order to compare the performance of the proposed method with other methods. Microaneurysm images manually detected by a specialist eye doctor were used to measure the performance of the above-mentioned methods. The similarities between automatically and manually segmented microaneurysms were assessed using Dice and Jaccard similarity index values. The Dice index values obtained in the study vary between 0.52 and 0.98 for maximum entropy, 0.55 and 0.88 for watershed, 0.75 and 0.86 for region growing, 0.55 and 0.78 for k-means, 0.66 and 0.83 for random walker, and 0.81 and 0.9 for ant colony. Similar performance values were also obtained for the Jaccard index. The results show that the performance of conventional microaneurysm segmentation varies with image quality. In contrast, the ant colony based method proposed in this paper displays a more stable and higher performance irrespective of image contrast. Therefore, the proposed method successfully detects microaneurysms even in low-quality images, helping specialists diagnose them more easily.
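The Dice and Jaccard similarity indices used above have straightforward definitions; a minimal sketch on toy binary masks:

    import numpy as np

    def dice(a, b):
        """Dice similarity coefficient between two binary masks."""
        a, b = a.astype(bool), b.astype(bool)
        inter = np.logical_and(a, b).sum()
        return 2.0 * inter / (a.sum() + b.sum() + 1e-8)

    def jaccard(a, b):
        """Jaccard index (intersection over union) between two binary masks."""
        a, b = a.astype(bool), b.astype(bool)
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / (union + 1e-8)

    # toy automatic vs. manual microaneurysm masks
    auto = np.zeros((32, 32), dtype=bool); auto[8:16, 8:16] = True
    manual = np.zeros((32, 32), dtype=bool); manual[10:18, 10:18] = True
    print(round(dice(auto, manual), 3), round(jaccard(auto, manual), 3))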
Collapse
|
46
|
Joshi S, Karule PT. Mathematical morphology for microaneurysm detection in fundus images. Eur J Ophthalmol 2019; 30:1135-1142. [DOI: 10.1177/1120672119843021] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Aim: Fundus image analysis is the basis for a better understanding of retinal diseases caused by diabetes. Detection of early markers such as microaneurysms in fundus images, combined with treatment, helps prevent further complications of diabetic retinopathy and the associated risk of sight loss. Methods: The proposed algorithm consists of three modules: (1) image enhancement through morphological processing; (2) extraction and removal of red structures such as blood vessels, preceded by detection and removal of bright artefacts; (3) selection of true microaneurysm candidates among the remaining structures based on an extracted feature set. Results: The proposed strategy is successfully evaluated on two publicly available databases containing both normal and pathological images. A sensitivity of 89.22%, specificity of 91% and accuracy of 92% were achieved for the detection of microaneurysms on Diaretdb1 database images. On the e-ophtha database, the algorithm achieved a sensitivity of 83% and a specificity of 82% for microaneurysm detection. Conclusion: In an automated detection system, the number of successfully detected microaneurysms correlates with the stage of retinal disease and supports its early diagnosis. The results for true microaneurysm detection indicate that the method is a useful tool for screening colour fundus images, saving time in counting microaneurysms when following the Diabetic Retinopathy Grading Criteria.
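A minimal sketch of the morphological enhancement step for microaneurysm candidates, using a black top-hat on the green channel; the structuring-element radius, threshold and toy image are illustrative assumptions, not the paper's parameters.

    import numpy as np
    from skimage.morphology import disk, black_tophat

    def enhance_microaneurysm_candidates(green_channel, se_radius=5, thresh=0.05):
        """Black top-hat enhances small dark structures (MA candidates) that fit
        inside the structuring element; the radius and threshold are illustrative."""
        enhanced = black_tophat(green_channel, disk(se_radius))
        return enhanced > thresh          # binary candidate map

    # toy green channel with a small dark spot on a brighter background
    g = np.full((64, 64), 0.6)
    g[30:33, 30:33] = 0.3                 # simulated microaneurysm
    candidates = enhance_microaneurysm_candidates(g)
    print("candidate pixels:", int(candidates.sum()))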
Collapse
Affiliation(s)
- Shilpa Joshi
- Department of Electronics Engineering, YCCE, Nagpur University, Nagpur, India
| | - PT Karule
- Department of Electronics Engineering, YCCE, Nagpur University, Nagpur, India
| |
Collapse
|
47
|
Romero-Oraá R, Jiménez-García J, García M, López-Gálvez MI, Oraá-Pérez J, Hornero R. Entropy Rate Superpixel Classification for Automatic Red Lesion Detection in Fundus Images. ENTROPY 2019; 21:e21040417. [PMID: 33267131 PMCID: PMC7514906 DOI: 10.3390/e21040417] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/28/2019] [Revised: 04/17/2019] [Accepted: 04/17/2019] [Indexed: 12/26/2022]
Abstract
Diabetic retinopathy (DR) is the main cause of blindness in the working-age population in developed countries. Digital color fundus images can be analyzed to detect lesions for large-scale screening. Automated systems can therefore be helpful in the diagnosis of this disease. The aim of this study was to develop a method to automatically detect red lesions (RLs) in retinal images, including hemorrhages and microaneurysms. These signs are the earliest indicators of DR. Firstly, we performed a novel preprocessing stage to normalize the inter-image and intra-image appearance and enhance the retinal structures. Secondly, the Entropy Rate Superpixel method was used to segment the potential RL candidates. Then, we reduced superpixel candidates by combining inaccurately fragmented regions within structures. Finally, we classified the superpixels using a multilayer perceptron neural network. The database used contained 564 fundus images and was randomly divided into a training set and a test set. Results on the test set were measured using two different criteria. With a pixel-based criterion, we obtained a sensitivity of 81.43% and a positive predictive value of 86.59%. Using an image-based criterion, we reached 84.04% sensitivity, 85.00% specificity and 84.45% accuracy. The algorithm was also evaluated on the DiaretDB1 database. The proposed method could help specialists in the detection of RLs in diabetic patients.
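A minimal sketch of superpixel candidate classification; SLIC superpixels and mean-colour features stand in for the Entropy Rate Superpixel segmentation and the paper's feature set, and the toy labels are assumptions.

    import numpy as np
    from skimage.segmentation import slic
    from sklearn.neural_network import MLPClassifier

    # Toy RGB fundus-like image; SLIC stands in for entropy-rate superpixels here.
    img = np.random.rand(128, 128, 3)
    segments = slic(img, n_segments=200, compactness=10, start_label=0)

    # One mean-colour feature vector per superpixel (illustrative features only).
    n_sp = segments.max() + 1
    features = np.array([img[segments == s].mean(axis=0) for s in range(n_sp)])

    # Toy labels (1 = red-lesion superpixel); real labels come from reference masks.
    labels = np.random.randint(0, 2, size=n_sp)
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300).fit(features, labels)
    print("predicted lesion superpixels:", int(clf.predict(features).sum()))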
Collapse
Affiliation(s)
- Roberto Romero-Oraá
- Biomedical Engineering Group, E.T.S.I. de Telecomunicación, University of Valladolid, 47011 Valladolid, Spain
- Correspondence: ; Tel.: +34-983-425-589
| | - Jorge Jiménez-García
- Biomedical Engineering Group, E.T.S.I. de Telecomunicación, University of Valladolid, 47011 Valladolid, Spain
| | - María García
- Biomedical Engineering Group, E.T.S.I. de Telecomunicación, University of Valladolid, 47011 Valladolid, Spain
| | - María I. López-Gálvez
- Biomedical Engineering Group, E.T.S.I. de Telecomunicación, University of Valladolid, 47011 Valladolid, Spain
- Department of Ophthalmology, Hospital Clínico Universitario de Valladolid, 47003 Valladolid, Spain
- Instituto de Oftalmobiología Aplicada (IOBA), University of Valladolid, 47011 Valladolid, Spain
| | - Javier Oraá-Pérez
- Department of Ophthalmology, Hospital Clínico Universitario de Valladolid, 47003 Valladolid, Spain
| | - Roberto Hornero
- Biomedical Engineering Group, E.T.S.I. de Telecomunicación, University of Valladolid, 47011 Valladolid, Spain
- Instituto de Investigación en Matemáticas (IMUVA), University of Valladolid, 47011 Valladolid, Spain
- Instituto de Neurociencias de Castilla y León (INCYL), University of Salamanca, 37007 Salamanca, Spain
| |
Collapse
|
48
|
Hashemzadeh M, Adlpour Azar B. Retinal blood vessel extraction employing effective image features and combination of supervised and unsupervised machine learning methods. Artif Intell Med 2019; 95:1-15. [PMID: 30904129 DOI: 10.1016/j.artmed.2019.03.001] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2018] [Revised: 12/08/2018] [Accepted: 03/01/2019] [Indexed: 11/30/2022]
Abstract
In medicine, retinal vessel analysis of fundus images is a prominent task for the screening and diagnosis of various ophthalmological and cardiovascular diseases. In this research, a method is proposed for extracting the retinal blood vessels using a set of effective image features and a combination of supervised and unsupervised machine learning techniques. In addition to the common features used for extracting blood vessels, three strong features with a significant influence on the accuracy of vessel extraction are utilized. The selected combination of different types of individually effective features results in rich local information with better discrimination between vessel and non-vessel pixels. The proposed method first extracts the thick and clear vessels in an unsupervised manner, and then extracts the thin vessels in a supervised way. The goal of combining the supervised and unsupervised methods is to deal with the high intra-class variance of image features calculated from various vessel pixels. The proposed method is evaluated on three publicly available databases, DRIVE, STARE and CHASE_DB1. The obtained results (DRIVE: Acc = 0.9531, AUC = 0.9752; STARE: Acc = 0.9691, AUC = 0.9853; CHASE_DB1: Acc = 0.9623, AUC = 0.9789) demonstrate the better performance of the proposed method compared to the state-of-the-art methods.
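A minimal sketch of the combined unsupervised/supervised idea: cluster pixel intensities to capture thick, clearly visible vessels, then classify the remaining pixels with a supervised model for thin vessels. The feature choices, the use of k-means and random forests, and the toy labels are assumptions, not the paper's exact design.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier

    # Toy green-channel image standing in for a fundus photograph.
    g = np.random.rand(64, 64)

    # Unsupervised stage: cluster pixel intensities; the darkest cluster is taken
    # as thick, clearly visible vessels (vessels appear dark on the green channel).
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(g.reshape(-1, 1))
    dark_cluster = np.argmin(km.cluster_centers_.ravel())
    thick_vessels = (km.labels_ == dark_cluster).reshape(g.shape)

    # Supervised stage: pixel-wise features (intensity plus smoothed intensities)
    # feed a classifier trained on labelled pixels to recover thin vessels.
    features = np.stack([g, gaussian_filter(g, 1), gaussian_filter(g, 2)], axis=-1)
    X = features.reshape(-1, 3)
    y = np.random.randint(0, 2, size=X.shape[0])   # toy labels; real ones come from DRIVE/STARE masks
    clf = RandomForestClassifier(n_estimators=20, random_state=0).fit(X, y)
    thin_vessels = clf.predict(X).reshape(g.shape).astype(bool)

    vessel_map = thick_vessels | thin_vessels
    print("vessel pixels:", int(vessel_map.sum()))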
Collapse
Affiliation(s)
- Mahdi Hashemzadeh
- Faculty of Information Technology and Computer Engineering, Azarbaijan Shahid Madani University, Tabriz-Azarshahr Road, 5375171379, Tabriz, Iran.
| | - Baharak Adlpour Azar
- Department of Computer Engineering, Tabriz Branch, Azad University, Tabriz, Iran.
| |
Collapse
|
49
|
|
50
|
Automated geographic atrophy segmentation for SD-OCT images based on two-stage learning model. Comput Biol Med 2019; 105:102-111. [DOI: 10.1016/j.compbiomed.2018.12.013] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2018] [Revised: 12/27/2018] [Accepted: 12/27/2018] [Indexed: 01/19/2023]
|