51
Wagner SK, Liefers B, Radia M, Zhang G, Struyven R, Faes L, Than J, Balal S, Hennings C, Kilduff C, Pooprasert P, Glinton S, Arunakirinathan M, Giannakis P, Braimah IZ, Ahmed ISH, Al-Feky M, Khalid H, Ferraz D, Vieira J, Jorge R, Husain S, Ravelo J, Hinds AM, Henderson R, Patel HI, Ostmo S, Campbell JP, Pontikos N, Patel PJ, Keane PA, Adams G, Balaskas K. Development and international validation of custom-engineered and code-free deep-learning models for detection of plus disease in retinopathy of prematurity: a retrospective study. Lancet Digit Health 2023; 5:e340-e349. PMID: 37088692; PMCID: PMC10279502; DOI: 10.1016/s2589-7500(23)00050-x.
Abstract
BACKGROUND Retinopathy of prematurity (ROP), a leading cause of childhood blindness, is diagnosed through interval screening by paediatric ophthalmologists. However, improved survival of premature neonates coupled with a scarcity of available experts has raised concerns about the sustainability of this approach. We aimed to develop bespoke and code-free deep learning-based classifiers for plus disease, a hallmark of ROP, in an ethnically diverse population in London, UK, and externally validate them in ethnically, geographically, and socioeconomically diverse populations in four countries and three continents. Code-free deep learning is not reliant on the availability of expertly trained data scientists, and is thus of particular potential benefit for low-resource health-care settings. METHODS This retrospective cohort study used retinal images from 1370 neonates admitted to a neonatal unit at Homerton University Hospital NHS Foundation Trust, London, UK, between 2008 and 2018. Images were acquired using a Retcam Version 2 device (Natus Medical, Pleasanton, CA, USA) on all babies who were either born at less than 32 weeks gestational age or had a birthweight of less than 1501 g. Each image was graded by two junior ophthalmologists, with disagreements adjudicated by a senior paediatric ophthalmologist. Bespoke and code-free deep learning (CFDL) models were developed for the discrimination of healthy, pre-plus disease, and plus disease. Performance was assessed internally on 200 images with the majority vote of three senior paediatric ophthalmologists as the reference standard. External validation was performed on 338 retinal images from four separate datasets from the USA, Brazil, and Egypt, with images derived from the Retcam and the 3nethra neo device (Forus Health, Bengaluru, India). FINDINGS Of the 7414 retinal images in the original dataset, 6141 images were used in the final development dataset.
For the discrimination of healthy versus pre-plus or plus disease, the bespoke model had an area under the curve (AUC) of 0·986 (95% CI 0·973-0·996) and the CFDL model had an AUC of 0·989 (0·979-0·997) on the internal test set. Both models generalised well to external validation test sets acquired using the Retcam for discriminating healthy from pre-plus or plus disease (bespoke AUC range 0·975-1·000; CFDL AUC range 0·969-0·995). The CFDL model was inferior to the bespoke model at discriminating pre-plus disease from healthy or plus disease in the USA dataset (CFDL 0·808 [95% CI 0·671-0·909] vs bespoke 0·942 [0·892-0·982]; p=0·0070). Performance was also reduced when testing on images from the 3nethra neo imaging device (CFDL 0·865 [0·742-0·965] and bespoke 0·891 [0·783-0·977]). INTERPRETATION Both bespoke and CFDL models conferred performance similar to that of senior paediatric ophthalmologists in discriminating healthy retinal images from those with features of pre-plus or plus disease; however, CFDL models might generalise less well on minority classes. Care should be taken when testing on data acquired using an imaging device other than the one used for the development dataset. Our study justifies further validation of plus disease classifiers in ROP screening and supports a potential role for code-free approaches in helping to prevent blindness in vulnerable neonates. FUNDING National Institute for Health Research Biomedical Research Centre based at Moorfields Eye Hospital NHS Foundation Trust and the University College London Institute of Ophthalmology. TRANSLATIONS For the Portuguese and Arabic translations of the abstract see the Supplementary Materials section.
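The AUC figures above are threshold-free measures of discrimination. As a rough illustration (not the study's code, and with made-up scores), a binary AUC equals the Mann-Whitney probability that a randomly chosen diseased eye scores higher than a randomly chosen healthy one:

```python
def auc_mann_whitney(scores_pos, scores_neg):
    """AUC = P(score_pos > score_neg) + 0.5 * P(tie),
    equivalent to the area under the ROC curve."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

# Hypothetical model outputs: probabilities for eyes with and without plus disease.
diseased = [0.92, 0.85, 0.70, 0.64]
healthy = [0.10, 0.35, 0.68, 0.20]
print(round(auc_mann_whitney(diseased, healthy), 3))  # → 0.938
```

In practice the 95% CIs quoted above would typically be obtained by bootstrapping such image-level scores.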
Affiliation(s)
- Siegfried K Wagner
- NIHR Moorfields Biomedical Research Centre, London, UK; Institute of Ophthalmology, University College London, London, UK; Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Bart Liefers
- NIHR Moorfields Biomedical Research Centre, London, UK
- Meera Radia
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Gongyu Zhang
- NIHR Moorfields Biomedical Research Centre, London, UK
- Robbert Struyven
- NIHR Moorfields Biomedical Research Centre, London, UK; Institute of Ophthalmology, University College London, London, UK; Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Livia Faes
- NIHR Moorfields Biomedical Research Centre, London, UK; Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Jonathan Than
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Shafi Balal
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Periklis Giannakis
- Institute of Health Sciences Education, Queen Mary University of London, London, UK
- Imoro Zeba Braimah
- Lions International Eye Centre, Korle-Bu Teaching Hospital, Accra, Ghana
- Islam S H Ahmed
- Faculty of Medicine, Alexandria University, Alexandria, Egypt; Alexandria University Hospital, Alexandria, Egypt
- Mariam Al-Feky
- Department of Ophthalmology, Ain Shams University Hospitals, Cairo, Egypt; Watany Eye Hospital, Cairo, Egypt
- Hagar Khalid
- Moorfields Eye Hospital NHS Foundation Trust, London, UK; Department of Ophthalmology, Tanta University, Tanta, Egypt
- Daniel Ferraz
- Institute of Ophthalmology, University College London, London, UK; D'Or Institute for Research and Education, São Paulo, Brazil
- Juliana Vieira
- Department of Ophthalmology, Ribeirão Preto Medical School, University of São Paulo, Ribeirão Preto, Brazil
- Rodrigo Jorge
- Department of Ophthalmology, Ribeirão Preto Medical School, University of São Paulo, Ribeirão Preto, Brazil
- Shahid Husain
- The Blizard Institute, Queen Mary University of London, London, UK; Neonatology Department, Homerton University Hospital NHS Foundation Trust, London, UK
- Janette Ravelo
- Neonatology Department, Homerton University Hospital NHS Foundation Trust, London, UK
- Robert Henderson
- UCL Great Ormond Street Institute of Child Health, University College London, London, UK; Clinical and Academic Department of Ophthalmology, Great Ormond Street Hospital for Children, London, UK
- Himanshu I Patel
- Moorfields Eye Hospital NHS Foundation Trust, London, UK; The Royal London Hospital, Barts Health NHS Trust, London, UK
- Susan Ostmo
- Department of Ophthalmology, Oregon Health & Science University, Portland, OR, USA
- J Peter Campbell
- Department of Ophthalmology, Oregon Health & Science University, Portland, OR, USA
- Nikolas Pontikos
- NIHR Moorfields Biomedical Research Centre, London, UK; Institute of Ophthalmology, University College London, London, UK; Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Praveen J Patel
- NIHR Moorfields Biomedical Research Centre, London, UK; Institute of Ophthalmology, University College London, London, UK; Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Pearse A Keane
- NIHR Moorfields Biomedical Research Centre, London, UK; Institute of Ophthalmology, University College London, London, UK; Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Gill Adams
- NIHR Moorfields Biomedical Research Centre, London, UK; Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Konstantinos Balaskas
- NIHR Moorfields Biomedical Research Centre, London, UK; Institute of Ophthalmology, University College London, London, UK; Moorfields Eye Hospital NHS Foundation Trust, London, UK
52
Feng H, Chen J, Zhang Z, Lou Y, Zhang S, Yang W. A bibliometric analysis of artificial intelligence applications in macular edema: exploring research hotspots and Frontiers. Front Cell Dev Biol 2023; 11:1174936. PMID: 37255600; PMCID: PMC10225517; DOI: 10.3389/fcell.2023.1174936.
Abstract
Background: Artificial intelligence (AI) is used in ophthalmological disease screening and diagnostics, medical image diagnostics, and predicting late-disease progression rates. We reviewed all AI publications associated with macular edema (ME) research between 2011 and 2022 and performed modeling, quantitative, and qualitative investigations. Methods: On 1 February 2023, we screened the Web of Science Core Collection for AI applications related to ME, from which 297 studies were identified and analyzed (2011-2022). We collected information on publications, institutions, country/region, keywords, journal name, references, and research hotspots. Literature clustering networks and frontier knowledge bases were investigated using the bibliometrix-BiblioShiny, VOSviewer, and CiteSpace bibliometric platforms. We used the R "bibliometrix" package to synopsize our observations, enumerate keywords, visualize collaboration networks between countries/regions, and generate a topic trends plot. VOSviewer was used to examine cooperation between institutions and identify citation relationships between journals. We used CiteSpace to identify clustering keywords over the timeline and the keywords with the strongest citation bursts. Results: In total, 47 countries published AI studies related to ME; the United States had the highest H-index and thus the greatest influence. China and the United States cooperated most closely of all countries. In all, 613 institutions generated publications; the Medical University of Vienna had the highest number of studies, and this publication record and H-index made it the most influential institution in the ME field.
Reference clusters were also categorized into 10 headings: retinal Optical Coherence Tomography (OCT) fluid detection, convolutional network models, deep learning (DL)-based single-shot predictions, retinal vascular disease, diabetic retinopathy (DR), convolutional neural networks (CNNs), automated macular pathology diagnosis, dry age-related macular degeneration (DARMD), class weight, and advanced DL architecture systems. Frontier keywords were represented by diabetic macular edema (DME) (2021-2022). Conclusion: Our review of the AI-related ME literature was comprehensive, systematic, and objective, and identified future trends and current hotspots. With increased DL outputs, the ME research focus has gradually shifted from manual ME examinations to automatic ME detection and associated symptoms. In this review, we present a comprehensive and dynamic overview of AI in ME and identify future research areas.
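The H-index used above to rank countries and institutions is straightforward to compute from per-publication citation counts; a minimal sketch with hypothetical counts:

```python
def h_index(citations):
    """Largest h such that at least h publications have >= h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one country's AI-in-ME publications.
print(h_index([25, 8, 5, 4, 3, 1, 0]))  # → 4
```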
Affiliation(s)
- Haiwen Feng
- Department of Software Engineering, School of Software, Shenyang University of Technology, Shenyang, Liaoning, China
- Jiaqi Chen
- Department of Software Engineering, School of Software, Shenyang University of Technology, Shenyang, Liaoning, China
- Zhichang Zhang
- Department of Computer, School of Intelligent Medicine, China Medical University, Shenyang, Liaoning, China
- Yan Lou
- Department of Computer, School of Intelligent Medicine, China Medical University, Shenyang, Liaoning, China
- Shaochong Zhang
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Weihua Yang
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
53
Farahat Z, Zrira N, Souissi N, Benamar S, Belmekki M, Ngote MN, Megdiche K. Application of Deep Learning Methods in a Moroccan Ophthalmic Center: Analysis and Discussion. Diagnostics (Basel) 2023; 13:1694. PMID: 37238179; DOI: 10.3390/diagnostics13101694.
Abstract
Diabetic retinopathy (DR) remains one of the world's most frequent eye illnesses, leading to vision loss among working-age individuals. Hemorrhages and exudates are examples of signs of DR. Artificial intelligence (AI), particularly deep learning (DL), is poised to impact nearly every aspect of human life and gradually transform medical practice. Insight into the condition of the retina is becoming more accessible thanks to major advancements in diagnostic technology. AI approaches can be used to assess large volumes of morphological data derived from digital images in a rapid and noninvasive manner, and computer-aided diagnosis tools for automatic detection of early-stage DR signs will ease the pressure on clinicians. In this work, we apply two methods to the color fundus images taken on-site at the Cheikh Zaïd Foundation's Ophthalmic Center in Rabat to detect both exudates and hemorrhages. First, we apply the U-Net method to segment exudates and hemorrhages, marked in red and green, respectively. Second, the You Only Look Once version 5 (YOLOv5) method identifies the presence of hemorrhages and exudates in an image and predicts a probability for each bounding box. The proposed segmentation method obtained a specificity of 85%, a sensitivity of 85%, and a Dice score of 85%. The detection software successfully detected 100% of diabetic retinopathy signs, the expert doctor detected 99% of DR signs, and the resident doctor detected 84%.
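The specificity, sensitivity, and Dice figures quoted for the segmentation method all derive from pixel-wise confusion counts between the predicted and reference masks; a minimal sketch with hypothetical masks (not the paper's evaluation code):

```python
def seg_metrics(pred, truth):
    """Pixel-wise Dice, sensitivity, specificity for binary masks (flat 0/1 lists)."""
    tp = sum(p and t for p, t in zip(pred, truth))           # lesion hit
    fp = sum(p and not t for p, t in zip(pred, truth))       # false alarm
    fn = sum((not p) and t for p, t in zip(pred, truth))     # missed lesion
    tn = sum((not p) and (not t) for p, t in zip(pred, truth))
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dice, sensitivity, specificity

dice, sens, spec = seg_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
print(round(dice, 3), round(sens, 3), round(spec, 3))  # → 0.667 0.667 0.667
```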
Affiliation(s)
- Zineb Farahat
- LISTD Laboratory, Ecole Nationale Supérieure des Mines de Rabat, Rabat 10000, Morocco
- Cheikh Zaïd Foundation Medical Simulation Center, Rabat 10000, Morocco
- Nabila Zrira
- LISTD Laboratory, Ecole Nationale Supérieure des Mines de Rabat, Rabat 10000, Morocco
- Nissrine Souissi
- LISTD Laboratory, Ecole Nationale Supérieure des Mines de Rabat, Rabat 10000, Morocco
- Safia Benamar
- Cheikh Zaïd Ophthalmic Center, Cheikh Zaïd International University Hospital, Rabat 10000, Morocco
- Institut Supérieur d'Ingénierie et Technologies de Santé/Faculté de Médecine Abulcasis, Université Internationale Abulcasis des Sciences de la Santé, Rabat 10000, Morocco
- Mohammed Belmekki
- Cheikh Zaïd Ophthalmic Center, Cheikh Zaïd International University Hospital, Rabat 10000, Morocco
- Institut Supérieur d'Ingénierie et Technologies de Santé/Faculté de Médecine Abulcasis, Université Internationale Abulcasis des Sciences de la Santé, Rabat 10000, Morocco
- Mohamed Nabil Ngote
- LISTD Laboratory, Ecole Nationale Supérieure des Mines de Rabat, Rabat 10000, Morocco
- Institut Supérieur d'Ingénierie et Technologies de Santé/Faculté de Médecine Abulcasis, Université Internationale Abulcasis des Sciences de la Santé, Rabat 10000, Morocco
- Kawtar Megdiche
- Cheikh Zaïd Foundation Medical Simulation Center, Rabat 10000, Morocco
54
Yu R, Ye X, Wang X, Wu Q, Jia L, Dong K, Zhu Z, Bao Y, Hou X, Jia W. Serum cholinesterase is associated with incident diabetic retinopathy: the Shanghai Nicheng cohort study. Nutr Metab (Lond) 2023; 20:26. PMID: 37138337; PMCID: PMC10155425; DOI: 10.1186/s12986-023-00743-2.
Abstract
BACKGROUND Serum cholinesterase (ChE) is positively associated with incident diabetes and dyslipidemia. We aimed to investigate the relationship between ChE and the incidence of diabetic retinopathy (DR). METHODS Based on a community-based cohort study followed for 4.6 years, 1133 participants aged 55-70 years with diabetes were analyzed. Fundus photographs were taken of each eye at both the baseline and follow-up investigations. The presence and severity of DR were categorized as no DR, mild non-proliferative DR (NPDR), and referable DR (moderate NPDR or worse). Binary and multinomial logistic regression models were used to estimate the risk ratio (RR) and 95% confidence interval (CI) for the association between ChE and DR. RESULTS Among the 1133 participants, 72 (6.4%) cases of DR occurred. Multivariable binary logistic regression showed that the highest tertile of ChE (≥ 422 U/L) was associated with a 2.01-fold higher risk of incident DR (RR 2.01, 95% CI 1.01-4.00; P for trend < 0.05) than the lowest tertile (< 354 U/L). Multivariable binary and multinomial logistic regression showed that, per 1-SD increase of loge-transformed ChE, the risk of DR increased by 41% (RR 1.41, 95% CI 1.05-1.90) and the risk of incident referable DR was almost 2-fold that of no DR (RR 1.99, 95% CI 1.24-3.18). Furthermore, multiplicative interactions on the risk of DR were found between ChE and older age (aged 60 and older; P for interaction = 0.003) and between ChE and male sex (P for interaction = 0.044). CONCLUSIONS In this study, ChE was associated with the incidence of DR, especially referable DR, and is a potential biomarker for predicting incident DR.
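The per-1-SD and tertile effect estimates above rest on two routine preprocessing steps: loge-transforming and standardizing ChE, and cutting it at the tertile boundaries. A minimal sketch using the abstract's cut-points (the regression models themselves are omitted; the sample values are hypothetical):

```python
import math
import statistics

def standardize_log(values):
    """loge-transform, then scale to mean 0 / SD 1 (population SD), so a
    regression coefficient corresponds to a per-1-SD increase of log(ChE)."""
    logs = [math.log(v) for v in values]
    mu = statistics.mean(logs)
    sd = statistics.pstdev(logs)
    return [(x - mu) / sd for x in logs]

def tertile(che_u_per_l, cutoffs=(354, 422)):
    """Tertile group using the abstract's ChE cut-points (U/L)."""
    low, high = cutoffs
    return 1 if che_u_per_l < low else (2 if che_u_per_l < high else 3)

print(tertile(430))  # → 3 (highest tertile, ≥ 422 U/L)
```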
Affiliation(s)
- Rong Yu
- Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Key Laboratory of Diabetes Mellitus, Shanghai Clinical Center for Diabetes, Shanghai Key Clinical Center for Metabolic Disease, Shanghai, China
- Xiaoqi Ye
- Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Key Laboratory of Diabetes Mellitus, Shanghai Clinical Center for Diabetes, Shanghai Key Clinical Center for Metabolic Disease, Shanghai, China
- Xiangning Wang
- Department of Ophthalmology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Qiang Wu
- Department of Ophthalmology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Lili Jia
- Department of Ophthalmology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Keqing Dong
- General Practitioner Teams in Community Health Service Center of Nicheng, Pudong New District, Shanghai, China
- Zhijun Zhu
- General Practitioner Teams in Community Health Service Center of Nicheng, Pudong New District, Shanghai, China
- Yuqian Bao
- Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Key Laboratory of Diabetes Mellitus, Shanghai Clinical Center for Diabetes, Shanghai Key Clinical Center for Metabolic Disease, Shanghai, China
- Xuhong Hou
- Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Key Laboratory of Diabetes Mellitus, Shanghai Clinical Center for Diabetes, Shanghai Key Clinical Center for Metabolic Disease, Shanghai, China
- Weiping Jia
- Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Key Laboratory of Diabetes Mellitus, Shanghai Clinical Center for Diabetes, Shanghai Key Clinical Center for Metabolic Disease, Shanghai, China
55
Tan Y, Zhao SX, Yang KF, Li YJ. A lightweight network guided with differential matched filtering for retinal vessel segmentation. Comput Biol Med 2023; 160:106924. PMID: 37146492; DOI: 10.1016/j.compbiomed.2023.106924.
Abstract
The geometric morphology of retinal vessels reflects the state of cardiovascular health, and fundus images are important reference materials for ophthalmologists. Great progress has been made in automated vessel segmentation, but few studies have focused on thin vessel breakage and false-positives in areas with lesions or low contrast. In this work, we propose a new network, differential matched filtering guided attention UNet (DMF-AU), to address these issues, incorporating a differential matched filtering layer, feature anisotropic attention, and a multiscale consistency constrained backbone to perform thin vessel segmentation. The differential matched filtering is used for the early identification of locally linear vessels, and the resulting rough vessel map guides the backbone to learn vascular details. Feature anisotropic attention reinforces the vessel features of spatial linearity at each stage of the model. Multiscale constraints reduce the loss of vessel information while pooling within large receptive fields. In tests on multiple classical datasets, the proposed model performed well compared with other algorithms on several specially designed criteria for vessel segmentation. DMF-AU is a high-performance, lightweight vessel segmentation model. The source code is at https://github.com/tyb311/DMF-AU.
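The differential matched filtering layer builds on the classic Gaussian matched filter for linear vessels: a negated Gaussian cross-profile swept along a short line segment and replicated at several orientations. A plain numpy sketch of such a kernel bank (an illustration of the general technique, not the DMF-AU layer itself; parameter values are assumptions):

```python
import numpy as np

def matched_filter_kernel(sigma=1.5, length=9, size=15, angle_deg=0.0):
    """Gaussian matched filter for linear vessels: a negated Gaussian
    cross-profile swept along a line segment, mean-subtracted within its
    support so a flat background yields zero response."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    theta = np.deg2rad(angle_deg)
    # rotate coordinates so the kernel's long axis follows `angle_deg`
    u = xs * np.cos(theta) + ys * np.sin(theta)   # along-vessel axis
    v = -xs * np.sin(theta) + ys * np.cos(theta)  # across-vessel axis
    support = np.abs(u) <= length / 2.0
    kernel = np.where(support, -np.exp(-(v ** 2) / (2 * sigma ** 2)), 0.0)
    kernel[support] -= kernel[support].mean()  # zero-mean within the support
    return kernel

# A bank of 12 orientations, as is typical for matched-filter vessel detection.
bank = [matched_filter_kernel(angle_deg=a) for a in range(0, 180, 15)]
```

Convolving a fundus image with each kernel and taking the per-pixel maximum response gives the rough vessel map that, in the paper's design, guides the backbone network.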
Affiliation(s)
- Yubo Tan
- The MOE Key Laboratory for Neuroinformation, Radiation Oncology Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, China
- Shi-Xuan Zhao
- The MOE Key Laboratory for Neuroinformation, Radiation Oncology Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, China
- Kai-Fu Yang
- The MOE Key Laboratory for Neuroinformation, Radiation Oncology Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, China
- Yong-Jie Li
- The MOE Key Laboratory for Neuroinformation, Radiation Oncology Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, China
56
Alam MN, Yamashita R, Ramesh V, Prabhune T, Lim JI, Chan RVP, Hallak J, Leng T, Rubin D. Contrastive learning-based pretraining improves representation and transferability of diabetic retinopathy classification models. Sci Rep 2023; 13:6047. PMID: 37055475; PMCID: PMC10102012; DOI: 10.1038/s41598-023-33365-y.
Abstract
Diabetic retinopathy (DR) is a major cause of vision impairment in diabetic patients worldwide. Due to its prevalence, early clinical diagnosis is essential to improve treatment management of DR patients. Despite recent demonstrations of successful machine learning (ML) models for automated DR detection, there is a significant clinical need for robust models that can be trained with smaller datasets and still perform with high diagnostic accuracy in independent clinical datasets (i.e., high model generalizability). Towards this need, we have developed a self-supervised contrastive learning (CL) based pipeline for classification of referable vs non-referable DR. Self-supervised CL based pretraining allows enhanced data representation and therefore the development of robust and generalized deep learning (DL) models, even with small, labeled datasets. We have integrated a neural style transfer (NST) augmentation in the CL pipeline to produce models with better representations and initializations for the detection of DR in color fundus images. We compare our CL pretrained model performance with that of two state-of-the-art baseline models pretrained with ImageNet weights. We further investigate the model performance with reduced labeled training data (down to 10 percent) to test the robustness of the model when trained with small, labeled datasets. The model is trained and validated on the EyePACS dataset and tested independently on clinical datasets from the University of Illinois at Chicago (UIC). Compared to baseline models, our CL pretrained FundusNet model had a higher area under the receiver operating characteristic (ROC) curve (AUC) (95% CI): 0.91 (0.898-0.930) vs 0.80 (0.783-0.820) and 0.83 (0.801-0.853) on UIC data. At 10 percent labeled training data, the FundusNet AUC was 0.81 (0.78-0.84) vs 0.58 (0.56-0.64) and 0.63 (0.60-0.66) for the baseline models when tested on the UIC dataset.
CL based pretraining with NST significantly improves DL classification performance, helps the model generalize well (transferable from EyePACS to UIC data), and allows training with small, annotated datasets, therefore reducing ground truth annotation burden of the clinicians.
Affiliation(s)
- Minhaj Nur Alam
- Department of Biomedical Data Science, Stanford University School of Medicine, 1265 Welch Road, Stanford, CA, 94305, USA
- Department of Electrical and Computer Engineering, University of North Carolina at Charlotte, 9201 University City Boulevard, Charlotte, NC, 28223, USA
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, 60612, USA
- Rikiya Yamashita
- Department of Biomedical Data Science, Stanford University School of Medicine, 1265 Welch Road, Stanford, CA, 94305, USA
- Vignav Ramesh
- Department of Biomedical Data Science, Stanford University School of Medicine, 1265 Welch Road, Stanford, CA, 94305, USA
- Tejas Prabhune
- Department of Biomedical Data Science, Stanford University School of Medicine, 1265 Welch Road, Stanford, CA, 94305, USA
- Jennifer I Lim
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, 60612, USA
- R V P Chan
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, 60612, USA
- Joelle Hallak
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, 60612, USA
- Theodore Leng
- Department of Ophthalmology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Daniel Rubin
- Department of Biomedical Data Science, Stanford University School of Medicine, 1265 Welch Road, Stanford, CA, 94305, USA
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA
57
Liu R, Wang T, Li H, Zhang P, Li J, Yang X, Shen D, Sheng B. TMM-Nets: Transferred Multi- to Mono-Modal Generation for Lupus Retinopathy Diagnosis. IEEE Trans Med Imaging 2023; 42:1083-1094. PMID: 36409801; DOI: 10.1109/tmi.2022.3223683.
Abstract
Rare diseases, which are severely underrepresented in basic and clinical research, can particularly benefit from machine learning techniques. However, current learning-based approaches usually focus on either mono-modal image data or matched multi-modal data, whereas the diagnosis of rare diseases necessitates the aggregation of unstructured and unmatched multi-modal image data due to their rare and diverse nature. In this study, we therefore propose diagnosis-guided multi-to-mono modal generation networks (TMM-Nets) along with training and testing procedures. TMM-Nets can transfer data from multiple sources to a single modality for diagnostic data structurization. To demonstrate their potential in the context of rare diseases, TMM-Nets were deployed to diagnose lupus retinopathy (LR-SLE), leveraging unmatched regular and ultra-wide-field fundus images for transfer learning. The TMM-Nets encoded the transfer learning from diabetic retinopathy to LR-SLE based on the similarity of the fundus lesions. In addition, a lesion-aware multi-scale attention mechanism was developed for clinical alerts, enabling TMM-Nets not only to inform patient care but also to provide insights consistent with those of clinicians. An adversarial strategy was also developed to refine multi-to-mono modal image generation based on diagnostic results and the data distribution to enhance the data augmentation performance. Compared to the baseline model, the TMM-Nets showed 35.19% and 33.56% F1 score improvements on the test and external validation sets, respectively. In addition, the TMM-Nets can be used to develop diagnostic models for other rare diseases.
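The reported gains are relative improvements in F1 score, the harmonic mean of precision and recall; a minimal sketch with hypothetical confusion counts (not the paper's evaluation code):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall,
    equivalently 2*TP / (2*TP + FP + FN)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def relative_improvement(new, old):
    """Percent improvement of a new score over a baseline score."""
    return 100.0 * (new - old) / old

print(round(f1_score(80, 10, 20), 3))              # → 0.842
print(round(relative_improvement(0.84, 0.62), 2))  # → 35.48
```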
58
Moradi M, Chen Y, Du X, Seddon JM. Deep ensemble learning for automated non-advanced AMD classification using optimized retinal layer segmentation and SD-OCT scans. Comput Biol Med 2023; 154:106512. PMID: 36701964; DOI: 10.1016/j.compbiomed.2022.106512.
Abstract
BACKGROUND Accurate retinal layer segmentation in optical coherence tomography (OCT) images is crucial for quantitatively analyzing age-related macular degeneration (AMD) and monitoring its progression. However, previous retinal segmentation models depend on experienced experts, and manually annotating retinal layers is time-consuming. On the other hand, the accuracy of AMD diagnosis is directly related to the segmentation model's performance. To address these issues, we aimed to improve AMD detection using optimized retinal layer segmentation and deep ensemble learning. METHOD We integrated a graph-cut algorithm with a cubic spline to automatically annotate 11 retinal boundaries. The refined images were fed into a deep ensemble mechanism that combined a Bagged Tree and end-to-end deep learning classifiers. We tested the developed deep ensemble model on internal and external datasets. RESULTS The total error rates for our segmentation model using the boundary refinement approach were significantly lower than those of OCT Explorer segmentations (1.7% vs. 7.8%, p-value = 0.03). We utilized the refinement approach to quantify 169 imaging features using Zeiss SD-OCT volume scans. The presence of drusen and the thicknesses of the total retina, the neurosensory retina, and the ellipsoid zone to inner-outer segment (EZ-ISOS) contributed more to AMD classification than the other features. The developed ensemble learning model obtained higher diagnostic accuracy in a shorter time than two human graders. The area under the curve (AUC) for normal vs. early AMD was 99.4%. CONCLUSION Testing results showed that the developed framework is repeatable and effective as a potentially valuable tool in retinal imaging research.
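An ensemble of the kind described, combining a Bagged Tree with deep classifiers, is commonly reduced to (weighted) soft voting over per-class probabilities; a minimal sketch with hypothetical model outputs (the abstract does not specify the exact combination rule):

```python
def soft_vote(prob_lists, weights=None):
    """Weighted soft-voting ensemble: average the per-class probabilities
    from several classifiers, then pick the argmax class."""
    n_models = len(prob_lists)
    weights = weights or [1.0 / n_models] * n_models
    n_classes = len(prob_lists[0])
    avg = [sum(w * probs[c] for w, probs in zip(weights, prob_lists))
           for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__), avg

# Hypothetical per-class probabilities (normal, early AMD) from three models.
cls, avg = soft_vote([[0.30, 0.70], [0.45, 0.55], [0.20, 0.80]])
print(cls)  # → 1 (early AMD)
```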
Affiliation(s)
- Mousa Moradi
- Department of Biomedical Engineering, University of Massachusetts, Amherst, MA, United States
- Yu Chen
- Department of Biomedical Engineering, University of Massachusetts, Amherst, MA, United States
- Xian Du
- Department of Mechanical and Industrial Engineering, University of Massachusetts, Amherst, MA, United States
- Johanna M Seddon
- Department of Ophthalmology & Visual Sciences, University of Massachusetts Chan Medical School, Worcester, MA, United States
59
|
Lin S, Ma Y, Xu Y, Lu L, He J, Zhu J, Peng Y, Yu T, Congdon N, Zou H. Artificial Intelligence in Community-Based Diabetic Retinopathy Telemedicine Screening in Urban China: Cost-effectiveness and Cost-Utility Analyses With Real-world Data. JMIR Public Health Surveill 2023; 9:e41624. [PMID: 36821353 PMCID: PMC9999255 DOI: 10.2196/41624] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2022] [Revised: 11/13/2022] [Accepted: 01/12/2023] [Indexed: 02/24/2023] Open
Abstract
BACKGROUND Community-based telemedicine screening for diabetic retinopathy (DR) has been highly recommended worldwide. However, evidence from low- and middle-income countries (LMICs) on the choice between artificial intelligence (AI)-based and manual grading-based telemedicine screening is inadequate for policy making. OBJECTIVE The aim of this study was to test whether the AI model is more worthwhile than manual grading in community-based telemedicine screening for DR in the context of labor costs in urban China. METHODS We conducted cost-effectiveness and cost-utility analyses by using decision-analytic Markov models with 30 one-year cycles from a societal perspective to compare the cost, effectiveness, and utility of 2 scenarios in telemedicine screening for DR: manual grading and an AI model. Sensitivity analyses were performed. Real-world data were obtained mainly from the Shanghai Digital Eye Disease Screening Program. The main outcomes were the incremental cost-effectiveness ratio (ICER) and the incremental cost-utility ratio (ICUR). The ICUR thresholds were set as 1 and 3 times the local gross domestic product per capita. RESULTS The total expected costs for a 65-year-old resident were US $3182.50 and US $3265.40, the total expected years without blindness were 9.80 years and 9.83 years, and the utilities were 6.748 quality-adjusted life years (QALYs) and 6.753 QALYs in the AI model and manual grading, respectively. The ICER for the AI-assisted model was US $2553.39 per year without blindness, and the ICUR was US $15,216.96 per QALY, which indicated that the AI-assisted model was not cost-effective. The sensitivity analysis suggested that if compliance with referrals increases by 7.5% after the adoption of AI, if on-site screening costs in manual grading increase by 50%, or if on-site screening costs in the AI model decrease by 50%, then the AI model could become the dominant strategy.
CONCLUSIONS Our study may provide a reference for policy making in planning community-based telemedicine screening for DR in LMICs. Our findings indicate that unless the referral compliance of patients with suspected DR increases, the adoption of the AI model may not improve the value of telemedicine screening compared to manual grading in LMICs. The main reason is that, in the context of low labor costs in LMICs, the direct health care costs saved by replacing manual grading with AI are modest, while screening effectiveness (QALYs and years without blindness) decreases. Our study suggests that the magnitude of the value generated by this technology replacement depends primarily on 2 aspects: the extent of direct health care costs reduced by AI, and the change in health care service utilization caused by AI. Our research can therefore also provide analytical ideas for other health care sectors deciding whether to adopt AI.
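The decision-analytic structure described above can be sketched as a small cohort Markov model. The transition probabilities, costs, and utilities below are illustrative placeholders, not the study's calibrated inputs; the point is only the mechanics of accumulating expected costs and QALYs over one-year cycles and forming an incremental cost-utility ratio.

```python
def run_strategy(p_blind, p_die, screen_cost, blind_cost, u_sighted, u_blind,
                 cycles=30):
    """Three-state cohort Markov model (sighted -> blind -> dead) run for
    `cycles` one-year cycles; returns (expected cost, expected QALYs)."""
    sighted, blind = 1.0, 0.0
    cost = qalys = 0.0
    for _ in range(cycles):
        # accrue this cycle's costs and utilities on the living cohort
        cost += sighted * screen_cost + blind * blind_cost
        qalys += sighted * u_sighted + blind * u_blind
        # transition: some sighted go blind, everyone faces mortality
        newly_blind = sighted * p_blind
        sighted = (sighted - newly_blind) * (1 - p_die)
        blind = (blind + newly_blind) * (1 - p_die)
    return cost, qalys

# Illustrative inputs only -- NOT the study's calibrated values.
c_ai, q_ai = run_strategy(0.004, 0.02, 110.0, 800.0, 0.85, 0.50)
c_man, q_man = run_strategy(0.003, 0.02, 125.0, 800.0, 0.85, 0.50)
icur = (c_man - c_ai) / (q_man - q_ai)  # incremental cost per QALY gained
```

A real analysis would add discounting, referral-compliance branches, and probabilistic sensitivity analysis, but the ICER/ICUR arithmetic is exactly this delta-cost over delta-effect ratio.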
Affiliation(s)
- Senlin Lin, Yingyan Ma, Yi Xu, Lina Lu, Jiangnan He, Jianfeng Zhu, Yajun Peng, Tao Yu, Haidong Zou
- Department of Eye Disease Prevention and Control, Shanghai Eye Disease Prevention and Treatment Center/Shanghai Eye Hospital, Shanghai, China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, China
- Nathan Congdon
- Centre for Public Health, Queen's University Belfast, Belfast, United Kingdom; Orbis International, New York, NY, United States; Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
|
60
|
Liu L, Wu X, Lin D, Zhao L, Li M, Yun D, Lin Z, Pang J, Li L, Wu Y, Lai W, Xiao W, Shang Y, Feng W, Tan X, Li Q, Liu S, Lin X, Sun J, Zhao Y, Yang X, Ye Q, Zhong Y, Huang X, He Y, Fu Z, Xiang Y, Zhang L, Zhao M, Qu J, Xu F, Lu P, Li J, Xu F, Wei W, Dong L, Dai G, He X, Yan W, Zhu Q, Lu L, Zhang J, Zhou W, Meng X, Li S, Shen M, Jiang Q, Chen N, Zhou X, Li M, Wang Y, Zou H, Zhong H, Yang W, Shou W, Zhong X, Yang Z, Ding L, Hu Y, Tan G, He W, Zhao X, Chen Y, Liu Y, Lin H. DeepFundus: A flow-cytometry-like image quality classifier for boosting the whole life cycle of medical artificial intelligence. Cell Rep Med 2023; 4:100912. [PMID: 36669488 PMCID: PMC9975093 DOI: 10.1016/j.xcrm.2022.100912] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2022] [Revised: 11/01/2022] [Accepted: 12/26/2022] [Indexed: 01/20/2023]
Abstract
Medical artificial intelligence (AI) has been moving from the research phase to clinical implementation. However, most AI-based models are built mainly on high-quality images preprocessed in the laboratory, which is not representative of real-world settings. This dataset bias has proved to be a major driver of AI system dysfunction. Inspired by the design of flow cytometry, DeepFundus, a deep-learning-based fundus image classifier, is developed to provide automated and multidimensional image sorting to address this data quality gap. DeepFundus achieves areas under the receiver operating characteristic curve (AUCs) over 0.9 in image classification concerning overall quality, clinical quality factors, and structural quality on both the internal test and national validation datasets. Additionally, DeepFundus can be integrated into both model development and the clinical application of AI diagnostics to significantly enhance model performance for detecting multiple retinopathies. DeepFundus can be used to construct a data-driven paradigm for improving the entire life cycle of medical AI practice.
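The AUC figures reported above can be computed directly from labels and scores with the rank-based (Mann-Whitney) formulation, sketched here in plain Python; the quality labels and scores are illustrative.

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a randomly
    chosen positive is scored above a randomly chosen negative (ties = 1/2)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative quality labels (1 = gradable image) and classifier scores
labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(roc_auc(labels, scores))  # 0.75
```

This pairwise formulation is equivalent to the area under the ROC curve and is handy for sanity-checking library outputs on small validation sets.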
Affiliation(s)
- Lixue Liu, Xiaohang Wu, Duoru Lin, Lanqin Zhao, Mingyuan Li, Dongyuan Yun, Zhenzhe Lin, Jianyu Pang, Longhui Li, Yuxuan Wu, Weiyi Lai, Wei Xiao, Yuanjun Shang, Weibo Feng, Xiao Tan, Yizhi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Qiang Li, Shenzhen Liu, Xinxin Lin, Jiaxin Sun, Yiqi Zhao, Ximei Yang
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
- Qinying Ye
- Department of Ophthalmology, Second Affiliated Hospital, Guangdong Medical University, Zhanjiang, Guangdong, China
- Yuesi Zhong, Xi Huang
- Department of Ophthalmology, Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Yuan He, Ziwei Fu
- Department of Ophthalmology, The Second Affiliated Hospital of Xi'an Medical University, Xi'an, Shaanxi, China
- Yi Xiang, Li Zhang
- Department of Ophthalmology, Central Hospital of Wuhan, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Mingwei Zhao, Jinfeng Qu
- Department of Ophthalmology, People's Hospital of Peking University, Beijing, China
- Fan Xu, Peng Lu
- Department of Ophthalmology, People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, Guangxi, China
- Jianqiao Li, Fabao Xu
- Department of Ophthalmology, Qilu Hospital, Shandong University, Jinan, Shandong, China
- Wenbin Wei, Li Dong
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Xingru He
- School of Public Health, He University, Shenyang, Liaoning, China
- Wentao Yan, Qiaolin Zhu
- The Eye Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, China
- Linna Lu, Jiaying Zhang
- Department of Ophthalmology, Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Wei Zhou, Xiangda Meng
- Department of Ophthalmology, Tianjin Medical University General Hospital, Tianjin, China
- Shiying Li, Mei Shen
- Department of Ophthalmology, Xiang'an Hospital of Xiamen University, Xiamen, Fujian, China
- Qin Jiang, Nan Chen
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
- Xingtao Zhou, Meiyan Li
- Department of Ophthalmology, Eye and ENT Hospital, Fudan University, Shanghai, China
- Yan Wang, Haohan Zou
- Tianjin Eye Hospital, Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Nankai University, Tianjin, China
- Hua Zhong, Wenyan Yang
- Department of Ophthalmology, The First Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
- Wulin Shou
- Jiaxing Chaoju Eye Hospital, Jiaxing, Zhejiang, China
- Xingwu Zhong, Zhenduo Yang
- Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, Hainan, China
- Lin Ding
- Department of Ophthalmology, People's Hospital of Xinjiang Uygur Autonomous Region, Urumqi, Xinjiang, China
- Yongcheng Hu
- Bayannur Xudong Eye Hospital, Bayannur, Inner Mongolia, China
- Gang Tan
- Department of Ophthalmology, The First Affiliated Hospital, Hengyang Medical School, University of South China, Hengyang, Hunan, China
- Wanji He, Xin Zhao, Yuzhong Chen
- Beijing Airdoc Technology Co., Ltd., Beijing, China
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China; Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, Hainan, China; Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
|
61
|
Vijayan M, S V. A Regression-Based Approach to Diabetic Retinopathy Diagnosis Using Efficientnet. Diagnostics (Basel) 2023; 13:diagnostics13040774. [PMID: 36832262 PMCID: PMC9955015 DOI: 10.3390/diagnostics13040774] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2023] [Revised: 02/10/2023] [Accepted: 02/14/2023] [Indexed: 02/22/2023] Open
Abstract
The aim of this study is to develop a computer-assisted solution for the efficient and effective detection of diabetic retinopathy (DR), a complication of diabetes that can damage the retina and cause vision loss if not treated in a timely manner. Manually diagnosing DR through color fundus images requires a skilled clinician to spot lesions, which can be challenging, especially in areas with a shortage of trained experts. As a result, there is a push to create computer-aided diagnosis systems for DR to help reduce the time it takes to diagnose the condition. Automated detection of diabetic retinopathy is challenging, but convolutional neural networks (CNNs) play a vital role in achieving success; CNNs have proven more effective in image classification than methods based on handcrafted features. This study proposes a CNN-based approach for the automated detection of DR using EfficientNet-B0 as the backbone network. The authors take a distinctive approach by treating the detection of diabetic retinopathy as a regression problem rather than a traditional multi-class classification problem, because the severity of DR is often rated on a continuous scale, such as the international clinical diabetic retinopathy (ICDR) scale. This continuous representation provides a more nuanced understanding of the condition, making regression a more suitable approach for DR detection than multi-class classification. This approach has several benefits. First, it allows for more fine-grained predictions, as the model can assign a value that falls between the traditional discrete labels. Second, it allows for better generalization. The model was tested on the APTOS and DDR datasets and demonstrated improved efficiency and accuracy in detecting DR compared to traditional methods. The method has the potential to enhance the efficiency and accuracy of DR diagnosis, making it a valuable tool for healthcare professionals and aiding rapid, accurate diagnosis, leading to improved early detection and management of the disease.
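Operationally, a regression model for DR severity emits one continuous value that must be mapped back to a discrete ICDR grade at inference time. A minimal sketch of that mapping follows; simple rounding with clipping is assumed here, though some implementations instead learn the cut-points on a validation set.

```python
def to_icdr_grade(y_pred, n_grades=5):
    """Clip a continuous severity prediction to [0, n_grades - 1] and round
    it to the nearest integer ICDR grade (0 = no DR ... 4 = proliferative)."""
    return int(max(0, min(n_grades - 1, round(y_pred))))

# A continuous output of 2.7 falls between "moderate" (2) and "severe" (3):
print(to_icdr_grade(2.7))   # 3
print(to_icdr_grade(-0.4))  # 0  (clipped at the bottom of the scale)
print(to_icdr_grade(5.9))   # 4  (clipped at the top of the scale)
```

The intermediate continuous value (2.7 above) is exactly the "fine-grained prediction" the abstract refers to; it is available before the rounding step for ranking or triage.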
|
62
|
Attention-Driven Cascaded Network for Diabetic Retinopathy Grading from Fundus Images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104370] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
|
63
|
Khalili Pour E, Rezaee K, Azimi H, Mirshahvalad SM, Jafari B, Fadakar K, Faghihi H, Mirshahi A, Ghassemi F, Ebrahimiadib N, Mirghorbani M, Bazvand F, Riazi-Esfahani H, Riazi Esfahani M. Automated machine learning-based classification of proliferative and non-proliferative diabetic retinopathy using optical coherence tomography angiography vascular density maps. Graefes Arch Clin Exp Ophthalmol 2023; 261:391-399. [PMID: 36050474 DOI: 10.1007/s00417-022-05818-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2022] [Revised: 08/07/2022] [Accepted: 08/23/2022] [Indexed: 01/17/2023] Open
Abstract
PURPOSE The study aims to classify eyes with proliferative diabetic retinopathy (PDR) and non-proliferative diabetic retinopathy (NPDR) based on optical coherence tomography angiography (OCTA) vascular density maps using a supervised machine learning algorithm. METHODS OCTA vascular density maps (at the superficial capillary plexus (SCP), deep capillary plexus (DCP), and total retina (R) levels) of 148 eyes (45 PDR and 103 NPDR) from 78 patients with diabetic retinopathy were used to classify the images into NPDR and PDR groups with a supervised machine learning algorithm, a support vector machine (SVM) classifier optimized by a genetic evolutionary algorithm. RESULTS The implemented algorithm, in three different models, reached up to 85% accuracy in classifying PDR and NPDR at all three levels of vascular density maps. The deep retinal layer vascular density map demonstrated the best performance, with 90% accuracy in discriminating between PDR and NPDR. CONCLUSIONS This study on a limited number of patients with diabetic retinopathy demonstrated that a supervised machine learning method, the SVM, can be used to differentiate PDR and NPDR patients using OCTA vascular density maps.
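A genetic evolutionary algorithm for tuning an SVM typically evolves a population of hyperparameter vectors (e.g., log C and log gamma) scored by cross-validated accuracy. The minimal real-coded GA below sketches that loop; the quadratic stand-in fitness replaces the cross-validation step, and the operator choices (elitism, blend crossover, Gaussian mutation) are assumptions, not the paper's exact configuration.

```python
import random

def evolve(fitness, bounds, pop_size=12, generations=30, seed=0):
    """Minimal real-coded GA: keep the fitter half (elitism), breed children
    by blend crossover of two random elites, then apply clipped Gaussian
    mutation; returns the fittest individual found."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # blend crossover
            child = [min(hi, max(lo, g + rng.gauss(0, 0.1 * (hi - lo))))
                     for g, (lo, hi) in zip(child, bounds)]  # mutate + clip
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Stand-in fitness with a known optimum at (1.5, -2.0); in the study this
# would be cross-validated SVM accuracy as a function of (log10 C, log10 gamma).
best = evolve(lambda p: -((p[0] - 1.5) ** 2 + (p[1] + 2.0) ** 2),
              bounds=[(-4.0, 4.0), (-4.0, 4.0)])
```

Because the fitness function is only ever called as a black box, swapping the stand-in for a real cross-validation score is a one-line change.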
Affiliation(s)
- Elias Khalili Pour, Seyed Mohammad Mirshahvalad, Behzad Jafari, Kaveh Fadakar, Hooshang Faghihi, Ahmad Mirshahi, Fariba Ghassemi, Nazanin Ebrahimiadib, Masoud Mirghorbani, Fatemeh Bazvand, Hamid Riazi-Esfahani
- Retina Service, Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Qazvin Square, South Karegar Street, Tehran, Iran
- Khosro Rezaee
- Department of Biomedical Engineering, Meybod University, Meybod, Iran
- Hossein Azimi
- Faculty of Mathematical Sciences and Computer, Kharazmi University, Tehran, Iran
- Mohammad Riazi Esfahani
- Department of Ophthalmology, Gavin Herbert Eye Institute, University of California Irvine, Irvine, CA, USA
|
64
|
Jia W, Fisher EB. Application and prospect of artificial intelligence in diabetes care. MEDICAL REVIEW (2021) 2023; 3:102-104. [PMID: 37724106 PMCID: PMC10471118 DOI: 10.1515/mr-2022-0039] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 11/15/2022] [Accepted: 01/12/2023] [Indexed: 09/20/2023]
Abstract
Diabetes is one of the fastest-growing non-communicable diseases and has become an important public health concern worldwide as well as in China. Currently, China has the largest population living with diabetes. Artificial intelligence (AI) is a fast-growing field, and its application to diabetes could enable the delivery of better management services for people with diabetes. This perspective summarizes the latest findings on digital technologies and AI in the following areas of diabetes care: screening and risk prediction of diabetes and diabetic complications, precise monitoring and intervention combined with new technologies, and mobile health applications for self-management support. Challenges to the further use of AI in diabetes care include data standardization and integration, the performance of AI-based medical devices, patient motivation, and sensitivity to privacy. In summary, although the application of AI in clinical practice is still at an early stage, we are moving toward a new paradigm for diabetes care with the rapid development and emerging applications of AI.
Affiliation(s)
- Weiping Jia
- Shanghai Diabetes Institute, Shanghai Sixth People’s Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Technical Center for Diabetes Prevention and Clinical Care, Shanghai Key Laboratory of Diabetes Mellitus, Department of Endocrinology and Metabolism, Shanghai Clinical Center for Diabetes, Shanghai Key Clinical Center for Metabolic Disease, Shanghai Research Center for Endocrine and Metabolic Diseases, Shanghai 200233, China
- Edwin B. Fisher
- Peers for Progress, Department of Health Behavior, Gillings School of Global Public Health, University of North Carolina at Chapel Hill, 135 Dauer Drive, Campus Box 7440, Chapel Hill, NC 27599-7440, USA
|
65
|
Developing a Deep Learning Model to Evaluate Bulbar Conjunctival Injection with Color Anterior Segment Photographs. J Clin Med 2023; 12:jcm12020715. [PMID: 36675643 PMCID: PMC9867092 DOI: 10.3390/jcm12020715] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2022] [Revised: 01/08/2023] [Accepted: 01/10/2023] [Indexed: 01/18/2023] Open
Abstract
The present research aims to evaluate the feasibility of a deep-learning model in grading bulbar conjunctival injection. Methods: We collected 1401 color anterior segment photographs showing the cornea and bulbar conjunctiva. The ground truth was bulbar conjunctival injection scores assigned by human ophthalmologists. Two convolutional neural network-based models were constructed and trained. Accuracy, precision, recall, F1-score, Cohen's kappa, and the area under the curve (AUC) were calculated to evaluate the efficiency of the deep learning models. The micro-average and macro-average AUC values for grading bulbar conjunctival injection were both 0.98. The deep learning model achieved a high accuracy of 87.12%, a precision of 87.13%, a recall of 87.12%, an F1-score of 87.07%, and a Cohen's kappa of 0.8153. The deep learning model demonstrated excellent performance in evaluating the severity of bulbar conjunctival injection and has the potential to help evaluate ocular surface diseases and determine disease progression and recovery.
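Of the metrics reported above, Cohen's kappa is the least standard to compute by hand: it corrects raw agreement for the agreement expected by chance from the two label distributions alone. A stdlib-only sketch, with illustrative injection grades:

```python
from collections import Counter

def accuracy(y_true, y_pred):
    """Fraction of exactly matching labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def cohens_kappa(y_true, y_pred):
    """kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is the chance agreement implied by the marginal label frequencies."""
    n = len(y_true)
    p_o = accuracy(y_true, y_pred)
    true_counts, pred_counts = Counter(y_true), Counter(y_pred)
    p_e = sum(true_counts[c] * pred_counts[c] for c in true_counts) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Illustrative injection grades (0 = none, 1 = mild, 2 = severe)
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 1, 2, 1]
print(round(cohens_kappa(y_true, y_pred), 4))  # 0.75
```

A kappa of 0.8153, as reported, indicates agreement well beyond chance under the usual Landis-Koch interpretation bands.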
|
66
|
Zhang H, Botler M, Kooman JP. Deep Learning for Image Analysis in Kidney Care. ADVANCES IN KIDNEY DISEASE AND HEALTH 2023; 30:25-32. [PMID: 36723278 DOI: 10.1053/j.akdh.2022.11.003] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/15/2022] [Revised: 08/23/2022] [Accepted: 11/07/2022] [Indexed: 12/24/2022]
Abstract
Analysis of medical images, such as radiological images or tissue specimens, is an indispensable part of medical diagnostics. Conventionally done manually, the process can be time-consuming and prone to interobserver variability. Image classification and segmentation by deep learning strategies, predominantly convolutional neural networks, may provide a significant advance in the diagnostic process. In renal medicine, most evidence has been generated around the radiological assessment of renal abnormalities and the segmentation of histological renal biopsy specimens. In this article, the basic principles of image analysis by convolutional neural networks and their system architecture are discussed, together with examples of their use in image analysis in nephrology.
Affiliation(s)
- Jeroen P Kooman
- Division of Nephrology, Department of Internal Medicine, Maastricht University Medical Center, Maastricht, The Netherlands
67
Chandhakanond P, Aimmanee P. Hemorrhage segmentation in mobile-phone retinal images using multiregion contrast enhancement and iterative NICK thresholding region growing. Sci Rep 2022; 12:21513. [PMID: 36513802 PMCID: PMC9747926 DOI: 10.1038/s41598-022-26073-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2022] [Accepted: 12/08/2022] [Indexed: 12/15/2022] Open
Abstract
Hemorrhage segmentation in retinal images is challenging because the sizes and shapes vary for each hemorrhage, the intensity is close to the blood vessels and macula, and the intensity is often nonuniform, especially for large hemorrhages. Hemorrhage segmentation in mobile-phone retinal images is even more challenging because mobile-phone retinal images usually have poorer contrast, more shadows, and uneven illumination compared to those obtained from the table-top ophthalmoscope. In this work, the proposed KMMRC-INRG method enhances the hemorrhage segmentation performance with nonuniform intensity in poor lighting conditions on mobile-phone images. It improves the uneven illumination of mobile-phone retinal images using a proposed method, K-mean multiregion contrast enhancement (KMMRC). It also enhances the boundary segmentation of the hemorrhage blobs using a novel iterative NICK thresholding region growing (INRG) method before applying an SVM classifier based on hue, saturation, and brightness features. This approach can achieve as high as 80.18%, 91.26%, 85.36%, and 80.08% for recall, precision, F1-measure, and IoU, respectively. The F1-measure score improves up to 19.02% compared to a state-of-the-art method DT-HSVE tested on the same full dataset and as much as 58.88% when considering only images with large-size hemorrhages.
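The NICK rule underlying the INRG step is a local-statistics threshold. Below is a minimal sketch of that core formula only, with an assumed k of -0.2 and a toy intensity window; the paper's full pipeline (KMMRC enhancement, iterative region growing, and the SVM classifier) is not reproduced here.

```python
import math

def nick_threshold(pixels, k=-0.2):
    """NICK threshold: T = m + k * sqrt((sum(p^2) - m^2) / N),
    where m is the mean intensity of the window. Negative k pushes
    the threshold below the mean, which suits dark lesions."""
    n = len(pixels)
    m = sum(pixels) / n
    ssq = sum(p * p for p in pixels)
    return m + k * math.sqrt((ssq - m * m) / n)

# Toy grayscale window: four dark hemorrhage-like pixels among
# bright background pixels. Dark pixels fall below the threshold.
window = [10, 12, 11, 200, 210, 13]
t = nick_threshold(window)
hemorrhage_mask = [p < t for p in window]
```

In the paper, this thresholding is applied iteratively inside a region-growing loop to refine hemorrhage blob boundaries rather than once per window as shown here.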
Affiliation(s)
- Patsaphon Chandhakanond
- School of Information, Computer, and Communication Technology, Sirindhorn International Institute of Technology, Thammasat University, 131 Moo 5, Tivanont Rd, Bangkadi, Meung, Patumthani 12000, Thailand
- Pakinee Aimmanee
- School of Information, Computer, and Communication Technology, Sirindhorn International Institute of Technology, Thammasat University, 131 Moo 5, Tivanont Rd, Bangkadi, Meung, Patumthani 12000, Thailand
68
Economics of Artificial Intelligence in Healthcare: Diagnosis vs. Treatment. Healthcare (Basel) 2022; 10:healthcare10122493. [PMID: 36554017 PMCID: PMC9777836 DOI: 10.3390/healthcare10122493] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2022] [Revised: 12/03/2022] [Accepted: 12/07/2022] [Indexed: 12/14/2022] Open
Abstract
Motivation: The price of medical treatment continues to rise due to (i) an increasing population; (ii) an aging population; (iii) disease prevalence; (iv) a rise in the frequency of patients who utilize health care services; and (v) rising prices. Objective: Artificial Intelligence (AI) is already well-known for its superiority in various healthcare applications, including the segmentation of lesions in images, speech recognition, smartphone personal assistants, navigation, ride-sharing apps, and many more. Our study is based on two hypotheses: (i) AI offers more economical solutions compared to conventional methods; (ii) AI treatment offers stronger economics compared to AI diagnosis. This novel study aims to evaluate AI technology in the context of healthcare costs, namely in the areas of diagnosis and treatment, and then compare it to the traditional or non-AI-based approaches. Methodology: PRISMA was used to select the best 200 studies for AI in healthcare with a primary focus on cost reduction, especially towards diagnosis and treatment. We defined the diagnosis and treatment architectures, investigated their characteristics, and categorized the roles that AI plays in the diagnostic and therapeutic paradigms. We experimented with various combinations of different assumptions by integrating AI and then comparing it against conventional costs. Lastly, we dwell on four powerful future concepts of AI, namely, pruning, bias, explainability, and regulatory approvals of AI systems. Conclusions: The model shows tremendous cost savings using AI tools in diagnosis and treatment. The economics of AI can be improved by incorporating pruning, reduction in AI bias, explainability, and regulatory approvals.
69
pGlycoQuant with a deep residual network for quantitative glycoproteomics at intact glycopeptide level. Nat Commun 2022; 13:7539. [PMID: 36477196 PMCID: PMC9729625 DOI: 10.1038/s41467-022-35172-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2022] [Accepted: 11/17/2022] [Indexed: 12/12/2022] Open
Abstract
Large-scale intact glycopeptide identification has been advanced by software tools. However, tools for quantitative analysis still lag behind, which hinders exploration of differential site-specific glycosylation. Here, we report pGlycoQuant, a generic tool for both primary and tandem mass spectrometry-based intact glycopeptide quantitation. pGlycoQuant advances glycopeptide matching by applying a deep learning model that reduces missing values by 19-89% compared with Byologic, MSFragger-Glyco, Skyline, and Proteome Discoverer, as well as a Match In Run algorithm for greater glycopeptide coverage, greatly expanding the quantitative function of several widely used search engines, including pGlyco 2.0, pGlyco3, Byonic and MSFragger-Glyco. Further application of pGlycoQuant to an N-glycoproteomic study in three different metastatic HCC cell lines quantifies 6435 intact N-glycopeptides and, together with in vitro molecular biology experiments, illustrates site 979-core fucosylation of L1CAM as a potential regulator of HCC metastasis. We expect further applications of the freely available pGlycoQuant in glycoproteomic studies.
70
Paul SK, Pan I, Sobol WM. Efficient labeling of retinal fundus photographs using deep active learning. J Med Imaging (Bellingham) 2022; 9:064001. [PMID: 36405815 PMCID: PMC9667889 DOI: 10.1117/1.jmi.9.6.064001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2022] [Accepted: 10/31/2022] [Indexed: 11/18/2023] Open
Abstract
Purpose To compare the performance of four deep active learning (DAL) approaches to optimize label efficiency for training diabetic retinopathy (DR) classification deep learning models. Approach 88,702 color retinal fundus photographs from 44,351 patients with DR grades from the publicly available EyePACS dataset were used. Four DAL approaches [entropy sampling (ES), Bayesian active learning by disagreement (BALD), core set, and adversarial active learning (ADV)] were compared to conventional naive random sampling. Models were compared at various dataset sizes using Cohen's kappa (CK) and area under the receiver operating characteristic curve on an internal test set of 10,000 images. An independent test set of 3662 fundus photographs was used to assess generalizability. Results On the internal test set, 3 out of 4 DAL methods resulted in statistically significant performance improvements (p < 1 × 10⁻⁴) compared to random sampling for multiclass classification, with the largest observed differences in CK ranging from 0.051 for BALD to 0.053 for ES. Improvements in multiclass classification generalized to the independent test set, with the largest differences in CK ranging from 0.126 to 0.135. However, no statistically significant improvements were seen for binary classification. Similar performance was seen across DAL methods, except ADV, which performed similarly to random sampling. Conclusions Uncertainty-based and feature descriptor-based deep active learning methods outperformed random sampling on both the internal and independent test sets at multiclass classification. However, binary classification performance remained similar across random sampling and active learning methods.
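Entropy sampling, one of the better-performing DAL strategies above, simply ranks the unlabeled pool by the entropy of the model's predicted class distribution and labels the most uncertain images first. A minimal sketch with hypothetical image ids and probabilities (not the study's implementation):

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted class distribution (nats)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_by_entropy(pool_probs, budget):
    """pool_probs: {image_id: [class probabilities]}.
    Returns the `budget` ids the model is most uncertain about,
    i.e. the ones worth sending for expert DR grading next."""
    ranked = sorted(pool_probs,
                    key=lambda i: entropy(pool_probs[i]),
                    reverse=True)
    return ranked[:budget]

# Hypothetical unlabeled fundus images with softmax outputs:
pool = {
    "img_a": [0.96, 0.02, 0.02],  # confident -> low entropy
    "img_b": [0.40, 0.35, 0.25],  # uncertain -> high entropy
    "img_c": [0.70, 0.20, 0.10],
}
picked = select_by_entropy(pool, budget=1)
```

BALD and core-set selection replace the entropy score with a Bayesian disagreement measure and a feature-space coverage criterion, respectively, but follow the same rank-and-label loop.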
Affiliation(s)
- Samantha K. Paul
- University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Ophthalmology, Cleveland, Ohio, United States
- Ian Pan
- Brigham and Women’s Hospital, Harvard Medical School, Department of Radiology, Boston, Massachusetts, United States
- Warren M. Sobol
- University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Ophthalmology, Cleveland, Ohio, United States
71
Lu Z, Miao J, Dong J, Zhu S, Wang X, Feng J. Automatic classification of retinal diseases with transfer learning-based lightweight convolutional neural network. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.104365] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
72
Sheng B, Chen X, Li T, Ma T, Yang Y, Bi L, Zhang X. An overview of artificial intelligence in diabetic retinopathy and other ocular diseases. Front Public Health 2022; 10:971943. [PMID: 36388304 PMCID: PMC9650481 DOI: 10.3389/fpubh.2022.971943] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Accepted: 10/04/2022] [Indexed: 01/25/2023] Open
Abstract
Artificial intelligence (AI), also known as machine intelligence, is a branch of science that empowers machines with human intelligence. AI refers to the technology of rendering human intelligence through computer programs. From healthcare to the precise prevention, diagnosis, and management of diseases, AI is progressing rapidly in various interdisciplinary fields, including ophthalmology. Ophthalmology is at the forefront of AI in medicine because the diagnosis of ocular diseases relies heavily on imaging. Recently, deep learning-based AI screening and prediction models have been applied to the most common causes of visual impairment and blindness, including glaucoma, cataract, age-related macular degeneration (ARMD), and diabetic retinopathy (DR). The success of AI in medicine is primarily attributed to the development of deep learning algorithms, which are computational models composed of multiple layers of simulated neurons. These models can learn representations of data at multiple levels of abstraction. The Inception-v3 algorithm and the transfer learning concept have been applied in DR and ARMD to reuse fundus image features learned from natural (non-medical) images to train an AI system with a fraction of the commonly used training data (<1%). The trained AI system achieved performance comparable to that of human experts in classifying ARMD and diabetic macular edema on optical coherence tomography images. In this study, we highlight the fundamental concepts of AI and its application in these four major ocular diseases and further discuss the current challenges, as well as the prospects, in ophthalmology.
Affiliation(s)
- Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Xiaosi Chen
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Tingyao Li
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Tianxing Ma
- Chongqing University-University of Cincinnati Joint Co-op Institute, Chongqing University, Chongqing, China
- Yang Yang
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Lei Bi
- School of Computer Science, University of Sydney, Sydney, NSW, Australia
- Xinyuan Zhang
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
73
Lin S, Li L, Zou H, Xu Y, Lu L. Medical Staff and Resident Preferences for Using Deep Learning in Eye Disease Screening: Discrete Choice Experiment. J Med Internet Res 2022; 24:e40249. [PMID: 36125854 PMCID: PMC9533207 DOI: 10.2196/40249] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2022] [Revised: 08/08/2022] [Accepted: 09/02/2022] [Indexed: 11/17/2022] Open
Abstract
Background Deep learning–assisted eye disease diagnosis technology is increasingly applied in eye disease screening. However, no research has established the prerequisites under which health care service providers and residents are willing to use it. Objective The aim of this paper is to reveal the preferences of health care service providers and residents for using artificial intelligence (AI) in community-based eye disease screening, particularly their preference for accuracy. Methods Discrete choice experiments for health care providers and residents were conducted in Shanghai, China. In total, 34 medical institutions with adequate AI-assisted screening experience participated. A total of 39 medical staff and 318 residents were asked to answer the questionnaire and make trade-offs among alternative screening strategies with different attributes, including missed diagnosis rate, overdiagnosis rate, screening result feedback efficiency, level of ophthalmologist involvement, organizational form, cost, and screening result feedback form. Conditional logit models with the stepwise selection method were used to estimate the preferences. Results Medical staff preferred high accuracy: the specificity of deep learning models should be more than 90% (odds ratio [OR]=0.61 for 10% overdiagnosis; P<.001), which was much higher than the Food and Drug Administration standards. However, accuracy was not the residents’ preference. Rather, they preferred to have doctors involved in the screening process. In addition, when compared with a fully manual diagnosis, AI technology was more favored by the medical staff (OR=2.08 for the semiautomated AI model and OR=2.39 for the fully automated AI model; P<.001), while the residents disfavored AI technology without doctors’ supervision (OR=0.24; P<.001). Conclusions A deep learning model under doctors’ supervision is strongly recommended, and the specificity of the model should be more than 90%. In addition, digital transformation should help medical staff move away from heavy and repetitive work and spend more time communicating with residents.
Affiliation(s)
- Senlin Lin
- Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai, China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, China
- Liping Li
- Shanghai Hongkou Center for Disease Control and Prevention, Shanghai, China
- Haidong Zou
- Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai, China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, China
- Yi Xu
- Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai, China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, China
- Lina Lu
- Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai, China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, China
74
Nadeem MW, Goh HG, Hussain M, Liew SY, Andonovic I, Khan MA. Deep Learning for Diabetic Retinopathy Analysis: A Review, Research Challenges, and Future Directions. SENSORS (BASEL, SWITZERLAND) 2022; 22:s22186780. [PMID: 36146130 PMCID: PMC9505428 DOI: 10.3390/s22186780] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/29/2022] [Revised: 08/02/2022] [Accepted: 08/08/2022] [Indexed: 05/12/2023]
Abstract
Deep learning (DL) enables the creation of computational models comprising multiple processing layers that learn data representations at multiple levels of abstraction. In the recent past, the use of deep learning has been proliferating, yielding promising results in applications across a growing number of fields, most notably in image processing, medical image analysis, data analysis, and bioinformatics. DL algorithms have also had a significant positive impact through yielding improvements in screening, recognition, segmentation, prediction, and classification applications across different domains of healthcare, such as those concerning the abdomen, cardiac, pathology, and retina. Given the extensive body of recent scientific contributions in this discipline, a comprehensive review of deep learning developments in the domain of diabetic retinopathy (DR) analysis, viz., screening, segmentation, prediction, classification, and validation, is presented here. A critical analysis of the relevant reported techniques is carried out, and the associated advantages and limitations are highlighted, culminating in the identification of research gaps and future challenges that inform the research community in developing more efficient, robust, and accurate DL models for the various challenges in the monitoring and diagnosis of DR.
Affiliation(s)
- Muhammad Waqas Nadeem
- Faculty of Information and Communication Technology (FICT), Universiti Tunku Abdul Rahman (UTAR), Kampar 31900, Malaysia
- Hock Guan Goh
- Faculty of Information and Communication Technology (FICT), Universiti Tunku Abdul Rahman (UTAR), Kampar 31900, Malaysia
- Correspondence: (H.G.G.); (I.A.)
- Muzammil Hussain
- Department of Computer Science, School of Systems and Technology, University of Management and Technology, Lahore 54000, Pakistan
- Soung-Yue Liew
- Faculty of Information and Communication Technology (FICT), Universiti Tunku Abdul Rahman (UTAR), Kampar 31900, Malaysia
- Ivan Andonovic
- Department of Electronic and Electrical Engineering, Royal College Building, University of Strathclyde, 204 George St., Glasgow G1 1XW, UK
- Correspondence: (H.G.G.); (I.A.)
- Muhammad Adnan Khan
- Pattern Recognition and Machine Learning Lab, Department of Software, Gachon University, Seongnam 13557, Korea
- Faculty of Computing, Riphah School of Computing and Innovation, Riphah International University, Lahore Campus, Lahore 54000, Pakistan
75
Ong J, Tan G, Ang M, Chhablani J. Digital Advancements in Retinal Models of Care in the Post-COVID-19 Lockdown Era. Asia Pac J Ophthalmol (Phila) 2022; 11:403-407. [PMID: 36094383 DOI: 10.1097/apo.0000000000000533] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2021] [Accepted: 03/14/2022] [Indexed: 11/25/2022] Open
Abstract
The coronavirus disease-2019 (COVID-19) pandemic introduced unique barriers to retinal care, including limited access to imaging modalities, ophthalmic clinicians, and direct medical interventions. These unprecedented barriers were met with the robust implementation of digital advances to aid the monitoring and efficiency of retinal care while taking public safety into account. Many of these innovations have been successful in maintaining efficiency and patient satisfaction and are likely to stay, helping to preserve vision in the future. In this article we highlight these advances implemented during the pandemic, including telescreening triage, virtual retinal imaging clinics, at-home optical coherence tomography, mobile phone self-monitoring, and virtual reality monitoring technology. We also discuss advancing innovations, including Internet of Things and Blockchain technology, that will be critical for the further implementation and security of these digital advancements.
Affiliation(s)
- Joshua Ong
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA
- Gavin Tan
- Surgical Retinal Department of the Singapore National Eye Centre, Singapore
- Clinician Scientist, Singapore Eye Research Institute, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Marcus Ang
- Duke-NUS Department of Ophthalmology and Visual Sciences, Singapore
- Jay Chhablani
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA
76
Holistic multi-class classification & grading of diabetic foot ulcerations from plantar thermal images using deep learning. Health Inf Sci Syst 2022; 10:21. [PMID: 36039095 PMCID: PMC9418397 DOI: 10.1007/s13755-022-00194-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2022] [Accepted: 08/14/2022] [Indexed: 11/17/2022] Open
Abstract
Purpose Diabetic foot is a common complication associated with diabetes mellitus (DM) leading to ulcerations in the feet. Due to diabetic neuropathy, most patients have reduced sensitivity to pain. As a result, minor injuries go unnoticed and progress into ulcers. The timely detection of potential ulceration points and intervention is crucial in preventing amputation. Changes in plantar temperature are one of the early signs of ulceration. Previous studies have focused on either binary classification or grading of DM severity, but have neglected a holistic consideration of the problem. Moreover, multi-class studies exhibit severe performance variations between different classes. Methods We propose a new convolutional neural network for discrimination between non-DM and five DM severity grades from plantar thermal images and compare its performance against pre-trained networks such as AlexNet and related works. We address the lack of data and imbalanced class distribution, prevalent in prior work, achieving well-balanced classification performance. Results Our proposed model achieved the best performance with a mean accuracy of 0.9827, mean sensitivity of 0.9684 and mean specificity of 0.9892 in combined diabetic foot detection and grading. Conclusion To the best of our knowledge, this study sets a new state-of-the-art in plantar foot thermogram detection and grading, while being the first to implement a holistic multi-class classification and grading solution. Reliable automatic thermogram grading is a first step towards the development of smart health devices for DM patients.
77
Santos C, Aguiar M, Welfer D, Belloni B. A New Approach for Detecting Fundus Lesions Using Image Processing and Deep Neural Network Architecture Based on YOLO Model. SENSORS (BASEL, SWITZERLAND) 2022; 22:s22176441. [PMID: 36080898 PMCID: PMC9460625 DOI: 10.3390/s22176441] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/09/2022] [Revised: 08/21/2022] [Accepted: 08/23/2022] [Indexed: 05/27/2023]
Abstract
Diabetic Retinopathy is one of the main causes of vision loss, and in its initial stages it presents with fundus lesions such as microaneurysms, hard exudates, hemorrhages, and soft exudates. Computational models capable of detecting these lesions can help in the early diagnosis of the disease and prevent the manifestation of more severe forms of lesions, helping in screening and in defining the best form of treatment. However, the detection of these lesions through computerized systems is a challenge due to numerous factors, such as the size and shape characteristics of the lesions, the noise and contrast of the images available in the public Diabetic Retinopathy datasets, the number of labeled examples of these lesions available in the datasets, and the difficulty deep learning algorithms have in detecting very small objects in digital images. Thus, to overcome these problems, this work proposes a new approach based on image processing techniques, data augmentation, transfer learning, and deep neural networks to assist in the medical diagnosis of fundus lesions. The proposed approach was trained, adjusted, and tested using the public DDR and IDRiD Diabetic Retinopathy datasets and implemented in the PyTorch framework based on the YOLOv5 model. On the DDR dataset, the proposed approach reached an mAP of 0.2630 at an IoU threshold of 0.5 and an F1-score of 0.3485 in the validation stage, and an mAP of 0.1540 at an IoU threshold of 0.5 and an F1-score of 0.2521 in the test stage. The results obtained in the experiments demonstrate that the proposed approach presented superior results to works with the same purpose found in the literature.
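The mAP figures above are computed against an IoU threshold of 0.5, meaning a predicted lesion box counts as a true positive only if it overlaps a ground-truth box by at least 50%. The helper below is an assumed, self-contained implementation of that box-overlap test, not code from the paper:

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes given as
    (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(ix2 - ix1, 0) * max(iy2 - iy1, 0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Hypothetical predicted microaneurysm box vs. its ground-truth box:
pred, truth = (10, 10, 30, 30), (15, 15, 35, 35)
overlap = iou(pred, truth)          # about 0.39
matched = overlap >= 0.5            # would NOT count at mAP@0.5
```

This strictness for tiny lesions is one reason small-object detection keeps absolute mAP values low, as the abstract notes.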
Affiliation(s)
- Carlos Santos
- Computer Center, Federal Institute of Education, Science and Technology Farroupilha, Alegrete 97555-000, Brazil
- Postgraduate Program in Computing (PPGC), Federal University of Pelotas, Pelotas 96010-610, Brazil
- Marilton Aguiar
- Postgraduate Program in Computing (PPGC), Federal University of Pelotas, Pelotas 96010-610, Brazil
- Daniel Welfer
- Postgraduate Program in Computer Science (PPGCC), Department of Applied Computing (DCOM), Federal University of Santa Maria, Santa Maria 97105-900, Brazil
- Bruno Belloni
- Federal Institute of Education, Science and Technology Sul-Rio-Grandense, Passo Fundo 99064-440, Brazil
78
Charng J, Alam K, Swartz G, Kugelman J, Alonso-Caneiro D, Mackey DA, Chen FK. Deep learning: applications in retinal and optic nerve diseases. Clin Exp Optom 2022:1-10. [PMID: 35999058 DOI: 10.1080/08164622.2022.2111201] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/15/2022] Open
Abstract
Deep learning (DL) represents a paradigm-shifting, burgeoning field of research with emerging clinical applications in optometry. Unlike traditional programming, which relies on human-set specific rules, DL works by exposing the algorithm to a large amount of annotated data and allowing the software to develop its own set of rules (i.e. learn) by adjusting the parameters inside the model (network) during a training process in order to complete the task on its own. One major limitation of traditional programming is that, with complex tasks, it may require an extensive set of rules to accurately complete the assignment. Additionally, traditional programming can be susceptible to human bias from programmer experience. With the dramatic increase in the amount and complexity of clinical data, DL has been utilised to automate data analysis and thus to assist clinicians in patient management. This review will present the latest advances in DL for managing posterior eye diseases as well as DL-based solutions for patients with vision loss.
Affiliation(s)
- Jason Charng
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia; Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
- Khyber Alam
- Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
- Gavin Swartz
- Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
- Jason Kugelman
- School of Optometry and Vision Science, Queensland University of Technology, Brisbane, Australia
- David Alonso-Caneiro
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia; School of Optometry and Vision Science, Queensland University of Technology, Brisbane, Australia
- David A Mackey
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia; Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia; Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
- Fred K Chen
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia; Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia; Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia; Department of Ophthalmology, Royal Perth Hospital, Perth, Western Australia, Australia
79
Amin J, Anjum MA, Malik M. Fused information of DeepLabv3+ and transfer learning model for semantic segmentation and rich features selection using equilibrium optimizer (EO) for classification of NPDR lesions. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.108881] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
80
Bhambra N, Antaki F, Malt FE, Xu A, Duval R. Deep learning for ultra-widefield imaging: a scoping review. Graefes Arch Clin Exp Ophthalmol 2022; 260:3737-3778. [PMID: 35857087 DOI: 10.1007/s00417-022-05741-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2021] [Revised: 05/16/2022] [Accepted: 06/22/2022] [Indexed: 11/04/2022] Open
Abstract
PURPOSE This article is a scoping review of published and peer-reviewed articles using deep-learning (DL) applied to ultra-widefield (UWF) imaging. This study provides an overview of the published uses of DL and UWF imaging for the detection of ophthalmic and systemic diseases, generative image synthesis, quality assessment of images, and segmentation and localization of ophthalmic image features. METHODS A literature search was performed up to August 31st, 2021 using PubMed, Embase, Cochrane Library, and Google Scholar. The inclusion criteria were as follows: (1) deep learning, (2) ultra-widefield imaging. The exclusion criteria were as follows: (1) articles published in any language other than English, (2) articles not peer-reviewed (usually preprints), (3) no full-text availability, (4) articles using machine learning algorithms other than deep learning. No study design was excluded from consideration. RESULTS A total of 36 studies were included. Twenty-three studies discussed ophthalmic disease detection and classification, 5 discussed segmentation and localization of ultra-widefield images (UWFIs), 3 discussed generative image synthesis, 3 discussed ophthalmic image quality assessment, and 2 discussed detecting systemic diseases via UWF imaging. CONCLUSION The application of DL to UWF imaging has demonstrated significant effectiveness in the diagnosis and detection of ophthalmic diseases including diabetic retinopathy, retinal detachment, and glaucoma. DL has also been applied in the generation of synthetic ophthalmic images. This scoping review highlights and discusses the current uses of DL with UWF imaging, and the future of DL applications in this field.
Affiliation(s)
- Nishaant Bhambra: Faculty of Medicine, McGill University, Montréal, Québec, Canada
- Fares Antaki: Department of Ophthalmology, Université de Montréal, Montréal, Québec, Canada; Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de L'Est-de-L'Île-de-Montréal, 5415 Assumption Blvd, Montréal, Québec, H1T 2M4, Canada
- Farida El Malt: Faculty of Medicine, McGill University, Montréal, Québec, Canada
- AnQi Xu: Faculty of Medicine, Université de Montréal, Montréal, Québec, Canada
- Renaud Duval: Department of Ophthalmology, Université de Montréal, Montréal, Québec, Canada; Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de L'Est-de-L'Île-de-Montréal, 5415 Assumption Blvd, Montréal, Québec, H1T 2M4, Canada
|
81
|
Sun F, Sun Y, Zhu J, Wang X, Ji C, Zhang J, Chen S, Yu Y, Xu W, Qian H. Mesenchymal stem cells-derived small extracellular vesicles alleviate diabetic retinopathy by delivering NEDD4. Stem Cell Res Ther 2022; 13:293. [PMID: 35841055 PMCID: PMC9284871 DOI: 10.1186/s13287-022-02983-0]
Abstract
Background As a leading cause of vision decline and severe blindness in adults, diabetic retinopathy (DR) is characterized by the aggravation of retinal oxidative stress and apoptosis in its early stage. Emerging studies reveal that treatment with mesenchymal stem cell-derived small extracellular vesicles (MSC-sEV) represents a promising cell-free approach to alleviating ocular disorders. However, the reparative effects of MSC-sEV in DR remain largely unclear. This study aimed to explore the role and underlying mechanism of MSC-sEV in hyperglycemia-induced retinal degeneration. Methods In vivo, we used streptozotocin (STZ) to establish a diabetic rat model, followed by intravitreal injection of MSC-sEV to determine the curative effect. The cell viability and antioxidant capacity of retinal pigment epithelium (RPE) cells stimulated with high-glucose (HG) medium after MSC-sEV treatment were analyzed in vitro. By detecting the response of cell signaling pathways in MSC-sEV-treated RPE cells, we explored the functional mechanism of MSC-sEV. Mass spectrometry was performed to identify the bioactive protein mediating the role of MSC-sEV. Results Intravitreal injection of MSC-sEV elicited antioxidant effects and counteracted retinal apoptosis in the STZ-induced DR rat model. MSC-sEV treatment also reduced the oxidative level and enhanced the proliferation of RPE cells cultured under HG conditions in vitro. Further studies showed that the increased level of phosphatase and tensin homolog (PTEN) inhibited AKT phosphorylation and nuclear factor erythroid 2-related factor 2 (NRF2) expression in RPE cells stimulated with HG medium, which could be reversed by MSC-sEV intervention. Through mass spectrometry, we showed that MSC-sEV-delivered neuronal precursor cell-expressed developmentally downregulated 4 (NEDD4) could cause PTEN ubiquitination and degradation, activate AKT signaling, and upregulate NRF2 levels to prevent DR progression. Moreover, NEDD4 knockdown impaired MSC-sEV-mediated retinal therapeutic effects. Conclusions Our findings indicate that MSC-sEV ameliorates DR through NEDD4-induced regulation of the PTEN/AKT/NRF2 signaling pathway, revealing the efficacy and mechanism of MSC-sEV-based retinal protection and providing new insights into the treatment of DR.
Affiliation(s)
- Fengtian Sun, Yuntong Sun, Junyan Zhu, Xiaoling Wang, Cheng Ji, Jiahui Zhang, Shenyuan Chen, Yifan Yu, Wenrong Xu, Hui Qian: Jiangsu Key Laboratory of Medical Science and Laboratory Medicine, School of Medicine, Jiangsu University, Zhenjiang, 212013, Jiangsu, China
|
82
|
Fractal dimension of retinal vasculature as an image quality metric for automated fundus image analysis systems. Sci Rep 2022; 12:11868. [PMID: 35831401 PMCID: PMC9279448 DOI: 10.1038/s41598-022-16089-3]
Abstract
Automated fundus screening is becoming a significant programme of telemedicine in ophthalmology. Instant quality evaluation of uploaded retinal images could reduce unreliable diagnoses. In this work, we propose the fractal dimension of the retinal vasculature as an easy, effective and explainable indicator of retinal image quality. The pipeline of our approach is as follows: an image pre-processing technique standardizes input retinal images from possibly different sources into a uniform style; an improved deep-learning-empowered vessel segmentation model then extracts retinal vessels from the pre-processed images; finally, a box-counting module measures the fractal dimension of the segmented vessel images. A fractal dimension below a small threshold (a value between 1.45 and 1.50) indicates insufficient image quality. Our approach has been validated on 30,644 images from four public databases.
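The box-counting step this abstract describes can be sketched in a few lines of NumPy. This is a generic illustration, not the authors' code: the function names, the choice of box sizes, and the assumption that vessel segmentation yields a binary mask are all ours; the 1.45-1.50 threshold range is the one the abstract reports.

```python
import numpy as np

def box_counting_fractal_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary vessel mask by box counting.

    For each box size s, count the s x s boxes containing at least one vessel
    pixel, then fit log(count) against log(1/s); the slope estimates the
    fractal dimension.
    """
    mask = np.asarray(mask, dtype=bool)
    counts = []
    for s in box_sizes:
        # Trim so the image tiles evenly, then pool each s x s box with any().
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        pooled = mask[:h, :w].reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(pooled.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope

def sufficient_quality(mask, threshold=1.45):
    # The paper reports thresholds in the 1.45-1.50 range; a segmented
    # vasculature with a smaller dimension flags insufficient image quality.
    return box_counting_fractal_dimension(mask) >= threshold
```

As a sanity check, a completely filled mask has dimension 2 and a single isolated pixel has dimension near 0, so a healthy vessel tree falls in between.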
|
83
|
Nderitu P, Nunez do Rio JM, Webster ML, Mann SS, Hopkins D, Cardoso MJ, Modat M, Bergeles C, Jackson TL. Automated image curation in diabetic retinopathy screening using deep learning. Sci Rep 2022; 12:11196. [PMID: 35778615 PMCID: PMC9249740 DOI: 10.1038/s41598-022-15491-1]
Abstract
Diabetic retinopathy (DR) screening images are heterogeneous and contain undesirable non-retinal, incorrect-field and ungradable samples which require curation, a laborious task to perform manually. We developed and validated single- and multi-output laterality, retinal presence, retinal field and gradability classification deep learning (DL) models for automated curation. The internal dataset comprised 7743 images from DR screening (UK), with 1479 external test images (Portugal and Paraguay). Internal vs external multi-output laterality AUROC were right (0.994 vs 0.905), left (0.994 vs 0.911) and unidentifiable (0.996 vs 0.680). Retinal presence AUROC was (1.000 vs 1.000). Retinal field AUROC were macula (0.994 vs 0.955), nasal (0.995 vs 0.962) and other retinal field (0.997 vs 0.944). Gradability AUROC was (0.985 vs 0.918). DL effectively detects laterality, retinal presence, retinal field and gradability of DR screening images, with generalisation between centres and populations. DL models could be used for automated image curation within DR screening.
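Several entries in this list report performance as AUROC. For reference, AUROC equals the probability that a randomly chosen positive sample is scored above a randomly chosen negative one, which the rank-sum (Mann-Whitney U) identity computes directly. The sketch below is a generic illustration of that identity, not the evaluation code of any study cited here:

```python
import numpy as np

def auroc(labels, scores):
    """AUROC via the rank-sum (Mann-Whitney U) identity, with tie-averaged ranks."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    n_pos = int(labels.sum())
    n_neg = labels.size - n_pos
    order = scores.argsort()
    sorted_scores = scores[order]
    ranks = np.arange(1, scores.size + 1, dtype=float)
    # Give tied scores the average of the ranks they span.
    i = 0
    while i < scores.size:
        j = i
        while j + 1 < scores.size and sorted_scores[j + 1] == sorted_scores[i]:
            j += 1
        ranks[i:j + 1] = ranks[i:j + 1].mean()
        i = j + 1
    rank_of = np.empty(scores.size)
    rank_of[order] = ranks
    # U statistic for the positive class, normalized by the number of pos-neg pairs.
    u = rank_of[labels].sum() - n_pos * (n_pos + 1) / 2.0
    return u / (n_pos * n_neg)
```

A perfect separator scores 1.0, chance scores 0.5, and a systematically inverted ranking scores below 0.5.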
Affiliation(s)
- Paul Nderitu: Section of Ophthalmology, King's College London, London, UK; King's Ophthalmology Research Unit, King's College Hospital, London, UK
- Laura Webster: South East London Diabetic Eye Screening Programme, Guy's and St Thomas' Foundation Trust, London, UK
- Samantha S Mann: South East London Diabetic Eye Screening Programme, Guy's and St Thomas' Foundation Trust, London, UK; Department of Ophthalmology, Guy's and St Thomas' Foundation Trust, London, UK
- David Hopkins: Department of Diabetes, School of Life Course Sciences, King's College London, London, UK; Institute of Diabetes, Endocrinology and Obesity, King's Health Partners, London, UK
- M Jorge Cardoso: School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Marc Modat: School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Christos Bergeles: School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Timothy L Jackson: Section of Ophthalmology, King's College London, London, UK; King's Ophthalmology Research Unit, King's College Hospital, London, UK
|
84
|
Applying supervised contrastive learning for the detection of diabetic retinopathy and its severity levels from fundus images. Comput Biol Med 2022; 146:105602. [DOI: 10.1016/j.compbiomed.2022.105602]
|
85
|
Liu R, Wang X, Wu Q, Dai L, Fang X, Yan T, Son J, Tang S, Li J, Gao Z, Galdran A, Poorneshwaran JM, Liu H, Wang J, Chen Y, Porwal P, Wei Tan GS, Yang X, Dai C, Song H, Chen M, Li H, Jia W, Shen D, Sheng B, Zhang P. DeepDRiD: Diabetic Retinopathy-Grading and Image Quality Estimation Challenge. Patterns (N Y) 2022; 3:100512. [PMID: 35755875 PMCID: PMC9214346 DOI: 10.1016/j.patter.2022.100512]
Abstract
We describe a challenge named the "Diabetic Retinopathy (DR)-Grading and Image Quality Estimation Challenge", held in conjunction with ISBI 2020, comprising three sub-challenges to develop deep learning models for DR image assessment and grading. The scientific community responded positively to the challenge, with 34 submissions from 574 registrations. In the challenge, we provided the DeepDRiD dataset containing 2,000 regular DR images (500 patients) and 256 ultra-widefield images (128 patients), both with DR quality and grading annotations. We discuss details of the top 3 algorithms in each sub-challenge. The weighted kappa for DR grading ranged from 0.93 to 0.82, and the accuracy for image quality evaluation ranged from 0.70 to 0.65. The results showed that image quality assessment can be used as a further target for exploration. We have also released the DeepDRiD dataset on GitHub to help develop automatic systems and improve human judgment in DR screening and diagnosis.
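The headline grading metric above is weighted kappa. For ordinal DR grades (0-4), the quadratically weighted variant is a common choice; the sketch below implements that variant as a generic illustration, and the use of quadratic weights here is our assumption, since the abstract does not state the challenge's exact weighting scheme:

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes=5):
    """Weighted kappa with quadratic penalty weights, as commonly used for DR grading."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    # Observed confusion matrix between the reference grades and the predictions.
    observed = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        observed[t, p] += 1
    # Quadratic disagreement weights: 0 on the diagonal, growing with grade distance.
    idx = np.arange(n_classes)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    # Expected confusion matrix under independence of the two raters' marginals.
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / observed.sum()
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()
```

Perfect agreement yields 1.0, chance-level agreement yields about 0, and near-miss errors (off by one grade) are penalized far less than gross errors, which is why the metric suits ordinal grading.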
Affiliation(s)
- Ruhan Liu: Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China; MoE Key Lab of Artificial Intelligence, Artificial Intelligence Institute, Shanghai Jiao Tong University, Shanghai, China
- Xiangning Wang: Department of Ophthalmology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China
- Qiang Wu: Department of Ophthalmology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China
- Ling Dai: Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China; MoE Key Lab of Artificial Intelligence, Artificial Intelligence Institute, Shanghai Jiao Tong University, Shanghai, China
- Xi Fang: Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Tao Yan: Department of Electromechanical Engineering, University of Macau, Macao, China
- Shiqi Tang: Department of Mathematics, City University of Hong Kong, Hong Kong, China
- Jiang Li: Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Shanghai, China
- Zijian Gao: School of Electronic Information, Hangzhou Dianzi University, Hangzhou, China
- Hao Liu: School of Electronic Information, Hangzhou Dianzi University, Hangzhou, China
- Jie Wang: School of Computer Science and Engineering, Beihang University, Beijing, China
- Yerui Chen: Nanjing University of Science and Technology, Nanjing, China
- Prasanna Porwal: Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India
- Gavin Siew Wei Tan: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Xiaokang Yang: MoE Key Lab of Artificial Intelligence, Artificial Intelligence Institute, Shanghai Jiao Tong University, Shanghai, China
- Chao Dai: Shanghai Zhi Tang Health Technology Co., LTD., China
- Haitao Song: MoE Key Lab of Artificial Intelligence, Artificial Intelligence Institute, Shanghai Jiao Tong University, Shanghai, China
- Mingang Chen: Shanghai Key Laboratory of Computer Software Testing & Evaluating, Shanghai Development Center of Computer Software Technology, Shanghai, China
- Huating Li: Department of Endocrinology and Metabolism, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China; Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Weiping Jia: Department of Endocrinology and Metabolism, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China; Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Dinggang Shen: School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Bin Sheng: Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China; MoE Key Lab of Artificial Intelligence, Artificial Intelligence Institute, Shanghai Jiao Tong University, Shanghai, China
- Ping Zhang: Department of Computer Science and Engineering, The Ohio State University, Ohio, USA; Department of Biomedical Informatics, The Ohio State University, Ohio, USA; Translational Data Analytics Institute, The Ohio State University, Ohio, USA
|
86
|
Balancing Data through Data Augmentation Improves the Generality of Transfer Learning for Diabetic Retinopathy Classification. Appl Sci (Basel) 2022. [DOI: 10.3390/app12115363]
Abstract
The incidence of diabetes in Mauritius is amongst the highest in the world. Diabetic retinopathy (DR), a complication resulting from the disease, can lead to blindness if not detected early. The aim of this work was to investigate the use of transfer learning and data augmentation for the classification of fundus images into five stages of diabetic retinopathy: No DR, Mild nonproliferative DR, Moderate nonproliferative DR, Severe nonproliferative DR, and Proliferative DR. To this end, deep transfer learning and three pre-trained models, VGG16, ResNet50 and DenseNet169, were used to classify the APTOS dataset. Preliminary experiments resulted in low training and validation accuracies, so the APTOS dataset was augmented while ensuring a balance between the five classes. This augmented dataset was then used to train the three models, and the best-performing models were used to classify a blind Mauritian test dataset. We found that the ResNet50 model produced the best results of the three and achieved very good accuracies for the five classes. The classification of class-4 (severe) Mauritian fundus images produced some unexpected results, with some images classified as mild, and therefore needs further investigation.
|
87
|
Cardiovascular Risk Stratification in Diabetic Retinopathy via Atherosclerotic Pathway in COVID-19/non-COVID-19 Frameworks using Artificial Intelligence Paradigm: A Narrative Review. Diagnostics (Basel) 2022; 12:1234. [PMID: 35626389 PMCID: PMC9140106 DOI: 10.3390/diagnostics12051234]
Abstract
Diabetes is one of the main causes of the rising cases of blindness in adults. This microvascular complication of diabetes is termed diabetic retinopathy (DR) and is associated with an expanding risk of cardiovascular events in diabetes patients. DR, in its various forms, is seen to be a powerful indicator of atherosclerosis. Further, the macrovascular complication of diabetes leads to coronary artery disease (CAD). Thus, the timely identification of cardiovascular disease (CVD) complications in DR patients is of utmost importance. Since CAD risk assessment is expensive for low-income countries, it is important to look for surrogate biomarkers for risk stratification of CVD in DR patients. Due to the common genetic makeup between the coronary and carotid arteries, low-cost, high-resolution imaging such as carotid B-mode ultrasound (US) can be used for arterial tissue characterization and risk stratification in DR patients. The advent of artificial intelligence (AI) techniques has facilitated the handling of large cohorts in a big data framework to identify atherosclerotic plaque features in arterial ultrasound. This enables timely CVD risk assessment and risk stratification of patients with DR. Thus, this review focuses on understanding the pathophysiology of DR, retinal and CAD imaging, the role of surrogate markers for CVD, and finally, the CVD risk stratification of DR patients. The review shows a step-by-step cyclic activity of how diabetes and atherosclerotic disease cause DR, leading to the worsening of CVD. We propose a solution to how AI can help in the identification of CVD risk. Lastly, we analyze the role of DR/CVD in the COVID-19 framework.
|
88
|
Li C, Wu X, Hu J, Shan J, Zhang Z, Huang X, Liu H. Graphene-based photocatalytic nanocomposites used to treat pharmaceutical and personal care product wastewater: a review. Environ Sci Pollut Res Int 2022; 29:35657-35681. [PMID: 35257332 DOI: 10.1007/s11356-022-19469-4]
Abstract
Photocatalytic technology has been widely studied by researchers in the field of environmental purification. This technology can not only completely convert organic pollutants into small molecules of CO2 and H2O through redox reactions but also remove metal ions and other inorganic substances from water. This article reviews the research progress of graphene-based photocatalytic nanocomposites in the treatment of wastewater. First, we elucidate the basic principles of photocatalysis, the types of graphene-based nanocomposites, and the role of graphene in photocatalysis (e.g., graphene can accelerate the separation of photon-hole pairs and increase the intensity and range of light absorption). Second, the preparation, characterization, and application of composites in wastewater are introduced. We also discuss the kinetic model of the photocatalytic degradation of pollutants. Finally, the enhancement mechanism of graphene in terms of photocatalysis is not completely clear, and graphene-based photocatalysts with high catalytic efficiency, low cost, and large-scale production have not yet appeared, so there is an urgent need for more extensive and in-depth research.
Affiliation(s)
- Caifang Li: Guizhou Provincial Key Laboratory for Information Systems of Mountainous Areas and Protection of Ecological Environment, Guizhou Normal University, Guiyang, 550001, China
- Xianliang Wu: Guizhou Institute of Biology, Guiyang, Guizhou, 550009, China
- Jiwei Hu: Guizhou Provincial Key Laboratory for Information Systems of Mountainous Areas and Protection of Ecological Environment, Guizhou Normal University, Guiyang, 550001, China
- Junyue Shan: Guizhou Provincial Key Laboratory for Information Systems of Mountainous Areas and Protection of Ecological Environment, Guizhou Normal University, Guiyang, 550001, China
- Zhenming Zhang: Guizhou Institute of Biology, Guiyang, Guizhou, 550009, China
- Xianfei Huang: Guizhou Provincial Key Laboratory for Information Systems of Mountainous Areas and Protection of Ecological Environment, Guizhou Normal University, Guiyang, 550001, China
- Huijuan Liu: The Key Laboratory of Environmental Pollution Monitoring and Disease Control, Ministry of Education, Guizhou Medical University, Guiyang, 550025, China
|
89
|
Diagnosis of Retinal Diseases Based on Bayesian Optimization Deep Learning Network Using Optical Coherence Tomography Images. Comput Intell Neurosci 2022; 2022:8014979. [PMID: 35463234 PMCID: PMC9033334 DOI: 10.1155/2022/8014979]
Abstract
Retinal abnormalities have emerged as a serious public health concern in recent years and can manifest gradually and without warning. These diseases can affect any part of the retina, causing vision impairment and, in extreme cases, blindness. This necessitates the development of automated approaches to detect retinal diseases more precisely and, preferably, earlier. In this paper, we examine transfer learning of pretrained convolutional neural networks (CNNs) to detect retinal problems from Optical Coherence Tomography (OCT) images. Pretrained CNN models, namely VGG16, DenseNet201, InceptionV3, and Xception, are used to classify seven different retinal diseases from a dataset of images with and without retinal diseases. In addition, Bayesian optimization is applied to choose optimal hyperparameter values, and image augmentation is used to increase the generalization capabilities of the developed models. This research also provides a comparison and analysis of the proposed models. The accuracy achieved using DenseNet201 on the Retinal OCT Image dataset is more than 99%, a good level of accuracy in classifying retinal diseases compared to other approaches, which detect only a small number of retinal diseases.
|
90
|
Zhang G, Sun B, Chen Z, Gao Y, Zhang Z, Li K, Yang W. Diabetic Retinopathy Grading by Deep Graph Correlation Network on Retinal Images Without Manual Annotations. Front Med (Lausanne) 2022; 9:872214. [PMID: 35492360 PMCID: PMC9046841 DOI: 10.3389/fmed.2022.872214]
Abstract
Background Diabetic retinopathy, a severe public health problem associated with vision loss, should be diagnosed early using an accurate screening tool. While many deep learning models have been proposed for this disease, they require sufficient professionally annotated data for training, making screening more expensive and time-consuming. Method This study aims to economize manual effort and proposes a deep graph correlation network (DGCN) for automated diabetic retinopathy grading without any professional annotations. DGCN applies a graph convolutional network to exploit inherent correlations between independent retinal image features learned by a convolutional neural network. Three designed loss functions (graph-center, pseudo-contrastive, and transformation-invariant) constrain the optimisation and application of the DGCN model in the automated diabetic retinopathy grading task. Results To evaluate the DGCN model, this study employed the EyePACS-1 and Messidor-2 sets. The model achieved an accuracy of 89.9% (91.8%), sensitivity of 88.2% (90.2%), and specificity of 91.3% (93.0%) on the EyePACS-1 (Messidor-2) dataset with a confidence index of 95%, and commendable effectiveness on receiver operating characteristic (ROC) curves and t-SNE plots. Conclusion The grading capability of this approach is close to that of retina specialists and superior to that of trained graders, demonstrating that the proposed DGCN provides an innovative route for automated diabetic retinopathy grading and other computer-aided diagnostic systems.
Affiliation(s)
- Guanghua Zhang: Department of Intelligence and Automation, Taiyuan University, Taiyuan, China; Graphics and Imaging Laboratory, University of Girona, Girona, Spain
- Bin Sun: Shanxi Eye Hospital, Taiyuan, China
- Zhixian Chen: Department of Intelligence and Automation, Taiyuan University, Taiyuan, China
- Yuxi Gao: Shanxi Finance and Taxation College, Taiyuan, China
- Keran Li: The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Weihua Yang: The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
|
91
|
Xie Z, Xiao X. Novel biomarkers and therapeutic approaches for diabetic retinopathy and nephropathy: recent progress and future perspectives. Front Endocrinol (Lausanne) 2022; 13:1065856. [PMID: 36506068 PMCID: PMC9732104 DOI: 10.3389/fendo.2022.1065856]
Abstract
The global burden of microvascular complications in patients with diabetes mellitus persists and is even increasing alarmingly, and their intervention and management now face many difficulties and challenges. This paper reviews recent advances in novel biomarkers, artificial intelligence technology, and therapeutic agents and approaches for diabetic retinopathy and nephropathy, providing further insight into the management of microvascular complications.
|
92
|
Update on Optical Coherence Tomography and Optical Coherence Tomography Angiography Imaging in Proliferative Diabetic Retinopathy. Diagnostics (Basel) 2021; 11:1869. [PMID: 34679567 PMCID: PMC8535055 DOI: 10.3390/diagnostics11101869]
Abstract
Proliferative diabetic retinopathy (PDR) is a major cause of blindness in diabetic individuals. Optical coherence tomography (OCT) and OCT angiography (OCTA) are noninvasive imaging techniques useful for the diagnosis and assessment of PDR. We aim to review several recent developments using OCT and discuss their present and potential future applications in the clinical setting. An electronic database search was performed to include all studies assessing OCT and/or OCTA findings in PDR patients published from 1 January 2020 to 31 May 2021. Thirty studies were included, and the most recently published data essentially focused on the higher detection rate of neovascularization obtained with widefield OCT and/or OCTA (WF-OCT/OCTA) and on the increasing quality of retinal imaging, with quality levels non-inferior to widefield fluorescein angiography (WF-FA). There were also significant developments in the study of retinal nonperfusion areas (NPAs) using these techniques, and research on the impact of PDR treatment on NPAs and on vascular density. It is becoming increasingly clear that it is critical to use adequate imaging protocols focused on optimized segmentation and maximized imaged retinal area, with ongoing technological development through artificial intelligence and deep learning. These latest findings emphasize the growing applicability and role of noninvasive imaging in managing PDR, with the added benefit of avoiding repetition of invasive conventional FA.
|
93
|
Yuen V, Ran A, Shi J, Sham K, Yang D, Chan VTT, Chan R, Yam JC, Tham CC, McKay GJ, Williams MA, Schmetterer L, Cheng CY, Mok V, Chen CL, Wong TY, Cheung CY. Deep-Learning-Based Pre-Diagnosis Assessment Module for Retinal Photographs: A Multicenter Study. Transl Vis Sci Technol 2021; 10:16. [PMID: 34524409 PMCID: PMC8444486 DOI: 10.1167/tvst.10.11.16]
Abstract
Purpose Artificial intelligence (AI) deep learning (DL) has been shown to have significant potential for eye disease detection and screening on retinal photographs in different clinical settings, particularly in primary care. However, an automated pre-diagnosis image assessment is essential to streamline the application of the developed AI-DL algorithms. In this study, we developed and validated a DL-based pre-diagnosis assessment module for retinal photographs, targeting image quality (gradable vs. ungradable), field of view (macula-centered vs. optic-disc-centered), and laterality of the eye (right vs. left). Methods A total of 21,348 retinal photographs from 1914 subjects from various clinical settings in Hong Kong, Singapore, and the United Kingdom were used for training, internal validation, and external testing for the DL module, developed by two DL-based algorithms (EfficientNet-B0 and MobileNet-V2). Results For image-quality assessment, the pre-diagnosis module achieved area under the receiver operating characteristic curve (AUROC) values of 0.975, 0.999, and 0.987 in the internal validation dataset and the two external testing datasets, respectively. For field-of-view assessment, the module had an AUROC value of 1.000 in all of the datasets. For laterality-of-the-eye assessment, the module had AUROC values of 1.000, 0.999, and 0.985 in the internal validation dataset and the two external testing datasets, respectively. Conclusions Our study showed that this three-in-one DL module for assessing image quality, field of view, and laterality of the eye of retinal photographs achieved excellent performance and generalizability across different centers and ethnicities.
Translational Relevance The proposed DL-based pre-diagnosis module realized accurate and automated assessments of image quality, field of view, and laterality of the eye of retinal photographs, which could be further integrated into AI-based models to improve operational flow for enhancing disease screening and diagnosis.
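Each of the module's three assessments is a binary classifier evaluated with AUROC. As an illustration of the metric only (not the authors' code), a minimal pure-Python AUROC computation using the rank-sum (Mann-Whitney U) formulation:

```python
def auroc(labels, scores):
    """AUROC via the rank-sum (Mann-Whitney U) formulation.

    labels: iterable of 0/1 ground-truth classes (e.g. ungradable vs. gradable)
    scores: iterable of classifier confidence scores for the positive class
    """
    pairs = sorted(zip(scores, labels))
    n = len(pairs)
    # Assign average 1-based ranks, handling ties in scores.
    rank_of = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j < n and pairs[j][0] == pairs[i][0]:
            j += 1
        avg_rank = (i + 1 + j) / 2.0  # average of ranks i+1 .. j
        for k in range(i, j):
            rank_of[k] = avg_rank
        i = j
    n_pos = sum(lab for _, lab in pairs)
    n_neg = n - n_pos
    rank_sum_pos = sum(r for r, (_, lab) in zip(rank_of, pairs) if lab == 1)
    return (rank_sum_pos - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)

# Toy example: a perfect separator yields AUROC = 1.0
print(auroc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # 1.0
```

An AUROC of 1.000, as reported for the field-of-view task, corresponds to every positive image being scored above every negative image.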
Affiliation(s)
- Vincent Yuen
  - Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Anran Ran
  - Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Jian Shi
  - Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Kaiser Sham
  - Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Dawei Yang
  - Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Victor T. T. Chan
  - Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Raymond Chan
  - Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Jason C. Yam
  - Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
  - Hong Kong Eye Hospital, Hong Kong
- Clement C. Tham
  - Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
  - Hong Kong Eye Hospital, Hong Kong
- Gareth J. McKay
  - Center for Public Health, Royal Victoria Hospital, Queen's University Belfast, Belfast, UK
- Michael A. Williams
  - Center for Medical Education, Royal Victoria Hospital, Queen's University Belfast, Belfast, UK
- Leopold Schmetterer
  - Singapore Eye Research Institute, Singapore National Eye Center, Singapore
  - Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
  - SERI-NTU Advanced Ocular Engineering (STANCE) Program, Nanyang Technological University, Singapore
  - School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
  - Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria
  - Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
  - Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Ching-Yu Cheng
  - Singapore Eye Research Institute, Singapore National Eye Center, Singapore
  - Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
- Vincent Mok
  - Gerald Choa Neuroscience Center, Therese Pei Fong Chow Research Center for Prevention of Dementia, Lui Che Woo Institute of Innovative Medicine, Department of Medicine and Therapeutics, The Chinese University of Hong Kong, Hong Kong
- Christopher L. Chen
  - Memory, Aging and Cognition Center, Department of Pharmacology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Tien Y. Wong
  - Singapore Eye Research Institute, Singapore National Eye Center, Singapore
  - Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
- Carol Y. Cheung
  - Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong