51
Application of Deep Learning to Retinal-Image-Based Oculomics for Evaluation of Systemic Health: A Review. J Clin Med 2022; 12:jcm12010152. [PMID: 36614953] [PMCID: PMC9821402] [DOI: 10.3390/jcm12010152]
Abstract
The retina is a window to the human body. Oculomics is the study of the correlations between ophthalmic biomarkers and systemic health or disease states. Deep learning (DL) is currently the cutting-edge machine learning technique for medical image analysis, and in recent years, DL techniques have been applied to analyze retinal images in oculomics studies. In this review, we summarized oculomics studies that used DL models to analyze retinal images-most of the published studies to date involved color fundus photographs, while others focused on optical coherence tomography images. These studies showed that some systemic variables, such as age, sex and cardiovascular disease events, could be consistently robustly predicted, while other variables, such as thyroid function and blood cell count, could not be. DL-based oculomics has demonstrated fascinating, "super-human" predictive capabilities in certain contexts, but it remains to be seen how these models will be incorporated into clinical care and whether management decisions influenced by these models will lead to improved clinical outcomes.
52
Zhang S, Chen R, Wang Y, Hu W, Kiburg KV, Zhang J, Yang X, Yu H, He M, Wang W, Zhu Z. Association of Retinal Age Gap and Risk of Kidney Failure: A UK Biobank Study. Am J Kidney Dis 2022; 81:537-544.e1. [PMID: 36481699] [DOI: 10.1053/j.ajkd.2022.09.018]
Abstract
RATIONALE & OBJECTIVE The incidence of kidney failure is known to increase with age. We have previously developed and validated the use of retinal age based on fundus images as a biomarker of aging. However, the association of retinal age with kidney failure is not clear. We investigated the association of retinal age gap (the difference between retinal age and chronological age) with future risk of kidney failure. STUDY DESIGN Prospective cohort study. SETTING & PARTICIPANTS 11,052 UK Biobank study participants without any reported disease for characterizing retinal age in a deep learning algorithm. 35,864 other participants with retinal images and no kidney failure were followed to assess the association between retinal age gap and the risk of kidney failure. EXPOSURE Retinal age gap, defined as the difference between model-based retinal age and chronological age. OUTCOME Incident kidney failure. ANALYTICAL APPROACH A deep learning prediction model used to characterize retinal age based on retinal images and chronological age, and Cox proportional hazards regression models to investigate the association of retinal age gap with incident kidney failure. RESULTS After a median follow-up period of 11 (IQR, 10.89-11.14) years, 115 (0.32%) participants were diagnosed with incident kidney failure. Each 1-year greater retinal age gap at baseline was independently associated with a 10% increase in the risk of incident kidney failure (HR, 1.10 [95% CI, 1.03-1.17]; P=0.003). Participants with retinal age gaps in the fourth (highest) quartile had a significantly higher risk of incident kidney failure compared with those in the first quartile (HR, 2.77 [95% CI, 1.29-5.93]; P=0.009). LIMITATIONS Limited generalizability related to the composition of participants in the UK Biobank study. CONCLUSIONS Retinal age gap was significantly associated with incident kidney failure and may be a promising noninvasive predictive biomarker for incident kidney failure.
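For readers who want to see the shape of the analysis described above, the sketch below pairs a per-participant retinal age gap with a Cox proportional hazards model, as the abstract outlines. It is an illustrative reconstruction only, assuming a hypothetical table "participants.csv" with made-up column names; it is not the study's code.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical per-participant table: model-predicted retinal age,
# chronological age, sex, follow-up time (years) and kidney-failure event flag.
df = pd.read_csv("participants.csv")  # columns: retinal_age, chron_age, sex, followup_years, kidney_failure

# Retinal age gap = predicted retinal age minus chronological age.
df["retinal_age_gap"] = df["retinal_age"] - df["chron_age"]

# Cox proportional hazards model: hazard ratio per 1-year increase in the gap,
# minimally adjusted here for chronological age and sex.
cph = CoxPHFitter()
cph.fit(
    df[["followup_years", "kidney_failure", "retinal_age_gap", "chron_age", "sex"]],
    duration_col="followup_years",
    event_col="kidney_failure",
)
cph.print_summary()  # exp(coef) for retinal_age_gap is the per-year hazard ratio
```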
Affiliation(s)
- Shiran Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, and Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, People's Republic of China
- Ruiye Chen
- Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, Australia; Department of Surgery, Ophthalmology, University of Melbourne, Melbourne, Australia
- Yan Wang
- Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Guangzhou, People's Republic of China
- Wenyi Hu
- Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, Australia; Department of Surgery, Ophthalmology, University of Melbourne, Melbourne, Australia
- Katerina V Kiburg
- Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, Australia
- Junyao Zhang
- Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, Australia
- Xiaohong Yang
- Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Guangzhou, People's Republic of China
- Honghua Yu
- Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Guangzhou, People's Republic of China
- Mingguang He
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, and Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, People's Republic of China; Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Guangzhou, People's Republic of China; Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, Australia; Department of Surgery, Ophthalmology, University of Melbourne, Melbourne, Australia.
- Wei Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, and Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, People's Republic of China
- Zhuoting Zhu
- Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Guangzhou, People's Republic of China; Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, Australia; Department of Surgery, Ophthalmology, University of Melbourne, Melbourne, Australia.
53
Rom Y, Aviv R, Ianchulev T, Dvey-Aharon Z. Predicting the future development of diabetic retinopathy using a deep learning algorithm for the analysis of non-invasive retinal imaging. BMJ Open Ophthalmol 2022. [DOI: 10.1136/bmjophth-2022-001140]
Abstract
AIMS Diabetic retinopathy (DR) is the most common cause of vision loss in the working-age population. This research aimed to develop an artificial intelligence (AI) machine learning model which can predict the development of referable DR from fundus imagery of otherwise healthy eyes. METHODS Our researchers trained a machine learning algorithm on the EyePACS data set, consisting of 156 363 fundus images. Referable DR was defined as any level above mild on the International Clinical Diabetic Retinopathy scale. RESULTS The algorithm achieved 0.81 area under the receiver operating characteristic curve (AUC) when averaging scores from multiple images on the task of predicting development of referable DR, and 0.76 AUC when using a single image. CONCLUSION Our results suggest that risk of DR may be predicted from fundus photography alone. Prediction of personalised risk of DR may become key in treatment and contribute to patient compliance across the board, particularly when supported by further prospective research.
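The comparison of single-image and patient-averaged scores reported above is, in effect, two AUC computations. A minimal sketch with scikit-learn, assuming a hypothetical per-image score table; it does not reproduce the authors' pipeline.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical table: one row per fundus image, with the model's risk score,
# the patient identifier, and the patient-level label (developed referable DR).
scores = pd.read_csv("image_scores.csv")  # columns: patient_id, score, label

# Single-image AUC: treat every image as an independent prediction.
auc_single = roc_auc_score(scores["label"], scores["score"])

# Patient-level AUC: average the scores of all images from the same patient.
per_patient = scores.groupby("patient_id").agg(score=("score", "mean"),
                                               label=("label", "first"))
auc_avg = roc_auc_score(per_patient["label"], per_patient["score"])

print(f"single-image AUC: {auc_single:.2f}, averaged AUC: {auc_avg:.2f}")
```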
54
Iqbal S, Khan TM, Naveed K, Naqvi SS, Nawaz SJ. Recent trends and advances in fundus image analysis: A review. Comput Biol Med 2022; 151:106277. [PMID: 36370579] [DOI: 10.1016/j.compbiomed.2022.106277]
Abstract
Automated retinal image analysis holds prime significance in the accurate diagnosis of various critical eye diseases that include diabetic retinopathy (DR), age-related macular degeneration (AMD), atherosclerosis, and glaucoma. Manual diagnosis of retinal diseases by ophthalmologists takes time, effort, and financial resources, and is prone to error, in comparison to computer-aided diagnosis systems. In this context, robust classification and segmentation of retinal images are primary operations that aid clinicians in the early screening of patients to ensure the prevention and/or treatment of these diseases. This paper conducts an extensive review of the state-of-the-art methods for the detection and segmentation of retinal image features. Existing notable techniques for the detection of retinal features are categorized into essential groups and compared in depth. Additionally, a summary of quantifiable performance measures for various important stages of retinal image analysis, such as image acquisition and preprocessing, is provided. Finally, the widely used in the literature datasets for analyzing retinal images are described and their significance is emphasized.
Affiliation(s)
- Shahzaib Iqbal
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
- Tariq M Khan
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia.
- Khuram Naveed
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan; Department of Electrical and Computer Engineering, Aarhus University, Aarhus, Denmark
- Syed S Naqvi
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
- Syed Junaid Nawaz
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
55
Nguyen TX, Ran AR, Hu X, Yang D, Jiang M, Dou Q, Cheung CY. Federated Learning in Ocular Imaging: Current Progress and Future Direction. Diagnostics (Basel) 2022; 12:2835. [PMID: 36428895] [PMCID: PMC9689273] [DOI: 10.3390/diagnostics12112835]
Abstract
Advances in artificial intelligence deep learning (DL) have made tremendous impacts on the field of ocular imaging over the last few years. Specifically, DL has been utilised to detect and classify various ocular diseases on retinal photographs, optical coherence tomography (OCT) images, and OCT-angiography images. In order to achieve good robustness and generalisability of model performance, DL training strategies traditionally require extensive and diverse training datasets from various sites to be transferred and pooled into a "centralised location". However, such a data transferring process could raise practical concerns related to data security and patient privacy. Federated learning (FL) is a distributed collaborative learning paradigm which enables the coordination of multiple collaborators without the need for sharing confidential data. This distributed training approach has great potential to ensure data privacy among different institutions and reduce the potential risk of data leakage from data pooling or centralisation. This review article aims to introduce the concept of FL, provide current evidence of FL in ocular imaging, and discuss potential challenges as well as future applications.
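The aggregation step that makes this possible is typically federated averaging: each site trains locally, and only weight updates, never images, are pooled. A toy sketch of that step in PyTorch, illustrative rather than drawn from any system reviewed here.

```python
import copy
import torch

def federated_average(local_state_dicts, num_samples):
    """Weighted average of client model weights (FedAvg-style), weights proportional
    to each client's sample count. No raw images ever leave the clients."""
    total = sum(num_samples)
    avg = copy.deepcopy(local_state_dicts[0])
    for key in avg:
        avg[key] = sum(sd[key] * (n / total)
                       for sd, n in zip(local_state_dicts, num_samples))
    return avg

# Usage (schematic): each hospital trains its own copy of the model for a few
# local epochs, then the coordinator aggregates and redistributes the weights.
# global_model.load_state_dict(federated_average(client_states, client_sizes))
```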
Affiliation(s)
- Truong X. Nguyen
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- An Ran Ran
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Xiaoyan Hu
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Dawei Yang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Meirui Jiang
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Carol Y. Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
56
Cheung CY, Ran AR, Wang S, Chan VTT, Sham K, Hilal S, Venketasubramanian N, Cheng CY, Sabanayagam C, Tham YC, Schmetterer L, McKay GJ, Williams MA, Wong A, Au LWC, Lu Z, Yam JC, Tham CC, Chen JJ, Dumitrascu OM, Heng PA, Kwok TCY, Mok VCT, Milea D, Chen CLH, Wong TY. A deep learning model for detection of Alzheimer's disease based on retinal photographs: a retrospective, multicentre case-control study. Lancet Digit Health 2022; 4:e806-e815. [PMID: 36192349] [DOI: 10.1016/s2589-7500(22)00169-8]
Abstract
BACKGROUND There is no simple model to screen for Alzheimer's disease, partly because the diagnosis of Alzheimer's disease itself is complex-typically involving expensive and sometimes invasive tests not commonly available outside highly specialised clinical settings. We aimed to develop a deep learning algorithm that could use retinal photographs alone, which is the most common method of non-invasive imaging the retina to detect Alzheimer's disease-dementia. METHODS In this retrospective, multicentre case-control study, we trained, validated, and tested a deep learning algorithm to detect Alzheimer's disease-dementia from retinal photographs using retrospectively collected data from 11 studies that recruited patients with Alzheimer's disease-dementia and people without disease from different countries. Our main aim was to develop a bilateral model to detect Alzheimer's disease-dementia from retinal photographs alone. We designed and internally validated the bilateral deep learning model using retinal photographs from six studies. We used the EfficientNet-b2 network as the backbone of the model to extract features from the images. Integrated features from four retinal photographs (optic nerve head-centred and macula-centred fields from both eyes) for each individual were used to develop supervised deep learning models and equip the network with unsupervised domain adaptation technique, to address dataset discrepancy between the different studies. We tested the trained model using five other studies, three of which used PET as a biomarker of significant amyloid β burden (testing the deep learning model between amyloid β positive vs amyloid β negative). FINDINGS 12 949 retinal photographs from 648 patients with Alzheimer's disease and 3240 people without the disease were used to train, validate, and test the deep learning model. In the internal validation dataset, the deep learning model had 83·6% (SD 2·5) accuracy, 93·2% (SD 2·2) sensitivity, 82·0% (SD 3·1) specificity, and an area under the receiver operating characteristic curve (AUROC) of 0·93 (0·01) for detecting Alzheimer's disease-dementia. In the testing datasets, the bilateral deep learning model had accuracies ranging from 79·6% (SD 15·5) to 92·1% (11·4) and AUROCs ranging from 0·73 (SD 0·24) to 0·91 (0·10). In the datasets with data on PET, the model was able to differentiate between participants who were amyloid β positive and those who were amyloid β negative: accuracies ranged from 80·6 (SD 13·4%) to 89·3 (13·7%) and AUROC ranged from 0·68 (SD 0·24) to 0·86 (0·16). In subgroup analyses, the discriminative performance of the model was improved in patients with eye disease (accuracy 89·6% [SD 12·5%]) versus those without eye disease (71·7% [11·6%]) and patients with diabetes (81·9% [SD 20·3%]) versus those without the disease (72·4% [11·7%]). INTERPRETATION A retinal photograph-based deep learning algorithm can detect Alzheimer's disease with good accuracy, showing its potential for screening Alzheimer's disease in a community setting. FUNDING BrightFocus Foundation.
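The bilateral design sketched in the abstract (a shared EfficientNet-b2 backbone whose features from four retinal fields are concatenated for one prediction per person) can be approximated as below. This is a hedged re-implementation using the timm library; the fusion head, layer sizes and input resolution are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn
import timm

class BilateralFundusNet(nn.Module):
    """Integrate features from four fundus fields (optic-nerve-head- and
    macula-centred, both eyes) with a shared EfficientNet-b2 backbone."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.backbone = timm.create_model("efficientnet_b2", pretrained=False,
                                          num_classes=0)  # pooled features only
        feat_dim = self.backbone.num_features
        self.head = nn.Sequential(nn.Linear(4 * feat_dim, 256),
                                  nn.ReLU(),
                                  nn.Linear(256, num_classes))

    def forward(self, imgs):              # imgs: (batch, 4, 3, H, W)
        feats = [self.backbone(imgs[:, i]) for i in range(4)]
        return self.head(torch.cat(feats, dim=1))

model = BilateralFundusNet()
logits = model(torch.randn(2, 4, 3, 260, 260))  # dummy batch of 2 subjects
```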
Affiliation(s)
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China.
- An Ran Ran
- Department of Ophthalmology and Visual Sciences, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Shujun Wang
- Department of Computer Science and Engineering, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Victor T T Chan
- Department of Ophthalmology and Visual Sciences, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China; Department of Ophthalmology and Visual Sciences, Prince of Wales Hospital, Hong Kong Special Administrative Region, China
- Kaiser Sham
- Department of Ophthalmology and Visual Sciences, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Saima Hilal
- Memory Aging & Cognition Centre, National University Health System, Singapore; Department of Pharmacology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Saw Swee Hock School of Public Health, National University of Singapore and National University Health System, Singapore
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-National University of Singapore Medical School, Singapore
- Charumathi Sabanayagam
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-National University of Singapore Medical School, Singapore
- Yih Chung Tham
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-National University of Singapore Medical School, Singapore
- Leopold Schmetterer
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Singapore Eye Research Institute, Advanced Ocular Engineering and School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
- Gareth J McKay
- Centre for Public Health, Royal Victoria Hospital, Queen's University Belfast, Belfast, UK
- Adrian Wong
- Gerald Choa Neuroscience Institute, Therese Pei Fong Chow Research Centre for Prevention of Dementia, Lui Che Woo Institute of Innovative Medicine, Division of Neurology, Department of Medicine and Therapeutics, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Lisa W C Au
- Gerald Choa Neuroscience Institute, Therese Pei Fong Chow Research Centre for Prevention of Dementia, Lui Che Woo Institute of Innovative Medicine, Division of Neurology, Department of Medicine and Therapeutics, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Zhihui Lu
- Jockey Club Centre for Osteoporosis Care and Control, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China; Department of Medicine and Therapeutics, Faculty of Medicine, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Jason C Yam
- Department of Ophthalmology and Visual Sciences, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Clement C Tham
- Department of Ophthalmology and Visual Sciences, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- John J Chen
- Department of Ophthalmology and Department of Neurology, Mayo Clinic, Rochester, MN, USA
- Oana M Dumitrascu
- Department of Neurology and Department of Ophthalmology, Division of Cerebrovascular Diseases, Mayo Clinic College of Medicine and Science, Scottsdale, AZ, USA
- Pheng-Ann Heng
- Department of Computer Science and Engineering, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Timothy C Y Kwok
- Jockey Club Centre for Osteoporosis Care and Control, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China; Department of Medicine and Therapeutics, Faculty of Medicine, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Vincent C T Mok
- Gerald Choa Neuroscience Institute, Therese Pei Fong Chow Research Centre for Prevention of Dementia, Lui Che Woo Institute of Innovative Medicine, Division of Neurology, Department of Medicine and Therapeutics, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Dan Milea
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-National University of Singapore Medical School, Singapore
- Christopher Li-Hsian Chen
- Memory Aging & Cognition Centre, National University Health System, Singapore; Department of Pharmacology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-National University of Singapore Medical School, Singapore; Tsinghua Medicine, Tsinghua University, Beijing, China
57
Roy S, Meena T, Lim SJ. Demystifying Supervised Learning in Healthcare 4.0: A New Reality of Transforming Diagnostic Medicine. Diagnostics (Basel) 2022; 12:2549. [PMID: 36292238] [PMCID: PMC9601517] [DOI: 10.3390/diagnostics12102549]
Abstract
The global healthcare sector continues to grow rapidly and is reflected as one of the fastest-growing sectors in the fourth industrial revolution (4.0). The majority of the healthcare industry still uses labor-intensive, time-consuming, and error-prone traditional, manual, and manpower-based methods. This review addresses the current paradigm, the potential for new scientific discoveries, the technological state of preparation, the potential for supervised machine learning (SML) prospects in various healthcare sectors, and ethical issues. The effectiveness and potential for innovation of disease diagnosis, personalized medicine, clinical trials, non-invasive image analysis, drug discovery, patient care services, remote patient monitoring, hospital data, and nanotechnology in various learning-based automation in healthcare along with the requirement for explainable artificial intelligence (AI) in healthcare are evaluated. In order to understand the potential architecture of non-invasive treatment, a thorough study of medical imaging analysis from a technical point of view is presented. This study also represents new thinking and developments that will push the boundaries and increase the opportunity for healthcare through AI and SML in the near future. Nowadays, SML-based applications require a lot of data quality awareness as healthcare is data-heavy, and knowledge management is paramount. Nowadays, SML in biomedical and healthcare developments needs skills, quality data consciousness for data-intensive study, and a knowledge-centric health management system. As a result, the merits, demerits, and precautions need to take ethics and the other effects of AI and SML into consideration. The overall insight in this paper will help researchers in academia and industry to understand and address the future research that needs to be discussed on SML in the healthcare and biomedical sectors.
Affiliation(s)
- Sudipta Roy
- Artificial Intelligence & Data Science, Jio Institute, Navi Mumbai 410206, India
- Tanushree Meena
- Artificial Intelligence & Data Science, Jio Institute, Navi Mumbai 410206, India
- Se-Jung Lim
- Division of Convergence, Honam University, 120, Honamdae-gil, Gwangsan-gu, Gwangju 62399, Korea
58
Ta AWA, Goh HL, Ang C, Koh LY, Poon K, Miller SM. Two Singapore public healthcare AI applications for national screening programs and other examples. Health Care Science 2022; 1:41-57. [PMID: 38938890] [PMCID: PMC11080681] [DOI: 10.1002/hcs2.10]
Abstract
This article explains how two AI systems have been incorporated into the everyday operations of two Singapore public healthcare nation-wide screening programs. The first example is embedded within the setting of a national level population health screening program for diabetes related eye diseases, targeting the rapidly increasing number of adults in the country with diabetes. In the second example, the AI assisted screening is done shortly after a person is admitted to one of the public hospitals to identify which inpatients-especially which elderly patients with complex conditions-have a high risk of being readmitted as an inpatient multiple times in the months following discharge. Ways in which healthcare needs and the clinical operations context influenced the approach to designing or deploying the AI systems are highlighted, illustrating the multiplicity of factors that shape the requirements for successful large-scale deployments of AI systems that are deeply embedded within clinical workflows. In the first example, the choice was made to use the system in a semi-automated (vs. fully automated) mode as this was assessed to be more cost-effective, though still offering substantial productivity improvement. In the second example, machine learning algorithm design and model execution trade-offs were made that prioritized key aspects of patient engagement and inclusion over higher levels of predictive accuracy. The article concludes with several lessons learned related to deploying AI systems within healthcare settings, and also lists several other AI efforts already in deployment and in the pipeline for Singapore's public healthcare system.
Affiliation(s)
- Andy Wee An Ta
- Department of Data Analytics and AI, Integrated Health Information Systems (IHiS) Private Limited, Singapore, Singapore
- Han Leong Goh
- Department of Data Analytics and AI, Integrated Health Information Systems (IHiS) Private Limited, Singapore, Singapore
- Christine Ang
- Department of Data Analytics and AI, Integrated Health Information Systems (IHiS) Private Limited, Singapore, Singapore
- Lian Yeow Koh
- Department of Data Analytics and AI, Integrated Health Information Systems (IHiS) Private Limited, Singapore, Singapore
- Ken Poon
- Department of Data Analytics and AI, Integrated Health Information Systems (IHiS) Private Limited, Singapore, Singapore
- Steven M. Miller
- School of Computing and Information Systems, Singapore Management University, Singapore, Singapore
59
Pereira-Morales AJ, Rojas LH. Risk stratification using Artificial Intelligence: Could it be useful to reduce the burden of chronic kidney disease in low- and middle-income countries? Front Public Health 2022; 10:999512. [PMID: 36249250] [PMCID: PMC9558275] [DOI: 10.3389/fpubh.2022.999512]
Affiliation(s)
- Angela J. Pereira-Morales
- PhD Program in Public Health, School of Medicine, Universidad Nacional de Colombia, Bogotá, Colombia; Science for Life (S4L), 10x Research Group, Bogotá, Colombia
- Luis H. Rojas
- Science for Life (S4L), 10x Research Group, Bogotá, Colombia
60
Patil AD, Biousse V, Newman NJ. Artificial intelligence in ophthalmology: an insight into neurodegenerative disease. Curr Opin Ophthalmol 2022; 33:432-439. [PMID: 35819902] [DOI: 10.1097/icu.0000000000000877]
Abstract
PURPOSE OF REVIEW The aging world population accounts for the increasing prevalence of neurodegenerative diseases such as Alzheimer's and Parkinson's which carry a significant health and economic burden. There is therefore a need for sensitive and specific noninvasive biomarkers for early diagnosis and monitoring. Advances in retinal and optic nerve multimodal imaging as well as the development of artificial intelligence deep learning systems (AI-DLS) have heralded a number of promising advances of which ophthalmologists are at the forefront. RECENT FINDINGS The association among retinal vascular, nerve fiber layer, and macular findings in neurodegenerative disease is well established. In order to optimize the use of these ophthalmic parameters as biomarkers, validated AI-DLS are required to ensure clinical efficacy and reliability. Varied image acquisition methods and protocols as well as variability in neurogenerative disease diagnosis compromise the robustness of ground truths that are paramount to developing high-quality training datasets. SUMMARY In order to produce effective AI-DLS for the diagnosis and monitoring of neurodegenerative disease, multicenter international collaboration is required to prospectively produce large inclusive datasets, acquired through standardized methods and protocols. With a uniform approach, the efficacy of resultant clinical applications will be maximized.
Affiliation(s)
- Nancy J Newman
- Department of Ophthalmology
- Department of Neurology
- Department of Neurological Surgery, Emory University School of Medicine, Atlanta, Georgia, USA
61
González-Gonzalo C, Thee EF, Klaver CCW, Lee AY, Schlingemann RO, Tufail A, Verbraak F, Sánchez CI. Trustworthy AI: Closing the gap between development and integration of AI systems in ophthalmic practice. Prog Retin Eye Res 2022; 90:101034. [PMID: 34902546] [DOI: 10.1016/j.preteyeres.2021.101034]
Abstract
An increasing number of artificial intelligence (AI) systems are being proposed in ophthalmology, motivated by the variety and amount of clinical and imaging data, as well as their potential benefits at the different stages of patient care. Despite achieving close or even superior performance to that of experts, there is a critical gap between development and integration of AI systems in ophthalmic practice. This work focuses on the importance of trustworthy AI to close that gap. We identify the main aspects or challenges that need to be considered along the AI design pipeline so as to generate systems that meet the requirements to be deemed trustworthy, including those concerning accuracy, resiliency, reliability, safety, and accountability. We elaborate on mechanisms and considerations to address those aspects or challenges, and define the roles and responsibilities of the different stakeholders involved in AI for ophthalmic care, i.e., AI developers, reading centers, healthcare providers, healthcare institutions, ophthalmological societies and working groups or committees, patients, regulatory bodies, and payers. Generating trustworthy AI is not a responsibility of a sole stakeholder. There is an impending necessity for a collaborative approach where the different stakeholders are represented along the AI design pipeline, from the definition of the intended use to post-market surveillance after regulatory approval. This work contributes to establish such multi-stakeholder interaction and the main action points to be taken so that the potential benefits of AI reach real-world ophthalmic settings.
Affiliation(s)
- Cristina González-Gonzalo
- Eye Lab, qurAI Group, Informatics Institute, University of Amsterdam, Amsterdam, the Netherlands; Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands.
- Eric F Thee
- Department of Ophthalmology, Erasmus Medical Center, Rotterdam, the Netherlands; Department of Epidemiology, Erasmus Medical Center, Rotterdam, the Netherlands
- Caroline C W Klaver
- Department of Ophthalmology, Erasmus Medical Center, Rotterdam, the Netherlands; Department of Epidemiology, Erasmus Medical Center, Rotterdam, the Netherlands; Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands; Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Aaron Y Lee
- Department of Ophthalmology, School of Medicine, University of Washington, Seattle, WA, USA
- Reinier O Schlingemann
- Department of Ophthalmology, Amsterdam University Medical Center, Amsterdam, the Netherlands; Department of Ophthalmology, University of Lausanne, Jules Gonin Eye Hospital, Fondation Asile des Aveugles, Lausanne, Switzerland
- Adnan Tufail
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom; Institute of Ophthalmology, University College London, London, United Kingdom
- Frank Verbraak
- Department of Ophthalmology, Amsterdam University Medical Center, Amsterdam, the Netherlands
- Clara I Sánchez
- Eye Lab, qurAI Group, Informatics Institute, University of Amsterdam, Amsterdam, the Netherlands; Department of Biomedical Engineering and Physics, Amsterdam University Medical Center, Amsterdam, the Netherlands
62
Wong DYL, Lam MC, Ran A, Cheung CY. Artificial intelligence in retinal imaging for cardiovascular disease prediction: current trends and future directions. Curr Opin Ophthalmol 2022; 33:440-446. [PMID: 35916571] [DOI: 10.1097/icu.0000000000000886]
Abstract
PURPOSE OF REVIEW Retinal microvasculature assessment has shown promise to enhance cardiovascular disease (CVD) risk stratification. Integrating artificial intelligence into retinal microvasculature analysis may increase the screening capacity of CVD risks compared with risk score calculation through blood-taking. This review summarizes recent advancements in artificial intelligence based retinal photograph analysis for CVD prediction, and suggests challenges and future prospects for translation into a clinical setting. RECENT FINDINGS Artificial intelligence based retinal microvasculature analyses potentially predict CVD risk factors (e.g. blood pressure, diabetes), direct CVD events (e.g. CVD mortality), retinal features (e.g. retinal vessel calibre) and CVD biomarkers (e.g. coronary artery calcium score). However, challenges such as handling photographs with concurrent retinal diseases, limited diverse data from other populations or clinical settings, insufficient interpretability and generalizability, concerns on cost-effectiveness and social acceptance may impede the dissemination of these artificial intelligence algorithms into clinical practice. SUMMARY Artificial intelligence based retinal microvasculature analysis may supplement existing CVD risk stratification approach. Although technical and socioeconomic challenges remain, we envision artificial intelligence based microvasculature analysis to have major clinical and research impacts in the future, through screening for high-risk individuals especially in less-developed areas and identifying new retinal biomarkers for CVD research.
Affiliation(s)
- Dragon Y L Wong
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
63
Loftus TJ, Shickel B, Ozrazgat-Baslanti T, Ren Y, Glicksberg BS, Cao J, Singh K, Chan L, Nadkarni GN, Bihorac A. Artificial intelligence-enabled decision support in nephrology. Nat Rev Nephrol 2022; 18:452-465. [PMID: 35459850] [PMCID: PMC9379375] [DOI: 10.1038/s41581-022-00562-3]
Abstract
Kidney pathophysiology is often complex, nonlinear and heterogeneous, which limits the utility of hypothetical-deductive reasoning and linear, statistical approaches to diagnosis and treatment. Emerging evidence suggests that artificial intelligence (AI)-enabled decision support systems - which use algorithms based on learned examples - may have an important role in nephrology. Contemporary AI applications can accurately predict the onset of acute kidney injury before notable biochemical changes occur; can identify modifiable risk factors for chronic kidney disease onset and progression; can match or exceed human accuracy in recognizing renal tumours on imaging studies; and may augment prognostication and decision-making following renal transplantation. Future AI applications have the potential to make real-time, continuous recommendations for discrete actions and yield the greatest probability of achieving optimal kidney health outcomes. Realizing the clinical integration of AI applications will require cooperative, multidisciplinary commitment to ensure algorithm fairness, overcome barriers to clinical implementation, and build an AI-competent workforce. AI-enabled decision support should preserve the pre-eminence of wisdom and augment rather than replace human decision-making. By anchoring intuition with objective predictions and classifications, this approach should favour clinician intuition when it is honed by experience.
Affiliation(s)
- Tyler J Loftus
- Department of Surgery, University of Florida Health, Gainesville, FL, USA
- Benjamin Shickel
- Department of Medicine, University of Florida Health, Gainesville, FL, USA
- Yuanfang Ren
- Department of Medicine, University of Florida Health, Gainesville, FL, USA
- Benjamin S Glicksberg
- Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Hasso Plattner Institute for Digital Health at Mount Sinai, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Jie Cao
- Department of Computational Medicine and Bioinformatics, University of Michigan Medical School, Ann Arbor, MI, USA
- Karandeep Singh
- Department of Learning Health Sciences and Internal Medicine, University of Michigan Medical School, Ann Arbor, MI, USA
- Lili Chan
- The Mount Sinai Clinical Intelligence Center, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Division of Nephrology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Girish N Nadkarni
- The Mount Sinai Clinical Intelligence Center, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- The Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Azra Bihorac
- Department of Medicine, University of Florida Health, Gainesville, FL, USA.
64
Bhatt P, Liu J, Gong Y, Wang J, Guo Y. Emerging Artificial Intelligence–Empowered mHealth: Scoping Review. JMIR Mhealth Uhealth 2022; 10:e35053. [PMID: 35679107] [PMCID: PMC9227797] [DOI: 10.2196/35053]
Abstract
Background
Artificial intelligence (AI) has revolutionized health care delivery in recent years. There is an increase in research for advanced AI techniques, such as deep learning, to build predictive models for the early detection of diseases. Such predictive models leverage mobile health (mHealth) data from wearable sensors and smartphones to discover novel ways for detecting and managing chronic diseases and mental health conditions.
Objective
Currently, little is known about the use of AI-powered mHealth (AIM) settings. Therefore, this scoping review aims to map current research on the emerging use of AIM for managing diseases and promoting health. Our objective is to synthesize research in AIM models that have increasingly been used for health care delivery in the last 2 years.
Methods
Using Arksey and O’Malley’s 5-point framework for conducting scoping reviews, we reviewed AIM literature from the past 2 years in the fields of biomedical technology, AI, and information systems. We searched 3 databases, PubsOnline at INFORMS, e-journal archive at MIS Quarterly, and Association for Computing Machinery (ACM) Digital Library using keywords such as “mobile healthcare,” “wearable medical sensors,” “smartphones”, and “AI.” We included AIM articles and excluded technical articles focused only on AI models. We also used the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) technique for identifying articles that represent a comprehensive view of current research in the AIM domain.
Results
We screened 108 articles focusing on developing AIM models for ensuring better health care delivery, detecting diseases early, and diagnosing chronic health conditions, and 37 articles were eligible for inclusion, with 31 of the 37 articles being published last year (76%). Of the included articles, 9 studied AI models to detect serious mental health issues, such as depression and suicidal tendencies, and chronic health conditions, such as sleep apnea and diabetes. Several articles discussed the application of AIM models for remote patient monitoring and disease management. The considered primary health concerns belonged to 3 categories: mental health, physical health, and health promotion and wellness. Moreover, 14 of the 37 articles used AIM applications to research physical health, representing 38% of the total studies. Finally, 28 out of the 37 (76%) studies used proprietary data sets rather than public data sets. We found a lack of research in addressing chronic mental health issues and a lack of publicly available data sets for AIM research.
Conclusions
The application of AIM models for disease detection and management is a growing research domain. These models provide accurate predictions for enabling preventive care on a broader scale in the health care domain. Given the ever-increasing need for remote disease management during the pandemic, recent AI techniques, such as federated learning and explainable AI, can act as a catalyst for increasing the adoption of AIM and enabling secure data sharing across the health care industry.
Affiliation(s)
- Paras Bhatt
- Department of Electrical & Computer Engineering, The University of Texas at San Antonio, San Antonio, TX, United States
- Jia Liu
- The University of Texas Health Science Center at San Antonio, San Antonio, TX, United States
- Yanmin Gong
- Department of Electrical & Computer Engineering, The University of Texas at San Antonio, San Antonio, TX, United States
- Jing Wang
- Florida State University, Tallahassee, FL, United States
- Yuanxiong Guo
- Department of Electrical & Computer Engineering, The University of Texas at San Antonio, San Antonio, TX, United States
65
Betzler BK, Rim TH, Sabanayagam C, Cheng CY. Artificial Intelligence in Predicting Systemic Parameters and Diseases From Ophthalmic Imaging. Front Digit Health 2022; 4:889445. [PMID: 35706971] [PMCID: PMC9190759] [DOI: 10.3389/fdgth.2022.889445]
Abstract
Artificial Intelligence (AI) analytics has been used to predict, classify, and aid clinical management of multiple eye diseases. Its robust performances have prompted researchers to expand the use of AI into predicting systemic, non-ocular diseases and parameters based on ocular images. Herein, we discuss the reasons why the eye is well-suited for systemic applications, and review the applications of deep learning on ophthalmic images in the prediction of demographic parameters, body composition factors, and diseases of the cardiovascular, hematological, neurodegenerative, metabolic, renal, and hepatobiliary systems. Three main imaging modalities are included—retinal fundus photographs, optical coherence tomographs and external ophthalmic images. We examine the range of systemic factors studied from ophthalmic imaging in current literature and discuss areas of future research, while acknowledging current limitations of AI systems based on ophthalmic images.
Affiliation(s)
- Bjorn Kaijun Betzler
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Tyler Hyungtaek Rim
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
- Charumathi Sabanayagam
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
- Ching-Yu Cheng
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
66
Lim JS, Hong M, Lam WST, Zhang Z, Teo ZL, Liu Y, Ng WY, Foo LL, Ting DSW. Novel technical and privacy-preserving technology for artificial intelligence in ophthalmology. Curr Opin Ophthalmol 2022; 33:174-187. [PMID: 35266894] [DOI: 10.1097/icu.0000000000000846]
Abstract
PURPOSE OF REVIEW The application of artificial intelligence (AI) in medicine and ophthalmology has experienced exponential breakthroughs in recent years in diagnosis, prognosis, and aiding clinical decision-making. The use of digital data has also heralded the need for privacy-preserving technology to protect patient confidentiality and to guard against threats such as adversarial attacks. Hence, this review aims to outline novel AI-based systems for ophthalmology use, privacy-preserving measures, potential challenges, and future directions of each. RECENT FINDINGS Several key AI algorithms used to improve disease detection and outcomes include: Data-driven, imagedriven, natural language processing (NLP)-driven, genomics-driven, and multimodality algorithms. However, deep learning systems are susceptible to adversarial attacks, and use of data for training models is associated with privacy concerns. Several data protection methods address these concerns in the form of blockchain technology, federated learning, and generative adversarial networks. SUMMARY AI-applications have vast potential to meet many eyecare needs, consequently reducing burden on scarce healthcare resources. A pertinent challenge would be to maintain data privacy and confidentiality while supporting AI endeavors, where data protection methods would need to rapidly evolve with AI technology needs. Ultimately, for AI to succeed in medicine and ophthalmology, a balance would need to be found between innovation and privacy.
Affiliation(s)
- Jane S Lim
- Singapore National Eye Centre, Singapore Eye Research Institute
- Walter S T Lam
- Yong Loo Lin School of Medicine, National University of Singapore
- Zheting Zhang
- Lee Kong Chian School of Medicine, Nanyang Technological University
- Zhen Ling Teo
- Singapore National Eye Centre, Singapore Eye Research Institute
- Yong Liu
- National University of Singapore, Duke-NUS Medical School, Singapore
- Wei Yan Ng
- Singapore National Eye Centre, Singapore Eye Research Institute
- Li Lian Foo
- Singapore National Eye Centre, Singapore Eye Research Institute
- Daniel S W Ting
- Singapore National Eye Centre, Singapore Eye Research Institute
67
Yun JS, Kim J, Jung SH, Cha SA, Ko SH, Ahn YB, Won HH, Sohn KA, Kim D. A deep learning model for screening type 2 diabetes from retinal photographs. Nutr Metab Cardiovasc Dis 2022; 32:1218-1226. [PMID: 35197214] [PMCID: PMC9018521] [DOI: 10.1016/j.numecd.2022.01.010]
Abstract
BACKGROUND AND AIMS We aimed to develop and evaluate a non-invasive deep learning algorithm for screening type 2 diabetes in UK Biobank participants using retinal images. METHODS AND RESULTS The deep learning model for prediction of type 2 diabetes was trained on retinal images from 50,077 UK Biobank participants and tested on 12,185 participants. We evaluated its performance in terms of predicting traditional risk factors (TRFs) and genetic risk for diabetes. Next, we compared the performance of three models in predicting type 2 diabetes using 1) an image-only deep learning algorithm, 2) TRFs, 3) the combination of the algorithm and TRFs. Assessing net reclassification improvement (NRI) allowed quantification of the improvement afforded by adding the algorithm to the TRF model. When predicting TRFs with the deep learning algorithm, the areas under the curve (AUCs) obtained with the validation set for age, sex, and HbA1c status were 0.931 (0.928-0.934), 0.933 (0.929-0.936), and 0.734 (0.715-0.752), respectively. When predicting type 2 diabetes, the AUC of the composite logistic model using non-invasive TRFs was 0.810 (0.790-0.830), and that for the deep learning model using only fundus images was 0.731 (0.707-0.756). Upon addition of TRFs to the deep learning algorithm, discriminative performance was improved to 0.844 (0.826-0.861). The addition of the algorithm to the TRFs model improved risk stratification with an overall NRI of 50.8%. CONCLUSION Our results demonstrate that this deep learning algorithm can be a useful tool for stratifying individuals at high risk of type 2 diabetes in the general population.
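The headline comparison (traditional risk factors alone, fundus-image score alone, and their combination) corresponds to nested logistic models scored by AUC. A schematic sketch under assumed column names, not the study's code.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical cohort table: the deep learning score from the fundus image,
# non-invasive traditional risk factors (TRFs), and the type 2 diabetes label.
df = pd.read_csv("cohort.csv")  # columns: dl_score, age, sex, bmi, family_hx, t2d
trf_cols = ["age", "sex", "bmi", "family_hx"]
X_train, X_test, y_train, y_test = train_test_split(
    df[trf_cols + ["dl_score"]], df["t2d"], test_size=0.2, random_state=0)

def auc_for(cols):
    """Fit a logistic model on the chosen columns and return its test AUC."""
    clf = LogisticRegression(max_iter=1000).fit(X_train[cols], y_train)
    return roc_auc_score(y_test, clf.predict_proba(X_test[cols])[:, 1])

print("TRFs only      :", round(auc_for(trf_cols), 3))
print("DL score only  :", round(auc_for(["dl_score"]), 3))
print("TRFs + DL score:", round(auc_for(trf_cols + ["dl_score"]), 3))
```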
Affiliation(s)
- Jae-Seung Yun
- Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Division of Endocrinology and Metabolism, Department of Internal Medicine, St. Vincent's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Jaesik Kim
- Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Department of Computer Engineering, Ajou University, Suwon, Republic of Korea; Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, PA, USA
- Sang-Hyuk Jung
- Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, PA, USA; Samsung Advanced Institute for Health Sciences and Technology (SAIHST), Sungkyunkwan University, Samsung Medical Center, Seoul, Republic of Korea
- Seon-Ah Cha
- Division of Endocrinology and Metabolism, Department of Internal Medicine, St. Vincent's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Seung-Hyun Ko
- Division of Endocrinology and Metabolism, Department of Internal Medicine, St. Vincent's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Yu-Bae Ahn
- Division of Endocrinology and Metabolism, Department of Internal Medicine, St. Vincent's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Hong-Hee Won
- Samsung Advanced Institute for Health Sciences and Technology (SAIHST), Sungkyunkwan University, Samsung Medical Center, Seoul, Republic of Korea
- Kyung-Ah Sohn
- Department of Computer Engineering, Ajou University, Suwon, Republic of Korea; Department of Artificial Intelligence, Ajou University, Suwon, Republic of Korea.
- Dokyoon Kim
- Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, PA, USA.
68
Nusinovici S, Rim TH, Yu M, Lee G, Tham YC, Cheung N, Chong CCY, Da Soh Z, Thakur S, Lee CJ, Sabanayagam C, Lee BK, Park S, Kim SS, Kim HC, Wong TY, Cheng CY. Retinal photograph-based deep learning predicts biological age, and stratifies morbidity and mortality risk. Age Ageing 2022; 51:afac065. [PMID: 35363255] [PMCID: PMC8973000] [DOI: 10.1093/ageing/afac065]
Abstract
BACKGROUND ageing is an important risk factor for a variety of human pathologies. Biological age (BA) may better capture ageing-related physiological changes compared with chronological age (CA). OBJECTIVE we developed a deep learning (DL) algorithm to predict BA based on retinal photographs and evaluated the performance of our new ageing marker in the risk stratification of mortality and major morbidity in general populations. METHODS we first trained a DL algorithm using 129,236 retinal photographs from 40,480 participants in the Korean Health Screening study to predict the probability of age being ≥65 years ('RetiAGE') and then evaluated the ability of RetiAGE to stratify the risk of mortality and major morbidity among 56,301 participants in the UK Biobank. Cox proportional hazards model was used to estimate the hazard ratios (HRs). RESULTS in the UK Biobank, over a 10-year follow up, 2,236 (4.0%) died; of them, 636 (28.4%) were due to cardiovascular diseases (CVDs) and 1,276 (57.1%) due to cancers. Compared with the participants in the RetiAGE first quartile, those in the RetiAGE fourth quartile had a 67% higher risk of 10-year all-cause mortality (HR = 1.67 [1.42-1.95]), a 142% higher risk of CVD mortality (HR = 2.42 [1.69-3.48]) and a 60% higher risk of cancer mortality (HR = 1.60 [1.31-1.96]), independent of CA and established ageing phenotypic biomarkers. Likewise, compared with the first quartile group, the risk of CVD and cancer events in the fourth quartile group increased by 39% (HR = 1.39 [1.14-1.69]) and 18% (HR = 1.18 [1.10-1.26]), respectively. The best discrimination ability for RetiAGE alone was found for CVD mortality (c-index = 0.70, sensitivity = 0.76, specificity = 0.55). Furthermore, adding RetiAGE increased the discrimination ability of the model beyond CA and phenotypic biomarkers (increment in c-index between 1 and 2%). CONCLUSIONS the DL-derived RetiAGE provides a novel, alternative approach to measure ageing.
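The discrimination figures quoted for RetiAGE (a c-index for CVD mortality) can be reproduced in form, though not in value, with a one-line concordance calculation. Illustrative only, with hypothetical column names.

```python
import pandas as pd
from lifelines.utils import concordance_index

# Hypothetical follow-up table: RetiAGE probability of being >=65 years old,
# follow-up time in years, and an indicator of death from cardiovascular disease.
df = pd.read_csv("ukb_followup.csv")  # columns: retiage_prob, followup_years, cvd_death

# Concordance between the RetiAGE score and time to CVD death: a higher predicted
# "ageing" score should correspond to shorter survival, so the score is negated.
c_index = concordance_index(df["followup_years"], -df["retiage_prob"], df["cvd_death"])
print(f"c-index for RetiAGE vs CVD mortality: {c_index:.2f}")
```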
Collapse
Affiliation(s)
- Simon Nusinovici
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore.,Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
| | - Tyler Hyungtaek Rim
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore.,Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
| | - Marco Yu
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
| | | | - Yih-Chung Tham
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore.,Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore.,Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
| | - Ning Cheung
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore.,Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
| | | | - Zhi Da Soh
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
| | - Sahil Thakur
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
| | - Chan Joo Lee
- Division of Cardiology, Severance Cardiovascular Hospital, Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
| | - Charumathi Sabanayagam
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore.,Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
| | - Byoung Kwon Lee
- Division of Cardiology, Severance Cardiovascular Hospital, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
| | - Sungha Park
- Division of Cardiology, Severance Cardiovascular Hospital and Integrated Research Center for Cerebrovascular and Cardiovascular Disease, Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
| | - Sung Soo Kim
- Department of Ophthalmology, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
| | - Hyeon Chang Kim
- Department of Preventive Medicine, Yonsei University College of Medicine, Seoul, Korea
| | - Tien-Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore.,Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
| | - Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore.,Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore.,Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
| |
Collapse
|
69
|
Wagner SK, Hughes F, Cortina-Borja M, Pontikos N, Struyven R, Liu X, Montgomery H, Alexander DC, Topol E, Petersen SE, Balaskas K, Hindley J, Petzold A, Rahi JS, Denniston AK, Keane PA. AlzEye: longitudinal record-level linkage of ophthalmic imaging and hospital admissions of 353 157 patients in London, UK. BMJ Open 2022; 12:e058552. [PMID: 35296488 PMCID: PMC8928293 DOI: 10.1136/bmjopen-2021-058552] [Citation(s) in RCA: 25] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/18/2022] Open
Abstract
PURPOSE Retinal signatures of systemic disease ('oculomics') are increasingly being revealed through a combination of high-resolution ophthalmic imaging and sophisticated modelling strategies. Progress is currently limited not by technical issues so much as by the lack of large labelled datasets, a sine qua non for deep learning. Such data are derived from prospective epidemiological studies, in which retinal imaging is typically unimodal, cross-sectional, of modest number and relates to cohorts which are not enriched with subpopulations of interest, such as those with systemic disease. We thus linked longitudinal multimodal retinal imaging from routinely collected National Health Service (NHS) data with systemic disease data from hospital admissions using a privacy-by-design third-party linkage approach. PARTICIPANTS Between 1 January 2008 and 1 April 2018, 353 157 participants aged 40 years or older attended Moorfields Eye Hospital NHS Foundation Trust, a tertiary ophthalmic institution incorporating a principal central site, four district hubs and five satellite clinics in and around London, UK, serving a catchment population of approximately six million people. FINDINGS TO DATE Among the 353 157 individuals, 186 651 had a total of 1 337 711 Hospital Episode Statistics admitted patient care episodes. Systemic diagnoses recorded at these episodes include 12 022 patients with myocardial infarction, 11 735 with all-cause stroke and 13 363 with all-cause dementia. A total of 6 261 931 retinal images of seven different modalities and across three manufacturers were acquired from 154 830 patients. The majority of retinal images were retinal photographs (n=1 874 175) followed by optical coherence tomography (n=1 567 358). FUTURE PLANS AlzEye combines the world's largest single-institution retinal imaging database with nationally collected systemic data to create an exceptionally large, enriched cohort that reflects the diversity of the population served. First analyses will address cardiovascular diseases and dementia, with a view to identifying hidden retinal signatures that may lead to earlier detection and risk management of these life-threatening conditions.
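The "privacy-by-design third-party linkage" mentioned above generally means matching records on pseudonymous keys rather than raw identifiers. The sketch below illustrates that generic pattern only (a salted keyed hash of a national identifier); the field names and keying scheme are assumptions for illustration, not a description of the AlzEye pipeline.

```python
# Generic illustration of pseudonymous record linkage (not the AlzEye pipeline).
# Each data provider derives the same keyed hash from a shared secret, so the
# linking party never sees raw identifiers.
import hmac
import hashlib

def pseudonym(identifier: str, secret_key: bytes) -> str:
    """Derive a stable pseudonymous key from an identifier."""
    normalised = identifier.strip().upper().encode("utf-8")
    return hmac.new(secret_key, normalised, hashlib.sha256).hexdigest()

def link(ophthalmic_records, admission_records, secret_key: bytes):
    """Join two datasets on the pseudonymous key only (hypothetical 'patient_id' field)."""
    admissions_by_key = {}
    for rec in admission_records:
        admissions_by_key.setdefault(pseudonym(rec["patient_id"], secret_key), []).append(rec)
    return [
        (img, adm)
        for img in ophthalmic_records
        for adm in admissions_by_key.get(pseudonym(img["patient_id"], secret_key), [])
    ]
```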
Collapse
Affiliation(s)
- Siegfried Karl Wagner
- Institute of Ophthalmology, University College London, London, UK
- NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
| | - Fintan Hughes
- Department of Anaesthesiology, Duke University Hospital, Durham, North Carolina, USA
| | | | - Nikolas Pontikos
- Institute of Ophthalmology, University College London, London, UK
- NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
| | - Robbert Struyven
- Institute of Ophthalmology, University College London, London, UK
- NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
| | - Xiaoxuan Liu
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- Centre for Regulatory Science and Innovation, Birmingham Health Partners, Birmingham, UK
| | - Hugh Montgomery
- Centre for Human Health and Performance, University College London, London, UK
| | - Daniel C Alexander
- Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK
| | - Eric Topol
- Scripps Research Institute, La Jolla, California, USA
| | - Steffen Erhard Petersen
- William Harvey Research Institute, Queen Mary University of London, London, UK
- Barts Heart Centre, Barts Health NHS Trust, London, UK
| | - Konstantinos Balaskas
- Institute of Ophthalmology, University College London, London, UK
- NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Medical Retina Service, Moorfields Eye Hospital NHS Foundation Trust, London, UK
| | - Jack Hindley
- Department of Information Governance, University College London, London, UK
| | - Axel Petzold
- Institute of Ophthalmology, University College London, London, UK
- Institute of Neurology, University College London, London, UK
- Department of Neurophthalmology, Moorfields Eye Hospital NHS Foundation Trust, London, UK
| | - Jugnoo S Rahi
- Institute of Ophthalmology, University College London, London, UK
- NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Great Ormond Street Institute of Child Health, University College London, London, UK
- Great Ormond Street Hospital for Children NHS Foundation Trust, London, UK
- Ulverscroft Vision Research Group, University College London, London, UK
| | - Alastair K Denniston
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- Centre for Regulatory Science and Innovation, Birmingham Health Partners, Birmingham, UK
| | - Pearse A Keane
- Institute of Ophthalmology, University College London, London, UK
- NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Medical Retina Service, Moorfields Eye Hospital NHS Foundation Trust, London, UK
| |
Collapse
|
70
|
Cheung CY, Biousse V, Keane PA, Schiffrin EL, Wong TY. Hypertensive eye disease. Nat Rev Dis Primers 2022; 8:14. [PMID: 35273180 DOI: 10.1038/s41572-022-00342-0] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 01/18/2022] [Indexed: 02/07/2023]
Abstract
Hypertensive eye disease includes a spectrum of pathological changes, the most well known being hypertensive retinopathy. Other commonly involved parts of the eye in hypertension include the choroid and optic nerve, sometimes referred to as hypertensive choroidopathy and hypertensive optic neuropathy. Together, hypertensive eye disease develops in response to acute and/or chronic elevation of blood pressure. Major advances in research over the past three decades have greatly enhanced our understanding of the epidemiology, systemic associations and clinical implications of hypertensive eye disease, particularly hypertensive retinopathy. Traditionally diagnosed via a clinical funduscopic examination, but increasingly documented on digital retinal fundus photographs, hypertensive retinopathy has long been considered a marker of systemic target organ damage (for example, kidney disease) elsewhere in the body. Epidemiological studies indicate that hypertensive retinopathy signs are commonly seen in the general adult population, are associated with subclinical measures of vascular disease and predict risk of incident clinical cardiovascular events. New technologies, including development of non-invasive optical coherence tomography angiography, artificial intelligence and mobile ocular imaging instruments, have allowed further assessment and understanding of the ocular manifestations of hypertension and increase the potential that ocular imaging could be used for hypertension management and cardiovascular risk stratification.
Collapse
Affiliation(s)
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
| | - Valérie Biousse
- Departments of Ophthalmology and Neurology, Emory University School of Medicine, Atlanta, GA, USA
| | - Pearse A Keane
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust, London, UK.,Institute of Ophthalmology, University College London, London, UK
| | - Ernesto L Schiffrin
- Hypertension and Vascular Research Unit, Lady Davis Institute for Medical Research, and Department of Medicine, Sir Mortimer B. Davis Jewish General Hospital, McGill University, Montreal, Quebec, Canada
| | - Tien Y Wong
- Singapore Eye Research Institute, Singapore National Eye Center, Duke-NUS Medical School, National University of Singapore, Singapore, Singapore. .,Tsinghua Medicine, Tsinghua University, Beijing, China.
| |
Collapse
|
71
|
Peng Q, Tseng RMWW, Tham YC, Cheng CY, Rim TH. Detection of Systemic Diseases From Ocular Images Using Artificial Intelligence: A Systematic Review. Asia Pac J Ophthalmol (Phila) 2022; 11:126-139. [PMID: 35533332 DOI: 10.1097/apo.0000000000000515] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022] Open
Abstract
PURPOSE Despite the huge investment in health care, there is still a lack of precise and easily accessible screening systems. With proven associations to many systemic diseases, the eye could potentially provide a credible perspective as a novel screening tool. This systematic review aims to summarize the current applications of ocular image-based artificial intelligence to the detection of systemic diseases and suggest future trends for systemic disease screening. METHODS A systematic search was conducted on September 1, 2021, using three databases: PubMed, Google Scholar, and the Web of Science library. Date restrictions were not imposed, and search terms covering ocular images, systemic diseases, and artificial intelligence were used. RESULTS Thirty-three papers were included in this systematic review. A spectrum of target diseases was observed, including but not limited to cardio-cerebrovascular diseases, central nervous system diseases, renal dysfunction, and hepatological diseases. Additionally, one-third of the papers included risk factor predictions for the respective systemic diseases. CONCLUSIONS Ocular image-based artificial intelligence possesses potential diagnostic power to screen various systemic diseases and has also demonstrated the ability to detect Alzheimer disease and chronic kidney disease at early stages. Further research is needed to validate these models for real-world implementation.
Collapse
Affiliation(s)
- Qingsheng Peng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Clinical and Translational Sciences Program, Duke-NUS Medical School, Singapore
| | | | - Yih-Chung Tham
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
| | - Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore and National University Health System, Singapore
| | - Tyler Hyungtaek Rim
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
| |
Collapse
|
72
|
Rabbi F, Dabbagh SR, Angin P, Yetisen AK, Tasoglu S. Deep Learning-Enabled Technologies for Bioimage Analysis. MICROMACHINES 2022; 13:mi13020260. [PMID: 35208385 PMCID: PMC8880650 DOI: 10.3390/mi13020260] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/13/2022] [Revised: 01/31/2022] [Accepted: 02/03/2022] [Indexed: 02/05/2023]
Abstract
Deep learning (DL) is a subfield of machine learning (ML), which has recently demonstrated its potency to significantly improve the quantification and classification workflows in biomedical and clinical applications. Among the end applications profoundly benefitting from DL, cellular morphology quantification is one of the pioneers. Here, we first briefly explain fundamental concepts in DL and then we review some of the emerging DL-enabled applications in cell morphology quantification in the fields of embryology, point-of-care ovulation testing, as a predictive tool for fetal heart pregnancy, cancer diagnostics via classification of cancer histology images, autosomal polycystic kidney disease, and chronic kidney diseases.
Collapse
Affiliation(s)
- Fazle Rabbi
- Department of Mechanical Engineering, Koç University, Sariyer, Istanbul 34450, Turkey; (F.R.); (S.R.D.)
| | - Sajjad Rahmani Dabbagh
- Department of Mechanical Engineering, Koç University, Sariyer, Istanbul 34450, Turkey; (F.R.); (S.R.D.)
- Koç University Arçelik Research Center for Creative Industries (KUAR), Koç University, Sariyer, Istanbul 34450, Turkey
- Koc University Is Bank Artificial Intelligence Lab (KUIS AILab), Koç University, Sariyer, Istanbul 34450, Turkey
| | - Pelin Angin
- Department of Computer Engineering, Middle East Technical University, Ankara 06800, Turkey;
| | - Ali Kemal Yetisen
- Department of Chemical Engineering, Imperial College London, London SW7 2AZ, UK;
| | - Savas Tasoglu
- Department of Mechanical Engineering, Koç University, Sariyer, Istanbul 34450, Turkey; (F.R.); (S.R.D.)
- Koç University Arçelik Research Center for Creative Industries (KUAR), Koç University, Sariyer, Istanbul 34450, Turkey
- Koc University Is Bank Artificial Intelligence Lab (KUIS AILab), Koç University, Sariyer, Istanbul 34450, Turkey
- Institute of Biomedical Engineering, Boğaziçi University, Çengelköy, Istanbul 34684, Turkey
- Physical Intelligence Department, Max Planck Institute for Intelligent Systems, 70569 Stuttgart, Germany
- Correspondence:
| |
Collapse
|
73
|
Nam D, Chapiro J, Paradis V, Seraphin TP, Kather JN. Artificial intelligence in liver diseases: improving diagnostics, prognostics and response prediction. JHEP REPORTS : INNOVATION IN HEPATOLOGY 2022; 4:100443. [PMID: 35243281 PMCID: PMC8867112 DOI: 10.1016/j.jhepr.2022.100443] [Citation(s) in RCA: 54] [Impact Index Per Article: 27.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/13/2021] [Revised: 12/26/2021] [Accepted: 01/11/2022] [Indexed: 12/18/2022]
Abstract
Clinical routine in hepatology involves the diagnosis and treatment of a wide spectrum of metabolic, infectious, autoimmune and neoplastic diseases. Clinicians integrate qualitative and quantitative information from multiple data sources to make a diagnosis, prognosticate the disease course, and recommend a treatment. In the last 5 years, advances in artificial intelligence (AI), particularly in deep learning, have made it possible to extract clinically relevant information from complex and diverse clinical datasets. In particular, histopathology and radiology image data contain diagnostic, prognostic and predictive information which AI can extract. Ultimately, such AI systems could be implemented in clinical routine as decision support tools. However, in the context of hepatology, this requires further large-scale clinical validation and regulatory approval. Herein, we summarise the state of the art in AI in hepatology with a particular focus on histopathology and radiology data. We present a roadmap for the further development of novel biomarkers in hepatology and outline critical obstacles which need to be overcome.
Collapse
|
74
|
Deep Learning of Retinal Imaging: A Useful Tool for Coronary Artery Calcium Score Prediction in Diabetic Patients. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12031401] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Cardiovascular diseases (CVD) are among the leading causes of death in developed countries. Previous studies suggest that retinal blood vessels provide relevant information on cardiovascular risk. Retinal fundus imaging (RFI) is an inexpensive medical imaging test that is already performed regularly in the diabetic population as screening for diabetic retinopathy (DR). Since diabetes is a major cause of CVD, we explored the use of deep learning architectures on RFI as a tool for predicting cardiovascular risk in this population. In particular, we used the coronary artery calcium (CAC) score as a marker and trained a convolutional neural network (CNN) to predict whether it surpasses a threshold defined by experts. Preliminary experiments on a reduced set of clinically verified patients show promising accuracies. In addition, we observed that elementary clinical data are positively correlated with the risk of CV disease. We found that the two informational cues are complementary, and we propose two applications that can benefit from combining image analysis and clinical data.
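Predicting whether the CAC score exceeds a threshold, as described above, is a binary image-classification task. A minimal transfer-learning sketch of that setup follows; the ResNet-18 backbone and the training loop are assumptions for illustration (the abstract does not specify the architecture), and this is not the authors' implementation.

```python
# Minimal sketch of a binary "CAC above threshold" classifier from fundus images
# (illustrative; backbone choice and training loop are assumptions, torchvision >= 0.13).
import torch.nn as nn
from torchvision import models

def build_cac_classifier() -> nn.Module:
    backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # single logit: CAC above threshold
    return backbone

def train_step(model, images, labels, optimizer):
    # images: (N, 3, H, W) fundus photographs; labels: (N, 1) in {0, 1}.
    criterion = nn.BCEWithLogitsLoss()
    optimizer.zero_grad()
    loss = criterion(model(images), labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```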
Collapse
|
75
|
Kong H, Zang S, Hu Y, Lin Z, Liu B, Zeng X, Xiao Y, Du Z, Guanrong W, Ren Y, Fang Y, Xiaohong Y, Yu H. Effect of High Myopia and Cataract Surgery on the Correlation Between Diabetic Retinopathy and Chronic Kidney Disease. Front Med (Lausanne) 2022; 9:788573. [PMID: 35721047 PMCID: PMC9198540 DOI: 10.3389/fmed.2022.788573] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2021] [Accepted: 03/21/2022] [Indexed: 02/05/2023] Open
Abstract
PURPOSE To investigate the effect of high myopia and cataract surgery on the grading of diabetic retinopathy (DR) and their roles in the correlation between DR and chronic kidney disease (CKD). METHODS A total of 1,063 eyes of 1,063 diabetic patients were enrolled. We conducted binary and multiple multivariate regressions to analyze the ocular and systemic risk factors of DR. Based on the presence of high myopia and a history of cataract surgery, we divided the cases into four subgroups, namely those with high myopia, those with a history of cataract surgery, those with both conditions, and those with neither, and then determined the correlation between the stages of DR and CKD in each subgroup. RESULTS In the binary analysis, high myopia was identified as a protective factor for DR (odds ratio [OR]: 0.312 [95% confidence interval (CI), 0.195-0.500]; p < 0.001), whereas cataract surgery was one of the independent risk factors for DR (OR: 2.818 [95% CI, 1.507-5.273]; p = 0.001). With increasing stages of DR, high myopia played an increasingly protective role (mild non-proliferative DR [NPDR], OR = 0.461, p = 0.004; moderate NPDR, OR = 0.217, p = 0.003; severe NPDR, OR = 0.221, p = 0.008; proliferative DR [PDR], OR = 0.125, p = 0.001), whereas cataract surgery became a stronger risk factor, especially in PDR (mild NPDR, OR = 1.595, p = 0.259; moderate NPDR, OR = 3.955, p = 0.005; severe NPDR, OR = 6.836, p < 0.001; PDR, OR = 9.756, p < 0.001). The correlation between the stages of DR and CKD was highest in the subgroup with neither high myopia nor a history of cataract surgery. CONCLUSION High myopia was a protective factor and cataract surgery a risk factor for DR, and both effects strengthened with increasing DR grade. The stages of DR and CKD showed a higher correlation after adjustment for ocular confounding factors.
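Odds ratios of the kind reported above come from multivariable logistic regression. The sketch below shows the standard way to obtain ORs with 95% confidence intervals using statsmodels; the column names are hypothetical and this is an illustration of the technique, not the authors' analysis.

```python
# Illustrative multivariable logistic regression for DR risk factors.
# Column names (dr, high_myopia, cataract_surgery, ...) are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def odds_ratios(df: pd.DataFrame, outcome: str, predictors: list[str]) -> pd.DataFrame:
    X = sm.add_constant(df[predictors].astype(float))
    fit = sm.Logit(df[outcome].astype(float), X).fit(disp=False)
    ci = fit.conf_int()  # 95% CI of each coefficient on the log-odds scale
    table = pd.DataFrame({
        "OR": np.exp(fit.params),
        "CI 2.5%": np.exp(ci[0]),
        "CI 97.5%": np.exp(ci[1]),
        "p": fit.pvalues,
    })
    return table.drop(index="const")

# Example call: odds_ratios(df, "dr", ["high_myopia", "cataract_surgery", "age", "hba1c"])
```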
Collapse
Affiliation(s)
- Huiqian Kong
- Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Eye Institute, Southern Medical University, Guangdong Academy of Medical Sciences/The Second School of Clinical Medicine, Guangzhou, China
| | - Siwen Zang
- Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Eye Institute, Southern Medical University, Guangdong Academy of Medical Sciences/The Second School of Clinical Medicine, Guangzhou, China
| | - Yijun Hu
- Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Eye Institute, Southern Medical University, Guangdong Academy of Medical Sciences/The Second School of Clinical Medicine, Guangzhou, China
- Refractive Surgery Center, Guangzhou Aier Eye Hospital, Aier Institute of Refractive Surgery, Guangzhou, China
- Aier School of Ophthalmology, Central South University, Changsha, China
| | - Zhanjie Lin
- Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Eye Institute, Southern Medical University, Guangdong Academy of Medical Sciences/The Second School of Clinical Medicine, Guangzhou, China
- Graduate School, Shantou University Medical College, Shantou, China
| | - Baoyi Liu
- Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Eye Institute, Southern Medical University, Guangdong Academy of Medical Sciences/The Second School of Clinical Medicine, Guangzhou, China
| | - Xiaomin Zeng
- Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Eye Institute, Southern Medical University, Guangdong Academy of Medical Sciences/The Second School of Clinical Medicine, Guangzhou, China
| | - Yu Xiao
- Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Eye Institute, Southern Medical University, Guangdong Academy of Medical Sciences/The Second School of Clinical Medicine, Guangzhou, China
| | - Zijing Du
- Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Eye Institute, Southern Medical University, Guangdong Academy of Medical Sciences/The Second School of Clinical Medicine, Guangzhou, China
| | - Wu Guanrong
- Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Eye Institute, Southern Medical University, Guangdong Academy of Medical Sciences/The Second School of Clinical Medicine, Guangzhou, China
| | - Yun Ren
- Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Eye Institute, Southern Medical University, Guangdong Academy of Medical Sciences/The Second School of Clinical Medicine, Guangzhou, China
- Graduate School, Shantou University Medical College, Shantou, China
| | - Ying Fang
- Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Eye Institute, Southern Medical University, Guangdong Academy of Medical Sciences/The Second School of Clinical Medicine, Guangzhou, China
| | - Yang Xiaohong
- Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Eye Institute, Southern Medical University, Guangdong Academy of Medical Sciences/The Second School of Clinical Medicine, Guangzhou, China
- *Correspondence: Yang Xiaohong
| | - Honghua Yu
- Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Eye Institute, Southern Medical University, Guangdong Academy of Medical Sciences/The Second School of Clinical Medicine, Guangzhou, China
- Honghua Yu
| |
Collapse
|
76
|
Babenko B, Mitani A, Traynis I, Kitade N, Singh P, Maa AY, Cuadros J, Corrado GS, Peng L, Webster DR, Varadarajan A, Hammel N, Liu Y. Detection of signs of disease in external photographs of the eyes via deep learning. Nat Biomed Eng 2022; 6:1370-1383. [PMID: 35352000 PMCID: PMC8963675 DOI: 10.1038/s41551-022-00867-5] [Citation(s) in RCA: 30] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2020] [Accepted: 02/15/2022] [Indexed: 01/14/2023]
Abstract
Retinal fundus photographs can be used to detect a range of retinal conditions. Here we show that deep-learning models trained instead on external photographs of the eyes can be used to detect diabetic retinopathy (DR), diabetic macular oedema and poor blood glucose control. We developed the models using eye photographs from 145,832 patients with diabetes from 301 DR screening sites and evaluated the models on four tasks and four validation datasets with a total of 48,644 patients from 198 additional screening sites. For all four tasks, the predictive performance of the deep-learning models was significantly higher than the performance of logistic regression models using self-reported demographic and medical history data, and the predictions generalized to patients with dilated pupils, to patients from a different DR screening programme and to a general eye care programme that included diabetics and non-diabetics. We also explored the use of the deep-learning models for the detection of elevated lipid levels. The utility of external eye photographs for the diagnosis and management of diseases should be further validated with images from different cameras and patient populations.
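The study above benchmarks the deep-learning models against logistic regression on self-reported demographic and medical-history data. A minimal sketch of that kind of tabular baseline is shown below; the feature matrix and split are hypothetical, and this is an illustration of the comparison setup rather than the published protocol.

```python
# Sketch of a tabular logistic-regression baseline evaluated by AUC
# (hypothetical features; not the published evaluation protocol).
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def baseline_auc(X, y):
    """X: demographic/medical-history features, y: binary disease label."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0
    )
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```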
Collapse
Affiliation(s)
- Boris Babenko
- Google Health, Palo Alto, CA, USA
| | - Akinori Mitani
- Google Health, Palo Alto, CA, USA; Artera, Mountain View, CA, USA
| | - Ilana Traynis
- Google Health via Advanced Clinical, Deerfield, IL USA
| | - Naho Kitade
- Google Health, Palo Alto, CA, USA
| | - Preeti Singh
- Google Health, Palo Alto, CA, USA
| | - April Y. Maa
- Department of Ophthalmology, Emory University School of Medicine, Atlanta, GA, USA; Regional Telehealth Services, Technology-based Eye Care Services (TECS) Division, Veterans Integrated Service Network (VISN) 7, Decatur, GA, USA
| | | | - Greg S. Corrado
- Google Health, Palo Alto, CA, USA
| | - Lily Peng
- Google Health, Palo Alto, CA, USA
| | - Dale R. Webster
- Google Health, Palo Alto, CA, USA
| | | | - Naama Hammel
- Google Health, Palo Alto, CA, USA
| | - Yun Liu
- Google Health, Palo Alto, CA, USA
| |
Collapse
|
77
|
Wang Z, Keane PA, Chiang M, Cheung CY, Wong TY, Ting DSW. Artificial Intelligence and Deep Learning in Ophthalmology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_200] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
78
|
Alé-Chilet A, Bernal-Morales C, Barraso M, Hernández T, Oliva C, Vinagre I, Ortega E, Figueras-Roca M, Sala-Puigdollers A, Esquinas C, Gimenez M, Esmatjes E, Adán A, Zarranz-Ventura J. Optical Coherence Tomography Angiography in Type 1 Diabetes Mellitus-Report 2: Diabetic Kidney Disease. J Clin Med 2021; 11:197. [PMID: 35011940 PMCID: PMC8745787 DOI: 10.3390/jcm11010197] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2021] [Revised: 12/21/2021] [Accepted: 12/24/2021] [Indexed: 12/27/2022] Open
Abstract
The purpose of this study is to investigate potential associations between optical coherence tomography angiography (OCTA) parameters and diabetic kidney disease (DKD) categories in type 1 diabetes mellitus (T1DM) patients and controls. A complete ocular and systemic examination, including OCTA imaging and blood tests, was performed. OCTA parameters included vessel density (VD), perfusion density (PD), foveal avascular zone area (FAZa), perimeter (FAZp) and circularity (FAZc) in the superficial vascular plexus, and DKD categories were defined according to glomerular filtration rate (GFR), albumin-creatinine ratio (ACR) and KDIGO prognosis risk classifications. A total of 425 individuals (1 eye/1 patient) were included. Reduced VD and FAZc were associated with greater categories of GFR (p = 0.002, p = 0.04), ACR (p = 0.003, p = 0.005) and KDIGO risk prognosis classifications (p = 0.002, p = 0.005). FAZc was significantly reduced in greater KDIGO prognosis risk categories (low risk vs. moderate risk, 0.65 ± 0.09 vs. 0.60 ± 0.07, p < 0.05). VD and FAZc presented the best diagnostic performance in ROC analyses. In conclusion, OCTA parameters such as VD and FAZc are able to detect different GFR, ACR, and KDIGO categories in T1DM patients and controls in a non-invasive, objective and quantitative way. FAZc is able to discriminate, among T1DM patients, those with higher DKD categories and greater risk of DKD progression.
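FAZ circularity (FAZc), used above, is conventionally computed from the FAZ area and perimeter as 4*pi*A/P^2, which equals 1.0 for a perfect circle. The minimal sketch below assumes that standard definition; the abstract itself does not state the formula.

```python
# FAZ circularity from area and perimeter (standard definition assumed here):
# circularity = 4 * pi * area / perimeter**2, equal to 1.0 for a perfect circle.
import math

def faz_circularity(area_mm2: float, perimeter_mm: float) -> float:
    if perimeter_mm <= 0:
        raise ValueError("perimeter must be positive")
    return 4.0 * math.pi * area_mm2 / (perimeter_mm ** 2)

# Example: faz_circularity(0.30, 2.55) is roughly 0.58
```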
Collapse
Affiliation(s)
- Aníbal Alé-Chilet
- Institut Clínic d’Oftalmologia (ICOF), Hospital Clínic, 08028 Barcelona, Spain; (A.A.-C.); (C.B.-M.); (M.B.); (T.H.); (C.O.); (M.F.-R.); (A.S.-P.); (A.A.)
| | - Carolina Bernal-Morales
- Institut Clínic d’Oftalmologia (ICOF), Hospital Clínic, 08028 Barcelona, Spain; (A.A.-C.); (C.B.-M.); (M.B.); (T.H.); (C.O.); (M.F.-R.); (A.S.-P.); (A.A.)
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), 08036 Barcelona, Spain; (I.V.); (E.O.); (M.G.); (E.E.)
- Medical Retina Service, Moorfields Eye Hospital NHS Foundation Trust, London EC1V 2PD, UK
| | - Marina Barraso
- Institut Clínic d’Oftalmologia (ICOF), Hospital Clínic, 08028 Barcelona, Spain; (A.A.-C.); (C.B.-M.); (M.B.); (T.H.); (C.O.); (M.F.-R.); (A.S.-P.); (A.A.)
| | - Teresa Hernández
- Institut Clínic d’Oftalmologia (ICOF), Hospital Clínic, 08028 Barcelona, Spain; (A.A.-C.); (C.B.-M.); (M.B.); (T.H.); (C.O.); (M.F.-R.); (A.S.-P.); (A.A.)
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), 08036 Barcelona, Spain; (I.V.); (E.O.); (M.G.); (E.E.)
| | - Cristian Oliva
- Institut Clínic d’Oftalmologia (ICOF), Hospital Clínic, 08028 Barcelona, Spain; (A.A.-C.); (C.B.-M.); (M.B.); (T.H.); (C.O.); (M.F.-R.); (A.S.-P.); (A.A.)
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), 08036 Barcelona, Spain; (I.V.); (E.O.); (M.G.); (E.E.)
| | - Irene Vinagre
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), 08036 Barcelona, Spain; (I.V.); (E.O.); (M.G.); (E.E.)
- Diabetes Unit, Hospital Clínic, 08036 Barcelona, Spain
- Institut Clínic de Malalties Digestives i Metabòliques (ICMDM), Hospital Clínic, 08036 Barcelona, Spain
| | - Emilio Ortega
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), 08036 Barcelona, Spain; (I.V.); (E.O.); (M.G.); (E.E.)
- Diabetes Unit, Hospital Clínic, 08036 Barcelona, Spain
- Institut Clínic de Malalties Digestives i Metabòliques (ICMDM), Hospital Clínic, 08036 Barcelona, Spain
- Centro de Investigación Biomédica en Red de la Fisiopatología de la Obesidad y Nutrición (CIBEROBN), 08036 Barcelona, Spain
| | - Marc Figueras-Roca
- Institut Clínic d’Oftalmologia (ICOF), Hospital Clínic, 08028 Barcelona, Spain; (A.A.-C.); (C.B.-M.); (M.B.); (T.H.); (C.O.); (M.F.-R.); (A.S.-P.); (A.A.)
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), 08036 Barcelona, Spain; (I.V.); (E.O.); (M.G.); (E.E.)
- Diabetes Unit, Hospital Clínic, 08036 Barcelona, Spain
| | - Anna Sala-Puigdollers
- Institut Clínic d’Oftalmologia (ICOF), Hospital Clínic, 08028 Barcelona, Spain; (A.A.-C.); (C.B.-M.); (M.B.); (T.H.); (C.O.); (M.F.-R.); (A.S.-P.); (A.A.)
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), 08036 Barcelona, Spain; (I.V.); (E.O.); (M.G.); (E.E.)
- Diabetes Unit, Hospital Clínic, 08036 Barcelona, Spain
| | - Cristina Esquinas
- Respiratory Department, Hospital Universitari Vall d’Hebron, 08035 Barcelona, Spain;
| | - Marga Gimenez
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), 08036 Barcelona, Spain; (I.V.); (E.O.); (M.G.); (E.E.)
- Diabetes Unit, Hospital Clínic, 08036 Barcelona, Spain
- Institut Clínic de Malalties Digestives i Metabòliques (ICMDM), Hospital Clínic, 08036 Barcelona, Spain
| | - Enric Esmatjes
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), 08036 Barcelona, Spain; (I.V.); (E.O.); (M.G.); (E.E.)
- Diabetes Unit, Hospital Clínic, 08036 Barcelona, Spain
- Institut Clínic de Malalties Digestives i Metabòliques (ICMDM), Hospital Clínic, 08036 Barcelona, Spain
| | - Alfredo Adán
- Institut Clínic d’Oftalmologia (ICOF), Hospital Clínic, 08028 Barcelona, Spain; (A.A.-C.); (C.B.-M.); (M.B.); (T.H.); (C.O.); (M.F.-R.); (A.S.-P.); (A.A.)
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), 08036 Barcelona, Spain; (I.V.); (E.O.); (M.G.); (E.E.)
| | - Javier Zarranz-Ventura
- Institut Clínic d’Oftalmologia (ICOF), Hospital Clínic, 08028 Barcelona, Spain; (A.A.-C.); (C.B.-M.); (M.B.); (T.H.); (C.O.); (M.F.-R.); (A.S.-P.); (A.A.)
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), 08036 Barcelona, Spain; (I.V.); (E.O.); (M.G.); (E.E.)
- Diabetes Unit, Hospital Clínic, 08036 Barcelona, Spain
| |
Collapse
|
79
|
Chantaduly C, Troutt HR, Perez Reyes KA, Zuckerman JE, Chang PD, Lau WL. Artificial Intelligence Assessment of Renal Scarring (AIRS Study). KIDNEY360 2021; 3:83-90. [PMID: 35368566 PMCID: PMC8967621 DOI: 10.34067/kid.0003662021] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/02/2021] [Accepted: 11/11/2021] [Indexed: 01/10/2023]
Abstract
Background The goal of the Artificial Intelligence in Renal Scarring (AIRS) study is to develop machine learning tools for noninvasive quantification of kidney fibrosis from imaging scans. Methods We conducted a retrospective analysis of patients who had one or more abdominal computed tomography (CT) scans within 6 months of a kidney biopsy. The final cohort encompassed 152 CT scans from 92 patients, which included images of 300 native kidneys and 76 transplant kidneys. Two different convolutional neural networks (slice-level and voxel-level classifiers) were tested to differentiate severe versus mild/moderate kidney fibrosis (≥50% versus <50%). Interstitial fibrosis and tubular atrophy scores from kidney biopsy reports were used as ground-truth. Results The two machine learning models demonstrated similar positive predictive value (0.886 versus 0.935) and accuracy (0.831 versus 0.879). Conclusions In summary, machine learning algorithms are a promising noninvasive diagnostic tool to quantify kidney fibrosis from CT scans. The clinical utility of these prediction tools, in terms of avoiding renal biopsy and associated bleeding risks in patients with severe fibrosis, remains to be validated in prospective clinical trials.
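The positive predictive value and accuracy reported above are derived from the binary confusion matrix of the severe versus mild/moderate fibrosis task. A minimal scikit-learn sketch of that evaluation follows, as an illustration of the metrics rather than the study's own evaluation code.

```python
# Evaluating a binary fibrosis classifier (severe = 1, mild/moderate = 0).
from sklearn.metrics import accuracy_score, confusion_matrix, precision_score

def evaluate(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "ppv": precision_score(y_true, y_pred),      # tp / (tp + fp)
        "accuracy": accuracy_score(y_true, y_pred),  # (tp + tn) / total
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```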
Collapse
Affiliation(s)
- Chanon Chantaduly
- Department of Radiological Sciences and Center for Artificial Intelligence in Diagnostic Medicine, University of California Irvine, Orange, California
| | - Hayden R. Troutt
- Division of Nephrology, Department of Medicine, University of California Irvine, Orange, California
| | - Karla A. Perez Reyes
- Division of Nephrology, Department of Medicine, University of California Irvine, Orange, California
| | - Jonathan E. Zuckerman
- Department of Pathology and Laboratory Medicine, David Geffen School of Medicine at University of California Los Angeles, Los Angeles, California
| | - Peter D. Chang
- Department of Radiological Sciences and Center for Artificial Intelligence in Diagnostic Medicine, University of California Irvine, Orange, California
| | - Wei Ling Lau
- Division of Nephrology, Department of Medicine, University of California Irvine, Orange, California
| |
Collapse
|
80
|
Lin Y, Khong PL, Zou Z, Cao P. Evaluation of pediatric hydronephrosis using deep learning quantification of fluid-to-kidney-area ratio by ultrasonography. Abdom Radiol (NY) 2021; 46:5229-5239. [PMID: 34227014 DOI: 10.1007/s00261-021-03201-w] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2021] [Revised: 06/28/2021] [Accepted: 06/28/2021] [Indexed: 12/17/2022]
Abstract
PURPOSE Hydronephrosis is the dilation of the pelvicalyceal system due to urine flow obstruction in one or both kidneys. Conventionally, the renal pelvis anterior-posterior diameter (APD) has been used for quantifying hydronephrosis in medical images (e.g., ultrasound, CT, and functional MRI). Our study aimed to automatically detect and quantify the fluid and kidney areas on ultrasonography, using a deep learning approach. METHODS An attention-Unet was used to segment the kidney and the dilated pelvicalyceal system with fluid. The gold standard for diagnosing hydronephrosis was an APD > 1.0 cm. For semi-quantification, we proposed a fluid-to-kidney-area ratio measurement, i.e., [Formula: see text], as a deep learning-derived biomarker. Dice coefficient, confusion matrix, ROC curve, and Z-test were used to evaluate the model performance. Linear regression was applied to obtain the fluid-to-kidney-area ratio cutoff for detecting hydronephrosis. RESULTS For regional kidney segmentation, the Dice coefficients were 0.92 and 0.83 for the kidney and dilated pelvicalyceal system, respectively. The sensitivity and specificity of detecting a dilated pelvicalyceal system were 0.99 and 0.83, respectively. The linear equation was fluid-to-kidney-area ratio = (0.213 ± 0.004) × APD (in cm), with a 95% confidence interval on the slope and R2 = 0.87. The fluid-to-kidney-area ratio cutoff for detecting hydronephrosis was 0.213. The sensitivity and specificity for detecting hydronephrosis were 0.90 and 0.80, respectively. CONCLUSION Our study confirmed the feasibility of deep learning characterization of the kidney and fluid, demonstrating automatic pediatric hydronephrosis detection.
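The "[Formula: see text]" placeholder above did not survive extraction from the source abstract. Taking the measure at face value as the segmented fluid area divided by the kidney area (an assumption based only on its name), the computation from the two segmentation masks would look like the sketch below.

```python
# Fluid-to-kidney-area ratio from segmentation masks.
# Assumed definition: dilated pelvicalyceal fluid pixels divided by kidney pixels.
import numpy as np

HYDRONEPHROSIS_CUTOFF = 0.213  # reported cutoff corresponding to APD = 1.0 cm

def fluid_to_kidney_ratio(kidney_mask: np.ndarray, fluid_mask: np.ndarray) -> float:
    kidney_px = np.count_nonzero(kidney_mask)
    if kidney_px == 0:
        raise ValueError("empty kidney segmentation")
    return np.count_nonzero(fluid_mask) / kidney_px

def is_hydronephrotic(kidney_mask: np.ndarray, fluid_mask: np.ndarray) -> bool:
    return fluid_to_kidney_ratio(kidney_mask, fluid_mask) > HYDRONEPHROSIS_CUTOFF
```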
Collapse
|
81
|
Identifying Peripheral Neuropathy in Colour Fundus Photographs Based on Deep Learning. Diagnostics (Basel) 2021; 11:diagnostics11111943. [PMID: 34829290 PMCID: PMC8623417 DOI: 10.3390/diagnostics11111943] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2021] [Revised: 10/13/2021] [Accepted: 10/15/2021] [Indexed: 11/18/2022] Open
Abstract
The aim of this study was to develop and validate a deep learning-based system to detect peripheral neuropathy (DN) from retinal colour images in people with diabetes. Retinal images from 1561 people with diabetes were used to predict DN diagnosed on the vibration perception threshold. A total of 189 had diabetic retinopathy (DR), 276 had DN, and 43 had both DR and DN. 90% of the images were used for training and validation and 10% for testing. Deep neural networks, including SqueezeNet, Inception, and DenseNet, were utilized, and the architectures were tested with and without pre-trained weights. Random image transforms were used during training. The algorithm was trained and tested using three sets of data: all retinal images, images without DR, and images with DR. The area under the ROC curve (AUC) was used to evaluate performance. The AUC for predicting DN in the whole cohort was 0.8013 (±0.0257) on the validation set and 0.7097 (±0.0031) on the test set. The AUC increased to 0.8673 (±0.0088) in the presence of DR. Retinal images can thus be used to identify individuals with DN, providing an opportunity to educate patients about their DN status when they attend DR screening.
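The study above compares standard architectures trained with and without pre-trained weights and uses random image transforms during training. The generic torchvision sketch below illustrates that setup, with DenseNet-121 standing in for the tested model families; the specific transforms and head are assumptions, not the authors' configuration.

```python
# Generic "pre-trained vs. randomly initialised" setup with random training-time
# transforms (illustrative; DenseNet-121 stands in for the tested models, torchvision >= 0.13).
import torch.nn as nn
from torchvision import models, transforms

train_transforms = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ColorJitter(brightness=0.1, contrast=0.1),
    transforms.ToTensor(),
])

def build_dn_classifier(pretrained: bool) -> nn.Module:
    weights = models.DenseNet121_Weights.IMAGENET1K_V1 if pretrained else None
    model = models.densenet121(weights=weights)
    model.classifier = nn.Linear(model.classifier.in_features, 1)  # DN vs. no DN logit
    return model
```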
Collapse
|
82
|
Pandey M, Gupta A. A systematic review of the automatic kidney segmentation methods in abdominal images. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.10.006] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
|
83
|
Abstract
PURPOSE OF REVIEW Systemic retinal biomarkers are biomarkers identified in the retina and related to evaluation and management of systemic disease. This review summarizes the background, categories and key findings from this body of research as well as potential applications to clinical care. RECENT FINDINGS Potential systemic retinal biomarkers for cardiovascular disease, kidney disease and neurodegenerative disease were identified using regression analysis as well as more sophisticated image processing techniques. Deep learning techniques were used in a number of studies predicting diseases including anaemia and chronic kidney disease. A virtual coronary artery calcium score performed well against other competing traditional models of event prediction. SUMMARY Systemic retinal biomarker research has progressed rapidly using regression studies with clearly identified biomarkers such as retinal microvascular patterns, as well as using deep learning models. Future systemic retinal biomarker research may be able to boost performance using larger data sets, the addition of meta-data and higher resolution image inputs.
Collapse
|
84
|
Affiliation(s)
| | | | - Yun Liu
- Google Health, Palo Alto, CA, USA.
| |
Collapse
|
85
|
Coleman K, Coleman J, Franco-Penya H, Hamroush F, Murtagh P, Fitzpatrick P, Aiken M, Combes A, Keegan D. A New Smartphone-Based Optic Nerve Head Biometric for Verification and Change Detection. Transl Vis Sci Technol 2021; 10:1. [PMID: 34196679 PMCID: PMC8267185 DOI: 10.1167/tvst.10.8.1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2020] [Accepted: 05/21/2021] [Indexed: 12/03/2022] Open
Abstract
Purpose Lens adapted smartphones are being used regularly instead of ophthalmoscopes. The most common causes of preventable blindness in the world, which are glaucoma and diabetic retinopathy, can develop asymptomatic changes to the optic nerve head (ONH) especially in the developing world where there is a dire shortage of ophthalmologists but ubiquitous mobile phones. We developed a proof-of-concept ONH biometric (application [APP]) to use as a routine biometric on a mobile phone. The unique blood vessel pattern is verified if it maps on to a previously enrolled image. Methods The iKey APP platform comprises three deep neural networks (DNNs) developed from anonymous ONH images: the graticule blood vessel (GBV) and the blood vessel specific feature (BVSF) DNNs were trained on unique blood vessel vectors. A non-feature specific (NFS) baseline ResNet50 DNN was trained for comparison. Results Verification reached an accuracy of 97.06% with BVSF, 87.24% with GBV and 79.8% using NFS. Conclusions A new ONH biometric was developed with a hybrid platform of ONH algorithms for use as a verification biometric on a smartphone. Failure to verify will alert the user to possible changes to the image, so that silent changes may be observed before sight threatening disease progresses. The APP retains a history of all ONH images. Future longitudinal analysis will explore the impact of ONH changes to the iKey biometric platform. Translational Relevance Phones with iKey will host ONH images for biometric protection of both health and financial data. The ONH may be used for automatic screening by new disease detection DNNs.
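Verification against a previously enrolled image, as described above, is commonly implemented by thresholding the similarity of DNN embeddings. The sketch below illustrates only that generic pattern; the similarity measure and threshold are assumptions and do not describe the iKey matching rule.

```python
# Generic embedding-based verification (illustrative only; the iKey matching
# rule and threshold are not specified in the abstract).
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(enrolled_embedding: np.ndarray,
           probe_embedding: np.ndarray,
           threshold: float = 0.9) -> bool:
    """True if the probe ONH image maps onto the enrolled template."""
    return cosine_similarity(enrolled_embedding, probe_embedding) >= threshold
```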
Collapse
Affiliation(s)
| | | | | | | | - Patrick Murtagh
- Mater Vision Institute, Mater University Hospital, Dublin, Ireland
| | | | - Mary Aiken
- Department of Law and Criminology, University of East London, East London, UK
| | | | - David Keegan
- Mater Vision Institute, Mater University Hospital, Dublin, Ireland
| |
Collapse
|
86
|
Valikodath NG, Cole E, Ting DSW, Campbell JP, Pasquale LR, Chiang MF, Chan RVP. Impact of Artificial Intelligence on Medical Education in Ophthalmology. Transl Vis Sci Technol 2021; 10:14. [PMID: 34125146 PMCID: PMC8212436 DOI: 10.1167/tvst.10.7.14] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023] Open
Abstract
Clinical care in ophthalmology is rapidly evolving as artificial intelligence (AI) algorithms are being developed. The medical community and national and federal regulatory bodies are recognizing the importance of adapting to AI. However, there is a gap in physicians’ understanding of AI and its implications regarding its potential use in clinical care, and there are limited resources and established programs focused on AI and medical education in ophthalmology. Physicians are essential in the application of AI in a clinical context. An AI curriculum in ophthalmology can help provide physicians with a fund of knowledge and skills to integrate AI into their practice. In this paper, we provide general recommendations for an AI curriculum for medical students, residents, and fellows in ophthalmology.
Collapse
Affiliation(s)
- Nita G Valikodath
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, IL, USA
| | - Emily Cole
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, IL, USA
| | - Daniel S W Ting
- Singapore National Eye Center, Duke-NUS Medical School, Singapore
| | - J Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
| | - Louis R Pasquale
- Department of Ophthalmology, Icahn School of Medicine at Mount Sinai Hospital, New York, NY, USA
| | - Michael F Chiang
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
| | - R V Paul Chan
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, IL, USA
| | | |
Collapse
|
87
|
Zhang K, Liu X, Xu J, Yuan J, Cai W, Chen T, Wang K, Gao Y, Nie S, Xu X, Qin X, Su Y, Xu W, Olvera A, Xue K, Li Z, Zhang M, Zeng X, Zhang CL, Li O, Zhang EE, Zhu J, Xu Y, Kermany D, Zhou K, Pan Y, Li S, Lai IF, Chi Y, Wang C, Pei M, Zang G, Zhang Q, Lau J, Lam D, Zou X, Wumaier A, Wang J, Shen Y, Hou FF, Zhang P, Xu T, Zhou Y, Wang G. Deep-learning models for the detection and incidence prediction of chronic kidney disease and type 2 diabetes from retinal fundus images. Nat Biomed Eng 2021; 5:533-545. [PMID: 34131321 DOI: 10.1038/s41551-021-00745-6] [Citation(s) in RCA: 106] [Impact Index Per Article: 35.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2020] [Accepted: 05/12/2021] [Indexed: 02/05/2023]
Abstract
Regular screening for the early detection of common chronic diseases might benefit from the use of deep-learning approaches, particularly in resource-poor or remote settings. Here we show that deep-learning models can be used to identify chronic kidney disease and type 2 diabetes solely from fundus images or in combination with clinical metadata (age, sex, height, weight, body-mass index and blood pressure) with areas under the receiver operating characteristic curve of 0.85-0.93. The models were trained and validated with a total of 115,344 retinal fundus photographs from 57,672 patients and can also be used to predict estimated glomerular filtration rates and blood-glucose levels, with mean absolute errors of 11.1-13.4 ml min-1 per 1.73 m2 and 0.65-1.1 mmol l-1, and to stratify patients according to disease-progression risk. We evaluated the generalizability of the models for the identification of chronic kidney disease and type 2 diabetes with population-based external validation cohorts and via a prospective study with fundus images captured with smartphones, and assessed the feasibility of predicting disease progression in a longitudinal cohort.
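Models that combine fundus images with clinical metadata, as above, typically fuse CNN image features with the tabular variables before a final prediction head. The PyTorch sketch below shows one minimal way to do this; the backbone, feature fusion and head sizes are illustrative assumptions, not the published architecture.

```python
# Minimal fundus-image + clinical-metadata fusion model (illustrative design,
# not the published architecture). Metadata: age, sex, height, weight, BMI, BP.
import torch
import torch.nn as nn
from torchvision import models

class FundusMetadataModel(nn.Module):
    def __init__(self, n_metadata: int = 6, n_outputs: int = 1):
        super().__init__()
        self.backbone = models.resnet18(weights=None)
        feat_dim = self.backbone.fc.in_features
        self.backbone.fc = nn.Identity()            # expose 512-d image features
        self.head = nn.Sequential(
            nn.Linear(feat_dim + n_metadata, 128),
            nn.ReLU(),
            nn.Linear(128, n_outputs),              # e.g. CKD logit or an eGFR estimate
        )

    def forward(self, images: torch.Tensor, metadata: torch.Tensor) -> torch.Tensor:
        img_feat = self.backbone(images)
        return self.head(torch.cat([img_feat, metadata], dim=1))
```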
Collapse
Affiliation(s)
- Kang Zhang
- Center for Clinical Translational Innovations and Biomedical Big Data Center, West China Hospital and Sichuan University, Chengdu, China. .,Center for Biomedicine and Innovations, Faculty of Medicine, Macau University of Science and Technology and University Hospital, Macau, China.
| | - Xiaohong Liu
- Department of Computer Science and Technology, Tsinghua University, Beijing, China
| | - Jie Xu
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Beijing Ophthalmology and Visual Science Key Lab, Beijing, China.,State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China
| | - Jin Yuan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Wenjia Cai
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Ting Chen
- Department of Computer Science and Technology, Tsinghua University, Beijing, China.
| | - Kai Wang
- State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China
| | - Yuanxu Gao
- Center for Biomedicine and Innovations, Faculty of Medicine, Macau University of Science and Technology and University Hospital, Macau, China
| | - Sheng Nie
- State Key Laboratory of Organ Failure Research, National Clinical Research Center for Kidney Disease and Nanfang Hospital, Southern Medical University, Guangzhou, China
| | - Xiaodong Xu
- State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China
| | - Xiaoqi Qin
- State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China
| | - Yuandong Su
- Center for Clinical Translational Innovations and Biomedical Big Data Center, West China Hospital and Sichuan University, Chengdu, China
| | - Wenqin Xu
- Center for Clinical Translational Innovations and Biomedical Big Data Center, West China Hospital and Sichuan University, Chengdu, China
| | - Andrea Olvera
- Center for Clinical Translational Innovations and Biomedical Big Data Center, West China Hospital and Sichuan University, Chengdu, China
| | - Kanmin Xue
- Nuffield Laboratory of Ophthalmology, Department of Clinical Neurosciences, University of Oxford and Oxford University Hospitals NHS Foundation Trust, Oxford, UK
| | - Zhihuan Li
- Center for Clinical Translational Innovations and Biomedical Big Data Center, West China Hospital and Sichuan University, Chengdu, China
| | - Meixia Zhang
- Center for Clinical Translational Innovations and Biomedical Big Data Center, West China Hospital and Sichuan University, Chengdu, China
| | - Xiaoxi Zeng
- Center for Clinical Translational Innovations and Biomedical Big Data Center, West China Hospital and Sichuan University, Chengdu, China.,Kidney Research Institute, Nephrology Division, West China Hospital and Sichuan University, Chengdu, China
| | - Charlotte L Zhang
- Bioland Laboratory (Guangzhou Regenerative Medicine and Health Guangdong Laboratory), Guangzhou, China
| | - Oulan Li
- Bioland Laboratory (Guangzhou Regenerative Medicine and Health Guangdong Laboratory), Guangzhou, China
| | - Edward E Zhang
- Bioland Laboratory (Guangzhou Regenerative Medicine and Health Guangdong Laboratory), Guangzhou, China
| | - Jie Zhu
- Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China
| | - Yiming Xu
- Department of Computer Science and Technology, Tsinghua University, Beijing, China
| | - Daniel Kermany
- Center for Clinical Translational Innovations and Biomedical Big Data Center, West China Hospital and Sichuan University, Chengdu, China
| | - Kaixin Zhou
- Bioland Laboratory (Guangzhou Regenerative Medicine and Health Guangdong Laboratory), Guangzhou, China
| | - Ying Pan
- Department of Endocrinology, Kunshan Hospital Affiliated to Jiangsu University, Kunshan, China
| | - Shaoyun Li
- The Big Data Research Center, Chongqing Renji affiliated Hospital to the University of Chinese Academy of Sciences, Chongqing, China
| | - Iat Fan Lai
- Ophthalmic Center, Kiang Wu Hospital, Macau, China
| | - Ying Chi
- Peking University First Affiliated Hospital, Beijing, China
| | - Changuang Wang
- Peking University Third Affiliated Hospital, Beijing, China
| | - Michelle Pei
- Center for Biomedicine and Innovations, Faculty of Medicine, Macau University of Science and Technology and University Hospital, Macau, China
| | - Guangxi Zang
- Center for Biomedicine and Innovations, Faculty of Medicine, Macau University of Science and Technology and University Hospital, Macau, China
| | - Qi Zhang
- Biotherapy Center, Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
| | - Johnson Lau
- Department of Applied Biology and Chemical Technology, Hong Kong Polytechnic University, Hong Kong, China
| | - Dennis Lam
- Department of Applied Biology and Chemical Technology, Hong Kong Polytechnic University, Hong Kong, China.,C-MER Dennis Lam and Partners Eye Center, C-MER International Eye Care Group, Hong Kong, China
| | - Xiaoguang Zou
- Ophthalmic Center of the First People's Hospital of Kashi Prefecture, Kashi Prefecture, Xinjiang, China
| | - Aizezi Wumaier
- Ophthalmic Center of the First People's Hospital of Kashi Prefecture, Kashi Prefecture, Xinjiang, China
| | - Jianquan Wang
- Ophthalmic Center of the First People's Hospital of Kashi Prefecture, Kashi Prefecture, Xinjiang, China
| | - Yin Shen
- Medical Research Institute, Wuhan University, Wuhan, China
| | - Fan Fan Hou
- State Key Laboratory of Organ Failure Research, National Clinical Research Center for Kidney Disease and Nanfang Hospital, Southern Medical University, Guangzhou, China
| | - Ping Zhang
- State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China
| | - Tao Xu
- Bioland Laboratory (Guangzhou Regenerative Medicine and Health Guangdong Laboratory), Guangzhou, China.
| | - Yong Zhou
- Clinical Research Institute, Shanghai General Hospital, Shanghai Jiaotong University School of Medicine, Shanghai, China.
| | - Guangyu Wang
- State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China.
| |
Collapse
|
88
|
Ran A, Cheung CY. Deep Learning-Based Optical Coherence Tomography and Optical Coherence Tomography Angiography Image Analysis: An Updated Summary. Asia Pac J Ophthalmol (Phila) 2021; 10:253-260. [PMID: 34383717 DOI: 10.1097/apo.0000000000000405] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022] Open
Abstract
Deep learning (DL) is a subset of artificial intelligence based on deep neural networks. It has made remarkable breakthroughs in medical imaging, particularly for image classification and pattern recognition. In ophthalmology, there is rising interest in applying DL methods to analyze optical coherence tomography (OCT) and optical coherence tomography angiography (OCTA) images. Studies showed that OCT and OCTA image evaluation by DL algorithms achieved good performance for disease detection, prognosis prediction, and image quality control, suggesting that the incorporation of DL technology could potentially enhance the accuracy of disease evaluation and the efficiency of clinical workflow. However, substantial issues, such as small training sample size, data preprocessing standardization, model robustness, results explanation, and performance cross-validation, are yet to be tackled before deploying these DL models in real-world clinical practice. This review summarized recent studies on DL-based image analysis models for OCT and OCTA images and discussed the potential challenges of clinical deployment and future research directions.
Collapse
Affiliation(s)
- Anran Ran
- Department of Ophthalmology and Visual Sciences, the Chinese University of Hong Kong, Hong Kong SAR
| | | |
Collapse
|
89
|
Patil S, Choudhary S. Deep convolutional neural network for chronic kidney disease prediction using ultrasound imaging. BIO-ALGORITHMS AND MED-SYSTEMS 2021. [DOI: 10.1515/bams-2020-0068] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
Objectives
Chronic kidney disease (CKD) is common and is associated with a higher risk of cardiovascular disease and end-stage renal disease, both of which can be mitigated by earlier recognition and diagnosis of individuals at risk. Although risk factors for CKD have been recognized, the effectiveness of CKD risk classification via prediction models remains uncertain. This paper introduces a new predictive model for CKD based on ultrasound (US) images.
Methods
The proposed model includes three main phases: (1) preprocessing, (2) feature extraction, and (3) classification. In the first phase, the input image is preprocessed with image inpainting and median filtering. Feature extraction then proceeds under four cases: (a) texture analysis to capture textural characteristics, (b) the proposed high-level-feature-enabled local binary pattern (LBP) extraction, (c) area-based feature extraction, and (d) mean-intensity-based feature extraction. The extracted features are then classified with an optimized deep convolutional neural network (DCNN). To make the prediction more accurate, the weights and activation function of the DCNN are chosen by a new hybrid model termed the diversity-maintained hybrid whale moth flame optimization (DM-HWM) model (a minimal sketch of this pipeline appears after the abstract).
Results
The accuracy of the adopted model at the 40th training percentage was 44.72%, 11.02%, 5.59%, 3.92%, 3.92%, 3.57%, 2.59%, 1.71%, 1.68%, and 0.42% higher than that of traditional artificial neural network (ANN), support vector machine (SVM), NB, J48, NB-tree, LR, composite hypercube on iterated random projection (CHIRP), CNN, moth flame optimization (MFO), and whale optimization algorithm (WOA) models, respectively.
Conclusions
The superiority of the adopted scheme over other conventional models is thus validated across multiple performance measures.
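The three-phase pipeline described above can be illustrated with a minimal sketch. This is not the authors' implementation: the optimized DCNN and the DM-HWM optimizer are not publicly available, so simplified versions of the four feature extractors are shown, a generic scikit-learn classifier stands in for the final stage, and the synthetic images and labels are placeholders.

```python
# Minimal sketch of the preprocessing -> feature extraction -> classification
# pipeline described above. Simplified approximations only; the paper's
# optimized DCNN and DM-HWM optimizer are not reproduced here.
import numpy as np
from skimage import feature, filters
from sklearn.svm import SVC

def extract_features(img: np.ndarray) -> np.ndarray:
    """img: 2-D grayscale ultrasound image with values in [0, 1]."""
    # (1) Preprocessing: median filtering to suppress speckle noise
    #     (the paper also applies image inpainting, omitted here).
    denoised = filters.median(img)
    # (2a) Texture proxy: global variance of the denoised image.
    texture = float(denoised.var())
    # (2b) Local binary pattern histogram (the paper uses a "high-level" LBP variant).
    lbp = feature.local_binary_pattern((denoised * 255).astype(np.uint8), P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    # (2c) Area-based feature: fraction of pixels above Otsu's threshold.
    area = float(np.mean(denoised > filters.threshold_otsu(denoised)))
    # (2d) Mean-intensity feature.
    mean_intensity = float(denoised.mean())
    return np.concatenate([[texture, area, mean_intensity], lbp_hist])

# (3) Classification: a plain SVM stands in for the optimized DCNN.
rng = np.random.default_rng(0)
images = rng.random((20, 64, 64))            # synthetic stand-ins for US scans
labels = rng.integers(0, 2, size=20)         # synthetic CKD / non-CKD labels
X = np.stack([extract_features(img) for img in images])
clf = SVC(kernel="rbf").fit(X, labels)
```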
Collapse
Affiliation(s)
- Smitha Patil
- Research Scholar, VTU, RC Sir MVIT, Bengaluru, India
- Assistant Professor, Presidency University, Bengaluru, India
| | | |
Collapse
|
90
|
Xiao W, Huang X, Wang JH, Lin DR, Zhu Y, Chen C, Yang YH, Xiao J, Zhao LQ, Li JPO, Cheung CYL, Mise Y, Guo ZY, Du YF, Chen BB, Hu JX, Zhang K, Lin XS, Wen W, Liu YZ, Chen WR, Zhong YS, Lin HT. Screening and identifying hepatobiliary diseases through deep learning using ocular images: a prospective, multicentre study. LANCET DIGITAL HEALTH 2021; 3:e88-e97. [PMID: 33509389 DOI: 10.1016/s2589-7500(20)30288-0] [Citation(s) in RCA: 41] [Impact Index Per Article: 13.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/09/2020] [Revised: 11/07/2020] [Accepted: 11/13/2020] [Indexed: 12/12/2022]
Abstract
BACKGROUND Ocular changes are traditionally associated with only a few hepatobiliary diseases. These changes are non-specific and have a low detection rate, limiting their potential use as clinically independent diagnostic features. Therefore, we aimed to engineer deep learning models to establish associations between ocular features and major hepatobiliary diseases and to advance automated screening and identification of hepatobiliary diseases from ocular images. METHODS We did a multicentre, prospective study to develop models using slit-lamp or retinal fundus images from participants in three hepatobiliary departments and two medical examination centres. Included participants were older than 18 years and had complete clinical information; participants diagnosed with acute hepatobiliary diseases were excluded. We trained seven slit-lamp models and seven fundus models (with or without hepatobiliary disease [screening model] or one specific disease type within six categories [identifying model]) using a development dataset, and we tested the models with an external test dataset. Additionally, we did a visual explanation and occlusion test. Model performances were evaluated using the area under the receiver operating characteristic curve (AUROC), sensitivity, specificity, and F1* score. FINDINGS Between Dec 16, 2018, and July 31, 2019, we collected data from 1252 participants (from the Department of Hepatobiliary Surgery of the Third Affiliated Hospital of Sun Yat-sen University, the Department of Infectious Diseases of the Affiliated Huadu Hospital of Southern Medical University, and the Nantian Medical Centre of Aikang Health Care [Guangzhou, China]) for the development dataset; between Aug 14, 2019, and Jan 31, 2020, we collected data from 537 participants (from the Department of Infectious Diseases of the Third Affiliated Hospital of Sun Yat-sen University and the Huanshidong Medical Centre of Aikang Health Care [Guangzhou, China]) for the test dataset. The AUROC for screening for hepatobiliary diseases of the slit-lamp model was 0·74 (95% CI 0·71-0·76), whereas that of the fundus model was 0·68 (0·65-0·71). For the identification of hepatobiliary diseases, the AUROCs were 0·93 (0·91-0·94; slit-lamp) and 0·84 (0·81-0·86; fundus) for liver cancer, 0·90 (0·88-0·91; slit-lamp) and 0·83 (0·81-0·86; fundus) for liver cirrhosis, and ranged 0·58-0·69 (0·55-0·71; slit-lamp) and 0·62-0·70 (0·58-0·73; fundus) for other hepatobiliary diseases, including chronic viral hepatitis, non-alcoholic fatty liver disease, cholelithiasis, and hepatic cyst. In addition to the conjunctiva and sclera, our deep learning model revealed that the structures of the iris and fundus also contributed to the classification. INTERPRETATION Our study established qualitative associations between ocular features and major hepatobiliary diseases, providing a non-invasive, convenient, and complementary method for hepatobiliary disease screening and identification, which could be applied as an opportunistic screening tool. FUNDING Science and Technology Planning Projects of Guangdong Province; National Key R&D Program of China; Guangzhou Key Laboratory Project; National Natural Science Foundation of China.
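The visual explanation reported in this study relies on an occlusion test. The sketch below shows the generic form of such a test rather than the authors' code: a grey patch is slid across the ocular image and the drop in the predicted probability of the target class is recorded as a coarse saliency map. The `occlusion_map` function, the dummy model, the patch size, and the class count are all illustrative assumptions.

```python
# Generic occlusion test: mask one region at a time and measure how much the
# model's confidence in the target class drops. Regions with large drops
# (e.g. conjunctiva, sclera, iris, fundus structures) are the ones the model relies on.
import numpy as np

def occlusion_map(model, image, target_class, patch=32, stride=16, fill=0.5):
    h, w, _ = image.shape
    baseline = model(image[None])[0, target_class]            # unoccluded probability
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch, :] = fill       # grey-out one region
            heat[i, j] = baseline - model(occluded[None])[0, target_class]
    return heat

# Illustrative usage with a dummy model that returns uniform probabilities
# over an arbitrary number of classes (7 chosen here for illustration only).
dummy_model = lambda batch: np.full((batch.shape[0], 7), 1.0 / 7)
heat = occlusion_map(dummy_model, np.random.rand(224, 224, 3), target_class=2)
```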
Collapse
Affiliation(s)
- Wei Xiao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, China
| | - Xi Huang
- Department of Hepatobiliary Surgery, Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| | - Jing Hui Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, China
| | - Duo Ru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, China
| | - Yi Zhu
- Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, FL, USA
| | - Chuan Chen
- Sylvester Comprehensive Cancer Centre, University of Miami Miller School of Medicine, Miami, FL, USA
| | - Ya Han Yang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, China
| | - Jun Xiao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, China
| | - Lan Qin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, China
| | | | - Carol Yim-Lui Cheung
- Department of Ophthalmology and Visual Sciences, Chinese University of Hong Kong, Hong Kong, China
| | - Yoshihiro Mise
- Department of Hepatobiliary and Pancreatic Surgery, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan
| | - Zhi Yong Guo
- Organ Transplant Centre, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| | - Yun Feng Du
- Vistel AI Lab, Visionary Intelligence, Beijing, China
| | - Bai Bing Chen
- College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
| | - Jing Xiong Hu
- Department of Hepatobiliary Surgery, Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| | - Kai Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, China
| | - Xiao Shan Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, China
| | - Wen Wen
- National Centre for Liver Cancer, Eastern Hepatobiliary Surgery Hospital, Second Military Medical University, Shanghai, China
| | - Yi Zhi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, China
| | - Wei Rong Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, China
| | - Yue Si Zhong
- Department of Hepatobiliary Surgery, Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China.
| | - Hao Tian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, China; Centre for Precision Medicine, Sun Yat-sen University, Guangzhou, China.
| |
Collapse
|
91
|
JAMTHIKAR AD, PUVVULA A, GUPTA D, JOHRI AM, NAMBI V, KHANNA NN, SABA L, MAVROGENI S, LAIRD JR, PAREEK G, MINER M, SFIKAKIS PP, PROTOGEROU A, KITAS GD, NICOLAIDES A, SHARMA AM, VISWANATHAN V, RATHORE VS, KOLLURI R, BHATT DL, SURI JS. Cardiovascular disease and stroke risk assessment in patients with chronic kidney disease using integration of estimated glomerular filtration rate, ultrasonic image phenotypes, and artificial intelligence: a narrative review. INT ANGIOL 2021; 40:150-164. [DOI: 10.23736/s0392-9590.20.04538-1] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
|
92
|
Paterson EN, Cardwell C, MacGillivray TJ, Trucco E, Doney AS, Foster P, Maxwell AP, McKay GJ. Investigation of associations between retinal microvascular parameters and albuminuria in UK Biobank: a cross-sectional case-control study. BMC Nephrol 2021; 22:72. [PMID: 33632154 PMCID: PMC7908698 DOI: 10.1186/s12882-021-02273-6] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2020] [Accepted: 02/18/2021] [Indexed: 12/12/2022] Open
Abstract
Background Associations between microvascular variation and chronic kidney disease (CKD) have been reported previously. Non-invasive retinal fundus imaging enables evaluation of the microvascular network and may offer insight to systemic risk associated with CKD. Methods Retinal microvascular parameters (fractal dimension [FD] – a measure of the complexity of the vascular network, tortuosity, and retinal arteriolar and venular calibre) were quantified from macula-centred fundus images using the Vessel Assessment and Measurement Platform for Images of the REtina (VAMPIRE) version 3.1 (VAMPIRE group, Universities of Dundee and Edinburgh, Scotland) and assessed for associations with renal damage in a case-control study nested within the multi-centre UK Biobank cohort study. Participants were designated cases or controls based on urinary albumin to creatinine ratio (ACR) thresholds. Participants with ACR ≥ 3 mg/mmol (ACR stages A2-A3) were characterised as cases, and those with an ACR < 3 mg/mmol (ACR stage A1) were categorised as controls. Participants were matched on age, sex and ethnic background. Results Lower FD (less extensive microvascular branching) was associated with a small increase in odds of albuminuria independent of blood pressure, diabetes and other potential confounding variables (odds ratio [OR] 1.18, 95% confidence interval [CI] 1.03–1.34 for arterioles and OR 1.24, CI 1.05–1.47 for venules). Measures of tortuosity or retinal arteriolar and venular calibre were not significantly associated with ACR. Conclusions This study supports previously reported associations between retinal microvascular FD and other metabolic disturbances affecting the systemic vasculature. The association between retinal microvascular FD and albuminuria, independent of diabetes and blood pressure, may represent a useful indicator of systemic vascular damage associated with albuminuria. Supplementary Information The online version contains supplementary material available at 10.1186/s12882-021-02273-6.
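Fractal dimension here is a box-counting style measure of how completely the vascular tree fills the image plane, with lower values indicating less extensive branching. The sketch below is a generic box-counting estimate on a segmented vessel mask, not the VAMPIRE implementation; the box sizes and the binary `vessel_mask` input are illustrative assumptions.

```python
# Generic box-counting fractal dimension of a binary retinal vessel mask.
# Not the VAMPIRE algorithm; shown only to make the FD measure concrete.
import numpy as np

def box_counting_fd(vessel_mask: np.ndarray) -> float:
    sizes = [2, 4, 8, 16, 32, 64]
    counts = []
    for s in sizes:
        h, w = vessel_mask.shape
        trimmed = vessel_mask[: h - h % s, : w - w % s]
        # mark each s-by-s box that contains at least one vessel pixel
        boxes = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    # FD is the slope of log(box count) against log(1 / box size)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)

# A single straight "vessel" should yield an FD close to 1; a space-filling
# vascular tree approaches 2.
mask = np.zeros((128, 128), dtype=bool)
mask[64, :] = True
print(round(box_counting_fd(mask), 2))   # ~1.0
```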
Collapse
Affiliation(s)
- Euan N Paterson
- Centre for Public Health, Institute of Clinical Science, Queen's University Belfast, Block B, Royal Hospital, Grosvenor Road, Belfast, Northern Ireland, BT12 6BA
| | - Chris Cardwell
- Centre for Public Health, Institute of Clinical Science, Queen's University Belfast, Block B, Royal Hospital, Grosvenor Road, Belfast, Northern Ireland, BT12 6BA
| | - Thomas J MacGillivray
- VAMPIRE project, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, Scotland, UK
| | - Emanuele Trucco
- VAMPIRE project, Computer Vision and Image Processing Group, School of Science and Engineering (Computing), University of Dundee, Dundee, UK
| | - Alexander S Doney
- Ninewells Hospital and Medical School, University of Dundee, Dundee, UK
| | | | - Alexander P Maxwell
- Centre for Public Health, Institute of Clinical Science, Queen's University Belfast, Block B, Royal Hospital, Grosvenor Road, Belfast, Northern Ireland, BT12 6BA
| | - Gareth J McKay
- Centre for Public Health, Institute of Clinical Science, Queen's University Belfast, Block B, Royal Hospital, Grosvenor Road, Belfast, Northern Ireland, BT12 6BA.
| | | |
Collapse
|
93
|
Esteva A, Chou K, Yeung S, Naik N, Madani A, Mottaghi A, Liu Y, Topol E, Dean J, Socher R. Deep learning-enabled medical computer vision. NPJ Digit Med 2021; 4:5. [PMID: 33420381 PMCID: PMC7794558 DOI: 10.1038/s41746-020-00376-2] [Citation(s) in RCA: 256] [Impact Index Per Article: 85.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2020] [Accepted: 12/01/2020] [Indexed: 02/07/2023] Open
Abstract
A decade of unprecedented progress in artificial intelligence (AI) has demonstrated the potential for many fields-including medicine-to benefit from the insights that AI techniques can extract from data. Here we survey recent progress in the development of modern computer vision techniques-powered by deep learning-for medical applications, focusing on medical imaging, medical video, and clinical deployment. We start by briefly summarizing a decade of progress in convolutional neural networks, including the vision tasks they enable, in the context of healthcare. Next, we discuss several example medical imaging applications that stand to benefit-including cardiology, pathology, dermatology, ophthalmology-and propose new avenues for continued work. We then expand into general medical video, highlighting ways in which clinical workflows can integrate computer vision to enhance care. Finally, we discuss the challenges and hurdles required for real-world clinical deployment of these technologies.
Collapse
Affiliation(s)
| | | | | | - Nikhil Naik
- Salesforce AI Research, San Francisco, CA, USA
| | - Ali Madani
- Salesforce AI Research, San Francisco, CA, USA
| | | | - Yun Liu
- Google Research, Mountain View, CA, USA
| | - Eric Topol
- Scripps Research Translational Institute, La Jolla, CA, USA
| | - Jeff Dean
- Google Research, Mountain View, CA, USA
| | | |
Collapse
|
94
|
Artificial Intelligence and Deep Learning in Ophthalmology. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_200-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
95
|
Kang EYC, Hsieh YT, Li CH, Huang YJ, Kuo CF, Kang JH, Chen KJ, Lai CC, Wu WC, Hwang YS. Deep Learning-Based Detection of Early Renal Function Impairment Using Retinal Fundus Images: Model Development and Validation. JMIR Med Inform 2020; 8:e23472. [PMID: 33139242 PMCID: PMC7728538 DOI: 10.2196/23472] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2020] [Revised: 09/01/2020] [Accepted: 10/30/2020] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND Retinal imaging has been applied for detecting eye diseases and cardiovascular risks using deep learning-based methods. Furthermore, retinal microvascular and structural changes were found in renal function impairments. However, a deep learning-based method using retinal images for detecting early renal function impairment has not yet been well studied. OBJECTIVE This study aimed to develop and evaluate a deep learning model for detecting early renal function impairment using retinal fundus images. METHODS This retrospective study enrolled patients who underwent renal function tests with color fundus images captured at any time between January 1, 2001, and August 31, 2019. A deep learning model was constructed to detect impaired renal function from the images. Early renal function impairment was defined as estimated glomerular filtration rate <90 mL/min/1.73 m2. Model performance was evaluated with respect to the receiver operating characteristic curve and area under the curve (AUC). RESULTS In total, 25,706 retinal fundus images were obtained from 6212 patients for the study period. The images were divided at an 8:1:1 ratio. The training, validation, and testing data sets respectively contained 20,787, 2189, and 2730 images from 4970, 621, and 621 patients. There were 10,686 and 15,020 images determined to indicate normal and impaired renal function, respectively. The AUC of the model was 0.81 in the overall population. In subgroups stratified by serum hemoglobin A1c (HbA1c) level, the AUCs were 0.81, 0.84, 0.85, and 0.87 for the HbA1c levels of ≤6.5%, >6.5%, >7.5%, and >10%, respectively. CONCLUSIONS The deep learning model in this study enables the detection of early renal function impairment using retinal fundus images. The model was more accurate for patients with elevated serum HbA1c levels.
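The evaluation described above reduces to labelling each image by an eGFR threshold and reporting AUC overall and within HbA1c strata. A minimal sketch of that bookkeeping is shown below under stated assumptions: the DataFrame layout and the column names (`egfr`, `hba1c`, `prob`) are illustrative rather than taken from the paper, and the model probabilities are assumed to be precomputed.

```python
# Sketch of the reported evaluation: label images by eGFR < 90 mL/min/1.73 m2
# and compute AUC overall and within the HbA1c subgroups listed in the abstract.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

def evaluate(df: pd.DataFrame) -> pd.Series:
    df = df.assign(label=(df["egfr"] < 90).astype(int))       # 1 = early impairment
    results = {"overall": roc_auc_score(df["label"], df["prob"])}
    subgroups = {
        "HbA1c<=6.5%": df["hba1c"] <= 6.5,
        "HbA1c>6.5%": df["hba1c"] > 6.5,
        "HbA1c>7.5%": df["hba1c"] > 7.5,
        "HbA1c>10%": df["hba1c"] > 10,
    }
    for name, mask in subgroups.items():
        sub = df[mask]
        if sub["label"].nunique() == 2:                        # AUC needs both classes
            results[name] = roc_auc_score(sub["label"], sub["prob"])
    return pd.Series(results)

# Illustrative usage on synthetic data (real inputs would be per-image model outputs).
rng = np.random.default_rng(0)
df = pd.DataFrame({"egfr": rng.uniform(40, 120, 500),
                   "hba1c": rng.uniform(5.0, 11.0, 500),
                   "prob": rng.uniform(0.0, 1.0, 500)})
print(evaluate(df))
```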
Collapse
Affiliation(s)
- Eugene Yu-Chuan Kang
- Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan, Taiwan
- College of Medicine, Chang Gung University, Taoyuan, Taiwan
| | - Yi-Ting Hsieh
- Department of Ophthalmology, National Taiwan University Hospital, Taipei, Taiwan
| | | | - Yi-Jin Huang
- Acer Healthcare Incorporated, New Taipei, Taiwan
| | - Chang-Fu Kuo
- College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Center for Artificial Intelligence in Medicine, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan, Taiwan
| | - Je-Ho Kang
- Department of Nephrology, Yang Ming Hospital, Taoyuan, Taiwan
| | - Kuan-Jen Chen
- Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan, Taiwan
- College of Medicine, Chang Gung University, Taoyuan, Taiwan
| | - Chi-Chun Lai
- Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan, Taiwan
- College of Medicine, Chang Gung University, Taoyuan, Taiwan
| | - Wei-Chi Wu
- Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan, Taiwan
- College of Medicine, Chang Gung University, Taoyuan, Taiwan
| | - Yih-Shiou Hwang
- Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan, Taiwan
- College of Medicine, Chang Gung University, Taoyuan, Taiwan
| |
Collapse
|
96
|
Prediction of systemic biomarkers from retinal photographs: development and validation of deep-learning algorithms. LANCET DIGITAL HEALTH 2020; 2:e526-e536. [DOI: 10.1016/s2589-7500(20)30216-8] [Citation(s) in RCA: 34] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/25/2020] [Revised: 08/13/2020] [Accepted: 08/16/2020] [Indexed: 01/01/2023]
|
97
|
Amitava AK. Commentary: How useful is a deep learning smartphone application for screening for amblyogenic risk factors? Indian J Ophthalmol 2020; 68:1411. [PMID: 32587178 PMCID: PMC7574105 DOI: 10.4103/ijo.ijo_1900_20] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022] Open
Affiliation(s)
- Abadan K Amitava
- Institute of Ophthalmology, JN Medical College, Aligarh Muslim University, Aligarh, Uttar Pradesh, India
| |
Collapse
|
98
|
Waldstein SM. Opportunistic deep learning of retinal photographs: the window to the body revisited. Lancet Digit Health 2020; 2:e269-e270. [PMID: 33328117 DOI: 10.1016/s2589-7500(20)30080-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2020] [Accepted: 04/03/2020] [Indexed: 06/12/2023]
Affiliation(s)
- Sebastian M Waldstein
- Department of Ophthalmology, Medical University of Vienna, Vienna, Austria; Department of Ophthalmology, Westmead Hospital, University of Sydney, Sydney, Australia.
| |
Collapse
|