1
Guo T, Liu K, Zou H, Xu X, Yang J, Yu Q. Refined image quality assessment for color fundus photography based on deep learning. Digit Health 2024; 10:20552076231207582. [PMID: 38425654] [PMCID: PMC10903193] [DOI: 10.1177/20552076231207582]
Abstract
Purpose Color fundus photography is widely used in clinical and screening settings for eye diseases. Poor image quality greatly affects the reliability of further evaluation and diagnosis. In this study, we developed an automated module for color fundus photography image quality assessment using deep learning. Methods A total of 55,931 color fundus photography images from multiple centers in Shanghai and a public database were collected and annotated as training, validation, and testing data sets. A pre-diagnosis image quality assessment module based on a multi-task deep neural network was designed. A detailed criterion for color fundus photography image quality, comprising three subcategories each graded at three levels, was applied to improve precision and objectivity. Auxiliary tasks, such as localization of the optic nerve head and macula and classification of laterality and field of view, were included to assist the quality assessment. Finally, we validated our module internally and externally by evaluating the area under the receiver operating characteristic curve (AUROC), sensitivity, specificity, accuracy, and quadratic weighted Kappa. Results The "Location" subcategory achieved AUROCs of 0.991, 0.920, and 0.946 for the three grades, respectively. The "Clarity" subcategory achieved AUROCs of 0.980, 0.917, and 0.954, and the "Artifact" subcategory achieved AUROCs of 0.976, 0.952, and 0.986. The accuracy and Kappa for overall quality reached 88.15% and 89.70%, respectively, on the internal set. On the external set these two indicators were 86.63% and 88.55%, respectively, very close to those of the internal set.
Conclusions This work showed that our deep learning module was able to evaluate color fundus photography image quality using a more detailed criterion of three subcategories, each with three grades. The promising results on both internal and external validation indicate the strength and generalizability of our module.
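The overall-quality agreement reported above is scored with quadratic weighted Kappa. As a minimal sketch of how that statistic works for an ordinal three-grade scale (the labels below are invented for illustration; this is not the authors' implementation):

```python
# Quadratic weighted kappa for an ordinal three-grade quality scale:
# agreement beyond chance, with disagreements penalized by the squared
# distance between grades. Hypothetical labels, minimal sketch only.
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes=3):
    # Observed confusion matrix O
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    # Quadratic disagreement weights W and chance-expected matrix E
    idx = np.arange(n_classes)
    W = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1.0 - (W * O).sum() / (W * E).sum()

y_true = [0, 0, 1, 1, 2, 2, 2, 1]
y_pred = [0, 1, 1, 1, 2, 2, 1, 1]
print(round(quadratic_weighted_kappa(y_true, y_pred), 3))  # 0.742
```

Perfect agreement gives 1.0; chance-level agreement gives 0.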
Affiliation(s)
- Tianjiao Guo
- Institute of Medical Robotics, Shanghai Jiao Tong University, China
- Department of Automation, Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, China
- Kun Liu
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Clinical Research Center for Eye Diseases, China
- Haidong Zou
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Clinical Research Center for Eye Diseases, China
- Xun Xu
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Clinical Research Center for Eye Diseases, China
- Jie Yang
- Department of Automation, Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, China
- Qi Yu
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Clinical Research Center for Eye Diseases, China
2
Lapka M, Straňák Z. The Current State of Artificial Intelligence in Neuro-Ophthalmology: A Review. Cesk Slov Oftalmol 2024; 80:179-186. [PMID: 38538291] [DOI: 10.31348/2023/33]
Abstract
This article summarizes recent advances in the development and use of complex systems using artificial intelligence (AI) in neuro-ophthalmology. Its aim is to present the principles of AI and the algorithms that are currently being used, or are still under evaluation or validation, within the neuro-ophthalmology environment. For the purpose of this text, a literature search was conducted using specific keywords in available scientific databases, cumulatively up to April 2023. The AI systems developed for neuro-ophthalmology mostly achieve high sensitivity, specificity, and accuracy. Individual AI systems and algorithms are selected, briefly described, and compared in the article. The results of the individual studies differ significantly depending on the chosen methodology, the set goals, the size of the evaluated data set, and the evaluated parameters. It has been demonstrated that AI can greatly speed up the evaluation of various diseases and make diagnosis more efficient, showing high potential as a useful clinical tool even with a significant increase in the number of patients.
3
Zia T, Wahab A, Windridge D, Tirunagari S, Bhatti NB. Visual attribution using Adversarial Latent Transformations. Comput Biol Med 2023; 166:107521. [PMID: 37778213] [DOI: 10.1016/j.compbiomed.2023.107521]
Abstract
The ability to accurately locate all indicators of disease within medical images is vital for comprehending the effects of the disease, as well as for weakly-supervised segmentation and localization of the diagnostic correlators of disease. Existing methods either use classifiers to make predictions based on class-salient regions, or use adversarial-learning-based image-to-image translation to capture such disease effects. However, the former does not capture all relevant features for visual attribution (VA) and is prone to data biases; the latter can generate adversarial (misleading) and inefficient solutions when operating directly on pixel values. To address this issue, we propose a novel approach, Visual Attribution using Adversarial Latent Transformations (VA2LT). Our method uses adversarial learning to generate counterfactual (CF) normal images from abnormal images by finding and modifying discrepancies in the latent space. We use cycle consistency between the query and CF latent representations to guide our training. We evaluate our method on three datasets: a synthetic dataset, the Alzheimer's Disease Neuroimaging Initiative dataset, and the BraTS dataset. Our method outperforms baseline and related methods on all datasets.
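The latent cycle-consistency idea can be illustrated with a deliberately simplified sketch in which linear maps stand in for the learned forward and backward latent transformations; the matrices and names are invented for illustration and are not taken from VA2LT:

```python
# Toy latent cycle-consistency check: F maps an "abnormal" latent code toward
# the "normal" manifold, G maps it back; training would minimize the cycle
# loss ||G(F(z)) - z||^2. Linear maps stand in for learned networks here.
import numpy as np

F = np.array([[2.0, 0.0], [0.0, 0.5]])  # forward latent transformation
G = np.linalg.inv(F)                    # ideal backward transformation

def cycle_loss(z):
    return float(np.mean((G @ (F @ z) - z) ** 2))

z = np.array([0.3, -1.2])
print(cycle_loss(z) < 1e-12)  # a perfect inverse gives zero cycle loss
```

When the backward map exactly undoes the forward map, the cycle loss vanishes; in training, driving this loss down keeps the counterfactual's latent code anchored to the query's.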
Affiliation(s)
- Tehseen Zia
- COMSATS University Islamabad, Pakistan; Medical Imaging and Diagnostics Lab, National Center of Artificial Intelligence, Pakistan.
- Abdul Wahab
- COMSATS University Islamabad, Pakistan; Medical Imaging and Diagnostics Lab, National Center of Artificial Intelligence, Pakistan
4
Nazir S, Dickson DM, Akram MU. Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks. Comput Biol Med 2023; 156:106668. [PMID: 36863192] [DOI: 10.1016/j.compbiomed.2023.106668]
Abstract
Artificial Intelligence (AI) techniques based on deep learning have revolutionized disease diagnosis with their outstanding image classification performance. In spite of these results, the widespread adoption of such techniques in clinical practice is still taking place at a moderate pace. One of the major hindrances is that a trained Deep Neural Network (DNN) model provides a prediction, but questions about why and how that prediction was made remain unanswered. This linkage is of utmost importance in the regulated healthcare domain to increase trust in the automated diagnosis system among practitioners, patients, and other stakeholders. The application of deep learning to medical imaging must be interpreted with caution due to health and safety concerns, similar to blame attribution in an accident involving an autonomous car. The consequences of both false positive and false negative cases are far reaching for patients' welfare and cannot be ignored. This is exacerbated by the fact that state-of-the-art deep learning algorithms comprise complex interconnected structures and millions of parameters, and have a 'black box' nature offering little understanding of their inner workings, unlike traditional machine learning algorithms. Explainable AI (XAI) techniques help practitioners understand model predictions, which develops trust in the system, accelerates disease diagnosis, and supports adherence to regulatory requirements. This survey provides a comprehensive review of the promising field of XAI for biomedical imaging diagnostics. We also provide a categorization of XAI techniques, discuss the open challenges, and provide future directions for XAI that would be of interest to clinicians, regulators, and model developers.
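As one concrete example of the perturbation-based family of XAI techniques that such surveys categorize, occlusion sensitivity masks image patches and measures the drop in the model's output score; the "model" below is an invented stand-in for a trained classifier, not an API from the survey:

```python
# Occlusion sensitivity: a simple perturbation-based attribution method.
# Hide one patch at a time and record how much the model's score drops;
# large drops mark regions the model relies on. Toy model for illustration.
import numpy as np

def occlusion_map(image, model, patch=4):
    base = model(image)
    h, w = image.shape
    heat = np.zeros_like(image, dtype=float)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y+patch, x:x+patch] = 0.0  # mask one patch
            # Importance = score drop when this patch is hidden
            heat[y:y+patch, x:x+patch] = base - model(occluded)
    return heat

# Toy model: "disease score" is the mean intensity of the top-left quadrant.
model = lambda img: float(img[:8, :8].mean())
img = np.ones((16, 16))
heat = occlusion_map(img, model, patch=8)
print(heat[:8, :8].mean() > heat[8:, 8:].mean())  # top-left matters most
```

The resulting heat map localizes exactly the region the toy model depends on, which is the kind of post-hoc explanation such techniques aim to provide for DNN predictions.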
Affiliation(s)
- Sajid Nazir
- Department of Computing, Glasgow Caledonian University, Glasgow, UK.
- Diane M Dickson
- Department of Podiatry and Radiography, Research Centre for Health, Glasgow Caledonian University, Glasgow, UK
- Muhammad Usman Akram
- Computer and Software Engineering Department, National University of Sciences and Technology, Islamabad, Pakistan
5
Lin T, Peng S, Lu S, Fu S, Zeng D, Li J, Chen T, Fan T, Lang C, Feng S, Ma J, Zhao C, Antony B, Cicuttini F, Quan X, Zhu Z, Ding C. Prediction of knee pain improvement over two years for knee osteoarthritis using a dynamic nomogram based on MRI-derived radiomics: a proof-of-concept study. Osteoarthritis Cartilage 2023; 31:267-278. [PMID: 36334697] [DOI: 10.1016/j.joca.2022.10.014]
Abstract
OBJECTIVES To develop and validate a nomogram to detect improved knee pain in osteoarthritis (OA) by integrating a magnetic resonance imaging (MRI) radiomics signature of subchondral bone with clinical characteristics. METHODS Participants were selected from the Vitamin D Effects on Osteoarthritis (VIDEO) study. The primary outcome was a 20% improvement in knee pain score over 2 years in participants administered either vitamin D or placebo. Radiomics features of subchondral bone and clinical characteristics from 216 participants were extracted and analyzed. The participants were randomly split into training and validation cohorts at a ratio of 8:2. Least absolute shrinkage and selection operator (LASSO) regression was used to select features and generate radiomics signatures. The optimal radiomics signature and clinical indicators were fitted into a nomogram using a multivariable logistic regression model. RESULTS The nomogram showed favorable discrimination performance [AUCtraining, 0.79 (95% CI: 0.72-0.79); AUCvalidation, 0.83 (95% CI: 0.70-0.96)] as well as good calibration. The additional contribution of the fusion radiomics signature to the nomogram was statistically significant (NRI, 0.23; IDI, 0.14; P < 0.001 in the training cohort and NRI, 0.29; IDI, 0.18; P < 0.05 in the validation cohort). Decision curve analysis confirmed the clinical usefulness of the nomogram. CONCLUSION The radiomics-based nomogram comprising the MR radiomics signature and clinical variables achieves favorable predictive efficacy and accuracy in identifying improvement in knee pain among OA patients. This proof-of-concept study provides a promising way to predict clinically meaningful outcomes.
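The LASSO step above selects radiomics features by shrinking uninformative coefficients exactly to zero. A pure-NumPy coordinate-descent sketch (the synthetic data and penalty value are invented for illustration; this is not the study's pipeline):

```python
# LASSO by coordinate descent (soft-thresholding) for radiomics-style
# feature selection: with a strong enough penalty, weights on
# uninformative features shrink exactly to zero.
import numpy as np

def lasso(X, y, lam, iters=200):
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(iters):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]  # partial residual excluding j
            rho = X[:, j] @ r
            z = (X[:, j] ** 2).sum()
            # Soft-threshold the univariate least-squares solution by lambda
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))               # 5 candidate "features"
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=200)
w = lasso(X, y, lam=50.0)
print(np.nonzero(np.abs(w) > 1e-6)[0])      # only informative features survive
```

Only the two features that actually drive the outcome keep nonzero weights; the selected subset would then feed the logistic nomogram.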
Affiliation(s)
- T Lin
- Department of Radiology, Zhujiang Hospital, Southern Medical University, Guangzhou, 510282, China.
- S Peng
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China.
- S Lu
- Department of Radiology, Zhujiang Hospital, Southern Medical University, Guangzhou, 510282, China.
- S Fu
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China.
- D Zeng
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China.
- J Li
- Division of Orthopaedic Surgery, Department of Orthopaedics, Nanfang Hospital, Southern Medical University, Guangzhou, 510282, China.
- T Chen
- Clinical Research Centre, Zhujiang Hospital, Southern Medical University, Guangzhou, 510282, China.
- T Fan
- Clinical Research Centre, Zhujiang Hospital, Southern Medical University, Guangzhou, 510282, China.
- C Lang
- Clinical Research Centre, Zhujiang Hospital, Southern Medical University, Guangzhou, 510282, China.
- S Feng
- Department of Civil and Environmental Engineering, The Hong Kong University of Science and Technology, 999077, Hong Kong, China.
- J Ma
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China.
- C Zhao
- Philips China, Beijing, 100000, China.
- B Antony
- Menzies Institute for Medical Research, University of Tasmania, Hobart, Tasmania, 7000, Australia.
- F Cicuttini
- Department of Epidemiology and Preventive Medicine, Monash University, Melbourne, Victoria, 3800, Australia.
- X Quan
- Department of Radiology, Zhujiang Hospital, Southern Medical University, Guangzhou, 510282, China.
- Z Zhu
- Clinical Research Centre, Zhujiang Hospital, Southern Medical University, Guangzhou, 510282, China.
- C Ding
- Clinical Research Centre, Zhujiang Hospital, Southern Medical University, Guangzhou, 510282, China; Menzies Institute for Medical Research, University of Tasmania, Hobart, Tasmania, 7000, Australia.
6
Huang B, Fong LWR, Chaudhari R, Zhang S. Development and evaluation of a Java-based deep neural network method for drug response predictions. Front Artif Intell 2023; 6:1069353. [PMID: 37035534] [PMCID: PMC10076891] [DOI: 10.3389/frai.2023.1069353]
Abstract
Accurate prediction of drug response is a crucial step in personalized medicine. Recently, deep learning techniques have achieved significant breakthroughs in a variety of areas, including biomedical research and chemogenomic applications. This motivated us to develop a novel deep learning platform to accurately and reliably predict the response of cancer cells to different drug treatments. In the present work, we describe a Java-based implementation of a deep neural network method, termed JavaDL, to predict cancer responses to drugs solely based on their chemical features. To this end, we devised a novel cost function and added a regularization term that suppresses overfitting. We also adopted an early stopping strategy to further reduce overfitting and improve the accuracy and robustness of our models. To evaluate our method, we compared it with several popular machine learning and deep neural network programs and observed that JavaDL either outperformed those methods in model building or obtained comparable predictions. Finally, JavaDL was employed to predict drug responses of several aggressive breast cancer cell lines, and the results showed robust and accurate predictions with r² as high as 0.81.
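The early stopping strategy mentioned above halts training once the validation loss stops improving for a fixed number of epochs. The bookkeeping is language-agnostic; it is sketched in Python here for brevity (JavaDL itself is Java, and the loss trace below is invented):

```python
# Early stopping: halt training when the validation loss has not improved
# for `patience` consecutive epochs, keeping the model from overfitting.
class EarlyStopper:
    def __init__(self, patience=3):
        self.best = float("inf")
        self.bad = 0
        self.patience = patience

    def step(self, val_loss):
        if val_loss < self.best:
            self.best = val_loss
            self.bad = 0
        else:
            self.bad += 1
        return self.bad >= self.patience  # True -> stop training

losses = [1.0, 0.8, 0.7, 0.72, 0.71, 0.73, 0.74]  # per-epoch validation loss
stopper = EarlyStopper(patience=3)
stop_epoch = next(i for i, loss in enumerate(losses) if stopper.step(loss))
print(stop_epoch)  # 5: three epochs without beating the best loss of 0.7
```

In practice, the model weights from the best-loss epoch (here, epoch 2) are the ones kept.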
7
Nderitu P, Nunez do Rio JM, Webster ML, Mann SS, Hopkins D, Cardoso MJ, Modat M, Bergeles C, Jackson TL. Automated image curation in diabetic retinopathy screening using deep learning. Sci Rep 2022; 12:11196. [PMID: 35778615] [PMCID: PMC9249740] [DOI: 10.1038/s41598-022-15491-1]
Abstract
Diabetic retinopathy (DR) screening images are heterogeneous and contain undesirable non-retinal, incorrect-field, and ungradable samples that require curation, a laborious task to perform manually. We developed and validated single- and multi-output deep learning (DL) models classifying laterality, retinal presence, retinal field, and gradability for automated curation. The internal dataset comprised 7743 images from DR screening (UK), with 1479 external test images (Portugal and Paraguay). Internal vs external multi-output laterality AUROCs were right (0.994 vs 0.905), left (0.994 vs 0.911), and unidentifiable (0.996 vs 0.680). Retinal presence AUROCs were (1.000 vs 1.000). Retinal field AUROCs were macula (0.994 vs 0.955), nasal (0.995 vs 0.962), and other retinal field (0.997 vs 0.944). Gradability AUROCs were (0.985 vs 0.918). DL effectively detects laterality, retinal presence, retinal field, and gradability of DR screening images, with generalisation between centres and populations. DL models could be used for automated image curation within DR screening.
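The AUROC values quoted above have a simple probabilistic reading: the chance that a randomly chosen positive image receives a higher score than a randomly chosen negative one. A minimal sketch of that equivalence (toy labels and scores, not the study's data):

```python
# AUROC via pairwise comparison (Mann-Whitney U): the probability that a
# random positive outranks a random negative, counting ties as half.
# O(n^2) for clarity, not speed.
def auroc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(auroc(labels, scores))  # 0.75
```

An AUROC of 0.5 is chance ranking; values near 1.000, as reported for retinal presence, mean the classifier's scores separate the classes almost perfectly.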
Affiliation(s)
- Paul Nderitu
- Section of Ophthalmology, King's College London, London, UK.
- King's Ophthalmology Research Unit, King's College Hospital, London, UK.
- Ms Laura Webster
- South East London Diabetic Eye Screening Programme, Guy's and St Thomas' Foundation Trust, London, UK
- Samantha S Mann
- South East London Diabetic Eye Screening Programme, Guy's and St Thomas' Foundation Trust, London, UK
- Department of Ophthalmology, Guy's and St Thomas' Foundation Trust, London, UK
- David Hopkins
- Department of Diabetes, School of Life Course Sciences, King's College London, London, UK
- Institute of Diabetes, Endocrinology and Obesity, King's Health Partners, London, UK
- M Jorge Cardoso
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Marc Modat
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Christos Bergeles
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Timothy L Jackson
- Section of Ophthalmology, King's College London, London, UK
- King's Ophthalmology Research Unit, King's College Hospital, London, UK
8
van der Velden BH, Kuijf HJ, Gilhuijs KG, Viergever MA. Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Med Image Anal 2022; 79:102470. [DOI: 10.1016/j.media.2022.102470]
9
|
VANT-GAN: Adversarial Learning for Discrepancy-Based Visual Attribution in Medical Imaging. Pattern Recognit Lett 2022. [DOI: 10.1016/j.patrec.2022.02.005]
10
Yuen V, Ran A, Shi J, Sham K, Yang D, Chan VTT, Chan R, Yam JC, Tham CC, McKay GJ, Williams MA, Schmetterer L, Cheng CY, Mok V, Chen CL, Wong TY, Cheung CY. Deep-Learning-Based Pre-Diagnosis Assessment Module for Retinal Photographs: A Multicenter Study. Transl Vis Sci Technol 2021; 10:16. [PMID: 34524409] [PMCID: PMC8444486] [DOI: 10.1167/tvst.10.11.16]
Abstract
Purpose Artificial intelligence (AI) deep learning (DL) has shown significant potential for eye disease detection and screening on retinal photographs in different clinical settings, particularly in primary care. However, an automated pre-diagnosis image assessment is essential to streamline the application of the developed AI-DL algorithms. In this study, we developed and validated a DL-based pre-diagnosis assessment module for retinal photographs, targeting image quality (gradable vs. ungradable), field of view (macula-centered vs. optic-disc-centered), and laterality of the eye (right vs. left). Methods A total of 21,348 retinal photographs from 1914 subjects from various clinical settings in Hong Kong, Singapore, and the United Kingdom were used for training, internal validation, and external testing of the DL module, developed using two DL-based algorithms (EfficientNet-B0 and MobileNet-V2). Results For image-quality assessment, the pre-diagnosis module achieved area under the receiver operating characteristic curve (AUROC) values of 0.975, 0.999, and 0.987 in the internal validation dataset and the two external testing datasets, respectively. For field-of-view assessment, the module had an AUROC value of 1.000 in all of the datasets. For laterality-of-the-eye assessment, the module had AUROC values of 1.000, 0.999, and 0.985 in the internal validation dataset and the two external testing datasets, respectively. Conclusions Our study showed that this three-in-one DL module for assessing image quality, field of view, and laterality of the eye of retinal photographs achieved excellent performance and generalizability across different centers and ethnicities.
Translational Relevance The proposed DL-based pre-diagnosis module provides accurate and automated assessment of image quality, field of view, and laterality of the eye of retinal photographs, and could be integrated into AI-based models to improve operational flow for disease screening and diagnosis.
Affiliation(s)
- Vincent Yuen
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Anran Ran
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Jian Shi
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Kaiser Sham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Dawei Yang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Victor T. T. Chan
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Raymond Chan
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Jason C. Yam
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Hong Kong Eye Hospital, Hong Kong
- Clement C. Tham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Hong Kong Eye Hospital, Hong Kong
- Gareth J. McKay
- Center for Public Health, Royal Victoria Hospital, Queen's University Belfast, Belfast, UK
- Michael A. Williams
- Center for Medical Education, Royal Victoria Hospital, Queen's University Belfast, Belfast, UK
- Leopold Schmetterer
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Nanyang Technological University, Singapore
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
- Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
- Vincent Mok
- Gerald Choa Neuroscience Center, Therese Pei Fong Chow Research Center for Prevention of Dementia, Lui Che Woo Institute of Innovative Medicine, Department of Medicine and Therapeutics, The Chinese University of Hong Kong, Hong Kong
- Christopher L. Chen
- Memory, Aging and Cognition Center, Department of Pharmacology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Tien Y. Wong
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
- Carol Y. Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
11
Deep Learning and Transfer Learning for Optic Disc Laterality Detection: Implications for Machine Learning in Neuro-Ophthalmology. J Neuroophthalmol 2021; 40:178-184. [PMID: 31453913] [DOI: 10.1097/wno.0000000000000827]
Abstract
BACKGROUND Deep learning (DL) has demonstrated human expert levels of performance for medical image classification in a wide array of medical fields, including ophthalmology. In this article, we present the results of our DL system designed to determine optic disc laterality, right eye vs left eye, in the presence of both normal and abnormal optic discs. METHODS Using transfer learning, we modified the ResNet-152 deep convolutional neural network (DCNN), pretrained on ImageNet, to determine the optic disc laterality. After a 5-fold cross-validation, we generated receiver operating characteristic curves and corresponding area under the curve (AUC) values to evaluate performance. The data set consisted of 576 color fundus photographs (51% right and 49% left). Both 30° photographs centered on the optic disc (63%) and photographs with varying degree of optic disc centration and/or wider field of view (37%) were included. Both normal (27%) and abnormal (73%) optic discs were included. Various neuro-ophthalmological diseases were represented, such as, but not limited to, atrophy, anterior ischemic optic neuropathy, hypoplasia, and papilledema. RESULTS Using 5-fold cross-validation (70% training; 10% validation; 20% testing), our DCNN for classifying right vs left optic disc achieved an average AUC of 0.999 (±0.002) with optimal threshold values, yielding an average accuracy of 98.78% (±1.52%), sensitivity of 98.60% (±1.72%), and specificity of 98.97% (±1.38%). When tested against a separate data set for external validation, our 5-fold cross-validation model achieved the following average performance: AUC 0.996 (±0.005), accuracy 97.2% (±2.0%), sensitivity 96.4% (±4.3%), and specificity 98.0% (±2.2%). CONCLUSIONS Small data sets can be used to develop high-performing DL systems for semantic labeling of neuro-ophthalmology images, specifically in distinguishing between right and left optic discs, even in the presence of neuro-ophthalmological pathologies. 
Although this may seem like an elementary task, this study demonstrates the power of transfer learning and provides an example of a DCNN that can help curate large medical image databases for machine-learning purposes and facilitate ophthalmologist workflow by automatically labeling images according to laterality.
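The 5-fold protocol above rests on a splitter that holds each image out exactly once; per-fold metrics are then summarized as mean ± standard deviation, as in "AUC 0.999 (±0.002)". A stdlib-only sketch, reusing the study's 576-image count but with invented per-fold AUCs:

```python
# Plain k-fold index splitter: every image lands in exactly one test fold,
# so the k held-out evaluations are independent estimates whose mean and
# standard deviation summarize model performance.
import random
import statistics

def kfold(n, k=5, seed=42):
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

splits = list(kfold(576, k=5))              # 576 images, as in the study

aucs = [0.998, 1.000, 0.999, 0.997, 1.000]  # per-fold AUCs (made up)
print(round(statistics.mean(aucs), 4), round(statistics.stdev(aucs), 4))  # 0.9988 0.0013
```

In the study each fold is further partitioned into training, validation, and test portions; the splitter above shows only the hold-out rotation that makes the ± spread meaningful.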
12
Ramya J, Rajakumar MP, Uma Maheswari B. HPWO-LS-based deep learning approach with S-ROA-optimized optic cup segmentation for fundus image classification. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-05732-1]
13
Zheng C, Xie X, Wang Z, Li W, Chen J, Qiao T, Qian Z, Liu H, Liang J, Chen X. Development and validation of deep learning algorithms for automated eye laterality detection with anterior segment photography. Sci Rep 2021; 11:586. [PMID: 33436781] [PMCID: PMC7803760] [DOI: 10.1038/s41598-020-79809-7]
Abstract
This paper aimed to develop and validate a deep learning (DL) model for automated detection of eye laterality on anterior segment photographs. Anterior segment photographs for training the DL model were collected with a Scheimpflug anterior segment analyzer. We applied transfer learning and fine-tuning of pre-trained deep convolutional neural networks (InceptionV3, VGG16, MobileNetV2) to develop DL models for determining eye laterality. Testing datasets, from Scheimpflug and slit-lamp digital camera photography, were employed to test the DL models, and the results were compared with a classification performed by human experts. Performance was evaluated by accuracy, sensitivity, specificity, receiver operating characteristic curves, and corresponding area under the curve values. A total of 14,468 photographs were collected for the development of the DL models. After training for 100 epochs, the InceptionV3 model achieved an area under the receiver operating characteristic curve of 0.998 (95% CI 0.924-0.958) for detecting eye laterality. On the external testing dataset (76 primary gaze photographs taken by a digital camera), the DL model achieved an accuracy of 96.1% (95% CI 91.7%-100%), better than the accuracies of 72.3% (95% CI 62.2%-82.4%), 82.8% (95% CI 78.7%-86.9%), and 86.8% (95% CI 82.5%-91.1%) achieved by human graders. Our study demonstrated that this high-performing DL model can be used for automated laterality labeling of eyes, and is useful for managing large volumes of anterior segment images from slit-lamp cameras in the clinical setting.
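The quoted interval "96.1% (95% CI 91.7%-100%)" is consistent with a normal-approximation (Wald) confidence interval for 73 of 76 correct, clipped at 100%. The count 73 is our inference from the reported accuracy, not stated in the abstract:

```python
# Normal-approximation 95% CI for a proportion, clipped to [0, 1].
# With k=73 correct out of n=76 (assumed from the reported 96.1%),
# this reproduces the quoted 91.7%-100% interval.
import math

def wald_ci(k, n, z=1.96):
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(p - half, 0.0), min(p + half, 1.0)

p, lo, hi = wald_ci(73, 76)
print(round(100 * p, 1), round(100 * lo, 1), round(100 * hi, 1))  # 96.1 91.7 100.0
```

Clipping at 100% is why the upper bound is exactly 100; for proportions this close to 1, a Wilson or exact binomial interval would avoid the overshoot.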
Affiliation(s)
- Ce Zheng
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xiaolin Xie
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou University Medical College, Shantou, Guangdong, China
- Zhilei Wang
- Department of Ophthalmology, Shanghai Children's Hospital, Shanghai Jiao Tong University, Shanghai, China
- Wen Li
- Department of Ophthalmology, Shanghai Children's Hospital, Shanghai Jiao Tong University, Shanghai, China
- Jili Chen
- Department of Ophthalmology, Shibei Hospital, Shanghai, China
- Tong Qiao
- Department of Ophthalmology, Shanghai Children's Hospital, Shanghai Jiao Tong University, Shanghai, China
- Zhuyun Qian
- Department of Ophthalmology, Shanghai Aier Eye Hospital, No. 1286, Hongqiao Road, Changning District, Shanghai, 200050, China
- Hui Liu
- Aier School of Ophthalmology, Central South University, Changsha, Hunan Province, China
- Jianheng Liang
- Aier School of Ophthalmology, Central South University, Changsha, Hunan Province, China
- Xu Chen
- Department of Ophthalmology, Shanghai Aier Eye Hospital, No. 1286, Hongqiao Road, Changning District, Shanghai, 200050, China.
- Aier School of Ophthalmology, Central South University, Changsha, Hunan Province, China.
14
Yi PH, Lin A, Wei J, Yu AC, Sair HI, Hui FK, Hager GD, Harvey SC. Deep-Learning-Based Semantic Labeling for 2D Mammography and Comparison of Complexity for Machine Learning Tasks. J Digit Imaging 2020; 32:565-570. [PMID: 31197559 PMCID: PMC6646449 DOI: 10.1007/s10278-019-00244-w] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022] Open
Abstract
Machine learning has several potential uses in medical imaging, including semantic labeling of images to improve radiologist workflow and to triage studies for review. The purpose of this study was to (1) develop deep convolutional neural networks (DCNNs) for automated classification of 2D mammography views, determination of breast laterality, and assessment of breast tissue density; and (2) compare the performance of DCNNs across these tasks of varying complexity. We obtained 3034 2D mammographic images from the Digital Database for Screening Mammography, annotated with mammographic view, image laterality, and breast tissue density, and used them to train a DCNN for each of these three tasks. The DCNN trained to classify mammographic view achieved a receiver operating characteristic (ROC) area under the curve (AUC) of 1.0. The DCNN trained to classify breast image laterality initially misclassified right and left breasts (AUC 0.75); however, after discontinuing horizontal flips during data augmentation, the AUC improved to 0.93 (p < 0.0001). Breast density classification proved more difficult, with the DCNN achieving 68% accuracy. Automated semantic labeling of 2D mammography is feasible using DCNNs and can be performed with small datasets. However, automated classification of differences in breast density is more difficult, likely requiring larger datasets.
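The laterality result above illustrates a classic augmentation pitfall: a horizontal flip turns a left-sided image into a right-sided one, so flipping without updating the label injects label noise. A minimal sketch of a label-aware flip (names and label encoding are illustrative, not from the paper):

```python
import numpy as np


def augment_with_flip(image, laterality):
    """Horizontally flip an image for augmentation, swapping the
    laterality label so it stays consistent with the flipped anatomy."""
    flipped = np.fliplr(image)
    swapped = {"L": "R", "R": "L"}[laterality]
    return flipped, swapped


# A left-labeled image becomes a right-labeled one after flipping.
img = np.arange(6).reshape(2, 3)
out, label = augment_with_flip(img, "L")
print(label)  # R
```

For tasks where the target is orientation itself (laterality, view), the safe alternative reported in the paper is to drop horizontal flips from the augmentation pipeline entirely.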
Affiliation(s)
- Paul H Yi
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline St., Room 4223, Baltimore, MD, 21287, USA
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
- Abigail Lin
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline St., Room 4223, Baltimore, MD, 21287, USA
- Jinchi Wei
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
- Alice C Yu
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline St., Room 4223, Baltimore, MD, 21287, USA
- Haris I Sair
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline St., Room 4223, Baltimore, MD, 21287, USA
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
- Ferdinand K Hui
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline St., Room 4223, Baltimore, MD, 21287, USA
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
- Gregory D Hager
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
- Susan C Harvey
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline St., Room 4223, Baltimore, MD, 21287, USA
15
Kim YD, Noh KJ, Byun SJ, Lee S, Kim T, Sunwoo L, Lee KJ, Kang SH, Park KH, Park SJ. Effects of Hypertension, Diabetes, and Smoking on Age and Sex Prediction from Retinal Fundus Images. Sci Rep 2020; 10:4623. [PMID: 32165702 PMCID: PMC7067849 DOI: 10.1038/s41598-020-61519-9] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2019] [Accepted: 02/28/2020] [Indexed: 12/25/2022] Open
Abstract
Retinal fundus images are used to detect organ damage from vascular diseases (e.g. diabetes mellitus and hypertension) and to screen for ocular diseases. We aimed to assess convolutional neural network (CNN) models that predict age and sex from retinal fundus images in normal participants and in participants with underlying systemic vascular-altered status, and to investigate clues regarding the differences between normal ageing and pathologic vascular change using these models. We developed CNN age and sex prediction models using 219,302 fundus images from normal participants without hypertension, diabetes mellitus (DM), or any smoking history. The trained models were assessed in four test sets: 24,366 images from normal participants, 40,659 images from participants with hypertension, 14,189 images from participants with DM, and 113,510 images from smokers. The CNN model accurately predicted age in normal participants: the correlation between predicted and chronologic age was R2 = 0.92, and the mean absolute error (MAE) was 3.06 years. MAEs in the test sets with hypertension (3.46 years), DM (3.55 years), and smoking (2.65 years) were similar to that of normal participants; however, R2 values were relatively low (hypertension, R2 = 0.74; DM, R2 = 0.75; smoking, R2 = 0.86). In subgroups of participants over 60 years of age, MAEs increased to above 4.0 years and accuracies declined in all test sets. Fundus-predicted sex demonstrated acceptable accuracy (area under the curve > 0.96) in all test sets. Retinal fundus images from participants with underlying vascular-altered conditions (hypertension, DM, or smoking) showed similar MAEs but low coefficients of determination (R2) between predicted and chronologic age, suggesting that the ageing process and pathologic vascular changes exhibit different features. 
Our models demonstrate improved performance over previous reports and provide clues to the relationship and differences between ageing and pathologic changes from underlying systemic vascular conditions; in the process of fundus change, systemic vascular diseases appear to have a different effect from ageing. Research in context. Evidence before this study: The human retina and optic disc change continuously with ageing, and they share physiologic and pathologic characteristics with brain and systemic vascular status. Because retinal fundus images provide high-resolution in-vivo images of retinal vessels and parenchyma without any invasive procedure, they have been used to screen for ocular diseases and have attracted significant attention as a predictive biomarker for cerebral and systemic vascular diseases. Recently, deep neural networks have revolutionised the field of medical image analysis, including retinal fundus images, and have shown reliable results in predicting age, sex, and the presence of cardiovascular disease. Added value of this study: This is the first study demonstrating how a convolutional neural network (CNN) trained on retinal fundus images from normal participants estimates the age of participants with underlying vascular conditions such as hypertension, diabetes mellitus (DM), or a history of smoking, using a large database, SBRIA, which contains 412,026 retinal fundus images from 155,449 participants. Our results indicated that the model accurately predicted age in normal participants, while coefficients of determination (R2) in the test sets with hypertension, DM, and smoking were relatively low. Additionally, a subgroup analysis indicated that mean absolute errors (MAEs) increased and accuracies declined significantly in participants over 60 years of age, both in normal participants and in participants with vascular-altered conditions. 
These results suggest that the pathologic retinal vascular changes occurring in systemic vascular diseases differ from the changes of the spontaneous ageing process, and that the ageing process observed in retinal fundus images may saturate at about 60 years of age. Implications of all available evidence: Based on this study and previous reports, CNNs can accurately and reliably predict age and sex from retinal fundus images. The fact that retinal changes caused by ageing and by systemic vascular diseases occur differently motivates deeper study of the retina. Deep learning-based fundus image reading may become a useful tool for screening and diagnosing systemic and ocular diseases after further development.
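The age-prediction results above are summarized by two metrics, MAE and the coefficient of determination. As a minimal sketch (plain Python, function names illustrative), the two are computed as:

```python
def mae(y_true, y_pred):
    """Mean absolute error between chronologic and predicted ages."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)


def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot
```

The study's pattern of similar MAEs but lower R2 in the vascular-altered groups is consistent with these definitions: MAE measures average error magnitude, while R2 compares residual variance against total variance, so it drops when predictions track the age spread less faithfully even at similar average error.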
Affiliation(s)
- Yong Dae Kim
- Department of Ophthalmology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
- Department of Ophthalmology, Kangdong Sacred Heart Hospital, Seoul, Korea
- Kyoung Jin Noh
- Department of Ophthalmology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
- Seong Jun Byun
- Department of Ophthalmology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
- Soochahn Lee
- School of Electrical Engineering, Kookmin University, Seoul, Republic of Korea
- Tackeun Kim
- Department of Neurosurgery, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Leonard Sunwoo
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Kyong Joon Lee
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Si-Hyuck Kang
- Division of Cardiology, Department of Internal Medicine, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Kyu Hyung Park
- Department of Ophthalmology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
- Sang Jun Park
- Department of Ophthalmology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
16
Liu C, Han X, Li Z, Ha J, Peng G, Meng W, He M. A self-adaptive deep learning method for automated eye laterality detection based on color fundus photography. PLoS One 2019; 14:e0222025. [PMID: 31536537 PMCID: PMC6752776 DOI: 10.1371/journal.pone.0222025] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2019] [Accepted: 08/20/2019] [Indexed: 12/16/2022] Open
Abstract
PURPOSE To develop a self-adaptive deep learning (DL) method to automatically detect eye laterality from fundus images. METHODS A total of 18,394 fundus images with real-world eye laterality labels were used for model development and internal validation. A separate dataset of 2000 fundus images with manually labeled eye laterality was used for external validation. A DL model was developed based on a fine-tuned Inception-V3 network with a self-adaptive strategy. The area under the receiver operating characteristic curve (AUC), together with sensitivity, specificity, and the confusion matrix, was used to assess model performance. Class activation maps (CAMs) were used for model visualization. RESULTS In the external validation (N = 2000, 50% labeled as left eye), the AUC of the DL model for overall eye laterality detection was 0.995 (95% CI, 0.993-0.997), with an accuracy of 99.13%. Specifically, for left eye detection the sensitivity was 99.00% (95% CI, 98.11%-99.49%) and the specificity was 99.10% (95% CI, 98.23%-99.56%). Nineteen images were classified differently from the human labels: 12 resulted from incorrect human labeling, and 7 from poor image quality. The CAMs showed that the region of interest for eye laterality detection was mainly the optic disc and surrounding areas. CONCLUSION We proposed a self-adaptive DL method with high performance in detecting eye laterality from fundus images. Our results were based on real-world labels and thus have practical significance in clinical settings.
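The CAM visualization mentioned in the abstract weights the final convolutional feature maps by the classifier weights of the predicted class and sums across channels (Zhou et al., 2016). A minimal NumPy sketch, with shapes and names assumed for illustration (not the authors' implementation):

```python
import numpy as np


def class_activation_map(feature_maps, class_weights):
    """CAM: channel-wise weighted sum of the last conv layer's feature
    maps, min-max normalized to [0, 1] for overlaying on the input image.

    feature_maps: array of shape (H, W, C) from the last conv layer;
    class_weights: shape (C,), the dense-layer weights for the
    predicted class.
    """
    cam = np.tensordot(feature_maps, class_weights, axes=([2], [0]))  # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

Upsampled to the input resolution, high values in the resulting heat map would mark regions, such as the optic disc and its surroundings here, that drive the laterality decision.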
Affiliation(s)
- Chi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- School of Computer Science, University of Technology, Sydney, Australia
- Xiaotong Han
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Zhixi Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Jason Ha
- Faculty of Medicine, Nursing and Health Sciences, Monash University, Clayton, Australia
- Guankai Peng
- Guangzhou Healgoo Interactive Medical Technology Co. Ltd., Guangzhou, China
- Wei Meng
- Guangzhou Healgoo Interactive Medical Technology Co. Ltd., Guangzhou, China
- Mingguang He
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, Australia
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Australia