201
Scanzera AC, Nyenhuis SM, Rudd BN, Ramaswamy M, Mazzucca S, Castro M, Kennedy DJ, Mermelstein RJ, Chambers DA, Dudek SM, Krishnan JA. Building a new regional home for implementation science: Annual Midwest Clinical & Translational Research Meetings. J Investig Med 2023;71:567-576. PMID: 37002618; PMCID: PMC11337947; DOI: 10.1177/10815589231166102.
Abstract
The vision of the Central Society for Clinical and Translational Research (CSCTR) is to "promote a vibrant, supportive community of multidisciplinary, clinical, and translational medical research to benefit humanity." Together with the Midwestern Section of the American Federation for Medical Research, CSCTR hosts an Annual Midwest Clinical & Translational Research Meeting, a regional multispecialty meeting that provides the opportunity for trainees and early-stage investigators to present their research to leaders in their fields. There is an increasing national and global interest in implementation science (IS), the systematic study of activities (or strategies) to facilitate the successful uptake of evidence-based health interventions in clinical and community settings. Given the growing importance of this field and its relevance to the goals of the CSCTR, in 2022, the Midwest Clinical & Translational Research Meeting incorporated new initiatives and sessions in IS. In this report, we describe the role of IS in the translational research spectrum, provide a summary of sessions from the 2022 Midwest Clinical & Translational Research Meeting, and highlight initiatives to complement national efforts to build capacity for IS through the annual meetings.
Affiliation(s)
- Angelica C. Scanzera
  - Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois Chicago, 1855 W. Taylor Street, Chicago, IL 60612, United States
- Sharmilee M. Nyenhuis
  - Department of Pediatrics, University of Chicago, 5841 S. Maryland Ave, Chicago, IL 60637
- Brittany N. Rudd
  - Institute for Juvenile Research, University of Illinois Chicago, 1747 W. Roosevelt Rd., Chicago, IL 60612
- Megha Ramaswamy
  - KU Medical Center, University of Kansas, 3901 Rainbow Boulevard, Kansas City, KS 66160
- Stephanie Mazzucca
  - Brown School, Washington University in St. Louis, One Brookings Drive, St. Louis, MO 63130
- Mario Castro
  - KU Medical Center, University of Kansas, 3901 Rainbow Boulevard, Kansas City, KS 66160
- David J. Kennedy
  - Department of Medicine, University of Toledo College of Medicine and Life Sciences, 3000 Arlington Ave, Toledo, OH 43614
- Robin J. Mermelstein
  - Institute for Health Research and Policy, University of Illinois Chicago, 1747 W. Roosevelt Road, Chicago, IL 60612
- David A. Chambers
  - Division of Cancer Control and Population Sciences, National Cancer Institute, 37 Convent Drive, Bethesda, MD 20814
- Steven M. Dudek
  - Department of Medicine, University of Illinois Chicago, 840 S. Wood Street, Chicago, IL 60612
- Jerry A. Krishnan
  - Department of Medicine, University of Illinois Chicago, 840 S. Wood Street, Chicago, IL 60612
  - Population Health Sciences Program, University of Illinois Chicago, 1220 S. Wood Street, Chicago, IL 60612, United States
202
Cleland CR, Rwiza J, Evans JR, Gordon I, MacLeod D, Burton MJ, Bascaran C. Artificial intelligence for diabetic retinopathy in low-income and middle-income countries: a scoping review. BMJ Open Diabetes Res Care 2023;11:e003424. PMID: 37532460; PMCID: PMC10401245; DOI: 10.1136/bmjdrc-2023-003424.
Abstract
Diabetic retinopathy (DR) is a leading cause of blindness globally. There is growing evidence to support the use of artificial intelligence (AI) in diabetic eye care, particularly for screening populations at risk of sight loss from DR in low-income and middle-income countries (LMICs), where resources are most stretched. However, implementation into clinical practice remains limited. We conducted a scoping review to identify which AI tools have been used for DR in LMICs and to report their performance and relevant characteristics. In total, 81 articles were included. The reported sensitivities and specificities were generally high, providing evidence to support use in clinical practice. However, the majority of studies focused on sensitivity and specificity only, and there was limited information on cost, regulatory approvals, and whether the use of AI improved health outcomes. Further research that goes beyond reporting sensitivities and specificities is needed prior to wider implementation.
Affiliation(s)
- Charles R Cleland
  - International Centre for Eye Health, Faculty of Infectious and Tropical Diseases, London School of Hygiene & Tropical Medicine, London, UK
  - Eye Department, Kilimanjaro Christian Medical Centre, Moshi, United Republic of Tanzania
- Justus Rwiza
  - Eye Department, Kilimanjaro Christian Medical Centre, Moshi, United Republic of Tanzania
- Jennifer R Evans
  - International Centre for Eye Health, Faculty of Infectious and Tropical Diseases, London School of Hygiene & Tropical Medicine, London, UK
- Iris Gordon
  - International Centre for Eye Health, Faculty of Infectious and Tropical Diseases, London School of Hygiene & Tropical Medicine, London, UK
- David MacLeod
  - Tropical Epidemiology Group, Department of Infectious Disease Epidemiology, London School of Hygiene & Tropical Medicine, London, UK
- Matthew J Burton
  - International Centre for Eye Health, Faculty of Infectious and Tropical Diseases, London School of Hygiene & Tropical Medicine, London, UK
  - National Institute for Health Research Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Covadonga Bascaran
  - International Centre for Eye Health, Faculty of Infectious and Tropical Diseases, London School of Hygiene & Tropical Medicine, London, UK
203
Matta S, Lamard M, Conze PH, Le Guilcher A, Lecat C, Carette R, Basset F, Massin P, Rottier JB, Cochener B, Quellec G. Towards population-independent, multi-disease detection in fundus photographs. Sci Rep 2023;13:11493. PMID: 37460629; DOI: 10.1038/s41598-023-38610-y.
Abstract
Independent validation studies of automatic diabetic retinopathy screening systems have recently shown a drop in screening performance on external data. Beyond diabetic retinopathy, this study investigates the generalizability of deep learning (DL) algorithms for screening various ocular anomalies in fundus photographs across heterogeneous populations and imaging protocols. The following datasets are considered: OPHDIAT (France, diabetic population), OphtaMaine (France, general population), RIADD (India, general population), and ODIR (China, general population). Two multi-disease DL algorithms were developed: a Single-Dataset (SD) network, trained on the largest dataset (OPHDIAT), and a Multiple-Dataset (MD) network, trained on multiple datasets simultaneously. To assess their generalizability, both algorithms were evaluated in settings where training and test data originate from overlapping datasets and in settings where they originate from disjoint datasets. The SD network achieved a mean per-disease area under the receiver operating characteristic curve (mAUC) of 0.9571 on OPHDIAT. However, it generalized poorly to the other three datasets (mAUC < 0.9). When all four datasets were involved in training, the MD network significantly outperformed the SD network (p = 0.0058), indicating improved generalizability. However, in leave-one-dataset-out experiments, the performance of the MD network was significantly lower on populations unseen during training than on populations involved in training (p < 0.0001), indicating imperfect generalizability.
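The headline metric in this abstract, mean per-disease AUC (mAUC), averages one AUC per disease over the label set. A minimal pure-Python sketch (function names and data are illustrative, not taken from the paper), using the rank-based (Mann-Whitney) formulation of AUC:

```python
def auc(y_true, y_score):
    """Rank-based (Mann-Whitney) AUC for one binary label."""
    pos = [s for s, y in zip(y_score, y_true) if y == 1]
    neg = [s for s, y in zip(y_score, y_true) if y == 0]
    # A "win" is a positive scored above a negative; ties count half.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def mean_per_disease_auc(Y_true, Y_score):
    """Average the per-disease AUCs of a multi-label classifier.

    Y_true:  rows of per-sample binary labels, one column per disease
    Y_score: rows of per-sample predicted scores, same shape
    """
    n_diseases = len(Y_true[0])
    per_disease = [
        auc([row[d] for row in Y_true], [row[d] for row in Y_score])
        for d in range(n_diseases)
    ]
    return sum(per_disease) / n_diseases
```

Averaging per disease, rather than pooling all labels, keeps rare diseases from being swamped by common ones, which is presumably why a multi-disease screening study reports mAUC rather than a single pooled AUC.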
Affiliation(s)
- Sarah Matta
  - Université de Bretagne Occidentale, Brest, Bretagne, France
  - INSERM, UMR 1101, Brest, F-29200, France
- Mathieu Lamard
  - Université de Bretagne Occidentale, Brest, Bretagne, France
  - INSERM, UMR 1101, Brest, F-29200, France
- Pierre-Henri Conze
  - INSERM, UMR 1101, Brest, F-29200, France
  - IMT Atlantique, Brest, F-29200, France
- Clément Lecat
  - Evolucare Technologies, Villers-Bretonneux, F-80800, France
- Fabien Basset
  - Evolucare Technologies, Villers-Bretonneux, F-80800, France
- Pascale Massin
  - Service d'Ophtalmologie, Hôpital Lariboisière, APHP, Paris, F-75475, France
- Jean-Bernard Rottier
  - Bâtiment de consultation porte 14, Pôle Santé Sud CMCM, 28 Rue de Guetteloup, Le Mans, F-72100, France
- Béatrice Cochener
  - Université de Bretagne Occidentale, Brest, Bretagne, France
  - INSERM, UMR 1101, Brest, F-29200, France
  - Service d'Ophtalmologie, CHRU Brest, Brest, F-29200, France
204
Nissen TPH, Nørgaard TL, Schielke KC, Vestergaard P, Nikontovic A, Dawidowicz M, Grauslund J, Vorum H, Aasbjerg K. Performance of a Support Vector Machine Learning Tool for Diagnosing Diabetic Retinopathy in Clinical Practice. J Pers Med 2023;13:1128. PMID: 37511741; PMCID: PMC10381514; DOI: 10.3390/jpm13071128.
Abstract
PURPOSE To examine the real-world performance of a support vector machine learning software (RetinaLyze) in identifying the possible presence of diabetic retinopathy (DR) in patients with diabetes via software implementation in clinical practice. METHODS 1001 eyes from 1001 patients (one eye per patient) participating in the Danish National Screening Programme were included. Three independent ophthalmologists graded all eyes according to the International Clinical Diabetic Retinopathy Disease Severity Scale, with the exact level of disease determined by majority decision. The software's detection of DR versus no DR was compared to the ophthalmologists' gradings. RESULTS At a clinically chosen threshold, the software showed a sensitivity, specificity, positive predictive value, and negative predictive value of 84.9% (95% CI: 81.8-87.9), 89.9% (95% CI: 86.8-92.7), 92.1% (95% CI: 89.7-94.4), and 81.0% (95% CI: 77.2-84.7), respectively, when compared to human grading. The corresponding results from routine screening were 87.0% (95% CI: 84.2-89.7), 85.3% (95% CI: 81.8-88.6), 89.2% (95% CI: 86.3-91.7), and 82.5% (95% CI: 78.5-86.0). The AUC was 93.4%. The reference graders' Conger's exact kappa was 0.827. CONCLUSION The software performed similarly to routine grading, with overlapping confidence intervals indicating comparable performance between the two groups. Intergrader agreement was satisfactory. However, evaluating the updated software alongside updated clinical procedures is crucial, and further clinical testing is therefore recommended before the software is implemented as a decision support tool.
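The four metrics this abstract reports all derive from the same 2x2 confusion matrix. A minimal sketch (the function name and counts are illustrative, not the study's data):

```python
def screening_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # of all diseased eyes, fraction flagged
        "specificity": tn / (tn + fp),  # of all healthy eyes, fraction cleared
        "ppv": tp / (tp + fp),          # of all positive calls, fraction correct
        "npv": tn / (tn + fn),          # of all negative calls, fraction correct
    }

# Hypothetical counts for a 200-eye sample:
metrics = screening_metrics(tp=80, fp=10, tn=90, fn=20)
```

Note that sensitivity and specificity are properties of the test alone, while PPV and NPV also depend on disease prevalence in the screened population, which is one reason external validation matters for screening tools.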
Affiliation(s)
- Tobias P H Nissen
  - Steno Diabetes Center North Jutland, 9000 Aalborg, Denmark
  - Department of Ophthalmology, Aalborg University Hospital, Hobrovej 18, 9000 Aalborg, Denmark
- Thomas L Nørgaard
  - Department of Ophthalmology, Aalborg University Hospital, Hobrovej 18, 9000 Aalborg, Denmark
- Katja C Schielke
  - Department of Ophthalmology, Aalborg University Hospital, Hobrovej 18, 9000 Aalborg, Denmark
- Peter Vestergaard
  - Steno Diabetes Center North Jutland, 9000 Aalborg, Denmark
  - Department of Clinical Medicine and Endocrinology, Aalborg University Hospital, 9000 Aalborg, Denmark
- Malgorzata Dawidowicz
  - Department of Ophthalmology, Aalborg University Hospital, Hobrovej 18, 9000 Aalborg, Denmark
- Jakob Grauslund
  - Department of Ophthalmology, Odense University Hospital, 5000 Odense, Denmark
- Henrik Vorum
  - Department of Ophthalmology, Aalborg University Hospital, Hobrovej 18, 9000 Aalborg, Denmark
205
Sharma S. Artificial intelligence for fracture diagnosis in orthopedic X-rays: current developments and future potential. SICOT J 2023;9:21. PMID: 37409882; PMCID: PMC10324466; DOI: 10.1051/sicotj/2023018.
Abstract
The use of artificial intelligence (AI) in the interpretation of orthopedic X-rays has shown great potential to improve the accuracy and efficiency of fracture diagnosis. AI algorithms rely on large datasets of annotated images to learn how to accurately classify and diagnose abnormalities. One way to improve AI interpretation of X-rays is to increase the size and quality of the datasets used for training and to incorporate more advanced machine learning techniques, such as deep reinforcement learning, into the algorithms. Another approach is to integrate AI algorithms with other imaging modalities, such as computed tomography (CT) and magnetic resonance imaging (MRI), to provide a more comprehensive and accurate diagnosis. Recent studies have shown that AI algorithms can accurately detect and classify fractures of the wrist and long bones on X-ray images, demonstrating the potential of AI to improve the accuracy and efficiency of fracture diagnosis. These findings suggest that AI has the potential to significantly improve patient outcomes in the field of orthopedics.
Affiliation(s)
- Sanskrati Sharma
  - Department of Orthopedics, Royal Preston Hospital, Sharoe Green Ln, Fulwood, Preston PR2 9HT, United Kingdom
206
Arcot Desai S, Afzal MF, Barry W, Kuo J, Benard S, Traner C, Tcheng T, Seale C, Morrell M. Expert and deep learning model identification of iEEG seizures and seizure onset times. Front Neurosci 2023;17:1156838. PMID: 37476840; PMCID: PMC10354337; DOI: 10.3389/fnins.2023.1156838.
Abstract
Hundreds of 90-s iEEG records are typically captured from each NeuroPace RNS System patient between clinic visits. While these records provide invaluable information about the patient's electrographic seizure and interictal activity patterns, manually classifying them into electrographic seizure/non-seizure activity and manually identifying the seizure onset channels and times is an extremely time-consuming process. A convolutional neural network-based Electrographic Seizure Classifier (ESC) model was developed in an earlier study. In this study, the classification model was tested against iEEG annotations provided by three expert reviewers board certified in epilepsy. The three experts individually annotated 3,874 iEEG channels from 36, 29, and 35 patients with leads in the mesiotemporal (MTL), neocortical (NEO), and MTL + NEO regions, respectively. The ESC model's seizure/non-seizure classification scores agreed with the three reviewers at 88.7%, 89.6%, and 84.3%, which was similar to how the reviewers agreed with each other (86.4%-92.9%). On iEEG channels with all three experts in agreement (83.2%), the ESC model had an agreement score of 93.2%. Additionally, the ESC model's certainty scores reflected combined reviewer certainty scores. When 0, 1, 2, and 3 (out of 3) reviewers annotated iEEG channels as electrographic seizures, the ESC model's seizure certainty scores were in the ranges [0.12-0.19], [0.32-0.42], [0.61-0.70], and [0.92-0.95], respectively. The ESC model was used as a starting-point model for training a second Seizure Onset Detection (SOD) model. For this task, seizure onset times were manually annotated on a relatively small number of iEEG channels (4,859 from 50 patients). Experiments showed that fine-tuning the ESC model with augmented data (30,768 iEEG channels) resulted in better validation performance (on 20% of the manually annotated data) compared to training with only the original data (3.1 s vs. 4.4 s median absolute error). Similarly, using the ESC model weights as the starting point for fine-tuning, instead of other model weight initialization methods, provided a significant advantage in SOD model validation performance (3.1 s vs. 4.7 s and 3.5 s median absolute error). Finally, on iEEG channels where the three expert annotations of seizure onset times were within 1.5 s of each other, the SOD model's seizure onset time prediction was within 1.7 s of expert annotation.
Affiliation(s)
- Wade Barry
  - NeuroPace, Inc., Mountain View, CA, United States
- Jonathan Kuo
  - Department of Neurology, University of Southern California, Los Angeles, CA, United States
- Shawna Benard
  - Department of Neurology, University of Southern California, Los Angeles, CA, United States
- Cairn Seale
  - NeuroPace, Inc., Mountain View, CA, United States
- Martha Morrell
  - NeuroPace, Inc., Mountain View, CA, United States
  - Department of Neurology and Neurological Sciences, Stanford University, Stanford, CA, United States
207
Medeiros FA, Lee T, Jammal AA, Al-Aswad LA, Eydelman MB, Schuman JS. The Definition of Glaucomatous Optic Neuropathy in Artificial Intelligence Research and Clinical Applications. Ophthalmol Glaucoma 2023;6:432-438. PMID: 36731747; PMCID: PMC10387499; DOI: 10.1016/j.ogla.2023.01.007.
Abstract
OBJECTIVE Although artificial intelligence (AI) models may offer innovative and powerful ways to use the wealth of data generated by diagnostic tools, there are important challenges related to their development and validation. Most notable is the lack of a perfect reference standard for glaucomatous optic neuropathy (GON). Because AI models are trained to predict the presence of glaucoma or its progression, they generally rely on a reference standard that is used to train the model and assess its validity. If an improper reference standard is used, the model may be trained to detect or predict something that has little or no clinical value. This article summarizes the issues and discussions related to the definition of GON in AI applications as presented by the Glaucoma Workgroup of the Collaborative Community for Ophthalmic Imaging (CCOI) US Food and Drug Administration Virtual Workshop, held on September 3 and 4, 2020, and on January 28, 2022. DESIGN Review and conference proceedings. SUBJECTS No human or animal subjects, or data therefrom, were used in the production of this article. METHODS A summary of the Workshop was produced with input and approval from all participants. MAIN OUTCOME MEASURES Consensus position of the CCOI Workgroup on the challenges in defining GON and possible solutions. RESULTS The Workshop reviewed existing challenges that arise from the use of subjective definitions of GON and highlighted the need for a more objective approach to characterizing GON that could facilitate replication and comparability of AI studies and allow for better clinical validation of proposed AI tools. Different tests and combinations of parameters for defining a reference standard for GON have been proposed. Different reference standards may need to be considered depending on the scenario in which the AI models are to be applied, such as community-based or opportunistic screening versus detection or monitoring of glaucoma in tertiary care. CONCLUSIONS The development and validation of new AI-based diagnostic tests should be based on rigorous methodology, with clear determination of how the reference standards for glaucomatous damage are constructed and of the settings where the tests are going to be applied. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found after the references.
Affiliation(s)
- Felipe A Medeiros
  - Department of Ophthalmology, Duke University School of Medicine, Durham, North Carolina
  - Department of Electrical and Computer Engineering, Pratt School of Engineering, Duke University, Durham, North Carolina
- Terry Lee
  - Department of Ophthalmology, Duke University School of Medicine, Durham, North Carolina
- Alessandro A Jammal
  - Department of Ophthalmology, Duke University School of Medicine, Durham, North Carolina
- Lama A Al-Aswad
  - Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, New York
  - Department of Population Health, NYU Langone Health, NYU Grossman School of Medicine, New York, New York
- Joel S Schuman
  - Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, New York
  - Department of Biomedical Engineering, New York University Tandon School of Engineering, Brooklyn, New York
  - Department of Electrical and Computer Engineering, New York University Tandon School of Engineering, Brooklyn, New York
  - Center for Neural Science, NYU, New York, New York
  - Neuroscience Institute, NYU Langone Health, New York, New York
208
Hakami KM, Alameer M, Jaawna E, Sudi A, Bahkali B, Mohammed A, Hakami A, Mahfouz MS, Alhazmi AH, Dhayihi TM. The Impact of Artificial Intelligence on the Preference of Radiology as a Future Specialty Among Medical Students at Jazan University, Saudi Arabia: A Cross-Sectional Study. Cureus 2023;15:e41840. PMID: 37575874; PMCID: PMC10423067; DOI: 10.7759/cureus.41840.
Abstract
Background The use of artificial intelligence (AI) in healthcare continues to spark interest and has been the subject of extensive discussion in recent years, as have its potential effects on future medical specialties, including radiology. In this study, we aimed to assess the impact of AI on the preference of medical students at Jazan University for choosing radiology as a future specialty. Methodology An observational cross-sectional study was conducted using a pre-tested, self-administered online questionnaire among medical students at Jazan University. Data were cleaned, coded, entered, and analyzed using SPSS version 25 (SPSS Inc., USA). Statistical significance was defined as a P-value of less than 0.05. We examined the respondents' ranking of radiology as a preferred specialty in the presence and absence of AI. Radiology's ranking as a preferred specialty with or without AI integration was statistically analyzed for associations with baseline characteristics, personal opinions, and previous exposures among those who had radiology as one of their top three options. Results Approximately 27.4% of males and 28.3% of females ranked radiology among their top three preferred choices. Almost 65.2% were exposed to radiology topics through pre-clinical lectures. The main sources of information about AI for the studied group were medical students (41%) and the Internet (27.5%). Students' preference for radiology was significantly affected when the specialty is integrated with AI (P < 0.05). Around 16.1% of those who chose radiology as one of their top three choices strongly agreed that AI will decrease job opportunities for radiologists. Logistic regression analysis showed that being female was significantly associated with an increased likelihood of replacing radiology with another specialty when it is integrated with AI (crude odds ratio (COR) = 1.91). Conclusion Our results demonstrated that students' choices were significantly affected by the presence of AI. Therefore, to raise medical students' knowledge and awareness of the potential positive effects of AI, it is necessary to organize educational campaigns, webinars, and conferences.
Affiliation(s)
- Essa Jaawna
  - Faculty of Medicine, Jazan University, Jazan, SAU
209
Li Z, Wang L, Wu X, Jiang J, Qiang W, Xie H, Zhou H, Wu S, Shao Y, Chen W. Artificial intelligence in ophthalmology: The path to the real-world clinic. Cell Rep Med 2023:101095. PMID: 37385253; PMCID: PMC10394169; DOI: 10.1016/j.xcrm.2023.101095.
Abstract
Artificial intelligence (AI) has great potential to transform healthcare by enhancing the workflow and productivity of clinicians, enabling existing staff to serve more patients, improving patient outcomes, and reducing health disparities. In the field of ophthalmology, AI systems have shown performance comparable with, or even better than, experienced ophthalmologists in tasks such as diabetic retinopathy detection and grading. However, despite these promising results, very few AI systems have been deployed in real-world clinical settings, which calls the true value of these systems into question. This review provides an overview of the current main AI applications in ophthalmology, describes the challenges that need to be overcome prior to clinical implementation of AI systems, and discusses the strategies that may pave the way to the clinical translation of these systems.
Affiliation(s)
- Zhongwen Li
  - Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
  - School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Lei Wang
  - School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Xuefang Wu
  - Guizhou Provincial People's Hospital, Guizhou University, Guiyang 550002, China
- Jiewei Jiang
  - School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
- Wei Qiang
  - Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- He Xie
  - School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Hongjian Zhou
  - Department of Computer Science, University of Oxford, Oxford, Oxfordshire OX1 2JD, UK
- Shanjun Wu
  - Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- Yi Shao
  - Department of Ophthalmology, the First Affiliated Hospital of Nanchang University, Nanchang 330006, China
- Wei Chen
  - Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
  - School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
210
Wang Z, Lim G, Ng WY, Tan TE, Lim J, Lim SH, Foo V, Lim J, Sinisterra LG, Zheng F, Liu N, Tan GSW, Cheng CY, Cheung GCM, Wong TY, Ting DSW. Synthetic artificial intelligence using generative adversarial network for retinal imaging in detection of age-related macular degeneration. Front Med (Lausanne) 2023;10:1184892. PMID: 37425325; PMCID: PMC10324667; DOI: 10.3389/fmed.2023.1184892.
Abstract
Introduction Age-related macular degeneration (AMD) is one of the leading causes of vision impairment globally, and early detection is crucial to prevent vision loss. However, screening for AMD is resource dependent and demands experienced healthcare providers. Recently, deep learning (DL) systems have shown potential for effective detection of various eye diseases from retinal fundus images, but the development of such robust systems requires large datasets, which can be limited by the prevalence of the disease and patient privacy. In the case of AMD, the advanced phenotype is often too scarce for conducting DL analysis, which may be tackled by generating synthetic images using generative adversarial networks (GANs). This study aims to develop GAN-synthesized fundus photos with AMD lesions and to assess the realness of these images with an objective scale. Methods To build our GAN models, a total of 125,012 fundus photos from a real-world non-AMD phenotypical dataset were used. StyleGAN2 and a human-in-the-loop (HITL) method were then applied to synthesize fundus images with AMD features. To objectively assess the quality of the synthesized images, we proposed a novel realness scale based on the frequency of broken vessels observed in the fundus photos. Four residents conducted two rounds of grading on 300 images to distinguish real from synthetic images, based on their subjective impression and the objective scale, respectively. Results and discussion The introduction of HITL training increased the percentage of synthetic images with AMD lesions, despite the limited number of AMD images in the initial training dataset. Qualitatively, the synthesized images proved robust in that our residents had limited ability to distinguish real from synthetic ones, as evidenced by an overall accuracy of 0.66 (95% CI: 0.61-0.66) and a Cohen's kappa of 0.320. For the non-referable AMD classes (no or early AMD), the accuracy was only 0.51. With the objective scale, the overall accuracy improved to 0.72. In conclusion, GAN models built with HITL training are capable of producing realistic-looking fundus images that can fool human experts, while our objective realness scale based on broken vessels can help identify synthetic fundus photos.
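The chance-corrected agreement statistic cited in this abstract, Cohen's kappa, can be computed directly from two raters' label sequences. A minimal pure-Python sketch (function name and data are illustrative, not taken from the study):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of items labeled identically
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies
    expected = sum(
        (rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels
    )
    return (observed - expected) / (1 - expected)
```

A kappa near 0 (such as the 0.320 reported here for real-vs-synthetic judgments) means agreement is not much better than chance, which is why a low kappa alongside a 0.66 accuracy supports the paper's claim that graders struggled to tell the images apart.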
Affiliation(s)
- Zhaoran Wang
  - Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Gilbert Lim
  - Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
  - Singapore Eye Research Institute, Singapore, Singapore
- Wei Yan Ng
  - Singapore Eye Research Institute, Singapore, Singapore
  - Singapore National Eye Centre, Singapore, Singapore
- Tien-En Tan
  - Singapore Eye Research Institute, Singapore, Singapore
  - Singapore National Eye Centre, Singapore, Singapore
- Jane Lim
  - Singapore Eye Research Institute, Singapore, Singapore
  - Singapore National Eye Centre, Singapore, Singapore
- Sing Hui Lim
  - Singapore Eye Research Institute, Singapore, Singapore
  - Singapore National Eye Centre, Singapore, Singapore
- Valencia Foo
  - Singapore Eye Research Institute, Singapore, Singapore
  - Singapore National Eye Centre, Singapore, Singapore
- Joshua Lim
  - Singapore Eye Research Institute, Singapore, Singapore
  - Singapore National Eye Centre, Singapore, Singapore
- Feihui Zheng
  - Singapore Eye Research Institute, Singapore, Singapore
- Nan Liu
  - Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
  - Singapore Eye Research Institute, Singapore, Singapore
- Gavin Siew Wei Tan
  - Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
  - Singapore Eye Research Institute, Singapore, Singapore
  - Singapore National Eye Centre, Singapore, Singapore
- Ching-Yu Cheng
  - Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
  - Singapore Eye Research Institute, Singapore, Singapore
  - Singapore National Eye Centre, Singapore, Singapore
- Gemmy Chui Ming Cheung
  - Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
  - Singapore Eye Research Institute, Singapore, Singapore
  - Singapore National Eye Centre, Singapore, Singapore
- Tien Yin Wong
  - Singapore National Eye Centre, Singapore, Singapore
  - School of Medicine, Tsinghua University, Beijing, China
- Daniel Shu Wei Ting
  - Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
  - Singapore Eye Research Institute, Singapore, Singapore
  - Singapore National Eye Centre, Singapore, Singapore
Collapse
|
211
|
Shi XH, Dong L, Zhang RH, Zhou DJ, Ling SG, Shao L, Yan YN, Wang YX, Wei WB. Relationships between quantitative retinal microvascular characteristics and cognitive function based on automated artificial intelligence measurements. Front Cell Dev Biol 2023; 11:1174984. [PMID: 37416799 PMCID: PMC10322221 DOI: 10.3389/fcell.2023.1174984] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Accepted: 06/09/2023] [Indexed: 07/08/2023] Open
Abstract
Introduction: The purpose of this study was to assess the relationship between retinal vascular characteristics and cognitive function using artificial intelligence techniques to obtain fully automated quantitative measurements of retinal vascular morphological parameters. Methods: A deep learning-based semantic segmentation network, ResNet101-UNet, was used to construct a vascular segmentation model for fully automated quantitative measurement of retinal vascular parameters on fundus photographs. Retinal photographs centered on the optic disc of 3,107 participants (aged 50-93 years) from the Beijing Eye Study 2011, a population-based cross-sectional study, were analyzed. The main parameters included the retinal vascular branching angle, vascular fractal dimension, vascular diameter, vascular tortuosity, and vascular density. Cognitive function was assessed using the Mini-Mental State Examination (MMSE). Results: The mean MMSE score was 26.34 ± 3.64 (median: 27; range: 2-30). Among the participants, 414 (13.3%) were classified as having cognitive impairment (MMSE score < 24): 296 (9.5%) with mild cognitive impairment (MMSE: 19-23), 98 (3.2%) with moderate cognitive impairment (MMSE: 10-18), and 20 (0.6%) with severe cognitive impairment (MMSE < 10). Compared with the normal cognitive function group, the average retinal venular diameter was significantly larger (p = 0.013), and the retinal vascular fractal dimension and vascular density were significantly smaller (both p < 0.001) in the mild cognitive impairment group. The retinal arteriole-to-venular ratio (p = 0.003) and vascular fractal dimension (p = 0.033) were significantly decreased in the severe cognitive impairment group compared to the mild cognitive impairment group.
In the multivariate analysis, better cognition (i.e., a higher MMSE score) was significantly associated with a higher retinal vascular fractal dimension (b = 0.134, p = 0.043) and higher retinal vascular density (b = 0.152, p = 0.023) after adjustment for age, best corrected visual acuity (BCVA) (logMAR), and education level. Discussion: In conclusion, our findings, derived from an artificial intelligence-based fully automated retinal vascular parameter measurement method, showed that several retinal vascular morphological parameters were correlated with cognitive impairment. Decreased retinal vascular fractal dimension and vascular density may serve as candidate biomarkers for early identification of cognitive impairment, while the observed reduction in the retinal arteriole-to-venular ratio occurs in the late stages of cognitive impairment.
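The vascular fractal dimension used above is typically estimated by box counting on a binarized vessel segmentation: count how many grid boxes of each size contain vessel pixels and fit a line in log-log space. A minimal NumPy sketch under that assumption (the function name and box sizes are illustrative, not from the paper):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary mask by box counting."""
    counts = []
    for s in sizes:
        # crop to a multiple of s, then partition into s x s boxes
        side = (mask.shape[0] // s) * s
        boxes = mask[:side, :side].reshape(side // s, s, side // s, s)
        # count boxes containing at least one vessel pixel
        counts.append(boxes.any(axis=(1, 3)).sum())
    # slope of log N(s) versus log(1/s) estimates the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

dim = box_counting_dimension(np.ones((64, 64), dtype=bool))  # a filled square has dimension 2
```

Healthy retinal vascular trees typically yield dimensions between 1 and 2; a lower value reflects a sparser branching pattern, which is the quantity the study relates to cognition.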
Collapse
Affiliation(s)
- Xu Han Shi
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Beijing Ophthalmology and Visual Sciences Key Lab, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Li Dong
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Beijing Ophthalmology and Visual Sciences Key Lab, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Rui Heng Zhang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Beijing Ophthalmology and Visual Sciences Key Lab, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Deng Ji Zhou
- EVision Technology (Beijing) Co., Ltd., Beijing, China
| | | | - Lei Shao
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Beijing Ophthalmology and Visual Sciences Key Lab, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Yan Ni Yan
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Beijing Ophthalmology and Visual Sciences Key Lab, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Ya Xing Wang
- Beijing Ophthalmology and Visual Science Key Laboratory, Beijing Tongren Eye Center, Beijing Tongren Hospital, Beijing Institute of Ophthalmology, Capital Medical University, Beijing, China
| | - Wen Bin Wei
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Beijing Ophthalmology and Visual Sciences Key Lab, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| |
Collapse
|
212
|
Deshmukh R, Ong ZZ, Rampat R, Alió del Barrio JL, Barua A, Ang M, Mehta JS, Said DG, Dua HS, Ambrósio R, Ting DSJ. Management of keratoconus: an updated review. Front Med (Lausanne) 2023; 10:1212314. [PMID: 37409272 PMCID: PMC10318194 DOI: 10.3389/fmed.2023.1212314] [Citation(s) in RCA: 16] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2023] [Accepted: 05/30/2023] [Indexed: 07/07/2023] Open
Abstract
Keratoconus is the most common corneal ectatic disorder. It is characterized by progressive corneal thinning with resultant irregular astigmatism and myopia. Its prevalence has been estimated at 1:375 to 1:2,000 people globally, with a considerably higher rate in younger populations. Over the past two decades, there has been a paradigm shift in the management of keratoconus. Treatment has expanded significantly from conservative management (e.g., spectacle and contact lens wear) and penetrating keratoplasty to many other therapeutic and refractive modalities, including corneal cross-linking (CXL; with various protocols/techniques), combined CXL-keratorefractive surgeries, intracorneal ring segments, anterior lamellar keratoplasty, and, more recently, Bowman's layer transplantation, stromal keratophakia, and stromal regeneration. Several recent large genome-wide association studies (GWAS) have identified important genetic mutations relevant to keratoconus, facilitating the development of potential gene therapies for targeting keratoconus and halting disease progression. In addition, attempts have been made to leverage the power of artificial intelligence-assisted algorithms to enable earlier detection and progression prediction in keratoconus. In this review, we provide a comprehensive overview of current and emerging treatments for keratoconus and propose a treatment algorithm for systematically guiding the management of this common clinical entity.
Collapse
Affiliation(s)
- Rashmi Deshmukh
- Department of Cornea and Refractive Surgery, LV Prasad Eye Institute, Hyderabad, India
| | - Zun Zheng Ong
- Department of Ophthalmology, Queen’s Medical Centre, Nottingham, United Kingdom
| | - Radhika Rampat
- Department of Ophthalmology, Royal Free London NHS Foundation Trust, London, United Kingdom
| | - Jorge L. Alió del Barrio
- Cornea, Cataract and Refractive Surgery Unit, Vissum (Miranza Group), Alicante, Spain
- Division of Ophthalmology, School of Medicine, Universidad Miguel Hernández, Alicante, Spain
| | - Ankur Barua
- Birmingham and Midland Eye Centre, Birmingham, United Kingdom
| | - Marcus Ang
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
| | - Jodhbir S. Mehta
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
| | - Dalia G. Said
- Department of Ophthalmology, Queen’s Medical Centre, Nottingham, United Kingdom
- Academic Ophthalmology, School of Medicine, University of Nottingham, Nottingham, United Kingdom
| | - Harminder S. Dua
- Department of Ophthalmology, Queen’s Medical Centre, Nottingham, United Kingdom
- Academic Ophthalmology, School of Medicine, University of Nottingham, Nottingham, United Kingdom
| | - Renato Ambrósio
- Department of Cornea and Refractive Surgery, Instituto de Olhos Renato Ambrósio, Rio de Janeiro, Brazil
- Department of Ophthalmology, Federal University of the State of Rio de Janeiro (UNIRIO), Rio de Janeiro, Brazil
- Federal University of São Paulo (UNIFESP), São Paulo, Brazil
| | - Darren Shu Jeng Ting
- Birmingham and Midland Eye Centre, Birmingham, United Kingdom
- Academic Ophthalmology, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, United Kingdom
| |
Collapse
|
213
|
Angkurawaranon S, Sanorsieng N, Unsrisong K, Inkeaw P, Sripan P, Khumrin P, Angkurawaranon C, Vaniyapong T, Chitapanarux I. A comparison of performance between a deep learning model with residents for localization and classification of intracranial hemorrhage. Sci Rep 2023; 13:9975. [PMID: 37340038 DOI: 10.1038/s41598-023-37114-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2023] [Accepted: 06/15/2023] [Indexed: 06/22/2023] Open
Abstract
Intracranial hemorrhage (ICH) from traumatic brain injury (TBI) requires prompt radiological investigation and recognition by physicians. Computed tomography (CT) scanning is the investigation of choice for TBI and has become increasingly utilized amid a shortage of trained radiology personnel. Deep learning models are anticipated to be a promising solution for the generation of timely and accurate radiology reports. Our study examines the diagnostic performance of a deep learning model and compares it with that of radiology, emergency medicine, and neurosurgery residents in the detection, localization, and classification of traumatic ICHs. Our results demonstrate that the deep learning model achieved a high level of accuracy (0.89), outperforming the residents in sensitivity (0.82) while still lagging behind in specificity (0.90). Overall, our study suggests that the deep learning model may serve as a potential screening tool to aid the interpretation of head CT scans in traumatic brain injury patients.
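The sensitivity, specificity, and accuracy quoted above all derive from a 2x2 confusion matrix of model (or resident) calls against ground truth. A minimal sketch; the counts below are illustrative and chosen only to reproduce plausible values, not the study's actual data:

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and accuracy from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)   # proportion of true ICH cases detected
    specificity = tn / (tn + fp)   # proportion of non-ICH scans correctly cleared
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# illustrative counts, not the study's data
sens, spec, acc = screening_metrics(tp=82, fp=10, fn=18, tn=90)  # 0.82, 0.9, 0.86
```

For a screening tool, sensitivity is the critical figure: a false negative here is a missed hemorrhage, which is why the comparison above emphasizes it.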
Collapse
Affiliation(s)
- Salita Angkurawaranon
- Department of Radiology, Maharaj Nakorn Chiang Mai Hospital, Faculty of Medicine, Chiang Mai University, Chiang Mai, 50200, Thailand
- Global Health and Chronic Conditions Research Group, Chiang Mai, 50200, Thailand
| | - Nonn Sanorsieng
- Department of Radiology, Maharaj Nakorn Chiang Mai Hospital, Faculty of Medicine, Chiang Mai University, Chiang Mai, 50200, Thailand
| | - Kittisak Unsrisong
- Department of Radiology, Maharaj Nakorn Chiang Mai Hospital, Faculty of Medicine, Chiang Mai University, Chiang Mai, 50200, Thailand
| | - Papangkorn Inkeaw
- Department of Computer Science, Faculty of Science, Chiang Mai University, Chiang Mai, 50200, Thailand
| | - Patumrat Sripan
- Research Institute for Health Sciences, Chiang Mai University, Chiang Mai, 50200, Thailand
| | - Piyapong Khumrin
- Department of Family Medicine, Faculty of Medicine, Chiang Mai University, Chiang Mai, 50200, Thailand
| | - Chaisiri Angkurawaranon
- Global Health and Chronic Conditions Research Group, Chiang Mai, 50200, Thailand
- Department of Family Medicine, Faculty of Medicine, Chiang Mai University, Chiang Mai, 50200, Thailand
| | - Tanat Vaniyapong
- Neurosurgery Division, Department of Surgery, Faculty of Medicine, Chiang Mai University, Chiang Mai, 50200, Thailand
| | - Imjai Chitapanarux
- Department of Radiology, Maharaj Nakorn Chiang Mai Hospital, Faculty of Medicine, Chiang Mai University, Chiang Mai, 50200, Thailand.
| |
Collapse
|
214
|
Wang Z, Li Z, Li K, Mu S, Zhou X, Di Y. Performance of artificial intelligence in diabetic retinopathy screening: a systematic review and meta-analysis of prospective studies. Front Endocrinol (Lausanne) 2023; 14:1197783. [PMID: 37383397 PMCID: PMC10296189 DOI: 10.3389/fendo.2023.1197783] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/31/2023] [Accepted: 05/23/2023] [Indexed: 06/30/2023] Open
Abstract
Aims To systematically evaluate the diagnostic value of artificial intelligence (AI) algorithm models for various types of diabetic retinopathy (DR) in prospective studies over the previous five years, and to explore the factors affecting their diagnostic effectiveness. Materials and methods A search was conducted in the Cochrane Library, Embase, Web of Science, PubMed, and IEEE databases to collect prospective studies on AI models for the diagnosis of DR from January 2017 to December 2022. We used QUADAS-2 to evaluate the risk of bias in the included studies. Meta-analysis was performed using MetaDiSc and STATA 14.0 software to calculate the pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio for various types of DR. Diagnostic odds ratios, summary receiver operating characteristic (SROC) plots, coupled forest plots, and subgroup analyses were performed according to DR category, patient source, region of study, and quality of literature, image, and algorithm. Results Twenty-one studies were included. Meta-analysis showed that the pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, area under the curve, Cochrane Q index, and diagnostic odds ratio of the AI models for the diagnosis of DR were 0.880 (0.875-0.884), 0.912 (0.99-0.913), 13.021 (10.738-15.789), 0.083 (0.061-0.112), 0.9798, 0.9388, and 206.80 (124.82-342.63), respectively. DR category, patient source, region of study, sample size, and quality of literature, image, and algorithm may affect the diagnostic efficiency of AI for DR. Conclusion AI models have clear diagnostic value for DR, but their performance is influenced by many factors that deserve further study. Systematic review registration https://www.crd.york.ac.uk/prospero/, identifier CRD42023389687.
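The likelihood ratios and diagnostic odds ratio reported above relate to sensitivity and specificity by simple formulas; a minimal sketch for a single study (note: the meta-analysis pools each statistic across studies with random-effects models, so the pooled LRs and DOR need not equal ratios computed from the pooled sensitivity and specificity):

```python
def likelihood_ratios(sens, spec):
    """Positive/negative likelihood ratios and diagnostic odds ratio for one study."""
    lr_pos = sens / (1 - spec)          # how much a positive test raises disease odds
    lr_neg = (1 - sens) / spec          # how much a negative test lowers disease odds
    dor = lr_pos / lr_neg               # diagnostic odds ratio
    return lr_pos, lr_neg, dor

# illustrative single-study values, not the pooled estimates above
lrp, lrn, dor = likelihood_ratios(sens=0.9, spec=0.9)  # LR+ = 9, LR- ≈ 0.111, DOR = 81
```

An LR+ above 10 and an LR- below 0.1, as the pooled estimates approach, are conventional thresholds for a test that meaningfully rules disease in and out, respectively.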
Collapse
|
215
|
Li H, Yang Z. Torsional nystagmus recognition based on deep learning for vertigo diagnosis. Front Neurosci 2023; 17:1160904. [PMID: 37360163 PMCID: PMC10288185 DOI: 10.3389/fnins.2023.1160904] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2023] [Accepted: 05/22/2023] [Indexed: 06/28/2023] Open
Abstract
Introduction Detection of torsional nystagmus can help identify the canal of origin in benign paroxysmal positional vertigo (BPPV). Most currently available pupil trackers do not detect torsional nystagmus. In view of this, a new deep learning network model was designed for the determination of torsional nystagmus. Methods The dataset comes from the Eye, Ear, Nose and Throat (Eye&ENT) Hospital of Fudan University. During data acquisition, infrared videos were obtained from an eye movement recorder. The dataset contains 24,521 nystagmus videos. All torsional nystagmus videos were annotated by an ophthalmologist at the hospital. 80% of the dataset was used to train the model, and 20% to test it. Results Experiments indicate that the designed method can effectively identify torsional nystagmus, with high recognition accuracy compared with other methods. It enables automatic recognition of torsional nystagmus and provides support for diagnosing posterior and anterior canal BPPV. Discussion Our present work complements existing methods of 2D nystagmus analysis and could improve the diagnostic capabilities of VNG in multiple vestibular disorders. Automatically identifying BPPV requires detection of nystagmus in all three planes and identification of a paroxysm; this is the next step of our research.
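The 80/20 train/test split described above should be reproducible so that reported accuracy can be re-checked. A minimal sketch of a seeded split in plain Python (the study does not describe its split procedure, so the seed and helper are illustrative assumptions):

```python
import random

def split_80_20(items, seed=42):
    """Deterministic 80/20 train/test split; seed fixes the shuffle for reproducibility."""
    shuffled = list(items)
    random.Random(seed).shuffle(shuffled)
    cut = int(0.8 * len(shuffled))
    return shuffled[:cut], shuffled[cut:]

# illustrative: split 100 video IDs into 80 training and 20 test items
train, test = split_80_20(range(100))
```

For clinical video data, splitting by patient rather than by video is usually preferable, so that clips from the same patient never appear in both sets.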
Collapse
|
216
|
Yusuf IH, Charbel Issa P, Ahn SJ. Unmet needs and future perspectives in hydroxychloroquine retinopathy. Front Med (Lausanne) 2023; 10:1196815. [PMID: 37359010 PMCID: PMC10288184 DOI: 10.3389/fmed.2023.1196815] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2023] [Accepted: 05/17/2023] [Indexed: 06/28/2023] Open
Abstract
Retinopathy is a well-recognized toxic effect of hydroxychloroquine treatment. As hydroxychloroquine retinopathy is a potentially vision-threatening condition, early detection is imperative to minimize vision loss due to drug toxicity. However, early detection of hydroxychloroquine retinopathy remains challenging even with modern retinal imaging techniques. No treatment has been established for this condition, apart from drug cessation to minimize further damage. In this perspective article, we aim to summarize the knowledge gaps and unmet needs in current clinical practice and research in hydroxychloroquine retinopathy. The information presented in this article may help guide the future directions of screening practices and research in hydroxychloroquine retinopathy.
Collapse
Affiliation(s)
- Imran H. Yusuf
- Oxford Eye Hospital and Nuffield Department of Clinical Neurosciences, University of Oxford, John Radcliffe Hospital, Oxford, United Kingdom
| | - Peter Charbel Issa
- Oxford Eye Hospital and Nuffield Department of Clinical Neurosciences, University of Oxford, John Radcliffe Hospital, Oxford, United Kingdom
| | - Seong Joon Ahn
- Department of Ophthalmology, Hanyang University Hospital, Hanyang University College of Medicine, Seoul, Republic of Korea
| |
Collapse
|
217
|
Hassan E, Elmougy S, Ibraheem MR, Hossain MS, AlMutib K, Ghoneim A, AlQahtani SA, Talaat FM. Enhanced Deep Learning Model for Classification of Retinal Optical Coherence Tomography Images. SENSORS (BASEL, SWITZERLAND) 2023; 23:5393. [PMID: 37420558 DOI: 10.3390/s23125393] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/20/2023] [Revised: 05/15/2023] [Accepted: 05/25/2023] [Indexed: 07/09/2023]
Abstract
Retinal optical coherence tomography (OCT) imaging is a valuable tool for assessing the condition of the posterior segment of the eye. It has a great effect on the specificity of diagnosis, the monitoring of many physiological and pathological processes, and the evaluation of therapeutic response in various fields of clinical practice, including primary eye diseases and systemic diseases such as diabetes. Therefore, precise diagnosis, classification, and automated image analysis models are crucial. In this paper, we propose an enhanced optical coherence tomography (EOCT) model to classify retinal OCT images based on a modified ResNet-50 and random forest algorithms, which are used in the proposed study's training strategy to enhance performance. The Adam optimizer is applied during the training process to increase the efficiency of the ResNet-50 model compared with common pre-trained models, such as spatially separable convolutions and Visual Geometry Group (VGG-16). The experimental results show that the sensitivity, specificity, precision, negative predictive value, false discovery rate, false negative rate, accuracy, and Matthews correlation coefficient are 0.9836, 0.9615, 0.9740, 0.9756, 0.0385, 0.0260, 0.0164, 0.9747, 0.9788, and 0.9474, respectively.
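The Matthews correlation coefficient reported above is a single balanced summary of all four confusion-matrix cells, useful when classes are imbalanced, as OCT disease categories often are. A minimal sketch; the counts are illustrative, not the paper's:

```python
import math

def matthews_corrcoef(tp, fp, fn, tn):
    """Matthews correlation coefficient from a binary confusion matrix."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # den is zero when any marginal is empty; 0.0 is the conventional fallback
    return num / den if den else 0.0

mcc = matthews_corrcoef(tp=50, fp=0, fn=0, tn=50)  # perfect classification → 1.0
```

MCC ranges from -1 (total disagreement) through 0 (chance) to +1 (perfect prediction), so the 0.9474 reported above indicates near-perfect agreement across both classes.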
Collapse
Affiliation(s)
- Esraa Hassan
- Faculty of Artificial Intelligence, Kafrelsheikh University, Kafrelsheikh 33516, Egypt
| | - Samir Elmougy
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
| | - Mai R Ibraheem
- Department of Information Technology, Faculty of Computers and information, Kafrelsheikh University, Kafrelsheikh 33516, Egypt
| | - M Shamim Hossain
- Research Chair of Pervasive and Mobile Computing, Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
| | - Khalid AlMutib
- Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11574, Saudi Arabia
| | - Ahmed Ghoneim
- Research Chair of Pervasive and Mobile Computing, Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
| | - Salman A AlQahtani
- Research Chair of Pervasive and Mobile Computing, Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11574, Saudi Arabia
| | - Fatma M Talaat
- Faculty of Artificial Intelligence, Kafrelsheikh University, Kafrelsheikh 33516, Egypt
| |
Collapse
|
218
|
Zhao PY, Bommakanti N, Yu G, Aaberg MT, Patel TP, Paulus YM. Deep learning for automated detection of neovascular leakage on ultra-widefield fluorescein angiography in diabetic retinopathy. Sci Rep 2023; 13:9165. [PMID: 37280345 DOI: 10.1038/s41598-023-36327-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Accepted: 06/01/2023] [Indexed: 06/08/2023] Open
Abstract
Diabetic retinopathy is a leading cause of blindness in working-age adults worldwide. Neovascular leakage on fluorescein angiography indicates progression to the proliferative stage of diabetic retinopathy, which is an important distinction that requires timely ophthalmic intervention with laser or intravitreal injection treatment to reduce the risk of severe, permanent vision loss. In this study, we developed a deep learning algorithm to detect neovascular leakage on ultra-widefield fluorescein angiography images obtained from patients with diabetic retinopathy. The algorithm, an ensemble of three convolutional neural networks, was able to accurately classify neovascular leakage and distinguish this disease marker from other angiographic disease features. With additional real-world validation and testing, our algorithm could facilitate identification of neovascular leakage in the clinical setting, allowing timely intervention to reduce the burden of blinding diabetic eye disease.
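An ensemble of three convolutional neural networks, as described above, is commonly combined by soft voting: averaging the per-image probabilities before thresholding. A minimal NumPy sketch of that combination step (the probabilities and threshold are illustrative; the paper does not specify its ensembling rule):

```python
import numpy as np

def soft_vote(prob_lists, threshold=0.5):
    """Average per-model probabilities across an ensemble, then threshold."""
    mean_probs = np.mean(prob_lists, axis=0)          # average over models
    preds = (mean_probs >= threshold).astype(int)     # 1 = neovascular leakage
    return mean_probs, preds

model_probs = [
    [0.9, 0.2],  # CNN 1: per-image probability of neovascular leakage
    [0.8, 0.4],  # CNN 2
    [0.7, 0.6],  # CNN 3
]
probs, preds = soft_vote(model_probs)  # mean probs ≈ [0.8, 0.4], predictions [1, 0]
```

Averaging tends to cancel the uncorrelated errors of the individual networks, which is the usual motivation for ensembling in this setting.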
Collapse
Affiliation(s)
- Peter Y Zhao
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA
| | - Nikhil Bommakanti
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA
| | - Gina Yu
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA
| | - Michael T Aaberg
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA
| | - Tapan P Patel
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA
| | - Yannis M Paulus
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA.
| |
Collapse
|
219
|
Wu J, Yuan Z, Fang Z, Huang Z, Xu Y, Xie W, Wu F, Yao YF. A knowledge-enhanced transform-based multimodal classifier for microbial keratitis identification. Sci Rep 2023; 13:9003. [PMID: 37268729 DOI: 10.1038/s41598-023-36024-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2022] [Accepted: 05/27/2023] [Indexed: 06/04/2023] Open
Abstract
Microbial keratitis, a nonviral corneal infection caused by bacteria, fungi, and protozoa, is an urgent condition in ophthalmology requiring prompt treatment to prevent the severe complications of corneal perforation and vision loss. It is difficult to distinguish between bacterial and fungal keratitis from images alone, as the characteristics of the sample images themselves are very close. Therefore, this study aimed to develop a new deep learning model, a knowledge-enhanced transform-based multimodal classifier, that exploits the potential of slit-lamp images together with treatment texts to identify bacterial keratitis (BK) and fungal keratitis (FK). Model performance was evaluated in terms of accuracy, specificity, sensitivity, and the area under the curve (AUC). 704 images from 352 patients were divided into training, validation, and testing sets. In the testing set, our model reached a best accuracy of 93%, sensitivity of 0.97 (95% CI [0.84, 1]), specificity of 0.92 (95% CI [0.76, 0.98]), and AUC of 0.94 (95% CI [0.92, 0.96]), exceeding the benchmark accuracy of 0.86. The average diagnostic accuracies for BK ranged from 81 to 92%, and those for FK from 89 to 97%. This is the first study to focus on the influence of disease changes and medication interventions on infectious keratitis; our model outperformed the benchmark models, reaching state-of-the-art performance.
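Confidence intervals like the ones quoted above for sensitivity and specificity are often computed with the Wilson score interval, which behaves well for small test sets and proportions near 1. A minimal sketch (the paper does not state which interval method it used, so this is an assumption; the counts are illustrative):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion (e.g., sensitivity on n positives)."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# illustrative: 50 correct calls out of 100 test cases
lo, hi = wilson_ci(50, 100)  # ≈ (0.404, 0.596)
```

Unlike the naive normal-approximation interval, the Wilson interval never extends past 0 or 1, which matters when a point estimate such as 0.97 sits close to the boundary.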
Collapse
Affiliation(s)
- Jianfeng Wu
- School of Medicine, Zhejiang University, Hangzhou, Zhejiang Province, 31002, China
| | - Zhouhang Yuan
- College of Computer Science and Technology, Zhejiang University, Hangzhou, Zhejiang Province, 31002, China
| | - Zhengqing Fang
- College of Computer Science and Technology, Zhejiang University, Hangzhou, Zhejiang Province, 31002, China
| | - Zhengxing Huang
- College of Computer Science and Technology, Zhejiang University, Hangzhou, Zhejiang Province, 31002, China
| | - Yesheng Xu
- Department of Ophthalmology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang Province, 310016, China
- Key Laboratory for Corneal Diseases Research of Zhejiang Province, Hangzhou, Zhejiang Province, China
| | - Wenjia Xie
- Department of Ophthalmology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang Province, 310016, China
- Key Laboratory for Corneal Diseases Research of Zhejiang Province, Hangzhou, Zhejiang Province, China
| | - Fei Wu
- College of Computer Science and Technology, Zhejiang University, Hangzhou, Zhejiang Province, 31002, China.
| | - Yu-Feng Yao
- Department of Ophthalmology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang Province, 310016, China.
- Key Laboratory for Corneal Diseases Research of Zhejiang Province, Hangzhou, Zhejiang Province, China.
| |
Collapse
|
220
|
Heindl LM, Li S, Ting DSW, Keane PA. Artificial intelligence in ophthalmological practice: when ideal meets reality. BMJ Open Ophthalmol 2023; 8:e001129. [PMID: 37493688 PMCID: PMC10255244 DOI: 10.1136/bmjophth-2022-001129] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/27/2023] Open
Affiliation(s)
- Ludwig M Heindl
- Department of Ophthalmology, University of Cologne, Koln, Germany
| | - Senmao Li
- Department of Ophthalmology, University of Cologne, Koln, Germany
- Department of Ophthalmology, The First Affiliated Hospital of Jinan University, Guangzhou, Guangdong, China
| | - Daniel S W Ting
- Singapore National Eye Center, Duke-NUS Medical School, Singapore
- Ophthalmology and Visual Sciences Department, Duke-NUS Medical School, Singapore
| | - Pearse A Keane
- Medical Retina, Moorfields Eye Hospital NHS Foundation Trust, London, UK
| |
Collapse
|
221
|
Vaghefi E, Yang S, Xie L, Han D, Yap A, Schmeidel O, Marshall J, Squirrell D. A multi-centre prospective evaluation of THEIA™ to detect diabetic retinopathy (DR) and diabetic macular oedema (DMO) in the New Zealand screening program. Eye (Lond) 2023; 37:1683-1689. [PMID: 36057664 PMCID: PMC10219993 DOI: 10.1038/s41433-022-02217-w] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2021] [Revised: 07/09/2022] [Accepted: 08/12/2022] [Indexed: 12/23/2022] Open
Abstract
PURPOSE To validate the potential application of THEIA™ as a clinical decision-making assistant in a national screening program. METHODS A total of 900 patients were recruited from either an urban large eye hospital or a semi-rural optometrist-led screening provider as they attended their appointments as part of the New Zealand Diabetic Eye Screening Programme. The de-identified images were independently graded by three senior specialists, and the final results were aggregated using the New Zealand grading scheme, which was then converted to referable/non-referable and healthy/mild/more than mild/sight-threatening categories. RESULTS THEIA™ managed to grade all images obtained during the study. Comparing the adjudicated images from the specialist grading team ("ground truth") with the grading by the AI platform in detecting "sight-threatening" disease, at the patient level THEIA™ achieved 100% imageability, 100% [98.49-100.00%] sensitivity, [97.02-99.16%] specificity, and a negative predictive value of 100%. In other words, THEIA™ did not miss any patients with "more than mild" or "sight-threatening" disease. The level of agreement between the clinicians and the aggregated results was high (k values: 0.9881, 0.9557, and 0.9175), as was the level of agreement between THEIA™ and the aggregated labels (k value: 0.9515). CONCLUSION This multi-centre prospective trial showed that THEIA™ did not miss referable disease when screening for diabetic retinopathy and maculopathy. It also had a very high level of granularity in reporting disease level. As THEIA™ has been tested on a variety of cameras operating in a range of clinics (rural/urban, ophthalmologist-led/optometrist-led), we believe that it will be a suitable addition to a public diabetic screening program.
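The link between the 100% sensitivity and the 100% negative predictive value reported above follows from Bayes' rule: with no false negatives, a negative screen guarantees the absence of referable disease at any prevalence. A minimal sketch (the prevalence value below is illustrative, not the trial's):

```python
def predictive_values(sens, spec, prevalence):
    """PPV/NPV of a screening test at a given disease prevalence (Bayes' rule)."""
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

# with perfect sensitivity, no true case is missed, so NPV is 1.0 regardless of prevalence
ppv, npv = predictive_values(sens=1.0, spec=0.98, prevalence=0.1)
```

Note that PPV, unlike NPV here, still depends on prevalence: in a low-prevalence screening population even a highly specific test yields a non-trivial share of false-positive referrals.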
Affiliation(s)
- Ehsan Vaghefi
- Toku Eyes®, Auckland, New Zealand
- School of Optometry and Vision Science, The University of Auckland, Auckland, New Zealand
- Li Xie
- Toku Eyes®, Auckland, New Zealand
- Aaron Yap
- Department of Ophthalmology, The University of Auckland, Auckland, New Zealand
- Ole Schmeidel
- Department of Diabetes, Auckland District Health Board, Auckland, New Zealand
- John Marshall
- Institute of Ophthalmology, University College London, London, UK
- David Squirrell
- Toku Eyes®, Auckland, New Zealand
- Department of Ophthalmology, The University of Auckland, Auckland, New Zealand
222
Tan TF, Teo ZL, Ting DSW. Artificial Intelligence Bias and Ethics in Retinal Imaging. JAMA Ophthalmol 2023; 141:552-553. [PMID: 37140916] [DOI: 10.1001/jamaophthalmol.2023.1490]
Affiliation(s)
- Ting Fang Tan
- Singapore National Eye Center, Singapore Eye Research Institute, Singapore, Singapore
- Zhen Ling Teo
- Singapore National Eye Center, Singapore Eye Research Institute, Singapore, Singapore
- Daniel Shu Wei Ting
- Singapore National Eye Center, Singapore Eye Research Institute, Singapore, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
223
Eslami Y, Mousavi Kouzahkanan Z, Farzinvash Z, Safizadeh M, Zarei R, Fakhraie G, Vahedian Z, Mahmoudi T, Fadakar K, Beikmarzehei A, Tabatabaei SM. Deep Learning-Based Classification of Subtypes of Primary Angle-Closure Disease With Anterior Segment Optical Coherence Tomography. J Glaucoma 2023; 32:540-547. [PMID: 36897658] [DOI: 10.1097/ijg.0000000000002194]
Abstract
PRÉCIS We developed a deep learning-based classifier that discriminates primary angle closure suspects (PACS), primary angle closure (PAC)/primary angle closure glaucoma (PACG), and control eyes with open angles with acceptable accuracy. PURPOSE To develop a deep learning-based classifier for differentiating subtypes of primary angle-closure disease, including PACS and PAC/PACG, from normal control eyes. MATERIALS AND METHODS Anterior segment optical coherence tomography images were analyzed with 5 different networks: MnasNet, MobileNet, ResNet18, ResNet50, and EfficientNet. The data set was split, with randomization performed at the patient level, into a training plus validation set (85%) and a test set (15%), and 4-fold cross-validation was used to train the model. For each architecture, the networks were trained with both original and cropped images, and the analyses were carried out for single images and for images grouped at the patient level (case-based). Majority voting was then applied to determine the final prediction. RESULTS A total of 1616 images of normal eyes (87 eyes), 1055 images of PACS eyes (66 eyes), and 1076 images of PAC/PACG eyes (66 eyes) were included in the analysis. The mean ± SD age was 51.76 ± 15.15 years, and 48.3% of patients were male. MobileNet had the best performance when both original and cropped images were used. The accuracy of MobileNet for detecting normal, PACS, and PAC/PACG eyes was 0.99 ± 0.00, 0.77 ± 0.02, and 0.77 ± 0.03, respectively. With the case-based classification approach, the accuracy of MobileNet improved, reaching 0.95 ± 0.03, 0.83 ± 0.06, and 0.81 ± 0.05, respectively. For detecting open-angle, PACS, and PAC/PACG eyes, the MobileNet classifier achieved areas under the curve of 1, 0.906, and 0.872, respectively, on the test set.
CONCLUSION The MobileNet-based classifier can detect normal, PACS, and PAC/PACG eyes with acceptable accuracy from anterior segment optical coherence tomography images.
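The case-based step described above aggregates per-image network predictions into a single patient-level label by majority vote. A minimal sketch of that aggregation step (the labels are illustrative, not tied to the study's pipeline):

```python
from collections import Counter

def majority_vote(image_predictions):
    """Aggregate per-image class predictions (e.g. 'normal', 'PACS',
    'PAC/PACG') into one patient-level prediction by majority vote."""
    counts = Counter(image_predictions)
    label, _ = counts.most_common(1)[0]
    return label

# Hypothetical per-image predictions for one patient:
print(majority_vote(["PACS", "PACS", "PAC/PACG", "PACS"]))  # → PACS
```

Grouping images per patient before voting means one mislabeled frame no longer flips the final classification, which is consistent with the accuracy gains reported for the case-based approach.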
Affiliation(s)
- Yadollah Eslami
- Glaucoma Service, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Zahra Farzinvash
- Glaucoma Service, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Mona Safizadeh
- Glaucoma Service, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Reza Zarei
- Glaucoma Service, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Ghasem Fakhraie
- Glaucoma Service, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Zakieh Vahedian
- Glaucoma Service, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Tahereh Mahmoudi
- Department of Medical Physics and Biomedical Engineering, School of Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
- Kaveh Fadakar
- Glaucoma Service, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Seyed Mehdi Tabatabaei
- Glaucoma Service, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
224
Lai ACK, Buchan JC, Chan JCH, Nolan W. Determinants of late presentation of glaucoma in Hong Kong. Eye (Lond) 2023; 37:1717-1724. [PMID: 36100709] [PMCID: PMC10219946] [DOI: 10.1038/s41433-022-02235-8]
Abstract
BACKGROUND Glaucoma is the commonest cause of irreversible blindness worldwide. As it is typically asymptomatic until advanced, the risk of blindness from late presentation is higher than in other eye diseases. This study aimed to investigate the risk factors for late presentation in primary glaucoma patients. METHODS We undertook a hospital-based case-control study of a random sample of glaucoma patients from a hospital in Hong Kong. Structured questionnaires and existing information from the electronic patient record were used, and the odds of presenting late were analysed by logistic regression. RESULTS Of 210 recruited participants, 83 (39.5%) presented with advanced glaucoma unilaterally or bilaterally. The mean age of participants was 61.1 ± 11.9 years, and 110 (52.4%) were male. Univariate analysis revealed that male sex and primary angle-closure glaucoma (PACG) were associated with 3.06 (95% CI: 1.71-5.48; P < 0.001) and 2.47 (95% CI: 1.11-5.49; P = 0.03) times higher odds of late presentation, respectively. Multivariate analysis revealed that late presenters were 3.54 (95% CI: 1.35-9.35; P = 0.01) times more likely to have PACG than primary open-angle glaucoma (POAG). Patients with elevated baseline intraocular pressure (IOP) also had 1.06 times higher odds of presenting with advanced glaucoma (95% CI: 1.02-1.11; P = 0.002). Linear regression revealed that PACG patients presented with 7.12 mmHg higher IOP than POAG patients (95% CI: 4.23-10.0; P < 0.001). CONCLUSION A high proportion of glaucoma patients present late in Hong Kong, with sex and type of glaucoma being significant determinants. Our study shows that PACG patients present with higher IOP and, along with males, are more likely to have advanced disease than POAG patients.
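For a univariate analysis of the kind reported above, the unadjusted odds ratio can be recovered directly from a 2×2 exposure-by-outcome table. A minimal sketch with hypothetical counts (not the study's data):

```python
def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Unadjusted odds ratio from a 2x2 table: (a*d) / (b*c).
    This is the univariate analogue of the logistic-regression
    estimates reported in the abstract."""
    return (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)

# Hypothetical counts: exposure = male sex, cases = late presenters.
print(f"OR: {odds_ratio(60, 50, 23, 77):.2f}")
```

A logistic regression with a single binary covariate reproduces exactly this ratio as exp(coefficient); the multivariate estimates in the abstract additionally adjust for the other covariates.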
Affiliation(s)
- Anakin Chu Kwan Lai
- International Centre for Eye Health, London School of Hygiene and Tropical Medicine, London, UK
- Department of Ophthalmology, Li Ka Shing Faculty of Medicine, University of Hong Kong, Hong Kong SAR, China
- John C Buchan
- International Centre for Eye Health, London School of Hygiene and Tropical Medicine, London, UK
- Jonathan Cheuk-Hung Chan
- Department of Ophthalmology, Li Ka Shing Faculty of Medicine, University of Hong Kong, Hong Kong SAR, China
- Winifred Nolan
- International Centre for Eye Health, London School of Hygiene and Tropical Medicine, London, UK
- NIHR Biomedical Research Centre, Moorfields and UCL Institute of Ophthalmology, London, UK
225
Coyner AS, Singh P, Brown JM, Ostmo S, Chan RP, Chiang MF, Kalpathy-Cramer J, Campbell JP. Association of Biomarker-Based Artificial Intelligence With Risk of Racial Bias in Retinal Images. JAMA Ophthalmol 2023; 141:543-552. [PMID: 37140902] [PMCID: PMC10160994] [DOI: 10.1001/jamaophthalmol.2023.1310]
Abstract
Importance Although race is a social construct, it is associated with variations in skin and retinal pigmentation. Image-based medical artificial intelligence (AI) algorithms that use images of these organs have the potential to learn features associated with self-reported race (SRR), which increases the risk of racially biased performance in diagnostic tasks; understanding whether this information can be removed without affecting the performance of AI algorithms is critical to reducing the risk of racial bias in medical AI. Objective To evaluate whether converting color fundus photographs to retinal vessel maps (RVMs) of infants screened for retinopathy of prematurity (ROP) removes the risk for racial bias. Design, Setting, and Participants The retinal fundus images (RFIs) of neonates with parent-reported Black or White race were collected for this study. A U-Net, a convolutional neural network (CNN) that provides precise segmentation for biomedical images, was used to segment the major arteries and veins in RFIs into grayscale RVMs, which were subsequently thresholded, binarized, and/or skeletonized. CNNs were trained with patients' SRR labels on color RFIs, raw RVMs, and thresholded, binarized, or skeletonized RVMs. Study data were analyzed from July 1 to September 28, 2021. Main Outcomes and Measures Area under the precision-recall curve (AUC-PR) and area under the receiver operating characteristic curve (AUROC), at both the image and eye level, for classification of SRR. Results A total of 4095 RFIs were collected from 245 neonates with parent-reported Black (94 [38.4%]; mean [SD] age, 27.2 [2.3] weeks; 55 majority sex [58.5%]) or White (151 [61.6%]; mean [SD] age, 27.6 [2.3] weeks; 80 majority sex [53.0%]) race. CNNs inferred SRR from RFIs nearly perfectly (image-level AUC-PR, 0.999; 95% CI, 0.999-1.000; infant-level AUC-PR, 1.000; 95% CI, 0.999-1.000). Raw RVMs were nearly as informative as color RFIs (image-level AUC-PR, 0.938; 95% CI, 0.926-0.950; infant-level AUC-PR, 0.995; 95% CI, 0.992-0.998). Ultimately, CNNs were able to learn whether RFIs or RVMs were from Black or White infants regardless of whether the images contained color, vessel segmentation brightness differences were nullified, or vessel segmentation widths were uniform. Conclusions and Relevance Results of this diagnostic study suggest that it can be very challenging to remove information relevant to SRR from fundus photographs. As a result, AI algorithms trained on fundus photographs have the potential for biased performance in practice, even if based on biomarkers rather than raw images. Regardless of the methodology used for training AI, evaluating performance in relevant subpopulations is critical.
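The two headline metrics above, AUC-PR and AUROC, can be computed from labels and classifier scores with scikit-learn. The labels and scores below are illustrative only, not the study's data:

```python
from sklearn.metrics import average_precision_score, roc_auc_score

# Illustrative image-level binary labels (1 = one SRR class) and
# hypothetical model scores; one negative outranks two positives.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_score = [0.9, 0.8, 0.7, 0.85, 0.4, 0.3, 0.95, 0.2]

# average_precision_score is sklearn's summary of the PR curve (AUC-PR).
print(f"AUC-PR: {average_precision_score(y_true, y_score):.4f}")
print(f"AUROC:  {roc_auc_score(y_true, y_score):.4f}")
```

AUC-PR is sensitive to class imbalance while AUROC is not, which is why the study reports both at the image and infant level.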
Affiliation(s)
- Aaron S. Coyner
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
- Praveer Singh
- Radiology, MGH/Harvard Medical School, Charlestown, Massachusetts
- MGH & BWH Center for Clinical Data Science, Boston, Massachusetts
- James M. Brown
- School of Computer Science, University of Lincoln, Lincoln, United Kingdom
- Susan Ostmo
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
- R.V. Paul Chan
- Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago
- Michael F. Chiang
- National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Jayashree Kalpathy-Cramer
- Radiology, MGH/Harvard Medical School, Charlestown, Massachusetts
- MGH & BWH Center for Clinical Data Science, Boston, Massachusetts
- J. Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
226
Krzywicki T, Brona P, Zbrzezny AM, Grzybowski AE. A Global Review of Publicly Available Datasets Containing Fundus Images: Characteristics, Barriers to Access, Usability, and Generalizability. J Clin Med 2023; 12:3587. [PMID: 37240693] [DOI: 10.3390/jcm12103587]
Abstract
This article provides a comprehensive and up-to-date overview of repositories that contain color fundus images. We analyzed them with regard to availability and legality, presented the datasets' characteristics, and identified labeled and unlabeled image sets. The aim of this study was to compile all publicly available color fundus image datasets into a central catalog.
Affiliation(s)
- Tomasz Krzywicki
- Faculty of Mathematics and Computer Science, University of Warmia and Mazury, 10-710 Olsztyn, Poland
- Piotr Brona
- Department of Ophthalmology, Poznan City Hospital, 61-285 Poznań, Poland
- Agnieszka M Zbrzezny
- Faculty of Mathematics and Computer Science, University of Warmia and Mazury, 10-710 Olsztyn, Poland
- Faculty of Design, SWPS University of Social Sciences and Humanities, Chodakowska 19/31, 03-815 Warsaw, Poland
- Andrzej E Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, 60-836 Poznań, Poland
227
Jin K, Gao Z, Jiang X, Wang Y, Ma X, Li Y, Ye J. MSHF: A Multi-Source Heterogeneous Fundus (MSHF) Dataset for Image Quality Assessment. Sci Data 2023; 10:286. [PMID: 37198230] [PMCID: PMC10192420] [DOI: 10.1038/s41597-023-02188-x]
Abstract
Image quality assessment (IQA) is significant for current techniques of image-based computer-aided diagnosis, and fundus imaging is the chief modality for screening and diagnosing ophthalmic diseases. However, most existing IQA datasets are single-center datasets, disregarding the type of imaging device, eye condition, and imaging environment. In this paper, we present a multi-source heterogeneous fundus (MSHF) dataset. The MSHF dataset consists of 1302 high-resolution normal and pathologic images from color fundus photography (CFP), images of healthy volunteers taken with a portable camera, and ultrawide-field (UWF) images of diabetic retinopathy patients. Dataset diversity was visualized with a spatial scatter plot. Image quality was determined by three ophthalmologists according to illumination, clarity, contrast, and overall quality. To the best of our knowledge, this is one of the largest fundus IQA datasets, and we believe this work will be beneficial to the construction of a standardized medical image database.
Affiliation(s)
- Kai Jin
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, 310009, China
- Zhiyuan Gao
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, 310009, China
- Xiaoyu Jiang
- College of Control Science and Engineering, Zhejiang University, Hangzhou, 310027, China
- Yaqi Wang
- College of Media, Communication University of Zhejiang, Hangzhou, 310018, China
- Xiaoyu Ma
- Institute of Intelligent Media, Communication University of Zhejiang, Hangzhou, 310018, China
- Yunxiang Li
- College of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, 310018, China
- Juan Ye
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, 310009, China
228
Feng H, Chen J, Zhang Z, Lou Y, Zhang S, Yang W. A bibliometric analysis of artificial intelligence applications in macular edema: exploring research hotspots and Frontiers. Front Cell Dev Biol 2023; 11:1174936. [PMID: 37255600] [PMCID: PMC10225517] [DOI: 10.3389/fcell.2023.1174936]
Abstract
Background: Artificial intelligence (AI) is used in ophthalmological disease screening and diagnostics, medical image diagnostics, and predicting late-disease progression rates. We reviewed all AI publications associated with macular edema (ME) research between 2011 and 2022 and performed modeling, quantitative, and qualitative investigations. Methods: On 1 February 2023, we screened the Web of Science Core Collection for AI applications related to ME, from which 297 studies (2011-2022) were identified and analyzed. We collected information on publications, institutions, country/region, keywords, journal name, references, and research hotspots. Literature clustering networks and frontier knowledge bases were investigated using the bibliometrix-BiblioShiny, VOSviewer, and CiteSpace bibliometric platforms. We used the R "bibliometrix" package to synopsize our observations, enumerate keywords, visualize collaboration networks between countries/regions, and generate a topic trends plot. VOSviewer was used to examine cooperation between institutions and identify citation relationships between journals. We used CiteSpace to identify clustering keywords over the timeline and the keywords with the strongest citation bursts. Results: In total, 47 countries published AI studies related to ME; the United States had the highest H-index and thus the greatest influence, and China and the United States cooperated most closely of all countries. In all, 613 institutions generated publications; the Medical University of Vienna had the highest number of studies, and this publication record and H-index made it the most influential institution in the ME field.
Reference clusters were categorized into 10 headings: retinal optical coherence tomography (OCT) fluid detection, convolutional network models, deep learning (DL)-based single-shot predictions, retinal vascular disease, diabetic retinopathy (DR), convolutional neural networks (CNNs), automated macular pathology diagnosis, dry age-related macular degeneration (DARMD), class weight, and advanced DL architecture systems. Frontier keywords were represented by diabetic macular edema (DME) (2021-2022). Conclusion: Our review of the AI-related ME literature was comprehensive, systematic, and objective, and it identified future trends and current hotspots. With increased DL outputs, the ME research focus has gradually shifted from manual ME examinations to automatic ME detection and associated symptoms. In this review, we present a comprehensive and dynamic overview of AI in ME and identify future research areas.
Affiliation(s)
- Haiwen Feng
- Department of Software Engineering, School of Software, Shenyang University of Technology, Shenyang, Liaoning, China
- Jiaqi Chen
- Department of Software Engineering, School of Software, Shenyang University of Technology, Shenyang, Liaoning, China
- Zhichang Zhang
- Department of Computer, School of Intelligent Medicine, China Medical University, Shenyang, Liaoning, China
- Yan Lou
- Department of Computer, School of Intelligent Medicine, China Medical University, Shenyang, Liaoning, China
- Shaochong Zhang
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Weihua Yang
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
229
Ong ZZ, Sadek Y, Liu X, Qureshi R, Liu SH, Li T, Sounderajah V, Ashrafian H, Ting DSW, Said DG, Mehta JS, Burton MJ, Dua HS, Ting DSJ. Diagnostic performance of deep learning in infectious keratitis: a systematic review and meta-analysis protocol. BMJ Open 2023; 13:e065537. [PMID: 37164459] [PMCID: PMC10173987] [DOI: 10.1136/bmjopen-2022-065537]
Abstract
INTRODUCTION Infectious keratitis (IK) represents the fifth-leading cause of blindness worldwide. A delay in diagnosis is often a major factor in progression to irreversible visual impairment and/or blindness from IK. The diagnostic challenge is further compounded by low microbiological culture yield, long turnaround time, poorly differentiated clinical features and polymicrobial infections. In recent years, deep learning (DL), a subfield of artificial intelligence, has rapidly emerged as a promising tool in assisting automated medical diagnosis, clinical triage and decision-making, and improving workflow efficiency in healthcare services. Recent studies have demonstrated the potential of using DL in assisting the diagnosis of IK, though the accuracy remains to be elucidated. This systematic review and meta-analysis aims to critically examine and compare the performance of various DL models with clinical experts and/or microbiological results (the current 'gold standard') in diagnosing IK, with an aim to inform practice on the clinical applicability and deployment of DL-assisted diagnostic models. METHODS AND ANALYSIS This review will consider studies that applied any DL model to diagnose patients with suspected IK, encompassing bacterial, fungal, protozoal and/or viral origins. We will search various electronic databases, including EMBASE and MEDLINE, and trial registries. There will be no restriction on language or publication date. Two independent reviewers will assess the titles, abstracts and full-text articles. Extracted data will include details of each primary study, including title, year of publication, authors, types of DL models used, populations, sample size, decision threshold and diagnostic performance. We will perform meta-analyses for the included primary studies when there are sufficient similarities in outcome reporting. ETHICS AND DISSEMINATION No ethical approval is required for this systematic review. We plan to disseminate our findings via presentation and publication in a peer-reviewed journal. PROSPERO REGISTRATION NUMBER CRD42022348596.
Affiliation(s)
- Zun Zheng Ong
- Department of Ophthalmology, Queen's Medical Centre, Nottingham, UK
- Youssef Sadek
- Department of Ophthalmology, Queen's Medical Centre, Nottingham, UK
- Xiaoxuan Liu
- Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK
- Riaz Qureshi
- Department of Ophthalmology, School of Medicine, University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA
- Su-Hsun Liu
- Department of Ophthalmology, School of Medicine, University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA
- Tianjing Li
- Department of Ophthalmology, School of Medicine, University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA
- Viknesh Sounderajah
- Institute of Global Health Innovation, Imperial College London, London, UK
- Department of Surgery & Cancer, Imperial College London, London, UK
- Hutan Ashrafian
- Institute of Global Health Innovation, Imperial College London, London, UK
- Department of Surgery & Cancer, Imperial College London, London, UK
- Daniel Shu Wei Ting
- Duke-NUS Medical School, National University of Singapore, Singapore
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Dalia G Said
- Department of Ophthalmology, Queen's Medical Centre, Nottingham, UK
- Academic Ophthalmology, School of Medicine, University of Nottingham, Nottingham, UK
- Research Institute of Ophthalmology, Cairo, Egypt
- Jodhbir S Mehta
- Duke-NUS Medical School, National University of Singapore, Singapore
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Matthew J Burton
- International Centre for Eye Health, London School of Hygiene and Tropical Medicine, London, UK
- National Institute for Health Research (NIHR) Biomedical Research Centre (BRC) for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Harminder Singh Dua
- Department of Ophthalmology, Queen's Medical Centre, Nottingham, UK
- Academic Ophthalmology, School of Medicine, University of Nottingham, Nottingham, UK
- Darren Shu Jeng Ting
- Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK
- Academic Ophthalmology, School of Medicine, University of Nottingham, Nottingham, UK
- Birmingham and Midland Eye Centre, Birmingham, UK
230
Wang B, Li L, Nakashima Y, Kawasaki R, Nagahara H. Real-time estimation of the remaining surgery duration for cataract surgery using deep convolutional neural networks and long short-term memory. BMC Med Inform Decis Mak 2023; 23:80. [PMID: 37143041] [PMCID: PMC10161556] [DOI: 10.1186/s12911-023-02160-0]
Abstract
PURPOSE Estimates of surgery length can support skill assessment, surgical training, and efficient use of surgical facilities, especially when produced in real time as the remaining surgery duration (RSD). Surgical length reflects a certain level of efficiency and mastery of the surgeon in a well-standardized surgery such as cataract surgery. In this paper, we design and develop a real-time RSD estimation method for cataract surgery that does not require manual labeling and is transferable with minimal fine-tuning. METHODS A regression method consisting of convolutional neural networks (CNNs) and long short-term memory (LSTM) was designed for RSD estimation. The model was first trained and evaluated for a single main surgeon with a large number of surgeries; a fine-tuning strategy was then used to transfer the model to the data of two other surgeons. Mean absolute error (MAE, in seconds) was used to evaluate the performance of the RSD estimation. The proposed method was compared with a naive method based on statistics of the historical data, and a transferability experiment was set up to demonstrate the generalizability of the method. RESULTS The mean surgical time for the sample videos was 318.7 s (standard deviation 83.4 s) for the main surgeon in the initial training. In our experiments, the lowest MAE of 19.4 s (about 6.4% of the mean surgical time) was achieved by our best-trained model on the independent test data of the main target surgeon, reducing the MAE by 35.5 s (10.2%) compared with the naive method. The fine-tuning strategy transferred the model trained for the main target surgeon to the data of the other surgeons with only a small amount of training data (20% of that used for pre-training). With the fine-tuned model, the MAEs for the other two surgeons were 28.3 s and 30.6 s, decreases of 8.1 s and 7.5 s compared with the per-surgeon models (an average decrease of 7.8 s, or 1.3% of video duration).
An external validation study on Cataract-101 outperformed 3 reported methods: TimeLSTM, RSDNet, and CataNet. CONCLUSION Building a pre-trained RSD estimation model on a single surgeon and then transferring it to other surgeons demonstrated both low prediction error and good transferability with a minimal number of fine-tuning videos.
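The evaluation above compares model MAE against a naive baseline built from historical statistics. A minimal sketch of that comparison (the remaining-duration values below are illustrative, not the study's data):

```python
def mae(predicted, actual):
    """Mean absolute error in seconds between predicted and true
    remaining surgery durations sampled at fixed time points."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# Illustrative remaining-duration targets (seconds) at sampled time points.
actual = [300, 240, 180, 120, 60]
model = [290, 250, 170, 125, 55]   # hypothetical CNN+LSTM outputs
# Naive baseline: assume the historical mean total length and count down.
naive = [320, 260, 200, 140, 80]

print(f"model MAE: {mae(model, actual):.1f} s")
print(f"naive MAE: {mae(naive, actual):.1f} s")
```

The gap between the two MAEs is the kind of improvement the abstract reports (35.5 s over the naive method for the main surgeon).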
Affiliation(s)
- Bowen Wang
- Institute for Datability Science (IDS), Osaka University, Suita, 565-0871, Japan
- Liangzhi Li
- Institute for Datability Science (IDS), Osaka University, Suita, 565-0871, Japan
- Yuta Nakashima
- Institute for Datability Science (IDS), Osaka University, Suita, 565-0871, Japan
- Ryo Kawasaki
- Artificial Intelligence Center for Medical Research and Application, Osaka University Hospital, Suita, 565-0871, Japan
- Department of Vision Informatics, Graduate School of Medicine, Osaka University, Suita, 565-0871, Japan
- Hajime Nagahara
- Institute for Datability Science (IDS), Osaka University, Suita, 565-0871, Japan
231
Yu R, Ye X, Wang X, Wu Q, Jia L, Dong K, Zhu Z, Bao Y, Hou X, Jia W. Serum cholinesterase is associated with incident diabetic retinopathy: the Shanghai Nicheng cohort study. Nutr Metab (Lond) 2023; 20:26. [PMID: 37138337] [PMCID: PMC10155425] [DOI: 10.1186/s12986-023-00743-2]
Abstract
BACKGROUND Serum cholinesterase (ChE) is positively associated with incident diabetes and dyslipidemia. We aimed to investigate the relationship between ChE and the incidence of diabetic retinopathy (DR). METHODS Based on a community-based cohort study followed for 4.6 years, 1133 participants with diabetes aged 55-70 years were analyzed. Fundus photographs were taken of each eye at both the baseline and follow-up investigations. The presence and severity of DR were categorized as no DR, mild non-proliferative DR (NPDR), or referable DR (moderate NPDR or worse). Binary and multinomial logistic regression models were used to estimate the risk ratio (RR) and 95% confidence interval (CI) for the association between ChE and DR. RESULTS Among the 1133 participants, 72 (6.4%) cases of DR occurred. Multivariable binary logistic regression showed that the highest tertile of ChE (≥ 422 U/L) was associated with a 2.01-fold higher risk of incident DR (RR 2.01, 95% CI 1.01-4.00; P for trend < 0.05) than the lowest tertile (< 354 U/L). Multivariable binary and multinomial logistic regression showed that the risk of DR increased by 41% (RR 1.41, 95% CI 1.05-1.90) per 1-SD increase in loge-transformed ChE, and that the risk of incident referable DR was almost 2-fold higher than that of no DR (RR 1.99, 95% CI 1.24-3.18). Furthermore, multiplicative interactions were found between ChE and older age (60 years and older; P for interaction = 0.003) and male sex (P for interaction = 0.044) for the risk of DR. CONCLUSIONS In this study, ChE was associated with the incidence of DR, especially referable DR, and is a potential biomarker for predicting incident DR.
Affiliation(s)
- Rong Yu
- Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Key Laboratory of Diabetes Mellitus, Shanghai Clinical Center for Diabetes, Shanghai Key Clinical Center for Metabolic Disease, Shanghai, China
| | - Xiaoqi Ye
- Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Key Laboratory of Diabetes Mellitus, Shanghai Clinical Center for Diabetes, Shanghai Key Clinical Center for Metabolic Disease, Shanghai, China
| | - Xiangning Wang
- Department of Ophthalmology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Qiang Wu
- Department of Ophthalmology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Lili Jia
- Department of Ophthalmology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Keqing Dong
- General Practitioner Teams in Community Health Service Center of Nicheng, Pudong New District, Shanghai, China
| | - Zhijun Zhu
- General Practitioner Teams in Community Health Service Center of Nicheng, Pudong New District, Shanghai, China
| | - Yuqian Bao
- Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Key Laboratory of Diabetes Mellitus, Shanghai Clinical Center for Diabetes, Shanghai Key Clinical Center for Metabolic Disease, Shanghai, China
| | - Xuhong Hou
- Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Key Laboratory of Diabetes Mellitus, Shanghai Clinical Center for Diabetes, Shanghai Key Clinical Center for Metabolic Disease, Shanghai, China.
| | - Weiping Jia
- Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Key Laboratory of Diabetes Mellitus, Shanghai Clinical Center for Diabetes, Shanghai Key Clinical Center for Metabolic Disease, Shanghai, China.
232
|
Kim C, Yang Z, Park SH, Hwang SH, Oh YW, Kang EY, Yong HS. Multicentre external validation of a commercial artificial intelligence software to analyse chest radiographs in health screening environments with low disease prevalence. Eur Radiol 2023; 33:3501-3509. [PMID: 36624227 DOI: 10.1007/s00330-022-09315-z] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Revised: 10/13/2022] [Accepted: 11/22/2022] [Indexed: 01/11/2023]
Abstract
OBJECTIVES To externally validate the performance of a commercial AI software program for interpreting CXRs in a large, consecutive, real-world cohort from primary healthcare centres. METHODS A total of 3047 CXRs were collected from two primary healthcare centres, characterised by low disease prevalence, between January and December 2018. All CXRs were labelled as normal or abnormal according to CT findings. Four radiology residents read all CXRs twice, with and without AI assistance. The performance of the AI and of the readers with and without AI assistance was measured in terms of area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity. RESULTS The prevalence of clinically significant lesions was 2.2% (68 of 3047). The AUROC, sensitivity, and specificity of the AI were 0.648 (95% confidence interval [CI] 0.630-0.665), 35.3% (CI 24.7-47.8), and 94.2% (CI 93.3-95.0), respectively. The AI detected 12 of 41 cases of pneumonia, 3 of 5 cases of tuberculosis, and 9 of 22 tumours. AI-undetected lesions tended to be smaller than true-positive lesions. The readers' AUROCs ranged from 0.534 to 0.676 without AI and from 0.571 to 0.688 with AI (all p values < 0.05). For all readers, the mean reading time was 2.96-10.27 s longer with AI assistance (all p values < 0.05). CONCLUSIONS The performance of the commercial AI in these high-volume, low-prevalence settings was poorer than expected, although it modestly boosted the performance of less-experienced readers. The technical prowess of AI demonstrated in experimental settings and approved by regulatory bodies may not directly translate to real-world practice, especially where the demand for AI assistance is highest. KEY POINTS
• This study shows the limited applicability of commercial AI software for detecting abnormalities in CXRs in a health screening population.
• When using AI software in a clinical setting that differs from the training setting, it is necessary to adjust the threshold or perform additional training with data that reflects that environment well.
• Prospective test accuracy studies, randomised controlled trials, or cohort studies are needed to examine AI software before it is implemented in real clinical practice.
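As a reminder of how the metrics reported above are computed, here is a minimal, dependency-light sketch. The labels, scores, and the 0.5 cut-off are illustrative, not the study's reader or AI outputs.

```python
import numpy as np

def auroc(y_true, scores):
    """AUROC via the rank-sum (Mann-Whitney U) identity; assumes untied scores."""
    y = np.asarray(y_true, dtype=bool)
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = y.sum(), (~y).sum()
    return (ranks[y].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def sens_spec(y_true, scores, threshold=0.5):
    """Sensitivity and specificity of the decision rule `score >= threshold`."""
    y = np.asarray(y_true, dtype=bool)
    pred = np.asarray(scores) >= threshold
    return (pred & y).sum() / y.sum(), (~pred & ~y).sum() / (~y).sum()

y, s = [0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]
print(auroc(y, s))      # 0.75
print(sens_spec(y, s))  # (0.5, 1.0)
```

Note that sensitivity and specificity depend on the chosen threshold, which is exactly why the key points above stress re-tuning the threshold when the deployment population differs from the training one.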
Affiliation(s)
- Cherry Kim
- Department of Radiology, Ansan Hospital, Korea University College of Medicine, 123, Jeokgeum-ro, Danwon-gu, Ansan-si, Gyeonggi, 15355, South Korea
| | - Zepa Yang
- Biomedical Research Center, Guro Hospital, Korea University College of Medicine, Seoul, 08308, South Korea
| | - Seong Ho Park
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, 05505, South Korea
| | - Sung Ho Hwang
- Department of Radiology, Anam Hospital, Korea University College of Medicine, Seoul, 02841, South Korea
| | - Yu-Whan Oh
- Department of Radiology, Anam Hospital, Korea University College of Medicine, Seoul, 02841, South Korea
| | - Eun-Young Kang
- Department of Radiology, Guro Hospital, Korea University College of Medicine, 33-41, Gurodong-ro 28-gil, Guro-gu, Seoul, 08308, South Korea
| | - Hwan Seok Yong
- Department of Radiology, Guro Hospital, Korea University College of Medicine, 33-41, Gurodong-ro 28-gil, Guro-gu, Seoul, 08308, South Korea.
233
|
Yeh TC, Chen SJ, Chou YB, Luo AC, Deng YS, Lee YH, Chang PH, Lin CJ, Tai MC, Chen YC, Ko YC. Predicting visual outcome after surgery in patients with idiopathic epiretinal membrane using a novel convolutional neural network. Retina 2023; 43:767-774. [PMID: 36727822 DOI: 10.1097/iae.0000000000003714] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023]
Abstract
PURPOSE To develop a deep convolutional neural network that enables the prediction of postoperative visual outcomes after epiretinal membrane surgery based on preoperative optical coherence tomography images and clinical parameters to refine surgical decision making. METHODS A total of 529 patients with idiopathic epiretinal membrane who underwent standard vitrectomy with epiretinal membrane peeling surgery by two surgeons between January 1, 2014, and June 1, 2020, were enrolled. The newly developed Heterogeneous Data Fusion Net was introduced to predict postoperative visual acuity outcomes (improvement ≥2 lines in Snellen chart) 12 months after surgery based on preoperative cross-sectional optical coherence tomography images and clinical factors, including age, sex, and preoperative visual acuity. The predictive accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve of the convolutional neural network model were evaluated. RESULTS The developed model demonstrated an overall accuracy for visual outcome prediction of 88.68% (95% CI, 79.0%-95.7%) with an area under the receiver operating characteristic curve of 97.8% (95% CI, 86.8%-98.0%), sensitivity of 87.0% (95% CI, 67.9%-95.5%), specificity of 92.9% (95% CI, 77.4%-98.0%), precision of 0.909, recall of 0.870, and F1 score of 0.889. The heatmaps identified the critical area for prediction as the ellipsoid zone of photoreceptors and the superficial retina, which was subjected to tangential traction of the proliferative membrane. CONCLUSION The novel Heterogeneous Data Fusion Net demonstrated high accuracy in the automated prediction of visual outcomes after weighing and leveraging multiple clinical parameters, including optical coherence tomography images. This approach may be helpful in establishing personalized therapeutic strategies for epiretinal membrane management.
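The F1 score reported above follows directly from the reported precision and recall (it is their harmonic mean), which makes it easy to sanity-check:

```python
def f1_score(precision, recall):
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

# Using the values reported in the abstract (precision 0.909, recall 0.870):
print(round(f1_score(0.909, 0.870), 3))  # 0.889
```

The result matches the reported F1 of 0.889, confirming the three figures are internally consistent.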
Affiliation(s)
- Tsai-Chu Yeh
- Department of Ophthalmology, Taipei Veterans General Hospital, Taipei City, Taiwan
- Faculty of Medicine, National Yang Ming Chiao Tung University, Taipei City, Taiwan
| | - Shih-Jen Chen
- Department of Ophthalmology, Taipei Veterans General Hospital, Taipei City, Taiwan
- Faculty of Medicine, National Yang Ming Chiao Tung University, Taipei City, Taiwan
| | - Yu-Bai Chou
- Department of Ophthalmology, Taipei Veterans General Hospital, Taipei City, Taiwan
- Faculty of Medicine, National Yang Ming Chiao Tung University, Taipei City, Taiwan
| | - An-Chun Luo
- Industrial Technology Research Institute, Taipei City, Taiwan
| | - Yu-Shan Deng
- Industrial Technology Research Institute, Taipei City, Taiwan
| | - Yu-Hsien Lee
- Industrial Technology Research Institute, Taipei City, Taiwan
| | - Po-Han Chang
- Industrial Technology Research Institute, Taipei City, Taiwan
| | - Chun-Ju Lin
- Industrial Technology Research Institute, Taipei City, Taiwan
| | - Ming-Chi Tai
- Industrial Technology Research Institute, Taipei City, Taiwan
- Department of Materials Science and Engineering, National Tsing-Hua University, Taipei City, Taiwan
| | - Ying-Chi Chen
- Division of Computer Science and Engineering, University of Michigan, Ann Arbor, Michigan
| | - Yu-Chieh Ko
- Department of Ophthalmology, Taipei Veterans General Hospital, Taipei City, Taiwan
- Faculty of Medicine, National Yang Ming Chiao Tung University, Taipei City, Taiwan
234
|
Senthil Kumar K, Miskovic V, Blasiak A, Sundar R, Pedrocchi ALG, Pearson AT, Prelaj A, Ho D. Artificial Intelligence in Clinical Oncology: From Data to Digital Pathology and Treatment. Am Soc Clin Oncol Educ Book 2023; 43:e390084. [PMID: 37235822 DOI: 10.1200/edbk_390084] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
Recently, a wide spectrum of artificial intelligence (AI)-based applications in the broader categories of digital pathology, biomarker development, and treatment has been explored. In digital pathology, these have included novel analytical strategies for deriving new information from standard histology to guide treatment selection, and biomarker development to predict treatment selection and response. In therapeutics, they have included AI-driven drug target discovery, drug design and repurposing, combination regimen optimization, modulated dosing, and beyond. Given the continued advances that are emerging, it is important to develop workflows that seamlessly combine the various segments of AI innovation to comprehensively augment the diagnostic and interventional arsenal of the clinical oncology community. To overcome the challenges that remain in the ideation, validation, and deployment of AI in clinical oncology, recommendations for bringing this workflow to fruition are also provided from clinical, engineering, implementation, and health care economics perspectives. Ultimately, this work proposes frameworks that can integrate these domains toward the sustainable adoption of practice-changing AI by the clinical oncology community to drive improved patient outcomes.
Affiliation(s)
- Kirthika Senthil Kumar
- The Institute for Digital Medicine (WisDM), Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- The N.1 Institute for Health (N.1), National University of Singapore, Singapore
- Department of Biomedical Engineering, College of Design and Engineering, National University of Singapore, Singapore
| | - Vanja Miskovic
- Department of Electronics, Informatics, and Bioengineering, Politecnico di Milano, Milan, Italy
- Department of Medical Oncology, Fondazione IRCCS Istituto Nazionale dei Tumori, Milan, Italy
| | - Agata Blasiak
- The Institute for Digital Medicine (WisDM), Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- The N.1 Institute for Health (N.1), National University of Singapore, Singapore
- Department of Biomedical Engineering, College of Design and Engineering, National University of Singapore, Singapore
- Department of Pharmacology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
| | - Raghav Sundar
- The Institute for Digital Medicine (WisDM), Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- The N.1 Institute for Health (N.1), National University of Singapore, Singapore
- Department of Haematology-Oncology, National University Cancer Institute, National University Hospital
- Department of Medicine, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Singapore Gastric Cancer Consortium, Singapore
- NUS Centre for Cancer Research (N2CR), National University of Singapore, Singapore
| | | | - Alexander T Pearson
- Section of Hematology/Oncology, Department of Medicine, University of Chicago, Chicago, IL
- University of Chicago Comprehensive Cancer Center, Chicago, IL
| | - Arsela Prelaj
- Department of Electronics, Informatics, and Bioengineering, Politecnico di Milano, Milan, Italy
- Department of Medical Oncology, Fondazione IRCCS Istituto Nazionale dei Tumori, Milan, Italy
| | - Dean Ho
- The Institute for Digital Medicine (WisDM), Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- The N.1 Institute for Health (N.1), National University of Singapore, Singapore
- Department of Biomedical Engineering, College of Design and Engineering, National University of Singapore, Singapore
- Department of Pharmacology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
235
|
Cheng CT, Lin HS, Hsu CP, Chen HW, Huang JF, Fu CY, Hsieh CH, Yeh CN, Chung IF, Liao CH. The three-dimensional weakly supervised deep learning algorithm for traumatic splenic injury detection and sequential localization: an experimental study. Int J Surg 2023; 109:1115-1124. [PMID: 36999810 PMCID: PMC10389597 DOI: 10.1097/js9.0000000000000380] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Accepted: 03/23/2023] [Indexed: 04/01/2023]
Abstract
BACKGROUND Splenic injury is the most common solid visceral injury in blunt abdominal trauma, and high-resolution abdominal computed tomography (CT) can adequately detect it. However, these potentially lethal injuries are sometimes overlooked in current practice. Deep learning (DL) algorithms have proven their capability to detect abnormal findings in medical images. The aim of this study was to develop a three-dimensional, weakly supervised DL algorithm for detecting splenic injury on abdominal CT using a sequential localization and classification approach. MATERIAL AND METHODS The dataset was collected in a tertiary trauma center from 600 patients who underwent abdominal CT between 2008 and 2018, half of whom had splenic injuries. The images were split into development and test datasets at a 4:1 ratio. A two-step DL algorithm, comprising localization and classification models, was constructed to identify splenic injury. Model performance was evaluated using the area under the receiver operating characteristic curve (AUROC), accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). Grad-CAM (Gradient-weighted Class Activation Mapping) heatmaps from the test set were visually assessed. To validate the algorithm, we also collected images from another hospital to serve as external validation data. RESULTS A total of 480 patients, 50% of whom had splenic injuries, were included in the development dataset, and the rest were included in the test dataset. All patients underwent contrast-enhanced abdominal CT in the emergency room. The automatic two-step EfficientNet model detected splenic injury with an AUROC of 0.901 (95% CI: 0.836-0.953). At the maximum Youden index, the accuracy, sensitivity, specificity, PPV, and NPV were 0.88, 0.81, 0.92, 0.91, and 0.83, respectively. The heatmap identified 96.3% of splenic injury sites in true-positive cases.
The algorithm achieved a sensitivity of 0.92 for detecting trauma in the external validation cohort, with an acceptable accuracy of 0.80. CONCLUSIONS The DL model can identify splenic injury on CT, and further application in trauma scenarios is possible.
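The "maximum Youden index" operating point mentioned above is the score threshold maximising J = sensitivity + specificity − 1. A minimal sketch of that selection (the labels and scores are illustrative, not the study's model outputs):

```python
import numpy as np

def youden_threshold(y_true, scores):
    """Return the cut-point maximising Youden's J = sensitivity + specificity - 1."""
    y = np.asarray(y_true, dtype=bool)
    best_j, best_t = -1.0, None
    for t in np.unique(scores):           # every observed score is a candidate cut
        pred = scores >= t
        sens = (pred & y).sum() / y.sum()
        spec = (~pred & ~y).sum() / (~y).sum()
        if sens + spec - 1 > best_j:
            best_j, best_t = sens + spec - 1, t
    return best_t, best_j

t, j = youden_threshold([0, 0, 1, 1], np.array([0.1, 0.4, 0.35, 0.8]))
print(t, j)  # 0.35 0.5
```

Reporting accuracy, PPV, and NPV at this single point, as the abstract does, characterises one threshold on the ROC curve rather than the curve as a whole.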
Affiliation(s)
- Chi-Tung Cheng
- Department of Trauma and Emergency Surgery
- Chang Gung University, Taoyuan
| | - Hou-Shian Lin
- Department of Trauma and Emergency Surgery
- Chang Gung University, Taoyuan
| | - Chih-Po Hsu
- Department of Trauma and Emergency Surgery
- Chang Gung University, Taoyuan
| | - Huan-Wu Chen
- Department of Medical Imaging and Intervention
- Chang Gung University, Taoyuan
| | - Jen-Fu Huang
- Department of Trauma and Emergency Surgery
- Chang Gung University, Taoyuan
| | - Chih-Yuan Fu
- Department of Trauma and Emergency Surgery
- Chang Gung University, Taoyuan
| | - Chi-Hsun Hsieh
- Department of Trauma and Emergency Surgery
- Chang Gung University, Taoyuan
| | - Chun-Nan Yeh
- Department of General Surgery
- Chang Gung University, Taoyuan
| | - I-Fang Chung
- Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taipei, Taiwan, Republic of China
| | - Chien-Hung Liao
- Department of Trauma and Emergency Surgery
- Center for Artificial Intelligence in Medicine, Chang Gung Memorial Hospital, Linkou
- Chang Gung University, Taoyuan
236
|
Teo ZL, Kwee A, Lim JC, Lam CS, Ho D, Maurer-Stroh S, Su Y, Chesterman S, Chen T, Tan CC, Wong TY, Ngiam KY, Tan CH, Soon D, Choong ML, Chua R, Wong S, Lim C, Cheong WY, Ting DS. Artificial intelligence innovation in healthcare: Relevance of reporting guidelines for clinical translation from bench to bedside. Ann Acad Med Singap 2023; 52:199-212. [PMID: 38904533 DOI: 10.47102/annals-acadmedsg.2022452] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/22/2024]
Abstract
Artificial intelligence (AI) and digital innovation are transforming healthcare. Technologies such as machine learning in image analysis, natural language processing in medical chatbots, and electronic medical record extraction have the potential to improve screening, diagnostics, and prognostication, leading to precision medicine and preventive health. However, it is crucial to ensure that AI research is conducted with scientific rigour to facilitate clinical implementation. Reporting guidelines have therefore been developed to standardise and streamline the development and validation of AI technologies in health. This commentary proposes a structured approach to using these reporting guidelines to move promising AI techniques from research and development into clinical practice and, eventually, widespread implementation from bench to bedside.
Affiliation(s)
- Zhen Ling Teo
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
| | - Ann Kwee
- Department of Endocrinology, Singapore General Hospital, Singapore
| | - John Cw Lim
- Centre of Regulatory Excellence, Duke-NUS Medical School, National University of Singapore, Singapore
| | - Carolyn Sp Lam
- Department of Cardiology, National Heart Centre Singapore, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
| | - Dean Ho
- Department of Biomedical Engineering, Institute of Digital Medicine, N.1 Institute of Health and Department of Pharmacology, National University of Singapore, Singapore
| | - Sebastian Maurer-Stroh
- Bioinformatics Institute and Infectious Diseases Labs, Agency for Science, Technology and Research, Singapore
- Yong Loo Lin School of Medicine and Department of Biological Sciences, National University of Singapore, Singapore
| | - Yi Su
- Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore
| | - Simon Chesterman
- Faculty of Law, National University of Singapore, Singapore
- AI Singapore, Singapore
| | - Tsuhan Chen
- AI Singapore, Singapore
- School of Computing, National University of Singapore, Singapore
| | - Chorh Chuan Tan
- Chief Health Scientist Office, Ministry of Health, Singapore
| | - Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Tsinghua Medicine, Tsinghua University, Beijing, China
| | - Kee Yuan Ngiam
- Group Technology Office, National University Health System, Singapore
| | - Cher Heng Tan
- Centre for Health Innovation, National Healthcare Group, Singapore
| | - Danny Soon
- Consortium for Clinical Research and Innovation, Singapore, Singapore
| | | | - Raymond Chua
- Director of Medical Services Office (Health Regulation Group), Ministry of Health, Singapore
| | - Sutowo Wong
- Data Analytics, Ministry of Health, Singapore
| | - Colin Lim
- Technology, Ministry of Health, Singapore
| | | | - Daniel Sw Ting
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Artificial Intelligence Office, Singapore Health Services, Singapore
237
|
Alam MN, Yamashita R, Ramesh V, Prabhune T, Lim JI, Chan RVP, Hallak J, Leng T, Rubin D. Contrastive learning-based pretraining improves representation and transferability of diabetic retinopathy classification models. Sci Rep 2023; 13:6047. [PMID: 37055475 PMCID: PMC10102012 DOI: 10.1038/s41598-023-33365-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2022] [Accepted: 04/12/2023] [Indexed: 04/15/2023] Open
Abstract
Diabetic retinopathy (DR) is a major cause of vision impairment in diabetic patients worldwide. Given its prevalence, early clinical diagnosis is essential to improve the treatment and management of DR patients. Despite recent demonstrations of successful machine learning (ML) models for automated DR detection, there is a significant clinical need for robust models that can be trained with smaller datasets and still perform with high diagnostic accuracy on independent clinical datasets (i.e., high model generalizability). To address this need, we developed a self-supervised contrastive learning (CL) based pipeline for classification of referable vs non-referable DR. Self-supervised CL-based pretraining yields enhanced data representations and therefore enables the development of robust and generalizable deep learning (DL) models, even with small labeled datasets. We integrated a neural style transfer (NST) augmentation into the CL pipeline to produce models with better representations and initializations for the detection of DR in color fundus images. We compare our CL pretrained model's performance with two state-of-the-art baseline models pretrained with ImageNet weights. We further investigate model performance with reduced labeled training data (down to 10 percent) to test robustness when training with small labeled datasets. The model was trained and validated on the EyePACS dataset and tested independently on clinical datasets from the University of Illinois Chicago (UIC). Compared to the baseline models, our CL pretrained FundusNet model had a higher area under the receiver operating characteristic (ROC) curve (AUC): 0.91 (95% CI 0.898 to 0.930) vs 0.80 (0.783 to 0.820) and 0.83 (0.801 to 0.853) on UIC data. With 10 percent labeled training data, the FundusNet AUC was 0.81 (0.78 to 0.84) vs 0.58 (0.56 to 0.64) and 0.63 (0.60 to 0.66) for the baseline models when tested on the UIC dataset.
CL-based pretraining with NST significantly improves DL classification performance, helps the model generalize well (transferring from EyePACS to UIC data), and allows training with small annotated datasets, thereby reducing the clinicians' ground-truth annotation burden.
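Contrastive pretraining of the kind described above is typically driven by an InfoNCE/NT-Xent objective, as popularised by SimCLR. The paper's FundusNet code is not reproduced here; the following is a generic numpy sketch of that loss under standard assumptions (paired rows are two augmented views of the same image):

```python
import numpy as np

def nt_xent_loss(z1, z2, tau=0.5):
    """NT-Xent (SimCLR-style) contrastive loss for paired embedding batches:
    row i of z1 and row i of z2 are two augmented views of the same image."""
    z = np.concatenate([z1, z2])                    # (2N, d) stacked views
    z /= np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalise rows
    sim = z @ z.T / tau                             # temperature-scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                  # a view is not its own pair
    n = len(z1)
    pos = np.r_[np.arange(n, 2 * n), np.arange(n)]  # index of each row's positive
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
anchors = rng.normal(size=(8, 16))
aligned = nt_xent_loss(anchors, anchors + 0.01 * rng.normal(size=(8, 16)))
random = nt_xent_loss(anchors, rng.normal(size=(8, 16)))
print(aligned < random)  # well-aligned view pairs incur a lower loss
```

Minimising this loss pulls the two views of each image together and pushes all other images away, which is what produces the transferable representations the abstract reports.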
Affiliation(s)
- Minhaj Nur Alam
- Department of Biomedical Data Science, Stanford University School of Medicine, 1265 Welch Road, Stanford, CA, 94305, USA.
- Department of Electrical and Computer Engineering, University of North Carolina at Charlotte, 9201 University City Boulevard, Charlotte, NC, 28223, USA.
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, 60612, USA.
| | - Rikiya Yamashita
- Department of Biomedical Data Science, Stanford University School of Medicine, 1265 Welch Road, Stanford, CA, 94305, USA
| | - Vignav Ramesh
- Department of Biomedical Data Science, Stanford University School of Medicine, 1265 Welch Road, Stanford, CA, 94305, USA
| | - Tejas Prabhune
- Department of Biomedical Data Science, Stanford University School of Medicine, 1265 Welch Road, Stanford, CA, 94305, USA
| | - Jennifer I Lim
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, 60612, USA
| | - R V P Chan
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, 60612, USA
| | - Joelle Hallak
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, 60612, USA
| | - Theodore Leng
- Department of Ophthalmology, Stanford University School of Medicine, Stanford, CA, 94305, USA
| | - Daniel Rubin
- Department of Biomedical Data Science, Stanford University School of Medicine, 1265 Welch Road, Stanford, CA, 94305, USA
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA
238
|
Xiao H, Tang J, Zhang F, Liu L, Zhou J, Chen M, Li M, Wu X, Nie Y, Duan J. Global trends and performances in diabetic retinopathy studies: A bibliometric analysis. Front Public Health 2023; 11:1128008. [PMID: 37124794 PMCID: PMC10136779 DOI: 10.3389/fpubh.2023.1128008] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2022] [Accepted: 03/09/2023] [Indexed: 05/02/2023] Open
Abstract
Objective The objective of this study is to conduct a comprehensive bibliometric analysis to identify and evaluate global trends in diabetic retinopathy (DR) research and to visualize the focus and frontiers of this field. Methods Diabetic retinopathy-related publications from the establishment of the Web of Science (WOS) through 1 November 2022 were retrieved for qualitative and quantitative analyses. This study analyzed annual publication counts, prolific countries, institutions, journals, and the top 10 most cited articles, with the findings presented through descriptive statistics. VOSviewer 1.6.17 was used to display high-frequency keywords and national cooperation networks, while CiteSpace 5.5.R2 displayed the timeline and burst keywords for each term. Results A total of 10,709 references were analyzed, and the number of publications continuously increased over the investigated period. The United States had the highest h-index and citation frequency and was therefore the most influential country, while China was the most prolific, producing 3,168 articles. The University of London had the highest productivity. The top three most productive journals were American, with Investigative Ophthalmology & Visual Science having the highest number of publications. The article by Gulshan et al. (2016; 2,897 co-citations) served as the representative and symbolic reference. The main research topics in this area were incidence, pathogenesis, treatment, and artificial intelligence (AI); deep learning, models, biomarkers, and optical coherence tomography angiography (OCTA) in DR were frontier hotspots. Conclusion The bibliometric analysis in this study provides valuable insight into global trends and research frontiers in DR.
As the incidence of DR continues to increase, DR prevention and treatment have become a pressing public health concern and a significant area of research interest. In addition, the development of AI technologies and telemedicine has emerged as a promising research frontier for addressing the imbalance between the numbers of doctors and patients.
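The h-index used above to rank countries is simple to compute from a list of citation counts; the numbers below are toy values, not the study's data:

```python
def h_index(citations):
    """Largest h such that at least h items each have h or more citations."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

print(h_index([10, 8, 5, 4, 3]))  # 4  (four papers with >= 4 citations each)
print(h_index([0, 0, 1]))         # 1
```

Because the h-index combines volume and impact, a country can publish the most articles (as China does here) while another holds the higher h-index (as the United States does).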
Affiliation(s)
- Huan Xiao
- School of Ophthalmology, Chengdu University of Traditional Chinese Medicine, Chengdu, China
| | - Jinfan Tang
- School of Acupuncture-Moxibustion and Tuina, Chengdu University of Traditional Chinese Medicine, Chengdu, China
| | - Feng Zhang
- School of Acupuncture-Moxibustion and Tuina, Beijing University of Chinese Medicine, Beijing, China
| | - Luping Liu
- School of Ophthalmology, Chengdu University of Traditional Chinese Medicine, Chengdu, China
| | - Jing Zhou
- School of Ophthalmology, Chengdu University of Traditional Chinese Medicine, Chengdu, China
| | - Meiqi Chen
- School of Ophthalmology, Chengdu University of Traditional Chinese Medicine, Chengdu, China
| | - Mengyue Li
- School of Ophthalmology, Chengdu University of Traditional Chinese Medicine, Chengdu, China
| | - Xiaoxiao Wu
- School of Ophthalmology, Chengdu University of Traditional Chinese Medicine, Chengdu, China
| | - Yingying Nie
- School of Ophthalmology, Chengdu University of Traditional Chinese Medicine, Chengdu, China
| | - Junguo Duan
- School of Ophthalmology, Chengdu University of Traditional Chinese Medicine, Chengdu, China
239
|
Son J, Shin JY, Kong ST, Park J, Kwon G, Kim HD, Park KH, Jung KH, Park SJ. An interpretable and interactive deep learning algorithm for a clinically applicable retinal fundus diagnosis system by modelling finding-disease relationship. Sci Rep 2023; 13:5934. [PMID: 37045856 PMCID: PMC10097752 DOI: 10.1038/s41598-023-32518-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2023] [Accepted: 03/28/2023] [Indexed: 04/14/2023] Open
Abstract
The identification of abnormal findings manifested in retinal fundus images and the diagnosis of ophthalmic diseases are essential to the management of potentially vision-threatening eye conditions. Recently, deep learning-based computer-aided diagnosis (CAD) systems have demonstrated their potential to reduce reading time and discrepancy amongst readers. However, the obscure reasoning of deep neural networks (DNNs) has been a leading cause of reluctance toward their clinical use in CAD systems. Here, we present a novel architectural and algorithmic design of DNNs to comprehensively identify 15 abnormal retinal findings and diagnose 8 major ophthalmic diseases from macula-centered fundus images with accuracy comparable to that of experts. We then define a notion of counterfactual attribution ratio (CAR), which illuminates the system's diagnostic reasoning by representing how much each abnormal finding contributed to its diagnostic prediction. Using CAR, we show that both quantitative and qualitative interpretation and interactive adjustment of the CAD result can be achieved. A comparison of the model's CAR with experts' finding-disease diagnosis correlations confirms that the proposed model identifies the relationships between findings and diseases in much the way ophthalmologists do.
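The counterfactual flavour of CAR can be illustrated, very loosely, with a toy logistic "disease head" over binary finding indicators: compare the predicted disease odds with a finding present versus counterfactually absent. The weights and finding names below are invented, and the paper defines CAR over its own DNN, so treat this only as intuition:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical logistic head; findings: [haemorrhage, drusen, hard exudate].
w, b = np.array([2.0, 0.5, 1.2]), -3.0
findings = np.array([1.0, 0.0, 1.0])

p_with = sigmoid(w @ findings + b)
absent = findings.copy()
absent[0] = 0.0                               # counterfactually remove finding 0
p_without = sigmoid(w @ absent + b)

odds = lambda p: p / (1 - p)
attribution = odds(p_with) / odds(p_without)  # equals exp(w[0]) for a logistic head
print(round(attribution, 2))                  # 7.39
```

For a simple logistic head the ratio reduces to exp of the finding's weight; a DNN has no such closed form, which is why the paper computes the attribution counterfactually from the model's own predictions.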
Affiliation(s)
- Joo Young Shin
- Department of Ophthalmology, Seoul Metropolitan Government Seoul National University Boramae Medical Center, Seoul, Republic of Korea
- Hoon Dong Kim
- Department of Ophthalmology, College of Medicine, Soonchunhyang University, Cheonan, Republic of Korea
- Kyu Hyung Park
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, Gyeonggi-do, 13620, Republic of Korea
- Kyu-Hwan Jung
- Department of Medical Device Research and Management, Samsung Advanced Institute for Health Sciences and Technology, Sungkyunkwan University, 81 Irwon-ro, Gangnam-gu, Seoul, Republic of Korea.
- Sang Jun Park
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, Gyeonggi-do, 13620, Republic of Korea.
240
Shimizu E, Ishikawa T, Tanji M, Agata N, Nakayama S, Nakahara Y, Yokoiwa R, Sato S, Hanyuda A, Ogawa Y, Hirayama M, Tsubota K, Sato Y, Shimazaki J, Negishi K. Artificial intelligence to estimate the tear film breakup time and diagnose dry eye disease. Sci Rep 2023; 13:5822. [PMID: 37037877 PMCID: PMC10085985 DOI: 10.1038/s41598-023-33021-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2022] [Accepted: 04/06/2023] [Indexed: 04/12/2023] Open
Abstract
The use of artificial intelligence (AI) in the diagnosis of dry eye disease (DED) remains limited due to the lack of standardized image formats and analysis models. To overcome these issues, we used the Smart Eye Camera (SEC), a video-recordable slit-lamp device, and collected videos of the anterior segment of the eye. This study aimed to evaluate the accuracy of the AI algorithm in estimating the tear film breakup time and apply this model for the diagnosis of DED according to the Asia Dry Eye Society (ADES) DED diagnostic criteria. Using retrospectively collected DED videos of 158 eyes from 79 patients, 22,172 frames were annotated by a DED specialist to label whether or not the frame showed breakup. The AI algorithm was developed using the training dataset and machine learning. The ADES DED criteria were used to determine the diagnostic performance. The accuracy of tear film breakup time estimation was 0.789 (95% confidence interval (CI) 0.769-0.809), and the area under the receiver operating characteristic curve of this AI model was 0.877 (95% CI 0.861-0.893). The sensitivity and specificity of this AI model for the diagnosis of DED were 0.778 (95% CI 0.572-0.912) and 0.857 (95% CI 0.564-0.866), respectively. We successfully developed a novel AI-based diagnostic model for DED. Our diagnostic model has the potential to enable ophthalmology examinations outside hospitals and clinics.
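The frame-level annotation scheme described above lends itself to a simple readout: with per-frame breakup labels and a known frame rate, breakup time is the time to the first positive frame. A minimal sketch (the function name and the "first labelled frame" simplification are ours, not the paper's):

```python
def tear_film_breakup_time(frame_labels, fps):
    """Time (s) from the first frame to the first frame labelled as breakup;
    None if no breakup occurs in the clip (a deliberate simplification)."""
    for i, has_breakup in enumerate(frame_labels):
        if has_breakup:
            return i / fps
    return None

# 30 fps clip in which breakup first appears at frame 90
labels = [0] * 90 + [1] * 30
print(tear_film_breakup_time(labels, fps=30))  # 3.0
```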
Affiliation(s)
- Eisuke Shimizu
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan.
- OUI Inc., DF Building 510, 2-2-8 Minami-Aoyama, Minato-ku, Tokyo, 107-0062, Japan.
- Yokohama Keiai Eye Clinic, Courtley House 2F, 1-11-17 Wada, Hodogaya-ku, Kanagawa, 240-0065, Japan.
- Toshiki Ishikawa
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- OUI Inc., DF Building 510, 2-2-8 Minami-Aoyama, Minato-ku, Tokyo, 107-0062, Japan
- Makoto Tanji
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- OUI Inc., DF Building 510, 2-2-8 Minami-Aoyama, Minato-ku, Tokyo, 107-0062, Japan
- Naomichi Agata
- OUI Inc., DF Building 510, 2-2-8 Minami-Aoyama, Minato-ku, Tokyo, 107-0062, Japan
- Shintaro Nakayama
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- OUI Inc., DF Building 510, 2-2-8 Minami-Aoyama, Minato-ku, Tokyo, 107-0062, Japan
- Yo Nakahara
- OUI Inc., DF Building 510, 2-2-8 Minami-Aoyama, Minato-ku, Tokyo, 107-0062, Japan
- Ryota Yokoiwa
- OUI Inc., DF Building 510, 2-2-8 Minami-Aoyama, Minato-ku, Tokyo, 107-0062, Japan
- Shinri Sato
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- Yokohama Keiai Eye Clinic, Courtley House 2F, 1-11-17 Wada, Hodogaya-ku, Kanagawa, 240-0065, Japan
- Akiko Hanyuda
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- Yoko Ogawa
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- Masatoshi Hirayama
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- Kazuo Tsubota
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- Yasunori Sato
- Department of Preventive Medicine and Public Health, School of Medicine, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- Jun Shimazaki
- Department of Ophthalmology, Tokyo Dental College Ichikawa General Hospital, 5-11-13 Sugano, Ichikawa-shi, Chiba, 272-8513, Japan
- Kazuno Negishi
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
241
Soh ZD, Cheng CY. Application of big data in ophthalmology. Taiwan J Ophthalmol 2023; 13:123-132. [PMID: 37484625 PMCID: PMC10361443 DOI: 10.4103/tjo.tjo-d-23-00012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2023] [Accepted: 04/02/2023] [Indexed: 07/25/2023] Open
Abstract
The advent of information technology has led to the creation of ever-larger datasets. Also known as big data, these large datasets are characterized by their volume, variety, velocity, veracity, and value. More importantly, big data has the potential to expand traditional research capabilities, inform clinical practice based on real-world data, and improve the health system and service delivery. This review first identifies the different sources of big data in ophthalmology, including electronic medical records, data registries, research consortia, administrative databases, and biobanks. We then provide an in-depth look at how big data analytics have been applied in ophthalmology for disease surveillance and for the evaluation of disease associations, detection, management, and prognostication. Finally, we discuss the challenges involved in big data analytics, such as data suitability and quality, data security, and analytical methodologies.
Affiliation(s)
- Zhi Da Soh
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Ching-Yu Cheng
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
242
Arnould L, Meriaudeau F, Guenancia C, Germanese C, Delcourt C, Kawasaki R, Cheung CY, Creuzot-Garcher C, Grzybowski A. Using Artificial Intelligence to Analyse the Retinal Vascular Network: The Future of Cardiovascular Risk Assessment Based on Oculomics? A Narrative Review. Ophthalmol Ther 2023; 12:657-674. [PMID: 36562928 PMCID: PMC10011267 DOI: 10.1007/s40123-022-00641-5] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2022] [Accepted: 12/09/2022] [Indexed: 12/24/2022] Open
Abstract
The healthcare burden of cardiovascular diseases remains a major issue worldwide. Understanding the underlying mechanisms and improving the identification of people with a higher-risk profile for systemic vascular disease through noninvasive examinations is crucial. In ophthalmology, retinal vascular network imaging is simple and noninvasive and can provide in vivo information about microvascular structure and health. For more than 10 years, different research teams have been developing software to automatically analyze the retinal vascular network from different imaging techniques (retinal fundus photographs, OCT angiography, adaptive optics, etc.) and to describe the geometric characteristics of its arterial and venous components. The structure of the retinal vessels can thus be considered a marker of systemic vascular status. A new approach called "oculomics", which pairs retinal image datasets with artificial intelligence algorithms, has recently increased interest in retinal microvascular biomarkers. Despite the large volume of associated research, the role of retinal biomarkers in the screening, monitoring, or prediction of systemic vascular disease remains uncertain. A PubMed search conducted up to August 2022 yielded relevant peer-reviewed articles based on a set of inclusion criteria. This literature review is intended to summarize the state of the art in oculomics and cardiovascular disease research.
Affiliation(s)
- Louis Arnould
- Ophthalmology Department, Dijon University Hospital, 14 Rue Paul Gaffarel, 21079, Dijon CEDEX, France; University of Bordeaux, Inserm, Bordeaux Population Health Research Center, UMR U1219, 33000, Bordeaux, France.
- Fabrice Meriaudeau
- Laboratory ImViA, IFTIM, Université Bourgogne Franche-Comté, 21078, Dijon, France
- Charles Guenancia
- Pathophysiology and Epidemiology of Cerebro-Cardiovascular Diseases (EA 7460), Faculty of Health Sciences, Université de Bourgogne Franche-Comté, Dijon, France; Cardiology Department, Dijon University Hospital, Dijon, France
- Clément Germanese
- Ophthalmology Department, Dijon University Hospital, 14 Rue Paul Gaffarel, 21079, Dijon CEDEX, France
- Cécile Delcourt
- University of Bordeaux, Inserm, Bordeaux Population Health Research Center, UMR U1219, 33000, Bordeaux, France
- Ryo Kawasaki
- Artificial Intelligence Center for Medical Research and Application, Osaka University Hospital, Osaka, Japan
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Catherine Creuzot-Garcher
- Ophthalmology Department, Dijon University Hospital, 14 Rue Paul Gaffarel, 21079, Dijon CEDEX, France; Centre des Sciences du Goût et de l'Alimentation, AgroSup Dijon, CNRS, INRAE, Université Bourgogne Franche-Comté, Dijon, France
- Andrzej Grzybowski
- Department of Ophthalmology, University of Warmia and Mazury, Olsztyn, Poland; Institute for Research in Ophthalmology, Poznan, Poland
243
Liu R, Wang T, Li H, Zhang P, Li J, Yang X, Shen D, Sheng B. TMM-Nets: Transferred Multi- to Mono-Modal Generation for Lupus Retinopathy Diagnosis. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:1083-1094. [PMID: 36409801 DOI: 10.1109/tmi.2022.3223683] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Rare diseases, which are severely underrepresented in basic and clinical research, can particularly benefit from machine learning techniques. However, current learning-based approaches usually focus on either mono-modal image data or matched multi-modal data, whereas the diagnosis of rare diseases necessitates the aggregation of unstructured and unmatched multi-modal image data due to their rare and diverse nature. In this study, we therefore propose diagnosis-guided multi-to-mono modal generation networks (TMM-Nets) along with training and testing procedures. TMM-Nets can transfer data from multiple sources to a single modality for diagnostic data structurization. To demonstrate their potential in the context of rare diseases, TMM-Nets were deployed to diagnose lupus retinopathy (LR-SLE), leveraging unmatched regular and ultra-wide-field fundus images for transfer learning. The TMM-Nets encoded transfer learning from diabetic retinopathy to LR-SLE based on the similarity of the fundus lesions. In addition, a lesion-aware multi-scale attention mechanism was developed for clinical alerts, enabling TMM-Nets not only to inform patient care but also to provide insights consistent with those of clinicians. An adversarial strategy was also developed to refine multi-to-mono modal image generation based on diagnostic results and the data distribution to enhance the data augmentation performance. Compared to the baseline model, the TMM-Nets showed 35.19% and 33.56% F1 score improvements on the test and external validation sets, respectively. In addition, the TMM-Nets can be used to develop diagnostic models for other rare diseases.
244
Poly TN, Islam MM, Walther BA, Lin MC, Jack Li YC. Artificial intelligence in diabetic retinopathy: Bibliometric analysis. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 231:107358. [PMID: 36731310 DOI: 10.1016/j.cmpb.2023.107358] [Citation(s) in RCA: 14] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/01/2022] [Revised: 01/08/2023] [Accepted: 01/16/2023] [Indexed: 06/18/2023]
Abstract
BACKGROUND The use of artificial intelligence in diabetic retinopathy has become a popular research focus in the past decade. However, no scientometric report has provided a systematic overview of this scientific area. AIMS We utilized a bibliometric approach to identify and analyse the academic literature on artificial intelligence in diabetic retinopathy and explore emerging research trends, key authors, co-authorship networks, institutions, countries, and journals. We further captured the diabetic retinopathy conditions and technologies commonly studied within this area. METHODS Web of Science was used to collect relevant articles on artificial intelligence use in diabetic retinopathy published between January 1, 2012, and December 31, 2022. All the retrieved titles were screened for eligibility, with one criterion being that they be in English. All the bibliographic information was extracted and used to perform a descriptive analysis. Bibliometrix (R tool) and VOSviewer (Leiden University) were used to construct and visualize the annual numbers of publications, journals, authors, countries, institutions, collaboration networks, keywords, and references. RESULTS In total, 931 articles that met the criteria were collected. The number of annual publications showed an increasing trend over the last ten years. Investigative Ophthalmology & Visual Science (58/931), IEEE Access (54/931), and Computers in Biology and Medicine (23/931) were the journals with the most publications. China (211/931), India (143/931), USA (133/931), and South Korea (44/931) were the most productive countries of origin. The National University of Singapore (40/931), Singapore Eye Research Institute (35/931), and Johns Hopkins University (34/931) were the most productive institutions. Ting D. (34/931), Wong T. (28/931), and Tan G. (17/931) were the most productive researchers.
CONCLUSION This study summarizes the recent advances in artificial intelligence technology on diabetic retinopathy research and sheds light on the emerging trends, sources, leading institutions, and hot topics through bibliometric analysis and network visualization. Although this field has already shown great potential in health care, our findings will provide valuable clues relevant to future research directions and clinical practice.
Affiliation(s)
- Tahmina Nasrin Poly
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan; International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei 110, Taiwan; Research Center of Big Data and Meta-Analysis, Wan Fang Hospital, Taipei Medical University, Taipei 116, Taiwan
- Md Mohaimenul Islam
- International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei 110, Taiwan; AESOP Technology, Songshan District, Taipei 105, Taiwan
- Bruno Andreas Walther
- Alfred-Wegener-Institut Helmholtz-Zentrum für Polar- und Meeresforschung, Am Handelshafen 12, Bremerhaven D-27570, Germany
- Ming Chin Lin
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan; Department of Neurosurgery, Shuang Ho Hospital, Taipei Medical University, New Taipei City 235041, Taiwan; Taipei Neuroscience Institute, Taipei Medical University, Taipei 110301, Taiwan
- Yu-Chuan Jack Li
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan; International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei 110, Taiwan; Research Center of Big Data and Meta-Analysis, Wan Fang Hospital, Taipei Medical University, Taipei 116, Taiwan; AESOP Technology, Songshan District, Taipei 105, Taiwan.
245
Randhawa J, Chiang M, Porporato N, Pardeshi AA, Dredge J, Apolo Aroca G, Tun TA, Quah JH, Tan M, Higashita R, Aung T, Varma R, Xu BY. Generalisability and performance of an OCT-based deep learning classifier for community-based and hospital-based detection of gonioscopic angle closure. Br J Ophthalmol 2023; 107:511-517. [PMID: 34670749 PMCID: PMC9018872 DOI: 10.1136/bjophthalmol-2021-319470] [Citation(s) in RCA: 12] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2021] [Accepted: 10/02/2021] [Indexed: 11/04/2022]
Abstract
PURPOSE To assess the generalisability and performance of a deep learning classifier for automated detection of gonioscopic angle closure in anterior segment optical coherence tomography (AS-OCT) images. METHODS A convolutional neural network (CNN) model developed using data from the Chinese American Eye Study (CHES) was used to detect gonioscopic angle closure in AS-OCT images with reference gonioscopy grades provided by trained ophthalmologists. Independent test data were derived from the population-based CHES, a community-based clinic in Singapore, and a hospital-based clinic at the University of Southern California (USC). Classifier performance was evaluated with receiver operating characteristic curve and area under the receiver operating characteristic curve (AUC) metrics. Interexaminer agreement between the classifier and two human examiners at USC was calculated using Cohen's kappa coefficients. RESULTS The classifier was tested using 640 images (311 open and 329 closed) from 127 Chinese Americans, 10 165 images (9595 open and 570 closed) from 1318 predominantly Chinese Singaporeans and 300 images (234 open and 66 closed) from 40 multiethnic USC patients. The classifier achieved similar performance in the CHES (AUC=0.917), Singapore (AUC=0.894) and USC (AUC=0.922) cohorts. Standardising the distribution of gonioscopy grades across cohorts produced similar AUC metrics (range 0.890-0.932). The agreement between the CNN classifier and two human examiners (κ=0.700 and 0.704) approximated interexaminer agreement (κ=0.693) in the USC cohort. CONCLUSION An OCT-based deep learning classifier demonstrated consistent performance detecting gonioscopic angle closure across three independent patient populations. This automated method could aid ophthalmologists in the assessment of angle status in diverse patient populations.
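The interexaminer agreement reported above uses Cohen's kappa, i.e. chance-corrected agreement between two raters. A minimal sketch with made-up gonioscopy grades (1 = closed, 0 = open; the toy data are illustrative only):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of each category's marginal frequencies
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy grades from two examiners over ten eyes
a = [1, 1, 0, 0, 0, 1, 0, 0, 1, 0]
b = [1, 0, 0, 0, 0, 1, 0, 1, 1, 0]
print(round(cohens_kappa(a, b), 3))  # 0.583
```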
Affiliation(s)
- Jasmeen Randhawa
- Keck School of Medicine of the University of Southern California, Los Angeles, California, USA
- Michael Chiang
- Roski Eye Institute, Keck School of Medicine of the University of Southern California, Los Angeles, California, USA
- Natalia Porporato
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Anmol A Pardeshi
- Roski Eye Institute, Keck School of Medicine of the University of Southern California, Los Angeles, California, USA
- Justin Dredge
- Roski Eye Institute, Keck School of Medicine of the University of Southern California, Los Angeles, California, USA
- Galo Apolo Aroca
- Roski Eye Institute, Keck School of Medicine of the University of Southern California, Los Angeles, California, USA
- Tin A Tun
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Marcus Tan
- Ophthalmology, National University of Singapore, Singapore
- Tin Aung
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Ophthalmology, National University of Singapore, Singapore
- Rohit Varma
- Southern California Eye Institute, CHA Hollywood Presbyterian Medical Center, Los Angeles, California, USA
- Benjamin Y Xu
- Keck School of Medicine of the University of Southern California, Los Angeles, California, USA
246
Chan YK, Cheng CY, Sabanayagam C. Eyes as the windows into cardiovascular disease in the era of big data. Taiwan J Ophthalmol 2023; 13:151-167. [PMID: 37484607 PMCID: PMC10361436 DOI: 10.4103/tjo.tjo-d-23-00018] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2023] [Accepted: 04/11/2023] [Indexed: 07/25/2023] Open
Abstract
Cardiovascular disease (CVD) is a major cause of mortality and morbidity worldwide and imposes significant socioeconomic burdens, especially with late diagnoses. There is growing evidence of strong correlations between ocular images, which are information-dense, and CVD progression. The accelerating development of deep learning algorithms (DLAs) is a promising avenue for research into CVD biomarker discovery, early CVD diagnosis, and CVD prognostication. We review a selection of 17 recent DLAs in the less-explored realm of DL applied to ocular images to produce CVD outcomes, the potential challenges to their clinical deployment, and the path forward. The evidence for CVD manifestations in ocular images is well documented. Most of the reviewed DLAs analyze retinal fundus photographs to predict CV risk factors, in particular hypertension. DLAs can predict age, sex, smoking status, alcohol status, body mass index, mortality, myocardial infarction, stroke, chronic kidney disease, and hematological disease with significant accuracy. While the cardio-oculomics intersection is now burgeoning, much remains to be explored. The increasing availability of big data, computational power, technological literacy, and acceptance all prime this subfield for rapid growth. We pinpoint specific areas of improvement toward ubiquitous clinical deployment: increased generalizability, external validation, and universal benchmarking. DLAs capable of predicting CVD outcomes from ocular inputs hold great promise for individualized precision medicine and more efficient health care delivery; initial results are impactful, although real-world efficacy remains to be determined.
Affiliation(s)
- Yarn Kit Chan
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
- Ching-Yu Cheng
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Center for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Charumathi Sabanayagam
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
247
Lou YS, Lin CS, Fang WH, Lee CC, Lin C. Extensive deep learning model to enhance electrocardiogram application via latent cardiovascular feature extraction from identity identification. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 231:107359. [PMID: 36738606 DOI: 10.1016/j.cmpb.2023.107359] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/21/2022] [Revised: 12/22/2022] [Accepted: 01/17/2023] [Indexed: 06/18/2023]
Abstract
BACKGROUND AND OBJECTIVE Deep learning models (DLMs) have been successfully applied in biomedicine primarily using supervised learning with large, annotated databases. However, scarce training resources limit the potential of DLMs for electrocardiogram (ECG) analysis. METHODS We have developed a novel pre-training strategy for unsupervised identity identification with an area under the receiver operating characteristic curve (AUC) >0.98. Accordingly, a DLM pre-trained with identity identification can be applied to 70 patient characteristic predictions using transfer learning (TL). These ECG-based patient characteristics were then used for cardiovascular disease (CVD) risk prediction. The DLMs were trained using 507,729 ECGs from 222,473 patients and validated using two independent validation sets (n = 27,824/31,925). RESULTS The DLMs using our method exhibited better performance than directly trained DLMs. Additionally, our DLM performed better than those of previous studies in terms of gender (AUC [internal/external] = 0.982/0.968), age (correlation = 0.886/0.892), low ejection fraction (AUC = 0.942/0.951), and critical markers not addressed previously, including high B-type natriuretic peptide (AUC = 0.921/0.899). Additionally, approximately 50% of the ECG-based characteristics provided significantly more prediction information for cardiovascular risk than real characteristics. CONCLUSIONS This is the first study to use identity identification as a pre-training task for TL in ECG analysis. An extensive exploration of the relationship between ECG and 70 patient characteristics was conducted. Our DLM-enhanced ECG interpretation system extensively advanced ECG-related patient characteristic prediction and mortality risk management for cardiovascular diseases.
Affiliation(s)
- Yu-Sheng Lou
- Graduate Institutes of Life Sciences, National Defense Medical Center, Taipei, Taiwan; School of Public Health, National Defense Medical Center, Taipei, Taiwan
- Chin-Sheng Lin
- Division of Cardiology, Department of Internal Medicine, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan, R.O.C.; Medical Technology Education Center, School of Medicine, National Defense Medical Center, Taipei, Taiwan, R.O.C.
- Wen-Hui Fang
- Department of Family and Community Medicine, Department of Internal Medicine, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan, R.O.C.
- Chia-Cheng Lee
- Department of Medical Informatics, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan, R.O.C.; Division of Colorectal Surgery, Department of Surgery, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan, R.O.C.
- Chin Lin
- Graduate Institutes of Life Sciences, National Defense Medical Center, Taipei, Taiwan; School of Public Health, National Defense Medical Center, Taipei, Taiwan; Medical Technology Education Center, School of Medicine, National Defense Medical Center, Taipei, Taiwan, R.O.C.; School of Medicine, National Defense Medical Center, Taipei, Taiwan, R.O.C.
248
Jayaraman P, Crouse A, Nadkarni G, Might M. A Primer in Precision Nephrology: Optimizing Outcomes in Kidney Health and Disease through Data-Driven Medicine. KIDNEY360 2023; 4:e544-e554. [PMID: 36951457 PMCID: PMC10278804 DOI: 10.34067/kid.0000000000000089] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/18/2022] [Accepted: 01/04/2023] [Indexed: 03/24/2023]
Abstract
This year marks the 63rd anniversary of the International Society of Nephrology, which signaled nephrology's emergence as a modern medical discipline. In this article, we briefly trace the course of nephrology's history to show a clear arc in its evolution, one of increasing resolution in nephrological data, an arc that is converging with computational capabilities to enable precision nephrology. In general, precision medicine refers to tailoring treatment to the individual characteristics of patients. For an operational definition, this tailoring takes the form of an optimization, in which treatments are selected to maximize a patient's expected health with respect to all available data. Because modern health data are large and high resolution, this optimization process requires computational intervention, and it must be tuned to the contours of specific medical disciplines. An advantage of this operational definition of precision medicine is that it allows us to better understand what precision medicine means in the context of a specific medical discipline. The goal of this article is to demonstrate how to instantiate this definition of precision medicine for the field of nephrology. Correspondingly, the goal of precision nephrology is to answer two related questions: (1) How do we optimize kidney health with respect to all available data? and (2) How do we optimize general health with respect to kidney data?
Affiliation(s)
- Pushkala Jayaraman
- The Charles Bronfman Institute for Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, New York
- Andrew Crouse
- Hugh Kaul Precision Medicine Institute, University of Alabama at Birmingham, Birmingham, Alabama
- Girish Nadkarni
- The Charles Bronfman Institute for Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, New York
- The Mount Sinai Clinical Intelligence Center (MSCIC), Icahn School of Medicine at Mount Sinai, New York, New York
- Division of Data Driven and Digital Medicine, Department of Medicine, Icahn School of Medicine at Mount Sinai, New York, New York
- Barbara T Murphy Division of Nephrology, Department of Medicine, Icahn School of Medicine at Mount Sinai, New York, New York
- Matthew Might
- Department of Medicine, University of Alabama at Birmingham, Birmingham, Alabama
- Department of Computer Science, University of Alabama at Birmingham, Birmingham, Alabama
249
Zhu C, Zhu J, Wang L, Xiong S, Zou Y, Huang J, Xie H, Zhang W, Wu H, Liu Y. Development and validation of a risk prediction model for diabetic retinopathy in type 2 diabetic patients. Sci Rep 2023; 13:5034. [PMID: 36977687 PMCID: PMC10049996 DOI: 10.1038/s41598-023-31463-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2022] [Accepted: 03/13/2023] [Indexed: 03/30/2023] Open
Abstract
To establish a risk prediction model and make individualized assessments for the population susceptible to diabetic retinopathy (DR) among type 2 diabetes mellitus (T2DM) patients. According to the retrieval strategy and the inclusion and exclusion criteria, the relevant meta-analyses on DR risk factors were searched and evaluated. The pooled odds ratio (OR) or relative risk (RR) of each risk factor was obtained and converted into β coefficients using a logistic regression (LR) model. In addition, an electronic patient-reported outcome questionnaire was developed, and 60 DR and non-DR T2DM patients were surveyed to validate the developed model. A receiver operating characteristic (ROC) curve was drawn to verify the prediction accuracy of the model. After retrieval, eight meta-analyses with a total of 15,654 cases and 12 risk factors associated with the onset of DR in T2DM (weight loss surgery, myopia, lipid-lowering drugs, intensive glucose control, course of T2DM, glycated hemoglobin (HbA1c), fasting plasma glucose, hypertension, gender, insulin treatment, residence, and smoking) were included for LR modeling. The β coefficients of these factors in the constructed model were: bariatric surgery (−0.942), myopia (−0.357), lipid-lowering drug follow-up < 3 y (−0.994), lipid-lowering drug follow-up > 3 y (−0.223), course of T2DM (0.174), HbA1c (0.372), fasting plasma glucose (0.223), insulin therapy (0.688), rural residence (0.199), smoking (−0.083), hypertension (0.405), male sex (0.548), and intensive glycemic control (−0.400), with constant term α (−0.949). The area under the receiver operating characteristic curve (AUC) of the model in external validation was 0.912. An application was presented as an example of use. In conclusion, a risk prediction model for DR was developed, which makes individualized assessment of the susceptible DR population feasible; it needs to be further verified in large-sample applications.
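The reported β coefficients and constant term define a standard logistic model. A minimal sketch of how such a score would be evaluated (the coefficient values are taken from the abstract; the 0/1 indicator coding of each variable and the example patient are assumptions, since the abstract does not specify variable units):

```python
import math

# β coefficients and constant term α as reported in the abstract
# (DR risk model in T2DM). Variable coding assumed binary for illustration.
BETA = {
    "bariatric_surgery": -0.942,
    "myopia": -0.357,
    "lipid_lowering_lt3y": -0.994,
    "lipid_lowering_gt3y": -0.223,
    "t2dm_course": 0.174,
    "hba1c": 0.372,
    "fasting_glucose": 0.223,
    "insulin_therapy": 0.688,
    "rural_residence": 0.199,
    "smoking": -0.083,
    "hypertension": 0.405,
    "male": 0.548,
    "intensive_glycemic_control": -0.400,
}
ALPHA = -0.949  # constant term

def dr_risk(features: dict) -> float:
    # Logistic regression: p = 1 / (1 + exp(-(alpha + sum(beta_i * x_i)))).
    z = ALPHA + sum(BETA[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical patient: hypertensive male on insulin therapy.
print(round(dr_risk({"male": 1, "hypertension": 1, "insulin_therapy": 1}), 3))
```

Omitted features contribute nothing to the linear predictor here, matching a 0-coded indicator.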
|
250
|
Chen Y, Sun Q, Li Z, Zhong Y, Zeng J, Nie T. Development and validation of a deep learning model using convolutional neural networks to identify femoral internal fixation device in radiographs. Skeletal Radiol 2023:10.1007/s00256-023-04324-5. [PMID: 36964792 DOI: 10.1007/s00256-023-04324-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/02/2023] [Revised: 03/13/2023] [Accepted: 03/13/2023] [Indexed: 03/26/2023]
Abstract
OBJECTIVE The purpose of this study was to develop and validate a deep convolutional neural network (DCNN) model to automatically identify the manufacturer and model of hip internal fixation devices from anteroposterior (AP) radiographs. MATERIALS AND METHODS In this retrospective study, 1721 hip AP radiographs covering six internal fixation devices from 1012 patients were collected from an orthopedic center between June 2014 and June 2022 to establish a classification network. The images were divided into a training set (1106 images), a validation set (272 images), and a test set (343 images). Model efficacy was evaluated on the test set: the overall TOP-1 accuracy and the precision, sensitivity, specificity, and F1 score for each device model were calculated, and receiver operating characteristic (ROC) curves were plotted to evaluate model performance. Gradient-weighted class activation mapping (Grad-CAM) images were used to determine the image features most important for DCNN decisions. RESULTS A total of 1378 (80%) images were used for model development, and model efficacy was validated on a test set of 343 (20%) images. The overall TOP-1 accuracy was 98.5%. The area under the receiver operating characteristic curve (AUC) values for the six internal fixation device models were 1.000, 1.000, 0.980, 1.000, 0.999, and 1.000, respectively. Gradient-weighted class activation mapping highlighted the unique design features of each internal fixation device. CONCLUSION We developed a deep convolutional neural network model that can identify the manufacturer and model of hip internal fixation devices from hip AP radiographs.
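The per-class precision, sensitivity, specificity, and F1 scores reported above are all derived from one-vs-rest confusion-matrix counts. A minimal sketch of that computation for a single device class (the counts are illustrative, not the study's data; only the 343-image test-set size comes from the abstract):

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    # One-vs-rest metrics for a single class from confusion-matrix counts.
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)      # recall / true positive rate
    specificity = tn / (tn + fp)      # true negative rate
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"precision": precision, "sensitivity": sensitivity,
            "specificity": specificity, "f1": f1}

# Illustrative counts for one fixation-device class on a 343-image test set
# (55 + 1 + 285 + 2 = 343).
m = classification_metrics(tp=55, fp=1, tn=285, fn=2)
print({name: round(value, 3) for name, value in m.items()})
```

Averaging these per-class metrics (or micro-averaging the counts) yields the overall figures a multi-class classifier reports.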
Affiliation(s)
- Yanzhen Chen
- Department of Orthopedics, The First Affiliated Hospital of Nanchang University, Nanchang, China
- Qian Sun
- Software Engineering Institute, East China Normal University, Shanghai, China
- Zhipeng Li
- Department of Orthopedics, The First Affiliated Hospital of Nanchang University, Nanchang, China
- Yuanwu Zhong
- Department of Orthopedics, The First Affiliated Hospital of Nanchang University, Nanchang, China
- Junfeng Zeng
- Department of Orthopedics, The First Affiliated Hospital of Nanchang University, Nanchang, China
- Tao Nie
- Department of Orthopedics, The First Affiliated Hospital of Nanchang University, Nanchang, China
|