251
Artificial Intelligence in Biological Sciences. Life (Basel) 2022; 12:life12091430. [PMID: 36143468] [PMCID: PMC9505413] [DOI: 10.3390/life12091430] [Received: 08/03/2022] [Revised: 08/25/2022] [Accepted: 09/10/2022]
Abstract
Artificial intelligence (AI), currently a cutting-edge concept, has the potential to improve the quality of life of human beings. The fields of AI and biological research are becoming increasingly intertwined, and methods for extracting and applying the information stored in living organisms are constantly being refined. As the field of AI matures with more trained algorithms, the potential of its application in epidemiology, the study of host–pathogen interactions, and drug design widens. AI is now being applied in several fields of drug discovery, customized medicine, gene editing, radiography, image processing, and medication management. More precise diagnosis and cost-effective treatment will be possible in the near future due to the application of AI-based technologies. In the field of agriculture, advanced AI-based approaches have helped farmers reduce waste, increase output, and shorten the time it takes to bring goods to market. Moreover, with the use of AI through machine learning (ML) and deep-learning-based smart programs, one can modify the metabolic pathways of living systems to obtain the best possible outputs with minimal inputs. Such efforts can improve the industrial strains of microbial species to maximize yield in bio-based industrial setups. This article summarizes the potential of AI and its applications in several fields of biology, such as medicine, agriculture, and bio-based industry.
252
Volovici V, Syn NL, Ercole A, Zhao JJ, Liu N. Steps to avoid overuse and misuse of machine learning in clinical research. Nat Med 2022; 28:1996-1999. [PMID: 36097217] [DOI: 10.1038/s41591-022-01961-6]
Affiliation(s)
- Victor Volovici
- Department of Neurosurgery, Erasmus MC University Medical Center, Rotterdam, the Netherlands.
- Nicholas L Syn
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore; Department of Surgery, National University Hospital, National University Health System, Singapore, Singapore
- Ari Ercole
- Cambridge Centre for AI in Medicine, University of Cambridge, Cambridge, UK
- Joseph J Zhao
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Nan Liu
- Programme in Health Services and Systems Research, Duke-NUS Medical School, Singapore, Singapore
253
González-Gonzalo C, Thee EF, Klaver CCW, Lee AY, Schlingemann RO, Tufail A, Verbraak F, Sánchez CI. Trustworthy AI: Closing the gap between development and integration of AI systems in ophthalmic practice. Prog Retin Eye Res 2022; 90:101034. [PMID: 34902546] [DOI: 10.1016/j.preteyeres.2021.101034] [Received: 09/09/2021] [Revised: 12/03/2021] [Accepted: 12/06/2021]
Abstract
An increasing number of artificial intelligence (AI) systems are being proposed in ophthalmology, motivated by the variety and amount of clinical and imaging data, as well as their potential benefits at the different stages of patient care. Despite achieving performance close or even superior to that of experts, there is a critical gap between the development and the integration of AI systems in ophthalmic practice. This work focuses on the importance of trustworthy AI to close that gap. We identify the main aspects or challenges that need to be considered along the AI design pipeline so as to generate systems that meet the requirements to be deemed trustworthy, including those concerning accuracy, resiliency, reliability, safety, and accountability. We elaborate on mechanisms and considerations to address those aspects or challenges, and define the roles and responsibilities of the different stakeholders involved in AI for ophthalmic care, i.e., AI developers, reading centers, healthcare providers, healthcare institutions, ophthalmological societies and working groups or committees, patients, regulatory bodies, and payers. Generating trustworthy AI is not the responsibility of a single stakeholder. There is a pressing need for a collaborative approach in which the different stakeholders are represented along the AI design pipeline, from the definition of the intended use to post-market surveillance after regulatory approval. This work contributes to establishing such multi-stakeholder interaction and the main action points to be taken so that the potential benefits of AI reach real-world ophthalmic settings.
Affiliation(s)
- Cristina González-Gonzalo
- Eye Lab, qurAI Group, Informatics Institute, University of Amsterdam, Amsterdam, the Netherlands; Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands.
- Eric F Thee
- Department of Ophthalmology, Erasmus Medical Center, Rotterdam, the Netherlands; Department of Epidemiology, Erasmus Medical Center, Rotterdam, the Netherlands
- Caroline C W Klaver
- Department of Ophthalmology, Erasmus Medical Center, Rotterdam, the Netherlands; Department of Epidemiology, Erasmus Medical Center, Rotterdam, the Netherlands; Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands; Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Aaron Y Lee
- Department of Ophthalmology, School of Medicine, University of Washington, Seattle, WA, USA
- Reinier O Schlingemann
- Department of Ophthalmology, Amsterdam University Medical Center, Amsterdam, the Netherlands; Department of Ophthalmology, University of Lausanne, Jules Gonin Eye Hospital, Fondation Asile des Aveugles, Lausanne, Switzerland
- Adnan Tufail
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom; Institute of Ophthalmology, University College London, London, United Kingdom
- Frank Verbraak
- Department of Ophthalmology, Amsterdam University Medical Center, Amsterdam, the Netherlands
- Clara I Sánchez
- Eye Lab, qurAI Group, Informatics Institute, University of Amsterdam, Amsterdam, the Netherlands; Department of Biomedical Engineering and Physics, Amsterdam University Medical Center, Amsterdam, the Netherlands
254
Lim JI, Regillo CD, Sadda SR, Ipp E, Bhaskaranand M, Ramachandra C, Solanki K. Artificial Intelligence Detection of Diabetic Retinopathy: Subgroup Comparison of the EyeArt System with Ophthalmologists' Dilated Exams. Ophthalmol Sci 2022; 3:100228. [PMID: 36345378] [PMCID: PMC9636573] [DOI: 10.1016/j.xops.2022.100228] [Received: 01/03/2022] [Revised: 08/30/2022] [Accepted: 09/22/2022]
Abstract
Objective To compare general ophthalmologists, retina specialists, and the EyeArt Artificial Intelligence (AI) system against the clinical reference standard for detecting more than mild diabetic retinopathy (mtmDR). Design Prospective, pivotal, multicenter trial conducted from April 2017 to May 2018. Participants Participants were aged ≥ 18 years, had diabetes mellitus, and underwent dilated ophthalmoscopy. A total of 521 of 893 participants met these criteria and completed the study protocol. Testing Participants underwent 2-field fundus photography (macula centered, disc centered) for the EyeArt system, dilated ophthalmoscopy, and 4-widefield stereoscopic dilated fundus photography for reference standard grading. Main Outcome Measures For mtmDR detection, sensitivity and specificity of EyeArt gradings of 2-field fundus photographs and of ophthalmoscopy gradings versus a rigorous clinical reference standard comprising Reading Center grading of 4-widefield stereoscopic dilated fundus photographs using the ETDRS severity scale. The AI system provided automatic eye-level results for mtmDR. Results Overall, 521 participants (999 eyes) at 10 centers underwent dilated ophthalmoscopy, 406 by nonretina specialists and 115 by retina specialists. The Reading Center graded 207 eyes positive and 792 eyes negative for mtmDR. Of these 999 eyes, 26 were ungradable by the EyeArt system, leaving 973 eyes with both EyeArt and Reading Center gradings. Retina specialists correctly identified 22 of 37 eyes as positive (sensitivity 59.5%) and 182 of 184 eyes as negative (specificity 98.9%) for mtmDR, versus the EyeArt AI system, which identified 36 of 37 as positive (sensitivity 97%) and 162 of 184 eyes as negative (specificity 88%). General ophthalmologists correctly identified 35 of 170 eyes as positive (sensitivity 20.6%) and 607 of 608 eyes as negative (specificity 99.8%) for mtmDR, compared with the EyeArt AI system, which identified 164 of 170 as positive (sensitivity 96.5%) and 525 of 608 eyes as negative (specificity 86%). Conclusions The AI system had a higher sensitivity for detecting mtmDR than either general ophthalmologists or retina specialists when judged against the clinical reference standard. It can potentially serve as a low-cost point-of-care diabetic retinopathy detection tool and help address the diabetic eye screening burden.
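The sensitivities and specificities quoted above follow directly from the reported eye counts; a quick sketch recomputing the published figures (illustrative only, not code from the study):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Retina specialists vs. reference standard: 22/37 positive, 182/184 negative eyes
sens_rs, spec_rs = sens_spec(tp=22, fn=37 - 22, tn=182, fp=184 - 182)
# EyeArt on the same eyes: 36/37 positive, 162/184 negative
sens_ai, spec_ai = sens_spec(tp=36, fn=1, tn=162, fp=184 - 162)
print(f"retina specialists: sensitivity {sens_rs:.1%}, specificity {spec_rs:.1%}")
print(f"EyeArt:             sensitivity {sens_ai:.1%}, specificity {spec_ai:.1%}")
```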
255
Ong J, Tan G, Ang M, Chhablani J. Digital Advancements in Retinal Models of Care in the Post-COVID-19 Lockdown Era. Asia Pac J Ophthalmol (Phila) 2022; 11:403-407. [PMID: 36094383] [DOI: 10.1097/apo.0000000000000533] [Received: 12/14/2021] [Accepted: 03/14/2022]
Abstract
The coronavirus disease 2019 (COVID-19) pandemic introduced unique barriers to retinal care, including limited access to imaging modalities, ophthalmic clinicians, and direct medical interventions. These unprecedented barriers were met with the robust implementation of digital advances to aid the monitoring and efficiency of retinal care while taking public safety into account. Many of these innovations have been successful in maintaining efficiency and patient satisfaction and are likely to remain in place to help preserve vision in the future. In this article we highlight these advances implemented during the pandemic, including telescreening triage, virtual retinal imaging clinics, at-home optical coherence tomography, mobile phone self-monitoring, and virtual reality monitoring technology. We also discuss advancing innovations, including the Internet of Things and blockchain technology, that will be critical for the further implementation and security of these digital advancements.
Affiliation(s)
- Joshua Ong
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA
- Gavin Tan
- Surgical Retinal Department of the Singapore National Eye Centre, Singapore
- Clinician Scientist, Singapore Eye Research Institute, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Marcus Ang
- Duke-NUS Department of Ophthalmology and Visual Sciences, Singapore
- Jay Chhablani
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA
256
Padilla-Pantoja FD, Sanchez YD, Quijano-Nieto BA, Perdomo OJ, Gonzalez FA. Etiology of Macular Edema Defined by Deep Learning in Optical Coherence Tomography Scans. Transl Vis Sci Technol 2022; 11:29. [PMID: 36169966] [PMCID: PMC9526369] [DOI: 10.1167/tvst.11.9.29]
Abstract
Purpose To develop an automated method based on deep learning (DL) to classify macular edema (ME) from the evaluation of optical coherence tomography (OCT) scans. Methods A total of 4230 images were obtained from data repositories of patients attended in an ophthalmology clinic in Colombia and from two free open-access databases. They were annotated with four biomarkers (BMs): intraretinal fluid, subretinal fluid, hyperreflective foci/tissue, and drusen. The scans were then labeled by two expert ophthalmologists as control or as one of three ocular diseases: diabetic macular edema (DME), neovascular age-related macular degeneration (nAMD), and retinal vein occlusion (RVO). Our method was developed in four consecutive phases: segmentation of BMs, combination of BMs, feature extraction with convolutional neural networks to achieve binary classification for each disease, and, finally, multiclass classification of diseases and control images. Results The accuracy of our model was 97% for nAMD, and 94%, 93%, and 93% for DME, RVO, and control, respectively. Area under the curve values were 0.99, 0.98, 0.96, and 0.97, respectively. The mean Cohen's kappa coefficient for the multiclass classification task was 0.84. Conclusions The proposed DL model can identify OCT scans as normal or as showing ME and classify its cause among three major exudative retinal diseases with high accuracy and reliability. Translational Relevance Our DL approach can optimize the efficiency and timeliness of an appropriate etiological diagnosis of ME, thus improving patient access and clinical decision making. It could be useful in places with a shortage of specialists and for readers who evaluate OCT scans remotely.
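The Cohen's kappa coefficient reported above measures multiclass agreement beyond chance. A small self-contained sketch on invented labels (the four class names follow the study, the data do not):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of each rater's marginal label frequencies
    expected = sum(counts_a[c] * counts_b[c] for c in set(rater_a) | set(rater_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy example: model predictions vs. reference labels for 8 OCT scans
ref  = ["DME", "nAMD", "RVO", "control", "DME", "nAMD", "RVO", "control"]
pred = ["DME", "nAMD", "RVO", "control", "DME", "nAMD", "DME", "control"]
print(round(cohens_kappa(ref, pred), 3))   # → 0.833
```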
Affiliation(s)
- Yeison D Sanchez
- MindLab Research Group, Universidad Nacional de Colombia, Bogotá, Colombia
- Oscar J Perdomo
- School of Medicine and Health Sciences, Universidad del Rosario, Bogotá, Colombia
- Fabio A Gonzalez
- MindLab Research Group, Universidad Nacional de Colombia, Bogotá, Colombia
257
Curran DM, Kim BY, Withers N, Shepard DS, Brady CJ. Telehealth Screening for Diabetic Retinopathy: Economic Modeling Reveals Cost Savings. Telemed J E Health 2022; 28:1300-1308. [PMID: 35073213] [PMCID: PMC9508450] [DOI: 10.1089/tmj.2021.0352] [Received: 07/06/2021] [Revised: 12/10/2021] [Accepted: 12/13/2021]
Abstract
Introduction: Telehealth screening (TS) for diabetic retinopathy (DR) consists of fundus photography in a primary care setting with remote interpretation of the images. TS for DR is known to increase screening utilization and reduce vision loss compared with the standard in-person conventional diabetic retinal exam (CDRE). Anti-vascular endothelial growth factor (anti-VEGF) intravitreal injections have become the standard of care for the treatment of DR, but they are expensive. We investigated, using national data, whether TS for DR is cost-effective when DR management includes intravitreal injections. Materials and Methods: We compared the cost and effectiveness of TS and CDRE using decision-tree analysis and probabilistic sensitivity analysis with Monte Carlo simulation. We considered the disability weight (DW) of vision impairment and 1-year direct medical costs of managing patients based on Medicare allowable rates and clinical trial data. Primary outcomes were incremental costs and incremental effectiveness. Results: The average annual direct cost of eye care was $196 per person for TS and $275 for CDRE. On average, TS saved $78 (28%) compared with CDRE and was cost saving in 88.9% of simulations. The average DW outcome was equivalent in both groups. Discussion: Although this study was limited by a 1-year time horizon, it provides support that TS for DR can reduce the costs of DR management despite expensive treatment with anti-VEGF agents. TS for DR is as effective as CDRE at preserving vision. Conclusions: Annual TS for DR is cost saving and as effective as CDRE given a 1-year time horizon.
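The probabilistic sensitivity analysis described above can be sketched as a toy Monte Carlo comparison of the two strategies' annual costs. The distributions below are invented for illustration, centred on the reported means; they are not the study's decision-tree model or its actual parameters:

```python
import random

def simulate(n_sims=10_000, seed=42):
    """Toy probabilistic sensitivity analysis: per-person annual eye-care cost
    under telehealth screening (TS) vs. conventional exam (CDRE)."""
    rng = random.Random(seed)
    savings = []
    for _ in range(n_sims):
        # Hypothetical cost distributions centred on the reported means
        cost_ts = rng.gauss(196, 30)
        cost_cdre = rng.gauss(275, 40)
        savings.append(cost_cdre - cost_ts)
    mean_saving = sum(savings) / n_sims
    frac_saving = sum(s > 0 for s in savings) / n_sims
    return mean_saving, frac_saving

mean_saving, frac_saving = simulate()
print(f"mean saving ${mean_saving:.0f}; TS cheaper in {frac_saving:.1%} of simulations")
```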
Affiliation(s)
- Delaney M. Curran
- Division of Ophthalmology, Department of Surgery, University of Vermont Larner College of Medicine, Burlington, Vermont, USA
- Brian Y. Kim
- Division of Ophthalmology, Department of Surgery, University of Vermont Larner College of Medicine, Burlington, Vermont, USA
- Division of Ophthalmology, Department of Surgery, University of Vermont Medical Center, Burlington, Vermont, USA
- Natasha Withers
- Ambulatory Care, Porter Medical Center, University of Vermont Health Network, Middlebury, Vermont, USA
- Donald S. Shepard
- Heller School for Social Policy and Management, Brandeis University, Waltham, Massachusetts, USA
- Christopher J. Brady
- Division of Ophthalmology, Department of Surgery, University of Vermont Larner College of Medicine, Burlington, Vermont, USA
- Division of Ophthalmology, Department of Surgery, University of Vermont Medical Center, Burlington, Vermont, USA
- Vermont Center on Behavior and Health, Larner College of Medicine, Burlington, Vermont, USA
258
Grauslund J. Diabetic retinopathy screening in the emerging era of artificial intelligence. Diabetologia 2022; 65:1415-1423. [PMID: 35639120] [DOI: 10.1007/s00125-022-05727-0] [Received: 03/14/2022] [Accepted: 04/05/2022]
Abstract
Diabetic retinopathy is a frequent complication in diabetes and a leading cause of visual impairment. Regular eye screening is imperative to detect sight-threatening stages of diabetic retinopathy such as proliferative diabetic retinopathy and diabetic macular oedema in order to treat these before irreversible visual loss occurs. Screening is cost-effective and has been implemented in various countries in Europe and elsewhere. Along with optimised diabetes care, this has substantially reduced the risk of visual loss. Nevertheless, the growing number of patients with diabetes poses an increasing burden on healthcare systems and automated solutions are needed to alleviate the task of screening and improve diagnostic accuracy. Deep learning by convolutional neural networks is an optimised branch of artificial intelligence that is particularly well suited to automated image analysis. Pivotal studies have demonstrated high sensitivity and specificity for classifying advanced stages of diabetic retinopathy and identifying diabetic macular oedema in optical coherence tomography scans. Based on this, different algorithms have obtained regulatory approval for clinical use and have recently been implemented to some extent in a few countries. Handheld mobile devices are another promising option for self-monitoring, but so far they have not demonstrated comparable image quality to that of fundus photography using non-portable retinal cameras, which is the gold standard for diabetic retinopathy screening. Such technology has the potential to be integrated in telemedicine-based screening programmes, enabling self-captured retinal images to be transferred virtually to reading centres for analysis and planning of further steps. While emerging technologies have shown a lot of promise, clinical implementation has been sparse. 
Legal obstacles and difficulties in software integration may partly explain this, but it may also indicate that existing algorithms do not necessarily integrate well with national screening initiatives, which often differ substantially between countries.
Affiliation(s)
- Jakob Grauslund
- Department of Ophthalmology, Odense University Hospital, Odense, Denmark.
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark.
- Steno Diabetes Center Odense, Odense University Hospital, Odense, Denmark.
- Vestfold Hospital Trust, Tønsberg, Norway.
259
Charng J, Alam K, Swartz G, Kugelman J, Alonso-Caneiro D, Mackey DA, Chen FK. Deep learning: applications in retinal and optic nerve diseases. Clin Exp Optom 2022:1-10. [PMID: 35999058] [DOI: 10.1080/08164622.2022.2111201]
Abstract
Deep learning (DL) represents a paradigm-shifting, burgeoning field of research with emerging clinical applications in optometry. Unlike traditional programming, which relies on human-defined rules, DL works by exposing an algorithm to a large amount of annotated data and allowing the software to develop its own set of rules (i.e. to learn) by adjusting the parameters inside the model (network) during a training process, in order to complete the task on its own. One major limitation of traditional programming is that complex tasks may require an extensive set of rules to complete the assignment accurately. Additionally, traditional programming can be susceptible to human bias arising from programmer experience. With the dramatic increase in the amount and complexity of clinical data, DL has been utilised to automate data analysis and thus assist clinicians in patient management. This review presents the latest advances in DL for managing posterior eye diseases, as well as DL-based solutions for patients with vision loss.
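The contrast drawn above between a human-set rule and a learned one can be shown with a deliberately tiny toy: a single cutoff parameter adjusted from annotated examples instead of being hard-coded. The thicknesses and labels are invented for illustration and have no clinical meaning:

```python
# Toy contrast between traditional programming (a human-set rule) and
# "learning" a rule from annotated data; all values are invented.
def rule_based(thickness_um):
    return thickness_um > 300            # cutoff chosen by a programmer

def learn_threshold(examples, lr=1.0, epochs=200):
    t = 0.0                              # model parameter, tuned during training
    for _ in range(epochs):
        for x, label in examples:
            pred = 1 if x > t else 0
            t += lr * (pred - label)     # nudge the cutoff to reduce errors
    return t

examples = [(250, 0), (280, 0), (320, 1), (400, 1), (290, 0), (350, 1)]
t = learn_threshold(examples)
print(f"learned cutoff: {t:.0f}")        # the data, not the programmer, set it
assert all((x > t) == bool(label) for x, label in examples)
```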
Affiliation(s)
- Jason Charng
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia; Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
- Khyber Alam
- Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
- Gavin Swartz
- Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
- Jason Kugelman
- School of Optometry and Vision Science, Queensland University of Technology, Brisbane, Australia
- David Alonso-Caneiro
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia; School of Optometry and Vision Science, Queensland University of Technology, Brisbane, Australia
- David A Mackey
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia; Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia; Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
- Fred K Chen
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia; Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia; Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia; Department of Ophthalmology, Royal Perth Hospital, Perth, Western Australia, Australia
260
Pareja-Ríos A, Ceruso S, Romero-Aroca P, Bonaque-González S. A New Deep Learning Algorithm with Activation Mapping for Diabetic Retinopathy: Backtesting after 10 Years of Tele-Ophthalmology. J Clin Med 2022; 11:jcm11174945. [PMID: 36078875] [PMCID: PMC9456446] [DOI: 10.3390/jcm11174945] [Received: 08/02/2022] [Revised: 08/17/2022] [Accepted: 08/22/2022]
Abstract
We report the development of a deep learning algorithm (AI) to detect signs of diabetic retinopathy (DR) in fundus images. For this, we use a ResNet-50 neural network at double resolution, with the addition of Squeeze–Excitation blocks, pre-trained on ImageNet, and trained for 50 epochs using the Adam optimizer. The AI-based algorithm not only classifies an image as pathological or not but also detects and highlights the signs that allow DR to be identified. For development, we used a database of about half a million images classified in a real clinical environment by family doctors (FDs), ophthalmologists, or both. The AI was able to detect more than 95% of cases worse than mild DR and made 70% fewer misclassifications of healthy cases than FDs. In addition, the AI detected DR signs in 1258 patients before they were detected by FDs, representing 7.9% of the total number of DR patients detected by the FDs. These results suggest that the AI is at least comparable to the evaluation of FDs. We suggest that it may be most useful as a signaling tool that aids diagnosis rather than as a stand-alone diagnostic tool.
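The Squeeze–Excitation blocks mentioned above recalibrate channel responses: each channel is global-average-pooled ("squeeze"), passed through a small bottleneck with a sigmoid ("excitation"), and the resulting per-channel weights rescale the feature map. A minimal NumPy sketch of the idea (illustrative only, not the paper's implementation; weights and shapes are invented):

```python
import numpy as np

def se_block(x, w1, w2):
    """x: feature map (C, H, W); w1: (C, C//r); w2: (C//r, C); r = reduction."""
    z = x.mean(axis=(1, 2))                 # squeeze: global average pool -> (C,)
    s = np.maximum(z @ w1, 0.0)             # excitation: FC + ReLU -> (C//r,)
    s = 1.0 / (1.0 + np.exp(-(s @ w2)))     # FC + sigmoid -> channel weights (C,)
    return x * s[:, None, None]             # rescale each channel

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))          # 8 channels, 4x4 spatial
w1 = rng.standard_normal((8, 2)) * 0.1      # reduction ratio r = 4
w2 = rng.standard_normal((2, 8)) * 0.1
y = se_block(x, w1, w2)
print(y.shape)                              # same shape, channels reweighted
```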
Affiliation(s)
- Alicia Pareja-Ríos
- Department of Ophthalmology, University Hospital of the Canary Islands, 38320 San Cristóbal de La Laguna, Spain
- Sabato Ceruso
- School of Engineering and Technology, University of La Laguna, 38200 San Cristóbal de La Laguna, Spain
- Pedro Romero-Aroca
- Ophthalmology Department, University Hospital Sant Joan, Institute of Health Research Pere Virgili (IISPV), Universitat Rovira & Virgili, 43002 Tarragona, Spain
- Sergio Bonaque-González
- Instituto de Astrofísica de Canarias, 38205 San Cristóbal de La Laguna, Spain
261
Zhang T, Chen J, Lu Y, Yang X, Ouyang Z. Identification of technology frontiers of artificial intelligence-assisted pathology based on patent citation network. PLoS One 2022; 17:e0273355. [PMID: 35994484] [PMCID: PMC9394838] [DOI: 10.1371/journal.pone.0273355] [Received: 05/23/2022] [Accepted: 08/05/2022]
Abstract
OBJECTIVES This paper aimed to identify the technology frontiers of artificial intelligence-assisted pathology based on a patent citation network. METHODS Patents related to artificial intelligence-assisted pathology were searched for and collected from the Derwent Innovation Index (DII), imported into Derwent Data Analyzer (DDA, Clarivate Derwent, New York, NY, USA) for authority control, and then imported into the freely available program Ucinet 6 to draw the patent citation network. Built from citation relationships, the patent citation network describes the context of technology development in the field of artificial intelligence-assisted pathology. Patent citations were extracted from the collected patent data, highly cited patents were selected to form a co-occurrence matrix, and a patent citation network was built from the co-occurrence matrix for each period. Text clustering, an unsupervised learning method and an important technique in text mining, groups similar documents into clusters: the similarity between documents is determined by calculating the distance between them, and the two documents with the closest distance are combined. Text clustering was used to identify the technology frontiers based on the patent citation network, through co-word analysis of the titles and abstracts of the patents in this field. RESULTS A total of 1704 patents were obtained in the field of artificial intelligence-assisted pathology, which has passed through three stages: the budding period (1992-2000), the development period (2001-2015), and the rapid growth period (2016-2021). There were two technology frontiers in the budding period (1992-2000), namely systems and methods for image data processing in computerized tomography (CT), and immunohistochemistry (IHC); five technology frontiers in the development period (2001-2015), namely spectral analysis methods for biomacromolecules, pathological information systems, diagnostic biomarkers, molecular pathology diagnosis, and pathological diagnosis antibodies; and six technology frontiers in the rapid growth period (2016-2021), namely digital pathology (DP), deep learning (DL) algorithms such as convolutional neural networks (CNN), disease prediction models, computational pathology, pathological image analysis methods, and intelligent pathological systems. CONCLUSIONS Artificial intelligence-assisted pathology is currently in a period of rapid development, and computational pathology, DL, and the other technologies of this period all involve the study of algorithms. Future research hotspots in this field will focus on algorithm improvement and intelligent diagnosis in order to realize precise diagnosis. The results of this study present an overview of the research status and development trends in the field of artificial intelligence-assisted pathology, which could help readers broaden innovative ideas and discover new technological opportunities, and also serve as important indicators for government policymaking.
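The co-word analysis described above starts from a term co-occurrence matrix: count how often pairs of terms appear together in the same patent title or abstract, then cluster on those counts. A toy sketch of the counting step, on invented titles rather than the DII patent data:

```python
from itertools import combinations
from collections import Counter

def coword_matrix(docs):
    """Count how often each pair of terms co-occurs within a document."""
    pair_counts = Counter()
    for doc in docs:
        terms = sorted(set(doc.lower().split()))   # unique terms, stable pair order
        pair_counts.update(combinations(terms, 2))
    return pair_counts

# Invented patent titles for illustration
titles = [
    "deep learning pathology image analysis",
    "convolutional neural network pathology diagnosis",
    "digital pathology image diagnosis system",
]
cooc = coword_matrix(titles)
print(cooc[("image", "pathology")])   # "image" and "pathology" co-occur in 2 titles
```

A clustering step would then merge the two terms (or documents) with the smallest distance under this matrix, repeatedly, as the abstract describes.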
Affiliation(s)
- Ting Zhang
- Institute of Medical Information & Library, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, People’s Republic of China
- Juan Chen
- Institute of Medical Information & Library, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, People’s Republic of China
- Yan Lu
- Institute of Medical Information & Library, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, People’s Republic of China
- Xiaoyi Yang
- Institute of Medical Information & Library, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, People’s Republic of China
- Zhaolian Ouyang
- Institute of Medical Information & Library, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, People’s Republic of China
262
Big Data in Laboratory Medicine—FAIR Quality for AI? Diagnostics (Basel) 2022; 12:diagnostics12081923. [PMID: 36010273] [PMCID: PMC9406962] [DOI: 10.3390/diagnostics12081923] [Received: 07/13/2022] [Revised: 08/05/2022] [Accepted: 08/06/2022]
Abstract
Laboratory medicine is a digital science. Every large hospital produces a wealth of data each day, from simple numerical results of, e.g., sodium measurements to the highly complex output of "-omics" analyses, as well as quality control results and metadata. Processing, connecting, storing, and ordering extensive parts of these individual data requires Big Data techniques. Whereas novel technologies such as artificial intelligence and machine learning have exciting applications for the augmentation of laboratory medicine, the Big Data concept remains fundamental for any sophisticated data analysis in large databases. To make laboratory medicine data optimally usable for clinical and research purposes, they need to be FAIR: findable, accessible, interoperable, and reusable. This can be achieved, for example, by automated recording, connection of devices, efficient ETL (Extract, Transform, Load) processes, careful data governance, and modern data security solutions. Enriched with clinical data, laboratory medicine data allow a gain in pathophysiological insights, can improve patient care, and can be used to develop reference intervals for diagnostic purposes. Nevertheless, Big Data in laboratory medicine do not come without challenges: the growing number of analyses, and the data derived from them, are demanding to manage. Laboratory medicine experts are and will be needed to drive this development, take an active role in the ongoing digitalization, and provide guidance for their clinical colleagues engaging with laboratory data in research.
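The ETL processes mentioned above can be sketched in miniature: extract raw laboratory exports, transform them (e.g. harmonise units so the data become interoperable), and load them into a queryable store. Everything here is hypothetical, including the field names, the toy rows, and the glucose conversion step; it illustrates the pattern, not any real laboratory pipeline:

```python
import csv, io, sqlite3

# Toy raw export from a lab instrument (entirely invented)
raw = """patient_id,analyte,value,unit
p1,sodium,140,mmol/L
p2,glucose,99,mg/dL
"""

MG_DL_TO_MMOL_L_GLUCOSE = 1 / 18.016   # unit harmonisation factor for glucose

def extract(text):
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    for row in rows:
        value = float(row["value"])
        if row["analyte"] == "glucose" and row["unit"] == "mg/dL":
            value, row["unit"] = value * MG_DL_TO_MMOL_L_GLUCOSE, "mmol/L"
        row["value"] = round(value, 2)
    return rows

def load(rows, conn):
    conn.execute("CREATE TABLE labs (patient_id TEXT, analyte TEXT, value REAL, unit TEXT)")
    conn.executemany("INSERT INTO labs VALUES (:patient_id, :analyte, :value, :unit)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(raw)), conn)
print(conn.execute("SELECT analyte, value, unit FROM labs").fetchall())
```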
263
Chen JS, Baxter SL. Applications of natural language processing in ophthalmology: present and future. Front Med (Lausanne) 2022; 9:906554. [PMID: 36004369 PMCID: PMC9393550 DOI: 10.3389/fmed.2022.906554] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Accepted: 05/31/2022] [Indexed: 11/13/2022] Open
Abstract
Advances in technology, including novel ophthalmic imaging devices and adoption of the electronic health record (EHR), have resulted in significantly increased data available for both clinical use and research in ophthalmology. While artificial intelligence (AI) algorithms have the potential to utilize these data to transform clinical care, current applications of AI in ophthalmology have focused mostly on image-based deep learning. Unstructured free-text in the EHR represents a tremendous amount of underutilized data in big data analyses and predictive AI. Natural language processing (NLP) is a type of AI involved in processing human language that can be used to develop automated algorithms using these vast quantities of available text data. The purpose of this review was to introduce ophthalmologists to NLP by (1) reviewing current applications of NLP in ophthalmology and (2) exploring potential applications of NLP. We reviewed current literature published in PubMed and Google Scholar for articles related to NLP and ophthalmology, and used ancestor search to expand our references. Overall, we found 19 published studies of NLP in ophthalmology. The majority of these publications (16) focused on extracting specific text such as visual acuity from free-text notes for the purposes of quantitative analysis. Other applications included: domain embedding, predictive modeling, and topic modeling. Future ophthalmic applications of NLP may also focus on developing search engines for data within free-text notes, cleaning notes, automated question-answering, and translating ophthalmology notes for other specialties or for patients, especially with a growing interest in open notes. As medicine becomes more data-oriented, NLP offers increasing opportunities to augment our ability to harness free-text data and drive innovations in healthcare delivery and treatment of ophthalmic conditions.
Affiliation(s)
- Jimmy S. Chen
- Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, United States
- Health Department of Biomedical Informatics, University of California San Diego, La Jolla, CA, United States
- Sally L. Baxter
- Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, United States
- Health Department of Biomedical Informatics, University of California San Diego, La Jolla, CA, United States
264
The Role of Medical Image Modalities and AI in the Early Detection, Diagnosis and Grading of Retinal Diseases: A Survey. Bioengineering (Basel) 2022; 9:bioengineering9080366. [PMID: 36004891 PMCID: PMC9405367 DOI: 10.3390/bioengineering9080366] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2022] [Revised: 07/28/2022] [Accepted: 08/01/2022] [Indexed: 11/16/2022] Open
Abstract
Traditional dilated ophthalmoscopy can reveal diseases, such as age-related macular degeneration (AMD), diabetic retinopathy (DR), diabetic macular edema (DME), retinal tear, epiretinal membrane, macular hole, retinal detachment, retinitis pigmentosa, retinal vein occlusion (RVO), and retinal artery occlusion (RAO). Among these diseases, AMD and DR are the major causes of progressive vision loss, and DR is recognized as a worldwide epidemic. Advances in retinal imaging have improved the diagnosis and management of DR and AMD. In this review article, we focus on the variable imaging modalities for accurate diagnosis, early detection, and staging of both AMD and DR. In addition, the role of artificial intelligence (AI) in providing automated detection, diagnosis, and staging of these diseases is surveyed. Furthermore, current works are summarized and discussed. Finally, projected future trends are outlined. This survey indicates the effective role of AI in the early detection, diagnosis, and staging of DR and/or AMD. In the future, more AI solutions will be presented that hold promise for clinical applications.
265
Brady CJ, Cockrell RC, Aldrich LR, Wolle MA, West SK. A Virtual Reading Center Model Using Crowdsourcing to Grade Photographs for Trachoma: Validation Study (Preprint). J Med Internet Res 2022; 25:e41233. [PMID: 37023420 PMCID: PMC10132003 DOI: 10.2196/41233] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Revised: 01/30/2023] [Accepted: 02/19/2023] [Indexed: 02/21/2023] Open
Abstract
BACKGROUND As trachoma is eliminated, skilled field graders become less adept at correctly identifying active disease (trachomatous inflammation-follicular [TF]). Deciding if trachoma has been eliminated from a district or if treatment strategies need to be continued or reinstated is of critical public health importance. Telemedicine solutions require both connectivity, which can be poor in the resource-limited regions of the world in which trachoma occurs, and accurate grading of the images. OBJECTIVE Our purpose was to develop and validate a cloud-based "virtual reading center" (VRC) model using crowdsourcing for image interpretation. METHODS The Amazon Mechanical Turk (AMT) platform was used to recruit lay graders to interpret 2299 gradable images from a prior field trial of a smartphone-based camera system. Each image received 7 grades for US $0.05 per grade in this VRC. The resultant data set was divided into training and test sets to internally validate the VRC. In the training set, crowdsourcing scores were summed, and the optimal raw score cutoff was chosen to optimize kappa agreement and the resulting prevalence of TF. The best method was then applied to the test set, and the sensitivity, specificity, kappa, and TF prevalence were calculated. RESULTS In this trial, over 16,000 grades were rendered in just over 60 minutes for US $1098 including AMT fees. After choosing an AMT raw score cut point to optimize kappa near the World Health Organization (WHO)-endorsed level of 0.7 (with a simulated 40% prevalence TF), crowdsourcing was 95% sensitive and 87% specific for TF in the training set with a kappa of 0.797. All 196 crowdsourced-positive images received a skilled overread to mimic a tiered reading center and specificity improved to 99%, while sensitivity remained above 78%. Kappa for the entire sample improved from 0.162 to 0.685 with overreads, and the skilled grader burden was reduced by over 80%. 
This tiered VRC model was then applied to the test set and produced a sensitivity of 99% and a specificity of 76% with a kappa of 0.775 in the entire set. The prevalence estimated by the VRC was 2.70% (95% CI 1.84%-3.80%) compared to the ground truth prevalence of 2.87% (95% CI 1.98%-4.01%). CONCLUSIONS A VRC model using crowdsourcing as a first pass with skilled grading of positive images was able to identify TF rapidly and accurately in a low prevalence setting. The findings from this study support further validation of a VRC and crowdsourcing for image grading and estimation of trachoma prevalence from field-acquired images, although further prospective field testing is required to determine if diagnostic characteristics are acceptable in real-world surveys with a low prevalence of the disease.
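The cutoff-selection step described above — summing the seven lay grades per image and choosing the raw-score threshold whose kappa against expert labels is closest to the WHO-endorsed 0.7 — can be sketched as follows. This is an illustrative reconstruction, not the study's code; the function names and toy inputs are ours.

```python
def kappa(y_true, y_pred):
    """Cohen's kappa for binary labels (observed vs. chance agreement)."""
    n = len(y_true)
    agree = sum(t == p for t, p in zip(y_true, y_pred)) / n
    p_true = sum(y_true) / n
    p_pred = sum(y_pred) / n
    expected = p_true * p_pred + (1 - p_true) * (1 - p_pred)
    return (agree - expected) / (1 - expected)

def pick_cutoff(raw_scores, expert, n_graders=7, target_kappa=0.7):
    """Scan raw-score cutoffs (number of positive lay grades required)
    and return the one whose kappa vs. expert labels is closest to target."""
    best = None
    for cutoff in range(1, n_graders + 1):
        pred = [1 if s >= cutoff else 0 for s in raw_scores]
        k = kappa(expert, pred)
        if best is None or abs(k - target_kappa) < abs(best[1] - target_kappa):
            best = (cutoff, k, pred)
    return best
```

Applied to real data, the chosen cutoff then defines which crowdsourced-positive images are passed to a skilled grader for overread, as in the tiered model above.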
Affiliation(s)
- Christopher J Brady
- Division of Ophthalmology, Department of Surgery, Larner College of Medicine at The University of Vermont, Burlington, VT, United States
- R Chase Cockrell
- Division of Surgical Research, Department of Surgery, Larner College of Medicine at The University of Vermont, Burlington, VT, United States
- Lindsay R Aldrich
- Larner College of Medicine at The University of Vermont, Burlington, VT, United States
- Meraf A Wolle
- Dana Center for Preventive Ophthalmology, Wilmer Eye Institute, Baltimore, MD, United States
- Sheila K West
- Dana Center for Preventive Ophthalmology, Wilmer Eye Institute, Baltimore, MD, United States
266
Liu R, Li Q, Xu F, Wang S, He J, Cao Y, Shi F, Chen X, Chen J. Application of artificial intelligence-based dual-modality analysis combining fundus photography and optical coherence tomography in diabetic retinopathy screening in a community hospital. Biomed Eng Online 2022; 21:47. [PMID: 35859144 PMCID: PMC9301845 DOI: 10.1186/s12938-022-01018-2] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2021] [Accepted: 07/11/2022] [Indexed: 11/24/2022] Open
Abstract
Background To assess the feasibility and clinical utility of artificial intelligence (AI)-based screening for diabetic retinopathy (DR) and macular edema (ME) by combining fundus photos and optical coherence tomography (OCT) images in a community hospital. Methods Fundus photos and OCT images were taken for 600 diabetic patients in a community hospital. Ophthalmologists graded these fundus photos according to the International Clinical Diabetic Retinopathy (ICDR) Severity Scale as the ground truth. Two existing trained AI models were used to automatically classify the fundus images into DR grades according to ICDR, and to detect concomitant ME from OCT images, respectively. The criteria for referral were DR grades 2–4 and/or the presence of ME. The sensitivity and specificity of AI grading were evaluated. The number of referable DR cases confirmed by ophthalmologists and AI was calculated, respectively. Results DR was detected in 81 (13.5%) participants by ophthalmologists and in 94 (15.6%) by AI, and 45 (7.5%) and 53 (8.8%) participants were diagnosed with referable DR by ophthalmologists and by AI, respectively. The sensitivity, specificity and area under the curve (AUC) of AI for detecting DR were 91.67%, 96.92% and 0.944, respectively. For detecting referable DR, the sensitivity, specificity and AUC of AI were 97.78%, 98.38% and 0.981, respectively. ME was detected from OCT images in 49 (8.2%) participants by ophthalmologists and in 57 (9.5%) by AI, and the sensitivity, specificity and AUC of AI were 91.30%, 97.46% and 0.944, respectively. When combining fundus photos and OCT images, the number of referrals identified by ophthalmologists increased from 45 to 75 and from 53 to 85 by AI. Conclusion AI-based DR screening has high sensitivity and specificity and may feasibly improve the referral rate of community DR.
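The study's referral criterion — ICDR grade 2–4 on fundus photography and/or macular edema on OCT — is a simple disjunction. A minimal sketch (function names are ours, not from the paper):

```python
def needs_referral(icdr_grade, has_macular_edema):
    """Referral criterion from the screening protocol:
    ICDR grade 2-4 (moderate NPDR or worse) and/or OCT-detected macular edema."""
    return 2 <= icdr_grade <= 4 or has_macular_edema

def count_referrals(gradings):
    """Count referable cases in a list of (icdr_grade, has_me) pairs."""
    return sum(1 for g, me in gradings if needs_referral(g, me))
```

The "and/or" is what drives the increase reported above: adding OCT-detected edema to the rule raised referrals from 45 to 75 (ophthalmologists) and from 53 to 85 (AI).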
Affiliation(s)
- Rui Liu
- Department of Ophthalmology, Shanghai Jing'an District Shibei Hospital, 4500, Gonghexin Road, Jing'an, Shanghai, 200443, China
- Qingchen Li
- Department of Ophthalmology and Vision Science, Eye and ENT Hospital, Fudan University, Shanghai, 200031, China; Key Laboratory of Myopia of State Health Ministry, and Key Laboratory of Visual Impairment and Restoration of Shanghai, Shanghai, 200031, China; Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, 200031, China
- Feiping Xu
- Department of Ophthalmology, Shanghai Jing'an District Shibei Hospital, 4500, Gonghexin Road, Jing'an, Shanghai, 200443, China
- Shasha Wang
- Department of Ophthalmology, Shanghai Jing'an District Shibei Hospital, 4500, Gonghexin Road, Jing'an, Shanghai, 200443, China
- Jie He
- Department of Ophthalmology, Shanghai Jing'an District Shibei Hospital, 4500, Gonghexin Road, Jing'an, Shanghai, 200443, China
- Yiting Cao
- Department of Ophthalmology, Shanghai Jing'an District Shibei Hospital, 4500, Gonghexin Road, Jing'an, Shanghai, 200443, China
- Fei Shi
- School of Electronic and Information Engineering, Soochow University, Suzhou, 215006, Jiangsu, China; Suzhou Big Vision Medical Imaging Technology Co. Ltd., Suzhou, 215000, Jiangsu, China
- Xinjian Chen
- School of Electronic and Information Engineering, Soochow University, Suzhou, 215006, Jiangsu, China; Suzhou Big Vision Medical Imaging Technology Co. Ltd., Suzhou, 215000, Jiangsu, China
- Jili Chen
- Department of Ophthalmology, Shanghai Jing'an District Shibei Hospital, 4500, Gonghexin Road, Jing'an, Shanghai, 200443, China
267
Khan NC, Perera C, Dow ER, Chen KM, Mahajan VB, Mruthyunjaya P, Do DV, Leng T, Myung D. Predicting Systemic Health Features from Retinal Fundus Images Using Transfer-Learning-Based Artificial Intelligence Models. Diagnostics (Basel) 2022; 12:diagnostics12071714. [PMID: 35885619 PMCID: PMC9322827 DOI: 10.3390/diagnostics12071714] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2022] [Revised: 06/23/2022] [Accepted: 06/24/2022] [Indexed: 12/02/2022] Open
Abstract
While color fundus photos are used in routine clinical practice to diagnose ophthalmic conditions, evidence suggests that ocular imaging contains valuable information regarding the systemic health features of patients. These features can be identified through computer vision techniques including deep learning (DL) artificial intelligence (AI) models. We aim to construct a DL model that can predict systemic features from fundus images and to determine the optimal method of model construction for this task. Data were collected from a cohort of patients undergoing diabetic retinopathy screening between March 2020 and March 2021. Two models were created for each of 12 systemic health features based on the DenseNet201 architecture: one utilizing transfer learning with images from ImageNet and another from 35,126 fundus images. Here, 1277 fundus images were used to train the AI models. Area under the receiver operating characteristic curve (AUROC) scores were used to compare the model performance. Models utilizing the ImageNet transfer learning data were superior to those using retinal images for transfer learning (mean AUROC 0.78 vs. 0.65, p-value < 0.001). Models using ImageNet pretraining were able to predict systemic features including ethnicity (AUROC 0.93), age > 70 (AUROC 0.90), gender (AUROC 0.85), ACE inhibitor (AUROC 0.82), and ARB medication use (AUROC 0.78). We conclude that fundus images contain valuable information about the systemic characteristics of a patient. To optimize DL model performance, we recommend that even domain-specific models consider using transfer learning from more generalized image sets to improve accuracy.
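AUROC, the metric used above to compare the two transfer-learning strategies, can be computed without any ML framework via its rank-statistic definition: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A sketch (our own illustration, not the authors' code):

```python
def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U formulation:
    fraction of positive/negative pairs where the positive is scored
    higher, counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUROC of 0.5 corresponds to chance-level ranking; 1.0 to perfect separation of positives from negatives.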
Affiliation(s)
- Nergis C. Khan
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Chandrashan Perera
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Department of Ophthalmology, Fremantle Hospital, Perth, WA 6004, Australia
- Eliot R. Dow
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Karen M. Chen
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Vinit B. Mahajan
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Prithvi Mruthyunjaya
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Diana V. Do
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Theodore Leng
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- David Myung
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- VA Palo Alto Health Care System, Palo Alto, CA 94304, USA
- Correspondence: ; Tel.: +1-650-724-3948
268
Lyu X, Jajal P, Tahir MZ, Zhang S. Fractal dimension of retinal vasculature as an image quality metric for automated fundus image analysis systems. Sci Rep 2022; 12:11868. [PMID: 35831401 PMCID: PMC9279448 DOI: 10.1038/s41598-022-16089-3] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Accepted: 07/04/2022] [Indexed: 11/21/2022] Open
Abstract
Automated fundus screening is becoming a significant programme of telemedicine in ophthalmology. Instant quality evaluation of uploaded retinal images could reduce unreliable diagnoses. In this work, we propose fractal dimension of retinal vasculature as an easy, effective and explainable indicator of retinal image quality. The pipeline of our approach is as follows: utilize image pre-processing techniques to standardize input retinal images from possibly different sources to a uniform style; then, an improved deep learning empowered vessel segmentation model is employed to extract retinal vessels from the pre-processed images; finally, a box counting module is used to measure the fractal dimension of segmented vessel images. A small fractal threshold (could be a value between 1.45 and 1.50) indicates insufficient image quality. Our approach has been validated on 30,644 images from four public databases.
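The box-counting step in the pipeline above has a standard formulation: cover the segmented vessel mask with grids of boxes at several scales, count the boxes containing vessel pixels at each scale, and take the fractal dimension as the slope of log N(s) against log(1/s). A minimal sketch under those assumptions (not the authors' implementation; box sizes are illustrative):

```python
import math

def box_count(mask, box):
    """Number of box-by-box cells containing at least one foreground pixel."""
    rows, cols = len(mask), len(mask[0])
    count = 0
    for r in range(0, rows, box):
        for c in range(0, cols, box):
            if any(mask[i][j]
                   for i in range(r, min(r + box, rows))
                   for j in range(c, min(c + box, cols))):
                count += 1
    return count

def fractal_dimension(mask, boxes=(1, 2, 4, 8)):
    """Least-squares slope of log N(s) against log(1/s) over the box sizes."""
    xs = [math.log(1.0 / b) for b in boxes]
    ys = [math.log(box_count(mask, b)) for b in boxes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

A filled region yields a dimension near 2 and a thin line near 1; a healthy vessel tree falls in between, which is why a value below roughly 1.45–1.50 flags an image where segmentation recovered too little vasculature.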
Affiliation(s)
- Xingzheng Lyu
- College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Hangzhou, 310027, China
- Purvish Jajal
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, T6G 1H9, Canada
- Muhammad Zeeshan Tahir
- College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Hangzhou, 310027, China
- Sanyuan Zhang
- College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Hangzhou, 310027, China
269
Young LH, Kim J, Yakin M, Lin H, Dao DT, Kodati S, Sharma S, Lee AY, Lee CS, Sen HN. Automated Detection of Vascular Leakage in Fluorescein Angiography - A Proof of Concept. Transl Vis Sci Technol 2022; 11:19. [PMID: 35877095 PMCID: PMC9339697 DOI: 10.1167/tvst.11.7.19] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Purpose The purpose of this paper was to develop a deep learning algorithm to detect retinal vascular leakage (leakage) in fluorescein angiography (FA) of patients with uveitis and use the trained algorithm to determine clinically notable leakage changes. Methods An algorithm was trained and tested to detect leakage on a set of 200 FA images (61 patients) and evaluated on a separate 50-image test set (21 patients). The ground truth was leakage segmentation by two clinicians. The Dice Similarity Coefficient (DSC) was used to measure concordance. Results During training, the algorithm achieved a best average DSC of 0.572 (95% confidence interval [CI] = 0.548–0.596). The trained algorithm achieved a DSC of 0.563 (95% CI = 0.543–0.582) when tested on an additional set of 50 images. The trained algorithm was then used to detect leakage on pairs of FA images from longitudinal patient visits. Longitudinal leakage follow-up showed that a >2.21% change in the visible retina area covered by leakage (as detected by the algorithm) had a sensitivity and specificity of 90% (area under the curve [AUC] = 0.95) for detecting a clinically notable change compared to the gold standard, an expert clinician's assessment. Conclusions This deep learning algorithm showed modest concordance in identifying vascular leakage compared to ground truth but was able to aid in identifying vascular FA leakage changes over time. Translational Relevance This is a proof-of-concept study that vascular leakage can be detected in a more standardized way and that tools can be developed to help clinicians more objectively compare vascular leakage between FAs.
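The Dice Similarity Coefficient used as the concordance measure above has a one-line definition: twice the overlap of the two segmentations divided by their total size. A minimal sketch over flat binary masks (our own illustration, including the convention of returning 1.0 for two empty masks):

```python
def dice(mask_a, mask_b):
    """Dice Similarity Coefficient between two flat binary masks:
    2*|A intersect B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0
```

A DSC of 1.0 means the algorithm's leakage mask exactly matches the clinicians' segmentation; the ~0.57 reported above reflects the partial overlap described as "modest concordance."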
Affiliation(s)
- LeAnne H Young
- National Eye Institute, Bethesda, MD, USA; Cleveland Clinic Lerner College of Medicine, Cleveland, OH, USA
- Jongwoo Kim
- National Library of Medicine, Bethesda, MD, USA
- Henry Lin
- National Eye Institute, Bethesda, MD, USA
- Sumit Sharma
- Cole Eye Institute, Cleveland Clinic, Cleveland, OH, USA
- H Nida Sen
- National Eye Institute, Bethesda, MD, USA
270
Automatic Screening of the Eyes in a Deep-Learning–Based Ensemble Model Using Actual Eye Checkup Optical Coherence Tomography Images. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12146872] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Eye checkups have become increasingly important to maintain good vision and quality of life. As the population requiring eye checkups increases, so does the clinical work burden of clinicians. An automatic screening algorithm to reduce the clinicians' workload is necessary. Machine learning (ML) has recently become one of the chief techniques for automated image recognition and is a helpful tool for identifying ocular diseases. However, the accuracy of ML models is lower in a clinical setting than in the laboratory. The performance of ML models depends on the training dataset. Eye checkups often prioritize speed and minimize image processing. Data distribution differs from the training dataset and, consequently, decreases prediction performance. The study aim was to investigate an ML model to screen for retinal diseases from low-quality optical coherence tomography (OCT) images captured during actual eye checkups, in order to avoid the performance loss caused by this dataset shift. The ensemble model combining convolutional neural network (CNN) and random forest models showed high screening performance on the single-shot OCT images captured during actual eye checkups. Our study indicates the strong potential of the ensemble model combining the CNN and random forest models in accurately predicting abnormalities during eye checkups.
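The abstract does not specify how the CNN and random forest outputs are combined; one common ensembling choice is soft voting over the two models' predicted abnormality probabilities, sketched here under that assumption (the weight and decision threshold are illustrative, not from the paper):

```python
def ensemble_predict(prob_cnn, prob_rf, w_cnn=0.5, threshold=0.5):
    """Soft-voting ensemble: weighted average of the two models'
    abnormality probabilities, thresholded to a screening decision.
    Returns (combined probability, flag-for-review boolean)."""
    p = w_cnn * prob_cnn + (1.0 - w_cnn) * prob_rf
    return p, p >= threshold
```

Averaging the two models' probabilities is one way such an ensemble can stay robust when either component is thrown off by the low-quality single-shot images described above.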
271
Held LA, Wewetzer L, Steinhäuser J. Determinants of the implementation of an artificial intelligence-supported device for the screening of diabetic retinopathy in primary care - a qualitative study. Health Informatics J 2022; 28:14604582221112816. [PMID: 35921547 DOI: 10.1177/14604582221112816] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Diabetic retinopathy is a microvascular complication of diabetes mellitus that is usually asymptomatic in the early stages. Therefore, its timely detection and treatment are essential. The first pilot projects to establish smartphone-based, AI-supported screening for DR in primary care already exist. This study explored health professionals' perceptions of potential barriers and enablers of using such a screening in primary care to understand the mechanisms that could influence implementation into routine clinical practice. Semi-structured telephone interviews were conducted and analysed using Mayring's qualitative content analysis. The following main influencing factors for implementation were identified: personal attitude, organisation, time, financial factors, education, support, technical requirements, influence on the profession and patient welfare. Most determinants could be mapped onto the behaviour change wheel, a validated implementation model. Further research on the patients' perspective and a ranking of the determinants found is needed.
Affiliation(s)
- Linda A Held
- Institute of Family Medicine, University Medical Center Schleswig-Holstein, Campus Lübeck, Germany
- Larisa Wewetzer
- Institute of Family Medicine, University Medical Center Schleswig-Holstein, Campus Lübeck, Germany
- Jost Steinhäuser
- Institute of Family Medicine, University Medical Center Schleswig-Holstein, Campus Lübeck, Germany
272
Factors driving provider adoption of the TREWS machine learning-based early warning system and its effects on sepsis treatment timing. Nat Med 2022; 28:1447-1454. [PMID: 35864251 DOI: 10.1038/s41591-022-01895-z] [Citation(s) in RCA: 30] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2021] [Accepted: 06/08/2022] [Indexed: 01/04/2023]
Abstract
Machine learning-based clinical decision support tools for sepsis create opportunities to identify at-risk patients and initiate treatments at early time points, which is critical for improving sepsis outcomes. In view of the increasing use of such systems, better understanding of how they are adopted and used by healthcare providers is needed. Here, we analyzed provider interactions with a sepsis early detection tool (Targeted Real-time Early Warning System), which was deployed at five hospitals over a 2-year period. Among 9,805 retrospectively identified sepsis cases, the early detection tool achieved high sensitivity (82% of sepsis cases were identified) and a high rate of adoption: 89% of all alerts by the system were evaluated by a physician or advanced practice provider and 38% of evaluated alerts were confirmed by a provider. Adjusting for patient presentation and severity, patients with sepsis whose alert was confirmed by a provider within 3 h had a 1.85-h (95% CI 1.66-2.00) reduction in median time to first antibiotic order compared to patients with sepsis whose alert was either dismissed, confirmed more than 3 h after the alert or never addressed in the system. Finally, we found that emergency department providers and providers who had previous interactions with an alert were more likely to interact with alerts, as well as to confirm alerts on retrospectively identified patients with sepsis. Beyond efforts to improve the performance of early warning systems, efforts to improve adoption are essential to their clinical impact and should focus on understanding providers' knowledge of, experience with and attitudes toward such systems.
273
Zhao J, Lu Y, Zhu S, Li K, Jiang Q, Yang W. Systematic Bibliometric and Visualized Analysis of Research Hotspots and Trends on the Application of Artificial Intelligence in Ophthalmic Disease Diagnosis. Front Pharmacol 2022; 13:930520. [PMID: 35754490 PMCID: PMC9214201 DOI: 10.3389/fphar.2022.930520] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2022] [Accepted: 05/23/2022] [Indexed: 12/02/2022] Open
Abstract
Background: Artificial intelligence (AI) has been used in the research of ophthalmic disease diagnosis, and it may have an impact on medical and ophthalmic practice in the future. This study explores the general application and research frontier of artificial intelligence in ophthalmic disease detection. Methods: Citation data were downloaded from the Web of Science Core Collection database to evaluate the extent of the application of artificial intelligence in ophthalmic disease diagnosis in publications from 1 January 2012, to 31 December 2021. This information was analyzed using CiteSpace 5.8.R3 and VOSviewer. Results: A total of 1,498 publications from 95 areas were examined, of which the United States was determined to be the most influential country in this research field. The largest cluster, labeled “Brownian motion,” was an active topic from 2007 to 2017, prior to the widespread application of AI to ophthalmic diagnosis. The burst keywords in the period from 2020 to 2021 were system, disease, and model. Conclusion: The focus of artificial intelligence research in ophthalmic disease diagnosis has transitioned from the development of AI algorithms and the analysis of abnormal eye physiological structure to the investigation of more mature ophthalmic disease diagnosis systems. However, there is a need for further studies in ophthalmology and computer engineering.
Affiliation(s)
- Junqiang Zhao
- Department of Nursing, Xinxiang Medical University, Xinxiang, China
- Yi Lu
- Department of Nursing, Xinxiang Medical University, Xinxiang, China
- Shaojun Zhu
- School of Information Engineering, Huzhou University, Huzhou, China
- Keran Li
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Qin Jiang
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Weihua Yang
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
274
Abràmoff MD, Roehrenbeck C, Trujillo S, Goldstein J, Graves AS, Repka MX, Silva Iii EZ. A reimbursement framework for artificial intelligence in healthcare. NPJ Digit Med 2022; 5:72. [PMID: 35681002 PMCID: PMC9184542 DOI: 10.1038/s41746-022-00621-w] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2022] [Accepted: 05/25/2022] [Indexed: 11/09/2022] Open
Affiliation(s)
- Michael D Abràmoff
- Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA, USA; AI Healthcare Coalition, Washington, DC, USA; Digital Diagnostics, Coralville, IA, USA
- Cybil Roehrenbeck
- AI Healthcare Coalition, Washington, DC, USA; Hogan Lovells LLP, Washington, DC, USA
- Michael X Repka
- Wilmer Eye Institute, Johns Hopkins University, Baltimore, MD, USA
- Ezequiel Zeke Silva Iii
- South Texas Radiology, San Antonio, TX, USA; University of Texas Health, Long School of Medicine, San Antonio, TX, USA
275
Gobbi JD, Braga JPR, Lucena MM, Bellanda VCF, Frasson MVS, Ferraz D, Koh V, Jorge R. Efficacy of smartphone-based retinal photography by undergraduate students in screening and early diagnosing diabetic retinopathy. Int J Retina Vitreous 2022; 8:35. [PMID: 35672839 PMCID: PMC9172171 DOI: 10.1186/s40942-022-00388-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Accepted: 05/23/2022] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND To evaluate the efficacy of retinal photography obtained by undergraduate students using a smartphone-based device in screening and early diagnosis of diabetic retinopathy (DR). METHODS We carried out an open prospective study with ninety-nine diabetic patients (194 eyes), who underwent an ophthalmological examination in which undergraduate students registered images of the fundus using a smartphone-based device. On the same occasion, an experienced nurse captured fundus photographs from the same patients using a gold-standard tabletop camera system (Canon CR-2 Digital Non-Mydriatic Retinal Camera) with a 45° field of view. Two distinct masked specialists evaluated both forms of imaging for the presence or absence of signs of DR and its markers of severity. We then compared those reports to assess agreement between the two technologies. RESULTS Concerning the presence or absence of DR, we found an agreement rate of 84.07% between reports obtained from images of the smartphone-based device and from the regular (tabletop) fundus camera; Kappa: 0.67; Sensitivity: 71.0% (Confidence Interval [CI]: 65.05-78.16%); Specificity: 94.06% (CI: 90.63-97.49%); Accuracy: 84.07%; Positive Predictive Value (PPV): 90.62%; Negative Predictive Value (NPV): 80.51%. As for the classification between proliferative and non-proliferative diabetic retinopathy, we found an agreement of 90.00% between the reports; Kappa: 0.78; Sensitivity: 86.96% (CI: 79.07-94.85%); Specificity: 91.49% (CI: 84.95-98.03%); Accuracy: 90.00%; PPV: 83.33%; NPV: 93.48%. Regarding the degree of classification of DR, we found an agreement rate of 69.23% between the reports; Kappa: 0.52. As for the presence or absence of hard macular exudates, we found an agreement of 84.07% between the reports; Kappa: 0.67; Sensitivity: 71.60% (CI: 65.05-78.16%); Specificity: 94.06% (CI: 90.63-97.49%); Accuracy: 84.07%; PPV: 90.62%; NPV: 80.51%.
CONCLUSION The smartphone-based device showed promising accuracy in the detection of DR (84.07%), making it a potential tool in the screening and early diagnosis of DR.
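The agreement statistics reported above (sensitivity, specificity, PPV, NPV, accuracy, and Cohen's kappa) all derive from a single 2x2 table comparing the smartphone reads against the tabletop reference. A minimal sketch in Python, using hypothetical counts rather than the study's raw data:

```python
def agreement_metrics(tp, fp, fn, tn):
    """Screening-agreement metrics from a 2x2 table
    (index test vs. reference standard)."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    acc = (tp + tn) / n
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_o = acc
    p_e = ((tp + fp) / n) * ((tp + fn) / n) + ((fn + tn) / n) * ((fp + tn) / n)
    kappa = (p_o - p_e) / (1 - p_e)
    return {"sensitivity": sens, "specificity": spec,
            "ppv": ppv, "npv": npv, "accuracy": acc, "kappa": kappa}

# Hypothetical 2x2 counts for illustration (not the study's raw data)
print(agreement_metrics(tp=58, fp=6, fn=23, tn=107))
```

Because kappa discounts the agreement expected by chance, it is always lower than the raw percent agreement, which is why a reported kappa of 0.67 accompanies an 84.07% agreement rate.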
Affiliation(s)
- Jéssica Deponti Gobbi
- Division of Ophthalmology, Ribeirão Preto Medical School, University of São Paulo, 3900, Bandeirantes Ave, Ribeirão Preto, SP, 14049-900, Brazil
- João Pedro Romero Braga
- Division of Ophthalmology, Ribeirão Preto Medical School, University of São Paulo, 3900, Bandeirantes Ave, Ribeirão Preto, SP, 14049-900, Brazil
- Moises M Lucena
- Division of Ophthalmology, Ribeirão Preto Medical School, University of São Paulo, 3900, Bandeirantes Ave, Ribeirão Preto, SP, 14049-900, Brazil
- Victor C F Bellanda
- Division of Ophthalmology, Ribeirão Preto Medical School, University of São Paulo, 3900, Bandeirantes Ave, Ribeirão Preto, SP, 14049-900, Brazil
- Miguel V S Frasson
- Department of Applied Mathematics and Statistics, University of São Paulo, São Carlos, Brazil
- Daniel Ferraz
- Federal University of São Paulo; D'Or Institute of Teaching and Research, São Paulo, Brazil
- Victor Koh
- Department of Ophthalmology, National University Hospital, Singapore, Singapore
- Rodrigo Jorge
- Division of Ophthalmology, Ribeirão Preto Medical School, University of São Paulo, 3900, Bandeirantes Ave, Ribeirão Preto, SP, 14049-900, Brazil
276
Ramesh PV, Devadas AK, Ray P, Ramesh SV, Joshua T, Priyan V, Ramesh MK, Rajasekaran R. Under lock and key: Incorporation of blockchain technology in the field of ophthalmic artificial intelligence for big data management - A perfect match? Indian J Ophthalmol 2022; 70:2188-2190. [PMID: 35648013 PMCID: PMC9359239 DOI: 10.4103/ijo.ijo_143_22]
Abstract
Big data has been a game changer for machine learning, but it is largely centralized, available and accessible only to the technology giants. One way to decentralize this data and make machine learning accessible to smaller organizations is blockchain technology. This peer-to-peer network creates a common database accessible to those in the network. Furthermore, blockchain helps secure digital data and prevents tampering arising from human interactions. The technology keeps a constant record of each document's creation, editing, and other changes, and makes this information accessible to all participants. It is a chain of data distributed across many computers, with a database containing details of each transaction; this record supports data security and prevents unauthorized modification. Blockchain can also help assemble big data from multiple sources of small data, paving the way for well-performing artificial intelligence models. In this manuscript, we discuss the usage of blockchain, its current role in machine learning, and the challenges it faces.
Affiliation(s)
- Prasanna Venkatesh Ramesh
- Medical Officer, Department of Glaucoma and Research, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- Aji Kunnath Devadas
- Consultant Optometrist, Department of Optometry and Visual Science, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- Prajnya Ray
- Consultant Optometrist, Department of Optometry and Visual Science, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- Shruthy Vaishali Ramesh
- Medical Officer, Department of Cataract and Refractive Surgery, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- Tensingh Joshua
- Head of the Department, Mahathma Centre of Moving Images, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- Vinoth Priyan
- iOS Engineer, Mahathma Centre of Moving Images, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- Meena Kumari Ramesh
- Head of the Department of Cataract and Refractive Surgery, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- Ramesh Rajasekaran
- Chief Medical Officer, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
277
Aujih AB, Shapiai MI, Meriaudeau F, Tang TB. EDR-Net: Lightweight Deep Neural Network Architecture for Detecting Referable Diabetic Retinopathy. IEEE Trans Biomed Circuits Syst 2022; 16:467-478. [PMID: 35700260 DOI: 10.1109/tbcas.2022.3182907]
Abstract
The present convolutional neural network architecture for diabetic retinopathy (DR-Net) is based on normal convolution (NC). It incurs a high computational cost because NC uses multiplicative weights that measure a combined correlation across both the cross-channel and spatial dimensions of a layer's inputs. This can leave the overall DR-Net architecture over-parameterised and computationally inefficient. This paper proposes EDR-Net, a new end-to-end DR-Net architecture with a depth-wise separable convolution module. The EDR-Net architecture was trained with the DRKaggle-train dataset (35,126 images) and tested on two datasets, i.e. DRKaggle-test (53,576 images) and Messidor-2 (1,748 images). Results showed that the proposed EDR-Net achieved predictive performance comparable with the current state of the art in detecting referable diabetic retinopathy (rDR) from fundus images and outperformed other lightweight architectures with at least two times lower computation cost. This makes it more amenable to mobile-device-based computer-assisted rDR screening applications.
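The computational saving that depth-wise separable convolution offers can be seen from parameter counts alone. The sketch below compares a standard convolution with a separable one (a depthwise k x k stage followed by a pointwise 1x1 stage); the layer sizes are illustrative, not taken from the paper:

```python
def normal_conv_params(c_in, c_out, k):
    # Standard convolution: every output filter spans all input channels
    return c_out * c_in * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise stage (one k x k filter per input channel)
    # followed by a pointwise 1x1 convolution mixing channels
    return c_in * k * k + c_in * c_out

# Example layer: 64 -> 128 channels with 3x3 kernels (illustrative sizes)
nc = normal_conv_params(64, 128, 3)
ds = depthwise_separable_params(64, 128, 3)
print(nc, ds, round(nc / ds, 1))  # separable uses ~8.4x fewer weights here
```

The same factoring reduces multiply-accumulate operations roughly proportionally, which is the source of EDR-Net's lower computation cost.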
278
Peng T, Wu Y, Qin J, Wu QJ, Cai J. H-ProSeg: Hybrid ultrasound prostate segmentation based on explainability-guided mathematical model. Comput Methods Programs Biomed 2022; 219:106752. [PMID: 35338887 DOI: 10.1016/j.cmpb.2022.106752]
Abstract
BACKGROUND AND OBJECTIVE Accurate and robust prostate segmentation in transrectal ultrasound (TRUS) images is of great interest for image-guided prostate interventions and prostate cancer diagnosis. However, it remains a challenging task for various reasons, including a missing or ambiguous boundary between the prostate and surrounding tissues, the presence of shadow artifacts, intra-prostate intensity heterogeneity, and anatomical variations. METHODS Here, we present a hybrid method for prostate segmentation (H-ProSeg) in TRUS images, using a small number of radiologist-defined seed points as the prior points. This method consists of three subnetworks. The first subnetwork uses an improved principal-curve-based model to obtain data sequences consisting of seed points and their corresponding projection indices. The second subnetwork uses an improved differential-evolution-based artificial neural network for training to decrease the model error. The third subnetwork uses the parameters of the artificial neural network to derive a smooth mathematical description of the prostate contour. The performance of the H-ProSeg method was assessed in 55 brachytherapy patients using the Dice similarity coefficient (DSC), Jaccard similarity coefficient (Ω), and accuracy (ACC). RESULTS The H-ProSeg method achieved excellent segmentation accuracy, with DSC, Ω, and ACC values of 95.8%, 94.3%, and 95.4%, respectively. Even under heavy Gaussian noise (standard deviation of the Gaussian function, σ = 50), the DSC, Ω, and ACC values remained as high as 93.3%, 91.9%, and 93%, respectively. As σ increased from 10 to 50, the DSC, Ω, and ACC values fluctuated by at most approximately 2.5%, demonstrating the excellent robustness of our method. CONCLUSIONS Here, we present a hybrid method for accurate and robust prostate ultrasound image segmentation.
The H-ProSeg method achieved superior performance compared with current state-of-the-art techniques. The knowledge of precise boundaries of the prostate is crucial for the conservation of risk structures. The proposed models have the potential to improve prostate cancer diagnosis and therapeutic outcomes.
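The DSC and Jaccard (Ω) values cited above are standard overlap measures between a predicted mask and a reference mask. A minimal sketch on toy binary masks (not the study's data):

```python
def dice_jaccard(pred, truth):
    """Dice similarity coefficient and Jaccard index for two
    binary masks given as flat sequences of 0/1 values."""
    inter = sum(p and t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    dsc = 2 * inter / (p_sum + t_sum)
    jac = inter / (p_sum + t_sum - inter)
    return dsc, jac

# Toy flattened masks for illustration
pred  = [1, 1, 1, 0, 0, 0]
truth = [1, 1, 0, 0, 0, 0]
dsc, jac = dice_jaccard(pred, truth)
print(dsc, jac)
```

The two measures are monotonically related (DSC = 2J / (1 + J)), so Dice is always at least as large as Jaccard, consistent with the paper's 95.8% vs. 94.3% figures.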
Affiliation(s)
- Tao Peng
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Yiyun Wu
- Department of Medical Technology, Jiangsu Province Hospital, Nanjing, Jiangsu, China
- Jing Qin
- Department of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
- Qingrong Jackie Wu
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
279
Automation of diabetic retinopathy grading: advancements and cost analysis. Eye (Lond) 2022; 36:1336. [PMID: 34211134 PMCID: PMC9151741 DOI: 10.1038/s41433-021-01666-z]
280
Emerging Ethical Considerations for the Use of Artificial Intelligence in Ophthalmology. Ophthalmol Sci 2022; 2:100141. [PMID: 36249707 PMCID: PMC9560632 DOI: 10.1016/j.xops.2022.100141]
281
Zafar S, Mahjoub H, Mehta N, Domalpally A, Channa R. Artificial Intelligence Algorithms in Diabetic Retinopathy Screening. Curr Diab Rep 2022; 22:267-274. [PMID: 35438458 DOI: 10.1007/s11892-022-01467-y]
Abstract
PURPOSE OF REVIEW In this review, we focus on artificial intelligence (AI) algorithms for diabetic retinopathy (DR) screening and risk stratification and factors to consider when implementing AI algorithms in the clinic. RECENT FINDINGS AI algorithms have been adopted, and have received regulatory approval, for automated detection of referable DR with clinically acceptable diagnostic performance. While these metrics are an important first step, performance metrics that go beyond measures of technical accuracy are needed to fully evaluate the impact of AI algorithm on patient outcomes. Recent advances in AI present an exciting opportunity to improve patient care. Using DR as an example, we have reviewed factors to consider in the implementation of AI algorithms in real-world clinical practice. These include real-world evaluation of safety, efficacy, and equity (bias); impact on patient outcomes; ethical, logistical, and regulatory factors.
Affiliation(s)
- Sidra Zafar
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Johns Hopkins Hospital, Baltimore, MD, USA
- Heba Mahjoub
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Johns Hopkins Hospital, Baltimore, MD, USA
- Nitish Mehta
- Department of Ophthalmology, New York University School of Medicine, New York, NY, USA
- Amitha Domalpally
- Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, WI, USA
- Roomasa Channa
- Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, WI, USA
282
Wilson BS, Tucci DL, Moses DA, Chang EF, Young NM, Zeng FG, Lesica NA, Bur AM, Kavookjian H, Mussatto C, Penn J, Goodwin S, Kraft S, Wang G, Cohen JM, Ginsburg GS, Dawson G, Francis HW. Harnessing the Power of Artificial Intelligence in Otolaryngology and the Communication Sciences. J Assoc Res Otolaryngol 2022; 23:319-349. [PMID: 35441936 PMCID: PMC9086071 DOI: 10.1007/s10162-022-00846-2]
Abstract
Use of artificial intelligence (AI) is a burgeoning field in otolaryngology and the communication sciences. A virtual symposium on the topic was convened from Duke University on October 26, 2020, and was attended by more than 170 participants worldwide. This review presents summaries of all but one of the talks presented during the symposium; recordings of all the talks, along with the discussions for the talks, are available at https://www.youtube.com/watch?v=ktfewrXvEFg and https://www.youtube.com/watch?v=-gQ5qX2v3rg . Each of the summaries is about 2500 words in length and each summary includes two figures. This level of detail far exceeds the brief summaries presented in traditional reviews and thus provides a more-informed glimpse into the power and diversity of current AI applications in otolaryngology and the communication sciences and how to harness that power for future applications.
Affiliation(s)
- Blake S. Wilson
- Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, NC 27710 USA
- Duke Hearing Center, Duke University School of Medicine, Durham, NC 27710 USA
- Department of Electrical & Computer Engineering, Duke University, Durham, NC 27708 USA
- Department of Biomedical Engineering, Duke University, Durham, NC 27708 USA
- Department of Otolaryngology - Head & Neck Surgery, University of North Carolina, Chapel Hill, Chapel Hill, NC 27599 USA
- Debara L. Tucci
- Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, NC 27710 USA
- National Institute On Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD 20892 USA
- David A. Moses
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA 94143 USA
- UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA 94117 USA
- Edward F. Chang
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA 94143 USA
- UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA 94117 USA
- Nancy M. Young
- Division of Otolaryngology, Ann and Robert H. Lurie Children's Hospital of Chicago, Chicago, IL 60611 USA
- Department of Otolaryngology - Head and Neck Surgery, Northwestern University Feinberg School of Medicine, Chicago, IL 60611 USA
- Department of Communication, Knowles Hearing Center, Northwestern University, Evanston, IL 60208 USA
- Fan-Gang Zeng
- Center for Hearing Research, University of California, Irvine, Irvine, CA 92697 USA
- Department of Anatomy and Neurobiology, University of California, Irvine, Irvine, CA 92697 USA
- Department of Biomedical Engineering, University of California, Irvine, Irvine, CA 92697 USA
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA 92697 USA
- Department of Otolaryngology - Head and Neck Surgery, University of California, Irvine, CA 92697 USA
- Andrés M. Bur
- Department of Otolaryngology - Head and Neck Surgery, Medical Center, University of Kansas, Kansas City, KS 66160 USA
- Hannah Kavookjian
- Department of Otolaryngology - Head and Neck Surgery, Medical Center, University of Kansas, Kansas City, KS 66160 USA
- Caroline Mussatto
- Department of Otolaryngology - Head and Neck Surgery, Medical Center, University of Kansas, Kansas City, KS 66160 USA
- Joseph Penn
- Department of Otolaryngology - Head and Neck Surgery, Medical Center, University of Kansas, Kansas City, KS 66160 USA
- Sara Goodwin
- Department of Otolaryngology - Head and Neck Surgery, Medical Center, University of Kansas, Kansas City, KS 66160 USA
- Shannon Kraft
- Department of Otolaryngology - Head and Neck Surgery, Medical Center, University of Kansas, Kansas City, KS 66160 USA
- Guanghui Wang
- Department of Computer Science, Ryerson University, Toronto, ON M5B 2K3 Canada
- Jonathan M. Cohen
- Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, NC 27710 USA
- ENT Department, Kaplan Medical Center, 7661041 Rehovot, Israel
- Geoffrey S. Ginsburg
- Department of Biomedical Engineering, Duke University, Durham, NC 27708 USA
- MEDx (Medicine & Engineering at Duke), Duke University, Durham, NC 27708 USA
- Center for Applied Genomics & Precision Medicine, Duke University School of Medicine, Durham, NC 27710 USA
- Department of Medicine, Duke University School of Medicine, Durham, NC 27710 USA
- Department of Pathology, Duke University School of Medicine, Durham, NC 27710 USA
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC 27710 USA
- Geraldine Dawson
- Duke Institute for Brain Sciences, Duke University, Durham, NC 27710 USA
- Duke Center for Autism and Brain Development, Duke University School of Medicine and the Duke Institute for Brain Sciences, NIH Autism Center of Excellence, Durham, NC 27705 USA
- Department of Psychiatry and Behavioral Sciences, Duke University School of Medicine, Durham, NC 27701 USA
- Howard W. Francis
- Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, NC 27710 USA
283
Murray NM, Phan P, Hager G, Menard A, Chin D, Liu A, Hui FK. Insurance payment for artificial intelligence technology: Methods used by a stroke artificial intelligence system and strategies to qualify for the new technology add-on payment. Neuroradiol J 2022; 35:284-289. [PMID: 34991404 PMCID: PMC9244751 DOI: 10.1177/19714009211067408]
Abstract
The first-ever insurance reimbursement for an artificial intelligence (AI) system, which expedites triage of acute stroke, occurred in 2020 when the Centers for Medicare and Medicaid Services (CMS) granted approval for a New Technology Add-on Payment (NTAP). Key aspects of the AI system that led to its approval by the CMS included its unique mechanism of action, its use of robotic process automation, and the clear linkage of the system's output to clinical outcomes. The strategies employed represent a first case of proving reimbursable value for improved stroke outcomes using AI. Given the rapid change in the utilization of AI technology in stroke care, we describe the economic drivers of stroke AI systems in healthcare, focusing on concepts of reimbursement for the value added by AI to the stroke care system. This report reviews (1) the successful approach used by the first NTAP-approved AI system, (2) economic variables in insurance reimbursement for AI, and (3) resultant strategies that may be utilized to facilitate qualification for NTAP reimbursement, which may be adopted by other AI systems used in stroke care.
Affiliation(s)
- Nick M Murray
- Department of Neurology, Intermountain Medical Center, Murray, UT, USA
- Phillip Phan
- Carey Business School, Johns Hopkins University, Baltimore, MD, USA
- Department of Medicine, The Johns Hopkins Hospital, Baltimore, MD, USA
- Greg Hager
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Andrew Menard
- Department of Radiology, The Johns Hopkins Hospital, Baltimore, MD, USA
- David Chin
- Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, USA
- Alvin Liu
- Department of Ophthalmology, The Johns Hopkins Hospital, Baltimore, MD, USA
- Ferdinand K Hui
- Department of Radiology, The Johns Hopkins Hospital, Baltimore, MD, USA
284
Tang J, Yuan M, Tian K, Wang Y, Wang D, Yang J, Yang Z, He X, Luo Y, Li Y, Xu J, Li X, Ding D, Ren Y, Chen Y, Sadda SR, Yu W. An Artificial-Intelligence-Based Automated Grading and Lesions Segmentation System for Myopic Maculopathy Based on Color Fundus Photographs. Transl Vis Sci Technol 2022; 11:16. [PMID: 35704327 PMCID: PMC9206390 DOI: 10.1167/tvst.11.6.16]
Abstract
Purpose To develop deep learning models based on color fundus photographs that can automatically grade myopic maculopathy, diagnose pathologic myopia, and identify and segment myopia-related lesions. Methods Photographs were graded and annotated by four ophthalmologists and were then divided into a high-consistency subgroup or a low-consistency subgroup according to the consistency between the results of the graders. The ResNet-50 network was used to develop the classification model, and the DeepLabv3+ network was used to develop the segmentation model for lesion identification. The two models were then combined to develop the classification-and-segmentation-based co-decision model. Results This study included 1395 color fundus photographs from 895 patients. The grading accuracy of the co-decision model was 0.9370, and the quadratic-weighted κ coefficient was 0.9651; the co-decision model achieved an area under the receiver operating characteristic curve of 0.9980 in diagnosing pathologic myopia. The photograph-level F1 values of the segmentation model identifying optic disc, peripapillary atrophy, diffuse atrophy, patchy atrophy, and macular atrophy were all >0.95; the pixel-level F1 values for segmenting optic disc and peripapillary atrophy were both >0.9; the pixel-level F1 values for segmenting diffuse atrophy, patchy atrophy, and macular atrophy were all >0.8; and the photograph-level recall/sensitivity for detecting lacquer cracks was 0.9230. Conclusions The models could accurately and automatically grade myopic maculopathy, diagnose pathologic myopia, and identify and monitor progression of the lesions. Translational Relevance The models can potentially help with the diagnosis, screening, and follow-up of pathologic myopia in clinical practice.
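The quadratic-weighted κ reported above penalizes disagreements by the squared distance between ordinal grades, so a grade off by two categories costs four times as much as a grade off by one. A small self-contained sketch with hypothetical grades (not the study's data):

```python
def quadratic_weighted_kappa(a, b, n_classes):
    """Quadratic-weighted Cohen's kappa for two raters' ordinal grades
    (categories 0 .. n_classes-1); disagreement cost grows as (i - j)^2."""
    n = len(a)
    # Observed joint distribution of grade pairs
    obs = [[0.0] * n_classes for _ in range(n_classes)]
    for i, j in zip(a, b):
        obs[i][j] += 1 / n
    # Marginal grade distributions (expected agreement under independence)
    hist_a = [a.count(k) / n for k in range(n_classes)]
    hist_b = [b.count(k) / n for k in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2
            num += w * obs[i][j]
            den += w * hist_a[i] * hist_b[j]
    return 1 - num / den

# Hypothetical maculopathy grades: model output vs. reference grader
model_grades = [0, 1, 2, 2, 3, 1, 0]
truth_grades = [0, 1, 2, 3, 3, 1, 0]
print(quadratic_weighted_kappa(model_grades, truth_grades, 4))
```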
Affiliation(s)
- Jia Tang
- Department of Ophthalmology, Peking Union Medical College Hospital, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, China
- Mingzhen Yuan
- Department of Ophthalmology, Peking Union Medical College Hospital, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, China
- Kaibin Tian
- AI and Media Computing Lab, School of Information, Renmin University of China, Beijing, China
- Yuelin Wang
- Department of Ophthalmology, Peking Union Medical College Hospital, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, China
- Dongyue Wang
- Department of Ophthalmology, Peking Union Medical College Hospital, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, China
- Jingyuan Yang
- Department of Ophthalmology, Peking Union Medical College Hospital, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, China
- Zhikun Yang
- Department of Ophthalmology, Peking Union Medical College Hospital, Beijing, China
- Xixi He
- Vistel AI Lab, Visionary Intelligence, Beijing, China
- Yan Luo
- Department of Ophthalmology, Peking Union Medical College Hospital, Beijing, China
- Ying Li
- Department of Ophthalmology, Peking Union Medical College Hospital, Beijing, China
- Jie Xu
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Science Key Laboratory, Beijing, China
- Xirong Li
- AI and Media Computing Lab, School of Information, Renmin University of China, Beijing, China; Key Laboratory of Data Engineering and Knowledge Engineering, Renmin University of China, Beijing, China
- Dayong Ding
- Vistel AI Lab, Visionary Intelligence, Beijing, China
- Yanhan Ren
- Chicago Medical School, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Youxin Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, China
- Srinivas R Sadda
- Doheny Eye Institute, Los Angeles, CA, USA; Department of Ophthalmology, University of California, Los Angeles, Los Angeles, CA, USA
- Weihong Yu
- Department of Ophthalmology, Peking Union Medical College Hospital, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, China
285
Campagner A, Sternini F, Cabitza F. Decisions are not all equal-Introducing a utility metric based on case-wise raters' perceptions. Comput Methods Programs Biomed 2022; 221:106930. [PMID: 35690505 DOI: 10.1016/j.cmpb.2022.106930]
Abstract
Background and Objective Evaluation of AI-based decision support systems (AI-DSS) is of critical importance in practical applications; nonetheless, common evaluation metrics fail to properly consider relevant and contextual information. In this article we discuss a novel utility metric, the weighted Utility (wU), for the evaluation of AI-DSS, which is based on the raters' perceptions of their annotation hesitation and of the relevance of the training cases. Methods We discuss the relationship between the proposed metric and previous proposals, and we describe the application of the proposed metric to both model evaluation and optimization through three realistic case studies. Results We show that our metric generalizes the well-known Net Benefit, as well as other common error-based and utility-based metrics. Through the empirical studies, we show that our metric can provide a more flexible tool for the evaluation of AI models. We also show that, compared to other optimization metrics, model optimization based on the wU can provide significantly better performance (AUC 0.862 vs 0.895, p-value <0.05), especially on cases judged to be more complex by the human annotators (AUC 0.85 vs 0.92, p-value <0.05). Conclusions We argue that utility should be a primary concern in the evaluation and optimization of machine learning models in critical domains, such as medicine, and that a human-centred approach is needed to assess the potential impact of AI models on human decision making, drawing also on further information that can be collected during the ground-truthing process.
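For context, the classical decision-curve Net Benefit that the authors' wU generalizes fits in a few lines. This sketch shows the standard formulation only, not the authors' weighted variant, and the counts are hypothetical:

```python
def net_benefit(tp, fp, n, p_t):
    """Decision-curve Net Benefit at threshold probability p_t:
    the rate of true positives minus the rate of false positives
    discounted by the odds p_t / (1 - p_t) implied by the threshold."""
    return tp / n - (fp / n) * (p_t / (1 - p_t))

# Hypothetical screening results on 1000 cases at a 10% treatment threshold
print(net_benefit(tp=80, fp=100, n=1000, p_t=0.1))
```

The threshold odds encode how a rater trades a false positive against a true positive; case-wise utility metrics such as wU replace this single global weight with per-case weights elicited from the raters.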
Affiliation(s)
- Andrea Campagner
- Dipartimento di Informatica, Sistemistica e Comunicazione, Università di Milano-Bicocca, Milano, Italy
- Federico Sternini
- Polito(BIO)Med Lab, Politecnico di Torino, Torino, Italy; USE-ME-D srl, I3P Politecnico di Torino, Torino, Italy
- Federico Cabitza
- Dipartimento di Informatica, Sistemistica e Comunicazione, Università di Milano-Bicocca, Milano, Italy; IRCCS Istituto Ortopedico Galeazzi, Milan, Italy
286
Lan X, Yu H, Cui L. Application of Telemedicine in COVID-19: A Bibliometric Analysis. Front Public Health 2022; 10:908756. [PMID: 35719666 PMCID: PMC9199898 DOI: 10.3389/fpubh.2022.908756]
Abstract
Background: Telemedicine, a tool that can reduce potential disease spread and fill gaps in healthcare, has been increasingly applied during the COVID-19 pandemic. Many studies have summarized telemedicine technologies or their disease applications, but these topics have been reviewed separately; a comprehensive overview of telemedicine technologies, application areas, and medical service types is lacking. Objective: We aimed to investigate the research directions of telemedicine during COVID-19 and to clarify which telemedicine technologies are used for which diseases and which medical services telemedicine provides. Methods: Publications addressing telemedicine during COVID-19 were retrieved from the PubMed database. We used Bicomb and gCLUTO to extract bibliographic information and perform a bi-clustering analysis. Co-occurrence networks of diseases, technologies, and healthcare services were then constructed and visualized using RStudio and Gephi. Results: We retrieved 5,224 research papers on telemedicine during COVID-19, distributed among 1,460 journals. Most articles were published in the Journal of Medical Internet Research (166/5,224, 3.18%). The United States published the most articles on telemedicine. The analysis yielded six research clusters, covering topics such as mental health, mHealth, cross-infection control, and self-management of diseases. The network analysis revealed a triple relation among diseases, technologies, and healthcare services, with 303 nodes and 5,664 edges. The entity "delivery of health care" was the node with the highest betweenness centrality at 6,787.79, followed by "remote consultation" (4,395.76) and "infection control" (3,700.50). Conclusions: The results of this study highlight the widespread use of telemedicine during COVID-19. Most studies relate to the delivery of health care and mental health services. Technologies were primarily mobile-device based and were used to deliver health care, provide remote consultations, control infection, and trace contacts.
The study helps researchers comprehend the knowledge structure of this field, enabling them to identify critical topics and choose the best fit for their survey work.
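For readers unfamiliar with the network measure cited above: betweenness centrality counts how often a node lies on shortest paths between other nodes, so a high-scoring term bridges otherwise separate topic clusters. A minimal, self-contained sketch on a toy co-occurrence graph (illustrative edges only, not the study's 303-node network):

```python
from collections import deque
from itertools import permutations

# Toy term co-occurrence graph (illustrative edges, not the study's data).
edges = [
    ("delivery of health care", "remote consultation"),
    ("delivery of health care", "infection control"),
    ("remote consultation", "mental health"),
    ("infection control", "contact tracing"),
    ("delivery of health care", "mhealth"),
]
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def shortest_paths(src, dst):
    """Enumerate all shortest paths from src to dst (breadth-first search)."""
    paths, best = [], None
    queue = deque([[src]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            continue  # longer than a known shortest path
        if path[-1] == dst:
            best = len(path)
            paths.append(path)
            continue
        for nxt in adj[path[-1]]:
            if nxt not in path:
                queue.append(path + [nxt])
    return paths

# Betweenness: credit each interior node with its share of shortest paths.
bc = {n: 0.0 for n in adj}
for s, t in permutations(adj, 2):
    sps = shortest_paths(s, t)
    for p in sps:
        for mid in p[1:-1]:
            bc[mid] += 1 / len(sps)

print(max(bc, key=bc.get))  # -> delivery of health care
```

In the study's full network the same ranking logic (computed via Gephi) puts "delivery of health care" first; in this toy graph the hub plays that role by construction.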
Collapse
|
287
|
Quigley HA. Identifying Glaucoma in Primary Care Offices. JAMA Ophthalmol 2022; 140:663-664. [PMID: 35608852 DOI: 10.1001/jamaophthalmol.2022.1608] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Affiliation(s)
- Harry A Quigley
- Wilmer Institute, Johns Hopkins School of Medicine, Baltimore, Maryland
| |
Collapse
|
288
|
Parikh RB, Helmchen LA. Paying for artificial intelligence in medicine. NPJ Digit Med 2022; 5:63. [PMID: 35595986 PMCID: PMC9123184 DOI: 10.1038/s41746-022-00609-6] [Citation(s) in RCA: 33] [Impact Index Per Article: 16.5] [Reference Citation Analysis] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2021] [Accepted: 04/27/2022] [Indexed: 11/24/2022] Open
Affiliation(s)
- Ravi B Parikh
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA.
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, PA, USA.
| | - Lorens A Helmchen
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, PA, USA
- Milken Institute School of Public Health, The George Washington University, Washington, DC, USA
| |
Collapse
|
289
|
Kaskar OG, Wells-Gray E, Fleischman D, Grace L. Evaluating machine learning classifiers for glaucoma referral decision support in primary care settings. Sci Rep 2022; 12:8518. [PMID: 35595794 PMCID: PMC9122936 DOI: 10.1038/s41598-022-12270-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2021] [Accepted: 04/18/2022] [Indexed: 11/09/2022] Open
Abstract
Several artificial intelligence algorithms have been proposed to help diagnose glaucoma by analyzing the functional and/or structural changes in the eye. These algorithms require carefully curated datasets with access to ocular images. In the current study, we have modeled and evaluated classifiers to predict self-reported glaucoma using a single, easily obtained ocular feature (intraocular pressure (IOP)) and non-ocular features (age, gender, race, body mass index, systolic and diastolic blood pressure, and comorbidities). The classifiers were trained on publicly available data of 3015 subjects without a glaucoma diagnosis at the time of enrollment. 337 subjects subsequently self-reported a glaucoma diagnosis in a span of 1–12 years after enrollment. The classifiers were evaluated on the ability to identify these subjects by only using their features recorded at the time of enrollment. Support vector machine, logistic regression, and adaptive boosting performed similarly on the dataset with F1 scores of 0.31, 0.30, and 0.28, respectively. Logistic regression had the highest sensitivity at 60% with a specificity of 69%. Predictive classifiers using primarily non-ocular features have the potential to be used for identifying suspected glaucoma in non-eye care settings, including primary care. Further research into finding additional features that improve the performance of predictive classifiers is warranted.
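The F1 scores, sensitivity, and specificity quoted above are all functions of the same four confusion-matrix counts. A small helper showing those relationships (generic formulas; the counts below are made up for illustration, not taken from the study):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                # recall / true-positive rate
    specificity = tn / (tn + fp)                # true-negative rate
    precision = tp / (tp + fp)                  # positive predictive value
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "f1": f1}

# Illustrative counts only (not the study's data).
m = diagnostic_metrics(tp=60, fp=40, fn=40, tn=90)
print(round(m["sensitivity"], 2), round(m["specificity"], 2), round(m["f1"], 2))
```

A modest F1 alongside reasonable sensitivity and specificity, as reported in the abstract, typically reflects low precision when true positives are rare relative to false positives.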
Collapse
Affiliation(s)
- Omkar G Kaskar
- North Carolina State University, Raleigh, NC, 27695, USA
| | | | - David Fleischman
- University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
| | - Landon Grace
- North Carolina State University, Raleigh, NC, 27695, USA.
| |
Collapse
|
290
|
London AJ. Artificial intelligence in medicine: Overcoming or recapitulating structural challenges to improving patient care? Cell Rep Med 2022; 3:100622. [PMID: 35584620 PMCID: PMC9133460 DOI: 10.1016/j.xcrm.2022.100622] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2021] [Revised: 02/10/2022] [Accepted: 04/06/2022] [Indexed: 01/09/2023]
Abstract
There is considerable enthusiasm about the prospect that artificial intelligence (AI) will help to improve the safety and efficacy of health services and the efficiency of health systems. To realize this potential, however, AI systems will have to overcome structural problems in the culture and practice of medicine and the organization of health systems that impact the data from which AI models are built, the environments into which they will be deployed, and the practices and incentives that structure their development. This perspective elaborates on some of these structural challenges and provides recommendations to address potential shortcomings.
Collapse
Affiliation(s)
- Alex John London
- Department of Philosophy and Center for Ethics and Policy, Carnegie Mellon University, Pittsburgh, PA 15228, USA.
| |
Collapse
|
291
|
Zhang WF, Li DH, Wei QJ, Ding DY, Meng LH, Wang YL, Zhao XY, Chen YX. The Validation of Deep Learning-Based Grading Model for Diabetic Retinopathy. Front Med (Lausanne) 2022; 9:839088. [PMID: 35652075 PMCID: PMC9148973 DOI: 10.3389/fmed.2022.839088] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2021] [Accepted: 04/08/2022] [Indexed: 12/26/2022] Open
Abstract
Purpose To evaluate the performance of a deep learning (DL)-based artificial intelligence (AI) hierarchical diagnosis software, EyeWisdom V1, for diabetic retinopathy (DR). Materials and Methods This prospective study was a multicenter, double-blind, self-controlled clinical trial. Non-dilated posterior pole fundus images were evaluated by ophthalmologists and by EyeWisdom V1. Manual grading was considered the gold standard. Primary evaluation indices (sensitivity and specificity) and secondary indices such as positive predictive value (PPV) and negative predictive value (NPV) were calculated to evaluate the performance of EyeWisdom V1. Results A total of 1,089 fundus images from 630 patients were included, with a mean age of 56.52 ± 11.13 years. For any DR, the sensitivity, specificity, PPV, and NPV were 98.23% (95% CI 96.93-99.08%), 74.45% (95% CI 69.95-78.60%), 86.38% (95% CI 83.76-88.72%), and 96.23% (95% CI 93.50-98.04%), respectively. For sight-threatening DR (STDR, severe non-proliferative DR or worse), the corresponding values were 80.47% (95% CI 75.07-85.14%), 97.96% (95% CI 96.75-98.81%), 92.38% (95% CI 88.07-95.50%), and 94.23% (95% CI 92.46-95.68%). For referral DR (moderate non-proliferative DR or worse), the sensitivity and specificity were 92.96% (95% CI 90.66-94.84%) and 93.32% (95% CI 90.65-95.42%), with a PPV of 94.93% (95% CI 92.89-96.53%) and an NPV of 90.78% (95% CI 87.81-93.22%). The kappa score of EyeWisdom V1 was 0.860 (0.827-0.890), with an AUC of 0.958 for referral DR. Conclusion EyeWisdom V1 could provide reliable DR grading and referral recommendations based on the fundus images of diabetic patients.
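PPV and NPV, unlike sensitivity and specificity, depend on disease prevalence in the tested sample, which is why they are reported separately above. A sketch of that dependence via Bayes' rule (the prevalence below is an illustrative assumption, not a figure from the study):

```python
def ppv_npv(sensitivity, specificity, prevalence):
    """Predictive values from test accuracy and prevalence (Bayes' rule)."""
    p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    ppv = sensitivity * prevalence / p_pos
    npv = specificity * (1 - prevalence) / (1 - p_pos)
    return ppv, npv

# Illustrative: sensitivity/specificity near the any-DR figures above,
# with an assumed 60% prevalence of DR among the tested images.
ppv, npv = ppv_npv(sensitivity=0.98, specificity=0.74, prevalence=0.6)
print(round(ppv, 2), round(npv, 2))  # -> 0.85 0.96
```

Had prevalence been 10% instead, the same sensitivity and specificity would give a far lower PPV, which is the usual caveat when moving a screening tool to a lower-prevalence population.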
Collapse
Affiliation(s)
- Wen-fei Zhang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
| | | | - Qi-jie Wei
- Visionary Intelligence Ltd., Beijing, China
| | | | - Li-hui Meng
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
| | - Yue-lin Wang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
| | - Xin-yu Zhao
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
| | - You-xin Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
| |
Collapse
|
292
|
Wolf RM, Abramoff MD, Channa R, Tava C, Clarida W, Lehmann HP. Potential reduction in healthcare carbon footprint by autonomous artificial intelligence. NPJ Digit Med 2022; 5:62. [PMID: 35551275 PMCID: PMC9098499 DOI: 10.1038/s41746-022-00605-w] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Accepted: 04/15/2022] [Indexed: 11/09/2022] Open
Affiliation(s)
- Risa M Wolf
- Department of Pediatric Endocrinology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
| | - Michael D Abramoff
- Department of Ophthalmology, University of Iowa, Iowa City, IA, USA.
- Digital Diagnostics, Coralville, IA, USA.
| | - Roomasa Channa
- Department of Ophthalmology, University of Wisconsin Madison, Madison, WI, USA
| | - Chris Tava
- Digital Diagnostics, Coralville, IA, USA
| | | | - Harold P Lehmann
- Department of Health Informatics, Johns Hopkins University School of Medicine, Baltimore, MD, USA
| |
Collapse
|
293
|
Explainability and Causability for Artificial Intelligence-Supported Medical Image Analysis in the Context of the European In Vitro Diagnostic Regulation. N Biotechnol 2022; 70:67-72. [DOI: 10.1016/j.nbt.2022.05.002] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2022] [Revised: 05/02/2022] [Accepted: 05/03/2022] [Indexed: 01/02/2023]
|
294
|
Malerbi FK, Andrade RE, Morales PH, Stuchi JA, Lencione D, de Paulo JV, Carvalho MP, Nunes FS, Rocha RM, Ferraz DA, Belfort R. Diabetic Retinopathy Screening Using Artificial Intelligence and Handheld Smartphone-Based Retinal Camera. J Diabetes Sci Technol 2022; 16:716-723. [PMID: 33435711 PMCID: PMC9294565 DOI: 10.1177/1932296820985567] [Citation(s) in RCA: 35] [Impact Index Per Article: 17.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
BACKGROUND Portable retinal cameras and deep learning (DL) algorithms are novel tools adopted by diabetic retinopathy (DR) screening programs. Our objective is to evaluate the diagnostic accuracy of a DL algorithm and the performance of portable handheld retinal cameras in the detection of DR in a large and heterogenous type 2 diabetes population in a real-world, high burden setting. METHOD Participants underwent fundus photographs of both eyes with a portable retinal camera (Phelcom Eyer). Classification of DR was performed by human reading and a DL algorithm (PhelcomNet), consisting of a convolutional neural network trained on a dataset of fundus images captured exclusively with the portable device; both methods were compared. We calculated the area under the curve (AUC), sensitivity, and specificity for more than mild DR. RESULTS A total of 824 individuals with type 2 diabetes were enrolled at Itabuna Diabetes Campaign, a subset of 679 (82.4%) of whom could be fully assessed. The algorithm sensitivity/specificity was 97.8% (95% CI 96.7-98.9) / 61.4% (95% CI 57.7-65.1); AUC was 0.89. All false negative cases were classified as moderate non-proliferative diabetic retinopathy (NPDR) by human grading. CONCLUSIONS The DL algorithm reached a good diagnostic accuracy for more than mild DR in a real-world, high burden setting. The performance of the handheld portable retinal camera was adequate, with over 80% of individuals presenting with images of sufficient quality. Portable devices and artificial intelligence tools may increase coverage of DR screening programs.
Collapse
Affiliation(s)
- Fernando Korn Malerbi
- Department of Ophthalmology and Visual Sciences, Federal University of São Paulo, São Paulo, Brazil
- Instituto Paulista de Estudos e Pesquisas em Oftalmologia, IPEPO, Vision Institute, São Paulo, Brazil
- Fernando Korn Malerbi, Federal University of São Paulo, Rua Botucatu, 822. São Paulo, SP 04039-032, Brazil.
| | - Rafael Ernane Andrade
- Department of Ophthalmology and Visual Sciences, Federal University of São Paulo, São Paulo, Brazil
- Hospital de Olhos Beira Rio, Itabuna, BA, Brazil
| | - Paulo Henrique Morales
- Department of Ophthalmology and Visual Sciences, Federal University of São Paulo, São Paulo, Brazil
- Instituto Paulista de Estudos e Pesquisas em Oftalmologia, IPEPO, Vision Institute, São Paulo, Brazil
| | | | | | | | | | | | | | - Daniel A. Ferraz
- Instituto Paulista de Estudos e Pesquisas em Oftalmologia, IPEPO, Vision Institute, São Paulo, Brazil
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital, NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
| | - Rubens Belfort
- Department of Ophthalmology and Visual Sciences, Federal University of São Paulo, São Paulo, Brazil
- Instituto Paulista de Estudos e Pesquisas em Oftalmologia, IPEPO, Vision Institute, São Paulo, Brazil
| |
Collapse
|
295
|
Liu X, Ali TK, Singh P, Shah A, McKinney SM, Ruamviboonsuk P, Turner AW, Keane PA, Chotcomwongse P, Nganthavee V, Chia M, Huemer J, Cuadros J, Raman R, Corrado GS, Peng L, Webster DR, Hammel N, Varadarajan AV, Liu Y, Chopra R, Bavishi P. Deep Learning to Detect OCT-derived Diabetic Macular Edema from Color Retinal Photographs: A Multicenter Validation Study. Ophthalmol Retina 2022; 6:398-410. [PMID: 34999015 DOI: 10.1016/j.oret.2021.12.021] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2021] [Revised: 11/09/2021] [Accepted: 12/29/2021] [Indexed: 01/20/2023]
Abstract
PURPOSE To validate the generalizability of a deep learning system (DLS) that detects diabetic macular edema (DME) from 2-dimensional color fundus photographs (CFP), for which the reference standard for retinal thickness and fluid presence is derived from 3-dimensional OCT. DESIGN Retrospective validation of a DLS across international datasets. PARTICIPANTS Paired CFP and OCT of patients from diabetic retinopathy (DR) screening programs or retina clinics. The DLS was developed using data sets from Thailand, the United Kingdom, and the United States and validated using 3060 unique eyes from 1582 patients across screening populations in Australia, India, and Thailand. The DLS was separately validated in 698 eyes from 537 screened patients in the United Kingdom with mild DR and suspicion of DME based on CFP. METHODS The DLS was trained using DME labels from OCT. The presence of DME was based on retinal thickening or intraretinal fluid. The DLS's performance was compared with expert grades of maculopathy and to a previous proof-of-concept version of the DLS. We further simulated the integration of the current DLS into an algorithm trained to detect DR from CFP. MAIN OUTCOME MEASURES The superiority of specificity and noninferiority of sensitivity of the DLS for the detection of center-involving DME, using device-specific thresholds, compared with experts. RESULTS The primary analysis in a combined data set spanning Australia, India, and Thailand showed the DLS had 80% specificity and 81% sensitivity, compared with expert graders, who had 59% specificity and 70% sensitivity. Relative to human experts, the DLS had significantly higher specificity (P = 0.008) and noninferior sensitivity (P < 0.001). In the data set from the United Kingdom, the DLS had a specificity of 80% (P < 0.001 for specificity of >50%) and a sensitivity of 100% (P = 0.02 for sensitivity of >90%).
CONCLUSIONS The DLS can generalize to multiple international populations with an accuracy exceeding that of experts. The clinical value of this DLS to reduce false-positive referrals, thus decreasing the burden on specialist eye care, warrants a prospective evaluation.
Collapse
Affiliation(s)
- Xinle Liu
- Google Health, Google LLC, Mountain View, California
| | - Tayyeba K Ali
- Google Health via Advanced Clinical, Deerfield, Illinois; California Pacific Medical Center, Department of Ophthalmology, San Francisco, CA
| | - Preeti Singh
- Google Health, Google LLC, Mountain View, California
| | - Ami Shah
- Google Health via Advanced Clinical, Deerfield, Illinois
| | | | - Paisan Ruamviboonsuk
- Department of Ophthalmology, Rajavithi Hospital, College of Medicine, Rangsit University, Bangkok, Thailand
| | - Angus W Turner
- Lions Outback Vision, Lions Eye Institute, Nedlands, Western Australia, Australia; University of Western Australia, Perth, Western Australia, Australia
| | - Pearse A Keane
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
| | - Peranut Chotcomwongse
- Department of Ophthalmology, Rajavithi Hospital, College of Medicine, Rangsit University, Bangkok, Thailand
| | - Variya Nganthavee
- Department of Ophthalmology, Rajavithi Hospital, College of Medicine, Rangsit University, Bangkok, Thailand
| | - Mark Chia
- Lions Outback Vision, Lions Eye Institute, Nedlands, Western Australia, Australia; NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
| | - Josef Huemer
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
| | | | - Rajiv Raman
- Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Chennai, India
| | | | - Lily Peng
- Google Health, Google LLC, Mountain View, California
| | | | - Naama Hammel
- Google Health, Google LLC, Mountain View, California.
| | | | - Yun Liu
- Google Health, Google LLC, Mountain View, California
| | - Reena Chopra
- Google Health, Google LLC, Mountain View, California; NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom.
| | - Pinal Bavishi
- Google Health, Google LLC, Mountain View, California
| |
Collapse
|
296
|
Wu JH, Nishida T, Weinreb RN, Lin JW. Performances of Machine Learning in Detecting Glaucoma Using Fundus and Retinal Optical Coherence Tomography Images: A Meta-Analysis. Am J Ophthalmol 2022; 237:1-12. [PMID: 34942113 DOI: 10.1016/j.ajo.2021.12.008] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2021] [Revised: 11/24/2021] [Accepted: 12/03/2021] [Indexed: 11/01/2022]
Abstract
PURPOSE To evaluate the performance of machine learning (ML) in detecting glaucoma using fundus and retinal optical coherence tomography (OCT) images. DESIGN Meta-analysis. METHODS PubMed and EMBASE were searched on August 11, 2021. A bivariate random-effects model was used to pool ML's diagnostic sensitivity, specificity, and area under the curve (AUC). Subgroup analyses were performed based on ML classifier categories and dataset types. RESULTS One hundred and five studies (3.3%) were retrieved. Seventy-three (69.5%), 30 (28.6%), and 2 (1.9%) studies tested ML using fundus, OCT, and both image types, respectively. Total testing data numbers were 197,174 for fundus and 16,039 for OCT. Overall, ML showed excellent performances for both fundus (pooled sensitivity = 0.92 [95% CI, 0.91-0.93]; specificity = 0.93 [95% CI, 0.91-0.94]; and AUC = 0.97 [95% CI, 0.95-0.98]) and OCT (pooled sensitivity = 0.90 [95% CI, 0.86-0.92]; specificity = 0.91 [95% CI, 0.89-0.92]; and AUC = 0.96 [95% CI, 0.93-0.97]). ML performed similarly using all data and external data for fundus, whereas the external test result for OCT was less robust (AUC = 0.87). When comparing different classifier categories, although support vector machine showed the highest performance (pooled sensitivity, specificity, and AUC ranges, 0.92-0.96, 0.95-0.97, and 0.96-0.99, respectively), results by neural network and others were still good (pooled sensitivity, specificity, and AUC ranges, 0.88-0.93, 0.90-0.93, and 0.95-0.97, respectively). When analyzed based on dataset types, ML demonstrated consistent performances on clinical datasets (fundus AUC = 0.98 [95% CI, 0.97-0.99] and OCT AUC = 0.95 [95% CI, 0.93-0.97]). CONCLUSIONS Performance of ML in detecting glaucoma compares favorably to that of experts and is promising for clinical application. Future prospective studies are needed to better evaluate its real-world utility.
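The pooled estimates above come from a bivariate random-effects model fitted jointly to sensitivity and specificity. As a rough illustration of the random-effects idea only, here is a simplified univariate DerSimonian-Laird pooling of sensitivities on the logit scale, with toy study counts (not data from this meta-analysis):

```python
import math

# Simplified univariate random-effects pooling of sensitivities on the
# logit scale (DerSimonian-Laird). The meta-analysis itself fits a
# bivariate model pooling sensitivity and specificity jointly; this is
# only the one-dimensional analogue. Toy inputs: (true positives, diseased n).
studies = [(90, 100), (45, 50), (160, 200)]

logits, variances = [], []
for tp, n in studies:
    tp_c, fn_c = tp + 0.5, n - tp + 0.5      # 0.5 continuity correction
    logits.append(math.log(tp_c / fn_c))
    variances.append(1 / tp_c + 1 / fn_c)

w = [1 / v for v in variances]
fixed = sum(wi * yi for wi, yi in zip(w, logits)) / sum(w)
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, logits))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)   # between-study variance

w_re = [1 / (v + tau2) for v in variances]      # random-effects weights
pooled_logit = sum(wi * yi for wi, yi in zip(w_re, logits)) / sum(w_re)
pooled_sens = 1 / (1 + math.exp(-pooled_logit))
print(round(pooled_sens, 2))  # -> 0.86
```

The between-study variance tau2 widens the weights relative to a fixed-effect pool, pulling the estimate toward an unweighted average of the study sensitivities; the bivariate model used in the study extends this by also modeling the correlation between sensitivity and specificity.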
Collapse
|
297
|
Khoury P, Srinivasan R, Kakumanu S, Ochoa S, Keswani A, Sparks R, Rider NL. A Framework for Augmented Intelligence in Allergy and Immunology Practice and Research—A Work Group Report of the AAAAI Health Informatics, Technology, and Education Committee. J Allergy Clin Immunol Pract 2022; 10:1178-1188. [PMID: 35300959 PMCID: PMC9205719 DOI: 10.1016/j.jaip.2022.01.047] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/16/2021] [Revised: 01/19/2022] [Accepted: 01/20/2022] [Indexed: 10/18/2022]
Abstract
Artificial and augmented intelligence (AI) and machine learning (ML) methods are expanding into the health care space. Big data are increasingly used in patient care applications, diagnostics, and treatment decisions in allergy and immunology. How these technologies will be evaluated, approved, and assessed for their impact is an important consideration for researchers and practitioners alike. With the potential of ML, deep learning, natural language processing, and other assistive methods to redefine health care usage, a scaffold for the impact of AI technology on research and patient care in allergy and immunology is needed. An American Academy of Allergy, Asthma & Immunology (AAAAI) Health Informatics, Technology, and Education subcommittee workgroup was convened to perform a scoping review of AI within health care and within the specialty of allergy and immunology, addressing its impact on practice and research as well as potential challenges, including education, AI governance, ethical and equity considerations, and potential opportunities for the specialty. There are numerous potential clinical applications of AI in allergy and immunology that range from disease diagnosis to multidimensional data reduction in electronic health records or immunologic datasets. For appropriate application and interpretation of AI, specialists should be involved in the design, validation, and implementation of AI in allergy and immunology. Challenges include incorporation of data science and bioinformatics into the training of future allergists-immunologists.
Collapse
|
298
|
Lim JS, Hong M, Lam WST, Zhang Z, Teo ZL, Liu Y, Ng WY, Foo LL, Ting DSW. Novel technical and privacy-preserving technology for artificial intelligence in ophthalmology. Curr Opin Ophthalmol 2022; 33:174-187. [PMID: 35266894 DOI: 10.1097/icu.0000000000000846] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
PURPOSE OF REVIEW The application of artificial intelligence (AI) in medicine and ophthalmology has experienced exponential breakthroughs in recent years in diagnosis, prognosis, and aiding clinical decision-making. The use of digital data has also heralded the need for privacy-preserving technology to protect patient confidentiality and to guard against threats such as adversarial attacks. Hence, this review aims to outline novel AI-based systems for ophthalmology use, privacy-preserving measures, potential challenges, and future directions of each. RECENT FINDINGS Several key AI algorithms used to improve disease detection and outcomes include data-driven, image-driven, natural language processing (NLP)-driven, genomics-driven, and multimodality algorithms. However, deep learning systems are susceptible to adversarial attacks, and the use of data for training models is associated with privacy concerns. Several data protection methods address these concerns in the form of blockchain technology, federated learning, and generative adversarial networks. SUMMARY AI applications have vast potential to meet many eye care needs, consequently reducing the burden on scarce healthcare resources. A pertinent challenge is to maintain data privacy and confidentiality while supporting AI endeavors, where data protection methods will need to evolve rapidly with AI technology needs. Ultimately, for AI to succeed in medicine and ophthalmology, a balance must be found between innovation and privacy.
Collapse
Affiliation(s)
- Jane S Lim
- Singapore National Eye Centre, Singapore Eye Research Institute
| | | | - Walter S T Lam
- Yong Loo Lin School of Medicine, National University of Singapore
| | - Zheting Zhang
- Lee Kong Chian School of Medicine, Nanyang Technological University
| | - Zhen Ling Teo
- Singapore National Eye Centre, Singapore Eye Research Institute
| | - Yong Liu
- National University of Singapore, DukeNUS Medical School, Singapore
| | - Wei Yan Ng
- Singapore National Eye Centre, Singapore Eye Research Institute
| | - Li Lian Foo
- Singapore National Eye Centre, Singapore Eye Research Institute
| | - Daniel S W Ting
- Singapore National Eye Centre, Singapore Eye Research Institute
| |
Collapse
|
299
|
Yap A, Wilkinson B, Chen E, Han L, Vaghefi E, Galloway C, Squirrell D. Patients' Perceptions of Artificial Intelligence in Diabetic Eye Screening. Asia Pac J Ophthalmol (Phila) 2022; 11:287-293. [PMID: 35772087 DOI: 10.1097/apo.0000000000000525] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/25/2023] Open
Abstract
PURPOSE Artificial intelligence (AI) technology is poised to revolutionize modern delivery of health care services. We set out to evaluate the patient perspective of AI use in diabetic retinal screening. DESIGN Survey. METHODS Four hundred thirty-eight patients undergoing diabetic retinal screening across New Zealand participated in a survey about their opinion of AI technology in retinal screening. The survey consisted of 13 questions covering topics of awareness, trust, and receptivity toward AI systems. RESULTS The mean age was 59 years. The majority of participants identified as New Zealand European (50%), followed by Asian (31%), Pacific Islander (10%), and Maori (5%). Whilst 73% of participants were aware of AI, only 58% had heard of it being implemented in health care. Overall, 78% of respondents were comfortable with AI use in their care, with 53% saying they would trust an AI-assisted screening program as much as a health professional. Despite having a higher awareness of AI, younger participants had lower trust in AI systems. A higher proportion of Maori and Pacific participants indicated a preference toward human-led screening. The main perceived benefits of AI included faster diagnostic speeds and greater accuracy. CONCLUSIONS There is low awareness of clinical AI applications among our participants. Despite this, most are receptive toward the implementation of AI in diabetic eye screening. Overall, there was a strong preference toward continual involvement of clinicians in the screening process. Key recommendations are provided to enhance the receptivity of the public toward the incorporation of AI into retinal screening programs.
Collapse
Affiliation(s)
- Aaron Yap
- Department of Ophthalmology, Auckland, New Zealand
| | - Benjamin Wilkinson
- Department of Ophthalmology, University of Auckland, Auckland, New Zealand
| | - Eileen Chen
- School of Optometry and Vision Science, Auckland, New Zealand
| | - Lydia Han
- School of Optometry and Vision Science, Auckland, New Zealand
| | - Ehsan Vaghefi
- School of Optometry and Vision Science, Auckland, New Zealand
- Toku Eyes, Auckland, New Zealand
| | - Chris Galloway
- School of Communication, Journalism and Marketing Massey Business School, New Zealand
| | - David Squirrell
- Department of Ophthalmology, Auckland, New Zealand
- Toku Eyes, Auckland, New Zealand
| |
Collapse
|
300
|
Juneja D, Gupta A, Singh O. Artificial intelligence in critically ill diabetic patients: current status and future prospects. Artif Intell Gastroenterol 2022; 3:66-79. [DOI: 10.35712/aig.v3.i2.66] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/16/2022] [Revised: 04/21/2022] [Accepted: 04/28/2022] [Indexed: 02/06/2023] Open
Abstract
Recent years have witnessed increasing numbers of artificial intelligence (AI) based applications and devices being tested and approved for medical care. Diabetes is arguably the most common chronic disorder worldwide, and AI is now being used for making an early diagnosis, predicting and diagnosing early complications, increasing adherence to therapy, and even motivating patients to manage diabetes and maintain glycemic control. However, these AI applications have largely been tested in non-critically ill patients and aid in managing chronic problems. Intensive care units (ICUs) have a dynamic environment generating huge amounts of data, which AI can extract and organize simultaneously, analyzing many variables for diagnostic and/or therapeutic purposes in order to predict outcomes of interest. Even non-diabetic ICU patients are at risk of developing hypo- or hyperglycemia, complicating their ICU course and affecting outcomes. In addition, maintaining glycemic control requires frequent blood sampling and insulin dose adjustments, increasing nursing workload and the chance of error. AI has the potential to improve glycemic control while reducing the nursing workload and errors. Continuous glucose monitoring (CGM) devices, which are Food and Drug Administration (FDA) approved for use in non-critically ill patients, are now being recommended for use in specific ICU populations with increased accuracy. AI-based devices, including the artificial pancreas and CGM-regulated insulin infusion systems, have shown promise as comprehensive glycemic control solutions in critically ill patients. Even though many of these AI applications have shown potential, these devices need to be tested in larger numbers of ICU patients, have wider availability, show a favorable cost-benefit ratio, and be amenable to easy integration into existing healthcare systems before they become acceptable to ICU physicians for routine use.
Collapse
Affiliation(s)
- Deven Juneja
- Institute of Critical Care Medicine, Max Super Speciality Hospital, Saket, New Delhi 110092, India
| | - Anish Gupta
- Institute of Critical Care Medicine, Max Super Speciality Hospital, Saket, New Delhi 110092, India
| | - Omender Singh
- Institute of Critical Care Medicine, Max Super Speciality Hospital, Saket, New Delhi 110092, India
| |
Collapse
|