1
Oloruntoba A, Ingvar Å, Sashindranath M, Anthony O, Abbott L, Guitera P, Caccetta T, Janda M, Soyer HP, Mar V. Examining labelling guidelines for AI-based software as a medical device: A review and analysis of dermatology mobile applications in Australia. Australas J Dermatol 2024. PMID: 38693690. DOI: 10.1111/ajd.14269.
Abstract
In recent years, there has been a surge in the development of AI-based Software as a Medical Device (SaMD), particularly in visual specialties such as dermatology. In Australia, the Therapeutic Goods Administration (TGA) regulates AI-based SaMD to ensure its safe use. Proper labelling of these devices is crucial to ensure that healthcare professionals and the general public understand how to use them and interpret results accurately. However, guidelines for labelling AI-based SaMD in dermatology are lacking, which may result in products failing to provide essential information about algorithm development and performance metrics. This review examines existing labelling guidelines for AI-based SaMD across visual medical specialties, with a specific focus on dermatology. Common recommendations for labelling are identified and applied to currently available dermatology AI-based SaMD mobile applications to determine usage of these labels. Of the 21 AI-based SaMD mobile applications identified, none fully comply with common labelling recommendations. Results highlight the need for standardized labelling guidelines. Ensuring transparency and accessibility of information is essential for the safe integration of AI into health care and preventing potential risks associated with inaccurate clinical decisions.
Affiliation(s)
- Åsa Ingvar
  - School of Public Health and Preventive Medicine, Monash University, Melbourne, Victoria, Australia
  - Victorian Melanoma Service, Alfred Health, Melbourne, Victoria, Australia
  - Department of Dermatology, Skåne University Hospital, Lund University, Lund, Sweden
  - Department of Clinical Sciences, Skåne University Hospital, Lund University, Lund, Sweden
- Maithili Sashindranath
  - School of Public Health and Preventive Medicine, Monash University, Melbourne, Victoria, Australia
- Ojochonu Anthony
  - Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Victoria, Australia
- Lisa Abbott
  - Melanoma Institute Australia, The University of Sydney, Sydney, New South Wales, Australia
- Pascale Guitera
  - Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia
  - Sydney Melanoma Diagnostic Centre, Royal Prince Alfred Hospital, Camperdown, New South Wales, Australia
  - Perth Dermatology Clinic, Perth, Western Australia, Australia
- Tony Caccetta
  - Perth Dermatology Clinic, Perth, Western Australia, Australia
- Monika Janda
  - Dermatology Research Centre, Frazer Institute, The University of Queensland, Brisbane, Queensland, Australia
- H Peter Soyer
  - Dermatology Research Centre, Frazer Institute, The University of Queensland, Brisbane, Queensland, Australia
- Victoria Mar
  - School of Public Health and Preventive Medicine, Monash University, Melbourne, Victoria, Australia
  - Victorian Melanoma Service, Alfred Health, Melbourne, Victoria, Australia
2
Afifah A, Syafira F, Afladhanti PM, Dharmawidiarini D. Artificial intelligence as diagnostic modality for keratoconus: A systematic review and meta-analysis. J Taibah Univ Med Sci 2024; 19:296-303. PMID: 38283379. PMCID: PMC10821587. DOI: 10.1016/j.jtumed.2023.12.007.
Abstract
Objectives The challenges in diagnosing keratoconus (KC) have led researchers to explore the use of artificial intelligence (AI) as a diagnostic tool. AI has emerged as a new way to improve the efficiency of KC diagnosis. This study analyzed the use of AI as a diagnostic modality for KC. Methods This study used a systematic review and meta-analysis following the 2020 Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. We searched PubMed, Medline, and ScienceDirect using the combination of search terms "((Artificial Intelligence) OR (Diagnostic Modality)) AND (Keratoconus)" over the last 5 years (2018-2023). Following a systematic review protocol, we selected 11 articles, of which 6 were eligible for the final analysis. The relevant data were analyzed with Review Manager 5.4 software and the final output was presented in a forest plot. Results Neural networks were the most used AI model for diagnosing KC. Neural networks and naïve Bayes showed the highest accuracy in diagnosing KC, with a sensitivity of 1.00, while random forests reached sensitivities above 0.90. All studies in each group demonstrated sensitivity and specificity above 0.90. Conclusions With its high performance, particularly in sensitivity and specificity, AI can potentially improve the diagnosis of KC and help clinicians make medical decisions about individual patients.
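The per-study sensitivity and specificity values that feed a forest plot like this one are simple ratios over confusion-matrix counts. A minimal sketch with hypothetical study names and counts (illustrative only, not data from the review):

```python
# Per-study sensitivity and specificity from confusion-matrix counts,
# as pooled in a diagnostic meta-analysis. All numbers are hypothetical.

def sensitivity(tp, fn):
    """Fraction of true keratoconus cases the classifier catches."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of non-keratoconus eyes correctly ruled out."""
    return tn / (tn + fp)

# Hypothetical per-study counts: (TP, FN, TN, FP)
studies = {
    "study_A": (98, 2, 95, 5),
    "study_B": (90, 10, 92, 8),
}
for name, (tp, fn, tn, fp) in studies.items():
    print(name, round(sensitivity(tp, fn), 3), round(specificity(tn, fp), 3))
```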
Affiliation(s)
- Azzahra Afifah
  - Undaan Eye Hospital, Surabaya, Indonesia
  - Medical Profession Program, Faculty of Medicine, Universitas Sriwijaya, Palembang, South Sumatra, Indonesia
- Fara Syafira
  - Medical Profession Program, Faculty of Medicine, Universitas Sriwijaya, Palembang, South Sumatra, Indonesia
- Putri Mahirah Afladhanti
  - Medical Profession Program, Faculty of Medicine, Universitas Sriwijaya, Palembang, South Sumatra, Indonesia
- Dini Dharmawidiarini
  - Lens, Cornea and Refractive Surgery Division, Undaan Eye Hospital, Surabaya, Indonesia
3
Gurnani B, Kaur K. Leveraging ChatGPT for ophthalmic education: A critical appraisal. Eur J Ophthalmol 2024; 34:323-327. PMID: 37974429. DOI: 10.1177/11206721231215862.
Abstract
In recent years, the advent of artificial intelligence (AI) has transformed many sectors, including medical education. This editorial critically appraises the integration of ChatGPT, a state-of-the-art AI language model, into ophthalmic education, focusing on its potential, limitations, and ethical considerations. The application of ChatGPT in teaching and training ophthalmologists presents an innovative method to offer real-time, customized learning experiences. Through a systematic analysis of both experimental and clinical data, this editorial examines how ChatGPT enhances engagement, understanding, and retention of complex ophthalmological concepts. The study also evaluates the efficacy of ChatGPT in simulating patient interactions and clinical scenarios, which can foster improved diagnostic and interpersonal skills. Despite the promising advantages, concerns regarding reliability, lack of personal touch, and potential biases in the AI-generated content are scrutinized. Ethical considerations concerning data privacy and potential misuse are also explored. The findings underline the need for carefully designed integration, continuous evaluation, and adherence to ethical guidelines to maximize benefits while mitigating risks. By shedding light on these multifaceted aspects, this paper contributes to the ongoing discourse on the incorporation of AI in medical education, offering valuable insights and guidance for educators, practitioners, and policymakers aiming to leverage modern technology for enhancing ophthalmic education.
Affiliation(s)
- Bharat Gurnani
  - Cataract, Cornea, Trauma, External Diseases, Ocular Surface and Refractive Services, ASG Eye Hospital, Jodhpur, Rajasthan, India
  - Sadguru Netra Chikitsalya, Shri Sadguru Seva Sangh Trust, Chitrakoot, Madhya Pradesh, India
- Kirandeep Kaur
  - Cataract, Pediatric Ophthalmology and Strabismus, ASG Eye Hospital, Jodhpur, Rajasthan, India
  - Children Eye Care Centre, Sadguru Netra Chikitsalya, Shri Sadguru Seva Sangh Trust, Chitrakoot, Madhya Pradesh, India
4
Zang M, Mukund P, Forsyth B, Laine AF, Thakoor KA. Predicting Clinician Fixations on Glaucoma OCT Reports via CNN-Based Saliency Prediction Methods. IEEE Open J Eng Med Biol 2024; 5:191-197. PMID: 38606397. PMCID: PMC11008801. DOI: 10.1109/ojemb.2024.3367492.
Abstract
Goal: To predict physician fixations specifically on ophthalmology optical coherence tomography (OCT) reports from eye tracking data using CNN based saliency prediction methods in order to aid in the education of ophthalmologists and ophthalmologists-in-training. Methods: Fifteen ophthalmologists were recruited to each examine 20 randomly selected OCT reports and evaluate the likelihood of glaucoma for each report on a scale of 0-100. Eye movements were collected using a Pupil Labs Core eye-tracker. Fixation heat maps were generated using fixation data. Results: A model trained with traditional saliency mapping resulted in a correlation coefficient (CC) value of 0.208, a Normalized Scanpath Saliency (NSS) value of 0.8172, a Kullback-Leibler (KLD) value of 2.573, and a Structural Similarity Index (SSIM) of 0.169. Conclusions: The TranSalNet model was able to predict fixations within certain regions of the OCT report with reasonable accuracy, but more data is needed to improve model accuracy. Future steps include increasing data collection, improving quality of data, and modifying the model architecture.
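Two of the saliency-evaluation metrics reported above, the correlation coefficient (CC) and KL divergence (KLD), compare a predicted saliency map against a ground-truth fixation heat map. A minimal sketch on toy random maps; the normalization choices here are illustrative assumptions, not the authors' exact evaluation pipeline:

```python
import numpy as np

def cc(pred, gt):
    """Pearson correlation coefficient between two saliency maps."""
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    g = (gt - gt.mean()) / (gt.std() + 1e-8)
    return float((p * g).mean())

def kld(pred, gt, eps=1e-8):
    """KL divergence of the ground-truth map from the predicted map,
    after normalizing both to probability distributions."""
    p = pred / (pred.sum() + eps)
    g = gt / (gt.sum() + eps)
    return float((g * np.log(g / (p + eps) + eps)).sum())

rng = np.random.default_rng(0)
pred = rng.random((32, 32))   # toy predicted saliency map
gt = rng.random((32, 32))     # toy fixation heat map
print(cc(pred, gt), kld(pred, gt))
```

A perfect prediction gives CC near 1 and KLD near 0, which is why higher CC and lower KLD indicate better fixation prediction.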
5
Marquez E, Barrón-Palma EV, Rodríguez K, Savage J, Sanchez-Sandoval AL. Supervised Machine Learning Methods for Seasonal Influenza Diagnosis. Diagnostics (Basel) 2023; 13:3352. PMID: 37958248. PMCID: PMC10647880. DOI: 10.3390/diagnostics13213352.
Abstract
Influenza has been a seasonal disease in Mexico since 2009, and it imposes a high cost on the national public health system, including detection by RT-qPCR tests, treatment, and absenteeism in the workplace. Despite influenza's relevance, the main clinical features used to detect the disease, defined by international institutions like the World Health Organization (WHO) and the United States Centers for Disease Control and Prevention (CDC), do not follow the same pattern in all populations. The aim of this work is to find a machine learning method to facilitate decision making in the clinical differentiation between influenza-positive and influenza-negative patients, based on their symptoms and demographic features. The research sample consisted of 15,480 records, including clinical and demographic data of patients with a positive/negative RT-qPCR influenza test, from 2010 to 2020 in the public healthcare institutions of Mexico City. The performance of the methods for classifying influenza cases was evaluated with indices such as accuracy, specificity, sensitivity, precision, the F1-measure, and the area under the curve (AUC). Results indicate that random forest and bagging classifiers were the best supervised methods; they showed promise in supporting clinical diagnosis, especially in places where performing molecular tests might be challenging or not feasible.
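The evaluation indices named above (accuracy, sensitivity, specificity, precision, F1) all derive from a binary confusion matrix over positive/negative RT-qPCR outcomes. A minimal sketch with hypothetical counts:

```python
# Classifier evaluation indices from a binary confusion matrix.
# The counts below are hypothetical, not the study's data.

def metrics(tp, fp, tn, fn):
    acc = (tp + tn) / (tp + fp + tn + fn)
    sens = tp / (tp + fn)          # recall on influenza-positive patients
    spec = tn / (tn + fp)          # correct rejections of negative patients
    prec = tp / (tp + fp)
    f1 = 2 * prec * sens / (prec + sens)
    return {"accuracy": acc, "sensitivity": sens, "specificity": spec,
            "precision": prec, "f1": f1}

print(metrics(tp=80, fp=20, tn=70, fn=30))
```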
Affiliation(s)
- Edna Marquez
  - Genomic Medicine Department, General Hospital of México “Dr. Eduardo Liceaga”, Mexico City 06726, Mexico
- Eira Valeria Barrón-Palma
  - Genomic Medicine Department, General Hospital of México “Dr. Eduardo Liceaga”, Mexico City 06726, Mexico
- Katya Rodríguez
  - Institute for Research in Applied Mathematics and Systems, National Autonomous University of Mexico, Mexico City 04510, Mexico
- Jesus Savage
  - Signal Processing Department, Engineering School, National Autonomous University of Mexico, Mexico City 04510, Mexico
- Ana Laura Sanchez-Sandoval
  - Genomic Medicine Department, General Hospital of México “Dr. Eduardo Liceaga”, Mexico City 06726, Mexico
6
Xiao J, Kopycka-Kedzierawski D, Ragusa P, Mendez Chagoya LA, Funkhouser K, Lischka T, Wu TT, Fiscella K, Kar KS, Al Jallad N, Rashwan N, Ren J, Meyerowitz C. Acceptance and Usability of an Innovative mDentistry eHygiene Model Amid the COVID-19 Pandemic Within the US National Dental Practice-Based Research Network: Mixed Methods Study. JMIR Hum Factors 2023; 10:e45418. PMID: 37594795. PMCID: PMC10474507. DOI: 10.2196/45418.
Abstract
BACKGROUND Amid the COVID-19 pandemic and other possible future infectious disease pandemics, dentistry needs to consider modified dental examination regimens that render quality care and ensure the safety of patients and dental health care personnel (DHCP). OBJECTIVE This study aims to assess the acceptance and usability of an innovative mDentistry eHygiene model amid the COVID-19 pandemic. METHODS This pilot study used a 2-stage implementation design to assess 2 critical components of an innovative mDentistry eHygiene model: virtual hygiene examination (eHygiene) and patient self-taken intraoral images (SELFIE), within the National Dental Practice-Based Research Network. Mixed methods (quantitative and qualitative) were used to assess the acceptance and usability of the eHygiene model. RESULTS A total of 85 patients and 18 DHCP participated in the study. Overall, the eHygiene model was well accepted by patients (System Usability Scale [SUS] score: mean 70.0, SD 23.7) and moderately accepted by dentists (SUS score: mean 51.3, SD 15.9) and hygienists (SUS score: mean 57.1, SD 23.8). Dentists and patients had good communication during the eHygiene examination, as assessed using the Dentist-Patient Communication scale. In the SELFIE session, patients completed tasks with minimum challenges and obtained diagnostic intraoral photos. Patients and DHCP suggested that although eHygiene has the potential to improve oral health care services, it should be used selectively depending on patients' conditions. CONCLUSIONS The study results showed promise for the 2 components of the eHygiene model. eHygiene offers a complementary modality for oral health data collection and examination in dental offices, which would be particularly useful during an infectious disease outbreak. In addition, patients being able to capture critical oral health data in their home could facilitate dental treatment triage and oral health self-monitoring and potentially trigger oral health-promoting behaviors.
Affiliation(s)
- Jin Xiao
  - Eastman Institute for Oral Health, University of Rochester, Rochester, NY, United States
- Patricia Ragusa
  - Eastman Institute for Oral Health, University of Rochester, Rochester, NY, United States
- Tamara Lischka
  - Kaiser Permanente Center for Health Research, Portland, OR, United States
- Tong Tong Wu
  - Department of Biostatistics and Computational Biology, University of Rochester, Rochester, NY, United States
- Kevin Fiscella
  - Department of Family Medicine, University of Rochester, Rochester, NY, United States
- Kumari Saswati Kar
  - Eastman Institute for Oral Health, University of Rochester, Rochester, NY, United States
- Nisreen Al Jallad
  - Eastman Institute for Oral Health, University of Rochester, Rochester, NY, United States
- Noha Rashwan
  - Eastman Institute for Oral Health, University of Rochester, Rochester, NY, United States
- Johana Ren
  - River Campus, University of Rochester, Rochester, NY, United States
- Cyril Meyerowitz
  - Eastman Institute for Oral Health, University of Rochester, Rochester, NY, United States
7
Wei W, Southern J, Zhu K, Li Y, Cordeiro MF, Veselkov K. Deep learning to detect macular atrophy in wet age-related macular degeneration using optical coherence tomography. Sci Rep 2023; 13:8296. PMID: 37217770. DOI: 10.1038/s41598-023-35414-y.
Abstract
Here, we have developed a deep learning method to fully automatically detect and quantify six main clinically relevant atrophic features associated with macular atrophy (MA) using optical coherence tomography (OCT) analysis of patients with wet age-related macular degeneration (AMD). The development of MA in patients with AMD results in irreversible blindness, and there is currently no effective method for early diagnosis of this condition, despite the recent development of unique treatments. Using an OCT dataset of 2211 B-scans from 45 volumetric scans of 8 patients, a convolutional neural network using a one-against-all strategy was trained to detect all six atrophic features, followed by a validation to evaluate the performance of the models. The model achieved a mean Dice similarity coefficient score of 0.706 ± 0.039, a mean precision score of 0.834 ± 0.048, and a mean sensitivity score of 0.615 ± 0.051. These results show the unique potential of using artificial intelligence-aided methods for early detection and identification of the progression of MA in wet AMD, which can further support and assist clinical decisions.
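The Dice similarity coefficient used above to score predicted atrophic-feature masks against ground truth is twice the overlap divided by the total area of the two masks. A minimal sketch on toy binary masks:

```python
import numpy as np

# Dice similarity coefficient between a predicted and a ground-truth
# binary mask. The toy arrays below stand in for segmentation masks.

def dice(pred, gt, eps=1e-8):
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(a, b))  # 2*2 / (3+3) ≈ 0.667
```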
Affiliation(s)
- Wei Wei
  - Department of Surgery and Cancer, Imperial College London, London, UK
  - Ningbo Medical Center Lihuili Hospital, Ningbo, China
  - Imperial College Ophthalmology Research Group, London, UK
- Kexuan Zhu
  - Ningbo Medical Center Lihuili Hospital, Ningbo, China
- Yefeng Li
  - School of Cyber Science and Engineering, Ningbo University of Technology, Ningbo, China
- Maria Francesca Cordeiro
  - Department of Surgery and Cancer, Imperial College London, London, UK
  - Imperial College Ophthalmology Research Group, London, UK
- Kirill Veselkov
  - Department of Surgery and Cancer, Imperial College London, London, UK
8
Tang YW, Ji J, Lin JW, Wang J, Wang Y, Liu Z, Hu Z, Yang JF, Ng TK, Zhang M, Pang CP, Cen LP. Automatic Detection of Peripheral Retinal Lesions From Ultrawide-Field Fundus Images Using Deep Learning. Asia Pac J Ophthalmol (Phila) 2023; 12:284-292. PMID: 36912572. DOI: 10.1097/apo.0000000000000599.
Abstract
PURPOSE To establish a multilabel-based deep learning (DL) algorithm for automatic detection and categorization of clinically significant peripheral retinal lesions using ultrawide-field fundus images. METHODS A total of 5958 ultrawide-field fundus images from 3740 patients were randomly split into a training set, validation set, and test set. A multilabel classifier was developed to detect rhegmatogenous retinal detachment, cystic retinal tuft, lattice degeneration, and retinal breaks. Referral decisions were automatically generated based on the results of each disease class. t-distributed stochastic neighbor embedding heatmaps were used to visualize the features extracted by the neural networks. Gradient-weighted class activation mapping and guided backpropagation heatmaps were generated to investigate the image locations for decision-making by the DL models. The performance of the classifiers was evaluated by sensitivity, specificity, accuracy, F1 score, area under the receiver operating characteristic curve (AUROC) with 95% CI, and area under the precision-recall curve. RESULTS In the test set, all categories achieved a sensitivity of 0.836-0.918, a specificity of 0.858-0.989, an accuracy of 0.854-0.977, an F1 score of 0.400-0.931, an AUROC of 0.9205-0.9882, and an area under the precision-recall curve of 0.6723-0.9745. The referral decisions achieved an AUROC of 0.9758 (95% CI: 0.9648-0.9869). The multilabel classifier had significantly better performance in cystic retinal tuft detection than the binary classifier (AUROC 0.9781 vs 0.6112, P < 0.001). The model showed comparable performance with human experts. CONCLUSIONS This new DL model of a multilabel classifier is capable of automatic, accurate, and early detection of clinically significant peripheral retinal lesions with various sample sizes. It can be applied in peripheral retinal screening in clinics.
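Per-label AUROC values of the kind reported for this multilabel classifier can be computed with the Mann-Whitney rank formulation: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. The label names and score vectors below are toy illustrations, not the study's data:

```python
# AUROC per label via the Mann-Whitney formulation: count pairwise "wins"
# of positive-case scores over negative-case scores (ties count half).

def auroc(y_true, y_score):
    pos = [s for s, t in zip(y_score, y_true) if t == 1]
    neg = [s for s, t in zip(y_score, y_true) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# One (labels, scores) pair per disease class; hypothetical values.
labels = {"retinal_detachment": ([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]),
          "lattice_degeneration": ([0, 1, 0, 1], [0.2, 0.9, 0.3, 0.7])}
for name, (t, s) in labels.items():
    print(name, auroc(t, s))
```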
Affiliation(s)
- Yi-Wen Tang
  - Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Jie Ji
  - Network and Information Center, Shantou University, Shantou, Guangdong, China
- Jian-Wei Lin
  - Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Ji Wang
  - Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Yun Wang
  - Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Zibo Liu
  - Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Zhanchi Hu
  - Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Jian-Feng Yang
  - Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Tsz Kin Ng
  - Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
  - Shantou University Medical College, Shantou, Guangdong, China
  - Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Mingzhi Zhang
  - Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Chi Pui Pang
  - Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
  - Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Ling-Ping Cen
  - Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
9
Li M, Liu S, Wang Z, Li X, Yan Z, Zhu R, Wan Z. MyopiaDETR: End-to-end pathological myopia detection based on transformer using 2D fundus images. Front Neurosci 2023; 17:1130609. PMID: 36824210. PMCID: PMC9941630. DOI: 10.3389/fnins.2023.1130609.
Abstract
Background Automated diagnosis of various retinal diseases based on fundus images can serve as an important clinical decision aid for preventing vision loss. However, developing such an automated diagnostic solution is challenged by the characteristics of lesion areas in 2D fundus images, such as morphological irregularity, imaging angle, and insufficient data. Methods To overcome those challenges, we propose a novel deep learning model named MyopiaDETR to detect the lesion area of normal myopia (NM), high myopia (HM), and pathological myopia (PM) using 2D fundus images provided by the iChallenge-PM dataset. To address morphological irregularity, we present a novel attentional FPN architecture that supplies multi-scale feature maps to a traditional Detection Transformer (DETR) for detecting irregular lesions more accurately. We choose the DETR structure to view lesion detection from the perspective of set prediction and to capture better global information. Several data augmentation methods are applied to the iChallenge-PM dataset to address the challenge of insufficient data. Results The experimental results demonstrate that our model achieves excellent localization and classification performance on the iChallenge-PM dataset, reaching an AP50 of 86.32%. Conclusion Our model is effective at detecting lesion areas in 2D fundus images. It not only achieves a significant improvement in capturing small objects, but also converges significantly faster during training.
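AP50, the metric reported above, counts a detection as correct when its predicted box overlaps a ground-truth box with an intersection-over-union (IoU) of at least 0.5. A minimal sketch of that IoU test on toy boxes (coordinates are illustrative, not from the dataset):

```python
# IoU between two axis-aligned boxes given as (x1, y1, x2, y2),
# the overlap test underlying the AP50 detection metric.

def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

pred, gt = (0, 0, 10, 10), (5, 0, 15, 10)
print(iou(pred, gt), iou(pred, gt) >= 0.5)  # half-overlapping boxes fail AP50
```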
Affiliation(s)
- Manyu Li
  - School of Information Engineering, Nanchang University, Jiangxi, China
- Shichang Liu
  - School of Computer Science, Shaanxi Normal University, Xi’an, China
- Zihan Wang
  - School of Information Engineering, Nanchang University, Jiangxi, China
- Xin Li
  - School of Computer Science, Shaanxi Normal University, Xi’an, China
- Zezhong Yan
  - School of Information Engineering, Nanchang University, Jiangxi, China
- Renping Zhu
  - School of Information Engineering, Nanchang University, Jiangxi, China
  - Industrial Institute of Artificial Intelligence, Nanchang University, Jiangxi, China
  - School of Information Management, Wuhan University, Hubei, China
- Zhijiang Wan
  - School of Information Engineering, Nanchang University, Jiangxi, China
  - Industrial Institute of Artificial Intelligence, Nanchang University, Jiangxi, China
10
Artificial intelligence for strengthening healthcare systems in low- and middle-income countries: a systematic scoping review. NPJ Digit Med 2022; 5:162. PMID: 36307479. PMCID: PMC9614192. DOI: 10.1038/s41746-022-00700-y.
Abstract
In low- and middle-income countries (LMICs), a growing number of publications have promoted AI as a potential means of strengthening healthcare systems. We aimed to evaluate the scope and nature of AI technologies in the specific context of LMICs. In this systematic scoping review, we used a broad variety of AI and healthcare search terms. Our literature search included records published between 1st January 2009 and 30th September 2021 from the Scopus, EMBASE, MEDLINE, Global Health and APA PsycInfo databases, and grey literature from a Google Scholar search. We included studies that reported a quantitative and/or qualitative evaluation of a real-world application of AI in an LMIC health context. A total of 10 references evaluating the application of AI in an LMIC were included. Applications varied widely, including clinical decision support systems, treatment planning and triage assistants, and health chatbots. Only half of the papers reported which algorithms and datasets were used to train the AI. A number of challenges of using AI tools were reported, including issues with reliability, mixed impacts on workflows, poor user friendliness, and lack of adeptness with local contexts. Many barriers exist that prevent the successful development and adoption of well-performing, context-specific AI tools, such as limited data availability, trust, and evidence of cost-effectiveness in LMICs. Additional evaluations of the use of AI in healthcare in LMICs are needed to identify their effectiveness and reliability in real-world settings and to build understanding of best practices for future implementations.
11
Ferro Desideri L, Rutigliani C, Corazza P, Nastasi A, Roda M, Nicolo M, Traverso CE, Vagge A. The upcoming role of Artificial Intelligence (AI) for retinal and glaucomatous diseases. J Optom 2022; 15 Suppl 1:S50-S57. PMID: 36216736. PMCID: PMC9732476. DOI: 10.1016/j.optom.2022.08.001.
Abstract
In recent years, artificial intelligence (AI) and deep learning (DL) models have attracted increasing global interest in the field of ophthalmology. DL models are considered the current state of the art among AI technologies, with the capability to recognize, quantify, and describe pathological clinical features. Their role is currently being investigated for the early diagnosis and management of several retinal diseases and glaucoma. The application of DL models to fundus photographs, visual fields, and optical coherence tomography (OCT) imaging has provided promising results in the early detection of diabetic retinopathy (DR), wet age-related macular degeneration (w-AMD), retinopathy of prematurity (ROP), and glaucoma. In this review we analyze the current evidence for AI applied to these ocular diseases, discuss possible future developments and potential clinical implications, and consider the present limitations and challenges that must be addressed before AI and DL models can be adopted as powerful tools in everyday routine clinical practice.
Affiliation(s)
- Lorenzo Ferro Desideri
- University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy.
| | | | - Paolo Corazza
- University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy
| | | | - Matilde Roda
- Ophthalmology Unit, Department of Experimental, Diagnostic and Specialty Medicine (DIMES), Alma Mater Studiorum University of Bologna and S.Orsola-Malpighi Teaching Hospital, Bologna, Italy
| | - Massimo Nicolo
- University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy
| | - Carlo Enrico Traverso
- University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy
| | - Aldo Vagge
- University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy
12
Yasin A, Ren Y, Li J, Sheng Y, Cao C, Zhang K. Advances in Hyaluronic Acid for Biomedical Applications. Front Bioeng Biotechnol 2022; 10:910290. [PMID: 35860333 PMCID: PMC9289781 DOI: 10.3389/fbioe.2022.910290] [Citation(s) in RCA: 51] [Impact Index Per Article: 25.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Accepted: 05/24/2022] [Indexed: 11/13/2022] Open
Abstract
Hyaluronic acid (HA) is a large non-sulfated glycosaminoglycan that is the main component of the extracellular matrix (ECM). Because of its strong and diverse functions across broad fields, HA has been widely studied and reported. The molecular properties of HA and its derivatives, including a wide range of molecular weights with distinct effects on cells, moisture retention and anti-aging activity, and CD44 targeting, have established its role in tissue engineering, wound healing, cancer treatment, ophthalmology, and cosmetics. In recent years, HA and its derivatives have played an increasingly important role in these biomedical fields in the formulation of coatings, nanoparticles, and hydrogels. This article highlights recent efforts to convert HA into smart formulations, such as multifunctional coatings, targeted nanoparticles, and injectable hydrogels, for advanced biomedical applications.
Affiliation(s)
- Aqeela Yasin
- School of Materials Science and Engineering, and Henan Key Laboratory of Advanced Magnesium Alloy and Key Laboratory of Materials Processing and Mold Technology (Ministry of Education), Zhengzhou University, Zhengzhou, China
- Ying Ren
- School of Materials Science and Engineering, Henan University of Technology, Zhengzhou, China
- Jingan Li
- School of Materials Science and Engineering, and Henan Key Laboratory of Advanced Magnesium Alloy and Key Laboratory of Materials Processing and Mold Technology (Ministry of Education), Zhengzhou University, Zhengzhou, China
- *Correspondence: Jingan Li; Chang Cao
- Yulong Sheng
- School of Materials Science and Engineering, and Henan Key Laboratory of Advanced Magnesium Alloy and Key Laboratory of Materials Processing and Mold Technology (Ministry of Education), Zhengzhou University, Zhengzhou, China
- Chang Cao
- Department of Cardiology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- *Correspondence: Jingan Li; Chang Cao
- Kun Zhang
- School of Life Science, Zhengzhou University, Zhengzhou, China
13
Use of Artificial Neural Networks to Predict the Progression of Glaucoma in Patients with Sleep Apnea. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12126061] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Aim: To construct neural models to predict the progression of glaucoma in patients with sleep apnea. Materials and Methods: Neural network modeling was performed with the commercial NeuroSolutions simulator. The databases gathered information on a group of patients with primary open-angle glaucoma and normal-tension glaucoma associated with sleep apnea syndrome at various stages of disease severity. The data were divided as follows: 65 records were used in the neural network training stage and 8 were kept for the validation stage. In total, 21 parameters were selected as input parameters for the neural models, including: age, BMI (body mass index), systolic and diastolic blood pressure, intraocular pressure, central corneal thickness, corneal biomechanical parameters (IOPcc, HC, CRF), AHI, desaturation index, nocturnal oxygen saturation, remaining AHI, type of apnea, and associated general conditions (diabetes, hypertension, obesity, COPD). The selected output parameters were: c/d ratio, modified visual field parameters (MD, PSD), and ganglion cell layer thickness. Feed-forward neural networks (multilayer perceptrons) were constructed with one layer of hidden neurons. The constructed neural models generated output values that were then compared with the experimental values. Results: The best results were obtained during the training stage with the ANN network (21:35:4). Within a 25% confidence interval, very good results were also obtained during the validation stage, except for average GCL thickness, for which errors were slightly higher. Conclusions: The excellent results obtained during the validation stage support findings from other studies in the literature that strengthen the connection between sleep apnea syndrome and glaucomatous changes.
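For the 21:35:4 network described in this abstract, a one-hidden-layer feed-forward pass can be sketched in plain Python. The layer sizes come from the abstract; the sigmoid activation, random weights, and input vector are illustrative assumptions, not the authors' NeuroSolutions model.

```python
import math
import random

def mlp_forward(x, w1, b1, w2, b2):
    """One forward pass of a 21-35-4 multilayer perceptron."""
    # Hidden layer: 35 sigmoid units (the activation choice is an assumption).
    h = [1.0 / (1.0 + math.exp(-(sum(w * v for w, v in zip(row, x)) + b)))
         for row, b in zip(w1, b1)]
    # Output layer: 4 linear units (c/d ratio, MD, PSD, GCL thickness).
    return [sum(w * v for w, v in zip(row, h)) + b for row, b in zip(w2, b2)]

random.seed(0)
n_in, n_hid, n_out = 21, 35, 4    # sizes taken from the abstract's 21:35:4 network
w1 = [[random.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [0.0] * n_hid
w2 = [[random.uniform(-0.1, 0.1) for _ in range(n_hid)] for _ in range(n_out)]
b2 = [0.0] * n_out

x = [0.5] * n_in                  # one illustrative (normalized) patient record
y = mlp_forward(x, w1, b1, w2, b2)
print(len(y))                     # 4 predicted output parameters
```

In practice the weights would be learned by backpropagation on the 65 training records; this sketch only shows the shape of the model.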
14
Al-Jallad N, Ly-Mapes O, Hao P, Ruan J, Ramesh A, Luo J, Wu TT, Dye T, Rashwan N, Ren J, Jang H, Mendez L, Alomeir N, Bullock S, Fiscella K, Xiao J. Artificial intelligence-powered smartphone application, AICaries, improves at-home dental caries screening in children: Moderated and unmoderated usability test. PLOS DIGITAL HEALTH 2022; 1:e0000046. [PMID: 36381137 PMCID: PMC9645586 DOI: 10.1371/journal.pdig.0000046] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Accepted: 04/15/2022] [Indexed: 06/16/2023]
Abstract
Early Childhood Caries (ECC) is the most common childhood disease worldwide and a health disparity among underserved children. ECC is preventable and reversible if detected early, yet many children from low-income families encounter barriers to dental care. An at-home caries detection technology could improve access to dental care regardless of patients' economic status and address the overwhelming prevalence of ECC. Our team has developed a smartphone application (app), AICaries, that uses artificial intelligence (AI)-powered technology to detect caries from photos of children's teeth. We used mixed methods to assess the acceptance, usability, and feasibility of the AICaries app among underserved parent-child dyads. We first conducted moderated usability testing (Step 1) with ten parent-child dyads using "think-aloud" methods to assess the flow and functionality of the app, and analyzed the data to refine the app and procedures. Next, we conducted unmoderated field testing (Step 2) with 32 parent-child dyads to test the app within their natural environment (home) over two weeks. We administered the System Usability Scale (SUS), conducted semi-structured individual interviews with parents, and performed thematic analyses. The AICaries app received an SUS score of 78.4 from participants, indicating excellent acceptance. Notably, the majority (78.5%) of parent-taken photos of children's teeth were of satisfactory quality for caries detection by the AI app. Parents suggested that community health workers could train parents who need assistance in taking high-quality photos of their young children's teeth. Perceived benefits of the AICaries app include convenient at-home caries screening, informative caries risk education, and engagement of family members. Data from this study support a future clinical trial evaluating the real-world impact of this innovative smartphone app on early detection and prevention of ECC among low-income children.
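The 78.4 figure above is a System Usability Scale score. SUS scoring itself is standard: ten items rated 1-5, with odd items contributing (rating − 1) and even items (5 − rating), and the summed contributions scaled by 2.5 onto a 0-100 range. A minimal sketch (the example responses are illustrative, not study data):

```python
def sus_score(responses):
    """System Usability Scale score for 10 items rated 1-5.
    Odd items contribute (rating - 1), even items (5 - rating);
    the total is scaled by 2.5 to lie in the 0-100 range."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# The "best possible" answer pattern yields the maximum score of 100.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

A study-level score such as 78.4 is the mean of the per-participant SUS scores.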
Affiliation(s)
- Nisreen Al-Jallad
- Eastman Institute for Oral Health, University of Rochester Medical Center, Rochester, NY, United States of America
- Oriana Ly-Mapes
- Eastman Institute for Oral Health, University of Rochester Medical Center, Rochester, NY, United States of America
- Peirong Hao
- Department of Computer Science, University of Rochester, United States of America
- Jinlong Ruan
- Department of Computer Science, University of Rochester, United States of America
- Ashwin Ramesh
- Department of Computer Science, University of Rochester, United States of America
- Jiebo Luo
- Department of Computer Science, University of Rochester, United States of America
- Tong Tong Wu
- Department of Biostatistics and Computational Biology, University of Rochester Medical Center, Rochester, United States of America
- Timothy Dye
- Department of Obstetrics and Gynecology, University of Rochester Medical Center, Rochester, United States of America
- Noha Rashwan
- Eastman Institute for Oral Health, University of Rochester Medical Center, Rochester, NY, United States of America
- Johana Ren
- University of Rochester, United States of America
- Hoonji Jang
- Temple University School of Dentistry, Pennsylvania, United States of America
- Luis Mendez
- Eastman Institute for Oral Health, University of Rochester Medical Center, Rochester, NY, United States of America
- Nora Alomeir
- Eastman Institute for Oral Health, University of Rochester Medical Center, Rochester, NY, United States of America
- Kevin Fiscella
- Department of Family Medicine, University of Rochester Medical Center, Rochester, NY, United States of America
- Jin Xiao
- Eastman Institute for Oral Health, University of Rochester Medical Center, Rochester, NY, United States of America
15
Kothandan S, Radhakrishana A, Kuppusamy G. Review on Artificial Intelligence Based Ophthalmic Application. Curr Pharm Des 2022; 28:2150-2160. [PMID: 35619317 DOI: 10.2174/1381612828666220520112240] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2021] [Accepted: 02/14/2022] [Indexed: 11/22/2022]
Abstract
Artificial intelligence is a leading branch of technology and innovation, and its utility in the field of medicine is remarkable: from drug discovery and development to the introduction of products in the market, artificial intelligence can play a role. As people age, they become more prone to eye diseases across the globe, and early diagnosis and detection help minimize the risk of vision loss and preserve quality of life. With the help of artificial intelligence, human workload and man-made errors can be reduced to an extent, so the need for artificial intelligence in ophthalmic care is significant. In this review, we elaborate on the use of artificial intelligence in pharmaceutical product development, mainly its application in ophthalmic care. The high potential of AI to increase success rates in the drug discovery phase has already been established, and its application to drug development, diagnosis, and treatment is reported with scientific evidence in this paper.
Affiliation(s)
- Sudhakar Kothandan
- Department of Pharmaceutics, JSS College of Pharmacy (JSS Academy of Higher Education & Research), Ooty
- Arun Radhakrishana
- Department of Pharmaceutics, JSS College of Pharmacy (JSS Academy of Higher Education & Research), Ooty
- Gowthamarajan Kuppusamy
- Department of Pharmaceutics, JSS College of Pharmacy (JSS Academy of Higher Education & Research), Ooty
16
Trends in Neonatal Ophthalmic Screening Methods. Diagnostics (Basel) 2022; 12:diagnostics12051251. [PMID: 35626406 PMCID: PMC9140133 DOI: 10.3390/diagnostics12051251] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2022] [Revised: 05/12/2022] [Accepted: 05/17/2022] [Indexed: 11/30/2022] Open
Abstract
Neonatal ophthalmic screening should lead to early diagnosis of ocular abnormalities in order to reduce long-term visual impairment in selected diseases. If a treatable pathology is diagnosed within a few days after birth, adequate therapy may be indicated to give the best possible conditions for further development of visual functions. Traditional neonatal ophthalmic screening uses the red reflex test (RRT), which tests the transmittance of light through the optical media towards the retina and the general disposition of the central part of the retina. However, RRT has weaknesses, especially for posterior segment disease. Wide-field digital imaging techniques have shown promising results in detecting anterior and posterior segment pathologies, and particular attention should be paid to telemedicine and artificial intelligence. These methods can improve the specificity and sensitivity of neonatal eye screening, and both are already highly advanced in the diagnosis and monitoring of retinopathy of prematurity.
17
A Particle Swarm Optimization Backtracking Technique Inspired by Science-Fiction Time Travel. AI 2022. [DOI: 10.3390/ai3020024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Artificial intelligence techniques, such as particle swarm optimization, are used to solve problems throughout society. Optimization, in particular, seeks to identify the best possible decision within a search space. Problematically, particle swarm optimization will sometimes have particles that become trapped inside local minima, preventing them from identifying a global optimal solution. As a solution to this issue, this paper proposes a science-fiction inspired enhancement of particle swarm optimization where an impactful iteration is identified and the algorithm is rerun from this point, with a change made to the swarm. The proposed technique is tested using multiple variations on several different functions representing optimization problems and several standard test functions used to test various particle swarm optimization techniques.
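The backtracking idea described above — record the swarm over time, identify an impactful iteration, restore the swarm there, change it, and rerun — can be sketched with a basic global-best PSO. This is an illustrative reconstruction, not the paper's implementation: the inertia and acceleration constants, the sphere test function, the "largest single-step drop in the global best" as the impactful iteration, and a fresh random seed as the change to the swarm are all assumptions.

```python
import random

def sphere(x):
    # Simple benchmark with its global minimum (0) at the origin.
    return sum(v * v for v in x)

def pso(f, dim=2, n=20, iters=60, snapshot=None, seed=1):
    """Basic global-best PSO; can resume from a saved swarm snapshot."""
    rng = random.Random(seed)
    if snapshot is None:
        pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
        vel = [[0.0] * dim for _ in range(n)]
    else:  # "time travel": restart from a previously recorded swarm state
        pos = [p[:] for p in snapshot[0]]
        vel = [v[:] for v in snapshot[1]]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)
    history = []  # (global-best value, swarm state) after each iteration
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=f)
        history.append((f(gbest), ([p[:] for p in pos], [v[:] for v in vel])))
    return f(gbest), history

best1, hist = pso(sphere)
# Take the "impactful iteration" to be the largest single-step improvement in
# the global best, then rerun from that state with a changed swarm (new seed).
drops = [hist[t - 1][0] - hist[t][0] for t in range(1, len(hist))]
t_impact = drops.index(max(drops)) + 1
best2, _ = pso(sphere, snapshot=hist[t_impact][1], seed=2)
```

Because the global best is non-increasing, the recorded drops are non-negative, and the rerun explores a different trajectory from the restored state.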
18
Diagnostic accuracy of current machine learning classifiers for age-related macular degeneration: a systematic review and meta-analysis. Eye (Lond) 2022; 36:994-1004. [PMID: 33958739 PMCID: PMC9046206 DOI: 10.1038/s41433-021-01540-y] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2020] [Revised: 02/23/2021] [Accepted: 04/06/2021] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND AND OBJECTIVE The objective of this study was to systematically review and meta-analyze the diagnostic accuracy of current machine learning classifiers for age-related macular degeneration (AMD). Artificial intelligence diagnostic algorithms can automatically detect and diagnose AMD after training on large sets of fundus or OCT images, offering a cost-effective, simple, and fast means of diagnosis. METHODS MEDLINE, EMBASE, CINAHL, and ProQuest Dissertations and Theses were searched systematically. Conference proceedings of the Association for Research in Vision and Ophthalmology, the American Academy of Ophthalmology, and the Canadian Society of Ophthalmology were also searched. Studies were screened using Covidence software, and data on sensitivity, specificity, and area under the curve were extracted from the included studies. STATA 15.0 was used to conduct the meta-analysis. RESULTS Our search strategy identified 307 records from online databases and 174 records from gray literature. A total of 13 records, covering 64,798 subjects (612,429 images), were used for the quantitative analysis. The pooled estimate for sensitivity was 0.918 [95% CI: 0.678, 0.98] and for specificity 0.888 [95% CI: 0.578, 0.98] for AMD screening using machine learning classifiers. The relative odds of a positive screen test were 89.74 [95% CI: 3.05-2641.59] times higher in AMD cases than in non-AMD cases. The positive likelihood ratio was 8.22 [95% CI: 1.52-44.48] and the negative likelihood ratio was 0.09 [95% CI: 0.02-0.52]. CONCLUSION The included studies show promising diagnostic accuracy for machine learning classifiers for AMD and support their implementation in clinical settings.
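The likelihood ratios above follow from the pooled sensitivity and specificity via the standard definitions LR+ = sens / (1 − spec) and LR− = (1 − sens) / spec, with the diagnostic odds ratio as their quotient. A quick check against the reported pooled estimates (note the naive ratio gives ≈88.8 rather than the reported 89.74, since the meta-analytic odds ratio is pooled directly rather than computed from the pooled estimates):

```python
def screening_metrics(sens, spec):
    """Likelihood ratios and diagnostic odds ratio from sensitivity/specificity."""
    lr_pos = sens / (1 - spec)   # how much a positive test raises disease odds
    lr_neg = (1 - sens) / spec   # how much a negative test lowers disease odds
    dor = lr_pos / lr_neg        # diagnostic odds ratio
    return lr_pos, lr_neg, dor

# Pooled estimates reported above for AMD screening classifiers.
lr_pos, lr_neg, dor = screening_metrics(0.918, 0.888)
print(round(lr_pos, 2), round(lr_neg, 2), round(dor, 1))  # 8.2 0.09 88.8
```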
19
Detecting glaucoma with only OCT: Implications for the clinic, research, screening, and AI development. Prog Retin Eye Res 2022; 90:101052. [PMID: 35216894 DOI: 10.1016/j.preteyeres.2022.101052] [Citation(s) in RCA: 31] [Impact Index Per Article: 15.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Revised: 01/21/2022] [Accepted: 02/01/2022] [Indexed: 12/25/2022]
Abstract
A method for detecting glaucoma based only on optical coherence tomography (OCT) is of potential value for routine clinical decisions, for inclusion criteria for research studies and trials, for large-scale clinical screening, as well as for the development of artificial intelligence (AI) decision models. Recent work suggests that the OCT probability (p-) maps, also known as deviation maps, can play a key role in an OCT-based method. However, artifacts seen on the p-maps of healthy control eyes can resemble patterns of damage due to glaucoma. We document in section 2 that these glaucoma-like artifacts are relatively common and are probably due to normal anatomical variations in healthy eyes. We also introduce a simple anatomical artifact model based upon known anatomical variations to help distinguish these artifacts from actual glaucomatous damage. In section 3, we apply this model to an OCT-based method for detecting glaucoma that starts with an examination of the retinal nerve fiber layer (RNFL) p-map. While this method requires a judgment by the clinician, sections 4 and 5 describe automated methods that do not. In section 4, the simple model helps explain the relatively poor performance of commonly employed summary statistics, including circumpapillary RNFL thickness. In section 5, the model helps account for the success of an AI deep learning model, which in turn validates our focus on the RNFL p-map. Finally, in section 6 we consider the implications of OCT-based methods for the clinic, research, screening, and the development of AI models.
20
Gour N, Tanveer M, Khanna P. Challenges for ocular disease identification in the era of artificial intelligence. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06770-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
21
Kozioł M, Nowak MS, Koń B, Udziela M, Szaflik JP. Regional analysis of diabetic retinopathy and co-existing social and demographic factors in the overall population of Poland. Arch Med Sci 2022; 18:320-327. [PMID: 35316912 PMCID: PMC8924831 DOI: 10.5114/aoms/131264] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/20/2020] [Accepted: 12/07/2020] [Indexed: 01/23/2023] Open
Abstract
INTRODUCTION The aim of our study was to analyse the regional differences in diabetic retinopathy (DR) prevalence and its co-existing social and demographic factors in the overall population of Poland in the year 2017. MATERIAL AND METHODS Data from all levels of healthcare services at public and private institutions recorded in the National Health Fund database were evaluated. International Classification of Diseases codes were used to identify patients with type 1 and type 2 diabetes mellitus (DM) and with DR. Moran's I statistics and Spatial Autoregressive (SAR) model allowed us to understand the distribution of DR prevalence and its possible association with environmental and demographic exposures. RESULTS In total, 310,815 individuals with diabetic retinopathy (DR) were diagnosed in the year 2017 in Poland. Of them, 174,384 (56.11%) were women, 221,144 (71.15%) lived in urban areas, and 40,231 (12.94%) and 270,584 (87.06%) had type 1 and type 2 DM, respectively. The analysis of the SAR model showed that the significant factors for the occurrence of DR in particular counties were a higher level of average income and a higher number of ophthalmologic consultations per 10,000 adults. CONCLUSIONS The analyses of social, demographic, and systemic factors co-existing with DR revealed that level of income and access to ophthalmologic and diabetic services are crucial in DR prevalence in Poland.
Affiliation(s)
- Michał S. Nowak
- Provisus Eye Clinic, Czestochowa, Poland
- Saint Family Hospital Medical Center, Lodz, Poland
- Beata Koń
- Collegium of Economic Analysis, SGH Warsaw School of Economics, Warsaw, Poland
- Monika Udziela
- Department of Ophthalmology, Medical University of Warsaw, Public Ophthalmic Clinical Hospital (SPKSO), Warsaw, Poland
- Jacek P. Szaflik
- Department of Ophthalmology, Medical University of Warsaw, Public Ophthalmic Clinical Hospital (SPKSO), Warsaw, Poland
22
Fully-automated atrophy segmentation in dry age-related macular degeneration in optical coherence tomography. Sci Rep 2021; 11:21893. [PMID: 34751189 PMCID: PMC8575929 DOI: 10.1038/s41598-021-01227-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2021] [Accepted: 09/23/2021] [Indexed: 11/09/2022] Open
Abstract
Age-related macular degeneration (AMD) is a progressive retinal disease causing vision loss. A more detailed characterization of its atrophic form became possible with the introduction of Optical Coherence Tomography (OCT). However, manual atrophy quantification in 3D retinal scans is a tedious task and prevents taking full advantage of the accurate retinal depiction. In this study we developed a fully automated algorithm segmenting Retinal Pigment Epithelial and Outer Retinal Atrophy (RORA) in dry AMD on macular OCT. 62 SD-OCT scans from eyes with atrophic AMD (57 patients) were collected and split into train and test sets. The training set was used to develop a Convolutional Neural Network (CNN). The performance of the algorithm was established by cross-validation and comparison with the test set, for which ground truth was annotated by two graders. Additionally, the effect of using retinal layer segmentation during training was investigated. The algorithm achieved mean Dice scores of 0.881 and 0.844, sensitivity of 0.850 and 0.915, and precision of 0.928 and 0.799 in comparison with Expert 1 and Expert 2, respectively. Using retinal layer segmentation improved model performance. The proposed model identified RORA with performance matching human experts and has the potential to rapidly identify atrophy with high consistency.
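The Dice, sensitivity, and precision figures above are standard overlap metrics between a predicted segmentation and a grader's annotation. A minimal sketch on toy flattened binary masks (the masks are illustrative, not study data):

```python
def overlap_metrics(pred, truth):
    """Dice, sensitivity (recall), and precision for binary masks."""
    tp = sum(p and t for p, t in zip(pred, truth))          # true positives
    fp = sum(p and not t for p, t in zip(pred, truth))      # false positives
    fn = sum(not p and t for p, t in zip(pred, truth))      # false negatives
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    return dice, sensitivity, precision

# Toy masks: predicted atrophy pixels vs. grader annotation.
pred  = [1, 1, 1, 0, 0, 1, 0, 0]
truth = [1, 1, 0, 1, 0, 1, 0, 0]
print(overlap_metrics(pred, truth))  # (0.75, 0.75, 0.75)
```

In the study these counts would be accumulated over the voxels of each OCT scan rather than a short list.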
23
Xiao J, Luo J, Ly-Mapes O, Wu TT, Dye T, Al Jallad N, Hao P, Ruan J, Bullock S, Fiscella K. Assessing a Smartphone App (AICaries) That Uses Artificial Intelligence to Detect Dental Caries in Children and Provides Interactive Oral Health Education: Protocol for a Design and Usability Testing Study. JMIR Res Protoc 2021; 10:e32921. [PMID: 34529582 PMCID: PMC8571694 DOI: 10.2196/32921] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2021] [Accepted: 09/14/2021] [Indexed: 01/09/2023] Open
Abstract
BACKGROUND Early childhood caries (ECC) is the most common chronic childhood disease, with nearly 1.8 billion new cases per year worldwide. ECC afflicts approximately 55% of low-income and minority US preschool children, resulting in harmful short- and long-term effects on health and quality of life. Clinical evidence shows that caries is reversible if detected and addressed in its early stages, yet many low-income US children have poor access to pediatric dental services; in this underserved group, dental caries is often diagnosed at a late stage when extensive restorative treatment is needed. With more than 85% of lower-income Americans owning a smartphone, mobile health tools such as smartphone apps hold promise for patient-driven early detection and risk control of ECC. OBJECTIVE This study aims to use a community-based participatory research strategy to refine and test the usability of an artificial intelligence-powered smartphone app, AICaries, to be used by children's parents/caregivers for dental caries detection in their children. METHODS Our previous work led to the prototype of AICaries, which offers artificial intelligence-powered caries detection using photos of children's teeth taken with the parents' smartphones, interactive caries risk assessment, and personalized education on reducing children's ECC risk. This study will use a two-step qualitative design to assess the feedback and usability of the app components and app flow, and whether parents can take photos of their children's teeth on their own. In step 1, we will conduct individual usability tests among 10 pairs of end users (parents with young children) to facilitate app module modification and fine-tuning, using think-aloud and instant data analysis strategies. In step 2, we will conduct unmoderated field testing among 32 pairs of parents and their young children to assess the usability and acceptability of AICaries, including the number and quality of teeth images taken by parents and parents' satisfaction. RESULTS The study is funded by the National Institute of Dental and Craniofacial Research, United States. It received institutional review board approval and launched in August 2021. Data collection and analysis are expected to conclude by March 2022 and June 2022, respectively. CONCLUSIONS Using AICaries, parents can use their regular smartphones to take photos of their children's teeth and detect ECC so that they can seek treatment at an early and reversible stage; they can also obtain essential knowledge on reducing their children's caries risk. Data from this study will support a future clinical trial that evaluates the real-world impact of using this smartphone app on early detection and prevention of ECC among low-income children. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) PRR1-10.2196/32921.
Affiliation(s)
- Jin Xiao
- Eastman Institute for Oral Health, University of Rochester, Rochester, NY, United States
- Jiebo Luo
- Computer Science, University of Rochester, Rochester, NY, United States
- Oriana Ly-Mapes
- Eastman Institute for Oral Health, University of Rochester, Rochester, NY, United States
- Tong Tong Wu
- Department of Biostatistics and Computational Biology, University of Rochester Medical Center, Rochester, NY, United States
- Timothy Dye
- Department of Obstetrics and Gynecology, University of Rochester Medical Center, Rochester, NY, United States
- Nisreen Al Jallad
- Eastman Institute for Oral Health, University of Rochester, Rochester, NY, United States
- Peirong Hao
- Computer Science, University of Rochester, Rochester, NY, United States
- Jinlong Ruan
- Computer Science, University of Rochester, Rochester, NY, United States
- Kevin Fiscella
- Department of Family Medicine, University of Rochester Medical Center, Rochester, NY, United States
24
Ajitha S, Akkara JD, Judy MV. Identification of glaucoma from fundus images using deep learning techniques. Indian J Ophthalmol 2021; 69:2702-2709. [PMID: 34571619 PMCID: PMC8597466 DOI: 10.4103/ijo.ijo_92_21] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023] Open
Abstract
Purpose Glaucoma is one of the preeminent causes of incurable visual disability and blindness across the world due to elevated intraocular pressure within the eyes. Accurate and timely diagnosis is essential for preventing visual disability, but manual detection of glaucoma is a challenging task that needs expertise and years of experience. Methods In this paper, we suggest an accurate algorithm using a convolutional neural network (CNN) for the automatic diagnosis of glaucoma. In this work, 1113 fundus images, consisting of 660 normal and 453 glaucomatous images from four databases, were used. A 13-layer CNN is trained on this dataset to extract vital features, and these features are classified as either glaucomatous or normal during testing. The proposed algorithm is implemented in Google Colab, which made the task straightforward without hours spent installing the environment and supporting libraries. To evaluate the effectiveness of the algorithm, the dataset was divided into 70% for training, 20% for validation, and the remaining 10% for testing; the training images were augmented to 12,012 fundus images. Results Our model with the SoftMax classifier achieved an accuracy of 93.86%, sensitivity of 85.42%, specificity of 100%, and precision of 100%. In contrast, the model with the SVM classifier achieved accuracy, sensitivity, specificity, and precision of 95.61%, 89.58%, 100%, and 100%, respectively. Conclusion These results demonstrate the ability of the deep learning model to identify glaucoma from fundus images and suggest that the proposed system can help ophthalmologists provide a fast, accurate, and reliable diagnosis of glaucoma.
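The 70/20/10 split described in this abstract can be sketched as a shuffle-and-slice; the seed and the placeholder image IDs below are illustrative assumptions, not the authors' partitioning code.

```python
import random

def split_dataset(items, fractions=(0.7, 0.2, 0.1), seed=42):
    """Shuffle and split items into train/validation/test by the given fractions."""
    items = list(items)
    random.Random(seed).shuffle(items)   # fixed seed for a reproducible split
    n_train = int(len(items) * fractions[0])
    n_val = int(len(items) * fractions[1])
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]       # remainder goes to the test set
    return train, val, test

# 1113 fundus images split 70/20/10 as in the study (integer IDs stand in
# for image files).
train, val, test = split_dataset(range(1113))
print(len(train), len(val), len(test))  # 779 222 112
```

Augmentation (rotations, flips, etc.) would then be applied to the training partition only, to avoid leaking augmented copies of validation or test images.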
Affiliation(s)
- S Ajitha
- Department of Computer Applications, Cochin University of Science and Technology, Kochi, Kerala, India
- John D Akkara
- Ophthalmology Department, Sri Ramachandra Institute of Higher Education and Research, Chennai, Tamil Nadu, India
- M V Judy
- Department of Computer Applications, Cochin University of Science and Technology, Kochi, Kerala, India
25
Chattopadhyay AK, Chattopadhyay S. VIRDOCD: A VIRtual DOCtor to predict dengue fatality. EXPERT SYSTEMS 2021. [DOI: 10.1111/exsy.12796] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
26
Accuracy of Deep Learning Algorithms for the Diagnosis of Retinopathy of Prematurity by Fundus Images: A Systematic Review and Meta-Analysis. J Ophthalmol 2021; 2021:8883946. [PMID: 34394982 PMCID: PMC8363465 DOI: 10.1155/2021/8883946] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2020] [Revised: 06/30/2021] [Accepted: 07/27/2021] [Indexed: 12/14/2022] Open
Abstract
Background Retinopathy of prematurity (ROP) occurs in preterm infants and may contribute to blindness. Deep learning (DL) models have been used for ophthalmologic diagnoses. We performed a systematic review and meta-analysis of published evidence to summarize and evaluate the diagnostic accuracy of DL algorithms for ROP by fundus images. Methods We searched PubMed, EMBASE, Web of Science, and Institute of Electrical and Electronics Engineers Xplore Digital Library on June 13, 2021, for studies using a DL algorithm to distinguish individuals with ROP of different grades, which provided accuracy measurements. The pooled sensitivity and specificity values and the area under the curve (AUC) of summary receiver operating characteristics curves (SROC) summarized overall test performance. The performances in validation and test datasets were assessed together and separately. Subgroup analyses were conducted between the definition and grades of ROP. Threshold and nonthreshold effects were tested to assess biases and evaluate accuracy factors associated with DL models. Results Nine studies with fifteen classifiers were included in our meta-analysis. A total of 521,586 objects were applied to DL models. For combined validation and test datasets in each study, the pooled sensitivity and specificity were 0.953 (95% confidence intervals (CI): 0.946-0.959) and 0.975 (0.973-0.977), respectively, and the AUC was 0.984 (0.978-0.989). For the validation dataset and test dataset, the AUC was 0.977 (0.968-0.986) and 0.987 (0.982-0.992), respectively. In the subgroup analysis of ROP vs. normal and differentiation of two ROP grades, the AUC was 0.990 (0.944-0.994) and 0.982 (0.964-0.999), respectively. Conclusions Our study shows that DL models can play an essential role in detecting and grading ROP with high sensitivity, specificity, and repeatability. The application of a DL-based automated system may improve ROP screening and diagnosis in the future.
27
Abstract
As resources in the healthcare environment continue to wane, leaders are seeking ways to continue to provide quality care bounded by the constraints of a reduced budget. This manuscript synthesizes the experience from a number of institutions to provide the healthcare leadership with an understanding of the value of an enterprise imaging program. The value of such a program extends across the entire health system. It leads to operational efficiencies through infrastructure and application consolidation and the creation of focused support capabilities with increased depth of skill. An enterprise imaging program provides a centralized foundation for all phases of image management from every image-producing specialty. Through centralization, standardized image exchange functions can be provided to all image producers. Telehealth services can be more tightly integrated into the electronic medical record. Mobile platforms can be utilized for image viewing and sharing by patients and providers. Mobile tools can also be utilized for image upload directly into the centralized image repository. Governance and data standards are more easily distributed, setting the stage for artificial intelligence and data analytics. Increased exposure to all image producers provides opportunities for cybersecurity optimization and increased awareness.
28
Gupta K, Reddy S. Heart, Eye, and Artificial Intelligence: A Review. Cardiol Res 2021; 12:132-139. [PMID: 34046105] [PMCID: PMC8139752] [DOI: 10.14740/cr1179]
Abstract
Heart disease continues to be the leading cause of death in the USA. Deep learning-based artificial intelligence (AI) methods have become increasingly common in studying the various factors involved in cardiovascular disease. The usage of retinal scanning techniques to diagnose retinal diseases, such as diabetic retinopathy, age-related macular degeneration, glaucoma and others, using fundus photographs and optical coherence tomography angiography (OCTA) has been extensively documented. Researchers are now looking to combine the power of AI with the non-invasive ease of retinal scanning to examine the workings of the heart and predict changes in the macrovasculature based on microvascular features and function. In this review, we summarize the current state of the field in using retinal imaging to diagnose cardiovascular issues and other diseases.
Affiliation(s)
- Kush Gupta: Kasturba Medical College, Mangalore, India

29
Dutt S, Sivaraman A, Savoy F, Rajalakshmi R. Insights into the growing popularity of artificial intelligence in ophthalmology. Indian J Ophthalmol 2021; 68:1339-1346. [PMID: 32587159] [PMCID: PMC7574057] [DOI: 10.4103/ijo.ijo_1754_19]
Abstract
Artificial intelligence (AI) in healthcare is the use of computer algorithms to analyze complex medical data, detect associations, and provide diagnostic support outputs. AI and deep learning (DL) find obvious applications in fields like ophthalmology, where huge amounts of image-based data need to be analyzed and the outcomes of image recognition are reasonably well defined. AI and DL have found important roles in ophthalmology in the early screening and detection of conditions such as diabetic retinopathy (DR), age-related macular degeneration (ARMD), retinopathy of prematurity (ROP), glaucoma, and other ocular disorders; they have made successful inroads into early screening and diagnosis and appear promising, with the advantages of high screening accuracy, consistency, and scalability. AI algorithms nevertheless need equally skilled manpower: trained optometrists and ophthalmologists (annotators) must provide accurate ground truth for training the images. The basis of diagnoses made by AI algorithms is mechanical, and some human intervention is necessary for further interpretation. This review was conducted after tracing the history of AI in ophthalmology across multiple research databases and aims to summarise the journey of AI in ophthalmology so far, with close attention to the most crucial studies conducted. The article further highlights the potential impact of AI in ophthalmology, its pitfalls, and how to use it optimally for the maximum benefit of ophthalmologists, healthcare systems, and patients alike.
Affiliation(s)
- Sreetama Dutt: Department of Research & Development, Remidio Innovative Solutions, Bengaluru, Karnataka, India
- Anand Sivaraman: Department of Research & Development, Remidio Innovative Solutions, Bengaluru, Karnataka, India
- Florian Savoy: Department of Artificial Intelligence, Medios Technologies, Singapore
- Ramachandran Rajalakshmi: Department of Ophthalmology, Dr. Mohan's Diabetes Specialities Centre, Madras Diabetes Research Foundation, Chennai, Tamil Nadu, India

30
Pathological Myopia Image Recognition Strategy Based on Data Augmentation and Model Fusion. J Healthc Eng 2021; 2021:5549779. [PMID: 34035883] [PMCID: PMC8118733] [DOI: 10.1155/2021/5549779]
Abstract
The automatic diagnosis of various retinal diseases based on fundus images is important in supporting clinical decision-making. Convolutional neural networks (CNNs) have achieved remarkable results in such tasks. However, their high expressive capacity can lead to overfitting, so data augmentation (DA) techniques have been proposed to prevent overfitting while enriching datasets. Recent CNN architectures with more parameters render traditional DA techniques insufficient. In this study, we proposed a new DA strategy based on multimodal fusion (DAMF), which integrates the standard DA method, the data-disrupting method, the data-mixing method, and the autoadjustment method to enhance the images in the training dataset and create new training images. In addition, we fused the results of the classifiers by voting on the basis of DAMF, which further improved the generalization ability of the model. The experimental results showed that the optimal DA mode could be matched to the image dataset through our DA strategy. We evaluated DAMF on the iChallenge-PM dataset. Finally, we compared training results between 12 DAMF-processed datasets and the original training dataset. Compared with the original dataset, the optimal DAMF achieved an accuracy increase of 2.85% on iChallenge-PM.
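The classifier-fusion step this abstract describes (voting over models trained on differently augmented data) can be sketched as a simple majority vote. The three classifiers below are hypothetical stand-ins, not the paper's actual models:

```python
from collections import Counter

def vote(predictions):
    """Majority vote across classifier outputs; ties fall back to the
    first-seen label (Counter.most_common is insertion-ordered on ties)."""
    return Counter(predictions).most_common(1)[0][0]

def fused_predict(models, image):
    """Run every model on the same image and fuse by voting."""
    return vote([m(image) for m in models])

# Hypothetical classifiers, each trained on a differently augmented dataset:
m1 = lambda img: "PM"       # predicts pathological myopia
m2 = lambda img: "PM"
m3 = lambda img: "normal"
label = fused_predict([m1, m2, m3], image=None)  # -> "PM"
```

Because each model sees differently augmented data, their errors tend to decorrelate, which is what makes the vote improve generalization.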
31
Chan EJJ, Najjar RP, Tang Z, Milea D. Deep Learning for Retinal Image Quality Assessment of Optic Nerve Head Disorders. Asia Pac J Ophthalmol (Phila) 2021; 10:282-288. [PMID: 34383719] [DOI: 10.1097/apo.0000000000000404]
Abstract
Deep learning (DL)-based retinal image quality assessment (RIQA) algorithms have been gaining popularity, as a solution to reduce the frequency of diagnostically unusable images. Most existing RIQA tools target retinal conditions, with a dearth of studies looking into RIQA models for optic nerve head (ONH) disorders. The recent success of DL systems in detecting ONH abnormalities on color fundus images prompts the development of tailored RIQA algorithms for these specific conditions. In this review, we discuss recent progress in DL-based RIQA models in general and the need for RIQA models tailored for ONH disorders. Finally, we propose suggestions for such models in the future.
Affiliation(s)
- Raymond P Najjar: Duke-NUS School of Medicine, Singapore; Visual Neuroscience Group, Singapore Eye Research Institute, Singapore
- Zhiqun Tang: Visual Neuroscience Group, Singapore Eye Research Institute, Singapore
- Dan Milea: Duke-NUS School of Medicine, Singapore; Visual Neuroscience Group, Singapore Eye Research Institute, Singapore; Ophthalmology Department, Singapore National Eye Centre, Singapore; Rigshospitalet, Copenhagen University, Denmark

32
Tseng RMWW, Gunasekeran DV, Tan SSH, Rim TH, Lum E, Tan GSW, Wong TY, Tham YC. Considerations for Artificial Intelligence Real-World Implementation in Ophthalmology: Providers' and Patients' Perspectives. Asia Pac J Ophthalmol (Phila) 2021; 10:299-306. [PMID: 34383721] [DOI: 10.1097/apo.0000000000000400]
Abstract
Artificial Intelligence (AI), in particular deep learning, has made waves in the health care industry, with several prominent examples shown in ophthalmology. Despite the burgeoning reports on the development of new AI algorithms for detection and management of various eye diseases, few have reached the stage of regulatory approval for real-world implementation. To better enable real-world translation of AI systems, it is important to understand the demands, needs, and concerns of both health care professionals and patients, as providers and recipients of clinical care are impacted by these solutions. This review outlines the advantages and concerns of incorporating AI in ophthalmology care delivery, from both the providers' and patients' perspectives, and the key enablers for seamless transition to real-world implementation.
Affiliation(s)
- Dinesh Visva Gunasekeran: Singapore Eye Research Institute, Singapore National Eye Center, Singapore; Yong Loo Lin School of Medicine, National University of Singapore (NUS), Singapore
- Tyler Hyungtaek Rim: Singapore Eye Research Institute, Singapore National Eye Center, Singapore; Duke-NUS Medical School, Singapore
- Gavin S W Tan: Singapore Eye Research Institute, Singapore National Eye Center, Singapore; Duke-NUS Medical School, Singapore
- Tien Yin Wong: Singapore Eye Research Institute, Singapore National Eye Center, Singapore; Duke-NUS Medical School, Singapore
- Yih-Chung Tham: Singapore Eye Research Institute, Singapore National Eye Center, Singapore; Duke-NUS Medical School, Singapore

33
Park S, Kim H, Kim L, Kim JK, Lee IS, Ryu IH, Kim Y. Artificial intelligence-based nomogram for small-incision lenticule extraction. Biomed Eng Online 2021; 20:38. [PMID: 33892729] [PMCID: PMC8063457] [DOI: 10.1186/s12938-021-00867-7]
Abstract
Background: Small-incision lenticule extraction (SMILE) is a surgical procedure for the refractive correction of myopia and astigmatism that has been reported as safe and effective. However, over- and under-correction still occur after SMILE, and nomograms are needed to achieve optimal refractive results. Ophthalmologists derive nomograms by analyzing preoperative refractive data with individual knowledge accumulated over years of experience. Our aim was to accurately predict the sphere, cylinder, and astigmatism-axis nomograms for SMILE by applying machine learning algorithms.
Methods: We retrospectively analyzed data from 3,034 eyes comprising four categorical features and 28 numerical features selected from 46 features. Multiple linear regression, decision tree, AdaBoost, XGBoost, and multi-layer perceptron models were employed to develop the nomogram models for sphere, cylinder, and astigmatism axis. Root-mean-square error (RMSE) and accuracy scores were evaluated and compared, and the feature importance of the best models was calculated.
Results: AdaBoost achieved the highest performance, with RMSEs of 0.1378, 0.1166, and 5.17 for the sphere, cylinder, and astigmatism axis, respectively. The proportions of predictions with error below 0.25 D for the sphere and cylinder nomograms and below 25° for the astigmatism-axis nomograms were 0.969, 0.976, and 0.994, respectively. The most important feature in all nomograms was the preoperative manifest refraction; for the sphere and cylinder nomograms, the next most important feature was the surgeon.
Conclusions: Among the machine learning algorithms evaluated, AdaBoost exhibited the highest performance in predicting the sphere, cylinder, and astigmatism-axis nomograms for SMILE. The study demonstrates the feasibility of applying artificial intelligence (AI) to SMILE nomograms; such assistance may improve surgical results and help prevent nomogram misjudgment.
Supplementary Information: The online version contains supplementary material available at 10.1186/s12938-021-00867-7.
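The two evaluation metrics this abstract reports (RMSE, and the fraction of predictions whose error falls within a clinical tolerance such as 0.25 D or 25°) can be sketched in a few lines. The refraction values below are hypothetical, not data from the study:

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error between paired true and predicted values."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def tolerance_accuracy(y_true, y_pred, tol):
    """Fraction of predictions with absolute error below `tol`
    (e.g. 0.25 D for sphere/cylinder, 25 degrees for the axis)."""
    return sum(abs(t - p) < tol for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical sphere nomogram targets and model predictions, in dioptres:
true_sph = [-4.00, -2.75, -5.50, -3.25]
pred_sph = [-3.90, -2.80, -5.10, -3.30]
err = rmse(true_sph, pred_sph)                          # ~0.21 D
acc = tolerance_accuracy(true_sph, pred_sph, tol=0.25)  # 3 of 4 within 0.25 D
```

Reporting both matters: RMSE summarizes average error magnitude, while the tolerance accuracy says how often a prediction is clinically usable.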
Affiliation(s)
- Seungbin Park: Center for Bionics, Korea Institute of Science and Technology, Seoul, Korea
- Hannah Kim: Center for Bionics, Korea Institute of Science and Technology, Seoul, Korea; Division of Bio-Medical Science & Technology, KIST School, Korea University of Science and Technology, Seoul, Korea
- Laehyun Kim: Center for Bionics, Korea Institute of Science and Technology, Seoul, Korea
- Youngjun Kim: Center for Bionics, Korea Institute of Science and Technology, Seoul, Korea; Division of Bio-Medical Science & Technology, KIST School, Korea University of Science and Technology, Seoul, Korea

34
Digital Image Processing and Development of Machine Learning Models for the Discrimination of Corneal Pathology: An Experimental Model. Photonics 2021. [DOI: 10.3390/photonics8040118]
Abstract
Machine learning (ML) has an impressive capacity to learn and analyze a large volume of data. This study aimed to train different algorithms to discriminate between healthy and pathologic corneal images by evaluating digitally processed spectral-domain optical coherence tomography (SD-OCT) corneal images. A set of 22 SD-OCT images belonging to a random set of corneal pathologies was compared to 71 healthy corneas (control group). A binary classification method was applied where three approaches of ML were explored. Once all images were analyzed, representative areas from every digital image were also extracted, processed and analyzed for a statistical feature comparison between healthy and pathologic corneas. The best performance was obtained from transfer learning—support vector machine (TL-SVM) (AUC = 0.94, SPE 88%, SEN 100%) and transfer learning—random forest (TL- RF) method (AUC = 0.92, SPE 84%, SEN 100%), followed by convolutional neural network (CNN) (AUC = 0.84, SPE 77%, SEN 91%) and random forest (AUC = 0.77, SPE 60%, SEN 95%). The highest diagnostic accuracy in classifying corneal images was achieved with the TL-SVM and the TL-RF models. In image classification, CNN was a strong predictor. This pilot experimental study developed a systematic mechanized system to discern pathologic from healthy corneas using a small sample.
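The AUC values this abstract reports can be understood through the Mann-Whitney formulation: the AUC is the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. The classifier scores below are hypothetical, not the study's data:

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability a positive outranks a negative
    (Mann-Whitney U), counting ties as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model scores for pathologic vs healthy corneas:
pathologic = [0.92, 0.85, 0.77, 0.60]
healthy = [0.40, 0.55, 0.62, 0.30]
value = auc(pathologic, healthy)  # 15 of 16 pairs correctly ordered
```

This pairwise definition also clarifies why AUC is threshold-free, whereas the sensitivity/specificity figures in the abstract each depend on a chosen operating point.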
35
Perepelkina T, Fulton AB. Artificial Intelligence (AI) Applications for Age-Related Macular Degeneration (AMD) and Other Retinal Dystrophies. Semin Ophthalmol 2021; 36:304-309. [PMID: 33764255] [DOI: 10.1080/08820538.2021.1896756]
Abstract
Artificial intelligence (AI), with its subdivisions (machine and deep learning), is a new branch of computer science that has shown impressive results across a variety of domains. The applications of AI to medicine and biology are being widely investigated. Medical specialties that rely heavily on images, including radiology, dermatology, oncology and ophthalmology, were the first to explore AI approaches in analysis and diagnosis. Applications of AI in ophthalmology have concentrated on diseases with high prevalence, such as diabetic retinopathy, retinopathy of prematurity, age-related macular degeneration (AMD), and glaucoma. Here we provide an overview of AI applications for diagnosis, classification, and clinical management of AMD and other macular dystrophies.
Affiliation(s)
- Tatiana Perepelkina: Department of Ophthalmology, Boston Children's Hospital, Harvard Medical School, Boston, United States
- Anne B Fulton: Department of Ophthalmology, Boston Children's Hospital, Harvard Medical School, Boston, United States

36
Oke I, VanderVeen D. Machine Learning Applications in Pediatric Ophthalmology. Semin Ophthalmol 2021; 36:210-217. [PMID: 33641598] [DOI: 10.1080/08820538.2021.1890151]
Abstract
Purpose: To describe emerging applications of machine learning (ML) in pediatric ophthalmology, with an emphasis on the diagnosis and treatment of disorders affecting visual development.
Methods: Literature review of studies applying ML algorithms to problems in pediatric ophthalmology.
Results: At present, the ML literature emphasizes applications in retinopathy of prematurity. However, there are increasing efforts to apply ML techniques in the diagnosis of amblyogenic conditions such as pediatric cataracts, strabismus, and high refractive error.
Conclusions: A greater understanding of the principles governing ML will enable pediatric eye care providers to apply the methodology to unexplored challenges within the subspecialty.
Affiliation(s)
- Isdin Oke: Department of Ophthalmology, Boston Children's Hospital, Boston, MA, USA; Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Deborah VanderVeen: Department of Ophthalmology, Boston Children's Hospital, Boston, MA, USA; Department of Ophthalmology, Harvard Medical School, Boston, MA, USA

37
Prabhakar B, Singh RK, Yadav KS. Artificial intelligence (AI) impacting diagnosis of glaucoma and understanding the regulatory aspects of AI-based software as medical device. Comput Med Imaging Graph 2021; 87:101818. [DOI: 10.1016/j.compmedimag.2020.101818]
38
Zimmerman C, Bruggeman B, LaPorte A, Kaushal S, Stalvey M, Beauchamp G, Dayton K, Hiers P, Filipp SL, Gurka MJ, Silverstein JH, Jacobsen LM. Real-World Screening for Retinopathy in Youth With Type 1 Diabetes Using a Nonmydriatic Fundus Camera. Diabetes Spectr 2021; 34:27-33. [PMID: 33627991] [PMCID: PMC7887527] [DOI: 10.2337/ds20-0017]
Abstract
OBJECTIVE: To assess the use of a portable retinal camera in diabetic retinopathy (DR) screening in multiple settings and the presence of associated risk factors among children, adolescents, and young adults with type 1 diabetes.
DESIGN AND METHODS: Five hundred youth with type 1 diabetes of at least 1 year's duration were recruited from clinics, diabetes camp, and a diabetes conference and underwent retinal imaging using a nonmydriatic fundus camera. Retinal characterization was performed remotely by a licensed ophthalmologist. Risk factors for DR development were evaluated by a patient-reported questionnaire and medical chart review.
RESULTS: Of the 500 recruited subjects aged 9-26 years (mean 14.9, SD 3.8), 10 cases of DR were identified (nine mild and one moderate nonproliferative DR), with 100% of images of gradable quality. The prevalence of DR was 2.04% (95% CI 0.78-3.29), at an average age of 20.2 years, with the youngest affected subject being 17.1 years of age. The rate of DR was higher, at 6.5%, with diabetes duration >10 years (95% CI 0.86-12.12, P = 0.0002). In subjects with DR, the average duration of diabetes was 12.1 years (SD 4.6, range 6.2-20.0), and in a subgroup of clinic-only subjects (n = 114), elevated blood pressure in the year before screening was associated with DR (P = 0.0068).
CONCLUSION: This study in a large cohort of subjects with type 1 diabetes demonstrates that older adolescents and young adults (>17 years) with longer disease duration (>6 years) are at risk for DR development, and screening using a portable retinal camera is feasible in clinics and other locations. Recent elevated blood pressure was a risk factor in an analyzed subgroup.
Affiliation(s)
- Chelsea Zimmerman: Division of Pediatric Endocrinology, University of Florida, Gainesville, FL
- Brittany Bruggeman: Division of Pediatric Endocrinology, University of Florida, Gainesville, FL
- Amanda LaPorte: University of Florida College of Medicine, Gainesville, FL
- Michael Stalvey: Division of Pediatric Endocrinology, University of Alabama at Birmingham, Birmingham, AL
- Giovanna Beauchamp: Division of Pediatric Endocrinology, University of Alabama at Birmingham, Birmingham, AL
- Kristin Dayton: Division of Pediatric Endocrinology, University of Florida, Gainesville, FL
- Paul Hiers: Division of Pediatric Endocrinology, University of Florida, Gainesville, FL
- Stephanie L. Filipp: Department of Health Outcomes and Policy, University of Florida, Gainesville, FL
- Matthew J. Gurka: Department of Health Outcomes and Policy, University of Florida, Gainesville, FL
- Laura M. Jacobsen: Division of Pediatric Endocrinology, University of Florida, Gainesville, FL

39
Affiliation(s)
- Paulo Schor: Universidade Federal de São Paulo, São Paulo, SP, Brazil

40
Jayadev C, Shetty R. Artificial intelligence in laser refractive surgery - Potential and promise! Indian J Ophthalmol 2020; 68:2650-2651. [PMID: 33229635] [PMCID: PMC7856980] [DOI: 10.4103/ijo.ijo_3304_20]
Affiliation(s)
- Chaitra Jayadev: Narayana Nethralaya Eye Institute, 121/C, Chord Road, Rajajinagar, Bangalore - 560 010, Karnataka, India
- Rohit Shetty: Narayana Nethralaya Eye Institute, 121/C, Chord Road, Rajajinagar, Bangalore - 560 010, Karnataka, India

41
Suri R, Neupane YR, Jain GK, Kohli K. Recent theranostic paradigms for the management of Age-related macular degeneration. Eur J Pharm Sci 2020; 153:105489. [PMID: 32717428] [DOI: 10.1016/j.ejps.2020.105489]
Abstract
Age-related macular degeneration (AMD), a degenerative eye disease that affects the central portion of the retina (macula), is one of the leading causes of blindness worldwide, especially in the elderly population. It is classified mainly into wet and dry forms. With expanding knowledge about the underlying pathophysiology of the disease, various treatment strategies are being employed to halt disease progression. Hitherto, there is no ideal therapy that can cure the disease completely, and targeting the posterior segment of the eye remains a challenge. The purpose of this review is to summarize recent advances in the management and treatment strategies (therapies, delivery systems, and diagnostic tools) for AMD, namely molecular targeting, stem cell therapy, nanotechnology, and exosomes, with special reference to newer technologies such as artificial intelligence and 3D printing. Furthermore, the role of diet and nutritional supplements in the prevention and treatment of the disease is highlighted. The alarming increase in this disorder around the globe demands exhaustive research and investigation in the treatment zone. The review additionally directs attention to the challenges and future perspectives of different treatment approaches for AMD.
Affiliation(s)
- Reshal Suri: Department of Pharmaceutics, School of Pharmaceutical Education & Research, Jamia Hamdard, New Delhi, 110062, India
- Yub Raj Neupane: Department of Pharmacy, National University of Singapore, 117559, Singapore
- Gaurav Kumar Jain: Department of Pharmaceutics, School of Pharmaceutical Education & Research, Jamia Hamdard, New Delhi, 110062, India
- Kanchan Kohli: Department of Pharmaceutics, School of Pharmaceutical Education & Research, Jamia Hamdard, New Delhi, 110062, India

42
Detection of Diabetic Retinopathy Using Bichannel Convolutional Neural Network. J Ophthalmol 2020; 2020:9139713. [PMID: 32655944] [PMCID: PMC7322591] [DOI: 10.1155/2020/9139713]
Abstract
Deep learning on fundus photographs has emerged as a practical and cost-effective technique for the automatic screening and diagnosis of severe diabetic retinopathy (DR). The entropy image of the luminance of a fundus photograph has been demonstrated to increase detection performance for referable DR using a convolutional neural network (CNN)-based system. In this paper, the entropy image computed from the green component of the fundus photograph is proposed. In addition, image enhancement by unsharp masking (UM) is utilized for preprocessing before calculating the entropy images. A bichannel CNN incorporating the features of both the entropy image of the gray level and that of the green component preprocessed by UM is also proposed, to further improve the detection performance for referable DR by deep learning.
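The entropy-image idea named in this abstract can be sketched minimally: map each pixel to the Shannon entropy of its local neighbourhood, computed here over the green channel. The 3x3 window, the nested-list representation, and the toy values are assumptions for illustration; the paper's exact window size and preprocessing are not given here.

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (bits) of a list of pixel intensities."""
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def entropy_image(green, k=1):
    """Map each pixel to the entropy of its (2k+1)x(2k+1) neighbourhood,
    clipped at the image border."""
    h, w = len(green), len(green[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            patch = [green[ii][jj]
                     for ii in range(max(0, i - k), min(h, i + k + 1))
                     for jj in range(max(0, j - k), min(w, j + k + 1))]
            out[i][j] = shannon_entropy(patch)
    return out

# Toy 4x4 green-channel patch: flat regions give zero entropy, the edge
# between the dark and bright columns gives high entropy.
g = [[10, 10, 10, 200],
     [10, 10, 10, 200],
     [10, 10, 10, 200],
     [10, 10, 10, 200]]
e = entropy_image(g)
```

High local entropy marks texture and edges (vessels, lesions), which is why the entropy image can be a more informative CNN input than raw intensities.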
43
Gupta V, Rajendran A, Narayanan R, Chawla S, Kumar A, Palanivelu MS, Muralidhar NS, Jayadev C, Pappuru R, Khatri M, Agarwal M, Aurora A, Bhende P, Bhende M, Bawankule P, Rishi P, Vinekar A, Trehan HS, Biswas J, Agarwal R, Natarajan S, Verma L, Ramasamy K, Giridhar A, Rishi E, Talwar D, Pathangey A, Azad R, Honavar SG. Evolving consensus on managing vitreo-retina and uvea practice in post-COVID-19 pandemic era. Indian J Ophthalmol 2020; 68:962-973. [PMID: 32461407] [PMCID: PMC7508071] [DOI: 10.4103/ijo.ijo_1404_20]
Abstract
The COVID-19 pandemic has brought new challenges to the health care community. Many super-speciality practices are planning to re-open after the lockdown is lifted. However, there is considerable apprehension about adopting practices that would safeguard patients, ophthalmologists, and healthcare workers, while taking adequate care of equipment to minimize damage. The aim of this article is to develop preferred practice patterns, by building consensus among lead experts, that would help institutes as well as individual vitreo-retina and uveitis specialists restart their practices with confidence. As the situation remains volatile, these suggestions are evolving and likely to change as our understanding and experience improve. Further, the suggestions are for routine patients, as COVID-19-positive patients may be managed in designated hospitals per local protocols. These suggestions must also be implemented in compliance with local rules and regulations.
Affiliation(s)
- Vishali Gupta: Advanced Eye Centre, Post Graduate Institute of Medical Education and Research, Chandigarh, India
- Atul Kumar: Dr. R.P. Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India
- Rupesh Agarwal: National Healthcare Group Eye Institute, Tan Tock Seng Hospital, Singapore
- Rajvardhan Azad: Regional Institute of Ophthalmology, Indira Gandhi Institute of Medical Sciences, Patna, India

44
Thakoor KA, Li X, Tsamis E, Sajda P, Hood DC. Enhancing the Accuracy of Glaucoma Detection from OCT Probability Maps using Convolutional Neural Networks. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2019:2036-2040. [PMID: 31946301] [DOI: 10.1109/embc.2019.8856899]
Abstract
We describe and assess convolutional neural network (CNN) models for detection of glaucoma based upon optical coherence tomography (OCT) retinal nerve fiber layer (RNFL) probability maps. CNNs pretrained on natural images performed comparably to CNNs trained solely on OCT data, and all models showed high accuracy in detecting glaucoma, with receiver operating characteristic area under the curve (AUC) scores ranging from 0.930 to 0.989. Attention-based heat maps of CNN regions of interest suggest that these models could be improved by incorporation of blood vessel location information. Such CNN models have the potential to work in tandem with human experts to maintain overall eye health and expedite detection of blindness-causing eye disease.
45
Affiliation(s)
- Ashish Ahuja: Vitreo Retina Consultant, Sadhu Kamal Eye Hospital, Mumbai, Maharashtra, India
- Dheeraj Kewlani: Department of Ophthalmology, TS Misra Medical College and Hospital, Lucknow, Uttar Pradesh, India

46
Yoo TK, Choi JY, Kim HK. A generative adversarial network approach to predicting postoperative appearance after orbital decompression surgery for thyroid eye disease. Comput Biol Med 2020; 118:103628. [PMID: 32174327] [DOI: 10.1016/j.compbiomed.2020.103628]
Abstract
PURPOSE: Orbital decompression for thyroid-associated ophthalmopathy (TAO) is an ophthalmic plastic surgery technique to prevent optic neuropathy and reduce exophthalmos. Because the postoperative appearance can change significantly, decisions about decompression surgery are sometimes difficult. Herein, we present a deep learning technique to synthesize a realistic postoperative appearance for orbital decompression surgery.
METHODS: This data-driven approach is based on a conditional generative adversarial network (GAN) that transforms preoperative facial input images into predicted postoperative images. The conditional GAN model was trained on 109 pairs of matched pre- and postoperative facial images through data augmentation.
RESULTS: When the conditional variable was changed, the synthesized facial image was transferred from a preoperative image to a postoperative image. The predicted postoperative images were similar to the ground-truth postoperative images. We also found that GAN-synthesized images can improve deep learning classification performance between the pre- and postoperative status when the training dataset is small. However, clinicians noted the relatively low quality of the synthesized images on readout.
CONCLUSIONS: Using this framework, we synthesized TAO facial images that can be queried by conditioning on the orbital decompression status. The synthesized postoperative images may help patients assess the impact of decompression surgery, although the quality of the generated images should be further improved. The proposed GAN-based deep learning technique can rapidly synthesize realistic images of the postoperative appearance, suggesting that a GAN can function as a decision support tool for plastic and cosmetic surgery.
Affiliation(s)
- Tae Keun Yoo
- Department of Ophthalmology, Aerospace Medical Center, Republic of Korea Air Force, Cheongju, South Korea
- Joon Yul Choi
- Epilepsy Center, Neurological Institute, Cleveland Clinic, Cleveland, OH, USA
- Hong Kyu Kim
- Department of Ophthalmology, Dankook University Hospital, Dankook University College of Medicine, Cheonan, South Korea

47
Murtagh P, Greene G, O'Brien C. Current applications of machine learning in the screening and diagnosis of glaucoma: a systematic review and Meta-analysis. Int J Ophthalmol 2020; 13:149-162. [PMID: 31956584 DOI: 10.18240/ijo.2020.01.22] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2019] [Accepted: 09/23/2019] [Indexed: 12/22/2022] Open
Abstract
AIM: To compare the effectiveness of two well-described machine learning modalities, optical coherence tomography (OCT) and fundal photography, in terms of diagnostic accuracy in the screening and diagnosis of glaucoma.
METHODS: A systematic search of the Embase and PubMed databases was undertaken up to 1 February 2019. Articles were identified alongside their reference lists, and relevant studies were aggregated. A meta-analysis of diagnostic accuracy in terms of area under the receiver operating characteristic curve (AUROC) was performed. For studies that did not report an AUROC, reported sensitivity and specificity values were combined to create a summary ROC curve, which was included in the meta-analysis.
RESULTS: A total of 23 studies were deemed suitable for inclusion in the meta-analysis: 10 papers from the OCT cohort and 13 from the fundal photography cohort. Random-effects meta-analysis gave a pooled AUROC of 0.957 (95% CI 0.917 to 0.997) for fundal photographs and 0.923 (95% CI 0.889 to 0.957) for the OCT cohort. The slightly higher accuracy of the fundal photography methods is likely attributable to the much larger database of images used to train the models (59 788 vs 1743).
CONCLUSION: No demonstrable difference was shown between the diagnostic accuracy of the two modalities. The ease of access and lower cost of fundal photograph acquisition make it the more appealing option for screening on a global scale; however, further studies are needed, owing largely to the poor quality of the studies in the fundal photography cohort.
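Pooled AUROCs like those above come from a random-effects model that weights each study by the inverse of its within-study variance plus an estimated between-study variance. A minimal sketch of DerSimonian-Laird pooling, using made-up per-study AUROCs and standard errors rather than the review's actual data:

```python
# Illustrative only: DerSimonian-Laird random-effects pooling.
# The per-study estimates and standard errors below are hypothetical.

def dersimonian_laird(estimates, std_errs):
    """Pool study estimates; return (pooled, 95% CI low, 95% CI high)."""
    w = [1 / se**2 for se in std_errs]                 # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, estimates)) / sw
    q = sum(wi * (e - fixed)**2 for wi, e in zip(w, estimates))  # Cochran's Q
    df = len(estimates) - 1
    c = sw - sum(wi**2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                      # between-study variance
    w_star = [1 / (se**2 + tau2) for se in std_errs]   # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, estimates)) / sum(w_star)
    se_pooled = (1 / sum(w_star)) ** 0.5
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

# Hypothetical per-study AUROCs with their standard errors
aurocs = [0.96, 0.91, 0.94, 0.89]
ses = [0.02, 0.03, 0.025, 0.04]
pooled, lo, hi = dersimonian_laird(aurocs, ses)
print(f"pooled AUROC {pooled:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```

When the studies are homogeneous (Q ≤ df), tau² collapses to zero and the result reduces to the fixed-effect inverse-variance pool.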
Affiliation(s)
- Patrick Murtagh
- Department of Ophthalmology, Mater Misericordiae University Hospital, Eccles Street, Dublin D07 R2WY, Ireland
- Garrett Greene
- RCSI Education and Research Centre, Beaumont Hospital, Dublin D05 AT88, Ireland
- Colm O'Brien
- Department of Ophthalmology, Mater Misericordiae University Hospital, Eccles Street, Dublin D07 R2WY, Ireland

48
Sumaroka A, Garafalo AV, Semenov EP, Sheplock R, Krishnan AK, Roman AJ, Jacobson SG, Cideciyan AV. Treatment Potential for Macular Cone Vision in Leber Congenital Amaurosis Due to CEP290 or NPHP5 Mutations: Predictions From Artificial Intelligence. Invest Ophthalmol Vis Sci 2019; 60:2551-2562. [PMID: 31212307 PMCID: PMC6586080 DOI: 10.1167/iovs.19-27156] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022] Open
Abstract
Purpose: To use supervised machine learning to predict visual function from retinal structure in retinitis pigmentosa (RP) and apply these estimates to CEP290- and NPHP5-associated Leber congenital amaurosis (LCA) to determine the potential for functional improvement.
Methods: Patients with RP (n = 20) and LCA due to CEP290 (n = 12) or NPHP5 (n = 6) mutations were studied. A patient with CEP290 mutations but mild retinal degeneration was included. RP patients had cone-mediated macular function. A machine learning technique was used to associate perimetric sensitivities with local structure in RP patients. Models trained on RP data were applied to predict visual function in LCA.
Results: The RP and LCA patients had comparable retinal structure. RP patients had peak sensitivity at the fovea surrounded by decreasing sensitivity. Machine learning could successfully predict perimetry results from segmented or unsegmented optical coherence tomography (OCT) input. Applying the machine learning predictions to LCA within the residual macular island of photoreceptor structure revealed differences between predicted and measured sensitivities, defining the treatment potential. In patients with retained vision, the treatment potential was 4.6 ± 2.9 dB at the fovea but 16.4 ± 4.4 dB at the parafovea. In patients with limited or no vision, the treatment potential was 17.6 ± 9.4 dB.
Conclusions: Cone vision improvement potential in LCA due to CEP290 or NPHP5 mutations is predictable from retinal structure using a machine learning approach. This should allow individual prediction of the maximal efficacy in clinical trials and guide decisions about dosing. Similar strategies can be used in other retinal degenerations to estimate the extent and location of treatment potential.
Affiliation(s)
- Alexander Sumaroka, Alexandra V Garafalo, Evelyn P Semenov, Rebecca Sheplock, Arun K Krishnan, Alejandro J Roman, Samuel G Jacobson, Artur V Cideciyan
- Scheie Eye Institute, Department of Ophthalmology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States

49
Yang J, Zhang C, Wang E, Chen Y, Yu W. Utility of a public-available artificial intelligence in diagnosis of polypoidal choroidal vasculopathy. Graefes Arch Clin Exp Ophthalmol 2019; 258:17-21. [DOI: 10.1007/s00417-019-04493-x] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2019] [Revised: 06/27/2019] [Accepted: 09/20/2019] [Indexed: 01/29/2023] Open
50
Patil SV. Artificial intelligence in ophthalmology: Is it just hype with no substance or the real McCoy? Indian J Ophthalmol 2019; 67:1251-1252. [PMID: 31238486 PMCID: PMC6611229 DOI: 10.4103/ijo.ijo_32_19] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023] Open
Affiliation(s)
- Santosh V Patil
- Department of Ophthalmology, Gulbarga Institute of Medical Sciences, Gulbarga, Karnataka, India