1.
Kumar N, Srivastava R. Deep learning in structural bioinformatics: current applications and future perspectives. Brief Bioinform 2024;25:bbae042. PMID: 38701422; PMCID: PMC11066934; DOI: 10.1093/bib/bbae042.
Abstract
In this review article, we explore the transformative impact of deep learning (DL) on structural bioinformatics, emphasizing its pivotal role in a scientific revolution driven by extensive data, accessible toolkits and robust computing resources. As big data continue to advance, DL is poised to become an integral component in healthcare and biology, revolutionizing analytical processes. Our comprehensive review provides detailed insights into DL, featuring specific demonstrations of its notable applications in bioinformatics. We address challenges tailored for DL, spotlight recent successes in structural bioinformatics and present a clear exposition of DL, from basic shallow neural networks to advanced models such as convolutional, recurrent and transformer neural networks. This paper discusses the emerging use of DL for understanding biomolecular structures, anticipating ongoing developments and applications in the realm of structural bioinformatics.
Affiliation(s)
- Niranjan Kumar: School of Computational and Integrative Sciences, Jawaharlal Nehru University, New Delhi, India
- Rakesh Srivastava: Center for Computational Natural Sciences and Bioinformatics, International Institute of Information Technology, Hyderabad, India
2.
Hussain S, Chua J, Wong D, Lo J, Kadziauskiene A, Asoklis R, Barbastathis G, Schmetterer L, Liu Y. Predicting glaucoma progression using deep learning framework guided by generative algorithm. Sci Rep 2023;13:19960. PMID: 37968437; PMCID: PMC10651936; DOI: 10.1038/s41598-023-46253-2.
Abstract
Glaucoma is a slowly progressing optic neuropathy that may eventually lead to blindness. To help patients receive customized treatment, it is important to predict how quickly the disease will progress. Structural assessment using optical coherence tomography (OCT) can be used to visualize glaucomatous optic nerve and retinal damage, while functional visual field (VF) tests can be used to measure the extent of vision loss. However, VF testing is patient-dependent and highly inconsistent, making it difficult to track glaucoma progression. In this work, we developed a multimodal deep learning model comprising a convolutional neural network (CNN) and a long short-term memory (LSTM) network for glaucoma progression prediction. We used OCT images, VF values, and demographic and clinical data of 86 glaucoma patients with five visits over 12 months. The proposed method was used to predict VF changes 12 months after the first visit by combining past multimodal inputs with synthesized future images generated using a generative adversarial network (GAN). The patients were classified into two classes based on their VF mean deviation (MD) decline: slow progressors (< 3 dB) and fast progressors (> 3 dB). We showed that our generative model-based approach can achieve the best AUC of 0.83 for predicting progression 6 months earlier. Further, the use of synthetic future images enabled the model to accurately predict vision loss even earlier (9 months earlier) with an AUC of 0.81, compared with using only structural (AUC = 0.68) or only functional measures (AUC = 0.72). This study provides valuable insights into the potential of using synthetic follow-up OCT images for early detection of glaucoma progression.
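As a purely illustrative aside, the two-class progression labeling described above (slow vs. fast progressors by MD decline) can be sketched in a few lines; the function name, the default threshold argument and the example values are ours, not the authors':

```python
def label_progressor(md_baseline_db, md_followup_db, threshold_db=3.0):
    """Label an eye a 'slow' or 'fast' progressor from the decline in
    visual-field mean deviation (MD, in dB) over the follow-up window,
    mirroring the study's < 3 dB vs. > 3 dB split."""
    decline = md_baseline_db - md_followup_db  # positive means worsening
    return "fast" if decline > threshold_db else "slow"

print(label_progressor(-2.0, -4.0))  # 2.0 dB decline -> slow
print(label_progressor(-2.0, -6.5))  # 4.5 dB decline -> fast
```

The interesting modeling work in the paper is, of course, in predicting the future MD, not in applying this threshold.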
Affiliation(s)
- Shaista Hussain: Institute of High Performance Computing, A*STAR, Singapore
- Jacqueline Chua: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Academic Clinical Program, Duke-NUS Medical School, Singapore
- Damon Wong: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore; Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Aiste Kadziauskiene: Clinic of Ears, Nose, Throat and Eye Diseases, Institute of Clinical Medicine, Faculty of Medicine, Vilnius University, Vilnius, Lithuania; Department of Eye Diseases, Vilnius University Hospital Santaros Klinikos, Vilnius, Lithuania
- Rimvydas Asoklis: Clinic of Ears, Nose, Throat and Eye Diseases, Institute of Clinical Medicine, Faculty of Medicine, Vilnius University, Vilnius, Lithuania; Department of Eye Diseases, Vilnius University Hospital Santaros Klinikos, Vilnius, Lithuania
- George Barbastathis: Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA; Singapore-MIT Alliance for Research and Technology (SMART) Centre, Singapore
- Leopold Schmetterer: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Academic Clinical Program, Duke-NUS Medical School, Singapore; Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland; Department of Ophthalmology, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore; School of Chemistry, Chemical Engineering and Biotechnology, Nanyang Technological University, Singapore; Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria; Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Liu Yong: Institute of High Performance Computing, A*STAR, Singapore
3.
Kim H, Lee J, Moon S, Kim S, Kim T, Jin SW, Kim JL, Shin J, Lee SU, Jang G, Hu Y, Park JR. Visual field prediction using a deep bidirectional gated recurrent unit network model. Sci Rep 2023;13:11154. PMID: 37429862; DOI: 10.1038/s41598-023-37360-1.
Abstract
Although deep learning architectures have been used to process sequential data, only a few studies have explored the usefulness of deep learning algorithms to detect glaucoma progression. Here, we proposed a bidirectional gated recurrent unit (Bi-GRU) algorithm to predict visual field loss. In total, 5413 eyes from 3321 patients were included in the training set, whereas 1272 eyes from 1272 patients were included in the test set. Data from five consecutive visual field examinations were used as input; the sixth visual field examination was compared with the prediction by the Bi-GRU. The performance of the Bi-GRU was compared with the performances of conventional linear regression (LR) and long short-term memory (LSTM) algorithms. Overall prediction error was significantly lower for the Bi-GRU than for the LR and LSTM algorithms. In pointwise prediction, the Bi-GRU showed the lowest prediction error among the three models at most test locations. Furthermore, the Bi-GRU was the model least affected by worsening reliability indices and glaucoma severity. Accurate prediction of visual field loss using the Bi-GRU algorithm may facilitate decision-making regarding the treatment of patients with glaucoma.
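For readers unfamiliar with gated recurrent units, the per-step GRU update and the bidirectional pass that gives the Bi-GRU its name can be sketched as follows. This is a minimal scalar sketch, not the paper's model: the weight dictionary, function names and example values are ours, and a real Bi-GRU uses weight matrices and learned parameters.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, p):
    """One GRU update for scalar input/state; p holds the weights.
    z: update gate, r: reset gate, n: candidate state."""
    z = sigmoid(p["wz"] * x + p["uz"] * h + p["bz"])
    r = sigmoid(p["wr"] * x + p["ur"] * h + p["br"])
    n = math.tanh(p["wn"] * x + p["un"] * (r * h) + p["bn"])
    return (1.0 - z) * n + z * h  # interpolate old state and candidate

def bi_gru(seq, p):
    """Bidirectional pass: encode the sequence left-to-right and
    right-to-left, then return both final states (a real model would
    concatenate them and feed a prediction head)."""
    hf = hb = 0.0
    for x in seq:
        hf = gru_step(x, hf, p)
    for x in reversed(seq):
        hb = gru_step(x, hb, p)
    return hf, hb

# Arbitrary illustrative weights, not trained parameters:
params = dict(wz=0.5, uz=0.1, bz=0.0, wr=0.3, ur=0.2, br=0.0,
              wn=0.8, un=0.4, bn=0.0)
hf, hb = bi_gru([1.0, -0.5, 0.25], params)
```

The two directions see the same five-examination history in opposite orders, which is what lets a Bi-GRU condition each prediction on both earlier and later context within the input window.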
Grants
- HI19C0481 Ministry of Health & Welfare, Republic of Korea
- HC19C0276 Ministry of Health & Welfare, Republic of Korea
- NRF-2021R1I1A1A01057767 Korean government
- NRF-2021R1A2B5B03087097 Korean government
- NRF-2017R1A5A1015722M Korean government
- NRF-2022R1A5A1033624 Korean government
Affiliation(s)
- Hwayeong Kim: Department of Ophthalmology, Pusan National University College of Medicine, Busan, Korea
- Jiwoong Lee: Department of Ophthalmology, Pusan National University College of Medicine, Busan, Korea; Biomedical Research Institute, Pusan National University Hospital, Busan, Korea
- Sangwoo Moon: Department of Ophthalmology, Pusan National University College of Medicine, Busan, Korea
- Sangil Kim: Department of Mathematics, Pusan National University, Busan, Republic of Korea
- Taehyeong Kim: Department of Mathematics, Pusan National University, Busan, Republic of Korea
- Sang Wook Jin: Department of Ophthalmology, Dong-A University College of Medicine, Busan, Korea
- Jung Lim Kim: Department of Ophthalmology, Busan Paik Hospital, Inje University College of Medicine, Busan, Korea
- Jonghoon Shin: Department of Ophthalmology, Pusan National University Yangsan Hospital, Pusan National University School of Medicine, Yangsan, Korea
- Seung Uk Lee: Department of Ophthalmology, Kosin University College of Medicine, Busan, Korea
- Geunsoo Jang: Nonlinear Dynamics and Mathematical Application Center, Kyungpook National University, Daegu, Korea
- Yuanmeng Hu: Department of Mathematics, Pusan National University, Busan, Republic of Korea
- Jeong Rye Park: Department of Mathematics, Kyungpook National University, 80, Daehak-ro, Buk-gu, Daegu, 41566, Republic of Korea
4.
Park JR, Kim S, Kim T, Jin SW, Kim JL, Shin J, Lee SU, Jang G, Hu Y, Lee JW. Data preprocessing and augmentation improved visual field prediction of recurrent neural network with multi-central datasets. Ophthalmic Res 2023;66:978-991. PMID: 37231880; PMCID: PMC10357387; DOI: 10.1159/000531144.
Abstract
INTRODUCTION: The purpose of this study was to determine whether data preprocessing and augmentation could improve visual field (VF) prediction of a recurrent neural network (RNN) with multi-central datasets.
METHODS: This retrospective study collected data from five glaucoma services between June 2004 and January 2021. From an initial dataset of 331,691 VFs, we considered reliable VF tests with fixed intervals. Since the VF monitoring interval is highly variable, we applied data augmentation using multiple sets of data for patients with more than eight VFs. We obtained 5,430 VFs from 463 patients and 13,747 VFs from 1,076 patients by setting the fixed test interval to 365 ± 60 days (D = 365) and 180 ± 60 days (D = 180), respectively. Five consecutive VFs were provided to the constructed RNN as input, and the sixth VF was compared with the output of the RNN. The performance of the periodic RNN (D = 365) was compared with that of an aperiodic RNN. The performance of the RNN with 6 long short-term memory (LSTM) cells (D = 180) was compared with that of the RNN with 5 LSTM cells. To compare prediction performance, the root mean square error (RMSE) and mean absolute error (MAE) of the total deviation value (TDV) were calculated as accuracy metrics.
RESULTS: The performance of the periodic model (D = 365) improved significantly over that of the aperiodic model. Overall prediction error (MAE) was 2.56 ± 0.46 dB versus 3.26 ± 0.41 dB (periodic vs. aperiodic; p < 0.001). A higher perimetric frequency was better for predicting future VF: the overall prediction error (RMSE) was 3.15 ± 2.29 dB versus 3.42 ± 2.25 dB (D = 180 vs. D = 365). Increasing the number of input VFs improved the performance of VF prediction in the D = 180 periodic model (3.15 ± 2.29 dB vs. 3.18 ± 2.34 dB, p < 0.001). The 6-LSTM D = 180 periodic model was more robust to worsening VF reliability and disease severity. The prediction accuracy worsened as the false-negative rate increased and the mean deviation decreased.
CONCLUSION: Data preprocessing with augmentation improved the VF prediction of the RNN model using multi-center datasets. The periodic RNN model predicted the future VF significantly better than the aperiodic RNN model.
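The two accuracy metrics used in this study, MAE and RMSE over pointwise total deviation values, are standard and easy to state exactly; a minimal sketch (the example deviation values in dB are ours, not study data):

```python
import math

def mae(pred, actual):
    """Mean absolute error over pointwise total deviation values (dB)."""
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(pred)

def rmse(pred, actual):
    """Root mean square error over the same pointwise values (dB)."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred))

pred = [-3.1, -5.0, -2.2, -7.8]    # predicted total deviation, dB
actual = [-2.5, -6.0, -2.0, -8.8]  # measured total deviation, dB
print(round(mae(pred, actual), 3))   # ~0.7 dB
print(round(rmse(pred, actual), 3))  # ~0.775 dB
```

RMSE penalizes large pointwise misses more heavily than MAE, which is why the study reports both.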
Affiliation(s)
- Jeong Rye Park: Finance Fishery Manufacture Industrial Center on Big Data, Pusan National University, Busan, South Korea
- Sangil Kim: Department of Mathematics, Pusan National University, Busan, South Korea
- Taehyeong Kim: Department of Mathematics, Pusan National University, Busan, South Korea
- Sang Wook Jin: Department of Ophthalmology, Dong-A University College of Medicine, Busan, South Korea
- Jung Lim Kim: Department of Ophthalmology, Busan Paik Hospital, Inje University College of Medicine, Busan, South Korea
- Jonghoon Shin: Department of Ophthalmology, Pusan National University Yangsan Hospital, Pusan National University School of Medicine, Yangsan, South Korea
- Seung Uk Lee: Department of Ophthalmology, Kosin University College of Medicine, Busan, South Korea
- Geunsoo Jang: Department of Mathematics, Pusan National University, Busan, South Korea
- Yuanmeng Hu: Department of Mathematics, Pusan National University, Busan, South Korea
- Ji Woong Lee: Department of Ophthalmology, Pusan National University College of Medicine, Busan, South Korea; Biomedical Research Institute, Pusan National University Hospital, Busan, South Korea
5.
Thakur S, Dinh LL, Lavanya R, Quek TC, Liu Y, Cheng CY. Use of artificial intelligence in forecasting glaucoma progression. Taiwan J Ophthalmol 2023;13:168-183. PMID: 37484617; PMCID: PMC10361424; DOI: 10.4103/tjo.tjo-d-23-00022.
Abstract
Artificial intelligence (AI) has been widely used in ophthalmology for disease detection and for monitoring progression. In glaucoma research, AI has been used to understand progression patterns and forecast disease trajectory based on analysis of clinical and imaging data. Techniques such as machine learning, natural language processing, and deep learning have been employed for this purpose. The results from studies using AI to forecast glaucoma progression, however, vary considerably due to dataset constraints, the lack of a standard progression definition, and differences in methodology and approach. While glaucoma detection and screening have been the focus of most research published in the last few years, in this narrative review we focus on studies that specifically address glaucoma progression. We also summarize the current evidence, highlight studies that have translational potential, and provide suggestions on how future research that addresses glaucoma progression can be improved.
Affiliation(s)
- Sahil Thakur: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Linh Le Dinh: Institute of High Performance Computing, The Agency for Science, Technology and Research, Singapore
- Raghavan Lavanya: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Ten Cheer Quek: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Yong Liu: Institute of High Performance Computing, The Agency for Science, Technology and Research, Singapore
- Ching-Yu Cheng: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Department of Ophthalmology, Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
6.
A deep learning model incorporating spatial and temporal information successfully detects visual field worsening using a consensus based approach. Sci Rep 2023;13:1041. PMID: 36658309; PMCID: PMC9852268; DOI: 10.1038/s41598-023-28003-6.
Abstract
Glaucoma is a leading cause of irreversible blindness, and its worsening is most often monitored with visual field (VF) testing. Deep learning models (DLM) may help identify VF worsening consistently and reproducibly. In this study, we developed and investigated the performance of a DLM on a large population of glaucoma patients. We included 5099 patients (8705 eyes) seen at one institute from June 1990 to June 2020 who had VF testing as well as clinician assessment of VF worsening. Since there is no gold standard to identify VF worsening, we used a consensus of six commonly used algorithmic methods, which include global regressions as well as pointwise change in the VFs. We used the consensus decision as a reference standard to train/test the DLM and evaluate clinician performance. 80%, 10%, and 10% of patients were included in the training, validation, and test sets, respectively. Of the 873 eyes in the test set, 309 (60.6%) were from females, and the median age was 62.4 years (IQR 54.8-68.9). The DLM achieved an AUC of 0.94 (95% CI 0.93-0.99). Even after removing the 6 most recent VFs, providing fewer data points to the model, the DLM successfully identified worsening with an AUC of 0.78 (95% CI 0.72-0.84). Clinician assessment of worsening (based on documentation from the health record at the time of the final VF in each eye) had an AUC of 0.64 (95% CI 0.63-0.66). Both the DLM and clinicians performed worse when the initial disease was more severe. These data show that a DLM trained on a consensus of methods to define worsening successfully identified VF worsening and could help guide clinicians during routine clinical care.
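A consensus reference standard of the kind described, built from several algorithmic "worsening" calls, amounts to a vote across methods. A minimal sketch follows; the majority-vote rule, function name and boolean call pattern are our illustration (the paper's six methods are full algorithms, not booleans, and its exact consensus rule may differ):

```python
def consensus_worsening(votes, quorum=None):
    """Combine boolean 'worsening' calls from several progression
    algorithms (e.g. global regression, pointwise change) into one
    reference label. Defaults to a strict majority; pass `quorum`
    to require a different number of agreeing methods."""
    needed = quorum if quorum is not None else len(votes) // 2 + 1
    return sum(votes) >= needed

# Hypothetical calls from six methods for one eye:
calls = [True, True, True, False, True, False]
print(consensus_worsening(calls))  # 4 of 6 agree -> True
```

Training against such a consensus, rather than any single method, is what lets the DLM average out the idiosyncrasies of individual progression definitions.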
7.
Jaumandreu L, Antón A, Pazos M, Rodriguez-Uña I, Rodriguez Agirretxe I, Martinez de la Casa JM, Ayala ME, Parrilla-Vallejo M, Dyrda A, Díez-Álvarez L, Rebolleda G, Muñoz-Negrete FJ. Glaucoma progression. Clinical practice guide. Arch Soc Esp Oftalmol 2023;98:40-57. PMID: 36089479; DOI: 10.1016/j.oftale.2022.08.003.
Abstract
OBJECTIVE: To provide general recommendations that serve as a guide for the evaluation and management of glaucomatous progression in daily clinical practice, based on the existing quality of clinical evidence.
METHODS: After defining the objectives and scope of the guide, the working group was formed and structured clinical questions were formulated following the PICO (Patient, Intervention, Comparison, Outcomes) format. Once all the existing clinical evidence had been independently evaluated with the AMSTAR 2 (Assessment of Multiple Systematic Reviews) and Cochrane risk-of-bias tools by at least two reviewers, recommendations were formulated following the Scottish Intercollegiate Guidelines Network (SIGN) methodology.
RESULTS: Recommendations with their corresponding levels of evidence that may be useful in the interpretation of, and decision-making related to, the different methods for the detection of glaucomatous progression are presented.
CONCLUSIONS: Although for many of the questions the level of available scientific evidence is not very high, this clinical practice guideline offers an updated review of the different existing aspects related to the evaluation and management of glaucomatous progression.
Affiliation(s)
- L Jaumandreu: Servicio de Oftalmología, Hospital Universitario Ramón y Cajal, IRYCIS, Universidad de Alcalá, Alcalá de Henares, Madrid, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- A Antón: Institut Català de la Retina (ICR), Barcelona, Spain; Universitat Internacional de Catalunya (UIC), Barcelona, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- M Pazos: Institut Clínic d'Oftalmologia, Hospital Clínic de Barcelona, IDIBAPS, Universitat de Barcelona, Barcelona, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- I Rodriguez-Uña: Instituto Oftalmológico Fernández-Vega, Universidad de Oviedo, Oviedo, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- I Rodriguez Agirretxe: Servicio de Oftalmología, Hospital Universitario Donostia, San Sebastián, Gipuzkoa, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- J M Martinez de la Casa: Servicio de Oftalmología, Hospital Clinico San Carlos, Instituto de investigación sanitaria del Hospital Clínico San Carlos (IsISSC), IIORC, Universidad Complutense de Madrid, Madrid, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- M E Ayala: Institut Català de la Retina (ICR), Barcelona, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- M Parrilla-Vallejo: Servicio de Oftalmología, Hospital Universitario Virgen Macarena, Sevilla, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- A Dyrda: Institut Català de la Retina (ICR), Barcelona, Spain
- L Díez-Álvarez: Servicio de Oftalmología, Hospital Universitario Ramón y Cajal, IRYCIS, Universidad de Alcalá, Alcalá de Henares, Madrid, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- G Rebolleda: Servicio de Oftalmología, Hospital Universitario Ramón y Cajal, IRYCIS, Universidad de Alcalá, Alcalá de Henares, Madrid, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- F J Muñoz-Negrete: Servicio de Oftalmología, Hospital Universitario Ramón y Cajal, IRYCIS, Universidad de Alcalá, Alcalá de Henares, Madrid, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
8.
Comprehensive review on the use of artificial intelligence in ophthalmology and future research directions. Diagnostics (Basel) 2022;13:100. PMID: 36611392; PMCID: PMC9818832; DOI: 10.3390/diagnostics13010100.
Abstract
BACKGROUND: Having several applications in medicine, and in ophthalmology in particular, artificial intelligence (AI) tools have been used to detect visual function deficits, thus playing a key role in diagnosing eye diseases and in predicting the evolution of these common and disabling diseases. AI tools, e.g., artificial neural networks (ANNs), are progressively involved in the detection and customized control of ophthalmic diseases. This review analyzes the studies that refer to the efficiency of AI in medicine and especially in ophthalmology.
MATERIALS AND METHODS: We conducted a comprehensive review in order to collect all accounts published between 2015 and 2022 that refer to these applications of AI in medicine and especially in ophthalmology. Neural networks have a major role in establishing the need to initiate preliminary anti-glaucoma therapy to stop the advance of the disease.
RESULTS: Different surveys in the reviewed literature show the remarkable benefit of these AI tools in ophthalmology in evaluating the visual field, optic nerve, and retinal nerve fiber layer, thus ensuring higher precision in detecting advances in glaucoma and retinal shifts in diabetes. We identified 1762 applications of artificial intelligence in ophthalmology among review and research articles (301 PubMed, 144 Scopus, 445 Web of Science, 872 ScienceDirect). Of these, we analyzed 70 articles and review papers (diabetic retinopathy, N = 24; glaucoma, N = 24; DMLV, N = 15; other pathologies, N = 7) after applying the inclusion and exclusion criteria.
CONCLUSION: In medicine, AI tools are used in surgery, radiology, gynecology, oncology, etc., in making a diagnosis, predicting the evolution of a disease, and assessing the prognosis in patients with oncological pathologies. In ophthalmology, AI potentially increases the patient's access to screening/clinical diagnosis and decreases healthcare costs, mainly when there is a high risk of disease or communities face financial shortages. AI/DL (deep learning) algorithms using both OCT and FO images will change image analysis techniques and methodologies. Optimizing these (combined) technologies will accelerate progress in this area.
9.
Nunez R, Harris A, Ibrahim O, Keller J, Wikle CK, Robinson E, Zukerman R, Siesky B, Verticchio A, Rowe L, Guidoboni G. Artificial intelligence to aid glaucoma diagnosis and monitoring: state of the art and new directions. Photonics 2022;9:810. PMID: 36816462; PMCID: PMC9934292; DOI: 10.3390/photonics9110810.
Abstract
Recent developments in the use of artificial intelligence in the diagnosis and monitoring of glaucoma are discussed. To set the context and fix terminology, a brief historical overview of artificial intelligence is provided, along with some fundamentals of statistical modeling. Next, recent applications of artificial intelligence techniques in glaucoma diagnosis and the monitoring of glaucoma progression are reviewed, including the classification of visual field images and the detection of glaucomatous change in retinal nerve fiber layer thickness. Current challenges in the direct application of artificial intelligence to further our understanding of this disease are also outlined. The article also discusses how the combined use of mathematical modeling and artificial intelligence may help to address these challenges, along with stronger communication between data scientists and clinicians.
Affiliation(s)
- Roberto Nunez: Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA
- Alon Harris: Department of Ophthalmology, Icahn School of Medicine at Mt. Sinai, New York, NY 10029, USA
- Omar Ibrahim: Department of Electrical Engineering, Tikrit University, Tikrit P.O. Box 42, Iraq
- James Keller: Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA
- Erin Robinson: Department of Social Work, University of Missouri, Columbia, MO 65211, USA
- Ryan Zukerman: Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Irving Medical Center, New York-Presbyterian Hospital, New York, NY 10034, USA
- Brent Siesky: Department of Ophthalmology, Icahn School of Medicine at Mt. Sinai, New York, NY 10029, USA
- Alice Verticchio: Department of Ophthalmology, Icahn School of Medicine at Mt. Sinai, New York, NY 10029, USA
- Lucas Rowe: Department of Ophthalmology, Indiana University School of Medicine, Indianapolis, IN 46202, USA
- Giovanna Guidoboni: Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA; Department of Mathematics, University of Missouri, Columbia, MO 65211, USA
10.
Eslami M, Kim JA, Zhang M, Boland MV, Wang M, Chang DS, Elze T. Visual field prediction: evaluating the clinical relevance of deep learning models. Ophthalmol Sci 2022;3:100222. PMID: 36325476; PMCID: PMC9619031; DOI: 10.1016/j.xops.2022.100222.
Abstract
Purpose Two novel deep learning methods using a convolutional neural network (CNN) and a recurrent neural network (RNN) have recently been developed to forecast future visual fields (VFs). Although the original evaluations of these models focused on overall accuracy, it was not assessed whether they can accurately identify patients with progressive glaucomatous vision loss to aid clinicians in preventing further decline. We evaluated these 2 prediction models for potential biases in overestimating or underestimating VF changes over time. Design Retrospective observational cohort study. Participants All available and reliable Swedish Interactive Thresholding Algorithm Standard 24-2 VFs from Massachusetts Eye and Ear Glaucoma Service collected between 1999 and 2020 were extracted. Because of the methods' respective needs, the CNN data set included 54 373 samples from 7472 patients, and the RNN data set included 24 430 samples from 1809 patients. Methods The CNN and RNN methods were reimplemented. A fivefold cross-validation procedure was performed on each model, and pointwise mean absolute error (PMAE) was used to measure prediction accuracy. Test data were stratified into categories based on the severity of VF progression to investigate the models' performance in predicting worsening cases. The models were additionally compared with a no-change model that uses the baseline VF (for the CNN) and the last-observed VF (for the RNN) for its prediction. Main Outcome Measures PMAE in predictions. Results The overall PMAE 95% confidence intervals were 2.21 to 2.24 decibels (dB) for the CNN and 2.56 to 2.61 dB for the RNN, which were close to the original studies' reported values. However, both models exhibited large errors in identifying patients with worsening VFs and often failed to outperform the no-change model.
Pointwise mean absolute error values were higher in patients with greater changes in mean sensitivity (for the CNN) and mean total deviation (for the RNN) between baseline and follow-up VFs. Conclusions Although our evaluation confirms the low overall PMAEs reported in the original studies, our findings also reveal that both models severely underpredict worsening of VF loss. Because the accurate detection and projection of glaucomatous VF decline is crucial in ophthalmic clinical practice, we recommend that this consideration is explicitly taken into account when developing and evaluating future deep learning models.
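The pointwise mean absolute error (PMAE) metric used in this evaluation, and the comparison against a no-change baseline, are straightforward to compute. The following is a minimal sketch using hypothetical sensitivity values, not data from the study:

```python
def pmae(predicted, actual):
    """Pointwise mean absolute error (dB) between two visual fields."""
    assert len(predicted) == len(actual)
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# Hypothetical sensitivity values (dB) at four test locations.
baseline_vf = [30.0, 28.0, 25.0, 31.0]   # last-observed VF (no-change model)
model_vf    = [29.0, 27.5, 26.0, 30.0]   # model forecast of the next VF
future_vf   = [28.0, 26.0, 22.0, 30.0]   # VF actually measured at follow-up

print(pmae(model_vf, future_vf))     # forecast error:  1.625 dB
print(pmae(baseline_vf, future_vf))  # no-change error: 2.0 dB
```

A model only adds clinical value where its PMAE beats the no-change column; this is exactly the comparison on which, per the study, both forecasting models often fail for worsening eyes.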
Key Words
- Artificial intelligence
- CI, confidence interval
- CNN, convolutional neural network
- DL, deep learning
- Deep learning
- Glaucoma
- MD, mean deviation
- MPark, recurrent neural network method from Park et al
- MWen, convolutional neural network method from Wen et al
- PMAE, pointwise mean absolute error
- Prediction
- RNN, recurrent neural network
- ROP, rate of progression
- TD, total deviation
- VF, visual field
- Visual fields
- dB, decibel
Affiliation(s)
- Mohammad Eslami
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts (Correspondence: Mohammad Eslami, PhD, Schepens Eye Research Institute of Massachusetts Eye and Ear, 20 Staniford Street, Boston, MA 02114)
- Julia A. Kim
- Early Clinical Development, Genentech, Inc, South San Francisco, California
- Miao Zhang
- Early Clinical Development, Genentech, Inc, South San Francisco, California
- Michael V. Boland
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts
- Mengyu Wang
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts
- Dolly S. Chang
- Early Clinical Development, Genentech, Inc, South San Francisco, California
- Byers Eye Institute, Stanford University, Palo Alto, California
- Tobias Elze
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts

11
Young SL, Jain N, Tatham AJ. The application of advanced imaging techniques in glaucoma. Expert Review of Ophthalmology 2022. [DOI: 10.1080/17469899.2022.2101449] [Indexed: 10/17/2022]
Affiliation(s)
- Su Ling Young
- Princess Alexandra Eye Pavilion, Edinburgh, UK
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
- Nikhil Jain
- Addenbrooke’s Hospital, Cambridge University Hospitals NHS Trust, Cambridge, UK
- Andrew J Tatham
- Princess Alexandra Eye Pavilion, Edinburgh, UK
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK

12
Sabouri S, Pourahmad S, Vermeer KA, Lemij HG, Yousefi S. Pointwise and Region-Wise Course of Visual Field Loss in Patients With Glaucoma. Transl Vis Sci Technol 2022; 11:20. [PMID: 35877094 PMCID: PMC9339695 DOI: 10.1167/tvst.11.7.20] [Indexed: 12/02/2022]
Abstract
Purpose Accurate assessment of visual field (VF) trend may help clinicians devise the optimum treatment regimen. This study was conducted to investigate the behavior of VF sequences using pointwise and region-wise linear, exponential, and sigmoid regression models. Materials and Methods In a retrospective cohort study, 277 eyes of 139 patients with glaucoma who had been followed for at least 7 years were investigated. Linear, exponential, and sigmoid regression models were fitted for each VF test location and Glaucoma Hemifield Test (GHT) region to model the trend of VF loss. The model with the lowest root mean square error (RMSE) was selected as the best fit. Results The mean age (standard deviation [SD]) of the patients was 59.9 years (9.8) with a mean follow-up time of 9.3 (0.7) years. The exponential regression had the best fit based on pointwise and region-wise approaches in 39.3% and 38.1% of eyes, respectively. The results showed a better performance based on sigmoid regression in patients with initial VF sensitivity threshold greater than 22 dB (71.6% in pointwise and 62.2% in region-wise approaches). The overall RMSE of the region-wise regression model was lower than the overall RMSE of the pointwise model. Conclusions In the current study, nonlinear regression models showed a better fit compared to the linear regression models in tracking VF loss behavior. Moreover, findings suggest region-wise analysis may provide a more appropriate approach for assessing VF deterioration. Translational Relevance Findings may confirm a nonlinear progression of VF deterioration in patients with glaucoma.
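The model-selection procedure described above (fit several trend models per location or region, keep the one with the lowest RMSE) can be sketched as follows. This is a simplified illustration with hypothetical data: the linear fit uses closed-form OLS, the exponential fit is approximated by OLS on log-sensitivities, and the sigmoid model is omitted because it needs an iterative nonlinear solver.

```python
import math

def ols(x, y):
    """Closed-form simple linear regression; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return my - slope * mx, slope

def rmse(y, yhat):
    """Root mean square error between observed and fitted values."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

# Hypothetical sensitivity series (dB) at one VF location over yearly visits.
years = [0, 1, 2, 3, 4, 5, 6]
sens  = [30.0, 28.5, 27.5, 25.0, 23.0, 20.5, 18.0]

# Linear model: s(t) = a + b*t.
a, b = ols(years, sens)
lin_pred = [a + b * t for t in years]

# Exponential model s(t) = exp(la + lk*t), approximated by OLS on log(s).
la, lk = ols(years, [math.log(s) for s in sens])
exp_pred = [math.exp(la + lk * t) for t in years]

# Keep whichever model has the lowest RMSE, as in the study's selection rule.
fits = {"linear": rmse(sens, lin_pred), "exponential": rmse(sens, exp_pred)}
best = min(fits, key=fits.get)
print(best, round(fits[best], 3))
```

In the study this comparison is run per test location (pointwise) and per Glaucoma Hemifield Test region (region-wise), and the winning model family is tallied across eyes.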
Affiliation(s)
- Samaneh Sabouri
- Department of Biostatistics, School of Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
- Saeedeh Pourahmad
- Department of Biostatistics, School of Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
- Koenraad A Vermeer
- Rotterdam Ophthalmic Institute, the Rotterdam Eye Hospital, Rotterdam, The Netherlands
- Hans G Lemij
- Rotterdam Ophthalmic Institute, the Rotterdam Eye Hospital, Rotterdam, The Netherlands
- Siamak Yousefi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA
- Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, TN, USA

13
Smolkov MI, Blatova OA, Krutov AF, Blatov VA. Generating triply periodic surfaces from crystal structures: the tiling approach and its application to zeolites. Acta Crystallogr A Found Adv 2022; 78:327-336. [DOI: 10.1107/s2053273322004545] [Received: 11/02/2021] [Accepted: 04/29/2022] [Indexed: 11/11/2022]
Abstract
Physical properties of objects depend on topological features of the corresponding triply periodic surfaces; thus topological exploration and classification of the surfaces have practical relevance. A general method is developed for generating triply periodic surfaces from triply periodic crystal structures. A triply periodic surface is derived from the natural tiling of a crystal network by an appropriate removal of some tile faces and subsequent smoothing of the resulting facet surface. The labyrinth nets of a generated triply periodic surface are built from the natural tiling, and in turn the topological parameters of the labyrinth nets are used to determine if the surface is isomorphic to a minimal surface. This method has been applied to all known 253 zeolite frameworks and 98 triply periodic surfaces were obtained, which belong to 55 topological types. Twelve surfaces were found to be isomorphic to already known triply periodic minimal surfaces (TPMSs), while four surfaces can be treated as isomorphic to new TPMSs. A procedure has also been developed for transferring the generated surfaces to a 3D-printer-readable format.
14
Wang SY, Tseng B, Hernandez-Boussard T. Deep Learning Approaches for Predicting Glaucoma Progression Using Electronic Health Records and Natural Language Processing. Ophthalmology Science 2022; 2:100127. [PMID: 36249690 PMCID: PMC9559076 DOI: 10.1016/j.xops.2022.100127] [Received: 12/07/2021] [Revised: 01/19/2022] [Accepted: 02/07/2022] [Indexed: 11/09/2022]
Abstract
Purpose Advances in artificial intelligence have produced a few predictive models in glaucoma, including a logistic regression model predicting glaucoma progression to surgery. However, uncertainty exists regarding how to integrate the wealth of information in free-text clinical notes. The purpose of this study was to predict glaucoma progression requiring surgery using deep learning (DL) approaches on data from electronic health records (EHRs), including features from structured clinical data and from natural language processing of clinical free-text notes. Design Development of DL predictive model in an observational cohort. Participants Adult patients with glaucoma at a single center treated from 2008 through 2020. Methods Ophthalmology clinical notes of patients with glaucoma were identified from EHRs. Available structured data included patient demographic information, diagnosis codes, prior surgeries, and clinical information including intraocular pressure, visual acuity, and central corneal thickness. In addition, words from patients’ first 120 days of notes were mapped to ophthalmology domain-specific neural word embeddings trained on PubMed ophthalmology abstracts. Word embeddings and structured clinical data were used as inputs to DL models to predict subsequent glaucoma surgery. Main Outcome Measures Evaluation metrics included area under the receiver operating characteristic curve (AUC) and F1 score (the harmonic mean of positive predictive value and sensitivity) on a held-out test set. Results Seven hundred forty-eight of 4512 patients with glaucoma underwent surgery. The model that incorporated both structured clinical features and input features from clinical notes achieved an AUC of 73% and an F1 of 40%, compared with models using only structured clinical features (AUC, 66%; F1, 34%) and only clinical free-text features (AUC, 70%; F1, 42%). All models outperformed predictions from a glaucoma specialist’s review of clinical notes (F1, 29.5%).
Conclusions We can successfully predict which patients with glaucoma will need surgery using DL models on unstructured text from EHRs. Models incorporating free-text data outperformed those using only structured inputs. Future predictive models using EHRs should make use of information from within clinical free-text notes to improve predictive performance. Additional research is needed to investigate optimal methods of incorporating imaging data into future predictive models as well.
15
A Comprehensive Performance Analysis of Transfer Learning Optimization in Visual Field Defect Classification. Diagnostics (Basel) 2022; 12:diagnostics12051258. [PMID: 35626413 PMCID: PMC9140208 DOI: 10.3390/diagnostics12051258] [Received: 03/12/2022] [Revised: 05/16/2022] [Accepted: 05/17/2022] [Indexed: 02/05/2023]
Abstract
Numerous studies have demonstrated that Convolutional Neural Network (CNN) models are capable of classifying visual field (VF) defects with great accuracy. In this study, we evaluated the performance of different pre-trained models (VGG-Net, MobileNet, ResNet, and DenseNet) in classifying VF defects and produced a comprehensive comparative analysis of the performance of different CNN models before and after hyperparameter tuning and fine-tuning. Using a batch size of 32, 50 epochs, and ADAM as the optimizer for weight, bias, and learning rate, VGG-16 obtained the highest accuracy of 97.63 percent, according to experimental findings. Subsequently, Bayesian optimization was utilized to execute automated hyperparameter tuning and automated fine-tuning of the pre-trained models' layers, to determine the optimal hyperparameters and fine-tuning layer for classifying VF defects with the highest accuracy. We found that the combination of different hyperparameters and fine-tuning of the pre-trained models significantly impacts the performance of deep learning models for this classification task. In addition, we discovered that the automated selection of optimal hyperparameters and fine-tuning by Bayesian optimization significantly enhanced the performance of the pre-trained models. The best performance was observed for the DenseNet-121 model, with a validation accuracy of 98.46% and a test accuracy of 99.57% on the tested datasets.
16
Villasana GA, Bradley C, Elze T, Myers JS, Pasquale L, De Moraes CG, Wellik S, Boland MV, Ramulu P, Hager G, Unberath M, Yohannan J. Improving Visual Field Forecasting by Correcting for the Effects of Poor Visual Field Reliability. Transl Vis Sci Technol 2022; 11:27. [PMID: 35616923 PMCID: PMC9145029 DOI: 10.1167/tvst.11.5.27] [Received: 07/22/2021] [Accepted: 03/30/2022] [Indexed: 11/24/2022]
Abstract
Purpose The purpose of this study was to accurately forecast future reliable visual field (VF) mean deviation (MD) values by correcting for poor reliability. Methods Four linear regression techniques (standard, unfiltered, corrected, and weighted) were fit to VF data from 5939 eyes with a final reliable VF. For each eye, all VFs, except the final one, were used to fit the models. Then, the difference between the final VF MD value and each model's estimate for the final VF MD value was used to calculate model error. We aggregated the error for each model across all eyes to compare model performance. The results were further broken down into eye-level reliability subgroups to track performance as reliability levels fluctuate. Results The standard method, used in the Humphrey Field Analyzer (HFA), was the worst performing model with an average residual that was 0.69 dB higher than the average from the unfiltered method, and 0.79 dB higher than that of the weighted and corrected methods. The weighted method was the best performing model, beating the standard model by as much as 1.75 dB in the 40% to 50% eye-level reliability subgroup. However, its average 95% prediction interval was relatively large at 7.67 dB. Conclusions Including all VFs in the trend estimation has more predictive power for future reliable VFs than excluding unreliable VFs. Correcting for VF reliability further improves model accuracy. Translational Relevance The VF correction methods described in this paper may allow clinicians to catch VF worsening at an earlier stage.
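The weighted method can be illustrated with a closed-form weighted least-squares trend fit. The weights below are hypothetical reliability scores, not the paper's actual correction scheme; the point is that an unreliable visit is down-weighted rather than discarded:

```python
def weighted_line_fit(t, y, w):
    """Weighted least-squares fit of y = a + b*t; returns (a, b)."""
    sw = sum(w)
    mt = sum(wi * ti for wi, ti in zip(w, t)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = (sum(wi * (ti - mt) * (yi - my) for wi, ti, yi in zip(w, t, y))
         / sum(wi * (ti - mt) ** 2 for wi, ti in zip(w, t)))
    return my - b * mt, b

# Hypothetical MD series (dB); the third visit is unreliable and is
# down-weighted rather than excluded.
years   = [0.0, 1.0, 2.0, 3.0, 4.0]
md      = [-2.0, -2.4, -6.0, -3.1, -3.5]   # the -6.0 dB visit is an outlier
weights = [1.0, 1.0, 0.2, 1.0, 1.0]        # assumed reliability weights

a, b = weighted_line_fit(years, md, weights)
print(round(b, 2))  # estimated MD slope: -0.37 dB/year
```

With equal weights this reduces to the standard (unfiltered) trend, and setting a weight to zero reproduces filtering the visit out entirely, which the study finds discards useful signal.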
Affiliation(s)
- Gabriel A. Villasana
- Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, USA
- Chris Bradley
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Tobias Elze
- Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA
- Louis Pasquale
- Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Sarah Wellik
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
- Michael V. Boland
- Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA
- Pradeep Ramulu
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Greg Hager
- Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, USA
- Mathias Unberath
- Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, USA
- Jithin Yohannan
- Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, USA
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA

17
Sarwat SG, Kersting B, Moraitis T, Jonnalagadda VP, Sebastian A. Phase-change memtransistive synapses for mixed-plasticity neural computations. Nature Nanotechnology 2022; 17:507-513. [PMID: 35347271 DOI: 10.1038/s41565-022-01095-3] [Received: 05/23/2021] [Accepted: 01/12/2022] [Indexed: 06/14/2023]
Abstract
In the mammalian nervous system, various synaptic plasticity rules act, either individually or synergistically, over wide-ranging timescales to enable learning and memory formation. Hence, in neuromorphic computing platforms, there is a significant need for artificial synapses that can faithfully express such multi-timescale plasticity mechanisms. Although some plasticity rules have been emulated with elaborate complementary metal oxide semiconductor and memristive circuitry, device-level hardware realizations of long-term and short-term plasticity with tunable dynamics are lacking. Here we introduce a phase-change memtransistive synapse that leverages both the non-volatility of the phase configurations and the volatility of field-effect modulation for implementing tunable plasticities. We show that these mixed-plasticity synapses can enable plasticity rules such as short-term spike-timing-dependent plasticity that helps with the modelling of dynamic environments. Further, we demonstrate the efficacy of the memtransistive synapses in realizing accelerators for Hopfield neural networks for solving combinatorial optimization problems.
18
Shigueoka LS, Jammal AA, Medeiros FA, Costa VP. Artificial Intelligence in Ophthalmology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_201] [Indexed: 12/01/2022]
19
Veerankutty FH, Jayan G, Yadav MK, Manoj KS, Yadav A, Nair SRS, Shabeerali TU, Yeldho V, Sasidharan M, Rather SA. Artificial Intelligence in hepatology, liver surgery and transplantation: Emerging applications and frontiers of research. World J Hepatol 2021; 13:1977-1990. [PMID: 35070002 PMCID: PMC8727218 DOI: 10.4254/wjh.v13.i12.1977] [Received: 03/16/2021] [Revised: 05/09/2021] [Accepted: 11/25/2021] [Indexed: 02/06/2023]
Abstract
The integration of artificial intelligence (AI) and augmented realities into the medical field is being attempted by various researchers across the globe. As a matter of fact, most of the advanced technologies utilized by medical providers today have been borrowed and extrapolated from other industries. The introduction of AI into the field of hepatology and liver surgery is relatively a recent phenomenon. The purpose of this narrative review is to highlight the different AI concepts which are currently being tried to improve the care of patients with liver diseases. We end with summarizing emerging trends and major challenges in the future development of AI in hepatology and liver surgery.
Affiliation(s)
- Fadl H Veerankutty
- Comprehensive Liver Care, VPS Lakeshore Hospital, Cochin 682040, Kerala, India
- Govind Jayan
- Hepatobiliary Pancreatic and Liver Transplant Surgery, Kerala Institute of Medical Sciences, Trivandrum 695029, Kerala, India
- Manish Kumar Yadav
- Department of Radiodiagnosis, Kerala Institute of Medical Sciences, Trivandrum 695029, Kerala, India
- Krishnan Sarojam Manoj
- Department of Radiodiagnosis, Kerala Institute of Medical Sciences, Trivandrum 695029, Kerala, India
- Abhishek Yadav
- Comprehensive Liver Care, VPS Lakeshore Hospital, Cochin 682040, Kerala, India
- Sindhu Radha Sadasivan Nair
- Hepatobiliary Pancreatic and Liver Transplant Surgery, Kerala Institute of Medical Sciences, Trivandrum 695029, Kerala, India
- T U Shabeerali
- Hepatobiliary Pancreatic and Liver Transplant Surgery, Kerala Institute of Medical Sciences, Trivandrum 695029, Kerala, India
- Varghese Yeldho
- Hepatobiliary Pancreatic and Liver Transplant Surgery, Kerala Institute of Medical Sciences, Trivandrum 695029, Kerala, India
- Madhu Sasidharan
- Gastroenterology and Hepatology, Kerala Institute of Medical Sciences, Thiruvananthapuram 695029, India
- Shiraz Ahmad Rather
- Hepatobiliary Pancreatic and Liver Transplant Surgery, Kerala Institute of Medical Sciences, Trivandrum 695029, Kerala, India

20
Christopher M, Bowd C, Proudfoot JA, Belghith A, Goldbaum MH, Rezapour J, Fazio MA, Girkin CA, De Moraes G, Liebmann JM, Weinreb RN, Zangwill LM. Deep Learning Estimation of 10-2 and 24-2 Visual Field Metrics Based on Thickness Maps from Macula OCT. Ophthalmology 2021; 128:1534-1548. [PMID: 33901527 DOI: 10.1016/j.ophtha.2021.04.022] [Received: 07/01/2020] [Revised: 03/16/2021] [Accepted: 04/19/2021] [Indexed: 01/27/2023]
Abstract
PURPOSE To develop deep learning (DL) systems estimating visual function from macula-centered spectral-domain (SD) OCT images. DESIGN Evaluation of a diagnostic technology. PARTICIPANTS A total of 2408 10-2 visual field (VF) SD OCT pairs and 2999 24-2 VF SD OCT pairs collected from 645 healthy and glaucoma subjects (1222 eyes). METHODS Deep learning models were trained on thickness maps from Spectralis macula SD OCT to estimate 10-2 and 24-2 VF mean deviation (MD) and pattern standard deviation (PSD). Individual and combined DL models were trained using thickness data from 6 layers (retinal nerve fiber layer [RNFL], ganglion cell layer [GCL], inner plexiform layer [IPL], ganglion cell-IPL [GCIPL], ganglion cell complex [GCC] and retina). Linear regression of mean layer thicknesses was used for comparison. MAIN OUTCOME MEASURES Deep learning models were evaluated using R2 and mean absolute error (MAE) compared with 10-2 and 24-2 VF measurements. RESULTS Combined DL models estimating 10-2 achieved R2 of 0.82 (95% confidence interval [CI], 0.68-0.89) for MD and 0.69 (95% CI, 0.55-0.81) for PSD and MAEs of 1.9 dB (95% CI, 1.6-2.4 dB) for MD and 1.5 dB (95% CI, 1.2-1.9 dB) for PSD. This was significantly better than mean thickness estimates for 10-2 MD (0.61 [95% CI, 0.47-0.71] and 3.0 dB [95% CI, 2.5-3.5 dB]) and 10-2 PSD (0.46 [95% CI, 0.31-0.60] and 2.3 dB [95% CI, 1.8-2.7 dB]). Combined DL models estimating 24-2 achieved R2 of 0.79 (95% CI, 0.72-0.84) for MD and 0.68 (95% CI, 0.53-0.79) for PSD and MAEs of 2.1 dB (95% CI, 1.8-2.5 dB) for MD and 1.5 dB (95% CI, 1.3-1.9 dB) for PSD. This was significantly better than mean thickness estimates for 24-2 MD (0.41 [95% CI, 0.26-0.57] and 3.4 dB [95% CI, 2.7-4.5 dB]) and 24-2 PSD (0.38 [95% CI, 0.20-0.57] and 2.4 dB [95% CI, 2.0-2.8 dB]). The GCIPL (R2 = 0.79) and GCC (R2 = 0.75) had the highest performance estimating 10-2 and 24-2 MD, respectively.
CONCLUSIONS Deep learning models improved estimates of functional loss from SD OCT imaging. Accurate estimates can help clinicians to individualize VF testing to patients.
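The two evaluation metrics above, MAE and R2 against the measured VF values, can be sketched directly. The measured and estimated MD values below are hypothetical, not the study's data:

```python
def mae(y, yhat):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def r_squared(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    my = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - my) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

# Hypothetical measured vs. DL-estimated 24-2 MD values (dB).
measured  = [-1.0, -4.0, -8.5, -12.0, -2.5]
estimated = [-1.5, -3.0, -9.0, -10.5, -3.0]

print(round(mae(measured, estimated), 2))        # 0.8
print(round(r_squared(measured, estimated), 2))  # 0.95
```

MAE reports the typical error in the units of the metric itself (dB), while R2 measures how much of the between-eye variance in the measured values the estimates explain; the study reports both for each layer and test pattern.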
Affiliation(s)
- Mark Christopher
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Christopher Bowd
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- James A Proudfoot
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Akram Belghith
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Michael H Goldbaum
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Jasmin Rezapour
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Department of Ophthalmology, University Medical Center Mainz, Mainz, Germany
- Massimo A Fazio
- School of Medicine, University of Alabama-Birmingham, Birmingham, Alabama
- Gustavo De Moraes
- Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Medical Center, New York, New York
- Jeffrey M Liebmann
- Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Medical Center, New York, New York
- Robert N Weinreb
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Linda M Zangwill
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California

21
Villasana GA, Bradley C, Ramulu P, Unberath M, Yohannan J. The Effect of Achieving Target Intraocular Pressure on Visual Field Worsening. Ophthalmology 2021; 129:35-44. [PMID: 34506846 PMCID: PMC10122267 DOI: 10.1016/j.ophtha.2021.08.025] [Received: 06/28/2021] [Revised: 08/27/2021] [Accepted: 08/31/2021] [Indexed: 10/20/2022]
Abstract
PURPOSE To estimate the effect of achieving target intraocular pressure (IOP) values on visual field (VF) worsening in a treated clinical population. DESIGN Retrospective analysis of longitudinal data. PARTICIPANTS A total of 2852 eyes of 1688 patients with glaucoma-related diagnoses treated in a tertiary care practice. All included eyes had at least 5 reliable VF tests and 5 IOP measures on separate visits along with at least 1 target IOP defined by a clinician on the first or second visit. METHODS The primary dependent variable was the slope of the mean deviation (MD) over time (decibels [dB]/year). The primary independent variable was mean target difference (measured IOP - target IOP). We created simple linear regression models and mixed-effects linear models to study the relationship between MD slope and mean target difference for individual eyes. In the mixed-effects models, we included an interaction term to account for disease severity (mild/suspect, moderate, or advanced) and a spline term to account for the differing effects of achieving target IOP (target difference ≤0) and failing to achieve target IOP (target difference >0). MAIN OUTCOME MEASURES Rate of change in MD slope (changes in dB/year) per 1 mmHg change in target difference at different stages of glaucoma severity. RESULTS Across all eyes, a simple linear regression model demonstrated that a 1 mmHg increase in target difference had a -0.018 dB/year (confidence interval [CI], -0.026 to -0.011; P < 0.05) effect on MD slope. The mixed-effects model shows that eyes with moderate disease that fail to achieve their target IOP experience the largest effects, with a 1 mmHg increase in target difference resulting in a -0.119 dB/year (CI, -0.168 to -0.070; P < 0.05) worse MD slope. 
The effects of missing target IOP on VF worsening were more pronounced than the effect of absolute level of IOP on VF worsening, where a 1 mmHg increase in IOP had a -0.004 dB/year (CI, -0.011 to 0.003; P > 0.05) effect on the MD slope. CONCLUSIONS In treated patients, failing to achieve target IOP was associated with more rapid VF worsening. Eyes with moderate glaucoma experienced the greatest VF worsening from failing to achieve target IOP.
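A drastically simplified, two-stage sketch of this kind of analysis: an OLS slope of MD over time per eye, then a regression of those slopes on mean target difference. All values are hypothetical, and the study itself used mixed-effects models with interaction and spline terms rather than this simple pooled regression.

```python
def ols_slope(x, y):
    """Slope of a simple least-squares line through (x, y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

# Stage 1 inputs: hypothetical longitudinal MD values per eye;
# target_diff is mean (measured IOP - target IOP) in mmHg.
eyes = [
    {"years": [0, 1, 2, 3], "md": [-1.0, -1.1, -1.3, -1.4], "target_diff": -2.0},
    {"years": [0, 1, 2, 3], "md": [-3.0, -3.6, -4.1, -4.8], "target_diff": 3.0},
    {"years": [0, 1, 2, 3], "md": [-2.0, -2.2, -2.5, -2.6], "target_diff": 0.5},
]

# Stage 1: per-eye MD slope (dB/year); Stage 2: regress slopes on target diff.
slopes = [ols_slope(e["years"], e["md"]) for e in eyes]
tds = [e["target_diff"] for e in eyes]
effect = ols_slope(tds, slopes)
print(round(effect, 3))  # dB/year of additional worsening per 1 mmHg above target
```

A negative `effect` indicates that eyes sitting further above their target IOP lose MD faster, the qualitative pattern the study reports.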
Affiliation(s)
- Gabriel A Villasana
- Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, Maryland
- Chris Bradley
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Pradeep Ramulu
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Mathias Unberath
- Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, Maryland
- Jithin Yohannan
- Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, Maryland
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, Maryland

22
Buisson M, Navel V, Labbé A, Watson SL, Baker JS, Murtagh P, Chiambaretta F, Dutheil F. Deep learning versus ophthalmologists for screening for glaucoma on fundus examination: A systematic review and meta-analysis. Clin Exp Ophthalmol 2021; 49:1027-1038. [PMID: 34506041 DOI: 10.1111/ceo.14000] [Received: 04/18/2021] [Revised: 09/02/2021] [Accepted: 09/08/2021] [Indexed: 11/29/2022]
Abstract
BACKGROUND In this systematic review and meta-analysis, we aimed to compare deep learning with ophthalmologists in glaucoma diagnosis on fundus examinations. METHODS PubMed, Cochrane, Embase, ClinicalTrials.gov and ScienceDirect databases were searched, up to 10 December 2020, for studies comparing the glaucoma diagnosis performance of deep learning and ophthalmologists on fundus examinations using the same datasets. Studies had to report an area under the receiver operating characteristic curve (AUC) with SD, or enough data to generate one. RESULTS We included six studies in our meta-analysis. There was no difference in AUC between ophthalmologists (AUC = 82.0, 95% confidence interval [CI] 65.4-98.6) and deep learning (97.0, 89.4-104.5). There was also no difference using several pessimistic and optimistic variants of our meta-analysis: the best (82.2, 60.0-104.3) or worst (77.7, 53.1-102.3) ophthalmologists versus the best (97.1, 89.5-104.7) or worst (97.1, 88.5-105.6) deep learning model of each study. We did not identify any factors influencing these results. CONCLUSION Deep learning had performance similar to that of ophthalmologists in glaucoma diagnosis from fundus examinations. Further studies should evaluate deep learning in clinical situations.
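Meta-analytic pooling of AUCs reported with 95% CIs can be sketched with a fixed-effect inverse-variance model, in which each study's standard error is recovered from its CI width. The per-study AUC values below are hypothetical, and published meta-analyses of this kind typically use random-effects rather than fixed-effect models:

```python
def pooled_auc(studies):
    """Fixed-effect inverse-variance pooling of AUCs reported with 95% CIs."""
    weights, weighted = [], []
    for auc, lo, hi in studies:
        se = (hi - lo) / (2 * 1.96)          # SE recovered from CI width
        weights.append(1.0 / se ** 2)        # inverse-variance weight
        weighted.append(auc / se ** 2)
    est = sum(weighted) / sum(weights)
    se_pooled = (1.0 / sum(weights)) ** 0.5
    return est, (est - 1.96 * se_pooled, est + 1.96 * se_pooled)

# Hypothetical per-study AUCs (%) with 95% CIs, not the reviewed studies' values.
studies = [(95.0, 91.0, 99.0), (97.5, 95.5, 99.5), (92.0, 86.0, 98.0)]
est, ci = pooled_auc(studies)
print(round(est, 1), [round(v, 1) for v in ci])
```

The study with the narrowest CI dominates the pooled estimate; a random-effects variant would add a between-study variance term to each weight.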
Collapse
Affiliation(s)
- Mathieu Buisson
- CHU Clermont-Ferrand, Ophthalmology, University Hospital of Clermont-Ferrand, Clermont-Ferrand, France
| | - Valentin Navel
- CHU Clermont-Ferrand, Ophthalmology, University Hospital of Clermont-Ferrand, Clermont-Ferrand, France.,CNRS UMR 6293, INSERM U1103, Genetic Reproduction and Development Laboratory (GReD), Translational Approach to Epithelial Injury and Repair Team, Université Clermont Auvergne, Clermont-Ferrand, France
| | - Antoine Labbé
- Department of Ophthalmology III, Quinze-Vingts National Ophthalmology Hospital, IHU FOReSIGHT, Paris, France.,Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France.,Department of Ophthalmology, Ambroise Paré Hospital, APHP, Université de Versailles Saint-Quentin en Yvelines, Versailles, France
| | - Stephanie L Watson
- Save Sight Institute, Discipline of Ophthalmology, Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia.,Corneal Unit, Sydney Eye Hospital, Sydney, New South Wales, Australia
| | - Julien S Baker
- Centre for Health and Exercise Science Research, Department of Sport, Physical Education and Health, Hong Kong Baptist University, Kowloon Tong, Hong Kong
| | - Patrick Murtagh
- Department of Ophthalmology, Royal Victoria Eye and Ear Hospital, Dublin, Ireland
| | - Frédéric Chiambaretta
- CHU Clermont-Ferrand, Ophthalmology, University Hospital of Clermont-Ferrand, Clermont-Ferrand, France.,CNRS UMR 6293, INSERM U1103, Genetic Reproduction and Development Laboratory (GReD), Translational Approach to Epithelial Injury and Repair Team, Université Clermont Auvergne, Clermont-Ferrand, France
| | - Frédéric Dutheil
- Université Clermont Auvergne, CNRS, LaPSCo, Physiological and Psychosocial Stress, CHU Clermont-Ferrand, University Hospital of Clermont-Ferrand, Preventive and Occupational Medicine, Witty Fit, Clermont-Ferrand, France
| |
Collapse
|
23
|
Abstract
Electromagnetic actuator systems composed of an induction servo motor (ISM) drive system and a rice milling machine system have widely been used in agricultural applications. In order to achieve a finer control performance, a witty control system using a revised recurrent Jacobi polynomial neural network (RRJPNN) control and two remunerated controls with an altered bat search algorithm (ABSA) method is proposed to control electromagnetic actuator systems. The witty control system with finer learning capability can fulfill the RRJPNN control, which involves an attunement law, two remunerated controls, which have two evaluation laws, and a dominator control. Based on the Lyapunov stability principle, the attunement law in the RRJPNN control and two evaluation laws in the two remunerated controls are derived. Moreover, the ABSA method can acquire the adjustable learning rates to quicken convergence of weights. Finally, the proposed control method exhibits a finer control performance that is confirmed by experimental results.
Collapse
|
24
|
Shigueoka LS, Jammal AA, Medeiros FA, Costa VP. Artificial Intelligence in Ophthalmology. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_201-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
25
|
Assessing Glaucoma Progression Using Machine Learning Trained on Longitudinal Visual Field and Clinical Data. Ophthalmology 2020; 128:1016-1026. [PMID: 33359887 DOI: 10.1016/j.ophtha.2020.12.020] [Citation(s) in RCA: 40] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2020] [Revised: 11/30/2020] [Accepted: 12/13/2020] [Indexed: 01/17/2023] Open
Abstract
PURPOSE Rule-based approaches to determining glaucoma progression from visual fields (VFs) alone are discordant and have tradeoffs. To detect better when glaucoma progression is occurring, we used a longitudinal data set of merged VF and clinical data to assess the performance of a convolutional long short-term memory (LSTM) neural network. DESIGN Retrospective analysis of longitudinal clinical and VF data. PARTICIPANTS From 2 initial datasets of 672 123 VF results from 213 254 eyes and 350 437 samples of clinical data, persons at the intersection of both datasets with 4 or more VF results and corresponding baseline clinical data (cup-to-disc ratio, central corneal thickness, and intraocular pressure) were included. After exclusion criteria-specifically the removal of VFs with high false-positive and false-negative rates and entries with missing data-were applied to ensure reliable data, 11 242 eyes remained. METHODS Three commonly used glaucoma progression algorithms (VF index slope, mean deviation slope, and pointwise linear regression) were used to define eyes as stable or progressing. Two machine learning models, one exclusively trained on VF data and another trained on both VF and clinical data, were tested. MAIN OUTCOME MEASURES Area under the receiver operating characteristic curve (AUC) and area under the precision-recall curve (AUPRC) calculated on a held-out test set and mean accuracies from threefold cross-validation were used to compare the performance of the machine learning models. RESULTS The convolutional LSTM network demonstrated 91% to 93% accuracy with respect to the different conventional glaucoma progression algorithms given 4 consecutive VF results for each participant. The model that was trained on both VF and clinical data (AUC, 0.89-0.93) showed better diagnostic ability than a model exclusively trained on VF results (AUC, 0.79-0.82; P < 0.001). 
CONCLUSIONS A convolutional LSTM architecture can capture local and global trends in VFs over time. It is well suited to assessing glaucoma progression because of its ability to extract spatiotemporal features that other algorithms cannot. Supplementing VF results with clinical data improves the model's ability to assess glaucoma progression and better reflects the way clinicians use data when managing glaucoma.
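As a rough illustration of the architecture this abstract describes, per-time-step convolutional features over a visual-field grid, concatenated with baseline clinical variables and fed through an LSTM, here is a minimal NumPy sketch. The 8x9 grid, all layer sizes, and the random weights are assumptions for illustration only; the published model was trained end to end, not randomly initialised:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_feats(vf, kernels):
    # vf: (8, 9) visual-field sensitivity grid; kernels: (F, 3, 3).
    # 3x3 valid convolution + ReLU, then global average pooling per filter.
    F = kernels.shape[0]
    out = np.empty(F)
    for f in range(F):
        acc = 0.0
        for i in range(vf.shape[0] - 2):
            for j in range(vf.shape[1] - 2):
                acc += max(np.sum(vf[i:i+3, j:j+3] * kernels[f]), 0.0)
        out[f] = acc / ((vf.shape[0] - 2) * (vf.shape[1] - 2))
    return out

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    # Standard LSTM cell; gates i, f, o and candidate g stacked in W/U/b.
    H = h.size
    z = W @ x + U @ h + b
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
    g = np.tanh(z[3*H:])
    c = f * c + i * g
    return o * np.tanh(c), c

T, F, H, CLIN = 4, 6, 16, 3          # 4 VF visits, 6 filters, hidden size, 3 clinical vars
kernels = rng.normal(size=(F, 3, 3)) * 0.1
D = F + CLIN
W = rng.normal(size=(4 * H, D)) * 0.1
U = rng.normal(size=(4 * H, H)) * 0.1
b = np.zeros(4 * H)
w_out = rng.normal(size=H) * 0.1

vfs = rng.normal(size=(T, 8, 9))                   # toy VF sequence
clinical = np.array([0.6, 0.545, 0.45])            # e.g. scaled CDR, CCT, IOP (made up)

h, c = np.zeros(H), np.zeros(H)
for t in range(T):
    x = np.concatenate([conv_feats(vfs[t], kernels), clinical])
    h, c = lstm_step(x, h, c, W, U, b)

p_progression = sigmoid(w_out @ h)                 # probability-like progression score
print(round(float(p_progression), 3))
```

Note how the clinical variables are simply appended to each time step's convolutional feature vector; this is one plausible way to merge the two data sources, and the paper's exact fusion scheme may differ.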
Collapse
|
26
|
Abstract
PURPOSE OF REVIEW To summarize how big data and artificial intelligence technologies have evolved, their current state, and next steps to enable future generations of artificial intelligence for ophthalmology. RECENT FINDINGS Big data in health care is ever increasing in volume and variety, enabled by the widespread adoption of electronic health records (EHRs) and standards for health data information exchange, such as Digital Imaging and Communications in Medicine and Fast Healthcare Interoperability Resources. Simultaneously, the development of powerful cloud-based storage and computing architectures supports a fertile environment for big data and artificial intelligence in health care. The high volume and velocity of imaging and structured data in ophthalmology is one of the reasons why ophthalmology is at the forefront of artificial intelligence research. Still needed are consensus labeling conventions for performing supervised learning on big data, promotion of data sharing and reuse, standards for sharing artificial intelligence model architectures, and access to artificial intelligence models through open application program interfaces (APIs). SUMMARY Future requirements for big data and artificial intelligence include fostering reproducible science, continuing open innovation, and supporting the clinical use of artificial intelligence by promoting standards for data labels, data sharing, artificial intelligence model architecture sharing, and accessible code and APIs.
Collapse
|
27
|
Girard MJA, Schmetterer L. Artificial intelligence and deep learning in glaucoma: Current state and future prospects. PROGRESS IN BRAIN RESEARCH 2020; 257:37-64. [PMID: 32988472 DOI: 10.1016/bs.pbr.2020.07.002] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Over the past few years, there has been unprecedented excitement for artificial intelligence (AI) research in the field of Ophthalmology; this has naturally been translated to glaucoma-a progressive optic neuropathy characterized by retinal ganglion cell axon loss and associated visual field defects. In this review, we aim to discuss how AI may have a unique opportunity to tackle the many challenges faced in the glaucoma clinic. This is because glaucoma remains poorly understood, and providing accurate diagnosis and prognosis early and in a timely fashion is difficult. In the short term, AI could also become a game changer by paving the way for the first cost-effective glaucoma screening campaigns. While there are undeniable technical and clinical challenges ahead, and more so than for other ophthalmic disorders whereby AI is already booming, we strongly believe that glaucoma specialists should embrace AI as a companion to their practice. Finally, this review will also remind us that glaucoma is a complex group of disorders with a multitude of physiological manifestations that cannot yet be observed clinically. AI in glaucoma is here to stay, but it will not be the only tool to solve glaucoma.
Collapse
Affiliation(s)
- Michaël J A Girard
- Ophthalmic Engineering & Innovation Laboratory (OEIL), Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore.
| | - Leopold Schmetterer
- Ocular Imaging, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore; School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore, Singapore; SERI-NTU Advanced Ocular Engineering (STANCE), Singapore, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore; Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria; Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria; Institute of Clinical and Experimental Ophthalmology, Basel, Switzerland.
| |
Collapse
|
28
|
Masuzaki R, Kanda T, Sasaki R, Matsumoto N, Nirei K, Ogawa M, Moriyama M. Application of artificial intelligence in hepatology: Minireview. Artif Intell Gastroenterol 2020; 1:5-11. [DOI: 10.35712/aig.v1.i1.5] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/28/2020] [Revised: 06/23/2020] [Accepted: 07/16/2020] [Indexed: 02/06/2023] Open
Abstract
With the rapid advancements in computer science, artificial intelligence (AI) has become an intrinsic part of our daily life and clinical practices. The concepts of AI, such as machine learning, deep learning, and big data, are extensively used in clinical and basic research. In this review, we searched for the articles in PubMed and summarized recent developments of AI concerning hepatology while focusing on the diagnosis and risk assessment of liver diseases. Ultrasound is widely performed for the routine surveillance of hepatocellular carcinoma along with tumor markers. Computer-aided diagnosis is useful in the detection of tumors and characterization of space-occupying lesions. The prognosis of hepatocellular carcinoma can be estimated via AI using large-scale and high-quality training datasets. The prevalence of nonalcoholic fatty liver disease is increasing worldwide, and a pivotal concern in the field is who will progress and develop hepatocellular carcinoma. Most AI studies require a large dataset, including laboratory or radiological findings and outcome data. AI will be useful in reducing medical errors, supporting clinical decisions, and predicting clinical outcomes. Thus, cooperation between AI and humans is expected to improve healthcare.
Collapse
Affiliation(s)
- Ryota Masuzaki
- Division of Gastroenterology and Hepatology, Department of Medicine, Nihon University School of Medicine, Tokyo 173-8610, Japan
| | - Tatsuo Kanda
- Division of Gastroenterology and Hepatology, Department of Medicine, Nihon University School of Medicine, Tokyo 173-8610, Japan
| | - Reina Sasaki
- Division of Gastroenterology and Hepatology, Department of Medicine, Nihon University School of Medicine, Tokyo 173-8610, Japan
| | - Naoki Matsumoto
- Division of Gastroenterology and Hepatology, Department of Medicine, Nihon University School of Medicine, Tokyo 173-8610, Japan
| | - Kazushige Nirei
- Division of Gastroenterology and Hepatology, Department of Medicine, Nihon University School of Medicine, Tokyo 173-8610, Japan
| | - Masahiro Ogawa
- Division of Gastroenterology and Hepatology, Department of Medicine, Nihon University School of Medicine, Tokyo 173-8610, Japan
| | - Mitsuhiko Moriyama
- Division of Gastroenterology and Hepatology, Department of Medicine, Nihon University School of Medicine, Tokyo 173-8610, Japan
| |
Collapse
|
29
|
Thompson AC, Jammal AA, Medeiros FA. A Review of Deep Learning for Screening, Diagnosis, and Detection of Glaucoma Progression. Transl Vis Sci Technol 2020; 9:42. [PMID: 32855846 PMCID: PMC7424906 DOI: 10.1167/tvst.9.2.42] [Citation(s) in RCA: 64] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2020] [Accepted: 05/21/2020] [Indexed: 12/23/2022] Open
Abstract
Because of recent advances in computing technology and the availability of large datasets, deep learning has risen to the forefront of artificial intelligence, with performances that often equal, or sometimes even exceed, those of human subjects on a variety of tasks, especially those related to image classification and pattern recognition. As one of the medical fields that is highly dependent on ancillary imaging tests, ophthalmology has been in a prime position to witness the application of deep learning algorithms that can help analyze the vast amount of data coming from those tests. In particular, glaucoma stands as one of the conditions where application of deep learning algorithms could potentially lead to better use of the vast amount of information coming from structural and functional tests evaluating the optic nerve and macula. The purpose of this article is to critically review recent applications of deep learning models in glaucoma, discussing their advantages but also focusing on the challenges inherent to the development of such models for screening, diagnosis and detection of progression. After a brief general overview of deep learning and how it compares to traditional machine learning classifiers, we discuss issues related to the training and validation of deep learning models and how they specifically apply to glaucoma. We then discuss specific scenarios where deep learning has been proposed for use in glaucoma, such as screening with fundus photography, and diagnosis and detection of glaucoma progression with optical coherence tomography and standard automated perimetry. Translational Relevance Deep learning algorithms have the potential to significantly improve diagnostic capabilities in glaucoma, but their application in clinical practice requires careful validation, with consideration of the target population, the reference standards used to build the models, and potential sources of bias.
Collapse
Affiliation(s)
- Atalie C Thompson
- Vision, Imaging and Performance Laboratory (VIP), Duke Eye Center, Duke University, Durham, NC, USA
| | - Alessandro A Jammal
- Vision, Imaging and Performance Laboratory (VIP), Duke Eye Center, Duke University, Durham, NC, USA
| | - Felipe A Medeiros
- Vision, Imaging and Performance Laboratory (VIP), Duke Eye Center, Duke University, Durham, NC, USA
| |
Collapse
|
30
|
Li M, Lian S, Wang F, Zhou Y, Chen B, Guan L, Wu Y. Prediction Model of Organic Molecular Absorption Energies based on Deep Learning trained by Chaos-enhanced Accelerated Evolutionary algorithm. Sci Rep 2019; 9:17261. [PMID: 31754116 PMCID: PMC6872818 DOI: 10.1038/s41598-019-53206-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2019] [Accepted: 10/29/2019] [Indexed: 01/24/2023] Open
Abstract
As an important physical property of molecules, absorption energy can characterize the electronic property and structural information of molecules. Moreover, the accurate calculation of molecular absorption energies is highly valuable. Existing linear and nonlinear methods suffer from low calculation accuracy and large errors, especially for molecular systems with irregular, complicated structures. Thus, developing a prediction model for molecular absorption energies with enhanced accuracy, efficiency, and stability is highly beneficial. By combining deep learning and intelligence algorithms, we propose a prediction model based on the chaos-enhanced accelerated particle swarm optimization algorithm and deep artificial neural network (CAPSO BP DNN) that possesses a seven-layer 8-4-4-4-4-4-1 structure. Eight parameters related to molecular absorption energies are selected as inputs: a theoretically calculated absorption energy E_c (B3LYP/STO-3G), molecular electron number N_e, oscillator strength O_s, number of double bonds N_db, total number of atoms N_a, number of hydrogen atoms N_h, number of carbon atoms N_c, and number of nitrogen atoms N_N; one parameter representing the molecular absorption energy is regarded as the output. A prediction experiment on organic molecular absorption energies indicates that CAPSO BP DNN exhibits a favourable predictive effect, accuracy, and correlation. The tested absolute average relative error, predicted root-mean-square error, and squared correlation coefficient are 0.033, 0.0153, and 0.9957, respectively. Relative to other prediction models, the CAPSO BP DNN model exhibits good comprehensive prediction performance and can serve as a reference for other fields in materials science, chemistry, and physics, such as nonlinear prediction of chemical and physical properties, QSAR/QSPR, and chemical information modelling.
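The forward pass of the seven-layer 8-4-4-4-4-4-1 topology described above can be sketched in a few lines of NumPy. This shows only the network shape with random weights and made-up input descriptors; the paper's actual model is trained via the chaos-enhanced accelerated PSO, which is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(42)
sizes = [8, 4, 4, 4, 4, 4, 1]   # the seven-layer 8-4-4-4-4-4-1 topology
weights = [rng.normal(scale=0.5, size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

def forward(x):
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.tanh(W @ x + b)                    # hidden layers with tanh activation
    return (weights[-1] @ x + biases[-1])[0]      # linear output: predicted absorption energy

# Eight illustrative input descriptors (values made up):
# E_c, N_e, O_s, N_db, N_a, N_h, N_c, N_N
x = np.array([3.1, 42.0, 0.35, 2.0, 12.0, 8.0, 6.0, 1.0])
x = (x - x.mean()) / x.std()                      # simple standardisation
print(forward(x))
```

In the published model the weights would be optimised by CAPSO (with adaptive learning rates) plus backpropagation rather than drawn at random, and the choice of activation function is an assumption here.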
Collapse
Affiliation(s)
- Mengshan Li
- College of Physics and Electronic Information, Gannan Normal University, Ganzhou, Jiangxi, 341000, China.
| | - Suyun Lian
- College of Physics and Electronic Information, Gannan Normal University, Ganzhou, Jiangxi, 341000, China
| | - Fan Wang
- College of Physics and Electronic Information, Gannan Normal University, Ganzhou, Jiangxi, 341000, China
| | - Yanying Zhou
- College of Physics and Electronic Information, Gannan Normal University, Ganzhou, Jiangxi, 341000, China
| | - Bingsheng Chen
- College of Physics and Electronic Information, Gannan Normal University, Ganzhou, Jiangxi, 341000, China
| | - Lixin Guan
- College of Physics and Electronic Information, Gannan Normal University, Ganzhou, Jiangxi, 341000, China
| | - Yan Wu
- College of Physics and Electronic Information, Gannan Normal University, Ganzhou, Jiangxi, 341000, China
| |
Collapse
|