1
Huang C, Jiang Y, Yang X, Wei C, Chen H, Xiong W, Lin H, Wang X, Tian T, Tan H. Enhancing Retinal Fundus Image Quality Assessment With Swin-Transformer-Based Learning Across Multiple Color-Spaces. Transl Vis Sci Technol 2024; 13:8. [PMID: 38568606; PMCID: PMC10996994; DOI: 10.1167/tvst.13.4.8]
Abstract
Purpose The assessment of retinal image (RI) quality is of significant importance in both clinical trials and large datasets, because suboptimal images can conceal early signs of disease and thereby lead to inaccurate medical diagnoses. This study aims to develop an automatic method for Retinal Image Quality Assessment (RIQA) that incorporates visual explanations to comprehensively evaluate the quality of retinal fundus images. Methods We developed an automatic RIQA system, named Swin-MCSFNet, using 28,792 RIs from the EyeQ dataset, 2,000 images from the EyePACS dataset, and an additional 1,000 images from the OIA-ODIR dataset. After preprocessing, including cropping of black regions, data augmentation, and normalization, a Swin-MCSFNet classifier based on the Swin Transformer with multiple color-space fusion was proposed to grade the quality of RIs. The generalizability of Swin-MCSFNet was validated across multiple data centers. Additionally, for enhanced interpretability, Score-CAM-generated heatmaps were applied to provide visual explanations. Results Experimental results reveal that the proposed Swin-MCSFNet achieves promising performance, yielding a micro-average ROC AUC of 0.93 and per-class AUCs of 0.96, 0.81, and 0.96 for the "Good," "Usable," and "Reject" categories, respectively. These scores underscore the accuracy of Swin-MCSFNet in distinguishing among the three categories. Furthermore, heatmaps generated across different RIQA classification scores and color spaces suggest that regions of the retinal images in multiple color spaces contribute significantly to the decision-making process of the Swin-MCSFNet classifier. Conclusions Our study demonstrates that the proposed Swin-MCSFNet outperforms other methods in experiments conducted on multiple datasets, as evidenced by the superior performance metrics and insightful Score-CAM heatmaps.
Translational Relevance This study constructs a new retinal image quality evaluation system, which will support subsequent research on retinal images.
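The black-region cropping mentioned in the preprocessing step above can be sketched as a simple intensity-threshold heuristic; the threshold value and function interface here are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def crop_black_border(img, threshold=10):
    """Crop the black background border around a fundus image.

    img: H x W x 3 uint8 array. A pixel is treated as background if all
    of its channels fall below `threshold` (a heuristic cutoff).
    """
    # Mask of non-background pixels.
    mask = img.max(axis=2) > threshold
    if not mask.any():
        return img  # nothing to crop
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    # Tight bounding box around the retinal disc.
    return img[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
```

A bounding-box crop like this keeps the full retinal disc while discarding the uninformative border that would otherwise dilute data augmentation and normalization statistics.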
Affiliation(s)
- Chengcheng Huang
  - Department of Preventive Medicine, Shantou University Medical College, Shantou, China
- Yukang Jiang
  - School of Mathematics, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Xiaochun Yang
  - The First People's Hospital of Yunnan Province, Kunming, China
- Chiyu Wei
  - Department of Preventive Medicine, Shantou University Medical College, Shantou, China
- Hongyu Chen
  - Department of Optoelectronic Information Science and Engineering, Physical and Materials Science College, Guangzhou University, Guangzhou, China
- Weixue Xiong
  - Department of Preventive Medicine, Shantou University Medical College, Shantou, China
- Henghui Lin
  - Department of Preventive Medicine, Shantou University Medical College, Shantou, China
- Xueqin Wang
  - School of Management, University of Science and Technology of China, Hefei, Anhui, China
- Ting Tian
  - School of Mathematics, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Haizhu Tan
  - Department of Preventive Medicine, Shantou University Medical College, Shantou, China
2
Chuter B, Huynh J, Bowd C, Walker E, Rezapour J, Brye N, Belghith A, Fazio MA, Girkin CA, De Moraes G, Liebmann JM, Weinreb RN, Zangwill LM, Christopher M. Deep Learning Identifies High-Quality Fundus Photographs and Increases Accuracy in Automated Primary Open Angle Glaucoma Detection. Transl Vis Sci Technol 2024; 13:23. [PMID: 38285462; PMCID: PMC10829806; DOI: 10.1167/tvst.13.1.23]
Abstract
Purpose To develop and evaluate a deep learning (DL) model to assess fundus photograph quality, and to quantitatively measure its impact on automated primary open-angle glaucoma (POAG) detection in independent study populations. Methods Image quality ground truth was determined by manual review of 2815 fundus photographs of healthy and POAG eyes from the Diagnostic Innovations in Glaucoma Study and African Descent and Glaucoma Evaluation Study (DIGS/ADAGES), as well as 11,350 photographs from the Ocular Hypertension Treatment Study (OHTS). Human experts assessed a photograph as high quality if it was sufficient to determine POAG status and as poor quality if not. A DL quality model was trained on photographs from DIGS/ADAGES and tested on OHTS. The effect of DL quality assessment on DL POAG detection was measured using the area under the receiver operating characteristic curve (AUROC). Results The DL quality model yielded an AUROC of 0.97 for differentiating between high- and low-quality photographs; qualitative human review affirmed high model performance. Diagnostic accuracy of the DL POAG model was significantly greater (P < 0.001) on good-quality (AUROC, 0.87; 95% CI, 0.80-0.92) than on poor-quality photographs (AUROC, 0.77; 95% CI, 0.67-0.88). Conclusions The DL quality model was able to accurately assess fundus photograph quality. Using automated quality assessment to filter out low-quality photographs increased the accuracy of a DL POAG detection model. Translational Relevance Incorporating DL quality assessment into automated review of fundus photographs can help decrease the burden of manual review and improve the accuracy of automated DL POAG detection.
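The filtering step described above, gating photographs on a quality model's output before they reach the diagnostic model, can be sketched as follows; the `quality_score` callable and the 0.5 cutoff are hypothetical stand-ins for the study's trained CNN and operating point:

```python
def gate_by_quality(images, quality_score, threshold=0.5):
    """Split images into usable / rejected sets by a quality score.

    quality_score: callable returning the (assumed) probability that an
    image is of sufficient quality for diagnosis. Only the usable set
    would be passed on to the downstream POAG detection model.
    """
    usable, rejected = [], []
    for im in images:
        (usable if quality_score(im) >= threshold else rejected).append(im)
    return usable, rejected
```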
Affiliation(s)
- Benton Chuter
  - Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
- Justin Huynh
  - Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
- Christopher Bowd
  - Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
- Evan Walker
  - Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
- Jasmin Rezapour
  - Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
  - Department of Ophthalmology, University Medical Center Mainz, Germany
- Nicole Brye
  - Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
- Akram Belghith
  - Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
- Massimo A. Fazio
  - School of Medicine, Callahan Eye Hospital, University of Alabama-Birmingham, Birmingham, Alabama, United States
- Christopher A. Girkin
  - School of Medicine, Callahan Eye Hospital, University of Alabama-Birmingham, Birmingham, Alabama, United States
- Gustavo De Moraes
  - Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Medical Center, New York, New York, United States
- Jeffrey M. Liebmann
  - Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Medical Center, New York, New York, United States
- Robert N. Weinreb
  - Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
- Linda M. Zangwill
  - Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
- Mark Christopher
  - Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
3
König M, Seeböck P, Gerendas BS, Mylonas G, Winklhofer R, Dimakopoulou I, Schmidt-Erfurth UM. Quality assessment of colour fundus and fluorescein angiography images using deep learning. Br J Ophthalmol 2023; 108:98-104. [PMID: 36418144; PMCID: PMC10804038; DOI: 10.1136/bjo-2022-321963]
Abstract
BACKGROUND/AIMS Image quality assessment (IQA) is crucial both for reading centres in clinical studies and in routine practice, as only adequate quality allows clinicians to correctly identify diseases and treat patients accordingly. Here we aim to develop a neural network for automated real-time IQA in colour fundus (CF) and fluorescein angiography (FA) images. METHODS Training and evaluation of two neural networks were conducted using 2272 CF and 2492 FA images, with binary labels in four (contrast, focus, illumination, shadow and reflection) and three (contrast, focus, noise) modality-specific categories plus an overall quality ranking. Performance was compared with a second human grader, evaluated on an external public dataset, and assessed in a clinical trial use-case. RESULTS The networks achieved an F1-score/area under the receiver operating characteristic curve/area under the precision-recall curve of 0.907/0.963/0.966 for CF and 0.822/0.918/0.889 for FA in overall quality prediction, with similar results in most categories. A clear relation between model uncertainty and prediction error was observed. In the clinical trial use-case evaluation, the networks achieved an accuracy of 0.930 for CF and 0.895 for FA. CONCLUSION The presented method allows automated IQA in real time, demonstrating human-level performance for CF as well as FA. Such models can help to overcome the problem of human intergrader and intragrader variability by providing objective and reproducible IQA results. This has particular relevance for real-time feedback in multicentre clinical studies, where images are uploaded to central reading-centre portals. Moreover, automated IQA as a preprocessing step can support the integration of automated approaches into clinical practice.
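One common way to quantify the model uncertainty whose relation to prediction error is noted above is the entropy of the predicted class probability; this is a generic sketch of that idea, not necessarily the uncertainty measure the authors used:

```python
import math

def binary_entropy(p, eps=1e-12):
    """Predictive entropy (in bits) of a binary quality prediction.

    p: predicted probability of the 'adequate quality' class. Entropy
    peaks at p = 0.5 (most uncertain) and vanishes as p approaches 0
    or 1 (most confident), so high-entropy predictions can be flagged
    for human review.
    """
    p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))
```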
Affiliation(s)
- Michael König
  - Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Philipp Seeböck
  - Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Bianca S Gerendas
  - Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Georgios Mylonas
  - Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Rudolf Winklhofer
  - Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Ioanna Dimakopoulou
  - Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
4
Nakayama LF, Zago Ribeiro L, Novaes F, Miyawaki IA, Miyawaki AE, de Oliveira JAE, Oliveira T, Malerbi FK, Regatieri CVS, Celi LA, Silva PS. Artificial intelligence for telemedicine diabetic retinopathy screening: a review. Ann Med 2023; 55:2258149. [PMID: 37734417; PMCID: PMC10515659; DOI: 10.1080/07853890.2023.2258149]
Abstract
PURPOSE This study aims to compare artificial intelligence (AI) systems applied in diabetic retinopathy (DR) teleophthalmology screening, currently deployed systems, fairness initiatives, and the challenges to implementation. METHODS The review included articles retrieved from a PubMed/Medline/EMBASE literature search strategy regarding telemedicine, DR and AI. The screening criteria included human studies in English, Portuguese or Spanish related to telemedicine and AI for DR screening. The authors' affiliations and each study's population income group were classified according to the World Bank Country and Lending Groups. RESULTS The literature search yielded a total of 132 articles, and nine were included after full-text assessment. The selected articles were published between 2004 and 2020 and were grouped as telemedicine systems, algorithms, economic analysis and image quality assessment. Four telemedicine systems that perform quality assessment, image preprocessing and pathological screening were reviewed. None of the algorithms performs a data or post-deployment bias assessment, and none of the studies evaluates the social impact of implementation. There is a lack of representativeness in the reviewed articles, with most authors and target populations from high-income countries and no representation from low-income countries. CONCLUSIONS Telemedicine and AI hold great promise for augmenting decision-making in medical care, expanding patient access and enhancing cost-effectiveness. Economic studies and social science analyses are crucial to support the implementation of AI in teleophthalmology screening programs. Promoting fairness and generalizability in automated systems combined with telemedicine screening programs is not straightforward. Improving data representativeness, reducing biases and promoting equity in deployment and post-deployment studies are all critical steps in model development.
Affiliation(s)
- Luis Filipe Nakayama
  - Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA
  - Department of Ophthalmology, São Paulo Federal University, Sao Paulo, Brazil
- Lucas Zago Ribeiro
  - Department of Ophthalmology, São Paulo Federal University, Sao Paulo, Brazil
- Frederico Novaes
  - Department of Ophthalmology, São Paulo Federal University, Sao Paulo, Brazil
- Talita Oliveira
  - Department of Ophthalmology, São Paulo Federal University, Sao Paulo, Brazil
- Leo Anthony Celi
  - Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA
  - Department of Biostatistics, Harvard TH Chan School of Public Health, Boston, MA, USA
  - Department of Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Paolo S. Silva
  - Beetham Eye Institute, Joslin Diabetes Centre, Harvard Medical School, Boston, MA, USA
  - Philippine Eye Research Institute, University of the Philippines, Manila, Philippines
5
Bryan JM, Bryar PJ, Mirza RG. Convolutional Neural Networks Accurately Identify Ungradable Images in a Diabetic Retinopathy Telemedicine Screening Program. Telemed J E Health 2023; 29:1349-1355. [PMID: 36730708; DOI: 10.1089/tmj.2022.0357]
Abstract
Purpose: Diabetic retinopathy (DR) is a microvascular complication of diabetes mellitus (DM). Standard of care for patients with DM is an annual eye examination or retinal imaging to assess for DR, the latter of which may be completed through telemedicine approaches. One significant issue is poor-quality images that prevent adequate screening and are thus ungradable. We used artificial intelligence to enable point-of-care (at the time of imaging) identification of ungradable images in a DR screening program. Methods: Nonmydriatic retinal images were gathered from patients with DM imaged during a primary care or endocrinology visit from September 1, 2017, to June 1, 2021. The Topcon TRC-NW400 retinal camera (Topcon Corp., Tokyo, Japan) was used. Images were interpreted by five ophthalmologists for gradability, presence and stage of DR, and presence of non-DR pathologies. A convolutional neural network with the Inception V3 architecture was trained to assess image gradability. Images were divided into training and test sets, and 10-fold cross-validation was performed. Results: A total of 1,377 images from 537 patients (56.1% female, median age 58) were analyzed. Ophthalmologists classified 25.9% of images as ungradable. Of gradable images, 18.6% had DR of varying degrees and 26.5% had non-DR pathology. Ten-fold cross-validation produced an average area under the receiver operating characteristic curve (AUC) of 0.922 (standard deviation: 0.027, range: 0.882 to 0.961). The final model exhibited similar test-set performance, with an AUC of 0.924. Conclusions: This model accurately assesses the gradability of nonmydriatic retinal images. It could increase the efficiency of DR screening programs by enabling point-of-care identification of poor-quality images.
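The per-fold AUC averaged above can be computed directly from its rank-statistic definition; a minimal sketch (not the authors' implementation), where "positive" means ungradable:

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a randomly chosen positive
    (ungradable) image scores higher than a randomly chosen negative
    one, counting ties as half wins (the Mann-Whitney U formulation).
    """
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

In a k-fold setting, this statistic is computed once per held-out fold and the k values are averaged, yielding the reported mean and spread.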
Affiliation(s)
- John M Bryan
  - Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
- Paul J Bryar
  - Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
- Rukhsana G Mirza
  - Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
6
Abramovich O, Pizem H, Van Eijgen J, Oren I, Melamed J, Stalmans I, Blumenthal EZ, Behar JA. FundusQ-Net: A regression quality assessment deep learning algorithm for fundus images quality grading. Comput Methods Programs Biomed 2023; 239:107522. [PMID: 37285697; DOI: 10.1016/j.cmpb.2023.107522]
Abstract
OBJECTIVE Ophthalmological pathologies such as glaucoma, diabetic retinopathy and age-related macular degeneration are major causes of blindness and vision impairment. There is a need for novel decision support tools that can simplify and speed up the diagnosis of these pathologies. A key step in this process is automatically estimating the quality of the fundus images to make sure they are interpretable by a human operator or a machine learning model. We present a novel fundus image quality scale and a deep learning (DL) model that can estimate fundus image quality relative to this new scale. METHODS A total of 1245 images were graded for quality by two ophthalmologists on a 1-10 scale with a resolution of 0.5. A DL regression model was trained for fundus image quality assessment. The architecture used was Inception-V3. The model was developed using a total of 89,947 images from six databases, of which 1245 were labeled by the specialists and the remaining 88,702 were used for pre-training and semi-supervised learning. The final DL model was evaluated on an internal test set (n=209) as well as an external test set (n=194). RESULTS The final DL model, denoted FundusQ-Net, achieved a mean absolute error of 0.61 (0.54-0.68) on the internal test set. When evaluated as a binary classifier on the public DRIMDB database as an external test set, the model obtained an accuracy of 99%. SIGNIFICANCE The proposed algorithm provides a new robust tool for automated quality grading of fundus images.
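The 1-10 quality scale with 0.5 resolution and the mean-absolute-error evaluation can be illustrated as follows; `round_to_half` is a hypothetical helper for snapping a raw regression output onto the grading scale, not part of FundusQ-Net itself:

```python
def round_to_half(x):
    """Snap a raw model output to the nearest 0.5 step on the 1-10 scale."""
    return min(10.0, max(1.0, round(x * 2) / 2))

def mean_absolute_error(preds, labels):
    """MAE between predicted and expert quality grades."""
    return sum(abs(p - y) for p, y in zip(preds, labels)) / len(preds)
```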
Affiliation(s)
- Or Abramovich
  - The Faculty of Biomedical Engineering, Technion-IIT, Haifa, Israel
- Hadas Pizem
  - Rambam Health Care Campus, Israel
- Jan Van Eijgen
  - Research Group of Ophthalmology, Department of Neurosciences, KU Leuven, Oude Markt 13, 3000 Leuven, Belgium
  - Department of Ophthalmology, University Hospitals UZ Leuven, Herestraat 49, 3000 Leuven, Belgium
- Ilan Oren
  - The Faculty of Biomedical Engineering, Technion-IIT, Haifa, Israel
- Joshua Melamed
  - The Faculty of Biomedical Engineering, Technion-IIT, Haifa, Israel
- Ingeborg Stalmans
  - Research Group of Ophthalmology, Department of Neurosciences, KU Leuven, Oude Markt 13, 3000 Leuven, Belgium
  - Department of Ophthalmology, University Hospitals UZ Leuven, Herestraat 49, 3000 Leuven, Belgium
- Joachim A Behar
  - The Faculty of Biomedical Engineering, Technion-IIT, Haifa, Israel
7
Harikiran J, Chandana BS, Rao BS, Raviteja B. Ocular disease examination of fundus images by hybriding SFCNN and rule mining algorithms. Imaging Sci J 2023. [DOI: 10.1080/13682199.2023.2183456]
Affiliation(s)
- J. Harikiran
  - School of Computer Science and Engineering, VIT-AP University, Amaravathi, India
- B. Sai Chandana
  - School of Computer Science and Engineering, VIT-AP University, Amaravathi, India
- B. Srinivasa Rao
  - School of Computer Science and Engineering, VIT-AP University, Amaravathi, India
- B. Raviteja
  - Department of Computer Science and Engineering, GITAM Deemed to be University, Visakhapatnam, India
8
A Neural Network for Automated Image Quality Assessment of Optic Disc Photographs. J Clin Med 2023; 12:1217. [PMID: 36769865; PMCID: PMC9917571; DOI: 10.3390/jcm12031217]
Abstract
This study describes the development of a convolutional neural network (CNN) for automated assessment of optic disc photograph quality. Using a code-free deep learning platform, a total of 2377 optic disc photographs were used to develop a deep CNN capable of determining optic disc photograph quality. Of these, 1002 were good-quality images, 609 were acceptable-quality, and 766 were poor-quality images. The dataset was split 80/10/10 into training, validation, and test sets and balanced for quality. A ternary classification model (good, acceptable, and poor quality) and a binary model (usable, unusable) were developed. In the ternary classification system, the model had an overall accuracy of 91% and an AUC of 0.98. The model had higher predictive accuracy for images of good (93%) and poor quality (96%) than for images of acceptable quality (91%). The binary model performed with an overall accuracy of 98% and an AUC of 0.99. When validated on 292 images not included in the original training/validation/test dataset, the model's accuracy was 85% on the three-class classification task and 97% on the binary classification task. The proposed system for automated image-quality assessment for optic disc photographs achieves high accuracy in both ternary and binary classification systems, and highlights the success achievable with a code-free platform. There is wide clinical and research potential for such a model, with potential applications ranging from integration into fundus camera software to provide immediate feedback to ophthalmic photographers, to prescreening large databases before their use in research.
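The relationship between the ternary and binary labelling schemes above can be illustrated with a simple mapping; the grouping of "acceptable" with "usable" is an assumption for illustration, since the study trained separate models rather than deriving one scheme from the other:

```python
def to_binary(ternary_label):
    """Collapse a three-class quality label into a binary scheme.

    Assumed mapping: 'good' and 'acceptable' are usable, 'poor' is
    unusable. This is an illustrative convention, not the authors'
    documented procedure.
    """
    if ternary_label not in {"good", "acceptable", "poor"}:
        raise ValueError(f"unknown label: {ternary_label}")
    return "usable" if ternary_label in {"good", "acceptable"} else "unusable"
```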
9
Chan E, Tang Z, Najjar RP, Narayanaswamy A, Sathianvichitr K, Newman NJ, Biousse V, Milea D. A Deep Learning System for Automated Quality Evaluation of Optic Disc Photographs in Neuro-Ophthalmic Disorders. Diagnostics (Basel) 2023; 13:160. [PMID: 36611452; PMCID: PMC9818957; DOI: 10.3390/diagnostics13010160]
Abstract
The quality of ocular fundus photographs can affect the accuracy of the morphologic assessment of the optic nerve head (ONH), either by humans or by deep learning systems (DLS). In order to automatically identify ONH photographs of optimal quality, we have developed, trained, and tested a DLS, using an international, multicentre, multi-ethnic dataset of 5015 ocular fundus photographs from 31 centres in 20 countries participating in the Brain and Optic Nerve Study with Artificial Intelligence (BONSAI). The reference standard in image quality was established by three experts who independently classified photographs as of "good", "borderline", or "poor" quality. The DLS was trained on 4208 fundus photographs and tested on an independent external dataset of 807 photographs, using a multi-class model evaluated with a one-vs-rest classification strategy. In the external-testing dataset, the DLS identified "good"-quality photographs with excellent performance (AUC = 0.93 (95% CI, 0.91-0.95), accuracy = 91.4% (95% CI, 90.0-92.9%), sensitivity = 93.8% (95% CI, 92.5-95.2%), specificity = 75.9% (95% CI, 69.7-82.1%)) as well as "poor"-quality photographs (AUC = 1.00 (95% CI, 0.99-1.00), accuracy = 99.1% (95% CI, 98.6-99.6%), sensitivity = 81.5% (95% CI, 70.6-93.8%), specificity = 99.7% (95% CI, 99.6-100.0%)). "Borderline"-quality images were also accurately classified (AUC = 0.90 (95% CI, 0.88-0.93), accuracy = 90.6% (95% CI, 89.1-92.2%), sensitivity = 65.4% (95% CI, 56.6-72.9%), specificity = 93.4% (95% CI, 92.1-94.8%)). The overall accuracy in distinguishing among the three classes was 90.6% (95% CI, 89.1-92.1%), suggesting that this DLS could select optimal-quality fundus photographs in patients with neuro-ophthalmic and neurological disorders affecting the ONH.
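The one-vs-rest sensitivity and specificity figures reported above can be computed per class from paired label lists; a generic sketch, not the study's evaluation code:

```python
def one_vs_rest_metrics(y_true, y_pred, positive):
    """Sensitivity and specificity for one class against the rest,
    as in a one-vs-rest evaluation of a multi-class model."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity
```

Repeating this for each of "good", "borderline", and "poor" yields the per-class metric triples quoted in the abstract.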
Affiliation(s)
- Ebenezer Chan
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
  - Duke-NUS School of Medicine, Singapore 169857, Singapore
- Zhiqun Tang
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Raymond P. Najjar
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
  - Duke-NUS School of Medicine, Singapore 169857, Singapore
  - Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 117597, Singapore
  - Center for Innovation & Precision Eye Health, National University of Singapore, Singapore 119077, Singapore
- Arun Narayanaswamy
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
  - Glaucoma Department, Singapore National Eye Centre, Singapore 168751, Singapore
- Nancy J. Newman
  - Departments of Ophthalmology and Neurology, Emory University, Atlanta, GA 30322, USA
- Valérie Biousse
  - Departments of Ophthalmology and Neurology, Emory University, Atlanta, GA 30322, USA
- Dan Milea
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
  - Duke-NUS School of Medicine, Singapore 169857, Singapore
  - Department of Ophthalmology, Rigshospitalet, University of Copenhagen, 2600 Copenhagen, Denmark
  - Department of Ophthalmology, Angers University Hospital, 49100 Angers, France
  - Neuro-Ophthalmology Department, Singapore National Eye Centre, Singapore 168751, Singapore
10
Parashar D, Agrawal DK. Classification of Glaucoma Stages Using Image Empirical Mode Decomposition from Fundus Images. J Digit Imaging 2022; 35:1283-1292. [PMID: 35581407; PMCID: PMC9582090; DOI: 10.1007/s10278-022-00648-1]
Abstract
One of the most prevalent causes of visual loss and blindness is glaucoma. Conventionally, instrument-based tools are employed for glaucoma screening; however, they are manual, inefficient and time-consuming. Hence, computerized methodologies are needed for fast and accurate diagnosis of glaucoma. We therefore propose a Computer-Aided Diagnosis (CAD) method for the classification of glaucoma stages using Image Empirical Mode Decomposition (IEMD). In this study, IEMD is applied to decompose the preprocessed fundus photographs into different Intrinsic Mode Functions (IMFs) to capture the pixel variations. Significant texture-based descriptors are then computed from the IMFs. A dimensionality reduction approach, Principal Component Analysis (PCA), is employed to select robust descriptors from the retrieved feature set, and the Analysis of Variance (ANOVA) test is used for feature ranking. Finally, an LS-SVM classifier is employed to classify glaucoma stages. The proposed CAD system achieved a classification accuracy of 94.45% for binary classification on the RIM-ONE r12 database. Our approach demonstrated better glaucoma classification performance than existing automated systems.
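The PCA step used above to reduce the texture-descriptor set can be sketched with a plain SVD; this is a generic implementation of the technique, not the authors' code:

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature vectors onto the top-k principal components.

    X: n_samples x n_features matrix of texture descriptors.
    Returns the n_samples x k matrix of component scores.
    """
    Xc = X - X.mean(axis=0)  # centre each feature
    # Right singular vectors of the centred data are the principal axes.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```

The reduced scores would then be ranked (e.g., by ANOVA F-statistics, as in the paper) before feeding the classifier.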
Affiliation(s)
- Deepak Parashar
  - Department of Electronics and Communication Engineering, IES College of Technology, Bhopal, 462044, MP, India
  - Department of Electronics and Communication Engineering, Maulana Azad National Institute of Technology, Bhopal, 462003, MP, India
- Dheraj Kumar Agrawal
  - Department of Electronics and Communication Engineering, Maulana Azad National Institute of Technology, Bhopal, 462003, MP, India
11
Nadeem MW, Goh HG, Hussain M, Liew SY, Andonovic I, Khan MA. Deep Learning for Diabetic Retinopathy Analysis: A Review, Research Challenges, and Future Directions. Sensors (Basel) 2022; 22:6780. [PMID: 36146130; PMCID: PMC9505428; DOI: 10.3390/s22186780]
Abstract
Deep learning (DL) enables the creation of computational models comprising multiple processing layers that learn data representations at multiple levels of abstraction. The use of deep learning has proliferated in recent years, yielding promising results in applications across a growing number of fields, most notably in image processing, medical image analysis, data analysis, and bioinformatics. DL algorithms have also had a significant positive impact through yielding improvements in screening, recognition, segmentation, prediction, and classification applications across different domains of healthcare, including abdominal, cardiac, pathology, and retinal applications. Given the extensive body of recent scientific contributions in this discipline, a comprehensive review of deep learning developments in the domain of diabetic retinopathy (DR) analysis, viz., screening, segmentation, prediction, classification, and validation, is presented here. A critical analysis of the relevant reported techniques is carried out, and the associated advantages and limitations are highlighted, culminating in the identification of research gaps and future challenges that inform the research community's development of more efficient, robust, and accurate DL models for the various challenges in the monitoring and diagnosis of DR.
Affiliation(s)
- Muhammad Waqas Nadeem: Faculty of Information and Communication Technology (FICT), Universiti Tunku Abdul Rahman (UTAR), Kampar 31900, Malaysia
- Hock Guan Goh: Faculty of Information and Communication Technology (FICT), Universiti Tunku Abdul Rahman (UTAR), Kampar 31900, Malaysia
- Muzammil Hussain: Department of Computer Science, School of Systems and Technology, University of Management and Technology, Lahore 54000, Pakistan
- Soung-Yue Liew: Faculty of Information and Communication Technology (FICT), Universiti Tunku Abdul Rahman (UTAR), Kampar 31900, Malaysia
- Ivan Andonovic: Department of Electronic and Electrical Engineering, Royal College Building, University of Strathclyde, 204 George St., Glasgow G1 1XW, UK
- Muhammad Adnan Khan: Pattern Recognition and Machine Learning Lab, Department of Software, Gachon University, Seongnam 13557, Korea; Faculty of Computing, Riphah School of Computing and Innovation, Riphah International University, Lahore Campus, Lahore 54000, Pakistan
12
Lyu X, Jajal P, Tahir MZ, Zhang S. Fractal dimension of retinal vasculature as an image quality metric for automated fundus image analysis systems. Sci Rep 2022; 12:11868. PMID: 35831401; PMCID: PMC9279448; DOI: 10.1038/s41598-022-16089-3.
Abstract
Automated fundus screening is becoming a significant programme of telemedicine in ophthalmology. Instant quality evaluation of uploaded retinal images could reduce unreliable diagnoses. In this work, we propose the fractal dimension of the retinal vasculature as an easy, effective and explainable indicator of retinal image quality. The pipeline of our approach is as follows: an image pre-processing technique standardizes input retinal images from possibly different sources to a uniform style; an improved deep-learning-based vessel segmentation model then extracts retinal vessels from the pre-processed images; finally, a box-counting module measures the fractal dimension of the segmented vessel images. A small fractal dimension (below a threshold between 1.45 and 1.50) indicates insufficient image quality. Our approach has been validated on 30,644 images from four public databases.
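The box-counting estimate at the heart of this pipeline is simple to sketch. Below is a minimal NumPy illustration, not the authors' implementation; the `is_sufficient_quality` gate and its default threshold merely mirror the 1.45 to 1.50 range quoted in the abstract:

```python
import numpy as np

def box_count(mask: np.ndarray, box_size: int) -> int:
    """Count boxes of side `box_size` containing at least one vessel pixel."""
    h, w = mask.shape
    count = 0
    for i in range(0, h, box_size):
        for j in range(0, w, box_size):
            if mask[i:i + box_size, j:j + box_size].any():
                count += 1
    return count

def fractal_dimension(mask: np.ndarray) -> float:
    """Estimate the box-counting fractal dimension of a binary vessel mask."""
    sizes = [2, 4, 8, 16, 32, 64]
    counts = [box_count(mask, s) for s in sizes]
    # Fit log N(s) = -D log s + c; the slope magnitude D is the dimension.
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

def is_sufficient_quality(mask: np.ndarray, threshold: float = 1.45) -> bool:
    """Hypothetical quality gate based on the 1.45-1.50 range in the abstract."""
    return fractal_dimension(mask) >= threshold
```

A thin curve (dimension near 1) would be rejected, while a richly branching vascular tree pushes the estimate toward the reported threshold band.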
Affiliation(s)
- Xingzheng Lyu: College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Hangzhou, 310027, China
- Purvish Jajal: Department of Electrical and Computer Engineering, University of Alberta, Edmonton, T6G 1H9, Canada
- Muhammad Zeeshan Tahir: College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Hangzhou, 310027, China
- Sanyuan Zhang: College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Hangzhou, 310027, China
13
Biswas S, Khan MIA, Hossain MT, Biswas A, Nakai T, Rohdin J. Which Color Channel Is Better for Diagnosing Retinal Diseases Automatically in Color Fundus Photographs? Life (Basel) 2022; 12:973. PMID: 35888063; PMCID: PMC9321111; DOI: 10.3390/life12070973.
Abstract
Color fundus photographs are the most common type of image used for the automatic diagnosis of retinal diseases and abnormalities. Like all color photographs, these images contain information about three primary colors, i.e., red, green, and blue, in three separate color channels. This work aims to understand the impact of each channel on the automatic diagnosis of retinal diseases and abnormalities. To this end, the existing works are surveyed extensively to explore which color channel is used most commonly for automatically detecting four leading causes of blindness and one retinal abnormality, along with segmenting three retinal landmarks. From this survey, it is clear that all channels together are typically used for neural-network-based systems, whereas for non-neural-network-based systems the green channel is most commonly used. However, from the previous works, no conclusion can be drawn regarding the importance of the different channels. Therefore, systematic experiments are conducted to analyse this. A well-known U-shaped deep neural network (U-Net) is used to investigate which color channel is best for segmenting one retinal abnormality and three retinal landmarks.
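The kind of per-channel comparison surveyed here can be illustrated with a toy contrast proxy. This is a hypothetical sketch for intuition only, not the paper's U-Net experiment; the standard deviation is merely one crude stand-in for how much vessel-to-background structure a channel carries:

```python
import numpy as np

def channel_contrast(rgb: np.ndarray) -> dict:
    """Per-channel standard deviation of an H x W x 3 image, a crude proxy
    for how much structure each colour channel carries."""
    return {name: float(rgb[..., i].astype(float).std())
            for i, name in enumerate(("red", "green", "blue"))}

def best_channel(rgb: np.ndarray) -> str:
    """Name of the channel with the highest contrast proxy."""
    scores = channel_contrast(rgb)
    return max(scores, key=scores.get)
```

On real fundus photographs this simple proxy often favours the green channel, consistent with the survey's observation about non-neural-network systems.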
Affiliation(s)
- Sangeeta Biswas (corresponding author): Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Iqbal Aziz Khan: Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Tanvir Hossain: Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Angkan Biswas: CAPM Company Limited, Bonani, Dhaka 1213, Bangladesh
- Takayoshi Nakai: Faculty of Engineering, Shizuoka University, Hamamatsu 432-8561, Japan
- Johan Rohdin: Faculty of Information Technology, Brno University of Technology, 61200 Brno, Czech Republic
14
Ramani RG, Shanthamalar JJ. Automated image quality appraisal through partial least squares discriminant analysis. Int J Comput Assist Radiol Surg 2022; 17:1367-1377. PMID: 35650346; DOI: 10.1007/s11548-022-02668-2.
Abstract
PURPOSE Automatic retinal fundus image quality analysis is one of the most essential preliminary stages in an automatic computer-aided retinal disease diagnosis system, as it admits only good-quality fundus images for accurate disease prediction through localization and segmentation of retinal regions. This paper presents new feature extraction methods using full-reference and no-reference image quality metrics for image quality classification. METHODS Basic image features, full-reference features, and no-reference features are extracted from each fundus image and passed to different classification techniques to determine image quality for further diagnosis. A two-class categorization into good and non-good-quality fundus images is constructed by considering the major quality attributes of retinal fundus images: illumination, clarity, image intensity, contrast, and region visibility. The proposed system classifies fundus image quality by automatically extracting features through image processing techniques and classifying quality with different classification algorithms. RESULTS The system was thoroughly evaluated on 2674 retinal fundus images from publicly available datasets, namely MESSIDOR, Drishti-GS1, DRIVE, HRF, DRIONS-DB, DIARETDB0, DIARETDB1, IDRiD, INSPIRE-AVR, CHASE-DB1, ONHSD, DRIMDB, and e-ophtha-EX, achieving sensitivity, accuracy, precision, and F1 score of 99.36%, 96.79%, 96.29%, and 97.79%, respectively. CONCLUSION Compared with existing state-of-the-art approaches, the proposed system outperforms existing image quality assessment methods, demonstrating its efficiency and robustness and its suitability for automatic image analysis during retinal disease diagnosis.
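Quality attributes like illumination, clarity, and contrast can be approximated with simple no-reference statistics. The following is an illustrative sketch under those assumptions, not the authors' feature set; the mean, standard deviation, and Laplacian variance are common generic proxies for illumination, contrast, and clarity respectively:

```python
import numpy as np

def quality_features(gray: np.ndarray) -> dict:
    """Simple no-reference quality features of a grayscale fundus image:
    illumination (mean), contrast (std) and clarity (Laplacian variance)."""
    g = gray.astype(float)
    # 4-neighbour Laplacian (with wrap-around borders) as a clarity proxy:
    # blurred images have small second derivatives, so a low variance here.
    lap = (np.roll(g, 1, 0) + np.roll(g, -1, 0)
           + np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4 * g)
    return {
        "illumination": float(g.mean()),
        "contrast": float(g.std()),
        "clarity": float(lap.var()),
    }
```

Feature vectors of this kind would then be handed to a classifier (the paper uses partial least squares discriminant analysis) to separate good from non-good images.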
Affiliation(s)
- R Geetha Ramani: Department of Information Science and Technology, Anna University, Chennai, India
- J Jeslin Shanthamalar: Sathyabama Institute of Science and Technology, Sathyabama University, Chennai, India
15
Matta S, Lamard M, Conze PH, Le Guilcher A, Ricquebourg V, Benyoussef AA, Massin P, Rottier JB, Cochener B, Quellec G. Automatic Screening for Ocular Anomalies Using Fundus Photographs. Optom Vis Sci 2022; 99:281-291. PMID: 34897234; DOI: 10.1097/opx.0000000000001845.
Abstract
SIGNIFICANCE Screening for ocular anomalies using fundus photography is key to preventing vision impairment and blindness. With the growing and aging population, automated algorithms that can triage fundus photographs and provide instant referral decisions are relevant to scaling up screening and addressing the shortage of ophthalmic expertise. PURPOSE This study aimed to develop a deep learning algorithm that detects any ocular anomaly in fundus photographs and to evaluate this algorithm for "normal versus anomalous" eye examination classification in the diabetic and general populations. METHODS The deep learning algorithm was developed and evaluated in two populations: the diabetic and general populations. Our patient cohorts consist of 37,129 diabetic patients from the OPHDIAT diabetic retinopathy screening network in Paris, France, and 7356 general patients from the OphtaMaine private screening network, in Le Mans, France. Each data set was divided into a development subset and a test subset of more than 4000 examinations each. For ophthalmologist/algorithm comparison, a subset of 2014 examinations from the OphtaMaine test subset was labeled by a second ophthalmologist. First, the algorithm was trained on the OPHDIAT development subset. Then, it was fine-tuned on the OphtaMaine development subset. RESULTS On the OPHDIAT test subset, the area under the receiver operating characteristic curve for normal versus anomalous classification was 0.9592. On the OphtaMaine test subset, the area under the receiver operating characteristic curve was 0.8347 before fine-tuning and 0.9108 after fine-tuning. On the ophthalmologist/algorithm comparison subset, the second ophthalmologist achieved a specificity of 0.8648 and a sensitivity of 0.6682. For the same specificity, the fine-tuned algorithm achieved a sensitivity of 0.8248. CONCLUSIONS The proposed algorithm compares favorably with human performance for normal versus anomalous eye examination classification using fundus photography.
Artificial intelligence, which previously targeted a few retinal pathologies, can be used to screen for ocular anomalies comprehensively.
Affiliation(s)
- Pascale Massin: Ophthalmology Department, Lariboisière Hospital, APHP, Paris, France
16
Your mileage may vary: impact of data input method for a deep learning bone age app's predictions. Skeletal Radiol 2022; 51:423-429. PMID: 34476558; DOI: 10.1007/s00256-021-03897-3.
Abstract
OBJECTIVE The purpose of this study was to evaluate agreement in predictions made by a bone age prediction application ("app") among three data input methods. METHODS The 16Bit Bone Age app is a browser-based deep learning application for predicting bone age on pediatric hand radiographs; recommended data input methods are direct image file upload or smartphone capture of the image. We collected 50 hand radiographs, split equally among 5 bone age groups. Three observers used the 16Bit Bone Age app to assess these images using 3 different data input methods: (1) direct image upload, (2) smartphone photo of the image in a radiology reading room, and (3) smartphone photo of the image in a clinic. RESULTS Interobserver agreement was excellent for direct upload (ICC = 1.00) and for photos in the reading room (ICC = 0.96), and good for photos in the clinic (ICC = 0.82). Intraobserver agreement for the entire test set across the 3 data input methods was variable, with ICCs of 0.95, 0.96, and 0.57 for the 3 observers, respectively. DISCUSSION Our findings indicate that different data input methods can result in discordant bone age predictions from the 16Bit Bone Age app. Further study is needed to determine the impact of data input methods, such as smartphone image capture, on deep learning app performance and accuracy.
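The agreement figures quoted above are intraclass correlation coefficients. A minimal NumPy sketch of ICC(2,1), the two-way random effects, absolute agreement, single-rater form, is below; note the abstract does not state which ICC form the authors used, so this choice is an assumption:

```python
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is an n_subjects x n_raters array."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)  # between subjects
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)  # between raters
    # Residual (subject x rater interaction) mean square.
    sse = ((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Perfect agreement across raters yields an ICC of 1.0; a systematic offset in one rater (as smartphone capture in a clinic might introduce) pulls the absolute-agreement ICC below 1.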
17
Wan C, Zhou X, You Q, Sun J, Shen J, Zhu S, Jiang Q, Yang W. Retinal Image Enhancement Using Cycle-Constraint Adversarial Network. Front Med (Lausanne) 2022; 8:793726. PMID: 35096883; PMCID: PMC8789669; DOI: 10.3389/fmed.2021.793726.
Abstract
Retinal images are the most intuitive medical images for the diagnosis of fundus diseases. Low-quality retinal images cause difficulties in computer-aided diagnosis systems and the clinical diagnosis of ophthalmologists. The high quality of retinal images is an important basis of precision medicine in ophthalmology. In this study, we propose a retinal image enhancement method based on deep learning to enhance multiple low-quality retinal images. A generative adversarial network is employed to build a symmetrical network, and a convolutional block attention module is introduced to improve the feature extraction capability. The retinal images in our dataset are sorted into two sets according to their quality: low and high quality. Generators and discriminators alternately learn the features of low/high-quality retinal images without the need for paired images. We analyze the proposed method both qualitatively and quantitatively on public datasets and a private dataset. The study results demonstrate that the proposed method is superior to other advanced algorithms, especially in enhancing color-distorted retinal images. It also performs well in the task of retinal vessel segmentation. The proposed network effectively enhances low-quality retinal images, aiding ophthalmologists and enabling computer-aided diagnosis in pathological analysis. Our method enhances multiple types of low-quality retinal images using a deep learning network.
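The "cycle-constraint" that lets the generators learn from unpaired low/high-quality images is a cycle-consistency loss: an image mapped to the other quality domain and back should match the original. A framework-free sketch is below; `g_enhance`, `g_degrade`, and the weight `lam` are illustrative names and values, not taken from the paper:

```python
import numpy as np

def l1(a: np.ndarray, b: np.ndarray) -> float:
    """Mean absolute difference between two images."""
    return float(np.mean(np.abs(a - b)))

def cycle_consistency_loss(x_low, x_high, g_enhance, g_degrade, lam=10.0):
    """CycleGAN-style cycle term: enhancing a low-quality image and then
    degrading it (and vice versa) should reconstruct the original."""
    forward = l1(g_degrade(g_enhance(x_low)), x_low)
    backward = l1(g_enhance(g_degrade(x_high)), x_high)
    return lam * (forward + backward)
```

In training, this term is added to the adversarial losses of the two discriminators, which is what removes the need for paired low/high-quality images.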
Affiliation(s)
- Cheng Wan: College of Electronic Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Xueting Zhou: College of Electronic Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Qijing You: College of Electronic Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Jing Sun: College of Electronic Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Jianxin Shen: College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Shaojun Zhu: School of Information Engineering, Huzhou University, Huzhou, China
- Qin Jiang: The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Weihua Yang: The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
|
18
|
Luca AR, Ursuleanu TF, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Grigorovici A. Impact of quality, type and volume of data used by deep learning models in the analysis of medical images. Inform Med Unlocked 2022. DOI: 10.1016/j.imu.2022.100911.
19
Cai S, Han IC, Scott AW. Artificial intelligence for improving sickle cell retinopathy diagnosis and management. Eye (Lond) 2021; 35:2675-2684. PMID: 33958737; PMCID: PMC8452674; DOI: 10.1038/s41433-021-01556-4.
Abstract
Sickle cell retinopathy is often initially asymptomatic even in proliferative stages, but can progress to cause vision loss due to vitreous haemorrhages or tractional retinal detachments. Challenges with access and adherence to screening dilated fundus examinations, particularly in medically underserved areas where the burden of sickle cell disease is highest, highlight the need for novel approaches to screening for patients with vision-threatening sickle cell retinopathy. This article reviews the existing literature on and suggests future research directions for coupling artificial intelligence with multimodal retinal imaging to expand access to automated, accurate, imaging-based screening for sickle cell retinopathy. Given the variability in retinal specialist practice patterns with regards to monitoring and treatment of sickle cell retinopathy, we also discuss recent progress toward development of machine learning models that can quantitatively track disease progression over time. These artificial intelligence-based applications have great potential for informing evidence-based and resource-efficient clinical diagnosis and management of sickle cell retinopathy.
Affiliation(s)
- Sophie Cai: Retina Division, Duke Eye Center, Durham, NC, USA
- Ian C Han: Institute for Vision Research, Department of Ophthalmology and Visual Sciences, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Adrienne W Scott: Retina Division, Wilmer Eye Institute, Johns Hopkins University School of Medicine and Hospital, Baltimore, MD, USA
20
Ursuleanu TF, Luca AR, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Preda C, Grigorovici A. Deep Learning Application for Analyzing of Constituents and Their Correlations in the Interpretations of Medical Images. Diagnostics (Basel) 2021; 11:1373. PMID: 34441307; PMCID: PMC8393354; DOI: 10.3390/diagnostics11081373.
Abstract
The growing volume of medical data that must be interpreted and filtered for diagnostic and therapeutic purposes, together with the time and attention doctors owe their patients, has encouraged the development of supportive deep learning (DL) models. DL has developed exponentially in recent years, with a major impact on the interpretation of medical images. This has driven the growth, diversification, and improved quality of scientific data, the development of knowledge-construction methods, and the improvement of DL models used in medical applications. Most research papers focus on describing, highlighting, or classifying a single constituent element of the DL models used in medical image interpretation, and do not provide a unified picture of the importance and impact of each constituent on model performance. The novelty of this paper lies primarily in its unified treatment of the constituent elements of DL models, namely the data, the tools used by DL architectures, and purpose-built combinations of DL architectures, highlighting their "key" features for completing medical image interpretation tasks. The use of the "key" characteristics specific to each constituent of DL models, and the correct determination of their correlations, may be the subject of future research aimed at increasing the performance of DL models in the interpretation of medical images.
Affiliation(s)
- Tudor Florin Ursuleanu: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania; Department of Surgery I, Regional Institute of Oncology, 700483 Iasi, Romania
- Andreea Roxana Luca: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; Department of Obstetrics and Gynecology, Integrated Ambulatory of Hospital “Sf. Spiridon”, 700106 Iasi, Romania
- Liliana Gheorghe: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; Department of Radiology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Roxana Grigorovici: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Stefan Iancu: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Maria Hlusneac: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Cristina Preda: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; Department of Endocrinology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Alexandru Grigorovici: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
21
Pieczynski J, Kuklo P, Grzybowski A. The Role of Telemedicine, In-Home Testing and Artificial Intelligence to Alleviate an Increasingly Burdened Healthcare System: Diabetic Retinopathy. Ophthalmol Ther 2021; 10:445-464. PMID: 34156632; PMCID: PMC8217784; DOI: 10.1007/s40123-021-00353-2.
Abstract
In the presence of the ever-increasing incidence of diabetes mellitus (DM), the prevalence of diabetic eye disease (DED) is also growing. Despite many improvements in diabetic care, DM remains a leading cause of visual impairment in working-age patients. So far, prevention has been the best way to protect vision. The sooner we diagnose DED, the more effective the treatment is. Thus, diabetic retinopathy (DR) screening, especially with imaging techniques, is a method of choice for vision protection. To alleviate the burden on diabetic patients who need ophthalmic care, telemedicine and in-home testing are used, supported by artificial intelligence (AI) algorithms. This is why we decided to evaluate current image teleophthalmology methods used for DR screening. We searched the PubMed platform for papers published over the last 5 years (2015–2020) using the following key words: telemedicine in diabetic retinopathy screening, diabetic retinopathy screening, automated diabetic retinopathy screening, artificial intelligence in diabetic retinopathy screening, smartphone diabetic retinopathy testing. We have included 118 original articles meeting the above criteria, discussing imaging diabetic retinopathy screening methods. We have found that fundus cameras, stable or mobile, are most commonly used for retinal photography, with portable fundus cameras also relatively common. Other possibilities involve the use of ultra-wide-field (UWF) imaging and even optical coherence tomography (OCT) devices for DR screening. Also, the role of smartphones is increasingly recognized in the field. Retinal fundus images are assessed by humans instantly or remotely, while AI algorithms seem to be useful tools facilitating retinal image assessment. The common use of smartphones and the availability of relatively cheap, easy-to-use adapters for retinal photography, augmented by AI algorithms, make it possible for eye fundus photographs to be taken by non-specialists and in non-medical settings.
This opens the way for in-home testing conducted on a much larger scale in the future. In conclusion, based on current DR screening techniques, we can suggest that the future practice of eye care specialists will be widely supported by AI algorithms, and this way will be more effective.
Affiliation(s)
- Janusz Pieczynski: Chair of Ophthalmology, University of Warmia and Mazury, Zolnierska 18, 10-561 Olsztyn, Poland; The Voivodal Specialistic Hospital in Olsztyn, Olsztyn, Poland
- Patrycja Kuklo: Chair of Ophthalmology, University of Warmia and Mazury, Zolnierska 18, 10-561 Olsztyn, Poland; The Voivodal Specialistic Hospital in Olsztyn, Olsztyn, Poland
- Andrzej Grzybowski: Chair of Ophthalmology, University of Warmia and Mazury, Zolnierska 18, 10-561 Olsztyn, Poland; Institute for Research in Ophthalmology, Gorczyczewskiego 2/3, 61-553 Poznan, Poland
22
Reguant R, Brunak S, Saha S. Understanding inherent image features in CNN-based assessment of diabetic retinopathy. Sci Rep 2021; 11:9704. PMID: 33958686; PMCID: PMC8102512; DOI: 10.1038/s41598-021-89225-0.
Abstract
Diabetic retinopathy (DR) is a leading cause of blindness and affects millions of people throughout the world. Early detection and timely checkups are key to reducing the risk of blindness. Automated grading of DR is a cost-effective way to ensure early detection and timely checkups. Deep learning, or more specifically convolutional neural network (CNN)-based methods, produce state-of-the-art performance in DR detection. Whilst CNN-based methods have been proposed, the image features they extract have not been examined for clinical relevance. Here we first adopt a CNN visualization strategy to discover the inherent image features involved in the CNN's decision-making process. Then, we critically analyze those features with respect to commonly known pathologies, namely microaneurysms, hemorrhages and exudates, and other ocular components. We also critically analyze different CNNs by considering what image features they pick up during learning, and justify their clinical relevance. The experiments are executed on publicly available fundus datasets (EyePACS and DIARETDB1), achieving an accuracy of 89-95% with AUC, sensitivity and specificity of 95-98%, 74-86%, and 93-97%, respectively, for disease-level grading of DR. Whilst different CNNs produce consistent classification results, the rate of disagreement between models on the picked-up image features could be as high as 70%.
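The sensitivity and specificity figures reported here follow the usual confusion-matrix definitions, which can be computed directly from binary labels:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)
    for binary labels, where 1 marks the diseased class."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping a decision threshold over a model's scores and plotting these two quantities against each other is what produces the AUC values quoted in the abstract.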
Affiliation(s)
- Roc Reguant: Novo Nordisk Foundation Center for Protein Research, University of Copenhagen, 2200 Copenhagen N, Denmark; Australian E-Health Research Centre, CSIRO, Perth, Australia
- Søren Brunak: Novo Nordisk Foundation Center for Protein Research, University of Copenhagen, 2200 Copenhagen N, Denmark
- Sajib Saha: Australian E-Health Research Centre, CSIRO, Perth, Australia
23
Chan EJJ, Najjar RP, Tang Z, Milea D. Deep Learning for Retinal Image Quality Assessment of Optic Nerve Head Disorders. Asia Pac J Ophthalmol (Phila) 2021; 10:282-288. PMID: 34383719; DOI: 10.1097/apo.0000000000000404.
Abstract
Deep learning (DL)-based retinal image quality assessment (RIQA) algorithms have been gaining popularity, as a solution to reduce the frequency of diagnostically unusable images. Most existing RIQA tools target retinal conditions, with a dearth of studies looking into RIQA models for optic nerve head (ONH) disorders. The recent success of DL systems in detecting ONH abnormalities on color fundus images prompts the development of tailored RIQA algorithms for these specific conditions. In this review, we discuss recent progress in DL-based RIQA models in general and the need for RIQA models tailored for ONH disorders. Finally, we propose suggestions for such models in the future.
Affiliation(s)
- Raymond P Najjar: Duke-NUS School of Medicine, Singapore; Visual Neuroscience Group, Singapore Eye Research Institute, Singapore
- Zhiqun Tang: Visual Neuroscience Group, Singapore Eye Research Institute, Singapore
- Dan Milea: Duke-NUS School of Medicine, Singapore; Visual Neuroscience Group, Singapore Eye Research Institute, Singapore; Ophthalmology Department, Singapore National Eye Centre, Singapore; Rigshospitalet, Copenhagen University, Denmark
24
Secinaro S, Calandra D, Secinaro A, Muthurangu V, Biancone P. The role of artificial intelligence in healthcare: a structured literature review. BMC Med Inform Decis Mak 2021; 21:125. PMID: 33836752; PMCID: PMC8035061; DOI: 10.1186/s12911-021-01488-9.
Abstract
BACKGROUND/INTRODUCTION Artificial intelligence (AI) in the healthcare sector is receiving attention from researchers and health professionals. Few previous studies have investigated this topic from a multi-disciplinary perspective, including accounting, business and management, decision sciences and health professions. METHODS The structured literature review with its reliable and replicable research protocol allowed the researchers to extract 288 peer-reviewed papers from Scopus. The authors used qualitative and quantitative variables to analyse authors, journals, keywords, and collaboration networks among researchers. Additionally, the paper benefited from the Bibliometrix R software package. RESULTS The investigation showed that the literature in this field is emerging. It focuses on health services management, predictive medicine, patient data and diagnostics, and clinical decision-making. The United States, China, and the United Kingdom contributed the highest number of studies. Keyword analysis revealed that AI can support physicians in making a diagnosis, predicting the spread of diseases and customising treatment paths. CONCLUSIONS The literature reveals several AI applications for health services and a stream of research that has not fully been covered. For instance, AI projects require skills and data quality awareness for data-intensive analysis and knowledge-based management. Insights can help researchers and health professionals understand and address future research on AI in the healthcare field.
Affiliation(s)
- Davide Calandra
- Department of Management, University of Turin, Turin, Italy.
- Vivek Muthurangu
- Institute of Child Health, University College London, London, UK
- Paolo Biancone
- Department of Management, University of Turin, Turin, Italy
25
Lu L, Ren P, Lu Q, Zhou E, Yu W, Huang J, He X, Han W. Analyzing fundus images to detect diabetic retinopathy (DR) using deep learning system in the Yangtze River delta region of China. ANNALS OF TRANSLATIONAL MEDICINE 2021; 9:226. [PMID: 33708853 PMCID: PMC7940941 DOI: 10.21037/atm-20-3275] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Background This study aimed to establish and evaluate an artificial intelligence-based deep learning system (DLS) for the automatic detection of diabetic retinopathy, which could be important in developing an advanced tele-screening system for diabetic retinopathy. Methods A DLS with a convolutional neural network was developed to recognize fundus images showing referable diabetic retinopathy. A total data set of 41,866 color fundus images was obtained from 17 cities in the Yangtze River Delta Urban Agglomeration (YRDUA). Five experienced retinal specialists and 15 ophthalmologists were recruited to verify the images. For training, 80% of the data set was used; the other 20% served as the validation data set. To make the learning process interpretable, the DLS automatically superimposed a heatmap on the original image, highlighting the regions it used for diagnosis. Results On the local validation data set, the DLS achieved an area under the curve of 0.9824. Based on the manual screening criteria, an operating point was set at about 0.9 sensitivity to evaluate the DLS; at this point, specificity was 0.9609 and sensitivity was 0.9003. The DLS showed excellent reliability, repeatability, and high efficiency. Analysis of the misclassifications found that 88.6% of the false positives were mild non-proliferative diabetic retinopathy (NPDR), whereas 81.6% of the false negatives were intraretinal microvascular abnormalities. Conclusions The DLS efficiently detected fundus images from complex real-world sources. Incorporating DLS technology in tele-screening will advance current screening programs, offering a cost-effective and time-efficient solution for detecting diabetic retinopathy.
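The operating-point procedure described in this abstract, fixing sensitivity at about 0.9 and reading off the specificity at that threshold, can be sketched with a small helper. The labels and scores below are illustrative toy data, not the study's 41,866-image set:

```python
def sens_spec_at_operating_point(labels, scores, target_sensitivity=0.9):
    """Scan thresholds from high to low and return (threshold, sensitivity,
    specificity) at the first, i.e. highest, threshold whose sensitivity
    reaches the target. labels: 1 = referable DR, 0 = not referable."""
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= t)
        fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < t)
        tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < t)
        fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= t)
        sensitivity = tp / (tp + fn)
        if sensitivity >= target_sensitivity:
            return t, sensitivity, tn / (tn + fp)
    return None

# Toy example: 10 referable and 10 non-referable images with model scores.
labels = [1] * 10 + [0] * 10
scores = [0.95, 0.90, 0.88, 0.85, 0.80, 0.78, 0.75, 0.70, 0.65, 0.30,
          0.60, 0.40, 0.35, 0.30, 0.25, 0.20, 0.15, 0.10, 0.05, 0.02]
threshold, sens, spec = sens_spec_at_operating_point(labels, scores)
print(threshold, sens, spec)
```

In practice the full curve would typically be computed with a library routine such as scikit-learn's `roc_curve`; the pure-Python version above just makes the threshold scan explicit.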
Affiliation(s)
- Li Lu
- Department of Ophthalmology, The First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China; Department of Ophthalmology, The First Affiliated Hospital of University of Science and Technology of China, Hefei, China
- Peifang Ren
- Department of Ophthalmology, The First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Qianyi Lu
- Department of Ophthalmology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Enliang Zhou
- Department of Ophthalmology, The First Affiliated Hospital of University of Science and Technology of China, Hefei, China
- Wangshu Yu
- Department of Ophthalmology, The First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Jiani Huang
- Department of Ophthalmology, The First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Xiaoying He
- Department of Ophthalmology, The First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Wei Han
- Department of Ophthalmology, The First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
26
Nogales A, García-Tejedor ÁJ, Monge D, Vara JS, Antón C. A survey of deep learning models in medical therapeutic areas. Artif Intell Med 2021; 112:102020. [PMID: 33581832 DOI: 10.1016/j.artmed.2021.102020] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2020] [Revised: 12/21/2020] [Accepted: 01/10/2021] [Indexed: 12/18/2022]
Abstract
Artificial intelligence is a broad field that comprises a wide range of techniques, of which deep learning currently has the most impact. The medical field, where data are both complex and massive and where the decisions made by doctors carry great weight, is one of the areas in which deep learning techniques can have the greatest impact. A systematic review following the Cochrane recommendations was conducted by a multidisciplinary team of physicians, research methodologists and computer scientists. The survey aims to identify the main therapeutic areas and the deep learning models used for diagnosis and treatment tasks. The most relevant databases included were MedLine, Embase, Cochrane Central, Astrophysics Data System, Europe PubMed Central, Web of Science and Science Direct. Inclusion and exclusion criteria were defined and applied in the first and second peer-review screenings, and a set of quality criteria was developed to select the papers obtained after the second screening. Finally, 126 studies were selected from the initial 3493 papers, and 64 were described. The results show that the number of publications on deep learning in medicine is increasing every year. Convolutional neural networks are the most widely used models, and the most developed area is oncology, where they are used mainly for image analysis.
Affiliation(s)
- Alberto Nogales
- CEIEC, Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km 1800, 28223, Pozuelo de Alarcón, Spain.
- Álvaro J García-Tejedor
- CEIEC, Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km 1800, 28223, Pozuelo de Alarcón, Spain.
- Diana Monge
- Faculty of Medicine, Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km 1800, 28223, Pozuelo de Alarcón, Spain.
- Juan Serrano Vara
- CEIEC, Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km 1800, 28223, Pozuelo de Alarcón, Spain.
- Cristina Antón
- Faculty of Medicine, Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km 1800, 28223, Pozuelo de Alarcón, Spain.
27
Qummar S, Khan FG, Shah S, Khan A, Din A, Gao J. Deep Learning Techniques for Diabetic Retinopathy Detection. Curr Med Imaging 2021; 16:1201-1213. [DOI: 10.2174/1573405616666200213114026] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2019] [Revised: 11/26/2019] [Accepted: 12/19/2019] [Indexed: 11/22/2022]
Abstract
Diabetes occurs due to excess glucose in the blood and may affect many organs of the body. Elevated blood sugar causes many problems, including Diabetic Retinopathy (DR), which arises from damage to the blood vessels in the retina. Manual detection of DR by ophthalmologists is complicated and time-consuming, so automatic detection is required; recently, various machine and deep learning techniques have been applied to detect and classify DR. In this paper, we survey the techniques available in the literature for the identification/classification of DR, discuss the strengths and weaknesses of the datasets available for each method, and outline future directions. We also discuss the different steps of detection: segmentation of blood vessels in the retina, detection of lesions, and other abnormalities of DR.
Affiliation(s)
- Sehrish Qummar
- Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad, Pakistan
- Fiaz Gul Khan
- Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad, Pakistan
- Sajid Shah
- Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad, Pakistan
- Ahmad Khan
- Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad, Pakistan
- Ahmad Din
- Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad, Pakistan
- Jinfeng Gao
- Department of Information Engineering, Huanghuai University, Henan, China
28
de Araujo AL, Rados DRV, Szortyka AD, Falavigna M, Moreira TDC, Hauser L, Gross PB, Lorentz AL, Maturro L, Cabral F, Costa ALFDA, Martins TGDS, da Silva RS, Schor P, Harzheim E, Gonçalves MR, Umpierre RN. Ophthalmic image acquired by ophthalmologists and by allied health personnel as part of a telemedicine strategy: a comparative study of image quality. Eye (Lond) 2020; 35:1398-1404. [PMID: 32555520 DOI: 10.1038/s41433-020-1035-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2020] [Revised: 06/01/2020] [Accepted: 06/09/2020] [Indexed: 11/09/2022] Open
Abstract
OBJECTIVES This study evaluates the quality of ophthalmic images acquired by a nurse technician trained in teleophthalmology, as compared with images acquired by an ophthalmologist, in order to provide a better understanding of the workforce necessary to operate remote care programs. METHODS A cross-sectional study was performed on 2044 images obtained from 118 participants of the TeleOftalmo project, in Brazil. Fundus and slit-lamp photography were performed on site by an ophthalmologist and by a nurse technician under the supervision of a remote ophthalmologist. Image quality was then evaluated by masked ophthalmologists, and the proportion of suitable images in each group was compared. RESULTS The proportion of concordant quality classifications was 94.8%, with a corrected kappa agreement of 0.94. When each type of photo was analyzed separately, there was no significant difference in the proportion of suitable images between the on-site ophthalmologist and the nurse technician with remote ophthalmologist assistance for slit-lamp views of the anterior segment and anterior chamber periphery, or for fundus photographs centered on the macula and on the optic disc (P = 0.825, P = 0.997, P = 0.194, and P = 0.449, respectively). For slit-lamp views of the lens, the proportion of suitable images was higher among those obtained by an ophthalmologist (99.6%) than by a technician (93.8%, P < 0.01). CONCLUSIONS Ophthalmic photographs acquired by a trained technician consistently achieved >90% adequacy for remote reading, and their suitability showed high overall agreement with ophthalmologist-acquired photos. These findings provide favorable evidence of the adequacy of teleophthalmological imaging by nurse technicians.
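As a rough illustration of the agreement statistic reported in this abstract, here is plain (unweighted) Cohen's kappa for two raters' suitable/unsuitable labels. The exact "corrected" kappa variant used in the study is not specified here, and the example data are made up:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two label sequences."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement if the two raters labelled independently
    # with their observed marginal frequencies.
    expected = sum(counts_a[lab] * counts_b[lab]
                   for lab in counts_a.keys() | counts_b.keys()) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical 50-image example: observed agreement 0.7, chance agreement 0.5,
# so kappa = (0.7 - 0.5) / (1 - 0.5) = 0.4.
ophthalmologist = ["suitable"] * 25 + ["unsuitable"] * 25
technician = (["suitable"] * 20 + ["unsuitable"] * 5
              + ["suitable"] * 10 + ["unsuitable"] * 15)
print(cohens_kappa(ophthalmologist, technician))
```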
Affiliation(s)
- Aline Lutz de Araujo
- Núcleo de Telessaúde, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil; Department of Ophthalmology and Visual Sciences, Escola Paulista de Medicina, Universidade Federal de São Paulo, São Paulo, SP, Brazil.
- Lisiane Hauser
- Núcleo de Telessaúde, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil
- Paula Blasco Gross
- Núcleo de Telessaúde, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil
- Andrea Longoni Lorentz
- Núcleo de Telessaúde, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil
- Ana Luiza Fontes de Azevedo Costa
- Department of Ophthalmology and Visual Sciences, Escola Paulista de Medicina, Universidade Federal de São Paulo, São Paulo, SP, Brazil
- Rodolfo Souza da Silva
- Núcleo de Telessaúde, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil
- Paulo Schor
- Department of Ophthalmology and Visual Sciences, Escola Paulista de Medicina, Universidade Federal de São Paulo, São Paulo, SP, Brazil
- Erno Harzheim
- Programa de Pós-Graduação em Epidemiologia, Faculdade de Medicina, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil
- Marcelo Rodrigues Gonçalves
- Núcleo de Telessaúde, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil; Programa de Pós-Graduação em Epidemiologia, Faculdade de Medicina, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil
- Roberto Nunes Umpierre
- Núcleo de Telessaúde, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil
29
Gupta V, Rajendran A, Narayanan R, Chawla S, Kumar A, Palanivelu MS, Muralidhar NS, Jayadev C, Pappuru R, Khatri M, Agarwal M, Aurora A, Bhende P, Bhende M, Bawankule P, Rishi P, Vinekar A, Trehan HS, Biswas J, Agarwal R, Natarajan S, Verma L, Ramasamy K, Giridhar A, Rishi E, Talwar D, Pathangey A, Azad R, Honavar SG. Evolving consensus on managing vitreo-retina and uvea practice in post-COVID-19 pandemic era. Indian J Ophthalmol 2020; 68:962-973. [PMID: 32461407 PMCID: PMC7508071 DOI: 10.4103/ijo.ijo_1404_20] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2020] [Revised: 05/09/2020] [Accepted: 05/09/2020] [Indexed: 02/06/2023] Open
Abstract
The COVID-19 pandemic has brought new challenges to the health care community. Many super-speciality practices are planning to re-open once the lockdown is lifted, but there is considerable apprehension about adopting practices that would safeguard patients, ophthalmologists, and healthcare workers, while also taking adequate care of equipment to minimize damage. The aim of this article is to develop preferred practice patterns, through consensus among lead experts, that would help institutions as well as individual vitreo-retina and uveitis specialists restart their practices with confidence. As the situation remains volatile, these suggestions are evolving and likely to change as understanding and experience improve. Furthermore, the suggestions are intended for routine patients; COVID-19-positive patients may be managed in designated hospitals as per local protocols. These suggestions must also be implemented in compliance with local rules and regulations.
Affiliation(s)
- Vishali Gupta
- Advanced Eye Centre, Post Graduate Institute of Medical Education and Research, Chandigarh, India
- Atul Kumar
- Dr. R.P. Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India
- Rupesh Agarwal
- National Healthcare Group Eye Institute, Tan Tock Seng Hospital, Singapore
- Rajvardhan Azad
- Regional Institute of Ophthalmology, Indira Gandhi Institute of Medical Sciences, Patna, India
30
Piccini D, Demesmaeker R, Heerfordt J, Yerly J, Di Sopra L, Masci PG, Schwitter J, Van De Ville D, Richiardi J, Kober T, Stuber M. Deep Learning to Automate Reference-Free Image Quality Assessment of Whole-Heart MR Images. Radiol Artif Intell 2020; 2:e190123. [PMID: 33937825 DOI: 10.1148/ryai.2020190123] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2019] [Revised: 03/03/2020] [Accepted: 03/11/2020] [Indexed: 11/11/2022]
Abstract
Purpose To develop and characterize an algorithm that mimics human expert visual assessment to quantitatively determine the quality of three-dimensional (3D) whole-heart MR images. Materials and Methods In this study, 3D whole-heart cardiac MRI scans from 424 participants (average age, 57 years ± 18 [standard deviation]; 66.5% men) were used to generate an image quality assessment algorithm. A deep convolutional neural network for image quality assessment (IQ-DCNN) was designed, trained, optimized, and cross-validated on a clinical database of 324 scans (training set). On a separate test set (100 scans), two hypotheses were tested: (a) that the algorithm can assess image quality in concordance with human expert assessment, as measured by human-machine correlation and intra- and interobserver agreement, and (b) that the IQ-DCNN algorithm may be used to monitor a compressed sensing reconstruction process in which image quality progressively improves. Weighted κ values, agreement and disagreement counts, and Krippendorff α reliability coefficients were reported. Results Regression performance of the IQ-DCNN was within the range of human intra- and interobserver agreement and in very good agreement with the human expert (R² = 0.78, κ = 0.67). Image quality assessment during compressed sensing reconstruction correlated with the cost function at each iteration and was successfully applied to rank the results, again in very good agreement with the human expert. Conclusion The proposed IQ-DCNN was trained to mimic expert visual image quality assessment of 3D whole-heart MR images. Its results were in good agreement with human expert reading, and the network was capable of automatically comparing different reconstructed volumes. Supplemental material is available for this article. © RSNA, 2020.
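The weighted κ mentioned in this abstract handles ordinal quality grades, penalizing large disagreements more than near-misses. A minimal quadratically weighted kappa (a common weighting choice; the article does not state which weighting it used, and the grades below are toy data) might look like:

```python
def quadratic_weighted_kappa(a, b, n_classes):
    """Quadratically weighted kappa for two ordinal rating lists in 0..n_classes-1.
    The usual (n_classes - 1)^2 weight normalization cancels in the ratio."""
    n = len(a)
    # Mean observed disagreement, weighted by squared distance between grades.
    observed = sum((x - y) ** 2 for x, y in zip(a, b)) / n
    # Mean expected disagreement for independent raters with the same marginals.
    hist_a = [a.count(i) for i in range(n_classes)]
    hist_b = [b.count(i) for i in range(n_classes)]
    expected = sum((i - j) ** 2 * hist_a[i] * hist_b[j]
                   for i in range(n_classes) for j in range(n_classes)) / (n * n)
    return 1.0 - observed / expected

# Toy 5-point quality grades from a human reader and a model.
human = [0, 1, 1, 2, 3, 4, 4, 2]
model = [0, 1, 2, 2, 3, 4, 3, 2]
print(quadratic_weighted_kappa(human, model, 5))
```

Perfect agreement yields 1.0 and systematic inversion yields negative values, which is why weighted κ is preferred over raw accuracy for graded quality scores.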
Affiliation(s)
- Davide Piccini
- Advanced Clinical Imaging Technology, Siemens Healthcare, Lausanne, Switzerland (D.P., R.D., J.H., J.R., T.K.); Department of Diagnostic and Interventional Radiology, Lausanne University Hospital and University of Lausanne, Rue de Bugnon 46, BH 8.80, 1011 Lausanne, Switzerland (D.P., J.H., J.Y., L.D.S., J.R., T.K., M.S.); LTS5, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland (D.P., J.R., T.K.); Institute of Electrical Engineering, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland (R.D.); Institute of Bioengineering/Center for Neuroprosthetics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland (R.D., D.V.D.V.); Center for Biomedical Imaging (CIBM), Lausanne, Switzerland (J.Y., M.S.); Division of Cardiology and Cardiac MR Center, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland (P.G.M., J.S.); and Department of Radiology and Medical Informatics, University Hospital of Geneva (HUG), Geneva, Switzerland (D.V.D.V.)
- Robin Demesmaeker
- John Heerfordt
- Jérôme Yerly
- Lorenzo Di Sopra
- Pier Giorgio Masci
- Juerg Schwitter
- Dimitri Van De Ville
- Jonas Richiardi
- Tobias Kober
- Matthias Stuber
31
Calderon-Auza G, Carrillo-Gomez C, Nakano M, Toscano-Medina K, Perez-Meana H, Gonzalez-H. Leon A, Quiroz-Mercado H. A Teleophthalmology Support System Based on the Visibility of Retinal Elements Using the CNNs. SENSORS (BASEL, SWITZERLAND) 2020; 20:E2838. [PMID: 32429400 PMCID: PMC7287628 DOI: 10.3390/s20102838] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/08/2020] [Revised: 05/12/2020] [Accepted: 05/13/2020] [Indexed: 06/11/2023]
Abstract
This paper proposes a teleophthalmology support system that uses object detection and semantic segmentation algorithms, such as the faster region-based CNN (FR-CNN) and SegNet, built on several CNN architectures (Vgg16, MobileNet, AlexNet, etc.). These are used to segment and analyze the principal anatomical elements: the optic disc (OD), the region of interest (ROI) composed of the macular region, the real retinal region, and the vessels. Unlike conventional retinal image quality assessment systems, the proposed system reports possible reasons for a low-quality image, helping the ophthalmoscope operator and the patient acquire and transmit a better-quality image to a central eye hospital for diagnosis. The proposed system consists of four steps: OD detection, OD quality analysis, obstruction detection in the region of interest (ROI), and vessel segmentation. FR-CNN and SegNet are used for OD detection, artefact detection, and vessel segmentation, while transfer learning is used for the OD quality analysis. The proposed system achieves accuracies of 0.93 for OD detection, 0.86 for OD image quality, 1.0 for artefact detection, and 0.98 for vessel segmentation. As a global performance metric, the kappa-based agreement score between the ophthalmologist and the proposed system is calculated, which is higher than the score between the ophthalmologist and a general practitioner.
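The reporting idea in this abstract, returning which quality check failed rather than a bare reject, can be sketched independently of the CNNs. The check names and the per-image flag dictionary below are hypothetical stand-ins for the four detection stages:

```python
def assess_retinal_image(image, checks):
    """Run each (reason, predicate) pair and collect the reasons that fail.
    Returns (is_acceptable, list_of_failure_reasons)."""
    reasons = [reason for reason, passes in checks if not passes(image)]
    return len(reasons) == 0, reasons

# Hypothetical per-image flags standing in for the CNN stages' outputs.
checks = [
    ("optic disc not detected", lambda im: im["od_detected"]),
    ("optic disc region low quality", lambda im: im["od_quality_ok"]),
    ("ROI obstructed by artefact", lambda im: not im["roi_artefact"]),
    ("vessels not segmentable", lambda im: im["vessels_ok"]),
]
image = {"od_detected": True, "od_quality_ok": False,
         "roi_artefact": False, "vessels_ok": True}
ok, reasons = assess_retinal_image(image, checks)
print(ok, reasons)
```

The operator would see the failure reasons (here, a low-quality optic disc region) and could reacquire the image before transmission.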
Affiliation(s)
- Gustavo Calderon-Auza
- Graduate Section, Instituto Politécnico Nacional, Mexico City 04440, Mexico; (G.C.-A.); (C.C.-G.); (K.T.-M.); (H.P.-M.)
- Cesar Carrillo-Gomez
- Mariko Nakano
- Karina Toscano-Medina
- Hector Perez-Meana
- Ana Gonzalez-H. Leon
- Hospital Dr. Luis Sánchez-Bulnes, Asociación para Evitar la Ceguera in México, Mexico City 04030, Mexico; (A.G.-H.L.); (H.Q.-M.)
- Hugo Quiroz-Mercado
Collapse
32
Appaji A, Nagendra B, Chako DM, Padmanabha A, Jacob A, Hiremath CV, Varambally S, Kesavan M, Venkatasubramanian G, Rao SV, Webers CAB, Berendschot TTJM, Rao NP. Examination of retinal vascular trajectory in schizophrenia and bipolar disorder. Psychiatry Clin Neurosci 2019; 73:738-744. [PMID: 31400288 DOI: 10.1111/pcn.12921] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/08/2019] [Revised: 07/24/2019] [Accepted: 08/06/2019] [Indexed: 12/11/2022]
Abstract
AIM Evidence suggests microvascular dysfunction (wider retinal venules and narrower arterioles) in schizophrenia (SCZ) and bipolar disorder (BD). Vascular development is synchronous with neuronal development in the retina and brain. Although the retinal vessel trajectory is related to retinal nerve fiber layer thinning and to cerebrovascular abnormalities, it has not yet been examined in SCZ and BD. Hence, in this study we examined the retinal vascular trajectory in SCZ and BD in comparison with healthy volunteers (HV). METHODS Retinal images were acquired with a non-mydriatic fundus camera from 100 participants each in the HV, SCZ, and BD groups. Images were quantified to obtain the retinal arterial and venous trajectories using a validated, semiautomated algorithm. Analysis of covariance and regression analyses were conducted to examine group differences. A supervised machine-learning ensemble of bagged trees was used for automated classification of the trajectory values. RESULTS There was a significant difference among groups in both the retinal venous trajectory (HV: 0.17 ± 0.08; SCZ: 0.25 ± 0.17; BD: 0.27 ± 0.20; P < 0.001) and the arterial trajectory (HV: 0.34 ± 0.15; SCZ: 0.29 ± 0.10; BD: 0.29 ± 0.11; P = 0.003), even after adjusting for age and sex (P < 0.001). On post-hoc analysis, the SCZ and BD groups differed from the HV group on both retinal venous and arterial trajectories, but there was no difference between SCZ and BD patients. The machine-learning classifier achieved accuracies of 86% and 73% for distinguishing HV from SCZ and from BD, respectively. CONCLUSION Smaller trajectories of the retinal arteries indicate wider and flatter curves in SCZ and BD. Considering the relation between the retinal/cerebral vasculature and retinal nerve fiber layer thinning, the retinal vascular trajectory is a potential marker for SCZ and BD. As a relatively affordable investigation, retinal fundus photography should be further explored in SCZ and BD as a potential screening measure.
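The "ensemble of bagged trees" used above for classifying trajectory values can be illustrated with a minimal bootstrap-aggregation sketch over one-dimensional threshold stumps. This is a toy stand-in, not the study's implementation, and the trajectory-like data below are synthetic:

```python
import numpy as np

def fit_stump(x, y):
    """Pick the (threshold, direction) pair with the best training accuracy."""
    best = (-1.0, 0.0, 1)  # (accuracy, threshold, polarity)
    for t in np.unique(x):
        for pol in (1, -1):
            pred = ((x > t) if pol == 1 else (x <= t)).astype(int)
            acc = float((pred == y).mean())
            if acc > best[0]:
                best = (acc, float(t), pol)
    return best[1], best[2]

def fit_bagging(x, y, n_estimators=25, seed=0):
    """Bagging: fit each stump on a bootstrap resample of the training set."""
    rng = np.random.default_rng(seed)
    stumps = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(x), len(x))  # sample with replacement
        stumps.append(fit_stump(x[idx], y[idx]))
    return stumps

def bagged_predict(stumps, x):
    """Majority vote over the bagged stumps."""
    votes = np.mean(
        [((x > t) if pol == 1 else (x <= t)).astype(int) for t, pol in stumps],
        axis=0,
    )
    return (votes >= 0.5).astype(int)

# Synthetic 1-D "trajectory" values for two groups (illustrative only).
x = np.array([0.10, 0.12, 0.15, 0.18, 0.20, 0.80, 0.82, 0.85, 0.88, 0.90])
y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
stumps = fit_bagging(x, y)
```

Real bagged-tree ensembles grow full decision trees on each bootstrap sample; averaging over resamples reduces the variance of the individual learners.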
Affiliation(s)
- Abhishek Appaji
- Department of Medical Electronics, B. M. S. College of Engineering, Bangalore, India; University Eye Clinic Maastricht, Maastricht University, Maastricht, The Netherlands
- Bhargavi Nagendra
- Department of Psychiatry, National Institute of Mental Health and Neurosciences, Bangalore, India
- Dona M Chako
- Department of Psychiatry, National Institute of Mental Health and Neurosciences, Bangalore, India
- Ananth Padmanabha
- Department of Medical Electronics, B. M. S. College of Engineering, Bangalore, India
- Arpitha Jacob
- Department of Psychiatry, National Institute of Mental Health and Neurosciences, Bangalore, India
- Chaitra V Hiremath
- Department of Psychiatry, National Institute of Mental Health and Neurosciences, Bangalore, India
- Shivarama Varambally
- Department of Psychiatry, National Institute of Mental Health and Neurosciences, Bangalore, India
- Muralidharan Kesavan
- Department of Psychiatry, National Institute of Mental Health and Neurosciences, Bangalore, India
- Shyam V Rao
- Department of Medical Electronics, B. M. S. College of Engineering, Bangalore, India; University Eye Clinic Maastricht, Maastricht University, Maastricht, The Netherlands
- Carroll A B Webers
- University Eye Clinic Maastricht, Maastricht University, Maastricht, The Netherlands
- Tos T J M Berendschot
- University Eye Clinic Maastricht, Maastricht University, Maastricht, The Netherlands
- Naren P Rao
- Department of Psychiatry, National Institute of Mental Health and Neurosciences, Bangalore, India
33
Li F, Liu Z, Chen H, Jiang M, Zhang X, Wu Z. Automatic Detection of Diabetic Retinopathy in Retinal Fundus Photographs Based on Deep Learning Algorithm. Transl Vis Sci Technol 2019; 8:4. [PMID: 31737428 PMCID: PMC6855298 DOI: 10.1167/tvst.8.6.4] [Citation(s) in RCA: 56] [Impact Index Per Article: 11.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2019] [Accepted: 09/02/2019] [Indexed: 12/11/2022] Open
Abstract
PURPOSE To achieve automatic diabetic retinopathy (DR) detection in retinal fundus photographs through the use of a deep transfer learning approach using the Inception-v3 network. METHODS A total of 19,233 eye fundus color numerical images were retrospectively obtained from 5278 adult patients presenting for DR screening. The 8816 images that passed image-quality review were graded as no apparent DR (1374 images), mild nonproliferative DR (NPDR) (2152 images), moderate NPDR (2370 images), severe NPDR (1984 images), and proliferative DR (PDR) (936 images) by eight retinal experts according to the International Clinical Diabetic Retinopathy severity scale. After image preprocessing, 7935 DR images were selected from the above categories as the training dataset, while the rest of the images were used as the validation dataset. We introduced a 10-fold cross-validation strategy to assess and optimize our model. We also selected the publicly available, independent Messidor-2 dataset to test the performance of our model. For discrimination between no referral (no apparent DR and mild NPDR) and referral (moderate NPDR, severe NPDR, and PDR), we also computed prediction accuracy, sensitivity, specificity, area under the receiver operating characteristic curve (AUC), and κ value. RESULTS The proposed approach achieved a high classification accuracy of 93.49% (95% confidence interval [CI], 93.13%-93.85%), with a 96.93% sensitivity (95% CI, 96.35%-97.51%) and a 93.45% specificity (95% CI, 93.12%-93.79%), while the AUC was up to 0.9905 (95% CI, 0.9887-0.9923) on the independent test dataset. The κ value of our best model was 0.919, while the three experts had κ values of 0.906, 0.931, and 0.914, respectively. CONCLUSIONS This approach could automatically detect DR with excellent sensitivity, accuracy, and specificity and could aid in making a referral recommendation for further evaluation and treatment with high reliability.
TRANSLATIONAL RELEVANCE This approach has great value in early DR screening using retinal fundus photographs.
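The referral/no-referral metrics reported in this entry (sensitivity, specificity, AUC) can be computed from predicted labels and continuous scores as follows. A generic sketch with synthetic numbers, not the study's evaluation code:

```python
import numpy as np

def sens_spec(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    """AUC via the rank-sum (Mann-Whitney) formulation: the probability
    that a random positive outscores a random negative (ties count 0.5)."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))

# Illustrative use with made-up referral labels and model scores.
sensitivity, specificity = sens_spec([1, 1, 0, 0], [1, 0, 0, 1])
area = auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.2])
```

The pairwise formulation of AUC avoids explicitly sweeping thresholds and agrees with the area under the empirical ROC curve.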
Affiliation(s)
- Feng Li
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Zheng Liu
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Hua Chen
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Minshan Jiang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Xuedian Zhang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Zhizheng Wu
- Department of Precision Mechanical Engineering, Shanghai University, Shanghai 200072, China
34
Automated detection and classification of early AMD biomarkers using deep learning. Sci Rep 2019; 9:10990. [PMID: 31358808 PMCID: PMC6662691 DOI: 10.1038/s41598-019-47390-3] [Citation(s) in RCA: 46] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2019] [Accepted: 06/21/2019] [Indexed: 11/09/2022] Open
Abstract
Age-related macular degeneration (AMD) affects millions of people and is a leading cause of blindness throughout the world. Ideally, affected individuals would be identified at an early stage, before late sequelae such as outer retinal atrophy or exudative neovascular membranes develop, which can produce irreversible visual loss. Early identification could allow patients to be staged and appropriate monitoring intervals to be established. Accurate staging of earlier AMD stages could also facilitate the development of new preventative therapeutics. However, accurate and precise staging of AMD, particularly using newer optical coherence tomography (OCT)-based biomarkers, may be time-intensive and requires expert training, which may not be feasible in many circumstances, particularly in screening settings. In this work we develop a deep learning method for automated detection and classification of early AMD OCT biomarkers. Deep convolutional neural networks (CNNs) were explicitly trained to perform automated detection and classification of hyperreflective foci, hyporeflective foci within drusen, and subretinal drusenoid deposits from OCT B-scans. Numerous experiments were conducted to evaluate the performance of several state-of-the-art CNNs and different transfer learning protocols on an image dataset containing approximately 20,000 OCT B-scans from 153 patients. An overall accuracy of 87% for identifying the presence of early AMD biomarkers was achieved.
35
Recent Development on Detection Methods for the Diagnosis of Diabetic Retinopathy. Symmetry (Basel) 2019. [DOI: 10.3390/sym11060749] [Citation(s) in RCA: 46] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022] Open
Abstract
Diabetic retinopathy (DR) is a complication of diabetes that exists throughout the world. DR occurs due to a high ratio of glucose in the blood, which causes alterations in the retinal microvasculature. Because DR often presents without early warning symptoms, it can lead to complete vision loss. However, early screening through computer-assisted diagnosis (CAD) tools and proper treatment can control the prevalence of DR. Manual inspection of morphological changes in retinal anatomic parts is a tedious and challenging task. Therefore, many CAD systems have been developed to assist ophthalmologists in observing inter- and intra-variations. In this paper, a recent review of state-of-the-art CAD systems for the diagnosis of DR is presented. We describe CAD systems that have been developed with various computational intelligence and image processing techniques. The limitations and future trends of current CAD systems are also described in detail to help researchers. Moreover, potential CAD systems are compared in terms of statistical parameters to evaluate them quantitatively. The comparison results indicate that there is still a need for accurate CAD systems to assist in the clinical diagnosis of diabetic retinopathy.
36
Jiménez-García J, Romero-Oraá R, García M, López-Gálvez MI, Hornero R. Combination of Global Features for the Automatic Quality Assessment of Retinal Images. ENTROPY 2019; 21:e21030311. [PMID: 33267025 PMCID: PMC7514792 DOI: 10.3390/e21030311] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/28/2019] [Revised: 03/14/2019] [Accepted: 03/18/2019] [Indexed: 02/02/2023]
Abstract
Diabetic retinopathy (DR) is one of the most common causes of visual loss in developed countries. Computer-aided diagnosis systems aimed at detecting DR can reduce the workload of ophthalmologists in screening programs. Nevertheless, a large number of retinal images cannot be analyzed by physicians and automatic methods due to poor quality. Automatic retinal image quality assessment (RIQA) is needed before image analysis. The purpose of this study was to combine novel generic quality features to develop a RIQA method. Several features were calculated from retinal images to achieve this goal. Features derived from the spatial-spectral entropy-based quality (SSEQ) and natural image quality evaluator (NIQE) methods were extracted. They were combined with novel sharpness and luminosity measures based on the continuous wavelet transform (CWT) and the hue saturation value (HSV) color model, respectively. A subset of non-redundant features was selected using the fast correlation-based filter (FCBF) method. Subsequently, a multilayer perceptron (MLP) neural network was used to obtain the quality of images from the selected features. Classification results achieved 91.46% accuracy, 92.04% sensitivity, and 87.92% specificity. Results suggest that the proposed RIQA method could be applied in a more general computer-aided diagnosis system aimed at detecting a variety of retinal pathologies such as DR and age-related macular degeneration.
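The SSEQ-style spectral-entropy features mentioned in this entry can be sketched as the Shannon entropy of a block's normalized power spectrum. This is a simplified illustration; the actual SSEQ method works on block DCT coefficients with percentile pooling, and the 4x4 block below is a toy input:

```python
import numpy as np

def spectral_entropy(block):
    """Shannon entropy (bits) of the normalized non-DC power spectrum.

    Low entropy: power concentrated in few frequencies (smooth/blurred block).
    High entropy: power spread across frequencies (sharp or textured block).
    Assumes the block is not constant (otherwise the spectrum sum is zero).
    """
    power = np.abs(np.fft.fft2(block)) ** 2
    power = power.ravel()[1:]      # discard the DC component
    p = power / power.sum()        # normalize to a probability distribution
    p = p[p > 0]                   # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

# Illustrative use: a unit impulse has a perfectly flat power spectrum,
# so its non-DC entropy equals log2(number of non-DC bins).
delta = np.zeros((4, 4))
delta[0, 0] = 1.0
h = spectral_entropy(delta)
```

In a RIQA pipeline of the kind described, such per-block entropies would be pooled over the image and fed, together with other features, to a classifier such as an MLP.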
Affiliation(s)
- Jorge Jiménez-García
- Biomedical Engineering Group, University of Valladolid, Paseo de Belén 15, 47011 Valladolid, Spain
- Correspondence: Tel.: +34-983-18-47-16
- Roberto Romero-Oraá
- Biomedical Engineering Group, University of Valladolid, Paseo de Belén 15, 47011 Valladolid, Spain
- María García
- Biomedical Engineering Group, University of Valladolid, Paseo de Belén 15, 47011 Valladolid, Spain
- María I. López-Gálvez
- Biomedical Engineering Group, University of Valladolid, Paseo de Belén 15, 47011 Valladolid, Spain
- Department of Ophthalmology, Hospital Clínico Universitario de Valladolid, Avenida Ramón y Cajal 3, 47003 Valladolid, Spain
- Instituto de Oftalmobiología Aplicada, University of Valladolid, Paseo de Belén 17, 47011 Valladolid, Spain
- Roberto Hornero
- Biomedical Engineering Group, University of Valladolid, Paseo de Belén 15, 47011 Valladolid, Spain
- Instituto de Investigación en Matemáticas (IMUVA), University of Valladolid, 47011 Valladolid, Spain
- Instituto de Neurociencias de Castilla y León (INCYL), University of Salamanca, 37007 Salamanca, Spain
37
Coyner AS, Swan R, Campbell JP, Ostmo S, Brown JM, Kalpathy-Cramer J, Kim SJ, Jonas KE, Chan RVP, Chiang MF. Automated Fundus Image Quality Assessment in Retinopathy of Prematurity Using Deep Convolutional Neural Networks. Ophthalmol Retina 2019; 3:444-450. [PMID: 31044738 DOI: 10.1016/j.oret.2019.01.015] [Citation(s) in RCA: 38] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2018] [Revised: 01/12/2019] [Accepted: 01/23/2019] [Indexed: 10/27/2022]
Abstract
PURPOSE Accurate image-based ophthalmic diagnosis relies on fundus image clarity. This has important implications for the quality of ophthalmic diagnoses and for emerging methods such as telemedicine and computer-based image analysis. The purpose of this study was to implement a deep convolutional neural network (CNN) for automated assessment of fundus image quality in retinopathy of prematurity (ROP). DESIGN Experimental study. PARTICIPANTS Retinal fundus images were collected from preterm infants during routine ROP screenings. METHODS Six thousand one hundred thirty-nine retinal fundus images were collected from 9 academic institutions. Each image was graded for quality (acceptable quality [AQ], possibly acceptable quality [PAQ], or not acceptable quality [NAQ]) by 3 independent experts. Quality was defined as the ability to assess an image confidently for the presence of ROP. Of the 6139 images, NAQ, PAQ, and AQ images represented 5.6%, 43.6%, and 50.8% of the image set, respectively. Because of low representation of NAQ images in the data set, images labeled NAQ were grouped into the PAQ category, and a binary CNN classifier was trained using 5-fold cross-validation on 4000 images. A test set of 2109 images was held out for final model evaluation. Additionally, 30 images were ranked from worst to best quality by 6 experts via pairwise comparisons, and the CNN's ability to rank quality, regardless of quality classification, was assessed. MAIN OUTCOME MEASURES The CNN performance was evaluated using area under the receiver operating characteristic curve (AUC). A Spearman's rank correlation was calculated to evaluate the overall ability of the CNN to rank images from worst to best quality as compared with experts. RESULTS The mean AUC for 5-fold cross-validation was 0.958 (standard deviation, 0.005) for the diagnosis of AQ versus PAQ images. The AUC was 0.965 for the test set. 
The Spearman's rank correlation coefficient on the set of 30 images, compared with the overall expert consensus ranking, was 0.90. CONCLUSIONS This model accurately assessed retinal fundus image quality in a manner comparable with that of experts. This fully automated model has potential for application in clinical settings, telemedicine, and computer-based image analysis in ROP, and for generalizability to other ophthalmic diseases.
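The expert-versus-model ranking comparison in this entry uses Spearman's rank correlation: Pearson correlation applied to ranks, with tied values sharing their average rank. A minimal pure-Python sketch (not the study's code; inputs are illustrative):

```python
def rankdata(values):
    """1-based average ranks; tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of values tied with values[order[i]].
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors.

    Assumes neither input is constant (a constant vector has zero
    rank variance, making the correlation undefined).
    """
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

A rho of 0.90, as reported above, indicates that the CNN's quality ordering of the 30 images closely tracks the experts' consensus ordering even where the binary quality labels agree less precisely.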
Affiliation(s)
- Aaron S Coyner
- Department of Medical Informatics and Clinical Epidemiology, Oregon Health and Science University, Portland, Oregon
- Ryan Swan
- Department of Medical Informatics and Clinical Epidemiology, Oregon Health and Science University, Portland, Oregon
- J Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, Oregon
- Susan Ostmo
- Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, Oregon
- James M Brown
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, Massachusetts
- Jayashree Kalpathy-Cramer
- Department of Medical Informatics and Clinical Epidemiology, Oregon Health and Science University, Portland, Oregon; Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, Massachusetts; Massachusetts General Hospital and Brigham and Women's Hospital Center for Clinical Data Science, Boston, Massachusetts
- Sang Jin Kim
- Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, Oregon; Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Karyn E Jonas
- Department of Ophthalmology, University of Illinois at Chicago, Chicago, Illinois
- R V Paul Chan
- Department of Ophthalmology, University of Illinois at Chicago, Chicago, Illinois
- Michael F Chiang
- Department of Medical Informatics and Clinical Epidemiology, Oregon Health and Science University, Portland, Oregon; Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, Oregon
38
Teikari P, Najjar RP, Schmetterer L, Milea D. Embedded deep learning in ophthalmology: making ophthalmic imaging smarter. Ther Adv Ophthalmol 2019; 11:2515841419827172. [PMID: 30911733 PMCID: PMC6425531 DOI: 10.1177/2515841419827172] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2018] [Accepted: 12/20/2018] [Indexed: 01/22/2023] Open
Abstract
Deep learning has recently gained high interest in ophthalmology due to its ability to detect clinically significant features for diagnosis and prognosis. Despite these significant advances, little is known about the ability of various deep learning systems to be embedded within ophthalmic imaging devices, allowing automated image acquisition. In this work, we review existing and future directions for 'active acquisition': embedded deep learning that yields high-quality images with little intervention by the human operator. In clinical practice, the improved image quality should translate into more robust deep learning-based clinical diagnostics. Embedded deep learning will be enabled by constantly improving, low-cost hardware. We briefly review possible computation methods in larger clinical systems. Briefly, they can be organized in a three-layer framework composed of edge, fog, and cloud layers, with the edge layer operating at the device level. Improved edge-layer performance via 'active acquisition' serves as an automatic data curation operator, translating to better-quality data in electronic health records as well as on the cloud layer, for improved deep learning-based clinical data mining.
Affiliation(s)
- Petteri Teikari
- Visual Neurosciences Group, Singapore Eye Research Institute, Singapore
- Advanced Ocular Imaging, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Raymond P. Najjar
- Visual Neurosciences Group, Singapore Eye Research Institute, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, National University of Singapore, Singapore
- Leopold Schmetterer
- Visual Neurosciences Group, Singapore Eye Research Institute, Singapore
- Advanced Ocular Imaging, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Christian Doppler Laboratory for Ocular and Dermal Effects of Thiomers, Medical University of Vienna, Vienna, Austria
- Dan Milea
- Visual Neurosciences Group, Singapore Eye Research Institute, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, National University of Singapore, Singapore
- Neuro-Ophthalmology Department, Singapore National Eye Centre, Singapore
39
Fenner BJ, Wong RLM, Lam WC, Tan GSW, Cheung GCM. Advances in Retinal Imaging and Applications in Diabetic Retinopathy Screening: A Review. Ophthalmol Ther 2018; 7:333-346. [PMID: 30415454 PMCID: PMC6258577 DOI: 10.1007/s40123-018-0153-7] [Citation(s) in RCA: 65] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2018] [Indexed: 12/23/2022] Open
Abstract
Rising prevalence of diabetes worldwide has necessitated the implementation of population-based diabetic retinopathy (DR) screening programs that can perform retinal imaging and interpretation for extremely large patient cohorts in a rapid and sensitive manner while minimizing inappropriate referrals to retina specialists. While most current screening programs employ mydriatic or nonmydriatic color fundus photography and trained image graders to identify referable DR, new imaging modalities offer significant improvements in diagnostic accuracy, throughput, and affordability. Smartphone-based fundus photography, macular optical coherence tomography, ultrawide-field imaging, and artificial intelligence-based image reading address limitations of current approaches and will likely become necessary as DR becomes more prevalent. Here we review current trends in imaging for DR screening and emerging technologies that show potential for improving upon current screening approaches.
Affiliation(s)
- Beau J Fenner
- Residency Program, Singapore National Eye Centre, Singapore, Singapore
- Raymond L M Wong
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Wai-Ching Lam
- Department of Ophthalmology, The University of Hong Kong, Shatin, Hong Kong
- Gavin S W Tan
- Surgical Retina Department, Singapore National Eye Centre, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Graduate Medical School, Singapore, Singapore
- Retina Research Group, Singapore Eye Research Institute, Singapore, Singapore
- Gemmy C M Cheung
- Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Graduate Medical School, Singapore, Singapore
- Retina Research Group, Singapore Eye Research Institute, Singapore, Singapore
- Medical Retina Department, Singapore National Eye Centre, Singapore, Singapore