1
Huang S, Jin K, Gao Z, Yang B, Shi X, Zhou J, Grzybowski A, Gawecki M, Ye J. Automated interpretation of retinal vein occlusion based on fundus fluorescein angiography images using deep learning: A retrospective, multi-center study. Heliyon 2024; 10:e33108. PMID: 39027617; PMCID: PMC11255597; DOI: 10.1016/j.heliyon.2024.e33108. Received 11/30/2023; revised 06/13/2024; accepted 06/14/2024. Open access.
Abstract
Purpose: Fundus fluorescein angiography (FFA) is the gold standard for retinal vein occlusion (RVO) diagnosis. This study aimed to develop a deep learning-based system to diagnose and classify RVO using FFA images, addressing the time-consuming and variable nature of interpretation by ophthalmologists. Methods: 4028 FFA images of 467 eyes from 463 patients were collected and annotated. Three convolutional neural network (CNN) models (ResNet50, VGG19, InceptionV3) were trained to generate labels for image quality, eye, location, phase, lesions, diagnosis, and macular involvement. Model performance was evaluated by accuracy, precision, recall, F1 score, area under the curve, confusion matrix, human-machine comparison, and clinical validation on three external datasets. Results: The InceptionV3 model outperformed ResNet50 and VGG19 in labeling and interpreting FFA images for RVO diagnosis, achieving 77.63%-96.45% accuracy for basic information labels and 81.72%-96.45% for RVO-relevant labels. Compared with ophthalmologists, InceptionV3 improved accuracy by up to 19%. Conclusion: This study developed a deep learning model capable of automatic multi-label classification of FFA images for RVO diagnosis. The proposed system is anticipated to serve as a new diagnostic tool for RVO in settings with limited medical resources.
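The per-label metrics this study reports (accuracy, precision, recall, F1 score) can be illustrated with a minimal Python sketch. The confusion-matrix counts below are hypothetical, chosen only to show the arithmetic, and are not taken from the paper:

```python
# Minimal sketch: per-label accuracy, precision, recall and F1 from a
# binary confusion matrix. Counts are hypothetical, not the study's data.

def binary_metrics(tp, fp, fn, tn):
    """Return (accuracy, precision, recall, F1) for one label."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = binary_metrics(tp=80, fp=10, fn=20, tn=90)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
# → 0.85 0.889 0.8 0.842
```

In a multi-label setting such as the one described above, these metrics would be computed once per label (image quality, eye, phase, and so on) and reported per label or averaged.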
Affiliation(s)
- Shenyu Huang
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, China
- Kai Jin
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, China
- Zhiyuan Gao
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, China
- Boyuan Yang
- Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Xin Shi
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, China
- Jingxin Zhou
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, China
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland
- Maciej Gawecki
- Department of Ophthalmology of Specialist Hospital in Chojnice, Lesna 10, 89-600, Chojnice, Poland
- Dobry Wzrok Ophthalmological Clinic, Zabi Kruk 10, 80-402, Gdańsk, Poland
- Juan Ye
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, China
2
König M, Seeböck P, Gerendas BS, Mylonas G, Winklhofer R, Dimakopoulou I, Schmidt-Erfurth UM. Quality assessment of colour fundus and fluorescein angiography images using deep learning. Br J Ophthalmol 2023; 108:98-104. PMID: 36418144; PMCID: PMC10804038; DOI: 10.1136/bjo-2022-321963. Received 06/03/2022; accepted 11/11/2022.
Abstract
BACKGROUND/AIMS Image quality assessment (IQA) is crucial both for reading centres in clinical studies and for routine practice, as only adequate quality allows clinicians to correctly identify diseases and treat patients accordingly. Here we aim to develop a neural network for automated real-time IQA in colour fundus (CF) and fluorescein angiography (FA) images. METHODS Training and evaluation of two neural networks were conducted using 2272 CF and 2492 FA images, with binary labels in four (contrast, focus, illumination, shadow and reflection) and three (contrast, focus, noise) modality-specific categories plus an overall quality ranking. Performance was compared with a second human grader, evaluated on an external public dataset, and assessed in a clinical trial use-case. RESULTS The networks achieved an F1 score/area under the receiver operating characteristic curve/area under the precision-recall curve of 0.907/0.963/0.966 for CF and 0.822/0.918/0.889 for FA in overall quality prediction, with similar results in most categories. A clear relation between model uncertainty and prediction error was observed. In the clinical trial use-case evaluation, the networks achieved an accuracy of 0.930 for CF and 0.895 for FA. CONCLUSION The presented method allows automated IQA in real time, demonstrating human-level performance for CF as well as FA. Such models can help to overcome the problem of human intergrader and intragrader variability by providing objective and reproducible IQA results. This has particular relevance for real-time feedback in multicentre clinical studies, where images are uploaded to central reading centre portals. Moreover, automated IQA as a preprocessing step can support integrating automated approaches into clinical practice.
Affiliation(s)
- Michael König
- Department of Ophthalmology and Optometry, Medical University of Vienna, Wien, Austria
- Philipp Seeböck
- Department of Ophthalmology and Optometry, Medical University of Vienna, Wien, Austria
- Bianca S Gerendas
- Department of Ophthalmology and Optometry, Medical University of Vienna, Wien, Austria
- Georgios Mylonas
- Department of Ophthalmology and Optometry, Medical University of Vienna, Wien, Austria
- Rudolf Winklhofer
- Department of Ophthalmology and Optometry, Medical University of Vienna, Wien, Austria
- Ioanna Dimakopoulou
- Department of Ophthalmology and Optometry, Medical University of Vienna, Wien, Austria
3
A Neural Network for Automated Image Quality Assessment of Optic Disc Photographs. J Clin Med 2023; 12:1217. PMID: 36769865; PMCID: PMC9917571; DOI: 10.3390/jcm12031217. Received 12/20/2022; revised 01/26/2023; accepted 01/31/2023. Open access.
Abstract
This study describes the development of a convolutional neural network (CNN) for automated assessment of optic disc photograph quality. Using a code-free deep learning platform, a total of 2377 optic disc photographs were used to develop a deep CNN capable of determining optic disc photograph quality. Of these, 1002 were good-quality images, 609 were acceptable-quality, and 766 were poor-quality images. The dataset was split 80/10/10 into training, validation, and test sets and balanced for quality. A ternary classification model (good, acceptable, and poor quality) and a binary model (usable, unusable) were developed. In the ternary classification system, the model had an overall accuracy of 91% and an AUC of 0.98. The model had higher predictive accuracy for images of good (93%) and poor quality (96%) than for images of acceptable quality (91%). The binary model performed with an overall accuracy of 98% and an AUC of 0.99. When validated on 292 images not included in the original training/validation/test dataset, the model's accuracy was 85% on the three-class classification task and 97% on the binary classification task. The proposed system for automated image-quality assessment for optic disc photographs achieves high accuracy in both ternary and binary classification systems, and highlights the success achievable with a code-free platform. There is wide clinical and research potential for such a model, with potential applications ranging from integration into fundus camera software to provide immediate feedback to ophthalmic photographers, to prescreening large databases before their use in research.
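The 80/10/10 train/validation/test split "balanced for quality" described above can be sketched as a per-label (stratified) shuffle and slice. This is a minimal illustration with made-up image identifiers and the class counts from the abstract rounded for convenience, not the study's actual pipeline:

```python
import random

# Hypothetical sketch of a stratified 80/10/10 split: shuffle each
# quality class separately, then slice by the target fractions so the
# three subsets keep the same class balance.

def stratified_split(items, seed=0, fracs=(0.8, 0.1, 0.1)):
    """Split (id, label) pairs into train/val/test sets, per label."""
    by_label = {}
    for item_id, label in items:
        by_label.setdefault(label, []).append(item_id)
    rng = random.Random(seed)
    train, val, test = [], [], []
    for label, ids in by_label.items():
        rng.shuffle(ids)
        n_train = int(len(ids) * fracs[0])
        n_val = int(len(ids) * fracs[1])
        train += [(i, label) for i in ids[:n_train]]
        val += [(i, label) for i in ids[n_train:n_train + n_val]]
        test += [(i, label) for i in ids[n_train + n_val:]]
    return train, val, test

# Illustrative counts: 1000 good, 600 acceptable, 700 poor images.
data = ([(f"g{i}", "good") for i in range(1000)]
        + [(f"a{i}", "acceptable") for i in range(600)]
        + [(f"p{i}", "poor") for i in range(700)])
train, val, test = stratified_split(data)
print(len(train), len(val), len(test))  # → 1840 230 230
```

Because each class is sliced independently, every subset preserves the good/acceptable/poor proportions of the full dataset, which is what "balanced for quality" requires.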
4
Czajkowska J, Borak M. Computer-Aided Diagnosis Methods for High-Frequency Ultrasound Data Analysis: A Review. Sensors (Basel) 2022; 22:8326. PMID: 36366024; PMCID: PMC9653964; DOI: 10.3390/s22218326. Received 09/30/2022; revised 10/21/2022; accepted 10/25/2022.
Abstract
Over the last few decades, computer-aided diagnosis (CAD) systems have become a part of clinical practice. They have the potential to assist clinicians in daily diagnostic tasks. Image processing techniques are fast, repeatable, and robust, helping physicians to detect, classify, segment, and measure various structures. The recent rapid development of computer methods for high-frequency ultrasound image analysis opens up new diagnostic paths in dermatology, allergology, cosmetology, and aesthetic medicine. This paper, the first in this area, presents an overview of high-frequency ultrasound image processing techniques that have the potential to become part of computer-aided diagnosis systems. The reviewed methods are categorized by application, ultrasound device used, and type of image data processing. We present the bridge between diagnostic needs and already developed solutions, and discuss their limitations and future directions in high-frequency ultrasound image analysis. A search of the technical literature from 2005 to September 2022 was conducted, and in total 31 studies describing image processing methods were reviewed. The quantitative and qualitative analysis covered 39 algorithms selected as the most effective in this field, complemented by 20 medical papers that define the needs and opportunities for high-frequency ultrasound application and CAD development.
5
Luo X, Xu Y, Zhong Z, Xiang P, Wu X, Chong A. miR-8485 alleviates the injury of cardiomyocytes through TP53INP1. J Biochem Mol Toxicol 2022; 36:e23159. PMID: 35876212; DOI: 10.1002/jbt.23159. Received 09/06/2021; revised 04/25/2022; accepted 07/01/2022.
Abstract
MicroRNAs (miRNAs) feature prominently in regulating the progression of chronic heart failure (CHF). This study was performed to investigate the role of miR-8485 in cardiomyocyte injury and CHF. The miR-8485 level was found to be markedly reduced in the plasma of CHF patients compared with healthy controls. H2O2 treatment increased tumor necrosis factor-α, interleukin (IL)-6, and IL-1β levels, inhibited the viability of the human adult ventricular cardiomyocyte cell line AC16, and increased apoptosis, while miR-8485 overexpression reversed these effects. Tumor protein p53 inducible nuclear protein 1 (TP53INP1) was identified as a downstream target of miR-8485, and TP53INP1 overexpression weakened the effects of miR-8485 on cell viability, apoptosis, and inflammatory responses. Our data suggest that miR-8485 attenuates cardiomyocyte injury by targeting TP53INP1, indicating that it is a protective factor against CHF.
Affiliation(s)
- Xiuying Luo
- Department of Cardiology, The Second Affiliated Hospital (Jiande Branch), Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Yanlin Xu
- Department of Nephrology, The Second Affiliated Hospital (Jiande Branch), Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Ze Zhong
- Department of Cardiology, The Second Affiliated Hospital (Jiande Branch), Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Peng Xiang
- Department of Cardiology, The Second Affiliated Hospital (Jiande Branch), Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Xindong Wu
- Department of Cardiology, The Second Affiliated Hospital (Jiande Branch), Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Aiguo Chong
- Department of Cardiology, The Second Affiliated Hospital (Jiande Branch), Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
6
High-Frequency Ultrasound Dataset for Deep Learning-Based Image Quality Assessment. Sensors (Basel) 2022; 22:1478. PMID: 35214381; PMCID: PMC8875486; DOI: 10.3390/s22041478. Received 01/19/2022; revised 02/09/2022; accepted 02/12/2022.
Abstract
This study addresses high-frequency ultrasound image quality assessment for computer-aided diagnosis of skin. In recent decades, high-frequency ultrasound imaging has opened up new opportunities in dermatology, utilizing the most recent deep learning-based algorithms for automated image analysis. An individual dermatological examination contains a single image, a couple of pictures, or an image series acquired during probe movement. The estimated skin parameters may depend on the probe position, orientation, or acquisition setup; consequently, the more images analyzed, the more precise the obtained measurements. Therefore, for automated measurements, the best choice is to acquire an image series and then analyze its parameters statistically. However, besides correctly acquired images, the resulting series contains plenty of non-informative data: images with various artifacts or noise, and images acquired at moments when the ultrasound probe had no contact with the patient's skin. All of these influence further analysis, leading to misclassification or incorrect image segmentation, so an automated image selection step is crucial. To meet this need, we collected and shared 17,425 high-frequency images of facial skin from 516 measurements of 44 patients. Two experts annotated each image as correct or not. The proposed framework utilizes a deep convolutional neural network followed by a fuzzy reasoning system to automatically assess the quality of the acquired data. Different approaches to binary and multi-class image analysis, based on the VGG-16 model, were developed and compared. The best classification results reach 91.7% accuracy for the binary analysis and 82.3% for the multi-class analysis.
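The "CNN followed by a fuzzy reasoning system" pipeline above can be sketched at its final decision step: the network emits evidence scores in [0, 1], and a fuzzy rule combines them into a usable/unusable verdict. The scores, the min-based fuzzy AND, and the 0.5 threshold below are illustrative assumptions, not the paper's actual system:

```python
# Hypothetical sketch of a fuzzy-style decision step after a classifier.
# p_correct: the network's confidence the frame is a correct image;
# contact_score: evidence the probe was in contact with the skin.
# Both names and the rule itself are illustrative assumptions.

def fuzzy_usable(p_correct, contact_score):
    """Combine two [0, 1] evidence scores with min (fuzzy AND) and
    mark the frame 'usable' when the membership reaches 0.5."""
    membership = min(p_correct, contact_score)
    return membership, membership >= 0.5

m, usable = fuzzy_usable(p_correct=0.9, contact_score=0.7)
print(m, usable)  # → 0.7 True
```

Using min as the conjunction means a frame is rejected whenever any single piece of evidence is weak, e.g. a confident classification still fails if the probe-contact score is low.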
7
Chan EJJ, Najjar RP, Tang Z, Milea D. Deep Learning for Retinal Image Quality Assessment of Optic Nerve Head Disorders. Asia Pac J Ophthalmol (Phila) 2021; 10:282-288. PMID: 34383719; DOI: 10.1097/apo.0000000000000404. Open access.
Abstract
Deep learning (DL)-based retinal image quality assessment (RIQA) algorithms have been gaining popularity as a solution to reduce the frequency of diagnostically unusable images. Most existing RIQA tools target retinal conditions, with a dearth of studies looking into RIQA models for optic nerve head (ONH) disorders. The recent success of DL systems in detecting ONH abnormalities on color fundus images prompts the development of tailored RIQA algorithms for these specific conditions. In this review, we discuss recent progress in DL-based RIQA models in general and the need for RIQA models tailored to ONH disorders. Finally, we propose suggestions for such models in the future.
Affiliation(s)
- Raymond P Najjar
- Duke-NUS School of Medicine, Singapore
- Visual Neuroscience Group, Singapore Eye Research Institute, Singapore
- Zhiqun Tang
- Visual Neuroscience Group, Singapore Eye Research Institute, Singapore
- Dan Milea
- Duke-NUS School of Medicine, Singapore
- Visual Neuroscience Group, Singapore Eye Research Institute, Singapore
- Ophthalmology Department, Singapore National Eye Centre, Singapore
- Rigshospitalet, Copenhagen University, Denmark