1
Santarossa M, Beyer TT, Scharf ABA, Tatli A, von der Burchard C, Nazarenus J, Roider JB, Koch R. When Two Eyes Don't Suffice: Learning Difficult Hyperfluorescence Segmentations in Retinal Fundus Autofluorescence Images via Ensemble Learning. J Imaging 2024; 10:116. [PMID: 38786570] [PMCID: PMC11122615] [DOI: 10.3390/jimaging10050116] [Received: 04/16/2024] [Revised: 05/03/2024] [Accepted: 05/06/2024]
Abstract
Hyperfluorescence (HF) and reduced autofluorescence (RA) are important biomarkers in fundus autofluorescence (FAF) images for assessing the health of the retinal pigment epithelium (RPE), an important indicator of disease progression in geographic atrophy (GA) or central serous chorioretinopathy (CSCR). Autofluorescence images have been annotated by human raters, but distinguishing biomarkers (whether signals are increased or decreased) from the normal background proves challenging, with borders being particularly open to interpretation. Consequently, significant variations emerge among different graders, and even within the same grader during repeated annotations. Tests on in-house FAF data show that even highly skilled medical experts, despite previously discussing and settling on precise annotation guidelines, reach a pair-wise agreement measured as a Dice score of no more than 63-80% for HF segmentations and only 14-52% for RA. The data further show that the agreement of our primary annotation expert with herself is a Dice score of 72% for HF and 51% for RA. Given these numbers, the task of automated HF and RA segmentation cannot simply be reduced to improving a segmentation score. Instead, we propose the use of a segmentation ensemble. Learning from images with a single annotation, the ensemble reaches expert-like performance with an agreement (Dice score) of 64-81% for HF and 21-41% for RA with all our experts. In addition, utilizing the mean predictions of the ensemble networks and their variance, we devise ternary segmentations in which FAF image areas are labeled as confident background, confident HF, or potential HF, ensuring that predictions are reliable where they are confident (97% precision) while still detecting all instances of HF (99% recall) annotated by all experts.
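The ternary labeling from ensemble mean and variance described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the thresholds `hi`, `lo`, and `var_max` are assumed values, not those used in the paper.

```python
import numpy as np

def ternary_segmentation(probs, hi=0.5, lo=0.2, var_max=0.05):
    """Label pixels from an ensemble of HF probability maps.

    probs: array of shape (n_models, H, W) with per-model HF probabilities.
    Returns 0 = confident background, 1 = potential HF, 2 = confident HF.
    Threshold values are illustrative assumptions, not the paper's.
    """
    mean = probs.mean(axis=0)
    var = probs.var(axis=0)
    labels = np.ones(mean.shape, dtype=np.uint8)    # default: potential HF
    labels[(mean >= hi) & (var <= var_max)] = 2     # models agree: HF
    labels[(mean <= lo) & (var <= var_max)] = 0     # models agree: background
    return labels
```

Pixels where the ensemble members disagree (high variance) stay in the "potential HF" class, which is what allows the confident classes to reach high precision while the union of both HF classes keeps recall high.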
Affiliation(s)
- Monty Santarossa
- Department of Computer Science, Kiel University, 24118 Kiel, Germany; (T.T.B.); (J.N.); (R.K.)
- Tebbo Tassilo Beyer
- Department of Computer Science, Kiel University, 24118 Kiel, Germany; (T.T.B.); (J.N.); (R.K.)
- Ayse Tatli
- Department of Ophthalmology, Kiel University, 24118 Kiel, Germany; (A.B.A.S.); (A.T.); (C.v.d.B.); (J.B.R.)
- Claus von der Burchard
- Department of Ophthalmology, Kiel University, 24118 Kiel, Germany; (A.B.A.S.); (A.T.); (C.v.d.B.); (J.B.R.)
- Jakob Nazarenus
- Department of Computer Science, Kiel University, 24118 Kiel, Germany; (T.T.B.); (J.N.); (R.K.)
- Johann Baptist Roider
- Department of Ophthalmology, Kiel University, 24118 Kiel, Germany; (A.B.A.S.); (A.T.); (C.v.d.B.); (J.B.R.)
- Reinhard Koch
- Department of Computer Science, Kiel University, 24118 Kiel, Germany; (T.T.B.); (J.N.); (R.K.)
2
Liu X, Wu J, Shao A, Shen W, Ye P, Wang Y, Ye J, Jin K, Yang J. Uncovering Language Disparity of ChatGPT on Retinal Vascular Disease Classification: Cross-Sectional Study. J Med Internet Res 2024; 26:e51926. [PMID: 38252483] [PMCID: PMC10845019] [DOI: 10.2196/51926] [Received: 08/17/2023] [Revised: 10/07/2023] [Accepted: 11/30/2023]
Abstract
BACKGROUND Benefiting from rich knowledge and an exceptional ability to understand text, large language models like ChatGPT have shown great potential in English clinical environments. However, the performance of ChatGPT in non-English clinical settings, as well as its reasoning, has not been explored in depth. OBJECTIVE This study aimed to evaluate ChatGPT's diagnostic performance and inference abilities for retinal vascular diseases in a non-English clinical environment. METHODS In this cross-sectional study, we collected 1226 fundus fluorescein angiography reports and corresponding diagnoses written in Chinese and tested ChatGPT with 4 prompting strategies (direct diagnosis or diagnosis with a step-by-step reasoning process, in Chinese or English). RESULTS Compared with Chinese prompts for direct diagnosis, which achieved an F1-score of 70.47%, English prompts for direct diagnosis yielded the best diagnostic performance (80.05%), which was inferior to ophthalmologists (89.35%) but close to ophthalmologist interns (82.69%). As for its inference abilities, although ChatGPT could derive a reasoning process with a low error rate (0.4 per report) for both Chinese and English prompts, ophthalmologists found that the latter produced more reasoning steps with less incompleteness (44.31%), misinformation (1.96%), and hallucination (0.59%) (all P<.001). Analysis of the robustness of ChatGPT across language prompts also indicated significant differences in recall (P=.03) and F1-score (P=.04) between Chinese and English prompts. In short, when prompted in English, ChatGPT exhibited enhanced diagnostic and inference capabilities for retinal vascular disease classification based on Chinese fundus fluorescein angiography reports.
CONCLUSIONS ChatGPT can serve as a helpful medical assistant to provide diagnosis in non-English clinical environments, but there are still performance gaps, language disparities, and errors compared to professionals, which demonstrate the potential limitations and the need to continually explore more robust large language models in ophthalmology practice.
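Comparisons like the one above rest on a per-strategy F1-score over the predicted diagnoses. As a reference point, here is a minimal macro-averaged F1 implementation; the diagnosis labels in the usage example are invented for illustration and do not come from the study.

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: per-class F1 scores averaged with equal weight.

    y_true, y_pred: equal-length sequences of class labels (e.g. one
    predicted diagnosis per report).
    """
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

Running this once per prompting strategy against the reference diagnoses gives directly comparable scores, which is how language disparities such as the Chinese/English gap reported here become visible.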
Affiliation(s)
- Xiaocong Liu
- Eye Center, The Second Affiliated Hospital, Zhejiang University, Zhejiang, China
- School of Public Health, Zhejiang University School of Medicine, Zhejiang, China
- Jiageng Wu
- School of Public Health, Zhejiang University School of Medicine, Zhejiang, China
- An Shao
- Eye Center, The Second Affiliated Hospital, Zhejiang University, Zhejiang, China
- Wenyue Shen
- Eye Center, The Second Affiliated Hospital, Zhejiang University, Zhejiang, China
- Panpan Ye
- Eye Center, The Second Affiliated Hospital, Zhejiang University, Zhejiang, China
- Yao Wang
- Eye Center, The Second Affiliated Hospital, Zhejiang University, Zhejiang, China
- Juan Ye
- Eye Center, The Second Affiliated Hospital, Zhejiang University, Zhejiang, China
- Kai Jin
- Eye Center, The Second Affiliated Hospital, Zhejiang University, Zhejiang, China
- Jie Yang
- School of Public Health, Zhejiang University School of Medicine, Zhejiang, China
3
Zhao X, Lin Z, Yu S, Xiao J, Xie L, Xu Y, Tsui CK, Cui K, Zhao L, Zhang G, Zhang S, Lu Y, Lin H, Liang X, Lin D. An artificial intelligence system for the whole process from diagnosis to treatment suggestion of ischemic retinal diseases. Cell Rep Med 2023; 4:101197. [PMID: 37734379] [PMCID: PMC10591037] [DOI: 10.1016/j.xcrm.2023.101197] [Received: 01/07/2023] [Revised: 05/29/2023] [Accepted: 08/23/2023]
Abstract
Ischemic retinal diseases (IRDs) are a series of common blinding diseases whose diagnosis and treatment depend on accurate fundus fluorescein angiography (FFA) image interpretation. An artificial intelligence system (Ai-Doctor) was developed to interpret FFA images. Ai-Doctor performed well in image phase identification (area under the curve [AUC], 0.991-0.999), diabetic retinopathy (DR) and branch retinal vein occlusion (BRVO) diagnosis (AUC, 0.979-0.992), and non-perfusion area segmentation (Dice similarity coefficient [DSC], 89.7%-90.1%) and quantification. The segmentation model was extended to previously unencountered IRDs (central RVO and retinal vasculitis), with DSCs of 89.2% and 83.6%, respectively. A clinically applicable ischemia index (CAII) was proposed to evaluate the degree of ischemia; patients with CAII values exceeding 0.17 in BRVO and 0.08 in DR may be more likely to require laser therapy. Ai-Doctor is expected to achieve accurate FFA image interpretation for IRDs, potentially reducing reliance on retinal specialists.
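The abstract does not give the exact CAII formula, so the sketch below assumes the index is the fraction of gradable retina occupied by the segmented non-perfusion area; only the decision thresholds (0.17 for BRVO, 0.08 for DR) come from the abstract.

```python
import numpy as np

# Thresholds reported in the abstract; the index definition below is an assumption.
CAII_THRESHOLDS = {"BRVO": 0.17, "DR": 0.08}

def ischemia_index(nonperfusion_mask, retina_mask):
    """Fraction of gradable retina pixels that are non-perfused (assumed definition)."""
    retina = np.asarray(retina_mask, dtype=bool)
    nonperf = np.logical_and(np.asarray(nonperfusion_mask, dtype=bool), retina)
    return nonperf.sum() / retina.sum()

def may_need_laser(index, disease):
    """Flag patients whose index exceeds the disease-specific threshold."""
    return index > CAII_THRESHOLDS[disease]
```

With segmentation masks from a model like Ai-Doctor's, such an index turns a pixel map into a single per-patient number that a treatment-suggestion rule can act on.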
Affiliation(s)
- Xinyu Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China; Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen 518040, China
- Zhenzhe Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Shanshan Yu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Jun Xiao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Liqiong Xie
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Yue Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Ching-Kit Tsui
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Kaixuan Cui
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Guoming Zhang
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen 518040, China
- Shaochong Zhang
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen 518040, China
- Yan Lu
- Foshan Second People's Hospital, Foshan 528001, China
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China; Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou 570311, China; Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou 510080, China.
- Xiaoling Liang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
4
Zhao PY, Bommakanti N, Yu G, Aaberg MT, Patel TP, Paulus YM. Deep learning for automated detection of neovascular leakage on ultra-widefield fluorescein angiography in diabetic retinopathy. Sci Rep 2023; 13:9165. [PMID: 37280345] [DOI: 10.1038/s41598-023-36327-6] [Received: 12/21/2022] [Accepted: 06/01/2023]
Abstract
Diabetic retinopathy is a leading cause of blindness in working-age adults worldwide. Neovascular leakage on fluorescein angiography indicates progression to the proliferative stage of diabetic retinopathy, which is an important distinction that requires timely ophthalmic intervention with laser or intravitreal injection treatment to reduce the risk of severe, permanent vision loss. In this study, we developed a deep learning algorithm to detect neovascular leakage on ultra-widefield fluorescein angiography images obtained from patients with diabetic retinopathy. The algorithm, an ensemble of three convolutional neural networks, was able to accurately classify neovascular leakage and distinguish this disease marker from other angiographic disease features. With additional real-world validation and testing, our algorithm could facilitate identification of neovascular leakage in the clinical setting, allowing timely intervention to reduce the burden of blinding diabetic eye disease.
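An ensemble of three CNN classifiers, as described above, is commonly fused by soft voting over the models' predicted probabilities. The abstract does not state the exact fusion rule, so this is a plausible sketch, with an assumed decision threshold:

```python
import numpy as np

def soft_vote(prob_maps, threshold=0.5):
    """Average leakage probabilities from several models and threshold.

    prob_maps: array-like of shape (n_models, n_images) with each model's
    probability of neovascular leakage per image. The 0.5 threshold is an
    assumed value, not taken from the paper.
    """
    mean = np.asarray(prob_maps, dtype=float).mean(axis=0)
    return mean >= threshold
```

Averaging before thresholding lets the ensemble suppress a single model's spurious high score, which tends to make the classifier more robust to confounding angiographic features than any one network alone.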
Affiliation(s)
- Peter Y Zhao
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA
- Nikhil Bommakanti
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA
- Gina Yu
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA
- Michael T Aaberg
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA
- Tapan P Patel
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA
- Yannis M Paulus
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA.
5
Wilson KJ, Dhalla A, Meng Y, Tu Z, Zheng Y, Mhango P, Seydel KB, Beare NAV. Retinal imaging technologies in cerebral malaria: a systematic review. Malar J 2023; 22:139. [PMID: 37101295] [PMCID: PMC10131356] [DOI: 10.1186/s12936-023-04566-7] [Received: 12/15/2022] [Accepted: 04/20/2023]
Abstract
BACKGROUND Cerebral malaria (CM) continues to present a major health challenge, particularly in sub-Saharan Africa. CM is associated with a characteristic malarial retinopathy (MR) of diagnostic and prognostic significance. Advances in retinal imaging have allowed researchers to better characterize the changes seen in MR and to make inferences about the pathophysiology of the disease. This study aimed to explore the role of retinal imaging in diagnosis and prognostication in CM, to establish insights into the pathophysiology of CM from retinal imaging, and to identify future research directions. METHODS The literature was systematically reviewed using the African Index Medicus, MEDLINE, Scopus and Web of Science databases. A total of 35 full texts were included in the final analysis. The descriptive nature of the included studies and their heterogeneity precluded meta-analysis. RESULTS Available research clearly shows that retinal imaging is useful both as a clinical tool for the assessment of CM and as a scientific instrument to aid understanding of the condition. Modalities which can be performed at the bedside, such as fundus photography and optical coherence tomography, are best positioned to take advantage of artificial intelligence-assisted image analysis, unlocking the clinical potential of retinal imaging for real-time diagnosis in low-resource environments where extensively trained clinicians may be few in number, and for guiding adjunctive therapies as they develop. CONCLUSIONS Further research into retinal imaging technologies in CM is justified. In particular, co-ordinated interdisciplinary work shows promise in unpicking the pathophysiology of a complex disease.
Affiliation(s)
- Kyle J Wilson
- Department of Eye & Vision Sciences, University of Liverpool, Liverpool, UK.
- Malawi-Liverpool-Wellcome Trust, Blantyre, Malawi.
- Amit Dhalla
- Department of Ophthalmology, Sheffield Teaching Hospitals, Sheffield, UK
- Yanda Meng
- Department of Eye & Vision Sciences, University of Liverpool, Liverpool, UK
- Zhanhan Tu
- School of Psychology and Vision Sciences, College of Life Science, The University of Leicester Ulverscroft Eye Unit, Robert Kilpatrick Clinical Sciences Building, Leicester Royal Infirmary, Leicester, UK
- Yalin Zheng
- Department of Eye & Vision Sciences, University of Liverpool, Liverpool, UK
- St. Paul's Eye Unit, Royal Liverpool University Hospitals, Liverpool, UK
- Liverpool Centre for Cardiovascular Science, University of Liverpool and Liverpool Heart and Chest Hospital, Liverpool, UK
- Priscilla Mhango
- Department of Ophthalmology, Kamuzu University of Health Sciences, Blantyre, Malawi
- Karl B Seydel
- College of Osteopathic Medicine, Michigan State University, East Lansing, MI, USA
- Blantyre Malaria Project, Kamuzu University of Health Sciences, Blantyre, Malawi
- Nicholas A V Beare
- Department of Eye & Vision Sciences, University of Liverpool, Liverpool, UK.
- St. Paul's Eye Unit, Royal Liverpool University Hospitals, Liverpool, UK.
6
Kurup AR, Wigdahl J, Benson J, Martínez-Ramón M, Solíz P, Joshi V. Automated malarial retinopathy detection using transfer learning and multi-camera retinal images. Biocybern Biomed Eng 2023; 43:109-123. [PMID: 36685736] [PMCID: PMC9851283] [DOI: 10.1016/j.bbe.2022.12.003]
Abstract
Cerebral malaria (CM) is a fatal syndrome found commonly in children less than 5 years old in sub-Saharan Africa and Asia. The retinal signs associated with CM are known as malarial retinopathy (MR), and they include highly specific retinal lesions such as whitening and hemorrhages. Detecting these lesions allows the detection of CM with high specificity. Up to 23% of CM patients are over-diagnosed due to the presence of clinical symptoms also related to pneumonia, meningitis, or other conditions. As a result, such patients go untreated for these pathologies, resulting in death or neurological disability. It is essential to have a low-cost, high-specificity diagnostic technique for CM detection, for which we developed a method based on transfer learning (TL). Models pre-trained with TL select good-quality retinal images, which are fed into another TL model to detect CM. This approach shows 96% specificity with low-cost retinal cameras.
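The two-stage design described above (a quality-gating model followed by an MR detector) can be sketched as a simple cascade. The function names, probability interfaces, and thresholds below are assumptions for illustration, not the authors' implementation:

```python
def cascade_predict(image, quality_model, mr_model, q_thresh=0.5, mr_thresh=0.5):
    """Two-stage transfer-learning cascade (sketch).

    quality_model and mr_model are assumed to map an image to a
    probability in [0, 1]; threshold values are illustrative.
    """
    if quality_model(image) < q_thresh:
        return "ungradable"            # reject poor-quality images up front
    return "MR" if mr_model(image) >= mr_thresh else "no MR"
```

Gating on image quality first matters with low-cost cameras: it keeps ungradable captures from ever reaching the detector, where they would otherwise degrade specificity.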
Affiliation(s)
- Jeff Wigdahl
- VisionQuest Biomedical Inc., Albuquerque, NM, USA
- Peter Solíz
- VisionQuest Biomedical Inc., Albuquerque, NM, USA
7
Zaaboub N, Sandid F, Douik A, Solaiman B. Optic disc detection and segmentation using saliency mask in retinal fundus images. Comput Biol Med 2022; 150:106067. [PMID: 36150251] [DOI: 10.1016/j.compbiomed.2022.106067] [Received: 03/31/2022] [Revised: 08/25/2022] [Accepted: 08/27/2022]
Abstract
BACKGROUND AND OBJECTIVE Detection of the optic disc (OD) in retinal fundus images is crucial for identifying diverse abnormal conditions in the retina, such as diabetic retinopathy. Previous systems are oriented toward OD detection and segmentation, but most fail to locate the OD when the image lacks a typical appearance. The objective of the proposed work is to define a new and robust OD segmentation method for color retinal fundus images. METHODS The proposed algorithm is composed of two stages: OD localization and segmentation. The first stage performs OD localization through 1) a preprocessing step; 2) vessel extraction and elimination; and 3) a geometric analysis that decides the OD location. In the second stage, a set of candidate contours is computed; a combination of these candidates forms a complete contour of the OD. RESULTS The proposed method is evaluated on 10 publicly available databases as well as a local database. Accuracy rates on the RimOne and IDRID databases are 98.06% and 99.71%, respectively, and 100% for the Chase, Drive, HRF, Drishti, Drions, Bin Rushed, Magrabia, Messidor and LocalDB databases, with an overall success rate of 99.80% and specificity rates of 99.44%, 99.64%, 99.66%, 99.66%, 99.70%, 99.87%, 99.72%, 99.83% and 99.82% for the Rim One, Drions, IDRID, Drishti, HRF, Bin Rushed, Magrabia, Messidor and proprietary databases. CONCLUSION The main advantage of the proposed approach is its robustness and excellent performance even on critical cases of retinal images. The proposed method achieves state-of-the-art performance with regard to OD detection and segmentation. It is also of great interest for clinical use without expert intervention on each image.
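The abstract does not specify how the candidate contours are combined into the final OD contour; pixel-wise majority voting over binary candidate masks is one plausible sketch of such a fusion step:

```python
import numpy as np

def combine_candidates(masks, min_votes=None):
    """Fuse binary OD candidate masks by pixel-wise majority vote.

    masks: list of equally shaped binary arrays, one per candidate. By
    default a pixel is kept when more than half of the candidates include
    it. The paper's actual combination rule may differ.
    """
    stack = np.stack([np.asarray(m, dtype=bool) for m in masks])
    if min_votes is None:
        min_votes = stack.shape[0] // 2 + 1
    return stack.sum(axis=0) >= min_votes
```

A fused mask like this discards pixels supported by only one candidate, which is one way imperfect individual candidates can still yield an accurate final contour.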
Affiliation(s)
- Nihal Zaaboub
- ENIT: National Engineering School of Tunis, University Tunis El Manar, Tunisia; NOCCS-ENISo: Networked Objects Control and Communication Systems Laboratory, Tunisia.
- Faten Sandid
- NOCCS-ENISo: Networked Objects Control and Communication Systems Laboratory, Tunisia
- Ali Douik
- NOCCS-ENISo: Networked Objects Control and Communication Systems Laboratory, Tunisia; ENISo: National Engineering School of Sousse, University of Sousse, Tunisia
- Basel Solaiman
- Image & Information Processing Department (iTi), IMT-Atlantique, Technopôle Brest Iroise CS 83818, 29238 Brest, France
8
Chronological Registration of OCT and Autofluorescence Findings in CSCR: Two Distinct Patterns in Disease Course. Diagnostics (Basel) 2022; 12:diagnostics12081780. [PMID: 35892493] [PMCID: PMC9332035] [DOI: 10.3390/diagnostics12081780] [Received: 05/30/2022] [Revised: 07/11/2022] [Accepted: 07/16/2022]
Abstract
Optical coherence tomography (OCT) and fundus autofluorescence (FAF) are important imaging modalities for the assessment and prognosis of central serous chorioretinopathy (CSCR). However, setting the findings from both into spatial and temporal context, as desirable for disease analysis, remains a challenge because the two modalities are captured from different perspectives: sparse three-dimensional (3D) cross sections for OCT and two-dimensional (2D) en face images for FAF. To bridge this gap, we propose a visualisation pipeline capable of projecting OCT labels onto en face image modalities such as FAF. By mapping OCT B-scans onto the accompanying en face infrared (IR) image and then registering the IR image onto the FAF image with a neural network, we can directly compare OCT labels to other labels in the en face plane. We also present a U-Net-inspired segmentation model to predict segmentations in unlabeled OCTs. Evaluations show that both our networks achieve high performance (a Dice score of 0.853 and an area under the curve of 0.913). Furthermore, medical analysis performed on exemplary, chronologically arranged CSCR progressions of 12 patients visualized with our pipeline indicates that two patterns emerge in CSCR: subretinal fluid (SRF) in OCT preceding hyperfluorescence (HF) in FAF, and vice versa.
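The Dice score used to evaluate the segmentation network here (and in several of the entries above) is a standard overlap measure between two binary masks; for reference, a minimal implementation:

```python
import numpy as np

def dice_score(pred, target):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    total = pred.sum() + target.sum()
    if total == 0:
        return 1.0  # convention: two empty masks match perfectly
    return 2.0 * np.logical_and(pred, target).sum() / total
```

Because it normalizes the intersection by the sizes of both masks, Dice rewards agreement on small structures such as SRF or HF regions far more sensitively than plain pixel accuracy would.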