Lu Z, Liu T, Ni Y, Liu H, Guan L. ChoroidSeg-ViT: A Transformer Model for Choroid Layer Segmentation Based on a Mixed Attention Feature Enhancement Mechanism. Transl Vis Sci Technol. 2024;13(9):7. [PMID: 39235399; PMCID: PMC11379093; DOI: 10.1167/tvst.13.9.7]
Abstract
Purpose
To develop a Vision Transformer (ViT) model based on the mixed attention feature enhancement mechanism, ChoroidSeg-ViT, for choroid layer segmentation in optical coherence tomography (OCT) images.
Methods
This study used a dataset of 100 OCT B-scan images. Ground truths were carefully labeled by experienced ophthalmologists. An end-to-end, local-enhanced Transformer model, ChoroidSeg-ViT, was designed to segment the choroid layer by integrating a local-enhanced feature extraction path and a semantic feature fusion path. Standard segmentation metrics were selected to evaluate ChoroidSeg-ViT.
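The abstract does not define the evaluation metrics explicitly, but the reported mDice, mIoU, and mAcc are standard segmentation measures. The sketch below illustrates how Dice, IoU, and per-pixel accuracy are typically computed for a binary choroid mask; the function name, variable names, and averaging convention are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): standard segmentation metrics
# for a binary choroid mask. The reported mDice/mIoU/mAcc are usually
# these values averaged over classes or over the test B-scans.
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7):
    """pred, gt: binary masks of shape (H, W); 1 = choroid, 0 = background."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * intersection / (pred.sum() + gt.sum() + eps)  # overlap vs. mask sizes
    iou = intersection / (union + eps)                          # overlap vs. combined area
    acc = (pred == gt).mean()                                   # per-pixel accuracy
    return {"dice": dice, "iou": iou, "acc": acc}

# Toy check: identical masks give metrics of approximately 1.0.
toy = np.zeros((4, 4), dtype=int)
toy[1:3, 1:3] = 1
print(segmentation_metrics(toy, toy))
```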
Results
Experimental results demonstrate that ChoroidSeg-ViT achieved superior segmentation performance (mDice: 98.31, mIoU: 96.62, mAcc: 98.29) compared with other deep learning approaches, indicating the effectiveness of the proposed model for the choroid layer segmentation task. Furthermore, ablation and generalization experiments validated the soundness of the module design.
Conclusions
We developed a novel Transformer model to precisely and automatically segment the choroid layer and achieved state-of-the-art performance.
Translational Relevance
ChoroidSeg-ViT produces precise, smooth choroid layer segmentations and could form the basis of an automatic choroid analysis system that would facilitate future choroidal research in ophthalmology.