1
Niu YN, He HL, Chen XY, Ling SG, Dong Z, Xiong Y, Qi Y, Jin ZB. A Novel Grading System for Diffuse Chorioretinal Atrophy in Pathologic Myopia. Ophthalmol Ther 2024; 13:1171-1184. [PMID: 38441856] [DOI: 10.1007/s40123-024-00908-z]
Abstract
INTRODUCTION This study aimed to quantitatively assess diffuse chorioretinal atrophy (DCA) in pathologic myopia and to establish a standardized classification system utilizing artificial intelligence. METHODS A total of 202 patients underwent comprehensive examinations, and 338 eyes were included in the study. The methodology involved image preprocessing, sample labeling, deep learning segmentation models, and measurement of the area and density of DCA lesions. Lesion severity was graded using statistical methods, and grades were assigned to describe the morphology of the corresponding fundus photographs. Hierarchical clustering categorized fundi with diffuse atrophy into three groups (G1, G2, G3) based on the area and density of atrophy, while highly myopic fundi without diffuse atrophy were designated G0. One-way analysis of variance (ANOVA) and nonparametric tests were conducted to assess the statistical associations across DCA grades. RESULTS On the basis of the area and density of DCA, the condition was classified into four grades: G0, G1 (0 < density ≤ 0.093), G2 (0.093 < density ≤ 0.245), and G3 (0.245 < density ≤ 0.712). Fundus photographs depicted progressive enlargement of atrophic lesions, evolving from punctate to patchy with indistinct boundaries. DCA lesions exhibited a gradual shift in color from brown-yellow to yellow-white, originating from the temporal side of the optic disc and extending toward the macula, with severe cases exhibiting widespread distribution throughout the posterior pole. Compared with those without DCA (G0), patients with DCA were significantly older [34.00 (27.00, 48.00) vs 29.00 (26.00, 34.00) years], had a longer axial length (28.85 ± 1.57 vs 27.11 ± 1.01 mm), and had a more myopic spherical equivalent [-13.00 (-16.00, -10.50) vs -9.09 ± 2.41 D] (all P < 0.001). In eyes with DCA, a trend emerged as grades increased from G1 to G3, with associations with older age, longer axial length, more myopic spherical equivalent, larger area of parapapillary atrophy, and higher fundus tessellated density (all P < 0.001). CONCLUSIONS The novel grading system for DCA, based on assessments of area and density, serves as a reliable measure of disease severity and is suitable for widespread application in screening for pathologic myopia.
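As a rough illustration of the density-based grading this abstract describes, the reported cutoffs can be turned into a simple lookup. This is a sketch only; the function name and the treatment of boundary values are our assumptions, not the authors' implementation:

```python
def grade_dca(density):
    """Map a DCA lesion density to a grade using the cutoffs reported
    in the abstract (G1: 0 < d <= 0.093, G2: d <= 0.245, G3: d <= 0.712).
    Boundary handling is an assumption, not from the paper."""
    if density == 0:
        return "G0"  # highly myopic fundus without diffuse atrophy
    if density <= 0.093:
        return "G1"
    if density <= 0.245:
        return "G2"
    if density <= 0.712:
        return "G3"
    raise ValueError("density outside the range reported in the study")
```

For example, an eye with a lesion density of 0.2 would fall into G2 under these cutoffs.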
Affiliation(s)
- Yu-Ning Niu
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, 100005, China
- Hai-Long He
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, 100005, China
- Xuan-Yu Chen
- Capital Medical University, Beijing, 100069, China
- Sai-Guang Ling
- EVision Technology (Beijing) Co. Ltd, Beijing, 100085, China
- Zhou Dong
- EVision Technology (Beijing) Co. Ltd, Beijing, 100085, China
- Ying Xiong
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, 100005, China
- Yue Qi
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, 100005, China
- Zi-Bing Jin
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, 100005, China
2
Wang Y, Yang Z, Liu X, Li Z, Wu C, Wang Y, Jin K, Chen D, Jia G, Chen X, Ye J, Huang X. PGKD-Net: Prior-guided and Knowledge Diffusive Network for Choroid Segmentation. Artif Intell Med 2024; 150:102837. [PMID: 38553151] [DOI: 10.1016/j.artmed.2024.102837]
Abstract
The thickness of the choroid is considered an important indicator for clinical diagnosis. Accurate choroid segmentation in retinal OCT images is therefore crucial for monitoring various ophthalmic diseases. However, this remains challenging due to blurry boundaries and interference from other lesions. To address these issues, we propose a novel prior-guided and knowledge diffusive network (PGKD-Net) that fully utilizes retinal structural information to highlight choroidal region features and boost segmentation performance. It is composed of two parts: a Prior-mask Guided Network (PG-Net) for coarse segmentation and a Knowledge Diffusive Network (KD-Net) for fine segmentation. In addition, we design two novel feature enhancement modules: Multi-Scale Context Aggregation (MSCA) and Multi-Level Feature Fusion (MLFF). The MSCA module captures long-distance dependencies between features from different receptive fields and improves the model's ability to learn global context. The MLFF module integrates the cascaded context knowledge learned from PG-Net to benefit fine-level segmentation. Comprehensive experiments were conducted to evaluate the performance of the proposed PGKD-Net. The results show that our method achieves superior segmentation accuracy over other state-of-the-art methods. Our code is publicly available at: https://github.com/yzh-hdu/choroid-segmentation.
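The coarse-to-fine idea described here, where a prior mask from a coarse stage guides a fine stage, can be sketched minimally as passing the coarse mask to the second network as an extra input channel. This illustrates the general technique only; it is not the authors' code, and the names and shapes are assumptions:

```python
import numpy as np

def prior_guided_input(image, coarse_mask):
    """Stack a coarse segmentation mask (H, W) onto a multi-channel
    image (C, H, W) as an extra channel, so a fine-stage network can
    condition on the prior. Generic coarse-to-fine sketch."""
    return np.concatenate([image, coarse_mask[None, ...]], axis=0)
```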
Affiliation(s)
- Yaqi Wang
- College of Media Engineering, Communication University of Zhejiang, Hangzhou, China
- Zehua Yang
- Hangzhou Dianzi University, Hangzhou, China
- Xindi Liu
- Department of Ophthalmology, School of Medicine, The Second Affiliated Hospital of Zhejiang University, Hangzhou, China
- Zhi Li
- Hangzhou Dianzi University, Hangzhou, China
- Chengyu Wu
- Department of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, China
- Yizhen Wang
- Hangzhou Dianzi University, Hangzhou, China
- Kai Jin
- Department of Ophthalmology, School of Medicine, The Second Affiliated Hospital of Zhejiang University, Hangzhou, China
- Dechao Chen
- Hangzhou Dianzi University, Hangzhou, China
- Juan Ye
- Department of Ophthalmology, School of Medicine, The Second Affiliated Hospital of Zhejiang University, Hangzhou, China
3
Guo J, Yan P, Qin Y, Liu M, Ma Y, Li J, Wang R, Luo H, Lv S. Automated measurement and grading of knee cartilage thickness: a deep learning-based approach. Front Med (Lausanne) 2024; 11:1337993. [PMID: 38487024] [PMCID: PMC10939064] [DOI: 10.3389/fmed.2024.1337993]
Abstract
Background Knee cartilage is the most crucial structure in the knee, and reduction of cartilage thickness is a significant factor in the occurrence and development of osteoarthritis. Measuring cartilage thickness allows a more accurate assessment of cartilage wear, but the process is relatively time-consuming. Our objectives were to use various deep learning (DL) methods to segment knee cartilage from MRIs acquired with different equipment and parameters, to build a DL-based model for measuring and grading knee cartilage, and to establish a standardized database of knee cartilage thickness. Methods In this retrospective study, we selected a mixed knee MRI dataset of 700 cases drawn from four datasets with varying cartilage thickness. We employed four convolutional neural networks (UNet, UNet++, ResUNet, and TransUNet) to train on and segment the mixed dataset, leveraging an extensive array of labeled data for supervised learning. We then measured and graded knee cartilage thickness in 12 regions. Finally, a standard knee cartilage thickness dataset was established from 291 cases aged 20 to 45 years with a Kellgren-Lawrence grade of 0. Results Validation of the segmentation networks showed that TransUNet performed best on the mixed dataset, with an overall Dice similarity coefficient of 0.813 and an Intersection over Union of 0.692. The model's mean absolute percentage error for automatic measurement and grading after segmentation was 0.831. The experiment also yielded standard knee cartilage thicknesses: an average of 1.98 mm for femoral cartilage and 2.14 mm for tibial cartilage. Conclusion By selecting the best knee cartilage segmentation network, we built a model with stronger generalization ability to automatically segment, measure, and grade cartilage thickness. This model can assist surgeons in diagnosing changes in patients' cartilage thickness more accurately and efficiently.
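The Dice similarity coefficient and Intersection over Union reported above are standard overlap metrics for binary segmentation masks. Their generic definitions (not the authors' code) can be written as:

```python
import numpy as np

def dice_and_iou(pred, target):
    """Dice = 2|A∩B| / (|A| + |B|); IoU = |A∩B| / |A∪B|,
    for binary masks pred and target of equal shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum())
    iou = inter / np.logical_or(pred, target).sum()
    return float(dice), float(iou)
```

Note that Dice is always at least as large as IoU for the same pair of masks, which is consistent with the 0.813 vs 0.692 figures reported.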
Affiliation(s)
- JiangRong Guo
- Department of Orthopedics and Sports Medicine, The Second Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang, China
- Pengfei Yan
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, Heilongjiang, China
- Yong Qin
- Department of Orthopedics and Sports Medicine, The Second Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang, China
- MeiNa Liu
- Department of Biostatistics, School of Public Health, Harbin Medical University, Harbin, Heilongjiang, China
- Yingkai Ma
- Department of Orthopedics and Sports Medicine, The Second Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang, China
- JiangQi Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, Heilongjiang, China
- Ren Wang
- Department of Orthopedics and Sports Medicine, The Second Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang, China
- Hao Luo
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, Heilongjiang, China
- Songcen Lv
- Department of Orthopedics and Sports Medicine, The Second Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang, China
4
Chaoyang Z, Shibao S, Wenmao H, Pengcheng Z. FDR-TransUNet: A novel encoder-decoder architecture with vision transformer for improved medical image segmentation. Comput Biol Med 2024; 169:107858. [PMID: 38113680] [DOI: 10.1016/j.compbiomed.2023.107858]
Abstract
The U-shaped and Transformer architectures have achieved exceptional performance in medical image segmentation and natural language processing, respectively. Their combination has also led to remarkable results but still suffers from substantial loss of image features during downsampling and difficulty recovering spatial information during upsampling. In this paper, we propose a novel encoder-decoder architecture for medical image segmentation with a flexibly adjustable hybrid encoder and a decoder with two expanding paths. The hybrid encoder incorporates the feature double reuse (FDR) block and the encoder of the Vision Transformer (ViT), which extract local and global pixel localization information and effectively alleviate image feature loss. Meanwhile, we retain the original class-token sequence of the Vision Transformer and develop an additional corresponding expanding path. The class-token sequence and abstract image features are leveraged by two independent expanding paths with a deep-supervision strategy, which better recovers image spatial information and accelerates model convergence. To further mitigate feature loss and improve spatial information recovery, we introduce successive residual connections throughout the entire network. We evaluated our model on the COVID-19 lung segmentation and infection area segmentation tasks. The mIoU index increased by 1.5 and 3.9 points, respectively, compared to other models, demonstrating a performance improvement.
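The mIoU index cited above is conventionally the IoU averaged over classes. A generic definition of the standard metric (not the authors' code) is:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union across classes for integer label
    maps; classes absent from both prediction and ground truth are
    skipped so they do not distort the average."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both maps
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```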
Affiliation(s)
- Zhang Chaoyang
- School of Information Engineering, HeNan University of Science and Technology, Luoyang, 471023, China
- Sun Shibao
- School of Information Engineering, HeNan University of Science and Technology, Luoyang, 471023, China
- Hu Wenmao
- School of Information Engineering, HeNan University of Science and Technology, Luoyang, 471023, China
- Zhao Pengcheng
- School of Information Engineering, HeNan University of Science and Technology, Luoyang, 471023, China
5
Lu J, Cheng Y, Hiya FE, Shen M, Herrera G, Zhang Q, Gregori G, Rosenfeld PJ, Wang RK. Deep-learning-based automated measurement of outer retinal layer thickness for use in the assessment of age-related macular degeneration, applicable to both swept-source and spectral-domain OCT imaging. Biomed Opt Express 2024; 15:413-427. [PMID: 38223170] [PMCID: PMC10783897] [DOI: 10.1364/boe.512359]
Abstract
Effective biomarkers are required for assessing the progression of age-related macular degeneration (AMD), a prevalent and progressive eye disease. This paper presents a deep-learning-based automated algorithm, applicable to both swept-source OCT (SS-OCT) and spectral-domain OCT (SD-OCT) scans, for measuring outer retinal layer (ORL) thickness as a surrogate biomarker for outer retinal degeneration, e.g., photoreceptor disruption, to assess AMD progression. The algorithm was developed from a modified TransUNet model with clinically annotated retinal features manifested in the progression of AMD. It demonstrates high accuracy, with an intersection over union (IoU) of 0.9698 on the testing dataset for segmenting the ORL in both SS-OCT and SD-OCT datasets. The robustness and applicability of the algorithm are indicated by the strong correlation (r = 0.9551, P < 0.0001 in the central-fovea 3-mm circle, and r = 0.9442, P < 0.0001 in the 5-mm circle) and agreement (mean bias = 0.5440 µm in the 3-mm circle and 1.392 µm in the 5-mm circle) of ORL thickness measurements between SS-OCT and SD-OCT scans. Comparative analysis reveals significant differences (P < 0.0001) in ORL thickness among 80 normal eyes, 30 intermediate AMD eyes with reticular pseudodrusen, 49 intermediate AMD eyes with drusen, and 40 late AMD eyes with geographic atrophy, highlighting its potential as an independent biomarker for predicting AMD progression. The findings provide valuable insight into the ORL alterations associated with different stages of AMD and emphasize the potential of ORL thickness as a sensitive indicator of AMD severity and progression.
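Once a layer such as the ORL is segmented, a per-A-scan thickness is typically obtained by counting the segmented pixels along the axial (depth) direction and multiplying by the axial pixel spacing. A generic post-segmentation sketch, not the authors' code; the device-specific axial spacing is an assumed input:

```python
import numpy as np

def layer_thickness_um(layer_mask, axial_um_per_pixel):
    """Per-A-scan layer thickness in micrometers from a binary B-scan
    mask, where rows are depth samples and columns are A-scans."""
    return layer_mask.sum(axis=0) * axial_um_per_pixel
```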
Affiliation(s)
- Jie Lu
- Department of Bioengineering, University of Washington, Seattle, Washington, USA
- Yuxuan Cheng
- Department of Bioengineering, University of Washington, Seattle, Washington, USA
- Farhan E. Hiya
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Mengxi Shen
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Gissel Herrera
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Qinqin Zhang
- Research and Development, Carl Zeiss Meditec, Inc., Dublin, CA, USA
- Giovanni Gregori
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Philip J. Rosenfeld
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Ruikang K. Wang
- Department of Bioengineering, University of Washington, Seattle, Washington, USA
- Department of Ophthalmology, University of Washington, Seattle, Washington, USA
6
Hwang EE, Chen D, Han Y, Jia L, Shan J. Multi-Dataset Comparison of Vision Transformers and Convolutional Neural Networks for Detecting Glaucomatous Optic Neuropathy from Fundus Photographs. Bioengineering (Basel) 2023; 10:1266. [PMID: 38002390] [PMCID: PMC10669064] [DOI: 10.3390/bioengineering10111266]
Abstract
Glaucomatous optic neuropathy (GON) can be diagnosed and monitored using fundus photography, a widely available and low-cost approach already adopted for automated screening of ophthalmic diseases such as diabetic retinopathy. Despite this, the lack of validated early screening approaches remains a major obstacle in the prevention of glaucoma-related blindness. Deep learning models have gained significant interest as potential solutions, as they offer objective and high-throughput methods for processing image-based medical data. While convolutional neural networks (CNNs) have been widely utilized for these purposes, more recent advances in the application of Transformer architectures have led to new models, including the Vision Transformer (ViT), that have shown promise in many domains of image analysis. However, previous comparisons of these two architectures have not compared the models side by side on more than a single dataset, making it unclear which is more generalizable or performs better in different clinical contexts. Our purpose is to investigate comparable ViT and CNN models tasked with GON detection from fundus photographs and highlight their respective strengths and weaknesses. We train CNN and ViT models on six unrelated, publicly available databases and compare their performance using well-established statistics including AUC, sensitivity, and specificity. Our results indicate that ViT models often show superior performance compared with a similarly trained CNN model, particularly when non-glaucomatous images are over-represented in a given dataset. We discuss the clinical implications of these findings and suggest that ViT can further the development of accurate and scalable GON detection for this leading cause of irreversible blindness worldwide.
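Sensitivity and specificity, two of the statistics used in this comparison, follow directly from confusion-matrix counts. Their standard definitions are:

```python
def sensitivity_specificity(tp, fp, tn, fn):
    """Sensitivity (true-positive rate) = TP / (TP + FN);
    specificity (true-negative rate) = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)
```

In a screening context, sensitivity reflects how many glaucomatous eyes are caught, while specificity reflects how many healthy eyes are correctly passed over; the trade-off between the two is what the AUC summarizes across thresholds.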
Affiliation(s)
- Elizabeth E. Hwang
- Department of Ophthalmology, University of California, San Francisco, San Francisco, CA 94143, USA
- Medical Scientist Training Program, University of California, San Francisco, San Francisco, CA 94143, USA
- Dake Chen
- Department of Ophthalmology, University of California, San Francisco, San Francisco, CA 94143, USA
- Ying Han
- Department of Ophthalmology, University of California, San Francisco, San Francisco, CA 94143, USA
- Lin Jia
- Digillect LLC, San Francisco, CA 94158, USA
- Jing Shan
- Department of Ophthalmology, University of California, San Francisco, San Francisco, CA 94143, USA