1. Xu X, Du L, Yin D. Dual-branch feature fusion S3D V-Net network for lung nodules segmentation. J Appl Clin Med Phys 2024;25:e14331. [PMID: 38478388] [PMCID: PMC11163502] [DOI: 10.1002/acm2.14331]
Abstract
BACKGROUND Accurate segmentation of lung nodules helps physicians obtain more reliable measurements and protocols for early lung cancer diagnosis and treatment planning, so that patients can be detected and treated earlier and lung cancer mortality can be reduced. PURPOSE Improvements in lung nodule segmentation accuracy are currently limited by the heterogeneous appearance of nodules in the lungs, the imbalance between segmentation targets and background pixels, and other factors. We propose a new 2.5D lung nodule segmentation network model. The model improves the extraction of edge information of lung nodules and fuses intra-slice and inter-slice features, making good use of the three-dimensional structure of lung nodules and more effectively improving segmentation accuracy. METHODS Our approach builds on a typical encoder-decoder network structure. The improved model captures the features of nodules in both 3-D and 2-D CT images, complements the features of the segmentation target and enhances the texture features at the edges of pulmonary nodules through a dual-branch feature fusion module (DFFM) and a reverse attention context module (RACM), and employs central pooling instead of max pooling to preserve features around the target and eliminate edge-irrelevant features, further improving segmentation performance. RESULTS We evaluated the method on 1186 nodules from the LUNA16 dataset. Averaging the results of ten-fold cross-validation, the proposed method achieved a mean Dice similarity coefficient (mDSC) of 84.57%, a mean overlapping error (mOE) of 18.73%, and an average processing time of about 2.07 s per case. Moreover, compared with inter-radiologist agreement on the LUNA16 dataset, the average difference was 0.74%. CONCLUSION The experimental results show that our method improves the accuracy of pulmonary nodule segmentation while requiring less time than most 3-D segmentation methods.
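The mDSC and mOE reported above are overlap measures between a predicted mask and a reference mask. A minimal NumPy sketch of one common definition (illustrative only; not the cited work's implementation, where mOE may be defined slightly differently):

```python
import numpy as np

def dice_and_overlap_error(pred, gt):
    """Dice similarity coefficient and overlapping error for two binary masks.

    pred, gt: boolean arrays of identical shape (2-D slice or 3-D volume).
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dsc = 2.0 * intersection / (pred.sum() + gt.sum() + 1e-8)  # Dice coefficient
    oe = 1.0 - intersection / (union + 1e-8)                   # overlapping error = 1 - Jaccard
    return float(dsc), float(oe)

# Toy example: a 3x3 nodule mask shifted by one pixel against its reference
gt = np.zeros((5, 5), bool); gt[1:4, 1:4] = True
pred = np.zeros((5, 5), bool); pred[1:4, 2:5] = True
print(dice_and_overlap_error(pred, gt))  # ~ (0.667, 0.5)
```

The mean values quoted in the abstract would then be averages of these per-nodule scores over the test set.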
Affiliation(s)
- Xiaoru Xu
- School of Automation and Information Engineering, Sichuan University of Science and Engineering, Zigong, People's Republic of China
- Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science & Engineering, Zigong, People's Republic of China
- Lingyan Du
- School of Automation and Information Engineering, Sichuan University of Science and Engineering, Zigong, People's Republic of China
- Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science & Engineering, Zigong, People's Republic of China
- Dongsheng Yin
- School of Automation and Information Engineering, Sichuan University of Science and Engineering, Zigong, People's Republic of China
- Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science & Engineering, Zigong, People's Republic of China
2. Tadisetty S, Chodavarapu R, Jin R, Clements RJ, Yu M. Identifying the Edges of the Optic Cup and the Optic Disc in Glaucoma Patients by Segmentation. Sensors (Basel) 2023;23:4668. [PMID: 37430580] [PMCID: PMC10221430] [DOI: 10.3390/s23104668]
Abstract
With recent advancements in artificial intelligence, fundus diseases can be classified automatically for early diagnosis, and this has attracted the interest of many researchers. The study aims to detect the edges of the optic cup and the optic disc in fundus images taken from glaucoma patients, which has further applications in the analysis of the cup-to-disc ratio (CDR). We apply a modified U-Net model architecture to various fundus datasets and use segmentation metrics to evaluate the model. We apply edge detection and dilation to post-process the segmentation and better visualize the optic cup and optic disc. Our model results are based on the ORIGA, RIM-ONE v3, REFUGE, and Drishti-GS datasets. The results show that our methodology achieves promising segmentation performance for CDR analysis.
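The cup-to-disc ratio mentioned here is commonly taken as the ratio of the vertical extents of the segmented optic cup and optic disc. A minimal sketch under that assumption (an illustration, not the authors' pipeline):

```python
import numpy as np

def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio from binary segmentation masks (image rows = vertical axis)."""
    cup_rows = np.any(cup_mask.astype(bool), axis=1)    # rows touched by the cup
    disc_rows = np.any(disc_mask.astype(bool), axis=1)  # rows touched by the disc
    cup_height = cup_rows.sum()
    disc_height = disc_rows.sum()
    return float(cup_height) / float(disc_height) if disc_height else 0.0
```

A larger vertical CDR is one of the indicators clinicians use when screening for glaucomatous cupping, which is why edge quality of the cup and disc segmentations matters for this application.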
Affiliation(s)
- Srikanth Tadisetty
- Department of Computer Science, Kent State University, Kent, OH 44242, USA
- Ranjith Chodavarapu
- Department of Computer Science, Kent State University, Kent, OH 44242, USA
- Ruoming Jin
- Department of Computer Science, Kent State University, Kent, OH 44242, USA
- Robert J. Clements
- Department of Biological Sciences, Kent State University, Kent, OH 44242, USA
- Minzhong Yu
- Department of Ophthalmology, University Hospitals, Case Western Reserve University, Cleveland, OH 44106, USA
3. Haider A, Arsalan M, Park C, Sultan H, Park KR. Exploring deep feature-blending capabilities to assist glaucoma screening. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109918]
4. Biswas S, Khan MIA, Hossain MT, Biswas A, Nakai T, Rohdin J. Which Color Channel Is Better for Diagnosing Retinal Diseases Automatically in Color Fundus Photographs? Life (Basel) 2022;12:973. [PMID: 35888063] [PMCID: PMC9321111] [DOI: 10.3390/life12070973]
Abstract
Color fundus photographs are the most common type of image used for the automatic diagnosis of retinal diseases and abnormalities. Like all color photographs, these images contain information about three primary colors, i.e., red, green, and blue, in three separate color channels. This work aims to understand the impact of each channel on the automatic diagnosis of retinal diseases and abnormalities. To this end, existing works are surveyed extensively to explore which color channel is used most commonly for automatically detecting four leading causes of blindness and one retinal abnormality, along with segmenting three retinal landmarks. From this survey, it is clear that all channels together are typically used for neural network-based systems, whereas for non-neural network-based systems the green channel is most commonly used. However, no conclusion can be drawn from previous works regarding the importance of the different channels. Therefore, systematic experiments are conducted to analyse this. A well-known U-shaped deep neural network (U-Net) is used to investigate which color channel is best for segmenting one retinal abnormality and three retinal landmarks.
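Isolating a single color channel of a fundus photograph before segmentation is straightforward; a small illustrative sketch (assuming an RGB image loaded with Pillow and a hypothetical file name, not the authors' code) of how the green channel would be prepared as input for a single-channel U-Net:

```python
import numpy as np
from PIL import Image

def load_channel(path, channel="green"):
    """Load a fundus photograph and return one normalized color channel as an H x W x 1 array."""
    idx = {"red": 0, "green": 1, "blue": 2}[channel]
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    return rgb[..., idx:idx + 1]  # keep a trailing channel axis for the network input

# e.g., feed load_channel("fundus_001.png", "green") to a U-Net expecting one input channel
```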
Affiliation(s)
- Sangeeta Biswas
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Iqbal Aziz Khan
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Tanvir Hossain
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Angkan Biswas
- CAPM Company Limited, Bonani, Dhaka 1213, Bangladesh
- Takayoshi Nakai
- Faculty of Engineering, Shizuoka University, Hamamatsu 432-8561, Japan
- Johan Rohdin
- Faculty of Information Technology, Brno University of Technology, 61200 Brno, Czech Republic
5. Wu JH, Nishida T, Weinreb RN, Lin JW. Performances of Machine Learning in Detecting Glaucoma Using Fundus and Retinal Optical Coherence Tomography Images: A Meta-Analysis. Am J Ophthalmol 2022;237:1-12. [PMID: 34942113] [DOI: 10.1016/j.ajo.2021.12.008]
Abstract
PURPOSE To evaluate the performance of machine learning (ML) in detecting glaucoma using fundus and retinal optical coherence tomography (OCT) images. DESIGN Meta-analysis. METHODS PubMed and EMBASE were searched on August 11, 2021. A bivariate random-effects model was used to pool ML's diagnostic sensitivity, specificity, and area under the curve (AUC). Subgroup analyses were performed based on ML classifier categories and dataset types. RESULTS One hundred and five studies (3.3%) were retrieved. Seventy-three (69.5%), 30 (28.6%), and 2 (1.9%) studies tested ML using fundus, OCT, and both image types, respectively. The total numbers of testing data were 197,174 for fundus and 16,039 for OCT. Overall, ML showed excellent performance for both fundus (pooled sensitivity = 0.92 [95% CI, 0.91-0.93]; specificity = 0.93 [95% CI, 0.91-0.94]; AUC = 0.97 [95% CI, 0.95-0.98]) and OCT (pooled sensitivity = 0.90 [95% CI, 0.86-0.92]; specificity = 0.91 [95% CI, 0.89-0.92]; AUC = 0.96 [95% CI, 0.93-0.97]). For fundus images, ML performed similarly on all data and on external data, whereas the external test result for OCT was less robust (AUC = 0.87). When comparing classifier categories, although support vector machines showed the highest performance (pooled sensitivity, specificity, and AUC ranges of 0.92-0.96, 0.95-0.97, and 0.96-0.99, respectively), results from neural networks and other classifiers were still good (pooled sensitivity, specificity, and AUC ranges of 0.88-0.93, 0.90-0.93, and 0.95-0.97, respectively). When analyzed by dataset type, ML demonstrated consistent performance on clinical datasets (fundus AUC = 0.98 [95% CI, 0.97-0.99] and OCT AUC = 0.95 [95% CI, 0.93-0.97]). CONCLUSIONS The performance of ML in detecting glaucoma compares favorably to that of experts and is promising for clinical application. Future prospective studies are needed to better evaluate its real-world utility.
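Pooling per-study accuracy in a meta-analysis like this is typically done on the logit scale. The cited study uses a bivariate random-effects model that jointly models sensitivity and specificity; the simplified univariate DerSimonian-Laird sketch below (an illustration under that simplification, with made-up toy counts, not the authors' statistical code) conveys the basic idea for sensitivity alone:

```python
import numpy as np

def pooled_sensitivity(tp, fn):
    """Univariate DerSimonian-Laird pooling of per-study sensitivities on the logit scale.

    tp, fn: arrays of true-positive and false-negative counts, one entry per study.
    """
    tp = np.asarray(tp, float)
    fn = np.asarray(fn, float)
    y = np.log(tp / fn)                 # logit of sensitivity, since sens = tp / (tp + fn)
    v = 1.0 / tp + 1.0 / fn             # approximate variance of the logit
    w = 1.0 / v                         # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)  # Cochran's Q heterogeneity statistic
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)           # random-effects weights
    pooled_logit = np.sum(w_star * y) / np.sum(w_star)
    return 1.0 / (1.0 + np.exp(-pooled_logit))

print(pooled_sensitivity([90, 180, 45], [10, 15, 5]))  # pooled sensitivity of three toy studies
```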
6. Ni Y, Xie Z, Zheng D, Yang Y, Wang W. Two-stage multitask U-Net construction for pulmonary nodule segmentation and malignancy risk prediction. Quant Imaging Med Surg 2022;12:292-309. [PMID: 34993079] [PMCID: PMC8666775] [DOI: 10.21037/qims-21-19]
Abstract
BACKGROUND Accurate segmentation of pulmonary nodules is important for image-driven nodule analysis and nodule malignancy risk prediction. However, because of interobserver variability in manual segmentation, an accurate and robust automatic segmentation method has become essential. The aim of the present study was therefore to construct an accurate segmentation and malignancy risk prediction algorithm for pulmonary nodules. METHODS We proposed a coarse-to-fine two-stage framework consisting of two convolutional neural networks: a 3D multiscale U-Net used for localization and a 2.5D multiscale separable U-Net (MSU-Net) used for segmentation refinement. A multitask framework was proposed for nodule malignancy risk prediction, in which features from the encoding and decoding paths of MSU-Net were integrated for pathology or morphology characteristic classification. RESULTS Experimental results showed that our method achieved state-of-the-art results on the Lung Image Database Consortium and Image Database Resource Initiative dataset, with a Dice similarity coefficient (DSC) of 83.04% and an overlapping error of 27.47%. Our method achieved an accuracy of 77.8% and an area under the receiver-operating characteristic curve of 84.3% for malignancy risk prediction. Moreover, we compared our method with inter-radiologist agreement, and the average DSC difference was only 0.39%. CONCLUSIONS The results showed the effectiveness of the multitask end-to-end framework. The coarse-to-fine 2.5D strategy increased the accuracy and efficiency of pulmonary nodule segmentation and malignancy risk prediction in the computer-aided diagnosis system. In clinical practice, doctors can obtain accurate morphological characteristics and quantitative information about nodules using the proposed method, so as to make future treatment plans.
Affiliation(s)
- Yangfan Ni
- Laboratory for Medical Imaging Informatics, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai, China
- Laboratory for Medical Imaging Informatics, University of Chinese Academy of Sciences, Beijing, China
- Zhe Xie
- Laboratory for Medical Imaging Informatics, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai, China
- Laboratory for Medical Imaging Informatics, University of Chinese Academy of Sciences, Beijing, China
- Dezhong Zheng
- Laboratory for Medical Imaging Informatics, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai, China
- Laboratory for Medical Imaging Informatics, University of Chinese Academy of Sciences, Beijing, China
- Yuanyuan Yang
- Laboratory for Medical Imaging Informatics, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai, China
- Weidong Wang
- Biological Engineering Research Center, The General Hospital of the People's Liberation Army, Beijing, China
7. Yuan X, Zhou L, Yu S, Li M, Wang X, Zheng X. A multi-scale convolutional neural network with context for joint segmentation of optic disc and cup. Artif Intell Med 2021;113:102035. [PMID: 33685591] [DOI: 10.1016/j.artmed.2021.102035]
Abstract
Glaucoma is the leading cause of irreversible blindness. For glaucoma screening, the cup-to-disc ratio (CDR) is a significant indicator, whose calculation relies on the segmentation of the optic disc (OD) and optic cup (OC) in color fundus images. This study proposes a residual multi-scale convolutional neural network with a context semantic extraction module to jointly segment the OD and OC. The proposed method uses a W-shaped backbone network, including an image pyramid multi-scale input with side output layers as early classifiers to generate local prediction outputs. It also includes a context extraction module that extracts contextual semantic information from multiple receptive field sizes and adaptively recalibrates channel-wise feature responses, which effectively extracts global information and reduces the semantic gap when fusing deep and shallow semantic information. We validated the proposed method on four datasets, including DRISHTI-GS1, REFUGE, RIM-ONE r3, and a private dataset. The overlap errors are 0.0540, 0.0684, 0.0492, and 0.0511 in OC segmentation and 0.2332, 0.1777, 0.2372, and 0.2547 in OD segmentation, respectively. Experimental results indicate that the proposed method can estimate the CDR for large-scale glaucoma screening.
Affiliation(s)
- Xin Yuan
- College of Electrical Engineering, Sichuan University, Chengdu, Sichuan, China
- Lingxiao Zhou
- Department of Ophthalmology, First Affiliated Hospital of Xi'an Medical University, Xi'an, Shaanxi, China
- Shuyang Yu
- College of Electrical Engineering, Sichuan University, Chengdu, Sichuan, China
- Miao Li
- College of Electrical Engineering, Sichuan University, Chengdu, Sichuan, China
- Xiang Wang
- Department of Ophthalmology, Shandong First Medical University & Shandong Academy of Medical Sciences, Tai'an, Shandong, China
- Xiujuan Zheng
- College of Electrical Engineering, Sichuan University, Chengdu, Sichuan, China
8. Jiang Y, Pan J, Yuan M, Shen Y, Zhu J, Wang Y, Li Y, Zhang K, Yu Q, Xie H, Li H, Wang X, Luo Y. Segmentation of Laser Marks of Diabetic Retinopathy in the Fundus Photographs Using Lightweight U-Net. J Diabetes Res 2021;2021:8766517. [PMID: 34712739] [PMCID: PMC8548126] [DOI: 10.1155/2021/8766517]
Abstract
Diabetic retinopathy (DR) is a prevalent vision-threatening disease worldwide. Laser marks are the scars left after panretinal photocoagulation, a treatment to prevent patients with severe DR from losing vision. In this study, we develop a deep learning algorithm based on a lightweight U-Net to segment laser marks from color fundus photos, which could help indicate the disease stage or provide valuable auxiliary information for the care of DR patients. We prepared our training and testing data, manually annotated by trained and experienced graders from the Image Reading Center, Zhongshan Ophthalmic Center, and made them publicly available to fill the gap in public image datasets dedicated to the segmentation of laser marks. The lightweight U-Net, along with two postprocessing procedures, achieved an AUC of 0.9824, an optimal sensitivity of 94.16%, and an optimal specificity of 92.82% for the segmentation of laser marks in fundus photographs. With accurate segmentation and high numeric metrics, the lightweight U-Net showed reliable performance in automatically segmenting laser marks in fundus photographs, which could help AI systems assist in the diagnosis of severe-stage DR.
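An "optimal" sensitivity/specificity pair like the one reported here is commonly read off the ROC curve at the threshold maximizing Youden's J (sensitivity + specificity - 1). A brief scikit-learn sketch of that convention (illustrative; the authors may have chosen the operating point differently):

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def optimal_operating_point(y_true, y_score):
    """Return (AUC, sensitivity, specificity) at the threshold maximizing Youden's J."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    j = tpr - fpr                  # Youden's J = sensitivity + specificity - 1
    best = int(np.argmax(j))
    return roc_auc_score(y_true, y_score), float(tpr[best]), float(1.0 - fpr[best])
```

For a segmentation task, y_true and y_score would be the flattened pixel-level labels and predicted probabilities.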
Affiliation(s)
- Yukang Jiang
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Jianying Pan
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
- Ming Yuan
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Yanhe Shen
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Jin Zhu
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Yishen Wang
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
- Yewei Li
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Ke Zhang
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Qingyun Yu
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Huirui Xie
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
- Huiting Li
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
- Xueqin Wang
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Department of Statistics and Finance, School of Management, University of Science and Technology of China, Hefei, Anhui 230026, China
- Xinhua College, Sun Yat-Sen University, Guangzhou 510520, China
- Yan Luo
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
9. Hasan MK, Alam MA, Elahi MTE, Roy S, Martí R. DRNet: Segmentation and localization of optic disc and Fovea from diabetic retinopathy image. Artif Intell Med 2020;111:102001. [PMID: 33461693] [DOI: 10.1016/j.artmed.2020.102001]
Abstract
BACKGROUND AND OBJECTIVE In modern ophthalmology, automated Computer-aided Screening Tools (CSTs) are crucial non-intrusive diagnosis methods, in which accurate segmentation of the Optic Disc (OD) and localization of the OD and Fovea centers are substantial integral parts. However, designing such an automated tool remains challenging because of small dataset sizes, inconsistency in the spatial, texture, and shape information of the OD and Fovea, and the presence of different artifacts. METHODS This article proposes an end-to-end encoder-decoder network, named DRNet, for the segmentation and localization of the OD and Fovea centers. In DRNet, we propose a skip connection, named the residual skip connection, to compensate for the spatial information lost due to pooling in the encoder. Unlike the skip connection in the original U-Net, the proposed skip connection does not directly concatenate low-level feature maps from the encoder's early layers with the corresponding same-scale decoder. We validate DRNet using different publicly available datasets, such as IDRiD, RIMONE, DRISHTI-GS, and DRIVE for OD segmentation; IDRiD and HRF for OD center localization; and IDRiD for Fovea center localization. RESULTS For OD segmentation, the proposed DRNet achieves a mean Intersection over Union (mIoU) of 0.845, 0.901, 0.933, and 0.920 for IDRiD, RIMONE, DRISHTI-GS, and DRIVE, respectively. Our OD segmentation result, in terms of mIoU, outperforms the state-of-the-art results for the IDRiD and DRIVE datasets, and it outperforms state-of-the-art results in terms of mean sensitivity for the RIMONE and DRISHTI-GS datasets. DRNet localizes the OD center with a mean Euclidean Distance (mED) of 20.23 and 13.34 pixels for the IDRiD and HRF datasets, respectively, outperforming the state-of-the-art by 4.62 pixels for the IDRiD dataset. DRNet also localizes the Fovea center with an mED of 41.87 pixels for the IDRiD dataset, outperforming the state-of-the-art by 1.59 pixels on the same dataset. CONCLUSION As the proposed DRNet exhibits excellent performance even with limited training data and without intermediate intervention, it can be employed to design a better CST system to screen retinal images. Our source code, trained models, and ground-truth heatmaps for OD and Fovea center localization will be made publicly available on GitHub upon publication.
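The two metric families reported here, mean Intersection over Union for segmentation and mean Euclidean distance for center localization, can be written compactly. A small NumPy sketch of the per-image quantities (illustrative only, not the released DRNet code):

```python
import numpy as np

def iou(pred, gt):
    """Intersection over Union (Jaccard index) for two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union else 1.0

def center_distance(pred_center, gt_center):
    """Euclidean distance in pixels between predicted and reference (row, col) centers."""
    return float(np.linalg.norm(np.asarray(pred_center, float) - np.asarray(gt_center, float)))

# mIoU and mED are the means of these per-image values over a test set.
```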
Affiliation(s)
- Md Kamrul Hasan
- Department of Electrical and Electronic Engineering, Khulna University of Engineering & Technology, Khulna 9203, Bangladesh
- Md Ashraful Alam
- Department of Electrical and Electronic Engineering, Khulna University of Engineering & Technology, Khulna 9203, Bangladesh
- Md Toufick E Elahi
- Department of Electrical and Electronic Engineering, Khulna University of Engineering & Technology, Khulna 9203, Bangladesh
- Shidhartho Roy
- Department of Electrical and Electronic Engineering, Khulna University of Engineering & Technology, Khulna 9203, Bangladesh
- Robert Martí
- Computer Vision and Robotics Institute, University of Girona, Spain
10. Zhang Y, Wang N, Liu H. Applications of Artificial Intelligence in the Screening of Glaucoma in China. J Med Syst 2020;44:124. [PMID: 32462430] [DOI: 10.1007/s10916-020-01590-x]
Affiliation(s)
- Yue Zhang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Ophthalmology and Visual Sciences, Beijing Tongren Hospital, Capital Medical University, Beijing, 100730, China
- Ningli Wang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Ophthalmology and Visual Sciences, Beijing Tongren Hospital, Capital Medical University, Beijing, 100730, China
- Hanruo Liu
- Beijing Tongren Eye Center, Beijing Key Laboratory of Ophthalmology and Visual Sciences, Beijing Tongren Hospital, Capital Medical University, Beijing, 100730, China