1
Huang L, Zhang N, Yi Y, Zhou W, Zhou B, Dai J, Wang J. SAMCF: Adaptive global style alignment and multi-color spaces fusion for joint optic cup and disc segmentation. Comput Biol Med 2024; 178:108639. PMID: 38878394. DOI: 10.1016/j.compbiomed.2024.108639.
Abstract
The optic cup (OC) and optic disc (OD) are two critical structures in retinal fundus images, and their relative positions and sizes are essential for effectively diagnosing eye diseases. With the success of deep learning in computer vision, deep learning-based segmentation models have been widely used for joint optic cup and disc segmentation. However, three prominent issues impact segmentation performance. First, significant differences among datasets collected from various institutions, protocols, and devices lead to performance degradation. Second, images with only RGB information struggle to counteract the interference caused by brightness variations, which affects color representation capability. Finally, existing methods typically ignore edge perception and therefore struggle to obtain clear and smooth edge segmentation results. To address these drawbacks, we propose a novel framework based on Style Alignment and Multi-Color Fusion (SAMCF) for joint OC and OD segmentation. Initially, we introduce a domain generalization method that generates uniformly styled images without damaging image content, mitigating domain shift. Next, based on multiple color spaces, we propose a feature extraction and fusion network to handle brightness variation interference and improve color representation capability. Lastly, an edge-aware loss is designed to produce fine edge segmentation results. Our experiments on three public datasets, DGS, RIM, and REFUGE, demonstrate that the proposed SAMCF outperforms existing state-of-the-art methods. Moreover, SAMCF exhibits remarkable generalization ability across multiple retinal fundus image datasets.
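The brightness-robustness argument above rests on how alternative color spaces decouple intensity from chromaticity. As an illustrative sketch (a per-pixel toy example, not the paper's SAMCF pipeline), HSV confines brightness to the single V channel, so uniformly darkening a pixel changes V while leaving hue and saturation essentially unchanged:

```python
import colorsys

def rgb_pixel_to_hsv(r, g, b):
    """Convert one 8-bit RGB pixel to HSV, where V isolates brightness."""
    return colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)

# The same color at two brightness levels: H and S match, only V differs.
h1, s1, v1 = rgb_pixel_to_hsv(200, 100, 100)
h2, s2, v2 = rgb_pixel_to_hsv(100, 50, 50)
print(round(h1, 3) == round(h2, 3),
      round(s1, 3) == round(s2, 3),
      round(v1, 3) == round(v2, 3))  # → True True False
```

A model that sees H and S alongside RGB therefore gets features that are comparatively stable under illumination changes, which is the intuition behind fusing multiple color spaces.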
Affiliation(s)
- Longjun Huang
- School of Software, Nanchang Key Laboratory for Blindness and Visual Impairment Prevention Technology and Equipment, Jiangxi Normal University, Nanchang, 330022, China
- Ningyi Zhang
- School of Software, Nanchang Key Laboratory for Blindness and Visual Impairment Prevention Technology and Equipment, Jiangxi Normal University, Nanchang, 330022, China
- Yugen Yi
- School of Software, Nanchang Key Laboratory for Blindness and Visual Impairment Prevention Technology and Equipment, Jiangxi Normal University, Nanchang, 330022, China
- Wei Zhou
- College of Computer Science, Shenyang Aerospace University, Shenyang, 110136, China
- Bin Zhou
- School of Software, Nanchang Key Laboratory for Blindness and Visual Impairment Prevention Technology and Equipment, Jiangxi Normal University, Nanchang, 330022, China
- Jiangyan Dai
- School of Computer Engineering, Weifang University, 261061, China
- Jianzhong Wang
- College of Information Science and Technology, Northeast Normal University, Changchun, 130117, China
2
Huang X, Islam MR, Akter S, Ahmed F, Kazami E, Serhan HA, Abd-Alrazaq A, Yousefi S. Artificial intelligence in glaucoma: opportunities, challenges, and future directions. Biomed Eng Online 2023; 22:126. PMID: 38102597. PMCID: PMC10725017. DOI: 10.1186/s12938-023-01187-8.
Abstract
Artificial intelligence (AI) has shown excellent diagnostic performance in detecting complex problems across many areas of healthcare, including ophthalmology. AI diagnostic systems developed from fundus images have become state-of-the-art tools for diagnosing retinal conditions and glaucoma, as well as other ocular diseases. However, designing and implementing AI models using large imaging datasets is challenging. In this study, we review machine learning (ML) and deep learning (DL) techniques applied to multiple modalities of retinal data, such as fundus images and visual fields, for glaucoma detection, progression assessment, and staging. We summarize findings and provide several taxonomies to help the reader understand the evolution of conventional and emerging AI models in glaucoma. We discuss the opportunities and challenges facing AI applications in glaucoma and highlight key themes from the existing literature that may guide future studies. Our goal in this systematic review is to help readers and researchers understand critical aspects of AI related to glaucoma, and to determine the steps and requirements necessary for the successful development of AI models in glaucoma.
Affiliation(s)
- Xiaoqin Huang
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, USA
- Md Rafiqul Islam
- Business Information Systems, Australian Institute of Higher Education, Sydney, Australia
- Shanjita Akter
- School of Computer Science, Taylors University, Subang Jaya, Malaysia
- Fuad Ahmed
- Department of Computer Science & Engineering, Islamic University of Technology (IUT), Gazipur, Bangladesh
- Ehsan Kazami
- Ophthalmology, General Hospital of Mahabad, Urmia University of Medical Sciences, Urmia, Iran
- Hashem Abu Serhan
- Department of Ophthalmology, Hamad Medical Corporations, Doha, Qatar
- Alaa Abd-Alrazaq
- AI Center for Precision Health, Weill Cornell Medicine-Qatar, Doha, Qatar
- Siamak Yousefi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, USA
- Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, USA
3
Xu C, Chen Z, Zhang X, Peng Y, Tan Z, Fan Y, Liao X, Chen H, Shen J, Chen X. Accurate C/D ratio estimation with elliptical fitting for OCT image based on joint segmentation and detection network. Comput Biol Med 2023; 160:106903. PMID: 37146494. DOI: 10.1016/j.compbiomed.2023.106903.
Abstract
Proper estimation of the cup-to-disc ratio (C/D ratio) plays a significant role in ophthalmic examinations, and improving the efficiency of automatic C/D ratio measurement is an urgent need. We therefore propose a new method for measuring the C/D ratio in OCT images of normal subjects. First, an end-to-end deep convolutional network segments the inner limiting membrane (ILM) and detects the two Bruch's membrane opening (BMO) terminations. Then, an ellipse fitting technique is introduced to post-process the edge of the optic disc. Finally, the proposed method is evaluated on 41 normal subjects using the optic-disc-area scanning mode of three machines: BV1000, Topcon 3D OCT-1, and Nidek ARK-1. In addition, pairwise correlation analyses compare the BV1000 C/D ratio measurements against those of existing commercial OCT machines as well as other state-of-the-art methods. The correlation coefficient between the C/D ratio calculated by BV1000 and that obtained from manual annotation is 0.84, indicating a strong correlation with the annotations of ophthalmologists. Moreover, in practical screening among normal subjects, the proportion of C/D ratios below 0.6 calculated by BV1000 is 96.34%, the closest to clinical statistics among the three OCT machines. These results show that the proposed method performs well in cup and disc detection and C/D ratio measurement; compared with existing commercial OCT equipment, its measurements are close to reality, giving it clinical application value.
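Once cup and disc boundaries are segmented, the measurement the paper automates reduces to a ratio of extents; the vertical C/D ratio is simply the cup's vertical span divided by the disc's. A minimal sketch of that final step (hypothetical helper names, not the authors' implementation, which fits ellipses to the boundaries first):

```python
def vertical_extent(points):
    """Vertical span (max y minus min y) of a boundary point set."""
    ys = [y for _, y in points]
    return max(ys) - min(ys)

def vertical_cd_ratio(cup_points, disc_points):
    """Vertical cup-to-disc ratio from two segmented boundaries.

    A ratio above roughly 0.6 is a common glaucoma screening flag,
    matching the threshold used in the study's comparison.
    """
    return vertical_extent(cup_points) / vertical_extent(disc_points)

# Toy boundaries: a cup spanning 40 px inside a disc spanning 100 px.
cup = [(50, 30), (60, 70), (40, 50)]
disc = [(10, 0), (90, 100), (50, 40)]
print(round(vertical_cd_ratio(cup, disc), 2))  # → 0.4
```

Fitting an ellipse before taking extents, as the paper does, makes the measurement robust to small segmentation noise along the boundary.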
Affiliation(s)
- Chenan Xu
- State Key Laboratory of Radiation Medicine and Protection, Collaborative Innovation Center of Radiological Medicine of Jiangsu Higher Education Institutions, and School for Radiological and Interdisciplinary Sciences (RAD-X), Soochow University, Suzhou, 215006, China
- Zhongyue Chen
- School of Electronics and Information Engineering and Medical Image Processing, Analysis and Visualization Lab, Soochow University, Suzhou, Jiangsu Province, 215006, China
- Xiao Zhang
- School of Electronics and Information Engineering and Medical Image Processing, Analysis and Visualization Lab, Soochow University, Suzhou, Jiangsu Province, 215006, China
- Yuanyuan Peng
- School of Electronics and Information Engineering and Medical Image Processing, Analysis and Visualization Lab, Soochow University, Suzhou, Jiangsu Province, 215006, China
- Zhiwei Tan
- School of Electronics and Information Engineering and Medical Image Processing, Analysis and Visualization Lab, Soochow University, Suzhou, Jiangsu Province, 215006, China
- Yu Fan
- Bigvision Medical Technology Co., Ltd., Suzhou, Jiangsu Province, 215006, China
- Xulong Liao
- Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, Shantou, Guangdong Province, 515041, China
- Haoyu Chen
- Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, Shantou, Guangdong Province, 515041, China
- Jiayan Shen
- School of Electronics and Information Engineering and Medical Image Processing, Analysis and Visualization Lab, Soochow University, Suzhou, Jiangsu Province, 215006, China
- Xinjian Chen
- State Key Laboratory of Radiation Medicine and Protection, Collaborative Innovation Center of Radiological Medicine of Jiangsu Higher Education Institutions, and School for Radiological and Interdisciplinary Sciences (RAD-X), Soochow University, Suzhou, 215006, China; School of Electronics and Information Engineering and Medical Image Processing, Analysis and Visualization Lab, Soochow University, Suzhou, Jiangsu Province, 215006, China
4
Septiarini A, Hamdani H, Setyaningsih E, Junirianto E, Utaminingrum F. Automatic Method for Optic Disc Segmentation Using Deep Learning on Retinal Fundus Images. Healthc Inform Res 2023; 29:145-151. PMID: 37190738. DOI: 10.4258/hir.2023.29.2.145.
Abstract
OBJECTIVES The optic disc is part of the retinal fundus image structure and influences the extraction of glaucoma features. This study proposes a method that automatically segments the optic disc area in retinal fundus images using deep learning based on a convolutional neural network (CNN). METHODS This study used private and public datasets containing retinal fundus images. The private dataset consisted of 350 images, while the public dataset was the Retinal Fundus Glaucoma Challenge (REFUGE). The proposed method used a CNN with a single-shot multibox detector (MobileNetV2) to form region-of-interest (ROI) images from the original image resized to 640 × 640 input data. A pre-processing sequence was then implemented, including augmentation, resizing, and normalization. A U-Net model was subsequently applied for optic disc segmentation with 128 × 128 input data. RESULTS The proposed method performed well on the datasets used, as shown by F1-score, Dice score, and intersection-over-union values of 0.9880, 0.9852, and 0.9763 for the private dataset, and 0.9854, 0.9838, and 0.9712 for the REFUGE dataset, respectively. CONCLUSIONS The optic disc area produced by the proposed method was similar to that identified by an ophthalmologist. Therefore, this method can be considered for implementing automatic segmentation of the optic disc area.
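The Dice score and intersection over union (IoU) reported above are standard overlap metrics on binary masks, related by Dice = 2·IoU/(1+IoU). A minimal sketch of how they are computed (toy masks, not the authors' evaluation code):

```python
def dice_iou(pred, truth):
    """Dice coefficient and IoU for binary masks given as
    flat 0/1 sequences of equal length."""
    inter = sum(p & t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    dice = 2 * inter / (p_sum + t_sum)
    iou = inter / (p_sum + t_sum - inter)
    return dice, iou

# Toy flattened masks: 2 of 4 foreground pixels overlap.
pred  = [1, 1, 1, 0, 0, 0]
truth = [1, 1, 0, 0, 0, 1]
d, i = dice_iou(pred, truth)
print(round(d, 2), round(i, 2))  # → 0.67 0.5
```

For binary segmentation with a single positive class, the Dice score coincides with the F1-score, which is why the paper's F1 and Dice values track each other so closely.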
Affiliation(s)
- Anindita Septiarini
- Department of Informatics, Faculty of Engineering, Mulawarman University, Samarinda, Indonesia
- Hamdani Hamdani
- Department of Informatics, Faculty of Engineering, Mulawarman University, Samarinda, Indonesia
- Emy Setyaningsih
- Department of Computer System Engineering, Institut Sains & Teknologi AKPRIND, Yogyakarta, Indonesia
- Eko Junirianto
- Department of Information Technology, Samarinda Polytechnic of Agriculture, Samarinda, Indonesia
- Fitri Utaminingrum
- Computer Vision Research Group, Faculty of Computer Science, Brawijaya University, Malang, Indonesia
5
Zhang F, Zheng Y, Wu J, Yang X, Che X. Multi-rater label fusion based on an information bottleneck for fundus image segmentation. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104108.
6
Dubey S, Dixit M. Recent developments on computer aided systems for diagnosis of diabetic retinopathy: a review. Multimed Tools Appl 2022; 82:14471-14525. PMID: 36185322. PMCID: PMC9510498. DOI: 10.1007/s11042-022-13841-9.
Abstract
Diabetes is a long-term condition in which the pancreas stops producing insulin or the body's insulin is not utilised properly. One of its complications is diabetic retinopathy (DR), the most prevalent eye disease in diabetes; if it remains unaddressed, it can affect all diabetics, become very serious, and raise the chances of blindness. It is a chronic systemic condition that affects up to 80% of patients who have had diabetes for more than ten years. Many researchers believe that if diabetic individuals are diagnosed early enough, they can be rescued from the condition in 90% of cases. Diabetes damages the capillaries, the microscopic blood vessels in the retina, and blood vessel damage is usually visible in images. Therefore, this study reviews both traditional and deep learning-based approaches for the detection and classification of diabetic retinopathy, and describes the advantages of each approach over the others. Along with the approaches, the datasets and evaluation metrics useful for DR detection and classification are also discussed. The main aim of this study is to make researchers aware of the challenges that arise when detecting diabetic retinopathy with computer vision and deep learning techniques. This review therefore sums up the major aspects of DR detection: lesion identification, classification and segmentation, security attacks on deep learning models, categorization of datasets, and evaluation metrics. As deep learning models are expensive to build and prone to security attacks, future work should aim for refined, reliable, and robust models that overcome these weaknesses.
Affiliation(s)
- Shradha Dubey
- Madhav Institute of Technology & Science (Department of Computer Science and Engineering), Gwalior, M.P., India
- Manish Dixit
- Madhav Institute of Technology & Science (Department of Computer Science and Engineering), Gwalior, M.P., India
7
Zhou Q, Guo J, Chen Z, Chen W, Deng C, Yu T, Li F, Yan X, Hu T, Wang L, Rong Y, Ding M, Wang J, Zhang X. Deep learning-based classification of the anterior chamber angle in glaucoma gonioscopy. Biomed Opt Express 2022; 13:4668-4683. PMID: 36187252. PMCID: PMC9484423. DOI: 10.1364/boe.465286.
Abstract
In the proposed network, features are first extracted from gonioscopically obtained anterior segment photographs using a densely-connected high-resolution network. The useful information is then strengthened with a hybrid attention module to improve classification accuracy. Between October 30, 2020, and January 30, 2021, a total of 146 participants underwent glaucoma screening, and 1780 original images of the anterior chamber angle (ACA) were obtained with a gonioscope and slit lamp microscope. After data augmentation, 4457 images were used for training and validation of the HahrNet, and 497 images were used to evaluate the algorithm. Experimental results demonstrate that the proposed HahrNet achieves a good performance of 96.2% accuracy, 99.0% specificity, 96.4% sensitivity, and 0.996 area under the curve (AUC) in classifying the ACA test dataset. Compared with several deep learning-based classification methods and nine human readers of different levels, HahrNet achieves better or more competitive performance in terms of accuracy, specificity, and sensitivity. The proposed ACA classification method will provide an automatic and accurate technology for the grading of glaucoma.
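The accuracy, sensitivity, and specificity figures above all derive from the test set's confusion-matrix counts. A minimal sketch of those definitions (toy counts, not the study's data):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall), and specificity from
    binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

# Toy counts for a binary angle-closure screening decision.
acc, sens, spec = classification_metrics(tp=90, fp=5, tn=95, fn=10)
print(acc, sens, spec)  # → 0.925 0.9 0.95
```

Reporting sensitivity and specificity alongside accuracy matters in screening: a high-prevalence or imbalanced test set can yield high accuracy even when one class is poorly detected.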
Affiliation(s)
- Quan Zhou
- Department of Biomedical Engineering, College of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan 430074, China
- These authors contributed equally to this work
- Jingmin Guo
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- These authors contributed equally to this work
- Zhiqi Chen
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- Wei Chen
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- Chaohua Deng
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- Tian Yu
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- Fei Li
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- Xiaoqin Yan
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- Tian Hu
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- Linhao Wang
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- Yan Rong
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- Mingyue Ding
- Department of Biomedical Engineering, College of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan, 430074, China
- Junming Wang
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- Xuming Zhang
- Department of Biomedical Engineering, College of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan, 430074, China
8
Xiong H, Liu S, Sharan RV, Coiera E, Berkovsky S. Weak label based Bayesian U-Net for optic disc segmentation in fundus images. Artif Intell Med 2022; 126:102261. DOI: 10.1016/j.artmed.2022.102261.