1
Dadzie AK, Iddir SP, Abtahi M, Ebrahimi B, Le D, Ganesh S, Son T, Heiferman MJ, Yao X. Colour fusion effect on deep learning classification of uveal melanoma. Eye (Lond) 2024. PMID: 38773261. DOI: 10.1038/s41433-024-03148-4.
Abstract
BACKGROUND Reliable differentiation of uveal melanoma (UM) and choroidal nevi is crucial to guide appropriate treatment, preventing unnecessary procedures for benign lesions and ensuring timely treatment for potentially malignant cases. The purpose of this study is to validate deep learning classification of uveal melanoma and choroidal nevi, and to evaluate the effect of colour fusion options on the classification performance. METHODS A total of 798 ultra-widefield retinal images of 438 patients were included in this retrospective study, comprising 157 patients diagnosed with UM and 281 patients diagnosed with choroidal naevus. Colour fusion options, including early fusion, intermediate fusion and late fusion, were tested for deep learning image classification with a convolutional neural network (CNN). F1-score, accuracy and the area under the curve (AUC) of a receiver operating characteristic (ROC) were used to evaluate the classification performance. RESULTS Colour fusion options were observed to affect the deep learning performance significantly. For single-colour learning, the red channel showed superior performance compared to the green and blue channels. For multi-colour learning, intermediate fusion performed better than the early and late fusion options. CONCLUSION Deep learning is a promising approach for automated classification of uveal melanoma and choroidal nevi. Colour fusion options can significantly affect the classification performance.
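The three colour-fusion options named in this abstract can be illustrated with a minimal pure-Python sketch. This is not the authors' code (they trained a CNN on ultra-widefield images); the `branch`, `head`, and `model` functions below are hypothetical stand-ins for network components, used only to show where in the pipeline each fusion option merges the channels.

```python
def early_fusion(r, g, b, model):
    # Early fusion: channel data are merged into a single input before any
    # learning happens; one model sees the combined signal.
    return model(r + g + b)  # list concatenation, i.e. one stacked input

def intermediate_fusion(r, g, b, branch, head):
    # Intermediate fusion: each channel is encoded by its own branch, and the
    # mid-level features are merged before a shared classification head.
    features = [f for channel in (r, g, b) for f in branch(channel)]
    return head(features)

def late_fusion(r, g, b, model):
    # Late fusion: each channel yields its own prediction; scores are averaged.
    scores = [model(channel) for channel in (r, g, b)]
    return sum(scores) / len(scores)

# Toy stand-ins for CNN components: the "feature" of a channel is its mean
# intensity, and the "score" is also the mean intensity.
def branch(channel):
    return [sum(channel) / len(channel)]

def head(features):
    return sum(features) / len(features)

def model(channel):
    return sum(channel) / len(channel)

r, g, b = [0.9, 0.8], [0.4, 0.5], [0.1, 0.2]
print(round(late_fusion(r, g, b, model), 4))  # 0.4833
```

With real networks the trade-off the study measures is where the merge happens: early fusion shares all weights across channels, late fusion shares none, and intermediate fusion shares only the head.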
Affiliation(s)
- Albert K Dadzie: Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, 60607, USA
- Sabrina P Iddir: Department of Ophthalmology and Visual Sciences, University of Illinois Chicago, Chicago, IL, 60612, USA
- Mansour Abtahi: Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, 60607, USA
- Behrouz Ebrahimi: Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, 60607, USA
- David Le: Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, 60607, USA
- Sanjay Ganesh: Department of Ophthalmology and Visual Sciences, University of Illinois Chicago, Chicago, IL, 60612, USA
- Taeyoon Son: Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, 60607, USA
- Michael J Heiferman: Department of Ophthalmology and Visual Sciences, University of Illinois Chicago, Chicago, IL, 60612, USA
- Xincheng Yao: Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, 60607, USA; Department of Ophthalmology and Visual Sciences, University of Illinois Chicago, Chicago, IL, 60612, USA
2
Wang Y, Zhen L, Tan TE, Fu H, Feng Y, Wang Z, Xu X, Goh RSM, Ng Y, Calhoun C, Tan GSW, Sun JK, Liu Y, Ting DSW. Geometric Correspondence-Based Multimodal Learning for Ophthalmic Image Analysis. IEEE Transactions on Medical Imaging 2024; 43:1945-1957. PMID: 38206778. DOI: 10.1109/tmi.2024.3352602.
Abstract
Color fundus photography (CFP) and optical coherence tomography (OCT) are two of the most widely used imaging modalities in the clinical diagnosis and management of retinal diseases. Despite the widespread use of multimodal imaging in clinical practice, few methods for automated diagnosis of eye diseases effectively utilize the correlated and complementary information from multiple modalities. This paper explores how to leverage the information from CFP and OCT images to improve the automated diagnosis of retinal diseases. We propose a novel multimodal learning method, named geometric correspondence-based multimodal learning network (GeCoM-Net), to achieve the fusion of CFP and OCT images. Specifically, inspired by clinical observations, we consider the geometric correspondence between an OCT slice and the corresponding CFP region to learn correlated features of the two modalities for robust fusion. Furthermore, we design a new feature selection strategy to extract discriminative OCT representations by automatically selecting the important feature maps from OCT slices. Unlike existing multimodal learning methods, GeCoM-Net is the first method that explicitly formulates the geometric relationship between an OCT slice and the corresponding region of the CFP image for CFP and OCT fusion. Experiments have been conducted on a large-scale private dataset and a publicly available dataset to evaluate the effectiveness of GeCoM-Net for diagnosing diabetic macular edema (DME), impaired visual acuity (VA) and glaucoma. The empirical results show that our method outperforms the current state-of-the-art multimodal learning methods, improving the AUROC score by 0.4%, 1.9% and 2.9% for DME, VA and glaucoma detection, respectively.
3
El-Ateif S, Idri A. Multimodality Fusion Strategies in Eye Disease Diagnosis. Journal of Imaging Informatics in Medicine 2024. PMID: 38639808. DOI: 10.1007/s10278-024-01105-x.
Abstract
Multimodality fusion has gained significance in medical applications, particularly in diagnosing challenging diseases like eye diseases, notably diabetic eye diseases that pose risks of vision loss and blindness. Mono-modality eye disease diagnosis proves difficult, often missing crucial disease indicators. In response, researchers advocate multimodality-based approaches to enhance diagnostics. This study is a unique exploration, evaluating three multimodality fusion strategies (early, joint, and late) in conjunction with state-of-the-art convolutional neural network models for automated eye disease binary detection across three datasets: fundus fluorescein angiography, macula, and a combination of the Digital Retinal Images for Vessel Extraction (DRIVE), Structured Analysis of the Retina (STARE), and High-Resolution Fundus (HRF) datasets. Findings reveal the efficacy of each fusion strategy: type 0 early fusion with DenseNet121 achieves an impressive 99.45% average accuracy. InceptionResNetV2 emerges as the top-performing joint fusion architecture with an average accuracy of 99.58%. Late fusion with ResNet50V2 achieves a perfect score of 100% across all metrics, surpassing both early and joint fusion. Comparative analysis demonstrates that late fusion with ResNet50V2 matches the accuracy of the state-of-the-art feature-level fusion model for multiview learning. In conclusion, this study substantiates late fusion as the optimal strategy for eye disease diagnosis compared to early and joint fusion, showcasing its superiority in leveraging multimodal information.
Affiliation(s)
- Sara El-Ateif: Software Project Management Research Team, ENSIAS, Mohammed V University, BP 713, Agdal, Rabat, Morocco
- Ali Idri: Software Project Management Research Team, ENSIAS, Mohammed V University, BP 713, Agdal, Rabat, Morocco; Faculty of Medical Sciences, Mohammed VI Polytechnic University, Marrakech-Rhamna, Benguerir, Morocco
4
Li J, Jiang P, An Q, Wang GG, Kong HF. Medical image identification methods: A review. Comput Biol Med 2024; 169:107777. PMID: 38104516. DOI: 10.1016/j.compbiomed.2023.107777.
Abstract
The identification of medical images is an essential task in computer-aided diagnosis and in medical image retrieval and mining. Medical image data mainly include electronic health record data, gene information data, and similar sources. Although intelligent imaging offers a better scheme for medical image analysis than traditional methods that rely on handcrafted features, it remains challenging due to the diversity of imaging modalities and clinical pathologies. This paper analyzes and summarizes the concepts pertinent to these methods, such as machine learning, deep learning, convolutional neural networks, transfer learning, and other image processing technologies for medical images. We reviewed recent studies to provide a comprehensive overview of applying these methods to various medical image analysis tasks, such as object detection, image classification, image registration, and segmentation. In particular, we emphasized the latest progress and contributions of different methods, summarized according to application scenario, including classification, segmentation, detection, and image registration. In addition, the applications of different methods are summarized by application area, such as pulmonary, brain, digital pathology, skin, lung, renal, breast, neuromyelitis, vertebrae, and musculoskeletal imaging. Critical discussion of open challenges and directions for future research is given at the end. Notably, excellent algorithms from computer vision, natural language processing, and autonomous driving are expected to be applied to medical image recognition in the future.
Affiliation(s)
- Juan Li: School of Information Engineering, Wuhan Business University, Wuhan, 430056, China; School of Artificial Intelligence, Wuchang University of Technology, Wuhan, 430223, China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, 130012, China
- Pan Jiang: School of Information Engineering, Wuhan Business University, Wuhan, 430056, China
- Qing An: School of Artificial Intelligence, Wuchang University of Technology, Wuhan, 430223, China
- Gai-Ge Wang: School of Computer Science and Technology, Ocean University of China, Qingdao, 266100, China
- Hua-Feng Kong: School of Information Engineering, Wuhan Business University, Wuhan, 430056, China
5
Huang X, Islam MR, Akter S, Ahmed F, Kazami E, Serhan HA, Abd-Alrazaq A, Yousefi S. Artificial intelligence in glaucoma: opportunities, challenges, and future directions. Biomed Eng Online 2023; 22:126. PMID: 38102597. PMCID: PMC10725017. DOI: 10.1186/s12938-023-01187-8.
Abstract
Artificial intelligence (AI) has shown excellent diagnostic performance in detecting various complex problems related to many areas of healthcare, including ophthalmology. AI diagnostic systems developed from fundus images have become state-of-the-art tools in diagnosing retinal conditions and glaucoma as well as other ocular diseases. However, designing and implementing AI models using large imaging data is challenging. In this study, we review different machine learning (ML) and deep learning (DL) techniques applied to multiple modalities of retinal data, such as fundus images and visual fields, for glaucoma detection, progression assessment, and staging. We summarize findings and provide several taxonomies to help the reader understand the evolution of conventional and emerging AI models in glaucoma. We discuss opportunities and challenges facing AI application in glaucoma and highlight key themes from the existing literature that may help guide future studies. Our goal in this systematic review is to help readers and researchers understand critical aspects of AI related to glaucoma as well as determine the necessary steps and requirements for the successful development of AI models in glaucoma.
Affiliation(s)
- Xiaoqin Huang: Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, USA
- Md Rafiqul Islam: Business Information Systems, Australian Institute of Higher Education, Sydney, Australia
- Shanjita Akter: School of Computer Science, Taylors University, Subang Jaya, Malaysia
- Fuad Ahmed: Department of Computer Science & Engineering, Islamic University of Technology (IUT), Gazipur, Bangladesh
- Ehsan Kazami: Ophthalmology, General Hospital of Mahabad, Urmia University of Medical Sciences, Urmia, Iran
- Hashem Abu Serhan: Department of Ophthalmology, Hamad Medical Corporations, Doha, Qatar
- Alaa Abd-Alrazaq: AI Center for Precision Health, Weill Cornell Medicine-Qatar, Doha, Qatar
- Siamak Yousefi: Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, USA; Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, USA
6
Qian X, Xian S, Yifei S, Wei G, Liu H, Xiaoming X, Chu C, Yilong Y, Shuang Y, Kai M, Mei C, Yi Q. External validation of a deep learning detection system for glaucomatous optic neuropathy: a real-world multicentre study. Eye (Lond) 2023; 37:3813-3818. PMID: 37322379. PMCID: PMC10698045. DOI: 10.1038/s41433-023-02622-9.
Abstract
OBJECTIVES To conduct an external validation of an automated artificial intelligence (AI) diagnostic system using fundus photographs from a real-life multicentre cohort. METHODS We designed external validation in multiple scenarios, consisting of 3049 images from Qilu Hospital of Shandong University in China (QHSDU, validation dataset 1), 7495 images from three other hospitals in China (validation dataset 2), and 516 images from the high myopia (HM) population of QHSDU (validation dataset 3). The corresponding sensitivity, specificity and accuracy of this AI diagnostic system in identifying glaucomatous optic neuropathy (GON) were calculated. RESULTS In validation datasets 1 and 2, the algorithm yielded accuracy of 93.18% and 91.40%, area under the receiver operating characteristic curve (AUC) of 95.17% and 96.64%, and significantly higher sensitivity of 91.75% and 91.41%, respectively, compared to manual graders. On the subsets complicated with retinal comorbidities, such as diabetic retinopathy or age-related macular degeneration, in validation datasets 1 and 2, the algorithm achieved accuracy of 87.54% and 93.81%, and AUC of 97.02% and 97.46%, respectively. In validation dataset 3, the algorithm achieved comparable accuracy of 81.98% and AUC of 87.49%, with a sensitivity of 83.61% and specificity of 81.76% for GON recognition specifically in the HM population. CONCLUSIONS With acceptable generalization capability across varying levels of image quality, different clinical centres, and certain comorbidities, such as HM, the automatic AI diagnostic system has the potential to provide expert-level glaucoma detection.
Affiliation(s)
- Xu Qian: Department of Geriatric Medicine, Qilu Hospital of Shandong University, No. 107, Wenhuaxi Road, Jinan, 250012, China; Key Laboratory of Cardiovascular Proteomics of Shandong Province, Jinan, China; Jinan Clinical Research Center for Geriatric Medicine (202132001), Jinan, China
- Song Xian: Department of Geriatric Medicine, Qilu Hospital of Shandong University, No. 107, Wenhuaxi Road, Jinan, 250012, China; Key Laboratory of Cardiovascular Proteomics of Shandong Province, Jinan, China; Jinan Clinical Research Center for Geriatric Medicine (202132001), Jinan, China
- Su Yifei: Global Health Research Center, Duke Kunshan University, No. 8 Duke Avenue, Kunshan, Jiangsu Province, 215316, China
- Guo Wei: Lunan Eye Hospital, Linyi, 276000, China
- Hanruo Liu: Beijing Tongren Hospital, Capital Medical University, Beijing, 100730, China; School of Information and Electronics, Beijing Institute of Technology, Beijing, 100730, China
- Xi Xiaoming: School of Computer Science and Technology, Shandong Jianzhu University, Jinan, 250101, China
- Yin Yilong: School of Software, Shandong University, Jinan, 250101, China
- Yu Shuang: Tencent Healthcare, Shenzhen, 51800, China
- Ma Kai: Tencent Healthcare, Shenzhen, 51800, China
- Cheng Mei: Department of Geriatric Medicine, Qilu Hospital of Shandong University, No. 107, Wenhuaxi Road, Jinan, 250012, China; Key Laboratory of Cardiovascular Proteomics of Shandong Province, Jinan, China; Jinan Clinical Research Center for Geriatric Medicine (202132001), Jinan, China
- Qu Yi: Department of Geriatric Medicine, Qilu Hospital of Shandong University, No. 107, Wenhuaxi Road, Jinan, 250012, China; Key Laboratory of Cardiovascular Proteomics of Shandong Province, Jinan, China; Jinan Clinical Research Center for Geriatric Medicine (202132001), Jinan, China
7
Fan R, Bowd C, Brye N, Christopher M, Weinreb RN, Kriegman DJ, Zangwill LM. One-Vote Veto: Semi-Supervised Learning for Low-Shot Glaucoma Diagnosis. IEEE Transactions on Medical Imaging 2023; 42:3764-3778. PMID: 37610903. DOI: 10.1109/tmi.2023.3307689.
Abstract
Convolutional neural networks (CNNs) are a promising technique for automated glaucoma diagnosis from images of the fundus, and these images are routinely acquired as part of an ophthalmic exam. Nevertheless, CNNs typically require a large amount of well-labeled data for training, which may not be available in many biomedical image classification applications, especially when diseases are rare and where labeling by experts is costly. This article makes two contributions to address this issue: 1) It extends the conventional Siamese network and introduces a training method for low-shot learning when labeled data are limited and imbalanced, and 2) it introduces a novel semi-supervised learning strategy that uses additional unlabeled training data to achieve greater accuracy. Our proposed multi-task Siamese network (MTSN) can employ any backbone CNN, and we demonstrate with four backbone CNNs that its accuracy with limited training data approaches the accuracy of backbone CNNs trained with a dataset that is 50 times larger. We also introduce One-Vote Veto (OVV) self-training, a semi-supervised learning strategy that is designed specifically for MTSNs. By taking both self-predictions and contrastive predictions of the unlabeled training data into account, OVV self-training provides additional pseudo labels for fine-tuning a pre-trained MTSN. Using a large (imbalanced) dataset with 66,715 fundus photographs acquired over 15 years, extensive experimental results demonstrate the effectiveness of low-shot learning with MTSN and semi-supervised learning with OVV self-training. Three additional, smaller clinical datasets of fundus images acquired under different conditions (cameras, instruments, locations, populations) are used to demonstrate the generalizability of the proposed methods.
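The veto mechanism sketched in this abstract, where a pseudo label survives only if self-predictions and contrastive predictions agree, can be illustrated at a high level. This is one plausible reading of the idea for illustration only, not the authors' implementation; the function name and the unanimity rule are our assumptions.

```python
def one_vote_veto(self_prediction, contrastive_predictions):
    # Illustrative sketch (our reading, not the paper's code): an unlabeled
    # image receives a pseudo label only when the self-prediction and every
    # contrastive prediction agree; a single dissenting vote vetoes the label.
    if all(p == self_prediction for p in contrastive_predictions):
        return self_prediction  # confident pseudo label for fine-tuning
    return None                 # vetoed: leave the image unlabeled this round

print(one_vote_veto("glaucoma", ["glaucoma", "glaucoma"]))  # glaucoma
print(one_vote_veto("glaucoma", ["glaucoma", "healthy"]))   # None
```

The point of such a rule is precision over recall: pseudo labels used for fine-tuning should be conservative, since a wrong pseudo label actively corrupts training.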
8
Yamashita T, Asaoka R, Terasaki H, Yoshihara N, Kakiuchi N, Sakamoto T. Three-year changes in sex judgment using color fundus parameters in elementary school students. PLoS One 2023; 18:e0295123. PMID: 38033010. PMCID: PMC10688721. DOI: 10.1371/journal.pone.0295123.
Abstract
PURPOSE In a previous cross-sectional study, we reported that the sexes can be distinguished using known factors obtained from color fundus photography (CFP). However, it is not clear how sex differences in fundus parameters appear across the human lifespan. Therefore, we conducted a cohort study to investigate sex determination based on fundus parameters in elementary school students. METHODS This prospective observational longitudinal study investigated 109 right eyes of elementary school students at four annual examinations (ages 8.5 to 11.5 years). From each CFP, the tessellation fundus index was calculated as red/(red + green + blue) (R/[R+G+B]) using the mean red-green-blue intensities at eight locations around the optic disc and macular region. Optic disc area, ovality ratio, papillomacular angle, and retinal vessel angles and distances were quantified according to the data in our previous report. Using 54 fundus parameters, sex was predicted by L2-regularized binomial logistic regression for each grade. RESULTS The right eyes of 53 boys and 56 girls were analyzed. The discrimination accuracy rate increased significantly with age: 56.3% at 8.5 years, 46.1% at 9.5 years, 65.5% at 10.5 years and 73.1% at 11.5 years. CONCLUSIONS The accuracy of sex discrimination by fundus photography improved during this 3-year cohort study of elementary school students.
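The tessellation fundus index defined in this abstract is a simple ratio, so it can be computed directly from mean channel intensities. The function name and the example intensities below are hypothetical; the formula R/(R+G+B) is the one stated in the abstract.

```python
def tessellation_index(mean_r, mean_g, mean_b):
    # Tessellation fundus index as defined in the abstract: R / (R + G + B),
    # computed from the mean red/green/blue intensities of a fundus region.
    total = mean_r + mean_g + mean_b
    return mean_r / total if total else 0.0

# Hypothetical mean RGB intensities for one region around the optic disc:
print(tessellation_index(150.0, 75.0, 25.0))  # 0.6
```

In the study this ratio is evaluated at eight peripapillary and macular locations, giving eight of the 54 parameters fed to the logistic regression.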
Affiliation(s)
- Takehiro Yamashita: Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima-shi, Kagoshima, Japan
- Ryo Asaoka: Department of Ophthalmology, Seirei Hamamatsu General Hospital, Hamamatsu, Shizuoka, Japan; School of Nursing, Seirei Christopher University, Hamamatsu, Shizuoka, Japan; Nanovision Research Division, Research Institute of Electronics, Shizuoka University, Hamamatsu, Shizuoka, Japan; The Graduate School for the Creation of New Photonics Industries, Hamamatsu, Shizuoka, Japan
- Hiroto Terasaki: Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima-shi, Kagoshima, Japan
- Naoya Yoshihara: Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima-shi, Kagoshima, Japan
- Naoko Kakiuchi: Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima-shi, Kagoshima, Japan
- Taiji Sakamoto: Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima-shi, Kagoshima, Japan
9
Yao X, Dadzie A, Iddir S, Abtahi M, Ebrahimi B, Le D, Ganesh S, Son T, Heiferman M. Color Fusion Effect on Deep Learning Classification of Uveal Melanoma. Research Square 2023 (preprint). PMID: 37986860. PMCID: PMC10659548. DOI: 10.21203/rs.3.rs-3399214/v1.
Abstract
Background Reliable differentiation of uveal melanoma and choroidal nevi is crucial to guide appropriate treatment, preventing unnecessary procedures for benign lesions and ensuring timely treatment for potentially malignant cases. The purpose of this study is to validate deep learning classification of uveal melanoma and choroidal nevi, and to evaluate the effect of color fusion options on the classification performance. Methods A total of 798 ultra-widefield retinal images of 438 patients were included in this retrospective study, comprising 157 patients diagnosed with UM and 281 patients diagnosed with choroidal nevus. Color fusion options, including early fusion, intermediate fusion and late fusion, were tested for deep learning image classification with a convolutional neural network (CNN). Specificity, sensitivity, F1-score, accuracy, and the area under the curve (AUC) of a receiver operating characteristic (ROC) were used to evaluate the classification performance. The saliency map visualization technique was used to understand the areas in the image that had the most influence on classification decisions of the CNN. Results Color fusion options were observed to affect the deep learning performance significantly. For single-color learning, the red color image was observed to have superior performance compared to green and blue channels. For multi-color learning, the intermediate fusion is better than early and late fusion options. Conclusion Deep learning is a promising approach for automated classification of uveal melanoma and choroidal nevi, and color fusion options can significantly affect the classification performance.
10
Saha S, Vignarajan J, Frost S. A fast and fully automated system for glaucoma detection using color fundus photographs. Sci Rep 2023; 13:18408. PMID: 37891238. PMCID: PMC10611813. DOI: 10.1038/s41598-023-44473-0.
Abstract
This paper presents a computationally light and memory-efficient convolutional neural network (CNN)-based fully automated system for detection of glaucoma, a leading cause of irreversible blindness worldwide. Using color fundus photographs, the system detects glaucoma in two steps. In the first step, the optic disc region is located using the You Only Look Once (YOLO) CNN architecture. In the second step, classification into 'glaucomatous' and 'non-glaucomatous' is performed using the MobileNet architecture. A simplified version of the original YOLO net, specific to the context, is also proposed. Extensive experiments are conducted using seven state-of-the-art CNNs with varying computational intensity, namely, MobileNetV2, MobileNetV3, Custom ResNet, InceptionV3, ResNet50, an 18-layer CNN and InceptionResNetV2. A total of 6671 fundus images collected from seven publicly available glaucoma datasets are used for the experiments. The system achieves an accuracy and F1 score of 97.4% and 97.3%, with sensitivity, specificity, and AUC of 97.5%, 97.2%, and 99.3%, respectively. These findings are comparable with the best reported methods in the literature. With comparable or better performance, the proposed system produces significantly faster decisions and drastically reduces resource requirements. For example, the proposed system requires 12 times less memory than ResNet50 and produces decisions twice as fast. With its lower memory footprint and faster processing, the proposed system can be directly embedded into resource-limited devices such as portable fundus cameras.
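The accuracy, F1, sensitivity, and specificity figures quoted above follow the standard confusion-matrix definitions, which can be stated compactly. This is a generic sketch of those formulas, not the authors' evaluation code; the counts in the example are hypothetical. AUC is omitted because it requires the full score distribution rather than a single confusion matrix.

```python
def binary_metrics(tp, fp, tn, fn):
    # Standard definitions of the reported evaluation metrics for a binary
    # glaucomatous / non-glaucomatous decision (generic formulas).
    sensitivity = tp / (tp + fn)          # recall on the glaucomatous class
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "f1": f1,
            "sensitivity": sensitivity, "specificity": specificity}

# Hypothetical confusion counts for a 200-image test set:
print(binary_metrics(tp=90, fp=10, tn=90, fn=10))
```

Reporting sensitivity and specificity alongside accuracy matters here because glaucoma datasets are often class-imbalanced, and accuracy alone can hide poor recall on the diseased class.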
Affiliation(s)
- Sajib Saha: Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Perth, Australia
- Janardhan Vignarajan: Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Perth, Australia
- Shaun Frost: Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Perth, Australia
11
Wang JZ, Lu NH, Du WC, Liu KY, Hsu SY, Wang CY, Chen YJ, Chang LC, Twan WH, Chen TB, Huang YH. Classification of Color Fundus Photographs Using Fusion Extracted Features and Customized CNN Models. Healthcare (Basel) 2023; 11:2228. PMID: 37570467. PMCID: PMC10418900. DOI: 10.3390/healthcare11152228.
Abstract
This study focuses on overcoming challenges in classifying eye diseases using color fundus photographs by leveraging deep learning techniques, aiming to enhance early detection and diagnosis accuracy. We utilized a dataset of 6392 color fundus photographs across eight disease categories, which was later augmented to 17,766 images. Five well-known convolutional neural networks (CNNs), EfficientNetB0, MobileNetV2, ShuffleNet, ResNet50, and ResNet101, and a custom-built CNN were integrated and trained on this dataset. Image sizes were standardized, and model performance was evaluated via accuracy, Kappa coefficient, and precision metrics. ShuffleNet and EfficientNetB0 demonstrated strong performance, while our custom 17-layer CNN outperformed all with an accuracy of 0.930 and a Kappa coefficient of 0.920. Furthermore, we found that fusing the extracted image features with classical machine learning classifiers increased performance, with logistic regression showing the best results. Our study highlights the potential of AI and deep learning models in accurately classifying eye diseases and demonstrates the efficacy of custom-built models and the fusion of deep learning and classical methods. Future work should focus on validating these methods across larger datasets and assessing their real-world applicability.
Collapse
Affiliation(s)
- Jing-Zhe Wang
- Department of Information Engineering, I-Shou University, No. 8, Yida Road, Jiao-Su Village, Yan-Chao District, Kaohsiung City 84001, Taiwan
- Nan-Han Lu
- Department of Medical Imaging and Radiological Science, I-Shou University, No. 8, Yida Road, Jiao-Su Village, Yan-Chao District, Kaohsiung City 82445, Taiwan
- Department of Radiology, E-DA Cancer Hospital, I-Shou University, No. 21, Yida Road, Jiao-Su Village, Yan-Chao District, Kaohsiung City 82445, Taiwan
- Wei-Chang Du
- Department of Information Engineering, I-Shou University, No. 8, Yida Road, Jiao-Su Village, Yan-Chao District, Kaohsiung City 84001, Taiwan
- Kuo-Ying Liu
- Department of Radiology, E-DA Cancer Hospital, I-Shou University, No. 21, Yida Road, Jiao-Su Village, Yan-Chao District, Kaohsiung City 82445, Taiwan
- Shih-Yen Hsu
- Department of Information Engineering, I-Shou University, No. 8, Yida Road, Jiao-Su Village, Yan-Chao District, Kaohsiung City 84001, Taiwan
- Chi-Yuan Wang
- Department of Medical Imaging and Radiological Science, I-Shou University, No. 8, Yida Road, Jiao-Su Village, Yan-Chao District, Kaohsiung City 82445, Taiwan
- Yun-Ju Chen
- School of Medicine for International Students, I-Shou University, No. 8, Yida Road, Jiao-Su Village, Yan-Chao District, Kaohsiung City 84001, Taiwan
- Li-Ching Chang
- School of Medicine for International Students, I-Shou University, No. 8, Yida Road, Jiao-Su Village, Yan-Chao District, Kaohsiung City 84001, Taiwan
- Wen-Hung Twan
- Department of Life Sciences, National Taitung University, No. 369, Sec. 2, University Road, Taitung City 95048, Taiwan
- Tai-Been Chen
- Department of Medical Imaging and Radiological Science, I-Shou University, No. 8, Yida Road, Jiao-Su Village, Yan-Chao District, Kaohsiung City 82445, Taiwan
- Institute of Statistics, National Yang Ming Chiao Tung University, No. 1001, University Road, Hsinchu 30010, Taiwan
- Yung-Hui Huang
- Department of Medical Imaging and Radiological Science, I-Shou University, No. 8, Yida Road, Jiao-Su Village, Yan-Chao District, Kaohsiung City 82445, Taiwan
12
Velpula VK, Sharma LD. Multi-stage glaucoma classification using pre-trained convolutional neural networks and voting-based classifier fusion. Front Physiol 2023; 14:1175881. [PMID: 37383146 PMCID: PMC10293617 DOI: 10.3389/fphys.2023.1175881]
Abstract
Aim: To design an automated glaucoma detection system for early detection of glaucoma using fundus images. Background: Glaucoma is a serious eye disease that can cause vision loss and even permanent blindness. Early detection and prevention are crucial for effective treatment. Traditional diagnostic approaches are time-consuming, manual, and often inaccurate, making automated glaucoma diagnosis necessary. Objective: To propose an automated glaucoma stage classification model using pre-trained deep convolutional neural network (CNN) models and classifier fusion. Methods: The proposed model utilized five pre-trained CNN models: ResNet50, AlexNet, VGG19, DenseNet-201, and Inception-ResNet-v2. The model was tested using four public datasets: ACRIMA, RIM-ONE, Harvard Dataverse (HVD), and Drishti. Classifier fusion merged the decisions of all CNN models using a maximum-voting-based approach. Results: The proposed model achieved an area under the curve of 1 and an accuracy of 99.57% for the ACRIMA dataset. The HVD dataset had an area under the curve of 0.97 and an accuracy of 85.43%. The accuracy rates for Drishti and RIM-ONE were 90.55% and 94.95%, respectively. The experimental results showed that the proposed model outperformed state-of-the-art methods in classifying glaucoma in its early stages. Model interpretation relied on both attribution-based methods, such as activations and gradient class activation maps, and perturbation-based methods, such as locally interpretable model-agnostic explanations and occlusion sensitivity, which generate heatmaps of various sections of an image for model prediction. Conclusion: The proposed automated glaucoma stage classification model using pre-trained CNN models and classifier fusion is an effective method for the early detection of glaucoma. The results indicate high accuracy rates and superior performance compared to existing methods.
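The maximum-voting classifier fusion described in this abstract can be sketched in a few lines; the model names and per-image predictions below are illustrative placeholders, not outputs from the paper.

```python
from collections import Counter

def majority_vote(per_model_predictions):
    """Fuse per-image class labels from several models by maximum voting."""
    fused = []
    for labels in zip(*per_model_predictions):  # one tuple of labels per image
        fused.append(Counter(labels).most_common(1)[0][0])
    return fused

# Hypothetical per-image predictions from five CNNs for three fundus images
preds = [
    ["glaucoma", "normal", "early"],   # e.g. ResNet50 (illustrative)
    ["glaucoma", "normal", "early"],   # e.g. AlexNet (illustrative)
    ["normal",   "normal", "early"],   # e.g. VGG19 (illustrative)
    ["glaucoma", "early",  "early"],   # e.g. DenseNet-201 (illustrative)
    ["glaucoma", "normal", "normal"],  # e.g. Inception-ResNet-v2 (illustrative)
]
print(majority_vote(preds))  # -> ['glaucoma', 'normal', 'early']
```

Each image gets the label most of the models agree on, which is what makes the fused decision more robust than any single CNN.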
13
Shoukat A, Akbar S, Hassan SA, Iqbal S, Mehmood A, Ilyas QM. Automatic Diagnosis of Glaucoma from Retinal Images Using Deep Learning Approach. Diagnostics (Basel) 2023; 13:1738. [PMID: 37238222 DOI: 10.3390/diagnostics13101738]
Abstract
Glaucoma is characterized by increased intraocular pressure and damage to the optic nerve, which may result in irreversible blindness. The drastic effects of this disease can be avoided if it is detected at an early stage. However, the condition is frequently detected at an advanced stage in the elderly population. Therefore, early-stage detection may save patients from irreversible vision loss. The manual assessment of glaucoma by ophthalmologists includes various skill-oriented, costly, and time-consuming methods. Several techniques are in experimental stages to detect early-stage glaucoma, but a definite diagnostic technique remains elusive. We present an automatic method based on deep learning that can detect early-stage glaucoma with very high accuracy. The detection technique involves the identification of patterns from the retinal images that are often overlooked by clinicians. The proposed approach uses the gray channels of fundus images and applies the data augmentation technique to create a large dataset of versatile fundus images to train the convolutional neural network model. Using the ResNet-50 architecture, the proposed approach achieved excellent results for detecting glaucoma on the G1020, RIM-ONE, ORIGA, and DRISHTI-GS datasets. We obtained a detection accuracy of 98.48%, a sensitivity of 99.30%, a specificity of 96.52%, an AUC of 97%, and an F1-score of 98% by using the proposed model on the G1020 dataset. The proposed model may help clinicians to diagnose early-stage glaucoma with very high accuracy for timely interventions.
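The metrics quoted above (accuracy, sensitivity, specificity, F1-score) all derive from the binary confusion matrix; here is a minimal sketch, with counts that are purely illustrative rather than the paper's own confusion matrix.

```python
def binary_metrics(tp, fp, tn, fn):
    """Compute standard screening metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)      # true-positive rate on the glaucoma class
    specificity = tn / (tn + fp)      # true-negative rate on the normal class
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1

# Illustrative counts only
acc, sens, spec, f1 = binary_metrics(tp=142, fp=5, tn=139, fn=1)
print(f"acc={acc:.2%} sens={sens:.2%} spec={spec:.2%} f1={f1:.2%}")
```

Sensitivity and specificity trade off against each other as the decision threshold moves, which is why papers in this list report both alongside the AUC.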
Affiliation(s)
- Ayesha Shoukat
- Department of Computer Science, Riphah International University, Faisalabad Campus, Faisalabad 44000, Pakistan
- Shahzad Akbar
- Department of Computer Science, Riphah International University, Faisalabad Campus, Faisalabad 44000, Pakistan
- Syed Ale Hassan
- Department of Computer Science, Riphah International University, Faisalabad Campus, Faisalabad 44000, Pakistan
- Sajid Iqbal
- Department of Information Systems, College of Computer Sciences and Information Technology, King Faisal University, Al Ahsa 31982, Saudi Arabia
- Abid Mehmood
- Department of Management Information Systems, College of Business Administration, King Faisal University, Al Ahsa 31982, Saudi Arabia
- Qazi Mudassar Ilyas
- Department of Information Systems, College of Computer Sciences and Information Technology, King Faisal University, Al Ahsa 31982, Saudi Arabia
14
Jaumandreu L, Antón A, Pazos M, Rodriguez-Uña I, Rodriguez Agirretxe I, Martinez de la Casa JM, Ayala ME, Parrilla-Vallejo M, Dyrda A, Díez-Álvarez L, Rebolleda G, Muñoz-Negrete FJ. Glaucoma progression. Clinical practice guide. Archivos de la Sociedad Espanola de Oftalmologia 2023; 98:40-57. [PMID: 36089479 DOI: 10.1016/j.oftale.2022.08.003]
Abstract
OBJECTIVE To provide general recommendations that serve as a guide for the evaluation and management of glaucomatous progression in daily clinical practice, based on the existing quality of clinical evidence. METHODS After defining the objectives and scope of the guide, the working group was formed and structured clinical questions were formulated following the PICO (Patient, Intervention, Comparison, Outcomes) format. Once all the existing clinical evidence had been independently evaluated with the AMSTAR 2 (Assessment of Multiple Systematic Reviews) and Cochrane "Risk of bias" tools by at least two reviewers, recommendations were formulated following the Scottish Intercollegiate Guidelines Network (SIGN) methodology. RESULTS Recommendations with their corresponding levels of evidence are presented, which may be useful in the interpretation of and decision-making related to the different methods for detecting glaucomatous progression. CONCLUSIONS Although the level of scientific evidence available for many of the questions is not very high, this clinical practice guideline offers an updated review of the different existing aspects related to the evaluation and management of glaucomatous progression.
Affiliation(s)
- L Jaumandreu
- Servicio de Oftalmología, Hospital Universitario Ramón y Cajal, IRYCIS, Universidad de Alcalá, Alcalá de Henares, Madrid, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- A Antón
- Institut Català de la Retina (ICR), Barcelona, Spain; Universitat Internacional de Catalunya (UIC), Barcelona, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- M Pazos
- Institut Clínic d'Oftalmologia, Hospital Clínic de Barcelona, IDIBAPS, Universitat de Barcelona, Barcelona, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- I Rodriguez-Uña
- Instituto Oftalmológico Fernández-Vega, Universidad de Oviedo, Oviedo, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- I Rodriguez Agirretxe
- Servicio de Oftalmología, Hospital Universitario Donostia, San Sebastián, Gipuzkoa, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- J M Martinez de la Casa
- Servicio de Oftalmología, Hospital Clínico San Carlos, Instituto de Investigación Sanitaria del Hospital Clínico San Carlos (IdISSC), IIORC, Universidad Complutense de Madrid, Madrid, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- M E Ayala
- Institut Català de la Retina (ICR), Barcelona, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- M Parrilla-Vallejo
- Servicio de Oftalmología, Hospital Universitario Virgen Macarena, Sevilla, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- A Dyrda
- Institut Català de la Retina (ICR), Barcelona, Spain
- L Díez-Álvarez
- Servicio de Oftalmología, Hospital Universitario Ramón y Cajal, IRYCIS, Universidad de Alcalá, Alcalá de Henares, Madrid, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- G Rebolleda
- Servicio de Oftalmología, Hospital Universitario Ramón y Cajal, IRYCIS, Universidad de Alcalá, Alcalá de Henares, Madrid, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- F J Muñoz-Negrete
- Servicio de Oftalmología, Hospital Universitario Ramón y Cajal, IRYCIS, Universidad de Alcalá, Alcalá de Henares, Madrid, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
15
DeepPDT-Net: predicting the outcome of photodynamic therapy for chronic central serous chorioretinopathy using two-stage multimodal transfer learning. Sci Rep 2022; 12:18689. [PMID: 36333442 PMCID: PMC9636239 DOI: 10.1038/s41598-022-22984-6]
Abstract
Central serous chorioretinopathy (CSC), characterized by serous detachment of the macular retina, can cause permanent vision loss in its chronic course. Chronic CSC is generally treated with photodynamic therapy (PDT), which is costly, quite invasive, and unpredictable in outcome. In a retrospective case-control study design, we developed a two-stage deep learning model to predict the 1-year outcome of PDT using initial multimodal clinical data. The training dataset included 166 eyes with chronic CSC and an additional learning dataset containing 745 healthy control eyes. A pre-trained ResNet50-based convolutional neural network was first trained with normal fundus photographs (FPs) to detect CSC and then adapted to predict CSC treatability through transfer learning. The domain-specific ResNet50 successfully predicted treatable and refractory CSC (accuracy, 83.9%). The other multimodal clinical data were then integrated with the FP deep features using XGBoost. The final combined model (DeepPDT-Net) outperformed the domain-specific ResNet50 (accuracy, 88.0%). The FP deep features had the greatest impact on DeepPDT-Net performance, followed by central foveal thickness and age. In conclusion, DeepPDT-Net could solve the PDT outcome prediction task, which is challenging even for retinal specialists. This two-stage strategy, adopting transfer learning and concatenating multimodal data, can overcome the clinical prediction obstacles arising from insufficient datasets.
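The "concatenating multimodal data" step — joining CNN image features with tabular clinical variables before fitting a boosted classifier — can be sketched as follows; the feature values and clinical variable names are hypothetical, not taken from the paper.

```python
def build_multimodal_features(deep_features, clinical):
    """Concatenate CNN deep features with tabular clinical variables
    in a fixed order, producing one feature vector per eye."""
    clinical_order = ["central_foveal_thickness_um", "age_years"]  # hypothetical keys
    return list(deep_features) + [float(clinical[k]) for k in clinical_order]

# One eye: three illustrative deep-feature values plus two clinical variables
vec = build_multimodal_features(
    [0.12, -0.53, 0.77],
    {"central_foveal_thickness_um": 284, "age_years": 47},
)
print(vec)  # -> [0.12, -0.53, 0.77, 284.0, 47.0]
```

A gradient-boosted model (XGBoost in the paper) would then be fit on these concatenated vectors, letting the tree ensemble weigh image-derived and clinical signals jointly.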
16
de Smet MD, Haim-Langford D, Neumann R, Kramer M, Cunningham E, Deutsch L, Milman Z. Tarsier Anterior Chamber Cell Grading: Improving the SUN Grading Scheme with a Visual Analog Scale. Ocul Immunol Inflamm 2022; 30:1686-1691. [PMID: 34232824 DOI: 10.1080/09273948.2021.1934036]
Abstract
PURPOSE To compare a visual analog scale for grading anterior chamber cells (ACC) with a modified Standardization of Uveitis Nomenclature (SUN) ACC scale. METHODS A graphical representation of anterior chamber cells, comprising a reference and a test set, was created and shown to two groups of experienced uveitis experts. Group 1 was given the analog scale in written format, while Group 2 was given the reference images for comparison. Each test subject was asked to provide the best approximation for each grade. RESULTS Eleven graders participated in phase 1. Correct grading occurred in 87.4% of cases. Discrepancies were seen at all grades. Only 3 of 11 graders achieved a perfect score. Seven graders participated in phase 2. Agreement was 95.2%, with 4 of 7 graders achieving a perfect score. Discrepancies were seen at higher grades only. CONCLUSIONS ACC grading is improved by a visual grading scale, and interobserver variability is reduced.
Affiliation(s)
- Marc D de Smet
- MIOS Sa, Lausanne, Switzerland; Department of Ophthalmology, Leiden Medical Center, University of Leiden, Leiden, The Netherlands
- Ron Neumann
- Department of Ophthalmology, Maccabi Sherutei Briut, Ramat Hasharon, Israel
- Michal Kramer
- Department of Ophthalmology, Rabin Medical Center, Petach-Tikva, Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Emmett Cunningham
- Department of Ophthalmology, California Pacific Medical Center, San Francisco, California; Department of Ophthalmology, Stanford University School of Medicine, Stanford, California; Francis I Proctor Foundation, UCSF School of Medicine, San Francisco, California; West Coast Retina Medical Group, San Francisco, California, USA
- Lisa Deutsch
- BioStats, Statistical Consulting Ltd, Modiin, Israel
17
Deep multiple instance learning for automatic glaucoma prevention and auto-annotation using color fundus photography. Progress in Artificial Intelligence 2022. [DOI: 10.1007/s13748-022-00292-4]
18
Zhou Q, Guo J, Chen Z, Chen W, Deng C, Yu T, Li F, Yan X, Hu T, Wang L, Rong Y, Ding M, Wang J, Zhang X. Deep learning-based classification of the anterior chamber angle in glaucoma gonioscopy. Biomedical Optics Express 2022; 13:4668-4683. [PMID: 36187252 PMCID: PMC9484423 DOI: 10.1364/boe.465286]
Abstract
In the proposed network, features were first extracted from the gonioscopically obtained anterior segment photographs using a densely connected high-resolution network. The useful information was then further strengthened by a hybrid attention module to improve the classification accuracy. Between October 30, 2020, and January 30, 2021, a total of 146 participants underwent glaucoma screening. One thousand seven hundred eighty original images of the ACA were obtained with the gonioscope and slit lamp microscope. After data augmentation, 4457 images were used for the training and validation of the HahrNet, and 497 images were used to evaluate the algorithm. Experimental results demonstrate that the proposed HahrNet achieves a good performance of 96.2% accuracy, 99.0% specificity, 96.4% sensitivity, and 0.996 area under the curve (AUC) in classifying the ACA test dataset. Compared with several deep learning-based classification methods and nine human readers of different levels, the HahrNet achieves better or more competitive performance in terms of accuracy, specificity, and sensitivity. The proposed ACA classification method can thus provide an automatic and accurate technology for the grading of glaucoma.
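The AUC reported here and throughout this list is equivalent to the Mann-Whitney U statistic: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch with illustrative scores:

```python
def auc_from_scores(scores, labels):
    """AUC as the fraction of positive/negative pairs ranked correctly
    (ties count as half a win)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Illustrative classifier scores and ground-truth labels (1 = target class)
scores = [0.95, 0.40, 0.35, 0.80, 0.10, 0.60]
labels = [1,    1,    0,    1,    0,    0]
print(auc_from_scores(scores, labels))  # -> 0.888... (8 of 9 pairs correct)
```

This pairwise form makes clear why an AUC of 0.996 means the classifier ranks a diseased eye above a healthy one in almost every possible pairing.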
Affiliation(s)
- Quan Zhou
- Department of Biomedical Engineering, College of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan 430074, China
- These authors contributed equally to this work
- Jingmin Guo
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- These authors contributed equally to this work
- Zhiqi Chen
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- Wei Chen
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- Chaohua Deng
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- Tian Yu
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- Fei Li
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- Xiaoqin Yan
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- Tian Hu
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- Linhao Wang
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- Yan Rong
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- Mingyue Ding
- Department of Biomedical Engineering, College of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan 430074, China
- Junming Wang
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- Xuming Zhang
- Department of Biomedical Engineering, College of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan 430074, China
19
Ferreira H, Serranho P, Guimarães P, Trindade R, Martins J, Moreira PI, Ambrósio AF, Castelo-Branco M, Bernardes R. Stage-independent biomarkers for Alzheimer's disease from the living retina: an animal study. Sci Rep 2022; 12:13667. [PMID: 35953633 PMCID: PMC9372147 DOI: 10.1038/s41598-022-18113-y]
Abstract
The early diagnosis of neurodegenerative disorders is still an open issue despite the many efforts to address this problem. In particular, Alzheimer's disease (AD) remains undiagnosed for over a decade before the first symptoms. Optical coherence tomography (OCT) is now common and widely available and has been used to image the retina of AD patients and healthy controls in search of biomarkers of neurodegeneration. However, early diagnosis tools would need to rely on images of patients in early AD stages, which are not available due to late diagnosis. To shed light on how to overcome this obstacle, we used 57 wild-type mice and 57 mice of the triple-transgenic mouse model of AD, training a network with mice aged 3, 4, and 8 months and classifying mice at the ages of 1, 2, and 12 months. To this end, we computed fundus images from OCT data and trained a convolutional neural network (CNN) to classify them into the wild-type or transgenic group. CNN accuracy ranged from 80 to 88% for mice at ages outside the training group, raising the possibility of diagnosing AD before the first symptoms through non-invasive imaging of the retina.
Affiliation(s)
- Hugo Ferreira
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
- Pedro Serranho
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal; Department of Sciences and Technology, Universidade Aberta, Rua da Escola Politécnica, n.º 147, 1269-001, Lisboa, Portugal
- Pedro Guimarães
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
- Rita Trindade
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
- João Martins
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal; Coimbra Institute for Clinical and Biomedical Research (iCBR), Faculty of Medicine (FMUC), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal; Center for Innovative Biomedicine and Biotechnology (CIBB), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal; Clinical Academic Center of Coimbra (CACC), Faculty of Medicine (FMUC), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
- Paula I Moreira
- Center for Innovative Biomedicine and Biotechnology (CIBB), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal; Clinical Academic Center of Coimbra (CACC), Faculty of Medicine (FMUC), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal; Laboratory of Physiology, Faculty of Medicine (FMUC), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal; Center for Neuroscience and Cell Biology (CNC), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
- António Francisco Ambrósio
- Coimbra Institute for Clinical and Biomedical Research (iCBR), Faculty of Medicine (FMUC), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal; Center for Innovative Biomedicine and Biotechnology (CIBB), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal; Clinical Academic Center of Coimbra (CACC), Faculty of Medicine (FMUC), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
- Miguel Castelo-Branco
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal; Clinical Academic Center of Coimbra (CACC), Faculty of Medicine (FMUC), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
- Rui Bernardes
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal; Clinical Academic Center of Coimbra (CACC), Faculty of Medicine (FMUC), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
20
Sun K, He M, He Z, Liu H, Pi X. EfficientNet embedded with spatial attention for recognition of multi-label fundus disease from color fundus photographs. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103768]
21
Analysis Model of Image Colour Data Elements Based on Deep Neural Network. Computational Intelligence and Neuroscience 2022; 2022:7631788. [PMID: 35898791 PMCID: PMC9313933 DOI: 10.1155/2022/7631788]
Abstract
At present, image colour element analysis in China still relies on subjective visual evaluation. Because the evaluation process is inevitably disturbed by human factors, it not only has low efficiency but also produces large errors. To solve these problems, this paper proposes an image colour data element analysis model based on a deep neural network. Firstly, a TensorFlow-based framework for intelligent analysis of image colour data elements is constructed, and the heterogeneous TensorFlow framework is designed with a Docker cluster to improve the efficiency of image element analysis. Secondly, considering the temporal and spatial error diffusion in the analysis process, the original model is replaced with a quantization-corrected error diffusion model for more accurate colour management. Image colour management is an important link in the process of image reproduction, and the rotating principal component analysis method is used to correct and analyze image colour errors. Finally, using the properties of transfer learning and convolutional neural networks, an image colour element analysis model based on a deep neural network is established. Large-scale image data are collected, and the effectiveness and reliability of the algorithm are verified from different angles. The results show that the new method not only reveals the true colour components of the target image but also achieves high spectral data reconstruction accuracy, and the analysis results have strong adaptability.
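The quantization-with-error-diffusion idea mentioned above can be illustrated with a minimal one-dimensional sketch; this is a simplification for illustration only (practical error diffusion such as Floyd-Steinberg spreads the error over a two-dimensional neighbourhood of pixels).

```python
def diffuse_quantize(values, levels):
    """Quantize each sample to the nearest allowed level, carrying the
    quantization error forward into the next sample."""
    out = []
    carried = 0.0
    for v in values:
        target = v + carried
        q = min(levels, key=lambda lv: abs(lv - target))  # nearest level
        carried = target - q  # error diffused to the next sample
        out.append(q)
    return out

# A smooth grey ramp quantized to pure black (0.0) and white (1.0):
# carrying the error forward preserves the average intensity.
print(diffuse_quantize([0.2, 0.4, 0.6, 0.8], levels=[0.0, 1.0]))
# -> [0.0, 1.0, 0.0, 1.0]
```

Without the carried error every sample below 0.5 would round to black, losing the ramp's mean brightness; diffusion trades local accuracy for global colour fidelity, which is the point of using it in colour management.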
22
AI-Based Automatic Detection and Classification of Diabetic Retinopathy Using U-Net and Deep Learning. Symmetry (Basel) 2022. [DOI: 10.3390/sym14071427]
Abstract
Artificial intelligence is widely applied to automate diabetic retinopathy (DR) diagnosis. Diabetes-related retinal vascular disease is one of the world's most common causes of blindness and vision impairment. Automated DR detection systems would therefore greatly benefit early screening and treatment and prevent the vision loss DR causes. Researchers have proposed several systems to detect abnormalities in retinal images in the past few years. However, automatic DR detection methods have traditionally relied on hand-crafted feature extraction from retinal images followed by a classifier to obtain the final classification. Deep neural networks (DNNs) have been applied in recent years to help overcome this limitation. We propose a novel two-stage approach for automated DR classification in this research. Because of the low fraction of positive instances in the asymmetric optic disc (OD) and blood vessel (BV) detection task, preprocessing and data augmentation techniques are used to enhance image quality and quantity. The first stage uses two independent U-Net models for OD and BV segmentation. In the second stage, a symmetric hybrid CNN-SVD model, created after preprocessing, extracts and selects the most discriminant features following OD and BV extraction with Inception-V3 based on transfer learning, and detects DR by recognizing retinal biomarkers such as microaneurysms (MA), hemorrhages (HM), and exudates (EX). On EyePACS-1, Messidor-2, and DIARETDB0, the proposed methodology demonstrated state-of-the-art performance, with average accuracies of 97.92%, 94.59%, and 93.52%, respectively. Extensive testing and comparisons with baseline approaches indicate the efficacy of the suggested methodology.
23
Retinal Glaucoma Public Datasets: What Do We Have and What Is Missing? J Clin Med 2022; 11:3850. [PMID: 35807135 PMCID: PMC9267177 DOI: 10.3390/jcm11133850]
Abstract
Public databases for glaucoma studies contain color images of the retina, emphasizing the optic papilla. These databases are intended for research and for standardized automated methodologies such as those using deep learning techniques. These techniques are used to solve complex problems in medical imaging, particularly in the automated screening of glaucomatous disease. The development of deep learning techniques has demonstrated potential for implementing protocols for large-scale glaucoma screening in the population, eliminating possible diagnostic doubts among specialists and benefiting early treatment to delay the onset of blindness. However, the images are obtained with different cameras, in distinct locations, and from various population groups, and are centered on multiple parts of the retina. Other limitations are the small amount of data and the lack of segmentation of the optic papilla and its excavation. This work is intended to offer contributions to the structure and presentation of public databases used in the automated screening of glaucomatous papillae, adding relevant information from a medical point of view. The gold-standard public databases present images with segmentations of the disc and cup made by experts and a division between training and test groups, serving as a reference for use in deep learning architectures. However, the data offered are not interchangeable. The quality and presentation of images are heterogeneous. Moreover, the databases use different criteria for binary classification with and without glaucoma, do not offer simultaneous pictures of the two eyes, and do not contain elements for early diagnosis.
24
Atalay E, Özalp O, Devecioğlu ÖC, Erdoğan H, İnce T, Yıldırım N. Investigation of the Role of Convolutional Neural Network Architectures in the Diagnosis of Glaucoma using Color Fundus Photography. Turk J Ophthalmol 2022; 52:193-200. [PMID: 35770344 PMCID: PMC9249112 DOI: 10.4274/tjo.galenos.2021.29726]
Abstract
Objectives: To evaluate the performance of convolutional neural network (CNN) architectures in distinguishing eyes with glaucoma from normal eyes. Materials and Methods: A total of 9,950 fundus photographs of 5,388 patients from the database of Eskişehir Osmangazi University Faculty of Medicine Ophthalmology Clinic were labelled as glaucoma, glaucoma suspect, or normal by three different experienced ophthalmologists. The categorized fundus photographs were evaluated using a state-of-the-art two-dimensional CNN and compared with deep residual networks (ResNet) and very deep neural networks (VGG). The accuracy, sensitivity, and specificity of glaucoma detection with the different algorithms were evaluated using a dataset of 238 normal and 320 glaucomatous fundus photographs. For the detection of suspected glaucoma, ResNet-101 architectures were tested with a dataset of 170 normal, 170 glaucoma, and 167 glaucoma-suspect fundus photographs. Results: Accuracy, sensitivity, and specificity in detecting glaucoma were 96.2%, 99.5%, and 93.7% with ResNet-50; 97.4%, 97.8%, and 97.1% with ResNet-101; 98.9%, 100%, and 98.1% with VGG-19; and 99.4%, 100%, and 99% with the 2D CNN, respectively. Accuracy, sensitivity, and specificity values in distinguishing glaucoma suspects from normal eyes were 62%, 68%, and 56%, and those for differentiating glaucoma from suspected glaucoma were 92%, 81%, and 97%, respectively. While 55 photographs could be evaluated in 2 seconds with the CNN, a clinician spent an average of 24.2 seconds to evaluate a single photograph. Conclusion: An appropriately designed and trained CNN was able to distinguish glaucoma with high accuracy even with a small number of fundus photographs.
25.
Widen the Applicability of a Convolutional Neural-Network-Assisted Glaucoma Detection Algorithm of Limited Training Images across Different Datasets. Biomedicines 2022;10:1314. [PMID: 35740336; PMCID: PMC9219722; DOI: 10.3390/biomedicines10061314]
Abstract
Automated glaucoma detection using deep learning may increase the diagnostic rate of glaucoma to prevent blindness, but generalizable models are currently unavailable despite the use of huge training datasets. This study aims to evaluate the performance of a convolutional neural network (CNN) classifier trained with a limited number of high-quality fundus images in detecting glaucoma and methods to improve its performance across different datasets. A CNN classifier was constructed using EfficientNet B3 and 944 images collected from one medical center (core model) and externally validated using three datasets. The performance of the core model was compared with (1) the integrated model constructed by using all training images from the four datasets and (2) the dataset-specific model built by fine-tuning the core model with training images from the external datasets. The diagnostic accuracy of the core model was 95.62% but dropped to ranges of 52.5–80.0% on the external datasets. Dataset-specific models exhibited superior diagnostic performance on the external datasets compared to other models, with a diagnostic accuracy of 87.50–92.5%. The findings suggest that dataset-specific tuning of the core CNN classifier effectively improves its applicability across different datasets when increasing training images fails to achieve generalization.
26.
Classification of Glaucoma Based on Elephant-Herding Optimization Algorithm and Deep Belief Network. Electronics 2022;11:1763. [DOI: 10.3390/electronics11111763]
Abstract
This study proposes a novel glaucoma identification system from fundus images based on a deep belief network (DBN) optimized by the elephant-herding optimization (EHO) algorithm. Initially, the input image undergoes the preprocessing steps of noise removal and enhancement, followed by optic disc (OD) and optic cup (OC) segmentation and extraction of structural, intensity, and textural features. The most discriminative features are then selected using the ReliefF algorithm and passed to the DBN for classification as glaucomatous or normal. To enhance the classification rate of the DBN, its parameters are fine-tuned by the EHO algorithm. The model was evaluated on public and private datasets totalling 7280 images and attained a maximum classification rate of 99.4%, with 100% specificity and 99.89% sensitivity. Ten-fold cross-validation reduced misclassification and attained 98.5% accuracy. The investigations demonstrated the efficacy of the proposed method in avoiding bias and dataset variability and in reducing false positives compared with similar works on glaucoma classification. The proposed system can be tested on diverse datasets, aiding improved glaucoma diagnosis.
27.
Rebinth A, Kumar S. Glaucoma diagnosis based on colour and spatial features using kernel SVM. Cardiometry 2022. [DOI: 10.18137/cardiometry.2022.22.508515]
Abstract
The main aim of this paper is to develop an early detection system for glaucoma classification using fundus images. By reviewing the various glaucoma image classification schemes, suitable features and supervised approaches are identified. An automated Computer-Aided Diagnosis (CAD) system for glaucoma is developed based on soft computing techniques. It consists of three stages. In the first stage, the Region Of Interest (ROI), comprising the Optic Disc (OD) region only, is selected automatically based on the green channel's highest intensity. In the second stage, features such as colour and Local Binary Patterns (LBP) are extracted. In the final stage, the fundus image is classified as either normal or glaucomatous by supervised learning with a Support Vector Machine (SVM) classifier. Evaluation of the CAD system on four public databases (ORIGA, RIM-ONE, DRISHTI-GS, and HRF) shows that LBP gives more promising results than the conventional colour features.
28.
Wang W, Zhou W, Ji J, Yang J, Guo W, Gong Z, Yi Y, Wang J. Deep sparse autoencoder integrated with three-stage framework for glaucoma diagnosis. Int J Intell Syst 2022. [DOI: 10.1002/int.22911]
Affiliation(s)
- Wenle Wang, School of Software, Jiangxi Normal University, Nanchang, China
- Wei Zhou, College of Computer Science, Shenyang Aerospace University, Shenyang, China
- Jianhang Ji, College of Computer Science, Shenyang Aerospace University, Shenyang, China
- Jikun Yang, Shenyang Aier Excellence Eye Hospital Co. Ltd., Shenyang, China
- Wei Guo, College of Computer Science, Shenyang Aerospace University, Shenyang, China
- Zhaoxuan Gong, College of Computer Science, Shenyang Aerospace University, Shenyang, China
- Yugen Yi, School of Software, Jiangxi Normal University, Nanchang, China
- Jianzhong Wang, College of Information Science and Technology, Northeast Normal University, Changchun, China
29.
Glaucoma diagnosis using multi-feature analysis and a deep learning technique. Sci Rep 2022;12:8064. [PMID: 35577876; PMCID: PMC9110703; DOI: 10.1038/s41598-022-12147-y]
Abstract
In this study, we aimed to facilitate the current diagnostic assessment of glaucoma by analyzing multiple features and introducing a new cross-sectional optic nerve head (ONH) feature from optical coherence tomography (OCT) images. The data (n = 100 for both glaucoma and control) were collected based on structural, functional, demographic and risk factors. The features were statistically analyzed, and the most significant four features were used to train machine learning (ML) algorithms. Two ML algorithms, deep learning (DL) and logistic regression (LR), were compared in terms of classification accuracy for automated glaucoma detection. The performance of the ML models was evaluated on unseen test data (n = 55). An image segmentation pilot study was then performed on cross-sectional OCT scans: the ONH cup area was extracted and analyzed, and a new DL model was trained for glaucoma prediction. This DL model was estimated using five-fold cross-validation and compared with two pre-trained models. The DL model trained on the optimal features achieved significantly higher diagnostic performance (area under the receiver operating characteristic curve (AUC) of 0.98, with 97% accuracy on validation data and 96% on test data) than previous studies of automated glaucoma detection. The second DL model, used in the pilot study, also showed promising outcomes (AUC of 0.99 and accuracy of 98.6%) in detecting glaucoma compared with the two pre-trained models. In combination, the results of the two studies strongly suggest that the four features and the cross-sectional ONH cup area, when used to train deep learning models, have great potential as an initial glaucoma screening tool that will assist clinicians in making precise decisions.
30.
Wu JH, Nishida T, Weinreb RN, Lin JW. Performances of Machine Learning in Detecting Glaucoma Using Fundus and Retinal Optical Coherence Tomography Images: A Meta-Analysis. Am J Ophthalmol 2022;237:1-12. [PMID: 34942113; DOI: 10.1016/j.ajo.2021.12.008]
Abstract
PURPOSE To evaluate the performance of machine learning (ML) in detecting glaucoma using fundus and retinal optical coherence tomography (OCT) images. DESIGN Meta-analysis. METHODS PubMed and EMBASE were searched on August 11, 2021. A bivariate random-effects model was used to pool ML's diagnostic sensitivity, specificity, and area under the curve (AUC). Subgroup analyses were performed based on ML classifier categories and dataset types. RESULTS One hundred and five studies (3.3%) were retrieved. Seventy-three (69.5%), 30 (28.6%), and 2 (1.9%) studies tested ML using fundus, OCT, and both image types, respectively. The total number of test images was 197,174 for fundus and 16,039 for OCT. Overall, ML showed excellent performance for both fundus (pooled sensitivity = 0.92 [95% CI, 0.91-0.93]; specificity = 0.93 [95% CI, 0.91-0.94]; and AUC = 0.97 [95% CI, 0.95-0.98]) and OCT (pooled sensitivity = 0.90 [95% CI, 0.86-0.92]; specificity = 0.91 [95% CI, 0.89-0.92]; and AUC = 0.96 [95% CI, 0.93-0.97]). ML performed similarly on all data and on external data for fundus images, whereas the external test result for OCT was less robust (AUC = 0.87). When comparing classifier categories, support vector machines showed the highest performance (pooled sensitivity, specificity, and AUC ranges of 0.92-0.96, 0.95-0.97, and 0.96-0.99, respectively), although results from neural networks and other classifiers were still good (ranges of 0.88-0.93, 0.90-0.93, and 0.95-0.97, respectively). When analyzed by dataset type, ML demonstrated consistent performance on clinical datasets (fundus AUC = 0.98 [95% CI, 0.97-0.99] and OCT AUC = 0.95 [95% CI, 0.93-0.97]). CONCLUSIONS The performance of ML in detecting glaucoma compares favorably to that of experts and is promising for clinical application. Future prospective studies are needed to better evaluate its real-world utility.
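The pooling of per-study estimates described in this abstract can be illustrated with a simplified univariate inverse-variance sketch on the logit scale. The study itself used a bivariate random-effects model, which jointly models sensitivity and specificity; the inputs below are made-up illustrative numbers, not the meta-analysis data.

```python
import math

def pool_logit(props, ns):
    """Inverse-variance (fixed-effect) pooling of proportions such as
    per-study sensitivities, done on the logit scale and back-transformed.
    A simplified univariate stand-in for the bivariate random-effects model."""
    weights, logits = [], []
    for p, n in zip(props, ns):
        logit = math.log(p / (1 - p))
        var = 1.0 / (n * p * (1 - p))  # delta-method variance of the logit
        logits.append(logit)
        weights.append(1.0 / var)
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled))  # back to the proportion scale

# Illustrative per-study sensitivities and test-set sizes: the pooled value
# is pulled toward the largest study.
print(round(pool_logit([0.94, 0.90, 0.92], [500, 1200, 800]), 3))
```

Larger studies get proportionally more weight, which is why the pooled estimate sits closer to the sensitivity of the biggest study.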
31.
Chaurasia AK, Greatbatch CJ, Hewitt AW. Diagnostic Accuracy of Artificial Intelligence in Glaucoma Screening and Clinical Practice. J Glaucoma 2022;31:285-299. [PMID: 35302538; DOI: 10.1097/ijg.0000000000002015]
Abstract
PURPOSE Artificial intelligence (AI) has been shown as a diagnostic tool for glaucoma detection through imaging modalities. However, these tools are yet to be deployed into clinical practice. This meta-analysis determined overall AI performance for glaucoma diagnosis and identified potential factors affecting their implementation. METHODS We searched databases (Embase, Medline, Web of Science, and Scopus) for studies that developed or investigated the use of AI for glaucoma detection using fundus and optical coherence tomography (OCT) images. A bivariate random-effects model was used to determine the summary estimates for diagnostic outcomes. The Preferred Reporting Items for Systematic Reviews and Meta-Analysis of Diagnostic Test Accuracy (PRISMA-DTA) extension was followed, and the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool was used for bias and applicability assessment. RESULTS Seventy-nine articles met inclusion criteria, with a subset of 66 containing adequate data for quantitative analysis. The pooled area under receiver operating characteristic curve across all studies for glaucoma detection was 96.3%, with a sensitivity of 92.0% (95% confidence interval: 89.0-94.0) and specificity of 94.0% (95% confidence interval: 92.0-95.0). The pooled area under receiver operating characteristic curve on fundus and OCT images was 96.2% and 96.0%, respectively. Mixed data set and external data validation had unsatisfactory diagnostic outcomes. CONCLUSION Although AI has the potential to revolutionize glaucoma care, this meta-analysis highlights that before such algorithms can be implemented into clinical care, a number of issues need to be addressed. With substantial heterogeneity across studies, many factors were found to affect the diagnostic performance. We recommend implementing a standard diagnostic protocol for grading, implementing external data validation, and analysis across different ethnicity groups.
Affiliation(s)
- Abadh K Chaurasia, Menzies Institute for Medical Research, School of Medicine, University of Tasmania, Tasmania
- Connor J Greatbatch, Menzies Institute for Medical Research, School of Medicine, University of Tasmania, Tasmania
- Alex W Hewitt, Menzies Institute for Medical Research, School of Medicine, University of Tasmania, Tasmania; Centre for Eye Research Australia, University of Melbourne, Melbourne, Australia
32.
Kim HE, Cosa-Linan A, Santhanam N, Jannesari M, Maros ME, Ganslandt T. Transfer learning for medical image classification: a literature review. BMC Med Imaging 2022;22:69. [PMID: 35418051; PMCID: PMC9007400; DOI: 10.1186/s12880-022-00793-7]
Abstract
BACKGROUND Transfer learning (TL) with convolutional neural networks aims to improve performance on a new task by leveraging knowledge of similar tasks learned in advance. It has made a major contribution to medical image analysis, as it overcomes the data scarcity problem and saves time and hardware resources. However, transfer learning has been arbitrarily configured in the majority of studies. This review paper attempts to provide guidance for selecting a model and TL approach for the medical image classification task. METHODS 425 peer-reviewed articles were retrieved from two databases, PubMed and Web of Science, published in English, up until December 31, 2020. Articles were assessed by two independent reviewers, with the aid of a third reviewer in the case of discrepancies. We followed the PRISMA guidelines for paper selection, and 121 studies were regarded as eligible for the scope of this review. We investigated articles focused on selecting backbone models and TL approaches, including feature extractor, feature extractor hybrid, fine-tuning, and fine-tuning from scratch. RESULTS The majority of studies (n = 57) empirically evaluated multiple models, followed by deep models (n = 33) and shallow models (n = 24). Inception, one of the deep models, was the most employed in the literature (n = 26). With respect to TL, the majority of studies (n = 46) empirically benchmarked multiple approaches to identify the optimal configuration. The rest of the studies applied only a single approach, of which feature extractor (n = 38) and fine-tuning from scratch (n = 27) were the two most favored. Only a few studies applied feature extractor hybrid (n = 7) or fine-tuning (n = 3) with pretrained models. CONCLUSION The investigated studies demonstrated the efficacy of transfer learning despite the data scarcity. We encourage data scientists and practitioners to use deep models (e.g. ResNet or Inception) as feature extractors, which can save computational costs and time without degrading the predictive power.
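The feature-extractor approach recommended in this conclusion can be sketched abstractly: freeze a pretrained backbone and train only a small classification head on its outputs. The toy "backbone" below is a frozen random projection standing in for a real pretrained network such as ResNet or Inception; all data and dimensions are illustrative assumptions.

```python
import math
import random

random.seed(0)

# A stand-in "pretrained backbone": a frozen random projection with an
# absolute-value nonlinearity. In practice this would be a deep model such
# as ResNet or Inception whose weights are left untouched.
DIM_IN, DIM_FEAT = 8, 4
BACKBONE = [[random.gauss(0, 1) for _ in range(DIM_IN)] for _ in range(DIM_FEAT)]

def extract_features(x):
    """Apply the frozen backbone; its weights are never updated."""
    return [abs(sum(w * xi for w, xi in zip(row, x))) for row in BACKBONE]

def train_head(samples, labels, epochs=500, lr=0.1):
    """Train only a small logistic-regression head on the extracted features."""
    feats = [extract_features(x) for x in samples]
    w, b = [0.0] * DIM_FEAT, 0.0
    for _ in range(epochs):
        for f, y in zip(feats, labels):
            p = 1 / (1 + math.exp(-(sum(wi * fi for wi, fi in zip(w, f)) + b)))
            g = p - y  # gradient of the log-loss w.r.t. the pre-sigmoid score
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

def predict(x, w, b):
    score = sum(wi * fi for wi, fi in zip(w, extract_features(x))) + b
    return score > 0
```

With toy vectors such as `[0.1]*8` labelled 0 and `[1.2]*8` labelled 1, the head learns to separate the two classes while the backbone stays fixed, which is exactly the cost saving the review describes.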
Affiliation(s)
- Hee E Kim, Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Alejandro Cosa-Linan, Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Nandhini Santhanam, Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Mahboubeh Jannesari, Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Mate E Maros, Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Thomas Ganslandt, Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany; Chair of Medical Informatics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Wetterkreuz 15, 91058 Erlangen, Germany
33.
A Comprehensive Review of Methods and Equipment for Aiding Automatic Glaucoma Tracking. Diagnostics (Basel) 2022;12:935. [PMID: 35453985; PMCID: PMC9031684; DOI: 10.3390/diagnostics12040935]
Abstract
Glaucoma is a chronic optic neuropathy characterized by irreversible damage to the retinal nerve fiber layer (RNFL), resulting in changes in the visual field (VF). Glaucoma screening is performed through a complete ophthalmological examination, using images of the optic papilla obtained in vivo for the evaluation of glaucomatous characteristics, eye pressure, and the visual field. Identifying the glaucomatous papilla is quite important, as optic papillary images are considered the gold standard for tracking. Therefore, this article presents a review of the diagnostic methods used to identify the glaucomatous papilla through technology over the last five years. Based on the analyzed works, the current state-of-the-art methods are identified, the current challenges are analyzed, and the shortcomings of these methods are investigated, especially from the point of view of automation and independence in performing these measurements. Finally, topics for future work and the challenges that need to be solved are proposed.
34.
Gampala V, Maram B, Vigneshwari S, Cristin R. Glaucoma detection using hybrid architecture based on optimal deep neuro fuzzy network. Int J Intell Syst 2022. [DOI: 10.1002/int.22845]
Affiliation(s)
- Veerraju Gampala, Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Guntur, Andhra Pradesh, India
- Balajee Maram, Department of Computer Science and Engineering, GMR Institute of Technology, Rajam, Andhra Pradesh, India
- S. Vigneshwari, Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, Tamil Nadu, India
- R. Cristin, Department of Computer Science and Engineering, GMR Institute of Technology, Rajam, Andhra Pradesh, India
35.
Latif J, Tu S, Xiao C, Ur Rehman S, Imran A, Latif Y. ODGNet: a deep learning model for automated optic disc localization and glaucoma classification using fundus images. SN Appl Sci 2022. [DOI: 10.1007/s42452-022-04984-3]
Abstract
Glaucoma is one of the prevalent causes of blindness in the modern world. It is a salient chronic eye disease that leads to irreversible vision loss. The damage caused by glaucoma can be limited if it is identified at an early stage. In this paper, a novel two-phase Optic Disk localization and Glaucoma Diagnosis Network (ODGNet) has been proposed. In the first phase, a visual saliency map incorporated with a shallow CNN is used for effective OD localization from the fundus images. In the second phase, transfer-learning-based pre-trained models are used for glaucoma diagnosis. The transfer-learning-based models such as AlexNet, ResNet, and VGGNet incorporated with saliency maps are evaluated on five public retinal datasets (ORIGA, HRF, DRIONS-DB, DR-HAGIS, and RIM-ONE) to differentiate between normal and glaucomatous images. This study's experimental results demonstrate that the proposed ODGNet evaluated on ORIGA is the most predictive model for glaucoma diagnosis, achieving 95.75%, 94.90%, 94.75%, and 97.85% accuracy, specificity, sensitivity, and area under the curve, respectively. These results indicate that the proposed OD localization method based on the saliency map and shallow CNN is robust and accurate and saves computational cost.
36.
Singh LK, Khanna M, Pooja. A novel multimodality based dual fusion integrated approach for efficient and early prediction of glaucoma. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103468]
37.
Tripathi PC, Bag S. A computer-aided grading of glioma tumor using deep residual networks fusion. Comput Methods Programs Biomed 2022;215:106597. [PMID: 34974232; DOI: 10.1016/j.cmpb.2021.106597]
Abstract
BACKGROUND AND OBJECTIVES Among different cancer types, glioma is considered a potentially fatal brain cancer that arises from glial cells. Early diagnosis of glioma helps the physician offer effective treatment to patients. Magnetic Resonance Imaging (MRI)-based computer-aided diagnosis of brain tumors has attracted a lot of attention in the literature in recent years. In this paper, we propose a novel deep learning-based computer-aided diagnosis for glioma tumors. METHODS The proposed method incorporates a two-level classification of gliomas. Firstly, the tumor is classified as low- or high-grade, and secondly, the low-grade tumors are classified into two types based on the presence of chromosome arms 1p/19q. The feature representations of four residual networks have been used to solve the problem by utilizing a transfer learning approach. Furthermore, we have fused these trained models using a novel Dempster-Shafer Theory (DST)-based fusion scheme in order to enhance the classification performance. Extensive data augmentation strategies are also utilized to avoid over-fitting of the discrimination models. RESULTS Extensive experiments have been performed on an MRI dataset to show the effectiveness of the method. It has been found that our method achieves 95.87% accuracy for glioma classification. The results obtained by our method have also been compared with different existing methods. The comparative study reveals that our method not only outperforms traditional machine learning-based methods but also produces better results than state-of-the-art deep learning-based methods. CONCLUSION The fusion of different residual networks enhances the tumor classification performance. The experimental findings indicate that the Dempster-Shafer Theory (DST)-based fusion technique produces superior performance compared to other fusion schemes.
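The Dempster-Shafer fusion step mentioned in this abstract can be sketched generically: each classifier's confidence is treated as a mass function over the hypotheses (here low-grade "LG" and high-grade "HG", plus the ignorance set), and Dempster's rule combines them while renormalizing away conflicting mass. This is a generic illustration of the rule with made-up masses, not the paper's exact weighting scheme.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions given as dicts
    mapping frozensets of hypotheses to masses. Intersecting focal elements
    reinforce each other; fully conflicting mass is discarded and the
    remainder is renormalized."""
    combined, conflict = {}, 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + va * vb
            else:
                conflict += va * vb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully disagree")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two classifiers' beliefs over low-grade (LG) vs high-grade (HG) glioma,
# each keeping a little mass on the ignorance set {LG, HG}.
LG, HG, ANY = frozenset({"LG"}), frozenset({"HG"}), frozenset({"LG", "HG"})
m1 = {LG: 0.7, HG: 0.2, ANY: 0.1}
m2 = {LG: 0.6, HG: 0.3, ANY: 0.1}
fused = dempster_combine(m1, m2)
print(max(fused, key=fused.get))  # agreement on LG strengthens that belief
```

Because both sources lean toward LG, the fused mass on LG exceeds either input mass, which is the reinforcement effect the paper exploits when fusing residual networks.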
Affiliation(s)
- Prasun Chandra Tripathi, Department of Computer Science and Engineering, Indian Institute of Technology (Indian School of Mines) Dhanbad, Dhanbad 826004, India
- Soumen Bag, Department of Computer Science and Engineering, Indian Institute of Technology (Indian School of Mines) Dhanbad, Dhanbad 826004, India
38.
Singh LK, Pooja, Garg H, Khanna M. Deep learning system applicability for rapid glaucoma prediction from fundus images across various data sets. Evolving Systems 2022. [DOI: 10.1007/s12530-022-09426-4]
39.
Detecting glaucoma with only OCT: Implications for the clinic, research, screening, and AI development. Prog Retin Eye Res 2022;90:101052. [PMID: 35216894; DOI: 10.1016/j.preteyeres.2022.101052]
Abstract
A method for detecting glaucoma based only on optical coherence tomography (OCT) is of potential value for routine clinical decisions, for inclusion criteria for research studies and trials, for large-scale clinical screening, as well as for the development of artificial intelligence (AI) decision models. Recent work suggests that the OCT probability (p-) maps, also known as deviation maps, can play a key role in an OCT-based method. However, artifacts seen on the p-maps of healthy control eyes can resemble patterns of damage due to glaucoma. We document in section 2 that these glaucoma-like artifacts are relatively common and are probably due to normal anatomical variations in healthy eyes. We also introduce a simple anatomical artifact model based upon known anatomical variations to help distinguish these artifacts from actual glaucomatous damage. In section 3, we apply this model to an OCT-based method for detecting glaucoma that starts with an examination of the retinal nerve fiber layer (RNFL) p-map. While this method requires a judgment by the clinician, sections 4 and 5 describe automated methods that do not. In section 4, the simple model helps explain the relatively poor performance of commonly employed summary statistics, including circumpapillary RNFL thickness. In section 5, the model helps account for the success of an AI deep learning model, which in turn validates our focus on the RNFL p-map. Finally, in section 6 we consider the implications of OCT-based methods for the clinic, research, screening, and the development of AI models.
40.
Akbar S, Hassan SA, Shoukat A, Alyami J, Bahaj SA. Detection of microscopic glaucoma through fundus images using deep transfer learning approach. Microsc Res Tech 2022;85:2259-2276. [PMID: 35170136; DOI: 10.1002/jemt.24083]
Abstract
Glaucoma in humans can lead to blindness if it progresses to the point where it affects the optic nerve head. It is not easily detected, since there are no early symptoms, but it can be identified using tonometry, ophthalmoscopy, and perimetry. Advances in artificial intelligence have permitted machine learning techniques to diagnose it at an early stage. Numerous machine learning methods have been proposed for glaucoma diagnosis with different datasets and techniques, but these methods are complex. Although several medical imaging instruments are used for glaucoma screening, fundus imaging is the most widely used screening technique for glaucoma detection. This study presents a novel DenseNet and DarkNet combination to classify normal and glaucoma-affected fundus images. These frameworks have been trained and tested on three datasets: high-resolution fundus (HRF), RIM 1, and ACRIMA. A total of 658 images of healthy eyes and 612 images of glaucoma-affected eyes were used for classification. It was also observed that the fusion of DenseNet and DarkNet outperforms the two individual CNNs, achieving 99.7% accuracy, 98.9% sensitivity, and 100% specificity on the HRF database. For the RIM1 database, 89.3% accuracy, 93.3% sensitivity, and 88.46% specificity were attained, and for the ACRIMA database, 99% accuracy, 100% sensitivity, and 99% specificity. Therefore, the proposed method is robust and efficient, with less computational time and complexity than the methods available in the literature.
Affiliation(s)
- Shahzad Akbar, Riphah College of Computing, Riphah International University, Faisalabad Campus, Faisalabad, Pakistan
- Syed Ale Hassan, Riphah College of Computing, Riphah International University, Faisalabad Campus, Faisalabad, Pakistan
- Ayesha Shoukat, Riphah College of Computing, Riphah International University, Faisalabad Campus, Faisalabad, Pakistan
- Jaber Alyami, Department of Diagnostic Radiology, Faculty of Applied Medical Sciences, King Abdulaziz University, Jeddah 21589, Saudi Arabia; Imaging Unit, King Fahd Medical Research Center, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Saeed Ali Bahaj, MIS Department, College of Business Administration, Prince Sattam Bin Abdulaziz University, Alkharj 11942, Saudi Arabia
41.
Neto A, Camara J, Cunha A. Evaluations of Deep Learning Approaches for Glaucoma Screening Using Retinal Images from Mobile Device. Sensors (Basel) 2022;22:1449. [PMID: 35214351; PMCID: PMC8874723; DOI: 10.3390/s22041449]
Abstract
Glaucoma is a silent disease that leads to vision loss or irreversible blindness. Current deep learning methods can help glaucoma screening by extending it to larger populations using retinal images. Low-cost lenses attached to mobile devices can increase the frequency of screening and alert patients earlier for a more thorough evaluation. This work explored and compared the performance of classification and segmentation methods for glaucoma screening with retinal images acquired by both retinography and mobile devices. The goal was to verify the results of these methods and see whether similar results could be achieved using images captured by mobile devices. The classification methods used were the Xception, ResNet152 V2, and Inception ResNet V2 models. The models' activation maps were produced and analysed to support the glaucoma classifier's predictions. In clinical practice, glaucoma assessment is commonly based on the cup-to-disc ratio (CDR) criterion, a frequent indicator used by specialists. For this reason, the U-Net architecture was additionally used, with the Inception ResNet V2 and Inception V3 models as the backbone, to segment the optic disc and cup and estimate the CDR. For both tasks, the performance of the models came close to that of state-of-the-art methods, and the classification method applied to a low-quality private dataset illustrates the advantage of using cheaper lenses.
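The CDR criterion described in this abstract can be computed directly from segmentation outputs. Below is a minimal sketch of the vertical cup-to-disc ratio from binary masks; the mask format and demo values are illustrative assumptions, not the paper's implementation.

```python
def vertical_cdr(disc_mask, cup_mask):
    """Vertical cup-to-disc ratio from binary segmentation masks (lists of
    rows of 0/1, e.g. thresholded U-Net outputs): the cup's vertical extent
    divided by the disc's vertical extent."""
    def vertical_extent(mask):
        rows = [i for i, row in enumerate(mask) if any(row)]
        return rows[-1] - rows[0] + 1 if rows else 0
    disc = vertical_extent(disc_mask)
    if disc == 0:
        raise ValueError("empty disc mask")
    return vertical_extent(cup_mask) / disc

# Demo on a 10x10 grid: disc spans rows 1-8, cup spans rows 3-6.
disc = [[1] * 10 if 1 <= i <= 8 else [0] * 10 for i in range(10)]
cup = [[1] * 10 if 3 <= i <= 6 else [0] * 10 for i in range(10)]
print(vertical_cdr(disc, cup))  # 4 / 8 = 0.5
```

Higher ratios indicate more cupping; a vertical CDR above roughly 0.6 is often cited as grounds for glaucoma suspicion, though the threshold used in practice varies.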
Affiliation(s)
- Alexandre Neto
- Escola de Ciências de Tecnologia, University of Trás-os-Montes and Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal;
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal;
- José Camara
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal;
- Departamento de Ciências e Tecnologia, University Aberta, 1250-100 Lisboa, Portugal
- António Cunha
- Escola de Ciências de Tecnologia, University of Trás-os-Montes and Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal;
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal;
- Correspondence: ; Tel.: +351-931-636-373
42
End-to-end multi-task learning for simultaneous optic disc and cup segmentation and glaucoma classification in eye fundus images. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2021.108347]
43
Glaucoma disease diagnosis with an artificial algae-based deep learning algorithm. Med Biol Eng Comput 2022;60:785-796. [PMID: 35080695] [DOI: 10.1007/s11517-022-02510-6]
Abstract
Glaucoma is an optic neuropathy in which the optic nerve is damaged by prolonged elevated intraocular pressure, which can ultimately cause blindness. Nowadays, deep learning classification algorithms are widely used to diagnose various diseases. In general, however, deep learning algorithms are trained with traditional gradient-based techniques, which converge slowly and are highly likely to fall into local minima. In this study, we proposed a novel decision support system based on deep learning to diagnose glaucoma. The proposed system has two stages. In the first stage, the glaucoma disease data are preprocessed with normalization and the mean absolute deviation method; in the second stage, the deep learning model is trained with the artificial algae optimization algorithm. The proposed system is compared to traditional gradient-based deep learning and to deep learning trained with other optimization algorithms, such as the genetic algorithm, particle swarm optimization, the bat algorithm, the salp swarm algorithm, and the equilibrium optimizer. Furthermore, the proposed system is compared to state-of-the-art algorithms proposed for glaucoma detection. The proposed system outperformed the other algorithms in terms of classification accuracy, recall, precision, false positive rate, and F1-measure, with values of 0.9815, 0.9795, 0.9835, 0.0165, and 0.9815, respectively.
44
Camara J, Neto A, Pires IM, Villasana MV, Zdravevski E, Cunha A. Literature Review on Artificial Intelligence Methods for Glaucoma Screening, Segmentation, and Classification. J Imaging 2022;8:19. [PMID: 35200722] [PMCID: PMC8878383] [DOI: 10.3390/jimaging8020019]
Abstract
Artificial intelligence techniques are now being applied in different medical solutions ranging from disease screening to activity recognition and computer-aided diagnosis. The combination of computer science methods and medical knowledge facilitates and improves the accuracy of the different processes and tools. Inspired by these advances, this paper performs a literature review focused on state-of-the-art glaucoma screening, segmentation, and classification based on images of the papilla and excavation using deep learning techniques. These techniques have been shown to have high sensitivity and specificity in glaucoma screening based on papilla and excavation images. The automatic segmentation of the contours of the optic disc and the excavation then allows the identification and assessment of the progression of glaucomatous disease. As a result, we assessed whether deep learning techniques may be helpful in performing accurate and low-cost measurements related to glaucoma, which may promote patient empowerment and help medical doctors better monitor patients.
Affiliation(s)
- José Camara
- R. Escola Politécnica, Universidade Aberta, 1250-100 Lisboa, Portugal;
- Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, 3200-465 Porto, Portugal;
- Alexandre Neto
- Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, 3200-465 Porto, Portugal;
- Escola de Ciências e Tecnologia, University of Trás-os-Montes e Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal;
- Ivan Miguel Pires
- Escola de Ciências e Tecnologia, University of Trás-os-Montes e Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal;
- Instituto de Telecomunicações, Universidade da Beira Interior, 6200-001 Covilhã, Portugal
- María Vanessa Villasana
- Centro Hospitalar Universitário Cova da Beira, 6200-251 Covilhã, Portugal;
- UICISA:E Research Centre, School of Health, Polytechnic Institute of Viseu, 3504-510 Viseu, Portugal
- Eftim Zdravevski
- Faculty of Computer Science and Engineering, University Ss Cyril and Methodius, 1000 Skopje, North Macedonia;
- António Cunha
- Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, 3200-465 Porto, Portugal;
- Escola de Ciências e Tecnologia, University of Trás-os-Montes e Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal;
- Correspondence: ; Tel.: +351-931-636-373
45
Nawaz M, Nazir T, Javed A, Tariq U, Yong HS, Khan MA, Cha J. An Efficient Deep Learning Approach to Automatic Glaucoma Detection Using Optic Disc and Optic Cup Localization. Sensors 2022;22:434. [PMID: 35062405] [PMCID: PMC8780798] [DOI: 10.3390/s22020434]
Abstract
Glaucoma is an eye disease caused by excessive intraocular pressure; at an advanced stage it leads to complete blindness, whereas timely screening-based treatment can save the patient from total vision loss. Accurate screening procedures depend on the availability of human experts who perform manual analysis of retinal samples to identify glaucomatous regions. However, because screening procedures are complex and human resources are scarce, delays are common, which can increase the rate of vision loss around the globe. To cope with the limitations of manual systems, there is an urgent demand for an effective automated framework that can accurately identify Optic Disc (OD) and Optic Cup (OC) lesions at the earliest stage. Efficient and effective identification and classification of glaucomatous regions is a complicated job due to the wide variations in the size, shade, orientation, and shape of lesions. Furthermore, the extensive similarity between the lesion and eye color further complicates the classification process. To overcome these challenges, we present a Deep Learning (DL)-based approach, EfficientDet-D0, with EfficientNet-B0 as the backbone. The presented framework comprises three steps for glaucoma localization and classification. Initially, deep features are computed from the suspected samples with the EfficientNet-B0 feature extractor. Then, the Bi-directional Feature Pyramid Network (BiFPN) module of EfficientDet-D0 takes the computed features and performs top-down and bottom-up keypoint fusion several times. In the last step, the localized area containing a glaucoma lesion, with its associated class, is predicted. We confirmed the robustness of our work by evaluating it on a challenging dataset, the Online Retinal Fundus Image Database for Glaucoma Analysis (ORIGA). Furthermore, we performed cross-dataset validation on the High-Resolution Fundus (HRF) and Retinal Image Database for Optic Nerve Evaluation (RIM-ONE DL) datasets to show the generalization ability of our work. Both the numeric and visual evaluations confirm that EfficientDet-D0 outperforms recent frameworks and is more proficient in glaucoma classification.
Affiliation(s)
- Marriam Nawaz
- Department of Computer Science, University of Engineering and Technology Taxila, Rawalpindi 47050, Pakistan; (M.N.); (T.N.); (A.J.)
- Tahira Nazir
- Department of Computer Science, University of Engineering and Technology Taxila, Rawalpindi 47050, Pakistan; (M.N.); (T.N.); (A.J.)
- Ali Javed
- Department of Computer Science, University of Engineering and Technology Taxila, Rawalpindi 47050, Pakistan; (M.N.); (T.N.); (A.J.)
- Usman Tariq
- Information Systems Department, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al Khraj 11942, Saudi Arabia;
- Hwan-Seung Yong
- Department of Computer Science and Engineering, Ewha Womans University, Seoul 03760, Korea;
- Jaehyuk Cha
- Department of Computer Science, Hanyang University, Seoul 04763, Korea;
46
Ganesh SS, Kannayeram G, Karthick A, Muhibbullah M. A Novel Context Aware Joint Segmentation and Classification Framework for Glaucoma Detection. Comput Math Methods Med 2021;2021:2921737. [PMID: 34777561] [PMCID: PMC8589492] [DOI: 10.1155/2021/2921737]
Abstract
Glaucoma is a chronic ocular disease characterized by damage to the optic nerve resulting in progressive and irreversible visual loss. Early detection and timely clinical interventions are critical in improving glaucoma-related outcomes. As a typical and complicated ocular disease, glaucoma detection presents a unique challenge due to its insidious onset and high intra- and interpatient variabilities. Recent studies have demonstrated that robust glaucoma detection systems can be realized with deep learning approaches. The optic disc (OD) is the most commonly studied retinal structure for screening and diagnosing glaucoma. This paper proposes a novel context aware deep learning framework called GD-YNet, for OD segmentation and glaucoma detection. It leverages the potential of aggregated transformations and the simplicity of the YNet architecture in context aware OD segmentation and binary classification for glaucoma detection. Trained with the RIGA and RIMOne-V2 datasets, this model achieves glaucoma detection accuracies of 99.72%, 98.02%, 99.50%, and 99.41% with the ACRIMA, Drishti-gs, REFUGE, and RIMOne-V1 datasets. Further, the proposed model can be extended to a multiclass segmentation and classification model for glaucoma staging and severity assessment.
Affiliation(s)
- S. Sankar Ganesh
- Department of Artificial Intelligence and Data Science, KPR Institute of Engineering and Technology, Coimbatore, 641407 Tamil Nadu, India
- G. Kannayeram
- Department of Electrical and Electronics Engineering, National Engineering College, Kovilpatti, 628503 Tamil Nadu, India
- Alagar Karthick
- Renewable Energy Lab, Department of Electrical and Electronics Engineering, KPR Institute of Engineering and Technology, Coimbatore, 641407 Tamil Nadu, India
- M. Muhibbullah
- Department of Electrical and Electronic Engineering, Bangladesh University, Dhaka 1207, Bangladesh
47

48
Interobserver and Intertest Agreement in Telemedicine Glaucoma Screening with Optic Disk Photos and Optical Coherence Tomography. J Clin Med 2021;10:3337. [PMID: 34362120] [PMCID: PMC8347319] [DOI: 10.3390/jcm10153337]
Abstract
Purpose: To evaluate interobserver and intertest agreement between optical coherence tomography (OCT) and retinography in the detection of glaucoma through a telemedicine program. Methods: A stratified sample of 4113 individuals was randomly selected, and those who accepted underwent examination including visual acuity, intraocular pressure (IOP), non-mydriatic retinography, and imaging using a portable OCT device. Participants’ data and images were uploaded and assessed by 16 ophthalmologists on a deferred basis. Two independent evaluations were performed for all participants. Agreement between methods was assessed using the kappa coefficient and the prevalence-adjusted bias-adjusted kappa (PABAK). We analyzed factors potentially influencing the level of agreement. Results: The final sample comprised 1006 participants. Of all suspected glaucoma cases (n = 201), 20.4% were identified in retinographs only, 11.9% in OCT images only, 46.3% in both, and 21.4% were diagnosed based on other data. Overall interobserver agreement was moderate to good, with a kappa coefficient of 0.37 and a PABAK index of 0.58. Higher values were obtained by experienced evaluators (kappa = 0.61; PABAK = 0.82). Kappa and PABAK values between OCT and photographs were 0.52 and 0.82 for the first evaluation. Conclusion: In a telemedicine screening setting, interobserver agreement on diagnosis was moderate but improved with greater evaluator expertise.
49
Nakahara K, Asaoka R, Tanito M, Shibata N, Mitsuhashi K, Fujino Y, Matsuura M, Inoue T, Azuma K, Obata R, Murata H. Deep learning-assisted (automatic) diagnosis of glaucoma using a smartphone. Br J Ophthalmol 2021;106:587-592. [PMID: 34261663] [DOI: 10.1136/bjophthalmol-2020-318107]
Abstract
BACKGROUND/AIMS To validate a deep learning algorithm for diagnosing glaucoma from fundus photographs obtained with a smartphone. METHODS A training dataset consisting of 1364 colour fundus photographs with glaucomatous indications and 1768 colour fundus photographs without glaucomatous features was obtained using an ordinary fundus camera. The testing dataset consisted of 73 eyes of 73 patients with glaucoma and 89 eyes of 89 normative subjects. In the testing dataset, fundus photographs were acquired using both an ordinary fundus camera and a smartphone. A deep learning algorithm was developed to diagnose glaucoma using the training dataset. The trained neural network was evaluated by its predictions of glaucoma or normal on the test datasets, using images from both an ordinary fundus camera and a smartphone. Diagnostic accuracy was assessed using the area under the receiver operating characteristic curve (AROC). RESULTS The AROC was 98.9% with a fundus camera and 84.2% with a smartphone. When validated only in eyes with advanced glaucoma (mean deviation value < -12 dB, N=26), the AROC was 99.3% with a fundus camera and 90.0% with a smartphone. There were significant differences between the AROC values obtained with the different cameras. CONCLUSION The usefulness of a deep learning algorithm to automatically screen for glaucoma from smartphone-based fundus photographs was validated. The algorithm had a considerably high diagnostic ability, particularly in eyes with advanced glaucoma.
Affiliation(s)
- Ryo Asaoka
- Department of Ophthalmology, Seirei Hamamatsu General Hospital, Shizuoka, Japan; Seirei Christopher University, Hamamatsu, Shizuoka, Japan; Nanovision Research Division, Research Institute of Electronics, Shizuoka University, Hamamatsu, Japan; The Graduate School for the Creation of New Photonics Industries, Hamamatsu, Japan; Department of Ophthalmology, University of Tokyo, Tokyo, Japan
- Masaki Tanito
- Department of Ophthalmology, Shimane University Faculty of Medicine, Shimane, Japan
- Yuri Fujino
- Department of Ophthalmology, Seirei Hamamatsu General Hospital, Shizuoka, Japan; Department of Ophthalmology, University of Tokyo, Tokyo, Japan; Department of Ophthalmology, Shimane University Faculty of Medicine, Shimane, Japan
- Masato Matsuura
- Department of Ophthalmology, University of Tokyo, Tokyo, Japan
- Tatsuya Inoue
- Department of Ophthalmology, University of Tokyo, Tokyo, Japan; Department of Ophthalmology and Microtechnology, Yokohama City University School of Medicine, Kanagawa, Japan
- Keiko Azuma
- Department of Ophthalmology, University of Tokyo, Tokyo, Japan
- Ryo Obata
- Department of Ophthalmology, University of Tokyo, Tokyo, Japan
- Hiroshi Murata
- Department of Ophthalmology, University of Tokyo, Tokyo, Japan
50
Suguna G, Lavanya R. Performance Assessment of EyeNet Model in Glaucoma Diagnosis. Pattern Recognit Image Anal 2021. [DOI: 10.1134/s1054661821020164]